[reportlab-users] A riddle...

Andy Robinson reportlab-users@reportlab.com
Fri, 19 Sep 2003 14:30:01 +0100


Interesting news, Dinu...thanks.  Is this a bug report?
I feel like I have missed the start of the story.

I have added comments below on testing generally which may interest
many people.

> Amoury, onze points! ;-) It's actually the partial result of a run of
> the RLTK testsuite, under both 1.17 and 1.18, after first converting
> all PDF output test files to bitmaps, then pixel-comparing respective
> files for both versions and generating one PNG for every page having
> at least one different pixel. The resolution is freely configurable,
> of course. This is really nice for testing and it takes only a few
> minutes!

This sounds cool.  We're moving to new offices and setting
up a bigger 'test workbench' in October, and we should try to
do this ourselves.  Can you enlighten us as to what is doing
the rasterizing?  Is it a Mac OS X-specific thing?
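As a minimal sketch of the pixel-comparison step in Python, assuming the PDF pages have already been rasterized to bitmaps (the rasterizer itself is the open question here; the function and file names are hypothetical, and PIL is assumed installed):

```python
from PIL import Image, ImageChops

def diff_page(img_a_path, img_b_path, out_png_path):
    """Pixel-compare two rasterized pages; write a difference PNG
    and return True only if at least one pixel differs."""
    a = Image.open(img_a_path).convert("RGB")
    b = Image.open(img_b_path).convert("RGB")
    diff = ImageChops.difference(a, b)
    if diff.getbbox() is None:      # all-zero difference: identical
        return False
    diff.save(out_png_path)         # one PNG per page with differences
    return True
```

Run over every page of both test runs, this gives exactly the "one PNG for every page having at least one different pixel" output described above.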

> The setup for automatically fetching and installing RL, then running
> its respective test suite was almost as interesting as doing the
> comparison itself. I think older versions of the toolkit will not install
> as easily without distutils, but well, one more challenge...

Well, we aren't going to backport distutils to old 
versions.  I still think our setup script only
half works :-)

> 
> Another interesting test setup would be to take some RL code creating
> a PDF document and compare output using different versions of the RL
> toolkit. And, of course, there is the obvious test of comparing the
> output for more than one version of your own code, which is something
> like the trivial application of this.

Here we are working on something very important.  Currently,
if you run the same test script multiple times, you get
different PDFs.  This is because PDF files are supposed to contain
unique document IDs, and we escape these, so the 16 effectively
random bytes in the ID become an escaped string of varying length.
Also, you get different output between Python versions and
especially between CPython and Jython, because we have used
things like objects' addresses in memory as comments and
because we use str() to format numbers in the PDF file.
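One way out of the str() problem, as a sketch (fixed-precision formatting with trailing zeros trimmed; the helper name here is hypothetical):

```python
def fp_str(*nums):
    """Format numbers by a fixed rule instead of str(), so the
    PDF bytes come out identical across Python implementations."""
    parts = []
    for n in nums:
        s = "%0.4f" % n                # fixed precision
        s = s.rstrip("0").rstrip(".")  # trim trailing zeros and dot
        parts.append(s)
    return " ".join(parts)
```

For example, fp_str(72.0, 0.5) gives "72 0.5" on any interpreter, where str() may disagree between CPython and Jython.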

We are currently debugging/finishing an 'invariant mode'.  This
means that when you do Canvas(..., invariant=1) you should
get totally repeatable results.  This makes regression
testing possible even without rasterizing PDF files.
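With invariant output, the regression check itself reduces to hashing bytes; a sketch (the helper and any stored reference digests are hypothetical):

```python
import hashlib

def pdf_digest(path):
    """Digest a PDF's bytes.  With invariant=1 the digest is stable
    run-to-run, so a changed digest means genuinely changed output."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()
```

A regression suite can then just compare each test's digest against a stored known-good value, with no rasterizing at all.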

What has driven this, which may excite a few people, is that
we are trying to have reportlab "Jython-certified".  Close
CVS watchers may have seen a few 
  'if sys.platform[0:4] == "java"'
lines creeping in.  We're making sure it can use java.awt.image
or PIL depending on the platform, and writing _rl_accel.java,
which we will check in when it produces identical results.
This should result in a ReportLab Toolkit that "just works" on 
Java with reasonable performance.
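The platform check above is easy to wrap, and the backend dispatch can be sketched like this (imports guarded inside the function, since only one backend exists on any given platform; the helper names are hypothetical):

```python
import sys

def is_jython():
    """The test quoted above: Jython reports a sys.platform
    string starting with 'java'."""
    return sys.platform[:4] == "java"

def load_image_backend():
    """Pick an imaging module per platform (sketch only)."""
    if is_jython():
        import java.awt.image as backend   # Jython / AWT
    else:
        import PIL.Image as backend        # CPython / PIL
    return backend
```

Keeping the check in one helper means the rest of the code never repeats the string-slicing test.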

The only sane way to test all of this is to run the test suite 
and compare all PDFs produced byte-for-byte with CPython.
Hence "invariant mode".

Dragan Andric, our part-time Java developer, is currently on vacation,
but he's going fast and I am pretty sure this will be ready
to play with and reliable in October some time.  

Best Regards,

Andy Robinson