[reportlab-users] Moving to Bitbucket, and development roadmap..

Andy Robinson andy at reportlab.com
Tue Mar 26 01:21:29 EDT 2013


On 25 March 2013 20:50, Stephan Richter <stephan.richter at gmail.com> wrote:

> On Monday, March 25, 2013 02:52:43 PM Andy Robinson wrote:
>> - pure unit tests, runnable anywhere
>
> Well, I think you really want to make this work:
>> $ python setup.py test

We currently use distutils, not setuptools, and AFAICT there is no
'setup.py test' command in distutils, so it seems to be an informal
convention which has become more popular. Long ago we added support
for 'setup.py tests' and 'setup.py tests-preinstall', which cd into
the 'tests' directory and execute our 'runAll.py', and the README is
at least correct now. But I am happy to change this next week.
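
For reference, a custom command along those lines can be registered
with plain distutils roughly as in the sketch below. This is
illustrative only, not our actual setup.py; the class name and the
setup() metadata are invented for the example.

    # Sketch: a distutils 'tests' command that cds into tests/ and
    # runs runAll.py (names and metadata are placeholders).
    import os
    import subprocess
    import sys
    from distutils.core import Command, setup

    class TestsCommand(Command):
        description = "cd into tests/ and run runAll.py"
        user_options = []

        def initialize_options(self):
            pass

        def finalize_options(self):
            pass

        def run(self):
            tests_dir = os.path.join(
                os.path.dirname(os.path.abspath(__file__)), 'tests')
            rc = subprocess.call([sys.executable, 'runAll.py'],
                                 cwd=tests_dir)
            if rc:
                raise SystemExit(rc)

    setup(
        name='example-package',
        version='0.0',
        cmdclass={'tests': TestsCommand},
    )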

A bigger question is whether there is any benefit to moving to
setuptools or distribute. I have read about the two projects merging,
and I just don't want to spend time keeping up with a subject that is
still evolving rapidly.




> That said, it should compile the necessary extensions. Also, I have lots of
> good examples on how to hook up your custom test collector/runner to this
> mechanism.


So if someone wants to test without installing, running 'setup.py
test' WITHOUT a separate 'setup.py build' or 'install' should compile
the extensions and put them on the path temporarily? Doesn't that
mean we have to hack distutils/setuptools in a very non-standard way
and do a lot of work? Isn't that going a bit over the top?
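
To make the question concrete, the plumbing would look roughly like
the sketch below, as far as I can tell. Everything in it (the command
name, the assumption that tests live in a 'tests' directory and can be
discovered by unittest) is hypothetical, not code we have or ship.

    # Hypothetical sketch: build the C extensions first, then point
    # sys.path at the build directory so tests run against the
    # uninstalled package.
    import os
    import sys
    import unittest
    from distutils.core import Command

    class BuildAndTest(Command):
        description = "build extensions, then run tests without installing"
        user_options = []

        def initialize_options(self):
            pass

        def finalize_options(self):
            pass

        def run(self):
            # Reuse the standard 'build' command to compile everything,
            # then put build/lib* first on sys.path for the test run.
            build = self.get_finalized_command('build')
            build.run()
            sys.path.insert(0, os.path.abspath(build.build_lib))
            suite = unittest.defaultTestLoader.discover('tests')
            result = unittest.TextTestRunner(verbosity=1).run(suite)
            if not result.wasSuccessful():
                raise SystemExit(1)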

As I see it, we run the tests before commits and every night before
building packages, and we won't ship code where they fail. So they
are valuable (a) to check that everything installed correctly on YOUR
platform, and (b) if you are working on our code. I don't think we
should do a lot of extra work to create our own ring-fenced test
environment on the target machine BEFORE installing. If users want
that, they can just use a virtualenv and throw it away if something
does not work.

So I am suggesting that we just silently skip tests which do not
apply (missing extensions, loading an image from a URL when there is
no network access). The user will get warm fuzzy feelings from the
line of dots, but if they install, run the tests and something is
wrong, the suite will still do its job.
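
Something along these lines is what I mean by skipping. The module
checked for the C accelerator and the host probed for network access
below are placeholders, not our exact test code.

    # Sketch of conditional skipping with the standard unittest
    # decorators; names are illustrative only.
    import socket
    import unittest

    try:
        import _rl_accel                 # hypothetical C extension name
        HAVE_EXTENSIONS = True
    except ImportError:
        HAVE_EXTENSIONS = False

    def have_network(host='www.reportlab.com', port=80, timeout=2.0):
        # Best-effort probe for outbound network access.
        try:
            socket.create_connection((host, port), timeout).close()
            return True
        except socket.error:
            return False

    class AccelTests(unittest.TestCase):
        @unittest.skipUnless(HAVE_EXTENSIONS, "C extensions not built")
        def test_accelerated_path(self):
            pass    # real assertions would go here

    class RemoteImageTests(unittest.TestCase):
        @unittest.skipUnless(have_network(), "no network access")
        def test_image_from_url(self):
            pass    # real assertions would go here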

I agree that it would be good to have a specific set of images
generated from specific PDFs as a regression test. If we did it with
all output we would get spurious warnings quite often, and I am not
quite ready to say it is a 'bug' or a 'failure' if we improve some
piece of typography or alignment. Anyway, that sounds to me more like
a separate script we could run to compare the output of two
repository revisions, or to run nightly on an internal test server to
warn us of any unintended changes, rather than something which should
happen in 'setup.py test'.
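
As a rough sketch of what I mean: after rendering the test PDFs to
bitmaps for each revision (with Ghostscript or similar, not shown
here), the comparison itself could be as simple as the script below.
All names and paths are invented for illustration.

    # Compare two directories of rendered PNGs, one per revision,
    # and report anything that changed, appeared or disappeared.
    import hashlib
    import os
    import sys

    def checksums(directory):
        # Map filename -> MD5 of each rendered image in a directory.
        out = {}
        for name in sorted(os.listdir(directory)):
            if name.lower().endswith('.png'):
                with open(os.path.join(directory, name), 'rb') as f:
                    out[name] = hashlib.md5(f.read()).hexdigest()
        return out

    def compare(old_dir, new_dir):
        old, new = checksums(old_dir), checksums(new_dir)
        changed = [n for n in old if n in new and old[n] != new[n]]
        added = sorted(set(new) - set(old))
        removed = sorted(set(old) - set(new))
        for name in changed:
            print('CHANGED: %s' % name)
        for name in added:
            print('ADDED:   %s' % name)
        for name in removed:
            print('REMOVED: %s' % name)
        return not (changed or added or removed)

    if __name__ == '__main__':
        ok = compare(sys.argv[1], sys.argv[2])
        sys.exit(0 if ok else 1)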

- Andy

