[Scons-dev] Optimizing SCons...
Gary Oberbrunner
garyo at oberbrunner.com
Sun Dec 2 11:58:53 EST 2012
This looks very interesting. Speed and memory use are two "hot button"
issues for many SCons users.
What does it do that would break existing projects? Is it that full paths
are no longer stored? (When were slots introduced? Python 2.2? In that case
we're fine on that front.)
One of the SCons buildbot builders runs a set of memory and timing
benchmarks -- maybe we could get your version run through that. I suspect
making a branch is the best way to start that -- what do you think, Bill?
On Sun, Dec 2, 2012 at 9:01 AM, Dirk Bächle <tshortik at gmx.de> wrote:
> Hi,
>
> over the last days I created a patch that aims at reducing SCons'
> memory footprint, especially for large (in terms of number of
> files) C/C++ projects.
> It can be applied to the current latest revision (b496d47c4efb)
> and is attached as archive to this email.
>
> The changes are split into these basic steps:
>
> 1) Make Node classes use slots.
> 2) Make SigInfo classes use slots.
> 3) Make Executor and Batch use slots and
> stop caching full paths in File nodes.
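Step 1 above might look roughly like the following sketch (the class and attribute names here are hypothetical illustrations, not SCons' actual Node API):

```python
class FileNode:
    # Declaring __slots__ replaces the per-instance __dict__ with
    # fixed storage for exactly these attributes, which saves memory
    # when many thousands of node instances exist.
    __slots__ = ('name', 'dir', 'depends')

    def __init__(self, name, dir=None):
        self.name = name
        self.dir = dir
        self.depends = []

node = FileNode('main.c')
# Slotted instances carry no per-instance dictionary:
print(hasattr(node, '__dict__'))  # False
```

The saving comes from dropping the per-instance attribute dictionary entirely; attribute access itself behaves the same for callers.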
>
>
> I had to rewrite the memoizer count framework to use
> decorators instead of the original metaclass approach.
> Additionally, I had to correct a lot of tests and Tools
> to ensure that they still pass without failures
> (at least those tests that I can run locally under Linux;
> some further adaptations might be necessary).
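A decorator-based call counter of the kind described above could be sketched like this (my own naming, not the actual SCons memoizer code; note that `__qualname__` requires Python 3.3+, whereas 2012-era SCons ran on Python 2). Keeping the counts in a module-level dict means nothing is stored on the instances, so it coexists with `__slots__`:

```python
import functools

# Global table of hit counts, keyed by qualified method name.
counter = {}

def count_calls(method):
    """Decorator that counts invocations of a method."""
    @functools.wraps(method)
    def wrapper(*args, **kwargs):
        counter[method.__qualname__] = counter.get(method.__qualname__, 0) + 1
        return method(*args, **kwargs)
    return wrapper

class Executor:
    __slots__ = ('targets',)

    def __init__(self, targets):
        self.targets = targets

    @count_calls
    def get_build_env(self):
        return {'targets': self.targets}

e = Executor(['foo.o'])
e.get_build_env()
e.get_build_env()
print(counter['Executor.get_build_env'])  # 2
```

A metaclass that rewrites methods at class-creation time also works, but the decorator version is more local: each counted method opts in explicitly.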
>
> Here are some numbers for the testsuite that I compiled
> (it consists of several real-life SCons projects):
>
> Project  | before | after |
> =================================
> ascend   |   96MB |  82MB |
> bombono  |  120MB | 104MB |
> lumiera  |  114MB | 101MB |
> mapnik   |  148MB | 129MB |
> sconsbld |  540MB | 377MB |
> =================================
>
> They list the maximum amount of memory allocated during a clean build,
> before and after the optimizations.
> For the first four projects the results don't show much of an
> improvement, but that is because these projects are still relatively
> small compared to the basic overhead incurred while parsing the
> SConstruct/SConscript files, for example.
> The last entry (sconsbld) is a benchmark, created by a Perl script,
> with 12000 source files and 12000 include files...resulting in
> 12000 object files and 600 executables.
> Increasing the number of files seems to drive up the percentage
> of memory you can save: doubling the input for "sconsbld" (24000 source
> and include files) gives 1579MB vs. 1052MB.
>
> I also used cProfile and simple timing of the individual runs with the
> "time" command, to ensure that the changes introduce no speed
> penalties. The good news is that all runs, clean builds as well as
> updates, tend to get a little faster...probably due to the use
> of slots.
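The per-instance saving from slots is easy to confirm in isolation (a generic measurement sketch, not one of the benchmark projects above):

```python
import sys

class WithDict:
    def __init__(self):
        self.a, self.b, self.c = 1, 2, 3

class WithSlots:
    __slots__ = ('a', 'b', 'c')
    def __init__(self):
        self.a, self.b, self.c = 1, 2, 3

d, s = WithDict(), WithSlots()
# The plain instance pays for itself plus its attribute dictionary;
# the slotted instance stores its three attributes inline.
plain_size = sys.getsizeof(d) + sys.getsizeof(d.__dict__)
slotted_size = sys.getsizeof(s)
print(slotted_size < plain_size)  # True
```

The exact byte counts vary by Python version and platform, which is why the sketch only compares the two rather than printing absolute numbers.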
>
> So, that's what I have right now. My questions are:
>
> - Is this of any interest (or is it still "too early" for
> optimizations ;) )?
> - If yes, what could be the next steps?
>
> I didn't simply upload this as a pull request, because the changes
> would definitely break existing projects and custom Tools.
> So, either some form of deprecation cycle or a separate branch/repository
> would be needed (default vs optimized).
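The kind of breakage being described can be illustrated like this (a hypothetical example, not an actual SCons Tool): any custom Tool that tacks extra attributes onto node instances stops working once the node class declares `__slots__`, because arbitrary attribute assignment is no longer allowed.

```python
class SlottedNode:
    __slots__ = ('name',)
    def __init__(self, name):
        self.name = name

n = SlottedNode('main.c')
try:
    # A custom Tool might have attached its own state like this
    # before the optimization:
    n.my_tool_data = {'flags': ['-O2']}
except AttributeError as err:
    print('broken:', err)
```

Code like this would need to migrate its state elsewhere (or the slot would have to be declared on the class), which is why a deprecation cycle or a separate branch makes sense.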
>
> Comments anyone?
>
> Best regards,
>
> Dirk
>
>
> _______________________________________________
> Scons-dev mailing list
> Scons-dev at scons.org
> http://two.pairlist.net/mailman/listinfo/scons-dev
>
>
--
Gary