[dojo-contributors] Fwd: Using Dojo 1.6 with Closure -- document attached
rgill at altoviso.com
Mon Mar 7 10:26:37 EST 2011
On Thursday 03 March 2011 00:47:20 Stephen Chung wrote:
> Hi Rawld,
> You've raised very valid questions, and I think you're absolutely right.
> The Closure Compiler is not for everybody. Its restrictions, pain and
> limitations must be counteracted by huge benefits; if there are no large
> benefits, then it is really not worth using on a project.
> Currently, I think that the compiler is worthwhile to use only on projects
> that must run on constrained hardware (e.g. a slow mobile device). With more
> CPU and RAM on mobile devices in each new generation, in a year or two even
> this need may go away.
If we can do some smart transforms--and I think we can--to leverage the
advanced static optimization of closure, we should go for it.
It's important to remember that improving size/speed numbers has two key
benefits: actual performance and marketing. Going from 21K to 19K will likely
have little real effect on performance, but may have a large effect on
marketing.
> Regarding some of your points:
> 1) Properties renaming: my opinion is that it is really valuable only for
> obfuscation purposes, not actually saving bits, although I did get a
> roughly 1/3 smaller file after running through the compiler than with the
> regular build (compress with closure in simple mode). I expected the
> gzipped difference to be minimal, but strangely the difference is again
> roughly 1/3. I suspect that is more due to elimination of not-used
> functions than due to variables renaming.
I don't find obfuscation valuable.
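To make the renaming point concrete, here is a minimal sketch (all names are
illustrative, not actual compiler output) of what Closure's advanced-mode
property renaming does, and why the gzipped savings tend to be smaller than
the raw savings:

```javascript
// A hypothetical module before Closure advanced compilation:
var widget = {
  backgroundColor: "red",
  borderThickness: 2
};
console.log(widget.backgroundColor, widget.borderThickness);

// With ADVANCED_OPTIMIZATIONS, Closure may rewrite the above to
// something roughly like (illustrative only):
//   var a={a:"red",b:2};console.log(a.a,a.b);
// The renamed source is much smaller, but gzip already compresses
// repeated long identifiers well, so the post-gzip gain is modest --
// which is why a large gzipped difference more likely comes from
// dead-code removal than from renaming.
```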
> 2) Dead code removal: Dojo is already very modularized so there is much
> less to gain. Moving to AMD format should make it even better. However,
> even for highly modularized code, I recall from a Google analysis (that I
> read somewhere) that roughly 1/3 to 1/2 of library code is
> typically not used -- not every application calls every function. For
> example, it can potentially reduce the essential Dojo core portion from
> 89K to, say, 50K, depending on what functions are used. That saving
> of 40K really is not much of a benefit, especially when the web app is
> 500KB and up...
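The dead-code scenario above can be sketched in a few lines (a toy example,
with hypothetical helper names -- not real Dojo code):

```javascript
// Tiny "library": two helpers, only one of which the app calls.
function usedHelper(x) {
  return x * 2;
}
function unusedHelper(x) {   // never referenced anywhere below
  return x / 2;
}

// The "application" only ever calls usedHelper...
console.log(usedHelper(21)); // 42

// ...so a whole-program optimizer like Closure in advanced mode can
// drop unusedHelper entirely. The ~1/3 to 1/2 savings on library
// code comes from this kind of removal, not from renaming.
```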
Also, I think we need to consider the dojo API design carefully as we move
forward. There are cases where the function signature is highly overloaded so
that one signature can do many things (for a simple example, consider
dojo.cache). This results in 5-20 lines of code to decode the args into the
true intent of the caller. If we redesign some of these APIs, I think we would
see benefit (more functions become dead, remaining functions are shorter) with
an optimizer like closure.
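A rough sketch of what I mean (hypothetical functions, loosely modeled on a
dojo.cache-style overload -- not the actual dojo API):

```javascript
// Overloaded style: one entry point spends lines decoding intent.
function cache(moduleOrUrl, urlOrValue, value) {
  if (typeof value !== "undefined") {
    // cache(module, url, value) -- store a value
    return { module: moduleOrUrl, url: urlOrValue, value: value };
  }
  if (typeof urlOrValue === "string") {
    // cache(module, url) -- fetch by module-relative url
    return { module: moduleOrUrl, url: urlOrValue };
  }
  // cache(url) -- fetch by absolute url
  return { url: moduleOrUrl };
}

// Split style: each intent is its own function. An optimizer can
// delete the variants an application never calls, and each survivor
// is shorter because the arg-decoding branches are gone.
function cacheStore(module, url, value) {
  return { module: module, url: url, value: value };
}
function cacheFetch(url) {
  return { url: url };
}

console.log(cacheFetch("templates/Dialog.html").url);
```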
> * Note: Due to the particularities of the language, I believe that any
> dead-code eliminator will run into the same roadblocks faced by the
> programmers of the Closure compiler and will
> require similar restrictions, so my point is that in *any* dead-code
> elimination scenario, we'll have to do similar things to make sure that the
> code is easy to analyze. Therefore, the learnings from using the Closure
> compiler will be beneficial regardless. Now, it may not be that we'll
> always *need* to remove dead-code...
> 3) What I find beneficial is flattening of namespaces and virtualization of
> prototype methods. These have great performance benefits, especially on
> slower mobile devices. However, with fast mobile CPUs and better browsers,
> or with the move to AMD format (which eliminates namespacing), namespace
> flattening will be superseded.
Agree. Though not simple, there are lots of possibilities here. For example...
* some of the runtime work done by dojo.declare could be done at build-time
* generally, any closure can be rewritten as an object with a prototype
method plus per-instance data. This may provide a great performance benefit.
> 4) Virtualization of prototype methods currently cannot be done with
> dojo.declare'd classes. However, it can be done for third-party libraries
> though. I refactored a library that I use (LINQ.js) and Closure largely
> eliminated the performance penalties from using LINQ by virtualizing entire
> call chains.
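The closure-to-prototype rewrite behind this kind of virtualization can be
sketched as follows (a toy counter, not code from LINQ.js or Dojo):

```javascript
// Closure style: every instance allocates fresh function objects,
// and the call target is a dynamic property lookup.
function makeCounterClosure() {
  var n = 0;
  return {
    next: function () { return ++n; }
  };
}

// Prototype style: one shared method plus per-instance data. This is
// the rewrite (closure -> prototype method + instance state) that
// lets an optimizer resolve -- and potentially inline -- the whole
// call chain statically.
function Counter() {
  this.n = 0;
}
Counter.prototype.next = function () {
  return ++this.n;
};

var a = makeCounterClosure();
var b = new Counter();
console.log(a.next(), a.next()); // 1 2
console.log(b.next(), b.next()); // 1 2
```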
> 5) Function inlining -- currently this has great performance benefits when
> it happens (though the restrictions are so severe that it rarely happens).
> It is great for single-trace code that is written in a way that makes it
> flexible to reconfigure with different options but is automatically inlined,
> with all these abstractions unwrapped by the compiler -- something like
> dependency injection which can produce an optimized build for each config.
> In fact, this is how Google handles i18n -- an optimized build for each
> locale.
> 6) Code rewriting -- can probably be done by any build processor with a
> suitable transform step.
> So, as much as I like the Closure Compiler, I must admit that you are right
> in many aspects. Especially that, as we move to the next few generations
> of mobile devices, and as Dojo moves to AMD, there may not be a need to
> remove dead code, flatten namespaces, virtualize prototype functions,
> etc. Renaming of properties is only valuable for obfuscation (although for
> many commercial web apps, this is a great benefit). Inlining is only
> valuable for making optimized builds based on a settings file.
> So, to wrap up my long rantings, improvements to mobile hardware/browser
> and to Dojo will render the Closure Compiler less useful, except for
> several potential areas:
> - Very small projects that require the smallest downloads (where
> saving a few KBs via dead-code elimination on the library actually
> makes sense)
> - Using Dojo with non-Dojo third-party library code (that is
> written in a style to take advantage of prototype virtualization)
> - Obfuscation of application logic
> - Making optimized builds based on a (large) set of settings that
> permeates throughout the application
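The last point -- settings-driven optimized builds -- can be sketched with a
compile-time flag in the spirit of Closure's @define annotation (the function
below is a hypothetical example, not real Dojo code):

```javascript
// The compiler treats DEBUG as a constant per build, folds the
// conditional, and strips the dead branch from a production build.
/** @define {boolean} */
var DEBUG = false;

function parse(input) {
  if (DEBUG) {
    // This entire branch disappears in a DEBUG=false build.
    console.log("parse called with:", input);
  }
  return input.split(",");
}

console.log(parse("a,b,c").length); // 3
```

The same mechanism generalizes to a larger settings file: one define per
setting, one optimized build per configuration.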
I generally agree with you on these points. I'm not anti-closure. Clearly,
some of its optimizations are quite advanced and would be expensive to develop
ourselves. We should continue to pursue this work.