[ng-dhtml] putting the build tool thing to bed
alex at dojotoolkit.org
Tue Sep 28 11:23:46 CDT 2004
On Tuesday 28 September 2004 7:55 am, Tom Trenka wrote:
> > By including in your distribution package only what you need.
> > Sure, you can manually cat all the files together (and hope that
> > all the dependency satisfaction code still works right), but it's
> > a much easier thing to get right inside the build process.
> > And so now you ask why I'd want them all in one file? Because HTTP
> > setup/tear-down is a significant cost, and we're all about
> > reducing adoption costs. We will be providing a "standard" set of
> > files packaged together, but even that is easier to do through
> > the build process.
> Do you have proof of this?
Yes. The original NW single-core-file build came about as a result of
profiling (mostly done by MDA) which identified the multiple fetches
(and, more tellingly, the if-modified-since revalidation requests) as
a major source of latency in app response. Given that onload can't
fire until the browser knows that everything relevant to the display
of the page (or thereabouts) has been correctly gathered, reducing
the amount of HTTP overhead needed to get this done was a clear win.
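To make the point concrete, here's a minimal sketch (Python, for illustration; not what the NW or Dojo builds actually used) of the kind of build step being described: resolve file-level dependencies, then concatenate the modules into one core file so the page makes a single fetch instead of one per module. The module names and dependency map below are hypothetical.

```python
# Minimal sketch of a "single core file" build step: order modules so
# every dependency precedes its users, then concatenate them so the
# page needs one <script> fetch instead of one per module.
# Module names and the dependency map are made up for illustration.

def build_order(deps):
    """Return modules with every dependency listed before its users."""
    order, seen = [], set()

    def visit(mod):
        if mod in seen:
            return
        seen.add(mod)
        for dep in deps.get(mod, []):
            visit(dep)
        order.append(mod)

    for mod in deps:
        visit(mod)
    return order

def concatenate(sources, deps):
    """Join module sources into one file body, dependencies first."""
    return "\n".join(sources[mod] for mod in build_order(deps))

# Hypothetical toolkit layout: "dom" and "event" depend on "lang".
deps = {"lang": [], "dom": ["lang"], "event": ["lang", "dom"]}
sources = {
    "lang": "/* lang.js */",
    "dom": "/* dom.js */",
    "event": "/* event.js */",
}

print(concatenate(sources, deps))
```

Getting this ordering right by hand across dozens of files is exactly the error-prone part the build tool takes over.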
Likewise, on the product I've been working on for the last 9 months
(PowerAnalyzer), much work was done to ensure exactly this: that
initial HTTP responses were always gzip encoded, that as few
subsequent 200/OK responses as possible were required later on, and
that overall latency was reduced by putting things into externally
referenceable (cacheable) files. HTTP overhead isn't the only thing
in play here, but given the options of (1) having our users put 30
<script src="..."></script> tags in their page to get fast caching
and good ordering (thereby removing dependency satisfaction from the
purview of the toolkit), (2) allowing the toolkit to manage
dependencies inefficiently, or (3) still allowing the toolkit to
manage dependencies while being conscientious of network resources,
I'm going to choose the third.
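As a rough illustration of why the gzip point matters (a sketch, not PowerAnalyzer's actual setup): concatenated JS is highly repetitive, so a gzip-encoded response body is a fraction of the raw size, and that single compressed payload is all that crosses the wire on an uncached first load.

```python
import gzip

# Rough illustration: gzip-encode a response body as a server would for
# "Content-Encoding: gzip". Repetitive JS compresses very well, so the
# first (uncached) fetch of a concatenated core file stays small.
# The JS snippet here is a stand-in, not real toolkit source.
body = ("function example() { return document.getElementById('x'); }\n" * 200).encode()
compressed = gzip.compress(body)

print(len(body), "bytes raw ->", len(compressed), "bytes gzipped")
assert gzip.decompress(compressed) == body  # round-trips losslessly
assert len(compressed) < len(body) // 2     # repetitive input compresses hard
```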
> You've mentioned this before. I've
> personally spent a year in coding hell, dealing with a system that
> was predicated on that exact assumption; so instead of pushing just
> what was needed down the pipe, they shoved all of it at
> once--because they were overly concerned about HTTP socket setup
> and tear down.
Well, that's idiotic too. See my point above about HTTP
setup/tear-down not being the only thing in play. In performance
optimization, you have to strike an optimal balance. I'm not by any
stretch of the imagination suggesting that we do something like what
you've been fighting with.
> And never once did I see any definitive proof that
> pushing something all at once, down one socket connection, is more
> efficient than letting the browser mechanisms in place handle that.
The first time, it's very unlikely to pay dividends. It's in the
subsequent checks to determine up-to-dateness of the file that it
pays off the most.
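The mechanism in question is HTTP conditional GET: once the single file is cached, the browser revalidates it with an If-Modified-Since header, and the server can answer 304 Not Modified with no body at all. A sketch of the server-side decision (hypothetical helper, not real server or toolkit code):

```python
from email.utils import parsedate_to_datetime

# Sketch of the conditional-GET decision a server makes per request:
# if the client's If-Modified-Since timestamp is at least as new as
# the file's last-modified time, answer 304 Not Modified (no body);
# otherwise answer 200 with the full file.
def respond(if_modified_since, last_modified):
    """Both arguments are HTTP-date strings, as found in the headers."""
    if if_modified_since is not None:
        client_time = parsedate_to_datetime(if_modified_since)
        server_time = parsedate_to_datetime(last_modified)
        if client_time >= server_time:
            return 304  # cached copy is current; skip the body entirely
    return 200          # first fetch, or file changed: send full body

# One cached core file means one cheap 304 per revisit, not thirty.
print(respond("Tue, 28 Sep 2004 11:00:00 GMT", "Mon, 27 Sep 2004 09:00:00 GMT"))  # 304
print(respond(None, "Mon, 27 Sep 2004 09:00:00 GMT"))                             # 200
```

The payoff of concatenation is that this round-trip happens once per visit instead of once per module.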
> I'm not saying it isn't a nice option. I'm saying that it's
> unnecessary, and would fall under a "nice to have" category as
> opposed to a "requirement" category.
I disagree about the lack of necessity. I have found in several
wide-scale deployments that it is indeed necessary.
> > Essentially, unless you've done the release engineering on
> > something like netWindows or burst, it's hard to grok how much
> > goes into getting the right things in the right place in the
> > right order. Build tools make that a process that I need to get
> > right only once. This is a Good Thing (TM).
> As I've mentioned before, I solved this already for f(m), without
> using anything on the server beforehand.
Are you building docs from source? Are you running automated unit
tests? What shell does your tarball/zip creation code rely on? Or are
you doing all that by hand?
> There are going to be
> name dependencies no matter what happens, no matter how hard you
> try to avoid it or apply a set of tools to minimize it.
> IMHO, *any* dependency on a tool that is *not* JS-based (and
> therefore truly portable) is a BAD thing (TM).
And you're going to get a multi-platform JS interpreter for the
command line to handle these tasks from....?
I know! Try Rhino!
> The strength of
> what we're talking about doing is that it truly *is* portable,
> without anything but a host application...the only way I personally
> wouldn't have a problem with it is if all of these tools did the
> builds/doc gen/etc on our server, accessible through a web browser,
> as a customized build download tailored to the supposed developer's
> needs (and the developer didn't have to know any of those tools,
> there'd be a decent UI there to aid the process). And I would be
> very leery of requiring any contributor--invited or not--to have to
> learn a complete suite of tools just to contribute, say, a bug fix.
I think that goes into my nice-to-have category. I'm very concerned
with getting people up-and-running quickly (hence my decision against
Make). OTOH, I do assume a base level of competence. Everyone on this
list today is more than past that level, yourself included. I'll
worry about the average programmer's ability to easily build the core
when the average programmer starts producing high-quality patches for
it. Until then, the average programmer gets my sympathy when they
start to use the toolkit, and we (non-average programmers) get my
sympathy the rest of the time. I'm doing Dojo to make my life better.
> And I'd be real leery of trusting a custom build that didn't have
> its own regression testing for that particular build.
alex at dojotoolkit.org
alex at netWindows.org F687 1964 1EF6 453E 9BD0 5148 A15D 1D43 AB92 9A46