On Sun, May 19, 2002 at 12:44:13AM -0700, Colin Putney wrote:
> I have two somewhat contradictory responses to this.
> On the one hand, I'd say that this system is not significantly better
> than the one used by the Subversion team. Instead of concentrating the
> task of detecting problems at one point, that responsibility is
> distributed among the developers, each of whom is responsible for
> ensuring that his changes don't break anything. Since commits are
> atomic, each developer doesn't have to worry about a combinatorial
> explosion of changeset permutations. He just has to make sure that his
> changes, when applied to the current state of the tree, don't break
> anything.
> At the same time, build breakage is probably the easiest to detect and
> easiest to fix of all the possible problems a changeset could introduce.
> To really detect problems you need to test the behaviour of the software
> once it's built.
> The Subversion team does this with a suite of automated tests developed
> alongside Subversion itself. Each developer ensures that his
> changes not only don't break the build, but also don't break any of the
> tests, again avoiding the need to test changeset permutations. This type
> of automated testing can't be built into the version control system
> because it's too domain-specific. So much so that it's part of the
> project being versioned.
On the one hand, if you can get things to work as the Subversion team
has, then the automated testing is not very important, but some
environments make it hard to maintain that kind of discipline.
- The longer it takes to build your project or to run your tests, the
  more tempted the developers will be to say, "Well, it worked for me
  before. All I've done is update to pick up someone else's changes;
  it should still work, so I'll commit it without a full build."
- The longer the build or testing takes, the more likely someone will
  commit something before you finish. If you are a disciplined and
  good citizen, then you are now required to throw out all of that
  work on the current build and tests and re-update, rebuild, and
  retest.
- My personal philosophy is that, if a computer can do it for me, why
  should I do it by hand? By this philosophy, my double-checking
  that everyone else's changes haven't screwed up mine is unnecessary
  manual work. Note: if you don't have a good test suite, the
  computer can't do a very good job of validating your change, so you
  should probably do it manually.
- I've been in environments where enough people didn't have the
patience and discipline to check that they didn't break the build.
The end result was that the repository was unbuildable for 75% or
more of any given day.
So, yeah, the automated validation isn't needed if you are in a good
situation, but not all situations are good.
On the other hand, I totally agree with you that subversion should not
contain this functionality as an integral piece of it. The
flexibility gained from making subversion "just" a version control
system is essential to some uses of it. Since subversion exposes
programmatic interfaces to itself, a build system should be able to use
subversion without having to modify it (I hope).
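For example, a hypothetical `hooks/post-commit` script could hand each
new revision to an external build system without Subversion knowing
anything about builds. Subversion invokes the post-commit hook with
the repository path and revision number; the queue-file path and the
build daemon reading it are assumptions for illustration:

```python
#!/usr/bin/env python
# Hypothetical post-commit hook: Subversion calls it with the
# repository path and the new revision number. We just append the
# revision to a queue file that a separate build daemon watches;
# the queue path is made up for this sketch.
import sys

QUEUE = "/var/spool/build-queue"

def enqueue(repos, revision, queue_path=QUEUE):
    """Record one revision for the build daemon to pick up later."""
    with open(queue_path, "a") as q:
        q.write("%s %s\n" % (repos, revision))

if __name__ == "__main__" and len(sys.argv) == 3:
    enqueue(sys.argv[1], sys.argv[2])
```

The version control system stays "just" version control; all the
build policy lives outside it.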
> On the other hand, where automated testing along the lines Zeller
> proposes *is* useful, it's quite possible to build it on top of a
> version control system that knows nothing about the build process. The
> svn-breakage mailing list is a good example. Various machines with
> different CPU architectures and operating systems do automated
> checkouts, builds and tests of each changeset and mail the results to
> svn-breakage. It's simple, effective and flexible.
> Zeller's system or the user-work/user-commit system you propose could be
> implemented on top of Subversion as easily as within it.
> >Another major issue that you can address with integration of build with
> >repository is product cacheing. If you're working with a really big
> >system, builds take a *long* time. And lots of programmers are doing
> >builds to test their own stuff. All of the programmers are spending a
> >lot of time idle waiting for build results. (At one point, I was involved
> >with the VisualAge C++ project. Builds of the system could take two or
> >three hours. We'd spend half of our work day waiting for builds to
> >finish.)
> >But almost all of that build time is redoing the exactly same work on
> >many different systems. We had about 7 people in our building, working
> >on
> >a part of a 4 million line system. Each person would be changing one or
> >two source files, and then waiting for the build result of those changes
> >compared with the nightly builds.
> >If the SCM system understands the build process, it can store the
> >intermediate results, and before starting a build step, check if anyone
> >else has either done that already, or is in the process of doing it. At
> >the least, you avoid a lot of redundant builds; at the best, you get
> >build parallelization.
> Again, this is properly part of the build system rather than the SCM
> system. It doesn't require the integration of the two. Heck, the easiest
> way to achieve this would be to check in the object files along with the
> source. As long as all the developers are working on the same platform,
> you could do it with CVS or Subversion today!
I'd be surprised if it would be *that* simple to get a functional
build caching system working. For one thing, you need to manage those
build products carefully or your repository will balloon to absurd
sizes, where a slightly better system should keep it much more
compact.
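One way such a "slightly better system" might manage those products is
to key each one by a hash of its inputs, so identical work is shared
across developers and machines instead of accumulating as checked-in
object files. A minimal sketch; the cache directory name and the
`compile_fn` interface are made up for illustration:

```python
import hashlib
import os

# Sketch of an intermediate-result cache: key each build step by a
# hash of its inputs, so a second developer (or machine) reuses the
# first one's product instead of recompiling. Directory name is an
# assumption.
CACHE_DIR = ".build-cache"

def step_key(source_text, compiler_flags):
    """Identical inputs -> identical key, on any machine."""
    h = hashlib.sha256()
    h.update(compiler_flags.encode())
    h.update(source_text.encode())
    return h.hexdigest()

def cached_build(source_text, flags, compile_fn):
    """Return the build product, compiling only on a cache miss."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    path = os.path.join(CACHE_DIR, step_key(source_text, flags))
    if os.path.exists(path):
        with open(path, "rb") as f:
            return f.read()  # someone already did this work
    product = compile_fn(source_text, flags)
    with open(path, "wb") as f:
        f.write(product)
    return product
```

Unlike checking object files into the repository, keys that nothing
references any more can simply be garbage-collected, and the scheme
works across platforms because different flags yield different keys.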
To unsubscribe, e-mail: firstname.lastname@example.org
For additional commands, e-mail: email@example.com
Received on Mon May 20 00:55:19 2002