On 8/30/13 4:01 AM, Bert Huijben wrote:
>On 8/30/13 1:37 AM, Julian Foad wrote:
> So we only see problems a day later?
> And can check our fixes the day after.
>
> Maybe we should then also ask every committer to test on Windows and Unix
> before every commit?
> (Or maybe ask them to setup their own bots, because we can't provide them
> the current test coverage)
>
> Why reduce the testing on the existing buildbots?
You realize I'm talking about the release branch build bots. I don't think any
particular change is needed for trunk build bots.
As it stands right now, the branch build bots are generally broken anyway because they share host machines with trunk, and trunk marches on, requiring dependencies that the release branches can't use.
So even a more basic set of tests on every commit would be an improvement. I also don't think we need as much effort on every commit in the release branches, because they use review-then-commit and tend to move much more slowly.
> We should just add 'fast bots' if we think the current bots are too slow. I
> don't think new hardware was added at the primary bots for the last 2-3
> years, so just spending some money there should improve the build times
> without reducing the test coverage given within a few hours.
Sure, if we can find resources to do more testing fast, I'm not objecting to that. I'm just trying to prioritize resources where they are needed most.
I didn't really mention it, but it's critical that the release branches have build bots that are stable. They can't be shared with trunk if we're going to use OS packages (and we probably should).
> The openbsd bot is now broken for almost a week, and with a setup to only
> test once per 24 hours more bots would be in this state for longer periods.
Agreed. The openbsd bot is a great example of what's not working.
> Didn't we already have one?
>
> See:
> http://ci.apache.org/builders/svn-trunk-nightly/builds/875
Yep, but only for trunk. I'm talking about release branch nightlies.
> I don't see what it would help to run the bots run on tarballs vs running it
> on specific revisions.
Validation of our libtool/autoconf/swig/etc. version choices. The current build bot process runs autogen.sh, so it only tests the checkout from SVN. However, almost none of our users or packagers build that way.
A great example of an error that has persisted for a long time is the bindings breakage on OS X in our tarballs. SWIG is run by the RM on Linux, which often triggers some APIs being enabled by the C preprocessor. Then, when you go to build the bindings on OS X, they break. An average user building them is going to think our bindings are broken on OS X. In reality, they just need to know the magic incantation: throw away the already-produced C files and have their own copy of SWIG installed. (Which reminds me, I need to get back to fixing that.)
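For the record, the workaround looks roughly like this. This is a sketch from memory; the exact paths and make target below are assumptions, not a documented procedure:

```shell
# Hypothetical sketch of the OS X workaround described above.
# The paths and make target are assumptions, not documented procedure.

# 1. Throw away the C files that the RM's Linux SWIG pregenerated
#    into the tarball:
rm -f subversion/bindings/swig/python/*.c

# 2. Regenerate them with the locally installed SWIG, so the
#    preprocessor conditionals match this platform:
./configure
make swig-py    # regenerate and build the Python bindings
```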
That's at least part of the reason for us doing manual testing on the tarballs.
Otherwise, we could just tag, test, generate tarballs and then sign the
tarballs with nothing other than a simple comparison of the source files.
Testing tarballs on the release branches does avoid more diverse autogen.sh
dependencies, but I don't think our build bots really have that diverse of
dependencies in this case anyway.
> It will certainly cost a lot of time to rewrite the scripts, while the
> buildbot can just handle this for tags/branches/revisions.
I realize this isn't entirely cost free. It's not going to happen overnight.
But it's something we should strive for.
I also don't think it'll take much for the build bots to use tarballs. We just have to replace the checkout script with a tarball retrieving/expanding script and then skip autogen.sh in the existing build scripts.
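A minimal sketch of that replacement step might look like the following; the version number and URL are placeholders, not a real release location:

```shell
#!/bin/sh
# Sketch: fetch and unpack a candidate tarball instead of running
# "svn checkout". VERSION and the URL are placeholders.
VERSION="1.8.3"
TARBALL="subversion-${VERSION}.tar.bz2"
curl -sSfO "https://dist.apache.org/repos/dist/dev/subversion/${TARBALL}"
tar xjf "${TARBALL}"
cd "subversion-${VERSION}" || exit 1
# Note: no autogen.sh here -- the tarball already ships a generated
# configure script, matching what users and packagers actually build.
./configure && make && make check
```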
> I try to build from tags/revisions whenever possible, as this allows me to
> always get the exact sourcecode for files back via the SCM information
> stored in the debug information without having to keep all versions of all
> tarballs.
>
>
> I trust that our tags and revisions in Subversion don't change. Don't you?
I do the same. I trust our tags and revisions. That wasn't the point.
>> One more thing we'd need to do before signing, of course: verify that the
>> buildbot test runs have completed to our satisfaction (right tarball,
>> sufficient platforms, etc.).
Right, this was assumed but I should have mentioned it in my original email.
> The manual testing also adds coverage outside the scripted tests. My scripts
> make my tests run almost identical as on the buildbot, but over the last 5
> years I found more than a few release problems in the difference between
> that 'almost identical' and the buildbots itself.
As I noted in my original email, nobody is stopping interested parties from running additional manual tests if they want to. I just want to encourage people to test and sign rather than assume someone else will do it because they aren't excited about spending the time on the testing.
There will always be random failures that are missed by the build bots and even by manual testing. However, I'd guess we'd probably catch more of them if we started running extensive tests on a nightly basis, because the test suites would get run far more often than they do under our manual testing.
Received on 2013-08-30 23:10:03 CEST