On Fri, Mar 20, 2015 at 10:30 AM, Johan Corveleyn <jcorvel_at_gmail.com> wrote:
> On Fri, Mar 20, 2015 at 9:42 AM, Johan Corveleyn <jcorvel_at_gmail.com> wrote:
> ...
>> Unfortunately, I can't verify the rev
>> file, since I don't have it anymore; it has been overwritten by my
>> attempts to reproduce the problem (grrrr, I should remember to back up
>> leftover repositories and working copies after a failed test run,
>> before trying to reproduce it). Whatever I try now, I can't reproduce
>> it anymore :-(.
>
> I'm wondering if something can be improved in our test suite to help
> with diagnosing hard-to-reproduce test failures. When this happens, you
> typically wish you could analyse as much data as possible (e.g. the
> potentially corrupt repository, working copy, dump file, ... that was
> used in the test).
>
> Currently, I can think of three causes for losing this information:
>
> 1) You run a series of test runs in sequence from a script
> (ra_local, ra_svn, ra_serf), all using the same target directory for
> running the tests (R:\test in my case, where R: is a RAM drive). If
> something fails in ra_svn, but succeeds in ra_serf, your broken test
> data is overwritten.
>
> 2) You don't know in advance that the failure will turn out to be
> non-reproducible. You can't believe your eyes, run it again just to be
> sure, and lo and behold, the test succeeds (and the broken test data
> is overwritten), and keeps succeeding ever since.
>
> 3) Your test data is on a RAM drive, and you reboot or something. Or
> you copy the data to a fixed disk afterwards, but lose a bit of
> information because last-modified timestamps of the copied files are
> reset by copying them between disks.
>
>
> For 1, maybe the outer script could detect that ra_svn had a failure,
> and stop there (does win-tests.py emit an exit code != 0 if there is a
> test failure? That would make it easy. Otherwise the outer script
> would have to parse the test summary output).
>
> Another option is to let every separate test run (ra_local, ra_svn,
> ra_serf) use a distinct target test directory. But if you're running
> them on a RAM disk, theoretically you might need three times the
> storage (hm, maybe not, because --cleanup ensures that successful test
> data is cleaned up, so as long as you don't run the three variants in
> parallel, it should be fine). I guess I'll just go ahead and do that,
> and adjust my script accordingly.
>
>
> Addressing 2 seems harder. Can the second test execution, on
> encountering stale test data, put that data aside instead of
> overwriting it? Or maybe every test execution can use a unique naming
> pattern (with a timestamp or a pid) so it doesn't overwrite previous
> data? Both approaches would accumulate data from failed test runs of
> course, but that's more or less the point. OTOH, you can't tell whether
> stale test data is from a previous failed run or from a successful run
> that did not use --cleanup.
>
>
> And 3, well, I already reboot as little as possible, so this is mostly
> just something to keep in mind.
>
>
> Maybe a way to address all three at once would be: after a failed test
> run, automatically zip the test data and copy it to a safe location
> (e.g. the user's home dir).
>
> Thoughts?
>
FWIW, for the time being (I don't know enough about the test
infrastructure to build a more robust solution), I've come up with
this quick hack to address point 1, by uniquifying the test directory
across the different test variations.
I have a batch script runtest.bat with the following contents (note
the TESTKEY variable, which I use as part of the test path):
[[[
@echo off
SETLOCAL
SET PATH=%~dp0\dist\bin;%PATH%
mkdir R:\temp 2>NUL:
SET TMP=R:\temp
SET TEMP=R:\temp
if "%CONFIG%" == "" SET CONFIG=release
SET TESTKEY=%CONFIG%%1%2%3%4%5%6%7%8%9
SET TESTKEY=%TESTKEY::=_%
SET TESTKEY=%TESTKEY:/=_%
SET TESTKEY=%TESTKEY:\=_%
python win-tests.py -c --log-level=DEBUG --%CONFIG% %1 %2 %3 %4 %5 %6 ^
    %7 %8 %9 R:\test_%TESTKEY%
ENDLOCAL
@echo on
]]]
And I call this runtest.bat from within another script named alltests.bat:
[[[
call runtest -p
call runtest -p -u svn://localhost
call runtest -p --httpd-dir=C:\Apache24 --httpd-port=1234 --httpd-no-log
]]]
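
By the way, if win-tests.py turns out to exit with a non-zero code when
there are test failures (the open question from point 1 above), alltests.bat
could also just stop at the first failing variation, so nothing gets
overwritten afterwards. A rough, untested sketch (also assuming the exit
code survives the ENDLOCAL / echo at the end of runtest.bat):
[[[
REM Assumes win-tests.py (and thus runtest.bat) exits non-zero on a
REM test failure -- still to be verified.
call runtest -p
if errorlevel 1 goto :failed
call runtest -p -u svn://localhost
if errorlevel 1 goto :failed
call runtest -p --httpd-dir=C:\Apache24 --httpd-port=1234 --httpd-no-log
if errorlevel 1 goto :failed
goto :eof
:failed
echo A test run failed, leaving its test directory in place for analysis.
]]]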
Those three runs result in the following directories with the test
output on my R: (RAM) drive:
[[[
R:\>dir /b
Temp
test_release-p
test_release-p--httpd-dirC__Apache24--httpd-port1234--httpd-no-log
test_release-p-usvn___localhost
]]]
This also helps a bit with point 2 above, because when I retry a
failing test I usually add a -t option to rerun just that single test
(which results in yet another separate test path).
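
Going one step further (to cover point 2 even when the arguments are
identical), I could append a timestamp to TESTKEY, right after the
"SET TESTKEY=%CONFIG%..." line in runtest.bat, so that no run ever
overwrites the data of a previous one. Untested sketch; the %DATE% and
%TIME% formats are locale-dependent, hence the extra substitutions:
[[[
REM Append a timestamp; %DATE%/%TIME% are locale-dependent, so also
REM replace spaces, dots and commas (the existing lines below already
REM take care of ':', '/' and '\').
SET TESTKEY=%TESTKEY%_%DATE%_%TIME%
SET TESTKEY=%TESTKEY: =_%
SET TESTKEY=%TESTKEY:.=_%
SET TESTKEY=%TESTKEY:,=_%
]]]
Of course that makes the directory names even longer, and directories
of successful runs would pile up too, but those are easy enough to
sweep away every now and then.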
A problem might be that I'm making the paths too long, and some tests
might fail because of that (Windows path length limitations, etc.).
We'll see ... For now it works :-).
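
Finally, the "zip the data to a safe location" idea from above could
probably be bolted onto runtest.bat as well: right after the
win-tests.py invocation, something like the following (untested; again
assuming a non-zero exit code on failure, using Python's shutil since
Python is available anyway, and with "failed_test_" as just an example
name):
[[[
REM Assumes win-tests.py exits non-zero on failure. If so, archive the
REM possibly-corrupt test data to the user's home dir as a zip.
if errorlevel 1 python -c "import shutil; shutil.make_archive(r'%USERPROFILE%\failed_test_%TESTKEY%', 'zip', r'R:\test_%TESTKEY%')"
]]]
That way a failed run would leave a zip in my home dir, even if the RAM
drive gets cleared or I reboot (point 3).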
--
Johan