
regression test system design question.

From: Mo DeJong <mdejong_at_cygnus.com>
Date: 2001-03-10 00:59:57 CET

Hi all.

I have a regression test system design question
for folks to ponder.

First, a quick blurb about the generic interface
that I am considering. It would be a `svntest`
program that would start a test case run and
automatically log test results as the system
runs. By default, it would print nothing
when a test case passed. If a test case failed,
it would print info about how it failed.

% svntest
...
FAILED client-test14 COREDUMP
...
...
svntest: Total 100 Passed 90 Skipped 0 Failed 10
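
Something like this rough Python sketch is
what I have in mind. The test list, the way
results get classified, and the log format
are placeholders, not a real implementation:

#!/usr/bin/env python
# Rough sketch of the svntest driver loop.
# The per-test binaries and the log format
# are hypothetical.

import subprocess

# Placeholder: the real harness would discover
# test cases from the build tree.
TEST_CASES = ["client-test%d" % i for i in range(1, 15)]

def run_case(name):
    """Run one test binary and classify the result."""
    try:
        proc = subprocess.run(["./" + name], capture_output=True)
    except OSError:
        return "SKIPPED"        # not built on this host
    if proc.returncode < 0:
        return "COREDUMP"       # killed by a signal
    return "PASSED" if proc.returncode == 0 else "FAILED"

def main():
    counts = {"PASSED": 0, "SKIPPED": 0, "FAILED": 0}
    with open("default.log", "w") as log:
        for name in TEST_CASES:
            result = run_case(name)
            log.write("%s %s\n" % (name, result))
            if result in ("FAILED", "COREDUMP"):
                counts["FAILED"] += 1
                # Failures are the only per-test output.
                suffix = " COREDUMP" if result == "COREDUMP" else ""
                print("FAILED %s%s" % (name, suffix))
            else:
                counts[result] += 1
    print("svntest: Total %d Passed %d Skipped %d Failed %d"
          % (len(TEST_CASES), counts["PASSED"],
             counts["SKIPPED"], counts["FAILED"]))

if __name__ == "__main__":
    main()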

Note that running `make check` from the toplevel
directory would just run `cd $SVNTEST ; ./svntest`.

The results of the test run would be a default.log
(the running test log) and a default.changes
(shows regressions and other result changes).
Note that these test results need to be saved
in CVS so that they can be compared to results
from previous runs. Now this is nice
and simple, but it fails to deal with multiple
hosts. So here is the real question.

Should test results and logging info
be separated out on a per host/build basis?

For example, I might want to run the tests
on Linux and Solaris. If it is a strict
requirement that one be able to log and
compare test results on a per host basis,
that is going to make the logging
component of the test system more complex.

If test results are stored on a per host
basis, the svntest/logging dir would
look something like:

% ls
default_i686-pc-linux-gnu.log
default_i686-pc-linux-gnu.changes
default_sparc-sun-solaris2.6.log
default_sparc-sun-solaris2.6.changes
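
A script could compute those names at run
time. Here is a hypothetical sketch; the
real thing would want the config.guess
triplet that configure already computes,
since Python's platform module can only
approximate it:

import platform

def host_string():
    # Rough stand-in for a config.guess triplet
    # like i686-pc-linux-gnu; this only yields
    # something like "i686-linux".
    return "%s-%s" % (platform.machine(),
                      platform.system().lower())

log_name = "default_%s.log" % host_string()
changes_name = "default_%s.changes" % host_string()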

What is the alternative?
Well, you could just save one
log that was generated on a
reference platform. Results
for each system would be
generated and then compared
to the reference platform's results.
This approach has some benefits
and some drawbacks.

The reference platform approach
is simpler since you only need
to keep one log file in CVS.
It also works for 99%
of the things that you need to
test since the same code is being
executed on different platforms.
It also does a good job of
showing how test results
differ from the reference
platform.

The problems with the reference
platform approach show up when
you have test result differences
between the reference platform
and the test platform. For example,
suppose your reference platform
is Linux and you run the tests
on Solaris. You then get a
report indicating regressions
in 5 test cases. You don't know
if these 5 regressions were
caused by recent changes or
if they are just Solaris-specific
problems that produce different
results when compared to Linux.
You would have to look at each
test by hand to figure that out.

If results are stored on a per
host basis, the system would
only report regressions
that showed up on Solaris.
It would be up to the user
to compare Solaris and Linux
(but a script could be
written to do that too).
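
For what it is worth, that script would not
be much work. A hypothetical sketch, assuming
the one-result-per-line log format from the
driver sketch above:

import sys

def load_results(path):
    # Parse "testname result" lines into a dict.
    results = {}
    for line in open(path):
        name, result = line.split()
        results[name] = result
    return results

def compare(path_a, path_b):
    a = load_results(path_a)
    b = load_results(path_b)
    for name in sorted(set(a) | set(b)):
        ra = a.get(name, "MISSING")
        rb = b.get(name, "MISSING")
        if ra != rb:
            print("%s: %s vs %s" % (name, ra, rb))

if __name__ == "__main__":
    compare(sys.argv[1], sys.argv[2])

Run as, for example:

% python compare_logs.py default_sparc-sun-solaris2.6.log \
      default_i686-pc-linux-gnu.log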

So, what do folks think of
these options?

Mo DeJong
Red Hat Inc