Hello.
I meant to have this done some time ago, but I got
sidetracked. At any rate, the attached tar file (svntest.tgz)
contains the code that implements each of the
test goals defined in www/testing-goals.html.
To get this working, you need to do the following:
% cd $src/subversion/subversion
% tar -xzvf svntest.tgz
That will create a subversion/subversion/svntest directory.
(I am not married to that directory name; if folks
think this would work better as a subdirectory of
subversion/subversion/tests, that would be fine too.)
At this point you would need to patch the toplevel
configure.in and rerun configure. The chmod
thing in the patch below is a bit ugly; it seems to be a bug in
the AC_OUTPUT macro. If you just want to init
the thing by hand instead of rerunning configure,
you can create the subversion/svntest/ dir
in the build dir, copy the svntest.in file
to svntest, and then fill in the @VAR@ entries
in the svntest script.
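For example, the by-hand setup might look something like
this (just a sketch; @top_srcdir@ is my guess at one of the
substitutions, check svntest.in for the actual @VAR@ names):

% cd $build
% mkdir -p subversion/svntest
% sed -e "s,@top_srcdir@,$src/subversion,g" \
      $src/subversion/subversion/svntest/svntest.in \
      > subversion/svntest/svntest
% chmod +x subversion/svntest/svntest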
Index: configure.in
===================================================================
RCS file: /cvs/subversion/configure.in,v
retrieving revision 1.76
diff -u -r1.76 configure.in
--- configure.in 2001/03/30 19:32:51 1.76
+++ configure.in 2001/04/14 21:52:36
@@ -245,6 +261,7 @@
 subversion/libsvn_ra_local/Makefile \
 subversion/svnadmin/Makefile \
 subversion/mod_dav_svn/Makefile \
+subversion/svntest/svntest \
 subversion/tests/Makefile \
 subversion/tests/libsvn_delta/Makefile \
 subversion/tests/libsvn_fs/Makefile \
@@ -260,6 +277,9 @@
 subversion/bindings/tcl/Makefile \
 ])

+dnl This version of autoconf does not seem to respect the executable
+dnl bit on a shell script
+chmod +x subversion/svntest/svntest

 dnl Print warning messages about what we did and didn't configure at the
 dnl end, where people will actually see them.
Once that is finished, you can check that the svntest script
is working by adding the subversion/svntest directory to your PATH
and running `svntest help`. That should print out the
usage text for the script. If you get an error at this
point, make sure that tclsh8.3 is installed and on your PATH.
Newer systems will already have tclsh8.3 installed. You can
get an RPM for Red Hat 6.X here:
http://jfontain.free.fr/tcl-8.3.2-1.i386.rpm
You can also get the source code here:
http://dev.scriptics.com/download/tcl/tcl8_3/tcl8.3.2.tar.gz
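Once tclsh is in place, the check looks like this from the
top of the build directory (a sketch for a Bourne-style shell,
assuming the layout described above):

% PATH=`pwd`/subversion/svntest:$PATH
% export PATH
% svntest help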
Windows folks who already have Cygwin installed can
use it to run the script by editing the svntest file
to uncomment the Cygwin bit. You will also need Tcl,
which is available here:
ftp://ftp.scriptics.com/pub/tcl/tcl8_3/tcl832.exe
Ok, now the fun starts. An example set of test cases
for the command line client is already included in
the tar file. The client tests are located here:
.../svntest/tests/client/tests.tcl
To actually run these tests, you just type:
% svntest
Running this command will recurse into test directories
in the srcdir and run each of the tests it finds. As
the tests are run, output is printed to the screen.
All of the tests should pass, so you should not
see any output other than the names of directories
that are being recursed through.
When the tests finish, summary results will be printed.
The results might look like this (yours will differ):
svntest.tcl: Total 48 Passed 39 Skipped 0 Failed 9
This functionality alone satisfies a number of the design
goals. We have exact test results, the results can be
accessed in a random fashion, test runs do not require
monitoring, and the list goes on.
Of course, the really interesting functionality is
the logging, crash recovery, and regression monitoring
features built on top of this. When you ran the
svntest script, it created a logging/ directory in
the toplevel of the build directory.
% ls logging
host.changes host.log host.out
The host.out file duplicates the test output that was
printed to the screen. The host.log file is a machine-readable
log of test results; it simply indicates
whether each test passed or failed. The host.changes file
is a generated log of how test results changed
from one run to the next, based on the data in the host.log file.
Initially, you will not have any results to compare
against, so the changes file will not be all that
interesting. It will simply show that a number of tests
were added. We will spice things up a bit by
doing a commit-prep of these changes, and then
introducing some regressions and core dumps.
First, you will want to move the log files you
just generated over to the srcdir. This would
typically only need to be done by the test
maintainer, but this is just a demo. Run
`svntest commit-prep` on the command line
and the .log and .changes files will be
copied over to the srcdir logging directory.
Now, we will want to introduce some regressions
into the test process. This sort of thing would
typically happen when a developer checks in
a bad chunk of code, but we will just simulate
it here by patching the tests.tcl file.
cd to the svntest/tests/client directory in the srcdir
and run `patch -p 0 < patch`, as shown below. That will modify
tests.tcl so that a couple of the tests
will crash and a couple of others will suffer
regressions. cd back to the build/.../svntest
directory when you are finished.
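In terms of the paths used earlier, that is (assuming the
tar file was extracted as described at the top):

% cd $src/subversion/subversion/svntest/tests/client
% patch -p 0 < patch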
Now, run the svntest script and take a look
at the output. You should see some output
that looks like this:
...
tests
tests/client
!!!! Crashed during "client-help-alias-2" Recovering ...
tests
tests/client
!!!! Crashed during "client-help-checkout-1" Recovering ...
tests
tests/client
...
This demonstrates the automated crash recovery
design goal. Basically, we are simulating an
in-process core dump while running the
client-help-alias-2 test. It is really important
that a core dump halfway through does not keep
other tests from running. As you can see here,
the second core dump in client-help-checkout-1
would never have been seen had the tests
stopped running after the first crash.
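The gist of the recovery mechanism looks something like this
(a simplified sketch, not the actual svntest source; the
host.log record format here is made up for illustration):

proc run_all {tests} {
    # collect the names of tests attempted in a previous
    # (possibly crashed) run of the harness
    set seen {}
    if {![catch {open host.log r} f]} {
        while {[gets $f line] >= 0} {
            lappend seen [lindex $line 1]
        }
        close $f
    }
    set log [open host.log a]
    foreach {name body} $tests {
        if {[lsearch -exact $seen $name] >= 0} continue
        # record the name *before* running the test; if the
        # test dumps core and takes the process down, the
        # restarted harness will skip it and carry on
        puts $log [list START $name]
        flush $log
        if {[catch {uplevel #0 $body}]} {
            puts $log [list FAILED $name]
        } else {
            puts $log [list PASSED $name]
        }
        flush $log
    }
    close $log
}

An outer wrapper just keeps restarting the harness until it
runs to completion, which is roughly what produces the
"Recovering ..." lines and the repeated directory names in
the output above.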
While automatic crash recovery is really cool,
it is not really the most important feature
of the framework. The automated tracking of
regressions is much more important, and that
is also demonstrated here. You should see
output from 2 failing tests, client-help-checkout-2
and client-help-delete-alias-1. The actual
output does not matter in this case because
the patch to tests.tcl just returned a
bogus value; the important part is how
the framework tracks the failure.
After the tests are run and the results
are displayed, the host.log file will be compared
to the host.log from the previous run (the
commit-prep was needed so we could do the compare).
The results of this comparison should have been
appended to the host.changes file; it should
look something like this:
2001-04-14 {Skipped {0 10} Passed {39 27} Total {48 39} Failed {9 2}} {
client-help-alias-2 {PASSED {}}
client-help-checkout-1 {PASSED {}}
client-help-checkout-2 {PASSED FAILED}
client-help-delete-alias-1 {PASSED FAILED}
}
Note that client-help-alias-2 and client-help-checkout-1 were
the ones that crashed, which is why no new result was recorded
for them. The client-help-checkout-2 and
client-help-delete-alias-1 tests moved from the PASSED state to
the FAILED state, which shows that a regression was recorded.
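Since each entry in the .changes file is just a Tcl list,
other tools can process the results directly. For instance,
a little script to flag regressions might look like this
(a sketch, assuming each entry is the three-element list
shown above and the file lives in logging/):

set f [open logging/host.changes r]
set data [read $f]
close $f
# each entry is: date {summary counts} {name {old new} ...}
foreach {date summary results} $data {
    foreach {name transition} $results {
        foreach {old new} $transition break
        if {[string equal $new FAILED]} {
            puts "$date: regression in $name ($old -> $new)"
        }
    }
}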
Neat eh? Naturally, we don't actually want regressions so
the developer would want to do some more review of his
patch at this point.
Folks might also be interested in some of the other
docs I wrote up; they live in the svntest/docs
subdirectory. For example, there is one about what
command line args the svntest shell accepts and
another about how to add a new test case. There
are also some additional tools that will be built
on top of these basic log features. I have not
really focused on lots of examples of how to write
new tests, or how to create in-process tests, since getting
the right framework in place is more important.
If you are interested in how the tests will actually
look, check out the slew of tests in
svntest/tests/client/tests.tcl.
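If you just want the flavor right now, a test boils down to
a name, a script, and an expected result, in the same spirit
as the stock tcltest package (illustration only, not svntest's
actual syntax; see the docs for that):

package require tcltest
namespace import ::tcltest::*

# passes when `svn help` mentions checkout; the catch keeps
# a missing svn binary from blowing up the script
test client-help-demo {svn help should mention checkout} {
    catch {exec svn help} out
    regexp -nocase {checkout} $out
} 1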
Well, that about does it. As I mentioned before,
I am using this framework in other testing projects
and it scales very well. I leave it up to you
folks to give it a try and tell me what you
think. If folks like it and want to incorporate
it into subversion, I can get you the CVS files
instead of these flat files.
cheers
Mo DeJong
Red Hat Inc