
Re: CVS update: subversion/subversion/tests/clients/cmdline README common.sh

From: Karl Fogel <kfogel_at_collab.net>
Date: 2001-03-27 19:38:30 CEST

I'm glad this thread is alive. Please don't be offended that my
commits seem to ignore it right now; this is a temporary situation,
resulting from these three facts:

   1) We need to have some basic automated tests of client
      functionality, just to make it to Milestone 2 with reliable
      code.

   2) Milestone 2 is this Sunday.

   3) Designing a real client testing harness is not something that's
      going to happen between now and Sunday. However, rudimentary
      shell scripting is very low-overhead.

All my commit does is extend what we're already doing in
svn-test.sh and svn-test2.sh; specifically, it extends them to probe
the results of svn operations in the working copy. This may or may not
turn out to be a great & general test suite. Its goal is merely to
save us time right now.

No one should think of it as some sort of statement about how a client
test suite should be. That question will be pondered in depth soon,
but before April 2nd, developers working on M2 will not have time
to do that pondering.

-K

Mo DeJong <mdejong@cygnus.com> writes:
> Well, before folks go off deciding on a solution, could
> we agree on the problem? I wrote up and attached an HTML
> page that details the functional requirements I
> see for an SVN test suite. I have tried to avoid
> language wars here; this document simply outlines
> the problems that need to be solved without specifying
> what tool would be used to solve them.
>
> Can folks take a look at these requirements and suggest
> any revisions or clarifications you think might be needed?
> It is a first draft, so don't be shy about pointing
> out bits that need to be clarified.
>
> thanks
> Mo DeJong
> Red Hat Inc
>
> <html>
> <title>SVN Test</title>
>
> <body bgcolor="white">
>
> <h1>Design goals for the SVN test suite</h1>
>
> <ul>
> <li>
> Why Test?
> </li>
> <li>
> Audience
> </li>
> <li>
> Requirements
> </li>
> <li>
> Ease of Use
> </li>
> <li>
> Location
> </li>
> <li>
> External dependencies
> </li>
> </ul>
>
>
>
> <A NAME="WHY"><H3>Why Test?</H3></A>
>
> <p>
> Regression testing is an essential
> element of high-quality software.
> Unfortunately, some developers
> have not had first-hand exposure
> to a high-quality testing framework.
> Lack of familiarity with the positive
> effects of testing can be blamed
> for statements like:
> </p>
> <blockquote>
> "I don't need to test my code,
> I know it works."
> </blockquote>
> <p>
> It is safe to say that the
> idea that developers do not
> introduce bugs
> has been disproved.
> </p>
>
>
> <A NAME="AUDIENCE"><H3>Audience</H3></A>
>
> The test suite will be used by
> both developers and end users.
>
> <p>
> <b>Developers</b> need a test suite to help with:
> </p>
>
> <p>
> <b><i>Fixing Bugs:</i></b><br>
> Each time a bug is fixed, a test case should be
> added to the test suite. Creating a test case
> that reproduces a bug is a seemingly obvious
> requirement. If a bug cannot be reproduced,
> there is no way to be sure a given change
> will actually fix the problem. Once a
> test case has been created, it can be used
> to validate the correctness of a given patch.
> Adding a new test case for each bug also
> ensures that the same bug will not be
> introduced again in the future.
> </p>
>
> <p>
> <b><i>Impact Analysis:</i></b><br>
> A developer fixing a bug or adding
> a new feature needs to know if
> a given change breaks other parts
> of the code. It may seem obvious,
> but keeping a developer from
> introducing new bugs is one
> of the primary benefits of
> using a regression test
> system.
> </p>
>
> <p>
> <b><i>Regression Analysis:</i></b><br>
> When a test regression occurs,
> a developer will need to manually
> determine what has caused the failure.
> The test system is not able
> to determine why a test case
> failed. The test system should
> simply report exactly which test
> results changed and when the
> last results were generated.
> </p>
>
> <b>Users</b> need a test suite to help with:
>
> <p>
> <b><i>Building:</i></b><br>
> Building software can be a scary process.
> Users who have never built software
> may be unwilling to try. Others may
> have tried to build a piece of software
> in the past, only to be thwarted by
> a difficult build process. Even if
> the build completed without an error,
> how can a user be confident that the
> generated executable actually works?
> The only workable solution to this
> problem is to provide an easily
> accessible set of tests that the
> user can run after building.
> </p>
>
> <p>
> <b><i>Porting:</i></b><br>
> Often, users become porters when
> the need to run on a previously
> unsupported system arises. This
> porting process typically requires
> some minor tweaking of include files.
> It is absolutely critical that
> testing be available when porting
> since the primary developers
> may not have any way to test
> changes submitted by someone
> doing a port.
> </p>
>
>
> <p>
> <b><i>Testing:</i></b><br>
> Different installations
> of the exact same OS can
> contain subtle differences
> that cause software to
> operate incorrectly.
> Only testing on different
> systems will expose problems
> of this nature. A test suite
> can help identify these sorts
> of problems before a program
> is actually put to use.
> </p>
>
>
>
>
> <A NAME="REQUIREMENTS"><H3>Requirements</H3></A>
>
> Functional requirements of
> an acceptable test suite include:
>
>
> <p>
> <b><i>Exact Results:</i></b><br>
> A test case must have one expected
> result. If the result of running a
> test does not exactly match the
> expected result, the test must fail.
> </p>
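>
> <p>
> As a rough illustration (the file names and test name here are
> hypothetical, not part of any existing harness), a shell driver
> might enforce exact matching like this:
> </p>
>
> <blockquote>
> <pre>
> # Run one operation and compare its output byte-for-byte
> # against the stored expected result.
> svn status wc > tests/status-1.actual 2>&1
> if cmp -s tests/status-1.expected tests/status-1.actual; then
>     echo "PASS: status-1"
> else
>     echo "FAIL: status-1"
>     diff tests/status-1.expected tests/status-1.actual
> fi
> </pre>
> </blockquote>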
>
> <p>
> <b><i>Reproducible Results:</i></b><br>
> Test results should be reproducible.
> If a test result matches the expected
> result, it should do so every time
> the test is run. External
> factors like time stamps must
> not affect the results of a test.
> </p>
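>
> <p>
> One way to meet this requirement (a sketch, not an existing
> convention in this project) is to normalize volatile fields
> such as timestamps before comparing:
> </p>
>
> <blockquote>
> <pre>
> # Replace dates of the form YYYY-MM-DD HH:MM:SS with a fixed
> # token so that otherwise-identical runs compare equal.
> sed -e 's/[0-9][0-9]*-[0-9][0-9]-[0-9][0-9] [0-9:]*/TIMESTAMP/g' \
>     tests/log-1.actual > tests/log-1.normalized
> cmp -s tests/log-1.expected tests/log-1.normalized
> </pre>
> </blockquote>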
>
> <p>
> <b><i>Self-Contained Tests:</i></b><br>
> Each test should be self-contained.
> Results for one test should not
> depend on side effects of previous
> tests. This is obviously a good
> practice, since one is able to
> understand everything a test is
> doing without having to look
> at other tests. The test system
> should also support random access
> so that a single test or set of
> tests can be run. If a test is not
> self-contained, it cannot be run
> in isolation.
> </p>
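>
> <p>
> For instance (just a sketch, assuming the usual svnadmin and
> svn commands), each test could build its own repository and
> working copy from scratch:
> </p>
>
> <blockquote>
> <pre>
> # Every test gets a private repository and working copy,
> # so no test depends on state left behind by another.
> rm -rf repo-3 wc-3
> svnadmin create repo-3
> svn checkout file://`pwd`/repo-3 wc-3
> </pre>
> </blockquote>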
>
> <p>
> <b><i>Selective Execution:</i></b><br>
> It may not be possible to run
> a given set of tests on certain
> systems. The suite must provide
> a means of selectively running
> test cases based on the
> environment. The test system
> must also provide a way to
> selectively run a given
> test case or set of test
> cases on a per invocation
> basis. It would be incredibly
> tedious to run the entire
> suite to see the results
> for a single test.
> </p>
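>
> <p>
> Invocation might look something like this (the driver script
> name is hypothetical):
> </p>
>
> <blockquote>
> <pre>
> # Run the whole suite, a single named test, or a set of tests.
> ./run-tests.sh
> ./run-tests.sh checkout-basic
> ./run-tests.sh 'commit-*'
> </pre>
> </blockquote>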
>
> <p>
> <b><i>No Monitoring:</i></b><br>
> The tests must run from start to
> end without operator intervention.
> Test results must be generated
> automatically. It is critical
> that an operator not need to
> manually compare test results
> to figure out which tests failed
> and which passed.
> </p>
>
>
> <p>
> <b><i>Automatic Recovery:</i></b><br>
> The test system must be able
> to recover from crashes and
> unexpected delays. For
> example, a child process might
> go into an infinite loop and
> need to be killed. The
> test shell itself might
> also crash or go into
> an infinite loop. In these
> cases, the test run must
> automatically recover and
> continue running, starting
> with the test directly
> after the one that crashed.
> Without this, a crash halfway
> through a run would leave the
> system unable to report
> results for the second half
> of the tests.
>
> The process must
> be completely automated;
> no operator intervention
> should be required.
> </p>
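>
> <p>
> As a rough sketch (the test script name and time limit are
> illustrative), a driver could guard each test with a watchdog
> and move on when the child hangs:
> </p>
>
> <blockquote>
> <pre>
> # Run a single test in the background, kill it if it exceeds
> # a five-minute limit, and record the outcome either way.
> ./test-case.sh &
> pid=$!
> ( sleep 300; kill $pid 2>/dev/null ) &
> watchdog=$!
> if wait $pid; then
>     echo "PASS: test-case"
> else
>     echo "FAIL: test-case (crashed or timed out)"
> fi
> kill $watchdog 2>/dev/null
> </pre>
> </blockquote>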
>
>
> <p>
> <b><i>Report Results Only:</i></b><br>
> When a regression is found, a
> developer will need to manually
> determine the reason for the
> regression.
> The system should tell the
> developer exactly what
> tests have failed and
> when the last set of
> results was generated,
> but that is all. Any additional
> functionality is outside the
> scope of the test system.
> </p>
>
> <p>
> <b><i>Platform Specific Results:</i></b><br>
> Each supported platform should
> have an associated set of
> test results. The alternative
> would be to only maintain
> results for a reference
> platform. The downside
> to the reference platform
> approach is that it does
> not keep track of the
> case where a specific
> set of test results
> differs from the results
> on the reference platform.
> With a platform specific
> set of test results, these
> failures would have already
> been recorded in a previous
> result log and would therefore
> not be flagged as regressions.
> </p>
>
> <p>
> <b><i>Test Types:</i></b><br>
> The test suite should support two
> types of tests. The first makes
> use of an external program
> like the svn client.
> These kinds of tests will need
> to exec the external program and
> check the output and exit status
> of the child process. Note that
> it will not be possible to run
> this sort of test on Mac OS.
> The second type of test will
> load subversion shared libraries
> and invoke methods directly.
> This provides the ability to
> do extensive testing of the
> various subversion APIs without
> using the svn client. This also
> has the nice benefit that it
> will work on Mac OS as well.
> </p>
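>
> <p>
> A sketch of the first type in shell (the expected "usage" text
> is an assumption about the client's help output):
> </p>
>
> <blockquote>
> <pre>
> # Exec the svn client, then verify both its exit status
> # and its output.
> output=`svn help 2>&1`
> status=$?
> if [ $status -eq 0 ] && echo "$output" | grep usage >/dev/null; then
>     echo "PASS: help-basic"
> else
>     echo "FAIL: help-basic (exit status $status)"
> fi
> </pre>
> </blockquote>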
>
> <A NAME="EASEOFUSE"><H3>Ease of Use</H3></A>
>
> <p>
> Developers will tend to avoid using
> a test suite if it is not easy to
> add new tests and maintain old ones.
> If developers are uninterested in
> using the test suite, it will
> quickly fall into disrepair
> and become a burden instead of
> an aid.
> </p>
>
> <p>
> Users will simply avoid running
> the test suite if it is not
> extremely simple to use. A
> user should be able to build
> the software and then run:
>
> <blockquote>
> <code>
> % make check
> </code>
> </blockquote>
>
> This should run the test suite
> and provide a very high level
> set of results that include
> how many test results have
> changed since the last run.
> </p>
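>
> <p>
> A minimal sketch of the corresponding make target (the
> run-tests.sh driver and its --summary flag are hypothetical):
> </p>
>
> <blockquote>
> <pre>
> # Makefile fragment: run the suite and summarize how many
> # results changed since the last run.
> check:
> 	cd tests && ./run-tests.sh --summary
> </pre>
> </blockquote>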
>
> <p>
> While this high level report
> is useful to developers, they
> will often need to examine
> results in more detail.
> The system should provide a
> means to manually examine
> results and compare output
> from a previous run.
> </p>
>
>
>
> <A NAME="LOCATION"><H3>Location</H3></A>
>
> <p>
> The test suite should be packaged
> along with the source code instead
> of being made available as a separate
> download. This significantly
> simplifies the process of running
> tests since they are
> already incorporated into
> the build tree.
> </p>
>
> <p>
> The test suite must support
> building and running inside
> and outside of the source
> directory. For example,
> a developer might want to
> run tests on both Solaris
> and Linux. The developer
> should be able to run the
> tests concurrently in two
> different build directories
> without having the tests
> interfere with each other.
> </p>
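>
> <p>
> For example (paths are illustrative), two runs launched from
> separate build trees should never touch the same files:
> </p>
>
> <blockquote>
> <pre>
> # Run the suite from each build tree; since all scratch files
> # live inside the build directory, the runs can proceed in
> # parallel without interfering.
> cd ~/build/solaris && make check
> cd ~/build/linux && make check
> </pre>
> </blockquote>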
>
>
> <A NAME="EXTERNAL"><H3>External program dependencies</H3></A>
>
> <p>
> As much as possible, the test suite should avoid
> depending on external programs or libraries.
>
> Of course, there is a nasty bootstrap problem
> with a test suite implemented in a
> scripting language. A wide variety
> of systems provide no support for modern
> scripting languages. We will avoid
> this issue for now and assume that
> the scripting language of choice is
> supported by the system.
> </p>
>
> <p>For example, the test suite should not depend
> on CVS to generate test results. Many users
> will not have access to CVS on the system
> they want to test subversion on.</p>
>
> <hr>
>
> </body>
> </html>