Hi @dev!
SITUATION
Reading the recent discussions confirmed my
experiences with TSVN:
* TSVN is an extremely useful and efficient tool
* TSVN is very stable and quite reliable
(it has to be, being an SCM tool)
* Stefan Küng is The Man: not only does he
write the TSVN code, he also spends
tremendous amounts of time helping
people on the @dev list.
* There is a great community around TSVN:
many other people volunteered and got
involved in documentation, translation, release
management, helping users on the lists or
providing patches.
But:
* From time to time, TSVN code gets "randomly"
broken just like any other code
* These bugs sometimes make it into the STABLE
branches undetected
The latter is a problem as it effectively voids the
"if you want a stable release, go for x.y.1 instead
of x.y.0" strategy and similar ones.
ANALYSIS
We cannot follow the SVN approach of reviewing
changes before merging them into the stable
branches because there are simply not enough
developers. Hence, we have to test, and luckily
there seem to be many people who are actually
willing to do that.
Currently, our testing is "three-way-blind-testing":
Nobody knows
* how many people run tests
* what their findings are (many won't report their
results to the @dev list)
* what got tested under what conditions
In an ideal world, we should be able to track all
of these points. If the test results are public,
people can easily improve them systematically
(increasing coverage and confidence where
still lacking).
PROPOSAL
Please note that I do not know whether the
following is at all feasible or how it would
"feel". It is more of a requirements document
than an actual design description.
The implementation will probably require some
form of DB backend, a reporting engine and
XML data import (test specs).
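
Just to make this concrete, here is a minimal
sketch of what the DB backend could look like
(SQLite via Python here; all table and column
names are assumptions on my part, not a design):

import sqlite3

# Hypothetical minimal schema -- table and column
# names are illustrative only.
conn = sqlite3.connect("tsvn_tests.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS feature (
    id       INTEGER PRIMARY KEY,
    grp      TEXT NOT NULL,  -- one grouping level
    title    TEXT NOT NULL,  -- short feature name
    expected TEXT NOT NULL   -- expected behavior
);
CREATE TABLE IF NOT EXISTS vote (
    feature_id INTEGER NOT NULL REFERENCES feature(id),
    result     TEXT NOT NULL
               CHECK (result IN ('ok', 'broken')),
    build      TEXT,         -- nightly / release tested
    cast_at    TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
""")
conn.commit()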
1. Create a comprehensive but coarse-grained
feature list, including a short description of
the expected behavior. Example:
* add / modify / remove a property on a file.
Commit and changes dialogs must report
the change.
The list should support grouping (at least
one level) and be represented in some generic
format from which user representations can
be generated.
2. People should be able to get it as a printable
check-list.
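
Here is a possible shape for the test specs of
point 1 and a printable check-list (point 2)
derived from them; the XML element names are
made up for illustration only:

import xml.etree.ElementTree as ET

# Made-up spec format; not a proposal, just a sketch.
SPEC = """
<tests>
  <group name="properties">
    <feature id="prop-file">
      <title>add / modify / remove a property on a file</title>
      <expected>Commit and changes dialogs must
        report the change.</expected>
    </feature>
  </group>
</tests>
"""

def print_checklist(xml_text):
    # One "[ ]" line per feature, grouped as in the spec.
    root = ET.fromstring(xml_text)
    for group in root.findall("group"):
        print(group.get("name"))
        for feature in group.findall("feature"):
            print("  [ ] %s" % feature.findtext("title"))

print_checklist(SPEC)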
3. Create a web page that presents the list in
some form and allows people to check "ok"
or "broken" for each item. So, basically, it
is a voting process.
It is important that people don't have to file
a full report but can just mark what they tried.
Brokenness is expected to be reported to
the list.
Initially, access to the page should not be
restricted. If something is reported as "broken"
but there is no @dev post, this is a good
measure for the quantity of noise on *both*
sides of the vote.
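
A rough sketch of how such a voting page could
work, reusing the schema assumed above
(http.server is only a stand-in for whatever
the real project web server would provide):

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs
import sqlite3

DB = "tsvn_tests.db"

class VotePage(BaseHTTPRequestHandler):
    def do_GET(self):
        # One line per feature with "ok" / "broken" radio
        # buttons; unmarked items are simply not submitted,
        # so a partial run costs the tester nothing.
        conn = sqlite3.connect(DB)
        rows = conn.execute(
            "SELECT id, title FROM feature ORDER BY grp").fetchall()
        items = "".join(
            '<li>%s '
            '<input type="radio" name="f%d" value="ok">ok '
            '<input type="radio" name="f%d" value="broken">broken'
            '</li>' % (title, fid, fid)
            for fid, title in rows)
        page = ('<form method="post"><ul>%s</ul>'
                '<input type="submit" value="send"></form>' % items)
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(page.encode())

    def do_POST(self):
        # Record only what the tester actually marked.
        length = int(self.headers["Content-Length"])
        fields = parse_qs(self.rfile.read(length).decode())
        conn = sqlite3.connect(DB)
        for name, values in fields.items():
            conn.execute(
                "INSERT INTO vote (feature_id, result) VALUES (?, ?)",
                (int(name[1:]), values[0]))
        conn.commit()
        self.send_response(303)
        self.send_header("Location", "/")
        self.end_headers()

HTTPServer(("", 8000), VotePage).serve_forever()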
4. The results (sum of votes for each item)
should be available to everyone but not
on the same page as the check-boxes.
"Most wanted" test (having fewest testers
so far) should be reported along with "most
probably broken".
For some people (mostly developers) it must
be possible to reset the votes for individual
items (e.g. after bug fixes).
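
With the schema assumed above, these reports
and the reset boil down to a few queries, e.g.:

import sqlite3
conn = sqlite3.connect("tsvn_tests.db")

# "Most wanted": the items with the fewest votes so far.
most_wanted = conn.execute("""
    SELECT f.title, COUNT(v.feature_id) AS votes
    FROM feature f LEFT JOIN vote v ON v.feature_id = f.id
    GROUP BY f.id ORDER BY votes ASC LIMIT 10""").fetchall()

# "Most probably broken": highest share of 'broken' votes.
most_broken = conn.execute("""
    SELECT f.title,
           SUM(v.result = 'broken') * 1.0 / COUNT(*) AS ratio
    FROM feature f JOIN vote v ON v.feature_id = f.id
    GROUP BY f.id ORDER BY ratio DESC LIMIT 10""").fetchall()

# Developer-only reset after a bug fix: drop the
# votes for a single item.
def reset_votes(feature_id):
    conn.execute("DELETE FROM vote WHERE feature_id = ?",
                 (feature_id,))
    conn.commit()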
5. Improve the feature list to become more
detailed. Of course, this will take many
iterations to complete and may finally comprise
hundreds of tests.
It should try to follow natural workflows to
improve test efficiency.
6. Regression tests are added to the list (only)
if there has been a reported issue under specific
conditions ("a single deleted file in an empty
directory causes the TSVN commit dialog to crash").
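
Such a regression test could become just another
entry in the spec format sketched above, e.g.:

# The crash report quoted above, turned into a
# spec entry in the same made-up format as before:
REGRESSION = """
<feature id="commit-single-deleted-file">
  <title>commit a single deleted file in an empty directory</title>
  <expected>The commit dialog must not crash.</expected>
</feature>
"""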
Do you think my analysis is correct?
What about the proposal?
-- Stefan^2