Nik Clayton wrote:
>> 2. How do we test merge-tracking effectively?
> I imagine the same way other features are tested. Lots of automated
> tests to try and cover expected success and failure modes. Is there
> anything inherent in the merge tracking functionality that makes it
> difficult to test in this manner?
I suspect that testing merge tracking is no more complex than testing any
other feature, but *unlike* most of our other testing, it involves
arbitrarily large and/or deep-historied datasets. Most of our regression
tests can do their job in under 5 revisions of change. Merge tracking
tests almost certainly need a bigger, more branch-ful, history-interesting
dataset to play with.
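To make that concrete, here is a toy sketch in Python (the language of our
test suite) of what generating such a branch-heavy dataset might look like.
Note this is purely illustrative: build_branchy_history and its parameters
are hypothetical and not part of any existing harness; it only models the
*shape* of a history (a revision DAG where merge revisions have two
parents), not actual repository operations.

```python
import random

def build_branchy_history(num_revisions=50, branch_prob=0.2,
                          merge_prob=0.15, seed=42):
    """Return {rev: [parent revs]}; two parents marks a merge revision.

    Hypothetical generator for a synthetic, branch-heavy revision DAG,
    the kind of dataset merge tracking tests would need to exercise.
    """
    rng = random.Random(seed)
    parents = {1: []}          # revision 1 is the repository root
    tips = [1]                 # current tip of each open branch line
    for rev in range(2, num_revisions + 1):
        tip = rng.choice(tips)
        roll = rng.random()
        if roll < branch_prob:
            # start a new branch: the old tip stays open, rev opens a new line
            parents[rev] = [tip]
            tips.append(rev)
        elif roll < branch_prob + merge_prob and len(tips) > 1:
            # merge another open branch into the chosen one
            other = rng.choice([t for t in tips if t != tip])
            parents[rev] = [tip, other]
            tips.remove(other)
            tips[tips.index(tip)] = rev
        else:
            # ordinary commit on the chosen branch
            parents[rev] = [tip]
            tips[tips.index(tip)] = rev
    return parents

history = build_branchy_history()
merges = [r for r, p in history.items() if len(p) == 2]
print(f"{len(history)} revisions, {len(merges)} merge revisions")
```

A real test would then replay such a DAG against a live repository (mkdir,
copy, merge, commit) and check the recorded merge info at each step; the
point is simply that the dataset itself has to be constructed, not assumed.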
Because of this, I unfortunately can't help but suspect that the push
for "real-world testing" comes partly out of a lack of willingness to
invest the energy required to write fully automated functional and
regression tests for this feature. It's simply going to take a
non-negligible amount of time to define and compose those tests, and
that work isn't very glorious. It would be "easier" just to do ad hoc
testing, where "easier" simply means that the testing cost is amortized
across the life of the unreleased feature rather than paid upfront
during automated test composition. But to get the best quality out of
the thing, you need those automated regression tests *anyway*.
C. Michael Pilato <email@example.com>
CollabNet <> www.collab.net <> Distributed Development On Demand
Received on Thu Dec 14 21:38:45 2006