Re: Testing equality between svnrdump and svnadmin dump
From: Julian Foad <julianfoad_at_btopenworld.com>
Date: Mon, 26 Jan 2015 15:57:15 +0000
Hi Bert! Thanks for airing your concerns.
Bert Huijben wrote:
Certainly! That's one of the three TODO tasks I listed.
> I don't see why every test in the testsuite needs a double dump and
Every test potentially generates a different repo. Every RA layer potentially gives different behaviour with 'svnrdump' (issue #4551 includes an example). Each FS type potentially behaves differently.
The problem of excessive duplication of coverage in our testing regime is not a new concern here.
> (And then the patch appears to ignore the fact that we have tests that create
Ignore? No, just not implemented yet. The patch's log message says:
Ideas for improvement:
- Implement the same cross-checking for the C tests.
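To make the cross-checking idea concrete, here is a minimal sketch (not the actual test-suite hook; the normalization rule is an assumption for illustration) of comparing two dump streams while ignoring lines that can legitimately differ between dump methods, such as the repository UUID:

```python
# Hypothetical sketch: compare an 'svnadmin dump' stream with an
# 'svnrdump dump' stream of the same repository, ignoring volatile
# headers.  Which headers to ignore is an assumption made here for
# illustration, not taken from the patch.

def normalize_dump(text):
    # Drop lines that may legitimately differ between dump methods.
    return [line for line in text.splitlines()
            if not line.startswith("UUID: ")]

def dumps_equal(admin_dump, rdump_dump):
    return normalize_dump(admin_dump) == normalize_dump(rdump_dump)

# Tiny fabricated dump fragments, differing only in UUID:
a = "SVN-fs-dump-format-version: 2\nUUID: 1111\nRevision-number: 0\n"
b = "SVN-fs-dump-format-version: 2\nUUID: 2222\nRevision-number: 0\n"
print(dumps_equal(a, b))  # True
```

A real harness would of course run the two dump commands and feed their output through such a comparison after every test.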
> I can't see why the coverage is better this way, than running this just in a
Obviously the coverage is "better" in the sense of "more", so maybe you mean "better" in the sense of amount of coverage in proportion to the time taken?
> except by slowing developers down (and thereby reducing
This extra test coverage will be optional. Don't enable it if you don't want to.
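As a sketch of what "optional" could look like, the cross-check could be gated behind an environment variable, so a developer who finds it too slow simply leaves it unset (the variable name below is invented for illustration, not from the patch):

```python
import os

# Hypothetical opt-in gate for the extra dump cross-checking.  The
# environment variable name is an assumption for illustration only.
def cross_check_enabled(env=None):
    if env is None:
        env = os.environ
    return env.get("SVN_TEST_CROSS_CHECK_DUMPS", "no") == "yes"

print(cross_check_enabled({"SVN_TEST_CROSS_CHECK_DUMPS": "yes"}))  # True
print(cross_check_enabled({}))                                     # False
```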
Trying to unpick what you really mean, I feel you are unhappy that the current set of tests that you run frequently (before each commit, perhaps) is too slow for your liking, and you think this addition will make it slower without a proportional increase in coverage. You are right about the last part -- this extra testing doubtless doesn't add as much coverage, in proportion to its run time, as adding a regression test targeted to a specific bug.
So maybe the point you are trying to make here is that this kind of "blanket" testing is not as "efficient", in the sense of coverage over execution time, as specifically targeted tests. Is that right?
Of course in another sense it is very efficient, in that it can detect a large class of bugs with very little human effort.
> With the same reasoning: better coverage is better, we can just as well remove
What's your point? Of course we don't want to run all the possible test permutations a hundred times a day during our own development workflow. And of course we DO want to run all the possible tests sometimes, before shipping software.
You seem to be thinking that there is exactly one set of tests, and that everybody has to run the same set of tests every time for every purpose.
As developers, each of us chooses what subset of all possible tests to run, and how often, depending on our work patterns, our machine speed, the likelihood that the change we're working on will be detected by a given kind of test, and so on.
> We separated these tests over multiple configurations for a reason, and I think
What way do you mean?
By being so negative about it, you are sending out a signal that additional testing is unwelcome. It is very easy to spread a negative feeling. I feel that any time I commit or even propose to implement some extra testing, you'll likely argue against it. Why should developers bother trying to make the software well tested if that's the attitude?
I'm sure that's not what you mean, but that's how it comes across. Please can we resolve this argument so we're clear on how extra testing can both be welcomed and its run time kept under control?
- Julian
This is an archived mail posted to the Subversion Dev mailing list.