
Re: [svnbench] Revision: 1302254 compiled Mar 19 2012, 00:21:25

From: Neels J Hofmeyr <neels_at_elego.de>
Date: Thu, 22 Mar 2012 11:10:38 +0100

Hi Johan,

I'm thinking about it. I agree that graphs would be quite nice to have, and
now that someone explicitly asked for it, I might be able to find a few
cycles some time or other. Still, anyone else, feel free to chip in...
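
For concreteness, here's a very rough sketch of the kind of processing
script this could turn into (Python with matplotlib). Fair warning on
the assumptions: the timing-line format ("operation seconds") is a
guess, since the actual report body is elided in the quote below, and
the file naming is inferred from the 20120319-002438 pattern in the
report header.

#!/usr/bin/env python
"""Rough sketch: plot svnbench trunk timings over time.

Assumed (not taken from the real svnbench output, which is elided in
the quoted report): each report is a file named after its run, e.g.
20120319-002438.txt, and its trunk section contains lines like
"checkout   12.34" (operation name, then seconds).
"""
import os
import re
import sys
from datetime import datetime

import matplotlib.pyplot as plt

report_dir = sys.argv[1] if len(sys.argv) > 1 else "."

# Assumed timing-line format; adjust once the real format is pinned down.
timing_re = re.compile(r"^(\w+)\s+(\d+(?:\.\d+)?)\s*$")

runs = {}  # datetime.date -> {operation: seconds}
for name in sorted(os.listdir(report_dir)):
    stamp = re.match(r"(\d{8})-\d{6}", name)
    path = os.path.join(report_dir, name)
    if not stamp or not os.path.isfile(path):
        continue
    day = datetime.strptime(stamp.group(1), "%Y%m%d").date()
    timings = {}
    with open(path) as f:
        for line in f:
            m = timing_re.match(line.strip())
            if m:
                timings[m.group(1)] = float(m.group(2))
    if timings:
        runs[day] = timings

# One line per operation; a missing operation plots as NaN (a gap).
days = sorted(runs)
ops = sorted(set(op for t in runs.values() for op in t))
for op in ops:
    plt.plot(days, [runs[d].get(op, float("nan")) for d in days],
             marker="o", label=op)
plt.xlabel("run date")
plt.ylabel("seconds (trunk)")
plt.title("svnbench week-by-week evolution")
plt.legend()
plt.gcf().autofmt_xdate()
plt.savefig("svnbench-trend.png")

Point it at the directory holding the reports; nothing here is set in
stone, it's just to show how little glue is needed once the numbers
are parseable.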

(To all, if you've been wondering what's up, I'm quite busy with non-svn
issues these days, helping colleagues to meet deadlines and stuff.)

~Neels

On 03/19/2012 11:12 AM, Johan Corveleyn wrote:
> On Mon, Mar 19, 2012 at 2:52 AM, <neels_at_apache.org> wrote:
>> /home/neels/svnbench/20120319-002438
>> Started at Mon Mar 19 00:24:38 UTC 2012
>>
>> *Disclaimer:* this tests only file://-URL access on a GNU/Linux VM.
>> This is intended to measure changes in performance of the local working
>> copy layer, *only*. These results are *not* generally true for everyone.
>>
>> Averaged-total results across all runs:
>> ---------------------------------------
>>
>> COMPARE total_1.7.x to total_trunk
> [...]
>
> Neels (or anyone),
>
> I think it would be (even more) interesting to see the evolution of
> performance on trunk from week to week. The comparisons with 1.7.x
> are interesting, but it's hard to tell from these numbers whether
> something changed significantly relative to the week before. A more
> direct view of the week-by-week evolution might help spot
> improvements or regressions introduced by recent commits.
>
> Now, first of all, this requires that the numbers can be compared over
> time, which hinges on the stability of the perf-testsuite and also on
> the stability of the machine and its environment. The former seems
> relatively stable; I don't know about the latter. Is that (virtual)
> machine reasonably isolated from external influences? Can we be
> reasonably certain that no other processes run during the benchmark?
>
> If those preconditions are met: would someone be able to do the work
> of setting something up to process these numbers, creating nice tables
> and/or graphs from them that show the weekly evolution? Maybe even go
> back over the last N reports and process them to include some
> historical data?
>
> I don't have the cycles to implement this myself, so it's just a
> suggestion. If someone can do this, I think it would be a valuable
> tool for devs to keep an eye on performance.
>
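
On the isolation question above: one cheap safeguard would be to
snapshot the machine's state next to each report, so that a suspicious
data point can be cross-checked afterwards. A minimal sketch (the
environment.txt name, and the idea itself, are made up here, not
something svnbench already does):

import os
import subprocess
import sys

# Hypothetical helper: record load, kernel and process list alongside a
# report, so outliers can be checked against machine activity later.
report_dir = sys.argv[1]  # e.g. /home/neels/svnbench/20120319-002438
with open(os.path.join(report_dir, "environment.txt"), "w") as f:
    f.write(open("/proc/loadavg").read())  # load average at run time
    f.write(subprocess.check_output(["uname", "-a"]).decode())
    f.write(subprocess.check_output(["ps", "aux"]).decode())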

Received on 2012-03-22 11:11:29 CET
