On Tue, Apr 22, 2014 at 10:37 AM, Florian Ludwig
<vierzigundzwei_at_gmail.com> wrote:
>
>
>> One thing I recall about 1.7 is that virtually none of the changes did
>> anything that really sped up checkout, so that is probably the worst
>> operation to be testing with. If all you care about is checkout, then
>> very little was done in 1.7 or 1.8 to speed it up. Most of the big
>> performance wins in 1.7 came in other areas. For example, update got a
>> lot faster on Windows on working copies with lots of folders because the
>> time to "lock" the working copy dropped dramatically.
>>
>
> commit / update seems slower as well, but I don't have any numbers - I
> decided to test checkout since it is easier to test (just a single command).
>
>
>>
>> During the run-up to 1.7, I wrote some benchmarks that were being used to
>> compare the overall performance of a lot of operations across a lot of
>> different scenarios:
>>
>> https://ctf.open.collab.net/sf/projects/csvn/
>>
>> Something like this would be a better way to compare performance between
>> different versions, or to measure the impact of different tweaks. For
>> example, you could run it with and without anti-virus enabled to see what
>> impact that tool has on performance.
>>
>
> For the test I had:
> * AV deactivated
> * IPv6 deactivated
> * Windows file indexing service deactivated
> * Windows auto updates disabled
> * Windows Media Player service(s) deactivated
>
> I was looking for the fastest "baseline" I could achieve - before turning
> anything like AV back on or moving to the TortoiseSVN GUI. I will look
> into doing some commit/update benchmarks and into csvn.
>
FWIW, I was not implying that the way you tested or your results were
invalid. My main point was that, as I recall, the new WC architecture did
very little to speed up checkout; we even had to scrape around for things
to optimize just to make it as fast as 1.6. Most of the performance
benefits were in the other working copy commands.
The only other point is that using a common benchmark suite to compile your
results at least makes it easier for others to run the same tests and either
reproduce your numbers or report where theirs differ. The benchmark itself
is mainly useful for establishing a baseline, then tweaking one thing and
seeing how the results vary, such as how turning A/V on or off changes the
timings.
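
If you want a rough stand-in before setting up the full csvn suite, a tiny
timing harness along these lines covers the baseline-then-tweak workflow
(this is just a sketch; the repository URL and working copy path are
placeholders, not anything from the suite above):

#!/usr/bin/env python
# Rough harness for timing "svn checkout" under different environment
# tweaks (AV on/off, indexing on/off, etc.).
# REPO_URL and WC_PATH are placeholders -- substitute your own.
import shutil
import subprocess
import time

REPO_URL = "http://example.com/svn/repo/trunk"  # placeholder URL
WC_PATH = "wc-bench"                            # scratch working copy
RUNS = 5

def timed_checkout():
    # Remove any previous working copy so each run is a fresh checkout.
    shutil.rmtree(WC_PATH, ignore_errors=True)
    start = time.time()
    subprocess.check_call(["svn", "checkout", "-q", REPO_URL, WC_PATH])
    return time.time() - start

times = [timed_checkout() for _ in range(RUNS)]
print("checkout: min %.1fs avg %.1fs over %d runs"
      % (min(times), sum(times) / len(times), RUNS))

Run it once with your stripped-down baseline and again after each tweak;
comparing the minimums gives a reasonable read on what each change costs.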
--
Thanks
Mark Phippard
http://markphip.blogspot.com/