RE: Significant checkout performance degradation between 1.6.1 and 1.7b2
From: Ketting, Michael <michael.ketting_at_rubicon.eu>
Date: Thu, 11 Aug 2011 12:46:02 +0000
Hi Bert!
No, that wasn't running. And just to make sure, I redid my checkout tests, closed all Explorer windows, and checked with Task Manager. I got the same timing results as before.
Regards, Michael
A completely different question:
Do you have a recent TortoiseSVN (TSvnCache.exe) running while checking out for those tests?
I just ruined a test run by accidentally enabling TSvnCache on it (by accessing a parent directory with Windows Explorer), and for the rest of the checkout TSvnCache consistently used about three times more CPU than the checkout process itself by continuously calling 'svn status' on the same working copy.
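If you want to rule that out before timing a run, a small wrapper can check for the cache process first. This is only a sketch in Python (TSVNCache.exe is TortoiseSVN's cache process; tasklist and taskkill are the standard Windows tools):

    import subprocess

    # Check whether TortoiseSVN's status cache is running before a timed
    # checkout; it can skew results by running 'svn status' in the background.
    def tsvncache_running():
        out = subprocess.run(
            ["tasklist", "/FI", "IMAGENAME eq TSVNCache.exe"],
            capture_output=True, text=True).stdout
        return "TSVNCache.exe" in out

    if tsvncache_running():
        # Stop the cache so it cannot compete with the checkout for CPU.
        subprocess.run(["taskkill", "/IM", "TSVNCache.exe", "/F"])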
Bert
From: Ketting, Michael [mailto:michael.ketting_at_rubicon.eu]
Bert, you can access the repository here, in case you want to take a closer look: https://svn.re-motion.org/svn/Remotion/trunk/
> What about svn:needs-lock?
> svn:eol-style
> svn:keywords
Michael
Does it use svn:keywords in many places?
More svn:eol-style properties than in the other working copies?
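If you want hard numbers, counting the properties recursively is easy. A rough sketch (it just wraps 'svn propget -R', which prints one "path - value" line per match, so multi-line property values would inflate the count):

    import subprocess

    # Count how many paths in a working copy carry a given versioned property.
    def count_prop(prop, wc_path="."):
        out = subprocess.run(
            ["svn", "propget", "-R", prop, wc_path],
            capture_output=True, text=True).stdout
        return sum(1 for line in out.splitlines() if line.strip())

    for prop in ("svn:keywords", "svn:eol-style", "svn:needs-lock"):
        print(prop, count_prop(prop))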
Bert
From: Ketting, Michael [mailto:michael.ketting_at_rubicon.eu]
Just a bit more information:
Regards, Michael
I updated to the latest beta of TortoiseSVN and it looks to me like they have already changed the default HTTP client back to Neon. So unless you have specifically made serf the default client in your servers file, it is not likely that this is your problem.
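For reference, the HTTP client is selected via the http-library option in the runtime 'servers' file; forcing one explicitly looks like this (shown here for the [global] group):

    [global]
    http-library = neon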
I developed a set of open-source benchmarks to measure Subversion performance that you can get here:
https://ctf.open.collab.net/sf/sfmain/do/viewProject/projects.csvn
Perhaps you could set up the repository on your server and run the benchmarks with 1.6 and 1.7 to see what results you get? When I run the tests I see a considerable performance gain with 1.7. The "FolderTests" are probably the closest tests to your scenario. It will be easier to focus on any remaining performance issues if we can identify and measure them in an open and consistent manner, so we can see progress and the impact of different changes.
If these benchmarks do not show the same problems you see on your real code, then we need to add more benchmarks so that we can capture whatever the problem is.
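Even a crude timing wrapper around both client binaries would help narrow things down if the benchmarks come up clean. A minimal sketch (the binary paths and URL below are placeholders, not your actual setup):

    import subprocess, tempfile, time

    # Time a full checkout with two svn client versions against the same
    # repository on the same machine, so the numbers are comparable.
    CLIENTS = {"1.6": r"C:\svn-1.6\svn.exe", "1.7": r"C:\svn-1.7\svn.exe"}
    URL = "https://svn.example.org/svn/repo/trunk"

    for label, svn in CLIENTS.items():
        with tempfile.TemporaryDirectory() as wc:
            start = time.time()
            subprocess.run([svn, "checkout", "-q", URL, wc], check=True)
            print(label, "checkout took %.1f seconds" % (time.time() - start))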
--
Thanks

Mark Phippard
http://markphip.blogspot.com/

Received on 2011-08-11 14:46:37 CEST