I agree with the general advice already given in this thread: you
should first try to find out where the bottleneck is.
- It can be client-side: before even sending the commit, the client
locks the working copy, which can be very slow with big working
copies (lots of directories), especially on Windows clients.
- If it's server-side, it's probably your hook script (commit is one
of the faster svn operations from the server's point of view).
I guess your hooks are the primary suspect right now.
If you want to analyze/measure your hook script, here's a code snippet
from my post-commit hook that adds some timing logging, just to get
you started:
[[[
#!/bin/sh
...
############
# Helper functions
############
# Collect interesting info (among other things, an accurate timestamp)
logStart()
{
    if [ -n "$DEBUG" ]
    then
        START_DATE=`date +'%F %T'`
        # High-resolution timestamp (seconds with fraction) via perl
        START=`/usr/bin/perl -MTime::HiRes -e 'print Time::HiRes::gettimeofday.""'`
        AUTHOR=`$SVNLOOK author $REPOS -r $REV`
        NR_CHANGES=`$SVNLOOK changed $REPOS -r $REV | wc -l`
    fi
}
# Logs the end of the script with some timing information
logEnd()
{
    if [ -n "$DEBUG" ]
    then
        ELAPSED=`/usr/bin/perl -MTime::HiRes -e 'printf ("%.3f", Time::HiRes::gettimeofday - $ARGV[0])' $START`
        # printf is more portable than echo for tab-separated output
        printf '%s\t%s\t%s\t%s\t%s\t%s\n' "$START_DATE" "`date +'%F %T'`" \
            "$AUTHOR" "$NR_CHANGES" "$REV" "$ELAPSED" \
            >> $REPOS/hooks/logs/post-commit.log
    fi
}
##############
# The real script
##############
DEBUG=1
logStart
# Do stuff ...
...
logEnd
]]]
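Once the log has accumulated some entries, you can pull out the
slowest commits with sort (a sketch; the elapsed time is the last
tab-separated field in the log written by logEnd, and the log path
follows the snippet above):

[[[
# Show the 10 slowest post-commit runs, sorted by elapsed time
# (field 6 of the tab-separated log line)
sort -t "`printf '\t'`" -k6,6 -rn $REPOS/hooks/logs/post-commit.log | head -10
]]]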
(I have similar functions in my pre-commit hook, also logging the
error codes when commits are rejected, so I can keep an eye on which
errors occur most often.)
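The error-code logging in pre-commit boils down to capturing the exit
status of the checks before propagating it (a sketch; run_checks is a
hypothetical placeholder for your real validation, and in pre-commit
you inspect the transaction with "svnlook ... -t $TXN" instead of
"-r $REV"):

[[[
#!/bin/sh
REPOS="$1"
TXN="$2"
# Hypothetical placeholder for the real validation logic
run_checks()
{
    # e.g. reject commits without a log message, oversized files, ...
    return 0
}
run_checks
RC=$?
# Log the exit code before propagating it, so rejected commits show
# up in the log together with their error code
printf '%s\texit=%s\n' "`date +'%F %T'`" "$RC" >> $REPOS/hooks/logs/pre-commit.log
exit $RC
]]]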
Note:
- My server runs on Solaris without GNU date, so I resorted to perl
for getting a high-resolution timestamp. If you have GNU date, it
might be better to use that (I don't know the details).
- The overhead of this logging is quite low (200 - 300 milliseconds),
even though it invokes perl for the timestamps, plus "svnlook
author" and "svnlook changed". So I keep it enabled all the time.
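For the first note, the GNU date alternative could look something like
this (a sketch, untested on Solaris; %N is a GNU extension that prints
nanoseconds, and awk stands in for the floating-point arithmetic that
plain sh lacks):

[[[
#!/bin/sh
# %s.%N gives a sub-second epoch timestamp without spawning perl
START=`date +%s.%N`
# ... do stuff ...
sleep 1
END=`date +%s.%N`
# sh has no floating point, so let awk compute the difference
ELAPSED=`awk -v s="$START" -v e="$END" 'BEGIN { printf "%.3f", e - s }'`
echo "elapsed: $ELAPSED seconds"
]]]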
HTH,
--
Johan
Received on 2010-05-06 01:49:50 CEST