
Re: svn log using apache is SLOW

From: Jon Bendtsen <jbendtsen_at_laerdal.dk>
Date: Wed, 12 Mar 2008 11:48:25 +0100

On 10/03/2008, at 18.37, Phil Endecott wrote:

> Jon Bendtsen wrote:
>> Hi
>>
>> My apache 2.0 based subversion 1.4 is quite slow at doing svn log
>> commands.
>
> Mine too, though probably not as slow as yours (see below).

svnserve is fast, very fast.

>> Using strace i find stuff like this
>
> Is this strace from the client or the server? I guess the server,
> right?

it is from the server

> And what OS is it? I'm guessing Linux.

it is Linux.

>> close(0) = 0
>> close(1) = 0
>> close(14) = 0
>> dup2(13, 0) = 0
>> getrlimit(RLIMIT_NOFILE, {rlim_cur=1024*1024, rlim_max=1024*1024}) = 0
>> close(2) = 0
> [snip]
>> close(1048576) = -1 EBADF (Bad file descriptor)
>>
>> And that takes a lot of time.
>>
>> But why is it trying to close 1024 * 1024 file descriptors?
>
> It's trying to close all open file descriptors, and it has chosen to
> do this by attempting to close every possible file descriptor.
>
> Some OSes have specific functions to do this, e.g. fcntl(F_CLOSEM).
> Linux doesn't have a feature like that as far as I'm aware. Closing
> all possible file descriptors (i.e. up to RLIMIT_NOFILE) is a common
> way to do it. I think that the most effective method on Linux is to
> examine /proc/self/fd.
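
For reference, here is a minimal sketch of the two approaches described
above: the brute-force close() loop up to RLIMIT_NOFILE, which is the
pattern the strace output shows, and a /proc/self/fd scan that only
touches descriptors which are actually open. This is an illustration
only, not Subversion's or APR's actual code.

/* Illustration only, not Subversion's or APR's actual code. */
#include <dirent.h>
#include <stdlib.h>
#include <sys/resource.h>
#include <unistd.h>

/* Brute force: one close() per possible descriptor, up to the soft
 * RLIMIT_NOFILE.  With rlim_cur at 1024*1024 that is a million syscalls,
 * which is what the trace above shows.  (stdin/stdout/stderr are skipped
 * here; the real code clearly handles them separately, see the
 * dup2(13, 0) in the trace.) */
static void close_all_bruteforce(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) == 0) {
        for (int fd = 3; fd < (int) rl.rlim_cur; fd++)
            close(fd);
    }
}

/* Linux alternative: read /proc/self/fd and close only the descriptors
 * that actually exist, however high RLIMIT_NOFILE happens to be. */
static void close_all_procfd(void)
{
    DIR *d = opendir("/proc/self/fd");
    struct dirent *e;

    if (d == NULL)
        return;
    while ((e = readdir(d)) != NULL) {
        int fd = atoi(e->d_name);   /* "." and ".." become 0 and are skipped */
        if (fd > 2 && fd != dirfd(d))
            close(fd);
    }
    closedir(d);
}
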
>
> But the interesting thing is that on your system, the maximum number
> of file descriptors is 1 million. On mine it's much lower, so
> trying to close all files in this way is "fast enough that I
> don't complain", if not actually "fast". Maybe you have
> deliberately increased the maximum number of open files as some sort
> of Apache tuning, or something?

I have not done so deliberately. ulimit (now) says unlimited. I have
tried setting it to 1024 when starting apache:
        ulimit -S -n 1024
but it doesn't appear to be working. Maybe I should also set it
somewhere else? But I don't know where, and I've
only started looking. Hints would be appreciated.
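
If it helps, one way to see what limit the httpd process actually ends up
with is to run a tiny check like the one below from the same script and
environment that starts apache, since ulimit only affects the shell it
runs in and the processes that shell starts. This is a hypothetical
standalone helper, not part of Apache or Subversion.

/* Hypothetical helper: print the RLIMIT_NOFILE a process inherits. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    /* rlim_cur is the soft limit ("ulimit -S -n"), rlim_max the hard one. */
    printf("soft: %llu  hard: %llu\n",
           (unsigned long long) rl.rlim_cur,
           (unsigned long long) rl.rlim_max);
    return 0;
}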

> As I understand it - and I'm not an expert - the maximum number of
> open files is set by /proc/sys/fs/file-max (aka sysctl fs.file-max);
> on my systems I see values of 48 000 and 100 000. So you could try
> reducing that. I'm not sure where the default comes from, but it
> may be a heuristic based on the amount of RAM that you have.

mine is 408964

> Disclaimer: I know nothing about the internals of Subversion. I'm
> replying only because I've had to solve the problem of closing all
> files in my own code. If someone else answers differently, they're
> probably right.

Thanks for replying; I believe your answer more than the other one.

JonB

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe_at_subversion.tigris.org
For additional commands, e-mail: users-help_at_subversion.tigris.org
Received on 2008-03-12 11:48:46 CET
