Greg Stein <firstname.lastname@example.org> writes:
> On Tue, Jun 04, 2002 at 10:18:52AM -0500, Ben Collins-Sussman wrote:
> > "Gerald Richter" <email@example.com> writes:
> > > (Everything is on the same machine, so there is no network traffic
> > > time involved)
> > I *thought* that the server would be streaming the humongous
> > 40,000-entry response, so client-side parsing can start right
> > away... we have a brigade for that now, or something. Maybe gstein
> > can comment?
> Actually, PROPFIND is not (yet) streamy. I switched over REPORT responses a
> while back, and content has always been streamy.
> So if *one* directory had 40k files, then there could be problems :-)
PROPFIND certainly doesn't scale well at present.
When checking out a repository over ra_dav with 100 files in a single
directory the PROPFIND takes less than 1 second; with 200 files it's
about 2 seconds, with 300 files 5 seconds, with 400 files 14 seconds,
and with 600 files 45 seconds. Memory usage in the server doesn't
appear to scale well either. Is switching to a streamy interface
likely to fix these problems?
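For what it's worth, those timings grow faster than quadratically in the
number of files. A quick back-of-the-envelope sketch (treating the
"less than 1 second" 100-file case as 1.0s, which is an assumption):

```python
import math

# (files in one directory, PROPFIND seconds) as reported above.
# The 100-file time is "less than 1 second"; 1.0 is an assumed value.
timings = [(100, 1.0), (200, 2.0), (300, 5.0), (400, 14.0), (600, 45.0)]

# Estimate the local growth exponent k in t ~ n**k between
# consecutive measurements: k = log(t2/t1) / log(n2/n1).
exponents = [
    math.log(t2 / t1) / math.log(n2 / n1)
    for (n1, t1), (n2, t2) in zip(timings, timings[1:])
]

for (n1, _), (n2, _), k in zip(timings, timings[1:], exponents):
    print(f"{n1}-{n2} files: exponent ~ {k:.1f}")
```

The exponent climbs well past 2 at the larger sizes, which is
consistent with some per-entry work that itself scans all entries.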
Gerald, could memory usage be what is causing your check-out to fail?
I can easily drive my machine (512M RAM, 1G swap) first into swap and
then beyond its swap limit using a repository with a few hundred files
in one directory. When the Linux OOM killer kills the Apache httpd
process I see a message in the syslog but not in the Apache log.
I created the files in each repository with a single commit, using the
tools/dev/stress.pl script. The commit doesn't scale well either, but
that does not seem to be ra_dav related as the poor scaling occurs
over ra_local as well. The bit that doesn't scale is whatever happens
after 'Transmitting file data ........'
To create a repository with 200 files using stress.pl, I use:
% sw/subversion/svn/tools/dev/stress.pl -c -F200 -N1 -D1 -n0
% svn co -d wc http://localhost:8888/repostress
Received on Wed Jun 5 02:06:06 2002