On 5/4/06, Eric Gillespie <epg@pretzelnet.org> wrote:
> "Garrett Rooney" <rooneg@electricjellyfish.net> writes:
>
> > On 5/3/06, David Young <dyoung@pobox.com> wrote:
> [...]
> > > When I run commands such as 'svn co' and 'svn status -u' on the
> > > repository, they take several minutes to run (i.e., too long), and svn's
> > > memory usage climbs to 150MB, which seems unusually high. This is not
> [...]
> > Operations on large working copies (including checkout and status)
> > will use memory proportional to the working copy's size, due to the
> > caching of information about the entries in each directory.
>
> Hmm. Only the entries for the directory currently being scanned
> and all its parents need be in memory at once. There should be a
> subpool for each directory, I would think. If it's growing
> without bound, it sounds to me like *all* entries are staying in
> memory. In other words, given
>
> wc
> wc/bin
> wc/bin/ls
> wc/bin/sh
>
> wc/bin/ls entries should be in a pool that is freed before moving
> on to wc/bin/sh.
>
> > either that or you get unacceptable speed hits from parsing the
> > entries file multiple times.
>
> Once svn is finished printing status for wc/bin/ls, it will never
> need those entries again.
Sure, but doing a pool per directory dramatically multiplies your
memory usage for small and medium-sized trees, since each pool
allocates at least 8k of memory up front. It also assumes that we
never make multiple passes over the entries, and I'm not sure that's
the case.
-garrett
---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org
Received on Fri May 5 00:11:37 2006