
Re: FSFS optimization

From: Garrett Rooney <rooneg_at_electricjellyfish.net>
Date: 2007-08-22 00:36:26 CEST

On 8/13/07, Eric Gillespie <epg@pretzelnet.org> wrote:
> "Dan Christian" <dchristian@google.com> writes:
>
> > A completely separate idea is to work on caching revision files on
> > local disk (so the full text version doesn't have to be regenerated
> > repeatedly). The import case below does none of this, but a typical
> > commit would reconstruct the previous revision of every file. This
> > takes advantage of the fact that revisions never change after they are
> > written.
>
> Not to mention read operations like update and merge. Here are
> the numbers of times the server opens various files when I svn
> merge a revision changing 5439 files. First, the diabolically
> interesting ones:
>
> revprops/4/4661 5967
> revs/0/2 20445
> revs/4/4661 330459
>
> r4661 is the revision I'm merging (svn merge -c 4661).
>
> The complete (almost; some small time passes between starting the
> svn merge and attaching strace to the httpd) report:

<snip>

Perhaps a "smart" cache could recognize cases like this. Track how
many times each file has been opened and keep the top N open at all
times? For read-only files you even avoid the "multiple
threads/processes need to care" issues Dan was referring to in his
initial mail.
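A minimal sketch of that idea, for illustration only (the names
`HotFileCache` and the `opener` hook are hypothetical, not actual
FSFS code; a real server-side cache would also need locking and
handle-lifetime rules that this toy ignores):

```python
from collections import Counter

class HotFileCache:
    """Count opens per path and keep handles for the N hottest paths open.

    Hypothetical sketch of the "keep the top N open" proposal; it assumes
    the cached files are read-only, so a handle can be shared freely.
    """

    def __init__(self, capacity, opener=open):
        self.capacity = capacity   # keep at most N files open at once
        self.counts = Counter()    # how many times each path was requested
        self.handles = {}          # path -> cached open handle
        self.opener = opener       # injectable for testing

    def get(self, path):
        """Return an open handle for path, reusing a cached one if hot."""
        self.counts[path] += 1
        if path in self.handles:
            return self.handles[path]          # hit: already open
        handle = self.opener(path)
        # Cache the handle only if this path is now among the top N.
        hottest = {p for p, _ in self.counts.most_common(self.capacity)}
        if path in hottest:
            if len(self.handles) >= self.capacity:
                # Evict the least-requested cached path to make room.
                coldest = min(self.handles, key=self.counts.__getitem__)
                self.handles.pop(coldest).close()
            self.handles[path] = handle
        return handle
```

With a cache like this, the r4661 rev file from the strace report
above would be opened once and then served from the cached handle for
the remaining ~330k requests, while rarely-touched files pass straight
through without being cached.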

I seem to recall trying to build some sort of open file cache for fsfs
back in the day, and not getting too far, but it's certainly possible
that aided by some good profiling we could do better now.

-garrett

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org
Received on Wed Aug 22 00:34:00 2007

This is an archived mail posted to the Subversion Dev mailing list.