"Kirby C. Bohling" <kbohling@birddog.com> writes:
> If each entry were in
> a separate file, writing an entry would be proportional to the number of
> properties of that entry, rather than proportional to the number of
> properties of all the files in the directory. Doing this would remove a
> performance penalty for having directories with a large number of files
> or sub-directories.
>
> When I looked at it, this appeared to be the best optimization I could
> find for speeding up imports and checkouts, which were painfully slow.
> When I was playing with a few of my own personal code bases it sure
> seemed too slow to be usable on a daily basis.
It's possible it might work, but it's another tradeoff. Writing is a
win, but reading is a loss: it is faster to stat/read a single large
file than to stat/read several hundred small files. Some people using
working copies over NFS already believe we have too many files in the
.svn area.
There is another idea in notes/entries-caching: write lots of
"intermediate" files and then combine them into one file at some
strategic point, such as an explicit flush or access baton close.
Then you get the best of both worlds, faster write access without
any read penalty. It just needs to be implemented...
Just had one more idea: if the access baton is read-write we can
assume that the entries set with deleted items is going to be
required, even if the first request is for the entries set without
deleted items. So in that case we should go ahead and create both
entries sets, so that when the request for the second set comes along
the file does not need to be parsed again.
--
Philip Martin
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org
Received on Tue Nov 26 01:57:40 2002