Kyle Kline wrote:
> The notes on FSFS warn that on some OSes/filesystems, large numbers
> of files per directory don't scale well.
>
> Since I'm clueless -- how about NTFS (Win2K)? Anybody know? Talking
> in the range of 10K+ revisions.
NTFS should perform okay, but it is definitely not the best filesystem
out there as far as fragmentation goes.
I am on an NTFS machine with projects that regularly require copying
a few thousand files (with small changes in them), and even with
around 80% of the disk free, after a while the fragmentation
statistics show deep red over the affected paths.
I never really encountered such an issue on ReiserFS 3.x or on HFS+
(on HFS+ it is nearly impossible, because HFS+ defragments files
during the write process).
It seems NTFS lacks good reallocation when you make constant changes
to the same files over and over again. If my assumption is right,
you might run into a long-term issue using NTFS on the server side.
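If you want to probe the original question (whether your filesystem slows down as a directory fills up with revision files), a rough sketch like the one below can give a hint. This is a hypothetical benchmark, not anything from Subversion itself; the file count, file size, and the `time_creates` helper are all made up for illustration, and real FSFS revision files vary widely in size.

```python
import os
import tempfile
import time

def time_creates(n):
    """Create n small files in a single fresh directory and return
    the elapsed wall-clock time in seconds."""
    with tempfile.TemporaryDirectory() as d:
        start = time.perf_counter()
        for i in range(n):
            # One tiny file per "revision", all in the same directory,
            # mimicking FSFS's one-file-per-revision layout.
            with open(os.path.join(d, "rev-%d" % i), "w") as f:
                f.write("x" * 64)
        return time.perf_counter() - start

if __name__ == "__main__":
    # Compare growth: if 10k files takes much more than 10x the time
    # of 1k files, the directory itself is likely the bottleneck.
    for n in (1000, 10000):
        print("%6d files: %.3fs" % (n, time_creates(n)))
```

Run it on the actual disk and filesystem the repository will live on; results on a different volume tell you little. Note this measures directory-entry scaling, not the long-term fragmentation effect described above, which only shows up after many rewrite cycles.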
---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org
Received on Wed Apr 27 17:45:13 2005