
Re: http protocol very slow for moderate-sized data sets

From: Mark Parker <mark_at_msdhub.com>
Date: 2005-04-28 16:45:09 CEST

Chermside, Michael wrote:
>>If the bottleneck is the local file system, strive to use a modern
>>file system that deals well with lots of small files. Microsoft FSs
>>are said to be bad at this.
>
>
> Now that might well be an issue; this IS running on an NTFS file system.
> There IS some possibility of running a unix OS if we know it will
> improve performance significantly.

I have one comment and one question:

If your server is running the BDB backend, then the local file system
isn't where you're dealing with lots of little files in the first
place: BDB keeps the whole repository in a handful of large database
files.
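
If you're not sure which backend a given repository uses, Subversion
1.1 and later record it in the repository's db/fs-type file, which
contains "bdb" or "fsfs". A minimal check, assuming a local repository
path, could be:

import os
import sys

def repo_backend(repo_path):
    # Subversion 1.1+ stores the backend name ("bdb" or "fsfs") in
    # db/fs-type; older repositories lack the file and are BDB-only.
    fs_type = os.path.join(repo_path, "db", "fs-type")
    try:
        with open(fs_type) as f:
            return f.read().strip()
    except IOError:
        return "bdb (pre-1.1 layout: no db/fs-type file)"

print(repo_backend(sys.argv[1]))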

As for my question: does anyone actually have anything beyond rumours,
general bad feelings, and someone-told-me-once anecdotes showing that
NTFS really is that bad at handling lots of files? The only people with
real first-hand experience who have responded here have both said that
they have no problems with this sort of situation. (I'm one of them; I
have directories with more than 270,000 files in them, and I have seen
nothing to make me worried.)

Also, there are at least some differences between NTFS as implemented
in NT4 and earlier and NTFS as implemented in Windows 2000 and later.
Could the reported problems be historical?

Is there some sort of cross-platform filesystem stress-testing tool
that could be used to tell whether this is a legitimate concern or a
red herring?
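
For what it's worth, something as simple as the following would give a
first-order answer. This is a rough sketch in Python (portable between
Windows and Unix; the file count, file size, and scratch directory are
arbitrary choices, not taken from any established benchmark):

import os
import shutil
import tempfile
import time

N = 100000        # number of small files -- arbitrary
SIZE = 512        # bytes per file -- arbitrary
payload = b"x" * SIZE

# Create, stat, then delete N small files in a single directory and
# time each phase. Run it on NTFS and on a Unix filesystem to compare.
root = tempfile.mkdtemp(prefix="fs-stress-")
try:
    t0 = time.perf_counter()
    for i in range(N):
        with open(os.path.join(root, "f%06d" % i), "wb") as f:
            f.write(payload)
    t1 = time.perf_counter()
    for i in range(N):
        os.stat(os.path.join(root, "f%06d" % i))
    t2 = time.perf_counter()
    for i in range(N):
        os.remove(os.path.join(root, "f%06d" % i))
    t3 = time.perf_counter()
    print("create: %.1fs  stat: %.1fs  delete: %.1fs"
          % (t1 - t0, t2 - t1, t3 - t2))
finally:
    shutil.rmtree(root, ignore_errors=True)

OS caching will flatter both platforms, so treat the numbers as rough;
but if NTFS really falls over with lots of files in one directory, even
this crude test should show an order-of-magnitude gap.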

