
Re: http protocol very slow for moderate-sized data sets

From: Anders J. Munch <ajm_at_flonidan.dk>
Date: 2005-04-29 10:03:10 CEST

From: Mark Parker [mailto:mark@msdhub.com]
>
> As for my question, does anyone actually have anything but rumours,
> general bad feelings, and someone-told-me-once anecdotes verifying
> that NTFS is really so bad at handling lots of files? The only
> people with real first-hand experience that have responded here have
> both said that they have no problems with this sort of situation
> (I'm one of those, I have directories with more than 270,000 files
> in them, and I have seen nothing to make me worried).

As I understand it, NTFS directories are indexed as B-trees (or something
similar), so there is no reason to be worried.
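If you want more than anecdotes, a tree-indexed directory is easy to probe
empirically: per-lookup cost should grow far slower than the file count.
Here is a rough, hypothetical sketch in Python (the function name and the
file counts are mine, not from any benchmark in this thread; absolute
numbers will vary wildly by machine and filesystem):

```python
import os
import tempfile
import time

def time_lookups(n_files, n_probes=1000):
    """Create n_files empty files in a fresh directory and time
    existence checks on a sample of them -- a crude probe of how
    directory lookup cost scales with directory size."""
    with tempfile.TemporaryDirectory() as d:
        for i in range(n_files):
            open(os.path.join(d, f"f{i:07d}"), "w").close()
        start = time.perf_counter()
        for i in range(n_probes):
            # Probe names we know exist, cycling through the directory.
            assert os.path.exists(os.path.join(d, f"f{i % n_files:07d}"))
        return (time.perf_counter() - start) / n_probes

# With a B-tree (or hashed) directory index, 100x more files should cost
# nowhere near 100x more per lookup.
small = time_lookups(100)
large = time_lookups(10_000)
print(f"100 files: {small * 1e6:.1f} us/lookup; "
      f"10000 files: {large * 1e6:.1f} us/lookup")
```

On a linear-scan directory implementation the second figure would blow up;
on NTFS or any tree-indexed filesystem it should stay in the same ballpark.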

My vague experiences suggesting otherwise involve the W2K recycle bin
(moving files to a heavily loaded recycle bin can be painfully slow) and
Temporary Internet Files. But those are probably Windows Explorer's fault,
not the filesystem's.

The other issue is handling small files.

I remember reading about a Linux filesystem that stores small files
directly in the inode, perhaps even packing multiple files, directory
entries and all, into a single disk block. Alas, I can't remember which
filesystem it was, or whether it was production-ready. The point is that
there is no such thing as *the* Unix file system, and it may be possible
to find one that is particularly SVN-working-copy-friendly.
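The small-file overhead is also easy to see for yourself. On POSIX
systems, `os.stat` reports both the logical size and the allocated size
(`st_blocks`, in 512-byte units), so a one-byte file's on-disk footprint
can be compared with its contents. A minimal sketch, assuming a
conventional block-allocating filesystem (a tail-packing or
inline-in-inode filesystem, as described above, could report much less;
`st_blocks` is POSIX-only and absent on Windows):

```python
import os
import tempfile

# Write a 1-byte file and compare logical size with allocated size.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x")
    path = f.name

st = os.stat(path)
logical = st.st_size                            # 1 byte of content
# st_blocks counts 512-byte units on POSIX; default to 0 where absent.
allocated = getattr(st, "st_blocks", 0) * 512
print(f"logical: {logical} B, allocated: {allocated} B")
os.unlink(path)
```

On a typical filesystem with 4 KB blocks, the one byte of content occupies
a full block; multiplied across the thousands of small files in an SVN
working copy, that overhead adds up.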

- Anders

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org
Received on Fri Apr 29 10:05:58 2005

This is an archived mail posted to the Subversion Users mailing list.
