
Re: Suggestion: preventing inode crowding with FSFS

From: Greg Hudson <ghudson_at_MIT.EDU>
Date: 2005-10-07 18:24:53 CEST

On Fri, 2005-10-07 at 11:59 +0200, Peter N. Lundblad wrote:
> > > We might prefer something simpler; I'm not sure if the load-spreading
> > > goal of the Squid cache layout is of any great value to a Subversion
> > > repository. Also, although 2^64 is "plenty" of revisions, the current
> > > FSFS layout does not impose an upper limit on the number of revisions,
> > > and it would be nice to keep that property.
> > The FSFS layout may not impose such a limitation, but the data
> > type for a revision does. In fact, it limits it to 2^31 on
> > a 32-bit platform or 2^63 on a 64-bit one (it's a long int).
> Heh, it seems that you could commit a revision per millisecond for some
> hundred million years before hitting that 2^64 revisions limit. Greg, is
> this only a theoretical concern, or are you expecting bug reports from the
> future? :-)

It's just more elegant not to have the layout imposing size
restrictions. Yes, the data type used in the implementation also
imposes a restriction, but persistent data formats need to be more
conscious of this issue than implementations.
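The numbers being tossed around here are easy to sanity-check. A quick sketch (assuming the revision number is stored as a signed C long, so the usable maximum is 2^31 - 1 on a 32-bit platform and 2^63 - 1 on a 64-bit one):

```python
# Back-of-the-envelope check: committing one revision per millisecond,
# how many years until a signed revision-number type overflows?
MS_PER_YEAR = 1000 * 60 * 60 * 24 * 365

def years_to_exhaust(bits, revs_per_ms=1):
    """Years of continuous committing until a signed `bits`-wide limit is hit."""
    max_rev = 2 ** (bits - 1) - 1  # signed type: one bit goes to the sign
    return max_rev / (revs_per_ms * MS_PER_YEAR)

print(years_to_exhaust(32))  # under a month: a 32-bit counter dies quickly
print(years_to_exhaust(64))  # roughly 292 million years
```

Which agrees with the "some hundred million years" figure above for the 64-bit case, and shows the 32-bit case is the only one anyone could plausibly hit.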

> > I could do almost all the rest except for that bit. I just
> > don't "get" Python (but admittedly haven't tried too hard either).

> To me this seems more like a nice-to-have than a must-have.

Yes; I don't think a conversion script is a pre-condition to having this
in the code base.

> > It's a nice exponential degradation for each multiple of 4096 inodes.

Perhaps you mean quadratic? If access time doubles for every 4096
inodes, then SCO is more broken than one might expect. :)
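The distinction is not pedantry: quadratic growth in lookup cost is painful but survivable, while genuine doubling per 4096 entries would be astronomical. A toy comparison (the 4096-entry step is taken from the figure quoted above; the cost units are arbitrary):

```python
STEP = 4096

def quadratic_cost(n):
    # cost proportional to the square of the number of 4096-entry steps
    return (n // STEP) ** 2

def exponential_cost(n):
    # cost doubling with every additional 4096 entries
    return 2 ** (n // STEP)

for n in (4096, 40960, 409600):
    print(n, quadratic_cost(n), exponential_cost(n))
```

At 409600 entries (100 steps) the quadratic model costs 10,000 units; the exponential one costs 2^100, i.e. about 10^30 -- no filesystem is *that* broken.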

To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org
Received on Fri Oct 7 18:26:19 2005

This is an archived mail posted to the Subversion Dev mailing list.
