Mark Mielke wrote:
> Branko Čibej wrote:
>> Nah, that's just stupid ancient file systems; hardly better than the
>> ancient 14-character filename length limit. XFS and a few others on
>> Unix, NTFS, HFS+ and the like don't have that problem.
>> You can bet that anyone who has a really big repository isn't hosting it
>> on ext2 these days; or at least if they do, their sysadmins should
>> be shot.
>> Block size overhead and directory sizes are the real issue here.
> You sure you aren't making things up? :-) What's wrong with ext3? I
> would bet ext3 is the most common in use for Linux platforms, and ext3
> has inode limits. All file systems I can think of reserve
> administration space for tracking purposes, and once the space is
> exhausted, problems occur.
Using tune2fs on my ext3 / partition shows:
Inode count: 768544
Block count: 3072000
Reserved block count: 153600
Free blocks: 1102994
Free inodes: 529442
So, effectively, if I use one block per inode, I run out of inodes on this
file system at 768,544 files, using up only 768,544 blocks, which is about
2.9 GiB, even though my entire partition is 12 GB.
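The arithmetic above can be sketched in shell (assuming the common 4 KiB ext3 block size; the actual block size of a given partition is reported by `tune2fs -l`):

```shell
# Rough capacity reachable before inode exhaustion, assuming one
# 4 KiB block per file. Block size is an assumption; check tune2fs -l.
inode_count=768544
block_size=4096   # bytes

bytes=$((inode_count * block_size))
echo "files possible before inode exhaustion: $inode_count"
echo "space consumed at that point: $((bytes / 1048576)) MiB of a 12 GB partition"
```

`df -i` shows the current inode usage (IUsed/IFree) of each mounted file system, which is the quickest way to see whether you are anywhere near the limit.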
Perhaps you've never hit this problem before? I've hit it in the past.
Usually the default block:inode ratio is good and it's rare to hit, but I
have hit it before. My solution at the time was to move the data to a
different file system.
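For the record, the usual fix is applied at mkfs time; a sketch (the device name is a placeholder, and `-i` takes a bytes-per-inode ratio, so reformatting with a smaller ratio reserves more inodes, at the cost of more metadata space):

```shell
# Check current inode usage on mounted file systems:
df -i

# When (re)creating the file system, lower the bytes-per-inode ratio so
# one inode is reserved per 4 KiB instead of the larger default.
# DESTRUCTIVE: this wipes the device. /dev/sdXN is a placeholder.
# mkfs.ext3 -i 4096 /dev/sdXN

# At -i 4096, a 12 GiB partition gets one inode per 4 KiB block:
echo $(( (12 * 1024 * 1024 * 1024) / 4096 ))
```

This has to be chosen up front: ext2/ext3 cannot grow the inode table on an existing file system.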
If you are talking about other "more advanced" file systems such as
reiserfs, they often offer tail packing, which minimizes the space wasted
by the partially filled block at the end of each file.
FSFS packing of completed shards does a good job of dealing with this
problem. Perhaps it isn't the one people deal with every day (because
their disks are usually much larger than the data they are putting on
them, and at least some commits are large), but inode exhaustion is a
real problem that some people have experienced before.
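A sketch of why packing helps, using the default FSFS shard size of 1000 revisions (the figures are illustrative; revision-property files may remain unpacked depending on server version):

```shell
# Approximate file-count pressure before and after 'svnadmin pack',
# for a hypothetical repository with one million revisions.
revisions=1000000
shard_size=1000   # FSFS default shard size

rev_files_before=$revisions                   # one rev file per revision
pack_files_after=$((revisions / shard_size))  # one pack file per full shard

echo "rev files before packing: $rev_files_before"
echo "pack files after packing: $pack_files_after"
```

The actual command is `svnadmin pack /path/to/repo` (Subversion 1.6 and later); it concatenates each completed shard's rev files into a single pack file, which cuts inode consumption by roughly the shard size.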
Mark Mielke <mark_at_mielke.cc>
Received on 2008-11-29 01:38:01 CET