kfogel@collab.net wrote:
>middle metric was about), we can multiply 51,236 * 8 to get 409,888.
>Nice, about half a meg.  Of course, we need to add in the usual tree
>structure overhead, which is a whole hash-table per unique entry
>except for the leaf nodes.  I'm not really sure how to estimate that.
>It's more than log(51,236), but less than 51,236.  Plus we need a
>4-byte pointer per entry...
>
Regular hash table overhead is ~24 bytes per node. The per-node lookup 
table needn't be a hash table; a binary search table may be better, 
especially if the input data is mostly sorted. That could bring the 
overhead down to ~4 bytes per entry.
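
Something like the following is what I have in mind; it's just a
sketch, not actual Subversion code (all the names here are made up),
assuming each node keeps its children in an array sorted by name:

    #include <string.h>   /* strcmp */
    #include <stddef.h>   /* NULL */

    /* Hypothetical tree node: children live in a sorted array, so
     * the per-child overhead is a single pointer slot rather than
     * a full hash bucket. */
    typedef struct node_t node_t;
    struct node_t {
        const char *name;    /* path component for this node */
        node_t **children;   /* sorted by children[i]->name */
        int nchildren;
    };

    /* Binary search over the sorted child array: O(log n) per
     * lookup, and cheap to maintain if paths arrive mostly sorted. */
    static node_t *
    find_child(const node_t *parent, const char *name)
    {
        int lo = 0, hi = parent->nchildren - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            int cmp = strcmp(name, parent->children[mid]->name);
            if (cmp == 0)
                return parent->children[mid];
            else if (cmp < 0)
                hi = mid - 1;
            else
                lo = mid + 1;
        }
        return NULL;  /* no such child */
    }
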
>So, is it really looking so much better than 9 MB, in the long run?
>
>I don't mean to be reflexively skeptical, but at least this
>back-of-the-envelope estimate doesn't look promising.  Maybe I'm
>missing something, though?
>
Reflexive skepticism is what keeps us alive :)  There may also be 
interactions with the pool allocator; I do think that path length 
explains at least 25% of the memory growth.  I think it's time to run a 
profiler and see where the memory is going (but that spoils the fun).
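
For scale, here's a very rough back-of-the-envelope using Karl's
51,236 unique entries and 32-bit pointers (ignoring allocator
rounding and internal-node structures):

    51,236 * (8 + 24 + 4) bytes ~= 1.8 MB   (hash table per node)
    51,236 * (8 + 4) bytes      ~= 0.6 MB   (sorted child arrays)

Either way that's well under 9 MB, which is another reason to suspect
the allocator rather than the raw data.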