Greg Stein <email@example.com> writes:
> > tests were on imports, so all the data coming into the filesystem was
> > svndiff's equivalent of full-text. I'll betcha that those were 100K
> > windows with one op: NEW (and 102400 bytes of data). The buffering
> Yup, my thought, too. But later on, with "svn commit" the structure of the
> delta will be quite a bit different.
Different, but presumably a) smaller, and b) still limited by the
delta window size. Not a hard limit, but svndiff is flawed if
examining a 100k window produces much more than 100k of output. :-)
> > earns us nothing if it drops to a value smaller than the average size
> > of a chunk of data written to the filesystem. It needs to float at
> > a value that is like The Most Memory I Can Stand For the FS to Use.
> Actually, I believe it can be set to something a lot lower. Like a single
> delta window, then tune again from there (to make sure we don't end up with
> off-by-one errors that cause spillover or something).
To set it to a single delta window is to disable it altogether,
methinks. That is, the caller sending those windows is most likely
limited by the delta window size, so basically all writes will be
window-sized, and every one of them will fill the buffer and force a
flush.
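The accounting here can be sketched in a few lines. This models only the bookkeeping (capacity, fill level, flush count) with invented names, not the real FS buffering code: with a buffer exactly one window big, every window-sized write goes straight through, while a buffer of several windows actually coalesces writes.

```c
#include <assert.h>
#include <stddef.h>

#define WINDOW_SIZE 102400  /* the 100K windows discussed above */

/* Hypothetical write buffer -- accounting only, no data. */
struct toy_buffer {
  size_t capacity;
  size_t used;
  int flushes;   /* number of writes that reached the backend */
};

/* Append LEN bytes: flush first if the buffer can't hold them, and
 * pass oversized (>= capacity) writes straight through. */
void toy_append(struct toy_buffer *b, size_t len)
{
  if (b->used + len > b->capacity) {
    b->flushes++;          /* write out what we have buffered */
    b->used = 0;
  }
  if (len >= b->capacity)
    b->flushes++;          /* too big to buffer: goes straight out */
  else
    b->used += len;
}
```

With `capacity == WINDOW_SIZE`, ten window-sized appends cause ten backend writes: the buffer never holds anything, i.e. buffering is effectively disabled. With `capacity == 10 * WINDOW_SIZE`, the same ten appends cause zero backend writes until the buffer finally fills.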
> > As for the BTree thing, I don't see the advantage in this case. Sure,
> > it might help our reads of data (or it might hurt, if getting a range
> > of text means we have to potentially hit the database more than once
> > because that range is spread out over multiple records), but the
> > problems I had today are strictly related to the number of times
> > record data was written to the database. Perhaps you're forking a new
> > thread, though, and I'm missing it?
> The BTree thing is about writes, not reads. Let me explain a bit more.
> Currently, 'strings' is a BTree, but duplicate keys are not allowed. Thus,
> when we write a record, it replaces whatever was there. When duplicate keys
> are allowed, then a new write will *add* another record, rather than
> replace the existing one.
So...when we use svn_fs__string_append, BDB is not actually appending
data? I have trouble buying this. I can't help but think that BDB
fills up the existing page with whatever portion of the new
data it can, then starts allocating overflow pages to store the new
data. I don't see how this differs in any way from *us* telling
Berkeley to write the data into what basically boils down to more
pages of allocation.
That said, I am *not* a database guru. :-)
> The hope here is that we aren't *modifying* a record, but just adding one.
> Thus, the log should stay much smaller (given Daniel's comment about
> modifying a record needing to make a copy of the old one).
> [ altho on IRC, DannyB said that log writes are possibly optimized in some
> cases, when it detects that you're appending or something. not clear. ]
> The end result is that we don't have to buffer huge amounts of data in
> memory. We just spool it to the database as new records. This avoids the
> "modify a record" problem, keeping the log small.
> My comments re: reads was simply that it should not impact the reading much
> at all. I believe the big win occurs in the writes.
> Looking at the code, it is just a change to strings-table.c (thank god for
> layers and abstractions! :-).
Hm...I'm not sold. I'd have to see it in action, watch it drastically
improve on my use-case. But I'm willing to consider it.
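For what it's worth, Greg's duplicate-key idea can be put on a
back-of-the-envelope footing. This is a cost model only, with invented
function names, assuming the worst case Daniel described (modifying a
record logs a copy of the old value); BDB's actual log records may
well be smaller in some cases, as DannyB noted.

```c
#include <assert.h>
#include <stddef.h>

/* Rewrite strategy: each append replaces the whole record, and the
 * log keeps a copy of the old value, so write i logs roughly the
 * entire record accumulated so far.  Total logged bytes grow
 * quadratically in the number of chunks. */
size_t log_bytes_rewrite(size_t chunk, int nchunks)
{
  size_t total = 0, record = 0;
  for (int i = 0; i < nchunks; i++) {
    record += chunk;
    total += record;   /* old copy + new data ~= current record size */
  }
  return total;
}

/* Duplicate-key strategy: each append adds a fresh record, so the
 * log only ever sees the new chunk.  Total logged bytes grow
 * linearly. */
size_t log_bytes_duprecords(size_t chunk, int nchunks)
{
  return chunk * (size_t)nchunks;
}
```

For 100 chunks of 100K each, the rewrite model logs `100K * 5050` bytes (about 500MB) against `100K * 100` (about 10MB) for duplicate records, a 50x difference, which is the "keeping the log small" win under these assumptions.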
Received on Sat Oct 21 14:37:10 2006