
dup key btrees (was: svn commit: rev 1380 ...)

From: Greg Stein <gstein_at_lyra.org>
Date: 2002-02-27 02:11:46 CET

On Tue, Feb 26, 2002 at 06:34:36PM -0600, cmpilato@collab.net wrote:
> Greg Stein <gstein@lyra.org> writes:
>...
> > [ I *really* believe in measuring, rather than assuming; not saying that
> > happened here, but I didn't see measurements, so I don't know that it did ]
>
> I was sharing some numbers on IRC. You wouldn't have seen that
> traffic since you weren't connected at the time.

No problem... ease back on the defense dial :-) Just wondering.

That said, a 4 meg buffer is definitely going to work better, but at the
cost of larger memory usage.

> > svn_stringbuf_setempty() is a bit more optimal.
>
> Okey dokey.

Note: we're talking "nit" here. That's really more of a "when you want to
do something like that in the future" kind of thing.

>...
> > All that said, looking at switching the 'strings' table to a BTree with dup
> > keys (meaning, it has [consecutive] multiple records) could be a win. Random
> > access will be tougher, but that happens *very* rarely. Usually, we sequence
> > through it. But even if we *did* need random access, it is possible to fetch
> > just the *length* of a record, determine whether you need content from it,
> > or to skip to the next record.
> >
> > Understanding the question above (how often the delta code calls the
> > stream->write function) will tell us more about the buffering. My
> > guess is that it depends heavily upon the incoming delta. If we're talking
> > about a pure file upload, then we'll have a series of large windows and
> > writes. But if we're talking about a serious diff, then we could have a
> > whole bunch of little writes as the file is reconstructed.
> >
> > I'd say that a buffer is good, but we could probably reduce the size to 100k
> > or something, and use duplicate keys to improve the writing perf.
> >
> > Are you up for it Mike? :-)
>
> I have a feeling that the incoming data tends to be no larger than
> something near 100k, the size of the svndiff encoding windows. My

Right. Given that the stream is being written to via the delta processor,
I'd say that is a given :-)

> tests were on imports, so all the data coming into the filesystem was
> svndiff's equivalent of full-text. I'll betcha that those were 100K
> windows with one op: NEW (and 102400 bytes of data). The buffering

Yup, my thought, too. But later on, with "svn commit" the structure of the
delta will be quite a bit different.

> earns us nothing if it drops to a value smaller than the average size
> of a chunk of data written to the filesystem. It needs to float at
> a value that is like The Most Memory I Can Stand For the FS to Use.

Actually, I believe it can be set to something a lot lower, like a single
delta window, and then tuned from there (to make sure we don't end up with
off-by-one errors that cause spillover or something).
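
To make that concrete, here's a rough sketch (hypothetical names, not the
actual svn_fs code) of a write buffer pinned at one svndiff window that
flushes as soon as it fills, so there's nothing left to spill over:

  #include <string.h>

  #define SVNDIFF_WINDOW_SIZE 102400  /* one delta window, per the numbers above */

  typedef struct {
    char buf[SVNDIFF_WINDOW_SIZE];
    size_t used;
    /* hypothetical callback that writes one chunk to the database */
    int (*spool)(const char *data, size_t len, void *baton);
    void *baton;
  } write_buffer_t;

  static int
  buffer_flush(write_buffer_t *wb)
  {
    int err = 0;
    if (wb->used > 0)
      {
        err = wb->spool(wb->buf, wb->used, wb->baton);
        wb->used = 0;
      }
    return err;
  }

  static int
  buffer_write(write_buffer_t *wb, const char *data, size_t len)
  {
    while (len > 0)
      {
        size_t space = SVNDIFF_WINDOW_SIZE - wb->used;
        size_t take = (len < space) ? len : space;

        memcpy(wb->buf + wb->used, data, take);
        wb->used += take;
        data += take;
        len -= take;

        /* flush as soon as we hold exactly one window */
        if (wb->used == SVNDIFF_WINDOW_SIZE)
          {
            int err = buffer_flush(wb);
            if (err)
              return err;
          }
      }
    return 0;
  }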

> As for the BTree thing, I don't see the advantage in this case. Sure,
> it might help our reads of data (or it might hurt, if getting a range
> of text means we have to potentially hit the database more than once
> because that range is spread out over multiple records), but the
> problems I had today are strictly related to the number of times
> record data was written to the database. Perhaps you're forking a new
> thread, though, and I'm missing it?

The BTree thing is about writes, not reads. Let me explain a bit more.

Currently, 'strings' is a BTree, but duplicate keys are not allowed. Thus,
when we write a record, it replaces whatever was there. When duplicate keys
are allowed, then a new write will *add* another record, rather than
replacing.

The hope here is that we aren't *modifying* a record, but just adding one.
Thus, the log should stay much smaller (given Daniel's comment about
modifying a record needing to make a copy of the old one).

[ altho on IRC, DannyB said that log writes are possibly optimized in some
  cases, when it detects that you're appending or something. not clear. ]

The end result is that we don't have to buffer huge amounts of data in
memory. We just spool it to the database as new records. This avoids the
"modify a record" problem, keeping the log small.

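For illustration, a minimal sketch of what that could look like with
Berkeley DB's duplicate-key support (the function names are made up, this
assumes a BDB 4.1-style open() signature, and error handling is trimmed):

  #include <string.h>
  #include <db.h>

  static int
  open_strings_table_with_dups(DB_ENV *env, DB **db_p)
  {
    DB *db;
    int err;

    if ((err = db_create(&db, env, 0)) != 0)
      return err;

    /* DB_DUP is the key change: a key may have multiple (consecutive)
       records instead of exactly one. */
    if ((err = db->set_flags(db, DB_DUP)) != 0)
      return err;

    if ((err = db->open(db, NULL /* txn */, "strings", NULL,
                        DB_BTREE, DB_CREATE, 0666)) != 0)
      return err;

    *db_p = db;
    return 0;
  }

  /* With DB_DUP set, each put() under the same key *adds* a record
     rather than replacing the existing contents, so the log never has
     to copy a growing old record. */
  static int
  append_string_chunk(DB *db, DB_TXN *txn, const char *key_str,
                      const char *data, size_t len)
  {
    DBT key, value;

    memset(&key, 0, sizeof(key));
    memset(&value, 0, sizeof(value));
    key.data = (void *)key_str;
    key.size = strlen(key_str);
    value.data = (void *)data;
    value.size = len;

    return db->put(db, txn, &key, &value, 0);
  }
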
My comment re: reads was simply that duplicate keys should not impact the
reading much at all. I believe the big win occurs in the writes.
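
Continuing the sketch above: reading a string back just means positioning
a cursor on the key and walking the consecutive duplicates (again, the
helper is hypothetical; DB_SET / DB_NEXT_DUP is the standard cursor API):

  static int
  read_string_chunks(DB *db, DB_TXN *txn, const char *key_str,
                     int (*consume)(const char *data, size_t len, void *baton),
                     void *baton)
  {
    DBC *cursor;
    DBT key, value;
    int err;

    memset(&key, 0, sizeof(key));
    memset(&value, 0, sizeof(value));
    key.data = (void *)key_str;
    key.size = strlen(key_str);

    if ((err = db->cursor(db, txn, &cursor, 0)) != 0)
      return err;

    /* DB_SET finds the first record for the key; DB_NEXT_DUP then
       returns each following record with the same key, in order. */
    err = cursor->c_get(cursor, &key, &value, DB_SET);
    while (err == 0)
      {
        if ((err = consume(value.data, value.size, baton)) != 0)
          break;
        err = cursor->c_get(cursor, &key, &value, DB_NEXT_DUP);
      }

    cursor->c_close(cursor);
    return (err == DB_NOTFOUND) ? 0 : err;
  }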

Looking at the code, this is just a change to strings-table.c (thank god
for layers and abstractions! :-).

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org