Greg Stein <email@example.com> writes:
> I haven't done a full review of this yet, but a little yellow flag is
> creeping out. Do we really want to hold a DB transaction open for the
> duration of the entire merge?
>
> For example, let's say that we want to do an "optimistic" merge (we are
> optimistic that we merge once, and somebody won't create a new rev and mean
> we need to merge again). So we go thru and do the full merge. After the
> merge is done, then we open a DB transaction and check whether we're still
> okay; presuming so, then we stabilize the tree and spin off a new revision.
> If we aren't okay (another revision popped into existence), then we loop and
> try again.
>
> The intent is to avoid locks as long as possible. We do the bulk of the work
> on the assumption it will remain valid.
Yeah, I understand what you mean. But I think the way it works now
already gives us that behavior:

The revisions table isn't locked while the merge is happening, because
merely asking what youngest_rev is doesn't lock the table. It just
means that if Jane obtains youngest_rev, and tries to commit the next
rev in this DB txn, Jane's commit will fail if the next rev has
already been committed by Bill. The current code retries if that
happens.
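To make the retry scheme concrete, here is a toy sketch (not actual
Subversion code; `FakeRepo`, `get_youngest`, and `try_commit` are
hypothetical stand-ins) of the optimistic loop described above: read
youngest cheaply, do the work, and only fail-and-retry if someone else
committed first.

```python
class FakeRepo:
    """Toy repository: a commit succeeds only if it is based on the
    current youngest revision, mimicking the DB txn conflict check."""

    def __init__(self):
        self.youngest = 5
        self._sneak = True  # simulate Bill committing once, concurrently

    def get_youngest(self):
        # Cheap read; in the scheme described above this holds no table lock.
        return self.youngest

    def try_commit(self, base):
        if self._sneak:
            self.youngest += 1  # Bill's commit lands first
            self._sneak = False
        if base != self.youngest:
            return False        # our base is stale: commit fails
        self.youngest += 1
        return True


def commit_with_retry(repo, max_retries=10):
    """Optimistic commit: merge against youngest, retry on conflict."""
    for _ in range(max_retries):
        base = repo.get_youngest()
        # ... the bulk of the merge work against `base` would go here ...
        if repo.try_commit(base):
            return repo.youngest
        # another revision popped into existence; loop and merge again
    raise RuntimeError("too much contention")
```

Here Jane's first attempt fails because Bill's revision 6 sneaks in,
and the second attempt commits revision 7.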
However, now you and Yoshiki have got me worried (he said something
similar to what you're saying, in an earlier mail today). Have I got
it all wrong? I'll check into the Berkeley DB documentation, too. I
had thought that reads don't lock anything.
Oh, heh. Well, I just looked, and it seems you're right.
I'll fix this, thanks!
Either way, the current code still has a disadvantage: it always
merges the changes between txn_base and youngest, instead of between
the "last merge point" and the current youngest. The latter is better
no matter what.
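A minimal sketch of the difference, assuming revisions are plain
integers (the helper `revs_to_merge` is hypothetical, not from the
Subversion source): on each retry pass, only the revisions that
arrived since the previous merge pass need merging, not everything
since txn_base.

```python
def revs_to_merge(txn_base, last_merged, youngest):
    """Revisions still needing a merge: (max(txn_base, last_merged), youngest]."""
    start = max(txn_base, last_merged)
    return list(range(start + 1, youngest + 1))
```

For example, with txn_base 10 and youngest 13, the first pass merges
11..13; if revisions 14 and 15 land before the commit succeeds, the
retry merges only 14..15 instead of re-merging 11..15.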
Also need some more test cases...
Received on Sat Oct 21 14:36:25 2006