
how does our commit-finalization race logic *really* work?

From: Ben Collins-Sussman <sussman_at_red-bean.com>
Date: Wed, 11 Feb 2009 09:51:31 -0600

A googlecode user gave me this bash one-liner today:

$ for a in 0 1 2 3 ; do for b in 0 1 2 3 4 5 6 7 8 9 ; do \
    svn mkdir -m "" http://host/repos/$a$b & done & done

It simply spawns 40 simultaneous processes, each attempting to run
'svn mkdir' on a unique URL.

When I run this against either a googlecode repository or even a
stock mod_dav_svn + fsfs repository, somewhere between 5 and 15 of
the jobs commit successfully. All the others return an error:

subversion/libsvn_ra_neon/commit.c:492: (apr_err=160024)
svn: File or directory '.' is out of date; try updating
subversion/libsvn_ra_neon/util.c:723: (apr_err=160024)
svn: version resource newer than txn (restart the commit)

I'm sort of bewildered here, because this is not at all what I would
expect. If you look at tree.c:svn_fs_base__commit_txn(), we have
kfogel's famous "while (1729)" loop, which attempts to infinitely
re-merge the pending transaction against ever-newer HEAD revisions.
Because these 40 simultaneous commits are *all mergeable* with each
other, I'd expect every one of them to eventually succeed. Maybe
they'd all block on each other awkwardly at first, but the traffic
jam should slowly clear up and they should all complete in some
random order.

So what's causing this? Why am I not seeing the ideal behavior?

Received on 2009-02-11 16:56:48 CET

This is an archived mail posted to the Subversion Dev mailing list.
