Ben Collins-Sussman wrote:
> A googlecode user gave me this bash one-liner today:
>
> $ for a in 0 1 2 3 ; do for b in 0 1 2 3 4 5 6 7 8 9 ; do \
>     svn mkdir -m "" http://host/repos/$a$b & done & done
>
> It simply spawns 40 simultaneous processes, each attempting an
> 'svn mkdir' on a unique URL.
>
> When I run this against either a googlecode repository, or even a
> stock mod_dav_svn + fsfs repository, somewhere between 5 and 15 jobs
> succeed in their commits. All the others return error:
>
> subversion/libsvn_ra_neon/commit.c:492: (apr_err=160024)
> svn: File or directory '.' is out of date; try updating
> subversion/libsvn_ra_neon/util.c:723: (apr_err=160024)
> svn: version resource newer than txn (restart the commit)
>
> I'm sort of bewildered here, because this is not at all what I would
> expect. If you look at tree.c:svn_fs_base__commit_txn(), we have
> kfogel's famous "while (1729)" loop which attempts to infinitely
> re-merge the pending transaction against ever-newer HEAD revisions.
> Because these 40 simultaneous commits are *all mergeable* with each
> other, I'd expect every one to eventually succeed. Maybe they'd all
> block on each other awkwardly at first, but the traffic jam should
> slowly clear up and they should all complete in some random order.
>
> So what's causing this behavior? Why am I not seeing the ideal behavior?
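The behavior Ben expects from the "while (1729)" loop can be modeled as optimistic concurrency with retry: build a txn against some base revision, and if HEAD has moved, re-merge against the newer HEAD and try again. A minimal simulation of that idea (illustrative only, not Subversion's actual code; the `Repo` class and names are invented for this sketch):

```python
import threading

class Repo:
    """Toy repository: HEAD revision plus a set of committed paths."""
    def __init__(self):
        self.head = 0
        self.paths = set()
        self.lock = threading.Lock()

    def try_commit(self, base, path):
        """Atomically advance HEAD iff the txn is based on the current HEAD."""
        with self.lock:
            if base != self.head:
                return False      # analogous to "version resource newer than txn"
            self.paths.add(path)
            self.head += 1
            return True

def mkdir_with_retry(repo, path):
    base = repo.head
    while True:                   # the infinite re-merge loop
        if repo.try_commit(base, path):
            return
        # Out of date: "merge" against the newer HEAD (trivially mergeable
        # here, since every job touches a distinct path) and retry.
        base = repo.head

repo = Repo()
threads = [threading.Thread(target=mkdir_with_retry, args=(repo, f"{a}{b}"))
           for a in range(4) for b in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(repo.head, len(repo.paths))  # prints "40 40": every commit succeeds
```

Under this model all 40 jobs do eventually complete, which is why the real servers returning hard errors instead of retrying to completion is surprising.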
How, if at all, does the behavior differ if you do this over svnserve or
ra-local?
--
C. Michael Pilato <cmpilato_at_collab.net>
CollabNet <> www.collab.net <> Distributed Development On Demand
------------------------------------------------------
http://subversion.tigris.org/ds/viewMessage.do?dsForumId=462&dsMessageId=1139523
Received on 2009-02-11 17:10:04 CET