J Robert Ray <firstname.lastname@example.org> writes:
> I am developing a script (using the svn perl bindings) to do a mass
> rearrange of a large software project. In very simple terms, my
> script does many, many SVN::Client::ls() (non-recursive) calls, with
> the occasional SVN::Client::mkdir() and SVN::Client::move().
> These are the only calls I am making, and they are all against
> repository URLs; I am not using a working copy.
> While my script is off sniffing around the repository via ls(), I am
> running 'watch db_stat -c' in another terminal. It reports:
> 150 Number of current lockers.
> 1681 Maximum number of lockers at any one time.
> The 'Number of current lockers' count grows rapidly and eventually
> meets up with the max, and the script dies with a 400 error. I can
> always go to the apache server log and dig up a message like this one:
> (20014)Error string not specified yet: Berkeley DB error while opening
> 'representations' table for filesystem
> /mnt/d3/subversion/move_test/db:\nCannot allocate memory
This is very exciting! We've encountered the same error message from
Subversion's own repository a couple of times, and have heard
occasional reports of it happening elsewhere. But this is the first
thing resembling a reproduction recipe we've gotten.
Can you share your script with us? (And the repository would be good
too, although probably just looking at the script would be enough to
start tracking down the bug in Subversion.)
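For reference, here's a rough sketch of the kind of script you seem to
be describing -- serial, non-recursive ls() calls against repository
URLs, no working copy. The URL and the traversal logic are hypothetical
stand-ins, not your actual code:

```perl
#!/usr/bin/perl
# Hypothetical sketch of a repository-walking script like the one
# described above. Each call goes straight to a repository URL.
use strict;
use warnings;
use SVN::Client;

my $ctx  = SVN::Client->new();
my $root = 'http://svn.example.com/repos/project';  # placeholder URL

sub walk {
    my ($url) = @_;
    # Non-recursive listing at HEAD; returns a hashref mapping entry
    # names to dirent objects.
    my $entries = $ctx->ls($url, 'HEAD', 0);
    for my $name (keys %$entries) {
        walk("$url/$name")
            if $entries->{$name}->kind == $SVN::Node::dir;
    }
}

walk($root);
```

If your script is close to that shape, even a cut-down version of it
that still triggers the locker growth would be a great test case.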
> Vital stats:
> server: svn 1.0.6, apache 2.0.59
> client: svn 1.1.0, perl 5.8.0, swig 1.3.21
> berkeley db: 4.2.52
If you upgrade the server to 1.1.0, does it still have the problem?
(If you have time to try it with svnserve and svn://, that could be
useful information too.)
> Any suggestions on how I can avoid running out of lockers?
Based on what you've described so far, this is just a bug in the
server -- the lock count shouldn't be climbing so high in the first
place. So I don't have a workaround to suggest, but I'm hoping we can
fix the bug itself, assuming we can reproduce it.
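In the meantime, if it helps while reproducing, the same db_stat check
you're watching interactively can be captured to a log too (the
repository path below is a placeholder for your actual db directory):

```shell
# Poll Berkeley DB lock-region statistics for a BDB-backed repository.
# -c selects lock statistics; -h points at the repository's db dir.
while true; do
    db_stat -c -h /path/to/repos/db | grep -i 'lockers'
    sleep 1
done
```

A timestamped log of those counts alongside your script's output could
help correlate which operations leak lockers.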
> I have tried creating a new SVN::Client object to use after n
> operations to see if that would flush out old transactions and free up
> locks. This doesn't really seem to help. Sometimes the script will
> run for a few minutes before running out of locks, sometimes it dies
> within a few seconds.
> After reading the berkeley db docs, I understand that the number of
> lockers in use is akin to the number of concurrent transactions. As I
> am only calling ls(), mkdir(), and move() serially, and I am operating
> directly on the repository, I should in theory only ever have one
> active transaction at a time.
Right. Except, it seems, Subversion is leaving Berkeley DB
transactions open when it shouldn't.
> Maybe unrelated, my script segfaults periodically in libneon
> _init(). Here's a sample stack trace:
Oooh. No idea what's going on there, sorry. I suspect the neon init
segfaults are unrelated to the server-side lock count problem, unless
you notice some correlation between them (?).
Received on Sun Oct 10 03:11:52 2004