
Re: svn client fails with Berkeley DB error Cannot allocate memory - repository wedges

From: Martin J. Evans <martin.evans_at_easysoft.com>
Date: 2004-06-01 18:06:38 CEST

I basically agree that an ls operation should not need that many locks. We only
got to a discussion of locks because Ben Collins-Sussman thought the error

Berkeley DB error while opening 'transactions' table for filesystem
/var/subversion/distribution/linux-x86/db:
Cannot allocate memory

might be related to running out of BDB object locks (and indeed it may still
be).

I tried doubling the locks in DB_CONFIG, but it made no difference. However,
since that repository will no longer recover (svnadmin fails, saying I need to
run recovery), that test is probably meaningless. I have since tried again with
a completely new, working repository, doubling the lock limits in DB_CONFIG to
4000. That did not help either - the perl script fails in exactly the same
place.
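
For reference, these are the Berkeley DB lock directives I changed (a sketch -
the values shown are the doubled limits mentioned above, and as I understand it
the environment has to be re-created, e.g. by running svnadmin recover, before
new limits take effect):

   # DB_CONFIG in the repository's db/ subdirectory (Berkeley DB 4.2)
   set_lk_max_locks   4000
   set_lk_max_lockers 4000
   set_lk_max_objects 4000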

You don't say whether you were doing an ls() with recurse enabled - I am not. I
am calling ls() without recurse and recursing myself in my perl (i.e. issuing a
further ls() for each directory), as sketched below.
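
The attached x.pl has the full details; the following is a minimal sketch of
the same pattern, assuming the standard SVN::Client Perl bindings (names and
structure simplified from the actual script):

   #!/usr/bin/perl -w
   use strict;
   use SVN::Core;
   use SVN::Client;

   my $url = shift or die "usage: $0 URL\n";
   my $client = SVN::Client->new();

   # ls() one directory at a time (recurse flag off) and recurse
   # ourselves on each directory entry that comes back.
   sub walk {
       my ($path) = @_;
       my $entries = $client->ls($path, 'HEAD', 0);   # 0 = no recurse
       for my $name (sort keys %$entries) {
           print "$path/$name\n";
           walk("$path/$name")
               if $entries->{$name}->kind == $SVN::Node::dir;
       }
   }

   walk($url);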

Anyway, I can now replicate the problem at will by doing the following
(admittedly this is with Subversion 1.0.2, since I cannot upgrade the server
today; BDB is 4.2.52):

1. get hold of subversion sources
   cd /tmp
   svn co http://svn.collab.net/repos/svn/trunk subversion
   cd subversion
   find . -name .svn -exec rm -fr {} \;

2. create test repository
   cd to the directory where your repositories live (/var/subversion/distribution for me)
   svnadmin create test-martin

3. import subversion tree into test repository.
   cd /tmp/subversion
   svn import . file:///var/subversion/distribution/test-martin -m "test"

4. prove the db is OK

   cd /var/subversion/distribution
   svnadmin recover test-martin

sh-2.05$ svnadmin recover test-martin/
Please wait; recovering the repository may take some time...

Recovery completed.
The latest repos revision is 1.

5. run the attached perl program with:

   /tmp/x.pl -u file:///var/subversion/distribution/test-martin

Couldn't open a repository: Unable to open an ra_local session to URL: Unable
to open repository
'file:///var/subversion/distribution/test-martin/doc/book/README': Berkeley DB
error while opening 'uuids' table for filesystem
/var/subversion/distribution/test-martin/db:
Cannot allocate memory at /tmp/x.pl line 17

6. demonstrate the DB is now broken and unrecoverable

   cd /var/subversion/distribution
   svnadmin recover test-martin

sh-2.05$ svnadmin recover test-martin
Please wait; recovering the repository may take some time...
svn: DB_RUNRECOVERY: Fatal error, run database recovery

This is repeatable at will for me.

I've followed the excellent hint from C. Michael Pilato and used db_stat before
and after running the perl script (the exact command sequence is shown after
the figures). The results were:

before:
2000 Maximum number of locks possible.
2000 Maximum number of lockers possible.
2000 Maximum number of lock objects possible.
9 Maximum number of locks at any one time.
17 Maximum number of lockers at any one time.
9 Maximum number of lock objects at any one time.

after:
2000 Maximum number of locks possible.
2000 Maximum number of lockers possible.
2000 Maximum number of lock objects possible.
883 Maximum number of locks at any one time.
1760 Maximum number of lockers at any one time.
19 Maximum number of lock objects at any one time.
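
For completeness, the exact sequence I ran to get those before/after figures
(following Mike's recipe, quoted below) was:

   svnadmin recover test-martin
   db_stat -ch test-martin/db | grep Maximum    # the "before" figures
   /tmp/x.pl -u file:///var/subversion/distribution/test-martin
   db_stat -ch test-martin/db | grep Maximum    # the "after" figures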

I have never managed to recover a repository broken in this way, but I am open
to suggestions. All the arguments for changing the attached perl script so that
it succeeds are beside the point for me - I can make the script work fine via
http by reducing MaxClients in Apache (see the snippet below) or by introducing
sleeps into the script (speed was never important for it). The error appears to
happen at the server end, and it leaves my repository broken.
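
For what it's worth, the Apache side of that workaround amounts to something
like this in httpd.conf (the figure is illustrative - just low enough to
throttle the number of concurrent requests):

   # httpd.conf (Apache 2.0, prefork MPM)
   <IfModule prefork.c>
       MaxClients 5
   </IfModule>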

What I particularly don't like about this is that I can make it happen using an
http URL, which implies to me that I could break other people's public
repositories and they could break mine. Needless to say, I've not tried this.

Martin

--
Martin J. Evans
Easysoft Ltd, UK
Development
On 01-Jun-2004 C. Michael Pilato wrote:
> "Martin J. Evans" <martin.evans@easysoft.com> writes:
>> On 01-Jun-2004 Alblas Huibert wrote:
>> > Locks cannot be released until the script ends,
>> > and the script only ends when a subprocess is done.
>> > I can imagine that 4000 is a limit way too small on
>> > a medium-sized repository.
>> 
>> Fair enough, but at most I would expect a read lock per ls, and I
>> don't see why that cannot be released when the ls completes. Since
>> the depth of recursion I reach is only 9, I never have more than
>> 9 ls's going at once.
> 
> Locks aren't taken out per-Subversion-operation.  Their granularity is
> more tightly bound to Berkeley DB implementation details - something
> like per-row or per-page.  That said, these locks don't
> outlive a Berkeley DB transaction, and there is usually a many-to-1
> relationship between Berkeley DB transactions and Subversion
> operations.
> 
> I said all that to say that an 'ls' shouldn't eat *that* many locks.
> 
> You can see how many locks a particular operation consumes (roughly)
> by doing this:
> 
>    svnadmin recover /path/to/repos
>    (do the operation)
>    db_stat -ch /path/to/repos/db | grep Maximum
> 
> For example, I'm seeing about 31 locks used for a recursive 'ls' of
> the entire Subversion repository's /trunk directory.



  • application/octet-stream attachment: x.pl
Received on Tue Jun 1 18:08:17 2004
