APR still has a 2GB limit, so what you're trying to do shouldn't work.
I'm not sure why SVN isn't failing gracefully, though.
Using BerkeleyDB as the backend is one way to get around it for the time
being. One of my BDB repositories has some commits larger than 2GB and
that's why I can't migrate them to FSFS yet.
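For scale, the ceiling works out like this (a back-of-the-envelope sketch of where the 2GB figure comes from, not a dump of APR internals):

```shell
# A signed 32-bit byte offset tops out at 2^31 - 1 bytes, which is
# the 2GB limit; the ~4GB commit below is roughly double that.
# (Requires a shell with 64-bit arithmetic, e.g. modern bash.)
echo $(( (1 << 31) - 1 ))            # 2147483647 bytes, i.e. 2GB - 1
echo $(( 4 * 1024 * 1024 * 1024 ))   # 4294967296 bytes, a 4GB commit
```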
From: Andrew Webber [mailto:email@example.com]
Sent: Wednesday, March 22, 2006 10:33 AM
Subject: Apache cpu usage on large commit
Forgive me if this has already gone around on the mailing list, but I
didn't see anything about it.
I'm trying to perform a very large commit (~4 GB) into my repository and
it's not working. The commit begins and starts transferring the data.
After a while, no more dots appear showing me that the transfer is
proceeding. The svn client hangs there indefinitely without spitting
out an error. On the server, I can see an Apache process running at
around 98% CPU usage. It looks like there is still some memory free on
the server, so I don't think it's thrashing. There is plenty of hard
drive space available both on the client and the server. Once things
get this way I need to kill the client and restart Apache. I didn't see
anything useful in my Apache access or error logs. I can see a
transaction in the repository using 'svnadmin lstxns /path/to/repos'.
The transaction folder weighs in at 2.1 GB. Here are some stats about
my setup:
Subversion 1.3.0 (built from source)
Repository on ext3 filesystem
Have I reached a commit size limit in Apache? In ra_dav?
I have successfully done commits of around 1.5 GB, and I can split the
big one up to get around the problem. Does anyone have any advice on how
to fix the larger problem?
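Whatever the root cause turns out to be, the stale 2.1 GB transaction left behind by the wedged commit can be cleaned up with `svnadmin`. A guarded sketch (the repository path is a placeholder; only run `rmtxns` once you are sure no client is still attached to the transaction):

```shell
REPOS=${REPOS:-/path/to/repos}   # hypothetical path: point this at your repository

if command -v svnadmin >/dev/null 2>&1 && [ -d "$REPOS" ]; then
    # List transactions left behind by wedged or aborted commits:
    TXNS=$(svnadmin lstxns "$REPOS")
    echo "outstanding txns: $TXNS"

    # Once certain no client is still attached, remove the dead
    # transaction(s) to reclaim the disk space they hold:
    if [ -n "$TXNS" ]; then
        svnadmin rmtxns "$REPOS" $TXNS
    fi
fi
```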
Received on Wed Mar 22 16:51:48 2006