
Re: svnadmin dump issue - E200015

From: Nico Kadel-Garcia <nkadel_at_gmail.com>
Date: Sun, 7 Aug 2016 18:22:54 -0400

On Sun, Aug 7, 2016 at 4:45 PM, Johan Corveleyn <jcorvel_at_gmail.com> wrote:
> On Sun, Aug 7, 2016 at 8:41 PM, William Muriithi
> <william.muriithi_at_gmail.com> wrote:
>> Hello,
>>
>> I have a repository that's around 113 GB in size. It's on a VM and the
>> performance hasn't been that ideal. So we decided to source new
>> hardware and set it up on its own dedicated system.
>>
>> The current subversion system is subversion-1.8, and we plan to move it
>> to subversion-1.9. I have en
>>
>> source server:
>> subversion-1.8.13-1.x86_64
>>
>> I have attempted svnadmin dump, svnadmin hotcopy and svnrdump. I am
>> getting the error below from all these utilities

Break it up. Use svnadmin dump for 1000 commits at a time, to help
isolate whether it's a specific commit that is killing you or whether
you are perhaps running out of resources for such a large dump.
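
Something along these lines, for example (the repository path and
revision ranges are just placeholders, so adjust them to your setup):

    # full dump of the first chunk, then incremental chunks after that
    svnadmin dump /var/svn/bigrepo -r 0:999 > bigrepo-r0000-0999.dump
    svnadmin dump /var/svn/bigrepo -r 1000:1999 --incremental \
        > bigrepo-r1000-1999.dump
    svnadmin dump /var/svn/bigrepo -r 2000:2999 --incremental \
        > bigrepo-r2000-2999.dump

    # load them into the new repository in the same order
    svnadmin load /var/svn/newrepo < bigrepo-r0000-0999.dump
    svnadmin load /var/svn/newrepo < bigrepo-r1000-1999.dump
    svnadmin load /var/svn/newrepo < bigrepo-r2000-2999.dump

If one chunk dies consistently, you've found the revision range to dig
into; if every chunk succeeds, the problem was probably resources for
the single big dump.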

In particular, 113 GB is pretty bulky. It's probably a good time to
think about trimming away some of the fat, such as old branches and
tags that are no longer needed, or even to think about splitting the
repo up into smaller, individual projects. If that can be trimmed away
with proper use of svndumpfilter, more power to you. This violates the
tenet that the history, all of it, is the important thing, but I'm not
personally a big believer in that tenet.
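
A rough sketch of the svndumpfilter route (the branch and tag paths
below are made up, purely for illustration):

    # dump everything once, then strip the dead weight from the dump
    svnadmin dump /var/svn/bigrepo > bigrepo-full.dump
    svndumpfilter exclude branches/dead-experiment tags/pre-1.0 \
        --drop-empty-revs --renumber-revs \
        < bigrepo-full.dump > bigrepo-trimmed.dump

    # load the trimmed dump into a fresh repository
    svnadmin create /var/svn/trimmedrepo
    svnadmin load /var/svn/trimmedrepo < bigrepo-trimmed.dump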

There are also several nasty approaches to get something working
*now*, in parallel with your old repository. One is to do svn
export/import of the relevant code into a new working repository: you'd
need to reconcile the inevitable split brain after you get the full
repository copied, but if you're out of time it can save your ass.
(And yes, I'm bringing it up again, even though it consistently gets
me yelled at.)
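
At its simplest, that looks something like this (server URLs and paths
are placeholders):

    # snapshot the current trunk, with no history attached
    svn export http://oldserver/svn/repo/trunk /tmp/trunk-snapshot

    # import the snapshot into a brand new repository
    svnadmin create /var/svn/newrepo
    svn import /tmp/trunk-snapshot http://newserver/svn/newrepo/trunk \
        -m "Interim import of trunk while the full migration is sorted out"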

There is also a nasty, but in a pinch workable, trick for getting
similar end results with unsophisticated repositories when you're out
of time. If you're not worried about preserving svn:keywords,
svn:ignore, or other repository attributes, you can get a fast clone
working, when svnadmin is giving you trouble, by using git:

* Use "git svn" to make a "fast" copy.
* Delete unwanted debris, such as unneeded tags and branches.
* Use "git gc" to obliterate content that is no longer used.
* Use "git svn" to upload the content to a new, *much smaller* working
repository.

This flushes a great deal of attribute information and potentially
useful history, but it's often a much faster way to clear excess
debris and get a new, non-identical, but workable Subversion
repository with most of the important content history intact.
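
Roughly, and with heavy caveats (the URLs and branch names below are
placeholders, the remote ref names depend on your git-svn version, and
the new repository is assumed to already have an empty
trunk/branches/tags skeleton in place):

    # clone the old repository, history and all
    git svn clone http://oldserver/svn/repo --stdlayout repo-git
    cd repo-git

    # drop the remote-tracking refs for branches/tags you don't want,
    # then let git throw away the now-unreferenced objects
    git branch -r -d origin/some-dead-branch
    git gc --aggressive --prune=now

    # aim git-svn at the new repository, rebase the cleaned-up history
    # onto its empty trunk, and replay the commits into it
    git svn init http://newserver/svn/newrepo --stdlayout
    git svn fetch
    git rebase --onto origin/trunk --root master
    git svn dcommit

Expect to babysit that last dcommit: every commit gets replayed over
the wire, and properties such as svn:keywords and svn:ignore do not
come along for the ride.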

>> svnadmin: E200015: Caught signal
>>
>> or
>>
>> * Dumped revision 2968.
>> svnrdump: E200015: Caught signal
>>
>> My two questions are:
>>
>> - Is it safe to use svnadmin dump on the source repository? As in,
>> does it make any change in the source repo?
>
> Yes, it's absolutely safe to run on the source repository; it should
> not cause anything to change in the source repo.

The big risk I see with a 113 GB repository is running out of
resources while doing "svnadmin dump".

>> - What causes E200015? I don't think it's permissions, as I have even
>> attempted to run svnadmin dump with root permissions. How can I
>> overcome the issue?
>
> Is that the full error message you gave?
>
> "Caught signal" would be what you'd get if the process were aborted by
> a signal, like aborting it with Ctrl-C, or if some other user with
> enough permissions sends a kill signal to your process or something
> like that.
>
> Is it always aborting at the same revision?
>
> Maybe someone else is interfering with your work, and (accidentally)
> killing your process? Or maybe your remote (ssh / whatever) session
> gets disconnected, which causes the dump process to abort?

That... could also be a problem. I would use "nohup" to wrap the dump
and capture usable output from both stdout and stderr.
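
Something like this (repository path and filenames are placeholders);
svnadmin dump writes the dump itself to stdout and the "* Dumped
revision N." progress lines to stderr, so keep them in separate files:

    nohup svnadmin dump /var/svn/bigrepo > bigrepo.dump 2> dump-progress.log &
    tail -f dump-progress.log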

If you're logging out mid-process and using a recent Linux, I'd also
be cautious about this:

            https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=825394

Recent versions of systemd kill user processes when the original user
login is disconnected, even if some process launched by that user is
backgrounded. It kills nohup'ed jobs, jobs in "screen" and "tmux", and
it doesn't log anywhere that it killed the process.
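
If that's what is biting you, the usual workarounds on a logind-based
system (check your distribution's defaults) are either to turn the
behaviour off globally or to let your own background jobs outlive the
login session:

    # see whether the option is enabled
    grep KillUserProcesses /etc/systemd/logind.conf

    # option 1: set KillUserProcesses=no in /etc/systemd/logind.conf,
    # then restart systemd-logind

    # option 2: allow lingering for the account running the dump
    loginctl enable-linger yourusername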

> --
> Johan
Received on 2016-08-08 00:23:00 CEST

This is an archived mail posted to the Subversion Users mailing list.
