
Accidental double tag copy

From: Rob Hubbard <rob.hubbard_at_softel.co.uk>
Date: Fri, 26 Sep 2008 18:07:19 +0100

Dear SVN Users,

I have a suggestion / feature request for SVN.

First: the problem

In the past, I've encountered a problem with tag creation. If the SVN
copy command is accidentally issued twice, something rather nasty
happens.

Suppose you have a project called "tag_copy_twice" with a few files in
a version-controlled trunk (in some repository svn://repos/).

You create a tag for version 1.0, e.g.:

    $ svn copy -m"tag v1.0" "svn://repos/tag_copy_twice/trunk" \
               "svn://repos/tag_copy_twice/tag/v1.0"

All is still well and good: tag/v1.0 now contains a copy of the trunk
files.

Accidentally issuing the tagging command a second time, you end up
with an extra copy of the trunk nested inside the tag, at
tag/v1.0/trunk.

This is perfectly reasonable. It's what you asked for. The behaviour
matches what a shell copy (or cp) command would do. It's certainly not
an SVN bug.

(I think this is a special case that TortoiseSVN warns about: changes
to areas with "tag" in the URL. I don't think that's the right
approach for SVN: paths should not have special meanings.)

The problem stems from the fact that the copy command means something
slightly different depending upon whether the target already exists.
An alternative possible cause of this problem is if two engineers
create the same tag at almost the same time.

Fortunately, if you accidentally issue the tagging command a third
time, SVN issues an error:

    svn: Path 'tag/v1.0/trunk' already exists

so it doesn't get worse still.
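The same shape of accident can be reproduced with plain directories, a
minimal sketch using throwaway paths (the file name is illustrative):

```shell
#!/bin/sh
# Minimal sketch of the double-copy accident using plain cp.
# All paths and the file name are illustrative throwaway examples.
rm -rf repro
mkdir -p repro/trunk repro/tag
echo "hello" > repro/trunk/file.txt

# First "tagging": tag/v1.0 does not exist, so it becomes a copy of trunk.
cp -R repro/trunk repro/tag/v1.0

# Whoops, did it again: tag/v1.0 now exists, so trunk is copied *into* it,
# producing the nested tag/v1.0/trunk.
cp -R repro/trunk repro/tag/v1.0

find repro/tag -name '*.txt'
```

The find at the end turns up both repro/tag/v1.0/file.txt and the
unwanted repro/tag/v1.0/trunk/file.txt, which is the plain-filesystem
analogue of the tag/v1.0/trunk path above.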

Second: comparison with Bash cp

Now consider the following Bash shell script (this is nothing to do
with SVN):

    ### set up a sample source directory (file name illustrative)
    mkdir "from"
    touch "from/file.txt"

    #rm -Rf to ### dangerous!
    mkdir to

    ### ordinary copy
    cp -R "from" "to/norm"
    ### whoops, did it again!
    cp -R "from" "to/norm"

    ### copy using -t or --target-directory=DIR
    cp -R "from" -t "to/targ1"

    mkdir "to/targ2"
    cp -R "from" -t "to/targ2"
    ### whoops, did it again!
    cp -R "from" -t "to/targ2"

    ### copy using -T or --no-target-directory
    cp -R "from" "to/no_targ" -T
    ### whoops, did it again!
    cp -R "from" "to/no_targ" -T

    find "to" -iname "*.txt"

This outputs something like this (I've rearranged and spaced out the
output slightly for clarity):

    $ ./copy_test.sh
    cp: accessing `to/targ1': No such file or directory
    ...

Note that:

    * the 'ordinary' cp exhibits the same sort of behaviour as SVN copy
    * 'cp -t' won't perform the copy unless the target directory
      already exists
    * 'cp -T' doesn't create 'from' in the target
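One way to get the safe behaviour with today's tools (a sketch, not an
existing SVN feature; paths and the helper name are illustrative) is
to let mkdir act as the existence check, since mkdir fails if the
directory is already there:

```shell
#!/bin/sh
# Sketch: mkdir fails if the directory already exists, so it can serve
# as a "create exactly once" guard before copying contents in.
# Paths, file names, and tag_once are illustrative.
rm -rf guard_demo
mkdir -p guard_demo/from
echo "v1.0" > guard_demo/from/readme.txt

tag_once() {
    target=$1
    if mkdir "$target" 2>/dev/null; then
        cp -R guard_demo/from/. "$target"    # copy contents, no nesting
    else
        echo "refusing: '$target' already exists" >&2
        return 1
    fi
}

tag_once guard_demo/tag_v1.0            # first attempt: succeeds
tag_once guard_demo/tag_v1.0 || true    # second attempt: refused

find guard_demo/tag_v1.0 -name '*.txt'
```

Running it twice leaves exactly one readme.txt under the tag
directory; the second attempt is refused rather than nesting a second
copy inside.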

Third: suggested solution

It would be great if SVN copy had these switches:

    --target-directory DIR
    --no-target-directory

The above switches would help, as they would make the required
behaviour explicit. However, this wouldn't solve the whole problem, as
the copy operations would still be able to overwrite and/or mix the
source directories in the target.

Thus perhaps a further switch might be useful for URL-to-URL copies,
whose meaning would be to prevent the copy in its entirety if the
target already exists.
The main problem with this approach is that it relies on the issuer of
the copy command to actively decide to use the switch.

Perhaps a new (versioned) property on directories

    svn:final (or svn:sealed or svn:read-only)

in conjunction with a copy switch

    --mark-final (or --seal or --mark-read-only)

could be implemented. (As usual, I suppose --force should overcome the
block.) I realise that SVN has a locking mechanism, but I don't think
that's quite the right thing for this.
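To make the idea concrete, here is a rough filesystem model of it (a
sketch only: the ".sealed" marker file stands in for the hypothetical
svn:final property, FORCE=1 stands in for --force, and guarded_copy is
an illustrative helper, not real SVN behaviour):

```shell
#!/bin/sh
# Rough model of the hypothetical svn:final property: a ".sealed"
# marker file stands in for the property, and FORCE=1 stands in for
# --force. All names are illustrative; none of this is real SVN.
guarded_copy() {
    src=$1; dst=$2
    if [ -e "$dst/.sealed" ] && [ "${FORCE:-0}" != 1 ]; then
        echo "refusing: '$dst' is sealed" >&2
        return 1
    fi
    mkdir -p "$dst"
    cp -R "$src/." "$dst"
}

rm -rf seal_demo
mkdir -p seal_demo/trunk
echo "v1" > seal_demo/trunk/file.txt

guarded_copy seal_demo/trunk seal_demo/tag_v1.0   # create the tag
touch seal_demo/tag_v1.0/.sealed                  # "mark it final"

guarded_copy seal_demo/trunk seal_demo/tag_v1.0 \
    || echo "second copy refused"                 # blocked by the seal
FORCE=1 guarded_copy seal_demo/trunk seal_demo/tag_v1.0  # --force analogue
```

The second copy is refused by the seal rather than nesting trunk
inside the tag, while the forced copy overrides the block, as --force
would.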

I dare say that these suggestions are not ideal, but there is something
to be solved here. Perhaps someone else might come up with a more
elegant solution.

Many thanks,
Rob Hubbard.


To unsubscribe, e-mail: users-unsubscribe_at_subversion.tigris.org
For additional commands, e-mail: users-help_at_subversion.tigris.org
Received on 2008-09-26 19:07:51 CEST

This is an archived mail posted to the Subversion Users mailing list.
