Hello,
I'm writing a script (also run as a cron job on the server) that
should work regardless of whether a checkout has already been done. I
would like to do a "repeated sparse checkout" as explained below, but
I'm not sure how to do it properly. (I have a workaround for this
problem, but I would like to know whether there is a more elegant
solution.)
Repository has the following structure:
- project/trunk (I would like to have it checked out)
- project/branches/xxx (I don't want them)
- project/tags/xxx (I would only like to have the latest one, but it
doesn't bother me too much if older ones stay around)
The first time I can do the following:
latest=`svn list $URL/tags | grep beta | tail -1 | tr -d '/'`
svn co --depth=empty $URL
svn up project/trunk
svn up --set-depth empty project/tags  # temporary workaround for a bug in svn
svn up project/tags/$latest
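As an aside, the tag-selection step can be exercised without touching the server. A minimal sketch (the `pick_latest` helper name is mine, and I'm assuming GNU `sort -V` is available so that tag names sort in version order rather than the plain lexicographic order `svn list` gives you):

```shell
#!/bin/sh
# Hypothetical helper: pick the newest beta tag from an
# `svn list $URL/tags`-style listing fed on stdin.
# Assumes version-aware sorting via GNU `sort -V`.
pick_latest() {
    grep beta | tr -d '/' | sort -V | tail -n 1
}

# Demonstration with a simulated `svn list` output:
printf '%s\n' 'beta-1.9/' 'beta-1.10/' 'beta-1.2/' | pick_latest
```

With plain lexicographic `tail -1`, `beta-1.9` would win over `beta-1.10`; `sort -V` picks `beta-1.10` as intended.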
I could just use "svn up" from that point on, but since I would like
the script to work even if nothing has been checked out yet, I would
like to keep "svn co" in the script.
The problem is that "svn co --depth=empty $URL" will delete all the
existing contents the next time I call it. Is there any way to prevent
deleting existing contents using command-line options alone?
A workaround is to use something like
if [ ! -d "project" ]; then
    svn co --depth=empty $URL
fi
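For what it's worth, that guard can be wrapped into a small reusable function, which also makes the "run only on first invocation" behaviour easy to verify without a repository. This is only a sketch; `ensure_checkout` is a name I made up, and the `svn co` line in the comment stands in for the real invocation:

```shell
#!/bin/sh
# Hypothetical guard: run the given command only when the working
# copy directory is absent, so repeated cron runs are idempotent.
ensure_checkout() {
    wc_dir=$1; shift
    if [ ! -d "$wc_dir" ]; then
        "$@"    # e.g. svn co --depth=empty "$URL"
    fi
}

# Demonstration without a server: the command runs only the first time.
tmp=$(mktemp -d)
ensure_checkout "$tmp/project" mkdir "$tmp/project"   # creates the dir
ensure_checkout "$tmp/project" false                  # skipped: dir exists
echo ok
```

The second call is skipped because the directory already exists, so the failing `false` never runs.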
but I would be really happy if there were some command like:
"please checkout $URL, but no need to fetch any files yet, in
particular don't fetch 'branches'; on the other hand please don't
delete 'trunk' and 'tags' if already present"
Thank you very much,
Mojca
Received on 2011-12-20 13:09:26 CET