
Re: Best practice for working with external vendor repositories?

From: Jamie Thompson <subversion-users_at_jamie-thompson.co.uk>
Date: Tue, 17 Feb 2009 21:58:37 +0000

Ryan Schmidt wrote:
> svn_load_dirs.pl does not preserve any more history from the remote
> code base than is included in the repository export that you import.
> Further, since it is importing exports and is not accessing the
> repository directly, you will lose any Subversion properties,
> externals definitions, etc.

That's quite a shame.

> svn_load_dirs.pl is a Perl script and is in the Subversion source
> distribution.
>
> svn-load is a new script written in Python which is available at
> http://free.linux.hp.com/~dannf/svn-load/ . I don't know what its
> features are compared with svn_load_dirs.pl.

I'll certainly take a look at it. In the meantime, I've been thinking about
the problem of merging from an external repository.

As I see it, the way SVN's merge tracking currently works is that it
simply records, in the svn:mergeinfo property on the merge target, which
revisions have already been merged in for a given object, so when
subsequent merges are performed it defaults to the range n+1 to HEAD
(where n is the last revision previously merged).
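
(Roughly like this, say -- a sketch only, assuming the working copy
already carries mergeinfo; the branch path and revision numbers below are
made up for illustration:

  # Rough illustration only: read svn:mergeinfo and work out the default
  # range the next merge would use (path and revisions are made up).
  import subprocess

  def next_default_range(wc, source_path, head):
      mergeinfo = subprocess.run(
          ["svn", "propget", "svn:mergeinfo", wc],
          capture_output=True, text=True, check=True).stdout
      last_merged = 0
      for line in mergeinfo.splitlines():
          # a mergeinfo line looks like: /branches/vendor:1-103,110
          path, _, ranges = line.partition(":")
          if path == source_path and ranges:
              last_merged = int(ranges.split(",")[-1].split("-")[-1])
      return (last_merged + 1, head)   # i.e. n+1 through HEAD

  # next_default_range(".", "/branches/vendor", 250)  ->  (104, 250)

...which is all the bookkeeping there really is.)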

Thus you could conceivably support merging from an external repo by also
storing the URL of the revision merged from, so that history could
traverse back into the external repository once it runs out of history
for a given line. You could possibly even fake it client-side using
properties and a script.

...or something like that.
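
For example (purely hypothetical: the property name and vendor URL are
invented, and it assumes the property has been seeded by hand once), a
script could remember the last vendor revision merged in a custom
property and use it to pick the next range:

  # Hypothetical client-side workaround: remember the last vendor revision
  # merged in a custom property, then merge everything newer from the
  # vendor URL. Property name and URL are invented, and the property is
  # assumed to have been set once already.
  import subprocess

  VENDOR_URL = "http://svn.vendor.example/trunk"
  PROP = "vendor:last-merged-rev"

  def svn(*args):
      return subprocess.run(["svn", *args], capture_output=True,
                            text=True, check=True).stdout

  def merge_from_vendor(wc="."):
      last = int(svn("propget", PROP, wc).strip())
      head = int(svn("info", "--show-item", "revision", VENDOR_URL).strip())
      if head > last:
          # Pull every vendor change made since the last recorded merge.
          svn("merge", "-r", "%d:%d" % (last, head), VENDOR_URL, wc)
          svn("propset", PROP, str(head), wc)
      # a normal review and 'svn commit' of the working copy then follows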

Actually, I wonder if I could fake/simulate what I want, though, by
writing a script that syncs to each revision of the external repository
in turn and re-commits them to my own repository with the comments of the
original authors (with all the authors pre-added to my setup so the
changes can be recognisably differentiated). That way, individual commits
and their commentary could be preserved along with all their properties,
and all you'd lose is the date information (since you can't inject
revisions into the repository's history, there'd be no accurate dates
unless you were going from a clean slate and started messing around with
the revision dates).
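
Something along these lines, perhaps (untested, and the URL and paths are
placeholders): walk the vendor repository one revision at a time, merge
each change into a checkout of my vendor branch, and commit it with the
original author and log message folded into the new commit's message.

  # Untested sketch of the replay idea. VENDOR and WC are placeholders;
  # each vendor revision is applied and re-committed locally, carrying
  # the original author and log message in the new commit's message.
  import subprocess
  import xml.etree.ElementTree as ET

  VENDOR = "http://svn.vendor.example/trunk"
  WC = "vendor-branch-wc"   # checkout of the vendor branch in my repo

  def svn(*args):
      return subprocess.run(["svn", *args], capture_output=True,
                            text=True, check=True).stdout

  def replay(first_rev, last_rev):
      for rev in range(first_rev, last_rev + 1):
          log = ET.fromstring(svn("log", "--xml", "-r", str(rev), VENDOR))
          entry = log.find("logentry")
          if entry is None:
              continue   # this revision didn't touch the vendor path
          author = entry.findtext("author", default="unknown")
          message = entry.findtext("msg", default="")
          # Apply just this revision's change to the working copy...
          svn("merge", "-c", str(rev), "--ignore-ancestry", VENDOR, WC)
          # ...and commit it with the vendor's comment preserved.
          svn("commit", "-m",
              "[vendor r%d, %s] %s" % (rev, author, message), WC)

  # replay(1, 250)   # e.g. vendor revisions 1 through 250

Carrying the original svn:author and svn:date across properly would
additionally need revision-property edits (svn propset --revprop) on each
new commit, which the server only allows with a pre-revprop-change hook
in place -- so that's the "special access" problem again.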

Going from what I've read so far, I'm kind of surprised that
svn_load_dirs.pl and its siblings don't actually work this way, as it
seems to me far more useful than merging the changes en masse. Revision
numbers are cheap, so I can't see you gaining anything by collapsing them
down. I've also come across a project called "Piston", which looks worthy
of a play ( http://piston.rubyforge.org/index.html ), though I suspect it
just does the merge-everything-at-once method (from what I've read).

So in summary, I guess I'm after some kind of Subversion replication mirror
that doesn't require special access to the repository. Once I have a branch
with full history, I can then merge as usual.

After all, what's the point of the vendor publishing a Subversion repo if
the only way to get the code into your own repo (so you can commit) loses
all the metadata? You may as well just be getting tarballs.

Thoughts?

- Jamie
