On Thu, Jul 27, 2017 at 3:23 PM, James H. H. Lampert
> My employer has put me on a project of moving our SVN and Trac servers from
> the old Windows Server 2003 box on which they're currently running over to a
> Google Compute Engine instance.
> To that end, I've set up the instance using Bitnami's canned Trac image,
> which includes SVN 1.9.5 (r1770682) and Trac 1.0.15 (our old SVN server is
> 1.5.0, r31699, and our old Trac server is 1.0).
> I've got a test repository set up, and I've arranged access via both https:
> and svn+ssh: protocols, which I then spent a few hours testing from Eclipse.
> But I'm not the one who set up the original SVN and Trac environments in the
> first place, and so what little I know about administration on these
> products is what I've picked up over the past few weeks.
> Now, Trac's wiki page on the process of a dual migration seems to be
> pretty straightforward on the subject of migrating Trac, but the
> section on migrating SVN is not so.
That page is good stuff.
> They recommend setting up a "pre-revprop-change" script with nothing in it
> but the initial "shebang", for each target repository, and then using
> "svnsync" to migrate the repositories. It also assumes the existence of an
> "svnsync" user-ID on the target system, which (at least assuming it's an
> operating system user-ID) we don't currently have.
That is just the account name of the user who has access to the
upstream repository. If you don't have access to that upstream
repository via Subversion over https://, or svn+ssh://, or a
CIFS-mounted filesystem, or a local filesystem copy, or *something*,
it's going to be very difficult to copy the repository at all. And
https://, svn+ssh://, or CIFS-mount access gets you access to the live
upstream repository for ongoing updates.
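As a sketch of that flow, assuming the new repository lives at
/srv/svn/myrepo on the Compute Engine box and the old server is
reachable at https://oldbox/svn/myrepo (both paths are my invention):

```shell
# Create the empty target repository on the new server.
svnadmin create /srv/svn/myrepo

# svnsync has to set revision properties (author, date, log message)
# on the mirror, so the target needs a pre-revprop-change hook that
# allows that; a script that just exits 0 is enough.
cat > /srv/svn/myrepo/hooks/pre-revprop-change <<'EOF'
#!/bin/sh
exit 0
EOF
chmod +x /srv/svn/myrepo/hooks/pre-revprop-change

# Point the mirror at the upstream repository, then pull everything.
svnsync init file:///srv/svn/myrepo https://oldbox/svn/myrepo
svnsync sync file:///srv/svn/myrepo

# Re-run "svnsync sync" at any time to pick up new upstream commits,
# until you are ready to cut over.
```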
> Everything else I've read, especially The SVN Book, says to use "svnsync"
> only for mirroring, and instead migrate using some combination of "svnadmin
> dump," "svnadmin load," "svnrdump," and "svnrload."
svnsync has gotten popular because it lets you keep the new repo
up-to-date until you're ready to switch. svnadmin dump, etc. are more
useful when you want to make an offline backup, or when you want to
filter out content. Note that switching to a new repository is about
the *only* chance you're going to get to clear out old content.
If you have a cluttered "branch" layout, or bulky ISO images someone
accidentally committed, or old passwords embedded in files you want to
clear, here is your chance, with dump, filter, and load operations.
I'm not sure how much that kind of filtering would do to Trac, just
be aware of it.
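A filtered migration of that sort might look like this sketch, assuming
the old repository sits at /svn/myrepo and the accidentally committed
ISOs live under trunk/isos (both paths are my invention):

```shell
# On the old server: dump the full history to a portable file.
svnadmin dump /svn/myrepo > myrepo.dump

# Strip the unwanted path from every revision in the dump.
svndumpfilter exclude trunk/isos < myrepo.dump > myrepo-filtered.dump

# On the new server: load the filtered history into a fresh repository.
svnadmin create /srv/svn/myrepo
svnadmin load /srv/svn/myrepo < myrepo-filtered.dump
```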
> I'm not seeing a lot about copying configuration files or hook scripts. Is
> that just a matter of sending them over?
Going from Windows 2003 to a Google Compute Engine instance? You
*wish*. In theory, yes, but in practice, if they've been locally
customized, they may have hardcoded dependencies on particular
scripting languages. One step that may help: if you have access to
the old box, run "svnadmin hotcopy" to get a copy containing all the
old scripts, which you can set aside and experiment with separately.
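For instance, on the old Windows box (the paths here are just
placeholders):

```shell
# Make a byte-for-byte copy of the repository, including its conf/
# and hooks/ directories, while the original stays live.
svnadmin hotcopy C:\svn\myrepo D:\backup\myrepo-copy
```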
> And I don't quite understand how this whole business impacts the authors of
> commits. Does SVN care whether the author of a commit is a user known to SVN
> or to the operating system? I've already copied an "authz" file from one of
> the existing repositories into the test repository, and given the current
> users Apache user-IDs and passwords, but that's all, so far.
It Depends(tm). For https:// access, the author of a commit is known to
the httpd daemon as an authenticated user, and the httpd daemon needs
write access to the repository on the server's filesystem. For svn://
access, similarly, the author is known to the svnserve daemon, not the
local filesystem, and the user svnserve runs as needs write access. For
svn+ssh://, the author is typically *set* in the configuration for the
SSH key, and the local user designated for SSH access (or for that key)
needs write access. For file:/// access, the user would need to exist
locally in some way and have write access to the filesystem.
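For the svn+ssh:// case, the author can be pinned per key in the shared
account's ~/.ssh/authorized_keys, along these lines (the key material,
account, and user name are placeholders):

```
# ~/.ssh/authorized_keys on the shared "svn" account: force each key to
# run svnserve in tunnel mode, with the commit author fixed by
# --tunnel-user no matter what the client claims.
command="svnserve -t --tunnel-user=jlampert",no-port-forwarding,no-pty ssh-rsa AAAA... jlampert@example.com
```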
What you have seems quite correct. The httpd daemon needs write
access, and httpd checks the users' credentials both for https://
access and for the Trac software. (I'm picky about Apache being
apache-1.x, and release 2.x being renamed httpd, which is why I don't
call it Apache.)
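For reference, a minimal authz file of the kind you copied over looks
something like this (group and repository names are placeholders):

```
[groups]
devs = jlampert, alice

[myrepo:/]
@devs = rw
* = r
```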
Received on 2017-07-28 13:34:26 CEST