On Jan 29, 2008 10:44 PM, Justin Erenkrantz <justin_at_erenkrantz.com> wrote:
> On Jan 29, 2008 8:22 AM, C. Michael Pilato <cmpilato_at_collab.net> wrote:
> > But doing this for mergeinfo gives me pause, because I don't think that
> > having 1.5 clients write mergeinfo to the repository and then having that
> > info ignored by 1.4 clients is actually good. As 1.4 clients clone,
> > tweak, and write node-revision information in the course of doing the whole
> > version control thang, I think this will have the effect of disturbing the
> > integrity of the data. As such, it seems we pretty much need to [...]
> Is this a realistic opportunity for corruption?
> > And then, we have to deal with the upgrade path. Do we implement 'svnadmin
> > upgrade', or do we require a dump and load?
> I think 'svnadmin upgrade' is an awful idea. I think we'd be opening
> a bigger can of worms than we'd be addressing: "what do you mean this
> doesn't do the 1.3->1.4 xdelta conversion?" So, if we go this route,
> I'd be concerned that we'd then have to explain what it upgrades and
> what it doesn't. Ugh.
> I don't see a particular reason to be scared of dump/load - if you
> choose to have lotsa repositories, you shouldn't be afraid of
> dump/load. (Or, perhaps you'll realize the drawbacks of having
> multiple repositories!)
> If the admin doesn't want to be bothered by a dump/load, then we could
> just disable merge tracking (iow, just report it as without merge
> capability). Even if they set svn:mergeinfo, no biggie. -- justin
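Justin's fallback above, leaving a non-upgraded repository at its old format and simply not advertising merge tracking, can be sketched roughly as follows. This is a toy illustration, not actual Subversion code; the function name and the format threshold are assumptions for the example (suppose the bumped, mergeinfo-capable filesystem format is 3).

```python
# Toy sketch of "report the server as without merge capability".
# MERGEINFO_FORMAT is an assumption for illustration only.
MERGEINFO_FORMAT = 3

def advertised_capabilities(repos_format):
    """Return the capabilities a server would advertise for a repository."""
    caps = ["edit-pipeline"]  # baseline capability, just as an example
    if repos_format >= MERGEINFO_FORMAT:
        caps.append("mergeinfo")
    return caps

# A repository left at the old (pre-bump) format simply never
# advertises merge tracking, so clients won't try to use it.
print(advertised_capabilities(2))
print(advertised_capabilities(3))
```

Under this scheme an admin who never dumps/loads just runs without merge tracking; nothing breaks, it is merely not offered.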

The more I think about requiring a dump/load, the more I hate it. I
might even go so far as to say that I think this will begin the decline
in our user base and uptake. Think how confusing it is going to be for
users who install 1.5 on their servers and find that merge-tracking
features do not work or issue cryptic (to them) errors. Are large
hosting sites like sourceforge.net going to manage the process of
doing a dump/load on thousands of repositories? I also imagine we
have a huge portion of users who are not sysadmins, do not have a
sysadmin, and will just be lost trying to understand all of this. Not
to mention just being scared by the dump/load process.

It seems like the main driver for a dump/load is the FSFS node-origins
cache. Since we already have the code written, it would be relatively
easy to support a hybrid approach: existing repositories could keep
using the big cache for their node-IDs, and you could dump/load only
if you wanted the new, more efficient format.

We still need to bump the format for the mergeinfo node changes. We
could do this with an 'svnadmin upgrade' or some other utility that
made the necessary changes but ran quickly. Personally, I would still
be in favor of doing this automatically and catering to the bulk of
our user base. I know this would cause problems for groups that share
a repository via file://, but that is still a tiny fraction of our
users. If a 1.5 client bumps the format, it shuts out the 1.4
clients, so at least there is no corruption that can happen. It is
inconvenient, but I think it is less of an inconvenience than requiring
everyone else to do something special.
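The shut-out behavior described above can be modeled in a few lines. This is a hedged sketch, not Subversion's implementation; the format numbers assume a 1.4 client understands format 2 and that a bumped repository becomes format 3.

```python
# Toy model of a repository format check: a client refuses to open any
# repository whose on-disk format is newer than the newest format it
# understands, so old clients fail cleanly instead of corrupting data.
SUPPORTED_FORMAT = 2  # what a hypothetical 1.4 client understands

def open_repository(format_on_disk):
    if format_on_disk > SUPPORTED_FORMAT:
        # The old client stops here; it never reads or writes the data.
        raise RuntimeError(
            "Expected format '%d' of repository; found format '%d'"
            % (SUPPORTED_FORMAT, format_on_disk))
    return "opened"

print(open_repository(2))  # an old-format repository still opens fine
```

The point of the sketch is that the failure is loud and immediate, which is why an automatic bump shuts out 1.4 clients without risking the data integrity concerns raised earlier in the thread.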

This is not a CollabNet issue, either. Our repositories are BDB, and if
you decide not to include an upgrade process, then we are perfectly
capable of writing a script for our operations group that can handle
this for the sites we manage. I am speaking purely as someone who
has been involved for a long time and has spent a lot of time helping
users on the various mailing lists and forums. These current plans
are going to cause a lot more problems than everyone seems to want to
admit.
Received on 2008-01-30 17:42:55 CET