

From: Hyrum K. Wright <hyrum_wright_at_mail.utexas.edu>
Date: Tue, 16 Feb 2010 12:16:55 +0000

On Feb 16, 2010, at 11:58 AM, Radomir Zoltowski wrote:

> All,
> I am reading the WC-NG design from http://svn.apache.org/repos/asf/subversion/trunk/notes/wc-ng/design. I expect to deploy 1.7.x in my environment around the middle of this year, so I would like to ask a few questions here. This is a purely administrative perspective, which some may consider simplistic, minimalist or even conservative. Nevertheless, may I?
> 1.
> According to the user's config, the metadata will be placed in one of
> three areas:
> wcroot: at the root of the working copy in a .svn subdirectory
> home: in the .subversion/wc/ subdirectory
> /some/path: stored in the given path
> What will be the default location for the metadata directory? How does one tell that a specific location on disk is part of a working copy when the .svn directory is relocated?

The default (and only method supported in 1.7) will be a .svn directory at the root of the working copy. As with previous versions of Subversion, please don't manually move or edit the contents of the .svn directory.
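To make the wcroot-based layout concrete, here is a small sketch of how a tool might locate the root of a working copy by walking upward until it finds a `.svn` directory. This is illustrative only; libsvn_wc's actual detection logic is more involved, and the helper name is hypothetical.

```python
import os
import tempfile

def find_wcroot(path):
    """Walk upward from `path` until a directory containing `.svn` is found.

    Hypothetical helper: libsvn_wc does its own (more elaborate) detection,
    but the basic idea for the single-.svn layout is the same.
    """
    path = os.path.abspath(path)
    while True:
        if os.path.isdir(os.path.join(path, ".svn")):
            return path
        parent = os.path.dirname(path)
        if parent == path:  # reached the filesystem root without finding .svn
            return None
        path = parent

# Demo: fake a working copy in a temporary directory.
wc = tempfile.mkdtemp()
os.makedirs(os.path.join(wc, ".svn"))
os.makedirs(os.path.join(wc, "trunk", "src"))
print(find_wcroot(os.path.join(wc, "trunk", "src")) == wc)  # True
```

Note that with this layout, unlike wc-1.0, a subdirectory deep inside the tree no longer carries its own `.svn`; everything hangs off the single directory at the root.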

> 2.
> If the user has moved the wcroot (the stored path
> is different from the current/actual path), then Subversion will exit
> with an error. The user must then ###somehow tell svn that the wc has
> been copied (duplicate the metadata for the wcroot) or moved (tweak
> the path stored in the metadata and in the linkage file).

We're still working out some of the issues. I believe the expected behavior will be the same when the entire working copy is copied. When subdirectories of the working copy are moved or copied, they will need to be 'detached'. The detach feature has not yet been designed or implemented (the demand has not yet reached critical mass).

> It should be understood here that, in some (enterprise) environments, large repositories are too resource-expensive to be checked out multiple times by every user of the repository. Working copies should still support a "check-out-once, copy-everywhere" deployment model. I would say that not everything about CVS (or even RCS) was bad. Nevertheless, if the above is implemented as described, would it be possible to reset the metadata (without a full binary check-out) to a new location of the working copy? Naturally, it would probably land in an extended 'svn cleanup', perhaps with the already-known '--relocate' option, but I am leaving it as an open suggestion. Also, I am assuming that one extra step would be accepted by most administrators and users, which may not be the case initially.
> ... or, putting things differently: let's say there is a team of 100 people somewhere in Europe awaiting access to a 250 GB repository somewhere in Australia. What can be done to avoid 100 check-outs? Assume the repository contains binary data and must be checked out in full. A slave with an http-based proxy is not an option (until an svn+ssh-based proxy is invented).

This is a valid concern. The complete behavior is still unknown (see statement above).

Eventually (but not in 1.7), we plan on letting users share the pristine data store, which would avoid this problem.
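The idea behind a shared pristine store can be sketched as content-addressed storage: pristine text is keyed by its checksum, so identical content checked out into many working copies is stored only once. This is a toy model of the concept, not Subversion's actual on-disk format, and the class and method names are hypothetical.

```python
import hashlib

class PristineStore:
    """Toy content-addressed store: one copy per unique content, shareable
    by any number of working copies. A sketch of the concept only."""

    def __init__(self):
        self._blobs = {}  # checksum -> content
        self._refs = {}   # checksum -> reference count across working copies

    def add(self, content):
        key = hashlib.sha1(content).hexdigest()
        if key not in self._blobs:      # identical content is stored once
            self._blobs[key] = content
        self._refs[key] = self._refs.get(key, 0) + 1
        return key

    def get(self, key):
        return self._blobs[key]

store = PristineStore()
# Two working copies "check out" the same large binary artifact:
k1 = store.add(b"250 GB of binary data (pretend)")
k2 = store.add(b"250 GB of binary data (pretend)")
print(k1 == k2, len(store._blobs))  # same key, only one stored copy
```

Under such a scheme, the hundred European users in the example above would pay the network and disk cost for the pristine data once, not a hundred times.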

> 3.
> absent  <none>  Server has marked the node as "absent", meaning
>                 the user does not have authorization to view the
>                 content.
> Is there a plan to make the server aware of its working copies, specifically nodes in this case? If yes, what is it going to solve? I am seeing extra management tasks and points of failure here. Please correct me if I am wrong.

There is no plan to make the server aware of "its" working copies. As you point out, this would add management burden, and it is not very scalable.

> All metadata will be stored into a single SQLite database. This
> includes all of the "entry" fields *and* all of the properties
> attached to the files/directories. SQLite transactions will be used
> rather than the "loggy" mechanics of wc-1.0.
> What is SQLite going to solve? If the metadata is in one location, the amount of data stored should already be significantly reduced. Can anybody explain the rationale for using SQLite here, please? Again, from my perspective, it is another layer which brings another point of failure.

SQLite actually *removes* points of failure. Instead of our custom on-disk data format (which has had to be continually updated and improved), we let SQLite do that for us. SQLite is well-tested, widely used, and provides atomicity semantics which are very useful to Subversion. Instead of re-inventing the wheel, we can use a much better engineered wheel.
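The atomicity point can be illustrated with a small stand-in for the metadata database. The table and column names below are illustrative, not Subversion's actual wc.db schema; the point is only that a multi-step metadata update either applies completely or rolls back completely, with no "loggy" half-finished state to clean up.

```python
import sqlite3

# In-memory stand-in for the working-copy metadata database.
# Schema is illustrative, not Subversion's actual wc.db layout.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE nodes (local_relpath TEXT PRIMARY KEY, props TEXT)")
db.execute("INSERT INTO nodes VALUES ('trunk/a.c', 'svn:eol-style=native')")
db.commit()

# A metadata update interrupted mid-way is rolled back atomically.
try:
    with db:  # the connection as context manager wraps one transaction
        db.execute("UPDATE nodes SET props = 'half-written' "
                   "WHERE local_relpath = 'trunk/a.c'")
        raise RuntimeError("simulated interruption mid-update")
except RuntimeError:
    pass

# The partial update never became visible; metadata is still consistent.
row = db.execute("SELECT props FROM nodes "
                 "WHERE local_relpath = 'trunk/a.c'").fetchone()
print(row[0])  # still 'svn:eol-style=native'
```

With wc-1.0, an interruption at the wrong moment left log files behind that `svn cleanup` had to replay; with a transactional store, the database simply never records the incomplete state.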

Hope this helps,
Received on 2010-02-16 13:17:37 CET

This is an archived mail posted to the Subversion Users mailing list.
