On Thu, Aug 15, 2013 at 12:39 PM, <dlellis_at_rockwellcollins.com> wrote:
>> But regardless of how you identify the target
>> file, there shouldn't be any effective difference between copying a
>> version into your directory or using a file external as long as you
>> don't modify it in place and commit it back - something your external
>> tool could track.
> We do want to modify in place. Copying back creates an additional step that
> is already managed quite well by SVN with externals.
I've never done that with a file external - where does the commit go?
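For reference, a file external is defined via the svn:externals property on the directory that should contain the file. The paths, revision, and URL below are hypothetical, just to sketch the mechanism under discussion:

```shell
# Hypothetical layout: pull one shared file into a project directory
# as a file external, pinned to a specific revision.
svn propset svn:externals \
  'shared_defs.h ^/common/trunk/include/shared_defs.h@4207' project/src
svn commit -m "Add shared_defs.h as a pinned file external" project/src
svn update project/src   # fetches the external at r4207
```

As I understand it, a pinned external like this is effectively read-only; if the external is unpinned and modified in place, the commit goes to the external's source URL rather than the enclosing project.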
>> Again, you get the history in a copy. You can tell if they are the
>> same. Or, on unix-like systems you can use symlinks to a canonical
>> copy within the project.
> We're not on a unix-like system, but that is what would work great (with
> the exception that you can't revision control symlinks, right?)
I think so, but the links and the target would be versioned
independently which might complicate your tracking.
>> I agree that externals are very useful, but most projects would use
>> them at subdirectory levels for component libraries where they work
>> nicely, not for thousands of individual file targets. Is there
>> really no natural grouping - perhaps even of sets of combinations that
>> have been tested together that you could usefully group in
>> release-tagged directories?
> The whole discovery we found is that most of our reuse occurred in unplanned
> ways. (I'd imagine if you took two linux distros and compared which files
> changed and which didn't, it would be a huge collection of random files
> that aren't easily abstracted out.) You might be able to do it once, but as
> each new distribution branches out, the commonality between each of them
> becomes impossible to form groupings on.
I was thinking of just adding an extra layer of grouping management
that would be versioned and able to be duplicated as much as
necessary. Suppose you made 10 directories and copied 100 files into
each with tagged versions of these directories for every combination
you need to access. Normally there would be natural groupings where
there is a common manager making decisions, etc., but for the moment
just consider it for performance. Within the repository, the copies
are cheap like symlinks - you could have a large number of
pre-arranged tagged choices. Then your top level project becomes 10
directory-level externals instead of 1000 file externals.
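The arrangement above can be sketched roughly as follows; all repository paths and names here are hypothetical:

```shell
# Hypothetical: build a grouping directory out of cheap server-side copies,
# tag the tested combination, then reference the whole group with a single
# directory-level external.
svn mkdir -m "Create grouping dir" ^/groups/display-subsystem
svn copy -m "Collect file at tested rev" \
  ^/lib/trunk/fileA.c@100 ^/groups/display-subsystem/fileA.c
svn copy -m "Collect file at tested rev" \
  ^/lib/trunk/fileB.c@212 ^/groups/display-subsystem/fileB.c
# ...repeat for the remaining files, then tag the combination:
svn copy -m "Tag tested combination v1" \
  ^/groups/display-subsystem ^/tags/display-subsystem-v1

# The project now carries one directory external instead of ~100 file externals:
svn propset svn:externals 'display ^/tags/display-subsystem-v1' project/trunk
```

Because repository copies are cheap, tagging many pre-arranged combinations this way costs little on the server side.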
> I'm not sure what a reasonable number of external files per folder is, but
> I'd think it'd be similar to a reasonable number of regular files would be.
> Two million is nuts, but 50 seems reasonable.
Think of this in terms of client-server activity. With directory-level
externals, the client can ask the server in one exchange whether anything
under the directory has newer revisions, and if nothing has, you are done.
So what's reasonable is the amount of activity you want to generate
between the client and the server on each update.
> The issue is that I'm
> currently forced to deal with not just the current directory, but the
> recursion on all nested directories (--depth infinity). If, as the subject
> of this thread requests, we could perform work on the directory at hand at
> not the full checkout, we're golden!
I tend to think that if you don't always follow nested directories you
made a mistake in your layout or your checkout point, but I suppose
there are always exceptions.
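For what it's worth, Subversion's sparse directories can already limit work to the directory at hand rather than the full recursion; the URL below is hypothetical:

```shell
# Hypothetical URL. Check out only the top level, without full recursion:
svn checkout --depth immediates http://svn.example.com/repo/trunk wc
cd wc
# Deepen only the subtree actually needed:
svn update --set-depth infinity components/display
# Update just this directory itself, no children:
svn update --depth empty .
```

Whether this helps depends on how the externals are attached, but it avoids paying for `--depth infinity` on every operation.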
> I do appreciate this discussion.
Usually I'd consider the 'human' side of organization first, so if you
can come up with any groupings that could be done as copies into
tagged directories, you might want to arrange them by the people/groups
who make the choices - and then the performance win would just be an
added bonus.
Received on 2013-08-15 20:34:15 CEST