
Re: How to change paths on an external file without a full update --depth infinity?

From: <dlellis_at_rockwellcollins.com>
Date: Thu, 15 Aug 2013 12:03:13 -0700

> On Thu, Aug 15, 2013 at 12:39 PM, <dlellis_at_rockwellcollins.com> wrote:
> >
> >> But regardless of how you identify the target
> >> file, there shouldn't be any effective difference between copying a
> >> version into your directory or using a file external as long as you
> >> don't modify it in place and commit it back - something your external
> >> tool could track.
> >
> > We do want to modify in place. Copying back creates an additional step
> > that is already managed quite well by SVN with externals.
>
> I've never done that with a file external - where does the commit go?

It commits a new revision of the file the external points to - pretty
handy. If the external is pegged, the commit does not automagically update
your pegged revision (which is what I'd expect), so unless you are on HEAD
or bump your peg to the revision you just committed, an update will revert
your WC back to the pegged version.
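
A rough sketch of that sequence, in case it helps (paths and revision
numbers are made up, and this assumes the target file hasn't changed in
the repository since r1500):

  # file external pegged at r1500
  svn propset svn:externals "^/common/drivers/uart.c@1500 uart.c" widgets
  svn update widgets                    # fetches the external as widgets/uart.c
  # ...edit widgets/uart.c in place...
  svn commit -m "fix" widgets/uart.c    # commits a new revision of
                                        # ^/common/drivers/uart.c
  svn update widgets                    # peg is still 1500, so uart.c goes
                                        # back to r1500 until the external
                                        # definition is re-pegged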

>
> >> Again, you get the history in a copy. You can tell if they are the
> >> same. Or, on unix-like systems you can use symlinks to a canonical
> >> copy within the project.
> >
> >
> > We're not a unix-like system but that is what would work great (with
> > the exception that you can't revision control symlinks, right?)
>
> I think so, but the links and the target would be versioned
> independently which might complicate your tracking.

Yes, it would complicate things quite a bit and open up new areas for
defects to creep in.
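
(For reference, the symlink arrangement would look something like this on
a unix-like client - the link itself is versioned as its own item,
separately from the target file; paths are hypothetical:)

  ln -s ../common/drivers/uart.c uart.c
  svn add uart.c                        # the link is stored as a versioned
                                        # symlink, not a copy of the target
  svn commit -m "link to the canonical copy"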

> >
> > What we discovered is that most of our reuse occurred in unplanned
> > ways. (I'd imagine if you took two linux distros and compared which
> > files changed and which didn't, it would be a huge collection of random
> > files that aren't easily abstracted out. You might be able to do it
> > once, but as each new distribution branches out, the commonality
> > between each of them becomes impossible to form groupings on.)
>
> I was thinking of just adding an extra layer of grouping management
> that would be versioned and able to be duplicated as much as
> necessary. Suppose you made 10 directories and copied 100 files into
> each with tagged versions of these directories for every combination
> you need to access. Normally there would be natural groupings where
> there is a common manager making decisions, etc., but for the moment
> just consider it for performance. Within the repository, the copies
> are cheap like symlinks - you could have a large number of
> pre-arranged tagged choices. Then your top level project becomes 10
> directory-level externals instead of 1000 file externals.
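
If I follow, the top-level externals definition would then collapse to
something like this (grouping names and paths are hypothetical):

  # ~10 directory externals onto tagged groupings, instead of ~1000 entries
  # of the form "^/common/a.c@2040 a.c":
  ^/groupings/tags/2013-08-board-X/drivers    drivers
  ^/groupings/tags/2013-08-board-X/protocol   protocol
  ...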

With more complexity comes more bugs and process missteps. We're really
striving to keep things as simple as possible. We're fundamentally
accepting of update times going from 2 seconds to 2 minutes. It's harder
when 2 minutes becomes 20 minutes.

>
> > I'm not sure what a reasonable number of external files per folder is,
> > but I'd think it'd be similar to what a reasonable number of regular
> > files would be. Two million is nuts, but 50 seems reasonable.
>
> Think of this in terms of client-server activity. With directory
> level externals, the client can ask the server if anything under the
> directory has newer revisions in one exchange and if it hasn't, you
> are done. So what's reasonable is the amount of activity you want to
> wait for.

The whole discussion has centered on an attempted workaround for the
connection caching that doesn't currently occur for externals. If that
can happen, I think we'd be very content. We're accepting of some
performance issues. There was an XKCD a while ago that talked about how
much time a task takes, how many times you do it and how much waste is
created over a year. It was interesting (even if obvious if you thought
about it). I think with connection caching we'd hit the sweet spot and
working further would result in diminishing returns. This thread is an
attempt at a hopefully short-term workaround for this limitation.

> Usually I'd consider the 'human' side of organization first, so if you
> can come up with any groupings that could be done as copies into
> tagged directories you might want to arrange them by the people/groups
> who make the choices - and then the performance win would just be an
> extra bonus.

That's just it, we can't find (let alone maintain over time) any
consistent groupings by function. Trying to create groupings in other ways
could confuse the developers, or, if we try to hide the fact that we have
an optimized backend of sorts, we end up writing more tool software (we
don't like writing tools, we like writing embedded software). In the end,
revision-controlled symlinks would be the best answer, and file externals
appear to be very close. We're oh so close right now.

Thanks
Dan
Received on 2013-08-15 21:03:51 CEST
