dlellis_at_rockwellcollins.com wrote on 08/14/2013 01:13:52 PM:
> > I'm not sure that current SVN users accept problems with depth !=
> > infinity as much as they arrange their layout so they don't have to do
> > that. What's a common use case for needing some disjoint arrangement
> > of components that you can't assemble with externals and normal
> > recursion?
> >
>
> This is a case of trying to improve performance on externals by only
> updating externals that have changed. Without connection caching,
> performing an external update over a WAN is a test of patience. For
> us, our repo is accessed over a WAN. It's not an issue for non-"file
> externals". For example, a WC of 1000 file externals will take over
> 15 minutes to update with zero changes but the same WC with no file
> externals (1000 normal files) takes 30 seconds tops. Keep in mind,
> no actual file revisions are being downloaded, just checked.
>
> We have been attempting to put in place a workaround to give us the
> benefit of file externals with a general algorithm of:
>
> update the externals table (svn propset svn:externals -F tmpfile .)
> commit the externals table (svn commit . --depth empty --ignore-externals)
> update the externals table again, for paranoia (svn update . --depth empty
> --ignore-externals)
> foreach external listed in svn:externals (compare its pinned revision
> to svn info)
>     if the local revision (svn info) is not the same as the external's
>         pinned revision, update the external (svn update foo.c)
>
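That loop can be sketched in shell roughly as follows. The pinned
"-r REV URL path" form in svn:externals and the helper names are
assumptions on my part, not our exact implementation, and this sketch
is still subject to the two bugs described below:

```shell
#!/bin/sh
# Sketch: update only the file externals whose pinned revision differs
# from the revision currently in the working copy.

wc_rev_of() {
    # Working-copy revision of a path, parsed from plain `svn info`
    # output (works on 1.8; only the "Revision:" line matches).
    svn info "$1" | awk -F': ' '/^Revision/ {print $2}'
}

needs_update() {
    # True when the pinned revision differs from the working revision.
    [ "$1" != "$2" ]
}

sync_pinned_externals() {
    # Walk the svn:externals property on the current directory and
    # update only the externals whose pinned revision has changed.
    svn propget svn:externals . | while read -r dash rev url path; do
        [ "$dash" = "-r" ] || continue   # skip unpinned / empty lines
        if needs_update "$rev" "$(wc_rev_of "$path")"; then
            svn update "$path"           # touch only this external
        fi
    done
}
```

Running `sync_pinned_externals` at the working-copy root after
committing the new property is the intended usage.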
> This has proven to be difficult to do since:
> a. a corruption issue was identified (bug 4409)
> b. we can't change the path of a file external (bug 4001) without a
> full update (depth infinity): the old file gets deleted, but the new
> one doesn't get brought in
>
> Either of these issues prevents doing a simple update of a file
> external or doing a current directory update (50 files is more
> bearable than a full 1000).
>
> That said, this attempted workaround would not be needed if connection
> caching could bring file external updates to something closer to the
> performance of regular files.
Even with connection caching, you will still be asking the
server a separate question for each external instead of one
question for the whole working copy. I think there will
always be a performance cost to using externals, and using
thousands is just asking for problems. Designing a build
management system on top of Subversion using only externals
is risky.
> Hope this helps. I'd be glad to explain in more detail how we
> implement this; I think we have a very successful strategy for our
> needs.
I do have an HTTP proxy application that does authentication and
connection reuse that I've been testing. I'll contact you separately to
arrange testing for your situation. It can help considerably with larger
numbers of externals in higher-latency situations. Since 1.8 uses serf,
it has been a struggle getting serf to authenticate through a proxy
correctly; I think serf 1.3.1 might finally be workable.
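For anyone who wants to try an intermediate proxy like this, the client
side is just the standard proxy settings in ~/.subversion/servers (the
host and port below are placeholders, not my proxy's actual address):

```
[global]
http-proxy-host = proxy.example.com
http-proxy-port = 3128
```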
Kevin R.
Received on 2013-08-14 21:11:02 CEST