On Wed, Jun 10, 2009 at 6:45 AM, Les Mikesell <lesmikesell_at_gmail.com> wrote:
> It just seems rare these days to not have network access for any length
> of time.
Either way, network I/O is slow. I run svn log, send a couple of packets over the network, then realize I've asked for quite a few revs... Ctrl-C... cd ..; git svn log. Done. Once you start using an offline tool, you start to see the performance you were missing (see my earlier post about which, and how many, svn commands are network-based that in most cases are not in git-svn). Being able to literally work offline does come in handy from time to time as well, for those of us heavily vested in public ...
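To make the offline point concrete, here is a minimal sketch (assuming git is installed; the repository and commit messages are made up for the demo). A git or git-svn clone carries the full history on disk, so log never touches the network, whereas svn log goes to the server every time:

```shell
set -e
cd "$(mktemp -d)"
# throwaway repo standing in for a git-svn clone
git init -q -b main demo
cd demo
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "rev 1"
git commit -q --allow-empty -m "rev 2"
# the entire history is local; no packets leave the machine
git log --oneline
```

With svn, the equivalent "svn log" on a checkout makes a server round trip for every invocation.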
> And I usually consider being able to get other's updates at
> least equally important as being able to commit.
In a DVCS, you have the same instant access to changes over the network that you had before. Only now, users and groups can control when they want to expose things upstream (and likewise the upstream folks can take their time and publish whenever they wish). In the meantime, before exposing anything upstream, they've been branching and merging to their hearts' content, rather than sitting on unversioned changes (i.e., wasting the VCS in the first place) until finally sharing with upstream, or finally being prodded into an update with a deathly fear of destroying their work. That latter case is the tendency with centralized systems, including svn, because branching and merging are slow and clumsy.
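For a sense of how cheap that local branch-and-merge cycle is, here is a short sketch in git (assuming git 2.28+ for init -b; the file and branch names are invented for the demo). Both operations are local pointer updates, with no server round trip:

```shell
set -e
cd "$(mktemp -d)"
git init -q -b main work
cd work
git config user.email demo@example.com
git config user.name demo
echo "base" > notes.txt
git add notes.txt
git commit -q -m "base"
# branching is a local, near-instant operation
git switch -qc feature
echo "feature work" >> notes.txt
git commit -qam "feature work"
# merging back is just as local
git switch -q main
git merge -q feature
```

Compare that with an svn branch, which is a server-side copy followed by a fresh checkout or switch over the network.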
> What we really need
> is a standardized API-level interface or a rest-full wrapper for
> different version control systems.
A standard integration point among several of the systems, rather than two working together here and two working together there, would definitely be nice.
> What happens when people working in isolation with DVCS make large
> conflicting changes before the code ends up in the same place?
Well, the flow is actually cleaner than what you'll find in a centralized model, because everything they did while offline has been versioned. So when they signal upstream that changes are ready, and upstream finds that the work conflicts badly with mainline, what happens? Downstream is given the URL of the latest upstream and tasked with integrating the changes on their side. This is ideal, since the conflicts exist in part because of the folks who created them. Once they've resolved the conflicts, they again publish the downstream work in a public location, and upstream can then pull without conflict.
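That round trip can be sketched end-to-end with two throwaway local git repositories (assuming git 2.28+; all paths, file names, and messages here are invented for the demo):

```shell
set -e
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com \
       GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
cd "$(mktemp -d)"

# upstream publishes some work
git init -q -b main upstream
echo "v1" > upstream/feature.txt
git -C upstream add feature.txt
git -C upstream commit -q -m "v1"

# downstream clones and works in isolation, fully versioned the whole time
git clone -q upstream downstream
echo "downstream version" > downstream/feature.txt
git -C downstream commit -qam "downstream work"

# meanwhile upstream moves on, touching the same line
echo "upstream version" > upstream/feature.txt
git -C upstream commit -qam "upstream work"

# downstream is pointed at the latest upstream and tasked with integrating;
# the pull reports a conflict, which downstream resolves themselves
git -C downstream pull --no-rebase -q origin main || true
echo "resolved by downstream" > downstream/feature.txt
git -C downstream add feature.txt
git -C downstream commit -q -m "merge upstream; conflicts resolved"

# downstream's result is now published; upstream pulls without conflict
git -C upstream pull -q "$PWD/downstream" main
```

The final pull on the upstream side is a fast-forward, since all the conflict resolution already happened downstream.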
Note that similar flows can be attained in centralized systems via branching and merging; however, the tendency is for centralized users to avoid this because of the overhead involved. You will find this across the spectrum of features in DVCS versus CVCS: the tools work better, thereby encouraging their use and fostering best practices.
I've actually used a very similar flow with svn in our 30-developer environment. Basically, I branch off mainline and check the branch out in a common location, then run the merge there (which produces the same result as merging directly to upstream). I then send a message to the developers asking them to log in and resolve the conflicts they're responsible for. Once all the conflicts are resolved, I can merge that dedicated branch to upstream without conflict (assuming the issues were resolved without too many new commits landing on upstream in the meantime; otherwise some further iteration of the process has to occur).
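A compressed sketch of that integration-branch flow against a throwaway local repository (assuming svn 1.8+ for the automatic reintegrate merge; older clients need --reintegrate; all names here are made up for the demo):

```shell
set -e
cd "$(mktemp -d)"
svnadmin create repo
URL="file://$PWD/repo"
svn -q mkdir -m "layout" "$URL/trunk" "$URL/branches"
svn -q checkout "$URL/trunk" wc
echo "original" > wc/app.c
svn -q add wc/app.c
svn -q commit -m "initial" wc

# 1. branch off mainline and check the branch out in a common location
svn -q copy -m "create integration branch" "$URL/trunk" "$URL/branches/int"
svn -q checkout "$URL/branches/int" int-wc

# diverging edits: one on the branch, one on trunk
echo "branch edit" > int-wc/app.c
svn -q commit -m "branch work" int-wc
echo "trunk edit" > wc/app.c
svn -q commit -m "trunk work" wc
svn -q update int-wc

# 2. run the merge in the shared working copy, postponing conflicts
svn merge --accept postpone "$URL/trunk" int-wc || true

# 3. developers log in and resolve the conflicts they're responsible for
echo "merged" > int-wc/app.c
svn resolve --accept working int-wc/app.c
svn -q commit -m "trunk merged in; conflicts resolved" int-wc

# 4. the dedicated branch now merges back to trunk without conflict
svn -q update wc
svn merge "$URL/branches/int" wc
svn -q commit -m "reintegrate branch" wc
```

Every step here except the edits themselves is a server round trip in svn, which is exactly why this flow is time-consuming enough to save for the painful merges.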
There are many benefits to this flow, but it's time-consuming in svn, so I tend to save it for the truly painful merges. Anyway, like I said, the tendency in my experience is for centralized VCS users to sit around not committing (i.e., not sharing upstream), let alone branching and merging on their own. Whether DVCS users show the same tendency to sit around not pushing to upstream remains to be seen.
Ideally, resolving merge conflicts is handled by the folks responsible for them, not by some third-party maintainer. I find this goal easier to attain with a DVCS.
Received on 2009-06-10 16:04:20 CEST