On Fri, Mar 12, 2004 at 02:37:35PM -0500, Rob van Oostrum wrote:
> The potential problem with this merge into the private branch
> from the trunk before merging the private branch into the trunk:
>
> - it destroys the integrity of the private branch. Instead
> of being able to easily identify the changes made on the
> branch for the purpose for which the branch was created
I think I noted that. I don't think it destroys the integrity,
just makes it harder to identify (it is still there though).
When/if SVN can do smart merging (as some other tools do today)
then the problem goes away. The other issue is how often that
is actually going to be a problem, and how bad it is when it
happens. Well - the more often you follow this practice
(properly :) the less often it becomes a problem and the less
it hurts when it does. (Sounds counterintuitive, I know.)
> - it duplicates changes over a multitude of branches.
I think it depends on the storage format used. If you are using
RCS-style storage, I think what you say is true. With SCCS-style
storage, however, it does not duplicate the carried-forward lines
in the delta-table. It might add additional sequence-IDs for
the same set of lines to a deletion/insertion/change entry
in the delta archive, but that is very minimal (not enough
to be a scalability issue).
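For illustration only (this is a toy sketch of the idea, not how
SVN or SCCS literally lay out the bytes): in a weave, every line
of text is stored exactly once, tagged with the sequence-ID of the
delta that inserted it and, if any, the delta that deleted it, so
carried-forward lines are never re-stored. A little Python sketch:

# Each weave entry: (text, inserted_by_delta, deleted_by_delta_or_None)
weave = [
    ("int main() {",          1, None),
    ('    puts("hello");',    1, 3),     # superseded by delta 3
    ('    puts("hi there");', 3, None),  # the replacement line
    ("    return 0;",         2, None),  # added by delta 2
    ("}",                     1, None),
]

def reconstruct(included_deltas):
    """Rebuild the version whose history is `included_deltas`."""
    return [text for (text, ins, rem) in weave
            if ins in included_deltas and rem not in included_deltas]

# The tip (deltas 1..3) and the older version (deltas 1..2) are
# both produced from the same single copy of every carried-forward
# line; a merge that re-introduces existing lines only touches the
# sequence-IDs, not the stored text.
print("\n".join(reconstruct({1, 2, 3})))
print("\n".join(reconstruct({1, 2})))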
> On the other hand, I agree with you in that you shouldn't
> want to dump changes into the main line of development if
> the baseline of a private branch is significantly out-of-date.
> So what I like to do in these cases is set up a temporary
> integration branch off the main line of development and merge
> the private branch changes back into this.
Actually - do you need a branch for that? Can't you just use a
separate integration sandbox (with or without a new branch)?
In that case I believe it is basically the same as the
single-release-point (e.g., machine or sandbox) approach.
But doing that without the update still leaves you with a
private branch+sandbox that is more likely to be stale, and
with greater merge and reconciliation effort. The effort saved
by doing the update very often more than offsets its
disadvantages, while at the same time minimizing the likelihood
and impact of the most significant concerns.
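For concreteness, here is a rough sketch of the temporary
integration-branch recipe, driven from a small Python wrapper
just to keep the steps in one place. The repository URLs, branch
names, and the starting revision of the private branch are all
made up for the example, and with SVN 1.0 you have to supply that
starting revision to the merge yourself:

# Hypothetical example: URLs, branch names, and revision are invented.
import subprocess

REPO  = "http://svn.example.com/repos"        # assumed repository root
TRUNK = REPO + "/trunk"
PRIV  = REPO + "/branches/private-work"       # the private (task) branch
INTEG = REPO + "/branches/integ-2004-03"      # temporary integration branch
BRANCH_START_REV = "1234"                     # rev the private branch was copied at

def svn(*args):
    """Run an svn command, echoing it first."""
    print("svn " + " ".join(args))
    subprocess.run(["svn", *args], check=True)

# 1. Branch the current trunk as the temporary integration line.
svn("copy", TRUNK, INTEG, "-m", "temporary integration branch")

# 2. Check out the integration branch into a scratch sandbox.
svn("checkout", INTEG, "integ-wc")

# 3. Merge everything done on the private branch into that sandbox
#    (pre-1.5 SVN needs the starting revision spelled out by hand).
svn("merge", "-r", BRANCH_START_REV + ":HEAD", PRIV, "integ-wc")

# 4. Build/test and resolve conflicts, then commit the merged result.
svn("commit", "integ-wc", "-m", "merge private-work into integration branch")

After building and testing in that sandbox, the integration branch
(or just the sandbox) is what gets merged into the trunk, so the
trunk never sees a half-reconciled merge.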
The other solution is to break your task up into "micro-tasks"
and commit much more frequently, in much smaller sets of changes.
This can work too, but it essentially suffers the same
traceability problems as the continuous-update approach.
In the case of SVN (and certainly CVS) I believe most of
the things you say are true, though it has still been the
predominant best practice to do the update anyway, because
the improvement it brings tends to be far more dramatic,
while it also drastically reduces the likelihood of the major
risks discussed. When/if SVN can address the "smart merging"
issue, I believe that would eliminate the scalability and
identification concerns.
--
Brad Appleton <brad@bradapp.net> www.bradapp.net
Software CM Patterns (www.scmpatterns.com)
Effective Teamwork, Practical Integration
"And miles to go before I sleep." -- Robert Frost