Bicking, David (HHoldings, IT) wrote:
> Note: if you meant to post this on the list, you didn't. I'm including
> your entire text along with my response for the benefit of the group.
>> -----Original Message-----
>> From: Rob van Oostrum [mailto:firstname.lastname@example.org]
>> I am a full-time build guy, and I've done feature-branching
>> on web projects where we'd have half a dozen or more active
>> branches at any one time, and production releases every day,
>> every other day, or sometimes multiple times a day.
>> Each feature branch would hold changes related to one or more
>> bug/enhancement/feature tickets, and they would get
>> "committed" to the release stream (a branch usually called
>> "maintenance") only when we had QA signoff on the feature
>> branch. In some cases there would be 2 QA stages: one after
>> work on the feature branch was thought to be complete, and
>> once that had passed, the branch would be merged with
>> production changes since the branch was created (i.e.
>> re-baselined to production), and a second round of QA would
>> be done. Usually this would be because there was a
>> significant lag between work being completed and the
>> scheduled release of the branch.
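[The branch-per-ticket and re-baselining workflow described above can be sketched against a scratch Subversion repository. This is a minimal illustration, not the original poster's actual setup: the repository layout, branch names ("maintenance", "feature-123"), and file names are all assumptions.]

```shell
set -eu
# Skip gracefully if Subversion is not installed.
command -v svn >/dev/null 2>&1 && command -v svnadmin >/dev/null 2>&1 \
  || { echo "Subversion not installed; skipping demo"; exit 0; }

WORK=$(mktemp -d)
trap 'rm -rf "$WORK"' EXIT
cd "$WORK"

# Scratch repository with the conventional layout.
svnadmin create repo
URL="file://$PWD/repo"
svn mkdir -q -m "layout" "$URL/trunk" "$URL/branches"

# Seed trunk, then copy it to "maintenance" (the release stream).
svn checkout -q "$URL/trunk" wc
echo "v1" > wc/app.txt
svn add -q wc/app.txt
svn commit -q -m "initial code" wc
svn copy -q -m "release stream" "$URL/trunk" "$URL/branches/maintenance"

# Feature branch for one ticket, cut from the release stream.
svn copy -q -m "branch for ticket 123" \
  "$URL/branches/maintenance" "$URL/branches/feature-123"
svn checkout -q "$URL/branches/feature-123" wc-feat
echo "feature change" >> wc-feat/app.txt
svn commit -q -m "ticket 123: feature work" wc-feat

# Meanwhile a production hotfix lands on the release stream.
svn checkout -q "$URL/branches/maintenance" wc-maint
echo "hotfix" > wc-maint/fix.txt
svn add -q wc-maint/fix.txt
svn commit -q -m "production hotfix" wc-maint

# Re-baseline: pull release-stream changes into the feature branch
# before the second round of QA.
svn update -q wc-feat
(cd wc-feat && svn merge -q --non-interactive "$URL/branches/maintenance")
svn commit -q -m "sync from maintenance" wc-feat

# After QA signoff, "commit" the feature branch to the release stream.
svn update -q wc-maint
(cd wc-maint && svn merge -q --non-interactive "$URL/branches/feature-123")
svn commit -q -m "ticket 123 into maintenance" wc-maint

svn update -q wc-maint
cat wc-maint/app.txt wc-maint/fix.txt
```

[The key point is the sync merge before the final merge back: the feature branch ends up containing everything in production, so the second QA pass exercises exactly what will ship.]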
> Okay, so your teams do their work on feature branches, wherein you have
> several developers working together on that branch. There is no
> code-review/audit/process in the middle of their work while they're
> hacking away. However, when the time comes after unit testing and
> debugging produces a reasonable product, your team steps back and lets
> the QA team go over it with a fine-toothed comb.
> They would merge that with production updates and ensure the merge
> didn't break anything, at which point the result would either become the
> next production deployment, or be merged into a codeline that becomes
> the next deployment.
> Meanwhile, other teams are working on other features. Some made it into
> the same production release yours did, others did not. The ones that
> did not would then merge or "rebase" with the production-level code and
> continue their work until the next cycle comes around.
> Is that right?
>> After a production release was done successfully, and a smoke
>> test of production revealed no new problems, the maintenance
>> branch would be merged back to trunk; after a while,
>> though, we would just replace trunk with the maintenance
>> branch to save ourselves the headache.
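[The trunk-replacement step can be sketched like this, assuming the conventional trunk/branches layout; the repository URL and branch name are illustrative, not from the original post.]

```shell
set -eu
# Skip gracefully if Subversion is not installed.
command -v svn >/dev/null 2>&1 && command -v svnadmin >/dev/null 2>&1 \
  || { echo "Subversion not installed; skipping demo"; exit 0; }

WORK=$(mktemp -d)
trap 'rm -rf "$WORK"' EXIT
cd "$WORK"

svnadmin create repo
URL="file://$PWD/repo"
svn mkdir -q -m "layout" "$URL/trunk" "$URL/branches"
svn copy -q -m "release stream" "$URL/trunk" "$URL/branches/maintenance"

# After a successful release, instead of merging maintenance back,
# retire the old trunk and copy the release stream into its place.
# History is preserved: the new trunk's log traces back through maintenance.
svn rm -q -m "retire old trunk" "$URL/trunk"
svn copy -q -m "maintenance becomes the new trunk" \
  "$URL/branches/maintenance" "$URL/trunk"

svn ls "$URL"
```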
>> FYI this was a 4-5GB or so working copy type codebase, with
>> anywhere from a couple to a dozen people actively
>> contributing at any point in time. There would typically be a
>> couple of longer-term feature release type projects on the
>> go, and as much as a dozen or more smaller bug
>> fixes/enhancements going live on any given day. We were
>> running 2 QA environments, 2 UAT environments, and different
>> sets of changes would be put through their respective paces
>> on any or all of those at the same time.
>> During any given week, I would also be responsible for the
>> same on up to a dozen other albeit smaller accounts.
>> There is nothing about Subversion in and of itself that makes
>> this exercise difficult let alone impossible. It certainly
>> worked well for us. Not having the tool do more of the
>> process enforcement and housekeeping for you (a la
>> ClearCase/ClearQuest) can be a problem if you don't have a
>> single owner (i.e. a Build Manager) who is not also one of
>> the developers. Being in the loop on expectations being set
>> with clients also helps, since that's where breakdowns in
>> process are usually born (e.g. a PM caving in to pressure from
>> client prior to validating with his/her team that doing X
>> amount of work in Y amount of time is even physically
>> possible; sometimes what's best for a client is to get less
>> 'stuff' and avoid a ripple effect of trying to play catchup
>> while not losing ground on the never-ending flow of new
>> work), and it only goes south from there. In my experience
>> anyway, this is a human problem long before it ever becomes a
>> technical one, and it wouldn't be fair to blame the tool for
>> not being perfect when all else falls apart.
>> Sorry for the length, but I hope this is what you were looking for.
> It is exactly what I'm seeking. We're reviewing SCM products now, and
> the other option on our table is Surround. It has lots of bells and
> whistles, including "code statuses" (which seem quite a bit like
> "promotion groups" from Serena), and nifty workflow examples for optimum
> marketing. One example that is fresh in my mind is the workflow in
> which everything a developer commits goes in with a status of "needs
> review", and a hook emails some "reviewer" who then goes in and reviews
> it. If no problems are seen, it is promoted to the next status (say "QA").
> I'm curious about people's experience with these kinds of features - are
> they helpful, or do they make life hellish?
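For what it's worth, Subversion can approximate that notification step with a post-commit hook. A minimal sketch follows; for demonstration the hook appends the notice to a review-queue file in the repository, where a real hook would pipe the same text to sendmail with the reviewer's address (which is why this is a sketch, not Surround's built-in status mechanism).

```shell
set -eu
# Skip gracefully if Subversion is not installed.
command -v svn >/dev/null 2>&1 && command -v svnadmin >/dev/null 2>&1 \
  && command -v svnlook >/dev/null 2>&1 \
  || { echo "Subversion not installed; skipping demo"; exit 0; }

WORK=$(mktemp -d)
trap 'rm -rf "$WORK"' EXIT
cd "$WORK"

svnadmin create repo
URL="file://$PWD/repo"

# Hooks run with an empty environment, so svnlook needs an absolute path.
SVNLOOK=$(command -v svnlook)

# Subversion invokes the hook as:  post-commit REPOS REV
cat > repo/hooks/post-commit <<EOF
#!/bin/sh
REPOS="\$1"; REV="\$2"
AUTHOR=\$("$SVNLOOK" author -r "\$REV" "\$REPOS")
LOG=\$("$SVNLOOK" log -r "\$REV" "\$REPOS")
# A real hook would pipe this line to sendmail instead of a file.
printf '[needs review] r%s by %s: %s\n' "\$REV" "\$AUTHOR" "\$LOG" \
  >> "\$REPOS/review-queue.txt"
EOF
chmod +x repo/hooks/post-commit

# Any commit now lands in the review queue.
svn mkdir -q -m "first commit" "$URL/trunk"
cat repo/review-queue.txt
```

Unlike Surround's statuses, nothing here blocks an unreviewed revision from moving on; the enforcement is process, not tool.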
I don't have any personal experience with Surround. An acquaintance of
mine works for a company that spent a bundle on it and then dumped it
after about 6 months. His comment was that it was complicated to use and
the terminology was hard to follow. He believed that this led to people
doing things wrong. The end result was a screwed-up source tree and
unresponsive support. They now use Subversion.
Received on Thu Oct 18 02:09:02 2007