[I have changed the order of some comments to group related issues below.]
>>> About 16GB, 190,000 files, and 90,000 folders,
>>> including WC .svn directories
>> That's a pretty massive WC indeed! :-)
> I think this will start to become more common as
> more true multimedia projects start to see the
> benefits of Subversion.
Probably true. Subversion does have a nice way of handling binary files,
and versioning *is* important for other things than code.
Unrelated question out of curiosity: as you are working with "media",
how are you handling diffs? Have you written any diff scripts of your
own (for TSVN)? Have you tried TortoiseIDiff, which comes bundled with
the TortoiseSVN installer?
> New XML multimedia formats like SVG and Collada
> lend themselves much better to concurrent
> development environments than old binary formats.
I have not looked into SVG that much (and Collada was a new format for
me), but my earlier experiences with concurrent collaboration through
XML formats have been quite disappointing. Often the tools rewrite the
XML source so heavily that merging becomes difficult or impossible...
If you have had other experiences, I would be happy to hear about them.
(No doubt the formats and tools will become better over time, but right
now I haven't found any that cut it.)
> I also don't see any harm in exposing *more*
> features to the user, when feasible.
More features are not always a good thing. Not when a feature lures
you into thinking it does one thing, but in reality does something else.
(Not that your suggestions necessarily do that, but I do feel that
it's a bit of a gamble to rely on file-system monitoring. What about
thumb drives? Dual-boot systems? WCs on networked drives? And so on...)
One also has to think about usability when adding features. If dialogs
get too cluttered, they become more difficult to use. I do not mean
that one should avoid new features, only that one should think hard
about where to put them and how to design the UI for them.
> I think it's always better to let the user
> customize their environment, than to force
> policy via synthetic implementation limits.
Absolutely yes*! That is, if you have the time to implement all these
customization options in the first place...
*(Disclaimer: this might not apply if you are the person sitting at the
helpdesk, trying to help ignorant users with clients tailor-configured
beyond all help...)
>> Then, of course, there's the small matter of
>> implementation. The best way to help there
>> is to provide some (working) source code!
> I am in the early process of getting my VS.NET
> TortoiseSVN Solution environment set up.
Nice! Keep the list posted on your development progress if you are
interested in (possible) feedback!
In the end it's Stefan who has the final say on which patches go into
the TSVN source, but as long as a patch does not conflict with his ideas
of what TSVN should be, all patches are (in my experience) very welcome.
>> I was referring to changes in the GUI.
>> A mock-up screenshot or similar is a
>> good starting point for discussion.
> Good idea. I'll put that on the users
> list when I have something to show...
I think the dev-list would be a good place to post too! :-)
>>> Who wants to commit every single file?
>> Most knowledgeable programmers for example.
> Touche. ;P Actually, our project is fairly
> well segmented, so we often limit commits
> to one segment when we are working on multiple
> segments at the same time.
That is sensible. Dividing larger projects into many independent modules
tends to be a good thing(tm).
>> Ahh! You want to guarantee serialized commits. Hmm... Maybe not a bad idea.
> Yes. We are using manual locks on some committed build binaries as
> ad-hoc semaphores to get around these "concurrent commit" issues, but
> such human-maintained semaphores are very limited in value.
> I think temporary locks on all intended-commit paths would serve this
> purpose much better.
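For what it's worth, such temporary locks can be scripted with plain
"svn lock"/"svn unlock" around the commit. A rough sketch (the paths
are made up, substitute your own "commit paths"):

```shell
# Hypothetical wrapper: lock all intended-commit paths, commit, unlock.
# Run from the top of the working copy; adjust the path list to taste.
PATHS="net/protocol.c gfx/renderer.c art/textures.db"

# Take the locks up front. Without --force this fails if someone else
# already holds a lock, which is exactly the serialization you want.
svn lock -m "serializing commit" $PATHS || exit 1

svn commit -m "coordinated commit across segments" $PATHS

# A successful commit releases the locks on the committed files
# (unless you pass --no-unlock); unlock explicitly in case some of
# the paths were unchanged and therefore kept their locks.
svn unlock $PATHS
```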
Maybe you would simply be better off switching to a more
"module-based" repository setup. I assume that each of your "commit
paths" consists of a strict sub-tree of your repository, that is, one
folder and all its subfiles/subfolders. In that case, define the "commit
paths" you have, and create "modules" (see below) for them.
For example, create separate "modules" for network source, graphics
source and art. Let each one have its own setup of trunk, tags and
branches, but let them all reside in the same repository.
You can then let them all come together in another "module" where you
use externals to include net, gfx and art.
That way you can formalize the segments of your product, and let the
developers work on each of them independently. The criterion is that
each module must be consistent and compileable in itself.
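To make the idea concrete, here is a sketch of how such a setup could
look (the repository URL and module names are made up):

```shell
# One repository, one "module" per segment, each with its own
# trunk/tags/branches layout:
#   /net/{trunk,tags,branches}
#   /gfx/{trunk,tags,branches}
#   /art/{trunk,tags,branches}
#   /product/trunk        <- umbrella module tying them together

# Pull the segments into a checkout of the umbrella module via
# svn:externals ("dir URL" per line, the pre-1.5 syntax):
svn propset svn:externals \
"net http://svn.example.com/repo/net/trunk
gfx http://svn.example.com/repo/gfx/trunk
art http://svn.example.com/repo/art/trunk" product-wc

svn commit -m "Tie the segments together with externals" product-wc
```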
(Oh, I do not know if I'm making any sense here. Just tell me if it's
not possible to understand what I'm trying to say...)
But what do I know, this might not fit your product at all. Just trying
to understand where you come from. Always interesting to discuss
Versioning and "SCM".
>> 1. Make changes
>> 2. Test
>> 3. If test fails, goto 1
>> 4. Update
>> 5. If new stuff in update, goto 2
>> 6. Commit everything
> We often find step 5 unnecessary if there are
> no conflicts, because if the committed code
> changes work independently, it's usually a safe
> assumption that they will work together as well.
I guess that's a choice you have to make. By assuming that successful
automatic merges cannot introduce errors, you might break trunk without
knowing it. I do not feel that's OK, but for some it might not be such a
big deal.
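For the record, the quoted workflow (including step 5) is easy to
script. A rough sketch, assuming "make test" is your test entry point;
substitute your own test runner:

```shell
#!/bin/sh
# Naive update-test-commit loop for the workflow quoted above.
while true; do
    make test || exit 1            # steps 2-3: bail out and fix by hand
    before=$(svnversion .)
    svn update || exit 1           # step 4
    after=$(svnversion .)
    # Step 5: if the update brought in nothing new, it is safe to commit;
    # otherwise loop around and re-test the merged result.
    [ "$before" = "$after" ] && break
done
svn commit -m "tested against latest trunk"   # step 6
```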
>> I might be misunderstanding you, but since you
>> need to test the result of the merge, you DO
>> need user intervention. (See workflow above.)
> The committer is not necessarily the right person
> to do all testing.
That's true. That's why you should (IMHO) have an automated test system
that runs the tests each developer has to do at step 2.
> QA and automated build/test servers handle that
> work post-commit just fine for us.
I would try to move as much of that as possible to step 2. Of course,
stress tests and the like may only be possible to run on a nightly
basis, for example.
> I would agree if we considered trunk to be a stable
> branch, but we consider it our iterative-collaboration
> branch. Maybe this isn't a "best practice", but it
> works for us.
That's cool. That does explain our, at times, slightly different points
of view.
How do you handle the occasional breaking of trunk? Do you discourage
regular updates? Otherwise you risk having a large part of your
development team waiting for one individual to fix the problem that
prevents everybody else from continuing to work. (Or, if everyone fixes
it locally, you *will* have a conflict later on, not to mention the time
wasted on many developers doing the same thing.) And not doing regular
updates might bring merge problems when the update finally is done.
(That might not be a big issue if all development can be done in small
and very independent chunks.)
Received on Wed Aug 2 15:47:46 2006