Kean Johnston wrote:
>>Almost all open source projects have many developers who are not
> And the vast majority of CVS users are not open source developers,
> I'll warrant. There are some development shops with literally
> thousands of employees who use CVS. There is more than one
> shop of that size, and there are a plethora of other smaller shops,
> all of whom use CVS. Although I don't have real numbers, let's take
> a wild-ass-guess and say there are 100,000 people on the planet who
> use CVS. I'd be VERY surprised if more than about 2,000 of those are
> open source developers who *REGULARLY USE CVS* (as opposed to the
> occasional cvs update). That's 2% of the total CVS market. Maybe
> the guess is several orders of magnitude wrong. Maybe 10% of
> all CVS users are open source developers. That still leaves 90,000
> people whom you are targeting and whose needs you are completely
Well, you've made me question one of my assumptions, anyway. I've always
assumed that proprietary shops were much more likely to choose
ClearCase, Perforce, etc. I can't immediately think of a way to prove
this one way or the other.
Even if you are right, you might find it a bit difficult to sell changes
really weighted toward the other model, because anyone working on
Subversion (itself an open source project) would be naturally biased
toward the open source use case.
>>I'd argue that the multiple ways do hamper my usage. To repeat what
>>Greg Hudson said, the more complex code is more difficult to maintain
> That's a nebulous statement. We don't KNOW how much it will complicate
> the code because no-one has written it yet. But I am guessing it
> will add SOME complexity to the WC code, but it won't increase it
> by an order of magnitude, so I really don't buy that as an argument.
> If code complexity was a ruler by which you measured suitability,
> we wouldn't have GCC, X11, or even SVN. We'd still be on TTYs, shell
> scripts, and SCCS.
It may be a nebulous statement, but it's almost guaranteed to be true.
Greg Hudson said that from a perspective of a developer. From more of a
user perspective, it's not uncommon for me to experience working copy
weirdness. So I'm a bit frightened by the idea of making the existing
system more complex when it is already not working perfectly for me.
GCC and X11 are mature systems - they weren't originally designed with
all the complexity they have now. There was much more time for debugging
and refactoring between stages. Even so, if new alternatives appeared
that did what I needed as well with less code, I'd probably switch.
And I think in those projects there is a lot of resistance to
complexity without sufficient justification. You've made it pretty clear
to me anyway how important this change is to you, but it took the
details in this latest message to do it.
>>Are you in an environment in which you develop over
>>the LAN but the extra disk space is a significant expense?
> I wouldn't say a "significant" expense but it is certainly
> an avoidable one. In today's economic climate, where a lot of
> companies are surviving by the skin of their teeth, it is hard
> to justify buying $600 72G SCSI drives when we already have
> perfectly good workstations that can cope. Moving to a tool
> that would require us to upgrade every developer's machine
> just because someone thought the ability to do local diffs
> was a justification for double disk usage is really not on
> in the real world.
Okay. With these details, this is much more clear to me. I understand it
is a real problem for you. And the same for the inodes to some extent.
Received on Tue Dec 17 08:09:51 2002