> Almost all open source projects have many developers who are not
And the vast majority of CVS users are not open source developers,
I'll warrant. There are some development shops with literally
thousands of employees who use CVS. There is more than one shop
of that size, and there are a plethora of other smaller shops,
all of whom use CVS. Although I don't have real numbers, let's
take a wild-ass guess and say there are 100,000 people on the
planet who use CVS. I'd be VERY surprised if more than about
2,000 of those are open source developers who *REGULARLY USE CVS*
(as opposed to the occasional cvs update). That's 2% of the total
CVS market. Maybe the guess is off by a factor of five, and 10%
of all CVS users are open source developers. That still leaves
90,000 people whom you are targeting and whose needs you are
completely ignoring.
> I'd argue that the multiple ways does hamper my usage. To repeat what
> Greg Hudson said, the more complex code is more difficult to maintain
That's a nebulous statement. We don't KNOW how much it will
complicate the code, because no one has written it yet. My guess
is that it will add SOME complexity to the WC code, but it won't
increase it by an order of magnitude, so I really don't buy that
as an argument. If code complexity were the ruler by which we
measured suitability, we wouldn't have GCC, X11, or even SVN.
We'd still be using TTYs, shell scripts and SCCS.
> I'm having trouble seeing the "double the disk space" as a
> significant problem.
If what has been said before doesn't show you the problem, I'm
not sure there is anything I can say that will convince you there
is one. I will just point out that YOUR usage is significantly
different from MY usage. I don't think a tool should force your
usage model on me, any more than it should force my usage model
on you. SVN is trying to penetrate a market that is dominated
by a lightweight tool. CVS is light on everything except time,
as some of its operations take a long time to perform. As things
currently stand, it seems as if SVN is much "heavier" than
CVS, INCLUDING on time, except in the case of cheap copies. I get
the sense that because those are fast, people assume the rest of
it is. It's not.
> Are you in an environment in which you develop over
> the LAN but the extra disk space is a significant expense?
I wouldn't say a "significant" expense, but it is certainly
an avoidable one. In today's economic climate, where a lot of
companies are surviving by the skin of their teeth, it is hard
to justify buying $600 72G SCSI drives when we already have
perfectly good workstations that can cope. Moving to a tool
that would require us to upgrade every developer's machine,
just because someone thought the ability to do local diffs
was a justification for doubled disk usage, is really not on
in the real world.
> are? It sounds to me like this is a hypothetical situation, not one
Not at all. Most of our current developers have 18G SCSI hard
drives. With those, you have just enough room to do a full get,
build and PI run. That's if we just build OSR5. If we include
the Java, UnixWare and other open source builds, as SOME of us
do, then a 36G drive just fits, with about 4G to spare.
I must be honest: I find it quite hard to believe that people
can defend a system that doubles disk usage to support a feature
that in many cases will NEVER be used (refer back to my original
post about build machines). That's what the current text-base
penalty is ... a *100 percent* increase in disk usage.
> you've actually encountered. You've also said disk prices are
> higher in
> other countries...do you have a figure?
When I was living in South Africa, a large SCSI hard drive
cost about 60-70% of a senior engineer's monthly salary.
That's a lot higher than it is in this country, or in the UK,
and a lot of development gets done in South Africa. I also have
friends in the Czech Republic, where even mice are expensive,
let alone SCSI hard drives :)
> Same with the inode problem. What system doesn't have enough
> inodes for
> your working copy? Does what Greg Hudson mentioned (going from 4*file
> extra inodes to 2*file extra inodes) solve the problem?
Yes, that would go a long way toward solving it. And some
surprisingly modern systems have inode limitations. UnixWare 7
(SVR5) had ISL defaults that limited you to 64K inodes. Yes, you
can increase that by recreating the filesystem, and it was a bug,
but still, there are systems out there with inode limitations.
It's not a HUGE deal, but again, if every file in the WC now
consumes 5 inodes, that's a 400% increase in resources over CVS.
When you couple the 100% increase in disk space usage with the
400% increase in inode usage, you have to admit that makes svn
sound a little less attractive as a replacement for cvs.
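To make the arithmetic above concrete, here is a quick
back-of-the-envelope sketch. The figures (one pristine text-base
copy per file, roughly 5 inodes per versioned file vs 1 for CVS)
are the numbers quoted in this thread, not measured values, and
the 50,000-file tree is a hypothetical example:

```python
# Back-of-the-envelope overhead estimate for an svn working copy
# vs CVS, using the figures discussed above: every versioned file
# has a same-size text-base duplicate, and costs ~5 inodes
# instead of 1.

def overhead(files, avg_kb, inodes_per_file=5):
    """Return (extra KB for text-base copies, extra inodes)."""
    base_kb = files * avg_kb                 # text-base duplicates
    extra_inodes = files * (inodes_per_file - 1)
    return base_kb, extra_inodes

# Hypothetical tree: 50,000 files averaging 8 KB each.
extra_kb, extra_inodes = overhead(50_000, 8)
print(f"extra disk:   {extra_kb} KB (100% of the checkout size)")
print(f"extra inodes: {extra_inodes} (400% more than CVS)")
```

With those inputs the duplicates cost another 400,000 KB and
200,000 extra inodes -- which is why the build-machine case,
where the text-base is never diffed against, feels like pure
waste.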
Received on Tue Dec 17 07:02:25 2002