On 1/22/07, Peter Lundblad <firstname.lastname@example.org> wrote:
> Peter Samuelson writes:
> > [Peter Lundblad]
> > > Another thing is that the API is rather specific to the current way
> > > of working.
> > [...]
> > > I suggest we instead create a new API and then try to implement it in
> > > terms of the old one.
> > Usually when you rev an API you do the opposite: implement the new API
> > directly, and reimplement the old API on top of the new one. So I'm
> > curious about why you think this should not be handled that way.
> Because in this case, we are not talking about API revision in the usual sense.
> We're talking about a more or less completely new set of APIs.
> Reworking the old (err, current) WC code in terms of a new API and then trying
> to reimplement it on top of that just seems to be a lot of hard work
> to me. Normally, compatibility wrappers are very trivial, but I assume
> these will not.
Well... more specifically: I saw Peter Lundblad's suggestion as
"create a new API based on our current understanding, but implemented
the same as today." IOW, we take everything that we've learned, take
actual usage, and create a new API. That API is then implemented in
terms of the old/current API and *code* to ensure correctness (or as
close to it as possible; there may be some coupling across the wc APIs
that would surface in this rebuild scenario).
We could then migrate all code to the new API, and then in a third
step reimplement that API with new designs/schemes/whatever.
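To make that wrapping step concrete, here is a minimal sketch of what step 1+2 might look like: a "new" context-based entry point implemented as a thin shim over an "old" path-based one. All of the names (wc_context_t, new_wc_open, old_wc_open) are invented for illustration; they are not actual Subversion APIs.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical "old" API: the caller hands over a raw path string. */
int old_wc_open(const char *path)
{
    printf("old: opening working copy at %s\n", path);
    return 0;  /* 0 = success */
}

/* Hypothetical "new" API: an opaque per-working-copy context. */
typedef struct wc_context {
    char path[256];
    int open;
} wc_context_t;

/* New API implemented in terms of the old one: delegate for the real
 * work, then record state in the new-style context. */
int new_wc_open(wc_context_t *ctx, const char *path)
{
    int err = old_wc_open(path);
    if (err)
        return err;
    strncpy(ctx->path, path, sizeof(ctx->path) - 1);
    ctx->path[sizeof(ctx->path) - 1] = '\0';
    ctx->open = 1;
    return 0;
}
```

The wrapper is deliberately trivial; the point of the exercise is that callers migrate to the context-shaped API now, so that the body of new_wc_open can later be swapped for a genuinely new implementation without touching them.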
This scenario works well to ensure compatibility of the new API, but
it has drawbacks:
1) how do you define the new API? generally, you need the new
implementation to drive your efforts at creating that new API. IOW,
you need step 3 to be completed, then you fill in the other bits.
2) this approach does not solve the problem of supporting the classic
API in terms of the new implementation.
These two problems need to be solved for a successful migration. I
believe that the new implementation is the right place to start. Then
examine how to squeeze that into the current implementation, leading
to a definition of a middle ground... a new API... which encompasses
both old and new implementations. Then you hobble the new
implementation to support the old API as appropriate. Ideally, this is
done in such a way that if *only* new users interact with the working
copy on disk, then you're golden. Only the old guys get stuck with
the compatibility cost.
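The "hobble the new implementation to support the old API" step is the inverse shim: the legacy entry point stays alive by building a throwaway new-style context and delegating. Again, every name here (wc_ctx_t, wc_ctx_init, old_style_open) is hypothetical, purely to illustrate the shape.

```c
#include <string.h>

/* Hypothetical new implementation: one context per working copy. */
typedef struct wc_ctx {
    char root[256];
} wc_ctx_t;

int wc_ctx_init(wc_ctx_t *ctx, const char *root)
{
    strncpy(ctx->root, root, sizeof(ctx->root) - 1);
    ctx->root[sizeof(ctx->root) - 1] = '\0';
    return 0;
}

/* Old-style entry point kept alive as a compatibility shim: it builds
 * a temporary context, delegates to the new code, and preserves the
 * old calling convention. Old callers never see the context. */
int old_style_open(const char *path)
{
    wc_ctx_t ctx;
    return wc_ctx_init(&ctx, path);
}
```

This is where the "only new users are golden" asymmetry shows up: the shim pays the cost of constructing and discarding a context on every legacy call, while direct users of the new API carry one context across operations.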
So, imo: start with a blue sky. Build it. Then start the compromise
and the reaching of middle ground. Record those compromises for things
to remove during the 2.0 rebuild.
And FWIW, I am very, very strongly against any notion of 2.0.
Personally, I see it as a failure in creativity. Shooting for 2.0 is a
shortcut. It's a way to avoid the difficult problems. It is a way to
shove development problems/maintenance at the bazillions of users that
Subversion has today. Consider: 1.x clients and 2.0 servers are not
compatible. 2.x clients and 1.x servers are not compatible. Each time
a user wants to check something out, they will need to know the
version of the server. Holy shit will that suck. Big time. Badly. One
month of extra development to ensure 1.x compatibility, or a bazillion
man-months of productivity loss due to a major version change. Eesh.
Doesn't seem like a fair tradeoff. Hey... to be fair, it's true that
I'm not personally spending that extra dev time, but I don't think
that means the point is any less valid. And if/where I get to assign
or volunteer time for svn dev? It'll be on 1.x. I think that the
*concept* of 2.0 is just giving up.
(and why does revlog mean 2.0? isn't that just another backend like
bdb vs fsfs?)
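The "just another backend" framing usually means vtable dispatch: each storage scheme fills in the same operations table and callers never know which one they got. This is a generic sketch of that pattern, not Subversion's actual fs loader; the names (fs_vtable_t, fsfs_open, revlog_open) are illustrative only.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical backend vtable: every storage scheme implements the
 * same set of operations. */
typedef struct fs_vtable {
    const char *name;
    int (*open)(const char *path);
} fs_vtable_t;

int fsfs_open(const char *path)   { printf("fsfs: %s\n", path);   return 0; }
int revlog_open(const char *path) { printf("revlog: %s\n", path); return 0; }

static const fs_vtable_t backends[] = {
    { "fsfs",   fsfs_open },
    { "revlog", revlog_open },
};

/* Dispatch by backend name, the way a repository format file might
 * select its storage scheme at open time. */
const fs_vtable_t *fs_lookup(const char *name)
{
    size_t i;
    for (i = 0; i < sizeof(backends) / sizeof(backends[0]); i++)
        if (strcmp(backends[i].name, name) == 0)
            return &backends[i];
    return NULL;  /* unknown backend */
}
```

Adding a new scheme under this pattern means adding one row to the table, which is the point of the parenthetical: a new on-disk format need not force a major version bump.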
A new working copy library would be awesome. Start with code, then
figure out the middle ground for 1.x. At some future time, 2.x can
iterate to improve and to shed dead weight.
Greg Stein, http://www.lyra.org/
Received on Tue Jan 30 09:36:44 2007