On Wed, Nov 23, 2005 at 09:36:12AM -0500, Greg Hudson wrote:
> On Wed, 2005-11-23 at 02:01 -0800, Greg Stein wrote:
> > The complexity of the software increases, yes, but the user story is
> > greatly simplified. Many places are already running caches of some
> > form. Deploying squid is no big deal, and many people do that. HTTP
> > caching proxies are well known items, and people have a great choice
> > in how to install and configure those.
> I dunno. If I'm, say, gnome.org, and my servers can't handle the
> Subversion traffic, I think I'm more likely to want to set up a bunch of
> mirrors than to point people at a bunch of Squid caches.
You don't have to tell people *anything* different. You switch the
front-end server into a reverse proxy. Same IP and all, but it can
serve from its own cache, or relay the requests to N backend servers
with their own caches. And if a request isn't satisfiable, then the
server can just invoke the appropriate svn functionality.
This frontend that relays out to N backend servers can be Apache or
another reverse proxy, or even hardware such as a NetScaler.
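For example, a caching reverse-proxy frontend in httpd might look
something like this. This is an untested sketch; the hostnames, paths,
and module lines are made up for illustration (exact module names vary
between httpd versions):

```apache
# Front-end httpd: same public IP as before, but now it serves from
# its own disk cache and relays misses to an internal backend.
LoadModule proxy_module       modules/mod_proxy.so
LoadModule proxy_http_module  modules/mod_proxy_http.so
LoadModule cache_module       modules/mod_cache.so
LoadModule disk_cache_module  modules/mod_disk_cache.so

<VirtualHost *:80>
    ServerName svn.example.org

    ProxyPass        /repos http://backend.internal/repos
    ProxyPassReverse /repos http://backend.internal/repos

    # Cacheable responses get served without touching the backend.
    CacheEnable disk /repos
    CacheRoot   /var/cache/httpd-proxy
</VirtualHost>
```

Clients keep pointing at the same URL; only the server side changes.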
When I was at CollabNet, we did a bunch of scalability testing and
found that the servers were CPU-bound computing diffs. It kind of
sucked at the time because BDB was the only option, and with BDB it
was effectively impossible to have N servers against a single (NFS)
backing storage system. With FSFS, it is rather straightforward to
have a farm of frontend Apache servers grunting through diffs/deltas,
all talking to an FSFS repo on a networked storage device (e.g. a NetApp).
> > I'd much rather improve the client than to develop yet another server,
> > with its own host issues related to networking code, security,
> > documentation, logging, monitoring, and performance tweaking.
> Are you imagining the mirror system would be yet another network server?
> Much more likely, it would be one of our existing network servers
> pointed at a mirror maintained by a cron job.
Yup. I assumed a transparent proxy/cache thingy. Try to serve from
cache, or to relay the request back to the "real" server if it can't
be satisfied locally.
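A minimal sketch of that serve-from-cache-or-relay logic (the class
and fetch function here are made-up illustrations, not svn or proxy
APIs):

```python
class CachingRelay:
    """Serve from a local cache; relay to the real server on a miss."""

    def __init__(self, upstream_fetch):
        self.cache = {}                  # url -> cached response body
        self.upstream_fetch = upstream_fetch

    def get(self, url):
        if url in self.cache:
            return self.cache[url]       # satisfied locally
        body = self.upstream_fetch(url)  # relay back to the "real" server
        self.cache[url] = body           # remember it for next time
        return body


# Example: only the first request for a URL reaches the upstream server.
upstream_calls = []

def fetch(url):
    upstream_calls.append(url)
    return "body-of-" + url

proxy = CachingRelay(fetch)
first = proxy.get("/repos/file")   # miss: relayed upstream
second = proxy.get("/repos/file")  # hit: served from cache
```

A real proxy would also have to honor cache-control headers and expire
entries, but the shape of the decision is the same.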
You could have a mirror-like thing, but you'd still want to relay
requests the mirror can't satisfy back to the master.
> > When I
> > look back at svnserve, the original idea was "small and light", but I
> > note that it has grown a ton of functionality since then.
> It has? It got path-based acls, but that was just moving logic from
> mod_authz_svn (where it never should have been) into libsvn_repos, and
> adding a few calls.
Logging? Threads? Doesn't it have a config file now?
> (I admit to some frustration at the amount of crud involved in having a
> network server which satisfies everyone's needs, which svnserve
> currently does not. I believe the answer is to encourage the creation
> of better support libraries. I do not believe the answer is to
> implement everything inside Apache httpd.)
Right. Shove people at httpd rather than building all that into
svnserve.
> > And besides, serf already does pipelining (and deflate/gzip and basic
> > SSL). There are a ton of "friendly" bits that it is lacking, but the
> > core is there. IMO, it is much more feasible to complete that thing
> > and hook it into svn, than it is to write a new mirror system.
> We already have the performance benefit of HTTP pipelining (at the
> expense of giving up on generic HTTP caching), and ra_dav is still much
> slower than ra_svn. I believe this mostly comes from wc-props and other
> impedance mismatches between svn and DAV. Perhaps it's possible to fix
> all of the resulting performance problems in other ways, but for years
> now, no one who holds that theory has been writing much Subversion code.
Hopefully, the wc improvements branch will fix the performance
problems related to the wcprops. We'll have to see.
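As a back-of-the-envelope illustration of what pipelining buys (the
numbers below are made-up assumptions, not measurements):

```python
def serial_cost_ms(n_requests, rtt_ms, xfer_ms):
    """One request per round trip: every request pays the full RTT."""
    return n_requests * (rtt_ms + xfer_ms)

def pipelined_cost_ms(n_requests, rtt_ms, xfer_ms):
    """Requests sent back-to-back: roughly one RTT plus the transfers."""
    return rtt_ms + n_requests * xfer_ms

# 100 small requests over a 50 ms link, 1 ms transfer each:
slow = serial_cost_ms(100, 50, 1)     # 5100 ms
fast = pipelined_cost_ms(100, 50, 1)  # 150 ms
```

The gap grows with request count and latency, which is why small
per-file DAV requests hurt so much without pipelining.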
In terms of writing code, while I haven't had time personally, I can
make it possible for others :-) I have an intern starting in January
to work on serf and svn integration. I'll make sure he knows that he
can talk about it whenever he's up for it.
Greg Stein, http://www.lyra.org/
Received on Thu Nov 24 12:05:55 2005