On 10/21/06, John Waycott <javajohn@cox.net> wrote:
> This sounds very similar to what Wandisco provides. We haven't tried
> their product, but we are considering it for the future.
> -- John
Wandisco is probably a little too much for our production environment -- we
only really have one developer outside of our immediate region, and he's
only two timezones away. It seems like a simple "reverse proxy" like
pound [http://www.apsis.ch/pound/] would work, provided it could be
hacked to route per HTTP request method rather than just per URL. The
documentation indicates it is meant to route per-URL, but pound is
GPL-licensed, so such a great feature may be worth the coding time...
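Just to illustrate what I mean by per-method routing, here is roughly what
the rule might look like if it were done with Apache's mod_rewrite and
mod_proxy instead of pound (the hostnames and paths are made up, and I
haven't tested this; it's only a sketch of the idea):

    # Send Subversion/DAV write methods to the master server;
    # read-only methods fall through to the local mirror below.
    RewriteEngine On
    RewriteCond %{REQUEST_METHOD} ^(MKACTIVITY|CHECKOUT|MERGE|PUT|DELETE|COPY|MOVE|MKCOL|PROPPATCH|LOCK|UNLOCK)$
    RewriteRule ^/svn/(.*)$ http://svn-master.example.com/svn/$1 [P]
    ProxyPassReverse /svn/ http://svn-master.example.com/svn/
    # GET, PROPFIND, OPTIONS, and REPORT are not matched above, so they
    # are served by the local mod_dav_svn mirror on this same host.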
Thinking further, one possible setup would be to keep a local
reverse-proxy active on each client. Each client could also run an svn
server locally and act as an svnsync target. Requests from the client
would all go through its local proxy: read-only requests would go to the
local server, and only write requests, like lock and commit, would be
routed to the main server.
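To make that concrete, initializing each client's local server as an
svnsync mirror would look roughly like this (paths and URLs are
placeholders; svnsync requires the mirror's pre-revprop-change hook to
allow revision property changes):

    # one-time setup on each client
    svnadmin create /var/svn/mirror
    printf '#!/bin/sh\nexit 0\n' > /var/svn/mirror/hooks/pre-revprop-change
    chmod +x /var/svn/mirror/hooks/pre-revprop-change
    svnsync initialize file:///var/svn/mirror http://svn-master.example.com/repos/main
    svnsync synchronize file:///var/svn/mirror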
Given this possible configuration, there would be a lot of
svnsync targets per main server (roughly 50:1 at our office). Are
there any performance metrics or estimates of svnsync push overhead
compared to client updates? I'm just blindly assuming that one svnsync
push per commit per client is less burdensome than the current
possibility of multiple updates per revision per client.
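The "one push per commit" I'm assuming would just be a post-commit hook on
the main server looping over the client mirrors, something like the
following (the mirror list is made up, and with ~50 mirrors the pushes
would probably need to be queued rather than run inline in the hook):

    #!/bin/sh
    # post-commit hook on the main server: push the new revision out
    # to every client mirror. MIRRORS is a placeholder list.
    MIRRORS="svn://client01.example.com/mirror svn://client02.example.com/mirror"
    for M in $MIRRORS; do
        svnsync synchronize "$M"
    done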
Of course, having a WC cache based on FSFS, improving svnsync to
allow write actions to any server in a sync pool (perhaps with some
internal forwarder), or allowing automated cross-repository merges
would all be better (and complementary) solutions. I just find all of
these personally harder to code than proxies. ;)
:) Jared
> Jared Hardy wrote:
> > That's an interesting fail-over clustering option for Subversion
> > repository commit access. One possibility I'm interested in, which this
> > option brings to mind, would be a Subversion cluster proxy front-end
> > that redirects requests from any given client to several back-end
> > Subversion servers.
> > At any point in time, the front-end would know the current active
> > Commit server, but any read-only actions like Update or Checkout could
> > just be directed to any available mirror, in a load-balancing fashion.
> > The front-end servers could be fail-over clustered as well, to
> > maximize availability. They could even serve as arbitrators to help
> > determine the best current Commit server.
> > Perhaps every user site could host their own front-end/proxy
> > servers, and each front-end could factor latency into its load-balance
> > choices, so available LAN mirrors would usually be chosen over remote
> > mirrors for any read-only actions. Commit server responsibility could
> > even be shifted on a schedule, based on predictable geographical usage
> > pattern changes. Just imagine -- each developer site could have its
> > own clustered mirror and front-end set (possibly in the same boxes),
> > so all read-only operations would behave at LAN speed, where only
> > Commits would depend on primary server WAN connection speeds. Each
> > site could have the current Commit server moved closer to them during
> > their peak commit times. That would be awesome!
> >
> > Does anyone know any current way to implement such a front-end?
> >
> > :) Jared
> >