On Fri, Feb 10, 2012 at 6:14 PM, Stefan Sperling <stsp_at_elego.de> wrote:
> On Fri, Feb 10, 2012 at 04:47:31PM -0600, Ryan Schmidt wrote:
>> So thinking all this through, I agree svnsync does not make sense if
>> you are hosting a repository on a SAN and trying to connect multiple
>> svn servers to it. But it sounds like it would work fine, if you
>> simply don't use svnsync. Configure one server to be the master (let
>> it accept write requests). Configure the other servers to be slaves
>> (read-only, and proxy any incoming write requests to the master). All
>> servers point to the same repository data on the SAN and it can't get
>> corrupted because only one server is writing to it. Sound ok?
>
> Ah, I see what you mean.
>
> Well, I suppose this would work, yes. You are essentially using
> the write-through proxy feature to implement load balancing for
> incoming TCP connections.
>
> But it isn't necessary because the SAN should support file locking
> so multiple concurrent servers writing to the same repository
> synchronise write operations anyway.
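
For reference, the master/slave arrangement Ryan describes maps onto mod_dav_svn's
write-through proxy support (Subversion 1.5 and later). A minimal sketch of a slave
server's Apache config follows; the paths and hostname are illustrative, not from
this thread:

```apache
# Slave server: serve read requests from the shared repository,
# proxy write requests (MERGE, MKACTIVITY, etc.) to the master.
# /san/repos and svn-master.example.com are hypothetical.
<Location /svn>
  DAV svn
  SVNPath /san/repos
  SVNMasterURI http://svn-master.example.com/svn
</Location>
```

The master's config would omit SVNMasterURI and accept writes directly. In the SAN
scenario under discussion, every server would point SVNPath at the same shared data,
so only the master ever writes to it.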
I would be *extremely* leery of this kind of multiple simultaneous
write access to a shared resource. Even with a SAN, filesystem changes
made from one system are vulnerable to phase delays or interruptions,
and there have been way, way, way too many systems that worked very
well this way until stressed, and then corrupted the heck out of what
were supposed to be atomic operations. The ability of filesystem
authors to change specs and practices, update parameters, and create
really startling changes in systems that used to work well is amazing.
If what you're running into is performance issues, I'm really going to
urge you to talk to Wandisco about their distributed multi-server
setup, which seems to do a very good job of running synchronized,
distributed servers.
Received on 2012-02-11 23:37:31 CET