
Re: How can I setup two svnservers with svnsync and both should provide checkout and checkins

From: Nico Kadel-Garcia <nkadel_at_gmail.com>
Date: Thu, 28 Apr 2011 00:10:07 -0400

On Wed, Apr 27, 2011 at 4:25 AM, Ian Wild <ian.wild_at_wandisco.com> wrote:
> Hi Nico,
> Can I start by offering to provide a trial copy of Subversion Multisite (or
> even a pre-configured virtual environment to save you time) for you to prove
> to yourself how we solve these challenges? Many enterprise SVN deployments
> use our software and if your assertions were true that certainly wouldn't be
> the case.
> On Wed, Apr 27, 2011 at 12:59 AM, Nico Kadel-Garcia <nkadel_at_gmail.com>
> wrote:
> <Liberal Snipping for attempted brevity...>
>>
>> When the link between active-active servers for any
>> database is broken, *or phase delayed for any reason*, and each
>> database accepts alterations from clients without propagating the
>> change to a fundamentally shared repository, mathematics cannot decide
>> which changes must be merged, in which order.
>
> WANdisco prevents a split brain scenario by ensuring that no writes are
> possible unless an agreement has been reached. The product in fact does make
> that decision and while it's probably true that it's not just a function of
> pure maths, the agreement process takes care of these cases elegantly and
> without any human intervention.
>
>>
>> Single mirrored backend database, synchronization protected by some
>> sort of locking mechanism to prevent simultaneous commits from the
>> multiple "active" front ends.
>
> This statement doesn't sound relevant to WANdisco's technology. We don't
> employ mirroring of filesystems and do not have any problems handling as
> many nodes or concurrent transactions as you would conceivably want to throw
> at us.

According to the paper, you *are*. You're mirroring the backend
Subversion databases on the multiple servers, keeping them
synchronized by accepting only authorized transactions on a designated
"master" and relaying them to the other available servers as
necessary. That's master/slave behind the scenes: the slaves
effectively pass the database submissions through. This pattern has
been built into every major multi-location database or service for
the last... I dunno, 30 years? It's certainly fundamental to dynamic
DNS and NTP configurations.

You've renamed the categories of service, but that's clearly the
underlying technology.

>> > WANdisco provide a well-written White Paper explaining this.
>> >
>> >
>> > http://www.wandisco.com/get/?f=documentation/whitepapers/WANdisco_DConE_White_Paper.pdf
>>
>> Just read it. It confirms my description, implemented as a clever set
>> of tools to handle master/slave relationships at high speed on the
>> back end.
>
> Maybe we need to improve the White Paper. What you described doesn't seem to
> reflect how Subversion Multisite operates at all.
> In a situation where one node of three becomes unavailable the remaining two
> nodes would still be able to gain a majority agreement and users of those
> two nodes can continue to read and write normally. The third node where the

Right. Now make it 3 sets of 3, with each set deployed in a different
location. *In each location*, the set of 3 can vote amongst
themselves and go haring off in divergence from the other 6 nodes, or
from the two other sets of 3. Unless you prescribe that each
distributed set must vote among all *9* servers, and get a majority,
you're in danger of local sets diverging. And when some idiot in a
data center says "huh, we're disconnected from the main codeline, we
need to keep working, it's active-active, we'll just set our local
master and resolve it later"... that's exactly what happens. Until
connectivity is re-established, any cluster chopped off is otherwise
read-only. They can commit *nothing*.
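The arithmetic fits in a few lines of Python (a toy illustration,
nothing from the product):

    def has_quorum(reachable, membership):
        """Strict majority of the full membership list, not of whoever
        happens to be nearby."""
        return len(reachable) > len(membership) // 2

    membership = ["a1", "a2", "a3",         # site A
                  "b1", "b2", "b3",         # site B
                  "c1", "c2", "c3"]         # site C
    site_a = ["a1", "a2", "a3"]             # site A, cut off from B and C

    print(has_quorum(site_a, membership))   # False -> site A goes read-only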

Even worse: unless you have a designated master cluster, the
company's core Subversion services at the main office go read-only if
the network connections to enough remote clusters break. There are
environments where this is acceptable, but if I ever installed a
source control system that went offline at the main offices because
we lost overseas or co-location connections, *which happens whenever
someone mucks up the main firewall in the corporate offices!*, they'd
fire me without blinking the first time it happened.

> VPN had failed would automatically become read only and users would see an
> error to that effect if they attempted a write operation. We do offer a
> configuration option where that situation can be reversed, for example if
> the node in question is the only active one at a particular time of day. See
> the section in the Whitepaper on quorum options for more details.
> The key again is that WANdisco never allows a situation to occur where there
> is risk of a 'split brain'. If a global sequence number can't be generated
> using one of our quorum options (Follow the sun or Majority in effect) then

Until some idiot resets the quorum target list locally. That's not a
software protection, it's a procedural one.
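To continue the toy quorum check from above: the majority test is
only as good as the membership list it's handed, and nothing in the
software itself stops a local admin from handing it a shorter one:

    # the "local reset": site A's admin trims the membership list
    membership_local = ["a1", "a2", "a3"]        # quorum list, one site only
    print(has_quorum(site_a, membership_local))  # True -> local writes accepted
    # both partitions now believe they hold quorum: a split brain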

> the user's change is prevented before it gets to Subversion.
> In your example, as soon as the VPN came back the missed transactions would
> be replayed on the third node in the same order as they were on the other

In read-only mode, sure. That's how DNS slaves, NTP slaves, and "MMM"
or "MySQL-Master-Master" work. The problem is the remote idiot who
activates write access to their local quorum. There is no defense
against this, except to throw a screaming hissy fit if it happens, and
ensure that *every working copy taken from the split-off repository is
entirely rebuilt from scratch*. And Subversion servers simply have no
reliable record of where the working copies are to enforce this.

> two sites. No admin decisions or effort are needed here whatsoever and this
> is where we guarantee that all nodes will maintain identical copies of the
> data (assuming the nodes started off with the same data and have been
> configured identically).

Needed? No, not if you're willing to leave your remote cluster in
read-only mode for an indefinite period until the VPN or network
connection can be re-established and the cluster rejoined to the
distributed set. That's likely to kill remote software productivity
for hours, if not days. I've had VPN wackiness last for *weeks* due to
bureaucratic befuddlement.

There is a sane fallback in that situation. Replicate the service to
an alternative backup repository with a different UUID, tell
developers to use that one in the short term, and help them migrate
their changes back to the primary repository once write access is
available again. It's painful, but doable.
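For concreteness, the mechanical part of that fallback looks roughly
like this (Python shelling out to the stock tools; the URLs and paths
are made up, and the svnsync destination needs its pre-revprop-change
hook enabled before the init step):

    import subprocess

    def run(*cmd):
        subprocess.run(cmd, check=True)

    # 1. seed a scratch repository from the local (read-only) replica
    run("svnadmin", "create", "/srv/scratch")
    run("svnsync", "init", "file:///srv/scratch",
        "http://local-replica.example.com/svn/project")
    run("svnsync", "sync", "file:///srv/scratch")

    # 2. give the scratch repository a *different* UUID so no working
    #    copy or mirror ever mistakes it for the primary (svnadmin
    #    setuuid with no UUID argument generates a fresh one, SVN 1.5+)
    run("svnadmin", "setuuid", "/srv/scratch")

    # 3. developers check out from the scratch repository and keep
    #    working; their changes go back to the primary as patches once
    #    it is writable again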

>> When, and how, to turn the relevant repos into read-only nodes is left
>> as an exercise in resource management and paranoia. But the potential
>> for fractures and divergence among them is inherent in any network of
>> more than a few nodes, and switching from "active-active" to
>> "active-slave" when the link is broken is begging to set up
>> "slave-slave" for all sorts of confusing scenarios, and breaking the
>> ability to submit code. And cleaning *UP* the mess is horrible if
>> they're not set to "slave" behavior.
>
> Hopefully this is now answered - There is no potential for any horrible mess

*Wrong*. As soon as a manager of an individual node can designate it a
master with write permission, separated from the rest of the network,
chaos is guaranteed. And you *CANNOT* hardcode the full set of nodes,
because nodes have to be replaceable or discardable.

> and our customers frequently go through planned and unplanned outages
> without them needing to do anything at all in regard to their SVN platform.

The disasters I've described are still feasible. I can believe, from
your whitepaper, that you've dealt with most short-interruption
scenarios: there's a safety on the firearm, and that is a *good*
thing. But the gun is still loaded.

> If it's the server itself that is unavailable, users can simply "svn
> switch" and use a different server that can still get a quorum
> agreement. This is exactly what a number of our Japanese-based
> customers recently did following the earthquake and the need to shut
> down local servers to conserve power. We also offer a third-party load
> balancer which makes that 'failover' transparent to end users.

See above. That quorum agreement is at risk from local "quorums".

> To be clear, I said we'd based our original technology on Paxos. WANdisco's
> technology (And patent) does go quite a bit further in terms of the
> agreement process and again I'd encourage you to get your hand on a copy of
> Subversion Multisite and prove this to yourself. Remember this is the
> culmination of over 10 years research and development; you can get a lot
> done in that time!

Well, good. It does sound like there are desirable features for high
availability and distributed services.

>
>>
>> It's workable, but potentially fragile, and it is an *old* distributed
>> computing problem.
>
> I hope you'll come back to this thread at some point with a changed view on
> this. I believe you will find our solution robust and effective when you dig
> deeper. It must be, given some of the customers and use cases we see (18
> nodes in one instance, 18,000,000 transactions per day in another... I could
> go on).
> Best Wishes,
> Ian
>
>
> --
> Ian Wild
> Chief Solutions Architect
> WANdisco, Inc.

The situation I've described is, admittedly, an unusual one. It's
unlikely in a set of, say, 3 nodes, or even a dozen well-regulated
ones. But it's not preventable when administration of the local
cluster is out of the primary repository admin's hands, and unless
you've got some kind of transaction checksum stored with each
Subversion database transaction to check for discrepancies, you risk
divergent changes circulating unnoticed: exactly the split-brain
situation, under exactly those circumstances.
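Absent a built-in checksum, that check can at least be bolted on
after the fact. A rough sketch (mine, not a product feature; it
assumes local filesystem access to both repositories) that walks two
supposed mirrors looking for the first revision where they disagree:

    import hashlib
    import subprocess

    def rev_digest(repo_path, rev):
        """SHA-1 of one revision's incremental dump."""
        dump = subprocess.run(
            ["svnadmin", "dump", repo_path, "-r", str(rev),
             "--incremental", "-q"],
            capture_output=True, check=True).stdout
        return hashlib.sha1(dump).hexdigest()

    def first_divergence(repo_a, repo_b, head):
        """First revision where the two mirrors disagree, else None."""
        for rev in range(1, head + 1):
            if rev_digest(repo_a, rev) != rev_digest(repo_b, rev):
                return rev
        return None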

Sadly, I've seen this sort of thing happen with other poorly managed
databases, especially ones holding sensitive and complex information.
Received on 2011-04-28 06:10:42 CEST
