On 5/10/07, Ryan Schmidt <subversion-2007b@ryandesign.com> wrote:
> I should further clarify that my understanding of SANs is limited to
> Apple's documentation of their Xsan product, which I believe is a
> cluster filesystem, so apparently I don't know where a SAN stops
> and a cluster filesystem begins. So wherever I have said "SAN" in
> previous postings, please substitute "SAN with cluster filesystem" or
> "Apple Xsan."
An easy way to say this is "SAN volume = networked hard drive volume;
Cluster File System ON SAN volume = Network Accessible File System."
You wouldn't really ever want to give two systems simultaneous
write access to the same volume at the block level a SAN operates
on; without some higher-level coordination they would just corrupt
each other's data. Cluster File Systems, like Red Hat GFS and
Oracle's OCFS (and whatever Xsan uses as its file system), are just
like regular file systems, except they also handle the semaphores
and resource locking necessary for simultaneous multi-node access.
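To put the difference in concrete terms, here's a minimal sketch
(Python, with a made-up mount point) of the coordination every node
needs before it touches shared data. Whether a lock like this is
actually honored across nodes is exactly what a CFS provides and a
bare SAN block device does not:

    import os, fcntl

    SHARED = "/mnt/clusterfs/shared.dat"  # hypothetical cluster-FS mount

    def update_shared_file(data: bytes) -> None:
        # Take an exclusive lock before writing. On a cluster file
        # system the lock is arbitrated between nodes (GFS does this
        # through its distributed lock manager); on a raw SAN block
        # device mounted by two hosts there is no arbiter at all, and
        # concurrent writes corrupt the volume.
        fd = os.open(SHARED, os.O_RDWR | os.O_CREAT, 0o644)
        try:
            fcntl.flock(fd, fcntl.LOCK_EX)  # waits for other holders
            os.write(fd, data)
            os.fsync(fd)
        finally:
            fcntl.flock(fd, fcntl.LOCK_UN)
            os.close(fd)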
In my jaded opinion, the very names of NFS and CIFS are misnomers
-- I really just view them as complex structured file transfer
protocols with lower-overhead transfer initiation than more
"traditional" file transfer protocols like FTP or HTTP. I think of
WebDAVfs as functionally equivalent to NFS and CIFS in most areas,
and that's really running on top of the HTTP stack. None of them
replaces the need for a true CFS.
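A rough way to see why I say that: here is what a WebDAV "mount"
boils down to on the wire. The host name is made up, but the two
requests (PROPFIND to list a collection, PUT to store a file) are
just structured HTTP, which is the whole point:

    import http.client

    # Hypothetical WebDAV server; everything below is plain HTTP.
    conn = http.client.HTTPConnection("dav.example.com", 80)

    # "List the directory": a PROPFIND request with Depth: 1.
    conn.request("PROPFIND", "/shared/", headers={"Depth": "1"})
    print(conn.getresponse().read()[:200])   # XML multistatus body

    # "Save a file": a plain PUT of the file contents.
    conn.request("PUT", "/shared/notes.txt", body=b"hello\n")
    print(conn.getresponse().status)         # 201 or 204 on success
    conn.close()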
It is possible to use Cluster or Cluster-Aware File Systems on any
"shared" hard drive or RAID volume. If it isn't on a SAN of some
sort, it's usually a single RAID box with two SCSI connections to
two different servers. That naturally limits you to 2-node clusters,
which usually just end up being High-Availability (HA) fail-over
clusters rather than true Load Balancing clusters. HA Clusters don't
need simultaneous R/W access, except to a special small "Quorum"
volume that holds failure and priority status data (which is like a
minimal CFS on its own), so Cluster-Aware FS or OS layers are
sufficient for HA systems. A true shared CFS is necessary for Load
Balancing Clusters that complete Write operations of any sort, and a
SAN is necessary to mount a single CFS partition on more than 2
nodes simultaneously.
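Just to illustrate what that quorum volume is doing, here's a
conceptual sketch (made-up paths, not any particular cluster stack):
the active node keeps stamping a heartbeat record on the small
shared volume, and a standby only promotes itself once that stamp
goes stale, so two nodes never both believe they own the service.
A real quorum-disk protocol also locks these reads and writes,
which this sketch omits:

    import json, time

    QUORUM_FILE = "/mnt/quorum/state.json"  # tiny shared volume
    STALE_AFTER = 30                        # seconds without a heartbeat

    def heartbeat(node: str) -> None:
        # The active node rewrites its timestamp on every pass of
        # its main loop.
        with open(QUORUM_FILE, "w") as f:
            json.dump({"owner": node, "stamp": time.time()}, f)

    def should_take_over(me: str) -> bool:
        # A standby promotes itself only if the recorded owner's
        # heartbeat has gone stale -- the "failure and priority
        # status data" the quorum volume exists to hold.
        try:
            with open(QUORUM_FILE) as f:
                state = json.load(f)
        except FileNotFoundError:
            return True
        if state["owner"] == me:
            return True
        return time.time() - state["stamp"] > STALE_AFTER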
Most Load Balanced clusters also utilize a router or reverse-proxy
of some sort, which controls how clients reach each node and
presents a single name and/or IP for all client access. Better
routers also handle problems like cache coherence and pinning a
client's multi-request transactions to a single node (sticky, or
"static", routing). These firewall/reverse-proxy cluster routers are
often hosted on HA-type Clusters themselves, to avoid introducing a
single point of failure.
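The sticky-routing part is easy to picture: the front end just maps
each client deterministically onto one backend node, so every
request in that client's session lands on the same box. A minimal
sketch of the mapping (hypothetical node names):

    import hashlib

    # Backend pool hidden behind the single published name/IP.
    NODES = ["node-a.internal", "node-b.internal", "node-c.internal"]

    def pick_node(client_ip: str) -> str:
        # Hashing the client address routes the same client to the
        # same backend every time, which keeps a multi-request
        # transaction on one node and sidesteps per-session cache
        # coherence problems.
        digest = hashlib.sha1(client_ip.encode()).hexdigest()
        return NODES[int(digest, 16) % len(NODES)]

    print(pick_node("203.0.113.42"))  # always the same node for this IP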
I can't wait for SVN 1.5 to get stable -- the HTTP/S write-through
reverse-proxy capability will let LB Cluster router nodes
simultaneously act as Read-operation load-balancing nodes, which is
amazing. That feature opens up a lot of opportunities for
geographically distributed, load-balanced SVN and WebDAV cluster
configurations without any cache-coherence issues. My remote
employees and contractors will rejoice, and I may get to work from
home more often.
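To sketch the read/write split that makes that possible: a local
mirror can answer read traffic itself and forwards anything that
modifies the repository up to the master. The real feature is
configured in Apache / mod_dav_svn rather than written by hand, so
treat this fragment purely as a picture of which DAV/DeltaV methods
fall on which side:

    # Read-only DAV/DeltaV methods can be answered by any local
    # mirror; everything else must be forwarded to the master so the
    # mirrors never diverge from it.
    READ_METHODS = {"GET", "OPTIONS", "PROPFIND", "REPORT"}

    def route(method: str, master: str, mirror: str) -> str:
        return mirror if method in READ_METHODS else master

    MASTER = "https://svn-master.example.com"   # hypothetical hosts
    MIRROR = "https://svn-eu.example.com"
    print(route("REPORT", MASTER, MIRROR))  # served locally
    print(route("MERGE", MASTER, MIRROR))   # proxied to the master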
:) Jred