
Re: DFS alternative for linux

From: Ruslan Sivak <rsivak_at_istandfor.com>
Date: 2006-10-05 03:10:49 CEST

Les Mikesell wrote:
> On Wed, 2006-10-04 at 17:20 -0400, Ruslan Sivak wrote:
>
>
>>>>> Just curious... Why would you want 2 different working copies
>>>>> synchronized without committing to the repository and updating?
>>>>>
>>>> We are using the working copies on the production server for our web
>>>> site. They provide easy and fast updates (deployments) to the code.
>>>>
>>>>
>>> That seems to me like a good reason for making the repository the
>>> only way to change it, so you always have a clear
>>> history. I wouldn't want a way to modify production without
>>> committing first unless you have fast-changing binaries like
>>> weather maps. Can't you ssh an 'svn update' on the server when
>>> you want something new to appear there?
>>>
>> We don't modify production except through svn (other than in an emergency).
>> We do go on and do an update on one of the servers, and currently (as we
>> are on windows), it propagates the changes to another server. I really
>> don't want to have to replicate the changes manually, especially if we
>> are going to add more servers to the farm.
>>
>> Now so far, most things that I see for linux are as good as the
>> microsoft product or better. DFS has its weaknesses, but it's just
>> awesome when you have small updates that need to be propagated. I can't
>> believe that linux doesn't have anything similar.
>>
>
> Since you know when the update needs to be done, do it with
> rsync over ssh to as many other places as necessary, or ssh
> an 'svn update' command. It is a good idea to wrap these
> operations in control scripts from the start so things like adding
> servers, dropping them out of a load balancer during the update,
> etc. can be added if/when needed and the users just run the same
> script to make a change.
>
>
rsync is still kind of slow on large data sets. DFS is super slow when
you have a lot of data to sync over, but once the data is there, if you
update 1 file out of 50,000, it syncs almost instantly. rsync has to
check all 50,000 files every time.
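
Just to be concrete, the kind of rsync push being suggested would look
roughly like this (the host name and paths here are made up):

    # push the working copy to a second web server over ssh
    # (example source path and host only)
    rsync -az --delete -e ssh /var/www/site/ web2:/var/www/site/

and rsync still has to walk the whole tree on both ends just to find
the one file that changed.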

svn update might be a better solution and might work, although an
update on a large working copy is still a little slow (which is fine,
since I usually know what needs to be updated and can update those
folders specifically). If I had to update the whole working copy on
each server, that would be pretty slow.
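
If we went the svn update route, I suppose a small wrapper script
could hit each server in turn, something like this sketch (the host
names and working copy path are just examples):

    #!/bin/sh
    # run 'svn update' on every web server in the farm
    # (example host list and path)
    for host in web1 web2 web3; do
        ssh "$host" "svn update /var/www/site"
    done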

svn update won't work for things like people uploading images to the
webserver. The images get uploaded into the working copy, and
eventually I go through and check them into the repo. So the only thing
that might work here would be rsync, but as I mentioned before, that
would be pretty slow.
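
The manual check-in step I do now is basically just (the path is only
an example):

    # pick up whatever got uploaded and commit it
    cd /var/www/site/images
    svn add --force .
    svn commit -m "check in uploaded images"

but that doesn't help with getting those files onto the other servers
in the meantime.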

The best solution would be some sort of filesystem that detects changes
to the filesystem and sends out updates to the other cluster members.
I'm sure there is a filesystem like that out there, I just haven't found
it.
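
The closest thing I can picture is gluing inotify to rsync myself,
something like this rough sketch (assuming inotify-tools is installed;
the paths and host are made up), though that's a watcher script, not a
real clustered filesystem:

    # watch the tree and re-sync only the directory that changed
    inotifywait -m -r -e modify,create,delete,move /var/www/site |
    while read dir events name; do
        rsync -az --delete -e ssh "$dir" "web2:$dir"
    done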

One alternative would be to somehow mount the repository as a folder
and then have apache serve files off that folder. When people upload
something, it could be written straight into the repo, basically via
webdav. My fear is that this would be kind of slow. Is it possible to
mount the repo in linux as a folder?
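
What I have in mind is roughly the following, if it's even workable
(the paths and host are made up, and I have no idea how autoversioning
or a dav mount would perform):

    # on the repository server: expose the repo over webdav and let
    # plain DAV writes turn into commits (httpd.conf snippet)
    <Location /repos>
        DAV svn
        SVNPath /var/svn/repos
        SVNAutoversioning on
    </Location>

    # on each web server: mount that URL as a folder (needs davfs2)
    mount -t davfs http://svnhost/repos /var/www/site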

Russ

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org
Received on Thu Oct 5 03:11:26 2006
