On Feb 23, 2006, at 03:53, Philip Hallstrom wrote:
>>> In the case where we have a dev server with the vhost and working
>>> repository, we use Samba so the user can mount the vhost on
>>> their desktop. We use TortoiseSVN for Subversion. When the user
>>> tells Tortoise to update the working copy, it communicates
>>> with the Subversion server, downloading any files that need
>>> updating and then copying them to the Samba share. Tortoise
>>> thinks it's local, but it's really copying them down only to copy
>>> them back again.
>> We've had a setup like this for the past year. It's on an internal
>> office network though, so if your designers are not local, that
>> may be more of a problem. It's slow sometimes but our developers
>> haven't complained too loudly yet.
> Sounds like you have exactly the setup I'm looking at; I'm just
> complicating it by distributing it across the west coast (Seattle,
> Vegas, Phoenix).
> How much slower would you say a Tortoise update to Samba is vs. just
> copying files from local to Samba?
The biggest problem for us, I think, is the disk cache on the
development server. Updating or committing a working copy (both of
which have to scan all the .svn directories -- assuming I don't
specify a particular directory or file to commit or update) can take
minutes, and a quick glance at "top" on our 2.6 Linux kernel shows
that much of that time is spent waiting on the disks. We have only
ten developers, but half of them might be working on one of our two
larger projects (thousands of files in hundreds of directories). With
all those working copies, the server's disk cache just doesn't know
what to keep in memory.
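To put a rough number on what such a scan touches, you can count the
.svn metadata directories a full-tree update or commit has to walk. A
minimal sketch -- the toy tree below is a stand-in for a real working
copy, where a large project would have hundreds of these:

```shell
# Build a toy working copy with a few .svn dirs (one per versioned
# directory, which is how Subversion lays out its metadata).
mkdir -p /tmp/demo-wc/trunk/.svn \
         /tmp/demo-wc/trunk/lib/.svn \
         /tmp/demo-wc/trunk/templates/.svn
# Every one of these is read on a full 'svn update' or 'svn commit':
find /tmp/demo-wc -type d -name .svn | wc -l    # prints 3
```

Run against a real checkout, the same find makes it obvious why the
server's cache struggles when several large working copies are being
updated at once.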
Keeping the working copies on the local machine would let the local
disk cache do that work instead, which would be better: while each
user works on a project, that project's directories and files could
hopefully stay largely in cache.
We investigated turning each user's desktop machine into an NFS
server, mounting their working-copy directory into the web server,
and serving the files through Apache that way. But NFS is messy: user
accounts on all machines must be identical, unmounting the shares at
the end of the day became problematic, and Apache had problems if the
mount suddenly disappeared (as when someone shut down their office
machine) without first being unmounted from the web server.
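For what it's worth, a "soft" NFS mount with a short timeout is the
standard way to keep clients like Apache from blocking forever when
the exporting machine disappears. A sketch of that part of the setup
-- the hostnames, paths, and option choices here are my illustration,
not something we actually deployed:

```shell
# /etc/exports on the desktop (hostname and paths are hypothetical):
#   /home/alice/working-copies  devserver(ro,no_subtree_check)
#
# /etc/fstab on the web server -- 'soft' plus a short timeout makes
# NFS calls fail with an error instead of hanging forever when the
# desktop is shut down without unmounting:
#   alice-desktop:/home/alice/working-copies  /var/www/alice  nfs  soft,intr,timeo=30  0  0
```

It doesn't fix the matching-accounts problem, but it would at least
keep a vanished desktop from wedging Apache.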
Other ways we might improve performance would be to set up separate
servers. Currently, our one central development server does
everything: Apache and PHP, MySQL, mail and mailing lists,
Subversion, Samba, and DNS. A separate home-directory server might
help, if it had a lot of RAM dedicated to disk cache, and possibly a
separate repository server as well.
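Whether a box's RAM is actually going to disk cache is easy to check
on Linux from /proc; a quick sketch:

```shell
# How much file data the kernel is currently holding in the page
# cache, versus total RAM (values are in kB):
grep -E '^(MemTotal|Cached)' /proc/meminfo
```

On a dedicated home-directory server with enough RAM, Cached should
stay large enough that full-tree .svn scans mostly hit memory rather
than disk.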
Received on Thu Feb 23 16:08:43 2006