On 6/17/07, Toby Thain <toby@smartgames.ca> wrote:
>
> On 17-Jun-07, at 8:31 PM, Troy Curtis Jr wrote:
>
> > A solution to this is probably not Subversion specific, therefore the
> > question may not be 100% on-topic, but I am hoping that someone on the
> > list has some helpful experience.
> >
> > I *may* be getting a new server which will have more RAM than you can
> > shake a stick at. In an effort to make the best use of said RAM I am
> > trying to determine whether I can successfully run a Subversion
> > repository straight out of RAM, but in a caching sort of way such that
> > the data is written to disk and can thus survive a power failure
> > (important for version control wouldn't you say? :) )
>
> Unless you're prepared to lose data, commits must always hit the
> disk. So presumably you just want to speed up checkouts?
>
Absolutely true, which is why using a RAM disk is not an option. I
was looking more for some kind of persistent disk caching... some
kind of extension to the normal Linux page cache, a way to tell the
kernel to always keep a particular set of files or directories
resident in the cache.
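
Something along these lines is roughly what I had in mind (just a
sketch, not something I've actually deployed; the repository path is
made up, and locking ~2.5 GB of files needs root or a large enough
"ulimit -l"): a tiny helper that mmap()s a repository file and
mlock()s it so the kernel keeps those pages resident.

/* pin_file.c: rough sketch, mmap a file and mlock() it so its pages
 * stay resident in RAM.  Hypothetical usage:
 *
 *   cc -o pin_file pin_file.c
 *   ./pin_file /var/svn/repos/bigrepo/db/strings &
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s FILE\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    void *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    if (mlock(p, st.st_size) < 0) { perror("mlock"); return 1; }

    /* Sleep forever; the pages stay locked for the life of the process. */
    pause();
    return 0;
}

Run one instance per db file (or extend it to walk the whole tree) and
the pages stay pinned until the process exits.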
> >
> > This will be running on a Red Hat Enterprise Linux 4 Update 4 OS
> > being served out via Apache. There will be several repositories, but
> > there is really only one that I think will help, it is ~2.5GB in size
> > and is a BDB. (Actually, running from RAM might improve FSFS enough
> > that I could switch to it!)
>
> Did you find FSFS too slow?
>
About half as fast. With my repository (~200 MB checkout, lots of
directories spread across many depths, >65000 revs) a checkout took
almost 9 minutes with FSFS and ~4 minutes with BDB.
> >
> > I know that Linux is natively aggressive in its disk caching, and
> > after the first request much of the data WILL be cached. However,
> > this server will also be serving out 5+ GB vmware images, which
> > will likely invalidate the cached repository blocks quickly. I have
> > been doing some Google searches, but all the info I have run into
> > is related to caching network file systems on the client side.
> >
> > Ideas anyone?
>
> Is your server loaded enough that it could make a difference? If so,
> upgrade your network and use a dedicated server.
>
> --Toby
>
Well, server load and the network aren't really the issue; it really
looks like I/O is the bottleneck. Network usage is actually very low
during a checkout. Another reason I think persistent caching would
help is a particular experience I had. I needed to check out three
working copies (slightly different revs, but off the same directory).
I started the second checkout about a minute after the first, then
started the third about 30 seconds after the second. The second and
third ended up catching up to the first, and all three proceeded neck
and neck for the last minute or so. (Remember, my checkouts take ~4
minutes, and these three together took a little longer because they
were all running on the same machine.)
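
So the idea I keep coming back to is a cron job that re-warms the
cache after the big vmware copies finish. Something like this sketch
(again, the path is made up, and POSIX_FADV_WILLNEED is only a hint;
the kernel is still free to evict the pages later):

/* prewarm.c: sketch, walk the repository and ask the kernel to pull
 * every file back into the page cache.
 *
 *   cc -o prewarm prewarm.c        (run from cron after the big copies)
 */
#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <ftw.h>
#include <unistd.h>

static int prewarm(const char *path, const struct stat *sb,
                   int type, struct FTW *ftwbuf)
{
    if (type != FTW_F)
        return 0;                        /* regular files only */

    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return 0;                        /* skip what we can't read */

    /* offset 0, len 0 means "the whole file" */
    posix_fadvise(fd, 0, 0, POSIX_FADV_WILLNEED);
    close(fd);
    return 0;
}

int main(void)
{
    return nftw("/var/svn/repos/bigrepo/db", prewarm, 64, FTW_PHYS);
}

That wouldn't keep the files in RAM permanently, but it would mean the
first checkout after a vmware transfer doesn't pay the full disk
penalty.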
> >
> > Thanks ahead of time for any info.
> >
> > Troy
> >
>
>
Thanks,
Troy
--
"Beware of spyware. If you can, use the Firefox browser." - USA Today
Download now at http://getfirefox.com
Registered Linux User #354814 ( http://counter.li.org/)
---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org