----- Original Message -----
From: "ListMan" <listman@burble.net>
To: "Nico Kadel-Garcia" <nkadel@comcast.net>
Cc: <users@subversion.tigris.org>
Sent: Tuesday, April 11, 2006 8:13 PM
Subject: Re: large binary data repositories with many files
>
> sure, sorry for the garbled message
>
> raw file size = 2.1G
> # files (dirs), raw tree = 5331 (2061)
> unix copy time = 6 min
> commit time = 68 min
> repos size = 1.2G
> work area size = 4.2G
> # files (dirs), work area = 44011 (22671)
>
>
> The working area is pretty large; we have 30 users who will each have a
> copy of the repos in their local working area. Is there any way we can
> reduce this?
>
> The commit is very slow, and there's a strong possibility that we'll have
> 30 users trying to commit at the same time when they get in for work in
> the morning. How else can we reduce the commit time?
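On the working-copy size: a Subversion checkout keeps a pristine copy of
every versioned file under the .svn/text-base administrative directories,
so a working copy runs to roughly twice the raw size. About the only way
to shrink it is to have each user check out just the subtree they actually
work on. Assuming a Unix client, with /path/to/wc standing in for the
working copy, a rough measure of the overhead is something like:

    # total working-copy size, then just the .svn share of it
    du -sk /path/to/wc
    find /path/to/wc -name .svn -type d -prune -print | xargs du -sk |
        awk '{kb += $1} END {print kb " KB under .svn"}'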
As for the commit time, let's be clear: what OS is your client, what OS is
your server, and how is the material laid out?
In particular, directories with many thousands of files at the top level
often present performance problems, especially on older operating systems.
I'm concerned that you may be running into such issues on the client, or
with your logs on the server.
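A quick way to check, again assuming a Unix client and /path/to/wc as a
placeholder for your tree, is to list the directories with the highest
entry counts:

    # print "entry-count directory", biggest first, skipping .svn areas
    find /path/to/wc -name .svn -prune -o -type d -print |
    while read d; do
        echo "$(ls "$d" | wc -l) $d"
    done | sort -rn | head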
Also, when you're dealing with commits that large, or with individual files
that large, I suspect you may run into RAM limitations on the server and
need more RAM to handle them gracefully. Listen to the server while a
commit is in progress: can you hear the disk swapping?
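You don't have to literally put an ear to the box: on a Linux server,
running something like this while one of the big commits is in flight will
show swap activity directly:

    vmstat 5    # sustained nonzero si/so columns mean the box is swapping

If it is swapping, more RAM (or, if you're serving over Apache, trimming
MaxClients so fewer heavyweight children compete for memory) is the usual
cure.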