On Tue, Mar 23, 2010 at 1:32 PM, Mark Phippard <markphip@gmail.com> wrote:
> On Tue, Mar 23, 2010 at 8:28 AM, Stefan Sperling <stsp@elego.de> wrote:
>
>> In most setups I've seen, the server hardware is much beefier than
>> the client hardware, so unless we do things that scale really badly
>> (say, worse than O(n^2)) I don't see a problem.
>
> Think of a hosting site like sf.net with thousands of SVN repos being
> hit by many thousands of users. How many of these operations do you
> think the Apache server could manage before it ran out of RAM? I am
> not saying we cannot ask the server to do more, but if you have it
> hang on to a list of paths, as you suggest in the rename case, I do
> not see how that avoids scenarios involving significant memory usage.
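To put rough numbers on that concern, here is a quick back-of-envelope
sketch in Python. All three constants are pure assumptions on my part,
not measurements of anything:

    # Hypothetical cost of holding a full path list per in-flight
    # operation on the server; every number below is an assumption.
    concurrent_ops = 10000   # assumed concurrent server operations
    paths_per_op = 100000    # assumed paths tracked per operation
    bytes_per_path = 80      # assumed avg path length plus overhead

    total = concurrent_ops * paths_per_op * bytes_per_path
    print("~%.0f GiB of RAM" % (total / 2.0**30))  # prints "~75 GiB"

Even if the real figures are an order of magnitude smaller, that is
still a lot of memory for a single busy host.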
But surely there are some operations right now, in current svn, that
already consume _some_ RAM. Maybe someone has an idea of who the
current "bad boys" are, and what kind of memory usage they have in
normal scenarios and in worst-case scenarios? Of course, it also
depends on whether it's something every user does all the time, or
something that's executed relatively rarely.
Personally, I think that e.g. merging currently falls into the
"relatively rarely" category. I'm guessing that no more than 0.1%
(maybe even much less) of those thousands of users are merging at
the same time. Maybe that could change if merge were (a lot) faster,
but then we've got a chicken-and-egg problem on our hands :).
--
Johan
Received on 2010-03-23 13:47:04 CET