On Mon, Nov 12, 2012 at 9:13 PM, Philip Martin
<philip.martin_at_wandisco.com> wrote:
> Daniel Shahaf <d.s_at_daniel.shahaf.name> writes:
>
>> Greg Stein wrote on Mon, Nov 12, 2012 at 19:01:25 -0500:
>>>
>>> In October, svn.apache.org generated about 900M of logs(*). Is that a
>>> problem? I wouldn't think so. At that rate, a simple 1T drive could
>>> hold over 83 years of logs. Are there installations busier than
>>
>> How many years would those 1TB disks last for if all neon clients were
>> converted to serf?
>
> I have a checkout of the gcc tree; it has 78,000 files. Today it uses
> svn:, but if it were to use http:, then the serf checkout log would be
> about 4 orders of magnitude bigger than the neon log. 83 years becomes
> 1 or 2 days.
>
> The neon log is independent of the size of the checkout, while the
> serf log scales with the size of the checkout. If this were memory, we
> would say we have a scaling problem. Do scaling problems not apply to
> disk space?
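The figures quoted above can be sanity-checked with a quick back-of-envelope sketch. The 900 MB/month and 1 TB numbers are from the thread; the 10^4 factor is Philip's estimate for a serf checkout of the 78,000-file gcc tree, and everything else here is an assumption:

```python
# Back-of-envelope check of the log-volume figures quoted in the thread.
# 900 MB of logs per month (svn.apache.org, October 2012) and a 1 TB
# drive are the thread's numbers; the 10^4 factor is Philip's estimate
# for a serf checkout of the gcc tree.

MONTHLY_LOG_MB = 900
DRIVE_MB = 1_000_000            # 1 TB, decimal units

years_per_tb = DRIVE_MB / (MONTHLY_LOG_MB * 12)
print(f"neon-style logging: ~{years_per_tb:.0f} years per TB")

# serf logs roughly one request per file fetched, so a large checkout
# inflates the log by about four orders of magnitude:
days_per_tb = years_per_tb * 365 / 10_000
print(f"serf-style logging: ~{days_per_tb:.1f} days per TB")
```

This comes out at roughly 93 years per TB for neon-style logging and a few days per TB for serf-style logging, consistent with Greg's "over 83 years" and in the same ballpark as Philip's "1 or 2 days".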

The log volume is proportional to the work done by the server. If you
want to perform capacity planning, a single "REPORT" entry doesn't tell
you much. The individual serf requests enable better load balancing,
use of multiple cores, reverse proxies to spread load across machines,
etc.
As Justin states, there are well-known solutions for dealing with logs.
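Those well-known solutions are typically OS-level rotation plus compression. A minimal sketch of what that might look like with logrotate (the path, retention, and reload command here are illustrative assumptions, not from the thread):

```
/var/log/httpd/access_log {
    daily
    rotate 30
    compress
    delaycompress
    missingok
    notifempty
    postrotate
        /usr/sbin/apachectl graceful > /dev/null 2>&1 || true
    endscript
}
```

With daily rotation and compression, even a serf-scale access log is bounded to a month of compressed history rather than growing without limit.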
Cheers,
-g
Received on 2012-11-13 03:48:55 CET