
Re: Potential regression: high server-side memory consumption during import

From: Branko Čibej <brane_at_apache.org>
Date: Sat, 3 Mar 2018 18:54:06 +0100

On 03.03.2018 18:46, Philip Martin wrote:
> Branko Čibej <brane_at_apache.org> writes:
>> So if I understand this debate correctly: The authz code is so much
>> faster now that parsing the authz file and performing the authz lookups
>> beats calculating its MD5 checksum?
> It's more that reading/checksumming is still too slow to be done repeatedly.
> 1.9 reads the file once, per connection, and then does authz lookups on
> lots of paths. The authz rules are fixed for the duration of the
> connection.
> 1.10 was reading and performing the checksum repeatedly as well as doing
> the authz lookups on lots of paths. The authz rules can change during
> the connection lifetime. The authz lookups are faster than 1.9 but not
> enough to offset the repeated reading/checksumming.
> 1.11 goes back to reading the file once, and still does the same authz
> lookups. The authz rules once again remain fixed for the duration of
> the connection.

Yes, I see the backport proposal now. Makes sense.

In other words ... if we wanted to make authz changes have immediate
effect, we'd need a better (faster, or at least non-blocking) way to
determine that the rules changed than reading the authz file, even if
just to verify its hash without actually parsing it. But that can be
done properly at a later date without causing a regression relative to
1.9 behaviour.

-- Brane

P.S.: Running tests now with the patched 1.10.x, will vote on the
backport as soon as that's done. If it's approved, I believe we have to
move our expected release date from 28th March to 4th April?
Received on 2018-03-03 18:54:12 CET

This is an archived mail posted to the Subversion Dev mailing list.
