Re: Potential regression: high server-side memory consumption during import
On Sat, Mar 03, 2018 at 06:01:23PM +0100, Branko Čibej wrote:
> On 03.03.2018 17:44, Stefan Sperling wrote:
> > On Sat, Mar 03, 2018 at 04:32:35PM +0000, Philip Martin wrote:
> >> Stefan Sperling <stsp_at_elego.de> writes:
> >>> Which leads me to believe that r1778923 may have been based on wrong
> >>> assumptions about performance. The new authz is not fast enough to
> >>> significantly reduce per-request overhead.
> >> My testing so far was with a very small authz file -- only a handful of
> >> rules and aliases. If I add a few hundred trivial rules to the file then
> >> 1.11 becomes significantly slower than 1.9 while reverting is still much
> >> faster:
> >> 1.9:            4.3s
> >> trunk 1.11:    14.6s
> >> reverted 1.11:  1.9s
> > Thanks for testing and confirming this.
> > I think our best course of action is to revert the change on trunk
> > and in 1.10.x. Could you do that? (I could do it, too. I'm just asking
> > you since you've probably already prepared it in a local copy.)
> So if I understand this debate correctly: The authz code is so much
> faster now that parsing the authz file and performing the authz lookups
> beats calculating its MD5 checksum?
No, the file is only parsed once. What makes the new code faster than 1.9
is that rule lookups happen on a cached copy of the parsed representation,
which is more efficient than the representation used in 1.9 (where
svn_config_enumerate_sections2() walks the ruleset). But re-reading the
file and calculating its checksum on every request is still slow, and that
per-request cost really hurts once the file has a few hundred rules.
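
For reference, the rulesets being tested look something like this (paths
and names invented here; picture a few hundred path sections of this
shape):

[[[
[aliases]
harry = CN=Harry Hacker,OU=Engineers,DC=example,DC=com

[groups]
devs = &harry, sally

[/]
* = r

[/trunk/project1]
@devs = rw
]]]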
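
And for the archives, a minimal self-contained sketch of the caching idea.
This is not the actual libsvn_repos code: the names (authz_get etc.) are
made up, and a toy FNV-1a hash stands in for the MD5 digest. The point is
that a checksum match lets the expensive parse be skipped, while the
read-and-digest cost stays per-request:

[[[
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Parsed representation of the authz file.  In the real code this is
 * a structure optimised for per-path rule lookups; here it is just a
 * copy of the text, which is enough to show the caching behaviour. */
typedef struct parsed_authz {
    uint64_t checksum;   /* digest of the raw text this was parsed from */
    char *rules;         /* placeholder for the parsed rule tree */
} parsed_authz;

/* Toy 64-bit FNV-1a hash, standing in for the MD5 digest. */
static uint64_t digest(const char *buf, size_t len)
{
    uint64_t h = 14695981039346656037ULL;
    for (size_t i = 0; i < len; i++) {
        h ^= (unsigned char)buf[i];
        h *= 1099511628211ULL;
    }
    return h;
}

/* The expensive step: build the lookup-friendly representation. */
static parsed_authz *parse(const char *buf, size_t len, uint64_t sum)
{
    parsed_authz *p = malloc(sizeof(*p));
    p->checksum = sum;
    p->rules = malloc(len + 1);
    memcpy(p->rules, buf, len);
    p->rules[len] = '\0';
    return p;
}

/* One global cache slot, keyed by checksum. */
static parsed_authz *cache;

/* Per-request entry point.  Reading and digesting the file still
 * happens on every request; only the parse is skipped on a hit. */
static parsed_authz *authz_get(const char *buf, size_t len)
{
    uint64_t sum = digest(buf, len);
    if (cache && cache->checksum == sum)
        return cache;                 /* hit: reuse parsed ruleset */
    if (cache) {
        free(cache->rules);
        free(cache);
    }
    cache = parse(buf, len, sum);     /* miss: parse once, keep it */
    return cache;
}

int main(void)
{
    const char *file = "[/]\n* = r\n";
    authz_get(file, strlen(file));    /* first request parses */
    authz_get(file, strlen(file));    /* second request hits the cache */
    puts("cache demo ok");
    return 0;
}
]]]

The design point is that the digest is cheap relative to a full parse,
but it still scales with file size, which is why recomputing it on every
request becomes visible once the file grows to a few hundred rules.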