On 08/30/2012 08:05 AM, C. Michael Pilato wrote:
> On 08/30/2012 06:10 AM, Justin Erenkrantz wrote:
>> On Wed, Aug 29, 2012 at 4:04 PM, C. Michael Pilato <cmpilato_at_collab.net> wrote:
>>> I misremembered Greg and Justin's attitude toward my approach, thinking they
>>> were just flatly opposed. As I re-read the relevant threads, though, I
>>> think it's clear that perhaps both my approach and their PROPFIND-Depth-1
>>> approach would be valuable. The problem, as I see it, is that the
>>> complexity of the PROPFIND-Depth-1 change is far greater than my simple
>>> patch, and nobody is stepping up to own it.
>>
>> Yes, I don't think it was that we were flatly opposed - it's that we
>> can figure out a way to reduce the number of requests even against
>> older servers - which is a good thing. But, let's not stand in the
>> way of progress if there is a new server. So, commit away! -- justin
>
> Thanks for clarifying. Before I commit away, though, it occurred to me last
> night that I've not done anything to profile the memory usage
> characteristics of this approach. I need to understand what happens if the
> REPORT response processing (and property queuing) vastly out-paces the GETs
> and such.
I hacked my mod_dav_svn to add a sleep(1) before responding to each GET
request. Funny thing ... it made my checkouts against localhost run
really slowly. ;-)
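Roughly, the hack amounted to the following (a sketch, not the literal
diff -- the deliver hook is mod_dav's standard entry point for streaming a
resource's contents back to the client, but the exact insertion point shown
here is illustrative):

   /* Sketch: stall mod_dav_svn for a second before streaming each
    * GET response body. */
   #include <unistd.h>   /* sleep() */
   #include <mod_dav.h>  /* dav_error, dav_resource, ap_filter_t */

   static dav_error *
   deliver(const dav_resource *resource, ap_filter_t *output)
   {
     sleep(1);  /* Pretend the server is slow to produce each response. */

     /* ... the usual content-delivery logic follows unchanged ... */
   }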
Anyway, my point was to see if there would be a memory usage difference
between today's ra_serf -- which fetches the properties for a node
immediately before GETting that node's contents -- and my patched one, which
I imagine would have long since finished parsing the REPORT and caching all
the properties therein before even half of the GETs were completed. I
definitely saw the two reach peak memory usage at different rates, but the
peak memory footprint looked about the same in both cases.
Theoretically, though, it seems reasonable that my approach would have the
distinct non-feature of potentially leaving the client caching the
properties for an entire tree in memory, just waiting for a place to put
them. That's obviously not ideal. The question is -- is it a practical
concern? Yes, I'm aware of issue #4194 ("serf memory leak on checkout") and
will be looking into that next.
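To put that worry in concrete terms, the failure mode would be a
pool-allocated property cache that only ever grows until the checkout
finishes. A hypothetical sketch (the names here are illustrative, not
actual ra_serf symbols):

   /* Queue a node's properties until the corresponding GET needs them.
    * If entries are only added during REPORT parsing and never removed
    * before the checkout completes, peak memory is bounded only by the
    * size of the tree being checked out. */
   #include <apr_hash.h>
   #include <apr_pools.h>
   #include <apr_strings.h>

   static void
   queue_node_props(apr_hash_t *prop_cache,  /* maps path -> prop hash */
                    const char *path,
                    apr_hash_t *props,
                    apr_pool_t *result_pool)
   {
     apr_hash_set(prop_cache,
                  apr_pstrdup(result_pool, path),
                  APR_HASH_KEY_STRING,
                  props);
   }

If profiling shows that to be a practical problem, the obvious mitigations
would be to cap the queue's depth (pausing the REPORT parse when it fills)
or to free each entry as soon as its GET has consumed it.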
-- Mike
PS: Another interesting thing I noticed was that -- judging by the
notification output, at least -- where my checkout was clearly grabbing
files in spurts of 3 or 4 at a time at the start (with these spurts coming
about a second apart, as expected), by the end it appeared to be fetching
only 1 file per second. Any ideas about what's behind this degradation?
Just an imbalance in the lengths of the various pipelined request queues on
the auxiliary connections?
--
C. Michael Pilato <cmpilato_at_collab.net>
CollabNet <> www.collab.net <> Enterprise Cloud Development