On Tue, Sep 19, 2006 at 12:38:02AM +0200, Erik Huelsmann wrote:
> Well, the patch below addresses the problem in issue 2607.
> I've whipped this patch together this evening, so, I could use some
> review (and comments about the idea), but, even at this
> proof-of-concept level, the run-time of the test script seems to have
> halved on my machine.
The general approach looks good. Writing a new log file per committed
item, then running the logs for each path together, means that we
should only rewrite the entries file once per path, rather than once
per commit item.
Are there any situations in which the fact that we're processing commit
items out of order would make a difference? The only ones I can think
of would involve overlapping items (i.e. a parent with RECURSE set as
one item, a child entry as another item). I'm not sure whether this is
even a valid scenario, and if it is, I'm not sure that it matters.
Some other observations:
* The requirement to keep the arguments to svn_wc_queue_committed() alive
until the call to svn_wc_process_committed_queue() is a little odd,
but is probably better than copying each item into the queue's pool.
We can always relax this requirement later.
* Why is remove_lock part of the commit queue item, but remove_changelist
  part of the commit operation? Wouldn't we want the ability to remove
  changelists for only some of the commit items? (Through the API; we
  wouldn't expose it through the CLI.)
* You preallocate the queue to hold 40 items - how much memory does that
actually work out to? What's the difference (speed or memory usage)
between preallocating for 40 and just allocating one item?
* It might be my mailer, but some of the changes to commit.c look to
* The comments inside svn_wc_process_committed_queue() could make it
clearer that we're writing to a log per queue item, then running the
log per changed directory.
Received on Tue Sep 19 14:13:50 2006