
RE: CVS update: MODIFIED: libsvn_wc ...

From: Sander Striker <striker_at_apache.org>
Date: 2001-09-02 21:45:46 CEST

Hi again,

>> On Fri, Aug 31, 2001 at 09:19:42AM -0500, Ben Collins-Sussman wrote:
>>>...
>>> The working copy commit-crawler needs to commit N targets spread out
>>> randomly over a large tree. Because commits are atomic, it describes
>>> the commit as *one* large edit to the tree; this means starting the
>>> commit from a common "parent" directory and then traversing
>>> directories in an intelligent order. The rule we want to follow is:
>>> after we leave a directory (go "up"), we want to close the directory.
>>> We don't want to have to randomly re-open it again.
>>
>> That's the idea...
>>
>>> Thus, the crawler
>>> is *already* sorting the targets. As Mike said, right now the targets
>>> are being qsorted alphabetically, which guarantees that all children
>>> in the same directory will be examined as a group.
>>
>> Nope. It *isn't* doing that, which is why I posted the question in the
>> first place. If we were doing a proper traversal, then we wouldn't need
>> to check whether a lock had been taken out already. Thus, Mike's change
>> to look for an existing lock is merely covering up a deeper issue (that
>> was my worry).
>
> Right. I'd like to take on this one tomorrow (Amsterdam time), if you are
> alright with it. It is only a small patch that will get the behaviour of
> visiting each directory only once (by adjusting the sorting).

I've cooked up a patch that I think will do the trick. Of course, it still
needs to be tested.

It has (IMO) one downside: it assumes that strings are NUL terminated.
OTOH, looking at the task list and issue #406, that isn't going to be a
big problem.

Oh, I also removed 'else' statements where the code had only one path to
take. IOW, ones like this:

if (a)
  return 1;
else
  return 0;

  ==

if (a)
  return 1;

return 0;

I don't know if this patch is really needed, but I do think that
visiting a dir only once justifies the extra cost of sorting like
this.

Sander

The patch:
--- libsvn_subr/path.c Fri Aug 31 00:31:20 2001
+++ libsvn_subr/path.c~ Sun Sep 2 21:33:50 2001
@@ -216,19 +216,50 @@
   size_t min_len = ((path1->len) < (path2->len)) ? path1->len : path2->len;
   size_t i;
   char dirsep = get_separator_from_style (style);
+  char *sep1, *sep2;
 
   /* Skip past common prefix. */
   for (i = 0; (i < min_len) && (path1->data[i] == path2->data[i]); i++)
     ;
 
-  if ((path1->len == path2->len) && (i >= min_len))
-    return 0;  /* the paths are the same */
-  else if (path1->data[i] == dirsep)
+  if (i >= min_len) {
+    if (path1->len == path2->len)
+      return 0;  /* the paths are the same */
+
+    if (path1->len < path2->len)
+      return -1;
+
+    return 1;
+  }
+
+  if (path1->data[i] == dirsep)
     return 1;  /* path1 child of path2, parent always comes before child */
-  else if (path2->data[i] == dirsep)
+
+  if (path2->data[i] == dirsep)
     return -1;  /* path2 child of path1, parent always comes before child */
-  else
-    return strncmp (path1->data + i, path2->data + i, (min_len - i));
+
+  /* 'Count' directory separators.
+   * This does assume that strings are NUL terminated.
+   */
+  sep1 = &path1->data[i];
+  sep2 = &path2->data[i];
+  do {
+    sep1++;
+    sep1 = strchr (sep1, dirsep);
+    sep2++;
+    sep2 = strchr (sep2, dirsep);
+  } while (sep1 && sep2);
+
+  if (sep1)
+    return 1;  /* path1 is deeper in the tree than path2 */
+
+  if (sep2)
+    return -1;  /* path2 is deeper in the tree than path1 */
+
+  /* Since we already checked the common prefix, we only need to
+   * check whether the first differing character is bigger or smaller.
+   */
+  return path1->data[i] < path2->data[i] ? -1 : 1;
 }

Received on Sat Oct 21 14:36:39 2006

This is an archived mail posted to the Subversion Dev mailing list.
