Jared Hardy wrote:
> On 8/8/06, Stefan Küng <tortoisesvn@gmail.com> wrote:
>> Just updating individual files doesn't speed up anything at all. In
>> fact, it can make things even worse. Fetching the status for a file
>> actually fetches the status for the whole folder and all files in it,
>> but then only the status of the requested file is returned (that's how the
>> Subversion function works, unfortunately).
>
> Really? Maybe I should ask Subversion dev list if this can be changed.
They already know. :)
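For the curious, this is roughly the shape of the problem as a Python
sketch (not the real libsvn code; compute_status() below is just a
placeholder for the per-entry check):

    import os

    def compute_status(entry):
        # stand-in for the real per-entry check (read .svn metadata,
        # compare timestamps/sizes, ...) -- here just a placeholder
        return "normal"

    def status_of_single_file(path):
        parent = os.path.dirname(path) or "."
        all_statuses = {}
        for name in os.listdir(parent):      # the whole folder is crawled
            entry = os.path.join(parent, name)
            all_statuses[entry] = compute_status(entry)
        return all_statuses[path]            # only one result is returned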
> In the meantime, I still think it should be possible to limit the
> TortoiseSVN status fetch to non-recursive status fetches, in only
> those paths selected. It shouldn't take much logic to detect when
> multiple files are in the same directory, and just run the status on
> that directory once non-recursively (rather than recursively on each
> and every unrelated sub-directory, as is the case now).
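The batching itself is trivial, yes. A minimal Python sketch with
made-up example paths:

    import os
    from collections import defaultdict

    def batch_by_directory(paths):
        groups = defaultdict(list)
        for p in paths:
            groups[os.path.dirname(p) or "."].append(p)
        return groups

    # one non-recursive status per directory instead of one per file
    for directory, files in batch_by_directory(
            ["a/x.c", "a/y.c", "b/z.c"]).items():
        print("status -N", directory, "->", files)

But the grouping logic is not the problem; the cost of each of those
per-directory scans is, as I explain below.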
>
>> And a full status update is necessary because you simply can't
>> guess what the status of the files will be after an update.
>
> It's not "guessing" if you know the specific list of files/paths that
> have just been updated. Also, the update operation returns status
> changes (ADMCG). Couldn't that information be used to determine the
> new status, or at least guess it with acceptable accuracy, without a
> local status scan? :) At a minimum, as I said before, you should be
> able to limit status scans to just those paths directly affected,
> non-recursively.
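Capturing those action codes from the command-line client is easy
enough (a rough Python sketch; the exact output columns are an
assumption, so treat the parsing as approximate):

    import subprocess

    # 'svn update' prints one line per touched path, starting with an
    # action code: A=added, D=deleted, U=updated, C=conflict, G=merged
    out = subprocess.run(["svn", "update"], capture_output=True,
                         text=True, check=True).stdout
    actions = {}
    for line in out.splitlines():
        parts = line.split(None, 1)
        if len(parts) == 2 and parts[0] in ("A", "D", "U", "C", "G"):
            actions[parts[1]] = parts[0]     # path -> action code
    print(actions)

But that only tells you what the update itself touched, not what local
modifications were made since, which is exactly what the status scan is
there to find out.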
"limit status scans to affected paths" - ok, try to understand here:
fetching the status of a single path is very expensive (see 'bug' report
above). Imagine you did an update/commit of 100 files, your working copy
consists of 10000 files in 100 folders. Fetching the status for those
100 files would be the same as fetching the status for the whole working
copy. (assuming all 100 folders have the same amount of files in them).
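Spelled out as a tiny Python cost model:

    # 10000 files spread over 100 folders, 100 files per folder;
    # the 100 updated files land one per folder (the assumed spread)
    folders = 100
    files_per_folder = 100
    updated_files = 100                  # one in each folder
    entries_scanned = updated_files * files_per_folder
    print(entries_scanned)               # 10000 -> the entire working copy

Every single-file status fetch drags its whole folder along, so the
"limited" scan degenerates into a full one.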
>> You need to lock a file *before you start editing it*. That's the whole
>> point of locking a file. If you just lock it before a commit, then the
>> lock is useless.
>> Maybe an "update-and-lock" is what you meant?
>
> Why would you need to lock before editing, *if the file can be
> merged*? What we want to do is lock the file (even though it can be
> merged, in order to obtain a temporary exclusive commit status on that
> file), immediately update it (in order to merge it with HEAD), deal
> with any conflicts if necessary (including a full build/test cycle in
> some cases), and then commit *last*. The current Commit operation
> order imposes a kind of "Commit and pray" workflow on the user. This
> "new" order of operations guarantees that there are no changes to the
> files between the last Update and Commit action, and thus prevents the
> "can't Commit, out of date" error, assuming the lock was not stolen or
> broken. Locking after Update does not give the same guarantee, and
> would require a redundant Update after the Lock is obtained.
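Scripted against the command-line client, the order you're asking for
would look like this (a sketch with hypothetical paths; note that the
commit releases the locks again unless you pass --no-unlock):

    import subprocess

    def svn(*args):
        subprocess.run(["svn", *args], check=True)

    files = ["src/foo.c", "src/bar.c"]   # hypothetical paths

    svn("lock", "-m", "merging with HEAD before commit", *files)
    svn("update", *files)                # merge with HEAD under the lock
    # ...resolve conflicts, build, test...
    svn("resolved", *files)              # only needed if there were conflicts
    svn("commit", "-m", "log message", *files)   # releases the locks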
I don't know how you're working in your project, but it really seems you
should change your workflow a little.
If you need to lock a file so you can merge/resolve conflicts/commit
without anyone else committing in the meantime, then maybe you should
try to improve the communication in your team. Several people working on
the very same files at the same time usually means destroying each
other's work, or the files contain too many functions/methods/classes
and should be split up into several files.
If, however, you lock a file after an update, you're locking the file
while you are working on it, until you commit that change. That's much
better than your approach of "I'm finished, now everybody hold still
until I clean up the mess from others (i.e. resolving conflicts) and can
commit".
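Compare that with the usual order, as a sketch with the same kind of
hypothetical path:

    import subprocess

    def svn(*args):
        subprocess.run(["svn", *args], check=True)

    path = "src/foo.c"                   # hypothetical path

    svn("update", path)                  # get up to date first
    svn("lock", "-m", "working on foo.c", path)  # hold it while you edit
    # ...edit, build, test...
    svn("commit", "-m", "log message", path)     # commit releases the lock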
> We have the svn:needs-lock property set on all files that can't be
> merged, so this workflow doesn't directly affect those files.
>
>> > Why should they be
>> > forced to update or refresh status on files that they absolutely know
>> > have nothing to do with the commit at hand? Even the F5 refresh forces
>> > them to wait for a full Status refresh on totally unrelated paths, and
>> > wastes a huge amount of time on large WC directories.
>>
>> Just how much time are we really talking about here?
>
> A full Update or Status-walk from WC root can take 15 to 45 minutes,
> depending more on local workstation conditions than network or server
> conditions. I think NTFS can take most of the blame here. Cleanup
You said earlier that your working copy is about 16GB, 190000 files,
90000 folders. If a status scan of that working copy takes you 15 to 45
minutes, you either have a hard drive from the last century or your
working copy resides on a network share and not your local hard drive. I
can't believe that a status scan would take that much time on a normal
local hard drive.
If you really have your working copy on a network share, then I suggest
changing that instead of implementing ugly hacks in TSVN.
>> Those are not artificial interface limits, that's how Subversion is
>> designed and works. To do what you ask for would mean writing an ugly
>> workaround to the Subversion design.
>
> I wholeheartedly disagree. Subversion does give you the option to work
> with specific path lists non-recursively, rather than forcing you to
> run every operation recursively on a base directory. The "--targets"
> and "--non-recursive" command line options are examples of this. All
Select the files you want to commit, right-click, choose "commit".
Stefan
--
___
oo // \\ "De Chelonian Mobile"
(_,\/ \_/ \ TortoiseSVN
\ \_/_\_/> The coolest Interface to (Sub)Version Control
/_/ \_\ http://tortoisesvn.tigris.org