Ben Reser wrote:
> On Wed, Jun 30, 2004 at 12:21:30PM -0500, kfogel@collab.net wrote:
>> If you rename a file F to new name N, while a user is editing F, and
>> then the user saves, what happens? They write their changes out to F
>> again. But that sounds a bit different from what you describe above.
>>
>> In general, if the window of risk is small, I think shared working
>> copy support is worth it. But we should understand exactly what can
>> happen. I don't yet feel I understand precisely what the data loss
>> scenario looks like here. Could you give a concrete example? Maybe
>> we can find a way for Subversion to at least detect that something
>> funny has happened, and warn the user who ran the update...
>
> I'd guess this is a very OS, filesystem, and editor specific
> question...
>
> But here's an example with vim and Linux on ReiserFS:
> $ echo foo > foo
> $ vi foo
> (At this point I append +localmods in my editor buffer but don't save)
>
> From a different shell I do:
> $ mv foo foo.tmp
> $ cp foo.tmp foo
>
> If I try and write to the file at this point vim says:
> WARNING: The file has been changed since reading it!!!
> Do you really want to write to it (y/n)?
>
> I say yes and:
> $ grep -H . foo*
> foo:foo+localmods
> foo~:foo
> foo.tmp:foo
>
> vim in this case will write to the proper file. I tried it again,
> doing:
> $ rm -f foo.tmp
>
> before writing to the file, and vim still behaved properly. It is
> possible that some editors will behave differently. If the editor opens
> the file, holds it open, and then writes back through the same
> filehandle, things look very different: the changes get written to
> foo.tmp. Here's an example perl script and a walk-through of what
> happens:
>
>>>>>
> #!/usr/bin/perl
>
> use Fcntl;                                # supplies SEEK_SET
>
> # Open read/write and keep the same filehandle across the rename.
> open FOO, "+<$ARGV[0]" or die $!;
> my $line = <FOO> or die $!;
> chomp $line;
> my $wait_for_input = <STDIN> or die $!;   # pause so the rename can happen
> seek FOO, 0, SEEK_SET or die $!;          # rewind the held-open handle
> print FOO "$line+localmods\n" or die $!;
> close FOO or die $!;
> <<<<<
>
>
> $ rm -f foo*
> $ echo foo > foo
> $ perl editor.perl foo &
> $ mv foo foo.tmp
> $ cp foo.tmp foo
> $ fg
> (hit enter)
> $ grep -H . foo*
> foo:foo
> foo.tmp:foo+localmods
>
> Now let's try again while deleting the temp file before writing:
>
> $ rm -f foo*
> $ echo foo > foo
> $ perl editor.perl foo &
> $ mv foo foo.tmp
> $ cp foo.tmp foo
> $ rm -f foo.tmp
> $ fg
> (hit enter)
> $ grep -H . foo*
> foo:foo
>
> And the user's changes silently disappear. I'd imagine this is the most
> common behavior. Some filesystems/OSes might have an error that the
> editor could pick up.
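The kind of check that produces vim's "file has been changed since reading it" warning can be sketched as follows. This is only an illustration of the stat-based technique, not vim's actual implementation: record the file's identity when you read it, then re-stat by name just before saving.

```python
import os
import tempfile

# Work in a scratch directory; "foo" stands in for the edited file.
d = tempfile.mkdtemp()
path = os.path.join(d, "foo")
with open(path, "w") as f:
    f.write("foo\n")

opened = os.stat(path)  # an editor would record this when it reads the file

# Simulate "mv foo foo.tmp; cp foo.tmp foo" from another shell:
os.rename(path, path + ".tmp")
with open(path + ".tmp") as src, open(path, "w") as dst:
    dst.write(src.read())

# Re-stat by *name* just before saving; the cp created a fresh inode,
# so the device/inode pair no longer matches what we opened.
current = os.stat(path)
changed = (opened.st_dev, opened.st_ino) != (current.st_dev, current.st_ino)
print(changed)  # True -- time to warn the user before writing
```

An editor that does this can at least ask before clobbering, even though it cannot undo a rename that already happened.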
>
> Frankly, I'd say that any editor that behaves like that perl script is
> broken. A proper editor should behave the way vim does: warning you
> that the file changed, and writing to the name (not the filesystem
> node). I'm gonna bet that some people, and maybe even some editor
> implementations, will disagree with me on this.
Not many, if any at all, since there's one other (formerly very common, less
so now) use case that fails under filehandle-style writes. Projects that lack effective source
control systems often end up developing a flow where you create a
hardlinked copy of the folder structure, and do your edits there. That way,
you can diff and get your changes very quickly (diff recognizes when the
two links are the same file and skips the fulltext compare).
Doing this requires that your editor write to a new path and then rename
over it (or unlink and then write). Writing and then renaming also gets you
an atomic replace, and a safety net if something goes wrong.
So, vi's behavior (though often without the helpful warning about the file
changing behind your back) is by far the prevalent way for editors to work;
it protects you from lots of potentially nasty corner cases.
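The write-then-rename save described above can be sketched like this. It's a minimal illustration in Python (not any particular editor's code); note how the hardlinked "pristine" copy keeps the old content on the old inode, which is exactly what the quick-diff workflow relies on.

```python
import os
import tempfile

# Scratch directory standing in for the hardlinked working-copy setup.
d = tempfile.mkdtemp()
path = os.path.join(d, "foo")
with open(path, "w") as f:
    f.write("foo\n")

link = os.path.join(d, "pristine-foo")
os.link(path, link)  # the hardlinked "pristine" copy used for quick diffs

# Safe save: write the complete new content to a temporary name first...
tmp = path + ".new"
with open(tmp, "w") as f:
    f.write("foo+localmods\n")
    f.flush()
    os.fsync(f.fileno())  # get the data to disk before replacing the name

# ...then atomically rename over the old name. Readers see either the old
# file or the new one, never a torn, half-written version.
os.replace(tmp, path)

with open(path) as f:
    edited = f.read()
with open(link) as f:
    pristine = f.read()
print(edited.strip(), pristine.strip())  # foo+localmods foo
```

After the save, the edited name and the hardlink refer to different inodes, so a recursive diff between the two trees does a real compare only for the files you actually touched.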
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org
Received on Fri Jul 2 00:01:37 2004