On 2007-03-21 15:50:07 +0100, Erik Huelsmann wrote:
> On 3/21/07, Vincent Lefevre <firstname.lastname@example.org> wrote:
> >On 2007-03-19 20:19:46 -0500, Ben Collins-Sussman wrote:
> >> We've had a policy (since the beginning) of not acknowledging tools
> >> that change file contents without changing the timestamp. The 2 or 3
> >> times we've discovered such tools, we've always shouted them down as
> >> dangerous and broken, and they've been fixed. I don't see the latest
> >> complaint as anything new.
> >This sucks because such tools are common under Unix. "mv" is one of
> >them. And no, such tools are NOT fixed.
> But certainly, you won't *always* replace a checked out file with one
> of *exactly* the same size, right? We do size-caching now (it will be
> available in 1.5), so that should reduce chances of running into this
Not always, of course. The problem would be rare. But losing data
(and not knowing immediately that data have been lost), even if this
is rare, is very bad.
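The scenario above can be sketched concretely. This is a minimal illustration (file names and the cache tuple are hypothetical, not Subversion's actual on-disk format) of how a same-size replacement whose mtime matches the cached one slips past a size+mtime check:

```python
# Hypothetical sketch: a size+mtime cache misses a same-size, same-mtime
# replacement moved over a working-copy file (the "mv" case from the thread).
import os
import tempfile

d = tempfile.mkdtemp()
wc = os.path.join(d, "file.txt")
with open(wc, "w") as f:
    f.write("original")                      # 8 bytes

st = os.stat(wc)
cached = (st.st_size, st.st_mtime_ns)        # what a size+mtime cache records

new = os.path.join(d, "file.new")
with open(new, "w") as f:
    f.write("ORIGINAL")                      # also 8 bytes, different content
# Give the replacement the same mtime (a tool may preserve or happen to
# match it); rename() itself does not touch mtime.
os.utime(new, ns=(st.st_atime_ns, st.st_mtime_ns))
os.replace(new, wc)                          # like "mv file.new file.txt"

st2 = os.stat(wc)
unchanged = (st2.st_size, st2.st_mtime_ns) == cached
print(unchanged)                             # True: the cache sees no change
```

The content differs, yet the cached metadata is identical, so a client trusting size+mtime would silently skip the file.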
> >BTW, has "svn export" been fixed?
> >Also, I don't see the point of not supporting such tools (possibly
> >in an optional way).
> I'm not sure this is true (I'm at work and can't test the behaviour
> right now), but possibly the --force parameter to commit will check
> the entire file before committing?
There could be such an option[*], but it would make various operations
very slow. Remember that the user doesn't necessarily know in advance
if such an option would be needed.
[*] It could be useful to do a "svn --force st" before a "rm -rf" of
the working copy.
> >> without adding any extra stat() calls... which is great. But there is
> >> *no way* we should give up our speed optimizations for 99.99% of the
> >Why would looking at ctime be slower?
> It isn't.
> >Note: the only problem with ctime is with chmod, but in general,
> >users don't do a chmod on their Subversion files, and when they
> >do it, it should be rare.
> ctime is not reliable in the face of networking filesystems.
This is the first time I've heard about that. I know that atime is
not reliable (for efficiency reasons), but what are the problems
with ctime?
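To make the chmod caveat concrete: on POSIX filesystems, chmod (and rename) update st_ctime but leave st_mtime alone, which is why ctime catches changes that mtime misses. A small sketch (POSIX semantics; on Windows st_ctime means creation time instead):

```python
# Sketch: chmod updates st_ctime but not st_mtime (POSIX behaviour).
import os
import stat
import tempfile
import time

fd, path = tempfile.mkstemp()
os.close(fd)

before = os.stat(path)
time.sleep(0.05)                 # let the clock advance past timestamp granularity
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP)

after = os.stat(path)
print(after.st_mtime_ns == before.st_mtime_ns)  # True: content timestamp untouched
print(after.st_ctime_ns > before.st_ctime_ns)   # True: inode change time moved
```

So a ctime-based check would flag a chmod'ed file as possibly changed even though its content is identical, which is the false-positive case mentioned above.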
> >> universe just to prevent dataloss for the .01% of tools that
> >> blatantly flaunt convention.
> >This is only *your* point of view.
> It is?
I was talking about the "tools that blatantly flaunt convention"
(in particular mv). Most people consider that mv behaves correctly
and should not be changed.
> If this were truly one of our biggest problems, then why aren't our
> lists flooded with bugreports/enhancement requests?
Because the problem is rare, and/or users don't necessarily know
that their data may be lost. It's like security holes: one sees
the complaints only once the users notice the consequences...
Also, most users don't report bugs, or report them in the wrong place.
> >I'm not asking to choose between performance and support for some
> >(non always broken) tools. Using ctime (possibly in addition to
> >mtime[*]) should be fine.
> Umm, well, only when not operating on network filesystems, but
> especially working copies are supposed to work on network shares.
Of course, but then, using ctime could be optional. But I'd like
to know more about the ctime problem you're talking about.
> >[*] that would take a few additional processor cycles per file, so that
> >the user won't see the speed difference.
> But the fallback is a full file compare, meaning a *lot* of extra
> i/o... (Surely a measurable difference).
I repeat that a full file compare is not a solution either, and I'm
waiting for more information about ctime.
Vincent Lefèvre <vincent_at_vinc17.org> - Web: <http://www.vinc17.org/>
100% accessible validated (X)HTML - Blog: <http://www.vinc17.org/blog/>
Work: CR INRIA - computer arithmetic / Arenaire project (LIP, ENS-Lyon)
Received on Wed Mar 21 18:24:11 2007