On Fri, Apr 20, 2012 at 11:16:27AM +0100, Philip Martin wrote:
> Johan Corveleyn <jcorvel_at_gmail.com> writes:
> > Now, all is not lost if there are mismatches: 'svn cleanup' corrects
> > the last-mod-times in the wc-metadata (for all files which are still
> > identical to the pristine, the last-mod-time in metadata will be
> > updated). But the problem is that users are lazy, and they won't run
> > 'svn cleanup' unless they know it's necessary / beneficial. So I'd
> > like to help them a bit by offering an easy way to detect if their
> > working copy contains a significant amount of timestamp-mismatches.
> Can that be done more efficiently than 'svn cleanup'? Perhaps what you
> want is a 'svn cleanup --verbose' that reports on fixed timestamps and
> redundant pristines?
I don't think adding a new command-line option for this would be useful.
Fixing up on-disk timestamps is an obscure, low-level operation.
I would guess that most users aren't even aware of the timestamp
optimisation in the first place. Many wouldn't notice that the new option
exists, and most of the rest would probably forget to use it.
Couldn't 'svn status' automatically fix the recorded timestamps in the
metadata, like 'svn cleanup' does? That way, the problem fixes itself
during normal use: there is one slow status run which compares file
contents and fixes up timestamps in one go, and subsequent status runs
are fast again.
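To make the idea concrete, here is a rough Python sketch of the timestamp
optimisation with the self-healing step added. This is not Subversion's
actual implementation; the 'recorded' metadata dict and the function name
are made up for illustration, standing in for the last-mod-time and size
stored in the wc metadata and the pristine checksum:

```python
import hashlib
import os

def status_check(path, recorded, pristine_sha1):
    """Decide whether a working file is modified, using the timestamp
    optimisation, and heal a stale recorded mtime along the way.

    `recorded` is a hypothetical metadata entry holding the last-mod-time
    and size noted when the file was last known to match its pristine;
    `pristine_sha1` is the checksum of the pristine text.
    """
    st = os.stat(path)
    if st.st_mtime == recorded['mtime'] and st.st_size == recorded['size']:
        return 'unmodified'  # cheap path: recorded timestamp still matches

    # Slow path: the timestamp mismatches, so compare actual contents.
    with open(path, 'rb') as f:
        sha1 = hashlib.sha1(f.read()).hexdigest()
    if sha1 == pristine_sha1:
        # Self-healing step: the file is still identical to the pristine,
        # so record the current mtime; the next status run can then take
        # the cheap path again.
        recorded['mtime'] = st.st_mtime
        recorded['size'] = st.st_size
        return 'unmodified'
    return 'modified'
```

The point of the sketch is the healing branch: a file that is identical to
its pristine but has a stale recorded mtime costs one full content compare,
after which the recorded timestamp is correct again.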
Of course, this would require 'svn status' to obtain a write lock on wc.db.
The status operation must not hold a write lock during regular operation
because doing so prevents concurrent read access by other clients.
To keep the window during which the write lock is held as small as possible,
we could collect a list of affected files during the status run, while a
read lock is held, and update the timestamps afterwards, but only for those
affected files which still have the same timestamp they had during the run.
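The two-phase scheme above could look roughly like the following sketch.
Again this is only an illustration, not Subversion code: a plain dict and a
threading lock stand in for wc.db and its read/write locking, and the
function names are invented. Phase 1 assumes the status run has already
verified that each candidate's contents match the pristine:

```python
import os
from threading import Lock

def collect_candidates(metadata):
    """Phase 1 (under a read lock): for each file whose recorded mtime
    disagrees with the on-disk mtime, remember the path together with the
    mtime observed during the scan."""
    candidates = []
    for path, entry in metadata.items():
        mtime = os.stat(path).st_mtime
        if mtime != entry['mtime']:
            candidates.append((path, mtime))
    return candidates

def fixup_timestamps(metadata, candidates, write_lock):
    """Phase 2 (under a write lock, after the status run): update only
    those files whose mtime is still the one observed in phase 1, so a
    file edited in the meantime is never recorded with a stale mtime."""
    with write_lock:
        for path, seen_mtime in candidates:
            if os.stat(path).st_mtime == seen_mtime:
                metadata[path]['mtime'] = seen_mtime
```

The re-check in phase 2 is what makes the short write-lock window safe: any
file touched between the scan and the fix-up simply drops out of the update.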
Received on 2012-04-20 12:36:17 CEST