
Reply: Re: Re: dangerous implementation of rep-sharing cache for fsfs

From: <michael.felke_at_evonik.com>
Date: Fri, 25 Jun 2010 18:33:31 +0200


Martin got my point:
>> It's not the probability which concerns me, it's what happens when a
hash collides. If I understood the current algorithm right, the new file
will be silently replaced by an unrelated one, and there will be no error
and no warning at all. If it's some kind of machine-verifiable file like
source code, the next build in a different working copy will notice. But
if it's something else, like documents or images, it can go unnoticed for
a very long time. The work may be lost by then. <<

The data checked into the repository is exactly like this!
It's mostly data generated by measurements: produced once,
normally never changed or regenerated, and
untouched after being used once or twice.
But then, suddenly and unexpectedly, someone comes and wants to see the data,
in the worst case to check it because of a lawsuit.
Then it's too late to realize that the data is wrong and
that the original has been silently dropped by the repository.

The major role of Subversion in our lab is to ensure that data
hasn't changed over time without being registered, and the ability
to reproduce the original data.

So I would be very glad if someone would help me implement the check.
I have already started investigating the Subversion source code
for a way to implement this.
Briefly, I think it would be a C function called by rep_write_contents_close(),
in addition to the existing if (old_rep) check, that would:
1. find the data of the old_rep in the repository
2. reconstruct its full text
3. get/find the full text of the file to be committed
4. compare them byte by byte
5. return the result of the comparison as a boolean
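A minimal sketch of step 4, the byte-wise comparison, in plain C. This is not the actual Subversion patch: steps 1-3 would need FSFS internals to reconstruct the old representation's full text, which are not shown here. The function names and the use of plain buffers/FILE* in place of Subversion streams are my assumptions for illustration.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Step 4 sketch: byte-wise comparison of two full texts held in memory.
   In Subversion the texts would come from reconstructing old_rep and
   from the incoming commit (steps 1-3); plain buffers stand in here. */
bool fulltexts_identical(const unsigned char *old_text, size_t old_len,
                         const unsigned char *new_text, size_t new_len)
{
    if (old_len != new_len)
        return false;   /* different lengths can never be the same text */
    return memcmp(old_text, new_text, old_len) == 0;
}

/* Chunked variant for texts too large to hold in memory at once;
   FILE* is a stand-in for whatever stream type the real code uses. */
bool streams_identical(FILE *a, FILE *b)
{
    unsigned char buf_a[4096], buf_b[4096];
    size_t n_a, n_b;

    do {
        n_a = fread(buf_a, 1, sizeof buf_a, a);
        n_b = fread(buf_b, 1, sizeof buf_b, b);
        /* Differing read sizes or differing bytes -> not identical. */
        if (n_a != n_b || memcmp(buf_a, buf_b, n_a) != 0)
            return false;
    } while (n_a == sizeof buf_a);  /* short read means end of stream */

    return true;
}
```

The early length check in fulltexts_identical makes the common case cheap: a hash collision between texts of different sizes is rejected without touching the content at all.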


P.S. I'm on my weekend now, so excuse that I will answer any e-mails on Monday.
Received on 2010-06-25 18:34:16 CEST

This is an archived mail posted to the Subversion Dev mailing list.