On Tue, Jan 24, 2017 at 4:45 PM, Luke Perkins <lukeperkins_at_epicdgs.us> wrote:
> I appreciate everyone's audience on this issue. I have not felt a need to
> be directly involved in the subversion system mainly because it works so
> well. This is the first time in 10 years I have felt the need to get
> directly involved in the SVN development team.
> Statement: "As a bug report alone, this one seems pretty easy."
> I completely disagree with this statement. I have nearly 300GB of dump
> files used as a means of backing up my repositories. Some of these dump
> files are 10 years old. The incremental SVN dump file is automatically
> generated at each and every commit. After these incremental SVN dump files
> are created, they are copied and distributed to offsite locations. That way
> if my server farm crashes, I have a means of assured recovery.
> Every month I run sha512sum integrity checks on both the dump files
> (remotely located in 3 different locations) and the dump file produced by
> the subversion server. Transferring thousands of 128-byte checksum files is
> a much better option than transferring thousands of megabytes of dump files
> over the internet to remote locations. This method and its automated
> scripts have worked for 10 years. I have rebuilt my servers from the
> original dump files on at least 2 occasions because of computer crashes.
> This gives me a sanity-check and validation methodology, so that I can spot
> problems quickly and rebuild before things get out of hand.
> Asking me to redistribute 300GB of data to 3 different offsite (and
> remote) locations is not a good option.
> The SVN dump file has always been presented as the ultimate backup tool of
> the subversion system. The integrity of the SVN dump file system is of
> paramount importance. The whole reason why SVN exists in the first place is
> "data integrity and traceability". The code was changed back in 2015, for
> better or worse, and we need present solutions to address legacy backups.
OK, but aren't you moving the goalposts now? You are implying those old
dump files no longer work or will not load. That is not true. The only
issue is with your own process, where you diff a dump file. Mike is simply
saying you are doing something we never claimed should work. The fact that
it did for you was just luck that may have run out.
That said, based on what I think was Julian's comment, it seemed like we
could restore the old order quite easily without breaking anything, so that
seemed harmless to me. Nor do I see a negative impact on 1.9.x users, whose
order would now change, for the same reason: we are still not claiming the
order is significant.
Mike seemed to be pushing back on trying to formalize support for something
we specifically do not support, which is that the headers in the dump file
appear in a specific order. I cannot really disagree with him on that.
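Since header order within a dump record is exactly what is not guaranteed, one way to compare legacy dump files without depending on it is to parse each header block into a mapping before comparing. A minimal sketch (the header names are real dump-format headers, but the parser is illustrative, not part of any SVN tool):

```python
# Illustrative sketch: compare dump-file header blocks while treating
# header order as insignificant, since SVN does not guarantee the order.
def parse_headers(block):
    """Parse 'Key: value' lines into a dict, discarding their order."""
    headers = {}
    for line in block.splitlines():
        if not line.strip():
            break  # a blank line ends the header block
        key, _, value = line.partition(": ")
        headers[key] = value
    return headers

# Two records that differ only in header order are treated as equivalent.
old = "Node-path: trunk/foo.c\nNode-kind: file\nNode-action: add\n"
new = "Node-action: add\nNode-path: trunk/foo.c\nNode-kind: file\n"
print(parse_headers(old) == parse_headers(new))  # True
```

A textual diff of the same two records would report every line as changed, which is the failure mode being discussed here.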
Received on 2017-01-24 23:06:56 CET