Replying to various. I'm making a Dropbox-alike client that uses
Svn/WebDav/Autoversioning as the server. Critical design goal: to *not*
have a classic Svn working tree locally. Think 50GB of binary files sync'd
down to a client, and a wish to not have that take 100GB of local storage.
> What would content hashes provide that comparing node-rev id's would not?
I can detect a change to a file client-side, without a Subversion working
tree. I store the SHA-1 as the server had it, and would recalculate it for
every file changed, via an inotify/FSEvents/ReadDirectoryChangesW
notification mechanism, before pushing up to the svn server (curl push).
I can't calculate a node id on the client side. That's a function of an
actual commit. I'd need double the storage to maintain a checkout's
working-copy/tree, and that defeats a design goal.
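A minimal Python3 sketch of that detection step, assuming a hypothetical `server_hashes` dict (relative path -> hex SHA-1 as last seen on the server) in place of whatever local state file the real client keeps:

```python
import hashlib
from pathlib import Path

def sha1_of(path, chunk_size=1 << 20):
    """Stream-hash a file so a 50GB tree never has to fit in memory."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def changed_files(root, server_hashes):
    """Yield relative paths whose content no longer matches the SHA-1
    the server had - no Svn working copy (.svn pristine copies) needed."""
    for path in Path(root).rglob("*"):
        if path.is_file():
            rel = path.relative_to(root).as_posix()
            if sha1_of(path) != server_hashes.get(rel):
                yield rel
```

In the real client this would be driven by the inotify/FSEvents/ReadDirectoryChangesW events rather than a full rescan, but the comparison is the same.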
Regardless of whether you folks implement the server-side hashes or not,
I'm close to completing a Python3 script that does all the above. It just
has to do the calculations as soon as items come down from svn to the client.
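For the "curl push" half, a sketch of building the plain HTTP PUT that an autoversioning-enabled mod_dav_svn server turns into a commit. The repo URL is a placeholder; the actual send is left commented out since it needs a reachable server:

```python
import hashlib
import urllib.request

def build_put(repo_root, rel_path, data):
    """Build (but don't send) the PUT request, and return alongside it the
    SHA-1 we record locally as 'what the server now has' once it succeeds."""
    url = repo_root.rstrip("/") + "/" + rel_path
    req = urllib.request.Request(url, data=data, method="PUT")
    req.add_header("Content-Type", "application/octet-stream")
    return req, hashlib.sha1(data).hexdigest()

# req, digest = build_put("http://svn.example.org/repo", "docs/a.bin", payload)
# urllib.request.urlopen(req)  # autoversioning server commits this as a new rev
```

Recording the digest at push time is what lets the client later distinguish "I changed this" from "the server changed this" without a working copy.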
> Node-rev id's get changed on every text change, property change, or copy
of the node itself, but aren't changed when a parent of the node
If you implement SHA-1 Merkle trees for items held in Svn, please exclude
properties related to merge-tracking from the amalgam of what you're hashing.
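To illustrate the exclusion being asked for, a sketch of a per-node hash that folds versioned properties in alongside the content SHA-1 but skips merge-tracking ones (here assumed to be just `svn:mergeinfo`), so a record-only merge doesn't ripple fresh hashes up the tree:

```python
import hashlib

MERGE_TRACKING_PROPS = {"svn:mergeinfo"}  # assumption: the props to exclude

def node_hash(content_sha1, props):
    """Amalgamate content SHA-1 + versioned properties into one node hash,
    in sorted property order so the result is deterministic."""
    h = hashlib.sha1(content_sha1.encode("ascii"))
    for name in sorted(props):
        if name in MERGE_TRACKING_PROPS:
            continue
        h.update(name.encode("utf-8") + b"\0")
        h.update(props[name].encode("utf-8") + b"\0")
    return h.hexdigest()
```

With that, a change to `svn:mergeinfo` alone leaves the node hash (and everything above it) untouched, while any real content or property change does not.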
As an aside, there's a technology called 'SparkleShare' for Git (& a Git
remote) that does file sync, for which I *also* have a pull request in that
introduces Svn as a backend (svn client required; uses an Svn working copy) -
https://github.com/hbons/SparkleShare/pull/1721. For extra shits and
giggles I have a Perforce capability under development too -
Note too, I would love it if y'all would circle back to
https://issues.apache.org/jira/browse/SVN-4454 for an implementation.
Received on 2016-09-26 13:26:12 CEST