Hi,
I've read through the mailing lists and documentation, and I'm wondering how
other people deal with large, fast-changing binary data.
For example, we have data directories within our projects which are on the
order of a couple of hundred megabytes. The data is compressed, and the
changes to it are (a) frequent and (b) wide-ranging - i.e. the data itself
changes significantly with each change.
In an ideal world, we would be able to put this under Subversion's control,
but be able to tell Subversion not to keep a history of these
directories/files unless, for instance, we specifically wanted a certain
revision to be kept. Basically, the ability to selectively overwrite the
head rather than create a new revision.
That is, we go through 4 weeks of the data changing; each time it changes, the
data is committed to the repository but just overwrites the previous
revision. Then we hit a milestone and want to be able to get back the
specific data currently in the repository. For the next four weeks the data
changes quickly again - but the repository now only maintains 2 versions of
the data.
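For what it's worth, the workaround I've seen suggested for this pattern is to keep the fast-changing data outside Subversion entirely and only archive it when you want a revision kept. A minimal sketch, assuming a plain milestone-tarball scheme (the paths, the function name, and the milestone tag are all made up for illustration):

```shell
#!/bin/sh
# Sketch of the "snapshot at milestones" workaround: the churning data
# lives outside version control; a tarball is taken only when a
# milestone state should be kept. Everything here is hypothetical.
snapshot_data() {
    data_dir=$1     # directory holding the fast-changing data
    snap_dir=$2     # where milestone tarballs accumulate
    tag=$3          # milestone name, e.g. "milestone-2005-05"
    mkdir -p "$snap_dir"
    # Archive the current state of the data under the milestone name.
    tar -czf "$snap_dir/$tag.tar.gz" -C "$(dirname "$data_dir")" \
        "$(basename "$data_dir")"
}

# Example: create some demo data, then keep it as milestone-1.
mkdir -p data && echo "demo" > data/blob.bin
snapshot_data data snapshots milestone-1
ls snapshots
```

Between milestones the data directory just gets overwritten in place, so nothing grows except one tarball per milestone - which is roughly the "2 versions" behaviour described above.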
I've read about svn obliterate and other related areas - it seems that this
sort of behaviour isn't supported.
I guess I'm not asking a specific question, more just wanting to know how
other people deal with this sort of thing. Obviously my concern is that if
we put the data in question under Subversion's control, the repository will
quickly grow to such a huge size that it won't be practical.
Many thanks for any help/suggestions!
Cheers,
- Dave.
Received on Thu May 19 10:50:52 2005