On 2009-07-29 11:15, Greg Troxel wrote:
> I have a system with ~30 users, with each having an 8G or so svn working
> copy. This is too much data to back up on a poor little LTO-2 drive
> shared with many other systems. (The repository is of course backed up;
> I'm talking about vast numbers of working copies.) If it were all
> useful data, then we'd get bigger tape drives or something else. If it
> were all just copies, we'd not back it up. But sometimes, somewhere in
> that 8G is 50KB of real work that hasn't been committed yet.
>
> Does anyone know of a way to integrate svn and backups (GNU tar because
> this system is GNU/Linux, but many of my others are NetBSD with dump) so
> that modified files are backed up, and unmodified files are not? I am
> thinking of running a cron job that does 'svn status' and generates an
> exclude list (or sets the nodump flag with chflags) based on what is
> modified, or really on whatever isn't known to match the repo or
> marked as ignored.
>
> It would be cool to have svn modify the exclude list on the fly, perhaps
> storing it in a database. This seems tricky because modifying a file
> for the first time is not necessarily an svn operation, so I think the
> cron job approach is probably best.
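
For what it's worth, the exclude-list variant of that cron job could
look roughly like the sketch below. It uses 'svn status --xml' so the
parsing doesn't depend on the column layout of the plain output; the
list-file path and the tar command at the end are placeholders, not a
recommendation. It does still have to rescan the whole working copy on
every run, though:

    #!/usr/bin/env python3
    # Sketch: collect every path 'svn status' considers not clean and
    # write it to a list that GNU tar can consume with -T/--files-from.
    import subprocess
    import sys
    import xml.etree.ElementTree as ET

    def dirty_paths(wc_root):
        """Yield paths svn reports as anything but clean or ignored."""
        xml_out = subprocess.run(
            ["svn", "status", "--xml", wc_root],
            capture_output=True, text=True, check=True).stdout
        for entry in ET.fromstring(xml_out).iter("entry"):
            item = entry.find("wc-status").get("item")
            # 'normal' means the file matches its repository text.
            if item not in ("normal", "ignored", "external"):
                yield entry.get("path")

    if __name__ == "__main__":
        wc = sys.argv[1] if len(sys.argv) > 1 else "."
        with open("/tmp/svn-backup-list", "w") as listfile:
            for path in dirty_paths(wc):
                listfile.write(path + "\n")
        # Then, e.g.: tar -czf wc-changes.tar.gz -T /tmp/svn-backup-list
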
How about having Subversion set the nodump flag on any file it
creates that is directly reproducible from the repository? Editors
should use a temp-file write followed by a rename(2), which will
effectively wipe the nodump flag off the file when it is first
modified.
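
In Python terms it might look like the rough sketch below. Note the
assumptions: os.chflags()/UF_NODUMP is the BSD interface (the
approximate Linux ext2/ext3 analogue is 'chattr +d'), and the function
names here are made up for illustration:

    import os
    import stat
    import tempfile

    def set_nodump(path):
        # Mark a clean, repo-reproducible file as not worth dumping.
        # BSD-style flags; works where os.chflags() is available.
        os.chflags(path, os.stat(path).st_flags | stat.UF_NODUMP)

    def editor_style_save(path, data):
        # The usual editor save: write a temp file in the same
        # directory, then rename(2) it over the original.  The rename
        # is atomic on POSIX, and the replacement is a brand-new inode
        # with default flags, so any nodump flag on the old file is
        # dropped, which is exactly the behaviour described above.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
        try:
            with os.fdopen(fd, "w") as f:
                f.write(data)
            os.rename(tmp, path)
        except BaseException:
            os.unlink(tmp)
            raise
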
--
Alec.Kloss@oracle.com  Oracle Middleware
PGP key: http://pgp.mit.edu:11371/pks/lookup?op=get&search=0x432B9956