> I just noticed that a dump of an entire repository is much smaller than
> the sum of the files that hot-backup.py creates. In my case, 129MB for the
> dump file versus 429MB from hot-backup.py.
>
> I presume the difference arises from the Berkeley DB files being quite
> sparse. Is there anything one can do to pack them down a bit better
> for a hot backup?
>
> Also, given that a full dump contains all the information needed to
> recreate a repository, why doesn't the hot-backup script simply do a dump
> instead?
>
> This would save an enormous amount of space. E.g., a tarred/gzipped dump
> of my repository is just 28MB versus the 429MB from hot-backup.py!!!
Well, I've been running a modified hot-backup.py script for some time now
that has some features you might like. It runs db_checkpoint on the
repository first, so the backup doesn't carry a pile of BDB log files that
are no longer needed, then tars and bzip2-compresses the directory and
saves only that archive.
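
The heart of it is just those two steps. A minimal sketch of the approach
(not the actual script; the paths and names here are illustrative
assumptions):

    #!/usr/bin/env python
    # Sketch: checkpoint a BDB-backed repository, then archive it.
    # REPO and BACKUP_DIR are illustrative, not the real script's values.
    import os
    import subprocess
    import tarfile
    import time

    REPO = "/path/to/repo"           # assumed repository location
    BACKUP_DIR = "/path/to/backups"  # assumed destination directory

    # Force a checkpoint (-1) on the Berkeley DB environment under db/
    # so older log files are no longer needed for normal recovery.
    subprocess.check_call(["db_checkpoint", "-1", "-h",
                           os.path.join(REPO, "db")])

    # Tar and bzip2-compress the whole repository directory.
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = os.path.join(BACKUP_DIR, "repo-%s.tar.bz2" % stamp)
    with tarfile.open(archive, "w:bz2") as tar:
        tar.add(REPO, arcname=os.path.basename(REPO))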
I only run it nightly from a cron job, though, as it's a little
CPU-intensive for a per-commit run.
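
As for doing a dump instead: nothing stops you, and for pure space it
wins, as your numbers show. A hedged sketch of that variant (again, the
paths are assumptions):

    #!/usr/bin/env python
    # Sketch: dump-based backup, gzip-compressed on the fly.
    # REPO and OUT are illustrative assumptions.
    import gzip
    import shutil
    import subprocess

    REPO = "/path/to/repo"                 # assumed repository location
    OUT = "/path/to/backups/repo.dump.gz"  # assumed output file

    # Stream the portable dump format and compress it as it arrives.
    dump = subprocess.Popen(["svnadmin", "dump", "--quiet", REPO],
                            stdout=subprocess.PIPE)
    with gzip.open(OUT, "wb") as out:
        shutil.copyfileobj(dump.stdout, out)
    dump.stdout.close()
    if dump.wait() != 0:
        raise RuntimeError("svnadmin dump failed")

The trade-off is restore time: a dump stream doesn't include hooks or
repository configuration, and restoring means an svnadmin create plus
svnadmin load rather than just copying a directory back into place.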
I'll send you a copy if you are interested.
Michael