
Re: svntar, anybody?

From: Ph. Marek <philipp.marek_at_bmlv.gv.at>
Date: 2007-07-03 14:46:59 CEST

On Dienstag, 3. Juli 2007, Ben Collins-Sussman wrote:
> I have to admit, I'm lost at this point. Reading through this entire
> thread, it seems like the list of requirements keeps changing. Phil,
> can you be *really specific* and list *all* the requirements in one
> neat list?
OK, let me start afresh.

I want to push a (big) set of files as fast as possible to a number of
clients, which have no prior version available because they're freshly
installed. This happens in parallel, with a change granularity of one day.
The machines do not have identical hardware.

I'd like to avoid harddisk seeks, so a single large stream is preferred; CPU
usage (on the central machine) should also be kept as low as possible.

My conclusions (feel free to skip them, if you think they're invalid or
misleading, or have better solutions):

- Doing exports/checkouts directly from the repository takes a lot of time
  (CPU for decompression, seeks across revision files, etc.)
- Doing an rsync from a (daily) exported/updated directory containing some
  hundred thousand files costs CPU, too. And seeking ...
- Simply copying a partition pushes a big number of zeroes; and the partition
  sizes on the clients can differ (they've got varying harddisks), so they'd
  have to copy some standard image and resize it afterwards.
- Using partimage or similar has the same problem.
- Of course it's possible to export/update into a temporary directory and
  create a tar from there. But if that middle step can be avoided, the
  additional space isn't needed; and it's faster, so the image can be
  regenerated more often.
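To illustrate the last point: the core of a hypothetical "svntar" would be
generating the tar stream directly from file data as it comes out of the
repository, never touching a temporary directory. A minimal sketch (the
(path, bytes) pairs here are a stand-in for data read from the repository;
the function and names are mine, not an existing tool):

```python
import io
import tarfile

def stream_tar(entries, out):
    """Write a tar stream to `out` directly from (path, bytes) pairs,
    without materializing the files on disk first."""
    # Mode "w|" writes a non-seekable stream, suitable for piping
    # straight to many clients (e.g. via a socket or multicast sender).
    with tarfile.open(fileobj=out, mode="w|") as tar:
        for path, data in entries:
            info = tarfile.TarInfo(name=path)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))

# Hypothetical stand-in for file contents fetched from the repository:
entries = [("etc/motd", b"hello\n"), ("usr/share/doc/README", b"docs\n")]
buf = io.BytesIO()
stream_tar(entries, buf)

# The result is an ordinary tar archive the clients can unpack:
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r|") as tar:
    names = [m.name for m in tar]
print(names)  # → ['etc/motd', 'usr/share/doc/README']
```

Because the archive is written sequentially, the server side does a single
linear pass with no seeks, which is exactly the access pattern wanted above.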

Any better ideas?



Received on Tue Jul 3 14:46:52 2007

This is an archived mail posted to the Subversion Dev mailing list.
