On Tuesday, 3 July 2007, Ben Collins-Sussman wrote:
> I have to admit, I'm lost at this point. Reading through this entire
> thread, it seems like the list of requirements keeps changing. Phil,
> can you be *really specific* and list *all* the requirements in one
> neat list?
OK, I'll start afresh.
I want to push a (large) set of files as fast as possible to a number of
clients that have no prior version available, as they're freshly installed.
This happens in parallel, with a change granularity of one day.
The machines do not have identical hardware.
I'd like to avoid hard-disk seeks, so a single large stream is preferred; CPU
usage (on the central machine) should be kept as low as possible.
My conclusions so far (feel free to skip them if you think they're invalid or
misleading, or if you have better solutions):
- Doing exports/checkouts directly from the repository takes a lot of time
  (CPU for decompression, seeks in the revision files, etc.)
- Doing an rsync from a (daily) exported/updated directory containing a few
  hundred thousand files also costs CPU time. And causes a lot of seeking ...
- Simply copying a partition pushes a big amount of zeroes; and the partition
  sizes on the clients can vary (as they've got different hard disks), so
  they'd have to copy some standard image and resize it afterwards.
- Using partimage or similar has the same problem.
- Of course it's possible to export/update into a temporary directory and get
  a tar from there. But if the middle step can be avoided, it doesn't need
  the additional space; and it's faster, so the image can be regenerated more
  often.
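As a minimal local sketch of that last point (with hypothetical stand-in
paths): the exported tree is tarred and unpacked in one pipeline, so no
intermediate archive is ever written to disk. In the real setup the pipe
would run over ssh or netcat to the client instead of into a local directory:

```shell
set -e

SRC=$(mktemp -d)   # stands in for the daily svn export on the server
DST=$(mktemp -d)   # stands in for the client's target filesystem

# Fake a small exported tree (in reality: svn update "$SRC" once per day):
mkdir -p "$SRC/trunk/subdir"
echo "payload" > "$SRC/trunk/subdir/file.txt"

# One sequential read on the server, one sequential write on the client;
# tar streams the whole tree without creating an archive file:
tar -C "$SRC" -cf - . | tar -C "$DST" -xf -

# Verify the trees match:
diff -r "$SRC" "$DST" && echo "trees identical"
```

This still pays for the on-disk export, of course; it only removes the
intermediate tarball from the picture.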
Any better ideas?
Regards,
Phil
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org
Received on Tue Jul 3 14:46:52 2007