On Tue, 24 Oct 2000, Greg Stein wrote:
> It could probably do diffs, but I'll have to get some stuff implemented
> because I'm not exactly sure how we'll be implementing the diff draft
> (referenced from the webdav-design.html document in CVS). Specifically, that
> should have some "Vary:" headers which would help control how the proxies
> will cache and under what "key", if you will.
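For concreteness, the exchange that delta draft describes might look roughly like this (a hedged sketch: the A-IM/IM/Delta-Base headers and the 226 status come from draft-mogul-http-delta, but the URL, entity tags, and the exact Vary value here are illustrative guesses, not the actual design):

```http
GET /repos/foo.c HTTP/1.1
Host: svn.example.com
If-None-Match: "1.3"
A-IM: vcdiff

HTTP/1.1 226 IM Used
ETag: "1.4"
IM: vcdiff
Delta-Base: "1.3"
Vary: A-IM
```

The Vary: A-IM line is the relevant part here: it tells a proxy that the cached entry is keyed not just on the URL but also on what kind of delta the client asked for.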
It won't cache intelligently, though (diffs between arbitrary versions
that are a subset of what's been fetched), without some serious work at the
caching-proxy level. Unless every update is fetched as a series of diffs
between sequential versions of a file (e.g., updating from 1.2 to 1.4 of a
given file transfers diff(1.2, 1.3) and diff(1.3, 1.4) instead of
diff(1.2, 1.4)), it's going to be difficult for the proxy to respond to a
request for diff(1.3, 1.4) if all it's got is diff(1.2, 1.4). So some
way of supporting that latter case would be really interesting to me.
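To make the sequential-diff point concrete, here's a toy sketch (pure illustration, nothing like the real wire protocol or Subversion's code): a proxy cache keyed on (path, from, to) can serve a later request for diff(1.3, 1.4) only if the earlier update was fetched hop by hop. Integer revisions 2..4 stand in for 1.2..1.4.

```python
class DiffCache:
    """Toy proxy cache keyed by (path, from_rev, to_rev)."""

    def __init__(self):
        self.store = {}
        self.origin_fetches = 0

    def fetch_diff(self, path, from_rev, to_rev):
        key = (path, from_rev, to_rev)
        if key not in self.store:
            self.origin_fetches += 1  # simulated round trip to the origin server
            self.store[key] = f"diff({path}, {from_rev}->{to_rev})"
        return self.store[key]

    def update(self, path, from_rev, to_rev, sequential=True):
        """Fetch the diffs needed to move a working copy from from_rev to to_rev."""
        if sequential:
            # One request per hop: diff(2,3), diff(3,4), ... -- each hop
            # gets its own cache key, so each is independently reusable.
            return [self.fetch_diff(path, r, r + 1) for r in range(from_rev, to_rev)]
        # One combined request: diff(2,4) -- cacheable only under that exact key.
        return [self.fetch_diff(path, from_rev, to_rev)]

cache = DiffCache()
cache.update("foo.c", 2, 4)   # fetches diff(2,3) and diff(3,4) from the origin
cache.update("foo.c", 3, 4)   # pure cache hit, no origin fetch
print(cache.origin_fetches)   # -> 2
```

If the first update had been fetched as the single combined diff(2, 4), the later request for diff(3, 4) would have missed the cache entirely.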
> But the bulk: big time.
Given that checkouts involve a big batch of bytes initially, sure. However,
a checkout of a file at a given version and a checkout of the same file
after the next commit are not going to share anything cached; they'll have
separate cache keys, I think. So it's not quite as romantic as 100%
replicated repositories - though getting there can be done, and won't
require modifications to the installed client base.
> Just think... an SVN repository getting picked up and shared across the
> akamai caching network! Woo!
That might be tough, though, since Akamai is a read-only network, as far
as I know. An svn checkout of http://blah.akamai.net/blah implies a commit
against that same resource, doesn't it? I'm sure Akamai could implement
something that supports that, but it won't just work out of the box.
BTW, replicated repositories are something we (collab.net) are *very*
interested in helping see happen.
Received on Sat Oct 21 14:36:12 2006