Stefan Sperling wrote on Fri, 27 Mar 2020 14:41 +0100:
> On Thu, Mar 26, 2020 at 08:32:38PM +0000, Daniel Shahaf wrote:
> > What I'm worried about is that we'll apply the "backwards compatibility
> > until 2.0" promise to something we shouldn't. As I said, issues of
> > this nature can't generally be found by tests; they can only be found
> > by code review.
>
> I'm not worried about that. After reading through the patch I don't see
> any reason why we would not want to commit to supporting this feature.
>
I wasn't discussing the build-repcache patch specifically; I was
commenting in general terms about the risks of rushing patch review
processes lest the patches in question miss a minor release train.
"We'll just fix it in a patch release" is the right attitude, but
doesn't always apply.
> Do you have concrete concerns? I can't see any downsides.
> So far, I believe this new subcommand will be very useful.
>
No, I don't. What concerns I had, I already raised upthread.

Well, I guess the documentation, once written, should point out that if
any single revision that build-repcache operates on adds a large number
of reps, commits made in parallel to the build-repcache run will be
starved out of adding their own rep-cache entries, which will manifest
as "post-commit FS processing failed". (We have a bug filed for this,
but I can't find its number right now, sorry.)
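
(Tangentially, for anyone who wants to check whether a parallel commit
lost its entries that way: assuming a stock FSFS repository, the rep
cache is just an SQLite table in db/rep-cache.db, so something along
these lines, with $REV standing in for the revision in question, should
show whether that revision contributed any rows:

    sqlite3 /path/to/repos/db/rep-cache.db \
        "SELECT COUNT(*) FROM rep_cache WHERE revision = $REV;"

A zero can of course also mean the revision simply added no new reps,
so treat it as a rough check only.)
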
> The only existing alternative is a full dump/load which is expensive
> to perform on large repositories.
> I've seen several cases in recent years where servers were upgraded but
> admins decided against doing a full dump/load cycle of repositories
> sized in the GB/TB range because the perceived benefits didn't justify
> the extra effort.
>
*nod*
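
(For reference, the cycle being avoided there is essentially the
following, with placeholder paths:

    svnadmin dump /path/to/old-repos > repos.dump
    svnadmin create /path/to/new-repos
    svnadmin load /path/to/new-repos < repos.dump

plus copying hooks/ and conf/ across and scheduling downtime for the
switchover, which is understandably a hard sell for repositories in the
GB/TB range.)
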
This reminds me that we don't have good test coverage for on-disk and
over-the-wire interoperability with older versions (other than
upgrade_tests_data/, which is wc-only). This would be a good thing to
set up some day…
> I know of at least two cases where rep-sharing was temporarily or perhaps
> even permanently disabled after SHA1 collisions were committed (one case
> is webkit, I learned about another case via elego's SVN support).
> These repositories are now growing faster than they would if rep-caching
> could be re-enabled easily.

I don't follow. There's nothing stopping the admins of those
repositories from reënabling rep-cache as soon as they upgrade to
a Subversion release that has the sha1 collision fixes. That will solve
the growth problem going forward. Compared to this, the benefits of
running build-repcache on those repositories are limited.
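
(Concretely, and assuming FSFS with the default layout, re-enabling
should be just a matter of flipping the knob in the repository's
fsfs.conf:

    # /path/to/repos/db/fsfs.conf
    [rep-sharing]
    enable-rep-sharing = true

No dump/load is required for that; it only affects how new reps are
stored from then on.)
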
Cheers,
Daniel