Daniel Shahaf wrote on Thu, 07 May 2020 21:58 +0000:
> What I'm trying to do here is to get a _problem statement_ to which your
> patch is a solution, so we can consider other solutions. You wrote
> > rep-cache verification leads to errors for other writes to the rep-cache
> which is accurate, but doesn't explain _why_ that is a problem (other
> than in terms of the three listed consequences, but you also said fixing
> at least one of _those_ wouldn't address your use-case); and you also
> > rep-cache verification runs with holding a SQLite lock for a long time
> which is an accurate description of the implementation, but is not
> a problem statement.
> I hope I don't sound like a broken record here, but I do think getting
> a clear problem statement is important. Importantly, the problem
> statement should be made _without reference to the patch_. Explain
> where the incumbent code falls short and in what way.
The way to get a problem statement is:
1. Write a sentence that explains the problem.
2. Write the sentence you would say if someone read the last
sentence you wrote and responded by enquiring "Why is that a problem?".
3. Goto 2.
The termination condition (not spelled out) is "unless, in your
professional opinion, the explanation is sufficiently detailed". That
is admittedly nebulous, though I suspect it's closer to being in
consensus than to being subjective.
> > The limitation seems too strong and so I think it would be better to
> > fix the problem with holding a SQLite lock during entire verification.
> > > For example, how about sharding rep-cache.db to multiple separate
> > > databases, according to the most significant nibbles of the hash value?
> > > This way, instead of having one big database, we'd have 2**(4N) smaller
> > > databases.
> > If I am not mistaken, the sizes of such shards will still be O(N), but not
> > O(1), as in the patch. Therefore, this problem can still occur at some
> > rep-cache sizes.
> (Please don't overload variables; that makes conversation needlessly
> difficult.)
> With the proposal, the number of shards will be O(number of reps /
> 2**(number of significant bits used to identify a shard)). That's an
> inverse exponential in the number of bits used to identify a shard, so
> if the shards grow too large in the admin's opinion, the admin simply
> has to double the number of bits used to identify a shard.
(Exponential functions usually go with O(1) _additive_ increases in the
exponent, as opposed to multiplicative ones, so "double" overstates it:
adding a bit or two to the shard identifier already halves or quarters
the shard sizes. My bad.)
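For concreteness, the sharding scheme under discussion could be sketched
roughly as follows. This is a hypothetical illustration, not actual
Subversion code; the names `shard_db_name`, `expected_shard_rows`, and
`NIBBLES` are mine:

```python
import hashlib

# Rough sketch of the proposal: route each representation's SHA-1
# digest to one of 2**(4*N) rep-cache shards according to its N most
# significant nibbles. All names here are hypothetical; this is not
# how Subversion's rep-cache is implemented today.

NIBBLES = 2  # N = 2 nibbles (8 bits) -> 2**8 = 256 shards

def shard_db_name(sha1_hex, nibbles=NIBBLES):
    """Pick the shard database for a given SHA-1 hex digest."""
    return "rep-cache-%s.db" % sha1_hex[:nibbles]

def expected_shard_rows(total_reps, nibbles=NIBBLES):
    """Expected rows per shard: total reps / 2**(4 * nibbles)."""
    return total_reps / 2 ** (4 * nibbles)

digest = hashlib.sha1(b"some representation").hexdigest()
print(shard_db_name(digest))            # shard name for this digest
print(expected_shard_rows(1_000_000))   # 3906.25 rows per shard
```

The last function shows the point made above: per-shard size falls
exponentially in the number of identifying bits, so an additive bump to
`NIBBLES` is enough when shards grow too large in the admin's opinion.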
Received on 2020-05-08 00:45:25 CEST