On 2 Mar 2017, at 6:37, Daniel Shahaf wrote:
> Garance A Drosehn wrote on Wed, Mar 01, 2017 at 14:48:07 -0500:
>> I do not see how this extended-sha1 would be easier to
>> break than the current sha1, because it includes the
>> current sha1, unchanged.
> Regarding "easier to break", you were probably thinking of
> collision attacks, which are never made easier by extending
> the output. I was thinking of "first preimage" attacks
> (finding a message that has a given checksum). Extending
> the output can make _these_ attacks easier.
Yes, in the context of subversion I'm thinking of collision
attacks, and only collision attacks.
> That's called a "second preimage attack". The extended
> construction is not secure against such an attack: a hash
> function with a 320-bit output is supposed to require
> O(2³²⁰) evaluations to find a second preimage, but it's
> trivial to find a second preimage for F and G simultaneously
> with only O(2¹⁶⁰) evaluations. (Do you see that?)
But consider that sha1 has a 160-bit output. With the paper
that Google just released, did they need to do O(2¹⁶⁰)
evaluations to create a collision?
When I looked at their announcement, I noticed the
following comments:
"In practice, collisions should never occur for secure
hash functions. However if the hash algorithm has some
flaws, as SHA-1 does, a well-funded attacker can craft
a collision."
As to how much work it was:
"- 6,500 years of CPU computation to complete the
attack first phase.
- 110 years of GPU computation to complete the
attack second phase."
"While those numbers seem very large, the SHA-1
shattered attack is still more than 100,000 times
faster than a brute force attack, which remains
impractical."
My thinking is that their attack is not a brute-force attack,
and thus assuming the ideal of O(2¹⁶⁰) is misleading.
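To sanity-check that "100,000 times faster" figure (the exponents below are my own reading of the announcement, not its exact numbers): a generic brute-force collision search on a 160-bit hash is bounded by the birthday paradox at roughly 2^80 evaluations, and the shattered attack is reported at roughly 2^63 SHA-1 computations.

```python
# Rough arithmetic behind the "100,000 times faster" claim.
# The exponents are approximations: 2^80 is the generic birthday
# bound for colliding a 160-bit hash, and ~2^63 is the reported
# cost of the shattered attack.
brute_force = 2 ** 80   # generic birthday-bound collision cost
shattered = 2 ** 63     # approximate reported attack cost
speedup = brute_force / shattered
print(f"speedup ~ {speedup:,.0f}x")  # 2^17, i.e. on the order of 100,000x
```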
"In 2013, Marc Stevens published a paper that outlined
a theoretical approach to create a SHA-1 collision"
I'll admit that I have no idea how his theoretical insight
would apply to a double-sha1 hash. But if all it does is
get us back to O(2¹⁶⁰) (despite using 320 bits!), that
leaves us 100,000 times better off than we currently are.
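To make the construction concrete, here is a minimal sketch of the kind of 320-bit "extended sha1" I have in mind. The second hash's input transform (the fixed "svn-ext:" prefix) is something I made up purely for illustration; this is not how Subversion would actually do it.

```python
import hashlib

def extended_sha1(data: bytes) -> bytes:
    """Hypothetical 320-bit 'extended sha1': the plain SHA-1 digest
    (unchanged) concatenated with a second SHA-1 over a trivially
    transformed copy of the input.  The prefix transform is purely
    illustrative."""
    f = hashlib.sha1(data).digest()                # current 160-bit sha1
    g = hashlib.sha1(b"svn-ext:" + data).digest()  # second 160-bit hash
    return f + g                                   # 320 bits total

digest = extended_sha1(b"hello")
assert digest[:20] == hashlib.sha1(b"hello").digest()
print(len(digest) * 8)  # 320
```

The first 160 bits are the current sha1, unchanged, which is what I meant earlier by it including the current sha1.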
>> [...]. But it might be
>> faster to use this tactic instead of making the much more
>> substantial change of totally dropping sha1 for sha256.
>> (I don't mean it will necessarily be faster to implement,
>> but just that it might run faster in day-to-day operations)
> I concede the performance issue.
Indeed, performance is the only reason I suggested this. Some
comments in this thread were concerned about the performance
hit of moving to sha256.
I was also thinking it *might* be easier to make a double-sha1
repository backwards-compatible with older versions of svn.
But I do not know anything about the code in subversion, so
that's total speculation on my part.
Thanks for the extra info. The paper was interesting, even
if some of it went over my head...
Garance Alistair Drosehn = drosih_at_rpi.edu
Senior Systems Programmer or gad_at_FreeBSD.org
Rensselaer Polytechnic Institute; Troy, NY; USA
Received on 2017-03-02 19:32:07 CET