
DJB has been complaining about this NSA position since 2022 (I guess long before it was an issue at the TLS WG):

https://blog.cr.yp.to/20220805-nsa.html

I'm actually quite surprised that anyone is advocating the non-hybrid PQ key exchange for real applications. If it isn't some sort of gimmick to allow NSA to break these, it's sure showing a huge amount of confidence in relatively recently developed mechanisms.

It feels kind of like saying "oh, now that we can detect viruses in sewage, hospitals should stop bothering to report possible epidemic outbreaks, because that's redundant with the sewage monitoring capability". (Except worse, because it involves some people who may secretly be pursuing goals that are the opposite of everyone else's.)

Edit: DJB said in that 2022 post

  > Publicly, NSA justifies this by
  > 
  > - pointing to a fringe case where a careless effort to add an extra security layer damaged security, and
  > - expressing "confidence in the NIST PQC process".


> I'm actually quite surprised that anyone is advocating the non-hybrid PQ key exchange for real applications.

Why is that so surprising? Adopting new cryptography by running it in a hybrid mode alongside the cryptography it's replacing is generally not standard practice, and multi-algorithm schemes are pretty niche at best (TrueCrypt/VeraCrypt are the only non-PQ cases that come to mind, although I'm sure there are others). You could certainly argue that PQ algorithms are untested and risky in a way that no earlier new algorithm was, and that a hybrid scheme therefore makes the most sense, but it's not such an obviously correct argument that anyone arguing otherwise must be either stupid or malicious.


There are probably other periods when I might have advocated running hybrids of different families of primitives, although I'm not sure I was ever following the details closely enough to have actually done so.

The cool thing is the dramatic security improvement against certain unknown unknowns for approximately linear additional work and space. That seems like a pretty great deal for the defender, although seriously arguing it quantitatively requires some way to reason about the unknown unknowns (the reductio ad absurdum being that we would need to use every relevant primitive ever published in every protocol¹).
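To make "approximately linear additional work and space" concrete, here's a minimal sketch of the usual "concatenation combiner" shape (my own illustration, not any WG's exact construction): derive the session key from both an X25519 shared secret and a PQ shared secret, so an attacker has to break both. X25519 and HKDF below are from the real pyca/cryptography package; the ML-KEM half is faked with random bytes, since I'm not assuming any particular binding.

  import os
  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
  from cryptography.hazmat.primitives.kdf.hkdf import HKDF

  def combine(classical_ss: bytes, pq_ss: bytes) -> bytes:
      # Concatenate both shared secrets and feed them to one KDF:
      # the derived key stays secret unless *both* inputs are broken.
      return HKDF(
          algorithm=hashes.SHA256(),
          length=32,
          salt=None,
          info=b"hybrid-kex-demo",
      ).derive(classical_ss + pq_ss)

  # Classical half: a real X25519 exchange.
  alice, bob = X25519PrivateKey.generate(), X25519PrivateKey.generate()
  classical_ss = alice.exchange(bob.public_key())

  # PQ half: placeholder bytes standing in for an ML-KEM-768
  # decapsulation (hypothetical; no real binding is assumed here).
  pq_ss = os.urandom(32)

  session_key = combine(classical_ss, pq_ss)

The cost is just the sum of the two exchanges plus one KDF call, which is where the "approximately linear" claim comes from.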

I see PQC as somehow very discontinuous with existing cryptography, both in terms of the attacks it tries to mitigate and the methods it uses to resist them. This might be wrong. Maybe it's fair to consider it an evolutionary advance in cryptographic primitive design.

The casual argument from ignorance is that lattices are apparently either somewhat harder to understand, or just less studied overall, than the other structures that public-key primitives have been built on, to the extent that we probably wouldn't currently use them in practical cryptography at all if it weren't for the distinctive requirement of resisting quantum algorithms. I realize this isn't quantitative or even particularly qualitative (for instance, I don't have any idea what about lattices is actually harder to understand).
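For what it's worth, the LWE problem itself is easy to state even if the hardness story isn't. A toy numpy sketch, with parameters purely illustrative and nowhere near secure:

  import numpy as np

  rng = np.random.default_rng(0)
  q, n, m = 97, 4, 8                   # modulus, secret dim, sample count

  s = rng.integers(0, q, size=n)       # secret vector
  A = rng.integers(0, q, size=(m, n))  # public random matrix
  e = rng.integers(-2, 3, size=m)      # small noise
  b = (A @ s + e) % q                  # public noisy inner products

  # Given only (A, b), recover s. Without e this is just linear
  # algebra mod q; the small noise is what is believed to make it
  # hard, and schemes like Kyber build key exchange on top of that.

Of course, stating the problem says nothing about whether its hardness is well enough understood, which is the actual worry.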

Essentially, in this view, we're being forced into using weird esoteric stuff much earlier than we'd like because it offers some hope of defending against other weird esoteric stuff. Perhaps this is reinforced by, for example, another LWE submission having been called "NewHope", connoting to me that LWE was thought even by many of its advocates to offer urgently-needed "hope", but maybe not "confidence".

I'd prefer not to have that argument purely in terms of vibes (and DJB does have some more concrete arguments: the security of SIKE was radically overestimated, and the security of LWE methods moderately so, which means we need some way to model how much of the problem the competition process actually identified and how much may remain to be discovered). I guess I just need to learn more math!

¹ I think I remember someone at CCC saying with respect to the general risk of cryptographic backdoors that we should use hybrids of mechanisms that were created by geopolitical rivals, either to increase the chance that at least one party did honest engineering, or to decrease the chance that any party knows a flaw in the overall system! This is so bizarre and annoying as a pure matter of math or engineering, but it's not like DJB is just imagining the idea that spy agencies sometimes want to sabotage cryptography, or have budgets and staff dedicated to doing so.


Expand on "recently-developed mechanisms".


I don't have a good sense of what to point to as the "mechanism".

https://en.wikipedia.org/wiki/Lattice-based_cryptography#His...

Earlier (1990s, for lattice math in general), 2005 (LWE), 2012 (LWE for key exchange), 2017 (Kyber submission), or later still (competition modifications to Kyber)?

I can see how one could view the mathematics as moderately mature (comparable in age to ECC, but maybe less intensively studied?). As above, I don't quite know how to think about whether the "thing" here is properly "lattices", "LWE", "LWE-KEX", "Kyber", or "the parameters and instantiation of Kyber from the NIST PQ competition". Depending on where we focus our attention, that gives us some timeframe from the 1980s (published studies of the computational complexity of lattice problems) to August 2024 (adoption of the NIST PQ FIPS documents).

Edit: The other contextual thing that freaks out DJB, for those who might not be familiar, is that one of the proposals NIST was considering, SIKE, made it all the way to the final (fourth) round of consideration, whereupon it was completely broken by a couple of researchers bringing mathematical insight to bear. Now, SIKE had a very different architecture from the other fourth-round proposals, so part of the debate is which lesson the break teaches. One reading: the NIST competition came extraordinarily close to approving something that was totally broken, so maybe it wasn't actually that great at evaluating candidate algorithms, or at least the mathematics community's understanding of post-quantum key exchange is still immature. The other reading: SIKE's architecture was so weird and distinctive that it was hard to understand or analyze, or hard to motivate relevant experts to analyze, unlike the other candidates, which were and are much better understood. It seems like DJB is saying the former and you're saying the latter.



