Quantum Algorithms for Lattice Problems
eprint.iacr.org

I work on homomorphic encryption, and there are some rumors circulating that, if this checks out, it will break some of the leading FHE schemes like BFV, where the moduli used are quite large (in the hundreds of bits or even over a thousand bits).
… only if scalable quantum computers exist.
If scalable quantum computers do not exist, we do not need PQC.
We need PQC about 20 years before practical, scalable gate quantum computers appear (if they can do all the right gates).
I think that this will be signaled when someone factors a 32 bit integer on one. At that point I guess it'll be about 20 years before someone can factor a 2048 bit integer, and I'll get twitchy about what I am sending over the wire with PKI. My feeling is that all my secrets from 20 years ago are irrelevant to life now so I feel 20 years of warning is quite sufficient.
We are within 20 years of scalable quantum computers already.
The record for integer factoring on quantum computers was on the order of factoring fifteen into three times five the last time I checked. Can we do three digits now?
Significantly larger numbers than 15 have been factored [1] but not using Shor's algorithm. Shor's algorithm is particularly sensitive to noise/errors in your quantum computer and isn't going to be useful unless we get a properly error corrected machine working. The algorithms used in [1] are considerably less fancy (with worse asymptotic performance) but are more resilient to noise.
I couldn't quickly find any info, but does this algorithm show the kind of exponential quantum speed-up needed to break RSA? Because if it's just slightly faster than the best known classical algorithms, then it's entirely irrelevant to the question of when we need to switch our encryption schemes (even though it may be a significant advancement in the area of quantum algorithms research).
I think it's unknown, but my feeling is that the answer is almost certainly no.
These sort of variational algorithms are appealing (to some people) because they're potentially usable on the sort of noisy small quantum computers we have today and in the near term future, but they aren't very fancy. I think in general what you'd expect to get out of them is a sort of Grover's algorithm-like square root speedup.
It's generally believed that the algorithm is somewhere in between ECM and the quadratic sieve (so slower by a super-polynomial factor than NFS, which is the best classical algorithm)
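For reference, the standard heuristic running times being compared here (these are textbook estimates, not something established in this thread) look like this in L-notation:

    % L-notation: L_N[a, c] = exp((c + o(1)) (ln N)^a (ln ln N)^{1-a})
    \begin{align*}
    \text{GNFS (best classical):} \quad & L_N\!\left[\tfrac{1}{3},\, \sqrt[3]{64/9}\,\right] \approx L_N\!\left[\tfrac{1}{3},\, 1.923\right]\\
    \text{Quadratic sieve:} \quad & L_N\!\left[\tfrac{1}{2},\, 1\right]\\
    \text{ECM ($p$ = smallest prime factor):} \quad & L_p\!\left[\tfrac{1}{2},\, \sqrt{2}\,\right]\\
    \text{Shor's algorithm:} \quad & O\!\left((\log N)^3\right) \text{ (polynomial)}
    \end{align*}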
that paper is factoring with an algorithm that almost certainly isn't polynomial time. That paper is only slightly better than the quantum factoring algorithm of making a quantum computer perform trial division.
I agree
And to extend off this comment, there are methods being worked on for building qubits that are intrinsically noise-free and don’t need the exponential number of error correcting operations. When those are available, you’ll see a step function increase in capabilities.
For a circuit of size C, the size of a fault tolerant circuit to compute the same thing is O(C polylog C)
Technically correct is the best kind of correct.
>When those are available
Pretty big if
We’re working on it.
Interesting - why is Shor's sensitive to noise? Is that the R-phase gates?
Yeah, for Shor's algorithm to factor an integer of order 2^k you need controlled phase gates with phases of roughly order 2^{-k} (very roughly, with some caveats, but let's just say you need some small ones). These very small phase gates are susceptible to even very small errors.
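To put rough numbers on "small" (a back-of-the-envelope sketch, not from the paper; the 2n control-register size is the textbook layout and exact qubit counts vary by circuit):

    import math

    def smallest_qft_angle_log10(n_qubits: int) -> float:
        """log10 of the smallest controlled-phase angle in an exact n-qubit QFT.

        The textbook QFT uses controlled-R_k rotations with angle 2*pi / 2**k
        for k = 2..n, so the smallest angle is 2*pi / 2**n. Work in log10 to
        avoid floating-point underflow for large n.
        """
        return math.log10(2 * math.pi) - n_qubits * math.log10(2)

    # Assumption: Shor's algorithm on an n-bit modulus uses a control
    # register of roughly 2n qubits (textbook layout).
    for modulus_bits in (16, 64, 1024, 2048):
        n_qubits = 2 * modulus_bits
        print(f"{modulus_bits:5d}-bit modulus -> smallest QFT angle ~ "
              f"10^{smallest_qft_angle_log10(n_qubits):.0f} rad")

    # For a 2048-bit modulus the smallest exact rotation is around 10^-1232 rad,
    # far below any physical gate precision -- which is the intuition behind
    # "sensitive to noise". (Caveat: the approximate QFT lets you drop the
    # tiniest rotations with only logarithmic loss, so this is an intuition,
    # not an impossibility proof.)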
This is a gross oversimplification. For the true version see here
Took me a while to read it - seems to be a pretty significant takedown of anything that uses a QFT - basically reality won't permit it.
This is a minority point of view on quantum computing, as I understand.
Wot? Science isn't a democracy! The parent refs a preprint from a very reputable author, which has been somewhat peer-reviewed already *
Now, I got to the bottom of page 6 and my maths failed me: I can't follow the expansion, but I expect that the reviewers of Physica A or wherever the gentleman who wrote this sends it off to will be able to check. I do follow the principle of the proof, though, and it's pretty intuitive to me, for what that's worth.
Anyway, I can't say I give a hoot what the majority or minority think - and nor should anyone else. Read the paper for yourself and make up your mind.
* The author thanks Al Aho, Dan Boneh, Péter Gács, Zvi Galil, Fred Green, Steve Homer, Leonid Levin, Dick Lipton, Ashwin Maran, Albert Meyer, Ken Regan, Ron Rivest, Peter Shor, Mike Sipser, Les Valiant, and Ben Young for insightful comments. He also thanks Eric Bach for inspiring discussions on some of the number theoretic estimates, and we hope to report some further improvements soon [7]. A similar result can be proved for Shor's algorithm computing Discrete Logarithm, and will be reported later.
To be clear: there are two related but ultimately separate claims here.
1. Shor's algorithm won't work on the very noisy quantum computer we have for the near and intermediate future.
2. Shor's algorithm won't work on a hypothetical error corrected future quantum computer.
Claim 1 is pretty convincingly proved in the paper. Claim 2 is not. The author puts forward some arguments for claim 2 in the introduction and conclusions but explicitly states that he does not prove it.
I think the point of view that the person you're replying to is talking about is claim 2. There are pretty good reasons to believe that claim 2 is false, in my opinion; in particular, we have threshold theorems for quantum error correction which should "save the day" for quantum computing.
I am embarrassed because I looked up the quantum error correction paper and (guess what) I'm totally out of my depth on it!
So I could be being a complete plonker here, but what I can understand tells me that for quantum error correction there's an error rate which is the lowest bound on what can be corrected, but my reading of the Shor's Algorithm paper is that when there's noise the algorithm just doesn't work - so n>nc as n is 1?
Ok so the relevant error rate you can think of as a quantity measuring (either on average or worst case) how far the state you produced in your quantum computer is from the state you wanted to create. I.e. if the error rate is small the states are close, and if it's large they're very different. You can also model the noisy system as something like: reality rolls a die and randomly chooses whether to apply the operation you wanted or do something different. If the error rate is small then most of the time it does what you wanted.
The point of the Shor's algorithm paper is that Shor's algorithm doesn't scale; that is, you might be able to factor some numbers, but if you have some fixed nonzero error then you can't factor bigger numbers just by adding more qubits.
On the other hand the point of the threshold theorem for quantum error correction is that as long as the error rate you have is less than some critical value, then you can make your error rate smaller by adding more qubits.
The way this works is that you can use a bigger quantum system to simulate a smaller one with a smaller error rate. Let's say (for example) you can use your bigger quantum system to simulate a smaller one with half the error rate. Then you could add another layer of simulation so now your physical system is simulating a smaller system, which is simulating a smaller system which has a quarter of the error rate. People have analysed how many extra qubits you need for this sort of error correction and it essentially adds a polylogarithmic overhead to your requirements. Polylog is very good scaling, but the constants are (as far as I know) pretty big right now and therefore impractical.
If you do this "properly", and someone manages to build a physical system you can scale like this with an error rate below the required threshold then this essentially circumvents the problem in the Shor's algorithm paper, you add more physical noisy qubits and reduce the error in your simulated "logical" qubits.
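A toy version of that scaling argument (a sketch with made-up numbers: a 1% threshold and 7 physical qubits per logical qubit per level, roughly Steane-code-like; not a statement about any real device):

    def concatenated_error(p_phys: float, p_th: float, levels: int) -> float:
        """Textbook concatenated-code estimate: each level of encoding takes
        the effective error rate from p to p_th * (p / p_th)**2, so errors
        are suppressed doubly exponentially in the number of levels as long
        as p_phys < p_th."""
        p = p_phys
        for _ in range(levels):
            p = p_th * (p / p_th) ** 2
        return p

    # Illustrative numbers only -- the real threshold and code size depend on
    # the code and the noise model.
    P_TH, CODE_SIZE, P_PHYS = 1e-2, 7, 1e-3

    for k in range(5):
        logical = concatenated_error(P_PHYS, P_TH, k)
        overhead = CODE_SIZE ** k  # physical qubits per logical qubit
        print(f"levels={k}: logical error ~ {logical:.1e}, "
              f"qubit overhead x{overhead}")

The point of the printout is that the error rate falls much faster than the qubit overhead grows, which is where the polylogarithmic overhead claim comes from.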
The author of the Shor's algorithm paper essentially doesn't believe this threshold theorem stuff is actually going to work in reality, partly because they think quantum mechanics is wrong (it is, but it's wildly unclear if it's wrong in a way that would cause problems for the threshold theorem).
Then there are the hypothetical quantum computer systems whose error rates are so low as to be negligible and you don't need error correction at all. Those may be on the horizon as well.
thank you
I'm not sure that's the right question. It's more: is there a chance at all of anyone figuring it out? Given the enormous scale of the security risk that poses, we should start proactively mitigating those threats. If fusion energy goes from perpetually 10 years away to suddenly here, that's pretty much just a white swan. If quantum computers happen, that's a global security risk before it's a civilizational upgrade.
The last time I checked they even cheated to factor fifteen
You should check again. Numbers like 1099551473989 have been factored successfully by now. The arxiv link in the sibling post is a good start.
biggest number factored by a quantum computer isn't the right question. the right question is biggest number factored using a polynomial time algorithm. the answer to that as far as I know is still 15 (although I would be interested in papers that show more progress)
This is one of the things I really resent about QC as a field - there's so much chaff, where one paper will say "we can do x" and the reality is that x does not mean what everyone thought it meant. Number of qubits is another thing - also what gates are implemented in the devices, how long they can run for, etc etc etc.
Application of Shor's algorithm is currently limited by available error correction. Long-lived qubits would eliminate that need and drastically increase capabilities.
I'm not sure that you are correct. I've tried to read https://arxiv.org/abs/2306.10072 in the last day and if my reading is right (I am very stretched by this stuff so I am very happy to be corrected) then no amount of error correction will rescue Shor's - only zero error phase gates. I suspect that a similar story is true for native QML, as quantum memory scales it's just going to get exponentially harder to maintain it.
That’s what I’m saying, effectively zero error phase gates are on the horizon. My company is working on the tech that would make them possible, for example, and we have competitors working on other paths to the same thing.
I believe the current record using Shor's algorithm is 31, done by IBM recently.
we need sources in this thread
If you want a number accompanied by a scientific publication, the best you get is 21: https://www.nature.com/articles/s41598-021-95973-w
IBM has gone through 2 generations of chips since then.
But have they factored anything bigger?
They have reportedly made it to proving that 31 is prime, as I said earlier, using their 1000-qubit chips.
proving primality is doable in polynomial time without a quantum computer, so that's hardly impressive.
Homomorphic encryption is not the same thing as post quantum crypto?
No, they're orthogonal terms. Homomorphic encryption is encryption where a specific operation on ciphertexts (e.g., ×) translates into an operation on the underlying plaintexts (e.g., +). With fully homomorphic encryption, there are even two such ciphertext operations (and corresponding plaintext operations).
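A minimal illustration of that ciphertext-to-plaintext correspondence, using textbook (unpadded) RSA, which happens to be multiplicatively homomorphic. Toy parameters only; real FHE schemes like BFV/BGV/CKKS are lattice-based and work quite differently:

    # Toy illustration of "operation on ciphertexts = operation on plaintexts".
    # Textbook RSA is not secure as used here; this is just the property.
    p, q, e = 61, 53, 17
    n = p * q                      # 3233
    phi = (p - 1) * (q - 1)        # 3120
    d = pow(e, -1, phi)            # private exponent (Python 3.8+)

    enc = lambda m: pow(m, e, n)
    dec = lambda c: pow(c, d, n)

    m1, m2 = 7, 11
    c1, c2 = enc(m1), enc(m2)

    # Multiplying ciphertexts multiplies the underlying plaintexts (mod n):
    assert dec(c1 * c2 % n) == (m1 * m2) % n
    print("Enc(7) * Enc(11) decrypts to", dec(c1 * c2 % n))  # 77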
Post quantum crypto is cryptography that cannot be broken by a quantum computer. This is rather nebulous, since we haven't yet discovered all possible algorithms that can run on quantum computers. Before you know it, someone comes along and finds a new efficient algorithm for quantum computers that breaks something thought to be post-quantum. Which is what is happening here - if the results stand up under scrutiny.
Sidenote: it may turn out that any crypto scheme which supports some operation on ciphertexts that translates into an operation on the plaintexts is quantum-resilient (or, vice versa, quantum-vulnerable). But that would require a formal proof.
Homomorphic Encryption does often use lattice mathematics
But classically secure FHE is still a useful thing (even if it is broken by hypothetical quantum computers).
FHE is still only known from lattices, and has nothing to do with post-quantum computers.
I wouldn't bet against the existence of a modern Bletchley Park analogue.
Just a bit more improvement and they might be able to use a computer that doesn't exist to break an encrypting scheme nobody uses. Alarming.
Major systems and big companies like Google are already mid-transition to PQC. So it is alarming.
More to the point, the purpose of the encrypting system nobody uses is to have something to use if anybody ever makes the computer that doesn't exist. Now if that happens, what?
We really need to get people to take really complicated risks that might never come to pass much more seriously. Perhaps someone smart can explain the really complicated risks that might never come to pass to the government that doesn't really look beyond the three year time horizon and get them to allocate some of their money that doesn't really exist to help.
Furthermore this could have implications for fully homomorphic encryption schemes based on lattices. But nonetheless I laughed :)
So a thing which is currently useless because it runs at a speed that makes the Harvard Mark I look fast, might be rendered useless if a thing that doesn’t physically exist despite decades of effort is constructed? :P)
Google has dozens of chrome extensions in their app store that anyone can check in 2 mins are plain malware, and they do nothing about it. If they cared about security that's what they would be working on, these guys just want to publish papers.
I'm sure they have thought more about how to prioritize security threats than an anonymous internet commenter.
The fact that you work at Google and did not care to ask what are the extensions just confirms to me nobody there cares.
I’ll bite; what are some of these extensions?
HBO Watch Party. It relays a fake customer support chat if you visit a site like United Airlines, which puts you in touch with scammers (it probably does other malware-y stuff too). A friend almost got scammed by this; they reported it to someone they know who works at Google and a couple of months later the extension is still up.
Tbh that is the only actual example I know, but after poking around a bit, people who actually know about security say that's the state of things with these extensions and app store apps, and nobody at Google seems to think fixing it is their job.
Funny thing is, they were asking this google friend for advice about getting rid of the malicious chat before they realized it was this chrome extension. The advice the google employee gave was to format the computer (it wouldn't have fixed it because once they logged into chrome again all the extensions would come back).
Hard sell that people running this clown show could be doing PQC in any meaningful sense (other than publishing papers. The papers are fine).
There was a previous one removed a few months ago for malware called HBO Max Watch Party. Was that it? If you have a specific extension id I can file a bug on your behalf.
And after reading about the situation internally, I can confirm there are dozens of people working on this problem, and that you have no idea what you're talking about. So please try to be a bit more humble.
Actually never mind, I double-checked and it was just HBO Watch Party (it is still up and has the malware). I would appreciate it if you can take a look at this.
https://chrome.google.com/webstore/detail/hbo-watch-party/dn...
This is the link to the malicious extension.
It has been removed, along with a dozen others that did similar tricks. I also looked for a prior report and didn't find any for this extension, which suggests to me that the extension has not been reported before. I suggest in the future using the existing malware reporting forms on the Chrome extension store, rather than venting in HN comment threads.
Yes, I am checking the link my friend sent me now; it was that one, and it is down. Thank you for your interest.
"One person doesn't care, therefore nobody cares"
Sadly you are like the 6th google employee I personally told about this (and it is still up).
Arrogance.
A fitting reply to a total non-sequitur, more like. A huge corp's handling of browser extensions has absolutely zero to do with encryption algorithms, and security is such a big field that "care about security" means nothing at all.
The comment was just a chance to vent anger at Google in an unproductive way.
It is a pretty random example, but it is meant to say that the math is rarely the limiting factor for security. People spend time thinking about this type of stuff because they like it, not because it is actually important for security.
In my mind RSA is the last instance of a mathematical development changing the game of security. After that it is twists of the same idea on more obscure mathematical objects, and pyrotechnic protocols that only the truly unhinged (ethereum people) are willing to try out in practice.
Their deployment is additive. You would need to break both the PQC and classical schemes, so they'd be unaffected here.
They wouldn't be immediately hacked, especially as this is a quantum algorithm anyway. But if it turns out that the current PQC schemes are not quantum-resistant, then that work will need to be redone (unless the progress in quantum computing stalls out, I guess). The current result does not break Kyber / Dilithium / NTRU variants / Falcon / FrodoKEM even assuming it's correct, but obviously there's some concern that a follow-up result might improve on it.
The NIST process has been running for 7 years, though they do have a few "non-lattice" schemes waiting for a 4th round of standardization: the code-based schemes Classic McEliece, BIKE and HQC. We could switch over to those, and the work to add crypto-agility to protocols would not be wasted, but the work on lattice software and hardware would be largely wasted.
Also, error-correcting codes are solving short-vector problems in a lattice! But since the lattice has a different shape maybe it would be fine? After codes the list gets pretty thin... like there's CSIDH, but it's very slow, has partial quantum attacks, and it isn't very trusted after SIKE got broken in half.
There's always post-quantum RSA: https://eprint.iacr.org/2017/351.pdf. Yes, it sucks, but at least for the quantum computers we're likely to have 20 years from now, you could probably get away with a 1 GB key...
Lamport signatures work and are PQC. There are solutions that are practical to use (1 GB RSA keys are not). Just not drop-in replacements without large tradeoffs.
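For the curious, Lamport's scheme really is simple enough to sketch in a few lines; the "large tradeoffs" are the roughly 8-16 KB keys and signatures, and the fact that each key can sign only one message (a toy sketch, not production code):

    # Minimal Lamport one-time signature over SHA-256. Real deployments need
    # Merkle trees / SPHINCS+-style machinery to get many-time keys.
    import hashlib
    import secrets

    H = lambda b: hashlib.sha256(b).digest()

    def keygen():
        # 256 pairs of random preimages; the public key is their hashes.
        sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
        pk = [(H(a), H(b)) for a, b in sk]
        return sk, pk

    def sign(sk, msg: bytes):
        digest = H(msg)
        bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]
        # Reveal one preimage per message bit. Signing two messages with the
        # same key leaks enough preimages to forge -- hence "one-time".
        return [sk[i][bit] for i, bit in enumerate(bits)]

    def verify(pk, msg: bytes, sig):
        digest = H(msg)
        bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]
        return all(H(sig[i]) == pk[i][bit] for i, bit in enumerate(bits))

    sk, pk = keygen()
    sig = sign(sk, b"hello pqc")
    assert verify(pk, b"hello pqc", sig)
    assert not verify(pk, b"tampered", sig)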
Headline should be "polynomial time quantum algorithms for solving lattices" or somesuch. The polynomial time aspect is the main contribution here - and also why this is attracting attention.
Some initial Reddit discussion here: https://www.reddit.com/r/cryptography/comments/1c0vlfx/the_f...
And a question at crypto.stackexchange: https://crypto.stackexchange.com/q/111385
How does this affect these statements on Wikipedia [1]
> some lattice-based constructions appear to be resistant to attack by both classical and quantum computers. Furthermore, many lattice-based constructions are considered to be secure under the assumption that certain well-studied computational lattice problems cannot be solved efficiently.
and [2] ?
> One class of quantum resistant cryptographic algorithms is based on a concept called "learning with errors" introduced by Oded Regev in 2005.
[1] https://en.wikipedia.org/wiki/Lattice-based_cryptography
[2] https://en.wikipedia.org/wiki/Ring_learning_with_errors_key_...
The idea of "appear to be resistant to attack" is an empirical one. When someone says that, they are saying that we simply have not found a good attack against this problem. That can change any day, in principle. Unfortunately, "we don't know of an attack" is about as strong a statement as you can make in cryptography when talking about a fundamental hardness assumption. More verbosely, you'd say "the best known attacks take 2^whatever operations on a computer (classical or quantum), and that's expensive, so we're probably fine unless someone makes a significant leap tomorrow"
imo, this isn't quite true. there are a lot of areas where we can say "this looks sufficiently secure for now, but given the rate of advancement in this area in the last decade, we expect it will probably lose a few bits of security in the next decade"
CRYSTALS-Kyber, NTRU, SABER, CRYSTALS-Dilithium, and FALCON are lattice-based method finalists in NIST PQC Round 3.
[1] NIST Post-Quantum Cryptography Standardization: https://en.wikipedia.org/wiki/NIST_Post-Quantum_Cryptography...
The NTRU article mentions PQ resistance to Shor's only, plus other evaluations, and notes that IEEE Std 1363.1 (2008) and the X9 financial industry spec already specify NTRU, which is a Round 3 finalist lattice-based method.
In [1] Under "Selected Algorithms 2022", the article lists "Lattice: CRYSTALS-Kyber, CRYSTALS-Dilithium, FALCON; Hash-based: SPHINCS+".
Round 4 includes Code-based and Supersingular elliptic curve isogeny algos.
FWIU There's not yet a TLS 1.4/2.0 that specifies which [lattice-based] PQ algos webservers would need to implement to support a new PQ TLS spec.
Do you know how little we know? We don't even know P isn't PSPACE!
People seem to be focusing on the fact that this wouldn't break the leading NIST PQC public-key cryptosystems, but I think that misses the point. This takes a problem at the core of their security, for which previously only exponential approximation factors were reachable, and reaches a polynomial approximation factor. Sure, that polynomial, O(n^4.5), is too large to break the leading proposed systems, but I mean, are you really feeling safe when an exponential just changed to a polynomial?
An analogy would be something like this. Factoring is hard. We base RSA on the hardness of this problem, and there we use numbers that are the product of two primes. Someone just found an algorithm that can't factor a product of two primes, but can take a product of four primes and split it into two products of two primes. Do you feel safe with RSA?
Anyway the paper could be wrong or it could be right, it will take a while for those in the field to dig through this. As a cautionary tale, there have been a few extra good quantum people who have proposed quantum attacks on lattice problems that have later been shown to have bugs.
The running time of attacks hasn't suddenly become O(n^4.5). The latter figure describes the noise ratio for which the LWE assumption becomes broken in quantum polynomial time.
The proposed post-quantum encryption schemes use a much smaller noise ratio which (at the moment) is not affected by these attacks.
I didn't say the runtime, did I? The approximation went from an exponential noise ratio to a polynomial noise ratio. This just went from 2^n to n^4.5 and everyone seems to say "oh, this is fine".
The attackable noise ratio did not go from exponential to polynomial either. It went from classically subexponential to quantumly polynomial.
Yes, subexponential, which is splitting hairs: exp(O(n log log n / log n)). Thanks for the acknowledgment that I didn't say runtime.
Are the OpenSSH lattice instances, or DJB's, affected by this problem?
Does this result apply to all LWE problems? Does this approach care about LWE vs Ring-LWE at all?
If so, it's a big blow to systems like FrodoKEM that banked on unstructured lattices providing higher security.
Not a lattice expert, so add salt to taste, but it looks like LWE in general (including RLWE).
But the current attack essentially wants q > n^2, so even if it is confirmed, not all LWE schemes are dead. There will certainly be people who tweak the params in response and carry on.
However, attacks only get better. And for people in FHE who are squeezed between performance problems and dangerously thin security parameters, it is a bad day if confirmed. There's no credible practical alternative to LWE for FHE...
RingLWE security reduces to LWE via a relatively simple reduction (see https://www.jeremykun.com/2022/12/28/estimating-the-security...).
Hello everyone. I am a college student and currently new to this field. If possible, can someone explain in simple terms what real future impact this paper could have?
It would be silly not to first ask your interpretation, given that you are a college student.
Since this is about quantum computing, real world effects are very likely to be none except an exorbitant amount of grant money.
Some post-quantum signatures like CRYSTALS-Dilithium are based on lattices. Makes me think that quantum key distribution (what I've been working on for the past 6 months) has a chance to actually become useful instead of being only of interest to academics and to a few companies that sell overpriced solutions to paranoids.
QKD does not solve the problem that quantum computers create, and cannot replace public-key cryptography. That's a common misconception that the marketing departments of QKD research try to keep alive.
Even under ideal conditions (whether these can exist is debatable), the best QKD gives you is a securely encrypted channel only when you already have a securely authenticated channel. The latter is extremely important, makes the whole thing mostly useless, and is often omitted by QKD advocates.
If you don't have an authenticated channel, you are susceptible to a MITM attack which makes any asymmetric crypto useless. Thus I think there is an implicit assumption in any asymmetric crypto that you already have an authenticated channel. Or did I miss something?
Grossly simplifying, Alice and Bob may establish an authenticated channel either by physical means (a wire) or by some combination of certificates/passwords and out-of-band authentication. Most of the time, QKD implicitly assumes the former - a line-of-sight connection or a fiber-optics cable. In these circumstances the parties might as well exchange flash drives with one-time pads, similarly to how the Kremlin-White House hotline was protected.
I'm not a huge fan of QKD, but there is a potential use case for it. Basically, for digital signatures we have schemes like SPHINCS+, and perhaps also PICNIC and FAEST, which don't require "mathematically structured" assumptions like other public-key crypto, but instead are secure based on not much more than one-way functions. If (and it's a big if) quantum computers can break all those structured assumptions but not AES/SHA, then we would still have secure public-key signatures, certificates etc but not KEMs.
But QKD can, in principle, securely distribute keys if you have a way to exchange quantum state (e.g. line-of-sight or some sort of currently-nonexistent quantum router) and a classical authenticated channel. SPHINCS+ could provide that authenticated channel. In that case QKD would enable secure key exchange even between parties who don't have a pre-shared secret.
Of course right now, all of that is science fiction.
Code-based systems are still in, and Classic McEliece could be extended to ~50 MiB for a keypair and still be way more practical than QKD. Just run the maximum current Classic McEliece parameter set as a post-quantum hybrid with X448.
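The "hybrid" part is easy to sketch: derive the session key from both shared secrets, so an attacker has to break the classical and the post-quantum half. This is a sketch, not a vetted construction; the McEliece call is a hypothetical placeholder, while the X448/HKDF pieces use the Python "cryptography" package:

    from cryptography.hazmat.primitives.asymmetric.x448 import X448PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives import hashes

    def mceliece_encapsulate(public_key: bytes):
        """Placeholder for a Classic McEliece KEM encapsulation.
        Returns (ciphertext, shared_secret); plug in a real KEM library."""
        raise NotImplementedError

    # Classical half: plain X448 ECDH.
    client_x448 = X448PrivateKey.generate()
    server_x448 = X448PrivateKey.generate()
    ss_classical = client_x448.exchange(server_x448.public_key())

    # Post-quantum half (placeholder):
    # ct, ss_pq = mceliece_encapsulate(server_mceliece_public_key)
    ss_pq = b"\x00" * 32  # stand-in so the combiner below runs

    # Combine: the final key depends on both secrets, so both schemes
    # would have to fail for the session key to be recoverable.
    session_key = HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=None,
        info=b"hybrid mceliece+x448 demo",
    ).derive(ss_pq + ss_classical)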
NSA is that you?
please explain?
OP recommended McEliece, not Dual_EC_DRBG. Is there something I should know about the former?
There is an update:
https://news.ycombinator.com/item?id=40086515
"Update on April 18: Step 9 of the algorithm contains a bug, which I don’t know how to fix."
...
"Now the claim of showing a polynomial time quantum algorithm for solving LWE with polynomial modulus-noise ratios does not hold."
From the paper:
> Let us remark that the modulus-noise ratio achieved by our quantum algorithm is still too large to break the public-key encryption schemes based on (Ring)LWE used in practice. In particular, we have not broken the NIST PQC standardization candidates. For example, for CRYSTALS-Kyber [BDK+18], the error term is chosen from a small constant range, the modulus is q = 3329, the dimension is n = 256 · k where k ∈ {3, 4, 5}, so we can think of q as being almost linear in n. For our algorithm, if we set αq ∈ O(1), then our algorithm applies when q ∈ Ω^~(n^2), so we are not able to break CRYSTALS-Kyber yet. We leave the task of improving the approximation factor of our quantum algorithm to future work.
(of course, this doesn't mean we are in the clear -- a polynomial-time algorithm is alarming)
I don't understand your comment in the context of the previous comment you posted. AIUI, the excerpt says "our algorithm only applies when the modulus q is larger than n^2" where n is 256*3 or 256*6 (I guess?). So the excerpt would be saying that the algorithm does not apply in this case, because 3000 << (256*3)^2. Right?
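Spelling out that arithmetic (a rough check that ignores the Omega-tilde constants and polylog factors):

    # Kyber-768-style parameters quoted above: n = 256*3, q = 3329.
    # The attack regime in the paper wants q on the order of n^2.
    n = 256 * 3
    q = 3329
    print(f"n^2 = {n**2}, q = {q}")          # n^2 = 589824, q = 3329
    print(f"q is ~{n**2 // q}x too small")   # ~177x, ignoring constants
    # So, as stated, the algorithm's parameter regime does not reach Kyber.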
If the history of cryptography is any guide, even though this result doesn't break LWE crypto-protocols, it's much more likely now that someone will come up with an improvement that will break LWE crypto-protocols. First constructions of algorithms are rarely optimal.
Even though the opposite is possible as well, now that a concrete algorithm has been made. Someone could very well prove that LWE crypto-protocols are secure against some class of algorithms this algorithm belongs to.
Of course, right now, we should just wait for the experts to read the paper and check if there are any problems.
The algorithm is only quantum-polynomial time for a parameter regime not applicable to the PQC candidates.
Factorization and discrete log are also polynomial on a quantum computer, and we are very good at just increasing bit widths. If CRYSTALS is also polynomial in BQP, there is very little reason to invest so much into it.
I am still of the (very controversial) opinion that the only PQC algorithm worth investing in at the expense of classical algorithms is Classic McEliece. This is a code that has stood up to classical and quantum cracking attempts for a very long time - cracking these codes is equivalent to creating a very valuable algorithm in error correcting codes.
The NIST also is dead set on people using only PQC or classical crypto, not a wrapper with both. That is stupid IMO.
It's NSA who wants only PQC and not hybrid. NIST is fine with hybrid. They don't plan to standardize hybrids as entire units, but they said they plan to standardize the KDF modes you'd need to build them.
Thanks for your comment, very interesting. About your last paragraph: do you know why NIST refuses hybridization, when European agencies impose it? What are the politics behind it?
The charitable interpretation I would give NIST - and a very real concern - is that, without proofs, they are not sure that one form of cryptography doesn't weaken the other. Since these cryptosystems also tend to work in different number fields, it's very hard to prove anything about their interactions at all.
We all know the uncharitable interpretation, that the PQC algorithms may be backdoored.
NIST does not refuse hybridization, they will be publishing guidance on hybrid schemes in the draft of SP 800-227 at the same time as the final standards. They don't impose it though, because at a large scale it's more efficient to run just (fast) ML-KEM instead of (fast) ML-KEM + (slower) ECDH, which more than doubles your computation time for what they see as no benefit.
> The NIST also is dead set on people using only PQC or classical crypto, not a wrapper with both. That is stupid IMO.
Yeah, this is rather baffling. After SIKE got broken, you'd think they would have realized the importance of combining these new cutting-edge candidates with something reliable.
The remark clearly states that CRYSTALS is not affected by this attack.
There was an update to the paper on 2024-04-18:
"Note: Update on April 18: Step 9 of the algorithm contains a bug, which I don’t know how to fix. See Section 3.5.9 (Page 37) for details. I sincerely thank Hongxun Wu and (independently) Thomas Vidick for finding the bug today. Now the claim of showing a polynomial time quantum algorithm for solving LWE with polynomial modulus-noise ratios does not hold. I leave the rest of the paper as it is (added a clarification of an operation in Step 8) as a hope that ideas like Complex Gaussian and windowed QFT may find other applications in quantum computation, or tackle LWE in other ways."
Posted here: https://news.ycombinator.com/item?id=40086515
Will be interesting to see how this pans out.
If the findings of this paper hold up, I believe it could pretty much undo a decade of NIST's efforts in post-quantum cryptography. It would be a seismic shift in the world of cryptography.
Not entirely true, there are other PKE and DSA algorithms that were/are a part of the competition that used problems not related to lattices. However, the lattice-based options were often among the fastest and smallest.
Isogenies vindicated? :)
I know you're kidding, but for the benefit of the class: isogeny schemes were pulled when their best candidate design turned out to be breakable with a Python script, owing to obscure non-cryptographic mathematical research from the 1990s.
I'd expect we're not getting isogenies back. :)
> breakable with a Python script
The traditional, elegant method of a more civilized age:
Last on the program were Len Adleman and his computer, which had accepted a challenge on the first night of the conference. The hour passed; various techniques for attacking knapsack systems with different characteristics were heard; and the Apple II sat on the table waiting to reveal the results of its labors. At last Adleman rose to speak mumbling something self-deprecatingly about “the theory first, the public humiliation later” and beginning to explain his work. All the while the figure of Carl Nicolai moved silently in the background setting up the computer and copying a sequence of numbers from its screen onto a transparency. At last another transparency was drawn from a sealed envelope and the results placed side by side on the projector. They were identical. The public humiliation was not Adleman‘s, it was knapsack’s.
W. Diffie, The first ten years of public-key cryptography, Proceedings of the IEEE, vol. 76, no. 5, pp. 560-577, May 1988
AFAIK, only SIDH-like schemes that expose auxiliary points are broken, so other schemes like CSIDH may have some chance? https://issikebrokenyet.github.io/
I was at a conference with some of these folks recently and they stated some glimmer of hope remains for repairing isogeny-based crypto. I guess we'll see.
No? One of the side effects of running an open competition is that it focused attention on a variety of competing options for this, all of which were formalized, recorded, and publicly evaluated by the world's academic cryptography experts. We're strictly better off as a result, and much of NIST's own work would still be valuable even in a hypothetical scenario in which none of LWE was quantum-safe.
This is the reason why NIST did the decade of work - to focus effort on figuring out which options are secure. Finding out an option is not secure is a good thing. It's why we are putting effort into PQC now, before quantum computers are a real threat.