Intel In Bed with NSA?
cryptome.org

It is really, really hard for me to see this as anything other than utter paranoia. As one of the messages in the thread stated:
> Right. How exactly would you backdoor an RNG so (a) it could be effectively used by the NSA when they needed it (e.g. to recover Tor keys), (b) not affect the security of massive amounts of infrastructure, and (c) be so totally undetectable that there'd be no risk of it causing a shitstorm that makes the $0.5B FDIV bug seem like small change (not to mention the legal issues, since this one would have been inserted deliberately, so we're probably talking bet-the-company amounts of liability there).
And how long ago would the idea that the NSA get call logs for every call in the USA have been utter paranoia? Or that they tap and record all international internet traffic?
Just because you are paranoid doesn't mean that they aren't out to get you!
If your random number generator isn't actually random, then all of your crypto is basically useless. Paranoid is the correct state of mind for these systems.
> And how long ago would the idea that the NSA get call logs for every call in the USA have been utter paranoia? Or that they tap and record all international internet traffic?
Before 1988, if you were paying attention. So the idea that the NSA was watching everything you did is almost 30 years old now.
Well, it is documented that the NSA made DES weaker by using fewer bits for the key size (this makes brute forcing easier). I also noted that Schneier's AES submission was passed over (I speculate that Rijndael is easier to brute force).
The feds used to fight civilian crypto tooth and nail. Then they allowed it, and one of the crypto books relates a story that the feds were bummed about RSA and friends. The listener questioned why, when surely civilian efforts were feeble compared to the government's. The response was that the pace of development was much faster than expected.
The NSA, working with IBM, also made DES more resistant to differential cryptanalysis, which was not widely understood at the time.
The change the NSA made was to replace the s-boxes used with ones that made differential cryptanalysis slightly less efficient than brute force. As it happens, the s-boxes provided by the NSA were also among the worst 9%-16% possible with respect to linear cryptanalysis. "A software implementation of this attack recovered a DES key in 50 days using 12 HP9000/735 workstations" [1]. I do not know the specs of said workstations, but for reference the book claims that was the fastest attack at the time of writing (1996).
This is not to say that the NSA was aware of linear cryptanalysis when they made their recommendation. Indeed, the fact that their s-boxes also happened to be just good enough to beat differential cryptanalysis, and the fact that an independent government investigation (the details of which are classified) cleared them of wrongdoing, are enough to convince me that they did not intend to introduce a hole. Furthermore, the NSA has also now published the requirements they used to generate their s-boxes. Schneier suggests in his book that the s-boxes were weakened unintentionally by the act of introducing structure to them, without knowing to defend against linear analysis.
[1] Bruce Schneier, Applied Cryptography
> also made DES more resistant to differential cryptanalysis
Was that the result of the last-minute "black box" change? I never heard the result of that, so any light you shed would be welcome.
Correct. The NSA suggested changes in the DES S-boxes, which led to many questions. Ultimately, what was discovered is that their changes strengthened DES, not weakened it, as some had feared.
You can read more about their involvement here: http://crypto.stackexchange.com/questions/16/how-were-the-de...
Very interesting, and not exactly news, which tells you the last time I looked at this. Obviously, I'm a dinosaur.
Thanks for the pointer.
> The feds used to fight civilian crypto tooth and nail.
Curious. I'd like to read about this. Can anyone post any links?
Read up on the Clipper chip: a chip that was being promoted as the "official" way to do crypto in the US, specifically designed to be decryptable by the NSA via "key escrow".
https://en.wikipedia.org/wiki/Clipper_chip
It died when Matt Blaze figured out a way to trick the Clipper chip into doing encryption that the NSA could NOT decrypt.
" Then-Senators John Ashcroft and John Kerry were opponents of the Clipper chip proposal, arguing in favor of the individual's right to encrypt messages and export encryption software."
Wow. What happened?
Just do a search for Phil Zimmermann and what they did to him in the 90's for having the audacity to create PGP.
http://www.loundy.com/Roadside_T-Shirt.html is one example, there are probably others. It ended up going in a sane direction, but it's a bit crazy to imagine in hindsight.
More illegal crypto T-Shirts: http://www.cypherspace.org/adam/uk-shirt.html
This was one of my favorite shirts, but it finally gave up the ghost a few years ago.
Many developers who worked on crypto would cross the border into Canada to meet up and work on it, to get around the export restrictions (crypto software was classified as a weapon; exporting it could get you the same punishment as exporting a missile).
Read "Crypto: how the code rebels beat the government, saving privacy in the digital age" by Steven Levy. He outlines the whole story of public crypto until about 2000. Good read, too.
> I speculate that Rijndael is easier to brute force
On what basis?
Well, I guess Rijndael is "easier" to brute force in that it's faster than Twofish. But "easier" to brute force doesn't mean a whole lot; AES-192 is easier to brute force than AES-256, but both are so far outside the realm of current-day computation that it doesn't really matter.
Just as a matter of interest, re: the new bitcoin boxes like the butterfly http://arstechnica.com/gadgets/2013/06/how-a-total-n00b-mine...
Do these put a different slant on the whole "current-day computation" angle? Not necessarily these machines, but isn't it feasible that custom hardware could be manufactured using current tech, that upsets the notion of AES brute force feasibility?
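A rough back-of-the-envelope check (my own numbers, assuming very generously that custom hardware could test 10^18 AES keys per second, far beyond what Bitcoin ASICs actually manage): a 128-bit keyspace is about 3.4 x 10^38 keys, so an exhaustive search would take roughly 3.4 x 10^38 / 10^18 = 3.4 x 10^20 seconds, on the order of 10^13 years. Even shaving off several more orders of magnitude with better hardware leaves brute force hopeless; mining ASICs change the economics of SHA-256 hashing, not the feasibility of searching a 2^128 keyspace.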
Edit:
>anything other than utter paranoia.
I want hackers, cypherpunks, and cryptographers to be utterly paranoid.
Definitely paranoia. If you want to believe the NSA is spying through your Intel system, they could do it through vPro and not some RNG calculations. One might assume that the NSA can easily tap into the built-in VNC server[1] of the CPU.
[1] Computers with particular Intel® Core™ vPro™ processors enjoy the benefit of a VNC-compatible Server embedded directly onto the chip, enabling permanent remote access and control. A RealVNC collaboration with Intel's ground-breaking hardware has produced VNC Viewer Plus, able to connect even if the computer is powered off, or has no functioning operating system. http://www.vnc.com/products/viewerplus/
Basic Assumptions:
> You have activated Intel vPro technology on the PCs through configuration of the Management Engine BIOS extension (MEBx).
http://www.vnc.com/products/viewerplus/ViewerPlusUseCases.pd...
Because you don't reflash firmware every time you enable/disable these technologies, it's obvious that there must be some code which checks configuration flags to activate these features.
The twist is that such code is executed on a dedicated, specialised processor in the chipset/CPU with its own firmware, and it does much more:
http://www.blackhat.com/presentations/bh-usa-09/TERESHKIN/BH...
> It is really, really hard for me to see this as anything other than utter paranoia.
It is really, really hard for me to imagine Intel not being 100% cooperative with the NSA.
You know who else cooperates with the NSA? The Linux community. You know, that whole "SELinux" thing? Yeah, that's an NSA project.
Turns out cooperating with the NSA doesn't automatically mean spying on the public, it could instead be hardening crypto security. Which is the NSA's other job, it turns out.
Yes, and there's no better example than DES, where the NSA hardened DES against differential cryptanalysis and then reduced the key size from 128 bits to 56 bits so they could break it. Given the prior actions of the NSA, it doesn't seem unbelievable that they would both harden and backdoor Linux.
Who was arguing for 128-bit DES? Wikipedia says IBM wanted 64.
The original version of DES was called Lucifer and used a 128 bit key. http://en.wikipedia.org/wiki/Lucifer_(cipher)
The NSA chose the key size of DES since they were running the process (56 bits instead of 64: each dropped bit halves the brute-force effort, so that's 2^8 = 256 times weaker than a 64-bit key).
I assume he means "cooperating with NSA in nefarious ways if the NSA wanted".
> You know, that whole "SELinux" thing?
You mean that damned monstrosity I always disable? You're claiming it's not a plot to make Linux utterly unusable?
SELinux works well nowadays. You'd know that if you hadn't disabled it.
If I hadn't disabled it... which of the dozens of times it's gotten in my way on a new image? Most recently last week, by the way. I disable it because it prevents correct code from running in an already-secure environment. I don't bother beforehand, because I inevitably forget. And then waste ten minutes before I realize I need to turn off the magic "break everything" switch.
In the last seven days, has the fundamental incompatibility between SELinux's design and traditional Unix permissions and tools been suddenly corrected? Has tooling been created to allow us mere mortal sysadmins and engineers to understand and manipulate the byzantine SELinux configuration?
I didn't think so.
> which of the dozens of times it's gotten in my way on a new image
What was one recent example?
> an already-secure environment
Not possible.
> has the fundamental incompatibility between SELinux's design and traditional Unix permissions and tools been suddenly corrected
You mean labels? No, that's pretty fundamental to SELinux.
> Has tooling been created to allow us mere mortal sysadmins and engineers to understand and manipulate the byzantine SELinux configuration?
Try setroubleshoot.
> What was one recent example?
System Apache unable to listen on non-standard port.
> Not possible.
Tell me of a vulnerability on a fully-updated RHEL 6 image running only SSH and a basic Apache configuration serving static files which would be prevented by the stock SELinux configuration.
> You mean labels? No, that's pretty fundamental to SELinux.
Exactly. So my explicit decisions about file permissions must be duplicated. No thanks.
> Try setroubleshoot.
So, no.
There are only two X86 chip manufacturers of note. Intel and AMD both could tell the NSA to get bent.
The fact that there are few X86 chipmakers makes it more of a problem. There are fewer arms for the NSA to twist.
That's an argument from ignorance fallacy.
"I can't imagine how that (potential) backdoor can be abused, therefore it doesn't exist".
Random generators controlled by a third party are ABSOLUTELY a problem for any crypto system based on them.
Your (b) argument is even more ridiculous, considering the NSA events that just unfolded.
Your (c) argument makes zero sense, considering it got detected.
"It would be difficult to implement effectively, therefore it is likely to not exist."
Of course, the judgement also takes into account the extreme consequences for the company implementing it if discovered, and the unlikelihood that that company could be legally compelled to do so, which was the case with all recently revealed examples of companies cooperating with the NSA. (Never mind that we have not even seen hidden /software/ backdoors forced by the NSA - merely systems that were known to be interceptable being intercepted.)
The same argument also applies to trusting the CPU itself: although it would be more difficult to insert a generic backdoor and ensure it could be exploited easily without compromising performance than to backdoor a random number generator, this is a matter of degree, not a fundamental difference in the argument. Though you may not trust the CPU either, I suppose, but in that case not using rdrand won't save you.
The comments about RdRand being impossible to verify because it's on-chip seem quite reasonable. (Although Intel have tried to be quite open about how it works. https://sites.google.com/site/intelrdrand/references)
I have no idea if RdRand is the only source of entropy for /dev/urandom in the kernel these days, but that does seem quite silly. Especially as RdRand is documented as having two error conditions: not enough entropy, and the hardware appearing to be broken.
In any case, here's the LKML thread where it was merged, too: http://thread.gmane.org/gmane.linux.kernel/1173350
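For concreteness, here's a minimal userspace sketch (my own illustration, not the kernel's code) of how those error conditions surface to software: RDRAND reports failure through its carry flag, which the compiler intrinsic exposes as a return value, and the usual guidance is to retry a bounded number of times before treating the hardware as broken. Assumes an x86-64 CPU with RDRAND support and compilation with -mrdrnd:

    #include <immintrin.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Pull one 64-bit value from RDRAND, retrying a few times.
     * _rdrand64_step() returns 1 on success and 0 when no random data
     * was available (the instruction cleared its carry flag). Repeated
     * failure is the "hardware appears to be broken" case. */
    static int rdrand64_retry(uint64_t *out, int retries)
    {
        while (retries-- > 0) {
            unsigned long long v;
            if (_rdrand64_step(&v)) {
                *out = (uint64_t)v;
                return 1;   /* success */
            }
        }
        return 0;           /* give up: underflow or broken hardware */
    }

    int main(void)
    {
        uint64_t r;
        if (rdrand64_retry(&r, 10))
            printf("rdrand: %016llx\n", (unsigned long long)r);
        else
            fprintf(stderr, "rdrand: no random data available\n");
        return 0;
    }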
>I have no idea if RdRand is the only source of entropy for /dev/urandom in the kernel these days but that does seem quite silly
If I understand correctly, the idea is to use RdRand to feed the entropy pool (which is also fed by other noise)[1] from which urandom pulls. So it doesn't seem RdRand would be the sole source of entropy if it were to be used in this context.
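A minimal sketch of that mixing idea, with made-up names (mix_into_pool, POOL_BYTES) and plain XOR standing in for the kernel's real cryptographic mixing function: the hardware RNG is just one contributor folded into a pool alongside other noise, and output is derived from the whole pool rather than handed out raw:

    #include <stdint.h>
    #include <stddef.h>

    #define POOL_BYTES 64

    struct entropy_pool {
        uint8_t data[POOL_BYTES];
        size_t  pos;
    };

    /* Hypothetical mixing step: fold new bytes into the pool at a
     * rotating position. The real kernel pool stirs input with a
     * cryptographic function; XOR here is only for illustration. */
    static void mix_into_pool(struct entropy_pool *pool,
                              const uint8_t *in, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            pool->data[pool->pos] ^= in[i];
            pool->pos = (pool->pos + 1) % POOL_BYTES;
        }
    }

    int main(void)
    {
        struct entropy_pool pool = {0};
        uint64_t hw_sample = 0x0123456789abcdefULL; /* stand-in for an RdRand value */
        uint64_t timing    = 0x00000000deadbeefULL; /* stand-in for interrupt/disk noise */

        mix_into_pool(&pool, (const uint8_t *)&hw_sample, sizeof hw_sample);
        mix_into_pool(&pool, (const uint8_t *)&timing, sizeof timing);

        /* /dev/urandom-style output would then be derived by hashing
         * pool.data, never by returning hw_sample directly. */
        return 0;
    }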
Most servers do not have any serious source of randomness (unless you buy another hardware RNG), which is partly why these were introduced (Intel used to have a motherboard RNG, and VIA had on-CPU ones years back).
You can buy one of these http://www.entropykey.co.uk/ which are unlikely to be NSA "certified" instead.
If the NSA is working with Intel, they're not going to bother with an RNG... The processor is the most trusted part of the computer security model - why would you choose bad random numbers as your attack vector?
Relevant talk: Hardware Backdooring is Practical - Jonathan Brossard https://www.youtube.com/watch?v=j9Fw8jwG07g
This issue just does not pass the rubber hose test. If the NSA wanted and got a backdoor in Intel chips, there are so many better ways to do it than introducing a bad hardware RNG. If you wanted one exploit in the chip, why would you pick a hard-to-exploit one, and a user-controlled one on top of that? It's classic paranoid thinking: people have a choice to use the hardware RNG or not, so it becomes a big deal, all the while not addressing the non-choice issues like having a potential backdoor triggered by a specific instruction sequence.
It also needs to be hard to detect and relevant specifically for crypto operations. So where would you put a backdoor on a chipset?
It's safe to assume every core technology company has been compelled to be in bed with the NSA in some form or another. Intel has been anti-trust managed by the government for nearly two decades. Getting access to the monopoly desktop / laptop processor maker would be far too rich a target to ignore.
This is why I show preference towards AMD chips even when they have the competitive disadvantage. Any sufficiently large company ends up, through its own will or the government's, wrapped up in politics. Which is one of the larger issues of our age.
AMD is probably cooperating with the government on the same level as Intel.
Any company with over 1k employees probably is. I'm just saying if there are any systemic backdoors in Intel chips, AMD probably doesn't have them because they are 5 - 10% of the market and the gov't doesn't care to jump through hoops to get them implementing whatever backdoor they want.
Would appreciate some sort of a summary. Reading some mile long email exchange just to figure out what the headline is really about makes it kinda tricky.
I read the whole thing, but few here would truly feel that my summary of 'paranoia. paranoia everywhere' is not a government plant.
The core concern seems to be the idea that an RNG embedded into Intel's latest kit might actually be a PRNG that could be backdoored by NSA on command somehow with resultant catastrophic effects to crypto primitives on that box, if the Intel RNG were the only source of entropy on the box.
Uh, RdRand is definitely a pseudo random number generator. The question is about whether it's cryptographically secure or not, or more specifically, whether it can be or is backdoored.
I upvoted, but the current title ("Is Linus Torvalds 'evil'?") is downright horrible and I hope a mod will revert it to the original one soon.
Linus is (was?) one of my living heroes. But he controls the Linux kernel.
FTA:
"It's worth noting that the maintainer of record (me) for the Linux RNG quit the project about two years ago precisely because Linus decided to include a patch from Intel to allow their unauditable RdRand to bypass the entropy pool over my strenuous objections. " -- Eugen* Leitl
Linus has close ties to Intel and has for a long time.
Yeah, like when he worked for Transmeta, and that stint in the mid-2000s where a PowerPC64 was his main machine?
He may have a lot of Intel connections, but he doesn't seem to be committed to any specific vendor.
JFTR, that quote is from Matt Mackall.
(OT: Eugen Leitl simply forwards posts from one mailing list to another, almost always without any reason for doing so, commentary, explanation, "value add", etc. He's in my kill file for that reason.)
Submitted a question here: http://crypto.stackexchange.com/q/9210/2512
Feel free to edit the question if you have anything to add!
Hanlon's razor helps in this kind of discussion. Maybe when Linus took that option he didn't see Intel as something that would intentionally make its RNG predictable to follow government orders, and just chose not to reimplement the wheel where it was already available.
Would he choose differently now, after last month? Maybe in light of this he could take back that choice.
Linus does not have the option to reimplement the wheel. Software cannot generate random numbers.
> Software cannot generate random numbers.
Can hardware? [0]
[0] http://en.wikipedia.org/wiki/Hardware_random_number_generato...
Interesting discussion, but incredibly bad title.
I was trying to be concise. I also put quotes around evil.
Looks like it has been flagged, and is now dead. Too bad...
This is nothing more than speculative emails.
Did anybody look @ http://leitl.org/
This email could just as easily be the musings of an insane person, which is what's suggested by the contents of the website.
The thing that the thread is about is kinda interesting, too. https://heml.is/
One reason it would be a poor decision for the NSA to recommend Intel backdoor the RNG: Intel would be in a position to sell/leak the backdoor secret to other governments.
The NSA would have no way of blocking it from being used to attack the US. And you can't roll out a hotfix for billions of CPUs worldwide.
Doesn't the NSA end up using these machines as well? It seems like a lot of work to introduce a flaw that you have to work around for your own use later. And if it's a hardware flaw, I doubt even the NSA could demand Intel or AMD manufacture separate batches for their own use.