Cryptographic failures in RF encryption allow stealing robotic devices
cossacklabs.com

The worst cryptography vulnerabilities I've discovered have been in RF and small embedded systems, because both settings (and they're often combined!) create constraints that make high-level crypto libraries untenable. This is part of why there's so much interest in lightweight cryptography schemes like Xoodyak (Daemen), Gimli (from the NaCl folks), and STROBE (Hamburg).
Everyone --- at least, everyone in the mid-2000s --- got CTR nonces wrong. But you haven't seen what a custom RF environment does to cryptography until you've seen the counters wrap. :)
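To make the counter-wrap point concrete, here's a minimal sketch (the 15-byte-nonce/1-byte-counter layout is hypothetical, not any particular vendor's): budget too few bits for the CTR block counter and the keystream repeats, which means a reused pad.

    # Sketch: AES-CTR keystream with a too-small (1-byte) counter field.
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = bytes(16)      # demo key only
    nonce = bytes(15)    # hypothetical per-link nonce; 1 byte left for counter

    def keystream_block(i: int) -> bytes:
        # Keystream block i = AES(key, nonce || counter); the counter
        # field silently wraps after 256 blocks.
        enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        return enc.update(nonce + bytes([i % 256])) + enc.finalize()

    # Block 256 is XORed with the exact same pad as block 0.
    assert keystream_block(0) == keystream_block(256)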
There may or may not have been an RF embedded vendor who just XORed whatever password key you gave it with itself, so that you could turn encryption "on", set a password, and it would "just work" with every other device, because they were all effectively encrypting traffic with an all-zero key.
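In case the zero-key trick isn't obvious, here's the arithmetic (the function name is mine, not the vendor's): any byte XORed with itself is zero, so "key XOR key" hands every device the same all-zero key regardless of the password.

    # Any key XORed with itself is all zeroes, so every device "agrees".
    def derive_effective_key(password_key: bytes) -> bytes:
        return bytes(b ^ b for b in password_key)   # always all zero bytes

    assert derive_effective_key(b"hunter2!") == bytes(8)
    assert derive_effective_key(b"s3cret!!") == bytes(8)   # same "key"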
Another hypothetical vendor may have claimed to use 128-bit AES, where it would take a config password, encrypt it once with AES, and then XOR every packet payload of RF traffic with the bytes of that one ciphertext. This was back when SDRs, and anything else that could intercept FHSS traffic, cost over $10k, so nobody really noticed.
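That scheme is a fixed pad reused across every packet, i.e., a classic two-time pad. A minimal sketch of why interception breaks it (the pad here is random bytes standing in for that AES output, and the plaintexts are made up):

    import os

    pad = os.urandom(16)   # stands in for "AES(config password)", fixed forever

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    p1, p2 = b"TURN LEFT NOW!!!", b"SET SPEED 50%..."
    c1, c2 = xor(p1, pad), xor(p2, pad)

    # The pad cancels: an eavesdropper gets p1 XOR p2 with no key at all,
    # and one known plaintext recovers the pad for every other packet.
    assert xor(c1, c2) == xor(p1, p2)
    assert xor(c1, p1) == pad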
My skills were lame by most standards, and if this area is getting attention now, we can expect some really funny conference talks in the next few years; there are careers to be made breaking implementations in this relative backwater. The hardest part at the time was extracting the bootloader firmware dump via an open JTAG port, but most of the firmware images were available via FTP, and the tools for this today are just amazing compared to the '00s.
On the flip side, do newer embedded hardware designs have better sources of entropy and monotonic counters yet? (Does that still matter?)
Yes, many of the more powerful chips include hardware RNGs now. The ESP32, STM32, and Atmel SAM families have them, at least. Some 16-bit parts, like certain MSP430s, have them (and AES accelerators) too.
I don't think they're generally in 8-bitters, unless some of the newer "big-little" ones throw one in, but most IoT devices that need cryptographic security would probably use a 32-bitter these days anyway, if nothing else for the networking.
There are also devices like the ATECC608, which have an internal hardware RNG and provide offloaded cryptographic signing based on it. That both saves a very small device from burning cycles on crypto and prevents the private key from ever residing in the CPU.
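For flavor, a hedged sketch of that offload pattern using Microchip's cryptoauthlib Python bindings (the I2C default config and key slot 0 are assumptions on my part; check your board's docs): the MCU only ever handles the digest and the signature, never the private key.

    # Hedged sketch: ECDSA offloaded to an ATECC608 secure element.
    from cryptoauthlib import (atcab_init, atcab_random, atcab_sign,
                               cfg_ateccx08a_i2c_default)

    assert atcab_init(cfg_ateccx08a_i2c_default()) == 0   # 0 == ATCA_SUCCESS

    rand = bytearray(32)
    atcab_random(rand)                 # 32 bytes from the on-die HRNG

    digest = bytes(32)                 # SHA-256 of your message goes here
    signature = bytearray(64)
    atcab_sign(0, digest, signature)   # P-256 sign in slot 0; key stays on-chip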
> Hundreds of thousands, if not millions. The nature of the device has been heavily redacted to protect the guilty.
This is rather annoying, and sort of defeats the whole point of responsible disclosure.
Disclose the vulnerability to the company, and some predetermined amount of time later, spill the beans, including the vendor's name.
If the company does not want to fix it, the people using the products deserve to know that and make their decision (dump the product, live with the risk, etc.). Or the company fixes it, and people are happy.
A valid point. But responsible disclosure in the world of unpatchable devices that actually move and can cause physical harm once pwned feels a little bit different. While we've done things to mitigate the blast radius, publicising the guilty names would still lead to lots of damage, because, you know, these are "toy cars".
I am the only one who knows my risk tolerance and threat model. I do not appreciate when other researchers think that they know my tolerance and threat model better than I do.
The only reason not to release names after a reasonable responsible disclosure timeframe is that the researchers somehow think they are the only ones who will ever find that flaw. Pure hubris. Some malicious person will eventually find those same flaws, and then I'm fucked without ever being given the opportunity to evaluate whether or not I want to risk getting fucked.
> Many developers see security people as annoying creatures, always pointing out mistakes and criticizing incorrect decisions. A cryptographer is considered more malignant: they know math and can tell you actual probabilities of some of your failures. They also yell crypto is not a cryptocurrency and don’t roll your own crypto often. That would be us.
> The precise definition of the second proverbial phrase depends on the context and has changed over the last couple of decades, but most of the time it means Do not design your cryptosystems, especially if you don’t know anything about them.
this post looks interesting but i can't get past this writing style. sorry. : /
There doesn't seem to be any actual demonstration of an attack. Just a bunch of discussion of various types of attacks with the implication that there might be something or somethings out there that are susceptible.
So are they talking about the Donkey Car project? That's the only one I'm aware of that aligns with what is said in the article.
I suppose they could be talking about any of a variety of drones.
They suggest up to millions of affected devices.
ArduPilot then? Or are they just inventing theoretical vulnerabilities to drum up panic/business? On a second read it feels more like the latter.
None of these vulnerabilities are "theoretical" (they're maybe a bit stale, is the worst you could say about them).
A reasonable first approximation would be that all writeups about new vulnerabilities are intended to drum up something, so that's just about the least interesting thing you could say about a post like this.
Theoretical in the sense of whether they actually exist in a widely deployed product such that they can't be responsibly disclosed.
I have seen all of these vulnerabilities in widely-deployed products of varying sorts (this was my day job for many years). I don't know who these authors are, I'm just saying that the bugs aren't theoretical.
I'm not sure that we agree on what theoretical means. That these classes of bugs are probable does not make their existence in any particular software any less theoretical in the absence of evidence. Which software? Which endpoint? Sample exploit? One example of a robot executing an unauthorized command? The authors do not say, and only offer vague assertions and contrived examples.
tl;dr: Balancing tradeoffs and benefits during disclosure is sometimes a hard job, and if the authors chose to do it this way, they may have had a reason. You don't have to trust me on this, but there is no commercial agenda behind it.
Disclaimer: I happen to work at the same company as the authors. I was not involved in writing this, but I witnessed all the research that led to this post. I saw huge internal arguments over how much should be disclosed, and how, given the context (see below), before this article was written.
1. I can attest that all these bugs were found in one physical device. I have seen it. It is really widely used to this day. Moreover, this device has more relatives than we could easily enumerate, some of them potentially vulnerable to a subset of the identified bugs as well. The "vendor" is aware, and nothing has changed for a while; in some ways it is getting worse (the blast radius increases over time). This is the result of economics rather than negligence: "the vendor" in this case is a mixed bag of responsibility shared between several parties, not all of them commercial, and not all of them, I believe, still existing to this very date.
2. In a normal situation, the responsible disclosure path, rather than what you've been reading in the post, would be the right way to go. However, context matters in this case: the authors happen to live in a country that is at war right now (it takes about 5 seconds to figure out which, looking at the website), so their ability to talk about security vulnerabilities is a bit different from what you'd expect, for reasons that are not very hard to understand. They use vague language, distort a few important details, and stick to frivolous illustrations to avoid unnecessary damage.
Pointing out practical exploitability vectors publicly, in a way that is understandable to anyone in the field, is sufficiently helpful:
* Some people will now have an explanation for why their toy cars were stolen, and will consider changing their supplier of toy car equipment.
* Some people conducting engineering risk analysis will look at their toy car and some of its settings, understand that this is not a "potential theoretical vulnerability", and consider alternatives.
Consider the blog post and its examples to be didactic material for an ongoing discussion about certain hardware among field practitioners. The authors needed something to point their fingers at and say "this is how X can be exploited to do Y", without delivering a two-hour lecture on cryptographic bugs that were already obvious 15 years ago.
3. Why not name the vendor and list the devices? Consider the context again, please.
It's easy to wave your hand and say "if people are idiots using hardware and devices that are known to be vulnerable, we should let them screw themselves", disclose the name of the vendor, and go on with your life. However:
* If pointed out directly, these vulnerabilities could easily lead not only to "the market levelling out discrepancies" (which does not always happen harmlessly, as we all know); they could lead to more physical damage and deaths happening immediately around the authors of this post, because exploitation is so easy.
* Not publishing at all would lead to these devices being used over and over again, and to obvious cryptographic bugs being dismissed as "theoretical threats", because the remote toy car community is full of "Internet of Stuff" people who dismiss cryptographic vulnerabilities on the basis of "it's crypto, who knows how to exploit it, we've got more important stuff to worry about right now".