When Lightning Strikes Thrice: Breaking Thunderbolt 3 Security
thunderspy.io

I skimmed the paper, and while the research looks solid in terms of the digging they did and the documentation they're providing, this website really buries its lede: if you've got a MacBook running macOS, the MacBook's IOMMU breaks the DMA attack, which is the thing you're actually worried about here.
Additionally, regardless of the OS you run, MacBooks aren't affected by the Security Level/SPI flash hacks they came up with to disable Thunderbolt security.
Last time Thunderbolt was broken (Thunderclap [1]), it was found that the Linux driver didn't activate the IOMMU. I assume that's since been fixed.
It seems to do that now:
https://christian.kellner.me/2019/07/09/bolt-0-8-with-suppor...
What's the relationship of the "bolt" project with the default driver support in Linux?
This only holds for MacBooks running macOS. It will not be protected by the IOMMU if the system uses Boot Camp with Windows or another operating system such as Linux.
Windows 10 also supports kernel DMA protection via the IOMMU. Does Win10 on MacBooks not get that protection?
No, operating systems booted through Boot Camp do not have the same protections as macOS on the very same hardware. Apple says to use macOS if you want the IOMMU/kernel DMA security protections.
Yes, buries the lede indeed.
"THUNDERBOLT IS HOPELESSLY INSECURE AND BROKEN!!"
blah
blah
blah
blah
* except on 90% of computers shipping with Thunderbolt.
Windows PC makers were much later to TB3, and even now only ship it on a small percentage of their computers. I'm not even sure there's a Linux system with out-of-the-box TB3 support.
Dell XPS 15 can ship with Linux.
I should have figured the machines with a Linux option would be the higher end ones for developers, makes sense from a lot of perspectives. Just didn't really look.
> there is no malicious piece of hardware that the attacker tricks you into using
> All the attacker needs is 5 minutes alone with the computer, a screwdriver, and some easily portable hardware.
Just started reading, but the comparison is already a little bizarre. It almost seems like the digital version of "This murderer is on the loose and you're in danger! He doesn't need to inject poison into your food. All he needs is just 5 minutes in front of you with a knife!"
What they're trying to get across is that this is not a Bad USB [0] attack, but an Evil Maid [1] attack. In either case, the attacker does not need to rush. To commit a Bad USB attack, the attacker deputizes you and uses your confusion [2] to get you to insert a dangerous peripheral device, on your own time. In an Evil Maid attack, the attacker patiently waits until you trust (read as: "are vulnerable to") their physical presence, and then inserts a dangerous peripheral device.
To use your analogy, in the former case, the murderer poisoned your food at the grocer's, and you unwittingly dose yourself when you make your meal. In the latter case, the murderer spends time getting to know you and letting you trust them, and then one day, when you go to the bathroom, they come in and shoot you like Vincent Vega.
[0] https://en.wikipedia.org/wiki/USB_flash_drive#BadUSB
As a general rule, anyone with physical access to your machine already owns it. Physical security matters, a lot.
That being said, malicious hardware is a problem. A hacked phone charging terminal at the airport could certainly be a serious problem if there are enough vulnerabilities in the USB stack.
> As a general rule, anyone with physical access to your machine already owns it.
People frequently say this, but never really explain it. As far as I can tell, it translates to "Nobody cares about physical security" - except it's clear that people /do/. Things like Boot Guard are only really relevant to physical attacks. DMA protection in firmware is only really relevant to physical attacks. It's extremely obvious that the industry is attempting to avoid short term physical access to a device being sufficient to compromise it, and research that demonstrates that it's still possible is valuable.
> DMA protection in firmware is only really relevant to physical attacks.
That's a different kind of attack than what people usually mean by "physical access" though. The thing where they drop a bunch of malicious flash drives in the parking lot or put a malicious USB charger in an airport isn't the same thing as the attacker having unsupervised physical access to the machine, and the former is certainly worth defending against even if the latter is hopeless.
> Things like Boot Guard are only really relevant to physical attacks.
One could argue that they are also relevant to purposely locking the device owner into specific operating systems.
As an example of "physical access and you're screwed," one way to compromise a machine is to install a microphone anywhere near the machine and then wait for the user to type their passphrase. It's possible to deduce what keys are being pressed from the sounds they make and the timing, so now the attacker has your passphrase. The same can be done with covert video surveillance.
Another possibility is to measure electromagnetic emissions to much the same effect. Most computer keyboards are not exactly TEMPEST certified and even if they were, someone with physical access could make adverse modifications.
Protecting a machine against unsophisticated attackers is pretty easy, to the point that the likes of Boot Guard are not even required, but protecting a machine against physical access by a sophisticated attacker is pretty hopeless.
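To make the timing part of that concrete, here's a toy sketch in Python. Everything in it is a made-up assumption: the latency profile is hypothetical, and real attacks model sound spectra and language statistics, not just one latency per digraph.

    # Toy sketch of keystroke timing inference: match one observed
    # inter-key latency against a (hypothetical) per-victim profile.
    TRAINED_DIGRAPH_MS = {   # hypothetical training data
        ("t", "h"): 85.0,
        ("h", "e"): 102.0,
        ("p", "w"): 143.0,
        ("w", "d"): 131.0,
    }

    def guess_digraph(observed_ms):
        """Nearest-neighbour match of one latency to the profile."""
        return min(TRAINED_DIGRAPH_MS,
                   key=lambda d: abs(TRAINED_DIGRAPH_MS[d] - observed_ms))

    for latency in (88.1, 140.2, 99.7):
        print(latency, "->", guess_digraph(latency))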
Physical access is just such a rich attack surface that keeping your computer away from malicious actors is the right and proper solution.
An extreme example a pentester imparted to me once was, if someone could spend sufficient time alone with my laptop, they could remove my hard drive and insert it into an identical laptop with a hardware or firmware backdoor preinstalled. We were discussing nation-state adversaries, but the general principle applies.
Another example is attacks on encrypted drives (so-called "evil maid" attacks). If a computer is booted and the drive is decrypted, an attacker with physical access could open the computer, remove the RAM, and dump its contents, thereby stealing the encryption key. If the computer is powered down, it's still vulnerable to other attacks; encrypted drives necessarily have cleartext code for accepting the password & decrypting the drive. You could modify this code to log the decryption key, or broadcast it over your device's radios.
There's also the classic Windows Sticky Keys exploit, where you replace the Sticky Keys binary (sethc.exe) with a program that gives you administrator access, reboot the computer, and then activate Sticky Keys.
You could install a keystroke logger. You could install a device to record monitor output. You could log network traffic.
I've yet to find a kiosk environment that I couldn't break out of. Once I was able to break out of a scanning kiosk environment, and into a Windows desktop, by turning the quality settings all the way up and crashing the kiosk. That was one of the more difficult examples; most of the time all you need is to find a way to right-click. (I had the proper authority to investigate these kiosks.)
The point is that the list goes on.
It is true, as you say, that there has been progress in implementing mitigations, and that there are people who care deeply about these issues. A counterexample might be SIM cards, TPMs, and other HSMs. These systems are able to provide better guarantees by encapsulating their peripherals and being willing to self-destruct. But that could describe a cell phone, a tablet, or a laptop, too.
Maybe in the future this "law" won't be so hard and fast.
> Physical access is just such a rich attack surface that keeping your computer away from malicious actors is the right and proper solution.
Keeping attackers away from your computer is certainly the best solution, just as keeping your computer off the network is the simplest answer to avoiding network security issues. But that's not always an option, so we still need to care about it.
> An extreme example a pentester imparted to me once was, if someone could spend sufficient time alone with my laptop, they could remove my hard drive and insert it into an identical laptop with a hardware or firmware backdoor preinstalled.
That'll be detected by any properly implemented remote attestation solution (switching the machine will change the endorsement key, so attestation will fail).
> If a computer is booted and the drive is decrypted, an attacker with physical access could open the computer, remove the RAM, and dump its contents, thereby stealing the encryption key.
Removing soldered-on RAM from a motherboard fast enough to maintain the contents is not a straightforward attack. Not theoretically impossible, but you're not going to have a good time of it.
> If the computer is powered down, it's still vulnerable to other attacks; encrypted drives necessarily have cleartext code for accepting the password & decrypting the drive. You could modify this code to log the decryption key, or broadcast it over your device's radios.
Will be detected via remote attestation.
> There's also the classic Windows "sticky key" exploit, where you replace the sticky key binary with a program that gives you administrator access, reboot the computer, and then activate sticky keys.
How do you do that with an encrypted drive? Look, yes, it's not easy to guard against physical attacks. But some organisations that genuinely do have to deal with state-level attackers care about physical security and about mitigating such attacks, and we have moved well beyond the "physical access means you've lost" state of affairs. Finding new cases that allow attackers with physical access to subvert our understanding of the security boundaries of a machine is of significant interest.
You raise some interesting points, and have forced me to question my assumption that this is simply a lost cause.
> they could remove my hard drive and insert it into an identical laptop
Does that make having a layer of stickers on one's laptop also a layer of defense?
Stickers are an inconvenience, especially when applied over a screw hole required for disassembly or similar, but it's not exactly cryptographically secure. What stops the attacker from buying the same sticker as you, or taking a good picture of it before destroying it and printing a new one off?
An example is using glitter-containing nail polish to cover the screws, taking a high resolution picture and then having an app that checks whether the glitter particles are still in the same position. There are companies selling solutions along these lines.
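A minimal sketch of the photo-comparison side of that, assuming the before/after photos are already aligned and identically lit (which is the hard part in practice), with an arbitrary threshold; needs Pillow and NumPy:

    # Compare a reference photo of the glitter seal against a new one
    # and flag large pixel-level differences as possible tampering.
    import numpy as np
    from PIL import Image

    def seal_changed(ref_path, new_path, threshold=0.05):
        ref = np.asarray(Image.open(ref_path).convert("L"), dtype=np.float32) / 255.0
        new = np.asarray(Image.open(new_path).convert("L"), dtype=np.float32) / 255.0
        if ref.shape != new.shape:
            return True                           # can't compare; assume tampered
        moved = (np.abs(ref - new) > 0.2).mean()  # fraction of strongly changed pixels
        return moved > threshold                  # arbitrary cutoff

    print(seal_changed("seal_before_trip.png", "seal_after_trip.png"))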
I guess at that point you're basically asking whether it's possible to make higher resolution printers than cameras, but considering you can in principle do printing using lithography similar to what they use to make semiconductors, that's probably going to win over the average phone camera. Although you're obviously then talking about a much more sophisticated attack.
It's not just a matter of printing, it's a matter of placement. If you can carry equipment of that calibre into a hotel room and do the swap then that'll defeat things, but it's not clear that that's realistic.
You wouldn't necessarily need it to be in the hotel room. You sneak in, take a picture, have the lab down the street reproduce it, come back in half an hour, and make the swap.
That's also assuming you would actually need that level of sophistication. It's plausible that there is a level of printing technology somewhere between "crappy inkjet" and "semiconductor fab clean room" that could still fool a phone camera.
There is also the possibility of accessing the inside of the machine without tearing the sticker. You think they're going to disassemble it by removing the screws, but they actually disassemble it by slicing off a section of the case with a sharp blade and then epoxying it back together. Or make their modifications through the cooling vents.
And that's really the other problem too. If you don't know how they're going to do it, you don't know what to look for to detect that they did. Your sticker is intact so you're safe, right? Right?
You're still left with needing perfect placement, which isn't something that's realistic to do by hand. Physical case modifications are also going to be detected by any reasonable tooling (there's at least one vendor who can tell you which physical mold something came off on the production line via phone camera imaging, they're definitely going to spot a glued together hole in the case). So you're left with going in via existing case holes as the most realistic option, which has raised the bar by a significant amount - this is now an attack that's going to take much longer and require a higher level of skill, so the probability that it'll be carried out is reduced by a lot.
Nobody is realistically going to say that a computer plugged into the internet is unhackable. Instead the goal is to make it sufficiently difficult to hack that it's either cheaper to solve the problem a different way or target a different person. The same is true here. Nobody believes it's literally impossible to compromise an iPhone when you have physical access, but it's considered hard enough that almost any other option is preferable. We should be holding laptops to the same standard.
First things first: lol.
After that: at this point it's easier to pay a random person to follow you and steal your whole bag/backpack and wallet and make it look like the usual theft.
Or just break into your house/office or whatever.
You lol but a similar scheme was used for nuclear weapons treaty compliance verification (search for 'epoxy'):
https://www.washingtonpost.com/archive/politics/1988/03/21/a...
The point is that even having physical possession of the system shouldn't be enough to get anything useful out of it.
Our phones are just small computers, and the notion that the FBI can get things out of them given permanent custody is national news. It's weird that people think this isn't really one of the battle lines in computer security.
cheap tamper protection:
https://mullvad.net/en/help/how-tamper-protect-laptop/
- "Then we paint the border of the sticker with glittery polish. It's important with the glitter because the outcome will always be unique."
- "After the polish has dried, we take a high-resolution photo of each area."
I think you may have missed that my comment was primarily a terrible pun.
Not that it is physically secure, but if your disk is encrypted using a key in the TPM chip you can’t just put it in another computer, it won’t boot.
If you have that kind of access it doesn't really matter though, because you can copy the drive, add a device that monitors the keyboard so you get the key when the user enters it, and then just clear or disable the TPM chip.
An example: MacBooks these days charge through ports that are also used for USB devices. This means that if a user plugs in a compromised "charger", it can set its own HID type (and pretend to be a keyboard or a mouse), open a terminal, and start typing malware into the computer.
All of this is a bit silly though, because physical intervention implies a level of commitment that lends itself to more reliable approaches: https://xkcd.com/538/
And a thing you can do for machines that have built-in keyboards is refuse to enable new HID devices until the user provides affirmative consent. The people who have reason to care about these attacks have defenses, and research that demonstrates those defenses are incomplete is useful research.
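A sketch of what that consent gate could look like is below. All names here are hypothetical; on Linux, USBGuard implements a production version of this idea.

    # "No new HID without affirmative consent": keep a persistent
    # allowlist keyed by (vendor id, product id, serial); anything not
    # on it stays disabled until the user approves it.
    import json, pathlib

    ALLOWLIST = pathlib.Path("hid_allowlist.json")

    def load_allowlist():
        return set(json.loads(ALLOWLIST.read_text())) if ALLOWLIST.exists() else set()

    def authorize(vendor_id, product_id, serial):
        """Return True only for devices the user has explicitly approved."""
        key = f"{vendor_id:04x}:{product_id:04x}:{serial}"
        allowed = load_allowlist()
        if key in allowed:
            return True
        answer = input(f"New keyboard/mouse {key} attached. Enable it? [y/N] ")
        if answer.strip().lower() == "y":
            allowed.add(key)
            ALLOWLIST.write_text(json.dumps(sorted(allowed)))
            return True
        return False

Note the weakness the reply below raises: identity here is just whatever the device reports about itself, so a dongle that claims the keyboard's vendor/product/serial sails right through.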
Yeah, that's a good point. I personally have the bad habit of clicking "yes" to that dialogue whenever I see it, since it does sometimes spuriously appear. I certainly wouldn't attempt a teardown of all of the equipment currently plugged into my machine when I saw a message like that. Do you know if HIDs can impersonate other HIDs? E.g., if you attached a dongle to a USB keyboard, could that dongle claim the identity of the keyboard and thereby avoid the prompt?
My favorite "security interface failure" is the fact that OSX apps frequently demand a user login and password in a popup window. E.g., Slack does this. It would be so easy for an app render this popup (even on a webpage!) and I would totally type my password into it. I feel like the only answer to this is to have a sacred corner of the screen that only the OS is allowed to write to
This is why NT had a "secure attention key" (ctrl-alt-del) that couldn't be intercepted by an app that might try to display a fake login screen.
It's not that nobody cares about physical security, it's that physical access opens up entire classes of attacks that aren't possible otherwise, like physical keyloggers and bridging airgaps.
If you follow defense in depth as a security architecture philosophy, which the industry does, then you still implement defenses against physical attacks, but you recognize that those defenses are either (1) defenses against opportunists, or (2) last ditch defenses.
There are huge swaths of people who don’t think about physical security at all.
But many do, and it's a difficult problem that impacts the efficiency of the business. I've had to deal with it often, and at the end of the day you need to keep important data off of mobile and other client devices, and have controlled workarounds for exceptions.
Some of the tougher compliance standards recognize this and essentially prohibit many types of remote access without the entity owning the remote computer.
The point of the saying is that, try as we might to secure the devices, they can be compromised by someone with physical access (and the right knowledge and tools) in essentially all cases. It is not meant to discourage you from using the best security measures available ON the device, but rather to point out that the only way to truly have physical security is to maintain control OF the device.
These are mitigations; they're designed to slow down an attack by someone who has physical access to the machine. In many ways they're a bit like a finely designed padlock: none are ever going to stop a skilled lock picker, but they can slow them down enough to make an attack impractical.
Please tell this to the Intel SGX folks. They don’t seem to have gotten that memo yet...
There are always people who "need" physically tamper-proof software, and in a free state you're free to express such demands. Intel isn't the first nor the last.
Like so: https://youtu.be/BKorP55Aqvg
Tamper resistant.
If all it takes is a malicious Thunderbolt device, why is a screwdriver needed?
Because they need to open up the victim's device to read its TB3 configuration directly off the SPI flash that holds it; that's how they get the malicious device to work in the first place.
Many smaller devices do not require tools to open and are trivial to clone. Any of the victim's devices will do; it's not only useful for attacking a target computer directly.
Device identifiers and capabilities are not bound to the Security Level secret values. Drop off a pre-cloned video adapter in a conference room. If it is later used, and as a result authorized, by a targeted computer, it's game over. An attacker may now perform DMA operations unless the system has kDMA protection enabled, which requires kDMA support in the BIOS, IOMMU hardware, and the operating system.
The focus on DMA, however, misses a very important observation about Security Levels from the research: there is a lot of attack surface when you're able to plug in a PCI(e) device as easily as a USB disk.
You almost certainly know more about this than me, but hasn't macOS been breaking this attack --- malicious PCIE DMA --- for several years now with its IOMMU configuration? Ivan Krstic has a whole series of BH slides about this, and in the context of T2.
The point about attacking trusted devices and pre-cloning devices is well taken.
Yes. With macOS and Thunderbolt 3 devices on Apple hardware, the IOMMU is used as expected. This should handle DMA attacks when booted into macOS.
An important caveat: the IOMMU alone will not handle every other issue that comes with malicious PCI(e) devices.
That seems a bit counter to "Thunderspy is stealth, meaning that you cannot find any traces of the attack". No traces on the computer, sure, but it might be possible to see that my screen has been broken apart.
Unless they opened it before you even receive the device.
I think that was the point.
Then I guess the comparison didn't help, but what I'm trying to say is, hidden threats are harder to protect against, not easier. Telling me I need to watch out for a threat because it's visible doesn't make any sense. You tell people to be more on alert for hidden threats, not for obvious ones.
Looks like most of these require physical access to the SPI flash, and not just the Thunderbolt port, unless I'm reading the disclosure wrong.
This is the kind of garbage that the infosec community often memes about. A marketing website, a domain name, a cute logo for a vanity project masquerading as security research. Basically every one of the "seven" vulnerabilities boils down to "if someone can flash the SPI of the thunderbolt controller then xxx" but if they can flash the TB SPI, then they can also flash the BIOS SPI which has a lot of the same "vulnerabilities" but arguably is more impactful. The reason they only mentioned TB is because the BIOS stuff is well known and you can't put your name on it.
Let's break down each of the "vulnerabilities".
1. "However, we have found authenticity is not verified at boot time, upon connecting the device, or at any later point." This is actually false. Like, the author either didn't experiment properly or is lying/purposely misleading you. The firmware IS verified at boot for Alpine Ridge and Titan Ridge (Intel's TB3 controllers). They aren't for older controllers which does NOT support TB3. When verification fails, the controller falls back into a "safe mode" which does NOT run the firmware code for any of the ARC processors in the Ridge controller (there are a handful of processors where the firmware contains compressed code for). I'm willing to bet the author did not manage to reverse engineer the proprietary Huffman compression the firmware uses and therefore couldn't have loaded their own firmware. Because if they did, it wouldn't have worked. Now the RSA signature verification scheme they use to verify the firmware does suffer from some weaknesses but afaik doesn't lead to arbitrary code execution (on any of the Ridge ARC processors). I would love to be proven wrong here with real evidence though ;)
2. Basically, the string identifiers inside the firmware aren't signed/verified. This has no security implications beyond the fact that you can spoof identifiers and make the string "pwned" appear in system details when you plug the device in and authenticate it. If you've ever developed custom USB devices you can see how silly this is as a "vulnerability."
3. This is literally the same as #2.
4. Yes, TB2 is vulnerable to many DMA attacks as demonstrated in the past. Yes, TB3 has a TB2 compatibility mode. Yes, that means the same vulnerabilities exist in compatibility mode which is why you can disable it.
5. This one is technically true. If you open the case up and flash the SPI chip containing the TB3 firmware, you can patch the security level set in the BIOS and do stuff like re-enable TB2 if the user disabled it. But if I were the attacker, I would instead look at the SPI chip right next to it containing the UEFI firmware and NVRAM variables (most of which aren't signed/encrypted in any modern PC).
6. SPI chips have interfaces for writing, erasing, and locking. If you have direct access to the chip you can abuse these pins to permanently brick the device. Here's another way: take your screwdriver and jam it into the computer.
7. Apple does not enable TB3 security features on Boot Camp. I guess this one is vaguely the only real "vulnerability" although it's well known and Apple doesn't care much about Windows security anyways (they don't enable Intel Boot Guard or BIOS Guard or TPM or any other Intel/Microsoft security feature).
Not that it matters but my personal experience with TB3 is that I've done significant reverse engineering of the Ridge controllers for the Hackintosh community.
> they can also flash the BIOS SPI
Boot Guard makes that impractical in most cases. The point here is that on machines that don't implement kernel DMA protection, you're able to drop the Thunderbolt config to the lowest security level and then write-protect the Thunderbolt SPI so the system firmware can't re-enable it, making it easier to perform a DMA attack over Thunderbolt and sidestep the Boot Guard protections.
This isn't a world-ending vulnerability, but it's of interest to anyone who has physical attacks as part of their threat model.
Boot Guard is not implemented on most (all?) self built machines and a lot of pre-builts as well. But even if it is enabled, UEFI variables are not protected at all. You can disable Secure Boot just by overwriting UEFI variables and then boot any arbitrary code from USB.
Which will change the measurements in PCR7, which is a detectable event that will break BitLocker unsealing.
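For anyone unfamiliar with the mechanics: boot measurements are hash-chained ("extended") into the PCR, and a key sealed to a PCR value only releases when the replayed chain matches exactly. A simplified sketch (real TPM event logs carry pre-hashed digests; the event strings here are made up):

    import hashlib

    def extend(pcr, measurement):
        # TPM extend: new PCR = H(old PCR || H(measurement))
        return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

    def boot_pcr(events):
        pcr = bytes(32)            # PCRs start zeroed at power-on
        for event in events:
            pcr = extend(pcr, event)
        return pcr

    normal   = boot_pcr([b"secureboot=on",  b"db=vendor-keys"])
    tampered = boot_pcr([b"secureboot=off", b"db=vendor-keys"])
    assert normal != tampered      # sealed BitLocker key won't release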
> Now the RSA signature verification scheme they use to verify the firmware does suffer from some weaknesses but afaik doesn't lead to arbitrary code execution (on any of the Ridge ARC processors).
Hi, I'm the author of Thunderspy. I'll restrict myself to answering your first point.
There appears to be a misunderstanding. The first vulnerability we found is 'Inadequate firmware verification schemes'. We do not claim a general ability to run arbitrary code on the Thunderbolt controller. Rather, we found that the signature does not cover the data in the SPI flash essential for Thunderbolt security. We've released tools that allow you to modify the SPI flash contents without changing the parts of the firmware covered by the signature (see [1], exploitation scenario 3.2.1 in the report [2], and the PoC video [3] that matches the latter scenario). This is how it is possible to read and modify device strings, uuid, and secret values. The steps for doing specifically the latter are detailed in exploitation scenarios 3.1.1, 3.1.2 and 3.1.3. Please let me know where you got stuck.
[1] https://github.com/BjornRuytenberg/tcfp [2] https://thunderspy.io/assets/reports/breaking-thunderbolt-se... [3] https://www.youtube.com/watch?v=7uvSZA1F9os
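To illustrate the class of flaw being described here, with a completely made-up layout (NOT the real Ridge flash format): if the signature only covers part of the image, everything outside that range can be rewritten without invalidating verification.

    import hashlib

    SIGNED_LEN    = 0x4000   # hypothetical: signature covers first 16 KiB
    SECRET_OFFSET = 0x5000   # hypothetical: SL secrets live outside that range

    def signed_digest(image):
        # stand-in for the RSA verification over the covered region only
        return hashlib.sha256(image[:SIGNED_LEN]).digest()

    image = bytearray(0x8000)
    before = signed_digest(bytes(image))
    image[SECRET_OFFSET:SECRET_OFFSET + 8] = b"ATTACKER"   # patch unsigned bytes
    assert signed_digest(bytes(image)) == before           # still "verifies"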
> Basically every one of the "seven" vulnerabilities boils down to "if someone can flash the SPI of the thunderbolt controller then xxx" but if they can flash the TB SPI, then they can also flash the BIOS SPI which has a lot of the same "vulnerabilities" but arguably is more impactful.
The section "3.1.3 Cloning victim device including challenge-response keys (SL2)" does not require flashing the victim system, it only requires reading flash from victim device which seems lesser hurdle.
Have you documented or published any of your Thunderbolt reverse engineering efforts?
I'm not a hacker, so my reverse engineering is about getting TB3 working on OSX instead of attacking it, but it requires the same level of understanding. I have personally tested flashing modified ARC code on Alpine and Titan Ridge and can confirm that it fails with an "authentication error", making the author's first claim demonstrably false. https://osy.gitbook.io/hac-mini-guide/details/thunderbolt-3-...
What would it take to have a Thunderbolt/USB-C condom? You know, like those standard USB adapters that just drop the data leads on a USB charger to make attacks like this impossible. Maybe we would have to implement a hardware switch on the device itself?
I'm not going to feel safe charging with a public charger until I find some way to ensure only power, and not data, is making it to my device. Even PoE feels safer than modern peripheral standards right now.
(I admit this might not be perfectly linked to the article; it's just a need I've felt for a while but can't seem to buy a solution for.)
USB power delivery does not use the data lines at all. It negotiates the permissible voltage and current over the dedicated CC (configuration channel) pin. There's no reason why your USB data port needs to be enabled while charging. Just disable it. I actually have a charge-only Thunderbolt cable in my desk ... it's incredibly irritating because the only way to tell the difference between it and a real Thunderbolt cable is that it doesn't work.
Sounds like exactly what I was looking for, where did you pick it up?
Secops at work distributed them. I have no idea where they came from.
I just bring my own brick for such circumstances. It takes no effort for me to evaluate the security, and it’s more flexible than counting on built in USB ports.
But buses, trains, planes — they all offer a USB socket, not a power socket.
I've long since taken to carrying a USB battery that can charge and provide power at the same time. It's more reliable for me than USB condoms, and, well, it's a battery, which is useful too.
That’s not been my experience, but I’m sure it’s hyper location/company specific.
As another commenter mentioned, I too carry a battery for longer trips, which is useful for charging devices when physically on the move and away from outlets. The model I chose from Anker can also top up a MBP, albeit slowly.
How about a SSH-like “trust on first use” prompt for all data connections? Each USB/TB device has its own pub/private keypair.
If you ever plug in a charging cable and get the prompt, you know something is wrong.
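A sketch of what that could look like with asymmetric keys, using the third-party "cryptography" package (this is the proposal, not what Thunderbolt actually does):

    import os
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey)

    known_devices = {}   # device id -> pinned raw public key

    def connect(device_id, device_key):
        raw = device_key.public_key().public_bytes(
            serialization.Encoding.Raw, serialization.PublicFormat.Raw)
        if device_id not in known_devices:
            known_devices[device_id] = raw     # TOFU: prompt the user here
        elif known_devices[device_id] != raw:
            return False                       # key changed: "something is wrong"
        challenge = os.urandom(32)
        signature = device_key.sign(challenge) # device proves key possession
        Ed25519PublicKey.from_public_bytes(
            known_devices[device_id]).verify(signature, challenge)  # raises on failure
        return True

    print(connect("dock-1", Ed25519PrivateKey.generate()))

The point of the asymmetry is that an eavesdropper on the link only ever sees the public key and signatures, so cloning the device requires extracting the private key from it.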
I think Thunderbolt already does something like this? At least on Windows, I'm prompted to trust or not trust a new Thunderbolt device before it can do anything except draw power. (I didn't know that was a feature and kept wondering why my external GPU wasn't showing up...)
That is exactly what TB has. The problem is that the device private key (in many(/all?) devices) sits in the flash memory completely unprotected so anyone can clone it.
It is not like ssh at all. It is a problem that secrets are kept in the flash and it is also a problem that those secrets are sent over the untrusted channel.
The key is transferred only on the initial connection; after that, a challenge/response mechanism is used. So from a UX point of view it achieves similar TOFU, even if the technical details vary a bit. Sure, it's a bit worse, but it is still very much trust on first use.
After the device is connected, use looks like a key-consistency-aware system such as an ssh client. It is, as you note, very different in the first protocol run.
To extract the device secret value, an attacker needs to connect the target device to an attacker device. As you note, the thunderbolt device leaks the secret value over the untrusted channel. Impersonation of that device after that moment is trivial as a result.
The entire cryptographic protocol is broken from the start.
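A toy model of that failure mode (not the real Thunderbolt wire format): with a symmetric secret, whoever has read the device's flash, or seen the secret cross the link, answers every future challenge correctly.

    import hashlib, hmac, os

    def respond(secret, challenge):
        return hmac.new(secret, challenge, hashlib.sha256).digest()

    device_secret = os.urandom(32)   # sits unprotected in the device's SPI flash
    cloned_secret = device_secret    # attacker dumps it with a flash clip

    challenge = os.urandom(32)       # host challenges "the" trusted device
    assert respond(cloned_secret, challenge) == respond(device_secret, challenge)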
> To extract the device secret value, an attacker needs to connect the target device to an attacker device. As you note, the thunderbolt device leaks the secret value over the untrusted channel.
If the victim device is connected to an attacker host, then only responses to challenges are potentially leaked. That might allow an active MITM, but not cloning the key. That's the whole reason TFA needed to go poking around in flash to get the keys.
Not saying that TB is the best security protocol in the universe, but as far as I can tell the vulnerabilities exposed here are mostly implementation flaws rather than protocol level issues.
ssh uses asymmetric keys, and the cache on the client holds a 3-tuple (host, IP, public key), which allows a client to notice a difference in any of the three elements. By comparison, Thunderbolt leaks the entire secret as the first step, and subsequent steps use derived values. ssh is secure if the key doesn't change and isn't compromised through other means. Thunderbolt is not secure: it fails under a passive surveillance adversary, and it also fails against active adversaries.
I take your point that subsequent secret use in the n+1 protocol run isn't as bad as the very first run, and as you note, that probably doesn't matter in the face of an active attacker.
If Thunderbolt had used asymmetric cryptography, I would probably agree with you that the protocol has the same semantics as ssh. The reason that I disagree is that it appears to have the same semantics for the user interface but the underlying protocol differences are what make the protocol unsuitable for use. It's at least part of why Intel has now retired Security Levels and is leaning so strongly on kDMA. Security Levels as a protocol is simply not cryptographically secure for any meaningful definition of secure as the first step exposes the base secret value.
Note: the attack doesn't require the use of a flash clip, that's just a simple way to demonstrate device specific state extraction.
I wonder if that could be used by sellers of used MacBooks to get into the computers.
https://www.vice.com/en_us/article/akw558/apples-t2-security...
I guess MacBook resellers sometimes get computers where the password has been set and they can't get into them. I imagine they would be motivated to find any way they can to unlock the computers.
No; for MacBooks, this work reduces to BadUSB.
Maybe it could be done with the checkm8 exploit?
There is a nice write-up about this on AttackerKB. If you're not familiar with it, it's a community that provides assessments of vulnerabilities and points out which are worth stopping everything to patch and which are mostly harmless. It's currently in open beta. Main site: https://attackerkb.com/ Thunderspy assessment: https://attackerkb.com/topics/mPaHZgsUvk/thunderspy
There was news some time ago that Microsoft did not include Thunderbolt in their Surface 3 because it was insecure. I wonder if that's related to this, and whether Microsoft knew about it for a while.
> Contrary to USB, Thunderbolt is a proprietary connectivity standard. Device vendors are required to apply for Intel’s Thunderbolt developer program, in order to obtain access to protocol specifications and the Thunderbolt hardware supply chain. In addition, devices are subject to certification procedures before being admitted to the Thunderbolt ecosystem.
I thought that this had changed with USB-C?!
Easy read in Wired magazine: https://www.wired.com/story/thunderspy-thunderbolt-evil-maid...
This video shows the PoC demo: https://www.youtube.com/watch?v=7uvSZA1F9os
Really though, if an attacker has unencumbered access to one’s device, all security goes flying out the window.
The website is highly self-promoting.
> if an attacker has unencumbered access to one’s device, all security goes flying out the window
This is rapidly starting to become less true - full disk encryption is everywhere, backed by hardware TPMs; the Lockdown LSM prevents root from owning the boot chain; devices with soldered RAM are functionally immune to cold boot attacks.
There are still things an attacker can do - put a hardware keylogger on the keyboard wires, a skimmer on the fingerprint reader - but that requires future input from the victim. It is feasible today to defend against a physical attacker if you have the right hardware upfront and don't use it after the attack.
> This is rapidly starting to become less true
Unfortunately, both for right-to-repair and actually owning the hardware you bought.
TPMs don't impede your ability to repair anything. Soldered RAM is a hassle, but it's not any more malicious than soldered CPUs. It's a design choice, and tradeoffs had to be made.
> TPMs don't impede your ability to repair anything
There are some stories like this: https://www.vice.com/en_us/article/akw558/apples-t2-security...
It's suggested that many such devices might be stolen. But there will also be devices where the user forgot to wipe their data (or didn't know how); or devices that are only just damaged enough that you can't wipe the user data.
Probably an official Apple store can refurbish them somehow, but that is the NOBUS / EARN IT argument.
Well, that's more an explicit T2 issue that goes beyond what is known as "industry standard" TPM. Apple just hates you a (big) bit extra.
This kind of stuff shouldn't really theoretically have to affect repairability, but Apple seems to go out of their way to make sure that as much as possible gets bricked when you replace things.
Full disk encryption will still be broken, given a decade or three. You might care about that risk or not, but the fact is still there.
The point still stands: if the attacker has unencumbered access to your device, then indeed _further_ use of the device is inadvisable, to say the least. It doesn't matter whether or not you had full disk encryption. It doesn't matter whether or not you had Thunderbolt.
An extremely low tech solution would be to place a smallish and tactically hidden camera on the chassis, you don't even need the screwdriver for that. And it just happens all the time on ATMs and I'd bet that like on ATMs it would fool a shitton of people.
And this story is precisely about the type of attack that "requires further user input" -- what would be the point of requiring Thunderbolt at all in the first place if you already have the system in pieces?
> Full disk encryption will still be broken, given a decade or three.
What? FDE is all symmetric crypto, long since 256-bit, and I think all AES. AES is extremely well understood, and the threat scenario for FDE is purely cold attacks, so even side channels are irrelevant. I've never seen any feasible attack suggested even in principle, so I'm curious what you have in mind in 10-30 years. If you're thinking "quantum computers", you've gotten confused. Against symmetric keys those provide at best a sqrt(n) speedup via Grover's algorithm, essentially halving the effective key size. But 128-bit is still infeasible to search, and it'd be trivial to counter anyway by doubling the key length. It's only against current asymmetric cryptosystems that Shor's algorithm can apply in principle (if, and it's a big if, an actual scalable general-purpose QC can ever be built).
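Back-of-the-envelope numbers for that claim (the guess rate below is an illustrative assumption, absurdly generous for symmetric key search):

    SECONDS_PER_YEAR = 31_557_600
    RATE = 10**18                             # guesses per second, assumed

    years = 2**128 / RATE / SECONDS_PER_YEAR  # Grover-reduced AES-256 keyspace
    print(f"~{years:.1e} years")              # ~1.1e13 years, ~800x the age of the universe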
I simply measured the time it took from the introduction of DES to when it was no longer "recommended", subtracted the years since AES was standardized, then added a decade of margin of error.
It does not sound far-fetched to me to think that AES will be similarly "unrecommended" in that amount of time, even if there is absolutely no evidence right now.
Oh, so you just made it up out of whole cloth with zero understanding of the actual math? I guess that answers my question then.
Seriously? Are you saying you expect something encrypted with AES _today_ to remain inaccessible _for the next 3 decades_? I'd have a hard time finding anyone even remotely claiming that. How many crypto recommendations from 30 years ago are still not entirely 'questionable' today? 50 years? AES as a recommendation is not even half that old. The algorithm may survive with changes; but the actual encrypted data, I would not bet on it.
If you have anything that claims that AES is different enough to warrant this extra optimism, I would love to have a look.
>Seriously? Are you saying you expect something encrypted with AES _today_ to remain inaccessible _for the next 3 decades_?
Yes, seriously. In fact, to be clear (since you edited your time down to a mere 30 years), I fully expect something encrypted with 256-bit full AES today to remain inaccessible for all of foreseeable human existence [1]. I mean, it's hard to even know where to begin here, because it's not clear you've so much as looked at a Wikipedia page on this before, and you really don't grasp how non-linear improvements have been. DES is your cited milestone, but its primary weakness was simply that it had a 56-bit key. That's a mere 72 thousand trillion. A 256-bit key isn't "~4.6 times as hard" though, it's "the number of atoms in the entire galaxy times as hard". 2^256 is around the lower bound of the estimated number of atoms in the entire universe. A 512-bit key is something like "an entire universe of atoms for every single atom in the universe". These are non-intuitively big numbers.
The algebraic framework of AES is pretty straightforward, and decades of better knowledge went into it. But mainly it's that non-linear advances in computing meant that by the end of the 90s, tech had caught up with and surpassed what was needed for the kind of keys necessary to make brute force utterly impossible within the known laws of physics, with margin to spare. There have been academic attacks which mildly reduce full AES below brute force, but they simply don't matter at all in practice: 2^254 is better than 2^256, but still impossible. I already cited quantum computers; there we have the math to show that if a fully scalable general-purpose one could ever be made, it'd allow a quadratic speedup. Against a 128-bit key that'd drop the work to around 2^64, which would be fairly tractable. But everything modern moved over to 256-bit keys ages ago (FileVault 2, for example, was 9 years ago, and it was not remotely the first), and it'd be relatively trivial to double keys again at this point if anyone was really concerned.
Side channel attacks are a real issue too for many purposes. But FDE is an exception, since it exclusively is for defending data at rest. That simply nullifies an entire range of tricky implementation issues for this threat model.
Again seriously: you can't just do linear historical extrapolation without at least knowing a bit of why those things went that way and what the foundations are. It's like you being surprised I'd expect algebra or calculus to remain relevant "for the next 3 decades".
>I'd have a hard time finding anyone even remotely claiming that.
Would you now? Here, let me help by starting you off with this guy named Bruce Schneier [2]:
>There is a significant difference between an academic break of a cipher and a break that will allow someone to read encrypted traffic. (Imagine an attack against Rijndael that requires 2^100 steps. That is an academic break of the cipher, even though it is a completely useless result to anyone trying to read encrypted traffic.) I believe that within the next five years someone will discover an academic attack against Rijndael. I do not believe that anyone will ever discover an attack that will allow someone to read Rijndael traffic. So while I have serious academic reservations about Rijndael, I do not have any engineering reservations about Rijndael.
If my expectation is wrong, well at least I can't be ashamed of the company I'd be in.
----
1: Maybe it's possible to brute force 128-bits if we convert the entire solar system into a Matrioshka brain or something like that, I haven't crunched the math. But that's far enough out into transcendent territory that I don't think it's relevant to any data in existence today.
2: https://web.archive.org/web/20090201005720/http://www.schnei...
EDIT TO YOUR EDIT:
>If you have anything that claims that AES is different enough to warrant this extra optimism, I would love to have a look.
Literally any intro to this topic at all that you'd find in the first few results of going to your search engine of choice and typing "advanced encryption standard". This isn't some niche weird thing. You going "well DES was made obsolete by advances in computing power in the 90s, which means AES will be too in a few decades" is the weird thing.
EDIT 2:
Also, at some point here we're going to get HN rate-limited on replies; discussion on HN isn't intended to support very long chains. I don't know if we'll be able to say anything else, so I'd just leave off by really encouraging you to skim through a few intro-to-modern-cryptography pieces, and/or look at the math itself. It's interesting stuff and obviously undergirds much of the modern world. In fact, the entire history of cryptography leading to this point is really fascinating: what kinds of secret message systems people used over millennia, and how developing mathematics and computers have fundamentally systematized and changed the nature of it.
I still have doubts about how durable TPMs are against attackers.
As another commenter pointed out, public charging or borrowed chargers are an issue. Think airport charging kiosks/counters. Maybe power over data connectors isn't the best idea (I enjoy single-cable docking, but an extra magnetic power cable wasn't that much more work).
Borrowed chargers aren't the threat model here; these attacks involve an attacker opening up your machine and reading the contents of the TB3 controller's SPI flash.
That isn't entirely accurate. The ability to clone a given device's state gives access to any system which has authorized that cloned device. A borrowed Thunderbolt device which is not the target machine may also be used to bypass Security Levels as a result. No need to open the laptop in that case. See sections 3.1.1 and 3.1.3 in the report.