Security and Correctness in Wasmtime (bytecodealliance.org)

This seems like a pretty comprehensive strategy for taking Wasm security and correctness seriously. It covers pretty much everything I would want to see if I were relying on this system, including auditing, fuzzing, formal correctness, Spectre, and even a clear-eyed organizational stance toward reported security vulnerabilities.
The post mentions using `cargo vet` to organize audits of third-party crates, discussed here a few months ago [0]. I'm more familiar with cargo-crev, which does something similar; how do these auditing tools compare? The audit format [1] seems somewhat reasonable, but it doesn't include the review date, and there's no mechanism to validate the authenticity of the auditors.
[0]: https://news.ycombinator.com/item?id=31719532
[1]: https://mozilla.github.io/cargo-vet/recording-audits.html
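For the curious, an entry in that format is a short TOML record along these lines (shape per the cargo-vet docs at [1]; the crate name, auditor, and version here are made up). Note that there is indeed no date or signature field:

```toml
[[audits.some-crate]]
who = "Alice Auditor <alice@example.com>"
criteria = "safe-to-deploy"
version = "1.2.3"
```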
I previously asked "why cargo-vet instead of extending crev", and I think the answer was architectural: cargo-crev is meant to be a single repository of public audits, whereas cargo-vet aims for a decentralized system where anyone can publish an audit anywhere, and each individual project opts into which audit databases it trusts.
Also, cargo-vet has some good ideas about how to introduce it into an existing codebase without having to audit everything up front.
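Concretely, trusting someone else's audits is an explicit entry in your project's supply-chain config, roughly like this (shape per the cargo-vet docs; the Mozilla URL is the example those docs use):

```toml
# supply-chain/config.toml
[imports.mozilla]
url = "https://hg.mozilla.org/mozilla-central/raw-file/tip/supply-chain/audits.toml"
```

And for existing codebases, my understanding is that `cargo vet init` puts every current dependency on an exemptions list, so you start from a passing state and burn the list down over time.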
I'm certainly curious to see how in-process sandboxing plays out with Spectre. Even the process boundary sometimes doesn't feel like enough, heavy-handed as it may be. I wonder if there's a way to prove the absence of side channels by encoding side effects more directly and ensuring that those side effects never propagate across a boundary. The problem would probably be enumerating them... and then, I don't know, everything has side effects to some degree: "the value was read, which caused an L1 cache line to be flushed". I guess it's probably not tractable.
I kind of vaguely feel like running multiple 'workers' within a single process is just not a reasonable goal. Ultimately, if you have a multi-tenant requirement, you should be using separate processes pinned to separate physical CPUs, and hope that that is enough. Not to discourage this; I can't wait to look back in a decade and see how all of this has changed.
edit: Also, there are other use cases. Maybe I'm a single tenant deploying multiple workers to a single VM. I trust myself, but it would still be nice to make those boundaries hard to violate; driving up the cost of an attack is sane.
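For what it's worth, the pinning half of that is cheap to do. A minimal Linux-only sketch using the libc crate (the core id is an arbitrary choice here):

```rust
use std::mem;

// Pin the calling process to a single core via sched_setaffinity.
fn pin_to_core(core: usize) -> std::io::Result<()> {
    unsafe {
        let mut set: libc::cpu_set_t = mem::zeroed();
        libc::CPU_ZERO(&mut set);
        libc::CPU_SET(core, &mut set);
        // pid 0 means "the calling process".
        if libc::sched_setaffinity(0, mem::size_of::<libc::cpu_set_t>(), &set) != 0 {
            return Err(std::io::Error::last_os_error());
        }
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    pin_to_core(0)?;
    // ...run or spawn the worker from here...
    Ok(())
}
```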
It also sort of reminds me of the Sisyphean task of removing ROP gadgets from the Linux kernel.
> WebAssembly programs are sandboxed and isolated from one another and from the host, so they can’t read or write external regions of memory, transfer control to arbitrary code in the process, or freely access the network and filesystem. This makes it safe to run untrusted WebAssembly programs: they cannot escape the sandbox to steal private data from elsewhere on your laptop or run a botnet on your servers.
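That model is visible in the embedding API: nothing exists for the guest unless the host explicitly provides it. A minimal sketch with the wasmtime crate (the module path and export name are placeholders, and the module is assumed to declare no imports):

```rust
use wasmtime::{Engine, Instance, Module, Store};

fn main() -> anyhow::Result<()> {
    let engine = Engine::default();
    // "guest.wasm" stands in for an untrusted module.
    let module = Module::from_file(&engine, "guest.wasm")?;
    let mut store = Store::new(&engine, ());

    // The import list is empty: the guest gets nothing but its own
    // linear memory. Files, sockets, clocks, etc. only exist for it
    // if the host links them in here.
    let instance = Instance::new(&mut store, &module, &[])?;

    let run = instance.get_typed_func::<(), ()>(&mut store, "run")?;
    run.call(&mut store, ())?;
    Ok(())
}
```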
As if users will not concede every requested permission to the first Monero miner that asks.
They might, but that's the wrong way to think about security. It is true that people can be tricked into bypassing any security layer. It is also true that strong security boundaries are useful tools.
A more meaningful security boundary might be making HTML viewers' ability to run arbitrary code an opt-in feature, rather than opt-out.
Imagine if every PDF viewer included a virtual machine that ran in the background while viewing the document.
I have some bad news for you about PDF: https://opensource.adobe.com/dc-acrobat-sdk-docs/library/jsa...
See also: https://rawgit.com/osnr/horrifying-pdf-experiments/master/br...
Even better, every font renderer does! A couple of the PDF-based jailbreaks for iOS were actually bugs in the virtual machine the font renderer uses to let fonts do programmatic hinting; the PDF only really existed as a container to deploy the font and force it to deterministically render exactly what was required.
I intuitively expected some trash like that from Adobe, which is why I wrote "every PDF viewer" and not "Acrobat Reader".
His "breakout" demo works in Chrome's viewer as well (and obviously FoxIt).
Opt-in code execution is not a meaningful security mechanism because users do not have the expertise or information to answer a prompt like "Do you want to allow this web page to run code?"
Prompts are not opt-in. Opt-in is moving the mouse to (say) the lower-right corner, clicking on the NoScript icon, and selecting "Temporarily allow example.com".
That's not a panacea, but it at least raises the bar from "get people to even briefly look at your attack site" to "come up with an at-least-vaguely-plausible excuse why your site needs to be handed a remote code execution vulnerability in order to function".
The “user” may be a multi-tenant system, e.g. for FaaS.
No need to: the data inside the sandbox can still be corrupted (think C compiled to Wasm), so even if nothing escapes to the host, there are plenty of ways to exploit the code and, via data corruption, force it to execute another code path thanks to incorrect state in its data structures.
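As a toy stand-in for that class of bug (plain Rust rather than C-compiled-to-Wasm; all names invented for illustration): an unchecked copy overruns a buffer and flips a neighbouring flag, all without ever leaving the sandbox's memory.

```rust
use std::ptr;

#[repr(C)]
struct Session {
    name: [u8; 8],
    is_admin: u8, // laid out immediately after `name`
}

fn main() {
    let mut s = Session { name: [0; 8], is_admin: 0 };
    let input = b"AAAAAAAA\x01"; // 9 bytes destined for an 8-byte field

    // A memcpy-style copy with no bounds check, as memory-unsafe guest
    // code might perform. The write stays inside the guest's own state;
    // it just corrupts the adjacent privilege flag.
    unsafe {
        let base = (&mut s as *mut Session).cast::<u8>();
        ptr::copy_nonoverlapping(input.as_ptr(), base, input.len());
    }
    assert_eq!(s.is_admin, 1); // control flow now takes the "admin" path
}
```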
But the malicious miner won't be stealing the banking credentials you used in a different tab.
What is the overhead of the Spectre mitigations? I read a PhD dissertation showing that V8 suffers a 20% overhead in practice. People can sugarcoat those numbers, but I wouldn't expect another userspace-emulator-like program to behave differently. Is this number in the ballpark for Wasmtime?