TLS 1.3 approved (ietf.org)
0-RTT sounds nice, until you get to appendix E.5. Everyone should read this:
E.5. Replay Attacks on 0-RTT
Replayable 0-RTT data presents a number of security threats to TLS-
using applications, unless those applications are specifically
engineered to be safe under replay (minimally, this means idempotent,
but in many cases may also require other stronger conditions, such as
constant-time response). Potential attacks include:
- Duplication of actions which cause side effects (e.g., purchasing
an item or transferring money) to be duplicated, thus harming the
site or the user.
- Attackers can store and replay 0-RTT messages in order to re-order
them with respect to other messages (e.g., moving a delete to
after a create).
- Exploiting cache timing behavior to discover the content of 0-RTT
messages by replaying a 0-RTT message to a different cache node
and then using a separate connection to measure request latency,
to see if the two requests address the same resource.
Ultimately, servers have the responsibility to protect themselves
against attacks employing 0-RTT data replication. The mechanisms
described in Section 8 are intended to prevent replay at the TLS
layer but do not provide complete protection against receiving
multiple copies of client data.
It seems practically guaranteed that a lot of devs will enable it without understanding the ramifications... I hope implementations like Nginx add a configuration interface along the lines of "enable_0rtt YES_I_UNDERSTAND_THIS_MIGHT_BE_INSANE;" or similar. Meanwhile I wonder if concentrators like Cloudflare will ever be able to support it without knowing lots more about the apps they are fronting.
I guess e.g. Nginx could also insert an artificial header to mark requests received as 0-RTT, and frameworks like Django could use that header to require views be explicitly marked with a decorator to indicate support, or something like that.
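To make that decorator idea concrete, here is a minimal sketch of what it might look like on the Django side, assuming the fronting proxy adds a hypothetical "Early-Data: 1" request header to anything it accepted as 0-RTT; the decorator and middleware names are made up for illustration, not an existing Nginx or Django API:

    # Hypothetical sketch: the proxy is assumed to set "Early-Data: 1" on
    # requests it received as TLS 1.3 0-RTT; the names below are illustrative.
    from django.http import HttpResponse

    def allows_early_data(view_func):
        """Mark a view as safe to execute from replayable 0-RTT data."""
        view_func.allows_early_data = True
        return view_func

    class Reject0RTTMiddleware:
        """Refuse early-data requests to views that have not opted in."""

        def __init__(self, get_response):
            self.get_response = get_response

        def __call__(self, request):
            return self.get_response(request)

        def process_view(self, request, view_func, view_args, view_kwargs):
            is_early = request.headers.get("Early-Data") == "1"
            if is_early and not getattr(view_func, "allows_early_data", False):
                # 425 "Too Early" asks the client to retry after the full
                # handshake has completed.
                return HttpResponse("Retry after handshake", status=425)
            return None  # fall through to normal view handling

An idempotent view could then opt in with @allows_early_data, and everything else would transparently fall back to a replay-safe full handshake.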
Cloudflare only supports 0-RTT for GET requests with no query parameters, in an attempt to limit the attack surface.[0] It is enabled by default for all free accounts.
> I guess e.g. Nginx could also insert an artificial header to mark requests received as 0-RTT, and frameworks like Django could use that header to require views be explicitly marked with a decorator to indicate support, or something like that
There is an Internet Draft for that [1]. It is co-authored by Willy Tarreau of haproxy and implemented within haproxy 1.8 [2].
[1] https://tools.ietf.org/id/draft-thomson-http-replay-01.html
[2] https://www.mail-archive.com/haproxy@formilux.org/msg28004.h... (Ctrl+F 'Early-Data') https://www.mail-archive.com/haproxy@formilux.org/msg27653.h... (Ctrl+F '0-RTT')
That would certainly require a lot of coupling from proxy to code, though. The nicest thing about TLS is that it's just a transparent dumb pipe that provides confidentiality and integrity (and less often, client authentication).
Having to write your app to understand that TLS 1.3 was used, and 0RTT was used, seems like a really really bad idea. The longer I'm in engineering, the more I realize that the number of people who understand the ramifications here is much smaller than the number who can throw together a TLS 1.3 listening HTTP server by following some dude's tutorial.
Framework support is not going to be enough. This seems like a bad, bad move.
Eh, it doesn't sound too bad. There's already a myriad of metadata (HTTP request headers) you need to process today if you want to be a good HTTP citizen, such as acknowledging Accept-Encoding, HTTP byte ranges, If-Modified-Since, X-Forwarded-For, X-Forwarded-Proto, etc., and sending correct response headers such as Cache-Control, Vary, and so on.
Which is why it should have never been implemented in TLS 1.3.
I believe the argument for including it was that otherwise some companies would just implement their own protocols instead. Eh, I think the chances of that happening were pretty slim. Now most of the problems we'll see with TLS 1.3 will likely be related to 0-RTT.
Also, wasn't that basically the same argument for implementing MITM in TLS 1.3? That if they don't do it the banks and middlebox guys will just stick to TLS 1.2 or whatever?
And who cares about a little bit of an extra HTTPS delay, when just adding Google analytics and Facebook Pixel to your site can increase the delay by over 400 ms? Some poorly performing tracking scripts add 800 ms on their own.
0-RTT is still useful for static assets, and generally everything that is public. I have a handful of static websites (literally static, as in consisting of just HTML and CSS files); for those, 0-RTT is awesome. TLS is no longer used only to protect private pages (e.g. access to your private emails, the admin section of a CMS). It's also used for privacy reasons on completely public websites.
Wouldn't that be a bit of a privacy leak? If 0-RTT works, it was a request for a static asset.
Of course, TLS is not only for privacy reasons, but also for integrity reasons (preventing injection of malicious Javascript and similar attacks). For that purpose, 0rtt for static assets works fine.
> Wouldn't that be a bit of a privacy leak? If 0-RTT works, it was a request for a static asset.
Response size and timing probably already leak this.
This honestly really bothers me.
We're encrypting everything, we have "Let's Encrypt", we have browsers telling users that their connections are "secure".
Meanwhile your DNS lookups are public (which leaks what site you're accessing) and size+timing analysis leaks which static assets you've retrieved. Which gives away for example what article you're reading on what news site. Which the site itself is telling google, facebook and other malicious third-parties anyway...
How is anyone supposed to understand digital privacy? Everything sucks, and I'm not even sure what could be done to make it suck less.
I think for the average user, the authentication part is a lot more important than the encryption part unless they're entering passwords. I want to be relatively sure that the site I'm visiting hasn't been replaced by something serving malware. I don't care as much about people knowing which articles I read.
For DNS lookups, people are testing out DNS-over-HTTPS, which would solve this entirely: lookups would be opaque to anyone but the DNS server involved.
For timing and size you can usually do something about it as a site owner (HTTP/2 for example will multiplex connections so it makes timing and size comparisons much harder)
Even without encrypted DNS lookups, HTTPS leaks the FQDN (i.e. exact subdomain) of what you are connecting to through Server Name Indication.
SNI was added to allow servers to know which SSL certificate to send to the browser, previously you would need to have one IP address per SSL certificate.
Just to be clear, 0rtt is only for "revisits", and for revisits of static assets the client is likely to have the asset cached still. So the only benefit is if the "static" asset has changed, or the client's cache is cleared. Which seems less useful.
The content of your page may be encrypted but the DNS lookup isn't
There's work now with DNS-over-HTTPS to prevent that.
As well as SNI
TLS connection reuse already works for that. And pipelining in HTTP.
> And who cares about a little bit of an extra HTTPS delay, when just adding Google analytics and Facebook Pixel to your site can increase the delay by over 400 ms?
That's how you might feel if you live with fast broadband internet. A lot of the world is stuck with high latency, low-bandwidth connections. Applications that are incredibly lean will still be slow if the server is in San Jose and the client is in, say, Uganda.
I run a very large number of web performance tests, and GA and the Facebook pixel do not add 400 ms of user-perceptible delay, either alone or together. They will take some small amount of main-thread time for parse and compile, but they are loaded async and are generally not performance problems on the hundreds of pages I have profiled.
Intercom and other live chat solutions are typically the biggest offenders on modern pages. They serve 10-20x the script as GA and the Facebook pixel.
GA and FB-P both affect the document-complete time; but you're right, there are way, way worse scripts out there.
The other thing I don't like about 0-RTT is that the client reveals that they've been to the server before, i.e. it removes a plausible case for anonymity. Just another implicit "cookie" that needs to be washed, I suppose.
I would love it if the pre-shared secret enabling 0-RTT could instead be something obtained through DNS, if that's possible. But that would require a secure DNS, which we don't have.
But that problem is just session resumption, and that isn't new or specific to 1.3. Another way to do this would be session tickets (also not new with 1.3). Your client can remove support for both, and always connect as a new connection.
If you're concerned about it, couldn't you turn it off clientside?
Yes. And like so many other behaviours in the web-stack, I feel like I'm in a constant fight with my client software to please choose privacy over convenience. So it's worth being aware of where these tradeoffs exist. Especially when I'm writing that client software.
If you're that paranoid about your privacy, then I recommend you choose a user agent whose philosophy on privacy more closely represents your own.
Tor Browser, for example, is highly likely to "choose privacy over convenience" whenever possible with its default settings.
How exactly does the client reveal that?
From the spec: "When clients and servers share a PSK (either obtained externally or via a previous handshake), TLS 1.3 allows clients to send data on the first flight (“early data”). The client uses the PSK to authenticate the server and to encrypt the early data."
The client initiating the 0RTT provides a pre-shared key, thus revealing to the server that they're not a newcomer. I don't know exactly how many bits of that PSK could be used by the server to identify specific clients. For QUIC I think it's a 15-bit identifier. Browsers will need to clear the PSK (and so remove the 0-RTT) when they clear cookies or in a "private browsing" mode.
Is there any way to do a 0RTT request for a completely new connection/session?
I mean, if I want to get weather data from let's say NOAA, so a simple GET / HTTP/2, why would I want to send any PSK? Let the server send the response and the Server Cert and the client can decide whether to trust the reply or not.
CloudFlare only "allows" 0RTT for GETs, for example. Is that different, or they also need the PSK?
0-RTT is defined with a PSK (pre-shared key). There are two ways you might have a PSK. The only one that would come up in a web browser as they're constructed today is a "resumption" PSK, agreed between the two parties during a previous connection.
For the Internet of Things it's also envisioned that some devices might know a PSK at the outset to use TLS rather than some custom protocol to secure their traffic. Maybe your lightbulb controller knows a PSK for the lightbulbs baked in at the factory. But it's not expected that web browsers will care about this case.
I'm pretty happy with the strong confidentiality guarantees offered by TLS 1.3, and a finished standard is better than more draft and committee turns, but I think the simple use case of securely accessing "public" information with 0-RTT seems to be left out.
Or simply serving static content faster would have been a nice few percentage efficiency gain.
For the nontechnical, what does 0rtt do?
"Zero round trip time," i.e., if your web browser previously had an encrypted session with the server and cached the cryptographic keys involved, the next time you visit the website, it can immediately encrypt an HTTP request to that public key and send it in the first packet.
Normally there's a handshake involved: your browser and the server send packets to each other to set up an encrypted channel, then the server uses its certificate to prove that it's in control of its end of the private channel, then you can send a request. So if you and the server are, say, 50 ms apart, there's usually an extra 200 ms for this handshake, which 0-RTT can save you.
The danger is that because your browser isn't setting up an encrypted channel but just sending a request and hoping for the best, someone who can capture the packet can just re-send it to trigger the request twice. Duplicating the request is fine for, say, the HN home page, but annoying for a comment reply and a real problem for an online purchase.
> Normally there's a handshake involved: your browser and the server send packets to each other to set up an encrypted channel, then the server uses its certificate to prove that it's in control of its end of the private channel, then you can send a request.
Not an expert on this but this seems a little bit wrong or at least very misleading when I reason through it? I don't imagine the server needs to prove anything before it is sent data encrypted with its public key... if it doesn't have the private key then it simply can't decrypt; it wouldn't need a certificate for that. Rather I expect this is because the server & client want to generate ephemeral keys (for forward secrecy), which fundamentally requires a round-trip. Is that correct?
A few things:
Yes, normal setup for TLS 1.3 always does ephemeral keys for forward secrecy first.
If some alternate protocol started by sending data encrypted with a remote server's public key, that data could be replayed by attackers just like 0-RTT data in TLS 1.3; this problem is unavoidable for 0-RTT protocols.
But where should we get a public key from anyway? If it came from a previous session, the resumption PSK is better. If we got it by guessing, maybe checking a central store of known public keys, then it might be wrong and we have to start over any time it was wrong anyway.
We have to wait to see the certificate (and transcript signature) in the normal case because until we see the certificate (and signature) we have no proof we're talking to whoever we wanted to talk to, and even if the wrong person can't decrypt the message they can replay it at their leisure.
Note that "waiting" for these is an exaggeration, in TLS 1.3 the server sends both its half of the key exchange AND the certificate with the transcript signature AND any extra metadata in a single message, it's just conceptually separate because the latter part of this message is encrypted while the first part agreed the keys for encryption, so the client needs to think about them separately.
The server needs to transmit its certificate to the client. Before that the client generally doesn't know the server's public key.
For PFS suites with ephemeral-ephemeral DH/ECDH (DHE/ECDHE in TLS parlance) the client generates a DH key pair for each connection and so does the server; both public keys need to be exchanged before secrecy can commence.
EE-DH-based handshakes have innate entropy (due to the ephemeral keys), but TLS was initially built without EE-DH. For the historic RSA key exchange, client and server random nonces supply the handshake entropy and liveness proof, again necessitating a transmission of both nonces to the other party. RSA-KEX was removed in TLS 1.3. The nonces are always there, mostly for PSK and PSK-only handshakes (otherwise you could use PSKs only once).
TLS 1.3 resumption essentially uses a previously negotiated shared secret (PSK) which allows both parties to forego authentication-by-signature, because knowledge of the PSK authenticates them. Forward secrecy is added back in by EE-DH, but can actually be disabled.
TLS 1.3 0-RTT extends session resumption. Essentially, the early data is encrypted only under the PSK. It has neither forward secrecy (relative to the session under negotiation) nor liveness [I think it might be hypothetically possible to reject replays server-side by rejecting duplicate ClientHello.random values but this is hugely out of spec and completely negates any performance benefits 0RTT might have had].
(It's important to realize that TLS is and always has been a meta-protocol with a lot of knobs you can tweak. Now, for use in HTTPS/FTPS/STARTTLS the set of parameters is relatively restricted, because e.g. browsers simply won't support PSK-only handshakes. For general discussions of TLS properties this is something to keep in mind, however.)
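For the curious, "encrypted only under the PSK" comes straight from the key schedule: the early traffic secret is derived from the PSK and the ClientHello alone, with no contribution from the server's flight. A rough stdlib-only sketch of that derivation for SHA-256 suites, following my reading of Section 7.1 of the draft (illustrative, not a verified implementation):

    # Rough sketch of TLS 1.3 early-secret derivation (SHA-256 suites),
    # per my reading of the draft's key schedule; illustrative only.
    import hashlib
    import hmac
    import struct

    HASH_LEN = 32  # SHA-256

    def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
        return hmac.new(salt, ikm, hashlib.sha256).digest()

    def hkdf_expand_label(secret: bytes, label: str, context: bytes, length: int) -> bytes:
        # HkdfLabel = uint16 length || "tls13 " + label || context
        full_label = b"tls13 " + label.encode()
        hkdf_label = (struct.pack(">H", length)
                      + bytes([len(full_label)]) + full_label
                      + bytes([len(context)]) + context)
        # Single-block HKDF-Expand, enough for outputs <= 32 bytes.
        return hmac.new(secret, hkdf_label + b"\x01", hashlib.sha256).digest()[:length]

    def client_early_traffic_secret(psk: bytes, client_hello: bytes) -> bytes:
        early_secret = hkdf_extract(b"\x00" * HASH_LEN, psk)
        transcript_hash = hashlib.sha256(client_hello).digest()
        return hkdf_expand_label(early_secret, "c e traffic", transcript_hash, HASH_LEN)

Nothing from the server's current flight enters that derivation, which is exactly why early data has neither forward secrecy nor replay protection.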
Rejecting replays by remembering forever the conversations you had previously is permissible in the standard and even called out as something an application can do. It's not required because, whilst it's trivial for a toy web server, nobody operating at scale can do it.
The first time you connect to a server, a "handshake" needs to be performed in order to generate a shared secret key. If you've already performed the handshake with a given server in the past, 0-RTT allows you to skip it and use the key you generated before.
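You can see the "reuse the key you generated before" part from Python, which exposes the resumption session object (though, as far as I know, CPython's ssl module does not let you send 0-RTT early data itself). A small sketch, with example.com standing in for any host:

    # Sketch: TLS session resumption with Python's ssl module (3.6+).
    # This only shows where the resumption PSK comes from; CPython's ssl
    # does not (to my knowledge) expose sending 0-RTT early data.
    import socket
    import ssl

    ctx = ssl.create_default_context()
    host = "example.com"  # placeholder host

    # First connection: full handshake, then remember the session.
    with socket.create_connection((host, 443)) as tcp:
        with ctx.wrap_socket(tcp, server_hostname=host) as tls:
            # With TLS 1.3 the session ticket arrives after the handshake,
            # so do a little I/O before grabbing the session object.
            tls.sendall(b"HEAD / HTTP/1.1\r\nHost: " + host.encode()
                        + b"\r\nConnection: close\r\n\r\n")
            tls.recv(4096)
            saved_session = tls.session

    # Second connection: offer the saved session for an abbreviated handshake.
    with socket.create_connection((host, 443)) as tcp:
        with ctx.wrap_socket(tcp, server_hostname=host,
                             session=saved_session) as tls:
            print("resumed:", tls.session_reused, "version:", tls.version())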
Great presentation by a couple of Cloudflare employees:
"Deploying TLS 1.3: the great, the good and the bad”—https://www.youtube.com/watch?v=0opakLwtPWk
By the looks of it, Cloudflare already does support 0-RTT as a (generally available) beta feature in the crypto tab of a website. Maybe TLS 1.3 not being enabled on the origin machine protects the origin from this type of attack.
A quick thought is that the protocol could require a sequence number on 0-RTT and only accept newer ones.
Section 8 of the Draft lists three plausible ways to prevent replay attacks. There isn't a shortage of ways to prevent them, but it takes actual effort by the application software, because you need to store state. How much state do you want to store - and where?
This looks trivial when you are running one Apache httpd on a Raspberry Pi on your desk. Why not just build any of these approaches right into the standard? And then you try to figure out how to make it work for Netflix or Google, who have thousands of clusters of servers - and your brain explodes.
So that's why the standard doesn't specify one solution and require everybody to use it but it does say if you need 0-RTT then you need to figure out what you're going to do about this, including specifying some of the nastier surprises that shuffle message orders and change which servers get which messages.
Example: let's say you think you're clever, you have two servers A and B, load balanced so they usually take distinct clients but can fail into a state where either takes all clients. You might figure you can just track the PSKs in each server, offer 0-RTT and if a client tries to 0-RTT with a PSK from the other server (somehow) it'll just fail to 1-RTT, no big deal.
Er, nope. Bad guys arrange for a client to get the "wrong" server and capture the 0-RTT from that client. The "wrong" server says "Nope, do 1-RTT", the client tries again, and in parallel the bad guys play the captured 0-RTT packets to the "right" server, which processes the exact same request the client just retried on the "wrong" one - the replay has succeeded.
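To make the pitfall concrete, here is a toy sketch of the "track what you've seen" defence, keyed on a hash of the captured ClientHello (the class and names are made up). It works fine on one box and silently stops working the moment each box keeps its own set:

    # Toy sketch of per-server anti-replay state; names are made up.
    # The point: a "seen" set that lives on a single server cannot stop
    # a replay that is delivered to a different server.
    import hashlib

    class EarlyDataGate:
        def __init__(self):
            self.seen = set()  # in-memory, not shared with other servers

        def accept_early_data(self, client_hello: bytes) -> bool:
            """Accept a given 0-RTT attempt at most once *on this server*."""
            key = hashlib.sha256(client_hello).hexdigest()
            if key in self.seen:
                return False  # replay, as far as this server knows
            self.seen.add(key)
            return True

    server_a, server_b = EarlyDataGate(), EarlyDataGate()
    captured = b"...captured ClientHello plus early data..."

    print(server_a.accept_early_data(captured))  # True: original delivery
    print(server_b.accept_early_data(captured))  # True: replay succeeds, B never saw it

Sharing that state (or some approximation of it) across every node that might terminate the connection is exactly the operational cost the standard leaves to the deployment.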
Does someone know what the differences are between the final version and the draft that Chrome and Firefox enabled in Feb 2017? How much did they have to change for the middleboxes?
The final version is going to be basically the last draft (draft-ietf-tls-tls13-28) with a few editorial changes. There's a changelog in the draft: https://tools.ietf.org/html/draft-ietf-tls-tls13-28#section-...
The question is just which draft Chrome and Firefox were using back then. The changes for the middleboxes were, according to the changelog, in draft-22, and IIRC consisted basically of adding back a few unnecessary fields and allowing a useless handshake message (which is ignored by the receiver). The main trick was IIRC to make all TLS 1.3 connections (resumed or not) appear identical to a TLS 1.2 resumed connection.
A more detailed history of all changes to the spec can be found at its git repository: https://github.com/tlswg/tls13-spec/
No banking backdoor either. Sanity won out!
See below if you haven't come across this https://www.thesslstore.com/blog/tls-1-3-banking-industry-wo... https://tools.ietf.org/html/draft-rhrd-tls-tls13-visibility-...
If I'm not mistaken this means no "authorized" MITM
> Static RSA and Diffie-Hellman cipher suites have been removed; all public-key based key exchange mechanisms now provide forward secrecy.
It does not. There are some passive decryption tools that will no longer work because they functioned by having non-forward-secure connections and the server's private key installed in the decrypter. (But one can just not support TLS 1.3 at the server to keep them working.)
MITM proxies, which are trusted by the client and which terminate and recreate the TLS connection, will continue to function. (Assuming they implemented TLS 1.2 correctly, which some didn't.)
Does that assume that all of the components (browser and server) support 1.2 as well? In a theoretical future state, if I disable 1.2 on my browser, doesn't that mean I won't trust a MITM box?
> Does that assume that all of the components (browser and server) support 1.2 as well?
No: a client, server, and MITM proxy can all be exclusively TLS 1.3 and everything will still work(+).
(+) as much as it did with TLS 1.2, anyway.
Did banks and other network operators who require monitoring their traffic just deem the MITM proxies too expensive or complex, or what was the reason for their protest?
The main complaint seems to be that the proxy can no longer use the Triple Handshake attack to inspect only part of the connection and pass the rest through, it's instead forced to do a full MITM all the time: https://tools.ietf.org/html/draft-camwinget-tls-use-cases-00
Except they'll just start checking SNI instead to see what sites you're browsing. Getting rid of triple handshakes is good from a theoretical perspective, but it does nothing at all to increase your privacy against middle-boxes.
https://tools.ietf.org/html/draft-ietf-tls-sni-encryption-02 is the current state of the effort to encrypt SNI. It's a good overview of which obvious ideas can't work and why, plus some complicated ideas that might work, with two of them expanded to a state where it's reasonable to start developing opinions about them as engineers or cryptographers.
Ultimately the destination is that one of these two (or potentially some upset newcomer if it's written up and presented soon) becomes the agreed way to do SNI encryption, and perhaps in 2019-2020 we start seeing clients & servers that can really do it.
For the short term it remains unclear which of the following three things will happen, both generally and in particular environments:
1. (Bad) Organisations bite the bullet and MitM proxy everything, expenditure on MitM proxies sky-rockets, and lawyers spend lots of time & money in court arguing that they _had_ to MitM proxy everything even though it violated employees' legal rights.
2. (Good) Organisations suck it up and get rid of MitM proxies almost everywhere, preferring edge solutions that actually still work with TLS 1.3 without needing to spend loads of cash on MitM proxies for everything.
3. (Meh) Organisations refuse to use TLS 1.3, insisting on downgrading everything to TLS 1.2 and crossing their fingers that it never becomes obsolete or insecure (see also Windows XP).
With SNI, they can't easily see whether you're looking at https://en.wikipedia.org/wiki/Cat or at https://en.wikipedia.org/wiki/Pornography (just looking at the size is not enough; Wikipedia has millions of pages, thousands of them are changed every day, it has a variable-sized HTML comment, and the HTML size also changes when you're logged in), while with the triple handshake attack, they can see the full HTTP request and response headers while avoiding most of the cost of being "in the middle" during the actual content transfer.
"Assuming they implemented TLS 1.2 correctly, which some didn't." -> BlueCoat?
Basically everybody selling middleboxes screwed up, some worse than others but even the workaround in TLS 1.3 could best be understood as a hint that most middleboxes are ineffective garbage.
Nota bene: TLS debugging via SSLKEYLOGFILE and similar mechanisms is of course not affected.
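For anyone who hasn't used it: clients that honour the SSLKEYLOGFILE environment variable (Firefox and Chrome, among others) append per-session secrets to that file, and Wireshark can use it to decrypt captures; Python has exposed the same hook since 3.8. A small sketch:

    # Sketch: write NSS-style key log entries so Wireshark can decrypt a
    # test client's TLS 1.3 traffic. Requires Python 3.8+.
    import socket
    import ssl

    ctx = ssl.create_default_context()
    ctx.keylog_filename = "/tmp/tls-keys.log"  # point Wireshark at this file

    with socket.create_connection(("example.com", 443)) as tcp:
        with ctx.wrap_socket(tcp, server_hostname="example.com") as tls:
            print(tls.version())  # e.g. 'TLSv1.3'; the secrets are now in the log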
Yes and no. Passive eavesdropping where the eavesdroppers has been given the servers' RSA keys no longer works, but that's about it. This was mainly something that banks did for very, very defective reasons.
Other types of MiTM still work, such as an active MiTM through a malicious root certificate.
This is not true. The client can simply skip the certificate verification, making the connection unauthenticated. Raw Public Keys (which are basically simplified X.509 certificates), can also lead to unauthenticated connections.
In fact, Appendix C.5 [1] reads:
Previous versions of TLS offered explicitly unauthenticated cipher suites based on anonymous Diffie-Hellman. These modes have been deprecated in TLS 1.3. However, it is still possible to negotiate parameters that do not provide verifiable server authentication by several methods, including:
- Raw public keys [RFC7250].
- Using a public key contained in a certificate but without validation of the certificate chain or any of its contents.
[1] https://tools.ietf.org/html/draft-ietf-tls-tls13-28#appendix...
Anyone know what TLS 1.3 offers (or fixes) that TLS 1.2 does not?
WolfSSL has a short write up on the changes: https://www.wolfssl.com/differences-between-tls-1-2-and-tls-...
Worth watching: "Deploying TLS 1.3: the great, the good and the bad”—https://www.youtube.com/watch?v=0opakLwtPWk
SNI is still in plain text :( They could just hash the SNI and do matching based on hashes.
That gains nothing, since the attacker can simply connect to the service, replaying your hash, and see which certificate comes back. Take a look at https://tools.ietf.org/html/draft-ietf-tls-sni-encryption-02 which, in its section 2, has a long list of requirements a solution should meet; hashing the SNI fails at least the first two (Mitigate Replay Attacks and Avoid Widely Shared Secrets).
There's also a limited number of domains registered, so even if you did something to prevent hash replays you could just try them all and see which matches.
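The "try them all" point is worth spelling out: hostnames carry so little entropy that a hashed SNI is effectively a lookup-table key. A tiny sketch of the dictionary attack (the candidate list and hash choice here are arbitrary):

    # Sketch: why hashing the SNI gains nothing. An on-path observer can
    # pre-hash candidate hostnames (from CT logs, zone files, crawls, ...)
    # and simply look up the hash it observes on the wire.
    import hashlib

    candidates = ["en.wikipedia.org", "news.ycombinator.com", "example.com"]
    table = {hashlib.sha256(name.encode()).hexdigest(): name for name in candidates}

    observed_sni_hash = hashlib.sha256(b"news.ycombinator.com").hexdigest()
    print(table.get(observed_sni_hash, "unknown"))  # -> news.ycombinator.com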
I have a dumb question - Can existing clients with the old TLS versions connect to a server with TLS 1.3?
A TLS server and client can both support multiple versions. Clients which don't support 1.3 will continue to use 1.2 (or 1.1 or 1.0, if the server still supports them)
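As a sketch of how that looks in code (Python's ssl module here; most TLS stacks expose equivalent knobs), a server context can accept TLS 1.2 from older clients while negotiating 1.3 with newer ones:

    # Sketch: a server context that negotiates TLS 1.3 with new clients while
    # still accepting TLS 1.2 from older ones. Needs Python 3.7+ linked
    # against OpenSSL 1.1.1+ for TLS 1.3; the cert paths are placeholders.
    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("server.crt", "server.key")
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # drop 1.0/1.1
    ctx.maximum_version = ssl.TLSVersion.TLSv1_3  # allow up to 1.3

    print(ssl.HAS_TLSv1_3)  # True if the underlying OpenSSL supports 1.3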
So it looks like this season's NSA/GCHQ backdoor is 0-RTT, and it will be implemented in the commercial variants, whilst the open source variants will turn it off by default. Or restrict it like Cloudflare does, to HTTPS GETs without query params only.
Can you explain how 0-RTT might be used as a back door?
(... edit, actually, I recognize this username from previous nonsensical discussions about crypto and backdoors: https://news.ycombinator.com/item?id=13364173 )
Thankfully those folks easily expose themselves. Calling SipHash senseless security theatre explains it as well.
The trick about "backdoors" is that it is hidden. 0-RTT has very explicit guarantees about what it can and cannot do. By its very nature, and as written in the spec, it allows for a replay attack (which is in many cases entirely harmless, but is a concern regardless).
The rest of your comment is less sensible than the first. Everyone will implement it, and it's up to the user to decide whether they feel that they need the feature and know that their application is unaffected by replay attacks.
You make it sound like the NSA/GCHQ somehow secretly put a weakness into an IETF standard that's gone through many public drafts...
That's roughly the line of their job, as we have all learned the hard way a few years back.
They're hardly the only intelligence agencies in the world, so I don't get why they're specifically being pointed out unless you have some direct evidence.
As far as I’m aware, 0rtt started with Google’s QUIC. It’s since gone through a ton of academic and industry debate, particularly at the IETF level. It’s something optional to turn on, comes with notes on limitations and weaknesses, and major supporting vendors like Cloudflare have giant blog posts about how it can be used in only limited ways. How is this an intelligence crafted backdoor?
If you were going to point fingers (probably unfairly) at people whose agenda seems compatible with agencies that don't like BCP 188 (the IETF policy document "Pervasive Monitoring Is an Attack"), then the best candidates would be those asking for the "transparency" features, some of whom claimed at different times to represent data centre operators, financial institutions, and IoT manufacturers.
None of that made it into this draft, indeed the Monday meeting (this link is about the Wednesday meeting although practically speaking I think this was a done deal by Monday) of the TLS working group at IETF 101 basically killed all those plans, at least in so far as they impact TLS 1.3 itself. The IETF operates on "rough consensus" and there wasn't any way forward on "transparency" (aka snooping) that had consensus, so it was either publish this or stall forever.