WebPKI and You
blog.brycekerley.net

If you like this sort of thing, perhaps you'll enjoy my SSL/TLS and PKI history, where I track a variety of ecosystem events starting with the creation of SSL in 1994: https://www.feistyduck.com/ssl-tls-and-pki-history/
Does the TLS working group know that? Pretty much all their design work, apart from one guy representing email use via Postfix and a few guys working on a perpetual-motion-machine profile for telcos, assumes the only possible use for TLS is the web.

> TLS is not The Web

Maybe you're onto something, but in what way do you think that TLS is not serving other protocols?
Personally, I think we have a bigger problem on the PKI side, where Web PKI is very strong, but Internet PKI has been neglected. The recent move to remove client authentication is a good example.
The design decisions seem to be completely oblivious to the fact that anything exists outside the web: WebPKI, always-on Internet connections with DNS and the ability to tie in to third-party services to do things like ECH, everything can be flipped over to support whatever trendy thing someone has pushed through the WG over a period of a few weeks, CPU and memory is free and near-infinite, etc.

Now project that onto a TLS implementation that has to run on a Cortex M3 in some infrastructure device, little CPU, little RAM, no DNS, and the code gets updated when the hardware gets replaced after 10-20 years.

The end result is the creation of a hostile environment in the WG where pretty much everyone not involved in web use of TLS has left, so it's become an echo chamber of web-TLS users inventing things for other web-TLS users to play with.
Well, WebPKI is for the web; if you need TLS for other purposes that don't fit the goals of those looking to protect web users and web infrastructure, you need a different PKI. It's not technically hard to set up your own private PKI, and there are plenty of companies happy to provide those services if you don't want to do it yourself, but it is more complicated and costly than just using WebPKI. So of course we see WebPKI resources getting used inappropriately, and then those users complain when there's a need for revocations and/or changes.
> Now project that onto a TLS implementation that has to run on a Cortex M3 in some infrastructure device, little CPU, little RAM, no DNS, and the code gets updated when the hardware gets replaced after 10-20 years.
Also the OT world needs to accept that they can't have their cake and eat it too. If you need to be able to leave the same code running untouched for 10-20 years, you don't connect it to the internet. If you need it connected to the internet, you accept that it needs to be able to receive updates and potentially have those updates applied in a matter of days. Extremely strict external security controls can mitigate some of these situations but will never eliminate the need for there to be a rapid update process.
Why on earth not? Just because most of the code that uses the web PKI is crap and needs constant patching doesn't mean there aren't developers writing code that isn't crap and that you can leave running for 10-20 years without any patching. Years ago someone who created a (at the time) widely-used security tool got asked why there hadn't been any updates in years, and whether it was abandonware. His response was "some people do things properly the first time".

> Also the OT world needs to accept that they can't have their cake and eat it too. If you need to be able to leave the same code running untouched for 10-20 years, you don't connect it to the internet

And before you say "even if the code is fine it's old crypto, it's insecure", when was the last time someone got pwned because they ran 25-year-old TLS 1.0?
> Why on earth not? Just because most of the code that uses the web PKI is crap and needs constant patching doesn't mean there aren't developers writing code that isn't crap and that you can leave running for 10-20 years without any patching.
I never said that's not possible, I said you can't design your systems to assume that it's one of those things. It is certainly possible that after 10-20 years a system might never have needed an update, but you didn't know that when it was built, purchased, or implemented and assuming that will be the case is undeniably irresponsible.
> And before you say "even if the code is fine it's old crypto, it's insecure", when was the last time someone got pwned because they ran 25-year-old TLS 1.0?
The correct answer there would be "none yet", and there's no guarantee it would ever happen, but there are known weaknesses so it's always a possibility. Again, not saying everything will need to be updated regularly, but it's not a good call to assume your thing will never need it.
Let's look at this from another angle. Presumably if you have a desire to expose a device to the internet as a whole it's because you either want it to be able to access external resources or you want external systems to be able to reach it, and the outside world has this tendency to move on over time if protocols are flawed, even if those flaws don't matter to your device. If there's a process for updating regularly, this is no big deal. If there isn't, your thing is going to get progressively more annoying to use wherever it needs to interact with systems outside of its control.
There's a huge suggestion in here which would make PKI vastly more respectable: disallowing root programs (browser operators) from also being CAs. I argued loudly at the time that Google Trust Services should be rejected, but the Mozilla rep just as loudly defended approving a CA application from a company that both runs a root program and happens to pay their entire salary.
PKI as it stands is only a few steps from Google just deciding everyone must have a short-lived certificate from Google to be on the web.
While I sort of see what you're trying to say, if you knew the groups and teams involved, you'd know there was no favouritism and a strong degree of separation between CA and root programs.
The root programs that have their own CAs are also cloud providers, who arguably have a legitimate need for a CA. In Apple's case they have their own CA but don't issue externally; they keep CA and root program separate.
I know the people involved enough to be very concerned that the CA/B decides whether something can exist on the web.
The CA/Browser Forum, not Google, decides the length of time certificates are valid for.
If you don’t like a particular CA’s policies, you can choose a different one.
That's fundamentally untrue. Browser root programs unilaterally decide their requirements, and also unilaterally boot other CA/B members out of the CA/B. The CA/B is not by any means a useful functional body; it's an excuse for why companies can't be forced to undo things after the fact.
That's absolutely incorrect. While the CABF sets the 'Baseline Requirements' that ultimately go into the WebTrust audit scheme that root programs use to accept roots into their trust stores, browsers can and do set their own rules.
The reduction of TLS cert lifetime to a max of 398 days was an Apple policy.
> browsers can and do set their own rules.
Here's a link to the minutes of the CABF meeting where the 25 certificate issuers and the 4 browser vendors (Apple, Google, Microsoft, Mozilla) unanimously agreed to reduce the validity period of TLS certificates [1].
> The reduction of TLS cert lifetime to a max of 398 days was an Apple policy.
Actually, all of the browser vendors voted to reduce the validity period of TLS certificates from 825 days to 398 days at the September 2019 meeting. The ballot failed because a majority of the certificate issuers voted against it.
At the February 2020 CABF meeting, Apple announced it would unilaterally enforce the 398-day limit through its own root program policy. Starting September 1, 2020, any new TLS certificate with a validity period exceeding 398 days would simply not be trusted by Safari, macOS, or iOS.
This effectively made the 398-day limit a de facto standard — no CA would issue longer certificates if they’d be rejected by Apple devices [2].
[1]: https://cabforum.org/2025/04/11/ballot-sc081v3-introduce-sch...

| Date              | Max Certificate Validity | SAN Data Reuse Period |
|-------------------|--------------------------|-----------------------|
| Before March 2026 | 398 days (current)       | 398 days              |
| March 15, 2026    | 200 days                 | 200 days              |
| March 15, 2027    | 100 days                 | 100 days              |
| March 15, 2028    | 47 days                  | 10 days               |
| March 15, 2029    | 47 days                  | 10 days (final)       |

[2]: https://www.entrust.com/blog/2020/02/apple-announces-398-day...
I feel this is a perfect complement to the current #1 link: https://satproto.org/ which implements its own CA system with different trade-offs.
The short lived certificates started making a lot more sense to me when I discovered I could get Let's Encrypt to issue IP address certs. Clearly, in this context of use we need our certificates to die quickly.
You can now make any web server operate with a publicly valid TLS certificate without paying any money, registering a domain, configuring DNS or disclosing any personally identifiable information. It can be entirely automatic and zero configuration. The only additional service required is something like a STUN server so the public IP can be discovered and updated over time.
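The IP-discovery step described above can be sketched with a minimal STUN client. This is an illustrative implementation of the RFC 5389 wire format, not any particular library's API: it builds a Binding Request and decodes the XOR-MAPPED-ADDRESS attribute from the reply (the attribute is XORed with the STUN magic cookie to stop NATs from rewriting it). A real client would also match transaction IDs and handle the IPv6 address family.

```python
import ipaddress
import os
import struct

MAGIC_COOKIE = 0x2112A442  # fixed value defined by RFC 5389


def build_binding_request() -> bytes:
    # STUN Binding Request: message type 0x0001, zero-length body,
    # magic cookie, and a random 96-bit transaction ID.
    return struct.pack("!HHI12s", 0x0001, 0, MAGIC_COOKIE, os.urandom(12))


def parse_xor_mapped_address(response: bytes) -> tuple[str, int]:
    # Walk the attributes after the 20-byte header until we find
    # XOR-MAPPED-ADDRESS (0x0020), then undo the XOR obfuscation.
    pos = 20
    while pos + 4 <= len(response):
        attr_type, attr_len = struct.unpack_from("!HH", response, pos)
        if attr_type == 0x0020:
            _, family, xport = struct.unpack_from("!BBH", response, pos + 4)
            port = xport ^ (MAGIC_COOKIE >> 16)
            if family == 0x01:  # IPv4; IPv6 (0x02) omitted for brevity
                (xaddr,) = struct.unpack_from("!I", response, pos + 8)
                return str(ipaddress.IPv4Address(xaddr ^ MAGIC_COOKIE)), port
        pos += 4 + attr_len + (-attr_len % 4)  # attributes are 32-bit padded
    raise ValueError("no XOR-MAPPED-ADDRESS in response")
```

Sending `build_binding_request()` over UDP to any public STUN server (conventionally port 3478) and feeding the reply to `parse_xor_mapped_address` yields the address and port you present as from outside the NAT, which is exactly what you'd then put in the certificate request.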
I am reading your comment and find the proposition interesting, but I can't quite understand the part about the STUN server - doesn't that "just" help me find my own public IP address ? Do you mean that I could then give out this address to others (instead of them having to do a DNS lookup) so they can connect to the webserver ?
> I am reading your comment and find the proposition interesting, but I can't quite understand the part about the STUN server - doesn't that "just" help me find my own public IP address ?
He is hosting his domain on a machine behind a reverse proxy over which he has no control (common enough); in this case the server will not know its own public IP, as all lookups of (for example) `www.mydomain.com` will return the address of the proxy. To get the public IP he uses a STUN (or similar) public-facing service.
Not quite sure why he needs the public IP, though: from what I remember, the certs include the domain, not the IP.
You can issue a TLS certificate with a SAN that is a literal IPv4 address. You do not need a domain to serve TLS to clients. It definitely helps with the UX, but it's not mandatory for the browsers and other web tech to function.
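On the client side, validating such a certificate just means comparing the connection's address against the certificate's iPAddress SANs. A minimal sketch of that check, assuming the dict shape Python's `ssl.SSLSocket.getpeercert()` returns, where SAN entries appear as `("IP Address", "203.0.113.5")` pairs (the example certificate dict below is fabricated for illustration):

```python
import ipaddress


def cert_matches_ip(peercert: dict, ip: str) -> bool:
    # Compare the target IP against every iPAddress SAN in the cert.
    # Parsing with ipaddress normalises forms, so "::1" and
    # "0:0:0:0:0:0:0:1" compare equal.
    want = ipaddress.ip_address(ip)
    for kind, value in peercert.get("subjectAltName", ()):
        if kind == "IP Address" and ipaddress.ip_address(value) == want:
            return True
    return False


# Hypothetical peer-cert dict, shaped like getpeercert() output.
example_cert = {
    "subject": ((("commonName", "203.0.113.5"),),),
    "subjectAltName": (("IP Address", "203.0.113.5"),),
}
```

Note that only the SAN list matters here; matching against the commonName was deprecated by browsers years ago, which is why an IP-only certificate must carry the address as a SAN.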
If you're running a private PKI, sure, you'll do it.
What value is it when you are behind a proxy that can change IP? I mean, I'm going on the assumption that the proxy is not under his control, nor does it do the TLS termination.
If your public interface address can change, it does dramatically reduce the value of a purely IP-addressed host. But I don't think it eliminates it entirely.
With a dynamic IP you can still detect a change, reissue a cert for the new IP and proceed automatically. There are self-hosting and machine-to-machine scenarios where this amount of autonomy could be welcome.
Yes, the point is simply to discover the public IP you present as on the internet. It's not a particularly hard problem to solve, but you often can't determine your public address just from inside the machine. Being behind a NAT with TCP 80/443 forwarded to the actual web server is an example.
It's worth noting that while splitting the PKI hierarchies is a good thing, the CABF does provide rules for S/MIME (email signing) and Code Signing. Also, "WebPKI" never actually appears in the BR documents from what I can see, nor do they require the use of HTTP (hence why you can use these certificates for SMTP).