A popular conference talk has been making the rounds, praising C++26’s new safety features — erroneous behavior, standard library hardening, contracts — as the answer to decades of memory safety criticism. The audience claps. The community shares. Everyone feels reassured that C++ is handling the situation.
It isn’t. And a careful look at the actual evidence reveals a talk built on a shaky opening, overstated claims about compile-time evaluation, and a fundamental misunderstanding of what “opt-in safety” actually delivers at scale. The individual features are real improvements. The framing that they constitute an adequate response to the memory safety crisis is not.
The talk opens with the CrowdStrike incident of July 2024, where a faulty update to the Falcon sensor bricked roughly 8.5 million Windows machines. The speaker poses it as a rhetorical question: “Was it negligence? Was it greediness? Or was it bad engineering like choosing the wrong implementation language? Who knows?”
Except we do know. CrowdStrike’s own Root Cause Analysis, published August 6, 2024, identified the root cause as an out-of-bounds memory read in the Content Interpreter component of the Falcon sensor. The new template type delivered via Rapid Response Content in Channel File 291 defined 21 input fields, but the sensor code invoking the Content Interpreter supplied only 20. The attempt to read the 21st field went past the end of the input array, fetching an invalid value that was then dereferenced as a pointer — causing an unhandled exception in the kernel-mode driver (csagent.sys) and an immediate BSOD. CrowdStrike’s Content Validator, which should have caught the field count mismatch, had a bug that let the malformed template pass through. This is textbook memory unsafety — a bounds violation, the exact category at position number one on the CWE Top 25 that the speaker references later in the same talk.
So why the theatrical “who knows?” The speaker had an opportunity to make a strong, honest argument: “Here’s a real-world catastrophe caused by an out-of-bounds read. Here’s what C++26 offers to prevent exactly this class of bug.” Instead, the audience gets rhetorical hand-waving. The irony cuts deeper: the CrowdStrike bug is precisely the kind of defect that std::span with library hardening would have caught — had CrowdStrike been using it. But the speaker doesn’t make this connection explicit, which makes the entire opening feel like decoration rather than argument.
And there’s a subtlety the talk misses entirely. Even with std::span and library hardening, the Falcon sensor’s Content Interpreter would have trapped on the bounds violation and terminated. The machine still wouldn’t have booted. The difference between “crash from out-of-bounds read” and “controlled trap from bounds check” matters for security — the attacker doesn’t get to exploit the invalid memory — but from a reliability perspective, 8.5 million machines still go down. The real lesson from CrowdStrike wasn’t about programming languages at all. It was about deploying untested configuration updates to the entire fleet simultaneously, without staged rollouts, without canary deployments, without a fallback mechanism. Delta Airlines sued CrowdStrike for exactly this — gross negligence in deployment, not in programming language choice.
Safety, as the speaker correctly states, is a system property. Yet the entire talk proceeds to discuss only language-level features.
The talk cites the well-known statistic: about 70% of security vulnerabilities come from memory safety problems in C and C++. Multiple governments have published papers echoing this figure. The speaker uses it to motivate the C++26 safety work, which seems reasonable until you trace where the number actually comes from.
The 70% figure originated from Microsoft’s analysis of Windows CVEs (published in 2019) and Google’s analysis of Chromium and Android bugs, both covering codebases with millions of lines of legacy C and C++ code accumulated over decades. Microsoft’s Matt Miller presented this data at BlueHat IL 2019, analyzing CVEs assigned across all Microsoft products. Google’s Project Zero and Chrome security teams published similar analyses for their codebases.
These are massive, legacy-heavy codebases where much of the code predates modern C++ practices. Code written with raw new/delete, C-style arrays, char* string manipulation, manual buffer management — the full catalogue of pre-C++11 antipatterns. And there’s a conflation problem: the studies report “C/C++” as a single category. The Windows kernel is largely C, not C++. Android’s HAL and native layer are heavily C. Modern C++ with RAII, smart pointers, and containers is a fundamentally different beast than malloc/free C code, but the statistic treats them as one.
Here’s what the talk doesn’t mention: Google’s own data from September 2024 shows that Android’s memory safety vulnerabilities dropped from 76% to 24% over just six years — not by retrofitting safety features onto existing C++ code, but by writing new code in memory-safe languages (Rust, Kotlin, Java). Google’s security blog makes a fascinating observation: vulnerabilities have a half-life. Code that’s five years old has 3.4x to 7.4x lower vulnerability density than new code, because bugs get found and fixed over time. The implication is striking — if you just stop writing new unsafe code, the overall vulnerability rate drops exponentially without touching a single line of existing C++.
That’s a fair counter to the blanket “C++ is unsafe” narrative, and the talk could have made this point. But it didn’t — the speaker presents the 70% figure at face value, without noting that much of it comes from C code or pre-modern C++ practices, and without acknowledging Google’s finding that the fix was writing new code in safe languages, not retrofitting old C++ with opt-in checks.
The C++ committee’s approach — adding opt-in safety features to the language — is the exact opposite of what worked at Google. Google’s strategy says: write new code in languages where safety tools aren’t needed because safety is the default. The committee’s strategy says: keep writing new code in C++, but use these new safety tools. Google has empirical data showing their approach works. The committee has proposals.
The speaker makes one of the talk’s boldest claims about constant evaluation: that the constexpr interpreter built into compilers is “even better than all the sanitizers” because it detects all undefined behavior, all the time, at compile time.
This is technically correct and practically useless for the vast majority of production code. The constexpr interpreter can only evaluate code that doesn’t touch the outside world — no I/O, no system calls, no networking, no file operations, no dynamic memory patterns that depend on runtime input, no multithreading. The speaker acknowledges two of these limitations in passing (“we can’t have compile-time fuzzing” and “we can’t have compile-time multithreading”) but frames them as minor remaining gaps.
They’re not minor. They’re the majority. Consider what a typical C++ application does in practice: it reads input from files, sockets, or hardware registers. It processes that input through state machines. It allocates memory dynamically based on that input. It communicates results to other systems. It coordinates work across threads. Every single one of these activities — where bugs actually live and where attackers actually strike — is beyond constexpr’s reach.
The CrowdStrike out-of-bounds read happened while parsing a runtime configuration file. Constexpr can’t help with that. Heartbleed was a buffer over-read triggered by a malformed TLS heartbeat message received at runtime. Constexpr can’t help with that either. The entire category of “processing untrusted input,” which is the primary attack surface for security vulnerabilities, is inherently runtime behavior.
Compile-time unit tests are great. I use them. They catch real bugs. But presenting constexpr evaluation as a primary defense mechanism against the classes of vulnerabilities that actually cause incidents is overselling a tool for a job it was never designed to do.
Standard library hardening in C++26 is genuinely useful. When you access std::span, std::vector, std::string, or std::array out of bounds, the hardened implementation will trap immediately instead of reading garbage memory. This is a real, measurable improvement.
But look at what libc++’s own documentation says about the current state. The default hardening mode is none. You have to opt into it. The “fast” mode suitable for production only checks two assertion categories: valid-element-access and valid-input-range. Iterator bounds checking requires ABI changes that most vendors haven’t enabled. The unordered containers (unordered_map, unordered_set, etc.) are only partially hardened. vector<bool> iterators aren’t hardened at all. And checking for iterator invalidation — accessing a vector element through an iterator after the vector has been reallocated — still leads to undefined behavior even with hardening enabled.
Then there’s the coverage question. How much of a typical performance-critical C++ codebase actually uses std:: containers? In HFT, in game engines, in embedded systems, in kernel modules — the domains where C++ is chosen precisely because you need control — teams routinely use custom containers, custom allocators, ring buffers, lock-free data structures, memory-mapped regions, and direct pointer arithmetic. Library hardening covers zero of that. It also covers zero of the C APIs you’re calling into: POSIX, kernel interfaces, hardware abstraction layers, third-party libraries with C linkage.
The talk’s demo is telling: a trivial program with a hardcoded out-of-bounds access on a std::span. Of course hardening catches that. But this is the easiest case imaginable. Show me hardening catching a use-after-free through a raw pointer to a pool-allocated object in a real trading system. It can’t, because that’s outside its scope.
The speaker is visibly enthusiastic about P2900, the C++ contracts proposal. Preconditions, postconditions, contract assertions — these are developer annotations that express what should be true at specific points in the program. The speaker correctly points out that the existing assert() macro is debugging-only and gets compiled out in release builds. Contracts fix that by working with full optimization.
Good. But contracts have a structural problem that the talk doesn’t address: they depend entirely on the developer writing correct and complete annotations. This is the “disciplined programmer” assumption that has been the central failure mode of C++ safety for 40 years. We gave developers const. They don’t always use it. We gave them smart pointers. They still use new. We gave them std::array. They still use C arrays. Every single opt-in safety feature in C++ history has had incomplete adoption because adoption requires discipline, and discipline doesn’t scale across teams, across dependencies, across decades of maintenance.
The contracts evaluation semantics make this worse. You can choose ignore (don’t check at all), observe (log the violation and continue), quick_enforce (trap immediately), or enforce (log and terminate). The observe semantic is explicitly designed for legacy code — “it worked for the past 10 years, why shouldn’t it now?” the speaker says. This is a pragmatic accommodation that also creates an escape hatch large enough for every team that doesn’t want to deal with contract violations in production.
Compare this with how Ada/SPARK handles contracts. In SPARK, contracts are verified statically by a formal proof engine using SMT solvers (CVC4/Z3). The toolchain proves, at compile time, that preconditions are always satisfied by all callers. If it can’t prove it, the code doesn’t pass review. There’s no “observe and continue” — you fix the proof or you don’t ship. C++ contracts are runtime checks with optional enforcement. SPARK contracts are compile-time proofs with mandatory satisfaction. These aren’t the same category of tool.
And it gets worse. P2900 doesn’t even support class invariants — they’re “deferred to a future proposal.” Eiffel had class invariants in 1988. The D language has had in/out/invariant contracts since 2001 — twenty-five years before C++26. As Sean Baxter, author of the Safe C++ proposal (P3390), put it: “Contracts check for bugs. They don’t prevent bugs. A bounds-checked access that traps is still a denial of service.” Baxter proposed an actual Rust-inspired borrow checker for C++. It wasn’t adopted.
The talk presents the C++26 change from undefined behavior to erroneous behavior for uninitialized variable reads as a significant win. And it is — in a narrow sense. Compilers can no longer exploit uninitialized reads for aggressive optimization. They can’t assume “the programmer would never read an uninitialized variable, so I can delete this null check three lines later.” That’s a real improvement.
But the speaker’s framing — “by just recompiling this code with a modern compiler you will get implicitly initialization behavior” — is misleading. Erroneous behavior means the program has well-defined but wrong behavior. The variable still holds an indeterminate value. You’re still reading garbage. The program still produces incorrect results. The difference is that the compiler must faithfully compile your buggy code instead of transforming it into something completely unpredictable.
Compare this with Rust, Go, Swift, or even Java: the variable is either initialized to a known value at declaration, or the program doesn’t compile. Period. There’s no “erroneous behavior” category because the error is prevented structurally. In C++26, you can still write int x; return x; and get a program that compiles, runs, and returns garbage. The garbage is just more predictable now.
The speaker recommends value initialization (int x{};) as the solution. And it is. But it has been the solution since C++11, fourteen years ago. If the community had universally adopted value initialization in 2011, we wouldn’t need erroneous behavior in 2026. The fact that we needed a language change to mitigate the consequences of developers not using a feature we’ve had for over a decade is itself an argument against the opt-in approach.
The speaker mentions profiles — safety profiles that would enforce categories of safe behavior — and calls them “the very best thing C++ would ever have.” Then immediately says they’re pushed to C++29 at the earliest. This is the most revealing moment in the entire talk.
Here’s the realistic timeline, and history is not encouraging. C++20 was ratified in 2020, and modules are still not fully usable in production five years later. C++23 had limited adoption by early 2025. C++26 gets ratified in 2026. Major compilers achieve reasonable conformance by 2027-2028. As of early 2025, no major compiler (GCC, Clang, MSVC) ships production-ready P2900 contracts support. Large codebases begin adopting C++26 features by 2029-2030. Profiles in C++29 won’t be ratified until 2029, with compiler support by 2030-2031 and real adoption by 2032-2033. And profiles have no working implementation — critics on the committee itself have questioned their feasibility.
The U.S. government’s guidance on memory-safe languages came from the NSA in November 2022, CISA in partnership with international agencies through 2023-2024, and the White House ONCD published “Back to the Building Blocks” in February 2024. These agencies are asking software suppliers to present memory safety roadmaps now. They’re not going to wait until 2033 for C++ profiles.
Meanwhile, Rust hit 1.0 in 2015. It’s been production-ready for a decade. The Linux kernel accepted Rust code starting in 2022. Android’s Rust adoption began around 2019. Google’s data shows the results: memory safety vulnerabilities in Android dropped from 76% to 24% in six years. Not by adding opt-in features to C++. By writing new code in a language that is safe by default.
The C++ committee’s approach isn’t wrong — these features genuinely help. But positioning them as an adequate response to the memory safety crisis, given the pace of standardization and adoption, strains credibility. A strategy that delivers meaningful safety improvements in 2032 is competing against one that’s been delivering them since 2015.
The speaker is clearly knowledgeable and well-intentioned. The features presented are real improvements. But an honest talk about C++ safety in 2025 would acknowledge several things the audience didn’t hear.
It would acknowledge that opt-in safety features have a 40-year track record of incomplete adoption in C++, and that contracts and hardening will follow the same pattern unless something fundamentally changes about how C++ is taught, tooled, and enforced.
It would present Google’s Android data — the most compelling real-world evidence we have on memory safety — and honestly discuss why Google’s solution was “write new code in Rust” rather than “add contracts to our existing C++.”
It would distinguish between the stock of existing vulnerabilities (which the 70% stat describes) and the flow of new vulnerabilities (which depends on what language you’re writing new code in today).
It would acknowledge that the most important safety and security lessons from CrowdStrike were about deployment processes, not programming languages — and that even a perfectly memory-safe Falcon sensor would have still crashed 8.5 million machines if CrowdStrike pushed a malformed configuration without testing.
And it would be honest about the timeline. C++ is making real but incremental progress on a problem that other languages solved architecturally. That’s a defensible position. “C++ is handling it” is not.
The C++ community deserves better than reassurance. It deserves a clear-eyed assessment of where these tools help, where they don’t, and what the realistic alternatives are. Because the organizations making language-choice decisions right now — the ones reading NSA advisories and CISA guidance — aren’t going to wait for C++29 profiles. They’re choosing today. And “we have a plan” is not the same as “we have a solution.”