You gave me a u32. I gave you root. (io_uring ZCRX freelist LPE)

ze3tar.github.io

184 points by MrBruh 14 hours ago · 115 comments

FriedFishes 13 hours ago

I can't quite make out if this is new or not. The attack vector here seems congruent with a similar exploit from a couple of months ago [1].

But it still might be an open threat. On the email thread Jens seems to think that this is already patched and in stable; he also points out that for this exploit to work (as written in the article) you already need escalated privileges [2]. Catchy title, though.

[1] https://snailsploit.com/security-research/general/io-uring-z...

[2] https://seclists.org/oss-sec/2026/q2/448

stonegray 12 hours ago

> “and is writable with CAP_SYS_ADMIN”

Am I reading this wrong or is this just a way of executing an arbitrary binary with uid=0 if you have both CAP_NET_ADMIN and CAP_SYS_ADMIN?

If you can write modprobe_path, is it really news that you can find a way to execute code?

  • PlasmaPower 12 hours ago

    No, you can grant yourself this inside an unprivileged user namespace. `unshare -Ur capsh --print` lists the capabilities inside a user namespace and demonstrates that it has both CAP_SYS_ADMIN and CAP_NET_ADMIN.

    Almost all distros allow unprivileged user namespaces, and in my opinion this is the right decision, because they're important for browser sandboxing which I think is more important than LPEs.

    • delusional 10 hours ago

      I don't think namespace CAP_SYS_ADMIN grants you access to write non-namespaced sysctls like modprobe_path

  • pizzalife 12 hours ago

    Right. `CAP_SYS_ADMIN` is for all intents and purposes equivalent to root.

rishabhaiover 13 hours ago

What is happening? I see multiple outages and CVEs being reported on HN's front page. I've never seen this many security/incident-related posts on HN's front page.

  • spindump8930 13 hours ago

    Some combination of reporting bias given concerns about LLM security capabilities and actual new vulnerabilities found with LLM assistance. Even if exploits and outages are unrelated to LLMs, I'm certainly thinking about whether claude could build these things (or if actors already have).

  • NitpickLawyer 13 hours ago

    > What is happening?

    Slowly at first, and then suddenly. AI assisted anything follows this trend. As capabilities improve, new avenues become "good enough" to automate. Today is security.

  • john_strinlai 13 hours ago

    i believe a good portion of the cves are hitting the front page more because they are ai-related (found partially or in whole by ai) and make for quick upvotes.

  • elija 8 hours ago

    In some sense, I wonder if non-open-source is "safer" since LLMs can't mass scan the code for exploits.

    • overboard2 6 hours ago

      Maybe for a while, but there's nothing stopping LLMs from examining disassembler output.

  • calebhwin 4 hours ago

    It's actually the perfect evergreen content to discuss on HN in an age where so much else is AI generated.

  • majorchord 13 hours ago

    AI is happening.

    • cachius 13 hours ago

      In each recent case?

      • gordonhart 13 hours ago

        AI assistance was explicitly disclosed on yesterday's. Today's lists Claude as one of two contributors on the GitHub Pages site, at least, so it's also very likely.

        Agents are capable of finding this kind of stuff now and people are having a field day using them to find high-profile CVEs for fun or profit.

  • raverbashing an hour ago

    I wonder where the Rust naysayers are hiding now

    C code is broken, period

  • gilrain 13 hours ago

    Automated vulnerability discovery via LLM.

    • ryandrake 11 hours ago

      Anyone care to share which models and which prompts actually lead to finding these kinds of vulnerabilities? Or the narrowing-down workflow that can get an LLM to discover them? Surely just telling claude "Find all vulnerabilities in this project LOL" isn't enough? I hope?

      • Arcuru 9 hours ago

        The Anthropic researchers have said their flow is as simple as:

        1. Pick a file to seed as a starting place.

        2. Ask the LLM (in an agent harness) to find a vulnerability by starting there.

        3. If it claims to have found something, ask another one to create an exploit/verify it/prove it or whatever.

        4. If both conclude there is a vuln, then with the latest models you almost certainly found something real.

        Just run it against every file in a repo, or select a subset, or have an LLM select files with a simple "what X files look likely to have vulns?".

        So basically yes, it is that simple. It's just a matter of having the money to pay for the tokens.

    • pixl97 12 hours ago

      Everyone was talking about how Mythos was overblown marketing, and while it may be, they missed the forest for the trees. Capabilities have been escalating for a year now and we're at the point of widespread impact. I don't suspect we'll see a slowdown for a long time.

      • microtonal an hour ago

        I agree. It is not like Mythos or other LLMs are insanely smart/superhuman. Many of these vulnerabilities could be discovered fairly easily by trained human experts as well. The problem is more that it requires an insane amount of attention and time of highly-paid experts to shake out these issues vs. an LLM that never gets tired and can analyze a large amount of code at low cost.

        Linus' law was wrong because there were never enough (qualified) eyeballs to check the code. LLMs provide an ample supply of eyeballs (though it's not a benefit to open source, since proprietary developers can use the same LLMs).

      • pjmlp 2 hours ago

        Same applies to them being good enough to program, but many are so focused on source code generation that they don't get the whole picture.

        Thanks to agents and tool calling, there are now business cases that can be fully described by AI tooling, the next step in microservices, serverless and what not.

        Naturally with a much smaller team than what was required previously.

  • sva_ 9 hours ago

    A mix of AI and hybrid warfare.

  • jdub an hour ago

    ... there's also a bit of a frequency illusion factor.

  • themafia 10 hours ago

    Perhaps it was the prior quiescent period that was the anomaly.

sherr 3 hours ago

Desktop and server vulnerabilities are one thing. At least many are actively maintained and will get patched. I have a concern about all the common and cheap internet firewalls and routers that are around, running old software and kernels. Many or most will not get patched. I have some Ubiquiti boxes that are long out of support and run old kernels for instance. The hope is only that there's nothing they expose that gets hit.

pamcake 9 hours ago

This kind of post really shouldn't require client-side js from a third-party domain to read...

static markdown version: https://raw.githubusercontent.com/ze3tar/ze3tar.github.io/9d...

kro 13 hours ago

CAP_NET/SYS_ADMIN is required for this. So this would be "not as bad" as the others.

  • kam 11 hours ago

    Also "The page pool is only created on a real ZCRX-capable NIC (mlx5 ConnectX-6+, Intel E800, NFP)"

  • t0mas88 11 hours ago

    It could work for container escape?

    • kro 28 minutes ago

      Containers, even with a root user, are often stripped of these capabilities unless run with --privileged

somebudyelse 7 hours ago

Let's see... That's 4 Linux LPEs in the last 10 days?

Copy Fail [1]

Copy Fail 2: Electric Boogaloo [2]

Dirty Frag [3]

And now this...

[1]: https://copy.fail

[2]: https://github.com/0xdeadbeefnetwork/Copy_Fail2-Electric_Boo...

[3]: https://github.com/V4bel/dirtyfrag

staticassertion 13 hours ago

io_uring is a security nightmare. Constant privescs and a powerful primitive for syscall smuggling. Worth considering disabling it outright (already the case for most containers afaik).

shorden 11 hours ago

Interesting, I haven't tested this myself but intuitively I think that a 4 byte OOB write is plenty for a data-only attack like [PageJack](https://i.blackhat.com/BH-US-24/Presentations/US24-Qian-Page...), so I don't think hardening against the KASLR leaks discussed in OP would necessarily save you from this attack.

dundarious 10 hours ago

How many systems have the relevant NICs, and followed the non-automatic setup steps in https://docs.kernel.org/networking/iou-zcrx.html, and are not running within a VM/container disabling io_uring?

This seems on the low impact end of the numerous historical io_uring issues.

Interesting and important all the same.

saghm 12 hours ago

[flagged]

  • musicale 10 hours ago

    > "No way to prevent this", Says Only Language Where This Regularly Happens

       clang -fbounds-safety ...
    
    also see lib0xc etc.: https://news.ycombinator.com/item?id=47978834
  • dvt 11 hours ago

    Obviously the way to prevent this is by bounds checking, which is literally in the `770594e` patch. It's just a bug and they happen routinely in all languages. Since this is doing pointer arithmetic, it could just as easily happen in unsafe Rust, for example.

    • gpm 11 hours ago

      Like they said, "no way to prevent this" (kind of bug from happening again).

      • mikestorrent 11 hours ago

        Static analysis and other tools can find this, but they're expensive; wonder what the kernel team has access to?

        • PlasmaPower 11 hours ago

          If static analysis could actually find these issues with a reasonable false positive rate, the companies behind them would be running them on Linux to get the publicity of having found the issues like all the AI companies are doing now. Imo the good static analysis heuristics are already built into compilers or in open source linters.

          • canucker2016 8 hours ago

            The cheap, low-hanging-fruit lint rules have already been added to today's C/C++ compilers. But these rules can be fragile, depending on the level at which the static analysis occurs: source-level textual pattern matching, or use of an AST/parse tree.

            Possible problems within a function should be discoverable.

            This particular bug would be hard to discover for a typical linter unless they knew/remembered that there are two execution paths for cleanup of a given element.

        • TheAdamist 11 hours ago

          If not static analysis, what would AI tools be considered? They're operating off the same source code.

          Also, nice Onion reference by OP.

          • PlasmaPower 11 hours ago

            "static analysis" is usually deterministic rules you can e.g. put in CI. AI is also somewhat dynamic in that it can execute commands to try stuff out. The best AI vuln finding harnesses work that way, by essentially putting the AI inside of a fuzzer-like environment and telling it to produce a crash.

          • wizzwizz4 11 hours ago

            It's a reference to Xe Iaso's blog (e.g. https://xeiaso.net/shitposts/no-way-to-prevent-this/CVE-2025...), which is itself a reference to The Onion.

            • saghm 9 hours ago

              It's possible I had seen that blog post and not remembered! I was intending to reference the Onion though (and even googled to make sure I had the wording right), but seeing someone else make the same joke and forgetting is certainly something I would do.

        • canucker2016 9 hours ago

          Coverity scans several open source projects for free. see https://scan.coverity.com/faq and https://scan.coverity.com/projects

          see https://scan.coverity.com/projects/linux for the linux-specific scan results - you need to create an account to view the reported defects.

          This past couple of weeks isn't a good look for them, with the disclosures of defects found in Linux and Firefox.

        • emmelaich 8 hours ago

          Linus himself wrote a static analyzer. https://en.wikipedia.org/wiki/Sparse

          There are other free ones, I don't know if they're run as a matter of course.

        • ivan_gammel 11 hours ago

          Technically, the kernel team is sufficiently competent to design and build bespoke tools for themselves. It's probably a question of risk assessment and priorities.

    • ellieh 10 hours ago

      sure, but with unsafe Rust you have a very clear marking for the section of code that requires additional care and attention. it is also customary to include a "SAFETY" comment outlining why using unsafe is OK here

      • dvt 10 hours ago

        You actually kind of don't: I use like a zillion crates which have unsafe Rust in them, and it's not like I'm sitting here reading every single line of their code. I like Rust for various reasons, but its memory safety is (imo) overstated, especially when doing low-level stuff.

        • josephg 10 hours ago

          Almost all Rust (95%) is safe Rust. You can opt out of array bounds checks with unsafe { array.get_unchecked(idx) } instead of just typing array[idx]. But I can't remember the last time I saw anyone actually do that in the wild. It's not common practice, even in most low-level code.

          Rust is bounds checked by default. C is not. Defaults matter because, without a convincing reason, most people program in the default way.

    • amluto 11 hours ago

      But one would have to explicitly choose to use unsafe Rust for this instead of ordinary safe Rust. And safe Rust has no particular difficulty writing to slots in an array or slice or vector specified by their index.

      • skullone 11 hours ago

        except nearly everyone uses unsafe rust

        • josephg 10 hours ago

          No they really don't. 95% of rust is safe rust[1].

          Also unsafe rust doesn't remove bounds checks. arr[idx] is bounds checked in every context.

          You can opt out of array bounds checking by writing unsafe { arr.get_unchecked(idx) }. But that's incredibly rare in practice.

          [1] https://cs.stanford.edu/~aozdemir/blog/unsafe-rust-syntax/

          • overfeed 9 hours ago

            > 95% of rust is safe rust.

            Based on the raw number of assorted crates, which has no bearing on kernel code. The more relevant question is, can a performant, cross-architecture, kernel ring-buffer be written in safe Rust?

            • steveklabnik 7 hours ago

              Hubris, an embedded RTOS-like system used in production by Oxide, has ~4% unsafe code in the kernel, last I checked. There's a ring buffer implementation that has one unsafe, for unchecked indexing: https://github.com/oxidecomputer/hubris/blob/master/lib/ring... (this of course does not mean that it is the one ring buffer to rule them all, but it's to demonstrate that yes, it is at least possible to have one with minimum unsafe.)

              It’s always a way lower number than folks assume. Even in spaces that have higher than average usage.

            • josephg 8 hours ago

              I doubt it, but you can probably get pretty close.

              This is something a lot of people misunderstand about unsafe rust. The safe / unsafe distinction isn't at the crate level. You don't say "this entire module opts out of safety checks". Unsafe is a granular thing. The unsafe keyword doesn't turn off the borrow checker. It just lets you dereference pointers (and do a few other tricks).

              Systems code written in rust often has a few unsafe functions which interact with the actual hardware. But all the high level logic - which is usually most of the code by volume - can be written using safe, higher level abstractions.

              "Can all of io_uring be written in safe rust?" - probably not, no. But could you write the vast majority of io_uring in safe rust? Almost certainly. This bug is a great example. In this case, the problematic function was this one:

                  static void io_zcrx_return_niov_freelist(struct net_iov *niov)
                  {
                      struct io_zcrx_area *area = io_zcrx_iov_to_area(niov);
              
                      spin_lock_bh(&area->freelist_lock);
                      area->freelist[area->free_count++] = net_iov_idx(niov);
                      spin_unlock_bh(&area->freelist_lock);
                  }
              
              At a glance, this function absolutely could have been written in safe rust. And even if it was unsafe, array lookups in rust are still bounds checked.
        • saghm 9 hours ago

          "unsafe Rust" is not a binary; you don't opt into it for every single line of code. Given that the entire premise behind the idea that using C instead of Rust is fine is that people should be able to pay close attention and not make mistakes like this, having the number of places you need to look be a tiny fraction of the overall code that's explicitly marked as unsafe is a massive difference from C, where literally every line of the code could be hiding stuff like this.

        • Jtsummers 10 hours ago

          > except nearly everyone uses unsafe rust

          Really? Why? I've not used Rust outside of some fairly small efforts, but I've never found a reason to reach for unsafe. So why is "nearly everyone" else using it?

          • dvt 10 hours ago

            Let's say you want to call win32 (or Mac) OS functions, all of a sudden you're doing all kinds of wonky pointer stuff because that's how these operating systems have been architected. Doing unsafe stuff is pretty inevitable if you want to do anything non-hello-world-ish.

            • Jtsummers 10 hours ago

              > Doing unsafe stuff is pretty inevitable if you want to do anything non-hello-world-ish.

              So the vast majority of Rust projects involve writing at least one unsafe block? Is that really your claim?

              • greiskul 10 hours ago

                And even if you do end up writing an unsafe block, that should be a massive flag that the code in said block deserves extra comments on why it is safe, and extra unit tests verifying that it does not blow up.

                How do you know the unsafe operation is safe? What are the preconditions the code block has? Write it down, review it, test it.

                • saghm 9 hours ago

                  Exactly; I feel like a lot of people seem to misunderstand what Rust is trying to solve. It's fundamentally not trying to make unsafe code impossible; it's making the number of places you need to audit it a tiny fraction of your codebase compared to needing to audit the entirety of a C or C++ codebase. When I'm doing code reviews, you'd better believe I'm going to spend some extra time on any unsafe block I see to figure out if it's necessary and if so, if it's actually safe safe (with the default assumption for both of those being that they're not until I can convince myself otherwise).

                  • skydhash 9 hours ago

                    The thing is, you can actually write quite good C code (see the OpenBSD project). The power of C is that it's pragmatic. It lets you write code while taking full responsibility for being a responsible person. To err is human, but we've developed a set of practices to handle this (like making sure the gun is unloaded and the safety is on before storing it, to avoid putting holes in feet).

                    I like type checking and other compile-time checks, but sometimes they feel very ceremonial. And all of them are inference-based, so they still rely on the axioms being right and on the chain of rules not being broken somewhere. And in the end they are annotations, not the runtime algorithm.

                    • kibwen 6 hours ago

                      > To err is human

                      Yes, which is precisely why I write in Rust, because the compiler errs less than I do.

                      • skydhash 3 hours ago

                        It may, but it still requires careful annotations. So you should hope that you have not made an error there and described the wrong structure for the code.

            • dralley 10 hours ago

              A tiny fraction of programs need to use win32 or Mac OS functions beyond the standard library or other safe wrappers for said functions.

            • josephg 10 hours ago

              Making use of win32 functions doesn't turn off bounds checking in your rust code.

            • amluto 10 hours ago

              So what? Just because you used the keyword `unsafe` to call an unsafe API does not mean that you are going to use unsafe pointer access to write to a vector.

    • Rygian 11 hours ago

      That's not prevention. That's remediation.

  • slopinthebag 8 hours ago

    Surely nobody could create a better language in 50 years. Surely we can't fix these issues.

  • themafia 10 hours ago

    And you see a lot of other languages being used to create operating systems with complicated multiprocessor and locking semantics?

teo_zero 4 hours ago

> Affected: Linux 6.15 – 6.19 [...] Fix: commit 770594e (not yet in any stable branch at time of writing).

Is it considered good practice to publish a vulnerability not yet patched in any stable branch?

csmantle 9 hours ago

I first read this from the author's posting to oss-security. Turns out that the author did agree to revise the blog post for the "admin cap for root shell" part [^0]. [^1] would probably tell more.

The title looks like clickbait to me.

[^0]: https://www.openwall.com/lists/oss-security/2026/05/08/10

[^1]: https://www.openwall.com/lists/oss-security/2026/05/08/14

SubiculumCode 11 hours ago

Do most servers need this? Or can most of us 'sysctl -w kernel.io_uring_disabled=2'?
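For hosts that can answer "no", that knob can also be made persistent; a sketch of a sysctl.d fragment, assuming a kernel new enough (6.6+, also backported to some distro kernels) to have the kernel.io_uring_disabled sysctl (0 = enabled, 1 = disabled for unprivileged processes outside kernel.io_uring_group, 2 = disabled for everyone):

```shell
# /etc/sysctl.d/99-io_uring.conf
# 2 disables io_uring creation for all processes; use 1 to leave it
# available to privileged processes and the kernel.io_uring_group gid.
kernel.io_uring_disabled = 2
```

Apply with `sysctl --system` (or a reboot); applications that genuinely depend on io_uring will fall back or fail depending on how they probe for it.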

JoeDohn 9 hours ago

So this is another CVE? Or am I misreading this one? "Copy‑fail", "DirtyFrag", now "IUrinegOnYou :)"?

Joke aside, we'll see more CVEs in the coming months, and in a sense that's good: it leaves less maneuvering room for bad actors (especially those selling them to the highest bidder).

ctoth 10 hours ago

If this many are public right now, what does that say about the dark matter of private ones? What's the typical public-private rate for this sort of thing/can someone help me calibrate my base rate expectations?

baq 13 hours ago

What’s our prior for p(doom) today…?

himata4113 8 hours ago

High-privilege access required (CAP_NET_ADMIN / CAP_SYS_ADMIN); containers/sandboxing win once again.

Can we make sandboxing the new default now? Flatpak does a good job, but we're still pretty far from that for apt/yum/pacman-installed packages. AppArmor was a decent step forward, but clearly not enough.

  • pjmlp 2 hours ago

    Yes on Android, iDevices, macOS, Windows (UWP, Win32 boxing), Qubes OS, but it remains a controversial topic in GNU/Linux land.

rvz 14 hours ago

Another one.

Linux is falling apart faster than it can assign these CVEs.

  • hn92726819 13 hours ago

    Falling apart? You mean getting stronger? Every single one of these is an existing hole being patched. It isn't making new holes

    • tgv 2 hours ago

      As other people said in this thread: so many devices won't be patched. And that can easily lead to users and manufacturers moving away from Linux. Linux is in a glass house.

    • Gigachad 10 hours ago

      Government agencies probably already have half of these exploits in their private toolbox for years now. Finding and patching them is good, but there probably needs to be some systematic change to prevent them rather than just patching bugs when they get found.

    • pjmlp 2 hours ago

      I remember when people used to joke about Windows security, saying something like that would never happen on Linux. Well...

  • gordonhart 13 hours ago

    Linux is "falling apart" because it's the highest-profile open source project people can point LLM agents at to find CVEs. It'll come out the other end of this hardened by all of the attention it's getting, but the next few months/years will be... bumpy.

  • maven29 13 hours ago

    perhaps this will lead to better AppArmor and SELinux defaults?

    • ChocolateGod 13 hours ago

      People will just turn SELinux off rather than have to go through the horrible tooling when it breaks a regular use case.

      • pjmlp 2 hours ago

        It is enabled by default on Android, and only developers can change it temporarily via an ADB session.

      • yjftsjthsd-h 12 hours ago

        I do think SELinux is a good example of how robust software with poor UX/DX gets undermined by that poor UX/DX. Although I do wonder if AI can help with it?

        • pjmlp 2 hours ago

          There is also the Android way; this is how it goes: fix your apps.

  • EGreg 13 hours ago

    How's BSD doing? How about Amazon Linux?

    • yjftsjthsd-h 12 hours ago

      Amazon Linux is a Linux distro? Though, yes, I would like to know how the BSDs are doing.

    • toast0 12 hours ago

      FreeBSD is getting piles of security updates lately too. Not sure about the other BSDs.

    • cachius 13 hours ago

      And Windows?

      • mschuster91 12 hours ago

        Pray to God no one ever lets an AI agent run loose on the various leaked Windows source code dumps.

        Given Windows' absurd amount of backwards compatibility, chances are pretty high that there are a lot of sleeping dragons buried inside even the modern Windows 10/11 kernel and userland, dating back to code and issues from the 90s: code where half the people who worked on it have probably not just departed Microsoft but departed the living in the meantime.

        • pjmlp an hour ago

          While true, since MinWin and OneCore most of that code has been moved around.

          Also, contrary to Linux, Windows 11 (optional on W10) uses sandboxing for kernel and drivers.

          Windows has kept getting mitigations since XP SP2, and Microsoft has security teams whose day job is to attack Windows.

          They have also been promoting Copilot for C and C++ code review for some time now.

          While it won't stop all attacks, it is better than the whole "UNIX is safer than Windows" attitude from the 90's; it turns out it is a matter of how much money is put into it.

          If you want really safe above anything else, look into Qubes OS with its sandboxing over everything, or mainframe systems like Unisys ClearPath MCP, with NEWP as the systems language, and managed environments.
