Linux kernel use-after-free in Netfilter, local privilege escalation

seclists.org

288 points by kuizu 3 years ago · 106 comments

l33tman 3 years ago

"We developed an exploit that allows unprivileged local users to start a root shell by abusing the above issue. That exploit was shared privately with <security () kernel org> to assist with fix development. Somebody from the Linux kernel team then emailed the proposed fix to <linux-distros () vs openwall org> and that email also included a link to download our description of exploitation techniques and our exploit source code.

Therefore, according to the linux-distros list policy, the exploit must be published within 7 days from this advisory. In order to comply with that policy, I intend to publish both the description of exploitation techniques and also the exploit source code on Monday 15th by email to this list."

Interesting.. they didn't write what conditions have to be met for it to be exploitable. Also interesting that someone screwed up and accidentally forwarded an email including the exploit to a broad mailing list...

Part of the nf modules are active if you have iptables, which you have if you run ufw (for example), so it's a pretty broad exploit if that's all that's required. But the specific module in question in the patch, nf_tables, is not loaded on my Ubuntu 20.04 LTS 5.4.0 kernel running iptables/ufw, at least.

  • pizzalife 3 years ago

    > but the specific module in question in the patch, nf_tables, is not loaded on my Ubuntu 20.04LTS 5.40 kernel running iptables/ufw at least

    This doesn't matter since Linux has autoloading of most network modules, and you can cause the modules to be loaded on Ubuntu since it supports unprivileged user/net namespaces.

      ubuntu:~% grep DISTRIB_DESCRIPTION /etc/lsb-release
      DISTRIB_DESCRIPTION="Ubuntu 22.04.2 LTS"
      ubuntu:~% lsmod|grep nf_table
      ubuntu:~% unshare -U -m -n -r
      ubuntu:~% nft add table inet filter
      ubuntu:~% lsmod|grep nf_table
      nf_tables             249856  0
    • TacticalCoder 3 years ago

      For comparison, on my Debian Bookworm (aka "testing", but in hard freeze, with full freeze in a few days I think and stable release in June) here...

          ...$  lsmod|grep nf_table    (tried without any just to make sure) 
          ...$  unshare -U -m -n -r
          unshare: unshare: failed: Operation not permitted
          ...$  /sbin/nft add table inet filter
          Error: Could not process rule: Operation not permitted
          add table inet filter
          ^^^^^^^^^^^^^^^^^^^^^^
      
          root #  cat /proc/sys/kernel/unprivileged_userns_clone
          0
    • veonik 3 years ago

      Yikes... are other popular distros shipping with unprivileged user namespaces enabled by default?

      • marcthe12 3 years ago

        Most, I think. Debian has a patch so it can be disabled at runtime via sysctl. The reason is that most container or sandboxing techniques are root-only unless you mix them with user namespaces. So most container or sandbox software uses suid (Firejail), a root daemon (Docker), or user namespaces (Podman and Flatpak). Looking at the CVEs, user namespaces are probably the safer option.

      • galangalalgol 3 years ago

        That is part of enabling rootless containers on rhel or similar.

        • waynesonfire 3 years ago

          should have re-written it in rust.

          • yjftsjthsd-h 3 years ago

            Rewritten what? The container runtime will need the same access regardless of what it's written in, and rewriting all of Linux (the kernel) would be... ambitious, although it is adopting rust incrementally.

      • failsecure 3 years ago

        Yes and this decision haunts distros like Ubuntu over and over again. There's no easy win though.

      • touisteur 3 years ago

        Do you need a user namespace? I'd expect a network namespace to be enough. Am I missing something?

        Edit: should've read better, this seems to need CLONE_NEWUSER.

  • jstanley 3 years ago

    > Somebody from the Linux kernel team then emailed the proposed fix to <linux-distros () vs openwall org> and that email also included a link to download our description of exploitation techniques and our exploit source code.

    > Therefore, according to the linux-distros list policy, the exploit must be published within 7 days from this advisory. In order to comply with that policy, [...]

    What? Someone publishes information about your vuln to a random mailing list, and this somehow creates an obligation on you to follow that mailing list's policies? I don't get it.

  • thelastparadise 3 years ago

    I don't think the bug itself is newsworthy. The existence of the exploit code, and the way that it was accidentally published, I think are.

    • pizzalife 3 years ago

      It's exploitable by an unprivileged user on the most popular distro out there (Ubuntu). I would say it's newsworthy.

  • hsbauauvhabzb 3 years ago

    What’s actually reasonable here? I’m all for exploit code becoming public eventually, but I think it’s silly to drop it immediately after a fix has been released, or before, in almost all scenarios (unless there have been 90+ days or the issue is marked as wontfix).

    • sp332 3 years ago

      Odds are that well-resourced attackers already have the exploit by now. Making it public lets users decide if this is important to them and come up with their own mitigations.

      • j_walter 3 years ago

        Once they issue the patch...it's only a matter of time till a good chunk of reasonably decent coders can develop the exploit. Once the premise is released...yeah the top exploit coders will have this in a few hours.

        • hsbauauvhabzb 3 years ago

          So we lower the bar to all adversaries with no benefit?

          If you can read exploit code to determine if patching is worth it for your use case, you can probably also read diffs for the same outcome.

          I’m not saying don’t release them, but releasing them with short notice seems irresponsible, without much benefit to defenders.

      • ikiris 3 years ago

        The link to the exploit accidentally went public. Anyone can have it.

  • candiddevmike 3 years ago

    What a dumb policy. Why have the disclosure time be so soon? This thing will be in the wild before folks can upgrade if I'm understanding this correctly.

    • krastanov 3 years ago

      The thing is already in the wild because someone on the private mailing list already accidentally mailed it to the public mailing list.

    • chasil 3 years ago

      You have a few options for dealing with problems like this.

      You can "apt update; apt upgrade" then reboot when a new kernel is available.

      Oracle has also offered Ksplice for free on Ubuntu for many years, and I'm sure that patch will be available promptly.

      https://ksplice.oracle.com/try/desktop

      Otherwise, Kernelcare is available for a fee. I think Canonical also has paid kernel patches.

      • withinboredom 3 years ago

        There is Ubuntu Pro which is free for up to five servers/desktops, after that, it requires a paid subscription.

explorer83 3 years ago

Based on this 11-month-old discussion, this has been an exploit vector for some time - https://groups.google.com/g/linux.debian.bugs.dist/c/ZF9rWY3...

"I vaguely recall at least around 6-7 such holes, and a quick google search seems to reveal that at least those would have been mitigated by unprivileged user namespaces being disabled: CVE-2019-18198 CVE-2020-14386 CVE-2022-0185 CVE-2022-24122 CVE-2022-25636 CVE-2022-1966 resp. CVE-2022-32250"

snvzz 3 years ago

There's easily thousands of such bugs hidden in the kernel.

Reminder: the kernel has over ten million lines of code, i.e. megabytes of object code.

Perhaps we should start thinking about whether it is a good idea to run something this large in supervisor mode, with full privileges.

I wouldn't say it is sensible in a world where seL4 exists.

  • akvadrako 3 years ago

    It won't make that big of a difference. If you exploit the networking layer you could intercept any local traffic, which will mostly be unencrypted, and communicate with outside attackers. You are probably owned by that point unless you treated localhost as untrusted.

    It's the same reason it doesn't matter whether you are running as root or not: the user account has access to what's important, like a database or keychain.

  • 0cf8612b2e1e 3 years ago

    A microkernel does seem like the only sensible path forward. Even if the kernel is slowly rustified, we're going to be playing security whack-a-mole for a long time.

    • anonymousiam 3 years ago

      Back in the day when the micro-kernel/monolith flamewars were raging, the arguments for monolith were about improved performance and lower memory usage. I haven't seen much discussion on this topic for years, but at least those two arguments have not aged well.

      • pjmlp 3 years ago

        Mostly because the cloud is based on microkernel like approach regardless of the kernel.

        Hypervisors, userspace drivers, containers, language runtime sandboxes, bytecode deployments, driver and kernel sandboxes (safe kernel / driver guard),container only distributions,...

      • aflag 3 years ago

        Why not? It isn't clear to me why monolithic kernel wouldn't still have better performance.

        • pjmlp 3 years ago

          It doesn't matter with layers hypervisors, virtualization, containers and sandboxes running on top.

          All mitigations to achieve microkernel like capabilities.

          • aflag 3 years ago

            Hm, if you're making the underlying hardware slower, don't you want the kernel to be even faster though?

            VMs are much more than micro kernels. It's about allowing the user to install whatever they want in their machine. Containers are just a userland abstraction. Not sure where the link to microkernels is there.

            • pjmlp 3 years ago

              Not when using hypervisor type 1.

              • aflag 3 years ago

                Why not? Hypervisor type 1 has less overhead, but it's still not quite the same as running directly on the box. I don't think micro kernels would replace those anyway. To be honest, I don't even really see the connection between running most of the kernel in user space and allowing concurrent systems to run in the same hardware.

                • snvzz 3 years ago

                  seL4 with its VMM is a better hypervisor architecture than, say, Xen.

                  Xen is unfortunately large, and the full hypervisor runs privileged.

                  With seL4, VM exceptions are forwarded to VMM, which handles them.

                  From a security standpoint, a VM escape would only yield VMM privileges, which are no higher than that of the VM itself. This is much better than a compromise of Xen, which would compromise all VMs in the system.

                  Makatea[0] is an effort to build a Qubes-like system using seL4 and its virtualization support. It is currently funded by a NLNet grant.

                  0. https://trustworthy.systems/projects/TS/makatea

      • mananaysiempre 3 years ago

        Spectre and friends seem to have killed Liedtke’s fast synchronous IPC, unfortunately. Of course, there’s still asynchronous IPC, exokernels (perhaps the closest thing to today’s containers), and so on.

    • CorbetL 3 years ago

      Linux may eventually become a microkernel with most IPC done via io_uring, but it may take 20 years to reach this state.

      • touisteur 3 years ago

        Right now it seems microVMs are the way: build an extremely minimal, tailored kernel+userland for network-facing components. If you don't have nf_tables built in (and it's not loadable because it's not present), this vulnerability isn't a problem. I mean, right now to use it one would have to chain it with an RCE on your userland app (or on the kernel, but then just skip the nf_tables step...). Then one would have to escape the VM, and if you're using Firecracker or crosvm, break seccomp too. Still imaginable, but by then I guess the next kernel (or userland app) fix release is already available :-) and you're already rebooting your microVM.

        If you can CI/CD in minutes a reduced kernel+app and reboot in 100ms your network-facing thing (be it nginx or haproxy) you might just take latest vanilla anyway...

      • ttarr 3 years ago

        Care to elaborate plz?

        How would we go about GPUs, NCs, and many kinds of drivers?

    • galangalalgol 3 years ago

      For rack servers you could probably get away with a number of microkernel OSes today. Desktop has clear options in that regard, but you are giving up open source.

  • userbinator 3 years ago

    Alternatively, perhaps we should start thinking about whether it is a good idea to have multiple users of different privilege sharing the same hardware.

    • gizmo686 3 years ago

      "User" in a modern Linux system is just a weird name for "security domain". Many programs run as their own user to limit their ability to attack the rest of the system if they get compromised; and limit the ability of a different compromised component from attacking them.

      My desktop, on which I am the only person with an account, has 49 "users", of which 11 are actively running a process.

      At work, every daemon we run has a dedicated user.

      On android, every app runs as its own user.

    • thfuran 3 years ago

      What's the alternative, just running all code at ring 0?

    • travis729 3 years ago

      I’ve been thinking this recently as well.

  • saagarjha 3 years ago

    Who's going to make seL4 perform comparably to Linux?

    • snvzz 3 years ago

      Why would we need to slow seL4 down?

      • saagarjha 3 years ago

        I'm not really in the mood for trolling.

        • snvzz 3 years ago

          The name-calling is uncalled for.

          To elaborate, seL4 claims to be the fastest kernel around[0], a claim that remains unchallenged.

          To put it into context, the difference in IPC speed is such that you'd need an order of magnitude more IPC for a multiserver system based on seL4 to actually be slower than Linux.

          A multiserver design would imply increased IPC use, but not an order of magnitude.

          0. https://trustworthy.systems/projects/seL4/

          • camgunz 3 years ago

            Sorry I'm pretty naive to this space. I didn't immediately see any performance info on that page save for this paper [0] which shows seL4 competitive with NetBSD, but far from Linux. Is there something else I should look at?

            [0]: https://trustworthy.systems/publications/full_text/Elphinsto...

          • b112 3 years ago

            The name-calling is uncalled for.

            From an observer on the sidelines: there was no namecalling.

            He said you trolled, not that you are a troll. The distinction is important.

            Even the best of us troll, sometimes. (Not claiming you did btw, just that there was no name calling.)

          • saagarjha 3 years ago

            No, it doesn’t. Here’s the full quote from their website:

            > seL4 is the world’s fastest operating system kernel designed for security and safety

            Linux is arguably not designed for security and safety but it blows seL4 out of the water when it comes to performance. There’s a reason it only gets used in contexts where security is critical; I would have expected that you would be aware of this considering you were the one who is promoting it.

            • snvzz 3 years ago

              >but it blows seL4 out of the water when it comes to performance.

              Citation needed.

              And by that I mean actual benchmarks of Linux doing the few tasks seL4 does, such as IPC or context switching, faster than seL4.

              • saagarjha 3 years ago

                No, you don’t get to define the benchmarks like that. People use an OS so they can run real-world programs on top of it, not spin it in a loop and see how fast it can do IPC. In a monolithic kernel there’s no need to switch contexts for many things; that’s the entire point of using one. I’m sure that seL4 has a perfectly fast implementation of those operations but that’s because it sits and does those all day as part of its basic functionality. Optimizing overhead doesn’t win you extra points when you’re comparing against an OS that doesn’t have it all.

                • snvzz 3 years ago

                  seL4 is an order of magnitude faster at this "overhead" thing. We're talking nanoseconds vs microseconds difference.

                  The multiserver architecture does indeed imply an elevated use of IPC, but it does in no way outweigh the difference in IPC cost.

                  In this model, data sharing, and the implied locking, is minimized, which as a consequence helps SMP scaling.

                  Dragonfly, while not multiserver proper, took a different direction than Freebsd and Linux by optimizing IPC and not implementing fine-grained locks, and instead favoring concurrent lockless and lockfree servers.

                  As a consequence, Dragonfly scales much better than Freebsd, and in many benchmarks manages to outperform Linux.

                  This is despite the tiny development team, particularly so when considered relative to the amount of funding these two systems get.

                  I am sickened by the effort that's being wasted on a model that we know is bad and does not work. Linux will never be high assurance, secure or scale past a certain point.

                  Fortunately, no matter how long it'll take, the better technology will win; there's no "performance hack" that a bad system can pull to catch up with the better technology once it's there.

                  Just a matter of time.

          • arp242 3 years ago

            > To elaborate, seL4 claims to be the fastest kernel around[0], a claim that remains unchallenged.

            Can I run Firefox or PostgreSQL on seL4? Or another real-world program of comparable complexity? And how does the performance of that compare to Linux or BSD?

            That's really the only benchmark that matters; it's not hard to be fast if your kernel is simple, but simple is often also less useful. Terry Davis claimed TempleOS was faster than Linux, and in some ways he was right too. But TempleOS is also much more limited than Linux and, in the end, not all that useful – even Terry ran it inside a VM.

            I've heard these sorts of claims about seL4 before, and I've tried to look up more detailed information about it, and I've never really found anything convincing on the topic beyond "TempleOS can do loads more context switches than Linux!" type stuff.

  • phendrenad2 3 years ago

    Actually over 30 million LOC

  • unixhero 3 years ago

    Well it is the kernel.

  • Gigachad 3 years ago

    We really need to be moving faster on migrating Linux to a safer language which prevents these kinds of issues.

knorker 3 years ago

> delete an existing nft rule that uses an nft anonymous set. And an example of the latter operation is an attempt to delete an element from that nft anonymous set after the set gets deleted

I'd be very interested to hear how this can be done by an unprivileged user.

Try to race set add/removals, sure, but if it depends on the set itself getting deleted, that seems… harder.

  • 0x006A 3 years ago

    on https://bugzilla.redhat.com/show_bug.cgi?id=2196105 a comment suggests that it might only be possible if you have "unprivileged user namespaces" enabled

    • pizzalife 3 years ago

      >a comment suggests that it might only be possible if you have "unprivileged user namespaces" enabled

      Which is the default on Ubuntu.

      • chlorion 3 years ago

        It's the default on pretty much any modern Linux system!

        • klooney 3 years ago

          From 2016- https://lwn.net/Articles/673597/

          Andy Lutomirski described some concerns of his own:

          > I consider the ability to use CLONE_NEWUSER to acquire CAP_NET_ADMIN over /any/ network namespace and to thus access the network configuration API to be a huge risk. For example, unprivileged users can program iptables. I'll eat my hat if there are no privilege escalations in there.

          • withinboredom 3 years ago

            I hope he hasn't been eating his hat all these years. I hear that isn't good for the digestive system... /s

AdamJacobMuller 3 years ago

https://nvd.nist.gov/vuln/detail/CVE-2023-32233

The NIST CVE page points back here. Funny.

Nothing I see so far specifically says how far back this goes, but, https://security-tracker.debian.org/tracker/CVE-2023-32233

Seems to go back really far.

moring 3 years ago

Honest question: Why did they build an exploit that uses the bug? I always assumed that use-after-free is equivalent to "game over" (i.e. I assumed that local privilege escalation is a given) and it is clear that such a bug must be fixed.

By that I mean, it might be easy or hard to exploit a bug to achieve LPE, but it seems to be redundant to prove that it is possible.

  • mort96 3 years ago

    Making a PoC is a great way to convince both yourself and the maintainers that the bug is actually exploitable in the wild and thus a big fucking deal. Alternatively, you might discover that there are some other things going on which turns out to make the bug unexploitable.

    • moring 3 years ago

      Let me rephrase my question: Is there actually such a thing as an "unexploitable use-after-free"? How would that look like? How would you reason that it is actually unexploitable?

      Context: My experience with C programming is that practically every bug that is related to memory management tends to blow up right into your face, at the most inconvenient time possible.

      • mort96 3 years ago

        Here's a stupid example:

            struct foo *whatever = new_foo();
            // use 'whatever'
            free_foo(whatever);
            if (whatever->did_something) {
                log_message("The whatever did something.");
            }
            // never use 'whatever' after this point
        
        The 'whatever' variable is used after what it points to is freed, but it's not exploitable. Worst case, if new memory gets allocated in its place and an attacker controls the data in the offset of the 'did_something' field, the attacker can control whether we log a message or not, which isn't a security vulnerability.
        • moring 3 years ago

          What happens if the code gets pre-empted between free_foo(whatever) and the if-statement, memory allocation gets changed, and subsequently dereferencing the pointer to read whatever->did_something causes a page fault?

          I am making assumptions here: That pre-emption is possible (at least some interrupts are enabled), that "whatever" points to virtual memory (some architectures have non-mappable physical memory pointers), and that a page fault at this point is actually harmful.

          However I do want to point out that the reasoning why your example is not exploitable isn't as easy as it first seems.

          • mort96 3 years ago

            No preemption is needed; the call to free might unmap the page the pointer points to. I was considering adding a paragraph about that but didn't bother. A page fault isn't a privilege escalation issue though, it's a pretty normal thing.

      • friendzis 3 years ago

        > How would that look like? How would you reason that it is actually unexploitable?

        For use-after-free to be exploitable, by definition an attacker must be able to put arbitrary content at the memory region. This is not always easy: may require certain [mis]configuration, data layout and so on.

        > practically every bug that is related to memory management tends to blow up right into your face, at the most inconvenient time possible.

        I will not contest this claim, however there is a difference between "blow up" and "exploit". Malicious packet being able to segfault a server is one thing, malicious packet resulting in RCE is quite another. This may be a lost in translation moment when under colloquial use "exploit" does not include DoS.

igo95862 3 years ago

I am developing a sandbox project for Linux desktop applications called bubblejail:

https://github.com/igo95862/bubblejail

In the next not yet released version 0.8.0 there will be a new option to disable a specific namespace type per sandbox. For example, disabling the network namespace would prevent this exploit.

This is more flexible than globally disabling all user namespaces, as some programs use other, more harmless namespace types; Steam, for example, uses mount namespaces to set up runtime libraries.

alex14fr 3 years ago

Glad to have stuck with the good old iptables and left CONFIG_NF_TABLES unset in the kernel configuration.

  • sam_lowry_ 3 years ago

    Aren't iptables just an emulation layer on top of netfilter?

    • failsecure 3 years ago

      For modern distros, the nft package includes an alternative binary that takes the place of /sbin/iptables and translates the input to an nft compatible format. As far as the kernel is concerned, iptables is still iptables. Old iptables can be accessed by calling the iptables-legacy binary which will auto load the old iptables ko.

    • TechBro8615 3 years ago

      Yes, AFAIU (not an expert), iptables and nftables are two command line tools and abstractions (chains vs. tables) for interacting with the same underlying netfilter API.

      • nubinetwork 3 years ago

        I believe at one time they were two separate subsystems, but they got merged in 4.x or 5.x

        • alex14fr 3 years ago

          I run 6.3 and the offending files were not compiled into my kernel thanks to CONFIG_NF_TABLES=n during make config.

    • eikenberry 3 years ago

      Probably depends on the distro. Iptables is a wrapper around nftables in most distros, but probably not all.

      • smashed 3 years ago

        You can check with: iptables -V

        If it says (nf_tables), you are using the compatibility layer from the iptables-nft package.

        It works quite well. Apps like Docker that insert rules using the legacy iptables syntax are oblivious to the fact that they are actually inserting nftables rules.

        It also provides an easy migration path. Insert your old rules using your iptables script then list them in the new syntax using nft list ruleset.

        The problem is that it works so well that it seems most users just stayed with the iptables syntax and did not bother migrating at all.

        • ahartmetz 3 years ago

          IMO, the problem is that the people who created nftables (and the "ip" tool) couldn't create a user interface that anyone but themselves would like to use. Linux traffic shaping functionality suffers from the same "obscure word soup" interface.

          • smashed 3 years ago

            I agree for the "ip" tool (from iproute2)... I got used to it, but I still prefer the ifconfig output: it is somehow consistent and you can get used to it.

            I somehow got accustomed to the nftables rules format. It is in fact objectively much better than the iptables format in many ways. The native JSON, easy bulk submit to the kernel, built-in sets and maps (the source of the currently discussed CVE though). It really does fix a lot of what was wrong with iptables.

            But iptables was probably not broken enough for most users to warrant re-learning everything.

            Now, the traffic shaping tool, oof.. I still cannot grok any of it. I've been happy with the fireqos script so far to abstract everything out of the tc syntax.

fnordpiglet 3 years ago

Rust needs to be more prominent in the kernel, and where not Rust, eBPF. The days of hand-mangling pointer arithmetic need to end.

  • eklitzke 3 years ago

    The patch doesn't fix anything with pointer arithmetic: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...

    • wtallis 3 years ago

      I wouldn't generally expect a use-after-free to result from improper pointer arithmetic; that's the recipe for a buffer overflow. But Rust happens to also be well-known for helping manage object lifetimes, which seems to be what went wrong here.

      • eklitzke 3 years ago

        I'm not sure if your claim here is correct. The patch is to change call sites like

          priv->set->use++;
        
        To look like:

          nf_tables_activate_set(ctx, priv->set);
        
        Where this function is defined as:

          void nf_tables_activate_set(const struct nft_ctx *ctx, struct nft_set *set) {
            if (nft_set_is_anonymous(set))
              nft_clear(ctx->net, set);
            set->use++;
          }
        
        So to me (someone who is not an expert in this code) it looks like the fix is checking whether the set has the anonymous flag before changing the reference count. I could be mistaken, but I think your claim that this would have been prevented by Rust's object lifetime checking requires better evidence.
        • wtallis 3 years ago

          I think a Rust-influenced design would have shied away from the manual direct reference count management in the first place and resulted in a fairly different-looking API; but at a minimum I'd expect that the safe wrapper `nf_tables_activate_set` would probably have existed from the beginning, and may have been designed to transfer ownership of the `nft_set` rather than just capture a reference to it.

          More generally: doing a line-by-line translation from C to Rust is never going to be the best way to make use of the capabilities Rust has that C lacks.

        • comex 3 years ago

          One of the parts of Rust’s safety story is to always use smart pointers for reference counting rather than the type of ad-hoc manual reference count management seen in the code you quoted. Combined with lifetime checking, it makes it impossible for some random logic error to cause a use-after-free.

          • max_k 3 years ago

            > to always use smart pointers for reference counting

            Agree - and the Linux kernel is extremely fragile because it is full of ad-hoc manual code like that.

            Unfortunately, Rust won't be the rescue, because (in the foreseeable future) Rust will only be available in leaf code due to the many hard problems of transitioning from fragile C APIs to something better. Writing drivers in Rust is useful, but limits the scope of how Rust helps.

            Many of Rust's advantages at a tiny fraction of the effort could be had easily with a smooth transition path by switching the compiler from C to C++ mode. The fruit hangs so low, it nearly touches the ground, but a silly Linus rejects C++ for the wrong reasons ("to keep the C++ programmers out", wtf).

            Every time I work on the Linux kernel source, I'm horrified by how much pain the kernel developers inflict on themselves. Even with C, it would be possible to enforce a mandatory coding style that is less fragile.

            For example, in the aftermath of the Dirty Pipe vulnerability last year, I submitted a patch to make the code less fragile, a coding style that would have prevented the vulnerability: https://lore.kernel.org/lkml/20220225185431.2617232-4-max.ke... - but my patch went nowhere.

            • comex 3 years ago

              We’ll see. As far as I know, the biggest blocker to using Rust outside of drivers is the fact that LLVM lacks support for some architectures Linux supports. And rustc_codegen_gcc seems on track to fix that eventually; even if it takes years more, that’s not much time on the scale of Linux’s development history.

              • max_k 3 years ago

                That wouldn't solve the hard problems I meant. Rust portability is an easy problem - it's clear how to port Rust to more architectures, just nobody has done it. But doing interop between Rust and C in both directions, with complicated things like RCU in between - that is a hard and complex problem.

          • mtlmtlmtlmtl 3 years ago

            Correction: it is impossible in safe Rust that only ever calls safe Rust. The moment you're calling unsafe Rust, the possibility returns.

            Not saying Rust isn't an improvement, it's a huge improvement over C, but there's no reason to oversell it. Rust is not going to make these errors magically go away, at least not in a kernel, even if you wrote the kernel from scratch, all in Rust. Unless you managed to write all of it in safe Rust which... good luck with that.
