Why GNU su does not support the `wheel' group (2002)
ftp.gnu.org
It shows how old that is, and how things have changed.
Back in the day, it was about multiple OS users on one big machine, maintained by a university or a corporation.
Now I'm the only human user of my several machines. I have more than one interactive user account on some of them. I put these accounts in the wheel group, to avoid ever using a root password. (Void Linux has it pre-configured in /etc/sudoers.)
Yes, Unix was designed to protect users from each other but the modern need is to protect applications/invocations from each other. It is unfortunate that Unix wasn't really designed for the modern use case.
Basically https://xkcd.com/1200/
The irony behind it is that one could argue that we are using UNIX wrong, because technically each program should run as its own user with its own groups. Which is what apparmor and firejail/sandboxes kind of want to embrace, but in practice people just care too little.
It only sounds like "irony" if you don't understand the problem.
The problem is not isolation or the lack of it. The problem is that apps require a complex set of permissions for both the user's files and other apps.
An app might want to send a notification to the notification daemon. But an app should not be able to pretend to be another app, whether by name or icon. And good luck trying to stop a malicious app from just making a similar-enough icon and spelling Firefox with some fancy UTF characters to get around it.
And that's a pretty simple case! And it's already very hard to solve at the kernel/OS level. Now look at files.
You might want to allow a graphical editor to open any graphical file, regardless of location.
You might want to allow that same editor to only edit some of them.
But for a browser, you might want to allow saving new files, but not editing/overwriting existing ones, because it is not an editor and has no business editing the files.
Or allow a browser tab on a certain URL (say, a web image editor) to modify the files, but not the image-sharing webpage that only needs to read the file.
Now not only do we have insanely granular permissions per app; the different actions from one "app" (a web browser is basically a container for multiple applications at that point) also need different permissions.
It has nothing to do with "unix bad" or "unix wrong"; actually separating applications without hardships on the user (like fucking with permissions every time one app needs to touch the files of another app) is just very, very hard.
> You might want to allow a graphical editor to open any graphical file, regardless of location.
More likely, you want to temporarily give them permission to specific files you indicate. A graphical editor has no reason to read any file that the user didn’t explicitly pick for editing/viewing.
That’s how Mac OS works nowadays (possibly except for the ‘temporarily’; I don’t know the details): applications can only open files that the user selected in the system file open dialog. That runs in a separate process and opens up an app’s sandbox to allow access to the file the user selected.
That limits your application though. It means you have to use the system file picker. For many apps that might be fine. But it means you can't have something like vim or emacs where you open files with a command. Or have an option that does something like open a sibling .h file when you are editing a .c file. Or search up the directory to find the applicable .editorconfig file.
So why does it work for Mac, Android, and iOS?
In fact, it doesn't work. Both Android and macOS apps will commonly ask for "full filesystem access" permissions for this exact purpose, which sort of defeats the point (for those apps at least). I don't use iOS enough to speak to how this is handled there, but the few times I've had to wrangle some files on there it made me want to smash my head into the wall.
Well, the examples given don't, generally speaking. For stuff like compiling you can do things like have the permission apply to an entire folder, though.
You can do this on basically any modern unix by passing file descriptors over a unix socket: the “graphical editor” server would launch as a user that can’t access anything except a socket and then users would open files by pushing an open fd to the editor over its socket.
This sounds interesting but I don’t understand what the underlying mechanism is. For me, a file descriptor is just an int corresponding to something I can read from and write to and a socket just carries bytes. I don’t understand how an FD can be sent over a socket or, if it can, how that’s anything more than just sending an int?
It's a special API that tells the kernel to duplicate the FD and give it to a different process.
https://linux.die.net/man/7/unix
There are a few interesting uses. For example, if you want to restart a network server, the old process can send its open, listening socket to the new process and thus achieve seamless switchover.

> SCM_RIGHTS: Send or receive a set of open file descriptors from another process. The data portion contains an integer array of the file descriptors. The passed file descriptors behave as though they have been created with dup(2).

Another nifty thing with UNIX sockets is that you can just... read which user sent the message, and since it is the kernel adding that metadata, you're 100% sure it came from that user. That's, for example, how you can set up PostgreSQL so that a certain user on the system can log in as themselves without having a password.
https://en.wikipedia.org/wiki/Unix_domain_socket
In addition to sending data, processes may send file descriptors across a Unix domain socket connection using the sendmsg() and recvmsg() system calls. This allows the sending processes to grant the receiving process access to a file descriptor for which the receiving process otherwise does not have access.[2][3] This can be used to implement a rudimentary form of capability-based security.
Ancillary data: https://linux.die.net/man/3/cmsg
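To make the mechanism concrete, here is a minimal C sketch of passing one open file descriptor over a connected AF_UNIX socket with SCM_RIGHTS (error handling trimmed, helper names made up for illustration):

  #include <string.h>
  #include <sys/socket.h>
  #include <sys/uio.h>

  /* Sender: hand one open fd to the peer. At least one byte of normal data must go along. */
  int send_fd(int sock, int fd) {
      char byte = 'F';
      struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
      union { char buf[CMSG_SPACE(sizeof(int))]; struct cmsghdr align; } u;
      struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                            .msg_control = u.buf, .msg_controllen = sizeof u.buf };
      struct cmsghdr *c = CMSG_FIRSTHDR(&msg);
      c->cmsg_level = SOL_SOCKET;
      c->cmsg_type  = SCM_RIGHTS;            /* "the payload is file descriptors" */
      c->cmsg_len   = CMSG_LEN(sizeof(int));
      memcpy(CMSG_DATA(c), &fd, sizeof(int));
      return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
  }

  /* Receiver: the fd arrives as if it had been dup(2)'d into this process. */
  int recv_fd(int sock) {
      char byte;
      struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
      union { char buf[CMSG_SPACE(sizeof(int))]; struct cmsghdr align; } u;
      struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                            .msg_control = u.buf, .msg_controllen = sizeof u.buf };
      if (recvmsg(sock, &msg, 0) < 0) return -1;
      int fd;
      memcpy(&fd, CMSG_DATA(CMSG_FIRSTHDR(&msg)), sizeof(int));
      return fd;
  }

The receiving "editor" never needs permission on the original path; it only ever sees the already-open descriptor that the user's side chose to hand over.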
I’m not exactly sure of the terminology, but there’s an opaque object corresponding to the int that can be passed between processes via unix sockets. I believe nginx and other web servers do this to transfer open connections to the new server process on restart without interruption.
You can express most of this using the existing capabilities in Linux; the issue is that the interfaces you use to do stuff need to change in order to actually make it usable (as opposed to being instantly disabled as soon as it becomes a problem, like apparmor).
While true and actually pretty cool, a comment like this is a pretty good explanation of why we haven’t had widespread adoption of Linux on the desktop. I can imagine the users’ eyes glazing over.
I wouldn’t tell a user this, but developers of the desktop environments and distributions are leaving a lot of the design space unexplored.
Flatpak [1] offers something similar on Linux:
> The FileChooser portal allows sandboxed applications to ask the user for access to files outside the sandbox. The portal backend will present the user with a file chooser dialog.
> The selected files will be made accessible to the application via the document portal, and the returned URI will point into the document portal fuse filesystem in /run/user/$UID/doc/.
[1]: https://docs.flatpak.org/en/latest/portal-api-reference.html...
> Which is what apparmor and firejail/sandboxes kind of want to embrace but in practice people just care too little.
In practice I don't have the time to debug every shitty little AppArmor integration for weeks. I lost days to libvirt-manager because its AppArmor support was enforced and not even half-assed. Some configuration paths would automatically get whitelisted in its auto-generated AppArmor profiles; others would just get you a file-not-found until you whitelisted them manually. The process responsible for generating these profiles would also silently kill itself if it encountered a path that was on its internal ban list. Have fun debugging that when you do something like use an alternative BIOS ROM, which by default are all stored in a blocked path.
AppArmor feels like security through obscurity: unless you already know that you are dealing with AppArmor fuckery, there is no chance in hell that you will be able to run your application. And not being able to run anything is the holy grail of security.
Regarding the last paragraph... AppArmor writes pretty verbose messages visible in journalctl (and in dmesg, I think), so it's not really obscurity.
I used libvirt with apparmor and was pretty satisfied with it
the problem is "just that" is not good enough
because programs often to need to have part of the capabilities of the user which started them, just a very well controlled subset of them, something which the UNIX model can't properly represent (through you can hack it on-top of it)
There is also the problem of having by default "owner-user owner-group other" as permission sets for files and executable. This works if others is "other humans" (assuming it does work, security issues on shared systems based on that where not uncommon). But this works much less if you want to protect users from rogue programs because then "other" tends to be far to permissive.
Process owned by human-user fork(2)s and then exec(2)s suid program owned by program-user; program owned by program-user then does most of the work; but calls back over a domain socket to program owned by human-user to get it to do things on the program-user’s behalf.
Picture: local DB client, remote DB server. Server can stream a file to the client for the client to write to disk. “On the same machine, as a different user” is just the trivial case of “over the network.”
This doesn't actually provide the benefit of application isolation though; if the software is malicious or vulnerable the as-user component could be as well. Remember that the biggest use case for application isolation is untrusted applications. Essentially any setuid-based approach to isolation requires a trusted developer using very good practices to remain secure, and that's why it's faded away.
What's insecure about setuid if the setuid user isn't a privileged user? For example, a setuid-nobody program, shouldn't be any more insecure than a systemd service spawned as User=nobody, no?
(Also, implied is that any untrusted logic lives in the spawned program, while the "client" program is simple and auditable. As I said: like a database client vs a database server. Or how about: like a client that wants to print something, vs. a print server embedding untrusted printer drivers!)
like I said: hacks
If everything is a file, and files can have permissions, then you can simply allow the "program user" access to those files using groups.
The group model is far too inflexible to make this realistic... A file can only have one group, and people use more than one application. ACLs are available on Linux (although seldom used) and help to address this problem, but the ergonomics are very poor. Since ACLs don't address the issue of syscalls, IPC other than file-based, etc., it hasn't really made sense to make them the focus of application isolation efforts. The kernel namespacing and capabilities features are a lot more attractive for this use and are more similar to the historic approach of chroot... But the tools still aren't great.
>A file can only have one group, and people use more than one application
But users can be in multiple groups. You can have files with groups like "graphics", "audio", etc. and give the application users access by adding them to the relevant groups (see the sketch below).
>IPC other than file based
This isn't the UNIX model though, is it?
Though I agree with you. Given the current state of programs, file permissions aren't enough for isolation.
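For what it's worth, a minimal sketch of that group-based scheme (the app-gimp user and graphics group are made-up names):

  # shared group for apps that may read the user's pictures
  groupadd graphics
  useradd --system app-gimp
  usermod -aG graphics app-gimp
  # expose the files to the group, read-only here
  chgrp -R graphics /home/me/Pictures
  chmod -R g+rX /home/me/Pictures

It works until a second app needs a different slice of the same files, which is exactly the one-group-per-file limitation mentioned above.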
That's what Android does: each app runs as a different user.
The issue with implementing that on traditional UNIX systems is that only root can impersonate another user. (Mechanisms such as su/sudo are achieving their goal through a setuid bit, and implement a policy using executable code in user space, which historically hasn't been without its own share of bugs.)
Next problem will be sharing data between programs that legitimately need to do so; if I had an _emacs user that owned my source code, how do I make it non-painful for the _gcc user to read the source and write the resulting executables (which would end up in a directory owned by _emacs)? What about git, various preprocessors/generators, formatters, linters?
You'd have to step out of the traditional UNIX authn/authz model to effectively implement that. It's what various security-focussed OS's have been doing for a while anyway; e.g. OpenBSD implements unveil, which "hides" entire branches of the VFS tree. For example, if git has no business reading or writing files outside of the currently operated on repository, it can restrict itself very early in the process life - before proceeding to perform any of the "tricky" operations that are the common sources of security bugs.
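A rough sketch of what that early self-restriction looks like with OpenBSD's unveil(2) (the repository path is just an example):

  #include <err.h>
  #include <unistd.h>

  int main(void) {
      /* expose only the repository being worked on, with read/write/create */
      if (unveil("/home/me/src/myrepo", "rwc") == -1)
          err(1, "unveil");
      /* lock the view: no further unveil() calls are allowed */
      if (unveil(NULL, NULL) == -1)
          err(1, "unveil");
      /* ... proceed with the "tricky" operations ... */
      return 0;
  }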
> The irony behind it is that one could argue that we are using UNIX wrong, because technically each program should run as its own user with its own groups.
I think one problem with the UNIX design is that UIDs/GIDs are a flat namespace, and commonly only 32-bits in size (even on 64-bit systems), when what is really needed to meet contemporary requirements is a hierarchy, either with an unlimited number of levels, or at least generous limits. Allow a user to create sub-uids (such as one per an application) and even sub-sub-uids (a web browser might create a sub-sub-uid for each website the user visits).
I think the Windows design of variable-length SIDs is in principle superior to the POSIX approach.
(Although, not necessarily in practice - it isn’t uncommon for Windows to make design decisions which in theory are superior to those of UNIX, but the practical implementation of them is full of warts, backward compatibility hacks, arbitrary limitations, and undocumented black boxes, which end up canceling out a lot of the theoretical advantage.)
Have you heard of user namespaces? They would match all your requirements it seems.
I have but I don’t agree that they do.
From what I understand, Linux user namespaces require you to reserve a UID range for each namespace to be mapped to its parent. Since you only have 32-bits to play with, you are forced to map multiple UIDs in the child namespace to the same UID in the parent, while many security decisions are based on the root user namespace UID only. So this is actually a lot more limiting and inflexible than Windows-style variable length UIDs would be.
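For reference, the mapping is expressed as lines of "ID-inside-namespace ID-on-host length" in /proc/<pid>/uid_map. A typical container-style setup reserves one 65536-UID block per namespace, e.g.:

  0 100000 65536

so UIDs 0-65535 inside the namespace appear as 100000-165535 on the host, and anything unmapped shows up as the overflow UID (usually 65534).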
> you are forced to map multiple UIDs in the child namespace to the same UID in the parent
Is that really a limit or just a thing for convenience?
I don't think that, besides 0 in the namespace being the actual user in the actual system as a good convenience, there is any "need" for pids per root-pid, and even if that happened it would save "root-pids".
And I find it unlikely as of now that a system would reach the 16-bit limit of running more than 65,000 applications on a single system without hitting some other limit like /proc/sys/kernel/pid_max or /proc/sys/fs/file-max first.
> I don't think that, besides 0 in the namespace being the actual user in the actual system as a good convenience, there is any "need" for pids per root-pid, and even if that happened it would save "root-pids".
What happens with filesystems though? I would assume the filesystem is using the root user namespace. Which means if you have two different UIDs and they map to the same UID in the root namespace, they get collapsed into one for file ownership/etc. That seems a rather major limitation.
> And I find it unlikely as of now that a system would reach the 16-bit limit of running more than 65,000 applications on a single system
With 32-bit identifiers, if you make each level 16-bit, you only have room for two levels. What if you have need for a third?
Also, you have to design a mapping from however many levels you need to the 32-bit flat namespace. A mapping which works well for one use case might turn out to be a problematic limitation in another. With variable-length UIDs there is no mapping to bother with.
> Also, you have to design a mapping from however many levels you need to the 32-bit flat namespace. A mapping which works well for one use case might turn out to be a problematic limitation in another. With variable-length UIDs there is no mapping to bother with.
Yes, this thing is gonna make IPv4 NAT look like a nice thing in comparison.
Yes, it will probably mean a horrible kludge mapping of isolated-applications to UIDs, but until you get to a 2^15 ~ 2^16 count of isolated-applications it should work fine.
Yes, this will be on a per-system basis; the resulting filesystem will only be usable by your system, and no other system.
What I'm saying is that in theory the "filesystem" and the "UIDs are 32-bit" parts are mostly there. They're there from the multi-user big-box days, not being used (except by Android/Linux).
> With 32-bit identifiers, if you make each level 16-bit, you only have room for two levels. What if you have need for a third?
The main reason why 65536 UIDs and GIDs are often submapped to every user is because POSIX systems often have a hardcoded assumption that user nobody is UID 65534, GID 65534, and if you want to run nested POSIX systems under POSIX systems without too many changes, reserving that many UIDs and GIDs is required.
There's no good place for that universal "nobody" user anyways, and if you're rethinking how the UID and GID mechanisms relate to security, definitely no place for a universal nobody, so you might as well map only the required amount of UIDs/GIDs per isolated-application.
That then leads to leaving unmapped UIDs unmapped on both the host and the isolated-applications.
Unless you're reaching a 2^15 ~ 2^16 count of isolated-applications, it should work fine.
Another option would be doing what Android (and supposedly Flatpak) does: you should not be able to simply run whatever you want if you're an isolated-application. If, as an isolated-application, you need to run another isolated-application, you need to invoke "the platform" via `am` or `flatpak-spawn` and use it to spawn another isolated-application.
> What happens with filesystems though? I would assume the filesystem is using the root user namespace. Which means if you have two different UIDs and they map to the same UID in the root namespace, they get collapsed into one for file ownership/etc. That seems a rather major limitation.
As far as I know most of what can be considered normal Linux filesystems (ext4, btrfs and I think xfs) support said 32-bit UIDs so you would not need to change filesystem code (and I believe changing and bugfixing filesystem code is always a scary proposition) to use a 32-bit mapping.
Nothing prevents you from using only UID/GIDs; there are other security mechanisms that could be used:
* present every isolated-application with a different overlay filesystem. So you can have several things read/write to the same places, but each one has its own view of what is being read/written.
* present an entirely different filesystem to every isolated-application (bindfs, as an example)
* every isolated-application has its own SELinux context (labels) or other form of ACL applied to those files.
But I find this birthday-attack scenario dubious: why would you "need" this UID overlap if the isolated-applications don't overlap outside of both namespaces?
If they aren't the same isolated-application it's the wrong thing to do and a security risk.
If they do map to the same isolated-application with the same set of data then trusting everything will be fine is a reasonable assumption. It isn't getting any more data or more privileges from being at different UIDs or GIDs in different contexts.
You are really proving my point - multiple paragraphs of details and provisos, many of which would have been completely unnecessary if UIDs had been variable-length all along, as Windows SIDs are.
It seems like the obvious solution. Users are protected from one another in Unix, applications need to be protected from each other, therefore applications must be users.
I ran a SaaS for a long time before containerisation, and we would create a new Unix uid for each customer, and run the application instance exclusively under that uid. Coupled with a postgres database instance and properly isolated postgres roles, it felt like a reasonable way to isolate customers from each other.
The problem with this approach is that, of course, it really doesn’t scale easily. Eventually you need multi tenant, and eventually we ended just pushing everything into the database, using row level security and tenant IDs. It worked great but felt more fragile (eg, you can disable RLS)
I’m not an OS expert by any means, but I think ultimately the problem is that we’re using one operating system model for two orthogonal use cases.
I feel like we need a well-defined client model - “one user with multiple apps” - and a well-defined server model - “one app with multiple users”. But it’s not clear to me how the OS can help with the latter, since it’s going to be domain specific. Maybe Postgres’ model is the right answer after all.
Unix doesn't make it easy for an unprivileged user to switch to a different user account for just one app though. Plus it gets more complicated when your application wants to save something to the disk so it can be accessed by a different application.
"More complicated" but not by much. That's where groups come in.
> in practice people just care too little.
I tried to use Apparmor and SELinux, but how policies work is beyond me. Snap's sandboxing seems to be the closest thing to user-friendly sandboxing, but it's still not that user-friendly.
Maybe on the server/desktop side of things. In embedded Linux the "user per app" scheme is very useful and is embraced.
The issue I have with it is that a lot of living off the land techniques are caused by this false sense of how UNIX user and group management is supposed to work.
I mean, the correct approach would be to have groups even for specific network protocols because capabilities are not enough to sandbox a binary correctly, and the network group is pretty much pointless.
And then there's icmp, which brings us to the ping binary which on lazy distributions still has an SUID flag set, as well as glibc which still allows LD_PRELOAD by default because it is intended functionality from the perspective of its developers.
Most of these privilege escalation exploits can be mitigated, if users and groups and capabilities are managed correctly.
In practice I probably would recommend to use the systemd seccomp sandboxes because most of these quirks have been abstracted away there and are configurable in the service files - like file/folder access, user/group randomization, chrooting, capabilities etc.
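As a sketch (the service name and state directory are placeholders), the relevant part of such a hardened unit might look like:

  [Service]
  ExecStart=/usr/bin/myapp
  DynamicUser=yes
  NoNewPrivileges=yes
  ProtectSystem=strict
  ProtectHome=yes
  PrivateTmp=yes
  CapabilityBoundingSet=
  SystemCallFilter=@system-service
  StateDirectory=myapp

systemd-analyze security <unit> gives a rough score of how much of this hardening a given service actually applies.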
That is what Android does. Each application (by default) gets its own user id.
Isn't this what Android does? Every app has its own user and group, and you only get to manipulate "kinda-global state" through platform APIs.
This is why I'm still hopeful that capability-based microkernels are the future. They simply fit modern security needs far better than hacky sandbox solutions on top of the same old operating systems with coarse permissions.
The use-case he is concerned with is multi-user, most likely remote login, sometimes desktop login. He whines about everyone not having root and breaking the box. Too bad. If you don't like the isolation and privilege separation a system provides, use virtualization, containers, run your own, or change the OS to allow your specialness that doesn't affect other users.
It's not wheel's fault, and so substituting the root group for wheel was nothing more than a special-snowflake move by someone who thought the sysadmins were holding things back and were the "enemy".
The solution is to use https://qubes-os.org. My daily driver, can't recommend it enough.
Qubes is one of those things that, I think, everyone knows is better but it seems just far enough away to not want to change.
How big of a change is it? If you are, say, a Linux terminal native can you just pick up and run?
Mostly yes. Your applications run in a standard Linux environment and if you pop up a terminal, hey, it's your favorite distro and it works.
There's some learning curve for features which exist for valid reasons, especially around communicating between domains. For instance, copy-and-paste between qubes requires extra steps. Plugging in a USB keyboard or mouse doesn't just work - you have to authorize it first (just click the OK button using a PS/2 mouse, or laptop's touchpad). You have to learn how to move files between qubes. USB drives, cameras, and microphones aren't globally available to all applications - you have to attach them to a qube first. You can install software using apt-get inside a qube, but it won't persist across reboots - you have to update the OS template.
I want those extra steps and complications - they are features, not bugs! The first few days you'll be looking things up in the FAQ. After that it's pretty easy.
There are a few sore points that don't go away. You don't get GPU acceleration in your web browser, so rendering is slower. Gaming is not an option. Your application qubes live behind a firewall qube, so things that require network broadcast like Chromecast won't work. Those are fine for me but not for everyone.
Please tell me the USB devices that were there at install time get authorized.
> You can install software using apt-get inside a qube, but it won't persist across reboots - you have to update the OS template.
> I want those extra steps and complications
Is it wrong of me to say that enabling persistence, with snapshots, on a qube should be a single toggle?
> Please tell me the USB devices that were there at install time get authorized.
Yes, if you only have a USB keyboard, it will work. Manual creation of a USB VM then is recommended for security: https://www.qubes-os.org/doc/usb-qubes/
> Is it wrong of me to say that enabling persistence, with snapshots, on a qube should be a single toggle?
Of course you are right. TemplateVMs provide the root filesystem to AppVMs, and software should be installed normally in the former. At every AppVM reboot, their root filesystem is reset to the one from the TemplateVM. Ordinary, persistent VMs are also possible. Details: https://www.qubes-os.org/doc/getting-started/
Qubes is enough of a pain to use that another OS project started to try to take the concept and make it more usable:
Doing so has proved hard and slow so far, and Spectrum hasn't had a usable release for the masses yet.
> Initial versions of Spectrum will have the user be responsible for writing Nix code for each application and resource, and the combinations they make between them.
As a qubes user, I think this is interesting but it definitely does not sound more usable.
You would feel at home more as a cloud native, because everything runs in its own VM, spun up and down on demand.
All software runs in Linux VMs, so it is practically the same as running several Linux operating systems with a nice UI.
I tried it for a week. Feels like overkill so I instead went for Fedora Silverblue & have everything in isolated podman containers.
Of course, I’m not being targeted by the state so my threat model is much lower.
Everyone is somewhat targeted by the state already... Just not at a pin point level yet.
If you're targeted by a state, Qubes on a PC isn't secure enough. It sits at a weird place, where it is stronger than your regular Linux, and showcases interesting ideas, but is quite restrictive in what you can do and doesn't provide any real security guarantees. It's an open-source small-shop project. Xen bugs and kernel bugs are too frequent; the big boys know them/buy them/make them/exploit them, surely silently for years.
The idea your data on a PC connected to Internet can be really secured from the most powerful actors is very naive.
Snowden is using and recommending Qubes [0]. Only 25% of Xen bugs on average affect Qubes [1] and never lead to escapes. What is restrictive about Qubes? I do everything I need on it.
Don't do things just because a twitter persona says so. Is there an independent security audit of Qubes that checks its factual capabilities in security?
> Never lead to escapes
Escape is the highest form of security failure. I'm talking about data access and exfiltration.
Do you store all your important data on a VM with no internet access? Even Qubes users don't; it's hard to work with. Then it's a Firefox or kernel bug away from being accessed remotely.
XSAs are publicly known vulnerabilities discovered by someone who wanted to make them public and later published by the Xen developers. There very probably are publicly unknown vulnerabilities, both in HW and in Xen, discovered/created by people who want to profit from exploiting them. There are whole teams focused on this kind of work, paid by states and criminal-enablers like NSO.
> What is restrictive about Qubes?
No GPU acceleration for video in a VM, legacy OS on dom0. Xen development in support of modern CPUs has fallen behind, didn't even boot on modern Zen X570 platform last time I tried, dysfunctional nested virtualization, using KVM from Linux does not work, can't run Android Studio with phone emulator.
> twitter persona
Did you just call Snowden a "twitter persona"? You're not serious. Not sure if I should reply further after that.
> Do you store all your important data on a VM with no internet access? Even Qubes users don't
Yes, I do. And I'm a Qubes user. There are many more users like me on their forums. This is much more convenient than you think: you can easily and securely copy/paste passwords wherever needed.
I am serious. He is famous for leaking interesting US gov documents and running away, but then he's become a celebrity who talks and writes on the internet. What computer security work is Snowden known for that makes him an authority on computer security? If let's say Kevin Mitnick said Qubes is a solid product for data protection it would be interesting; Snowden, not much.
What about that audit?
If you keep all your important data off internet, that's good for you, and Qubes helps.
It might be an overkill if you need too big efforts to switch. I did not feel that way. The independent VMs for different workflows even helped to organize my work.
It's one of those things I keep thinking I'd like to try but actually implementing it seems like it'd be a huge pain in the posterior at first.
Am I wrong about that?
It's mostly a huge pain in the GPU, unless you only need the basics.
Probably depends on the hardware. See this: https://forum.qubes-os.org/t/community-recommended-computers....
Upd: yes, also there is no GPU acceleration in the VMs.
> I have more than one interactive user account on some of them. I put these accounts in the wheel group, to avoid ever using a root password.
Why have multiple different unix users if all have the same power of the root user? They aren't isolated from each other, they aren't less powerful than root. So why not just use the already existing single root account?
First, some software literally refuses to run as uid 0.
Second, it can still be useful for bookkeeping.
Seems like it’d actually decrease security as now there are several different “root” logins that could be compromised.
Though I suppose, with further thought, it’s not significantly worse than having them in sudoers, in that particular respect.
That's why the same question really applies to the standard but insane practice of using sudo to get root. There is no security difference between root and a lowuser that can sudo into root.
Sudoing into root from a lowuser account is in some scenarios potentially more dangerous than just using both accounts separately: a user who uses root regularly gets accustomed to the fact that his commands are powerful and can screw up his system, so mistakes almost never happen, while sudoing all the time creates a false sense of security and makes the user more likely to run a harmful command with sudo.
Yeah I don’t really understand why when I ssh into a VM in the cloud I have to first connect as a static dummy username like ec2-user then sudo to root.
Yeah. From http://ec2-downloads.s3.amazonaws.com/AmazonLinuxAMIUserGuid... :
> To prevent remote root exploits, the Amazon Linux AMI does not allow remote root login via SSH[...] By default, the only account that can log in remotely via SSH is ec2-user. The ec2-user has sudo privileges.
Can someone please explain how this makes any sense for better security. It seems to be just a security theater.
I remember having BootCommander on a test box containing Windows 95 (and a bunch of other crap) to boot to up to 16 OSes.
Back in the day, university campus networks popularized a "cluster" approach of logging in to specific machines whereby hosts followed the pattern: $(uname)[0-9]{2}. You would soon setup authorized_keys and a script that would check the load of all machines by iterating ssh. The same home directory would be shared to all machines, regardless of OS, so portability of shells scripts and POSIX/C code was necessary.
corp users still share some machines.
actually if you think about it, lots of machines you OWN nowadays don't give you root access.
For example you can't get access at the actual filesystem on your iphone without a jailbreak.
The etymological history of the group name is interesting:
https://en.wikipedia.org/wiki/Wheel_(computing)
The term wheel was first applied to computer user privilege levels after the introduction of the TENEX operating system, later distributed under the name TOPS-20 in the 1960s and early 1970s. The term was derived from the slang phrase big wheel, referring to a person with great power or influence.
In the 1980s, the term was imported into Unix culture due to the migration of operating system developers and users from TENEX/TOPS-20 to Unix.
I always understood it as the thing you use to drive the ship.
I read the page and don't understand what's going on. What is special about the 'wheel' group and what is su even "checking" in the first place? Isn't it just supposed to switch user? And what are the implications of not-checking whatever it was supposed to check? And I also don't get: if someone has the root password, can't they change what groups they're a member of?
> What is special about the 'wheel' group and what is su even "checking" in the first place?
By convention, "wheel" is a special Unix user group that determines who can use "su" and "sudo". Most "su" and "sudo" implementations allow the sysadmin to make their use exclusive to the trusted users inside the "wheel" group. In most systems, it's the default setting of "su", and optional for "sudo" (given as an example in /etc/sudoers).
> if someone has the root password, can't they change what groups they're a member of?
No. If "su" is configured to be "wheel"-exclusive, you can't log in as root even if you have the password, because you cannot use "su" - unless you have direct access to the system console that allows you to type "username: root", which is almost never the case on servers that disable remote root login.
> What is special about the 'wheel' group and what is su even "checking" in the first place?
Users who aren't in the wheel group aren't supposed to be able to become root, even if they have the password.
> Isn't it just supposed to switch user? And what are the implications of not-checking whatever it was supposed to check?
Someone who steals the root password (say, by looking over the sysadmin's shoulder) would be able to become root.
> And I also don't get: if someone has the root password, can't they change what groups they're a member of?
No, because they can't log in as root and (on non-broken systems) can't become root.
Naive question: if you can `sudo`, can't you just `sudo bash`? What can you do with `su` that you couldn't do with `sudo bash`?
Or is the wheel group not really about being able to sudo?
You’re thinking about it backwards
`su` predates `sudo` by a decade and doesn’t offer the fine-grained control `sudo` has. With `su`, if you have the root password, you can do anything you want as root. With `sudo`, admins can configure what commands users are allowed to run as root and could specifically block `sudo bash` from running.
Wheel and su predate sudo by many years. sudo has a config file called sudoers; su has a config file called the wheel line in /etc/group.
It's 'substitute user', not just 'super user'. With su you can impersonate any user.
Wheel users can do anything because (default) sudoers contains
%wheel ALL=(ALL) ALL
https://unix.stackexchange.com/questions/152442/what-is-the-...
Basically, su vs sudo. Do you want any user to be able to become root, if they know the root password? Or do you want more control over the process?
To be fair, that's ancient. This links to coreutils 4.5.4. I've got 9.1 installed. The current manpage says support for wheel is implemented in PAM.
I found this patch, which adds PAM support to coreutils 5: https://lists.gnu.org/archive/html/bug-coreutils/2003-04/msg...
It removes the section at the bottom of the man page, and has this addition to the source code comments:
+#ifdef USE_PAM
+
+  Actually, with PAM, su has nothing to do with whether or not a
+  wheel group is enforced by su.  RMS tries to restrict your access
+  to a su which implements the wheel group, but PAM considers that
+  to be fascist, and gives the user/sysadmin the opportunity to
+  enforce a wheel group by proper editing of /etc/pam.conf
+
+#endif
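For reference, on PAM-based systems the wheel restriction this patch alludes to is typically a single (often commented-out) line in /etc/pam.d/su; the exact path and options vary by distribution:

  auth required pam_wheel.so use_uid

With that line active, su rejects non-wheel users before it even asks for the root password.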
A different reason why it's good that it doesn't support wheel:
- it makes it smaller, less code which can go wrong
- su isn't limited to "set user root" but wheel tends to be
- it avoids having to handle many kinds of subtle problems with group-based permission handling in Linux
It's just not a bad idea to have a very minimalist program like su and then delegate all more complicated "acting as user" permission handling to other programs like sudo or doas.
Though tbh, the more I learn, the more I come to believe that uid/gid-based permission handling is fundamentally flawed (but also good enough inside a single-application OCI (Docker) image). The fact that Linux had to add a (very limited) capability system, and that enterprise permission handling often goes through stuff like polkit adding additional handling beyond just "gid/uid match", is I think very telling.
I don't think GNU has ever committed to any kind of minimalist philosophy.
Have you seen the number of flags every command has? ls has almost the entire alphabet taken.
From the GNU Fortran manual:
“9.5 Case Sensitivity: There are 66 useful settings that affect case sensitivity, plus 10 settings that are nearly useless, with the remaining 116 settings being either redundant or useless.”
https://gcc.gnu.org/onlinedocs/gcc-3.4.6/g77/Case-Sensitivit...
GNU had to create a convention for the --long flags to fit all of their options.
I think the convention itself is good - I prefer long flags in scripts for improved readability.
But GNU is anything but minimal - compare GNU ls[1] with BSD ls[2], and try to recall the last time you needed --dereference-command-line-symlink-to-dir.
[1]: https://linux.die.net/man/1/ls [2]: http://man.openbsd.org/ls

> I don't think GNU has ever committed to any kind of minimalist philosophy
So true. I remember early on when GNU was started, people in the project were saying and developing with something like this in mind (paraphrasing):
"Make sure it works and meet users needs, even if it is too heavy for current systems, the hardware will improve as time goes on"
And that came true; for example, emacs is a lightweight ballerina compared to current IDEs.
suid bits are flawed and ideally should not exist. You should only be able to drop privileges. su/sudo should be replaced by ssh anotheruser@localhost (or simpler implementation with unix socket and without encryption, but the idea is the same).
You would not be able to change your password without suid.
I guess there are ways that sudo/doas could be adapted to implement passwd, chfn, chsh and friends, but the approach appears to have been chosen in the '70s, and codified by POSIX.
How do you think these should be implemented?
Just make a request to the service which runs under root to change password. Include necessary credentials (e.g. current password or its hash) and new password (or its hash). How this request will be authenticated is another matter, but there are plenty of ways to authenticate a request. Or may be there should be better ways if current ways are flawed.
My point is that it does not have to be coded in the kernel as a dedicated mechanism to circumvent protection. Use any IPC channel to send a message to another process that already runs as root and accepts those messages.
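A minimal sketch of how such a root-run service could authenticate the caller without any setuid bit, using the kernel-supplied peer credentials on a Unix socket (the socket path and the password-helper framing are made up for illustration):

  #define _GNU_SOURCE            /* for struct ucred */
  #include <stdio.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <sys/un.h>
  #include <unistd.h>

  int main(void) {
      int srv = socket(AF_UNIX, SOCK_STREAM, 0);
      struct sockaddr_un addr = { .sun_family = AF_UNIX };
      strncpy(addr.sun_path, "/run/passwd-helper.sock", sizeof addr.sun_path - 1);
      unlink(addr.sun_path);
      bind(srv, (struct sockaddr *)&addr, sizeof addr);
      listen(srv, 1);

      int client = accept(srv, NULL, NULL);

      /* the kernel fills these in; the client cannot forge them */
      struct ucred cred;
      socklen_t len = sizeof cred;
      getsockopt(client, SOL_SOCKET, SO_PEERCRED, &cred, &len);
      printf("password-change request from uid=%u (pid %d)\n",
             (unsigned)cred.uid, (int)cred.pid);
      /* ... only ever touch the password entry belonging to cred.uid ... */

      close(client);
      close(srv);
      return 0;
  }

This is the same trick the PostgreSQL "peer" authentication mentioned elsewhere in the thread relies on.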
On this, you must understand the original "poverty of UNIX," in that it originated on a 16-bit PDP-11.
There was no room in that environment for a running service to elevate privilege, so it was implemented as a kernel system call.
This poverty meant that efficiency was required, and setuid was the most efficient mechanism.
It was a reasonable and efficient mechanism for its time, and it has successfully scaled to the realm of modern supercomputers, and remains efficient on the lowliest of embedded systems.
Maybe there was a more secure option bearing in mind of all the places that UNIX was forced to go, but I cannot think of one.
You’re talking about tools and systems that just did not exist when the idea to have su check wheel membership came about.
It was a different world, and having some basic speed bumps like not allowing random user accounts to su to root was useful at the time.
Reaction: Mr. Stallman's idyllic worldview does not seem to admit that someone may actually own the computer system in question, or otherwise have legal rights to set limits on who uses the system, when, and for what purposes.
And what was allowed by the social norms of the tiny 1980's *nix computing world, or what you can get away with when you're as famous as Mr. Stallman...those may not translate well to other contexts.
Stallman's ideas are often informed by high trust environments and business arrangements where the cost of the software itself is a fraction of the TCO. There's a big disconnect between the environment where Stallman made up his mind (large education/business environments) and how most people are introduced to free software (low cost entry into technical computer usage).
I used to think Stallman was an ideologue from a different era. When I started dealing with software projects measured in years and millions his thoughts made much more sense to me. When you're selling me a system that has a 6/7 digit implementation cost for it to stand any chance of meeting my goals, withholding the source code only serves to annoy me.
Withholding source code is obnoxious in any era. But that's a pretty distant issue from administrative privileges.
Non-administrative users are given administrative privileges to complete their work all the time, even today. Misuse results in them being disciplined or fired. Heavy-handed privilege controls are very often a drain on the productivity of users and can result in stupid or dangerous (from a security standpoint) workarounds. 20 years on, you have to exercise more judgement in what you allow considering modern risks, but the idea that you shouldn't make things harder for people over a small number of bad actors that you can handle at an organizational level is still a good one.
It does seem a very strange position when today's sensibilities are applied.
I do understand the point of view when I think back. Today, Unix-like systems are everywhere. Learning it and working with it is a given. Back then, having access to a unix system was not a given. It was very expensive for hardware and software. The idea that one would be so close to the system and could be denied enough access by an overzealous BOFH was too much to take.
It just goes to show that circumstances change, and things can get weird if we don't change with it.
Well, I am a bit unsure if it does or does not translate well....
One of my favorites is extensive rights management, especially on CMSes. More often than not it is part of a buying decision, but it is used Stallman-style soon after.
The observation that these credentials leak is correct. Or that you grow permissions over time for no other reason than doing work. A wheel group would today quickly attract users, too.
So let it be. The latest iteration is "basically let everyone, but audited and short-term only". I find that very close to Stallman's idea, for very different reasons.
legal rights? "I'm on the side of the masses, not that of the rulers." He is pretty clear...
Reaction: How does that ideal play out, when a few kiddies start running fork bombs on a *nix system that Mr. Stallman wants to use?
This actually went on for many years.
rms famously refused to secure his account @gnu.ai.mit.edu, and so the machine basically became an open shell server for every hacker in the world, ca. 1992.
Lots of hijinks ensued, and it was usually not possible to do anything useful in that account, since it was usually broken or pwned in remarkable ways. But they were wild, fun times.
In a similar vein, EFF cofounder John Gilmore famously refused to secure his SMTP server at toad.com [1]. He believed SMTP should be open for all just like the old days, potentially making it basically an open relay for every hacker and spammer in the world. And so, he got into trouble with his ISP. Gilmore said the server was in fact rate-limited and the abuse potential was not as large as it appeared to be.
[1] https://en.wikipedia.org/wiki/John_Gilmore_(activist)#Activi...
Hah! I don't doubt it, but amusingly that section of the article is blissfully free of citations or sources, just some original research. Cool stuff.
I used to do dumb stuff by telnetting to mail servers, back in the day. I think I really annoyed a few coworkers with that; one said he almost called CERT.
I think it was one of the earliest cybersecurity realizations I had about how insecure the Internet really was, based as it was on blind trust among hosts that were supposed to have legitimate admins in control. It was so easy to telnet to a mail server and feed it whatever you wanted that I figured this must be the tip of the iceberg. And it was!
A fork bomb does not need root to function.
:(){ :|:& };:
explodes fine as a regular user
Only if you haven't configured sane limits.
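For example, a per-user process cap in /etc/security/limits.conf (numbers are illustrative) turns the fork bomb above into a nuisance rather than an outage:

  # domain   type  item   value
  someuser   hard  nproc  1000

ulimit -u shows the cap in effect for the current shell.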
Legal rights are not moral rights.
Compare to https://news.ycombinator.com/item?id=37173339 which has a lot of discussions of similar issues for "modern" security configurations. Just because IT admins can choose to set a short session expiration on your SSO integration for your MDM managed laptop doesn't mean that we should cooperate with them or develop tools to let them do that.
> someone may actually own the computer system in question,
And that person may not be the sysadmin.
I can’t stand his holier than thou writing. No amount of brilliance or clever code makes someone less of an asshole.
Some people are so dazzled by singing and dancing skills, that they consider their singer to be a hero and a nice guy.
Similarly, Stallman's coding expertise can sometimes overshadow any potential shortcomings in the non-IT subjects.
Personally, I'm dazzled by Stallman's singing and dancing skills.
Join us now and share the software,
You'll be free, hackers, you'll be freee
Also by his uncanny tendency to be proven correct in matters concerning software freedom. His coding expertise is tertiary at most. Honestly, I never see people praise RMS's coding expertise; where are you even getting that idea from? I don't think you get why people like RMS.
You just pinpointed the problem.
He is good at one domain, and then by cognitive bias people think he is right on everything.
It’s not true at all, and I think you have to take a bit of distance with glorifying IT personalities.
Like Bill Gates, Elon Musk, Stallman, and many others (especially in the VC world) it’s important to take them with a grain of salt, and not accept them as perfect nice guys because they have money (Musk) or influence (Stallman).
Otherwise they can spread dangerous ideas that normally should 100% be challenged, but that are not, due to blind acceptance.
> He is good at one domain, and then by cognitive bias people think he is right on everything.
You really don't get it at all. You're out of touch. People think that Stallman is right about one thing only, software freedom, and think he's out of touch with virtually everything else.
Well we actually somehow agree, but for different reasons.
It's good; for a second I thought you were supporting his views on non-IT topics.
The problem is that the political speeches are part of the person, and their scope goes way, way beyond software.
They are really interleaved with (supposed) IT topics, as if IT were bait.
Once I went to one of his conferences, and I "learnt" more about "sex" and "fascism" than about software engineering or freedom.
> If you are used to supporting the bosses and sysadmins in whatever they do, you might find this idea strange at first.
Should this be that far-fetched though? That employees might not be simple thralls of the capitalist, whose agency extends only as far as his master permits?
It reminds me of something I'd read that one of the reasons modern capitalism is so borked is because the founding fathers weren't conceiving of things like "Amazon" existing, where one entity employs a staggeringly large number of employees. Or that a small number of companies would employ such a large percentage of workers.
Their worldview was one where most people were "self-employed" - and if they weren't, employers were small and had a few or tens of employees at most. Or it was a matter of master and apprentices, where both groups were investing heavily in each other in a trade and in the running of a shop.
So, while yes our current system finds it a matter of course that employees are utterly subject to the whims of their employer and the legal and economic system fully supports them in this, does it have to be that way?
(I know you can go be a contractor, but good luck with health insurance and etc etc etc all the other things that being yoked to an employer brings that I wish were just public taxpayer-funded services).
> It reminds me of something I'd read that one of the reasons modern capitalism is so borked is because the founding fathers weren't conceiving of things like "Amazon" existing, where one entity employs a staggeringly large number of employees. Or that a small number of companies would employ such a large percentage of workers.
I'm not quite sure I buy that argument. They lived in the time of the East India Company, which owned something like 50% of the world's trade at the time and ruled several nations.
Yeah, there's definitely counterexamples. I thought of East India too. I really wonder what operating a huge company like that looked like in an era where the fastest way to transport messages was to have fresh horses pre-positioned every X miles and have someone gallop your message non-stop. I assume it was very different from Amazon employees peeing in bottles to avoid getting dinged for metrics.
> I really wonder what operating a huge company like that looked like in an era...
Lots of attempts to standardize procedures, etc., etc. - but East India agents far from home often had enormous latitude, and there were plenty of disasters and atrocities. (Not that either the British Government proper, or other European powers, were notably better. But they could certainly be worse - just look at the Spanish Conquest of the Americas, or the Belgian Congo.)
> ...the fastest way to transport messages was to have fresh horses pre-positioned ...
When there were enough short-but-important messages to be passed along a given route, they did have a far-faster-than-a-horse technology available - https://en.wikipedia.org/wiki/Optical_telegraph#India
Does this mean that RMS is ideologically opposed to sudo?
Maybe. You still only have to learn one password under each system. If someone named foobar sympathizes with you and leaks their password, then you login in as foobar and "sudo bash" or whatever. The way RMS envisions is that the root password is shared among many people and one of them shares it with you (there is no way to know who), and then you use your own account to "su" and use the root password.
It's easy to get someone in trouble in either case. If the user foobar starts doing crazy stuff as root, then foobar is in trouble. If you get the root password and start doing crazy stuff, then your username is associated with the troublemaking. (Assume that the logs go to some machine where you don't have access to remove them.) RMS's mechanism shifts the responsibility for your actions onto you, so that someone who knows the root password is more likely to leak it to you.
The best of both worlds is to get the root password, then find a hapless coworker who left their screen unlocked while out to lunch. su with the root password there, cause your chaos, everyone blames lunch guy. (Do look for outside systems; people and cameras can see you using someone's computer. Was always funny to me how many people have tried something like this, only to be nailed by the security cameras.)
Here's a video of RMS talking about the period referenced in that man page regarding the introduction of passwords on user accounts:
I’m curious how Twenex worked, that a non-operator (root) account was able to patch the kernel. Maybe the kernel files were unprotected, because it’d be absurd for ordinary users to want to change them? Or did he have to use an exploit to elevate his privileges?
Probably just didn't have filesystem permissions of any kind.
This sounds like a protest against someone not giving him the root password. A stupid protest.
There are few (to no?) situations where su has a good reason to check wheel.
You either have the password, or you don't have it. But not something in-between.
Outside of any ideology, in a scenario where you use su to become root, it's a very odd choice to link the wheel group to su; because if you know the password to the "root" user, and you have physical or remote access to the computer, you can likely just login as root.
And if you can't, then it means you actually needed sudo su, not su.
Those who actually need to be root, usually use sudo instead of su.
In the other cases, if you just need to switch user, then there is no point at all in referring to wheel.
You can disable direct root login and force users to login as their own account first. This way, any root login is tracked—you know who logged in as root, because they had to log in as their own account in order to run su.
In such case: sudo su, then.
and let sudo verify that the user belongs to the group of allowed sudoers.
No need for the password to the root account.
Objection: su is a very simple program that does (approximately) one thing. Meanwhile the sudoers(5) man page starts with an introduction to EBNF grammars.
I strongly prefer doas wherever it's available.
What a bizarre anachronistic rant.
sudo does completely obsolete su, yes. (sudo su is redundant; you can just use sudo -s.)
incorrect, they serve different purposes. If sudo isn't installed then you don't require security updates for sudo...
If you do have sudo you can be very restrictive on who can run what.
That doesn't make any sense? Why should it check some random group?
The idea is to only allow user accounts in the wheel group to invoke su to take on root privileges. So, if someone had access to a random user account and knew the root password it wouldn’t do them any good.
What a great example of how slavish devotion to an ideology makes idiots out of smart people.
That's just like, your opinion, man.