Snaps are an anti-pattern on Ubuntu
I had been an Ubuntu user for almost 16 years, on servers, laptops, and recently containers. The snap situation with Ubuntu is just plain unpalatable, both in principle and in practice. I became so disappointed in the move by Canonical that I finally left Ubuntu altogether and no longer recommend it to friends and colleagues.
It takes years to cultivate a garden, but only minutes to destroy it.
Linux Mint responded to the situation by removing snap from their repositories today[1]. I hope other distros will follow suit.
[1]: https://www.zdnet.com/article/linux-mint-dumps-ubuntu-snap/
Same here, we're already in talks with Red Hat to move dozens of servers which are currently running Ubuntu with enterprise subscriptions.
We know Red Hat is just as bad at forcing people to use their software (systemd...), but they have two key points which Ubuntu lacks:
- they usually win their software wars
- their software usually works well enough and they give you great support
- they maintain their projects and live with their decisions, unlike Ubuntu, which flip-flops with major updates (I'm waiting for them to switch from netplan to something else again and fuck up all my Ansible config again, which was already battle tested)
- they have better management tools overall.
The only reason we went with ubuntu in the first place was because it was familiar to all of us (we had all ran it on our desktops). When they take that familiarity away, then they lose their only real advantage.
If any Canonical employee is around here, they should be taking notes, or this will blow up in their face like Upstart, Mir, Unity 8, etc.
Or maybe Shuttleworth doesn't care at all and he just wants the big bucks from MS.
When they moved from Unity to Gnome shell, you could read all over the OMGUbuntu comments that people actually liked Unity and wished they would keep it.
It's always the same: the people complaining are the ones you hear the most.
Ubuntu has been a success because they took some risk.
The first was making it easy to install proprietary drivers, something they got a lot of heat for.
And since they managed to become the most popular Linux distro ever, I think critics should maybe ask themselves why.
I'm using Ubuntu 20.04 with Unity right now. Unity wasn't bad. I disliked it at first because it lacked features and it wasn't polished. After some time a lot of people started liking it.
But they messed up with Unity 8 and Mir. They didn't want to work with the community, they didn't care if they were working on stuff that already existed. Instead of working with the upstream devs, they worked alone. Where did they end up? Back to Gnome Shell with a lot of years of effort down the drain.
And this is even worse. More info here: https://news.ycombinator.com/item?id=23433794
EDIT: As a reply to this pointed out, these predate the things I claimed were created from NIH syndrome. Please disregard.
Original comment:
They also created their own init system Upstart, only to replace it with systemd later. They have their own source control system, Bazaar.
I really appreciated Ubuntu back in '06 when my laptop's wifi and graphics drivers just worked out of the box. I've used Ubuntu in various forms for a long time. But their management has some serious NIH problems.
> They also created their own init system Upstart, only to replace it with systemd later. They have their own source control system, Bazaar... their management has some serious NIH problems.
Citing these as a case of NIH is inaccurate -- Upstart and Bazaar predate Systemd and Git, respectively.
It's fairer to say that Canonical's technology, for whatever reason, often doesn't seem to catch on with the broader FOSS community.
Ah! That's my error; thank you for pointing that out. I hadn't realized that. I'll edit my comment.
I would agree with you in this about snaps IF snap were somewhere close to production level software.
Nobody can assemble some new kind of app packaging that slows app startup to 5-10 seconds and call that "good, rounded software".
"It is just a couple of seconds more sometimes, not a big deal" isn't going to cut it. Most of the Internet using Ubuntu already knows that.
I think Canonical is again in denial.
It has already happened a couple of times: Mir, Unity (buried probably by not being community driven), Unity 8, the mobile project or something, all but forgotten now.
This snap thing is probably the pet project of a couple of top guys at Canonical. Maybe hundreds of PowerPoint presentations have been given about some "big strategy" for the coming years, but they didn't know the thing (snap) is unbearably slow.
Apt sucked when it started. Give it some time.
It's a complicated endeavor, and it will need many iterations before becoming a decent solution. That's why we have to start ASAP.
I was annoyed at many of the moves Ubuntu made and voiced that, but at the end of the day, I kept using it. When 20.04 came out, I uninstalled snap. If I'm forced to use snap, I'm gone.
There's no comparison.
Before Ubuntu I ran Debian, and now Ubuntu will be gone. Evaporated in a poof of smoke. I'll be elsewhere. Probably back to Debian.
There are also these "AppImage" files. They launch, but there is no guidance on how to install them to the system.
Launching Chrome: I click the Chrome icon.
Launching PrusaSlicer: Start a terminal and type

    chmod 755 ~/Downloads/PrusaSlicer-2.2.0+linux-x64-202003211856.AppImage
    ~/Downloads/PrusaSlicer-2.2.0+linux-x64-202003211856.AppImage

That doesn't seem like progress to me from a UX perspective.
And flatpak. So now there's apt, snap, AppImage, and flatpak. Four fucking systems that need to be maintained just to update apps on an OS. Frankly, it's ridiculous, and the apps from all those alternate systems have some issues too. None work as well as apt. I don't understand what is wrong with apt. If they want newer packages, provide a repo for newer shit. Problem solved.
Apt is great as long as what you want is available in the repo. Over the years I have had a few issues though.
Often apt's versions trail behind the newest versions. There are some good reasons for this, but it can get in the way sometimes.
I have had to add a lot of PPAs to get some of the software I wanted. The Aurora channel PPA for Firefox stopped getting updates at one point (they discontinued it), and I didn't realize until a few versions later. I don't think any of these package managers have that problem figured out, but PPAs are commonly maintained by third-party community/unofficial folks. I had the same problems with Arch's AUR.
I believe it's fixed now, but for a long time Steam required a lot of 32 bit libraries, which meant two versions of several dependencies were installed on my machine.
Additionally we were using an older version of Ruby at work which required OpenSSL1.0 for certain libraries, and a Ruby upgrade (from an old version to a less old version) broke my development environment.
Not that the others are perfect. Just for example the Spotify snap package from the software center doesn't even work. The Deb had problems on my machine which I couldn't resolve. I finally installed Flatpak to get a working version.
I think this thing about apps not being available for apt is in the past. I specifically use Ubuntu because there are .debs for everything. Most new apps for Linux published on the Internet "need" to have a .deb available if they want to become popular.
Well, I have found some strange, uncommon software (mostly commercial stuff) which doesn't have any .deb available, no PPA, nothing.
It is clearly a decision from somebody who said "no, we are not spending hours packaging our app; if they want it, they will install it with the methods we provide".
Kind of nonsense, except that if you're on Linux looking to install a commercial app, special snowflake that it is, it's probably because you have zero chance of doing otherwise. So you bite the bullet and try whatever crazy deployment method they have put in place.
The most common stuff I found:
- Just download my zipped binary, you know how to deploy it
- Just run this command line, "sh something", giving it root credentials to run code from the Internet (yeah, I know, about as insecure as it gets)
Having said that, many commercial apps are there to be easily downloaded as .debs (they should just install with a GUI right from the link, in the old-Ubuntu behavior). Or they even offer detailed instructions to configure a PPA (to manually install with apt).
Heck, nowadays it is common sense and good netiquette to make the installation scripts in the downloaded .deb deploy the PPA for apt, so the next upgrade happens automatically.
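The idea above can be sketched roughly as follows. This is a hypothetical maintainer-script fragment, not any vendor's real packaging: "example-vendor" and the PPA URL are placeholders, and a real postinst would also have to install the archive's signing key. The demo writes to a scratch directory instead of the real /etc/apt.

```shell
# Hypothetical .deb postinst logic: drop a sources entry for the vendor's
# PPA so future upgrades arrive through the normal apt upgrade path.
add_vendor_ppa() {
    target_dir="$1"   # normally /etc/apt/sources.list.d
    mkdir -p "$target_dir"
    cat > "$target_dir/example-vendor.list" <<'EOF'
deb http://ppa.launchpad.net/example-vendor/stable/ubuntu focal main
EOF
}

# Demonstrate against a scratch directory instead of the real /etc/apt:
demo_dir="$(mktemp -d)"
add_vendor_ppa "$demo_dir"
cat "$demo_dir/example-vendor.list"
```

After this, `apt update && apt upgrade` would pick up new vendor releases automatically, which is the "good netiquette" being described.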
If you asked me (not that I would dream of controlling the app-deployment infrastructure in Linux), I would say "yeah, you need to contact EVERY piece of software not providing the .deb format and start working with them, providing free support, even free scripts to handle the packaging".
I remember in the 90s there was LOTS of shareware, because Microsoft knew this stuff from the 70s and 80s: if you want your software in use, you need to talk directly with those who could plausibly use it.
No, links hanging on some flashy website won't cut it, nor GitHub repos half-sharing some code.
I think the main problem with apt is that the traditional way of using it is to depend on other libraries installed by apt, so if two pieces of software want two different versions of something that are not forward/backward compatible then you have issues. However, I don't see why they can't statically compile OR containerize and still use apt. Just have the .deb install an appimage to /usr/bin and create the necessary .desktop files. The .deb would then have almost zero dependencies. Problem solved.
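The "statically compile and still use apt" suggestion can be sketched concretely. This is a hypothetical package layout, with made-up names and versions: a .deb whose control file declares essentially no dependencies, carrying one self-contained binary.

```shell
# Build a minimal package tree for a self-contained app. In real use the
# payload would be a statically linked binary or an AppImage.
pkg="$(mktemp -d)/selfcontained-app"
mkdir -p "$pkg/DEBIAN" "$pkg/usr/bin"

# Control file with no Depends line, since the app bundles everything:
cat > "$pkg/DEBIAN/control" <<'EOF'
Package: selfcontained-app
Version: 1.0
Architecture: amd64
Maintainer: Example Dev <dev@example.com>
Description: App shipped as a single self-contained binary
EOF

# The self-contained binary would be copied here:
touch "$pkg/usr/bin/selfcontained-app"

# Building and installing would then be:
#   dpkg-deb --build "$pkg" && sudo apt install ./selfcontained-app.deb
ls -R "$pkg"
```

The resulting .deb installs, upgrades, and uninstalls through apt like any other package, which is the point of the parent comment.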
I have the same problem, except I also add Brew for Linux to that list D-:
From the limited experience I've had with AppImages, I found they usually create a .desktop file to make the program searchable. Though I have to agree that there is no really simple, GUI way to install AppImages.
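For reference, such a .desktop file is just a small text file under ~/.local/share/applications/. A minimal hand-written example for the PrusaSlicer AppImage mentioned earlier (the home path is a placeholder) might look like:

```
[Desktop Entry]
Type=Application
Name=PrusaSlicer
Exec=/home/user/Downloads/PrusaSlicer-2.2.0+linux-x64-202003211856.AppImage
Terminal=false
Categories=Graphics;
```

Once saved there, most desktop environments pick the entry up in their app search without any further installation step.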
Yeah, mine used to in 18.04 and now they don't in 20.04. No release notes about the removal of that feature, no workarounds, nothing. Not that it's a big issue for me to create a .desktop file myself, but the apt-get experience is still slightly nicer because everything just works.
If you want a centralised repository of AppImages you might as well just use the distributions package manager for that.
    $ sudo install ~/Downloads/PrusaSlicer[tab] /usr/local/bin/PrusaSlicer
    $ PrusaSlicer
The gravy train of owning the platform with an app store like Google and Apple do is just too tempting to resist trying to pull it off, unfortunately.
Same. My go-to has been to use Qubes if you at all can, because it's actually secure, and otherwise Ubuntu, because it actually works. To me most of the bad reputation of desktop Linux seemed to come from people refusing to use Ubuntu for demented reasons... But I must not have been following distro news at all in recent years, because I only just now learned snap is not fully open. That's quite the cynical walled-garden power grab, and bad enough by itself to drop Ubuntu.
> To me most of the bad reputation of desktop linux seemed to come from people refusing to use Ubuntu for demented reasons...
My experience is completely different. I spent 7-8 years using Linux on a laptop, about 4 of those using either Ubuntu or derivatives, and my experience was that after about 6 months it was time to reinstall the OS.
Since then I have installed Fedora and it has been the most stable and resilient system I have ever used. I have also been treating it badly as an experiment (like powering off randomly when my 50 Reddit tabs were causing too much lag) and it has no problems at all.
Currently I am mostly using my office laptop with Debian, and it has the same issues as Ubuntu.
From my point of view, I cannot understand why Fedora is not more popular.
Reinstalling every 6 months? Why on earth?
I'm still using Ubuntu 16.04, which I think I might have reinstalled once (after I upgraded to an SSD, so it doesn't really count), and dragging my feet about upgrading because it simply works marvelously and I don't like change. It does everything I want, coding, watching movies, gaming.
I believe that; we probably did something subtly different.
I am not saying Ubuntu is bad, but it was not a good fit for me.
When I used to maintain a linux machine about 10 years ago, upgrading the kernel was a huge pain on a fedora system (basically required reinstalling the entire OS IIRC) but could be done using the package manager on ubuntu and thus was incomparably less painful. Has this difference gone away?
I never directly did that, my experience was that major version upgrades on ubuntu rarely were painless and on fedora they mostly worked.
I don't understand what the big deal is, though. I run ubuntu and I don't run any snaps. This seems to work fine.
If you're using desktop Ubuntu, you're using snaps. Run `mount` to see which snaps Ubuntu has mounted on your system.
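A slightly more targeted version of that check, filtering for the squashfs images snaps are mounted as:

```shell
# Snaps are mounted as read-only squashfs images; this lists them.
# On a snap-free system this prints nothing.
grep squashfs /proc/mounts || true
```

On a stock 20.04 desktop this typically shows core, gnome, and snap-store images, even if you never installed a snap yourself.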
Hmm, interesting. Thanks.
I do the same on 19.10. I also use another Chromium anyway, one that has hardware accelerated video playback.
https://launchpad.net/~saiarcot895/+archive/ubuntu/chromium-...
It's available on Ubuntu 20.04.
At one point the regular Chromium had a higher version number than the Chromium in the PPA, and aptitude updated to the regular one. I then gave the PPA Chromium a priority of 1000 by creating a file called /etc/apt/preferences.d/saiarcot895-chromium-beta with the following content:

    Package: *chromium*
    Pin: release o=LP-PPA-saiarcot895-chromium-beta
    Pin-Priority: 1000

The point is that later releases will start forcing snaps on people without their knowledge.
Welcome to the ranks of "old fogey." :-)
More seriously, this is not an unexpected response if you've been using a system for 10 years or more. If you were new to the system you would just say "oh, interesting, this is how it does self contained packages" but your perspective of having a system you understand well, has worked for all your needs, and you know all the ways in which to work around it if you can't get exactly what you want works against your perception of a new feature.
In my experience it is the leading cause of burnout in engineers. You learn things, you use things, you customize them to your needs, and then the 'new participants' who don't have any experience and find those things "arcane" or "opaque" re-implement them for themselves, their friends, their company, whatever. And then it's something new, and the new thing gets the exposure, so still more people see the 'new' thing without even knowing there was an 'old' thing, and it's just "the way this feature is done."
As an experienced person it is tiring and bothersome to have to re-implement tool flows, capabilities, and other parts of your environment because some youngster re-invented the wheel yet again and you were not in a place to educate them on why the existing wheel was just fine.
The longer you live the more cycles you go through and the more ridiculous each new re-imagining of how to do 'X' becomes until all you seem to do is complain about how in the previous versions everything worked fine and this new stuff is crap and you aren't going to put up with it.
At which point ageism kicks in and your employer lays you off with mumblings about "not a team player" or "resistant to learning new skills" as if sharpening a knife with a round stone is any different or any better than sharpening a knife with a square one. It is easy to get bitter. It is easy to just roll over and whine with your fellow "oldsters" about the "good old days". It is also a kind of death.
Counterintuitively, I suspect that if companies invested in keeping the status quo, engineering salaries would go down. That would result from skills learned as a junior engineer always being relevant to the current environment but increasingly more efficiently applied (as is typical, as people get more experienced, they do things more quickly). That minimizes the number of people you need to develop your products and keeps the number of engineers you need to employ down, so your costs go down and the poor engineers who aren't currently working have to compete more aggressively for available entry jobs by taking a lower salary. Fortunately, because it is counterintuitive, I don't think there is any risk of it coming to pass.
In my opinion, the elves have left Middle Earth. Ubuntu and Ubuntu's current cohort of developer/users are more interested in an open source version of Windows than anything else. As a result more and more "windows like" architecture and features are replacing the old "UNIX like" architecture and features.
You made me chuckle. I mostly agree with you [1]. Though, to be fair, Ubuntu has been accused of being an open source Windows (or rather, OS X) almost from the get go.
I'm really conservative about software and I want to keep my apt, dammit it! I did like Unity, but only because I was never too attached to Gnome.
----
[1] I still remember when KDE devs broke that desktop for me, I think it was KDE 4? They decided you just couldn't place desktop icons ("you're doing it wrong", foreshadowing Steve Jobs) and there was much gnashing of teeth, and I and many others ragequit to Ubuntu. Little did we know, of course :P
Interesting, what distro did you move to?
I took a shot and tried Manjaro. After using it for a few weeks, I can say that I absolutely love it, and will be a Manjaro user for a long time on my personal machines. For new users, I have been recommending Pop_OS.
I used Manjaro for about two years on my Dell XPS, and everything was a constant hassle: features I needed weren't set up by default, stuff broke constantly. Usually it wasn't too bad, so I just fixed it and moved on.
Then my laptop's ability to connect to WiFi broke, and there was no way I could figure out how to solve it. So I made a new partition, installed Pop_OS, moved my old home directory over, and was done with it. Pop_OS has been amazing: I always liked GNOME 3, and Pop Shell really helps in that dept.
Not OP but I have the same grievance. I moved to Fedora on my dev machines.
I moved to Fedora several years ago after getting fed up with Canonical. While it hasn't been entirely pain-free, I experience a lot less pain than I used to. Even OS upgrades always work on Fedora. Modern Fedora is the most "just works" distro there is, IMHO (but I'm still mad at Canonical, so I'm probably biasing my opinion with emotion).
I was using Ubuntu since 07, but recently changed my machine to Manjaro and love it.
The AUR is work of genius.
I still build all my containers at work with Ubuntu though.
I think there are some very good critiques of snap (performance, provenance, reproducibility, namespacing, etc), and the first couple points in this article seem reasonable.
However I can't agree with this:
> apt/deb is a wonderful package management system and everyone is happy with it, at least the majority of Ubuntu/Debian users. Besides, dnf/rpm is also a similar packaging system for Fedora/RH systems and everyone is happy with that too.
Debs and rpms are great at assembling tightly coupled monolithic systems. Great! Let's keep using them for the base system. However, when I want to install a Qt app on a GNOME system, or gasp a proprietary app, debs are insufficient. I want all of the Qt libs embedded in the package. I want the proprietary app in a container. I want MAC with a polished UX. I don't want debs to worry about those features. I want an "app store" done right: open yet verifiable. Protection in depth.
And I want:
- a user-space install option
- rollback functionality (!)
- being able to install multiple versions at the same time and switch between them
- if I really need to: being able to install the latest version (and even an unstable release); if that means that apt-get has to download and compile stuff, then I'm ok with that.
Flatpak ticks all of those boxes: https://github.com/flatpak/flatpak/wiki/Tips-&-Tricks
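For the points in the list above, the corresponding flatpak workflows look roughly like this. The app ID and commit hash are placeholders for illustration, not real values:

```shell
# Sketch of the flatpak commands matching the listed features:
#
#   flatpak install --user flathub org.example.App        # user-space install, no root
#   flatpak history                                       # list past deployments
#   flatpak update --commit=<old-commit> org.example.App  # roll back to an earlier build
#   flatpak run --branch=beta org.example.App             # run a parallel-installed branch
```

Parallel versions work because each branch (stable, beta, etc.) of an app is installed side by side, and rollback works because deployments are OSTree commits you can pin to.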
Sounds like Nix or Guix.
Lots of package management systems support these features. Even MSIs from Microsoft despite Windows ironically lacking a centralized repo until recently.
There's no streamlined enterprise support (insurance) for Nix or Guix, except when running ON established enterprise Linux distributions.
Both nix and guix are happy to run on top of an existing Linux distro.
That's an organizational limitation, not a technical one.
> I want all of the QT libs embedded in the package.
I don't understand why anybody wants this.
Libraries should have major versions and the latest of each major version should be compatible with anything using that major version, because that's what major version means for a library. You might then need to have more than one major version of the library installed, but any two applications using the same one should be able to use the same copy, and then have it maintained by one person in one place.
If every package has a separate copy of every library, people have to maintain them all separately. When that library has a security update, you now have to update five dozen packages instead of one or two, and have a security vulnerability if any of the maintainers don't keep up in a timely fashion. Which not all of them will.
> I want the proprietary app in a container.
People want containers to be magic but they're actually a hard problem. You want the app not to be able to do anything you don't want it to but still be able to do everything you do want it to.
A backup app that can't read my files is useless; it can't back them up. But it shouldn't be able to modify or delete them. But it should be able to modify its own state. It shouldn't have general network access but should be able to communicate with the backup server, which might have to be specified by the user and not the package maintainer. It doesn't need access to the GPU or the ability to use gigabytes of memory, but it does need to be able to transfer a lot of data over the network, but the data it transfers is lower priority than other network packets.
That requires the person configuring the app's container to have both detailed knowledge of the app and detailed knowledge of the container system. It's common for this not to be the case.
And that's why containers are a mess, not anything to do with the package manager, which should have little to do with the container system outside of packaging the app's default container configuration with the app.
> I don't understand why anybody wants this.
Because creating debs is largely a completely distinct undertaking from the dependency and build management the developer of an app does.
Bundles, whether via images or static binaries, allow app developers to distribute their app against the exact dependencies it was developed against -- potentially using the same build system.
There are obviously tradeoffs to each approach, which is why I don't think there's one right way to distribute every bit of executable code on a system.
> People want containers to be magic but they're actually a hard problem.
I work on a container orchestrator, so I understand some of the difficulty. :) Mobile apps are years ahead of desktop apps when it comes to containerizing in a user friendly way. Obviously there's plenty of work still to be done, but the problem is far from intractable and the benefits are enormous.
Mobile apps are a hellscape. It's an example of how wrong things can go. Apps treat their privilege model as a license to abuse all their privileges as much as possible.
Ability to read Contact list? Location access? Great let's upload it all to the Googbook analytics for data mining our customers.
But what's the alternative? No MAC and every app has all user privileges by default like most Linux distros?
I would argue the concept has not failed, just the implementations have fallen short. And for good reason! It's an extremely complex confluence of cutting edge technology, UX, security, privacy, etc. It's been improving, and surely the open source community can do it even better (or at least with a greater focus on privacy).
It would be nice to have a prompt when an app asks for phone book access:
Allow / Deny / Fake
The Fake would just have generated names and numbers to pollute their data mining.
> Libraries should have major versions and the latest of each major version should be compatible with anything using that major version
Should, but accidental breaking changes are a thing. Plus, flatpak more or less solves this by having standard runtimes (base collections of libraries/dependencies that flatpak apps target) that get security updates.
> That requires the person configuring the app's container to have both detailed knowledge of the app and detailed knowledge of the container system. It's common for this not to be the case.
With Snap, developers explicitly ask for the permissions they need and the approval process evaluates if it makes sense for that app to have those permissions (and by default or not).
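On the client side, that interface model is inspectable and adjustable from the command line. A hedged sketch, with "example-app" as a placeholder snap name:

```shell
# Sketch of snap's interface (permission) management:
#
#   snap connections example-app          # list interfaces the snap declared
#   snap connect example-app:camera       # grant an interface manually
#   snap disconnect example-app:network   # revoke one
```

Whether a given interface auto-connects on install or requires the user's action is part of what the store review process decides.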
> Should, but accidental breaking changes are a thing.
They're a thing either way. If a library has a security update that breaks your app, your choices are to have a security vulnerability or to have a broken app, until somebody fixes whatever is broken in either the library or the app. The only good option is to fix the breakage quickly -- or do better testing to begin with.
> Plus flatpak more or less solves this by having standard runtimes (base collections of libraries/dependencies that flatpak apps target) that get security updates.
At which point they might as well be their own packages so that you only have to install one copy of that version, since they're the same for every application anyway.
> With Snap, developers explicitly ask for the permissions they need and the approval process evaluates if it makes sense for that app to have those permissions (and by default or not).
This is not a solution to the hardness. If the developer doesn't understand the permissions model well enough to know to ask for a given permission they need, the app is broken. If they ask for permissions they don't need, either the approvers need to understand the app well enough to know it doesn't need that, or they approve permissions it doesn't need.
It doesn't get you out of needing somebody who understands both the app and the permissions model well enough to be able to correctly specify which permissions the app needs and doesn't.
This permissions model and approval process already works in mobile app stores. Yeah, if the developer doesn't understand the permissions model they ship a broken app, so what? Are we supposed to expose our computers so that incompetent developers don't break their own apps accidentally?
> This permissions model and approval process already works in mobile app stores.
The permissions are too coarse-grained. Flashlight apps ask for ridiculous permissions and get them. Some permissions are "too dangerous" so you can't have them even if the user trusts you completely and you have a good reason, which makes certain apps impossible. It's rubbish.
> Yeah, if the developer doesn't understand the permissions model they ship a broken app, so what?
So then they err on the side of requesting too many permissions, which is made even worse when they're coarse-grained.
> Are we supposed to expose our computers so that incompetent developers don't break their own apps accidentally?
What Linux does, to start, is to not package things from incompetent developers. If your app is nothing but a fork of Thunderbird that uploads all the user's contacts to your server, Debian isn't going to package that because there's no demand for it. But you could get the equivalent thing into the app stores, because things get there when developers push them there, not when packagers pull them there.
Then the Linux apps have the source code for anybody to view and modify. If the app was originally written to do something problematic, you can modify the app not to do that before distributing it.
That makes the permissions model much less important, because the problem of malicious apps is much reduced and all you need it for is containing bugs.
Your app isn't supposed to access the network, so you assert as such. Then if it has a bug or somehow gets compromised, the system can at least prevent it from accessing the network.
But you don't have such an aggressive tension between false positives and false negatives because more of the false positives got eliminated through having access to the source code and not packaging garbage apps to begin with. If a Debian packager doesn't restrict the app from accessing the network even though it didn't really need to, probably doesn't matter anyway. If an app store does the same thing, that was the only thing preventing the app from sucking up all your contacts and sending them to a third party server.
> I want the proprietary app in a container.
People want containers to be magic but they're actually a hard problem. You want the app not to be able to do anything you don't want it to but still be able to do everything you do want it to.
As I see it, the problem that containerization in snaps and similar solutions addresses is the isolation of system configuration.
I agree that permissions are a hard problem, and honestly I am not sure how relevant they are for snaps, but what is in theory feasible is that installing a snap could be completely reversible.
I believe that is true of flatpak at least.
> As I see it, the problem that containerization in snaps and similar solutions addresses is the isolation of system configuration.
If you drop your app's config file in /etc/ and nothing ever touches it, isolation isn't really buying you anything. If something does, that could still be what you want to happen.
For example, suppose there is a P2P app that can operate either by having you forward a port from your router (which is not always available) or by operating as a Tor onion service. To do the latter it has to modify Tor's configuration so that it allows incoming connections to the application's port. It's something you want to happen, it's something the package can clean back up again when it's uninstalled, but that doesn't work if the two otherwise independent applications have their configurations isolated from each other.
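For concreteness, the change such an app would need to make to Tor's configuration file (torrc) is a couple of directives like the following; the service directory and port here are illustrative, not from any real app:

```
HiddenServiceDir /var/lib/tor/example-p2p-app/
HiddenServicePort 6881 127.0.0.1:6881
```

Both directives live in Tor's own config, not the app's, which is exactly the cross-application configuration write that strict isolation would forbid.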
So it's still the permissions problem.
I would say that, more than having your own config left alone, the complex case is when two different applications want to modify a third party's configuration.
Software repos as a whole in part exist to solve and harmonize these cases.
> people have to maintain them all separately.
Yes. I think most people (most people don't run Linux in any form) would like to think of their system as having a collection of independent applications, not a set of libraries.
If people are expected to "maintain", or even understand, the concept of shared libraries, then I think the system is only geared to power users and tinkerers (the current user base more or less).
The people who maintain the libraries are not the users, they're the library package maintainers.
If you move the libraries into the application packages then the package maintainers for every application also have to maintain every library they use, duplicating the efforts of one another. The users then suffer when they do it poorly because they don't have the time or domain expertise to maintain multiple third party libraries in addition to their own application.
I can only comment on snap for the server (non-desktop) side of things but packages in Ubuntu, which are Debian packages, contain random (none or a lot) amounts of shell scripts /var/lib/dpkg/info/* which may fail for any reason and introduce any number of side effects into the system, as they handle sometimes very complex software migrations and can change any number of things. Surely the desktop has some history here too (packaged proprietary graphics drivers for example doing unspeakable things to Xorg configs come to mind).
Snap is a way to contain and scope this kind of scripted activity. This is a welcome change. Additionally, deb/apt has much worse transaction support than yum and its successors: you can simply roll back yum transactions, but good luck rolling back a borked APT system where package maintainer scripts have already done unspeakable things to the system and you're kind of stuck. APT's configuration system is also arcane and badly documented; the debhelpers that control how most packages are built and work are tens of thousands of lines of Perl, Python, C++, C, makefile and m4 code that somehow work, but are in no way a path to building straightforward, predictable packages. It's ultra flexible, but also ultra complex. The trend in package/release management is towards simplification, not complication. A stop-gap solution was the many projects that allow generating Debian packages from venvs or random directory trees (for which you could also use the deprecated old-style DEBIAN package format and pass it to dpkg-deb without the arcane dpkg-* toolchain, but again, Debian claims this kind of packaging is not "well formed", and who knows when it gets removed).
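The yum rollback mentioned above looks roughly like this (the transaction ID 42 is an illustrative example; on dnf-based systems the same subcommands exist under `dnf history`):

```shell
# List recent package transactions with their IDs
yum history list

# Inspect what transaction 42 actually changed (42 is a made-up example ID)
yum history info 42

# Undo just that one transaction...
yum history undo 42

# ...or roll the whole system back to its state before it
yum history rollback 42
```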
Snaps are just a different way to do integrated containerized applications with scoped config management, versioning and a release system on top (which makes the difference). Perhaps somebody could make a better solution. Meanwhile, RedHat introduced AppStreams -- probably Canonical also felt it needed an answer to that.
Ubuntu's exposure to their "universe" repository component, many packages in which are badly maintained and are not a great part of the release QA, is also a huge risk as an enterprise distribution (where the money is) so it is no surprise that Canonical is looking to decouple their core offering on the server platform from that and maybe at some point remove it from the base install altogether ("take your own risk") and then drop it.
If rpm/yum fixes much of apt's problems, why didn't Ubuntu change to that?
rpm/yum shares a number of apt's problems, and certainly doesn't fix all of them.
What you want is static executables. It is sad that those are not typically packaged in dependency-free debs, but there's nothing really against it.
Yes, sign me up please! You don't necessarily need to go all the way down to statically linking libc, but if just about everything else could be included in the binary, that would be lovely.
We have come up with such convoluted solutions to this problem—Docker, nix, Snap, etc—when the simple option is sitting in front of us. And it works! On my Mac, I don't want to install homebrew or MacPorts—Package managers make me feel like I never know the state of my system—so when I need a command line tool, I try to either track down or otherwise compile (in a VM) a static binary. When I can find them, they work perfectly!
You can link libc statically; it's probably smaller than the other stuff you use. If you are really worried about size you can use musl, which is quite a bit smaller.
The only problem I have found with static linking is for programs that use OpenGL, which need to be linked against a specific version of the graphics drivers. If you try to statically link a program that depends on libGL, it may work for you with an Nvidia chipset, but then fail at runtime for somebody who does not have the same type of graphics card. I do not know of a clean solution to this problem with static linking, but I would love to hear of one.
I'm not against linking libc statically, but the reason I brought that up is to preempt a counterargument I always see: what happens if a security vulnerability is found in libc, and just about every single program needs to be recompiled?
My response is either (A) so what, go ahead, it's worth the complexity tradeoff for static binaries, but also (B) you can link a tiny handful of core system libraries dynamically, and still reap most of the simplicity benefits of static linking.
The latter approach solves the OpenGL problem too. If there are real cases where users would need to swap something out, go ahead and link it dynamically. The goal needn't be to remove all dependencies, but to avoid the quagmire of hundreds (or more) of nested dependencies.
Then snaps could be the solution for proprietary software, not for regular software.
Statistically, proprietary software is regular software.
But yeah, snaps are a great solution for that, especially because getting non-FOSS software into official distro repos is not possible, and hosting your own repo is HARD.
And that's what's happening. I installed Telegram using a snap because of this.
> Statistically, proprietary software is regular software.
Statistically, proprietary software in the world of Linux is a black swan. Free software is the rule, proprietary software the exception to the rule which tends to be used only when there are no alternatives.
And Flatpak is that, much more than snap is!
The largest issue I have with Snap is that it appears to assume the owner of a system is not in the best position to make choices for their own system.
Sure, this might be true for the average user, but it is toxic to the “super user” community that’s in the best position to help support the larger community and may end up pushing them away.
Snap at the very least should have an opt-out feature, if not be opt-in during an install.
More criticisms may be found here:
https://en.wikipedia.org/wiki/Snap_(package_manager)#Critici...
There are parallels with the Gnome team's stance that "we know what's best for you", and that turns off a lot of linux users. There is a tension between those who wish to turn linux into Mac OS or Windows, and those who want fine-grained control over the workings of their computer. The arrogance of the gnome team and the snap apologists is a huge red flag to me. I don't use Ubuntu or Gnome, and I'm glad linux provides me that choice.
Even Windows and macOS provide a higher level of configurability out of the box than Gnome does.
I recently switched all my systems to another desktop environment because an app I needed to run was buggy under gnome and gnome developers in their infinite wisdom decided to remove the setting needed to fix it. Mind you, the setting had existed in gnome shell for years.
What gnome devs seemingly fail to realize is that gnome is only a means to an end (i.e. to run software). People /will/ switch to alternatives the moment that it fails to do its job. The exact same thing applies to ubuntu.
I find gnome to be pretty configurable. The opinionated defaults aren’t so bad because you can just replace the environment if you dislike it
You literally have to install a special tool to configure the look and feel. Did you miss the whole thing about how gnome devs don't want user themes to be supported? Or how they are forcing csd and dropping menus and config left and right? I'm guessing you weren't a gnome 2 user because it's night and day.
Trying to explain to a non-technical user that GNOME doesn't let them reconfigure something because the GNOME developers think they're an idiot who will be confused by configurability is a nightmare. I ended up telling my dad to install XFCE and he's not looked back since.
The problem isn't that GNOME devs are trying to make user friendly software for non-technical users; that intent is commendable. The problem is the GNOME devs have incredibly insulting opinions about the skills and intelligence of non-technical users.
Yeah, it's targeted at the elderly, if you guys weren't aware.
No. NO.
You can't even change the language shortcut from the default (Win + Space). As I understand it, this comes from macOS, which gnome devs brainlessly copy.
https://askubuntu.com/questions/41480/how-do-i-change-my-key...
Your example is dead wrong. I run Gnome and my language shortcut is CapsLock. I used the Tweaks configuration software which is part of Gnome (it is the one you are supposed to use for more invasive configuration).
I do find Gnome plenty configurable. You just need to go in order of Settings -> Tweaks -> their weird registry -> custom extensions. I would agree this is convoluted, but I do not mind it (as a power user it took me 5 minutes to google how to do it, while it probably makes sense to have only the first state (Settings) visible by default).
Yup, you can do all the miracles in the console, BUT reread the parent comment:
> I find gnome to be pretty configurable. The opinionated defaults aren’t so bad because you can just replace the environment if you dislike it
IT'S NOT CONFIGURABLE. You can hack your way around their "opinionated defaults", which are for macOS users from the USA.
Take a look at the KDE's settings for this case https://i.stack.imgur.com/ukKmp.png
There is no reason to google, install some tool and mess with it.
The examples from the OP are configurable from Tweaks which is a GUI. No hacks, command line use, or third-party installs. In particular, the equivalent to the screen shot you showed is available from Gnome's Tweak tool.
FYI, the Gnome Tweaks interface is basically identical to the KDE settings here.
Screenshot: http://pvv.org/~jabirali/tweaks.png
You do understand that them being extensions means they are not part of GNOME.
You picked the 4th stage but conveniently skipped all 3 tools that precede it. The Tweaks tool is a GUI that is a part of Gnome and it deals with the examples that OP raised.
You must be really new to Linux. I mean, wow.
This is a rather childish way to respond to my comment... What is the point of being antagonizing like this? How do you see the conversation progressing or what point are you trying to make?
For context, I have used Linux and other Unixes for 14ish years, spanning the spectrum from embedded devices to supercomputers, with (or without) a variety of graphical shells.
The problem is configuration churn. A lot of Gnome 2 stuff didn't carry over, if I recall, and had to be reinvented for Gnome 3.
Gnome 3 has been out for 9 years now, which is longer than the time between the releases of Gnome 2 and 3. I don't think there is a lot of config churn. (It may have been worse in the early days of Gnome 3.)
Compare against KDE and check how far is "configurable".
How do you set the background colour in GNOME?
At least in the recent versions I've tested, you literally can't. You can only set it to an image.
> Sure, this might be true for the average user
To be fair, that has always been Ubuntu's target market...
There is an opt out feature: diversity.
You can always use only the system package manager, or use a distro that doesn't use snap.
All those complaints feel so moot.
It's really hard to be in FOSS nowadays: you can't make a move without your users judging you all along the way, because a lot of them are idealists who expect a lot from you, yet don't think about the non-tech-savvy users.
It's way easier to make proprietary software: most of your users don't criticise any single decision you make, you don't have to justify yourself, you get much more users, and you make money out of it.
> It's way easier to make proprietary software: most of your users don't criticise any single decision you make, you don't have to justify yourself, you get much more users, and you make money out of it.
Why on Earth would you think that users of proprietary software don't criticize it? I'm pretty sure that Windows gets more criticism than Ubuntu...
Take Zoom: on HN we heard a lot of complaints for ethical reasons, but the biggest part of the user base doesn't care and is just happy with it.
With FOSS, you'll get complaints about the software itself AND about ethics from most of your users, because they are mostly technical, and the user base contains way more idealists than the average user sample.
Super users should be using Debian. Not being snarky.
I'd expect users who go into Debian expecting some power-user version of Ubuntu to be disappointed. To be happy on Debian, you need to adopt their philosophy that stability is better than having the latest-and-greatest.
For those unfamiliar, Debian releases come out about once every two years, at which point all software in Debian's repositories is frozen at its current version. Software receives security updates between releases, but nothing else.
I personally think this is wonderful, and I would absolutely use Debian if I was interested in switching to Linux (which I'm not, at the moment). Constant change is inherently frustrating, even when the changes themselves are a net positive (they often aren't). Debian's approach provides a level of reliability and consistency that is sorely lacking in most modern software.
So, while I also recommend Debian, I do so only if you too agree with the above paragraph.
Debian has stable, testing, unstable, and experimental repositories.
If you only enable stable, then you are signing up for very outdated software.
If you add `testing`, you get quite a ways towards having an up to date system, while still not having to worry too much about odd bugs.
Adding in `unstable` gets you about as close to up to date as you can get without compiling the source yourself.
Experimental is good to keep around, but in my experience most things skip it and just hop straight to unstable.
The beautiful thing about Debian compared to Ubuntu is that it actually is a rolling release system. Ubuntu users have to worry about what version they are on. With Debian, you set what track you want to follow and just remember to install updates as they become available.
Because it's a rolling release, you're much more likely to catch small issues and be able to isolate what package is causing the problem, as opposed to doing a thousand package upgrades at once and then being snagged because one of them had an install issue.
Debian strongly advises against mixing repositories: https://wiki.debian.org/DontBreakDebian#Don.27t_make_a_Frank...
Debian Testing is an option, but would you recommend that over a distribution focused on rolling releases, like Arch? From my vantage point (which isn't particularly good, as a non-Linux user myself), most of the Debian project's effort is concentrated on producing Debian Stable. Case in point, security updates for Debian Testing are sometimes significantly delayed.
> would you recommend that over a distribution focused on rolling releases, like Arch?
Everything about Debian except the `stable` repository is explicitly a rolling release.
> Debian strongly advises against mixing repositories
Certainly you wouldn't want to add in the other repositories if you're aiming for Debian Stable type guarantees.
This is the `sources.list` file that I've been using for nearly a decade:

```
deb http://deb.debian.org/debian/ testing main non-free contrib
deb http://deb.debian.org/debian/ unstable main non-free contrib
deb http://deb.debian.org/debian/ experimental main non-free contrib
```

And then I have a preferences file that prefers testing to unstable to experimental (actually three separate files in the preferences.d directory, but I'd think you could combine them):

```
Package: *
Pin: release a=testing
Pin-Priority: 700

Package: *
Pin: release a=unstable
Pin-Priority: 650

Package: *
Pin: release a=experimental
Pin-Priority: 600
```

It may not be advised, but it works pretty well. Sometimes you have to get a bit creative when you go to run `apt-get dist-upgrade` and it wants to delete half your system, but usually you can just manually install individual upgrades (`apt-get install <x>`) until it unwedges itself.
Back when I used Debian (10 years ago), testing was the staging ground for the future stable version, so it had a varying number of issues as the maintainers stabilized the system.
Unstable, on the other hand, was a really rolling release system; in my experience more stable than testing (an understandable quirk, as it was used by maintainers to prepare the next release).
At that point I decided that I'd rather use Arch than unstable Debian, but unstable was quite similar regarding package cadence and stability.
Debian offers both the stable release which you mentioned as well as more up-to-date 'testing' and relatively cutting-edge 'unstable' releases. The 'unstable' release tends to be stable enough for day to day use by the so-called 'power/super/hyper/turbo/whatever' user, it hardly ever breaks. I tend to run stable on servers, unstable on user-facing desktop/laptop/notebook applications. Even on servers I sometimes add the testing or unstable repository at a lower precedence to be able to selectively add packages from there. I've done this for decades and have yet to have a significant breakdown on either server or user-facing installations.
Consider that LTS editions of Ubuntu, which are very popular and Canonical themselves recommend, work exactly the same way.
I'm a super user but the last time I tried to use Debian as a desktop OS the experience was so disappointing that I reformatted to Ubuntu. I gave a shot to KDE and reformatted again to Gnome after a couple of weeks. That was 2014. I removed Debian as a possibility, KDE maybe someday. On a server Debian is OK.
What didn’t you like about Debian on the desktop? Old software?
I personally use any distro based on the situation.
As others have pointed out, common reasoning behind picking a distro include: hardware requirements, existing ecosystems, end user, package management, configurableness, security, long-term support, ease of learning, driver support, core dev team’s opinions, funding, etc.
I don't remember exactly, it was 6 years ago. I remember the general feeling of having to do too much work to get a usable desktop. Maybe the settings? Keep in mind that I don't need anything fancy. I liked Gnome 2 and I stayed with Gnome Fallback until last year when there were enough Gnome Shell extensions to bend it to what I like a desktop to be.
Maybe Debian is on par now. Old software can be worked around with containers and third party apt repositories. I often do that on Ubuntu too.
Debian has improved greatly in the last six years.
Try it with the xfce or mate flavours.
These days Debian Gnome and default Ubuntu are so similar that the difference is negligible once you've installed 1 or 2 plugins that mimic the Ubuntu functionality. Ubuntu's Gnome is just plain Gnome with a few in-house extensions made by Canonical - hardly worth it in my opinion, while Debian is rock solid. Buster (the current stable) is nearly perfect IMO.
“Super users” in this case is relative: the super users of Ubuntu’s community are the ones most likely to have an issue with this and willing to leave for another distro if needed.
Beyond that, it’s increasingly common to see Ubuntu used in the enterprise, and the dev tasked with dealing with the issue may not have the authority to decide whether to use Ubuntu or not.
I have to disagree. I've used debian for years. I stand by the assertion that it's a wonderful platform for a server, but marginal for a desktop.
Should? Maybe. Probably will at this rate? Almost certainly.
One of my big things with snap is how it locks the snap into the home directory. I get why they do that, but it would be nice to override[1]. In my case I want to play audio files outside of my home directory but VLC doesn’t have access. And VLC now only updates the snaps and not its repos, so you have to use the snaps.[2]
[1]https://bugs.launchpad.net/ubuntu/+source/snapd/+bug/1643706
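For what it's worth, snapd's interface mechanism can sometimes widen this: if the VLC snap declares the `removable-media` plug (check with `snap connections`; I believe it does, but that's an assumption), you can grant it access to files under /media and /mnt:

```shell
# See which interfaces the snap exposes and which are connected
snap connections vlc

# Grant access to /media and /mnt (assumes the snap declares
# the removable-media plug)
sudo snap connect vlc:removable-media
```

This still doesn't help for arbitrary paths outside the home directory, which is the bug report above.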
The website you linked (https://www.videolan.org/vlc/download-ubuntu.html) says:
> If you wish to install the traditional deb package, it is available as usual via APT, with all security and critical bug fixes. However, there will be no major VLC version updates until the next Ubuntu release.
This is in line with Ubuntu (and Debian) repo policies. You do not get major software updates in between distribution updates unless you use a third party repository. You do get bug fixes and security fixes, and/or can track unstable if you need the bleeding edge.
Ohh yeah I misread that
>> In my case I want to play audio files outside of my home directory but VLC doesn’t have access
You can try using soft links as a work around.
Is this possible? A soft link is just an indication of where to look for the file, so if the process can't see the file, a soft link won't help.
A hard link should be able to see the file, but they need to be in the same file system, and I don't know enough about Ubuntu's default partition scheme to say if it's doable for most users.
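The distinction can be demonstrated with plain coreutils (all paths here are throwaway temp files): a soft link stores only the target path, while a hard link is a second directory entry for the same inode and data.

```shell
# Demonstrate soft vs hard links in a throwaway temp directory
set -e
tmp=$(mktemp -d)
echo "some audio data" > "$tmp/song.mp3"

ln -s "$tmp/song.mp3" "$tmp/soft.mp3"   # soft link: stores the target path
ln "$tmp/song.mp3" "$tmp/hard.mp3"      # hard link: same inode, same data

# Both resolve to the same content for an unconfined process...
cat "$tmp/soft.mp3" "$tmp/hard.mp3"

# ...but the hard link shares the inode, while the soft link does not
stat -c '%i' "$tmp/song.mp3" "$tmp/hard.mp3"   # identical inode numbers

rm -rf "$tmp"
```

So for a confined process that cannot resolve the original path, only the hard link (a real directory entry in a visible location) has a chance of working, and only within one filesystem.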
This is a rant against snaps. Well, let's get to it.
Sorry, snaps are a LOT slower than just running a binary. Did I say they're slow? Well, they're slow.
I have an SSD and it feels like it's 1992 and I'm trying to run some snap on a Cyrix without cache and 16 MB of RAM. I switch to the binary version (oh my, Chromium), and it's a freaking flash: 0.x sec and you're there, the full app is available.
Snaps are a NO GO my friends.
Besides having LOTS of problems running outside the standard GUI (Gnome 3), or even in the standard (supposedly heavily-tested) GUI, they are slow.
Sorry, I've already said that, huh? SLOW, that's snaps.
If there is somebody from Ubuntu here, please take a serious look at how snapped apps (pun intended) read/write $HOME defaults.
I mean we have to have defaults somewhere. So thingies like the colour theme, the theme engine, default download path, etc. are fully followed just as the user has configured them.
I use Ubuntu, but I certainly would not be using it in the future if my applications, which now take merely 0.x seconds to open, start to take 3, 4, 15! seconds to open. In fact I started to look at Debian and Fedora; they currently appear to have saner defaults than Ubuntu.
No, the second time I open an app in a session doesn't count AT ALL for the speed.
As a package manager for my app, snaps have really been an anti-pattern, and we're considering removing support for them.
Here are some problems I've had:
- snaps use a different directory than our main app. So if you install our debian package, then go to a snap package, all your data seems to vanish. It's just in another hidden directory. I tried to figure out how to get the directories to sync up but couldn't get it to work as it's yet 'another thing to support'. I only have so much time.
- snaps have various bugs that you encounter after you've shipped the app that aren't present at build time. Mostly due to being in a container and 'reasonable' things not being accessible and needing to be granted access to via a configuration file.
The strategy I'm thinking of migrating to is to just distribute as a .deb and have our own apt line that is installed during the .deb installation. I think this is what Slack and other Electron packages have migrated to which is easier for them to support.
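A minimal sketch of that strategy, as a fragment of the .deb's postinst maintainer script (the repo URL, keyring path, and package name are all hypothetical placeholders, not a real vendor's values):

```shell
# Hypothetical postinst fragment: register the vendor's own apt repository
# so future `apt upgrade` runs pick up new releases of the app
cat > /etc/apt/sources.list.d/example-app.list <<'EOF'
deb [signed-by=/usr/share/keyrings/example-app.gpg] https://apt.example.com/ stable main
EOF
```

The matching prerm/postrm would remove the file again on uninstall, so the user's sources stay clean.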
I mean conceptually it sounds great. Put your apps in a container. They will be isolated. Great. But in practice it's a nightmare.
To be fair though, macOS had similar issues when Apple started going with isolation and privileges.
I think the main issue is that none of the OS maintainers spend a day in the shoes of a package maintainer. And if they did, they don't care, because they own the OS and many of these apps compete with their core products.
At least you have plausible deniability that your behavior isn't anti-competitive - you're just trying to improve the security for the user!
For example, Zoom got a ton of crap about their installer but they compete with Facetime which DOES NOT have to constantly ask the user for privileges. Apple granted Facetime these privileges via the OS.
From the perspective of a user, it's horrible.
"Can this app access your Downloads folder?"
"Can this app access your Webcam?"
"Can this app access your Microphone?"
"Can this app access your Documents folder?"
... and on and on, ad nauseam.
Devils advocate: it is plain weird for every app I install to have so much file system and system access. It’s nice to have a sandboxed solution built in. It would be nice if it was a solution that didn’t have the problems that this article listed, but snaps could be adapted to be good with a few changes.
Why a proprietary backend though? I suppose Canonical views packaged apps as a platform opportunity and wants to be the first to “capture” the users before somebody bigger comes and takes over?
> Devils advocate: it is plain weird for every app I install to have so much file system and system access. It’s nice to have a sandboxed solution built in. It would be nice if it was a solution that didn’t have the problems that this article listed, but snaps could be adapted to be good with a few changes.
Exactly. Personally I have been sticking my desktop programs into "firejail"-managed "containers" for a long time. It's a good thing that a similar solution has been implemented that is suitable to bring this to the masses.
> It's a good thing that a similar solution has been implemented that is suitable to bring this to the masses.
Couldn't you just make a "default-firejail" package that installs the symlinks to firejail somewhere that's before the default install install location in your path? And maybe consider installing that in the base install, but that potentially risks breaking things unexpectedly for users.
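For reference, firejail's basic usage looks like this (`--private`, `--private=dir`, and `--net=none` are real firejail options; the app names and the sandbox path are just examples):

```shell
# Run a browser with a throwaway private home directory
firejail --private firefox

# Confine an app's filesystem view to one directory and
# cut off networking entirely
firejail --private=~/sandbox --net=none vlc
```

A "default-firejail" package as described would essentially just put wrappers like these ahead of the real binaries in $PATH.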
Wasn't AppArmor already doing this though? If I remember correctly (and I never properly read up on it, so please correct me if I'm wrong) it limits which syscalls you can do and with which parameters, like opening only certain files. I think apparmor rules/profiles were becoming more common to be delivered with their respective packages (I'm using Debian), and it sounds like that already solves your exact concern without deviating from apt:
> it is plain weird for every app I install to have so much file system and system access
A quick glance at Wikipedia to make sure I'm not talking out of my ass seems to confirm that:
> is a Linux kernel security module that allows the system administrator to restrict programs' capabilities with per-program profiles. Profiles can allow capabilities like network access, raw socket access, and the permission to read, write, or execute files on matching paths. [...] AppArmor is enabled by default in Debian 10 (Buster) [from July 2019].
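As a rough illustration, an AppArmor profile fragment looks like this (the program name and paths are made up; real profiles live under /etc/apparmor.d/ and the exact rule set here is a sketch, not a vetted profile):

```
#include <tunables/global>

# Hypothetical profile for /usr/bin/example-app
profile example-app /usr/bin/example-app {
  #include <abstractions/base>

  network inet stream,                  # allow TCP networking
  /usr/bin/example-app mr,              # map/read its own binary
  owner @{HOME}/Downloads/** rw,        # read/write only the user's Downloads
  deny /etc/shadow r,                   # explicitly forbid sensitive files
}
```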
(Also, I'm not a fan of claiming "devil's advocate" when you're saying something that you know everyone will agree with. It's similar to saying "downvote me all you want but [insert popular HN opinion]". Of course the principle of least privilege for software is something lauded by every logically thinking person.)
AppArmor has only ever caused me pain.
For whatever reason it's decided that I'm not allowed to connect to the network.
It's easy enough to remove the package, but it likes to tag along as a dependency when installing updates.
Sounds like snaps are causing more pain than apparmor though?
I mean, if you want security but don't want the inconvenience that comes with new efforts towards sandboxing, then either you have to read up and help in the effort xor wait for others to fix it and run old and stable software xor not have security.
I wanted to love snaps. I really did. I like the idea of a self-contained program that doesn't get into DLL hell, and can't stomp all over my system if it misbehaves, and can be cleanly uninstalled.
Unfortunately, snap comes with all of these extra issues that happen when the developer isn't empathizing with the user. Also, much to my chagrin, snaps don't actually uninstall cleanly, and can really hoop your system. I now install snaps INSIDE of an LXC container so that snap can't misbehave and break my system, or else if I can I just use apt with a custom repo (for docker because the snap is awful). Ubuntu 20.04 will probably be my last Ubuntu system, and that's a shame... I really liked it.
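The LXC trick above looks roughly like this with the LXD tooling (the container name "snapbox" and the snap name are arbitrary examples):

```shell
# Keep snapd and its snaps inside a disposable container
lxc launch ubuntu:20.04 snapbox
lxc exec snapbox -- snap install some-snap

# Run the snapped app inside the container
lxc exec snapbox -- some-snap --version

# If the snap misbehaves, throw the whole container away
lxc delete --force snapbox
```

The host system never sees snapd's loop mounts, systemd units, or auto-updates.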
The point of snap is you have one single package for all releases instead of one package for each release.
This reduces the maintenance burden, while also allowing you to have up to date packages.
The way this is done is by bundling all dependencies in your snap package rather than using the system ones.
I think it's a great idea for applications you want to be updated frequently, like VS Code, Chrome, etc.
It's not perfect, e.g. back end is closed source, but I'm glad Ubuntu is giving us options and being at the forefront of package management.
I like Snaps. After the debacle that is Catalina, I built my first Linux desktop in 15 years. Ubuntu 20.04 worked perfectly out of the box, with Wayland and everything. Snaps and the Snap store are a huge improvement over the previous GUI interface to Apt.
I ended up settling on Silverblue, a Fedora derivative that uses an immutable base system along with Flatpak for applications, and it’s been great. Equally trouble free, and Flatpak has many of the benefits of Snaps without some of the downsides (fully open source).
I've been using Linux for 15 years as my daily OS, I love the package system, but this point of view is missing the point.
Those arguments are always from very tech-savvy people.
They are a good example of purity over practicality, completely ignoring all the problems apt/yum have for the average user or a dev publishing software.
If you are looking for reasons why Linux on the desktop never happened, well, this is one of them.
The whole snaps business was already annoying. Then my external monitor stopped being detected due to some issue with Nvidia graphics (which probably is more Nvidia's fault than Ubuntu's). I'm using a Thinkpad X1 Extreme Gen 2. I tried Fedora 32's live USB and basically everything worked, but the installer didn't detect all partitions and didn't give a choice of exactly what goes into which partition without offering to wipe out stuff (they need to take a cue from the flexibility of Ubuntu's installer). Finally I tried Pop!_os's live USB and everything basically worked + had no issues with the installer either.
I wasn't keen on using a derivative distro of a derivative distro; thought of using Debian but wasn't very confident if the issues won't persist on it. Considering everything just works perfectly with Pop!_os, I might just stick with it.
Ubuntu was such a breath of fresh air at the beginning (I still remember Dapper Drake). It is sad to see it going this way.
Say what you will of their hardware, but the system76 team really nailed pop_os. So many little things I was used to suffering through manually configuring on a fresh install were just "right" straight out of the box.
Supposedly they're working on tabbed and stacking layouts for tiled windows- if that is the case I don't know if I'll go back to i3 or sway again!
I don't have any experience with their hardware but I'll have to agree on the software front. I have a multi-port hub that I also connect the HDMI cable for the monitor to; when I connect the hub to the laptop, it's actually plug-n-play. I haven't had to run one command to hack/override configs etc.
I tried their tiling option but didn't stick to it because when I maximized a window and toggled to another, the original window went back to being tiled. Too lazy to remember shortcuts for tiling WMs (probably to my own detriment)
I've been a critic of their hardware in the past, but yes they do a great job on the software. And to be fair the purchase on which my earlier criticisms were based is almost four years old now. Hopefully they have improved the hardware side of things.
Honestly, this article had some poor arguments, like Apt being fantastic (it isn't; see below). I do not see what the hate around snaps is for.
Yes, some snaps are not updated, and VLC only working in the home directory, as another HN commenter said, is a pain.
Snaps still solve a lot of issues on Linux that Windows does not have.
What happens if the new version of your editor breaks some well known plugins?
On Windows you just install the older version from the archives. On Linux you risk dependency issues. Snaps are a single command to switch to an older version. Very important in both the software and media space.
One option is to stay on the LTS version of Blender.
Another example: back when I used Atom, one upgrade completely broke a popular plugin for me because of a change in Chrome. I assumed it would be fixed quickly, so I didn't revert the package, and I lost the older version in the cache. It took the Atom devs two weeks to handle the Chrome change, and then the Arch maintainer of the Atom package didn't get around to updating it for a week or two after that.
Things like apt pinning really don't help when you discover the issue on your main workstation. It also doesn't change the fact that reverting is so much easier on Windows. Snaps make version control even easier on Linux than on Windows.
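For what it's worth, the one-command rollback mentioned above can be sketched like this (using vlc as an arbitrary example; any installed snap works the same way):

```shell
# Show the installed revision plus any previously cached revisions
snap list --all vlc

# Roll back to the previous revision in a single command;
# no dependency resolution or archive hunting required
sudo snap revert vlc
```

A later `snap refresh vlc` moves back to the current revision when you're ready.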
The back-end being proprietary is a fair argument, but for a company that has worked this long with Linux and free software, give Canonical a little trust to eventually release it.
I honestly like snaps. I like having Spotify and Discord just working; as for his other concerns, I just don't care.
What's the problem with Spotify's deb package? Works fine for me.
Because adding a bunch of third-party repos/packages is annoying when I need to update or upgrade across major releases. I also don't want to give Spotify root; I just install it and stop caring about it.
Same here: I use it for Firefox and Chromium, and it just works. Another comment mentioned VLC, which should be a good candidate for frequent releases too.
I can understand the "proprietary store" concerns of this thread, however.
They should maybe consider adding an option for people to point at other repositories and surface them either in the ubuntu software store or in their own software store.
Why not just get a Mac?
Not saying you're wrong, but if the response to people wanting to use linux is "Get a Mac" then that's problematic.
It isn’t. I personally use both a Mac, and Linux. Openness is what makes Linux valuable. I have no desire for Linux to become like the Mac with proprietary parts of the system.
I'll take a contrarian/devil's advocate stance: for some applications and some usage scenarios, snaps make a lot of sense.
Take a multi user system and users who run applications like chromium or firefox.
It's dangerous to upgrade such applications while users are running them: the files a running application depends on can change out from under it, making it break in weird ways or forcing the users to restart it.
If these apps were distributed as snaps, it wouldn't matter: running instances would keep using the old image without any problem, while new launches would get the new image. If you really wanted to push users to exit and restart (say, for a security hole), the same mechanisms that exist today could be used.
With that said, I think it should be a choice, not something forced down our throats. If I install something with apt/dpkg, I expect an apt/dpkg package, not a snap. If I want a snap, I'll install it with snap.
You do realize snaps have been around for four years, right? They have widespread first-party support from various companies including Microsoft, Amazon, Mozilla, Google, Spotify, JetBrains, etc., and widespread adoption, with almost 10x the install base of Flatpak.
Do you really need to keep throwing blogs at something that isn't going away and is useful to users? How is this useful in any way? Canonical isn't suddenly going to give up on it, and I don't even want them to.
The main point here is that snaps take control away from the user. The user has no say over how and when apps update. The backend for snaps is proprietary and completely in Canonical's control; if they decide to push ads through snaps, they can at any time. And hijacking the chromium apt package to install a snap without user consent is a move straight out of the Microsoft playbook.
This feels very similar to what Apple, Google, and Microsoft do on their OSes. But Canonical seems to have forgotten that such behavior is exactly what drove a lot of people to Linux. It is never going to be accepted. Nor should it be.
Very well said! I am afraid there is a prophecy in your description. I have started moving all of our infrastructure away from Ubuntu at work. This sounds horrible. 2020 has been bad enough for one year.
> Do you guys really need to keep throwing blogs at something which isn't going away and is useful to users? How is this useful in anyway?
Personally I find snaps a very disappointing solution to a very interesting problem.
> Do you guys really need to keep throwing blogs at something which isn't going away and is useful to users?
Do you really want the answer to that...
There are people who would loudly insist that there should be nothing but a kernel and Stallman's FAQ.
> There are people who would loudly insist that there should be nothing but a kernel and Stallman's FAQ.
Literally nobody says that. Not even RMS says that.
systemd went away. Oh, wait...
The thing about that is, even systemd contains bits of functionality that overlap with snap/flatpak now.
In my opinion, AppImage solved the problem Snap tried to solve, yet without being horribly intrusive.
Kudos to Linux Mint for taking a stand in regards to it, and hopefully more Ubuntu-based distros do the same until Canonical gets the idea.
I don't think they're out there trying to be malicious, but they need to be set back to the correct path as it has been the case with odd monetization choices for Ubuntu before that didn't really benefit anyone.
The anti-features of snap seem like business decisions. Making it easy for users to use alternative stores would kneecap Canonical's paid IoT offerings https://ubuntu.com/internet-of-things. Also, I wonder if the ability for developers to force software updates could be marketed as a kill switch for proprietary software vendors.
My prediction (95% confidence): Snaps will become installable from Windows (via WSL) and Microsoft will buy Ubuntu and try to take over the installers for other Linux distros by having them use Snap as the main repo. This will be Microsoft´s long term app installer strategy
Am I missing something? Aren't https://github.com/snapcore/snapcraft and https://github.com/snapcore/snapd free software under GPLv3? I had seen some discussion before about the server not being open source, but I can clearly see a store API there. I'm just looking at the code now and haven't taken any time to test it out for myself yet.
Regarding Debian packages and apt: as we've seen with the sheer number of PPAs, there really has to be some solution there. I like the snap format and believe it's heading in the right direction. It still has some problems with desktop software, but those seem to be getting addressed as it progresses.
There is this strange cold war between Red Hat and Canonical, and Red Hat seems to have most of the NIH problems if you look at the history. I really don't think Red Hat, or some of its developers, particularly like relying on code from Canonical.
Snaps and flatpak help Linux work for me. My ideal desktop is a stable core with a few select user applications that I can keep current. Thanks to snaps/flatpak I can run Debian stable on my personal desktop, but keep emacs, firefox, and a few more apps at their most recent versions.
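For anyone curious, the Debian-stable-plus-fresh-apps setup described above is roughly this (a sketch; Flathub app IDs shown are the current ones, e.g. `org.mozilla.firefox`):

```shell
# Install flatpak on Debian stable
sudo apt install flatpak

# Add the Flathub remote, the largest public flatpak repository
flatpak remote-add --if-not-exists flathub \
    https://dl.flathub.org/repo/flathub.flatpakrepo

# Install current Firefox alongside the stable base system
flatpak install flathub org.mozilla.firefox

# App updates are now decoupled from the distro's release cycle
flatpak update
```

The base system stays on Debian stable's schedule while the handful of user-facing apps track upstream.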
I'd like to point out that, as far as I know, nobody or very few people have problems with Flatpak.
There is flatkill.org, but that kind of comes across as FUD.
The privilege-escalation attack looks a bit worrying, but aside from that it looks the same as or better than native packaging (a sandbox for some apps), or the problem is on the developer (out-of-date packages).
It's possible I'm misunderstanding something, if so please feel free to tell me so.
Snaps, flatpaks, and other containerization solutions only work on systems set up in the last five years. Most of the containerization is just a symptom of the real problem: future shock around underlying libraries (C++-whatever, glibc, etc.) and new features being used.
I'm _fine_ with snaps.
As long as they are limited to applications (i.e. not OS components) and look reasonably like normal applications.
Why? Because you can't ask developers to package their apps for 100+ distros. And they let us run latest versions of apps on an otherwise stable/old OS.
Flatpak and AppImage would like a word.
I recently upgraded Ubuntu and had to start dealing with snaps. Of the two I tried to install, both were out of date, and I ended up using apt to great success. What was the point of this thing?
Someone looked up from his workstation, put a finger in the air, and said: containers are cool, Windows 10 is cool; what if we made containers like Windows 10 and then mounted them the way the Mac mounts installation images!
> of the two i tried to install
Which two? A package is only as good as its curator. That sounds more like a complaint about Ubuntu's snap update policy, or about the community support for whatever you were trying to install, than about snap itself.
I don't understand the fuss here. People who want control over the open source software they install still have it, it still works like it always has. People who don't care can't even tell the difference.
And in a handful of situations, like large apps with extensive dependencies, or externally managed builds, or needs for cross-distro binary compatibility, Snap has real and tangible advantages.
I think the issue here is that the apt versions are not always maintained after an app moves to snap releases.
> And in a handful of situations, like large apps with extensive dependencies, or externally managed builds, or needs for cross-distro binary compatibility, Snap has real and tangible advantages.
I see that snaps can offer these advantages, but so can AppImage and FlatPak, and these are even more cross-platform and don't come with the same limited ecosystem.
I think snap versus apt is a question of taste, on the spectrum from black box that just works to harder to use but fully in the user's hands. I'm currently facing the same choice in Python between pip and conda. In the end, Ubuntu was a very nice bridge between a user system that just works and a server OS for production (db, backend, etc.). I hope snap won't spoil that.
Snaps and flatpaks are something I've wanted on Linux forever: self-contained apps. Supporting umpteen million Linux distros is a non-starter for small ISVs (or even large ones). If they aren't perfect, fix them, but don't throw out the best thing that's happened to software deployment on Linux in forever.
I am currently using lxd to manage containers on a headless server. I just found out that lxd on Ubuntu 20.04 has snapd as a dependency, which seems a bit odd. Does anyone know if there is an easy way to install lxd without snap, or should I just ditch it and try some other lxc-container manager?
I've been unhappy with the direction of Ubuntu for a while, but having said that, it's still the best collection of packages that install with fairly sane defaults for a desktop/laptop. I've adopted a model of starting from a ubuntu-minimal installation and just installing the debs I want (with recommended packages disabled by default). This gives me a fairly reliable base from Ubuntu but a system assembled how I want it (except systemd; can't avoid systemd).
Honest question: why not fedora?
I've removed snaps on my new Ubuntu installs. Not sure how well this is going to work out in the long run but if things start to mess up I'll move back to LinuxMint.
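In case it's useful to others, a rough sketch of what removing snaps looks like (package names vary by release; `gnome-calculator` is just an example of a preinstalled snap, and the removal order matters — installed snaps first, then the daemon):

```shell
# See which snaps came preinstalled
snap list

# Remove each snap, then the daemon itself
sudo snap remove gnome-calculator
sudo apt purge snapd

# Clean up leftover snap directories
rm -rf ~/snap
sudo rm -rf /var/snap /var/lib/snapd
```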
Are the arguments presented legitimate? Doesn't apt put the power in the hands of the distro developers and not the end users? After all they get to decide what is packaged and what is not.
It's all open-source code, so you can download and install it yourself; I don't buy the argument about it being against the GNU philosophy.
Apt and snaps solve different problems. The only argument I can see here is the one about the back-end which is old and tired.
...Six years of NixOS, and I'd almost forgotten that elsewhere, unprivileged users and admins can't manage packages the same way.
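To make that concrete, a quick sketch of per-user package management with Nix (the attribute prefix depends on your channel name, e.g. `nixos.` or `nixpkgs.`):

```shell
# Any unprivileged user can install into their own profile; no root needed
nix-env -iA nixpkgs.hello

# The package appears only in this user's PATH; other users are unaffected
hello

# Undo: roll this user's profile back to its previous generation
nix-env --rollback
```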
How is the package situation on NixOS nowadays? Do you find you have to custom install a lot of software or are there native Nix packages for most everything you need?
What kind of learning curve should one expect if migrating from debian/ubuntu based distributions?
There are existing packages in Nixpkgs for almost everything. Home-manager has firmly emerged as the "configuration per user" tool. I might say, in fact, get comfortable with home-manager and then switch to NixOS, if cold turkey sounds like a lot.
It's a very, very different sort of distro, but if you know basic functional programming and are willing to do some "unlearning", you should be fine.
I still haven't had a chance to learn about this. Can someone please put in plain English what snap is?
There are real issues with snaps but “apt/deb is a wonderful package management system and everyone is happy with it, at least the majority of Ubuntu/Debian users” is such an immature argument.
If Linux is going to have any chance of replacing macOS or Windows for the vast majority of users, it needs a uniform executable format, and Snapcraft seems like the best candidate. Without one, a lot of developers won't target Linux, which is a deal-breaker for a lot of people. Clearly it's preferable to install common packages from a standard package manager, and there should be better controls over how snaps update for power users, but most potential Linux users want apps that work like apps on their phones: you install them from a GUI app store, have access to proprietary apps, and the apps keep themselves up to date.
Desktop Linux doesn't really have the goal of replacing Windows or OSX. It's a hobbyist thing, target more or less at software developers, and I think that's perfectly fine.
Plus, the desktop is becoming less and less relevant every year.
I spend easily 10 hours a day probably at a traditional computer. My wife spends an equivalent amount of time I'd guess in front of a screen, but it's her phone.
The first I heard of snaps was while I was upgrading from 18.04 to 20.04 (I only do LTS for work servers). What drove _me_ away personally, was I had no idea what snap is and it was all over the place all of a sudden. After studying up I was like “no I don’t want that, I want to have full control over my system”.
I switched to Fedora Server. The packages are much fresher, the kernel is fresher. It is a “staging” area for Red Hat, which for me is a plus.
Goodbye Ubuntu. I’ve been using you as my primary Linux distro since version 6.06!
Please, Canonical! unsnap snaps.
Bye bye Ubuntu if not.
> The backend is proprietary.
GitHub is proprietary. I understand that some people don't accept that either. But if you accept and use GitHub, then you should have no problem with snaps on this basis.
Also, on this topic, consider this quote[1]:
"We did that experiment with Launchpad; the people who said they wouldn’t use it because it wasn’t open source were the same people promoting a closed source alternative. When we open sourced Launchpad, they said that they wouldn’t use it anyway because Canonical was the primary contributor."
I am not saying that Flatpak is proprietary. I am saying that the focus on the backend not having source available is specious.
> Developer controls the updates.
Users CAN defer updates (e.g. because they're on a metered connection, or they only want to update on Patch Tuesdays, or whatever). In the default arrangement they cannot defer them indefinitely, but in today's Internet-connected world, refusing updates forever is anti-social and unacceptable anyway, so I don't see a great loss there. That said, you can manually install a snap such that it never updates[2].
> APT does a fantastic job as it is.
No, it doesn't. It is fantastic for distribution releases that don't change their dependency structure after release. It's terrible for shipping new software to an existing distribution release. This shows up in packages that need frequent major updates (e.g. Firefox, which added a whole new Rust toolchain dependency that had to be backported into existing stable releases), and in various third-party apt repositories that break users' systems by messing with distribution-provided dependencies in ways future distribution updates don't know about. apt/deb also provides no application sandboxing for third-party software that you might trust less than your distribution. If you've ever tried to ship software to users as deb/apt as a developer, you know this. Complaints about it are all over the Internet, and this has been the consensus for many years.
> Don't shove it down our throats, make it optional at least.
It already is optional. You can remove snapd and pin snapd in apt to a negative score to never have it installed again[3]. Chromium won't be available to you as a distribution-provided deb in Ubuntu 20.04 then, but nor is it in Mint.
[1] https://forum.snapcraft.io/t/linux-mint-20-disables-deb2snap...
[2] https://forum.snapcraft.io/t/disabling-automatic-refresh-for...
[3] http://manpages.ubuntu.com/manpages/focal/en/man5/apt_prefer...
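The removal-plus-pin arrangement from [3] can be sketched like this (the file name under `/etc/apt/preferences.d/` is arbitrary):

```shell
# Remove snapd, then pin it at a negative priority so apt
# will never pull it back in as a dependency
sudo apt purge snapd

sudo tee /etc/apt/preferences.d/no-snapd <<'EOF'
Package: snapd
Pin: release *
Pin-Priority: -10
EOF
```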
If you don't like Github (or if they make a change in the future that makes you dislike Github), you can use Gitlab CE, Gitea, Gitorius, git over ssh, etc. all open source options while still using the upstream git client.
If you don't like the Snap Store, well, get coding and be prepared to fork the client also.
And for the record, Flatpak has no such limitation: while many people use Flathub to host flatpak repos, there are many more Flatpak repos available, including distro-hosted ones (Fedora has a system for building Flatpaks from Fedora RPMs and hosting the resulting repos).
> If you don't like the Snap Store, well, get coding and be prepared to fork the client also.
Given the number of OSS projects submitting packages to winget on GitHub, it's pretty safe to say most projects don't care whether a web service is OSS. That boat sailed like ten years ago.
Gitlab is one open source alternative to Github.
> refusing updates forever is also anti-social and unacceptable
Woah, what?
What about Flatpak or AppImage?
It is not really optional if you have to specifically prevent Snap from being installed again.
> GitHub is proprietary. I understand that some people don't accept that either. But if you accept and use GitHub, then you should have no problem with snaps on this basis.
Which is a website and has nothing to do with software you install on a workstation.
> But in today's Internet-connected world, refusing updates forever is also anti-social and unacceptable
Can you elaborate?
I've never felt a desire or need to use snap at any point. Since we've been implementing CIS standards I've started nuking snapd on host bake.
Maybe it's a thing for linux on the desktop, but my time isn't worthless so I don't do linux on the desktop.