Flatpak Is Not the Future
ludocode.com
To be fair, as a daily user and (to some extent) app developer, in my experience Flatpaks have solved the problem of library fragmentation and app distribution on Linux.
While my experience with Snaps was mostly negative (due to bugs, one virtual "loop disk" per app dragging down system performance, etc.), I found that Flatpak finally lets me install essentially any app on any distro.
For example, the Elementary OS "AppCenter" apps are now available on any other distribution, thanks to the Flatpak remote. Testing daily builds of GNOME apps has become as easy as installing their Flatpaks and letting them update automatically.
Regarding memory, storage and power consumption:
- The base runtimes (which are the heaviest bit) are downloaded only once per system, e.g., one for the KDE Plasma ecosystem, one for Elementary, and so on. The first app you install will pull them in, so I do not see it as much more bloated than the enormous bundle of system-wide dependencies you would otherwise pull in (e.g. when installing a KDE app in a primarily GNOME environment).
- Memory and CPU-wise, Flatpaks are very light containers (no loop disks or anything else required) that should have almost no overhead. I never witnessed any loss of performance, at least.
- Bug and crash-wise, I experienced a tremendously stable and "flat" experience on Flatpaks. That is, if there is a bug in one app on one distribution, the bug will exist on all, or vice versa. This is not as common as it should be in containerized app install tools, and makes debugging overall much easier.
The only drawback is that updates take longer than on other platforms, probably because compressed deltas are not yet available, unlike in other major package managers.
With this, I don't want to describe Flatpak as a panacea, but at least for GUI apps it solved a lot of distribution fragmentation issues in my case.
> Memory and CPU-wise, Flatpaks are very light containers
Adding to that: it's really worth noting that in "Memory Usage, Startup Time" the author of this article went straight to Snap ("the slowest of all") and doesn't mention Flatpak once. Could it be … it isn't a big deal? Nah, let's move on, Flatpak bad.
They also conveniently use an old version of GNOME Software which predates the recent rework of how it displays permissions, age ratings, and other details. In Fedora 35, GIMP most definitely has a worrisome "Unsafe" message. It's very shiny and new, so missing it would be excusable if the article weren't a hatchet job pretending to be well-researched.
I also can't replicate the claim that snap apps take equally long to start every single time. Snaps generally take a few seconds on the first startup, but after that they start more or less as fast as any other app for me.
Yep, as I understand it the Snap thing is also obsolete information. Older snaps are still slow, but new snaps are more efficient to start, and that has been the case for a while now.
There are plenty of drawbacks:
- Security: you are trusting random developers on the Internet to handle security of all the dependencies in a flatpak, forever. Most do not provide security updates at all.
- Privacy: distros like Debian spot and patch out trackers, telemetry and similar things.
- Long term maintainability: your desktop becomes a random mashup of applications with increasing complexity that you have to maintain yourself.
- Licensing issues: distros review licenses (and find plenty of copyright violations while doing so). A flatpak puts you or your company at risk.
- Impact on the ecosystem: the more users switch to opaque blobs, the less testing and review is done for proper packages. The whole ecosystem becomes more vulnerable to supply chain attacks.
So basically all the same drawbacks as in Windows, when you install third-party software?
Of course, it's basically the same distribution model. Everything but the kernel and a very basic set of runtime libraries are shipped with the application. Except, I guess, that the basic Windows runtime libraries cover more security-relevant functionality than the same with Flatpak. I'm thinking of SSL, for example.
When I had to install my first AppImage I got a horrible déjà vu from my Windows XP days. The Linux repo experience is just better. Trust the repo via gpg and https instead of trusting every download url and scanning every blob for malware. Upgrade the whole system at once, instead of hunting down every new blob (and remembering where you got it).
Haven't used Windows in a decade, but third party apps on Windows don't even have the framework for some sort of package management, do they? At least with snap/flatpak/AUR/PPA you get an aggregated updating interface, and optionally sandboxing for snaps and flatpaks.
I think something like AppImages is the better-fitting equivalent to Windows' normalized madness, no?
> - Security: you are trusting random developers on the Internet to handle security of all the dependencies in a flatpak, forever. Most do not provide security updates at all.
As opposed to what? Dynamic linking does not solve this problem, contrary to popular belief.
>- Long term maintainability: your desktop becomes a random mashup of applications with increasing complexity that you have to maintain yourself.
Again, as opposed to what?
>- Licensing issues: distros review licenses (and find plenty of copyright violations while doing so). A flatpak puts you or your company at risk.
Sounds like a good way for proprietary software to be distributed...
>- Impact on the ecosystem: the more users switch to opaque blobs the less testing and review is done for proper packages. The whole ecosystem become more vulnerable to supply chain attacks.
I don't see how one follows the other.
>> - Security: you are trusting random developers on the Internet to handle security of all the dependencies in a flatpak, forever. Most do not provide security updates at all.
> As opposed to what?
Opposed to distributions providing targeted updates to each package.
> Dynamic linking does not solve this problem, contrary to popular belief.
On the contrary, it provably works very well for distributions.
>>- Long term maintainability: your desktop becomes a random mashup of applications with increasing complexity that you have to maintain yourself.
>Again, as opposed to what?
Opposed to distributions that do the huge work of packaging, testing, backporting etc.
>>- Licensing issues: distros review licenses (and find plenty of copyright violations while doing so). A flatpak puts you or your company at risk.
>Sounds like a good way for a proprietary software to be distributed...
This is unrelated to the lack of licensing review.
>>- Impact on the ecosystem: the more users switch to opaque blobs the less testing and review is done for proper packages. The whole ecosystem become more vulnerable to supply chain attacks.
>I don't see how one follows the other.
Supply chain attacks are strongly mitigated by maintainers doing vetting and packaging, and by distribution doing release freezing.
> On the contrary it provably works very well for distributions.
If it worked very well there wouldn't be a need for flatpak / snap / appimage / etc. How do I install an older or newer inkscape / ardour / krita in the latest debian stable / ubuntu / fedora / whatever without recompiling stuff?
The sentence you're answering was about security. I don't think "fixing vulnerabilities faster" is something Flatpaks were created to solve. It was more a solution to dependency hell, one that makes vulnerabilities harder to fix and audit.
I see the same issue with Docker services I deploy: how am I supposed to check that the latest Nginx security update is included in all my docker-compose deployments? With shared libraries, it's as simple as "apt-get upgrade" then restarting the services. In Docker, I'd have to check that the latest versions of all images include the correct versions of all dependencies.
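The best I've managed is something like this (a rough sketch, assuming Debian-based images; swap in whatever package you're worried about):

    # ask every running compose service which nginx package it carries
    for c in $(docker compose ps -q); do
        docker exec "$c" dpkg-query -W nginx 2>/dev/null
    done

And even that only covers distro-packaged copies; anything vendored into the image is invisible to it.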
> If it worked very well there wouldn't be a need for flatpak / snap / appimage / etc.
Exactly: There isn't.
> How do I install an older or newer inkscape / ardour / krita in the latest debian stable / ubuntu / fedora / whatever without recompiling stuff?
You don't. Or if you have to, do it on Gobo (or perhaps Guix or NixOS?).
That's a ridiculous answer. "You don't" means people go back to Windows or macOS, where they can do that easily, because it's an actual need people have. That's the worst outcome by far, as it leads to less support for Linux.
No, "you don't" is a valid answer. You use what is in the distribution. You chose the combination of archives (stable, stable+backports, and so on...) as you need.
Then you use what the distro provides.
(The same way as you use the default, tested and guaranteed engine computer on your car rather than installing one bought in a dark alley.)
Like the car, distributions work when you use the whole set of components bundled and tested together.
If upstreams make it difficult to package new versions, you can help by asking them to step up their game, or switch to better software.
If you want to build a Frankenstein bundle of software you are free to do so but you'll be on your own when things start to break.
This is the opposite of freedom. People own their computers; it's up to them how they are used, not some gatekeeping devs.
This is absurd. You are confusing Linux distributions with DRM.
Did you not read the last line? You are free to make whatever change you want.
You are also free to use a distribution as it is without having to spend your time handling security, stability and compatibility across applications all by yourself.
Yet you cannot mash together a frankenstein OS while also expecting it to provide security and stability. Not because of a distribution decision but because of reality.
> Opposed to distributions providing targeted updates to each package.
Except distributions can manage their own flatpak repo. Using flatpaks as a format != using Flathub.
This is what Fedora does, for example.
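If memory serves, pointing a machine at Fedora's registry instead of (or alongside) Flathub is a one-liner; a sketch, as per Fedora's docs:

    # add Fedora's own flatpak remote, an OCI registry it controls
    flatpak remote-add --if-not-exists fedora oci+https://registry.fedoraproject.org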
Debian could provide their packages as flatpaks too. Not to mention that container images are still mostly built from regular packages, aren't they? (So they are not as opaque, nor do the regular packages go untested.)
> Debian could provide their packages as flatpaks too.
That would provide no benefits.
> Not to mention that container images are still mostly built by using regular packages
Yes, and because of the bundling they are a security disaster, as proven by many researchers:
e.g. https://www.infoq.com/news/2020/12/dockerhub-image-vulnerabi... https://www.darkreading.com/threat-intelligence/malicious-or...
> So they are not as opaque
Ok, say a wild CVE appears and it says libblip versions 1.3 through 2.0.4 have a severity-10 RCE vulnerability and should be patched ASAP. If the container images and flatpaks are not opaque, I presume you have a simple command that lists all vulnerable library versions that are present on all the machines you administer? And, subsequently, that you have an easy procedure to patch them all?
If Debian (or whatever org/group/project/initiative provides the images) has a security policy, they can extend that to the images too.
Users don't run CVE checkers [0], at best they reluctantly click on the update button. Of course the authoritarian evergreen auto-update thing is what actually works in practice.
For example, as much as snap's UX sucks, it does auto-update by default.
[0] Though they could, as files in container images are trivially accessible; after all, that's their purpose. Plus there are metadata-based approaches: https://github.com/TingPing/flatpak-cve-checker (and the Flatpak project already spends some energy on ensuring that the base image is checked against CVEs: https://gitlab.com/freedesktop-sdk/freedesktop-sdk/-/jobs/18... ). Of course, duplicating this effort and building a parallel world alongside packages is not ideal.
No, because that's not the model of Flatpak. If that library is in a runtime, then that runtime is patched and updated. If that library is part of the application, then the app is patched and updated.
Just pretend that all Flatpak apps were statically linked. Would you have the same complaints? Or would you say that version $x of this app is also affected by the CVE?
In one situation you run apt upgrade, in the other you run flatpak update.
> Just pretend that all Flatpak apps were statically linked. Would you have the same complaints?
Yes. The work required to backport and test security fixes in any library or any other component in every stable release train of a flatpak (or a fat binary) is simply not being done.
Most upstreams barely handle vulnerabilities affecting HEAD.
Worth noting that nothing about Flatpak necessitates that distro maintainers can't compile their own Flatpak apps the same way they currently do. Debian can still compile Firefox from source and patch out trackers and update dependencies, even if at the end of the day it bundles the changes back into another Flatpak app.
In fact, the lack of interdependence between apps makes this considerably easier, because a Debian upstream that sits in the middle and custom-compiles its "official" software can release security updates immediately for critical apps with critical security flaws -- without waiting to make sure that the security fixes don't break some other app.
Flatpak does not require you to have a separate upstream for every app, or to get your updates straight from the developer. Debian can still be a middleperson and they can still do all of the same moderation/customization/legal analysis.
----
Very importantly, on security, Flatpak is a paradigm shift away from the existing model of needing complete trust for the entire app:
> Security: you are trusting random developers on the Internet to handle security of all the dependencies in a flatpak, forever. Most do not provide security updates at all.
A big part of this is that you don't trust random developers with Flatpak, or even your own distro maintainers. Most applications do not need the level of access they have by default. The end goal of Flatpak (it is debatable whether it is achieving this, but the goal at least) is to significantly shrink the attack surface of your app.
If your calculator app doesn't have file access, or network access, or process forking, or any of these things because the runtime blocks them (and honestly, why on earth would it ever need those things), then it is a lot less dangerous for your dependencies to go out of date. A calculator should not need to worry about updating its dependencies, because it should not have access to anything that could be abused.
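(And you can already inspect and tighten this per app today; a sketch, using GNOME's calculator as the stand-in:)

    # show what the app is allowed to touch
    flatpak info --show-permissions org.gnome.Calculator
    # strip network access for this one app, as a user-level override
    flatpak override --user --unshare=network org.gnome.Calculator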
Now, that's an extreme example. Many apps will not be in the position of needing no privileges, but many of them will still be able to have their attack surfaces shrunk. Firefox for example really shouldn't have that much access to my filesystem. Apps should not (in general) share parts of the filesystem except in user-defined contexts. Many of them don't need Internet access at all.
Flatpak makes it easier for dependencies to go out of date, but it also (ideally) drastically reduces the number of potential security flaws you can have in the first place, and drastically reduces the ability of apps to exfiltrate data from other apps (again, this is the ideal, see X11 vulnerabilities + Wayland for how much of an ongoing process fixing Linux security is).
----
I would question a few things:
- Is the reduction in attack surface big enough to balance out the extra attention users/devs need to pay to updating dependencies?
- How many security vulnerabilities are due to dependency issues vs data exfiltration from the home folder, or from other apps that they have no need to access? Linux security is kind of a disaster in this area, how many vulnerabilities would we fix immediately just by sandboxing filesystem access so /home wasn't a free for all?
- Is this actually blocking maintainers from releasing security patches or making it harder for them to do so? I would argue no, I think the maintainers can do the exact same things they're doing today, and that their job may be even easier when they don't need to hold up an entire house of cards with every release.
- Is it better to trust Debian to try and patch out every tracker/telemetry, or is there an improvement to having apps that don't require Internet access just be flat-out unable to call home or send telemetry in the first place? I don't think this blocks maintainers from doing their jobs, and it means I just don't have to worry about trackers at all in my calculator app, even if I download it from a 3rd-party source.
----
The weak point here is other Linux vulnerabilities (gotta get off X11!), UX (an eternally hard problem to solve), and Flatpak immaturity/mistakes (I don't like that manifests are still a thing, portals are still being built, I think permissions could be better). But the fundamental concept here isn't bad. People rely on shared libraries/runtimes for a lot of things that they don't need shared runtimes to get.
And I can't stress enough: distros can still moderate and compile custom versions of Flatpak apps.
> Debian can still compile Firefox from source and patch out trackers and update dependencies, even if at the end of the day it bundles the changes back into another Flatpak app.
Does that mean I'll download one copy of libThatWasUpdated for every app that uses it?
Seems like this is exactly how iOS and Android work. I see the appeal (they are definitely more secure against malicious apps than Linux with apt-get), but the waste of space still bothers me. Docker has the same issue, and didn't really solve it with its layered filesystem (you always need several dependencies, but you can't merge images, so you'll end up installing a new copy of some of them).
These packages also make it hard to audit the versions of security-critical code installed on my system. It's good that they can be updated separately by the OS, but at least when apt-get was upgrading shared libraries, I could be sure that all programs had the fix. Now I'll need to go dive into each package to know if it should be allowed to run, or if I should wait for the upgrade.
> These packages also make it hard to audit the versions of security-critical code installed on my system.
I think someone else has mentioned this somewhere, but Flatpak exposes information about the runtime it's using -- you can use metadata tools to iterate over the packages installed and check to see if any of them are outdated. I suppose the tooling in that area could be improved, but I suspect that's something that would happen in parallel with more maintainers getting involved in validating/moderating packages. I could easily see a world where you specify a set of minimum runtime versions (or pin an exact runtime version) and set up some rules about what packages you'll allow to run.
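Even the current CLI gets you a rough inventory; a sketch (column names as per flatpak list's documentation):

    # what's installed, at which version, from which remote
    flatpak list --columns=application,version,runtime,origin
    # which installed refs have updates pending upstream
    flatpak remote-ls --updates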
Importantly, if you do have a critical vulnerability, you can make a list of the packages in your system that would be affected, and if you're willing to re-bundle them you can update the runtime for just them, which means if you're a maintainer you can push that update out immediately for those programs to everyone; you don't have to wait until somebody has finished checking to see if the update breaks a calculator app that doesn't have Internet or file access anyway.
I don't think there's anything about Flatpak that blocks this kind of auditing (and the core technology may end up making it easier in the long run). So I do get the criticism, but it seems like the criticism here is more, "we need some more focus given to better userland tools" and not "this is the wrong technical/architectural direction to go."
----
> at least when apt-get was upgrading shared libraries, I could be sure that all programs had the fix
Ironically, this is kind of the same behavior that the linked article calls out. You really couldn't be sure about that back then. Plenty of packages embed critical libraries like libsodium and don't use the shared versions. It is very common advice in the Linux world (particularly with games) to embed as many system libraries as reasonably possible, and regardless of whether or not that's good advice, developers do it.
So right now, you're trusting package maintainers to catch that stuff and make sure that the dependencies are updated across the board (or converted into calls to shared libraries) even if the source code isn't using shared libraries by default. And after Flatpak, you'll still to some degree be trusting maintainers to do that, and you'll still be trusting maintainers to say that "updating libsodium" means updating every package that depends on it. The difference with Flatpak is that this is now very explicit and obvious to you, which may be a good thing because (in the article's words) it seems that the single shared runtime model "leads people to place more trust than they should" in their ability to ensure that applications are actually using system libraries.
----
> but the waste of space still bothers me
I don't have too much to say here, it's a legit worry.
I think this mostly comes down to how much space will actually be wasted in practice. Docker seems to be relatively inefficient about this, I'm not sure if Flatpak is better or not. But yeah, it would be good to minimize size as much as possible, I do understand that concern.
> they are definitely more secure against malicious apps than Linux with apt-get
Not at all.
Unless you're running something like Firejail on your system, yes they are.
There are a metric ton of holes to plug in the Linux security model, Linux security is a dumpster fire. But... run Wayland, use Flatpak (and check your permissions), check file access/network access -- these are steps in the right direction, they are security improvements over apt-get. We know that isolation/sandboxing (and in particular filesystem isolation) is going to be an important part of security moving forward.
And if you're comparing this to Android/iOS, it's not even a contest, Android/iOS's model of app isolation is unambiguously more secure than an apt-get free-for-all; and repositories like FDroid allow you to set upstream maintainers the same way that you would in desktop Linux environments.
If you compare this to the web, it becomes even more obvious that resource/runtime isolation between sites improves security. Increasingly we're looking at not even sharing browser caches between domains, which is even more strict than Flatpak's shared runtimes. Isolation is a failsafe -- it means if something breaks or gets infected, it can't affect everything. It means that remote code injection into a properly sandboxed app can only compromise one app, and not overwrite your bashrc and insert random keyloggers into everything you do.
Improvements still to be made, I would not today assume that Flatpak is secure or battle-tested, and it definitely can't do much about X11. But we fix these things piece by piece, and the basic idea itself is still sound.
> Unless you're running something like Firejail on your system, yes they are.
Which I am, but this is beside the point.
> There are a metric ton of holes to plug in the Linux security model, Linux security is a dumpster fire.
Please don't spread FUD. Perfection does not exist in this world but Linux is used in most security critical environments in the world, from banks to military applications to security devices like firewalls.
> But... run Wayland, use Flatpak (and check your permissions), check file access/network access -- these are steps in the right direction, they are security improvements over apt-get.
Sandboxing and installation tools like apt-get are completely orthogonal topics. Flatpak blurs the line and people get confused.
Furthermore, sandboxing can be effective only if the whole UX is designed and tested centrally.
Applications need to access files in ways that are compatible with each other and useful for the end users.
Policies need to be written and reviewed by a trusted organization.
You can't rely on random upstream authors to configure sandboxes for different applications independently of each other.
Sample the sandbox settings on some flatpak applications and you'll find they range from too strict to completely lax.
> Android/iOS's model of app isolation is unambiguously more secure than an apt-get free-for-all
And yet the phones are full of spyware/telemetry antifeatures.
> I would not today assume that Flatpak is secure or battle-tested
It's not.
> Please don't spread FUD.
Oh, this isn't FUD. The fact that Linux is used in secure environments does not mean that Linux's default security model for user applications is good -- it's not; this is a well-recognized problem in the security industry and it's been discussed to death already. I say this as a person who runs Linux on literally every single computer that I own, as someone who literally does not have a Windows install on my entire network outside of a VM.
I'm not discouraging people from running Linux, quite the opposite. Even though Linux's security model is a dumpster fire, it's still currently better for most users to run Linux. And I don't just run Linux, I main Linux on practically everything I own, I'm desperately hoping that within the next few years I'll be running Arch on my phone. But that doesn't change anything about Linux security, we all know that X11 is insecure. We all know that the /home directory paradigm most distros use is a disaster. This stuff isn't FUD, it's well-established truth, and at this point anyone who's trying to claim that sharing /home access with every app is acceptable security design has their head stuck in the sand.
And frankly, part of the reason why Linux has security problems is because of these ridiculous articles that suggest that good security is a binary all-or-nothing approach to trusting applications, rather than a partial trust model where applications are granted the least number of privileges possible for them to run, and where we build security in depth by progressively layering restrictions on top of applications.
----
> sandboxing and installation tools like apt-get are completely orthogonal topics.
On some level I agree, which is why it's frustrating to me that people treat bundling/sandboxing tools like Flatpak as if they're somehow a paradigm shift in maintainer roles that are disrupting app distribution, when they're not.
But on another level, they are related -- you're here arguing that upstream is better equipped to handle moderation and patching than users are, and more likely to fix vulnerabilities than individual software developers. From that perspective, of course those maintainers should have mechanisms to set sandboxing rules for the software they include in their distros. It makes perfect sense to have ways for upstream packagers to set default sandboxing rules.
For better or worse, installation in Linux requires placing files on the filesystem. Because of how tools like OverlayFS work, it becomes important to care about where those files get placed. You actually mention this yourself later on when you bring up that "sandboxing can only be effective if the whole UX is designed and tested centrally." There are a lot of features in application sandboxing/management (file access, installation rollback, distro policies about security manifests, etc) that are much easier to build if we think about them from the beginning with installation, if "the whole UX is designed and tested centrally."
----
> [X] need [X]
Most of this is stuff that Flatpak explicitly is designed to help with, so I don't really understand what your problem is with the direction we're going. The stuff you're complaining about are problems with the existing Linux security model where people try to use crud like separate users to control file access, you should be happy that the Linux community is trying to address these problems.
Flatpak portals are an attempt to get rid of many of the manifest controls. It's designed to integrate with central UXes like Gnome/KDE (ironically, the original article somehow brings this up as a criticism, that Flatpaks aren't self-contained enough, even though it makes no sense to have user-facing controls be managed by Flatpak itself rather than the Desktop Environment).
And "you can't rely on random upstream authors" is exactly what I'm talking about with the middleperson maintainers like Debian or users themselves having control over what the manifest files are, and is (despite the article's objections) exactly why it's so important that Flatpak permissions not require code changes from the applications being bundled[0].
There's nothing that prevents Debian maintainers from building their own Flatpak manifests. See above about correlating upstream/distro management and a bundling/sandboxing tool, these are orthogonal concepts.
----
> And yet the phones are full of spyware/telemetries antifeatures.
So many of the problems with Linux security can be summed up with this line. Linux gets a lot of benefit from being an environment with proper developer incentives. The majority of the software is Open Source, and the majority of software is developed with both heavy community involvement and without strong incentives towards spyware/malware. There is a strong anti-commercial, even anti-Capitalist bent from a nontrivial percentage of Linux software authors and distributors, and an even stronger shared philosophy about user freedom that makes the entire ecosystem hostile to common exploitative software practices. Additionally, low consumer adoption and a high level of technical skill from the average user make it even harder for malicious programs to thrive.
All of that is orthogonal to secure architecture, it's the equivalent of moving into an upscale neighborhood with friendly neighbors and claiming that your unlocked patio door is now more secure than a window with bars on it.
There's a real (in my mind) anti-security mindset in the Linux community that stems from "if the neighborhood stays nice, we don't need to think about security." And I reject that entire philosophy; from an architectural perspective Android is unambiguously more secure than most desktop Linux distros by default. It has to be, because people actually write malware for Android, and nontechnical users use it. This is the same situation that pops up on the web: native developers for desktop complain about the Internet "enabling" spyware. But the web doesn't enable spyware; the web does a really stinking good job at mitigating privacy violations. The truth is rather that native doesn't have to mitigate spyware to the same degree, because nobody wants to embed a malicious tracker in your Open Source CLI tool, they want to embed those trackers into Farmville.
As Linux gains more mainstream adoption as a consumer-grade desktop OS (and Linux genuinely is progressively getting better and better, and even more attractive/competitive as a consumer-grade desktop OS), it will become more important for the community to care about architectural security and not just keeping the community nice.
Having a good community is not the same thing as having good security. Consumer-grade, general desktop Linux has a fantastic community, and in practice you will be less likely to encounter malware on a Linux system. But consumer-grade, general desktop Linux does not have strong security fundamentals outside of the server (or perhaps more accurately, it does have some strong security fundamentals, but requires tools like Flatpak/Firejail/Wayland to start enforcing them and to start moving concepts like OverlayFS and process isolation out of technical kernel tools and into general user-facing tools for the non-technical Gnome user).
----
> It's not.
It is reasonable to have criticisms that Flatpak as it stands today still has security holes and issues. As I mentioned above, Linux security is in general a dumpster fire. Just as one example, Flatpak is useless if you're not running Wayland. It will benefit from some of the additional sandboxing capabilities that are being built into the kernel now[1]. And we need to get stricter about actually using the file access controls built into Flatpak.
All of this is a separate criticism from the direction Flatpak is going. There is a difference between saying that Flatpak as a technology is the wrong direction for Linux, and saying that it needs better portal controls.
The latter criticism is completely accurate. The former is based on an incorrect assumption that Flatpak somehow destroys the entire Linux app distribution model and puts the entire Debian team out of a job, even though Flatpak as a technology has nothing to do with where users ultimately get their software or whether upstream maintainers can patch software and add custom security rules to that software.
----
[0]: I go into this in more detail at https://news.ycombinator.com/item?id=29321150
[1]: Incidentally, so will Firejail. There's not that much wild conceptual difference between Bubblewrap and Firejail other than UX, and a commitment from Bubblewrap never to ask for root access while creating containers.
I might be imagining it, and there might be another cause, but I stopped using Elementary after the latest release, when many things moved to Flatpaks, because I kept running out of memory. (On Endeavour/Arch, installing everything natively, I no longer have the problem.)
If I had to guess, the update might have just introduced a memory leak somewhere else in the system?
My experience was exactly the opposite: snaps usually feel more polished and take less space than "flatpaks", at least for the first flatpak you install. It's true that over time flatpaks take less space because they reuse what's already there.
However, the conclusion still holds imo: both are bad. Not necessarily because they haven't partially solved what they set out to achieve, but because they are being pushed instead of being offered as alternatives to good old package managers or binaries.
Though technically older than both, AppImage is more recent in adoption compared to Flatpak & Snap, and yet 95% of the time I had absolutely no problem using AppImage (with the small caveat that desktop integration is less polished, and I use a neat tool called AppImageLauncher to 'install' them). I know it's an apples-to-oranges comparison and AppImage is not a package manager, but the point still stands.
This validates an assumption I had made: easier for developers, harder for everyone else.
It's a pain in the ass packaging apps for each distro, but the value of a single command to update libraries and applications on millions of Linux systems is worth it, no?
I recommend just shipping everything you need with your app, either manually or using AppImage.
Flatpak is, more or less, just a bad package management... technology.
You are replying to a comment explaining to you why Flatpak actually works with a dismissive sentence implying it's just a bad technology. Do you have anything substantive justifying your opinion?
From what I have seen, most of the opposition to Flatpak comes from the same place as the opposition to systemd: fear of change. Despite being centered around technology, a very vocal part of the Linux community seems to be extremely conservative.
Is it really surprising that we're conservative? Most of what comes out of big tech right now is focused on monetisation and control and when it comes to FOSS a lot of companies are trying to get their IP on the map.
For example, Ubuntu is trying very hard to push snaps. In contrast to flatpaks, only they can run the store for it, so there's clearly a motivation of vendor lock-in.
This kind of thing makes me suspicious and more critical of new tech. I'd first want to see if it offers me any value. In this case I don't see the big benefit, even though flatpak doesn't seem to carry the lock-in that snap does. But I like the optimisation of dynamic libraries and the way I can update openssl once and have all the apps that use it patched.
Systemd is a different story. It's open enough but a bit too heavy and complex for my liking. It's not bad though and I use it on some of my boxes. Alpine is working on an alternative based on s6 and I'll probably end up using that when it matures.
Anyway I didn't choose Linux/BSD because I cared about having the same as everyone else :) Being able to deviate from the beaten track is one of the benefits of these. I currently use FreeBSD because it has the least corporate meddling right now.
I never said it doesn't work. And yes, I do have "anything substantive" against it: the fact that, as I mentioned, it is just a package manager. And a bad one.
> From what I have seen, most of the opposition to Flatpak comes from the same place that the one to systemd: fear of change.
This is what pisses me off. Neither you nor anyone else gets to tell me what >I< think or feel. It's not fear, nor fear of change, nor any other dumb-ass reason people who think like you assume it is. I hate it because it's just a package manager. I hate it because, when all the reasons against it are brought up, something nebulous like "security" or something quasi-psychological like "fear of change" gets brought up in its defense.
The real problem with flatpaks is that they don't solve the real problem, and what they do solve, they solve poorly.
Want to solve the problem of your program not running on multiple distros, or not running in 5 years? Then look at why that is. For example: it's not that the zip file format will change, so why must I recompile my program every time libzip changes? Or the X11 to Wayland transition: why does my program have to even care about that when all I want is a window and an OpenGL context? (Btw, the solution to the latter is SDL2.)
Let's look back at when flatpak started. Why did it start? Maybe because GTK3 changed its API every so often?
Linux doesn't have a good GUI toolkit. THAT is the biggest problem here.
I just fucking hate the "ohh, you just don't like change" people. That dismisses all further discussion. That is the real "hate" that people like you blame others of.
> The fact that, as i mentioned, it is just a package manager. And a bad one.
If you don’t mind me asking again, why is it bad? Because the rest of your post certainly doesn’t answer that question.
I understood that you don’t personally like that libraries change. Unfortunately, they most definitely do and no, preventing everyone on earth from ever releasing an incompatible library is not a realistic solution.
> I just fucking hate the "ohh, you just don't like change" people.
Well, you definitely are complaining about change a lot in a very impassioned way.
>> I just fucking hate the "ohh, you just don't like change" people.
> Well, you definitely are complaining about change a lot in a very impassioned way.
They are complaining about people who assumed an irrational reason for their opposition to something, while they have a rational one. I find that kind of assumption condescending ("let me, the rational one, explain to you how to get rid of your insecurities so you can appreciate how right I am") and kind of infuriating too.
> I understood that you don’t personally like that libraries change.
If you add breaking changes without maintaining the backward-compatible version for a reasonable amount of time, I'd argue your library isn't fit to be a dependency for anything important. I definitely wouldn't use it.
The Python package ecosystem may be a mess, but I still manage to use dozens of shared packages daily with quite rare breaking changes, even in libraries that see a lot of developments. I would prefer that we focus on improving API stability for shared libraries, instead of giving up and duplicating dependencies everywhere.
> Despite being centered around technology, a very vocal part of the Linux community seems to be extremely conservative.
A working system calls for a conservative viewpoint. I'm only looking to introduce change if my needs aren't being met.
My understanding is that, by volume, most people using Linux "in anger" (whose needs mostly dictate the direction of Linux development) aren't the SREs maintaining living systems for years at a time; rather, they're the DevOps people with constant streams of greenfield projects, involving half-baked third-party components, that in turn need all sorts of random bleeding-edge dependencies.
> Despite being centered around technology, a very vocal part of the Linux community seems to be extremely conservative.
A lot of the vocal people who pick linux are people who want complete control over their systems. Things like wayland, systemd, and flatpak take away some of that control.
How so? I feel like this is just unfamiliarity / resistance to learning. It's not like these new tools prevent you from accessing their internals. The internals are all still "right there." They just have internals that have been engineered for efficiency over composability or observability.
For example, journald. People know and understand "rotated-gzipped text .log files in a /var/log directory." People don't know, but more importantly, don't want to learn, this: https://systemd.io/JOURNAL_FILE_FORMAT/. It's a simple format, that makes perfect engineering sense as a format optimized for the goals it's trying to accomplish. But it's not text — not the thing a greybeard sysadmin has spent the last 40 years working with — so it's "hard."
Instead of just learning this format and writing tools to interface with it, people instead think of this data as being impossible to access, somehow "locked away" behind journald / journalctl, as if there were some kind of DRM put into your OS to prevent you from accessing your own log data.
As it happens, though, journalctl already builds in support for manipulating this data with POSIXy text-based tools — getting your journal as JSON is one `journalctl -o json` away.
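For instance, a sketch (assuming jq is installed):

    # the last hour of the journal as JSON, filtered with ordinary pipes
    journalctl -o json --since "-1h" \
      | jq -r 'select(._SYSTEMD_UNIT == "sshd.service") | .MESSAGE'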
But nobody knows about these features (I only just learned about it now, from the link above!), because nobody talks about these features, because the tooling already solves 99% of the real-world problems people encounter, and so it's very rare to actually need to script against the journal.
And if you are in that 1% use-case, maybe you're e.g. engineering a distributed log-ingest system for efficiency; in which case you wouldn't use `journalctl -o export` anyway, but would rather link to journald's C API to parse the journal directly, because that's what's most efficient, even if it's less convenient/UNIXy.
-----
This is similar to e.g. how people pushed back against SPDY/HTTP2.0 for being a "less debuggable" protocol, just because it's binary rather than text-based.
Of course, it's extremely rare that anyone needs to actually debug the transport, rather than debugging their own application-layer protocol they've built on the transport. And because every server and client that speaks HTTP2 still also speaks HTTP1, every application-layer protocol flowing over these links can just be debugged using HTTP1 messages.
But even if you have that 1% use-case where you need to debug the transport, there are simple POSIX-pipeline-y tools to bijectively map the binary wire representation to a text-based one.
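(nghttp, from the nghttp2 project, is one example of such a tool, if I remember right:)

    # print the HTTP/2 frames of a request/response exchange as text
    nghttp -v https://example.com/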
But again, when you hit that 1% of use-cases, you're probably a professional doing something weird, such that you probably want to pull out WireShark to analyze the complete nested app-in-HTTP2-in-TLS-in-TCP-in-IP-in-Ethernet packet capture in its original binary form.
And so, even though those bijective HTTP2-to-HTTP1 observability tools do exist, nobody thinks they do, because nobody talks about them, because everybody with the 1% use-case is likely solving a large-scope problem for which the simple UNIXy solution is "underpowered."
Yup. It's that the formats are less transparent and less familiar. Binary data doesn't play well with the traditional unix tools.
And a lot of the complaints about both are very generalized. "Flatpak is bad package management technology" tells you very little about what is wrong with it.
Systemd was such an improvement over the existing spaghetti for folks selling / supporting Linux that it was a pretty clear takeoff.
Flatpak seems a bit more focused than snaps. The usability issues with snaps are kind of surprising given Ubuntu has usually had a good user focus. One thing: Fedora has their Silverblue / ostree-type distribution initiative, which may reduce their use case for things like flatpak for printer subsystems etc. (snaps seem more flexible). I moved off the Linux desktop a year ago though, so I'm not at all current, unfortunately.
Nah, AppImage still won’t run on all distros.
The only thing I think actually solves packaging is nix.
Hm, I've had some problems with nix and desktop apps that need OpenGL or similar machine-specific libraries. There are some hacks, and I also think nix comes very close to a good solution, but it's not 100% there yet.
That's mostly due to video drivers deliberately not being under nix's control (if packages depended on them as well, instead of on a stub, there would be exponentially more builds, one set per driver). On NixOS this is solved by a specifically-placed stub, while it will be distro-specific elsewhere, so it may not work on a random distro (that's where "hacks" come into the picture: choosing a distro-specific lib as the stub on non-NixOS systems, and it will work just fine).
And AppImage integrates poorly with desktops.
How so? Double-clicking an AppImage from a specific folder is not user-hostile (that's a common pattern across desktop systems), and the blogpost specifically points to appimaged and AppImageLauncher as solutions for full desktop integration.
Desktops integrate poorly with desktops.
What do you mean? When I think of "...integrate poorly with desktops", I think of Electron apps. These also have a Flatpak-like distribution (but not security) model. And a "best effort" integration model. Basically, whatever is possible without lifting too much of a finger.
When I think of "desktop integration", I think about drag and drop not working between `ark` (a Qt program) and `thunar` (a GTK program), and I think of folk writing blog posts about window theming. (There's also notifications, icons, etc., blablabla, which keep getting worse (freedesktop does, that is).)
To be honest, I don't care much about how programs look. But I do know many people care.
Nix seems to be the only approach that has a chance, but having good fundamentals isn’t enough. The effort to improve the UX and usability is underway. I’m working on it; are there use-cases or “blow your socks off” demos that would be compelling to help convince developers and application writers?
My experience with AppImages is fine, but I prefer Flatpaks because they can be updated with a remote.
My installs of Signal and Firefox are with Flatpaks and GNOME Software transparently handles updating both.
I often avoid Flatpaks because the permissions are so frequently wrong or not what I need.
For example, you mentioned Signal. I stopped using the Flatpak because of this: https://github.com/flathub/org.signal.Signal/issues/181
It looks like that problem is fixed with Flatpak override and not installing the Flatpak with root?
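(If the missing piece is, say, access to the Downloads folder, the override is a one-liner; a sketch, the exact permission depending on the issue:)

    # grant the Signal flatpak access to the XDG download directory
    flatpak override --user --filesystem=xdg-download org.signal.Signal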
This is also probably because the Signal Flatpak is community developed on Flathub from Signal's .deb, they're not building Signal with Flatpak in mind. Mozilla provides an official Flatpak release of Firefox.
I'm sure official Flatpaks do better on average, but that narrows the selection a lot. And most third-party packagers raise my suspicions in a way that, say, the Debian packaging team does not. You should see my eyebrows go up when I cruise dockerhub!
I don't know what the right answer here is, but for me it's not Flatpaks or Snaps. Trusting the packager (and the org behind the package system) is a huge part of it. I breathe a sigh of relief every time I see the software I need is in Debian's repos, even if that means they may not be optimally sandboxed.
FWIW, PhotoStructure has an AppImage edition and it can upgrade automatically in-place.
(There's also a docker image, a macOS DMG, a Windows installer, and other editions as well--and yes, you can configure any edition to _not_ upgrade automatically if you prefer).
I'm less knowledgeable about AppImages, but your download link implies it's for Ubuntu only? I'm a Fedora user.
It _should_ work just fine. I have several Fedora users (because they emailed me, not because of any analytics: nothing "phones home" with usage, but I do use Sentry for error reporting, and that can be disabled).
If you see any errors, please email me (support@photostructure), post to the forum, or ping me on discord (links to those are in the footer of every page on photostructure.com).
My experience with Flatpak is okay: the payload from runtimes is okay and it works. What worries me is the sheer number of files within /var/lib/flatpak; for 23 packages (apps and all dependencies) it is about half a million files. This is a problem: moving /usr now takes a lot of time. I think Flatpak works well for checking out new applications and for closed-source binary applications. You don't want many closed-source applications within the official package management and the core system, and it also spares their publishers from creating distribution-specific packages.
Good things always require hard work; there is no magic solution. I appreciate the API and ABI stability and the slick package management of Linux distributions. The work of developers and package maintainers makes this possible. This is the reason why Linux is efficient and comfortable. It is not magic but work, and it makes efficient distributions possible.
The grass is green on the other side?
On Windows you have to install all automatic updates or new applications will not run. If you don't, you will face errors about missing C# runtimes or a C++ Redistributable. And application developers need to ship dependencies within the application. This is the reason why Windows is so fat, why applications are fat, and why a permission system is not possible there. The article doesn't state this.
macOS on the other hand? They break compatibility every few years: Mac OS Classic to Mac OS X, Quartz, PPC, Intel and finally M1.
Android and iOS had the luck of being new. Permissions work, provided apps don't request all of them. But still, iOS fails to provide file-system access, file handling is a burden for users, and backups are impossible. Apps have quickly grown fat. Garmin Connect, an app which cannot do anything without the cloud: 340 MB. Do yourself a favor and check what your banking app requires; you won't get away with less than 200 MB, and only the good ones can show you your balance locally -- which requires how many kilobytes to store? Messengers like Signal are also growing and growing; Signal is right around 170 MB.
Back to Linux:
I can only appreciate the path to system-wide ABI/API stability on Linux: with systemd, glibc and libstdc++ valuing stability, and the recent changes around Gtk3 and Gtk4, the situation may become even better. I think Flatpak can be improved, and it should be improved. But we don't need another competing standard before we have tried hard to improve the existing solution.
I hope that one day Canonical will start collaborating with the community and especially Red Hat; they always implement a less favorable solution (Mir, Unity, Snap, Upstart...), lose, and harm Linux in the process. Collaborate on Flatpak, add a payment solution and share the revenue with Red Hat and others?
Nah they take up too much disk space. Rather use Gentoo
Not sure if you misunderstand what Flatpak is, or if you are really suggesting to ship an entire distribution for your users to install if they want to use the application you're delivering to them.
A typical system will end up with 10 versions of some .so library on disk. Sometimes more. Disk usage waste everywhere. You're better off having a package manager that can link multiple versions at the same time to the programs that need it instead of bundling.
But you're confusing which side of the developer/user aisle you're on.
Yes, there is probably a better distribution for users who want to save disk space. But Flatpak is not a distribution, it's a delivery mechanism. Are you really gonna tell people "Hey, don't use $YOUR-CURRENT-DIST, use Gentoo instead if you want to use my application" if your goal is to provide value for as many users as possible?
It's simply not possible to ask people to change their distribution because you as a developer prefer a different one. Flatpak is for all distributions, and doesn't require anyone to change from what they already know and use; it works for everyone equally.
It's been tested, and almost all library packages are used by only one other package on the system. And almost all of the sharing comes down to a very limited set of libraries.
Flatpak solves this with platforms. You can base a package on the gnome platform for example and get a whole bunch of the most used libraries.
But even without platforms, disk space is one of the most abundant resources right now. You could install millions of libraries in the space of a single AAA game.
IIRC Flatpak actually deduplicates all platform runtimes and apps at the file level automatically, because the backing storage is an OSTree repository. The same thing also gives Flatpak efficient diff updates for apps and platform runtimes.
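You can eyeball the effect yourself, since the dedup is done with hardlinks into the ostree object store; a rough check:

    # cost if every hardlink were a separate copy...
    du -sh --count-links /var/lib/flatpak
    # ...versus what the store actually occupies
    du -sh /var/lib/flatpak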
> it’s been tested and almost all library packages are only used by one other package on the system.
I have a KDE desktop; are you trying to imply that only one of the dozens of applications provided by it actually uses Qt? That only one of the dozens of image-related programs uses libpng, libtiff, libjpeg, etc.? Or are you just citing a highly misleading statistic?
GTK, Qt and the core C stuff make up almost all of the shared-library usage. Some obscure png-cropping library will basically never end up shared, even with dynamic linking. Flatpak does a great job of sharing the 1% that makes up 99% of usage, while allowing version locking and bundling of that 1 MB binary used by only one program.
Qt in your case is one of the exceptions, just like GTK for Gnome users.
Any application that depends on either of those indirectly depends on libpng and dozens of others. It's a dependency tree; just cutting it off after the first dependency does not make sense in any context, and even less so in a context that claims to measure package reuse.
The KDE and GNOME platforms contain all of those indirect packages. They contain basically everything generic that you need to build a GUI app. Then when you go off in to the specific little tools you need, you bundle those in.
Exactly. The situation is the same on Windows and macOS: the platform contains the vast majority of dependencies that might be shared across apps. Beyond that, apps bundle whatever they specifically need.
Oh, I thought the claim was that there was zero package reuse with Linux package managers, not that there was zero package reuse with Flatpak.
Ex-(long time)-Gentoo user here. Linux packaging is insane without resorting to something like this. The problem could be fixed, but it would require coordinating long-term-stable (like: 5 years, minimum) versions of an entire, fairly large set of basic libraries, across (at least) every distro that matters. For desktops this would probably have to extend to long-term-stable versions of X/Wayland and their related libs, plus Qt and GTK.
Is that bad because you don't have the space to store it or is it bad because you don't like it in principle?
If it's the former, then it's a non-issue. Storage is cheap and plentiful, and code takes up little space relative to other assets such as multimedia content.
If it's the latter, then I don't know what to tell you, other than that practical issues are more important than stylistic ones when making practical software. The community has voted and decided that classic package management doesn't cut it and that ease of installing software is important.
And I have to agree with them. Time and time again we've been shown that friction between users and whatever it is they want to do is the biggest barrier to adoption. It's easy for tech types like ourselves to get stuck in the weeds of the computer. But to most people it's a means to an end, and if it isn't facilitating that, they will find another.
Just use a deduplicating filesystem. Both btrfs and xfs can deduplicate identical files.
This is misleading. While btrfs and xfs have some dedup functionality, it must be invoked explicitly (e.g. by using `cp` with the `--reflink` argument). For application installation to benefit from this, the install logic would have to identify identical files up front and create such references.
There have been some experiments in btrfs to enable inband dedup on arbitrary writes, but nothing has been merged yet.
Note however that the discussion misses the fact that Flatpak itself does perform dedup by using ostree for on-disk storage. ostree implements data structures and ideas very similar to git, sans the version control.
Dedup can be run periodically, with crontab or systemd timers. Here's an example of that: https://aur.archlinux.org/packages/duperemove-service
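For anyone who wants to try it: with the duperemove tool from that package installed, something like `duperemove -rdh /var/lib/flatpak ~/.local/share/flatpak` scans recursively and submits identical extents to the kernel for deduplication (those paths are the default system-wide and per-user Flatpak locations; adjust to taste), and `cp --reflink=always big.img copy.img` creates a copy that shares storage up front. Flag details may vary by version, so check the man page first.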
Snaps can do this
I agree with all of this and... I just don't care.
I download a Flatpak from the Pop OS store and it works. It installs only in my profile, so another user on the same machine doesn't have access. You can't do that with a .deb!
I've never got into dependency hell where I need to apt-get a specific version of a library from a dodgy PPA.
If I uninstall it, Flatpak doesn't leave acres of cruft strewn around my disk.
I don't see how randomly installing a Flatpak is any worse for security than compiling the source myself. The permission model on Linux is far worse than Android - so I just hope for the best anyway.
Snaps never worked right for me - and seemed to frequently break. But all my Flatpaks have run fine and at full speed.
Does it take up more disk space? Sure. But that's a trade-off I'm willing to make in order to just download and run with no extra work.
Sure, there are some efficiency and security gains to be made. But I'm much happier with a Flatpak world than the alternative.
> You can't do that with a .deb!
Pretty sure you can by doing "dpkg --root=$HOME -i mypackage.deb" or something like that. It's been a long time since I used dpkg, but it should be possible with some flag.
Otherwise I agree, Flatpak is a breath of fresh air!
>Pretty sure you can by doing "dpkg --root=$HOME -i mypackage.deb" or something like that,
As a sibling comment already noted, using "--root" doesn't always work and a q&a mentions the problems:
https://askubuntu.com/questions/28619/how-do-i-install-an-ap...
https://serverfault.com/questions/23734/is-there-any-way-to-...
This often doesn't really work in practice on its own, at least, since packages tend to hardcode their installation directories. (You could try using an overlayfs, but at that point it's becoming pretty cumbersome.)
You can recompile a package from source to support installation into `~/.local` (see `man 7 file-hierarchy`), but the burden of supporting this will be on you, including updates, protection from malware and viruses, firewalling, and so on.
For example, Rust's cargo can compile and install applications into `~/.local`, but it's a pain to keep them up to date, so I prefer to use the same tools from my OS (Fedora) distribution repo instead.
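As an aside, if you do want cargo-installed tools under `~/.local`, something like `cargo install --root ~/.local ripgrep` should drop the binary into `~/.local/bin` (by default cargo uses `~/.cargo/bin`); ripgrep here is just an example crate, and the update burden is still on you.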
Where does that put the Deb files ?
edit: should have written "where does dpkg put the files of the package ?" or "where does that put the Deb's files ?", sorry.
The .deb file should remain where it was when you downloaded it, dpkg just installs the contents of the .deb file.
Where are the contents of the .deb files put?
--root=dir Set the root directory to directory, which sets the installation directory to «dir» and the administrative directory to «dir/usr/local/var/lib/dpkg».

Is there a point you're trying to make, or are you asking for technical information which would be better answered by man dpkg or a quick Internet search?
I am not trying to make any point; I am genuinely curious where dpkg is going to put those files, since it's the first time I've read that you can do `dpkg --root=$HOME -i mypackage.deb`.
For instance I always do `python3 -m pip install --user wormhole` or `pip install --user wormhole` instead of sudoing my way to permissions hell. Python apps get installed in $HOME/.local/bin, so no cruft for other users.
But stuff installed with deb packages often has hardcoded paths and assumptions about where and how its files are run.
So I think it's fair to ask, in reply to someone implying that `dpkg --root=$HOME -i mypackage.deb` gives the same clean behaviour as a Flatpak ("It installs only in my profile, so another user on the same machine doesn't have access. You can't do that with a .deb!"), where the files in the deb package actually go. Because if it works as implied (no pollution of the OS), I am certainly going to start testing installing things that way.
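For what it's worth, you can answer the "where do the files go" question without installing anything: `dpkg-deb -c mypackage.deb` lists the contents along with the paths they would be unpacked to, and `dpkg-deb -x mypackage.deb ~/somedir` extracts them under a prefix of your choosing (mypackage.deb being the hypothetical package from upthread). Whether the program then runs from there is another matter, given the hardcoded paths mentioned above.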
"I don't see how randomly installing a Flatpak is any worse for security than compiling the source myself."
It's worse because you're likely to install more packages using something like Flatpak than you would by downloading random binaries or building from source. That wantonness isn't justified given the current state of security on Linux.
(I'm just regurgitating OP: "Flatpak and Snap apologists claim that some security is better than nothing. This is not true. From a purely technical perspective, for apps with filesystem access the security is exactly equal to nothing. In reality it’s actually worse than nothing because it leads people to place more trust than they should in random apps they find on the internet.")
> It's worse because you're likely to install more packages using something like Flatpak than you would by downloading random binaries or building from source.
... what makes you say that? If I want to run something, it'll run one way or another, no matter how hard the "system" wants to prevent that. Even if it's patch.exe downloaded from a 2004 Russian website.
> it works
No. It doesn't. You still need to trust the people who package the thing.
A working packaging system would give the user ultimate power to manage access to resources by each app, overriding or mocking, if the user so decides, whatever access the app believes it needs. Flatpak does not give you such power; it removes this power from you and assigns it to the packagers. Thus, not only does it not work: it works against you!
EDIT: The "dependency hell" issue is separate, and is solved more easily by distributing static binaries.
> No. It doesn't. You still need to trust the people who package the thing.
Flatpak and Snap have never claimed to solve the trust issue, though. Flatpak allows you to add your own repositories, and thus developers can package their own applications. So if you trust the developer enough to run their software, you should be able to trust them to package their own app.
I don't entirely disagree with this point, but I'd like to point out that running a program as a tarball/appimage downloaded from the dev's website places entire trust in the project's infrastructure, whereas on the other side of the spectrum distro packaging relies on strong vetting from a distro's community.
Flatpak/snap is somewhere in between where on the main repos (eg. flathub.org) anyone can publish a package for anything without being affiliated with upstream. It incentivizes users to just search for the app name and download whatever comes up as a result. That's a pattern we've known to be broken for years: from Windows users downloading the first link Google suggests (usually a sponsored link bundled with spyware/adware) to Android users downloading anything the Play Store suggests (usually spyware, see how many flashlight apps there are and what permissions they require). F-Droid in the Android ecosystem strikes a balance because there is strong community vetting for all packages published, so it's like a distro-agnostic repository following the distro packaging threat model.
I believe there are ways to mitigate those issues (eg. namespace enforcement on flatpak) but I don't think downplaying them is doing any good.
You are right. But with the marketing of sandboxing and whatnot, they create the impression and illusion that it is safe. Most people install from Flathub or Snapcraft, and the assumption is that those stores vet everything and that it is safe, just like the Play Store and App Store. I am pretty sure the Flatpak folks know this. It is like... we won't lie, but we are not gonna tell the whole truth either.
To make things worse, Flathub changed the way they display the "Publisher" field for a flatpak, which says whether a package was published by the Flathub maintainers, the upstream developer, or somebody else on Flathub. Now, instead of saying who, they just show a "See details" link under the Publisher field on flathub.org. That link in turn directs me to a GitHub page, and I am still unsure who the hell uploaded that flatpak.
Before, they used to show the upstream developer's name, or say "Flathub maintainers" when the Flathub team uploaded the flatpak, which made it easy to verify who uploaded it. Now it is much more difficult. This has been the most infuriating thing about Flatpak, apart from the security issues and problems that keep coming up about Flathub every now and then. Why would you change something so crucial when it was working?
Because now, I could package a piece of software that is not on Flathub and it would just say "See details" instead of my name. This provides the illusion of trust. If it were to show my name there, more people would go "who the hell is this guy?" and do a check on me (I used to do that). But now, if I could slip a malicious flatpak through Flathub's checks, the majority of folks would still install it, because most of them are using Flatpak for convenience, not security or performance.
Want proof? Just scroll up and you will see someone saying he doesn't care even though he agrees with the points in the blog post. He just doesn't care. :shrug:
I was thinking about this the other day: a wasteful solution to the packaging problem in open source is a decentralized build system on a blockchain-like platform, with either PoW or PoS. In PoS, a node builds the code pulled from source control, multiple other nodes validate the build and its hash, and add it to the blockchain and to the repository. Now the builds are relatively trustable. Of course, you'd need to figure out an incentive structure for miners/validators to do this expensive work.
Is there any difference in trust between package maintainers and Flatpak packagers?
If anything, isn't the Flatpak situation better in that regard, because the end user is more likely to have a sandbox?
Maintainers are not developers, they are users, so the developer cannot push unwelcome changes, such as ads, trackers, trojans, backdoors, keyloggers, etc. directly to users because the maintainer will refuse to accept that.
On the other hand, maintainers can and have inserted (accidentally or not) vulnerabilities in software, and ignore developer wishes (like "please stop distributing this ancient unmaintained software without this warning that says it is ancient and unmaintained"), which reflects poorly on the developer in the mind of the user.
I personally see no upside to shoving an unpaid third party between user and developer.
> I personally see no upside to shoving an unpaid third party between user and developer.
I think F-Droid is a good example of striking a balance between those two extreme models. Their existence enforces community vetting of apps, as well as somewhat-reproducible builds thanks to their standardized build infra, which are two major wins.
I personally have much more trust in such schemes (such as guix/nix) because I don't necessarily trust all of the developers of the apps I use not to get hacked, and I believe enabling one-click updates to every user of an app without review is a dangerous pattern for security.
> On the other hand, maintainers can and have inserted (accidentally or not) vulnerabilities in software,
Such a maintainer will be kicked out of the distribution.
> and ignore developer wishes (like "please stop distributing this ancient unmaintained software without this warning that says it is ancient and unmaintained")
Developer wishes are developer wishes. User wishes are more important. If a package has a maintainer, then it IS maintained.
You can use a distribution developed by developers (do you know of any?) if you dislike maintained distributions, and share your experience with us.
> Such a maintainer will be kicked out of the distribution.
ORLY? What's Kurt Roeckx[0] up to these days? Oh right, he's the Debian Project secretary, despite famously crippling the RNG in Debian's OpenSSL.
> Developer wishes are developer wishes. User wishes are more important.
You mean like the wish to get up to date software directly from the developer without waiting for some third-party middleman to get around to updating the repo?
> You can use a distribution developed by developers (do you know of any?) if you dislike maintained distributions, and share your experience with us.
Such a beast doesn't seem to exist in the Linux world, so I just don't use Linux. Linux Desktop's abysmally low market share may or may not be related.
[0] To be fair to Kurt, he wasn't the only one who didn't see a problem removing those lines and he did ask around first. It is an understandable mistake and I don't mean to crucify him.
> Such a maintainer will be kicked out of the distribution.
Debian did this, they said oops and moved on. Packagers suck as developers, they apply patches they don't fully understand to solve problems they don't understand on codebases they don't understand.
> Is there any difference in trust between package maintainers and Flatpak packagers?
You shouldn't need to trust either. Just the sandboxing system of your OS.
This is an inherent limitation of the way OSs are built. Linux, Windows, macOS are all like this. macOS is currently the furthest ahead in this since they're sharing code with iOS, but it's still not where it should be.
The Linux kernel is not at a point of allowing this kind of fine-grained sandboxing or mocking of APIs, I'm guessing because it's a significant undertaking. I'm sure as more sandboxing features become available in the kernel, Snap and Flatpak will definitely utilise them.
Yeah, proper use of Flatpack requires antivirus, reverse firewall, hardware isolation (separate CPU core per application), user education, etc.
That's only true for the simplest of apps. The whole point of desktop OSs is that programs can integrate with each other, but that necessarily discards the notion of a sandbox almost entirely.
Yeah, e.g. file sandboxing approaches that work along the lines of "don't let the program access any files outside of its private directory except for those explicitly and lovingly hand-picked by the user" commonly ignore the existence of multi-file file formats.
> Is there any difference in trust between package maintainers and Flatpak packagers?
Yes, and a very big one: Debian maintainers need to build a reputation for years to gain upload rights, meet in person, sign keys, and the packages are peer reviewed by multiple people.
Plus, packages spend time in release freeze, being tested by a large userbase, before a distro is released.
You can override permissions (there is even a GUI called Flatseal for that). Meanwhile, you are not able to do any of that with distro package managers like apt or dnf. Ultimately you need to trust either one.
What I mean is that permissions and accesses concern the operating system, not the software packaging. I don't understand why Flatpak needs to deal with those, providing a false (and pernicious) aura of protection. I already run third-party software inside containers or virtual machines. No need for a GUI, nor a Flatpak-only solution like Flatseal. It just looks pointless.
> No. It doesn't. You still need to trust the people who package the thing.
How is this any different than sudo apt install foo?
apt is not misleadingly advertised as a sandboxing environment. Flatpak is:
"Flatpak: Linux application sandboxing and distribution framework" [0]
"It is advertised as offering a sandbox environment in which users can run application software in isolation from the rest of the system." [1]
The whole point of a sandboxing environment is that you can run applications that do not want to be sandboxed. The Flatpak proposition directly contradicts this basic requirement, in that it requires the application to be flatpaked to begin with.
Trust of the packager is still involved, no?
When I use something from a distribution, I trust the distribution as an organization. When I use something packaged by a developer, I trust the developer. I cannot verify thousands of developers, so I must trust the distribution; I can only trust a few developers or packagers outside of it.
Yes, but only for things you give access to (data files, internet)... sandboxing by default, it should not be able to do anything except consume CPU (memory needs to be limited also, which might be an issue in practice).
There is a tool called flatseal which gives you a bunch of toggles so you can turn on or off any permission for a flatpak app.
Flatseal gives users a GUI to manage the permissions of each app. Would that address your concern?
I think it's great, but we're still far from an ideal solution because it's not exactly fine-grained. For example, flatpak portals enable me to grant/block access to my home folder, but don't enable me to allowlist a specific folder in my home. So I'm stuck with the possibility that a vulnerability in the app can take over my entire system (eg. by rewriting ~/.profile), or with my app not accessing my home folder at all.
As a user, I'd like to give Krita/Libreoffice permissions for ~/Documents and Tor Browser permissions for ~/Downloads. I don't know yet of a user-friendly method to achieve that.
They already do with filesystem access. You can specify XDG folders or any specific folder you like[1].
[1] https://docs.flatpak.org/en/latest/sandbox-permissions.html#...
>As a user, I'd like to give Krita/Libreoffice permissions for ~/Documents and Tor Browser permissions for ~/Downloads. I don't know yet of a user-friendly method to achieve that.
The filesystem permissions are a bit more fine-grained than "all of home or nothing". Your two examples are already possible to achieve by granting filesystem access to xdg-documents or xdg-download.
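Concretely, and assuming the application IDs are what I think they are on Flathub, `flatpak override --user --filesystem=xdg-documents org.kde.krita` should grant Krita access to ~/Documents only, and `flatpak override --user --filesystem=xdg-download com.github.micahflee.torbrowser-launcher` the same for ~/Downloads; `flatpak override --show <app-id>` displays what's currently in effect. Flatseal is essentially a GUI over these same overrides.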
So many flatpaks do this incorrectly, though. Tor Browser by default saves into the flatpak's private home/Downloads directory (which is 15 layers deep from ~). You just gotta know to navigate up.
Signal lets you save attachments anywhere on disk, but only if you manually navigate to ~/Downloads does it actually save in a way visible and accessible outside of the app. You just gotta know.
I forgot what exactly the problem was with Vscode(/ium), but it also has a catch like that. You just gotta know.
Flatpak turns out to be the best compromise between ease of distribution and cross-distro compatibility, but there's still some low-hanging fruit that could be improved.
The apps that are "broken" are apps that are not "Flatpak native", so they assume they have full write access to ~.
Flatpak-aware apps (like the ones I develop, or any on elementary OS, since Flatpak is the native packaging format there) tend to just work.
It's not true that they assume full write access to ~; they just don't surface the limitation of a constricted choice of paths in their GUIs. That's because there isn't a way to do that: GUI toolkits don't really provide a way to clearly communicate that you can only save/open in a specific directory.
There is a deep trend happening in software development.
As the number of dependencies for building an application grows, it becomes exponentially harder to shake the tree. This used to be the role of Linux distributions: they acted as a push-back force, asking projects to support multiple versions of C libraries. This was acceptable because there is no C package manager.
Now that each language has its own package manager, the role of distributions has faded. They are even viewed as a form of nuisance by some developers. It's easier to support one fixed set of dependencies and not have to worry about backward compatibility. This is something all distributions have been struggling with for a while now, especially with NodeJS.
This trend is happening on all platforms, but is more pronounced in Linux because of the diversity of the system libraries ecosystem. On Windows and macOS, there are some SDKs the application can lean on. On Linux, the only stable "SDK" is the Linux kernel API.
> On Windows and macOS, there are some SDKs the application can lean on. On Linux, the only stable "SDK" is the Linux kernel API.
This is a very good observation I think. Basically, what Linux distributions have failed to do is to create a higher level SDK for their platforms - instead, they rely on a soup of libraries and on the package maintainer model to know which of these is which.
Basically, they are failing to provide a true OS: there is only a Kernel + various user space apps, from systemd to KDevelop, with no explicit line in the sand between what is the system SDK and what is a simple app bundled with the OS.
> This is a very good observation I think. Basically, what Linux distributions have failed to do is to create a higher level SDK for their platforms - instead, they rely on a soup of libraries and on the package maintainer model to know which of these is which.
Not really, no. It's both simpler and more complicated.
At the very low level, everyone expects to get a POSIX system. At the user level, most users use either GNOME or KDE (well, mostly GNOME to be honest, but let's pretend), which provides what could be called a high-level SDK for applications.
That leaves the layer in between, which used to be somewhat distribution-specific but is seeing more and more consolidation with the advent of systemd and flatpak.
Even Gnome and KDE are not really "high level SDKs". They mostly have GUI support of various kinds, but not a full SDK for interacting with the OS (e.g. interacting with networking, with other apps, with system services etc.).
I don't think this is really true. There is a fairly stable core set of C libraries present on virtually any Linux distro with a desktop environment that does most of what any application needs in terms of system services. The basic problem is more what the parent was discussing: these are C libraries, and rather than using the FFI for their app language of choice and using these pre-installed libraries, developers would rather rewrite all functionality in the same language as their applications, and the Rust crate, Go mod, or Node package you find when Googling whether something exists in your language of choice is likely to be much less stable than something like libcurl.
Mac and Windows solve this by creating their own languages for app development and then porting the system services to be provided in Swift and C# or whatever in addition to the original C implementation. There is no corporation behind most Linux distros willing to do that other than Canonical and Redhat, but they're perfectly happy to just stick with C, as is Gnome (KDE being perfectly happy to stick with C++). Most app developers are not, however.
For what it's worth, Redhat did kind of solve this problem for the Linux container ecosystem with libcontainer, rewriting all of the core functionality in Go, in recognition of the fact that this is the direction the ecosystem was moving in thanks to Docker and Kubernetes using Go, and now app developers for the CNCF ecosystem can just use that single set of core dependencies that stays pretty stable.
There is no common/stable set of C libs. libc on one distro is different from libc on another (even just within glibc). It's why you generally can't run an app compiled on one distro on a different one. You also at times cannot compile on a newer version of a distro and run it on an older version. When a piece of software lists an rpm targeting Fedora or a deb targeting Debian, these are not just repackagings of the same binaries (unless they are providing statically linked binaries).
RedHat did not create libcontainer, and it was never rewritten in Go (it was always Go). libcontainer was started by Michael Crosby at Docker as a replacement for execing out to the (at the time unstable) LXC, and later donated to the OCI (not CNCF) as part of a piece of tooling called "runc", which is still in use today. As part of the OCI, RedHat and others have certainly contributed to its development. libcontainer lives in runc, and using it directly is actively discouraged because Go is really quite bad at this use case (primarily due to no control of real threads); runc's exec API is the stable API here. Side note: much of runc is written in C, imported via cgo, and initialized before the Go runtime has spun up. That is not to say libcontainer is bad, just that it is considered an internal API for runc.
RedHat did end up creating an alternative to runc called "crun", which is essentially runc in C with the goal of being smaller and faster.
-- edit to add more context on development in OCI
> There is no common/stable set of c libs. libc on one distro is different than libc on another (even just within glibc). It's why you generally can't run an app compiled on one distro on different one
Pretty much everything uses glibc, and distros that are meant for end users (as opposed to specialized uses like embedded) tend to be glibc-based. If someone uses a non-glibc distro, it's their own choice, so they know what they're getting into.
And glibc has been backwards compatible since practically forever. You can compile a binary on a late-90s Linux and it'll work on modern Linux, as long as all the libraries it depends on have a compatible and stable ABI. I've actually done this[0] with a binary that uses a simple toolkit I hack on now and then: the exact same binary I compiled inside a VM running Red Hat from 1997 (the colors are due to lack of proper colormap support in my toolkit and the X in the VM using plain VGA) running in my -then- 2018 Debian. That is over two decades of backwards compatibility (and I actually tested it again recently with Caldera OpenLinux from 1999 and my current openSUSE installation). Note that the binary isn't linked statically but dynamically links to glibc and Xlib (which is another library with strong backwards compatibility).
Yes, you can compile with a really old glibc and use it on a newer one. But glibc does introduce incompatible changes that make it not work the other way around, and there are issues with stable ABIs across distros.
AFAIK the issue is the symbol versioning, which is explicit by design. I can understand why it exists for glibc-specific APIs, but I don't see why it is also used for standard C and POSIX APIs that shouldn't change.
It is an issue if you want to compile new programs on a new distro that can run on older distros, but IMO that is much less of a problem than not being able to run older programs in newer distros. And there are workarounds for that anyway, though none of them are trivial.
Forwards compatibility is basically non-existent in the software world. It's not like you can compile a Win 10 program and expect to any time run it on Win 7.
Actually you can: as long as it doesn't use Win10 APIs (or uses them via dynamic loading), it will work on Win7.
The issue with glibc is that, when you build against it, the binary records requests for the latest versions of the exported symbols present in the glibc on your system. You can work around this in a variety of ways (e.g. use __asm__ to specify the version you want, and use the header files of an older glibc to ensure that you aren't using incompatible calls), but you need to go out of your way to do that, whereas on Windows you can just not use the newer API (well, you also have to make sure your language's runtime library doesn't use the API either, but in practice this is less of a concern).
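To make the __asm__ trick concrete, here is a minimal sketch. It is assumption-laden: it targets x86-64, where GLIBC_2.2.5 is the oldest symbol version; the right version string differs per symbol and architecture, which `objdump -T` on your libc can tell you.

    #include <string.h>

    /* Bind memcpy to the old x86-64 baseline version instead of the
       GLIBC_2.14 one, so the binary also loads on older systems. */
    __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

    int main(void) {
        char dst[4];
        memcpy(dst, "abc", 4);  /* resolves against the pinned version */
        return 0;
    }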
Actually you're right, I forgot that we used to ship software to various versions of Windows with just a single build based on more or less the latest VC++ runtime (that had re-distributable versions for all intended targets).
Most of these libraries are either quite low-level or more GUI-oriented. They usually don't include decent mid-level system management utilities, like registering your app as a service or checking if the network is up.
In general, when there is some useful and commonly used C SDK, FFI wrappers for it quickly appear in every language and become popular.
> Basically, what Linux distributions have failed to do is to create a higher level SDK for their platforms - instead, they rely on a soup of libraries and on the package maintainer model to know which of these is which.
At the same time, the tools which solve this really shine. You inevitably run into these issues with random third party dependencies on other platforms, too, but it's further from the norm, so you end up with awful bespoke toolchains that contain instructions like "download Egub35.3.zip and extract it to C:\Users\brian\libs."
Developers on GNOME deal with this regularly, partially because of our lack of a high level SDK. So one of the tools we have to solve this is first class support for flatpak runtimes in GNOME Builder: plop a flatpak manifest in your project directory (including whatever base runtime you choose and all your weird extra libraries) and Builder will use that as an environment to build and run the thing for development. This is why pretty much everything under https://gitlab.gnome.org/GNOME has a flatpak runtime attached. It's a young IDE, but that feature alone makes it incredible to work with.
> You inevitably run into these issues with random third party dependencies on other platforms, too, but it's further from the norm, so you end up with awful bespoke toolchains that contain instructions like "download Egub35.3.zip and extract it to C:\Users\brian\libs."
On Windows, there is a very clear solution: any 3rd-party dependency you need outside the Windows SDK, you bundle into the MSI installer you distribute. When installing, the MSI can check whether this particular 3rd party is already installed, and skip it.
I don't think I've seen anything like the manual instructions you discuss for more than 10 years. Even OSS projects typically ship with simple installers today.
I believe, but have not tried it personally so may well be wrong, that similar mechanisms are common on MacOS with App packages.
It also makes some software under the Gnome umbrella incredibly hard to package for distributions.
Oh, for sure. Reconciling a bunch of different things' ideas of what libturkey they think they need into a single image must be a nightmare. In theory, that's what BuildStream should be helping with since gnome-build-meta[1] is only going to have one of those for different components to depend on. (If there were two libturkeys, it would be very obviously wrong). But I guess the trouble then is a lot of extra apps aren't in gnome-build-meta?
When I was messing with BuildStream a while ago I found myself wishing projects put reference BuildStream elements in their own git repos, but I suppose that would get messed up in the same way.
I'd like to point out elementary OS as a counter-example:
They have consistently provided a high level SDK for their OS. With elementary OS 6, they moved their high level SDK to using flatpak (not flathub) as a distribution mechanism.
That's probably because they are a very niche OS, while flatpak seems to be gaining in popularity.
You have it backwards. A Linux distribution is a distribution, not a vendor. If you write a higher-level SDK, then write an application using that SDK, and users request that application, then your application and your SDK will be included in the distribution. UNIX has CDE[0], but nobody uses it, so no distribution includes CDE by default.
[0]: https://en.wikipedia.org/wiki/Common_Desktop_Environment
> A Linux distribution is a distribution, not a vendor.
I don't understand what you mean here. Do you think that Debian is not trying to provide a full OS, that they are just curating a set of popular packages?
I think this is patently false, as most distributions take clear decisions to standardize on and maintain particular OS components, such as choosing a particular libc (glibc in most distros, musl in Alpine), a particular init system (systemd vs. System V), a particular network management daemon, etc.
However, instead of taking the additional time to create and commit to a backwards-compatible Debian SDK, Alpine SDK, etc., they then package all of these OS components the same way they package popular software.
Yep, Debian is not even trying to develop a full OS. They are distributing GNU/Linux with a few popular desktops (GNOME, KDE, MATE, XFCE, etc.) and popular applications.
The GNU project tries to develop a full OS for the Linux kernel. The GNOME project tries to develop a full desktop for GNU/Linux. The Document Foundation tries to develop a full office suite. And so on.
I watched a great rant by Linus T himself on how distributing code for Linux is a "pain in the arse", but is comparatively easier on Windows and MacOS: https://www.youtube.com/watch?v=Pzl1B7nB9Kc
Something I took away from that is that it is Linus himself, personally, that is responsible (in a way) for Docker existing.
Docker depends on the stable kernel ABI in order to allow container images to be portable and "just work" stably and reliably. This is a "guarantee" that Linus gets shouty about if the kernel devs break it.
Docker fills a need on Linux because the user-mode libraries across all distributions are a total mess.
Microsoft is trying to be the "hip kid" by copying Docker into Windows, where everything is backwards compared to the Linux situation:
The NT kernel ABI is treated as unstable, and changes as fast as every few months. No one ever codes directly against the kernel on Windows (for some values of "no one" and "ever".)
The user-mode libraries like Win32 and .NET are very stable, papering over the inconsistency of the kernel library. You can run applications compiled in the year 2000 today, unmodified, and more often than not they'll "just work".
There just isn't a "burning need" for Docker on Windows. People that try to reproduce the cool new Linux workflow however are in for a world of hurt, because they'll rapidly discover that the images they built just weeks ago might not run any more because the latest Windows Update bumped the kernel version.
I read all the way through this and wept: https://docs.microsoft.com/en-us/virtualization/windowsconta...
> Now that each language has its own package manager, the role of distributions has faded.
Yet most language-specific package managers are deeply flawed compared to distro package managers. But I agree with your sentiment that, beyond LTS for enterprise, the deb/rpm packaging paradigm is becoming obsolete. I believe nix/guix form an interesting new paradigm for packaging, where there is still a trusted third party (the nix/guix community and repo) but where packaging doesn't get in the way of app developers.
I'm especially interested in how these declarative package managers could "export" to more widely-used packaging schemes. guix can already export to .deb, but there's no reason it couldn't produce an AppImage or a flatpak.
> On Windows and macOS, there are some SDKs the application can lean on. On Linux, the only stable "SDK" is the Linux kernel API.
Now, with Flatpak, each runtime is an SDK of its own. However, unlike Windows and macOS, a specific runtime is not bound to a specific OS release, but to the app's requirements.
That's right, it makes the problem worse.
The first time I noticed that trend was on macOS, where applications bundle most of their libraries, and binaries compiled for multiple architectures.
Then we had Electron apps that ship with their own full copy of a browser and dependent libraries.
When NPM was designed, they observed that resolving colliding versions of the same dependency was sometimes difficult. Their answer was to remove that restriction and allow multiple versions of the same library in a project. Nowadays, it's not uncommon to have 1000+ dependencies in a simple hello world npm project.
Our industry is moving away from that feedback force that was forcing developers to agree on interfaces and release stable APIs.
> Our industry is moving away from that feedback force that was forcing developers to agree on interfaces and release stable APIs.
And parts of the industry are beginning to move back and favor stability, as a result of:
- numerous NPM packages being taken over by coin miners and other malware - at the scale of some of them, even a ten-minute takeover window means millions of installs, and supply chain audits are almost impossible
- framework churn being an actual liability for large enterprises - many were burned when AngularJS fell out of favor and they were stuck with a tech stack widely considered "out of date". Most new projects these days seem to be ReactJS, where Facebook's heavy involvement promises at least some long-term support
- developer/tooling churn - same as above; the constant training needed to keep up with breaking changes may be tolerable in the startup, money-burning phase, but once your company reaches a certain size it becomes untenable
> On Windows and macOS, there are some SDKs the application can lean on. On Linux, the only stable "SDK" is the Linux kernel API.
In practice I don't really see a difference, I link against Qt on all platforms, win, mac, linux, wasm, android... and that's it.
Dynamic linking is a creation of the devil.
People have tried to provide a packaging format that would allow apps to declare their dependencies in a distribution-neutral manner[1]. It was, uh, not a huge success.
Let's take an extreme example. If I build an app on Ubuntu and then try to run it against the system libraries on Alpine, it'll fail, because Alpine is built against a different libc. We can simply declare Alpine out of scope and only support glibc based distributions, but if we want good cross-distribution support we're still going to be limited to what's shipping in the oldest supported version of RHEL. So let's skip that problem by declaring LTS distros out of scope and only target things shipped in the last 3 years - and now apps can't target any functionality newer than 3 years old, or alternatively have to declare a complicated support matrix of distributions that they'll work with, which kind of misses the point of portability.
In an ideal world distributions would provide a consistent runtime that had all the functionality apps needed, but we've spent the past 20 years failing to do that and there's no reason to believe we're suddenly going to get better at it now. The Flatpak approach of shipping runtimes isn't aesthetically pleasing, but it solves the problem in a way that's realistically achievable rather than one that's technically plausible but socially utterly impossible.
Flatpak is a pragmatic solution for an imperfect world - just like most good engineering is.
Edit to add: one of the other complexities is that dependencies aren't as easy to express as you'd like. You can't just declare a dependency on a library SONAME - the binary may rely on symbols that were introduced in later minor versions. But you then have to take into account that a distribution may have backported something that added that symbol to an older version, and the logical conclusion is that you have to expose every symbol you require in the dependencies and then have the distribution resolve those into appropriate binary package dependencies, and that metadata simply doesn't exist in every distribution.
[1] http://refspecs.linux-foundation.org/LSB_4.1.0/LSB-Core-gene...
AppFS [0] solves this by just providing every file any package needs, including libc.
AppFS is a lazily-fetching, self-certifying system with end-to-end signature checks that distributes packages over HTTP (including from static servers), so it's federated.
Since all packages are just collections of files, AppFS focuses on the files within a package. This means that if you're on Alpine Linux and want to run ALPINE (the mail client) from Ubuntu (assuming Ubuntu used AppFS), it would use Ubuntu's libc. This is pretty similar to static linking, except with some of the benefits of dynamic linking.
CERN has a similar, but more limited system called CernVM-FS [1]. AppFS isn't based on that and I only learned about it after writing AppFS.
AppFS is based (in spirit, not in code) on 0install's LazyFS. Around the turn of the century I was trying to write a Linux distribution that ONLY used 0install, but due to limitations it wasn't feasible.
That is possible to do with AppFS, and it's how the Docker container "rkeene/appfs" works.
> but if we want good cross-distribution support we're still going to be limited to what's shipping in the oldest supported version of RHEL.
Why should that requirement be considered so extreme when, in the Windows world, applications are often required to work as far back as Windows 7 (or were until a year or two ago)?
But on Windows most applications bring copies of all of their dependencies, and ask for a 20th copy of the VC redistributable to be installed.
I'm trying to scope the "Just use system libraries" approach into one that's realistically achievable. I agree that this shouldn't be an extreme requirement, but it turns out that most app authors aren't terribly interested in restricting themselves to the libraries shipped in RHEL 7, so.
On Windows, .dll files are automatically searched in quite a few places, including current directory, directory where .exe was launched from, and PATH environment variable. Meaning it is far easier for apps to ship private libraries.
Plus, when linux apps try to ship private libraries, as official chrome packages do, that gets quite some backlash from distro maintainers.
And on Windows the default way to bundle an application is "make a separate directory for that application, put everything you need inside it" which is kinda equivalent to Linux "/opt" packages, IIRC? Anyway, that neatly combines with the lookup rules for .dlls (they're first searched next to the executable file itself) so that shipping mostly self-contained applications is sorta easy: the applications by default use their packaged libraries, and if those are missing, the system libraries or libraries from PATH are used.
> On Windows, .dll files are automatically searched in quite a few places, including current directory, directory where .exe was launched from, and PATH environment variable. Meaning it is far easier for apps to ship private libraries.
You can get the same behavior on Linux by linking with -Wl,-rpath,\$ORIGIN (minus the PATH env var, use LD_LIBRARY_PATH for that).
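For example, something like `gcc -o app main.c -L./libs -lfoo -Wl,-rpath,'$ORIGIN/libs'` (libfoo being a stand-in for whatever private library you ship) lets you drop `libfoo.so` into a `libs/` directory next to the binary, and the dynamic linker will find it there no matter where the whole directory tree gets copied. Note the quoting: `$ORIGIN` must reach the linker literally rather than be expanded by the shell.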
That was a very good summary of where flatpak (and snap, et al.) come from, thank you. Not a particularly convincing conclusion, though.
The problem that flatpak is solving (shipping binaries that work across many distros) already has at least two solutions: static binaries (where possible; golang is great here), or shell wrappers that point the dynamic linker at an appropriate private /lib directory.
Among the three solutions, flatpak is the most complex, and the least compatible with what an advanced user might do - run stuff in private containers, with a different init, etc.
> or shell wrappers that point the dynamic linker at an appropriate private /lib directory.
You haven't needed shell wrappers to do that for a long time, just link with -Wl,-rpath,\$ORIGIN/whatever/relative/path/you/want where $ORIGIN at the start resolves to the directory containing the binary at runtime.
Of course other things like selecting between different binaries based on architecture or operating system still requires a shell script.
If reducing duplication is a goal, static linking or shipping private copies of libraries works against that. Building against standardised runtimes works much better in that respect.
> People have tried to provide a packaging format that would allow apps to declare their dependencies in a distribution-neutral manner[1]. It was, uh, not a huge success.
Have you looked into Nix?
Nix itself is more focused on "distribute from this host with nix, to this other host with nix".
Though, here is e.g. https://github.com/matthewbauer/nix-bundle, which is supported as an experimental command in nix 2.4.
Doesn't Nix ship its own version of every library dependency rather than using system libraries?
Nix allows each package to use a specific version of a library and lets them co-exist via unique paths.
In NixOS the whole system works this way, so there aren't really any global system libraries, except for the graphics system.
NixOS is the only sane Linux distro on the planet now. At the end of the day every distro wants their own app store.
> and now apps can't target any functionality newer than 3 years old
You can optionally support newer functionality with a single binary by dynamically loading libraries and resolving functions at runtime using dlsym or API-specific mechanisms (e.g. glXGetProcAddress).
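A minimal sketch of that pattern, with a hypothetical libwidget and symbol name standing in for a real library (compile with -ldl where dlopen isn't in libc):

    #include <dlfcn.h>
    #include <stdio.h>

    int main(void) {
        /* Try the newer library version first, fall back to the older one. */
        void *h = dlopen("libwidget.so.4", RTLD_NOW | RTLD_LOCAL);
        if (!h)
            h = dlopen("libwidget.so.3", RTLD_NOW | RTLD_LOCAL);
        if (!h) {
            fprintf(stderr, "no usable libwidget: %s\n", dlerror());
            return 1;
        }

        /* Optional symbol that only the newer version exports. */
        void (*fancy)(void) = (void (*)(void))dlsym(h, "widget_fancy_feature");
        if (fancy)
            fancy();                     /* newer code path */
        else
            puts("baseline code path");  /* still works on the old library */

        dlclose(h);
        return 0;
    }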
Sure, you could dlopen() different SONAMEs until you find one that works, and now you just have to somehow deal with the structs having changed between versions, and the prototypes of functions not being compatible, so yes it's technically possible but nobody is going to bother
The idea isn't to open dynamic libraries randomly, but to open dynamic libraries you explicitly know provide the newer-than-3-years functionality you want.
It isn't some fantastic never-seen-before concept, it is how applications on Windows can use new APIs from Windows 11 while still running on Windows XP or how OpenGL programs can use APIs from OpenGL 4.6 while being able to run on drivers that only expose OpenGL 3.1.
Yes, and for the reasons I expressed it doesn't work that way for most libraries under Linux.
OpenGL worked on Linux last time I checked (a few minutes ago).
Linux isn't some special case that makes this impossible; the only reason for this not to work is the libraries themselves not making it possible. But the blame lies with the libraries, not with Linux.
Like I said, it's technically possible and that is (outside a very small number of well-defined outliers like OpenGL) entirely irrelevant in terms of whether it's practically possible. Even if we rewrote every library now to have mechanisms to make this easier, it wouldn't help for any of the older versions that don't do this and which are already deployed everywhere.
Of course it is practically possible, as long as the developers care about backwards ABI compatibility - the issue isn't if it is possible or not (it certainly is as actual existing APIs and libraries show), the issue is library developers breaking their libraries' ABIs.
But that doesn't mean it isn't possible to do something like this; it means that you have to stick to libraries that do not break their ABIs. And this has absolutely nothing to do with Linux, which you brought up previously: everything I mentioned works on Linux and any other OS that supports dynamic linking and has ABI backwards compatibility for the applications that run on top of it.
RHEL 8 doesn't ship with GTK 4. How do I ship an application that uses functionality that only exists in GTK 4 if available, but falls back to GTK 3 if it isn't? If libraries had been written with this in mind (like OpenGL was), then yes, there'd be a path to doing so. Libraries on Linux could work in the way you suggest. But, for the most part, they don't.
Well yes, you basically arrive at what I've been writing about so far: the issue is with libraries like GTK 4 that break ABI backwards compatibility. It isn't an issue with Linux; Linux doesn't break ABI backwards compatibility. If GTK 4 didn't break its ABI, you could use the GTK 3 API as a baseline for your application and dynamically load the new GTK 4 stuff (and as a bonus you'd get any inherent improvements in GTK 4 that are exposed through the GTK 3 API, like how, e.g., applications written for WinXP get the emoji input popup on Win 10 even though that didn't exist in WinXP's time).
The REAL problem is libraries breaking their ABIs, not Linux itself.
If you're defining Linux as a kernel, then yes, the kernel does not impose any constraints on userland that would make this impossible - and I never said it did. If you're defining Linux as a complete OS, then the fact that it's conceptually possible for libraries to behave this way is irrelevant; they could, but they don't, and any solution for distributing apps needs to deal with that reality rather than just asserting that everything else should be rewritten first before anyone can do anything.
> Let's take an extreme example. If I build an app on Ubuntu and then try to run it against the system libraries on Alpine, it'll fail, because Alpine is built against a different libc.
I mean, on Windows, if I use a given libc, say msvcrt or ucrt (or heck, newlib with cygwin), I have to ship it with my app anyway. Linux makes that harder, but in practice there's no way around this.
The short (but polite) rebuttal to this is: OSX. DMG files are similarly sized and have proven to be very successful.
For example, Firefox (https://www.mozilla.org/en-US/firefox/all/#product-desktop-r...): the Win64 installer is 50 MB, the macOS installer is 130 MB, and the Linux 64-bit one is 70 MB.
Same is the case with Chrome (https://chromeenterprise.google/intl/en_US/browser/download/...): the Windows MSI is 79 MB; the OSX PKG for the same is 195 MB.
So by that measure, OSX has already lost, right? The package-everything-together approach has won in probably one of the largest OS ecosystems that exists today. Size does not matter... a mom-proof experience does matter.
>How much progress could we make if Steam deprecated their runtimes, abandoned containerization for new games, and let all new games just use the native system libraries? How loudly do you think gamers would complain if a distribution upgrade broke their favourite game?
This OTOH does not exist. Linux gamers are 1% of either the gaming market or the desktop computing market. All gamers either dual boot Windows... or Android. Even those who game on Linux do it on an abstraction layer like Wine, which is what Valve maintains - https://github.com/ValveSoftware/wine and https://github.com/ValveSoftware/Proton
Valve only needs to maintain Wine and Proton for a distribution. The games themselves? Well, they are Windows-only. Nobody compiles for Linux.
That's just not true. "All Linux users dual boot" where's your data to say that? "Nobody compiles for Linux" factually false, many of the most-played games on steam are Linux-native, and most others run flawlessly on Proton.
And 1.2% is slightly more than half the Mac market share on Steam. Should Mac users also be ignored?
>"Nobody compiles for Linux" factually false, many of the most-played games on steam are Linux-native, and most others run flawlessly on Proton.
The games that run on Proton have not been compiled for Linux.
E.g. Factorio is compiled for Linux.
Yes, Linux-native games exist.
But games that run on Linux via Proton have not been compiled for Linux, so mentioning them as a counter-argument to games not being compiled for Linux makes no sense.
The broader context here is TFA rejecting the "Steam runtime" idea, and for Proton-based games, Proton is that runtime! They don't need anything from the system because Proton/Wine provides it.
Linux-native + Proton means "some compile for Linux[1], the rest either test on Proton (the Steam Deck should increase that segment at least) or do not care".
Your data is wrong. I wish it were correct. But it's not.
OSX users are 2.56% of Steam. Linux is 1.13%
https://store.steampowered.com/hwsurvey/Steam-Hardware-Softw...
> Valve only needs to maintain Wine and Proton for a distribution. The games themselves? Well, they are Windows-only. Nobody compiles for Linux. All gamers either dual boot Windows... or Android. Even those who game on Linux do it on an abstraction layer like Wine
This isn't true at all. I'm afraid you've misunderstood the situation. Get on steam and check for yourself. There are many games compiled for Linux specifically (in addition to other platforms) that do not require proton. They use some ubuntu or debian packages for their libraries and have no linkage to anything windows.
Firefox is larger on macOS because it contains two architectures (x86_64 and arm64), not because it bundles a full runtime -- also, the compression algorithm used in dmg files is typically quite bad (zlib or bzip2), which is not helping.
That still does not change the argument that file size does not matter.
In fact, this reinforces it. Apple believes file size matters so little that they're willing to double it simply so users don't have to pick between x86_64 and arm64 builds when downloading an app (and if they're using the App Store, the App Store could do it for them, but even that Apple considers too much complexity).
Folks who have the money for Apple stuff have money for large SSDs and fast unmetered Internet.
Aaaand folks who live in the Apple universe have been accustomed to accept that they are holding it wrong, and that there's someone whose job is to figure out what's the official best way to do things. If it involves downloading 130 megs it's 130 megs. No problem. The UX and end result is worth it for them.
Not sure about the Mac App Store, but on iOS they do trim architectures and assets as needed.
That's a surprisingly recent addition on iOS, and it appears to not have made it to macOS.
I have used Monolingual[1] on past laptops to remove unnecessary architectures from installed OS X applications when disk space was running low.
Having said that, I never bothered installing it on my current laptop which sort of reinforces the point others are making about being less concerned with application size.
I didn’t think that binaries would take up so much space, but Firefox 83 (the last x86-only release) is 73MB vs Firefox 84 (the first universal release) which is 126MB. Wow, I guess that makes sense then.
Firefox would be a bigger codebase than the whole Linux kernel, so it makes sense.
Thanks for that! I quickly poked around at my Firefox install on MacOS. Firefox.app is 353M. The largest single file in there is Firefox.app/Contents/MacOS/XUL (binary) and that's 258M. Of that, lipo says 136165680 is x86_64 and 134106096 is arm64. My Firefox folder in Windows is 208M.
Yeah, really. OSX DMGs contain app bundles, a concept OSX inherited from NeXT. NeXT wasn't the first implementation of the idea either; in fact, pretty much every non-Unix desktop OS ever released used some variation of the concept, including DOS and the original Mac.
Sadly, it seems the Linux world just can't wrap its head around the idea of managing applications with the same simple mechanisms we use to manage regular everyday files. It would seem that Linux desktop users just love having inflexible management tools do it instead. Well, there is AppImage, but unfortunately its use is not widespread.
"Sadly, it seems the Linux world just can't wrap its head around the idea of managing applications with the same simple mechanisms we use to manage regular every day files. It would seem that Linux Desktop users just love having inflexible management tools do it instead. Well, there is AppImage, but unfortunately its use is not wide spread."
Seems to me they love treating application management like one manages dependencies in a software project. There are a lot of parallels with that paradigm, and it's clear where the notion came from. In the context of Unix/Linux history it makes sense: if I were an MIT grad student in 1978, there probably wasn't much practical difference between a library and an application.
But that was over 40 years ago. There is a clear distinction now, and the overwhelming majority of PC users are not MIT grad students. Applications should be easy to install. People don't care that there is more than one version of some dependency. They don't care that your repo doesn't have what they want. They have work to do.
> This OTOH does not exist. Linux gamers are 1% of either the gaming market or the desktop computing market. All gamers either dual boot Windows... or Android. Even those who game on Linux do it on an abstraction layer like Wine, which is what Valve maintains
First off, 1.13% of Steam users are on Linux [1], and given how many users Steam has, just 1% of that population should be able to make quite a fuss. Secondly, Valve does need those containerized runtimes even if most games run on Proton, since Proton also depends on those libraries, so not sure why you're dismissive...
[1]: https://www.phoronix.com/scan.php?page=news_item&px=Steam-Li...
I'm on your side. I'm a Linux gamer..or rather used to be.
Until I saw this - https://news.ycombinator.com/item?id=28978086
Yeah sure, Linux gamers are more engaged, etc etc. But for an indie developer making games... it's like 5% of the market. It probably does not even recoup the development cost.
I realised that I am probably doing a disservice to indie game developers by insisting on a militant stance.
Wine is fine. I'll just play it on Linux. Let them make more money.
OS X can do this because "OS X" is about as specific as "Ubuntu 20.04 LTS", so 90% of the problems described in the article wouldn't happen, because there aren't many OS Xes.
I personally don't use it, so I can't tell how good the story is with two different versions. I've heard of lots of upgrade troubles from Mac users when a new OS release comes out, but maybe the authors are usually better at keeping a version for current-1 around, or the latest version compatible with current-1? Then it's still only 2 versions and not 10 distros.
My experience with these is terrible. I installed a JetBrains IDE via a package. I spent a very long time trying to debug a CMake problem that I assumed was down to my inexperience with CMake. After attacking various processes with strace, it dawned on me that the problems were caused by the IDE not being able to see files in /usr/local. A sandboxed dev environment! Lost for words... A relatively inexperienced dev would have a terrible time with such a thing - it couldn't even see dev headers & libs installed with apt.
This is the biggest problem with these containers at the moment: you don't get any feedback about which permissions are missing. First you need to understand that this is a permission problem with the container itself, then work out which permissions are missing, and then apply them correctly.
For Flatpak there is an API you can use to change permissions, but for Snap, from what I can remember, that is not something you, the user, can change; it is up to the maintainer to enable them.
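For the record, the Flatpak CLI for this is flatpak override; fixing the /usr/local case above would look something like this (the app ID here is just an example):

    # Grant the app read-only access to /usr/local (user-level override)
    $ flatpak override --user --filesystem=/usr/local:ro com.jetbrains.IntelliJ-IDEA-Community

    # Show the overrides currently in effect for the app
    $ flatpak override --user --show com.jetbrains.IntelliJ-IDEA-Community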
Applications like an IDE use a lot of different resources, so I gave up on using one as a Flatpak; luckily, JetBrains ships their IDEs as a tar.gz binary package you can use instead.
Flatpak works best when the application is very self-contained. Spotify, for example, streams music from an internet service and doesn't require any special permissions.
I used Bitwarden as a Flatpak. It had limited file access, with one granted directory (Downloads). When I went to download an attachment from the Bitwarden application, the file-saving dialog started one directory up from Downloads, so you had to pick and open the Downloads directory before saving. However, I managed to save my attachment in that starting location outside of Downloads: some void directory that I never found.
Very similar experience with the .NET runtime on Linux. Microsoft ships the .NET runtime as a Snap package: https://docs.microsoft.com/en-us/dotnet/core/install/linux
One time I didn't pay attention and installed the Snap package instead of the native binaries, and then I spent several hours debugging an "access denied" status returned by the mq_open API (https://man7.org/linux/man-pages/man3/mq_open.3.html) in my program, which only happened when called from a C# program, but not from a C++ program running under the same user account.
The (unofficial) JetBrains Flatpaks have never worked for me for that reason; I don't know why Flatpak keeps them around. I recommend JetBrains Toolbox[0] for managing their IDEs.
You discovered the one area that doesn't work well yet. Dev tools don't work great because they typically need access to just about everything. They either ask for very broad permissions, or the app has to add support for portals, and the portals they need may not exist yet.
Can't upvote this enough. As other commenters have pointed out, understanding why certain features/patterns don't work in a given app due to sandboxing is a major hurdle.
Yea, a while back I tried out a Flatpak of FileZilla, but because it was sandboxed I couldn't set the default editor to anything on my actual machine, so I uninstalled it and went with a deb, which worked just fine.
My main complaint about solutions like Flatpak or Snap is that they add another package manager. Now I have to manage two sets of packages.
The baseline tools, like the standard GNU Unix tools and other command line tools, are still managed by DNF, APT, or whatever, and then another set of apps is managed by Snap. It's pretty confusing, especially if you never use the "app stores" otherwise. You quickly end up with apt install <some command line tool> and apt install gimp, but now that's a different GIMP than the one in the app store. You also can't really remove GIMP from apt, because it's weird for apt not to be able to install everything.
I haven't really used Flatpaks, but Snap is confusing (to me at least), because it doesn't actually replace APT, and it pollutes the output of "df" and "mount".
I'm also concerned that some packages will never be updated, or will be stuck on a library with security issues. Perhaps that's just me, but I'd rather have that one application break.
You should point the finger at package managers. They never really addressed needs outside of system administration (to be fair, that's the problem they were trying to solve). Scripting languages came up with their own package systems because package managers didn't fit their needs; I often found myself building from source and installing into my home directory because package managers wouldn't accommodate me; industries (like the one I work in) came up with their own package management outside of the OS.
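(For reference, the home-directory dance is the familiar:

    # Install into ~/.local instead of system paths; no root, no package manager involved
    $ ./configure --prefix="$HOME/.local"
    $ make
    $ make install

repeated for every dependency the distro wouldn't accommodate.)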
Nix/Guix seems like the only one that's trying something different enough to address some of those other use-cases, but the cat's out of the bag for most people.
> You should point the finger at package managers. They never really addressed needs outside of system administration (to be fair, that's the problem they were trying to solve). Scripting languages came up with their own package systems because package managers didn't fit their needs
I'd argue that's simply not true. Scripting languages came up with package managers because the largest platform (Windows) does not have a package manager. Actually, the fact that scripting languages implement package managers is a very good argument that their functionality is desired. As a side note, every language package manager is much worse, in my experience, than any of the system package managers I've ever used.
Right. Windows support is a feature outside the scope of Linux package managers. If they weren't tethered to specific distros, I don't see why one couldn't support multiple platforms. It hasn't happened historically, but Windows has a bunch of package managers now, even one supported by Microsoft.
I think "largest" is quite an exaggeration, since historically scripting-language packaging has been rather terrible on Windows. While Windows was a consideration for languages like Python (I've read they avoided Make because of Windows), for me Cygwin always came up along with it until recently.
> Nix/Guix seems like the only one that's trying something different enough to address some of those other use-cases, but the cat's out of the bag for most people.
Maybe that's because Nix and Guix don't interop despite having rather similar principles, and because we mostly don't have a GUI for them (like GNOME Software has integration for Flatpak) nor desktop integration (like AppImageLauncher). Are you aware of work being done in this space?
I wasn't trying to undermine their hard work! I think both have a chance of being successful in their own domains. I only meant to call them out as exceptions.
All I was saying is that even if a new package manager addressed all of the issues scripting languages had, scripting languages would still use their own package managers. The culture is now set; even new scripting languages write their own package managers.
There's nix-gui: https://github.com/nix-gui/nix-gui
Package managers that integrate apt, flatpak, and (potentially) snap are a nice solution. Unfortunately, the one that happens to be on my system (Pop!_Shop) is awful.
Could you maybe elaborate, or link to more detailed criticism of Pop!_Shop? I've also found that GNOME Software did a rather poor (though not that bad) job of exposing the multiple sources for a given package, but I believe this can be fixed with better UX (though I'm unaware of work being done in this space).
Pop!_Shop's UI frequently freezes when installing, searching, or even just opening an app's store page from the GNOME app menu. It also has some UI bugs, like the search bar being visible but un-interactable when on an app's page. It's definitely something that can be fixed with better UX. The actual selection of whether I want the apt version or the Flatpak version of a package works fine.
> If I ship an app for Windows I don’t have to include the entire Win32 or .NET runtimes with my app.
That's not the case anymore. If you're aiming higher than .NET Framework 4.x, you have the option of either a self-contained package (your app + .NET Core, minimum ~70MB: the AppImage approach), or a framework-dependent one (requires the right .NET version at installation: the Flatpak approach).
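In practice the two options are just publish flags, roughly:

    # Self-contained: bundles the .NET runtime with the app (the AppImage-like approach)
    $ dotnet publish -c Release -r linux-x64 --self-contained true

    # Framework-dependent: relies on a runtime already installed on the machine (the Flatpak-like approach)
    $ dotnet publish -c Release --self-contained false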
I find the "things are not shared" section a bit weird though. Yes, not everything uses the same base and that's sad. But I've got 8 flatpak apps and 7 of them do share the runtime. We'll never get 100% unified and deduplicated approach here - in the same way we'll never unify Gtk/qt/tk/wx/...
> Flatpak allows apps to declare that they need full access to your filesystem or your home folder, yet graphical software stores still claim such apps are sandboxed.
Yes, distros really need to start talking about permissions/capabilities rather than some generic "sandbox". Some apps genuinely need to read the whole disk. But maybe they don't need to write to home. Or maybe they don't need internet access. We'll have to get mobile-phone-like permission descriptions at some point, because a single label just doesn't describe the reality.
Since Windows 3.0 that has only been the case when using OS APIs directly, without depending on the C runtime library, the C++ runtime library and their respective frameworks, COM, or DLLs from other SDKs or third parties. xcopy installs and installers exist for a reason.
Even with .NET Framework 4.x you need to bring the framework with you if you want to use the newest versions, as your customers might not have the most current one installed.
The point as formulated still stands, though. You don't have to, because you can always guarantee that your end users will have a Win32 runtime and a .NET runtime.
There is a choice, and it's between using a later framework, and relying on the OS one.
The choice being: getting stuck with what a specific Windows version has available, and using OS APIs directly instead of C ones, e.g. ZeroMemory() instead of memset(), as the C runtime library isn't part of the OS.
Yes, I was only talking about Win32 and .NET, which are always bundled with Windows (obviously newer versions of the OS may have newer versions of them).
How C libraries work has never been a problem I have encountered. (I do deploy C++ libraries with apps, though, and the tendency to move to "all apps deploy a full copy of whatever runtimes they need" is a much better situation for all 3 parties involved: developers, users and hackers...)
I get the point about memset (although I believe that particular example is now an intrinsic).
The new Gnome Software in Gnome 41 does include a serious warning label on Gimp now, for the record.
Flatpak is the future of Linux desktop applications for many reasons, but among them is that it lets users escape the dependency mess that plagues Linux distributions. Some of the criticism here does not belong to Flatpak; e.g. Fedora's weird duplication of repositories is exclusively a Fedora problem. It has absolutely nothing to do with Flatpak.
Applications such as VLC and GIMP which 'ship' with whole-filesystem access are an eternal dilemma. Would you rather the authors ship without that permission and break app functionality? Or have them ship with restrictive permissions and make users manually enable the respective permissions to regain functionality when the software breaks? It is easy to see which decision works for all parties here. The permission labels on the various app stores, however, are confusing about sandboxing; I agree with the point that the store should make this as clear as possible.
I think the bigger point the article misses is the ability to control these permissions at this level. Even if the software author ships 'dangerous' default permissions, the user can always revert that decision and sandbox the app effectively if they so wish.
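For instance, tightening GIMP's defaults takes two commands (a sketch; the directory choice is just an example):

    # Revoke the blanket host filesystem access the app ships with...
    $ flatpak override --user --nofilesystem=host org.gimp.GIMP

    # ...and grant access only to a specific directory instead
    $ flatpak override --user --filesystem=~/Pictures org.gimp.GIMP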
Flatpak is a crucially needed fix for the Linux package distribution problems this article highlights so elaborately; in my humble assessment, the benefits of this solution massively outweigh drawbacks such as the one the author mentions about package sizes.
> Applications such as VLC and GIMP which 'ship' with access to whole filesystem permission is an eternal dilemma. Would you rather the authors ship without access to this permission and break app functionality ?
To give a data point: I work with a lot of artists who use Macs, and none of them use Mac App Store apps because of endless issues when accessing the file system.
Counterpoint: Flatpak doesn't really solve anything, and I have no reason to use a Flatpak'd version of software when there's a native version in my system repos. A lot of people feel this way: they see it when Flatpak doesn't adopt their native system theme, they see it when they open a file picker and it starts in some esoteric location, they see it when they want to edit a Flatpak'd app's files and need to spend the afternoon locating its binary. There are so many papercuts, bugs, regressions and failures on Flatpak's behalf that I don't think anyone would really want to adopt it unless they were forced to.
I speak only for myself, but I will never enable Flatpak on any of my devices. A lot of other Linux users share the sentiment.
Apps not adapting to your theme is a feature, at least if you ask some GNOME devs.
It's a regression, if you ask the other 95% of Linux users who aren't "enjoying" stock GNOME.
The typical user also doesn't want file picker thumbnails when you ask the Gnome devs ;)
> How much progress could we make if Steam deprecated their runtimes, abandoned containerization for new games, and let all new games just use the native system libraries? How loudly do you think gamers would complain if a distribution upgrade broke their favourite game?
If the Steam runtime didn't exist, most gamedevs would only target the most popular distro - probably the current Ubuntu LTS - and you would have to recreate the runtime on your distro of choice.
And once there's a new Ubuntu release, you would have to recreate it there too (or the game updates, and now you have to recreate it on the now-old version).
The choice isn't between steam runtime and a utopia, the choice is between the steam runtime and something much worse. Linux libraries simply aren't stable enough in API and ABI.
And realistically if they could only target a single distro, they would likely not bother making Linux builds at all.
Yet the Steam runtime could be maintained and distributed using a more robust and explicit mechanism such as Guix or Nix, don't you think?
Guix is a great solution to this problem. It's a shame that there aren't any distributions based on it (AFAIK) besides the official GNU one, which is not very practical outside of a VM due to its aversion to anything proprietary.
> If you are a Linux distribution maintainer, please understand what all of these solutions are trying to accomplish. All your hard work in building your software repository, maintaining your libraries, testing countless system configurations, designing a consistent user experience… they are trying to throw all of that away. Every single one of these runtime packaging mechanisms is trying to subvert the operating system, replacing as much as they can with their own. Why would you support this?
Uh, because User Freedom™? Or are we supposed to consume Linux applications exclusively via the maintainer-supplied packages? Also, is this part: "testing countless system configurations, designing a consistent user experience" ― something that actually happens? Given the example in this very article about Fedora going to auto-convert all its rpm packages to Flatpak, it sounds, as the youth says nowadays, "kinda sus".

The most important takeaway from this is, to me, the need to separate sandboxing from dependency management.
However, as can be seen with AppImage, which seems to be strictly focused on dependency management, that results in bloat.
It would be great if we could have a sandboxing solution that ignores the dependency management altogether.
The problem is not with Flatpak or Snap (or Docker). The problem is that the fragmentation in the actual, live-deployed Linux system runtime is huge. At one point, Canonical suggested a common, shared baseline for the LTS editions of all distributions, but there was no interest from Red Hat in particular (the other "largest" player). That's not too surprising: since Red Hat is the biggest contributor to the base GNU/Linux system, it can decide how it deals with it and push that onto others.
Backwards compatibility is hard in software in general, but I think simple extensions of tools like apt and apt archives (like Launchpad PPAs) would have allowed binary distribution of software in an already familiar way, where dependency management is not a solved problem per se but has all the familiar gotchas.
As for app stores, that's surely an effort to extract some profits, but it's also what big customers are asking for for their IoT solutions.
> Backwards compatibility is hard in software in general
I disagree. Pretty much every desktop OS in the world manages to get quite a lot of backwards compatibility without a whole lot of trouble, except for Linux Desktop.
That suggests the problem isn't hard; it's just that the culture of the Linux Desktop is incompatible with the concept.
You must have missed all the articles on the huge effort Microsoft has put into maintaining backwards compatibility (like that famous SimCity story).
I can't find the original article anymore, but you can find references to it on Joel Spolsky's blog:
- https://www.joelonsoftware.com/2000/05/24/strategy-letter-ii...
- https://www.joelonsoftware.com/2004/06/13/how-microsoft-lost...
The referenced posts probably live somewhere on https://devblogs.microsoft.com/oldnewthing/ today, but the quoted snippets should tell you that it's not easy at all, and that Microsoft invests heavily in maintaining it.
Yes, in order to increase their backwards compatibility Microsoft has, historically at least, done things like that. However, mostly they achieve backwards compatibility by just not breaking ABIs all the goddamned time.
Windows backcompat isn't perfect, true, but the stories of where it fails are largely exceptions to the rule. Whereas on the Linux Desktop, in many cases you can't even have backwards compatibility with applications compiled for the previous version of the same distribution.
You started off by disagreeing that backwards compatibility is hard, because other platforms (like Windows or MacOS) can do it.
When given evidence that it's at huge cost and investment that they manage to do that (basically an admission from them that it is hard; you may have a different definition of "hard" than I do), you seem to suggest either that they are not doing a lot of work towards compatibility, or that they are wasting all that money because they could have simply maintained backwards ABI compatibility while developing new features "easy-peasy".
Why did they choose not to take the easy way out and made it hard for themselves?
From my software development experience, maintaining API backwards compatibility is "hard" (it requires significant extra effort compared to just not doing it), not to mention the ABI complexities on top.
You misunderstand. I am saying that all that effort was spent chasing the long tail of compatibility, which is valuable if compatibility is one of your big selling points.
However, one need not go to such extremes to get a good amount of compatibility, just don't have a policy of breaking ABIs constantly the way Linux Desktop does.
And let me address the "policy of breaking ABIs constantly the way Linux Desktop does."
As you probably know, there is no such policy, and no "Linux Desktop" either. You've got Linux distribution releases that are supported for up to 10 years (like RHEL and Ubuntu LTS) by teams a couple of orders of magnitude smaller than either macOS's or Windows'. They usually get HWE (hardware enablement) kernel updates too, from newer kernel series. By definition, they maintain ABI backwards compatibility for the duration of their support. If you care about that, it's what you should use.
But it's in the nature of the free software world to want to mix and match, so a singular Linux Desktop does not exist. You should treat each distribution as a separate OS, which is unfortunate, but realistic. Flatpak/Snapcraft recognise that to a point (packaging up everything but the kernel, regardless of the system packages already installed).
> As you probably know, there is no policy, and no "Linux Desktop" either.
Yes, a common refrain.
> You've got Linux distribution releases that are supported for up to 10 years (like RHEL and Ubuntu LTS) by teams a couple of orders of magnitude smaller than either of MacOS or Windows.
Yes, and they're not really equivalent, because they keep everything stable. An OS like Windows or macOS keeps the platform stable and allows applications to be bleeding edge or not, as desired. The way applications are typically distributed on Linux makes this very difficult, hence things like Flatpak.
Regarding the team size, I can only say that if everyone worked on the same "distribution", that gap would be significantly reduced. Instead the Linux Desktop world delights in duplicating effort, so here we are.
> But it's in the nature of free software world to want to mix and match, so the idea of a singular Linux Desktop does not exist. You should treat each one of them as a separate OS, which is unfortunate, but realistic.
If that is the policy, then no one should be surprised when developers choose not to target a non-platform that doesn't exist.
Ok, I see your premise now. I am not sure I agree: do you know of a system that has seen less investment in backwards compatibility yet successfully maintains it?
Of course, not "frozen" like TeX, but seeing active feature development too.
Pretty sure HaikuOS runs software originally compiled for BeOS in 1995. They have a very small development team and they are quite active today.
...for that matter, WINE is even able to make Linux compatible with Windows applications to a great extent, and they certainly aren't backed by Microsoft's wealth of resources.
> Pretty much every desktop OS in the world manages to get quite a lot of backwards compatibility without a whole lot of trouble, except for Linux Desktop.
That's not at all my experience with Windows or MacOS. Sure some older apps work fine, but certainly not all of them.
The vast majority of them do, in my experience. Especially if you take into consideration how badly it works on the Linux Desktop: you usually can't even run a binary compiled for the previous version of your distro on the latest one!
The tool that Flatpak uses for sandboxing is bubblewrap, which can be used to sandbox distro packages fairly easily. None of the desktops do that, though.
Thanks: bubblewrap seems to have been extracted from flatpak.
The documentation seems to focus on filesystem access sandboxing. Is there something with practical suggestions on how to sandbox things like webcam access, clipboard, screen (for screen sharing/recording apps), networking... with bubblewrap?
(I know some of that is visible in the filesystem, but not all of it is)
Look into Flatpak portals. I'm not sure how they work wrt bubblewrap, but they pass limited interfaces to various things into the container. Things on the host side then prompt the user for which webcam/etc. the app should get access to. I think they are similar to Android intents, in that the app can't ask for access to a specific thing, just for access to a certain kind of thing, and then it is up to the user to choose which one.
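To make that concrete, a portal is just a D-Bus service on the host; from memory of the portal API (exact argument formatting may vary), a raw call to the file chooser portal looks roughly like:

    # Ask the host to show a file chooser; the app never browses the filesystem
    # itself, it just receives whatever file the user picked
    $ gdbus call --session \
        --dest org.freedesktop.portal.Desktop \
        --object-path /org/freedesktop/portal/desktop \
        --method org.freedesktop.portal.FileChooser.OpenFile \
        '' 'Pick a file' '{}'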
Yes, I am familiar with portals and what they do: I was wondering exactly whether bubblewrap contains something similar. It's been ages since I was involved with Linux system architecture (let's say I am stuck in the SysV init world, and I still go for "service apache2 restart" :)), so it'd be great if there was a quick introduction for someone wanting to use Flatpak/Snap-style sandboxing without the dependency management and distribution channels.
bubblewrap can bind-mount paths from outside the container inside the container. I haven't verified this, but I'm assuming portals are just sockets bind-mounted into the container. I suggest you install Fedora in a VM, run a Flatpak app there that uses portals, and try to inspect what files are passed into the container.
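For the filesystem side at least, a minimal bubblewrap invocation looks something like this (a sketch; the exact binds depend on your distro's layout, some-app is a placeholder, and ~/sandbox must exist):

    # Read-only system dirs, fresh /tmp, a throwaway home, and no network
    $ bwrap --ro-bind /usr /usr \
            --symlink usr/lib64 /lib64 \
            --proc /proc \
            --dev /dev \
            --tmpfs /tmp \
            --bind "$HOME/sandbox" "$HOME" \
            --unshare-net \
            some-app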
As a Linux user and app developer (shameless plug: https://github.com/olegantonyan/mpz/), I deliberately avoid snap/flatpak/appimage/etc.
Instead, I suffer with the Open Build Service: https://build.opensuse.org/. It's kind of cool, free, and can build for multiple distros, but making it actually do so is a pain. Still, I prefer this over Flatpak & co., both as a user and as a developer.
It just doesn't look like "the future of application distribution", https://nixos.org/ does.
> Open Build Service
I've heard about that before, but never tried it. How, in practice, do you deal with conflicting library versions?
> It just doesn't look like "the future of application distribution", https://nixos.org/ does.
Strongly agree. I just find it sad that Nix/Guix don't have proper desktop integration (to my knowledge), and that Nix requires nixGL hacks to start any GUI app on foreign distros.
Basically, you write a spec file for rpm (or debian/control for deb) and put your dependencies there as usual. But for each supported distro you can override package names in the OBS project config, so if the same package is called differently in another distro, you substitute it without modifying the spec file. Here's how it looks: https://build.opensuse.org/projects/home:oleg_antonyan/prjco...
It's a mess, without meaningful documentation and with lots of bugs; I've spent hours digging through the source code to get any idea of how to use it.
I build everything statically except Qt. My project is small enough, with only 4 external libraries, so it's OK for now.
Why would you rather install some random rpm from someone you don't know as opposed to a flatpak?
This works both ways: Why would you rather install some random flatpak from someone you don't know as opposed to a rpm?
Trust is outside the scope of package managers.
Because installing an rpm allows you to run arbitrary code as root. Installing a flatpak does not. I mean, there are many more reasons, but that should be enough.
It's explained in the article, in the security section: because installing a flatpak can also run arbitrary code, just as a user. And I won't argue that running malicious code as a user is harmless. Regardless of root access, if you're installing a flatpak and its author wants to pwn you, they can do it even without root access.
Yeah, so why even have user accounts and not just root everything? I mean if there's no difference...
If the future of application distribution looks like learning a new language just get things installed, count me out. Linux is obtuse enough as it is.
The article bashes Flatpak portals as complex, but otherwise doesn't make any solid argument against them. That matters, because portals are easily the most important remaining problem to solve to get an effective sandbox.
I actually like that apps that have no business reading any files on my system simply cannot do so. Sure, you can point to apps like GIMP that have chosen to grant access to the file system by default to make things easier for users while portals get polished. If that bothers you, then simply use Flatseal to change those permissions. Meanwhile, apps like Spotify are entirely restricted from accessing my filesystem. Likewise, Discord can only access files from a specific directory, and a youtube-dl-style tool like VideoDownloader also cannot access any of my files, and always includes the latest version of ffmpeg and youtube-dl regardless of distro politics.
The article's main opposition is to containerization, not portals - the author would like an API for portals (a stable libportal?), but likes that approach overall.
The problem is that we still need a solution to stop hacked apps from calling fopen() on unrelated files. The article ignores how bad the native runtime security solutions are: SELinux is nigh unusable, no distro has many AppArmor profiles, etc.
Flatpak and snap use containerization because Linux's security modules aren't common across distros and aren't very usable.
It took Android years and billions of dollars to get to the state it is in.
Flatpak and portals are moving in that direction (in my non-expert opinion).
All this friction and complaining is just normal for such a big transition in how desktop apps are written. I remember Android devs being up in arms when there were significant API changes in past Android versions. The situation for Linux looks similar to me (aside from the fact that the ecosystem doesn't have money to throw at the problem).
The annoying thing with Android is that, even in the J2ME days, BlackBerry had a more sophisticated, fine-grained per-app permission system that was easy to find and adjust at any time.
Android and iOS have been playing catch-up ever since... Linux I will at least cut some slack, given its age and origins, though it would really have been nice had it addressed security by now. The situation can only get worse, but sooner or later something will happen that will encourage breaking backwards compatibility, I hope.
All that said, I praise the sun every day that I'm not writing BB apps or j2me anymore, so there's that at least.
I agree that the implementation is lacking. Snap has the abysmally named "--classic" parameter to allow installs to "run without confinement". Flatpak can request permission changes at install time (albeit by declaring them), where users are likely to just click OK/OK/OK. The sandboxing needs to be tightened up.
Flathub is a strange beast. There's no mention of security on their wiki. They stopped publishing minutes (or moved them elsewhere?) in 2017 (https://github.com/flathub/flathub/wiki). They have a buildbot for automated updates from developers, but they accept binaries anyway (e.g. https://github.com/flathub/us.zoom.Zoom/blob/master/us.zoom....), so what's the point? It appears to be a fairly amateur effort, and yet it is at the center of the infrastructure Red Hat and GNOME are pushing. I'd love to see some white-hat activity targeted at compromising it, to demonstrate the shaky foundations.
But on the other hand, it's nice that I can run Zoom sandboxed (apparently - it's not obvious what the granted permissions are: https://www.flathub.org/apps/details/us.zoom.Zoom). It's nice that JetBrains and Zoom have a way to publish apps that run on all distros. It's nice that I could roll back a buggy version of IntelliJ with a single snap command that took 5 seconds. The goals are good.
I wish Linus took more of a BDFL approach to the desktop occasionally. Ubuntu and Red Hat need to sit down in a room and have a constructive conversation about converging Snap and Flatpak into something new, deprecating the infrastructure built to date and fixing some of the glaring problems. There's room for both to make money without further diverging the ecosystem.
The Flatpak sandbox UX is bleak. Right now, you have to check the JSON file you linked to see how much access a Flatpak program gets. I disagree with the --own-name and --talk-name flags (I think this is for screen sharing; Zoom should use the screen-sharing portal instead, as letting Zoom talk to GNOME Shell directly could be bad).
--socket=x11 is a massive hole in the sandbox, since X11 does not have a security model: any client can observe and manipulate any other client. For X11, a viable solution would be running Flatpak apps in Xephyr, but Flatpak doesn't do that. Long-term, Wayland is a better solution.
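(The Xephyr approach is easy enough to try by hand; the display number here is arbitrary and some-app is a placeholder:

    # Start a nested X server in a window, then point an app at it
    $ Xephyr :2 -screen 1280x800 &
    $ DISPLAY=:2 some-app

The nested server is its own display, so the app can't snoop on the clients of your real one.)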
Try this to manage application permissions: https://flathub.org/apps/details/com.github.tchx84.Flatseal
It's not really that bad. You are told the permissions requested by an app on installation:
    $ flatpak install flathub com.microsoft.Teams

    com.microsoft.Teams permissions:
        ipc    network    pcsc    pulseaudio    x11    devices
        file access [1]    dbus access [2]    tags [3]

        [1] xdg-download
        [2] org.freedesktop.Notifications, org.freedesktop.secrets,
            org.gnome.SessionManager, org.kde.StatusNotifierWatcher
        [3] proprietary

            ID                     Branch    Op    Remote     Download
     1.     com.microsoft.Teams    stable    i     flathub    < 86.0 MB

    Proceed with these changes to the system installation? [Y/n]:

and you can query an app's permissions at any time:

    $ flatpak info --show-permissions com.discordapp.Discord

    [Context]
    shared=network;ipc;
    sockets=x11;pulseaudio;
    devices=all;
    filesystems=xdg-download;xdg-pictures:ro;xdg-videos:ro;home;

    [Session Bus Policy]
    org.kde.StatusNotifierWatcher=talk
    org.freedesktop.Notifications=talk
    com.canonical.AppMenu.Registrar=talk
    com.canonical.indicator.application=talk
    com.canonical.Unity.LauncherEntry=talk

GNOME Software in GNOME 41 also has a much better list of permissions than the version shown in this article: https://blogs.gnome.org/tbernard/2021/09/09/software-41-cont...
> x11 does not have a security model - any client can observe and manipulate any other client
This hasn't been the case for decades. X has the ability to isolate clients and/or only allow partial access between clients. For example, try this:
    $ xauth -f lockthisout generate :0 . untrusted
    $ XAUTHORITY=lockthisout xterm

The first line generates a new X authority file with a single entry that causes the display :0 to be treated as untrusted. Then whatever runs inside xterm (or whatever you launch) will be considered untrusted; e.g. you won't be able to run any OpenGL application or xdotool (it actually crashes after saying the XTEST extension isn't available; I guess it can't handle that case).

Note that this can be worked around by running "export XAUTHORITY=~/.Xauthority", since pretty much everything uses ~/.Xauthority for the local user (which is connected as trusted). That in turn can be addressed by storing the session X authority file somewhere else (AFAIK some distros generate a new randomly named one for every session), or you could use something like AppArmor or SELinux to restrict the untrusted application's access to the session's X authority file. Or just run it as a different user who is always untrusted, though that can be inconvenient for some types of applications.
That said, there are some issues, mainly because the whole X security functionality hasn't seen much attention in recent years. For example, an untrusted client can't use 3D-accelerated graphics, but you may actually want to run a 3D app while not giving it access to everything else. That isn't due to some inherent limitation of X, though; it's just that since the entire GUI sandboxing thing was never much of a priority, nobody bothered to work that part out.
In the future it might be better for an X desktop to run untrusted applications under a Wayland "pseudocompositor" that simply lets X handle the actual window management (current Wayland compositors that can run under X create a window and treat it as a display, which isn't exactly nice from a UX perspective and doesn't allow trusted X programs like xdotool to work with Wayland windows).
> right now, you have to check the JSON file you linked to check how much access a flatpak program gets
In general, GUI package managers w/ Flatpak support show you the permissions, as does the CLI upon an attempted install.
> Snap has the abysmally named "--classic" parameter to allow installs to "run without confinement".
Only if the snap was built with classic confinement in mind. Otherwise, just slapping --classic on a random snap does nothing; there's even a warning displayed about that. Classic is very much the same as a random 3rd-party vendor app built and unpacked under /opt. Unfortunately some software, especially IDEs and languages, is unsuitable for running under confinement and needs to be distributed this way. By passing --classic you give your consent to use an app package in this sub-par way.
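So, for example (the app names are just illustrations):

    # Has an effect only because the publisher built this snap for classic confinement
    $ snap install code --classic

    # On a strictly confined snap, the flag is ignored with a warning
    $ snap install gimp --classic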
FWIW, perhaps you meant --devmode, which as the name implies is mostly for developing a snap? snap install --help describes that as:
    --devmode    Put snap in development mode and disable security confinement

> especially IDEs and languages are unsuitable for running under confinement and need to be distributed this way
I think the solution to that should be something other than removing the sandbox. I see "portals" referred to elsewhere in this thread w.r.t. flatpak; does snap not have similarly?
> By passing --classic you give your consent to use an app package in this sub-par way.
Sure, but it's a terrible name. It should be --unsandboxed or --fullaccess. "Classic" sounds like a mode you'd want.
> I think the solution to that should be something other than removing the sandbox. I see "portals" referred to elsewhere in this thread w.r.t. flatpak; does snap not have similarly?
Snaps use portals like Flatpaks do. The problem is more that the frameworks you build applications with aren't ready to consume portals. It turns out only some reasonably recent Qt versions can work with portals; IIRC Electron only landed support a couple of months ago. Back on the host side of confinement, you need to run a reasonably recent xdg-desktop-portal (and a matching GUI integration bit). Some old distros do not provide any of those packages, or ship versions that don't work, so things are DOA.
Meh... I used to install tarballs, hunting for .so files, then RPMs, dealing with circular RPM dependencies, then DEBs, dealing with unmet dependencies. As I age, all that got boring. I hate Snaps because of how intrusive they are to the system.
Flatpak brings Linux to what OSX had like 10 years ago. It just works
Regarding download size? Even I, living in a third-world country, have an internet connection good enough not to care about download sizes...
Regarding file redundancy: if that were much of a problem for anyone, I'm sure it could be dealt with at the file system layer: hash/detect duplicate files and compact them, then copy-on-write. Personally, HDD space hasn't been an issue for me on PCs or laptops in a long time.
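(Tools for this already exist; on btrfs or XFS, offline dedup is one command, assuming the duplicates live under Flatpak's default system path:

    # Scan recursively for duplicate extents and issue dedupe ioctls
    $ duperemove -dr /var/lib/flatpak

And the OSTree-level dedup mentioned elsewhere in this thread already hardlinks identical files between runtimes.)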
Flatpaks finally just work, and we have the technology to make them usable. Moreover, I argue that Flatpak IS the future, as those two points will only become more irrelevant as time goes by.
Unfortunately, Flatpaks are the new SystemD. As in, a solid technology which solves real-world problems but will face immense resistance from a community filled with splinter groups that have their own pet technology in the same niche and will accept absolutely nothing else as an alternative. There is simply no path to standardization in this community other than big players like Red Hat and Canonical strong-arming their pet technologies in, much to the disdain of everybody else.
As for me I'll keep using Flatpaks as they work great and have wide support. Alternatives are either obnoxious to use (Nix and Guix) or lacking in basic functionality (Appimages).
SystemD is not harmful for distributions. Flatpak replaces high-granularity, properly managed dependencies and security updates with huge blobs.
Flatpaks are not meant to replace established package managers, but to act as a supplement to them. Flatpak's design decisions go to huge lengths to ensure it is a side-by-side system with the default package manager, e.g. the notion that Flatpak packages have a different XDG_CONFIG directory so as not to conflict with applications installed via the main package manager. Therefore the argument that they are harmful to distributions is absurd, as there is no aim to replace your precious "properly" managed dependencies, and the option to never opt in will always be available.
If Flatpaks weren't in competition with package repos, they would serve no purpose. They are explicitly in competition with package repos. They exist so that app developers can get their work out to users without cooperating with downstream packagers.
Overlap in functionality does not equal competition. Yes, they exist so that app developers can get their work out to users without cooperating with downstream packagers, but that does not keep anyone from packaging applications using the traditional methods.
Exactly, app developers who opt for flatpaks no longer have to give a damn about downstream. App uses an ancient version of OpenSSL that downstream won't package? No problem, use a flatpak. App needs symbols from a dropped API in ImageMagick and downstream won't package the out-of-date library? No problem, use a flatpak.
When app developers needed to work with downstream, end users won, because it forced app developers to keep up with the environment to meet the requirements of repo packagers. Flatpak is directly in contest with this, trading packaging efficiency and up-to-date apps for app ossification and upstream convenience.
> app developers who opt for flatpaks no longer have to give a damn about downstream
As opposed to today, where app developers definitely don't just tell users to run:
    sudo apt-key <name> --keyserver hkp://<X>:80 --recv <X>
    sudo add-apt-repository 'deb [arch=amd64] https://<X> multiverse'

Or even more simply:

    sudo apt install ./mydownloadedpackage.deb

People can correct me if I'm wrong, but I think Ubuntu even allows double-click installs of downloaded Debian packages. Is there a complexity here that I'm missing?

----
As far as I can tell, the parts of distribution that Flatpak makes easier aren't the literal distribution parts. Yes, it means you need to create fewer bundles (just a Flatpak instead of deb, rpm, etc...). But it's more about the containerization than the distribution; if all you care about is distributing Debian packages outside of the official repos, that isn't necessarily that hard a problem. I think you can even host a custom package source on a static site through GitHub Pages; you don't even have to pay for a server.
The big change here is that you don't need to share dependencies. But importantly, you already didn't have to share dependencies: Linux apps could already embed unsupported system dependencies that they wanted to use, and many of them (particularly games) already did.
This makes that process easier, which you may view as a negative, but another benefit of that is that easy linking with Flatpak runtimes means applications are more likely to link to those libraries in a more inspectable way, and it's easier to at least audit the applications that were already bundling these dependencies before.
> Flatpaks are not meant to replace established package managers, but to act as a supplement to them.
I'd be genuinely surprised if distributions like Fedora still use, or actively maintain, RPMs for any GUI software in a few years.
For me both systemd and flatpak have made using Linux better, so I use them and basically ignore the political debates. If something better comes along, I'll use those too!
To be fair most of that is a result of an ecosystem of idiots and not of the fundamental design itself.
It has a namespacing feature, so f-ck'n use it. Instead, they intentionally produce collisions.
Providing and sharing common runtimes could have worked pretty well if developers actually used a small set of common runtimes instead of picking from one of a gazillion slightly different rebuilds.
As a replacement for "native" package management? No. Flatpak makes sense if you have a small number (< 5) of packages that you want to run from upstream, so you always have the latest version, while continuing to use your distro's native packages for everything else.
Unfortunately (and ironically) it's ill-suited for games, which need to have the latest GPU drivers available, which is antithetical to the whole idea of a stable, universal base system.
Game consoles are a stable universal base system, hence why game studios prefer them to regular home computers.
> if developers actually used a small set of common runtimes instead of picking from one of a gazillion slightly different rebuilds
All the runtimes on Flathub are built on top of the standard fd.o runtime, so the OSTree-level dedup should generally work for them.
> Unfortunately (and ironically) it's ill-suited for games, which need to have the latest GPU drivers available, which is antithetical to the whole idea of a stable, universal base system.
I believe there's an extension available for the latest Mesa builds, so you can opt for the bleeding edge and then just remove the extension if things break.
> As a replacement for "native" package management? No. Flatpak makes sense if you have a small number (< 5) packages that you want to run from upstream so you can always have the latest version while continuing to use your distro's native packages for everything else.
There are several "Linux" operating systems that use an immutable base image: Fedora Silverblue, Endless OS, purportedly Steam OS 3. The idea here is the distro packages are great for composing your base system, but once that is done, the entire `/usr` filesystem is frozen. You can upgrade it or roll it back atomically, and in some cases you can do some magic to apply smaller changes (Silverblue has a nice implementation with `rpm-ostree`), but distro packages are intentionally not a typical end user thing. In this kind of OS, the best way to install apps is using something like Flatpak. And that's very much what Flatpak is designed for.
So…

    $ flatpak list --app | wc -l
    94

And it works well :)
> If you are a Linux distribution maintainer, please understand what all of these solutions are trying to accomplish. All your hard work in building your software repository, maintaining your libraries, testing countless system configurations, designing a consistent user experience… they are trying to throw all of that away. Every single one of these runtime packaging mechanisms is trying to subvert the operating system, replacing as much as they can with their own. Why would you support this?
This sounds like a Luddite who is against machines. How about embracing the revolution? Then you can save a huge amount of work and focus on other things. There are enough other things distro people could focus on instead.
I don't buy any of the technical arguments here, and it sounds mostly like a moralizing argument rather than anything else.
The whole point is that this is technically worse and is only being used to try to bootstrap a walled garden where they (Red Hat and Canonical, for Flatpak and Snap respectively) get paid a fee thanks to a monopoly on app distribution to their users. This is not 'the revolution' we should support. I encourage you to study the technical arguments until you understand them well enough that they become convincing.
> monopoly on app distribution to their users
And the argument that it is a walled garden is simply nonsense. It's not a monopoly any more than any default repository is.
> I encourage you to study the technical arguments until you understand them well enough that they become convincing.
I have been using Flatpak since it came out, and many other people in this thread have already pointed out the failures of the technical arguments; no need to do that again.
I guess Canonical, who make 100% of their revenue on support contracts for open source software, don't release the Snap server code as open source because... what reason are you going to come up with? What possible reason is there, apart from making it difficult for anyone else to set up any repository at all?
It's the same reason they bet big on Ubuntu on phones. App stores are worth a ton of money: you contribute effectively nothing and collect 25% of other companies' revenue, purely because you can lock out any competing app store.
I don't care about Snap, only Flatpak.
Good for you, here is some candy?
Your arguments simply don't hold any water when we are talking about Flatpak. So I guess you can have candy but no water.
Just announcing that you think something you disagree with is wrong isn't an argument at all. I know you think my arguments are wrong, since you are disagreeing. The only reason you have given for thinking they are wrong is that you don't find them convincing. This may be because you have a secret reason you don't want to share, but I believe it is more likely that you are more interested in being disagreeable than in understanding.
> This sound like a luddite that is against machines.
The Luddites were not anti-tech from a moral perspective; they understood the mechanization of work to profit the bosses and go against the interests of workers (artisans), and therefore practiced sabotage. Despite this initial mischaracterization, I believe the metaphor holds: the author is against this application of this technology precisely because they believe it produces a net negative impact across the ecosystem.
Sure, I agree with your interpretation of luddites historically.
However, I still think moral is the right word. The consequences are considered immoral, so it is very much a moral perspective.
And I think the argument that it is bad for the ecosystem is fundamentally wrong.
I dunno. I had my doubts for the longest time, but recently I've started, little by little, placing my bets on Flatpak. Snap can go to hell, for reasons exhaustively discussed here and elsewhere, but Flatpak is reasonably solid, open and performant technology that fits the problem it's trying to solve. I wouldn't mind if it took over the distribution of big GUI desktop apps on Linux.
The fact that Valve ships the Steam Deck with a read-only root managed by OSTree and user apps installable via Flatpak shows me the direction of the future Linux desktop, and I welcome it wholeheartedly. I am done with distro-managed packages for desktop apps, custom patches, and the impossibility of shipping closed-source software on Linux because it's a mess.
Flatpak in my book rocks, and it's ahead of macOS DMG files and light years ahead of the Windows strategy of .exes downloaded from the Internet.
There are quite a few things I don't agree with in this article.
First, the suggestion that permissions should be prompted for when running the app, with the app using some sort of API. This would mean that Linux apps become tightly coupled to Flatpak. I have no idea how it would be possible to convince all developers, distributions and vendors to adopt Flatpak; I think it would be largely ignored, so in the end it's an ineffective way to bring those changes to the Linux ecosystem.
Then, I don't have the real answer, but prompting users with questions while they use the app is the worst approach to security. Being tech-literate helps you understand what's happening, but people with less understanding answer those prompts out of annoyance, because they're trying to do something and, all of a sudden, the operating system gets in the way. The choice is not driven by security concerns but by frustration or urgency.
I can't think of any operating system that has found the right solution yet, but I do think the Flatpak approach is by far the smartest: just naturally grant access to things by following user choices (give access to the selected file once the file is selected; give access to the screen the user chose to share once they choose to share it). No more stupid prompts that get in the way of things.
Speaking of runtimes, it's exactly the same problem. For Flatpak to be adopted, it would require all distributions to invest a lot in it. How would it work for Gentoo and its USE flags, for example? How do you deal with different versions of dependencies?
What would likely happen is for distributions to build the equivalent of runtimes on their side, allowing several versions to exist in parallel. So in the end it would be a similar situation, but much messier, as each distribution would have to do the job. Is having those runtimes an actual problem, or the solution Flatpak came up with for the problem of fragmentation across distributions? In my opinion, it's the second.
Nice to see some effort to stop the Docker insanity. It's a great way to download 2GB over a slow internet connection so you can use a 200kB piece of software.
A good illustration of why easy wins every time.
In my experience, Flatpak and its ilk make it easier for end-users to install applications and for developers to distribute them.
Alternatives may be orders of magnitude better from a performance, size, and/or complexity standpoint, but until they're at least as easy for both end-users and developers, they'll never reach critical mass.
Snap startup speeds have increased considerably since https://ubuntu.com/blog/why-lzo-was-chosen-as-the-new-compre...
Spotify and telegram open up pretty much instantly for me. Ubuntu 20.04
I think the author's point was that snapd requires mounting apps at boot time, which can take a while. That's a tradeoff I'm personally not willing to make, as someone who starts and stops machines several times per day.
From my subjective (user) point of view, flatpak is pretty fine. Yes, I do get annoyed whenever I run flatpak update and see 5-10 updates to drivers or runtimes in the range of ~100MB each. But I also want applications to be "just working".
What kinda annoys me, though, is that file dialogs open in the background. By now I am used to checking for it, but when the main window is still in front, yet blocked because focus is in the file dialog modal, that can get confusing for a moment.
AppImage is OK for me too, though the application needs to be "complex" enough to justify the download size: precisely what the KCalc example from the article does not do. I didn't even know there is a daemon thingy that auto-integrates them; usually I put them in a folder, grab an icon somewhere and place it next to the AppImage with the same name, then add an entry in the OS manually so it appears in the app starter, can be placed on the task bar, etc.
It's just a few clicks and if I ever get annoyed by it because I find myself doing it several times a day, I should ask myself if I am maybe just installing way too much stuff? :)
Edit:
I wanted to mention that there is an application called "Flatseal" that lets you view and (to a degree) edit the permissions of Flatpak apps: https://github.com/tchx84/flatseal (I use it for spectating only, on occasion, because too much fiddling will probably break the app at some point).
> From my subjective (user) point of view, flatpak is pretty fine. Yes, I do get annoyed whenever I run flatpak update and see 5-10 updates to drivers or runtimes in the range of ~100MB each. But I also want applications to be "just working".
For most updates those download sizes are not really representative of the actual update size. For example, right now I had two updates available with a total size of 336.5 MB + 275.2 MB. However, `flatpak update` actually only had to fetch 17.4 kB + 4.4 kB and was done in a few seconds.
The file dialog issue is fixed in xdg-desktop-portal-gtk 1.10: https://github.com/flatpak/xdg-desktop-portal-gtk/pull/347
> usually I put them in a folder, grab an icon somewhere and place it next to it with same name, then add an entry in OS manually
I'm also familiar with this approach, but that's unfortunately not something I can easily teach my friends. AppImageLauncher is arguably more user-friendly, and I wish it were more commonly shipped as a distro default.
> The state of software development is downright miserable in 2021.
This tired refrain shows an obscene lack of gratitude for all the things that just work. Yesterday evening I participated in a Twitter Space on gratitude for all the great things we have as software developers, and I will repost the recording here when it's available.
Edit: My mistake; the recording was already posted: https://www.youtube.com/watch?v=U10SuAHV8kQ
I have had the least issues with AppImage. It does not fit every use case. It does not need to. If you can ship an AppImage, please consider doing so.
I don't know about others, but I use snap basically for apps that are not in the repository, and I can avoid the PPA mess.
Currently the only snap app on my laptop is Brave. I remember when I ran apt install brave, there was a line in the console suggesting I use the snap instead.
So for me, anecdotal as that is, it's fine. I agree with the author that it's too much bloat; it doesn't matter that disk space is cheap.
Re security: this is probably best fixed by packaging apps with SELinux rules, which are enforced and cannot be disabled, plus responsive maintainers. There could be more than one version of an app to allow different SELinux rule strengths, per app.
Re disk space: no mention of Nix? That also solves updateability for security issues. Another solution is Gentoo's -- deliver the sources and recipes, (re)build as needed. That could also work in userspace.
It would be nice to be able to 1) distribute Linux GUIs 2) let friends install them easily 3) have the friend edit the GUI app logic and run the modified version. AFAIK, this is currently only possible with apps based on interpreted languages (e.g., Python+GTK), but even there you'd need to guide your friend to install the required packages.
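For illustration, this is the kind of single-file Python+GTK app I mean - a friend can open it in a text editor, change the logic, and rerun it (assuming python3-gi and GTK 3 are installed, which is exactly the guidance problem):

    import gi
    gi.require_version("Gtk", "3.0")
    from gi.repository import Gtk

    # A friend can edit this label or add widgets and just rerun the file.
    win = Gtk.Window(title="Editable app")
    win.connect("destroy", Gtk.main_quit)
    win.add(Gtk.Label(label="Edit me and rerun!"))
    win.show_all()
    Gtk.main()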
Re "the state of software complexity in 2021": Ignore complex software, and just don't include it on your system. Start with a subset, and add only what you need. That's how OpenBSD remains manageable. If people want to extend, they can go ahead. But you don't have to fix the world.
Making LSM-based rules for desktop applications is relatively complex and inflexible (what if the user wants to grant access to a file for a short amount of time? Portals let you do that). Are you rebuilding policies on the fly?
Is there a reason SELinux/AppArmor policies couldn't be live-edited for that kind of purpose? As it is, it would require root, but wouldn't it be possible to extend security rules with user rules, which could add restrictions but not lift system-wide ones? In this (hypothetical) scheme we'd package e.g. GIMP with a user profile restricting it to ~/Documents and /media/USERNAME, but you could then grant it additional permissions (e.g. to ~/Pictures).
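To sketch the idea (AppArmor-ish syntax; the user-editable layer is the hypothetical part, it doesn't exist today):

    # System-shipped profile for GIMP; users could tighten or re-grant
    # within it, but not escape it.
    /usr/bin/gimp {
      #include <abstractions/base>
      owner @{HOME}/Documents/** rw,
      owner /media/*/** rw,

      # Hypothetical user-layer addition, granted later:
      owner @{HOME}/Pictures/** rw,
    }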
> If I ship an app for Windows I don’t have to include the entire Win32 or .NET runtimes with my app. I just use what’s already on the user’s system.
No, you just trigger the download on install to get the 15th version of the runtime
End users do not care about this. If the download works then they are happy.
A non-flatpak example is Electron. No one cares how big it is. It works.
I download huge games from Steam all the time and have never looked at the size of a game even a single time... in years. Disk space is still cheap on end users machines.
As long as the calculator works I do not care how much disk space it takes up.
> No one cares how big it is.
As someone who's helped many friends/neighbors struggling with limited disk space (whether on desktop or Android), I don't think this is true at all. I mean, end users are usually not conscious of what a reasonable size is for an app, and will often uninstall one app to install another instead of complaining about app bloat. They're still very much suffering from the problem and do care about it.
> No one cares how big it is.
A lot of people care, especially Linux users.
> Disk space is still cheap on end users machines.
Not cheap for everyone.
>Note that the app package itself is only 4.4 MB. The rest is all redundant libraries that are already on my system.
The author promotes avoiding this redundancy by using the libraries on the host. But maybe we should go the other way, by having only the bare necessities to run the OS installed on the host, and install all applications as Flatpak.
I agree with some things though. Different versions of the runtimes should share as much as possible, so as to consume as little space as possible, and libraries retaining backwards compatibility would help with this. And applications should target the regular Freedesktop/GNOME/KDE runtimes if possible, Fedora shouldn't be doing their own thing completely separate from Flathub. And we need to get applications to use portals and not allow them to declare file system access on install time.
> But maybe we should go the other way, by having only the bare necessities to run the OS installed on the host, and install all applications as Flatpak.
I believe this is the approach Endless OS and Gnome OS are using.
>They should... Build a fine-grained user-interactive runtime permission system that requires the app to make Flatpak-specific API calls to activate permission dialogs
What the article wants is useless without fixing Linux's security modules. Very few people know how to use SELinux or AppArmor, there's no standardization between distros and apparently even RedHat has given up.
Flatpak and Snap do containerization because they have no good alternative.
P.S.
>If I ship an app for Windows I don’t have to include the entire Win32 or .NET runtimes with my app.
Note that Microsoft has been moving away from that with .NET Core/.NET. The new runtime does not come with the OS. It's still possible to create a package and have .NET installed separately, but as far as I can tell, most people prefer to bundle the runtime with the package.
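For example, a self-contained publish is just a couple of project properties (csproj fragment below; linux-x64 is one possible runtime identifier), after which dotnet publish bundles the whole runtime alongside the app:

    <!-- csproj fragment: ship the .NET runtime with the app -->
    <PropertyGroup>
      <RuntimeIdentifier>linux-x64</RuntimeIdentifier>
      <SelfContained>true</SelfContained>
    </PropertyGroup>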
I think this article is not providing adequate alternatives.
Most programs link against quite a bit more than libstdc++ and you clearly can't expect users to install a bunch of packages to get their program running. Those extra libraries usually take up most of the space in a package. If you include all the shared libraries you need, apart from the system libraries in your distribution, you pretty much have an AppImage already, just as a tarball instead of a squashfs image.
I think GOG's path is not horrible, but it is much less effective at making Linux a viable gaming platform. They literally only support Ubuntu 16.04 and 18.04.
It also struck me as odd that the author was complaining about the complexity introduced by Proton. What is Valve supposed to do?
> you pretty much have an AppImage already
That's not wrong, but AppImage is focused on the UX of the application as a single file, which you can move/copy around and which can be integrated with the desktop launcher; I personally don't know of standard solutions for that with a classic tarball. For example, Tor Browser is a classic tarball and works great, but the .desktop generation relies on some hacks rather than a properly-defined mechanism.
> the author was complaining about the complexity introduced by Proton
That's not what I understood from the article. I mean, Wine/Proton makes sense on its own to run Windows apps; I think the author was criticizing that their specific approach relies on Steam-specific runtimes instead of using system libraries.
> I implore you, do not use these packaging tools. Don’t add their services to your Linux distributions, don’t use apps packaged this way, and don’t ship apps that use them. Mass containerization and alternate runtimes cannot possibly be the future of desktop apps on Linux. If this is really the direction it’s going, the future will be so shitty that we’ll all end up back on macOS or Windows.
As a long time desktop user, I've dropped Ubuntu for Debian because of this. At some point in the past Ubuntu seemed like it was going all in on snaps. I don't know what they're doing now, but I don't care any more.
As a Linux user, but not a developer, I've always been surprised by the backlash against Flatpak and Snap.
For me they work wonderfully. On Ubuntu I usually go for the snap since it just works. Solving dependency issues before I could use a tool always cost me time; now, not so much. Even AppImage is nice.
The fact that most software is even available on Linux now is due to developers being able to create one app for Linux, instead of multiple.
Maybe I'm not familiar enough with GNU/Linux to see the drawbacks, but if it's just disk space I'll happily make that trade.
> Flatpak and Snap apologists claim that some security is better than nothing. This is not true.
Oh no we're all doomed.
Don't know if it is the future, but it has been making my present much better.
I can finally have a stable distro with released-yesterday packages. As a test I tried installing GIMP on a Raspberry Pi, an x86 Ubuntu 18.04 machine, an x64 Ubuntu 16.04 machine, and an x86 Ubuntu 20.04 machine. Three architectures, four distro releases, four machines: one command, the same software. It took a while to install on the Raspberry Pi, but it worked.
Flatpaks, snaps and AppImages finally blur the lines between distros. If there are problems, they should be improved, not abandoned.
Flatpak very likely is not the future indeed but it seems like a good step towards it.
Of all the author's arguments, I only agree with the RAM usage part. It's true that containerized apps will lead to much more CPU cache thrashing, which can dramatically reduce performance if you are regularly starting and stopping programs. This is indeed a problem that needs solving. I can envision a NextFlatpak that keeps a global cache of versioned libraries (kind of like Apple does in macOS) so as to reduce RAM usage as much as possible.
And I agree with the author on startup time (which is also related to memory usage and CPU cache thrashing). That too is a problem that needs solving.
The rest of the arguments though, meh. Are you seriously arguing the case that we need to respect 120GB NVMe SSDs because they are "expensive"? Really? I spent 255 EUR (288 USD) on a 2TB NVMe SSD (and it has 3210 TBW of endurance, one of the very highest of all NVMe SSDs!) for a spare dev laptop and I am convinced most devs would have no trouble making that purchase any time they get the whim about it. And even people in my poor country, people who are mind-blown by their new 1300 EUR salary, can still plan financially well enough to afford a new iPhone 3-6 months later. So again, not convinced here. As much as NVMe SSDs are expensive per GB, they are still well within the grasp of most users who actually need them.
As for security, it seems the author overstates the efforts of distro maintainers. They do an admirable job, this is well known, but they really can't be everywhere, so praising them as much as the author does (and as other posters here do) might be a bit misguided. Flatpaks barely change the game there, especially if they get properly isolated (as another poster alluded to: I want to be able to cut off all internet access for an app, just as one example). And none of what the author states will stop a malicious app that's not in a container, so really, what's his point there? That the Debian maintainers inspect 100% of all apps and will spot a malicious calculator? Come on now.
And the rest of the post is just ranting and having a beef with a few things, which is clearly visible in his change of tone. Call me crazy, but such an article can't be called convincing. ¯\_(ツ)_/¯
It seems that Flatpak, Snap and AppImage are the present and future of packages that work on multiple distros. There are no other new alternatives in the article.
You may be interested to check out nix/guix. They're certainly not perfect either, but they represent a radically-different paradigm which can be worth researching.
This gave me an idea: how about shipping an installer with an embedded AppImage? The installer would check the system libraries for compatibility and, if compatible, install the application so that it uses the system libraries. If not, it would install the AppImage (and do the desktop integration stuff).
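A rough sketch of the compatibility-check half of the idea (Python; glibc-only, and the AppImage fallback path is hand-waved):

    import os

    def glibc_at_least(required: str) -> bool:
        # On glibc systems os.confstr returns e.g. "glibc 2.31".
        version = os.confstr("CS_GNU_LIBC_VERSION").split()[1]
        parse = lambda s: tuple(int(p) for p in s.split("."))
        return parse(version) >= parse(required)

    if glibc_at_least("2.17"):
        print("compatible: install against system libraries")
    else:
        print("incompatible: unpack the bundled AppImage instead")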
I actually really like AppImages for distributing games and small apps. The user experience is a lot like executables on Windows, where you can just download any exe from anywhere and expect it to just work. Sure, the large size can be a problem, but not every AppImage will necessarily end up that huge.
The lack of sandboxing isn't a big deal in my eyes because, as the author mentioned with Flatpaks, most apps end up with too many permissions anyway. Proper sandboxing needs some kind of gatekeeper/moderator to pressure developers into following the rules. This can be done with app stores, but the only working app store for Linux desktops is Steam, and that's only for games. (To be fair, KDE Discover works, but it's a very poor user experience.)
In this scenario, what would be the point of deferring to the system libraries if you already shipped the built-in ones and made the user download them? As far as desktop integration goes, there is already a tool available that will set up the .desktop file for you when you first run an AppImage.
>The lack of sandboxing isn't a big deal in my eyes
It should be; executing random AppImages you've downloaded online is a huge security liability.
>Proper sandboxing needs some kind of gate keeper/moderator to pressure developers into following the rules
You make a good point that good moderation in the store is very important, but even lacking that you can still tweak the sandbox permissions yourself (very easily, in fact, with Flatseal), and it rocks to have that feature available to you.
> In this scenario, what would be the point of deferring to the system libraries if you already shipped and made the user download the built in ones?
Presumably you'd delete the installer after you finished installing the app, the same way people do on Windows. Also, the appimage could be compressed in the installer to reduce the size. I haven't worked out all the implementation details, but I'm sure it's doable in a user friendly way.
> It should be, executing random Appimages you've downloaded online is a huge security liability.
That's FUD; nobody is downloading and executing "random" software from the internet (if a trusted developer gives you malware, that's another story). Checksums and/or code signing can add peace of mind (a quick sketch below). Windows and macOS have done just fine without any sandboxing whatsoever.
I'm all for sandboxing, but if it isn't being implemented properly, it's just an extra layer of headache on the frustration cake that is Linux software distribution.
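On the checksum point, verification is cheap enough that there's little excuse to skip it - something like this, with the hash taken from the project's release page (filename and hash here are made up):

    import hashlib

    expected = "0123abcd..."  # published SHA-256, truncated for the example

    h = hashlib.sha256()
    with open("SomeApp.AppImage", "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)

    print("OK" if h.hexdigest() == expected else "MISMATCH")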
> you can still tweak the sandbox permissions yourself
You and I, maybe, but the average user who just wants to install GIMP isn't going to understand why they'd want to do that. Why should I be suspicious of this app? Is the developer shady? Who even is the developer? Is the distributor shady? Who even is the distributor? Is my wifi password not strong enough? Etc.
This clearly shows how old I am (or have become); it is a (crappy) calculator:
>Note that the app package itself is only 4.4 MB.
Calc 98 is huge at 495 KB:
http://www.calculator.org/download.html
Xcalc (RPN) is around 200-240 KB.
I write commercial applications in Qt/C++ for Windows and Mac. People do ask me about Linux versions. It would be relatively straightforward to port the code to Linux. But the mess of different distributions and libraries really puts me off.
Is anyone here distributing a commercial Qt/C++ app on Linux? Are you using Flatpak or something else?
Just use AppImage. Users can download it as-is from your website and simply run it locally. No need to deal with distributions and their package managers.
EDIT: if you use CMake, which I guess you do, you can integrate it using CPack and an external generator: https://github.com/AppImage/AppImageKit/issues/160#issuecomm...
I will put AppImage on my todo list to check out. Thanks.
I still use qmake (old school!).
Vouching for AppImage too, I ship https://ossia.io like that
Ossia looks impressive. Does it use Qt or some other GUI library?
Yes, it uses Qt for the GUI. Thanks ^_^
I am very out-of-the-loop on Linux. Your download page says:
"Your system must have at least glibc-2.17, as well as X11, ALSA, libGL, librt, libdbus."
What rough percentage of modern Linux computers would you say that covers? 99%, 90%, 50%?
Should be very close to 90%. Anything >= Ubuntu 14.04 (just tested it quickly through Docker) or CentOS 7 (on which the builds are done) should work.
All of these things ape my dissertation.
see: https://www.usenix.org/legacy/events/atc10/tech/full_papers/... and http://www.usenix.org/events/lisa11/tech/full_papers/Potter....
What I did differently was basically to say that it should be a Linux distribution: each layer should be equivalent to a Debian/Red Hat package, with full dependency information between them. Therefore it would be easy to a) create an image (just pick the highest-level things you want and the rest get resolved automatically, much like calling yum/apt-get in a Dockerfile) and b) upgrade an image when you want to (i.e., similar to upgrading an existing Red Hat/Debian system; it just creates a new image artifact that can be tested and deployed).
You also don't have much hiding in the system, as opposed to today: yes, images might be built on Debian, Ubuntu or Red Hat, but you can't easily verify what changes were made in the middle. In my system, imagine there's a "Debian" layer repository; in general you would end up with a bunch of easily verifiable Debian layers and a small set of "user-defined" layers (and, when an image is deployed, the actual container layer). The user-defined layers would be much harder to hide things in, i.e., it would be very visible if one were overriding binaries or configuration files that you expect to come from a controlled package.
Considering the amount of language and implementation that Docker seems to share with my work, which predates it, one has to wonder if they saw my talks / read my papers (though yes, it's very possible they came up with it totally independently as well). They were active in the USENIX and LISA communities (at least when those were more alive).
This article is full of errors, or at least misleading statements. It comes across as trying to say something controversial to get attention.
Just one obvious glaring issue with the article:
>Note that the app package itself is only 4.4 MB. The rest is all redundant libraries that are already on my system
Um, no. It's not redundant. Your system might get updated, replacing the needed libraries with backwards incompatible libraries, and then your app would break.
I do agree with the general idea that not limiting possible runtimes to an approved list means essentially infinite runtimes, which can balloon disk space. I think you need choice to prevent stifling innovation. At least at first. But hopefully the community will end up doing the right thing here over time.
Right! Time to push back against these developers who use Flatpaks, AppImages, Snaps, Electron, etc. We are no longer in the world of doubling performance and memory size. What's more, people are conscious of power usage nowadays. Time to fight the bloat!
Higher memory consumption as a trade-off for developer convenience might be questionable in many cases, but here it is to increase USER convenience.
What are developers supposed to ship their apps with? Make packages for every Linux distribution under the sun? Flatpaks and AppImages solve a real problem. The current alternative is NOT being able to use the specific software at all because you don't have the right version of Ubuntu.
Maybe Guix/Nix are solutions for some of the pain points of traditional package management. And yes, libraries really need to focus on backwards compatibility. In the meantime, AppImage/Flatpak gets the job done. I am not out of luck when my distro does not offer the right package with the right version.
Static linking is always an option. It can even benefit from LTO and PGO this way.
Fight it yourself. Find a solution that's easy for developers and users and everyone will use it. It's very simple, actually (though not easy).
We should have performance indicators with every app; the kids who grew up on the performance-landfill slopes of Mount Moore would have a hard time explaining away their WhatsApp clone needing the same percentage as a major application.
Yeah: ye who enter the plateau, abandon all hope of getting away with leaky abstractions.
I wish lazy bloat-loading were built into apps as an architectural pattern: a component only installs when you actually use it.
Doubling performance, no. But doubling memory size... kinda. 8-16GB is kind of the baseline now. Laptops with 64GB are increasingly common in certain niches.
This being said, I do see these systems that basically just double down on adding layers for layers' sake as more disruptive than systemd.
Agreed
"There will still be occasional differences between distributions. These are things you can work around. It may be painful at first but it will be worth it: you will provide a much superior user experience by working around the issues with your users’ libraries rather than attempting to replace them."
Sure, just like 20 years ago, when apps still broke ALL THE TIME as soon as the user upgraded to a new distro version or changed distros. It's simply not scalable. It has already been tried. Sure, the Flatpak world is not perfect, and I would never choose this option (maybe I like Snap better), but it's a step in the right direction.
Snaps and Flatpaks are bloated, slow, and a downgrade in user experience at all levels. I use Linux because it is lean, fast and has great package management (apt). I'd rather go back to using Windows than use Flatpaks or Snaps.
> This is uncompetitive with Windows on its face. If I ship an app for Windows I don’t have to include the entire Win32 or .NET runtimes with my app. I just use what’s already on the user’s system.
What about the gargantuan WinSXS folder?
This is legacy now. The latest .NET stuff (whatever they're calling it this week) bundles its dependencies, so it's no better than Flatpak in that regard.
After giving Flatpaks and the like a good, solid try, I have come to truly detest them. This article hits on most of the reasons why. If I can only get software in Flatpak form, that software does not exist for me.
This is all good and true, but it's looking at the problem from the Linux distribution's perspective. The user wants a working application. Something like AppImage allows a small team to distribute a working package for every distro. Now imagine that team of 5 working on their app while packaging it for Ubuntu, Fedora, Arch, RHEL, ... It's just not possible.
What would solve this issue? A common package manager and central repository across all Linux distributions. Then the small team only needs to package and ship to one repository, in one format.
So I love Linux. As a desktop system I particularly like Arch and its derivatives, mainly for the pacman package manager and the Arch User Repository.
But I have trouble finding packages, and I hate compiling. So I use Debian instead.
I would rather just use a distro that isn't ideal for me than start using flatpak and snap and all this other stuff. I really don't like how fragmented packaging is in the Linux world, but I will not use these prepackaged containers that have all dependencies included. They're worse.
Flaptak? I'm going to call Flatpak that from now on.
OP might want to edit this.
Thank you for the heads up!
The future is Flatpak plus the heuristics Steam does to determine if something is a "core lib" (like LLVM, Mesa, etc.) that should be centralized rather than installed per-app. And then (and this is just me dreaming about a better future) the Linux world will eject all of the bad programmers and start maintaining backward compatibility, so you can just symlink libLLVM.so.5 to libLLVM.so.6 and it'll just work.
Feel like we’ve really missed an intermediate solution based around tree-shaken statically compiled code. A lot of the goals achieved by docker et al could be met with some kind of multi-elf file format that is basically a collection of statically linked executables combined with a way to read built-in configuration files. Maybe this already exists / works.
Not exactly what you're suggesting, but nix/guix addresses most of the problems around dynamic linking.
Don't know what to think about this blog post; it's not a very coherent critique, since it throws together various issues of user-packaging formats that are mostly specific to only one of the formats. And these don't seem to be inherent in their design. Well, one is.
Let's go through some of the issues:
> Size
My /var/lib/flatpak is 13 GB in size. That's a lot of space. On the other hand, that's about one game with content and textures, so I'm not sure it's too much of an issue. With more and more applications agreeing on the base systems, I expect this to get smaller.
> Memory Usage, Startup times
Snap is ungodly slow at starting up applications; that's broken, and it is a fundamental design flaw. This issue, however, is not at hand with Flatpak: it doesn't have a startup problem.
Neither Flatpak nor Snap have increased memory usage because of containerization.
> Drivers
Yes, Nvidia sucks. Do you hear, Nvidia? Mainline your driver already.
> Security
> "Flatpack and Snap apologists claim that some security is better than nothing. This is not true."
That's debatable. The same argument was made against seatbelts, child-proof pill bottles and guard-rails on cliff roads, and I don't think it holds up. Even if seatbelts don't protect you from every harm, they protect you against some harm.
Currently, Linux desktop software offers no security against malicious code. The only protection the 'traditional' Linux desktop offers against malicious code is user separation, which is no protection at all. Before full security can be offered, applications need to be migrated to safer practices.
> Permissions and Portals
This doesn't seem to be a critique of any user-packaging-format but rather of how GTK implements interactions with Portals.
> Identifier Clashes
That's a problem that arises from Flatpak's decentralized nature: everyone can create a repo and add packages with any names there. I agree that it would be nice not to have these clashes. Notice that this is not an issue with Snap, since there a central authority assigns the identifiers.
> Complexity
Some of the complexity cannot be avoided, either for backward compatibility or for security. I pretty much doubt that Flatpak will be the reason civilization collapses. (That would be shortsighted greed and NIMBYism.)
> All of these app packaging systems require the user have some service installed on their PC before any packages can be installed
All app packaging systems require at least some installed component. There's no way you can make software run on all systems without requiring at least some infrastructure. Do you want your 64-bit AppImage to install everywhere? Too bad! It requires a 64-bit glibc to be installed.
> App stores
> [...] This is the reason Ubuntu wants everyone to use Snap [...]
This seems to be limited to Snap. Again, not a fundamental design issue.
> Backward compatibility
> Forcing Distributions to Maintain Compatibility
> I believe this is partly due to a militant position on free software.
This sounds like he wants to dictate to FOSS developers how to develop their software. See, I understand your frustration - but that's not how FOSS works. You cannot force anyone to spend time on stuff they don't want to do.
There's some valid criticism in there, but it just reads like an angry rant. I think the author could do better by making individual articles about the shortcomings of both Snap and Flatpak, instead of just lumping them together. Snap and Flatpak are just too different.
In the end, Flatpak is the future. No one is going to package their software for the nth minor distribution and I rather have a slightly (size)-inefficient system of packaging software rather than not have access to that software. Distributions have started to recognize that they cannot package everything and have started to reduce focus to a smaller set of core packages that work well together.
> Currently, Linux desktop software offers no security against malicious code. The only protection the 'traditional' Linux desktop offers against malicious code is user separation, which is no protection at all. Before full security can be offered, applications need to be migrated to safer practices.
To be fair, he appears to be arguing for the current model, where armies of distro volunteers review and manually package up debs and rpms (granted that on smaller distro teams, this may be less thorough). There is an actual third party involved in this explicit review step. With flatpak the developer has a direct route to the user without any intermediary or checks.
> There is an actual third party involved in this explicit review step. With flatpak the developer has a direct route to the user without any intermediary or checks.
I think your reading of the author's argument is correct. To that I would counter that you can have just the same with Flatpak; it just depends on the repository you are pulling your packages from.
It would be awesome if Flathub were to offer some kind of process to validate the packaged applications.
In my opinion the article was well written and well researched (many sources), and it used different sections to separate distinct but related topics.
> Size
As the author mentions, many users use budget computers or something like a Raspberry Pi, not a big gaming rig. But you also have to consider network usage: all that data needs to be downloaded, which in itself is a cost for every computer running that distro.
I work from my laptop when I'm traveling, using my phone as a wifi hotspot; downloading gigabytes of data to install some small app is not feasible.
And you also have to consider poor countries, where you have neither unlimited downloads nor the largest drives.
I remember the old days of the critique against Windows from the Linux community, Windows forces you to constantly upgrade hardware even though that old machine still works fine. I guess this is now true for Linux too.
Flatpaks (and Snaps) do require more space.
However, compared to other operating systems - like Windows - these space requirements are negligible: a full installation of Windows 11 takes north of 30 GB, so even with five years of accumulated Flatpak runtimes, this is still less than a third of a Windows installation.
Even budget computers can satisfy this. And once the runtimes are installed, you're mostly good - there are only so many runtimes[0].
With regard to the Raspberry Pi: yes, that's not what Flatpaks are aimed at, at least not now.
[0] https://docs.flatpak.org/en/latest/available-runtimes.html
> There's no way you can make software run on all systems without requiring at least some infrastructure. Do you want your 64-bit AppImage to install everywhere? Too bad! It requires a 64-bit glibc to be installed.
Distributing applications with multiple architecture binaries, with the correct one selected at run time, is a solved problem. The original Mac OS did it as far back as the 68k/PPC transition, and the concept has been supported by NeXT/Mac OS X application bundles since the beginning.
And then you want to run it on Alpine, which isn't even compiled against glibc, and everything falls apart.
I think I know what you want to say (please correct me if I'm wrong); you can always provide more formats or reduce dependencies and go deeper.
But to me there's no reason to do that. Some infrastructure is always required, even if you reduce it to just the kernel. Demanding that a piece of software be installable without any prerequisites whatsoever is not something demanded of anything besides Flatpak (or Snap). So complaining that Flatpaks require a specific service is not a valid complaint.
Given how common Flatpak is nowadays - almost all distros support it out of the box - I would even say this complaint no longer applies to Flatpak at all.
> Neither Flatpak nor Snap have increased memory usage because of containerization.
Yes it does, because you end up loading multiple versions of the shared libraries.
There is also the memory used by the flatpak/snap daemons. Snapd is quite big, since it is written in Go.
> Yes it does, because you end up loading multiple versions of the shared libraries.
Given that not all applications necessarily use the same libraries and versions, that's an issue you cannot avoid even with shared libraries.
So yes, you are right; it could be an issue. But it doesn't strictly have to do with the containerization.
To be thorough, let me qualify my statement:
Neither Flatpak nor Snap necessarily results in increased memory usage because of containerization.
> Snapd is quite big since it is written in Go.
Yes, that's true. Go probably wasn't the best idea for a system daemon.
IME, all running portal services add up to around 30MB RAM usage, and the two processes that need to run with each Flatpak are around 1MB each. (XDG portals aren't Flatpak-specific and can be used by host applications for some cross-desktop APIs as well.)
Very valid points. But the average open-source app developer - or even a developer of a commercial application for Linux - just doesn't have the time, energy or money to keep installation instructions for every distro up to date and to support them all, both with development time and by responding to issues.
I'll keep this short and sweet, because I could complain about Flatpaks for hours:
If you distribute your software via Flatpak, you can forget about me ever using it. Every computer I own has Flatpak disabled, and I will use system repos until the day I die. Plan accordingly!
And you're totally free to do that. Maintainers should also be free to ignore support requests when people choose to use outdated versions of their software.
That's true; future app distribution should utilize web 3.0, i.e. decentralization. There is 0install (https://0install.net/), for example; it is better.
And nobody talks about Slax modules. Many years ago, they had some basic containerization (only at the filesystem level). If they got some kind of dependency management, it would be really great.
As I recall, Slax used aufs-based filesystem overlays.
If you find this sort of problem interesting, I couldn't recommend checking out NixOS more. IMO it's the next-generation solution.
or GNU Guix
Sighs at the next 229 GB required update to Flight Simulator 2020.
So Linux desktop is dead and MS wins it after all?
"Why do people continue to not use our glorious FOSS Desktop solution that can't even handle simple application management without a hundred different tools?"
People who desire to see more widespread Linux Desktop usage will some day have to come to terms with the original conception of the phrase "The customer is always right". If you continue to ignore what people want, don't be surprised when they don't show up.
It hardly had a chance, thanks to fragmentation and hardcore beliefs in compiling everything from source. Up to around 2005 I was still a believer; nowadays I don't care.
Snap certainly isn't. But flatpak might be
Lots of other debate (both pro and con) that I'm not going to get into. I like the effort going into Flatpak, but I don't like everything it's doing.
Is Flatpak really secure, is its permission system good, how does resource sharing work - whatever. I'm not going to comment on any of that in either direction.
But I do want to call out this specific paragraph:
> Apparently, developing client APIs for apps themselves is antithetical to Flatpak’s mission. They want the apps running on Flatpak to be unaware of Flatpak. They would rather modify the core libraries like GTK to integrate with Flatpak. So for example if you want to open a file, you don’t call a Flatpak API function to get a file or request permissions. Instead, you call for an ordinary GTK file open dialog and your Flatpak runtime’s GTK internally does the portal interaction with the Flatpak service (using all sorts of hacks to let you access the file “normally” and pretend you’re not sandboxed.)
Heck client APIs. This is the correct way to do sandboxing, for two reasons:
----
First - and this is not something that's immediately obvious - over time, on both phone platforms and the web, we have been learning that applications should not be able to change their behavior based on, or even monitor, whether or not they are sandboxed. This closes an important hole in app security where application developers either change/block behavior or try to "trick" users into granting unnecessary permissions.
For a filesystem, an application should not be aware of whether it has access to every file, and it shouldn't be aware of whether or not it's running on a temporary filesystem. And while it should be able to ask the user to grant it access to a directory, it should have no easy way of validating whether or not the user granted that permission; it should just get a folder pointer back, regardless of whether that pointer is real, a different folder, or a virtual/temporary location.
A lot of existing sandboxing doesn't follow this rule. The web is normally pretty good at sandboxing, but in this regard it's honestly pretty bad. We can't follow this ideal with everything - there are some permissions that are impossible to fake - but in general we don't want to make things too easy for malicious developers.
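To make that concrete: under the portal model the app just asks for a file dialog and gets a path back, with no way to tell whether a sandbox mediated the request. A minimal Python/GTK3 sketch - Gtk.FileChooserNative transparently routes through the portal when sandboxed, and the code is identical either way:

    import gi
    gi.require_version("Gtk", "3.0")
    from gi.repository import Gtk

    # Same code inside and outside a Flatpak sandbox: when sandboxed,
    # GTK routes this through the FileChooser portal behind the scenes.
    dialog = Gtk.FileChooserNative.new(
        "Open file", None, Gtk.FileChooserAction.OPEN, None, None)
    if dialog.run() == Gtk.ResponseType.ACCEPT:
        print(dialog.get_filename())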
> If I want file access permissions on Android, I don’t just try to open a file with the Java File API and expect it to magically prompt the user. I have to call Android-specific APIs to request permissions first. iOS is the same. So why shouldn’t I be able to just call flatpak_request_permission(PERMISSION) and get a callback when the user approves or declines?
In this regard, Android and iOS are wrong. Not as a matter of opinion; I will make a medium-to-high-confidence claim that they are just flat-out approaching sandboxing security incorrectly. It's not their fault; I wouldn't have done a better job. This is something we've become more aware of as sandboxing has evolved and as it's become more obvious how apps try to circumvent sandboxes. And this could be a longer conversation - yes, there are tradeoffs - but the benefits of this approach far outweigh them, and I think in the future more platforms (including iOS/Android) are likely to move towards seamless permissions that are hidden from applications themselves.
Remember we sandbox applications in part because we don't trust developers. And while it's not the only reason for hiding user controls from apps, it is a good enough reason on its own. Don't give attackers unnecessary information if you can help it, this is security 101.
And the author is right: this approach is genuinely more brittle, because it requires building mocked/sandboxed API access rather than just throwing an error. But hey, sandboxing itself is more brittle. We deal with it for the UX/safety improvements.
---
Second, for the reason that the author hints at:
> Fedora is auto-converting all of their rpm apps to Flatpak. In order for this to work, they need the Flatpak permission system and Flatpak in general to require no app changes whatsoever.
There are substantial advantages to having sandboxes that work with existing apps. This is always a give-and-take process, but in general we don't want to force updates of a large portion of user-facing apps. And we want them to be easy to get into Flatpak.
The core libraries, user portals, distros themselves -- it is a pain to update them, but they have more eyes on them, they are more likely to be updated securely, and they have more resources available to them. I think it would be a mistake to shift that burden entirely onto application developers. Linux has a habit of doing this kind of thing sometimes, and it's a bad habit. We want Flatpak (or any sandboxing system) to be something that can be wrapped around applications without a lot of work. Ideally, a 3rd-party maintainer might even be able to bundle an app themselves.
There is also a sort of future-proofing built into this, which is that there is no such thing as a sandboxing system that's secure from day 1. We've seen this pop up on the web, which (for all of its general criticism about expanding capabilities) still has a much smaller attack surface than most native apps. Web standard developers are very careful, but there are still breaking changes sometimes when new security measures/permissions need to be added.
It would be very good if (as much as is possible), introducing additional new privileges and sandboxing capabilities to Flatpak did not require code updates for every single app using those capabilities that was built for an older version of Flatpak. If it's at all possible to avoid that scenario, avoiding it is the correct move to make.
And finally, touching on burdens for maintainers again, different people may prefer different permission systems. If at all possible, we want to avoid forcing individual app developers to fragment their codebase and maintain different branches for different security sandboxes. Flatpak should run the "base" version of your app with little to no changes. This also benefits users, because fragmentation and forcing users to install multiple simultaneous sandboxes on their machine is heckin awful.
So for all of those reasons, minimizing code changes for individual apps is a really good goal to have, even if it admittedly makes Flatpak more complicated and a bit harder on core platforms like GTK.
----
Other criticisms, whatever, I have some thoughts but it's not important for me to give them.
But the sandboxing criticisms I see in this article betray (to me) a lack of understanding about what the current problems are with phone/web security and what the next "generation" of sandboxing problems are that we're going to face. And I think that Flatpak is at the very least approaching sandboxing through a more mature/modern lens than the one we were using X years ago in early smart phones.
"Software has gotten so much slower and more bloated that operating systems no longer run acceptably on spinning rust."
I'd argue they never did. Hard drives have always been slow and when they were the primary means of storage on the PC, people were always trying to find ways to speed things up. Now it takes seconds to boot rather than minutes.
"Laptop manufacturers are switching to smaller flash drives to improve performance while preserving margins. Budget laptops circa 2015 shipped with 256 GB or larger mechanical drives. Now in 2021 they ship with 120 GB flash. "
This is outdated information. I just checked Lenovo's store, and while they do have a super-low-end machine at $200 with 64 GB of eMMC, their real budget laptops start with a 256 GB SSD. The standard ThinkPad we ordered this year for users has a 1 TB SSD. Is it budget? No. But building toward the lowest common denominator is rarely worthwhile without good reason.
"Chromebooks are even smaller as they push everything onto cloud storage. Smartphones are starting to run full-fledged Linux distributions. The Raspberry Pi 4 and 400 use an SD card as root device and have such fantastic performance that we’re on the verge of a revolution in low-cost computing. "
Chromebooks and RPis are not PCs. Yes, some enthusiasts use them that way, but that is a niche, done by people who know what they are doing. They can easily enough avoid Flatpaks.
Linux is a big place, and its versatility allows for designing distros around multiple use cases. Flatpak and Snap are meant to make software distribution easier. Constraints such as storage are not a primary concern for most users; most machines have enough. Their mere existence does not mean other methods of deploying software aren't available, so if storage is a constraint, then simply don't use them.
A lot of the arguments against Flatpak and Snap seem to be based around "it's not good for my use case". OK. Then don't use it.
"Each app with a new runtime adds another hundred megs or more of RAM usage. This adds up fast. Most computers don’t have enough RAM to run all their apps with alternate runtimes. The Raspberry Pi 400 has only 4 GB of RAM. Low-end Chromebooks have only 2 GB. Budget laptops tend to have 8 GB, mostly thanks to the bloat of Windows 10, but these app packaging solutions are catching up to it."
My case in point: still referencing niche machines. Most RPis are used for a dedicated purpose. 99% of Chromebooks never exit Google's ecosystem. And the author's concept of what qualifies as a lot of memory is, I think, out of touch. 8 GB is the base for an x86 PC - not a Chromebook, but a real PC. Many have 16 GB, and it's not uncommon to see up to 32 in laptops and even more in workstations.
"Why shouldn’t storage shrink anyway? Software should be getting more efficient, not less."
My knee-jerk response is "why?". My more thoughtful one is that efficiency can be measured in many ways. Resource usage becomes inefficient when the user determines it is, and everyone has a different idea of where that lies. Nobody is multitasking more than a couple of applications at any one time on a PC. Servers are different, but servers aren't an intended use case for Flatpak and Snap. If I can run the software I want at acceptable performance, then its resource consumption is efficient enough. But you know what Flatpaks are more efficient with? Time. I don't have to deal with dependencies; I just tell it to install, and it works. Given how awful the application management experience traditionally is in Linux, I am willing to take the downsides for that massive upside.
"Such an app can drop a malware executable anywhere in your home folder and add a line to your ~/.profile or a desktop entry to ~/.config/autostart/ to have it auto-started on your next login. Not only will it run outside of any container, it will even persist after the app is uninstalled."
I think he has a point with security. Permissions can be vague and misleading, and they do have a supply-chain attack vulnerability. But that's true of any software you get from a repository, store, etc. If Fedora's official Flatpaks can have malware, then so can their official repo.
His section about identifier clashes is not an inherent problem with Flatpak but rather a procedural problem with Fedora and Flathub. I think it's weird he brought it up when he started the article saying he wouldn't include easily solvable issues. Yet here we are.
"You would think that these packaging mechanisms would embrace simplicity if they want to attract software developers. In fact they are doing the opposite. "
This is the efficiency problem again. Define simplicity. What is simple in one area is complex in another. The fact is, nothing as complex as software on a computer can be simple everywhere; if it were, it would be useless. So the question is not how to make it simple, it is where you place your complexity. Traditional package management places the complexity on the user while keeping the software simple. Flatpaks place the complexity in the software while keeping the UX simple. Neither is inherently superior. It's all about design goals.
"All of these app packaging systems require that the user have some service installed on their PC before any packages can be installed."
And? All software has dependencies, and a lot requires some kind of runtime. Does he also consider all Java software problematic because it requires a JVM first? What about all web-based applications requiring a compliant browser? This is a non-issue in the context of how everything else works in the 21st century.
"A major goal of most of these technologies is to support an “app store” experience: Docker Hub, Flathub, the Steam Store, Snapcraft, and AppImageHub (but not AppImageHub?) These technologies are all designed around this model because the owners want a cut of sales revenue or fees for enterprise distribution. (Flathub only says they don’t process payments at present. It’s coming.)"
Conceptually there is no difference between an app store and a repository. They are the same thing; app stores are just repos that are more user-friendly.
"This is very far from the traditional Windows experience of just downloading an installer, clicking Next a few times, and having your app installed with complete desktop integration. This is true freedom. There are no requirements, no other steps, no hoops to jump through to install an app. This is why the Windows Store and to some extent even the macOS App Store are failing. They can’t compete with the freedom their own platforms provide."
I didn't expect him to advocate for the Windows model, and I actually agree: Windows handles application management the best. However, a lot of the criticisms he gives Flatpak would apply to Windows too. You are guaranteed to end up with multiple copies of certain dependencies, because Microsoft's answer to dependency hell was to let developers ship their dependencies with their product and not have to care what everyone else had. This is extremely space-inefficient and can make for ugly under-the-hood management, but it works very well. The author misses that in his article because he just measures the size of an installer; Windows software often ships not as one file but as many, and the install.exe merely orchestrates the process. Again, it's all about where you place your complexity. In the Windows world the complexity is rarely encountered: apps get to bring their baggage with them and decide where it all goes. This means Program Files can get ugly and inconsistent, and you may have to learn the behavior of individual software, but it usually works the first time.
"The Current State of Backwards Compatibility"
I have news for you: Linux's biggest problem here isn't that new revisions break compatibility; it's that old versions can become unavailable. The repo model of software distribution lends itself to "link rot": load up an older version of Ubuntu and it can't talk to the repos anymore - the servers are gone. And since until recently almost all software was distributed this way and managed by a package manager, you effectively can't get software working on an old Linux. Software preservation becomes monumental, if not impossible. With older Windows versions, if I have the install media, whatever form it may take, then I can install and use the software, because it shipped with all of its dependencies included. No nebulous server required. DRM notwithstanding, of course.
I usually steer clear of these types of discussions because I find the arguments ultimately pointless in the end. The users pick the winner based on their experience regardless of the merits of the systems available. Also I'm certain I'll come out of this worse for wear, as is the experience whenever one crashes a heated debate among engineers arguing over which is the better standard. So I'm going to hastily step into this den of wolves and jump out again.
The fundamental issue I find with all these distribution systems is that they always seem designed more to justify a computer science degree than to solve the problem in a user-friendly way. And that applies to debs and rpms as well. All the solutions seem over-engineered to the point that only engineers have any hope of getting a consistent, usable experience out of them. Certainly you can try to hide all that complexity behind a nice GUI with icons and layouts, but that only works until something goes wrong, and then the user is stuck juggling a broken system and waiting for a reply on a support forum that may never come. That also mirrors my views on why Linux has yet to break into the mainstream desktop market.
But let's go even further, to another fundamental issue. Sometimes it feels like these systems are working to justify a basic design choice of Linux that's overstayed its welcome. Separating libraries and binaries was a fabulously beautiful engineering idea to fix a problem that stopped being an issue a decade ago. When your system has 32 MB of storage it makes sense; less so when you can buy a 2 TB solid-state drive for $120 on Amazon. Counter to the argument in the article, storage is cheap and becoming cheaper at such a rate that continuing this philosophy will appear more odd by the day. And now we've officially come full circle and engineered it away with Flatpak and AppImage.
In an ideal world, these perfect systems should work beautifully. But users aren't perfect, and neither are developers. That's why I now value software not by the novelty of how it solves a problem but by how few parts it needs to do it. For all Elon Musk's personal issues, he was letter-perfect about one thing: the best system is no system; the best process is no process. For every additional step of complexity you add when solving a problem, you gain three additional problems. 1. You have to maintain it across the half-lives of developer interests. 2. You have to teach users and developers how to use it the right way, a nearly impossible task. 3. All the systems that rely on it become more complex, duplicating problems 1 and 2 ad nauseam.
For me, part of the answer arrived 12 years ago with AppImage and was largely missed. It works so well that I find myself breathing a sigh of relief whenever I find an application I want that uses it, because I know it'll just work. It's a testament to the usability of AppImage that it's still in use years later, even without support from major distros. I think the Linux community at large is at risk of missing something very good there. At least for user-facing programs, it works wonderfully.
Colour me surprised that Canonical- and Red Hat-funded initiatives cause bad things to happen in the desktop space.