PostmarketOS in 2026-02: generic kernels, bans use of generative AI
postmarketos.org

There are some really baffling takes here. And it doesn't really matter how good or bad coding agents are.
Coding agents greatly reduce the barrier to contributing something that at least looks okay on the surface, so reviewing contributions will quickly become even more of a bottleneck. Manual contributions used to filter away most low-effort attempts, or at least they could easily be identified and rejected.
That dynamic is now different, and the maintainers risk being swarmed with low-effort contributions that will take a lot of time to review and respond to. Some AI contributions might be reviewed and revised and be of acceptable quality overall, but how can the maintainers know which, without reviewing everything, good and bad alike?
I think we will see multiple attempts like this to shift things back to the old dynamic, by rejecting things that can be identified as AI-generated at a glance, but I suspect over time it will be difficult to do so. My prediction is that we will soon see more open source repos stop accepting outside contributions entirely.
Even if LLMs will one day be good enough to quickly produce code that is on par with humans' (which I strongly doubt), why would contributors have any incentive to have someone else do that (the easy part), rather than just doing it themselves?
Very happy to see PostmarketOS take an uncompromising stance and also provide justification for it.
Feels pretty Luddite to me.
I remember when people were crying about how much power a Google search uses. This is the same thing all over again, and it is as pointless now as it was back then.
https://arstechnica.com/ai/2025/08/google-says-it-dropped-th...
> Google says it dropped the energy cost of AI queries by 33x in one year. The company claims that a text query now burns the equivalent of 9 seconds of TV.
The audacity to call an organisation that works on making mobile phones and other small PCs work with free software Luddite is impressive.
That's like calling a person going for seconds a conservative (in the USA political sense).
No, it's entirely justified when quality of code matters. They don't want a thousand gallons of unreviewable slop. They want a reasonable amount of code that can be sensibly reviewed.
There are ways to achieve that without a blanket ban. If you read their AI policy, it seems more "ethically" motivated: they address that concern first, with many more words and 7 references.
They do go on to address code quality, but it is more of an afterthought, with 0 references and fewer words, and it appears lower down the page.
The timing is also suspicious, shortly after publication of this report: https://www.reuters.com/business/media-telecom/smartphone-ma... which forecasts declining smartphone sales, meaning fewer devices for this OS to run on.
> The timing is also suspicious, shortly after publication of this report: https://www.reuters.com/business/media-telecom/smartphone-ma... which forecasts declining smartphone sales, meaning fewer devices for this OS to run on.
Why would declining sales of new smartphones have anything to do with PostMarketOS, which only supports phones more than half a decade old?
PostmarketOS doesn't exist in a vacuum. It’s the final stage of a device's life cycle. If the initial sales of new devices decline, the pool of available hardware for enthusiasts to tinker with in five years will be significantly smaller.
Yes. In five years, once the PMOS devs manage to get a 2025 device into a working state, they might have fewer devices to play around with, so there could be an indirect effect on the project.
What I struggle to believe - what I don't believe - is that there is any sort of connection between the report about likely declining sales and PMOS' announcement.
pmOS does support recent phones, provided that they can be bootloader-unlocked - and that's only a few brands these days.
Right now, their wiki page on device support [0] lists zero actual devices as "fully supported":
> These are the most supported devices, maintained by at least 2 people and have the functions you expect from the device running its normal OS, such as calling on a phone, working audio, and a functional UI.
> Besides QEMU devices, this is currently empty. The ports we had here earlier weren't as reliable as we would have liked. We plan to add new devices here with a higher standard.
The most recent smartphone in the Community section of that page is the Fairphone 4, released half a decade ago, in 2021. Pixel devices can trivially be bootloader unlocked, but that doesn't make the work that goes into supporting them much easier: the latest device in Testing is the 6a/6 Pro, from 2022, and its device page lists all the features but the most basic (touchscreen, flash, internal storage) as "Untested".
Agreed. I would have chosen differently, but I appreciate the policy is unambiguous and explained succinctly with references.
Some people enjoy the outcome, others enjoy the process.
I find the criticism interesting. It's like one restaurant saying they'll use only electric stoves for the climate, then chefs all over the world calling them stupid and naive for it.
It's like ethical arguments rationalizing local behavior are automatically interpreted as a global attack that has to be rejected.
I wish more projects would take the same stance.
You say "uncompromising stance" with "justification", I say stubborn prejudice. They simply restate the same weak, nonsensical complaints that apply to many other technologies that they undoubtedly don't have issues with and are happy to use.
I do not understand why Lineage insists on waiting for eBPF back ports when PostmarketOS has a far newer kernel running on the same hardware.
Core Android functionality relies on eBPF in a way that PostmarketOS does not. PostmarketOS is much more of a Linux distro than Android is. They are not very comparable.
AOSP patched kernels still include some features that are not in the mainline version. The LineageOS folks are working on support for mainline kernels, but AIUI it's not there yet.
It's not the sky that's falling; it's the value of an SWE's labor.
Fun while it lasted, huh?
> Submitting contributions fully or in part created by generative AI tools to postmarketOS.
So, autocomplete done by deterministic algorithms in IDEs is okay, but autocomplete done by LLM algorithms - no, that's banned? Ok, surely everybody agrees with that, it's policy after all.
How is it possible to distinguish between the two in the vast majority of cases, where the hand-written code and the autocompleted code are byte-for-byte identical?
Are we supposed to record video of us coding to show that we did type letters one by one?
> 2. Recommending generative AI tools to other community members for solving problems in the postmarketOS space.
Is searching for pieces of code considered parts of solving problems?
Then how do we distinguish between finding a required function by grepping code and by asking an LLM to search for it?
Can we ask an LLM questions about postmarketOS? Like, "what is the proper way to query the kernel for X given Z"?
If a community member asks this question and I already know the answer via an LLM, am I now banned from giving the correct answer?
--
Don't get me wrong. I am sick and tired of the vomit-inducing AI bullshit (as opposed to the tremendous help that LLMs provide to experienced devs).
I fail to see how a policy like this is even enforceable let alone productive and sane.
On the other hand, I absolutely see where this policy is coming from. It seems that projects are having a hard time navigating the issue and are looking for ways to eliminate the insurmountable amount of incoming slop.
I think we still haven't found the right way to do it.
> So, autocomplete done by deterministic algorithms in IDEs is okay, but autocomplete done by LLM algorithms - no, that's banned? Ok, surely everybody agrees with that, it's policy after all.
Because autocomplete still requires heavy user input and an SWE at the top of the decision-making tree. You could argue that using Claude or Codex enables you to do the same thing, but there's no guarantee someone isn't vibecoding and then not testing adequately to ensure, firstly, that everything can be debugged, and secondly, that it fits in with the broader codebase, before they try to merge or open a PR.
Plenty of people use Claude like an autocomplete or to bounce ideas off of, which I think is a great use case. But besides that, using a tool like that in more extreme ways is becoming increasingly normalized and probably not something you want in your codebase if you care about code quality and avoiding pointless bugs.
Every time I see a post on HN about some miracle work Claude did, it's always been very underwhelming. Wow, it coded a kernel driver for out-of-date hardware! That doesn't do anything except turn a display on... great. Claude could probably help you write a driver in less time, but it'll only really work well, again, if you're at the top of the decision-making hierarchy and are manually reviewing code. No guarantees of that in the FOSS world, because we don't have keyloggers installed on everybody's machine.
Fully agree with you on all points.
But again: how do we distinguish between manual code input and sophisticated autocomplete?
The project is simply saying what they want. If you choose to ignore that for some weird reason congratulations for being a jerk, I guess.
Can you confirm that continuing to use autocomplete in a code base against the policy of the project does make the person a jerk?
Yes, actually. Knowingly violating the policies of a project while pretending you aren't, so you can continue participating in the fully voluntary project, does make you a jerk.
If you don't like the policies they set, just leave.
I'm willing to bet that every single person on here complaining has zero contributions to PostmarketOS.
If it's crap, then it's AI. If it's okay, then we pretend that it's just sophisticated autocomplete.
It's pretty much obvious but the policy specifically argues against it and stands on moral grounds.
I don't think anybody seriously cares about such "policies".
Unfortunately, exactly these kinds of policies are the ones people care the most about.
It's moral values, virtue signaling and rage. Extremely engaging.
This sounds impractical, and they will probably not keep the ban.
AI use should be able to accelerate the development of ports for currently unsupported or undersupported devices, which would directly support the project.
I guess I wouldn't worry about the policy; they will probably switch it naturally if/when AI becomes more useful in practice.
> bans use of generative AI
that ship has sailed with Codex 5.3 in 90% of SWE jobs, unfortunately. I expect the next 9% won't survive the following 12 months, and the last 1% is done within 5 years.
it isn't even about principles - projects not using gen AI will become basically irrelevant; the pace of gen-AI-enabled competitors will be too great.
Alright, let's see Codex 5.3 create a competitor to postmarketOS (without just copying the homework of other devs). If you believe in the technology so much, put it to the test, see what it can really do.
Reminds me how one year ago people were saying "sure, GPT-4o can write a function, but try to make it write a whole application"
Sure, AI has developed quickly, but let's see it take on a real engineering challenge, rather than regurgitating boilerplate code.
Writing device drivers from incomplete specs is much harder than "writing a whole application" where the specs are clearly defined and there's a lot more example code to reference. If you believe in AI so much, and believe that it's unreasonable for postmarketOS to not want to use it, put it to the test, prove the doubters wrong, what have you got to lose?
I don't have anything to win either.
What does a developer who writes a driver from incomplete specs do? Write some values to some registers, see how the device behaves, update the spec. Rinse and repeat. Sounds exactly like the kind of stuff coding agents thrive at - a verifiable loop. And they can do it 24x7 until done.
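To make the "verifiable loop" concrete, here is a toy sketch of the probe-observe-update cycle described above. Everything in it is illustrative: `FakeDevice` stands in for real hardware, and `probe_registers` is a hypothetical name, not any real driver-tooling API. The point is only that each write produces an observable result that can be checked automatically.

```python
class FakeDevice:
    """Simulated hardware with an undocumented enable register."""
    ENABLE_REG = 0x10  # unknown to the "driver author" beforehand

    def __init__(self):
        self.regs = {}
        self.active = False

    def write(self, reg, value):
        # Record the write; the device powers on only when the
        # magic register gets bit 0 set.
        self.regs[reg] = value
        if reg == self.ENABLE_REG and value & 1:
            self.active = True

    def observe(self):
        # Stand-in for "see how the device behaves".
        return self.active

def probe_registers(dev, candidate_regs):
    """Write to candidate registers one at a time and record which
    one produces an observable change -- the verifiable loop."""
    findings = {}
    for reg in candidate_regs:
        dev.write(reg, 0x1)
        findings[reg] = dev.observe()
        if findings[reg]:
            break  # "spec" updated: this register enables the device
    return findings

dev = FakeDevice()
result = probe_registers(dev, range(0x00, 0x20, 0x4))
```

With real hardware the `observe` step is of course the hard part (logic analyzers, cameras, crash recovery), which is exactly where the disagreement in this thread lies.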
> I don't have anything to win either.
Sure you do, you can prove those that doubt your views wrong.
> Sounds exactly the kind of stuff coding agents thrive at - a verifiable loop. And they can do it 24x7 until done.
Go for it then, you're not putting in any work into it other than giving it a task to do.
I'm sure you know what opportunity cost is
Haha, are you trying to suggest you'll have lost much by putting an AI tool to the test? You seem to think it's powerful enough to do the work of porting Alpine Linux (or equivalent) to new hardware without human intervention (beyond the initial prompt), what exactly are you losing by trying this out? It's not your time, as you would have spent less time on giving a simple instruction to an AI tool than you spent in talking to me.
Perhaps the reality is that you know AI needs more hand-holding than this, and the tools aren't up to the task you're thinking of setting them.
I never said it requires zero hand holding.
You are also strangely fixated on today's capabilities, completely missing the exponential we are on.
In a few months we will have posts here from device driver writers explaining how they hooked up a phone to an Arduino and a video camera, and how the AI is automatically writing device drivers.
> You are also strangely fixated on today's capabilities
I am talking about today's capabilities because this comment thread started with the suggestion that the benefits of AI for coding was no longer avoidable after the launch of Codex 5.3.
> In a few months we will have posts here from device driver writers explaining how they hooked up a phone to an Arduino and a video camera, and how the AI is automatically writing device drivers.
A few months? Almost zero chance. If it happens in the next 5 years I'd be less surprised, but I suspect it'll take longer.
Fun that you had to caveat it with some hand-wavy homework bull. It gives you a nice get-out-of-jail-free clause for when an AI inevitably writes an OS.
> Fun that you had to caveat it with some hand-wavy homework bull.
Not really. If AI is just copying someone else's code, it's not really designing it is it. If you want it to truly design something, it needs to be designing it using the same constraints that the human engineers would face, which means it doesn't get the luxury of copying from others, it has to design things like device drivers with the same level of information that human engineers get (e.g. device specifications and information gathered through trial and error).
Are you suggesting that a human being writes an OS in a vacuum, without seeing any other OS or looking into how it is built? That feels a little facetious, no?
> Are you suggesting that a human being writes an OS in a vacuum, without seeing any other OS or looking into how it is built? That feels a little facetious, no?
No, I'm suggesting in order for it to be a fair test, you need to impose the same restrictions that a human engineer would face.
For example, consider the work done by the Nouveau team in building a set of open source GPU drivers for NVIDIA GPUs. When they started out the specs were not so widely available. They could look at how GPU drivers were developed for other GPUs, but that is not going to be a substitute for exploratory work. Let's see how well AI does at that exploratory work. I think you'll find it's a lot harder than common uses for AI today.
This stat is grossly inflated. I don't disagree with the general point, but adoption isn't that high yet, and certainly not for Codex specifically.
sure, but how do you make irrelevant something which is already irrelevant (PostmarketOS)?