Gnome Shell on the Apple M1, bare metal
I’m extremely impressed with anyone’s ability to do this at any age, but if I remember correctly, she’s like 17-19 years old or something? Incredible! You go Alyssa!
From what I remember of that age, it was the sweet spot for having the intelligence, the enthusiasm and the time to just grab a problem and completely lose myself in it. I didn't care at what time I slept, there were no children requiring attention, my bachelor's was pretty easy and didn't require much time investment. My parents were also pretty easygoing and let me be at my computer for very long stretches at very strange hours.
It was a nice time for me to learn Linux, compile kernels, install Gentoo. For me this was about 2004 btw.
Ok, this is pretty next level and way cooler than what I did, but my point is that people at this age are not to be underestimated; they are smart AND have resources ;) (Note that I'm also not claiming that this is what is happening here, but it could be.)
It’s not so much cognitive capability, but experience and knowledge. This is well beyond the experience and knowledge of many professional senior software engineers, let alone someone who hasn’t actively worked at NVIDIA/etc on video chips. She’s figured all this out on her own -- not even a full CS degree at a top-tier school would go into this level of detail about how these chips & drivers work, let alone reverse engineering a cutting-edge chip from a black box. Reverse engineering is its own skill set, and being the first in the world in this uncharted territory means this isn’t your first time reverse engineering something. It isn’t her first, which makes what she was doing at an even earlier age all the more impressive. That combination of self-drive, knowledge, and talent is extremely rare… even more rare when her peers just want to hang out and do normal teen stuff!!!
Modern society tends to infantilize people.
(Congrats to Alyssa and everyone making this possible!)
> Modern society tends to infantilize people.
Well, from what I've heard regarding the frontal cortex, adult decision making is fully crystallized on average around age 25. So it kind of makes sense why on average society these days doesn't trust kids with important stuff.
Though, you know, statistics. There are outliers :-)
I’m so worried this project will get 95% of the way there, and then all the fun issues will run out and the M1 will be just another MacBook with WiFi, Bluetooth and sleep issues.
The obvious solution is to pay someone to do the work, and I am, but I still can’t shake the fear.
I wonder why we even bother trying to support these hardware vendors sometimes. I have been trying really hard to simply not deal with them for my own sanity. Are we not simply letting the leash out further for what we will accept and buy? Are we truly that powerless against the market forces which drive HW/OS sales?
Currently probably because M1 is absurdly better than the competition. They will certainly draw users away from Linux unless either this porting effort gets done, or unless other ARM options that support Linux better become available.
> They will certainly draw users away from Linux
And the beauty of non commercial software is that we don't actually have to care about that. If people choose a performance increase over freedom, you can't really choose for them.
Now I'm not saying that we should not port free software to the M1. I'm saying that the good reason to do so is that the people porting it want to have it there, rather than thinking in terms of user retention.
> And the beauty of non commercial software is that we don't actually have to care about that.
If that's really true, then why are so many so intent on increasing Linux Desktop adoption? Popularity means more people working on it, more people making software for it, more hardware having drivers, etc.
The problem, as I see it, is that "free software" becomes unfree when you have to pay to port it.
Back in the glory of more universal general computers this was perhaps a lesser spoken requirement of the system.
Today, it's clear to me that we are slipping back into chaos.
EDIT: Seems like FSF's "freedom to run" might fit the definitional benchmark for me. I'm not really sure how people are going to react to that though ;)
Free Software is not about getting stuff for free and never was. Free refers to freedom/liberty.
> Free software is software that gives you the user the freedom to share, study and modify it. We call this free software because the user is free.
> "free software" becomes unfree when you have to pay
Not the same meaning of "free". But anyway, for now, you have to pay Apple prices to have a computer with an M1 chip in it. If price is a strong requirement, one probably won't buy Apple hardware and will rather get something that is less expensive and already well supported by free software :).
To quote fsf,
> Free software means that the users have the freedom to run, edit, contribute to, and share the software. Thus, free software is a matter of liberty, not price.
"freedom to run"
The remarkable thing about Apple prices these days is just how affordable powerful M1 computers are. The entry-level Mac Mini costs $699 (https://www.apple.com/shop/buy-mac/mac-mini). In single-core CPU benchmarks the M1 chip has a Geekbench score of 1744 (https://browser.geekbench.com/v5/cpu/9460112), slightly higher than the Intel Core i9-11900F, which scored 1726 (https://browser.geekbench.com/processors/intel-core-i9-11900...) and has a recommended customer price of $422-$432 (https://ark.intel.com/content/www/us/en/ark/products/212254/...). (To be fair, in multi-core benchmarks the i9 outperforms the M1 by nearly 2000 Geekbench points, but the M1 is still comparable with good chips like the AMD Ryzen 9 5900HX.) By the time you add a motherboard, RAM, storage, and graphics, a Core i9-11900F build would be more expensive than an entry-level Mac Mini. Also, the M1 chip has a TDP of just 15W, while the Core i9-11900F has a 65W TDP.
While it's unfortunate that Apple has kept many of the technical details of their M1 Macs secret, thus making it a gigantic effort to port Linux and other alternative operating systems to it, what has people so excited about the M1 is the performance-per-watt and performance-per-dollar ratios the chip provides.
> Not the same meaning of "free". But anyway
Would you care to enlighten me please? Otherwise your comment is of no value.
> “Free software” means software that respects users' freedom and community. Roughly, it means that the users have the freedom to run, copy, distribute, study, change and improve the software. Thus, “free software” is a matter of liberty, not price. To understand the concept, you should think of “free” as in “free speech,” not as in “free beer”. We sometimes call it “libre software,” borrowing the French or Spanish word for “free” as in freedom, to show we do not mean the software is gratis.
> You may have paid money to get copies of a free program, or you may have obtained copies at no charge. But regardless of how you got your copies, you always have the freedom to copy and change the software, even to sell copies.
The meaning of "free" in "free software" is the one from "freedom" or "free speech". It is about liberty, not price. You can totally sell free software for example. And a lot of people are paid to develop free software (so the software itself is not free as in $0 even if it is so for end users).
I'm far from agreeing with everything the FSF says or does, but see https://www.gnu.org/philosophy/free-sw.en.html for more information.
There's basically only 2 meanings for free (not in jail, price of 0), so if it's not the one, it's the other. That doesn't mean his comment wasn't of value to me.
Then I fear you've oversimplified your model. I believe there is a lot of grey area between being placed behind physical bars and being forced to pay for services.
For example, let's say one comes down with a horrible disease, clearly they are not being directly charged in cash and no police have been involved. Yet I can't shake the feeling that they have been parted with some freedoms.
Anyway, food for thought. This thread is perilous.
But the point is that Apple's software that runs on the M1 is absurdly better than the competition, especially on the M1, because both the macOS software and the M1 hardware were designed to work together, hand in hand, fast and efficiently.
So even if you could get all the hardware drivers working properly, Linux/Gnome will still lose out to macOS, because that hardware simply wasn't designed for that software, and that software simply wasn't designed for that hardware, while macOS and the M1 were both designed to work together.
But Gnome was originally designed to run on X-Windows, whose hardware model is a MicroVAX framebuffer on acid.
https://donhopkins.medium.com/the-x-windows-disaster-128d398...
The color situation is a total flying circus. The X approach to device independence is to treat everything like a MicroVAX framebuffer on acid. A truly portable X application is required to act like the persistent customer in Monty Python’s “Cheese Shop” sketch, or a grail seeker in “Monty Python and the Holy Grail.” Even the simplest applications must answer many difficult questions:
WHAT IS YOUR DISPLAY?
    display = XOpenDisplay("unix:0");
WHAT IS YOUR ROOT?
    root = RootWindow(display, DefaultScreen(display));
AND WHAT IS YOUR WINDOW?
    win = XCreateSimpleWindow(display, root, 0, 0, 256, 256, 1, BlackPixel(display, DefaultScreen(display)), WhitePixel(display, DefaultScreen(display)));
OH ALL RIGHT, YOU CAN GO ON.
(the next client tries to connect to the server)
WHAT IS YOUR DISPLAY?
    display = XOpenDisplay("unix:0");
WHAT IS YOUR COLORMAP?
    cmap = DefaultColormap(display, DefaultScreen(display));
AND WHAT IS YOUR FAVORITE COLOR?
    favorite_color = 0; /* Black. */
    /* Whoops! No, I mean: */
    favorite_color = BlackPixel(display, DefaultScreen(display));
    /* AAAYYYYEEEEE!! */
(client dumps core & falls into the chasm)
WHAT IS YOUR DISPLAY?
    display = XOpenDisplay("unix:0");
WHAT IS YOUR VISUAL?
    struct XVisualInfo vinfo;
    if (XMatchVisualInfo(display, DefaultScreen(display), 8, PseudoColor, &vinfo) != 0) visual = vinfo.visual;
AND WHAT IS THE NET SPEED VELOCITY OF AN XConfigureWindow REQUEST?
    /* Is that a SubstructureRedirectMask or a ResizeRedirectMask? */
WHAT??! HOW AM I SUPPOSED TO KNOW THAT? AAAAUUUGGGHHH!!!!
(server dumps core & falls into the chasm)
The "it was all designed by Apple so can't be outperformed in parts" has got to be a trope at this point.
If that's the case why is Chrome able to put benchmark Safari on my M1?
Not to mention the OS shouldn't be the bottleneck for anything performance related in a desktop type system anyways.
Where did you get that quote? It's certainly not what I wrote, or meant.
And what is a "put benchmark"? Why would you only benchmark a web browser's HTTP "PUT" method?
The quote is to describe the aforementioned integration trope; not sure it has a succinct name beyond that, hence the long description in quotes. It first got really popular when it was noted that one of the iPhone A* processors added JavaScript-specific rounding, to much "that's how Safari can be so great on this device release, it integrates straight into the hardware", only to find out from a Safari dev it hadn't even gained that yet. Yes, end-to-end integration is a huge boon to a consistent user experience, but it doesn't change efficiency nearly as much as some like to think, certainly not more than can be gained from normal optimizations still available, and it's certainly not the ultimate goalpost even for a consistent experience, just a great aid.
Put = out, please forgive the mobile keyboard while on a plane :). I do like the level of creativity of an HTTP PUT benchmark though!
Because safari isn't purely optimized for speed. It's optimized for 'fast enough', but also low power usage. Chrome is _only_ optimized for speed (and thusly uses far more power), though it's my understanding that google is rethinking that balance somewhat.
Recent Ryzen chips perform better than M1 chips.
Not in most workloads, no. M1 is made on a better fab. Future chips might be better, provided the next process they use (rumors are, TSMC "6nm") is close enough to TSMC "5nm".
I'm curious, sources ?
Define "perform". Power efficiency? Raw processing speed? Both?
The M1 significantly outperforms Ryzen when measured by performance/watt and single-core performance
It's a strategic move from Apple to allow other OSes. They could've gone the iOS route, requiring jailbreaking.
They know very well how they became so big. It was in the early 2000's, because they had the support of many developers.
Yes and no. Specifically, the Mac was on life support, market-share-wise, until Intel. Boot Camping or VMing Windows was a huge selling point at the time. (But maybe in the long run, it turned out that UNIX was more valuable after all.)
With Apple silicon, Apple is dropping what was a huge feature -- Windows. The ability to run Linux distros on Mac hardware is rounding-error hacker stuff by comparison.
Less than 5% of Mac users used bootcamp according to Apple.
Now, but I expect it was a lot more common in the early days of the Intel transition.
Or, or, or.. bear with me.. do we simply like what’s being offered and nothing about it turns us off to change our habits and try alternative products, all of which come with their own brand of bullshit anyway?
* turns us off enough
For some people it's a nice challenge to tackle.
Additionally, the M1 is arguably the best platform right now when comparing performance against power usage, and (at least the MBA) comes in a great form factor. AFAIK there is no comparable device that has decent support for Linux.
Ryzen mobile is of course still faster in multi-core tasks, although with a higher power demand.
Actually not--a 5900HX is about 1400/7500 (ST/MT GB5)
An M1 is about 1700/7600.
So call that a tie for MT, and 20% faster in ST...with a chip using about a third the power.
No, M1 is clearly slower in multi-core (to be fair, it has fewer threads). Not sure where you got your numbers from. 5900HX gets about 7800 in Geekbench 5, and the difference is much larger in some other multi-core tests. For example Cinebench gives 7800 for M1, 13800 for 5900HX.
E.g. https://nanoreview.net/en/cpu-compare/apple-m1-vs-amd-ryzen-...
It says in your link there is a 3% difference in "Geekbench 5 (Multi-Core)" between the two? I would say that is practically equivalent, particularly since 5900HX comes in several TDPs. (It says "54 Watt" at your link.)
I took my data from perusing the Geekbench DB for the latest submissions...
Yeah I agree GB5 is roughly equivalent, others not so. Anyway this is just me being pedantic as so many people seem to think for whatever reason that M1 is the most powerful mobile chip while it certainly isn't. I do expect Apple's follow-up chips to take the multi-core lead too eventually.
> For some people it's a nice challenge to tackle.
So is killing a tiger, but you don't see any stripes on my wall.
Sure, getting Linux to run on hardware you purchased is equivalent to poaching. Very reasonable take.
I guess I took the OS X 10.4 joke a bit too far huh?
Well sticking the skin on your wall is just tacky. I normally just wait until people ask about the clawmarks and then feign reluctance as I explain.
Why all this anxiety over a proprietary general purpose computer?
> I’m so worried this project will get 95% of the way there, and then all the fun issues will run out and the M1 will be just another MacBook with WiFi, Bluetooth and sleep issues.
No one cared that the old G3/G4 Macbooks couldn't run "X" at the time they were released.
AMD is also working on an ARM chip with RDNA2. I'm more excited about that chip than the M1, as it will be inside more machines than the M1, even if it's outperformed by the M1. The M1 is boring, as it just comes glued inside Apple hardware that I have no interest in.
> The obvious solution is to pay someone to do the work, and I am, but I still can’t shake the fear.
If you want the M1 opened because you already bought one hoping it would become open then you counted your chickens before they hatched. The anxiety is your own fault.
> AMD is also working on an Arm with RDNA2. I'm more excited about that chip than the M1 as it will be inside of more machines than the M1, even if outperformed by the M1. The M1 is boring as it just comes glued inside of Apple hardware that I have no interest in.
An AMD GPU on ARM is also likely going to be good, but since that RDNA2 is going to be paired up with a Samsung CPU it's not as compelling, as Samsung CPUs are not close to the M1 CPU. So far the rumours suggest RDNA2 is coming to Samsung phones. I don't know about you, but that's hardly exciting. For me to be interested in RDNA2 on ARM it needs to come in a laptop with a great CPU as well.
> I’m so worried this project will get 95% of the way there, and then all the fun issues will run out and the M1 will be just another MacBook with WiFi, Bluetooth and sleep issues.
You are right to be worried. Getting all the nitty-gritty details about modern hardware without access to documentation is impossible. In the end this will somewhat work, but it will almost certainly have worse performance than macOS, with higher energy usage, and be otherwise rough around the edges (sleep problems, no fingerprint scanner support, and other issues typical of hardware the vendor doesn't directly support for Linux).
> The obvious solution is to pay someone to do the work, and I am, but I still can’t shake the fear.
I don't think just paying someone is going to help unless Apple provides documentation (which they won't do).
If they can reverse engineer a whole undocumented display controller and GPU, then a fingerprint scanner is easy. The barrier to these 'minor' things is finding motivated people to do them, not anything technical. This is something that can be helped with money.
Fundamentally like many engineering things, it's a Pareto principle thing. You can have a "basically working" device but it's a surprising amount of (potentially dull) work to get every last thing working properly.
That's why I started a Patreon for this, and set a minimum threshold below which the project would not start. I've been there and done "fun challenges only" ports (e.g. PS4 Linux) and I know things never get to the point where they need to be if people are only working on it for fun.
The Patreon turns this into a job, which means I have reason to keep chipping away at all those "minor" things. It also means everyone else working on the project can choose what they work on, and I pick up whatever is left that nobody wants to do.
> If they can reverse engineer a whole undocumented display controller and GPU
Who says they reverse engineered the whole of it? Making it display images is easier than making it work fast, support video decode, power saving, etc. Nouveau has been around for a long time and never transitioned from the former to the latter state.
> then a fingerprint scanner is easy.
Yes, but it is not the point I was making.
> The barriers to these 'minor' things is motivated people to do it, not anything technical. This is something that can be helped with money.
I've been working on Linux kernel for some time now and I stand by my opinion that the main barrier to do it is technical. You can disagree though.
Nouveau has a problem with nonredistributable firmware. We don't have that problem because Apple distributes their firmware themselves and it gets loaded before Linux boots. I already put together a prototype installer that deals with the whole firmware situation for users.
I've been porting Linux to undocumented platforms for 10 years and the main barrier to getting it polished is motivational, not technical. It's precisely the hard problems that motivate people.
Okay, Hector, so assuming your Patreon reaches 100% when can we expect to have Linux working on M1 with similar performance, battery life and hardware video decoding in mplayer and Firefox?
There's no way to make hard promises about reverse engineering projects, but if you want an educated guess: basic accelerated graphics by the end of this year, and polish to the level of proper sleep states/PM/video decode and such by the end of 2022.
> There's no way to make hard promises about reverse engineering projects, but if you want an educated guess: basic accelerated graphics by the end of this year, and polish to the level of proper sleep states/PM/video decode and such by the end of 2022.
Thanks for the honest response. IMO you are being _extremely_ optimistic, but I would be glad to be proved wrong on this one.
Apple is probably actively working against you if you want to get the fingerprint scanner and its ‘Secure Enclave’ to work.
Linux can use the Secure Enclave just as well as macOS can. We fully intend to support Touch ID and things like offloading SSH key authentication to the Secure Enclave from Linux.
All this "Apple hates us and half the things are never going to work" FUD is getting really tiring. There isn't a single instance where Apple have put roadblocks in front of Linux support in the history of the Macintosh. All existing problems come down to lack of drivers or nonstandard design choices. Solving that is the entire goal of the project: developing support for the hardware.
"WiFi, Bluetooth and sleep issues."
This is true for almost any random laptop, and has been since forever.
It's obvious the M1 and its successors have a chance of becoming a well-known constant, since it's an SoC and not a collection of random components from many vendors (which might or might not have good mainline Linux drivers). Intel Macs and other laptops have had different hardware components even for the same model over its lifetime.
> This is true for almost any random laptop, and has been since forever.
Not this old trope again.
I haven’t had WiFi and sleep issues with Linux for at least a decade (I don’t use Bluetooth on laptops so can’t comment there). And I do use random laptops, including MBPs.
People seem to hold on to the same old arguments about Linux that were true back when XP was released but things have unsurprisingly moved on since then.
Haven't had any Bluetooth issues in the better part of a decade on any laptop running Linux, and I've been using Bluetooth headphones as my main audio output for the majority of that time, so I'd think I would have.
Counterpoint: I have an old ThinkPad, known for good compatibility. WiFi and sleep work great. Bluetooth? Not so much. It works, but I get lots of random disconnections, after which I may or may not be able to reconnect.
The real problem is with desktops, on laptops it's more standardized (most have Intel wireless).
> This is true for almost any random laptop, and has been since forever.
For some random HP or whatever crappy netbook, maybe yes, but you can't compare a MacBook to that. There are good laptops that don't have those problems; I haven't run into any issues with my ThinkPad.
> The obvious solution is to pay someone to do the work, and I am, but I still can’t shake the fear.
Sounds like you’re already onboard, so this comment isn’t for you. For those who don’t know, you can donate to this project directly. Please sign up if you want to see this project succeed. I’m not affiliated, just a fan of the work.
Years ago I did a hackintosh just for fun but it was completely unusable. NIC issues and no way to use Ethernet. There were keyboard and mouse issues but they were easy to work around.
Sleep would definitely be a fun issue, without ACPI to just do it™ for you.
Wi-Fi, well, it's still Broadcom. It's gonna work as well as it does on Intel macs.
My suspicion is that you'd be lucky if there are only Wi-Fi, Bluetooth, and sleep issues.
Considering that the GPU is completely proprietary, without specs, and only used in a couple of models, it's highly unlikely you'll ever see any driver in a working state for it. I mean, etnaviv and the like do exist, but they're definitely NOT in the same state as, say, Intel or AMD.
And if there's any type of roadblock (say signed firmware like nvidia) then goodbye.
There aren't any roadblocks; we already have the signed firmware situation worked out, and Alyssa's Mesa driver is passing >90% of the GLES2 tests under macOS (Apple kernel, open userspace not using Metal). What's left is the kernel side driver.
Your "highly unlikely" is my "I'm aiming for an accelerated desktop by the end of the year" ;)
Sure, and you will succeed where countless others have miserably failed because ... ?
The PowerVR driver is the oldest of the bunch, has been a FSF priority project for like _a decade_ and has produced exactly 0 usable results (but a lot of prototypes!), and the hardware was of such popularity that it is the one used by Apple before they looted Imgtech. So why expect a usable driver when tens of people have failed on literally more popular hardware? What's different this time? The planets are better aligned?
For the people who expect to ever have a RE'd driver that is on the level of Intel or AMD's, just go and use any of the existing RE'd drivers on your favourite ARM platform, and check for yourself. Try Etnaviv on a Purism for a couple days. If you think the AMD drivers are crashy, or slow, or use a lot of power..
And ironically poster was complaining about potentially unstable Wi-Fi, which is several orders of magnitude easier to RE than a GPU.
I am really not sure what your problem is. You are bitter because, what, they're actually managing it this time? This entire project so far has been an absolutely incredible exercise in reverse engineering, with excellent results, from a whole range of people. It should be applauded.
Bitter? I am just warning that it's highly unlikely that they will be successful, and that even "successful" does not mean exactly what the poster has in mind if he thinks "Wi-Fi, Bluetooth or sleep problems" are the relevant worry. The problems you should expect are in an entirely different league; it will not be "yet another slightly non-functional x86 laptop" like the previous Macs. We are talking about a graphics card with fully RE'd drivers, and if it is actually usable it would be a _FIRST_ in the community -- so yes, I'm skeptical. Even Larabel agrees with me:
> the elephant in the room will be the custom Apple graphics hardware and the significant resources there needed to bring up a new driver stack for Apple M1 without any support or documentation from Apple. The reverse-engineering is more complicated there than the likes of other ARM SoCs where at least there is generally closed-source Linux blobs to plug into and slowly replace. Even in those other ARM cases like with Panfrost, V3DV, Freedreno, and Etnaviv it's been a multi-year effort and that is with having a better starting point than Linux on the M1.
https://www.phoronix.com/scan.php?page=news_item&px=Ubuntu-A...
And it is not like people weren't trying "hard enough" before.
You do realize the same person working on the Panfrost driver is working on Asahi, right?
We aren't a bunch of random people; we've been in this game for years. I think we have a better idea about the development effort required, likely timelines, and what project structure works than Larabel, who runs a blog.
As I said, we already have the userspace graphics stack passing a big chunk of basic test suites. We're already a good part of the way to getting this to work, in ~8 months including all of the hardware bring-up, not just GPU.
Yes, and the infrastructure (Mesa, LLVM) and the like is much better, or at least much better than during the PowerVR MBX days, almost 20 years ago, when I was in the game. I have no doubt that something is going to come out of this. But really, are you going to claim that you have the people to make a RE'd driver on the level of, say, the Intel one?
For someone who is complaining about "sleep issues", I'm quite sure he doesn't understand how different the situation is going to look.
Considering the Intel driver can't even manage tear-free display on some of my machines... Yes, I am.
It helps that we only have to support one hardware platform at this time, not a whole line-up of legacy cards, and that as far as we've seen so far, Apple's hardware design is much cleaner than the competition.
> Apple's hardware design is much cleaner than the competition.
So the "planets are aligned differently", this time. Well, we'll see. I have my doubts since my impression is that Apple looted Imgtech for their GPU, and everything I remember about Imgtech/PVR is a complete disaster.
I don't know exactly how much of Imgtech is still in these GPUs, but the coprocessor/firmware interface is Apple's own design, the AGX2 shaders are a completely new design, etc. I wouldn't be surprised if Apple only keep paying Imgtech because they have patents on TBDR, and there is otherwise none of their IP involved any more.
When I say looted I mean "most of the people I knew who used to work in Imgtech now work at Apple", not any licensing agreement, which I would guess they keep just to avoid a nasty lawsuit.
So they shouldn't try? What exactly is your point, other than being obtuse?
Please, look at the post I'm replying to
> I’m so worried this project will get 95% of the way there, and then all the fun issues will run out and the M1 will be just another MacBook with WiFi, Bluetooth and sleep issues.
I'm saying if you end up with only Wi-Fi, Bluetooth and sleep issues you will be lucky, since to have a problem-free RE'd GPU driver would be a first, while plenty of laptops have problem-free Wi-Fi, bluetooth, and sleep. So it is definitely not Wi-Fi, Bluetooth, and sleep that should make you afraid.
I am forever in awe of the engineers that can work with the low-level bits and make things like this happen. I'm a deer in the headlights whenever I see addrs and `eax 0xDAFAC000...`
I was until I did some device driver development. Imagine it like trying to write an HTTP client for an undocumented SMTP-as-a-service HTTP server (rough sketch after the list below):
* you know how to make HTTP requests (you know how to use I2C or PCI or ...)
* you know roughly what an SMTP-as-a-service should do (you know roughly what a display driver should do)
* you don't know the URLs (you don't know the addresses)
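To make that last bullet concrete, here's a toy C sketch of the situation (assuming the libi2c helpers from i2c-tools; the bus number, device address 0x3c and register 0x01 are invented, and finding the real ones is exactly the reverse-engineering part):

    /* Toy sketch: probe a guessed register on a hypothetical I2C device. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/i2c-dev.h>
    #include <i2c/smbus.h>

    int main(void)
    {
        int fd = open("/dev/i2c-1", O_RDWR);   /* you know how to "speak HTTP"... */
        if (fd < 0) { perror("open"); return 1; }

        if (ioctl(fd, I2C_SLAVE, 0x3c) < 0) {  /* ...but you had to guess the "URL" */
            perror("ioctl");
            return 1;
        }

        int val = i2c_smbus_read_byte_data(fd, 0x01);  /* poke a guessed register */
        printf("reg 0x01 = 0x%02x\n", val);

        close(fd);
        return 0;
    }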
It's a fun exercise in collaborative reverse engineering.
Also there's survivorship bias. The reverse engineering that's most likely to succeed (and thus be written about) are the most approachable ones.
Bravo to all those doing this stuff!
That's regular device driver reverse engineering stuff, and not a bad analogy BTW.
But what marcan is doing is another level of awesomeness altogether. The m1n1 bootloader that runs the rest of macOS in a VM for logging purposes is a Hail Mary move of such epic brilliance, it brings tears to my eyes.
I know it's a pipe dream, but if Apple provided the docs and drivers for Linux on the M1, it would really cement the current de facto standard of a MacBook as the development laptop of choice.
Amazing progress - but I wonder if getting an accelerated GUI is 10x the current effort.
No skin in the game, but can't wait to see how this progresses
Apple is not known to provide any documentation, and it's been incredibly hard to get Linux running since the introduction of the T1 chips.
The T1 had zero impact whatsoever on getting Linux running. The T2 did, but only because Apple's NVMe implementation wasn't quite spec compliant. The trouble we've had with Apple hardware other than that is that Apple have made whatever design choices suit them best rather than following any external standards.
> The T1 had zero impact whatsoever on getting Linux running.
If you define running as "Linux boots", then this is correct, but as the T1 chip provides access to the Touch Bar, which is necessary to have function keys, I'd argue that the T1 chip did have an impact on essential Linux compatibility. Access to the webcam is also provided by the T1 chip and required a quirk to work, as is Touch ID, which isn't supported at all yet.
What the parent comment was probably referring to is not the impact of the T1 chip per se, but of all the changes Apple introduced with the MacBook Pros featuring the T1 chip, like a different way of interacting with the input devices, a different setup for audio and Bluetooth, a new chipset for Wi-Fi and so on. The sheer number of changes meant these devices had pretty bad Linux compatibility when they came out, and even today there are still a lot of unsolved issues around audio, Bluetooth, Wi-Fi and other components [1]. And of course some features like the extended capabilities of the Touch Bar or the Touch ID sensor are still completely unsupported.
Btw: T1 MacBook Pros also required a quirk for NVMe, because Apple's implementation back then also wasn't standard-compliant [2], [3].
[1]: https://github.com/Dunedan/mbp-2016-linux
[2]: https://github.com/torvalds/linux/commit/124298bd03acebd9c9d...
[3]: https://lists.infradead.org/pipermail/linux-nvme/2017-Februa...
I didn't mean that the T1 was directly responsible, but running Linux has been quite difficult since the T1/Touch Bar MacBooks were introduced in 2016.
WiFi and audio devices still don't work on most models released after that: https://github.com/Dunedan/mbp-2016-linux
That's something that changed for the worse over the last couple of decades. In the 2000s Apple's documentation was terrific.
It puzzles me how things like that go. Google and Microsoft have improved their developer documentation in recent years.
I’m pretty sure if you’re not signed into iCloud, you’re of no use to Apple. They’d probably prefer folks not buy their hardware just to wipe it.
I'm pretty sure that after paying Apple ~$1700 for a M1 MacBook Air, they not only don't care if I don't sign into iCloud, they don't care if I smash it repeatedly with rocks. They have the money already.
They definitely do care. They want you in their entire ecosystem using the Apple Watch, iCloud etc. then you’re a sticky customer who’s paying them more money and less likely to ever leave.
> They have the money already.
Not how a publicly traded company works. Broadly speaking, services are becoming a larger and larger part of apples bottom line
Yes, I know. I also know that if you look at the numbers Apple still makes the majority of their income on hardware. By a lot. (Like, 79% to 21% as of the last reported quarter.) Maybe one day this will not be true, but that day is not today, and it is unlikely to be a day next year, or the year after that, or the year after that.
In any case, I was making a dry joke about Apple not wanting your money if you weren't signed into iCloud, because how all companies, private or public, work generally includes "this person giving us some money may not be as good as this person giving us more money, but is obviously better than this person giving us no money at all." One day I will learn that dry humor rarely flies on Hacker News, but that day is not today, and it is unlikely to be a day next year, or the year after that, or the year after that. So it goes.
I think that's a little dramatic and lacking nuance, but I can't disagree
Not signed into iCloud on my M1 Mac mini and everything works perfectly.
We need legislation that will compel hardware manufacturers to provide such documentation, simply so that we can reduce e-waste and protect the consumer in case the manufacturer decides to abandon the platform or change it so its use is no longer safe or acceptable.
> the current defacto standard of a MacBook as the development laptop of choice
the problem with that is that the macbook hardware sucks. a lot.
The number of developers on Macbooks suggests that there are differing yet equally valid opinions out there.
The trackpad is unparalleled. I'm assuming they have a patent on it, because the physical clicker trackpads in every other laptop feel awful.
It's good for a trackpad, but I personally really don't like it; it's too big and I'll take a Thinkpad with smaller trackpad, 3 physical buttons and a touchpoint any day.
Maybe it sucks, but still much less than anything else that I can buy for the same price. I can even buy two different MacBooks and be sure that the hardware is identical, even the screen.
Phenomenal work by Alyssa, she and her contemporaries are making excellent progress on this project!
Can it run anything with thumbnails in the filepicker?
I kid. Now what I really wanted to ask.. Is it using Wayland?
Only framebuffers are currently available and a typical Wayland compositor runs through DRI (direct rendering infrastructure), which is a whole bunch more complex.
I haven't ruled out Mutter having a framebuffer backend, but I suspect she just ran it through X, which has one.
Edit: Looks like DRI is there after all: "Sven and the #dri-devel crew helped me spin up #2, which is what I'm using now."(https://twitter.com/alyssarzg/status/1429583864679129098).
"DRI" is a fuzzy term :)
What's there is a basic KMS driver, so rather than one single framebuffer, userspace can do pageflips. Mutter can run with a display-only KMS driver + llvmpipe for rendering by now I'm pretty sure. wlroots has landed this feature recently-ish too.
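For anyone wondering what "a basic KMS driver" buys userspace, here's a minimal libdrm sketch (nothing Asahi-specific, and the /dev/dri/card0 path is just an assumption) that enumerates connected outputs and their modes, which is roughly the first thing a compositor does before allocating framebuffers and scheduling pageflips:

    /* List connected outputs and their modes via the KMS API (libdrm). */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    int main(void)
    {
        int fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
        if (fd < 0) { perror("open"); return 1; }

        drmModeRes *res = drmModeGetResources(fd);
        if (!res) { fprintf(stderr, "no KMS resources (display driver missing?)\n"); return 1; }

        for (int i = 0; i < res->count_connectors; i++) {
            drmModeConnector *conn = drmModeGetConnector(fd, res->connectors[i]);
            if (!conn)
                continue;
            if (conn->connection == DRM_MODE_CONNECTED)
                for (int m = 0; m < conn->count_modes; m++)
                    printf("connector %u: %dx%d @ %u Hz\n",
                           conn->connector_id,
                           conn->modes[m].hdisplay, conn->modes[m].vdisplay,
                           conn->modes[m].vrefresh);
            drmModeFreeConnector(conn);
        }
        drmModeFreeResources(res);
        close(fd);
        return 0;
    }

Actual modesetting and pageflipping then go through drmModeAddFB/drmModeSetCrtc or the atomic API, which is what Mutter and wlroots drive.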
> Can it run anything with thumbnails in the filepicker?
This is of course a harder and more pervasive problem than one might realize. macOS Finder RAM usage can explode catastrophically and take down the whole system if you have image previews enabled and click on a very large (gigabyte) TIFF.
> macOS Finder RAM usage can explode catastrophically and take down the whole system if you have image previews enabled and click on a very large (gigabyte) TIFF.
I've experienced (and confirmed with a quick search) that Finder won't display a preview for any file that it doesn't have the resources to show.
Watch what can happen: https://imgur.com/a/BAmzgkm
It's quite amazing.
I’ve experienced QuickLook bringing my whole system to a halt just opening a Save dialog. Not actually previewing anything at all.
Both QuickLook Satellite and also com.apple.appkit.xpc.openAndSavePanelService are affected. Who knows what else. I think the save panel is actually trying to generate a preview in those instances, though.
That sounds like an edge case that should just be accounted for in Finder. Ignoring files over a certain size is a quick solution that can be expanded upon as your thumbnailing programs grow in efficiency and improve their handling of low-I/O situations or large files.
Should! But apparently isn't. In fact, nearly every image viewing/accessing program assumes that your image content will just happily fit entirely in available RAM after decompression and doesn't bother with things like live tiling.
>In fact, nearly every image viewing/accessing program assumes that your image content will just happily fit entirely in available RAM after decompression
Sounds like a totally reasonable assumption, for 99.9% of the cases (including graphic designers).
Absolutely, and the pervasiveness of that mindset leads to things like tearing through swap and grinding the computer to a halt because the user clicked on a random file. It doesn't take a supergenius or paranormal powers to realize that files might be arbitrarily large. Engineers mostly just fail to care.
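For what it's worth, the "ignore files over a certain size" guard suggested a few comments up could be as simple as this hypothetical C sketch (libtiff for the header read; the 256 MB budget and the function name are made up):

    /* Refuse to thumbnail a TIFF whose decoded size would blow the budget. */
    #include <stdint.h>
    #include <tiffio.h>

    #define PREVIEW_BUDGET_BYTES (256ull * 1024 * 1024)

    int safe_to_thumbnail(const char *path)
    {
        TIFF *tif = TIFFOpen(path, "r");
        if (!tif)
            return 0;

        uint32_t w = 0, h = 0;
        TIFFGetField(tif, TIFFTAG_IMAGEWIDTH, &w);
        TIFFGetField(tif, TIFFTAG_IMAGELENGTH, &h);
        TIFFClose(tif);

        /* Decoded RGBA is width * height * 4 bytes: a "1 GB TIFF" can easily
         * decode to tens of gigabytes, which is what takes the machine down. */
        uint64_t decoded = (uint64_t)w * h * 4;
        return decoded <= PREVIEW_BUDGET_BYTES;
    }

Decoders that do proper tiled/strip reads can of course do better than refusing outright, but even this check would stop the pathological case.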
Wow, going through this person's online presence, she is astoundingly prolific. Transgender, as well.
There seem to be a lot of trans folks doing crazy tech stuff: her, the Perl 6 on Haskell woman, the actually portable executable woman, byuu (RIP), etc.
> Transgender, as well.
I might be wrong, but that seems to be the norm for techies. At least from what I've seen online.
There's a somewhat significant correlation between high-functioning autism & gender dysphoria, and between high-functioning autism & technical proficiency.
Are there any videos of the movement of windows? Trying to gauge if it’s sluggish or not.
It is using llvmpipe, which is an LLVM-based software renderer, so it will be more sluggish than what everyone here is used to, which is hardware acceleration through the Direct Rendering Manager. The graphics drivers are not there yet.
Hence, don't expect graphical elements or graphics-intensive software to be anywhere near as fast as on other systems using accelerated graphics right now.
It looks like that's no longer true: https://twitter.com/alyssarzg/status/1429583864679129098
"Up until now both the Asahi and Corellium kernels were on #1. Sven and the #dri-devel crew helped me spin up #2, which is what I'm using now. #3 is the ultimate here be dragons, but will get us 4k display"
That tweet, and the whole thread, only speak of the Display Controller and Linux render/framebuffer management. Graphics drivers are another separate topic, which as the parent noted, aren't usable yet.
Excellent!