macOS Sonoma 14.4 might break Java on your machine
appleinsider.com
I find it difficult to imagine how a change like this (sending a SIGKILL to the process instead of SIGSEGV on a page fault) can be done in the final release and not in one of the EA releases or betas. It is clearly a breaking change with no easy workaround (since SIGKILL cannot be caught), for a behaviour which is well defined by POSIX.
Even if you momentarily ignore the reasons why someone thought this could be a good idea, why not do it in one of the pre-releases or betas??? Doesn't look like the kind of thing you'd want to do as a last-minute change.
The real problem in my opinion is the fact that you cannot go back after a macOS upgrade. So if something like this happens, you literally have no option but to wait for Apple to release a fix, if they release one at all.
It sounds like the kind of half-baked change a junior dev might come up with, but lord knows how it made it through code review and into a release.
hypothetical excerpt from the commit message:
code review: self-reviewed
test plan: this change is so obvious no tests are needed
I wish you were joking…
those who don't know: ^_^ those who know: -_-
I think someone changed the SIGSEGV to a SIGKILL to debug something and forgot to revert it before merging (boo reviewer boo !)
That's easy to guess
The good kernel engineers are working on iPhone or Vision Pro, not on MacOS
Don't they all use the same kernel?
No, because naturally they have different kinds of requirements, and space comes at a price as well.
Technically it is kind of the same kernel, but with a different set of configured features.
Already a bit of an oldie:
"Mac OS X and iOS Internals: To the Apple's Core"
https://www.amazon.com/Mac-OS-iOS-Internals-Apples/dp/111805...
Mostly, but JIT is not allowed in App Store apps, so this change probably didn't break anything over there.
But it will soon be once the EU DMA changes require support for alternative browser engines. Was this change an attempt to lock down overly wide permissions a JIT might set up, and perhaps really intended for iOS rather than macOS?
I don't think the EU DMA changes will require that JIT is supported. Supporting alternative browser engines is something that can be done perfectly without JIT.
>Supporting alternative browser engines is something that can be done perfectly without JIT.
Unless I missed something and the DMA requirement is along the lines of "having to allow other browser engines as long as they don't bring their own JITting Javascript engine", no, it literally can't.
Depends on whether Apple’s own browser engine requires JIT. I think the point is to ensure fair competition.
Uhm, yes, you are right.
Nope it cannot. If Apple is using JIT, which massively speeds up JS, and is denying it to competitors, then Apple is giving itself an unfair advantage and is breaking the DMA.
And you know, you’re one web search away from finding the page where Apple directly mentions access to a JIT for alternative browsers: https://developer.apple.com/support/alternative-browser-engi...
Especially when it was not in the beta version and introduced with the release candidate.
I do not have very kind words for Apple's dev teams today. Charitably I am trying to think that screwups happen, but this is bad and it is very hard to see how anyone thought merging it into an rc was okay.
It’s obvious that all the good Unix people left Apple eons ago.
Been saying this for as long as I've been using macOS: it is not a developer-friendly OS, and I am close to the conclusion that this reputation is a psy-op. Yeah it's pretty, it mostly works when the box is first turned on, and the hardware is unmatched, but macOS itself is actually subpar. QA seems second tier, and things you'd expect from other OSes, like, I don't know, using a third-party second display, are just bad experiences. Docker sucks, POSIX compatibility is technically there but isn't really useful, and the thing randomly loses network and only rebooting fixes it. I reboot my corporate Mac more often than I rebooted my Windows enterprise laptop.
The reputation was well earned when Linux on the Desktop wasn’t as easy or friendly as it is today and was severely lacking good quality GUI applications.
Windows was a virus laden mess and was not useful for running Linux apps.
And besides the flaws of the other OS'es, OS X had some of the nicest window management features (Exposé from Snow Leopard is still my favorite window switcher), was a UNIX and had a thriving indie development scene (which was basically killed by iOS…).
Since then OSX has completely languished as a developer platform. It’s not clear what you can do today as a developer to make your life easier that you could not a decade ago on OSX. And in fact, the destruction of the indie dev scene, combined with the many heavy handed security restrictions of dubious benefit have made it a far worse dev environment than a decade and a half ago.
Further, Linux DEs have greatly improved and Windows now supports Linux development.
The Mac ecosystem has seen a complete turnaround where you now buy a Mac for the hardware, not the software.
Here are a few particular things that I like about development on my Mac:
1. Terminal is very usable, compared to Windows cmd. Modern Gnome Terminal is good, though.
2. Cmd+C for copy, Ctrl+C for SIGINT.
3. Touch ID instead of the root password, which works with a Bluetooth keyboard as well, and that's with absolutely minimal configuration: uncommenting a single line (sketch below).
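For reference, a minimal sketch of that single-line change, assuming the sudo_local mechanism from recent macOS releases (older releases edit /etc/pam.d/sudo directly; treat the exact paths as approximate):

    sudo cp /etc/pam.d/sudo_local.template /etc/pam.d/sudo_local
    # then uncomment the Touch ID line in /etc/pam.d/sudo_local:
    #   auth       sufficient     pam_tid.so

After that, sudo prompts for a fingerprint instead of a password.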
1. Default is now Windows Terminal which is so much better than Cmd and Mac Terminal.
2. Ctrl+C works for copy when text is selected otherwise SIGINT.
But both are awful compared to the other terminal emulators available. My favourites are alacritty and kitty.
IMO Windows Terminal is amazing
Good design, great performance, great text rendering
Has tabs, themes, quake mode, a settings UI, and keyboard shortcuts (that you can rebind)
On macOS, I like iTerm2 a lot!
I've never had a terminal crash, except Windows Terminal.
> great performance
compare with what?
> Good design
I strongly disagree. I feel that Windows Terminal suffers from a bloated UI and completely asinine organization of the configuration options.
Yep, I use Alacritty both on macOS and Linux and I love it. Still not sure about Kitty, I hate Python scripting but might pull the trigger at one point, we'll see.
I would use alacritty exclusively if it supported ligatures.
I use Alacritty in Windows with WSL2/Ubuntu. Works without troubles.
> Terminal is very usable, compared to Windows cmd.
It is subpar, however, when compared to Windows Terminal.
Which is probably subpar compared to Linux terminals and macOS iTerm2?
Having used iTerm2 and macOS for a long time, Alacritty in Windows with WSL2 and a Linux distro is a much better experience.
To be fair, I think it's very subjective, given the fact that Alacritty doesn't even support tabs (by design).
Also I find a simple Konsole on KDE+Linux superior for every Windows and macOS setup (including iTerm2).
I find iTerm2 to be slightly better than Konsole in a handful of small ways, like having a built-in quake mode instead of having to use yakuake instead (which comes with various little oddities relative to konsole), ability to add colors to tabs, etc.
That said, iTerm2 isn't good enough to make me use MacOS again.
Right, point being that using terminals in Windows is not necessarily subpar to iTerm2 on macOS.
> Cmnd+C for copy, Ctrl+C for SIGINT.
Can't compete with the streamlined ease of highlight-to-copy. I never use a keyboard shortcut to copy text from a terminal (except for yanking in vim / evil-mode).
1. Who cares? I use iTerm2, Alacritty and Rio and they all work very well. Programmers don't have trouble installing alternative programs.
2. I remember the XFCE4 Terminal using Ctrl-Shift-C and Ctrl-Shift-V for copy/paste and liking it, no more SIGINT by mistake. But IMO a minor gripe, you can remap keys for copy/paste in most self-respecting terminal emulators anyway.
3. I agree on that but passwordless sudo saved my sanity and I don't care anymore. If I install a virus then I had all the troubles coming and I'll take responsibility. ¯\_(ツ)_/¯
1. I care, because I prefer default setups as much as possible. Third-party software is often a burden.
2. That's a bad approach, because it'll be Ctrl+Shift+C in the terminal but Ctrl+C in your IDE. That's inconsistent. I usually set it the other way around: Ctrl+C is copy and Ctrl+Shift+C is interrupt, but it's not as good as two different shortcuts.
3. I would say it's not only about sudo; it's about other things like revealing passwords in the browser, using Touch ID for passkey authentication, and laptop unlock, of course. Those are not strictly developer things, but it's very convenient when it works uniformly. And having some security measure is better than having none; even something as simple as touching a button might prompt a second thought. For example, a few days ago I made a mistake and installed caddy (terrible software, don't recommend). After launch it started tinkering with my system, installing some certificates, and if not for Touch ID, I'd have ended up with my system in an unknown state. It probably would have happily corrupted my system with passwordless sudo.
1. It's your own idealistic preference. Low priority for everyone else.
2. Sure, I don't disagree, I was mostly saying the whole stuff is configurable and we can tune it however way it's more convenient for us.
3. I agree and I'd want a fingerprint or a proper FaceID (with depth-radar and lasers and all, not just something that can be duped by print photos) on my computers but it's not something that would stop me from buying an otherwise excellent machine if it doesn't have this feature.
As for Caddy, let's agree to disagree. :D I will concede that many programs' installers are quite dumb but Caddy itself is excellent once you start it up and it starts doing its own thing.
> As for Caddy, let's agree to disagree. :D I will concede that many programs' installers are quite dumb but Caddy itself is excellent once you start it up and it starts doing its own thing.
Installing a root certificate into my system without any explicit verbose flags is something malware would do. The developer who thought that was a good idea is one whose software I'd avoid; his views on usability are completely opposite to mine. What else will he do by default? Listen on 0.0.0.0? Add a firewall exception for usability? Automatically create a tunnel to the Internet so I can share my work with others without those pesky port forwardings? Some people might like it, but not me; I prefer things as explicit as possible when it comes to questionable features.
My reason to use Macs is for the modern version of NeXTSTEP and iDevices, not really for the overpriced hardware.
However at home I have always been a Windows/Amiga/UNIX head, with Linux being the cheaper path to that UNIX experience, had Microsoft not messed up the POSIX layer, probably I would never have bothered.
For some time I even tried to acquire one of those nice Toshiba laptops using Solaris that Sun used to have.
> And besides the flaws of the other OS’es, OS X had some of the nicest window management features
I'm always confused by such statements, because what KDE offers on Linux easily dwarfs every window management concept in every major OS. I always need to install additional third-party apps (e.g. Rectangle on macOS) to get a poor-man's equivalent of KDE-style window management functionality.
> had
Nothing came close to OS X's Exposé 15 years ago.
OS X has gone backwards in terms of window management since then.
Linux is far superior. Even Windows is slightly better because at least windows snap to edges.
But on macOS I don't have one subset of apps that works in X11 and another in Wayland… nor inconsistent fractional display scaling. I wanted to love KDE, but in complex setups it's difficult.
I cannot believe that, on macOS, high cpu usage leads to audio buffer underruns and popping, like something from the 90s but on today's premium hardware. It's inexcusable. On a platform that is constantly touted as the best for audio work, DAWs etc no less.
Wow, I thought I was the only one, as when I asked my team no one admitted to facing these issues - but I've faced it on multiple new MacBook Pros.
I discovered them when running Stable Diffusion, which maxes out my M1 Max: audio started to stutter a lot despite everything else running fine. I had even been thinking to myself how nice it is to be able to tax the machine and still be able to use it.
I experienced it, but I'm not sure if it relates to high CPU. My guess it's something about kernel locks.
I experience this a lot (even up to complete freezes, even of the touchpad feedback, leading to crashes) and am confident that it is at least related to swapping due to memory pressure / RAM usage.
Why they'd let the system become unstable rather than killing some apps that demand too much is beyond my understanding.
Yeah, the same on M2. And to think I originally bought this machine with audio work in mind...
Yep, it's been like this for.. I don't know how long now. I have "killall coreaudiod" always ready to be used on my M1 Mac since it's a constant problem.
Changing the sound volume under high CPU load will also move the balance to one side.
I often have to re-center the balance; it's driving me nuts.
I've found this to be useful to solve that problem. https://www.tunabellysoftware.com/balance_lock/
How the hell does something like that even happen lol
increase-left-volume; increase-right-volume;
inside a function that can be preempted
Right, but that would only happen if the code literally increased/decreased the channel volume instead of simply setting it to the new value. Either way it's an embarrassing bug to have for something so essential and simple, the kind of bug that reminds me of Linux desktops from 15 years ago or so.
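A purely hypothetical sketch of the race being guessed at here (none of these names are Apple's; it just contrasts relative nudges with an absolute set):

    /* Hypothetical: if each volume change were applied as two relative
       nudges, preemption between them could leave the channels skewed. */
    void nudge_volume_up(float *left_gain, float *right_gain, float step) {
        *left_gain  += step;   /* ...preempted here by another volume event... */
        *right_gain += step;   /* right channel can now lag behind the left    */
    }

    /* Setting both channels to one absolute target value avoids the drift. */
    void set_volume(float *left_gain, float *right_gain, float target) {
        *left_gain  = target;
        *right_gain = target;
    }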
Thought the popping was a Rosetta 2 bug as I had only noticed it running x86 software on m2 pro.
Your explanation makes a lot more sense as x86 is probably the only time I’m pushing the cpu usage high enough.
The popping is darn annoying
It precedes the switch to ARM. My 2019 (last Intel generation) MBP also had this issue. And the previous one from 2015 (though that poor thing only had 8GB of RAM).
I’ve been using macOS as my primary development environment for the past ~4 years and have loved every minute of it. I haven’t run into any of the issues you’ve mentioned thankfully, and docker works absolutely fine for my use cases. I can’t see myself ever switching for any reason.
I've been using MacOS for about 5 years, as it is the machine issued by my workplace. And I hate it. It's so much worse than Linux that it's not even a joke. Hell, I think I would have preferred to work on Windows with WSL than this crap.
The hardware is not even that good. I presume people like it because it looks slick and serves as a status symbol.
Please name a competitor with better laptop hardware (I assume your workplace provided you with a laptop). Things such as better screen, lower fan noise/computing power ratio, speaker quality, touchpad and battery life.
The only thing I prefer about the Mac M3 Pro (work), hardware-wise, over my MSI w/4060 and 64GB RAM is the battery life. Everything runs slower on my Mac. There are some niceties (the color filter for my color-blindness, for example, and the keychain), but I'm not a fan of:
* lack of USB ports, necessitating dongles
* lack of HDMI/DVI outs
* no camera lens cover
* uncomfortable chiclet keyboard
* oversized touchpad
* overheating
* lack of power cores, leading to slower parallel compilations
I do appreciate the ease of use of (especially) the network utilities of mac, but it's definitely not my preferred machine.
> Please name a competitor with better laptop hardware (I assume your workplace provided you with a laptop).
My workplace provides me with a MacBook yes.
I honestly prefer using my 5 year old Dell G5.
I absolutely hate the MacBook screen. It's reflective, whereas I very much favor the matte finish of basically every other manufacturer.
Fan noise for me is essentially irrelevant. Not that my Dell makes that much noise anyway. I am normally listening to music while I work, and when gaming there's in game sound.
Speaker quality is shit on either laptop. My cheap speaker/subwoofer set that is connected to the docking system is leagues better than the laptop speaker (that I never use anyway).
Touchpad is... as bad as any other touchpad? I use a mouse for a reason.
Battery life is the only thing a MacBook would have in its favor. Again, irrelevant. 99% of the time it is plugged into power.
The MacBook comes with the major downside of being tied to OSX, however. This cannot be overstated.
You sound a lot like me.
I think macOS has the worst window management of any major OS especially for power users. It's so bad that buying a half dozen third party apps to "fix" the UX disaster that is macOS has basically been normalized.
I don't like any touchpad so even if a MacBook touchpad is better it's still just the best worst method for mouse input.
I think Apple makes nice hardware though. But I also work docked basically 99% of the time and use it as a desktop replacement so many of the Apple hardware advantages are moot for me. I'm kind of shocked when I see people voluntarily working on a laptop undocked. Working with a crappy keyboard, a crappy touchpad, and a small screen is torture to me even if it's the best crappy keyboard, the best crappy touchpad, and the best small screen.
Precisely.
I use it undocked very rarely, to the point that those hardware advantages are irrelevant.
The downsides are still there when docked however (the horrible glossy screen and the awful OS I have to handle).
> I think macOS has the worst window management of any major OS especially for power users.
Do you use full screen windows ?
Ok, so you couldn't list a better laptop. Any laptop is good docked and ignoring the noise with headphones. The only valuable point in your response is the matte screen, which can be fixed with a cheap Amazon film if wanted. Still, even with the glossy screen I can see my MacBook much better in direct sun than any matte HP EliteBook I've used in the last 5 years. The Sure View screens are actually almost unusable indoors.
Speakers bad in a MacBook? Come on man at least try to be objective.
> Ok so you couldn’t list a better laptop
I did. You just didn't want to listen.
> Speakers bad in a MacBook? Come on man at least try to be objective.
Mac users that list the laptop speakers as good are really digging in. It is marginally better than regular shitty laptop speakers, but it is still very shitty. Any cheap proper speakers are much better, to the point that listing it as a positive is misleading.
And I can use my boring Dell with my windows open on a sunny day without issues. MacBook requires me to close the blinds.
> I did. You just didn’t want to listen.
Hate to be that guy, but you literally didn’t. Your “I honestly prefer using my 5 year old Dell G5” does not answer that question.
I presume reading skills are hard to acquire these days. I can tell you in more simple terms. Unfortunately HN does not allow me to draw with crayons:
"Any manufacturer that provides hardware that has screen with a matte finish and whose hardware is not tied to a shitty OS".
I didn’t realize you were joking until the second paragraph.
The only joke here is people white-knighting for a manufacturer of luxury toys.
How dare someone doesn't like a MBP?
MacOS has bugs and inconsistencies everywhere. Some are bugs, some are inconsistent UX, only a few of them are "well... I just don't like it". An example: you cannot right-click app icons in the Dock in the Mission Control overview.
The list of specific annoyances and bugs is likely in the 3 digits by now, and I've only used it for half a year.
The worst of all was getting the M2 soft-bricked by an update, because I had changed the display refresh rate to 60 Hz, because the tween duration when moving between desktops was for some reason tied to the refresh rate: about a 2-second tween on 120 Hz before you regain input control, and one second on 60 Hz. Impressive for such a thing not to be picked up by QA.
Mac OS UX is just bad. I've been using it daily for a few years already, but I still struggle to perform basic tasks. And by "basic" I mean "How do I move a file?" or "How do I go one folder up?". Sure, I can read about this, but this shouldn't require me to read a manual.
Finder is an abomination. No app has been quite as rage inducing for me personally as Finder.
I use the command line almost exclusively for file management and avoid it like the plague.
Not as bad as:
How to lose your work using Undo Copy in Windows
It's a special kind of stubbornness that makes it so that a file manager doesn't have support for cut and paste, in order to move files.
The answer to most "it's a bit dumb that MacOS doesn't let you / forces you to" is "install app X, Y, Z".
- Don't like that apple's "Music" app pops up when you connect a Bluetooth headset? => Install an app.
- Want to be able to "alt tab" through windows of the same program, or in general not be uselessly flawed? => Install an app
- Want to be able to move and resize windows without aiming at the exact edge pixels of the window? => Install an app.
- Want to move & resize windows to very common places and sizes on a screen? => Install an app.
- Want global hotkeys for whatever? => Install an app
- Want a software package management system à la apt? => Install an app.
- Want to rebind keys or make things like Home/End not be dead keys... because apple keyboards don't have that, and they cannot be bothered with it. "you should be using "⌘ + →" anyways... or, I suppose it depends on the window"? => Install an app.
- etc....
You don't get any of these annoyances with Linux / Gnome. "Why not use that if you hate MacOS so much?" I pretend to hear you say. First of all, because of anti-competitive reasons by Apple, I sort of have to. Secondly... something something angry old man yells at clouds.
>Don't like that apple's "Music" app pops up when you connect a Bluetooth headset? => Install an app.
Get a better bluetooth headset that doesn't send play when it connects. That's the problem. It's the headset that's doing something wrong.
It's always something else doing something wrong. Everything has to conform to Apples design guidelines.
Heaven forbid the OS could let you chose what you want to happen... like launching Spotify instead, or simply nothing at all...
It's the headset doing something wrong. What a fascinating take. You wouldn't happen to be working at Apple R&D?
Maybe I missed the /s, it's just too on the nose, almost as if you're ridiculing it.
> Heaven forbid the OS could let you chose what you want to happen... like launching Spotify instead, or simply nothing at all.
Yes, the OS could do that. I'd prefer if Apple made that key configurable. But the problem isn't Apple and Bluetooth headsets, it's that Apple made the play/pause button unconfigurable (without additional software).
>It's the headset doing something wrong. What a fascinating take. You wouldn't happen to be working at Apple R&D?
IT IS THE HEADSET DOING SOMETHING WRONG! No one asked it to send play when it connects, the manufacturer decided to do something brain dead there. It still messes with Windows and Linux machines too if you set the play/pause button to launch an app.
Do you happen to work for a bluetooth headset manufacturer? It sure sounds like you're excusing their broken behavior.
> But the problem isn't Apple and Bluetooh Headsets, it's that Apple made the play/pause button unconfigurable (without additional software).
Compare it with
> - Don't like that apple's "Music" app pops up when you connect a Bluetooth headset? => Install an app
The only difference is that you add a puzzling self contradiction of "the problem isn't Apple, but that Apple has..."
This is entirely an Apple and MacOS thing, and most certainly not an issue on Linux. Though I don't know about Windows.
Cmd backtick? For tabbing through windows.
Finder cut and paste is cmd+c then cmd+option+v for move. On the rest I agree.
Which is pretty counter-intuitive: it's "copy", then "paste but delete the original"? Much more natural to remember ⌘-c, ⌘-delete, change folders and ⌘-v.
You can also drag and drop and toggle between moving and copying by holding a key
The gui is no different than windows in this regard. If you do it on command line its just mv and cd like its been for 40 years.
Not to defend MacOS, but Windows is the no. 1 in inconsistent UIs - by a large margin. If you want a consistent UI, choose Linux.
Windows has a mess of legacy UI that are never fully replaced.
They could have gone down the path of translating the Windows UI APIs, though I think it's better that they left it as is. The bigger issue, however, is that there are different systems depending on what it is you want to configure, and it's all duct-taped together in an ancient registry that I'm just amazed doesn't break more often than it does.
Not to mention that the thing Windows was always supposed to be better at was driver support. On Windows, you have to source drivers manually and try your best to avoid all the bloatware that comes with them. Windows itself might also decide to replace a driver with an older one (version, release date...). WiFi drivers didn't work the last time I upgraded the mobo either.
As for Linux. Completely agree. Gnome is consistent, and gets out of the way often enough. There are some annoyances there too. I have my 90 y/o grandma use Linux/Gnome, because that's what Just Works these days.
Linux can be consistent if you reject all but applications built in your desktops toolkit of choice. Which means you're missing out on a lot of applications.
Strange. I've got the complete opposite experience. Most displays work fine. It helps if you stick with models that advertise at least a bit of macOS support (like some Dells or Samsungs). Docker sucks, but there's Orbstack. I don't care for posix, but most of my *nix tools are available in Homebrew. Network is steady and more bulletproof than wireless on Linux. Is your experience based on a recent mac?
The problem with third party displays on the Mac is the system has some deeply held assumptions that all monitors are as pixel-dense as the ones which Apple ships with their machines. 100% DPI scaling for classic ~100ppi monitors is a poor experience since macOS no longer supports subpixel font rendering, and DPI scales higher than 100% but less than 200% are really just 200% in a trenchcoat because the system renders everything at 200% then uses non-integer resampling to squish it onto the display. That works well enough on the ~220ppi panels that Apple uses but isn't ideal on common 4K displays which are usually around 140-160ppi.
It's not unusable if you at least have one of those medium density 4K monitors, but it feels like a step backwards if you're used to Windows which still (mostly) supports subpixel font rendering for crisp text at 100% scale, and can render natively at 125/150/175% scales.
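To put rough numbers on the "200% in a trenchcoat" point (illustrative figures, not measurements): a 27-inch 4K panel is 3840x2160 at roughly 163 ppi. Set it to "looks like 2560x1440" and macOS renders a 5120x2880 backing store (exactly 2x the logical resolution), then scales that down by a factor of about 1.33 (5120 -> 3840) to fit the panel, which is where the slight softness and the extra GPU/memory cost come from compared to rendering natively at a fractional scale.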
While this is definitely an issue, ironically we may get to a point at some time where subpixel rendering becomes less and less useful, for reasons aside from pixel density.
As you know, subpixel rendering only works when you have a very specific display (i.e. LCD) since it takes advantage of, and hence relies on precise characteristics of how the pixels are physically laid out in the display.
This means that subpixel rendering fails to work on displays that have different layouts. The most recent example has been newer OLED displays (I think QD-OLED), which have a different pixel arrangement, and then you ironically had Windows users complaining that the text looked jagged. (Although you are able to change the subpixel-rendering algorithm to match QD-OLED, the unfortunate problem is that it's not really possible for this to work for all applications, since it depends on which UI engine you are using; Windows is a giant mess here.)
Long story short, I can see why macOS removed subpixel rendering: it's basically a workaround that reflected a time when you had less dense displays which were all LCDs with the same physical layout. Nowadays, though, high-pixel-density displays are a lot more common, and then you don't need subpixel rendering at all (and that works with all of the different physical pixel layout arrangements).
Yes, this is irking me too. You really need a high density display to enjoy macOS, it shouldn't be like that. I guess they like to leave the baggage behind.
"Retina" Macs were introduced 12 years ago. So what should Apple do? Expend serious time and engineering effort to fix problems with hardware that's far behind the times? Nobody's calling for the return of MCGA graphics so it's clear there's a cutoff needed somewhere.
I can agree with accommodating old ~100ppi displays not being worth it past a certain point, but the way that Apple handles high DPI by standardizing on ~220ppi across the board is setting the bar way higher than most people can afford even today. The cheapest large-format monitors which meet that standard are at least $1500, when you can get a 4K 160ppi monitor for as little as $200, and 160ppi is absolutely fine on systems like Windows which natively support fractional scaling. I think Wayland originally planned to work like macOS but they are in the process of implementing true fractional scaling akin to Windows.
Another factor is that focusing on pixel density above all else comes at the expense of pixel speed: there are no large monitors which meet Apple's ~220ppi standard and have a refresh rate of 120Hz or better. No matter how much money you're willing to spend, they just don't exist.
On macOS 4K displays are fine. Not the best, but fine.
I think the display resolution market suffers from some opposing directions: people who do productivity work like high resolutions, but since Windows (majority of the market) still seems to struggle to scale UI elements correctly some people prefer lower resolution. Also, gamers associate high resolutions with heavy GPU requirements or lower framerates. And then there are companies that are used to spending the absolute minimum on screens that are barely 1080p.
Either make or subsidize someone to make under $1000 monitors that fit their specs.
The worst part is you can't even throw money at the problem without still compromising somewhere - third-party 6K/8K monitors which meet Apple's PPI standard do exist, but those are currently limited to 60Hz, so if you want a fast refresh rate you have to settle for 4K at a lesser PPI.
I use both Windows and macOS on the same two 4K 27" monitors and don't see any problems, or that Windows is sharper, or anything like that.
I have a modern, supported Dell 4K display with a USB-C to USB-C connection. macOS shows a black screen for a few seconds before being able to show anything. Also, using the 4K monitor at its actual 4K resolution makes everything very slow, so I just use it at lower resolutions :|. I never saw such issues on even entry-level Windows laptops.
Is it macOS or the monitor not being responsive? Also, if it's slow your laptop might be too old to properly support 4K or you might have misconfigured something.
At least Dell supports their hardware: have you tried updating the monitor firmware or submitting a report?
It's the built-in mac display that goes blank along with the external monitor. On my Intel 2019 MBP it was very slow (many seconds), on this new M3 it's much shorter but still more than a decent Windows laptop which is almost instant.
> Network is steady and more bulletproof than wireless on Linux.
I gotta disagree hard here. Macs have by far the most obnoxious and temperamental WiFi stack I've ever experienced. Constant disconnects, have to turn it off and on to get it to bother looking for APs again. All of them constantly trigger bad experience scores in UniFi.
Absolutely subpar compared to any of my Linux devices, even the raspberry pi jammed inside a metal box.
> Been saying this for as long as I’ve been using macOS: it is not a developer friendly OS
My opinion and experience is the exact opposite. In fact, I switched to Macs BECAUSE of how good macOS was for development and just general work and daily life.
Around 10-12 years ago I got an iPad, my first ever Apple purchase, as a gift for my aunt. I loved how simple and clean iOS was and found the apps and games interesting, so I thought I'd dabble in iOS dev. I was on Windows 8 at the time (and already sick of Microsoft's bs) so I downloaded a VMWare image for Mac OS X Lion.
As the days went by I found myself spending more time in macOS than in Windows, and enjoying it! A month later I bought my first ever MacBook and never looked back.
Well, sometimes I do look back at Windows, in a VM on macOS, just to try some games, and man, it's still a sad joke in 2024.
> posix compatibility is technically there but isn’t really useful
It's been a long time since I ran a Macbook, but this was my biggest problem. The weird uncanny valley where its almost the same but then not.
WSL has problems but there's a very clear line in the sand between Linux and Windows and you know what you're getting.
Indeed - I recently switched to Linux and I'm much happier. I'm shocked how much better the experience is lately.
Agree with this. I tried to create Objective-C files in Xcode 15.2 many times, spent days thinking I must have messed up somewhere, and finally I found this https://forums.developer.apple.com/forums/thread/743032.
Tried updating to the latest Xcode, learned that my Mac's storage is almost full. Why? iOS simulator images were taking a whopping 40 GB of space even when I didn't target those iOS versions or test on those simulator devices. I uninstalled all the images, keeping only the one I build for. Then I tried updating Xcode again, and the issue with creating Objective-C files was fixed. But then it forced me to download iOS 17.2 again along with tvOS and a bunch of other extra things. Now my space is close to full again. Why, Apple? Why do I need iOS 17.2 when I build for 15.4?
The third party display thing is dead on.
I have a $2000 AUD LG monitor that Mac OS just occasionally decides to overdrive (or something) and cause instant but temporary burn in. I'm not the only one - you can find others on Reddit.
Funny, I have the same feelings about Linux. So many bugs and glitches that it really feels like nobody actually tests anything. And Windows, while I don't remember any weird glitches, just has so many ads that it makes me feel like I'm browsing some sort of yellow newspaper. Almost every update of Windows brings with it new installed apps like Candy Crush or Amazon Prime Video that I never opted into.
Linux is that home made thing that is all function over form, you’re proud of but any time a guest wants to use it you have to be there to make it work, and it’s pretty ugly. Windows is the tacky plasticky thing you bought at Walmart/Amazon. It works, but you’re not putting it out to show anyone. It’s a cheap TV. Apple is the nice expensive thing you got from a design magazine but sometimes you wish they had thought of function over design.
The solution then is to use all three interchangeably, like me. You hate them all, but since the hate is spread among all three it is more manageable.
Mac OS is essentially a GUI on a turnkey Unix (BSD-derived, not Linux) system. Under the hood it's very comfortable.
Some people think that I am crazy when I say that I would prefer to use Linux instead of macOS on a MacBook if that were an option for me. I constantly have network issues and UI issues that I can only recover from after rebooting the system. But the big problem is the macOS experience; I really hate it.
What MacBook do you have that you can't run Linux on it?
Any with arm
All the ARM ones run Asahi Linux.
Not saying you're wrong, but what else is there that just works and is usable instantly when you open the lid?
While my work Windows laptop might be faster, it's certainly not the one I'm going to pick in a pinch or when I want to travel with just one laptop.
The best mobile configuration I know right now is a Macbook Pro + Parallels. Even with all of its deficiencies.
Are there any good Linux laptops with similar experience as Macbooks when it comes to power management and time from lid opening to usable state?
You could always install Linux on your Macbook Pro. Personally I prefer MacOS though.
Not so simple for apple silicon.
I haven't tried Asahi Linux myself, but it looks promising. Do you know if it lives up to expectations?
Why? Macos is already a unix os.
Everyone says this until they're spending their whole weekend working around some weird tiny differences between a GNU tool and something that comes with OSX or Homebrew or something.
And when you finally get it working you have the "satisfaction" of knowing you bought NOTHING of value with all that wasted time -- just the opportunity to be able to use FaceTime on the same laptop you develop on, or something.
If you're looking for a UNIXy nice desktop OS and that's the entirety of your requirement, mac OS is great.
If your REAL use case isn't just generic hanging around, but specifically developing/maintaining an app that runs on linux in production, then you're only hurting yourself the more you introduce mac or windows into the dev process. Nothing good can come of it. Best case you get lucky and you don't run into any issues... and also reap no real benefits because a linux desktop nowadays works fine, it's not the year 2000.
One line in the CLI and you can have your GNU tools installed. It's no harder to manage software environments on a Mac than it is on Linux, because you can use the same tools. I use conda on the Mac and on the Linux server that does the compute. I could use Docker too if I wanted.
The parent comment was asking about a laptop that was as good as a macbook but to run linux on.
That's a definite it depends.
Macs are finicky with hardware (but HDMI sucks by definition; they think you bought the cable and monitor to pirate movies and not to do some work).
However the GUI actually works and if you spend a week on windows 10+ you'll remember why people buy Mac OS.
Personally I have a Mac for stuff that requires a GUI and a headless linux box that I ssh into. And I switched to Macs from ... Linux on the desktop.
Edit: docker is shit because they just install a Linux VM and run their Linux stuff in there. Same on Windows I guess.
Windows 10 (can’t speak for 11…I haven’t upgraded) with a lot of the animations etc turned off is a far superior developer experience than macOS.
macOS is just so clunky. It tried to be so smooth all the time but just ends up being annoying.
I just set up a windows box for some development last week. Spent 2-3 hours per day on it (on work not setup).
So far I haven't managed to turn off the firewall scare popups, I did manage to remove that crap in the task bar that pops up the weather and selected news covering half the screen if you hover in the wrong place, I may or may not have turned off the OneDrive upsell, and I also got a full screen message to upgrade to 11 for free when booting once.
Great user experience overall. And don't tell me I can spend another week to turn those off, is Microsoft paying for that wasted time?
Let's not pretend that Mac OS is immune to a bit of the old upsell too. Core OS features that only work when paired with an iPad/iPhone. iCloud is actively difficult not to use. The sidebar with the Apple shares and news apps.
> iCloud is actively difficult not to use.
Agreed, but if you turn it off it stays off, it doesn't pop up something every day.
> The sidebar with the Apple shares and news apps.
I kinda know it's there, but I can't remember when I last activated it by mistake *. But then I turn off all notifications except app icon badges so I don't use it at all otherwise.
> Core OS that features only work when paired with iPad/iPhone.
Which ones? Doing phone calls and texts from your laptop? I suppose that requires control of the OS on both sides to work well. I don't know what's available on the Android side.
If there are other features that work when paired with apple mobiles and don't work with Android, I don't know about them.
* Last time I've seen that bar I think it was my cat's butt on the F keys :) She taught me a lot of keyboard shortcuts.
> Let's not pretend that Mac OS is immune to a bit of the old upsell too.
Of course but Microsoft is way more obnoxious about it. In macOS (and iOS) the upsell is a small icon in your system settings. On Windows it's big notifications when you start up your devices. Links in your start menu to third party apps. Notifications when you install and run third-party browsers.
Most mac apps work without an Apple ID. On Windows you can't even use the built in video editor (Clipchamp) without logging in to a Microsoft account.
Microsoft just sucks at UX. On Windows 11 a lot of people now have three versions of Microsoft teams installed. Teams Classic, Teams New, and Teams personal edition. And the search makes sure to avoid showing you the one you use most as the default result.
Don't get me started on how many nag screens and cookie warnings you have to go through when you start Microsoft Edge. I use both Windows and macOS regularly (macOS is my daily driver right now) and Windows has the way worse user experience.
As of recently, Docker on MacOS has improved and AFAICT the performance penalty is gone.
Is the problematic corporate Mac an M-series Mac or Intel?
M-series have been great in my experience. I did used to get random full system crashes on Intel Macs which haven't happened in a few years on M1/M2.
I mean the performance penalty is just inherent to how Docker on Mac works. Instead of being a container like on Linux, it’s a virtual machine, which will necessarily be slower.
The difference is absolutely marginal. The main slowdown sources are mounting huge volumes from the host, which definitely works better on Linux, and emulating x86_64 if your container does not have an arm64 build. But if you don't need that, I'd argue that M1 performance will yield faster containers compared to the average Intel laptop.
Docker Desktop still sucks at least on my corp mac.
I'd recommend you try OrbStack.
And which commercial OS do you think is better? Windows?
https://stackoverflow.com/questions/66408996/python-not-foun...
> using a third party second display
Has always worked fine for me
> Docker sucks
This is Apple’s fault how? It also sucks on Windows.
> posix compatibility is technically there but isn’t really useful
What does this mean exactly? Can you find an example where it’s not useful? In my experience most of the command line applications I would want on Linux are easily installable via Brew and I can choose all the same shell environments as Linux/Unix.
> The thing randomly loses network and only rebooting fixes it.
On your machine. Not my experience with any Mac I’ve owned. That isn’t expected or common behavior.
> I reboot my corporate Mac more often
I’m going to guess this is because your IT department sucks. I never reboot except for OS updates.
It's not a psyop, people are just having different experiences to you. I've had endless problems on Windows and Linux machines, and never had a serious or even annoying issue on a mac
Alas, the corporate bloat ruins many a person's experience of macOS.
Completely agree. I have been using an MBP for almost three years now at work (after using Windows machines for a couple of decades), and I can see how laughable many of the design decisions in macOS are (though I highly doubt they were deliberate 'decisions' at all), and I still fail to understand why people use them over Windows or Linux (sure, great hardware, mostly). It's probably fine as a consumer device, but for developers/power users the UI/UX is just bad. Finder, the built-in file manager, is an abomination of a piece of software. It's like someone paid them to deliberately write a bad-quality application.
Finder has always been like that. There have been some updates, but the core features of the UI have barely changed since 10.1.
I'm finding this with software everywhere. Products keep doing the same old stupid shit they did when they were first released. "Refinements" are poorly-designed cruft.
Is there anyone in charge of the OS X experience? There seems to be a lot of résumé-driven development - features that can be illustrated with smiling people in a video but don't really work all that well - and not so much interest in the core UX.
I still find it better than Windows, but the gap between what it could be and what it is keeps growing.
What do you not like about finder?
Enter to rename rather than open is one of the dumbest UX decisions ever made in a file manager. And then doubling down by not allowing any key remaps at all so you're stuck with it.
Oh, then you were probably not pissed enough to search more thoroughly. :) Fortunately, there's a godsend man who wrote a utility called 'PresButan' - https://briankendall.net/presButan/index.htm . Doesn't always auto launch with the OS start for some reason, but works great when it's running and fixes this madness.
Oh ... Where to begin. I can write an essay on it, but just to name a few:
1. There's a button on my M3 Mac keyboard that says 'delete'. It deletes stuff everywhere else, but welcome to Finder, this simple button doesn't delete a file or a folder. They thought giving it a two/three keys combination was a better idea.
2. Similarly, they thought you rename file/folders more often in a day than you open them. Why else would they make you press two keys to open one, and the most common single button in the world to open files (Enter/return) to rename one instead?
3. No 'Cut' (I know the alternatives). One might find it surprising but there are fans that defend even this move - they say it's because this is more "intuitive". You only copy everything first and only at the time of pasting you decide whether you want to move it or copy it. I say, if that's really the case, why does every other app and Editor (including the ones made by Apple) have a Cut option? Why don't we always follow this more intuitive method of "copying" first and then pressing the Option button while pasting. Let's remove Cut from everything and see how intuitive people find it.
4. By default, the Finder doesn't even tell you where you are. That's a basic requirement from a File Manager. Sure, fiddle with the settings and at some place you'll find an option to kind of enable that.
5. No option to quickly create a text/other file in a given folder. If you've struggled enough and enabled the view where you're able to see where you are at the moment, there's a _chance_ you'd also see that from that view you can actually go to Terminal directly in that folder. Go there, and type `touch <filename>` to create a file in that folder.
6. You got a full path to go to somewhere on the disk. You quickly open Finder. Oh, the default view doesn't even have a place to paste it and hit Enter. Who could have thought to hide it? Same problem with the native 'File Open' dialog that's used by all the other apps on the system. Even if you have the full file path, unless you go to settings you won't find a way to go to that file directly.
7. No easy (if at all) way to persistently map a network drive that automatically remaps when the network drive is available. You have to keep connecting to the SMB server again and again.
8. Side bar folder shortcuts get removed when the folder is deleted and recreated for any reason. You have to recreate them. Not sure who made all these decisions or if they were even thought about.
9. No straight way to even 'Refresh' the files in a folder. Try going out and in, closing and reopening Finder and just 'hope' that it will update and show the newly created files or changed file properties outside. Many times it just doesn't.
10. 'Get Info' allows you to also 'Set' (a lot of) Info. This is UX 101. They could have just named it `Properties` instead.
11. Hell, you can't even maximize this app window by double clicking on the Title bar, unlike for example another Apple made app 'App Store'. No consistency.
12. In List view there's no padding, I can't even find a place where I can right click and paste a previously copied file in the 'current folder', without it hitting a subfolder and pasting the files into that instead (assuming the folder has many folders inside). I'm surprised no one found it in internal user testing.
These are just off top of my head, I'm sure I can find more if I spend some time. There might be involved solutions to these, but there's no way we can call this an 'intuitive' interface. And this is just one application in the whole Operation System.
> There's a button on my M3 Mac keyboard that says 'delete'. It deletes stuff everywhere else, but welcome to Finder, this simple button doesn't delete a file or a folder. They thought giving it a two/three keys combination was a better idea.
It would be pretty darn annoying if an accidental key press could just delete a file!
> No 'Cut' (I know the alternatives). One might find it surprising but there are fans that defend even this move - they say it's because this is more "intuitive". You only copy everything first and only at the time of pasting you decide whether you want to move it or copy it. I say, if that's really the case, why does every other app and Editor (including the ones made by Apple) have a Cut option? Why don't we always follow this more intuitive method of "copying" first and then pressing the Option button while pasting. Let's remove Cut from everything and see how intuitive people find it.
The problem is, where does a file go in between the time you cut and you paste? If you accidentally copy something else to your clipboard, do you lose the entire file? Does the file appear in the Trash or is it deleted permanently?
You could argue that the same problem exists for non-file content (like text) which Apple allows you to cut. I think files and folders are a bigger deal, though. There's a practical limit to how much you can highlight. A folder might contain your entire life's work. (Yes, hopefully you have backups, but better to not reach that point.)
> By default, the Finder doesn't even tell you where you are. That's a basic requirement from a File Manager. Sure, fiddle with the settings and at some place you'll find an option to kind of enable that.
`View` → `Show Path Bar`? I agree it should be enabled by default but it's so easy to change! You can also right click the folder icon at the top of the window, but Apple made this much more difficult beginning in macOS 11.
> You got a full path to go to somewhere on the disk.
I don't think non-developers ever end up in this situation. Those who do can use `Go` → `Go to Folder...`. I realize it's not in the toolbar but it's right in the menu bar.
> In List view there's no padding, I can't even find a place where I can right click and paste a previously copied file in the 'current folder', without it hitting a subfolder and pasting the files into that instead (assuming the folder has many folders inside).
Click the gear in the toolbar → `paste`.
---
I think your other points are valid, and I think the Mac's UX has declined a ton ever since OS X 10.9 in 2013. But I feel quite strongly the above complaints are merely different from other operating systems (and thus what you are used to), not actually worse.
> It would be pretty darn annoying if an accidental key press could just delete a file! : - Isn't the 'Bin' made exactly for that purpose? And also, just like any other accidental delete, Undo is always there. Very easy!
> The problem is, where does a file go in between the time you cut and you paste? : - It doesn't have to go anywhere; it's just "marked for" cut, just like a file marked for copying doesn't go anywhere until you paste it. All other operating systems have got this exactly right for eons. If you accidentally copy something else, that mark is removed, and the file is still happily sitting where it was. (Windows even shows it visually in File Explorer.) No safety hazard. I don't understand your other point - be it my entire life's work, it's always in one place or the other. It can't go to a third place, and in any case Undo and the Bin are always there. At least in Windows, moved files go back to their previous places on Undo. This is a much more intuitive default, making sure a recovery option is always there for exceptions.
> `View` → `Show Path Bar` : Yes, that's what I had done, but imagine having a proper address bar which both tells you where you are, and is editable so can be used to paste a new address to go to. That will be much more intuitive, and that's what other OSs have done.
> Click the gear in the toolbar → `paste`. : I don't see a gear icon, but sure, I can also do it using Cmd+V and also from the Edit Menu. But a 'Paste Item' is still there in the Context menu that is unusable in a lot of situations. Wasn't Steve Jobs really particular about pixel perfectness? I don't see that here.
I didn't even talk about my other problems with this OS especially when you use non-Apple hardware. I just want to point out that unlike what a lot of people believe, Windows (and even modern Linux) UI is much more intuitive and arguably causes less repetitive strain injury to our hands with more frequent OS operations made easier. The only thing I had found great in Mac's UI was Spotlight (though even that leaves a lot to be desired), but Windows now offers that too under their new PowerToys fleet of applications (less capable in some places but it should only get better) and I think people should give it a try.
I only started using macOS a decade ago. OSX was good in the past - smooth, clean, minimal-reboots and lightning fast. It has progressively become worse and worse over time in most aspects with some marginal improvements in other aspects.
The hardware definitely keeps getting better and yet the software keeps getting worse. sigh.
I mean they have even screwed up a nice app like iBooks. I used to use it for reading ePubs all the time, but now I dread opening up one. Lags like crazy. And so many crashes and reboots needed. Keep submitting crash reports but fairly certain that no-one ever reads them.
Yes, remarkably, the Windows desktop today needs fewer reboots than macOS. I can anecdotally confirm this with 2 Windows PCs, 3 Windows laptops and 3 MacBooks in the family.
Absolutely agree. We changed on my team to primarily use Macbooks. Originally just to make easier testing on Safari. Later, just because the hardware is pretty nice.
It has been a pretty frustrating experience at times. Most of the time it's _fine_, but the problems after updates, Docker bugs, certain libraries that we cannot install..
On the other hand, it was never perfect with Linux either. But that was expected. And I can say that macOS does not deserve the reputation it has.
Overall, kind of a mixed bag. There are some very nice aspects to both the hardware and software, but some that are jarring and make me think "this is not really meant for professional users". Like the atrocious window management (which admittedly can be fixed with a couple of free applications).
Effectively, software/hardware is hard and you will have issues in some way with all platforms.
Sounds like your Mac is broken. Hardware fault. I've been using multiple Macs daily with a third party 4k display, Docker, and stable network connection for years. No issues with any of that. Never needs rebooting aside from installing OS updates.
I have two emails in my work Inbox in this order.
One that says don't update macOS, to avoid breaking Java. Another that essentially says upgrade macOS to the latest version within x days or else the issue will be escalated.
It is going to be quite a hassle for IT teams across companies to deal with this problem.
Reminds me of this exchange from the bank robbery scene in Raising Arizona:
As Gale and Evelle bang in through the door. Evelle holds a shotgun; Gale holds a shotgun in one hand and Nathan Jr. in his car seat in the other.

GALE: All right you hayseeds, it's a stick-up! Everbody freeze! Everbody down on the ground!

Everyone freezes, staring at Gale and Evelle. An Old Hayseed with his hands in the air speaks up:

HAYSEED: Well which is it young fella? You want I should freeze or get down on the ground? Mean to say, iffen I freeze, I can't rightly drop. And iffen I drop, I'm a gonna be in motion. Ya see -

GALE: SHUTUP!

Promptly:

HAYSEED: Yessir.

GALE: Everone down on the ground!

EVELLE: Y'all can just forget that part about freezin'.

GALE: That is until they get down there.

EVELLE: Y'all hear that?
> The problem does not affect most typical Mac users, as Java was deprecated for the Mac back in 2012.
Haha, this article is quite something :D
The Java applet plugin was removed from the Safari browser. That is unrelated to Java apps running on the desktop.
> As a normal part of the just-in-time compile and execute cycle, processes running on macOS may access memory in protected memory regions. Prior to the macOS 14.4 update, in certain circumstances, the macOS kernel would respond to these protected memory accesses by sending a signal, SIGBUS or SIGSEGV, to the process.
> With macOS 14.4, when a thread is operating in the write mode, if a memory access to a protected memory region is attempted, macOS will send the signal SIGKILL instead.
What is bizarre to me is that Oracle relied on receiving SIGSEGV as normal mode of operation. That should have been a hint where things are going, no?
That's actually a pretty normal way to do things. It's an optimization for JITting: let the CPU hardware do the heavy lifting instead of putting conditional jumps everywhere.
It's useful for other things as well. I've used SIGSEGV to emulate hardware interrupts. Normal execution wouldn't trap, and there's no need for tests + branches (so normally no slowdown), but when an interrupt occurs, a specific, often-accessed page is marked unreadable.
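For what it's worth, here is a minimal, self-contained sketch of that pattern as I understand it (hypothetical example, not the parent's actual code; mprotect is not formally async-signal-safe, but for a synchronously delivered SIGSEGV it works in practice on common Linux/BSD systems):

    #include <signal.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static char *hot_page;               /* frequently accessed page            */
    static volatile sig_atomic_t ticks;  /* the "interrupt handler" just counts */
    static size_t page_size;

    static void on_fault(int sig, siginfo_t *info, void *ctx) {
        (void)ctx;
        if ((char *)info->si_addr < hot_page ||
            (char *)info->si_addr >= hot_page + page_size) {
            signal(sig, SIG_DFL);        /* not our page: let it crash normally */
            return;
        }
        ticks++;                                                /* "interrupt" work */
        mprotect(hot_page, page_size, PROT_READ | PROT_WRITE);  /* re-arm access    */
    }

    int main(void) {
        page_size = (size_t)getpagesize();
        hot_page = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        struct sigaction sa = {0};
        sa.sa_sigaction = on_fault;
        sa.sa_flags = SA_SIGINFO;
        sigaction(SIGSEGV, &sa, NULL);

        for (int i = 0; i < 1000000; i++) {
            if (i == 500000)                     /* pretend an interrupt arrives: */
                mprotect(hot_page, page_size, PROT_NONE);     /* revoke access    */
            hot_page[0]++;                       /* hot path: no test, no branch  */
        }
        printf("interrupts handled: %d\n", (int)ticks);
        return 0;
    }

The hot path stays a plain memory access; only when someone revokes the page does the fault handler run the "interrupt" work and restore access.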
Which other JITs behave like this? AFAIK neither V8 nor Spidermonkey nor LuaJIT rely on segfaults as a normal part of their operation?
Android ART uses this mode of operation too. There's absolutely nothing wrong with relying on SIGSEGV and other synchronous signals in this manner and POSIX should make it easier and safer instead of trying to pretend signals are bad and useless.
Personally I'd expect this would affect the GC more than the JIT. But I'm not surprised that the JVM uses every trick for speed.
That you can doesn't necessarily mean that you should.
It's documented and part of the interface for POSIX:
> Write attempts to memory that was mapped without write access, or any access to memory mapped PROT_NONE, shall result in a SIGSEGV signal.
>
> References to unmapped addresses shall result in a SIGSEGV signal.
How a SIGSEGV can be handled by the program so that it continues execution normally needs some OS-specific code. For Linux there's also userfaultfd to suit this need better.
> How a SIGSEGV can be handled by the program so that it continues execution normally needs some OS-specific code
A JVM's use of SIGSEGV might include platform-dependent details for recovery. But for simple application usages (e.g. eliding inlined bounds checks in a performance critical loop operating on an array) longjmp can suffice for recovery. POSIX very carefully defines async-safety and longjmp to permit jumping out of a signal handler and resuming normal execution, provided certain constraints are met, such as that the signal did not interrupt a non-async-signal-safe function.
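A minimal sketch of that recovery pattern (a hypothetical guard-page example, not the JVM's actual code): the hot loop has no bounds check; walking off the end of the buffer hits a PROT_NONE guard page, and the SIGSEGV handler siglongjmps back to a recovery point. Passing 1 to sigsetjmp matters, because siglongjmp then also restores the signal mask that the kernel blocked on handler entry.

    #include <setjmp.h>
    #include <signal.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static sigjmp_buf recover;

    static void on_segv(int sig) {
        (void)sig;
        siglongjmp(recover, 1);          /* jump out of the handler */
    }

    int main(void) {
        size_t page = (size_t)getpagesize();
        /* one readable/writable page followed by an inaccessible guard page */
        char *buf = mmap(NULL, 2 * page, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        mprotect(buf + page, page, PROT_NONE);

        struct sigaction sa = {0};
        sa.sa_handler = on_segv;
        sigaction(SIGSEGV, &sa, NULL);

        volatile long sum = 0;           /* volatile: survives the siglongjmp */
        if (sigsetjmp(recover, 1) == 0) {
            for (size_t i = 0; ; i++)    /* no bounds check in the hot loop */
                sum += buf[i];           /* faults on the guard page        */
        }
        printf("recovered after the guard-page fault, sum = %ld\n", sum);
        return 0;
    }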
> ...such as that the signal did not interrupt a non-async-signal-safe function.
So you have to disable signals prior to doing anything "non-async-signal-safe" and re-enable them thereafter? That's a pretty big "but"...
> What is bizarre to me is that Oracle relied on receiving SIGSEGV as normal mode of operation. That should have been a hint where things are going, no?
Not bizarre at all, this is how the runtime has always operated, as anyone who's ever attached a debugger to a Java process knows. The SIGSEGV handler is also responsible for handling NullPointerExceptions IIRC.
Correct:
> ... the JVM can intercept the resulting SIGSEGV ("Signal: Segmentation Fault"), look at the return address for that signal, and figure out where that access was made in the generated code. Once it figures that bit out, it can then know where to dispatch the control to handle this case — in most cases, throwing NullPointerException or branching somewhere.
https://shipilev.net/jvm/anatomy-quarks/25-implicit-null-che...
>> As a normal part of the just-in-time compile and execute cycle
This means a workaround is running java with -Djava.compiler=NONE, no?
I was thinking more about -Xint, or running in Docker, or an x86 JVM, but my guess is that somebody already tested it ;-) Another thing is that one of the developers on my team who is on an M1 and 14.4 is able to run a Java app, so...
A better choice would be -Xrs which keeps optimizations enabled, but disables use of SEGV.
This disables use of all signal handlers, which means Java apps will also e.g. fail to quit cleanly in response to issuing SIGQUIT, or hitting ^C at the terminal. Better than "no workaround whatsoever" but far from ideal!
I think it is used to avoid doing null checks on every pointer access.
Despite what the other commenters are saying, it is bizarre.
1. There is very little you can safely do in a signal handler. For a threaded application, that pretty much boils entirely down to setting a bit and leaving it at that. If they did anything more, the behavior is undefined.
2. The memory state of a program receiving a SIGSEGV is often undefined/garbage, and attempting to execute further at this point is at best unsafe, at worst trampling on state further, continuing execution in a broken state and destroying all evidence that would be useful for debugging - whereas a coredump preserves the state at the time the issue occurs.
There are cases where you need to catch SIGBUS, such as if an anonymous file has been truncated after you mmap'ed it.
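A hedged, Linux-flavored sketch of that case (using a hypothetical temp file rather than an anonymous one; error checking omitted): the file shrinks after being mapped, and the next access raises SIGBUS, which the handler turns into a recoverable condition.

    #include <fcntl.h>
    #include <setjmp.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static sigjmp_buf recover;

    static void on_bus(int sig) { (void)sig; siglongjmp(recover, 1); }

    int main(void) {
        char path[] = "/tmp/mapdemo-XXXXXX";
        int fd = mkstemp(path);
        size_t page = (size_t)getpagesize();
        ftruncate(fd, (off_t)page);                  /* file is one page long */
        char *map = mmap(NULL, page, PROT_READ, MAP_SHARED, fd, 0);

        struct sigaction sa = {0};
        sa.sa_handler = on_bus;
        sigaction(SIGBUS, &sa, NULL);

        ftruncate(fd, 0);                            /* file shrinks under us */
        if (sigsetjmp(recover, 1) == 0) {
            volatile char c = map[0];                /* raises SIGBUS here    */
            (void)c;
            puts("no SIGBUS (unexpected)");
        } else {
            puts("caught SIGBUS after the backing file was truncated");
        }
        unlink(path);
        return 0;
    }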
The signal comes from a safe fetch, which is just a read that allows ignoring the fault as if it never happened. Such a signal is delivered synchronously, so the usual restrictions for asynchronous signal handlers do not apply.
The code in question takes into account that the value read might be garbage. See the big comment here: https://github.com/openjdk/jdk/commit/29397d29baac3b29083b1b...
On current CPUs and operating systems, this is not an optimization, so the code was removed earlier this year: https://bugs.openjdk.org/browse/JDK-8320317
The "safe fetch" code relies on a signal handler (either here https://github.com/openjdk/jdk/blob/48717d63cc58f693f0917e61... or here https://github.com/openjdk/jdk/blob/3c70f26b2f3fa9bc143e2506...), which is considered asynchronous delivery (i.e., delivered mid-execution, see `man 7 signal`) - which is why the `async-signal-safe` manpage simply states that it is functions that can safely be called within a signal handler.
This is opposed to calling `sigwait` or similar to actively suspend and wait for a signal, which is not possible to do here.
Granted, it may be that the stars align and their implementation works in practice, but that does not make it any less bizarre.
It's not "bizarre" at all. It's a direct translation of hardware CPU traps to userspace API. That's what signal handers are: virtualized interrupts! There's nothing wrong with using signals to achieve performance levels otherwise not possible.
This did not improve performance, it was just an unnecessary hack. The authors agree, as they removed it realizing it was not an optimization.
UNIX signals are in no way or form direct translations of hardware CPU traps. The kernel handles hardware traps, which may or may not lead to UNIX signals. Heck, with userfaultfd, a different userspace process could be handling, or injecting, the fault! Not to mention VMs, where the guest userspace is very far away from any real hardware traps.
There are basically two classes of UNIX signals: signals that indicate that you might need to take some action (SIGTERM, SIGALRM, SIGUSR1, ...), and signals that indicate that your process did something illegal (SIGILL, SIGSEGV, SIGFPE, ...). There is a very, very limited number of cases where handling these errors makes sense, and trying to be "clever" to make (faulty and ill-advised) performance optimizations is not one of them.
Please, knock it off with the value judgements. "Ill-advised" according to whom? You? Why should your opinion prevail?
> This did not improve performance, it was just an unnecessary hack
Well, the Android VM certainly uses a signal mechanism for safepoints and stack overflow checking, for a reason (https://android.googlesource.com/platform/art/+/master/runti...) (both latency and code size), so don't sit there and tell me that the VM running the world's most popular OS is pessimizing itself pointlessly.
> The kernel handles hardware traps, which may or may not lead to UNIX signals.
Not all traps result in signals, and not all signals are traps. Nevertheless, the POSIX signals API is the means through which Unix OSes provide user programs the ability to interact with CPU traps. (Windows does the same thing, morally, with SEH and vectored exception handling --- https://learn.microsoft.com/en-us/windows/win32/debug/vector...). Any decent OS should provide applications with the tools they need to make full use of the underlying hardware. All these anti-signals people are just arguing that programs be bigger and slower, because they can't make full use of all the hardware features of the system, out of, ultimately, an aesthetic objection to signal handling.
> Please, knock it off with the value judgements. "Ill-advised" according to whom? You? Why should your opinion prevail?
Ill-advised as per the authors' decision to remove said hack, as it brought none of the intended benefits. Or are you suggesting that your opinion is more valuable than that of the authors whose code we're discussing?
Be careful with fallacies suggesting only one side of an argument is based on opinions. :)
The actual text from https://bugs.java.com/bugdatabase/view_bug?bug_id=8327860 says this:
> We have been working on a patch that switches the jit protection mode to EXEC around these potential faulting memory accesses.
Yes, they're changing one aspect of signal handler use to work around this problem. They're not stopping the use of signal handlers in general. Hotspot continues to use signals for efficiency in general. See https://github.com/openjdk/jdk/blob/9059727df135dc90311bd476...
> Be careful with fallacies suggesting only one side of an argument is based on opinions. :)
The wonderful thing about choosing not to care about facts is having whatever opinions you want.
> Yes, they're changing one aspect of signal handler use to work around this problem. They're not stopping the use of signal handlers in general. Hotspot continues to use signals for efficiency in general. See https://github.com/openjdk/jdk/blob/9059727df135dc90311bd476...
This whole thread is about SIGSEGV, and specifically their SIGSEGV handling. However, catching normal signals is not about efficiency.
Some of their exception handling is still odd: there is no reason for a program that receives SIGILL to ever attempt continuing. But other parts are fine, like catching SIGFPE to just forward an exception to the calling code.
(Sure, you could construct an argument to say that this is for efficiency if you considered the alternative to be implementing floating point in software so that all exceptions exist in user-space, but hardware floating point is the norm and such alternative would be wholly unreasonable.)
> The wonderful thing about choosing not to care about facts is having whatever opinions you want.
I appreciate the irony of you making such a statement, proudly thinking that your opinion equals fact, and that any other opinion therefore does not.
This discussion is nothing but subjective opinion vs. subjective opinion. Facts are (hopefully, as I can only speak for myself) inputs to both our opinions, but no opinion about "good" or "bad", "nasty" or not can ever be objective. Objective code quality does not exist.
But this also means that signal handlers are running in what's effectively a separate thread of execution - even in otherwise single-threaded code! So the things you're allowed to do safely in your signal handler are very limited, they boil down to atomically tweaking some lightweight data structure (or even just setting a flag) that the main code will look at later and behave accordingly.
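The classic version of that pattern, as a minimal sketch (hypothetical SIGTERM example): the handler only writes a volatile sig_atomic_t, and the main loop acts on it later, outside signal context.

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static volatile sig_atomic_t got_term = 0;

    static void on_term(int sig) { (void)sig; got_term = 1; }   /* only set a flag */

    int main(void) {
        struct sigaction sa = {0};
        sa.sa_handler = on_term;
        sigaction(SIGTERM, &sa, NULL);

        while (!got_term) {
            /* ... normal, unrestricted work happens here ... */
            sleep(1);
        }
        puts("got SIGTERM, shutting down cleanly");   /* acted on outside the handler */
        return 0;
    }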
While it's true that async-signal-safe programming bears some similarities to multi-threaded programming, an interrupt really isn't the same as a thread. Also, there's a lot more you can do in a signal handler than just set a flag: you can longjmp or even make calls to regular functions.
For example, one kind of program organization that used to be more common in Unix but that remains legal involves keeping signals masked all the time except around certain blocking system calls, e.g. ppoll, that atomically unblock signals and wait. In such a program, a signal can arrive only inside the blocking system call and so the handler can call regular functions without the usual strictures of asynchronous signal safety. (Consider responding to SIGWINCH, which tells you about changing terminal size.)
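A rough sketch of that organization, assuming Linux glibc for ppoll (hypothetical SIGWINCH example): the signal stays blocked everywhere except inside ppoll, which atomically swaps in an empty mask while it sleeps, so the handler can only ever run there and is free to call ordinary code.

    #define _GNU_SOURCE              /* for ppoll on glibc */
    #include <poll.h>
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static void on_winch(int sig) {
        (void)sig;
        /* Only safe because SIGWINCH can never interrupt anything but ppoll(). */
        printf("terminal resized\n");
    }

    int main(void) {
        sigset_t blocked, empty;
        sigemptyset(&empty);
        sigemptyset(&blocked);
        sigaddset(&blocked, SIGWINCH);
        sigprocmask(SIG_BLOCK, &blocked, NULL);   /* blocked during normal code */

        struct sigaction sa = {0};
        sa.sa_handler = on_winch;
        sigaction(SIGWINCH, &sa, NULL);

        struct pollfd pfd = { .fd = STDIN_FILENO, .events = POLLIN };
        for (;;) {
            /* the signal becomes deliverable only while we sleep in here */
            int n = ppoll(&pfd, 1, NULL, &empty);
            if (n > 0 && (pfd.revents & POLLIN)) {
                char buf[256];
                if (read(STDIN_FILENO, buf, sizeof buf) <= 0)
                    break;
            }
        }
        return 0;
    }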
Synchronous signals are also special in that they're, well, synchronous. That means that in the signal handler you can examine the target memory address or instruction pointer and take action specific to a given spot in your program --- e.g. longjmp to an error handler.
All of this is useful, safe, and legal under POSIX. The main problems with POSIX signals are that, 1) as this thread underscores, most people don't understand them, and 2) signal handlers are process global and hard to share. (Consider if you're running two VMs in a process and each wants to use a SIGSEGV GC safe point trick.)
(userfaultfd, sadly, requires more system calls than a synchronous signal handler to handle anomalous memory access.)
We should be enhancing POSIX signals to make them easier to share, not casting aspersions on them.
How is this pattern any different from "keep interrupts disabled all the time except when the program is ready to yield()"? I'm not really seeing the inherent difference w/ the well-known pitfalls of multi-threaded programming. The defining characteristics of signals is that, like interrupts and multi-threaded execution, they can pre-empt your program at any time. (Synchronous signals as you describe them are indeed different, they look more like a "recoverable exceptions" system.)
> keep interrupts disabled all the time except
It's not. So what?
> The defining characteristics of signals is that, like interrupts and multi-threaded execution
No, they can't. You control when they are masked and unmasked. Certain signals are delivered in response only to certain actions (e.g. floating point errors when enabled, or memory access failures). It's not the chaos you think.
> Also, there's a lot more you can do in a signal handler than just set a flag: you can longjmp or even make calls to regular functions.
... As long as the called functions are fully async-signal-safe/reentrant. It used to be even more sensitive, in that not all register state was correctly saved/restored on Linux.
(On the fabled plan9, where signals are replaced with arbitrary-text "notes", the issue is even bigger as floating point registers are not saved/restored)
> We should be enhancing POSIX signals to make them easier to share, not casting aspersions on them.
That's fair, but they have already been fixed with signalfd, which more modern processes use to deal with the few POSIX-isms that require it, but many of the original uses have been superseded.
E.g., few modern applications use `timer_create` and SIGALRM, as they can instead use timerfd or poll with a strategic timeout (although usually abstracted away by their event loop). SIGBUS is the POSIX way to deal with mmap truncation, but sealed memfds can be used to avoid the issue altogether. SIGUSR1/2 can be replaced by any IPC mechanism to give much more flexible control than a single "reload"/"toggle" signal.
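For instance, a minimal Linux sketch of the timerfd route - a periodic timer consumed from an ordinary poll loop, with no SIGALRM handler anywhere:

    #include <poll.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/timerfd.h>
    #include <unistd.h>

    int main(void) {
        int tfd = timerfd_create(CLOCK_MONOTONIC, 0);
        struct itimerspec its = {
            .it_value    = { .tv_sec = 1 },       /* first expiry after 1 s */
            .it_interval = { .tv_sec = 1 },       /* then every second      */
        };
        timerfd_settime(tfd, 0, &its, NULL);

        struct pollfd pfd = { .fd = tfd, .events = POLLIN };
        for (int i = 0; i < 3; i++) {
            poll(&pfd, 1, -1);                    /* no signal handler involved */
            uint64_t expirations;
            read(tfd, &expirations, sizeof expirations);
            printf("tick %d\n", i);
        }
        close(tfd);
        return 0;
    }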
(The POSIX ways can still be useful in certain simpler programs of course.)
> As long as the called functions are fully async-signal-safe/reentrant.
That is not accurate. If I have a single-threaded program sitting around a ppoll(2) loop and a signal can arrive only inside my main loop ppoll(), then I know a priori that my signal handler can't be interrupting non-reentrant code and so I can call whatever I want inside it.
> It used to be even more sensitive, in that not all register state was correctly saved/restored on Linux.
Don't confuse architectural flaws with implementation bugs. I'm reminded of an argument on emacs-devel in which someone argued that we couldn't call malloc(3) in a multi-threaded program because one beta version of glibc once had a thread safety bug in the malloc implementation.
> SIGBUS is a POSIX-way to deal with mmap truncation, but sealed memfds can be used to avoid the issue altogether.
Sealing a file descriptor doesn't physically seal a USB key into a USB port. :-) You also get SIGBUS on IO failures on mmaped files, and surprise removal is an IO failure. There's really no alternative to a synchronous signal here --- a regular file being mmap()ed isn't a userfaultfd.
IMHO it's a crying shame that Hotspot (last time I checked --- maybe it's fixed?) doesn't watch for SIGBUS on access to MappedByteBuffer and translate surprise USB device file removal into a nice clear VM-level exception.
Even for cases for which userfaultfd can work, a synchronous signal can be more efficient because it involves just one entry into the kernel. (sigreturn is optional.) I'd really hate to give up the conventional signal mechanism entirely, although of course I approve of things like signalfd and userfault that reduce the need for signal handling.
> That is not accurate. If I have a single-threaded program sitting around a ppoll(2) loop and a signal can arrive only inside my main loop ppoll(), then I know a priori that my signal handler can't be interrupting non-reentrant code and so I can call whatever I want inside it.
If you have a single-threaded program sitting in ppoll, you cannot ever receive a SIGSEGV during said ppoll unless you passed it bogus fds, timeout or sigmask pointers. A sleeping process cannot segfault.
If you register a SIGSEGV handler, you have zero guarantees that it will only fire at a particular time in your code as it is delivered on any pagefault from any code accidentally generating it. This is why the async-signal-safe rules apply.
If you try handling such a fault, what needs to be reentrant is the full call stack that led to the fault, and every call made from the handler. If, for example, the fault is generated from an event loop handler (in the case of a single-threaded example, most things run off event loop handlers), the signal handler must in turn not touch the event loop (no adding/removing/adjusting events, no dispatch) unless the event loop is fully reentrant.
> you cannot ever receive a SIGSEGV during said ppoll
Unless someone sends one with kill(2). Also, I was talking about signals in general, not SIGSEGV in particular. Who uses SIGSEGV as an async work dispatch mechanism?
> unless you passed it bogus fds, timeout or sigmask pointers. A sleeping process cannot segfault.
No, you get EFAULT. System calls don't work that way.
> If you register a SIGSEGV handler, you have zero guarantees that it will only fire at a particular time in your code as it is delivered on any pagefault from any code accidentally generating it
That's why you use sigaction(2) to register signal handlers --- your callback gets both a siginfo_t and a ucontext_t you can use to figure out whether your segfault came from a region of code you know about or some other random thing going wrong in your process. In principle, you can do the non-reentrant thing after having checked that the signal came from the context you expect, and you can do this checking in an async-signal-safe manner.
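Roughly, the shape of that check is something like the following sketch (a hypothetical guard region standing in for "code you know about"): the SA_SIGINFO handler looks at si_addr - and a JIT would also look at the faulting PC in the ucontext - and falls back to the default action for anything it doesn't recognise, so genuine crashes still dump core.

    #include <setjmp.h>
    #include <signal.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static char *guard;                  /* region where faults are expected */
    static size_t guard_len;
    static sigjmp_buf recover;

    static void on_segv(int sig, siginfo_t *info, void *ucontext) {
        (void)ucontext;                  /* a JIT would also inspect the PC here */
        char *addr = (char *)info->si_addr;
        if (addr >= guard && addr < guard + guard_len)
            siglongjmp(recover, 1);      /* expected fault: recover              */
        signal(sig, SIG_DFL);            /* unexpected: default action, re-fault */
    }

    int main(void) {
        guard_len = (size_t)getpagesize();
        guard = mmap(NULL, guard_len, PROT_NONE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        struct sigaction sa = {0};
        sa.sa_sigaction = on_segv;
        sa.sa_flags = SA_SIGINFO;
        sigaction(SIGSEGV, &sa, NULL);

        if (sigsetjmp(recover, 1) == 0) {
            volatile char c = guard[0];  /* expected fault: handled and recovered */
            (void)c;
        } else {
            puts("recovered from the expected fault");
        }
        return 0;
    }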
> If, for example, the fault is generated from an event loop handler (in case of a single-threaded example, most things run off event loop handlers), the signal handler must in turn not touch the event loop (no adding/removing/adjusting events, no dispatch) unless the event loop is fully reentrant.
Of course. That's a matter of program design.
> No, you get EFAULT. System calls don't work that way.
`ppoll(2)` is a C library function you call. On glibc, depending on which implementation you hit on your architecture, the `ppoll` call will segfault before the syscall if you call it with bogus timeout or sigmask pointers, as glibc is dereferencing them.
> That's why you use sigaction(2) to register signal handlers --- your callback gets both a siginfo_t and a ucontext_t you can use to figure out whether your segfault came from a region of code you know about or some other random thing going wrong in your process.
It is true that you could try to figure out whether the faulting address is within a range you thought belonged to a particular thing, but I don't see any reason or benefit. However, this is fragile: distinguishing between a page fault caused by corrupt program state and a page fault in a valid program depends on the program state (specifically, it means inspecting memory that may itself be corrupt), which in turn makes the outcome undefined. You won't get false negatives, but you will get false positives. You could also try to establish a list of all program counter values (PIE/PIC caveats) and check if the signal originated from near those, but... Ugh.
Now, any other signal I could level with you, but not SIGSEGV. Not until I see some code where it's truly justified: a real, tangible benefit that can only be obtained in this fashion, justifying all the gymnastics and (in my opinion) nastiness of trying to handle a signal that should never be handled in the first place (not even for crash dumps, they're always worse than a proper core).
> `ppoll(2)` is a C library function you call.
ppoll is a real system call on all Linux systems and has been for over a decade.
> It is true that you could try to figure out whether the faulting address is within a range you thought belonged to a particular thing,
Which programs do, reliably, all the time.
> You won't get false negatives, but will get false positives.
No, you won't. Once a program wanders into la-la land of memory corruption, all bets are off anyway.
>There is very little you can safely do in a signal handler
You can actually do pretty much anything you want, it's just the C library that uses a lot of global state and internal memory allocations, which messes things up. The core syscall API and any reentrant code you write yourself are not affected.
>The memory state that a program receiving a SIGSEGV in is often undefined/garbage
That may be true for arbitrary segfaults caused by bugs, but the JIT has 100% control over what instructions to emit, it is not restricted by ABIs or platform-specific issues, so there is no problem to use SEGV as a signaling mechanism.
You can "do pretty much anything you want" as long as you carefully avoid doing anything non thread-safe (even indirectly) in both the main app and the handler itself? How reassuring!
The HotSpot JVM is already multithreaded, they understand how to write thread-safe code (I'm not sure which JVM(s?) are reported to be impacted, HotSpot was the first I found a result for).
I'm on 14.4 and using a JetBrains IDE. So -this- is the reason my IDE randomly crashes. I'd been chalking it up to 14.4 but didn't have any specifics.
It's mostly fine, though. The crashes are rare, and since everything auto-saves, you're not really losing anything. It's just an "oh, okay." moment.
Obviously it'll be good when it's fixed, but on my personal list of impactful bugs, this doesn't crack the top 10.
Yeah, I've been hearing about that, too. And yeah, it's probably a nuisance. I'm wondering how this extends to running Java inside Docker... If you're a dev and you run a lot of Java code locally during development and testing, this would be a real nuisance...
Why does this affect only Java? It seems any jit should be affected, and surely people would notice.
Maybe other JIT environments do not rely on SIGSEGV.
I find this hard to accept. Doesn't Apple do pre-release testing of their updates? What does the release process look like?
News like this is the major reason why I apply updates only after a long waiting period, to see if anything blows up for others. Why do companies use their user base as testers?
Strangely, the issue wasn't present in pre-release builds. I agree though, Apple's internal testing before final release should have picked this up.
There is some sample C code to test with, and the issue is actually in the pre-releases. It’s just not in the first couple.
> News like this is the major reason why I apply updates only after a long waiting period, to see if anything blows up for others.
But then you are accepting that you are running an exploitable OS since you are lacking the latest security fixes. Not sure if that's an acceptable tradeoff.
Apple doesn't EOL the last OS version when the latest comes out. I think they mean they wait a few months to make sure all the issues have been worked out.
But that only works if you stick to old majors. At this point Sonoma is out for almost 6 months, so even if you waited a few months to upgrade to Sonoma you are out of luck now. You are either stuck on 14.3 without security fixes or you upgrade to 14.4.
What's the difference? Wait a month (or whatever) before upgrading to the next major or minor release.
My work laptop is stuck on 14.3 for a few weeks until they fix this issue. So what? Actual security risk is practically zero. Whereas if I update to 14.4 today the risk is that I can't do my job.
The difference, if I read gp correctly, is that an older major release would still get new security updates when necessary, but if you already are on the current major, which had been out without this problem for quite a while, you will only see security updates bundled with minor feature changes like the one that introduced the JVM incompatibility.
Not really an Apple-specific problem, it could hit anyone who supports multiple versions without opening that can of worms of allowing completely free mix&match of fixes and updates.
No difference to me. It's an OS update. My policy is to wait before installing any OS update on my work Mac, whether it's major or minor. Just wait at least a month. So far I have avoided all of these bullshit issues over several years of updates.
The difference is those who still waited on the major update ("almost 6 months") would still be able to jump on some 13.x.y security fix on short notice, without breaking their JVM, whereas people already on 14.x who can't work without JVM are cut off for the time being.
Still no difference. I would not update to 13.x.y until at least a month later. Defer all updates.
previous discussion: https://news.ycombinator.com/item?id=39726292
So I ranted about macOS a month ago here -- https://news.ycombinator.com/item?id=39369788 -- and in the meantime my Alacritty and iTerm2 started taking 2-3 seconds to cold start (granted, they get cached, so the cold start delay does not happen more than two or three times a day) and I am just left scratching my head and wondering WTF the macOS devs are thinking.
As other posters said: macOS might have had an edge over Windows and Linux before but that's no longer the case for a few years now. I'll definitely be looking for ways to use 5K display with my Linux laptop and will likely make a full transition to Linux in the next year or two.
Macs have amazing displays. So I'll use mine as thin clients I suppose. My eyes are happier with an Apple display so I'll use them for that alone.
Apple can still turn this around, but their bogus security claims that serve mostly to annoy devs are them shooting themselves in the foot and making themselves a very uncomfortable bed to sleep in just a few short years from now. Hope somebody at HQ understands that and is able to see the problem before too many people leave.
Maybe the Oracle blog post [1] would be a better link than the Apple Insider article, which says "The problem does not affect most typical Mac users, as Java was deprecated for the Mac back in 2012."
Yeah, but all developers who are working with Java or JVM-related languages, or using JVM-based tools like JetBrains IDEs, are affected. That's not "typical" but it's still many people.
I think they're confusing the Java plugin for websites with the normal Java runtime.
Exactly. We shouldn't promote such poor quality journalism.
Poor quality, yes, but journalism on blog.oracle.com?
I can't find that quote in your link. I can't find any mention of the word "deprecated" or "2012". Did you send the wrong link?
My quote is from the 5th paragraph of the posted article.
I suggest the Oracle blog as an alternative.
I thought it was clear, but I have replaced the "this" in my comment anyway.
Aha, sorry, I totally misunderstood the meaning of your comment.
In that case, I 100% agree with you, the Oracle article seems much better than the Apple Insider article.
So this will break IDEs and anything that uses the JVM natively on macOS. But if I’m reading the bug report right, should leave dockerized JVM services intact?
This is why most enterprise workplace tech teams don’t roll out any OS level updates immediately. Regardless of whether they are on windows or macOS. Also a good idea to disable automatic updates on all devices that you use daily.
Yeah, I was also wondering what happens to the JVM within Docker. I don't really know enough about how deep the virtualization goes... On the other hand, I'd find it surprising if CPU-level signal handling were emulated within macOS to match what Linux expects to happen...
Docker on macOS runs through a Linux VM. Native containerization on macOS is so badly supported that many container runtimes don't even try. E.g. Podman on macOS also runs through a Linux VM.
Does Apple consider this to be a serious issue? Does anyone know if Apple plans to release a fix soon, perhaps in version 14.4.1?
“The problem does not affect most typical Mac users, as Java was deprecated for the Mac back in 2012.”
This is misleading. What was deprecated was the browser Java plug-in distributed by Apple. That’s very different from “deprecating Java”.
Didn't they have their own JVM distribution back in the Sun days? The sarcastic take would be that on macOS, third-party software is deprecated, period.
Yes, Apple had their own Java runtime in the past, but this was discontinued. I think Mojave (10.14) was the first version without official Java support from Apple.
Has anyone here actually experienced the SIGKILL? M1 Pro Max on 14.4 for ten days now, using Eclipse & Tomcat & whatever Java all the time, and still waiting for it to happen...
And yes, I did just now. I had to work for 10 hours to let Eclipse crash...
Sonoma has been trash so far. I faced an issue where they changed the way linking is done in the new Xcode version, and that broke builds for Erlang. This is so bad that programs that run on versions of OTP prior to 25 don't work on the M1 Macs anymore. At least last time I checked. Yes, this affects Xcode primarily, but still, it makes one think what the hell is going on over there.
They basically bamboozled us with fancy wallpapers and gave us this immensely substandard software.
Has anyone designed a workaround yet? My thought is to (at least partially) avoid JIT compilation. This will of course greatly reduce performance (50 times slower?). But with GUI programs, such as Eclipse, it will hopefully hardly be noticeable.
This is affecting me on Minecraft Java Edition.
This is obviously a problem, but it does appear to be an intermittent one, and not easy to provoke for me. I upgraded last week, and have seen precisely one unexpected exit of a JVM process - and I think that was a memory analysis toolkit running out of memory - and I have been running a lot of stuff.
Good thing that the changelog for the 3gb update only mentions emoji and podcasts:
macOS Sonoma 14.4 introduces new emoji as well as other features, bug fixes and security updates for your Mac.
Emoji
• New mushroom, phoenix, lime, broken chain and shaking heads emoji are now available in emoji keyboard
• 18 people and body emoji support facing the opposite direction
This update also includes the following improvements and bug fixes:
• Podcasts: Episode text can be read in full, searched for a word or phrase, clicked to play from a specific point, and used with accessibility features such as Text Size, Increase Contrast and VoiceOver
• Safari: Favourites Bar adds an option to show only icons for websites
According to the bug tracker changing it from a WRITE to an EXEC avoids the SIGKILL issue.
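For anyone curious what "write mode" vs "exec mode" means in practice, here is a hedged sketch of the Apple Silicon JIT dance (assuming Apple's documented MAP_JIT + pthread_jit_write_protect_np API; with the hardened runtime enabled this also needs the allow-jit entitlement): the region is either writable or executable per thread, and you toggle between the two around code emission. Per the bug tracker quote upthread, the OpenJDK patch switches to exec mode around the potentially faulting accesses.

    #include <libkern/OSCacheControl.h>   /* sys_icache_invalidate */
    #include <pthread.h>
    #include <string.h>
    #include <sys/mman.h>

    typedef long (*jit_fn)(void);

    int main(void) {
        /* MAP_JIT region: either writable or executable for this thread */
        unsigned char *code = mmap(NULL, 0x4000,
                                   PROT_READ | PROT_WRITE | PROT_EXEC,
                                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_JIT, -1, 0);

        pthread_jit_write_protect_np(0);          /* write mode: emit code      */
        unsigned int insns[] = { 0xd2800540,      /* arm64: mov x0, #42         */
                                 0xd65f03c0 };    /*        ret                 */
        memcpy(code, insns, sizeof insns);
        pthread_jit_write_protect_np(1);          /* exec mode: run it          */
        sys_icache_invalidate(code, sizeof insns);

        jit_fn fn = (jit_fn)code;
        return (int)fn();                         /* exits with status 42       */
    }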
I'm not comfortable upgrading my macbook to test this; but, if you migrate your java build pipeline to use a docker container for the jdk, do you think it might not run into this problem?
Windows 10 IoT Enterprise LTSC doesn't have this problem.
Java? Who's still running Java on their PCs? Wild.
lol, Java has been broken for some time and I’m convinced Apple do this intentionally
I won't "upgrade" to Sonoma at all. I'm done with Apple shitting all over my apps and data.
Broken software might crash -- no news, move on
Imagine all the devs working on Macs, myself included. If I were to update and couldn't run Java, I couldn't work, so this is pretty serious.
Oh, no. Anyway...
"It just works" ... not
macOS is the worst OS on the planet. I lost all my data after macOS updated to 14.x from 13.x, because the laptop stopped starting after the update and Apple employees had to factory reset the entire system. And unlike any other laptop, where you can just remove the hard drive and save your data, this is not possible on Apple devices, because the HDD cannot be removed... Also, since 14.x has been on my laptop, it restarts EVERY SINGLE DAY. I also have a lot of other issues, but I will not write a book here. This was the last time I bought something from Apple.
Apple has a great backup tool called Time Machine that would have had you whistling a different tune if it were used before your system failure (which can happen with any system fwiw).