Apple announces Foundation Models and Containerization frameworks, etc
apple.com | 803 points by thm a day ago
There's a different thread if you want to wax about Fluid Glass etc [1], but there's some really interesting new improvements here for Apple Developers in Xcode 26.
The new foundation frameworks around generative language models look very Swift-y and nice for Apple developers. And it's local and on device. In the Platforms State of the Union they showed some really interesting sample apps using it to generate different itineraries in a travel app.
The other big thing is vibe-coding coming natively to Xcode through ChatGPT (and other) model integration. Some things that make this look like a nice quality-of-life improvement for Apple developers is the way that it tracks iterative changes with the model so you can rollback easily, and the way it gives context to your codebase. Seems to be a big improvement from the previous, very limited GPT integration with Xcode and the first time Apple Developers have a native version of some of the more popular vibe-coding tools.
Their 'drag a napkin sketch into Xcode and get a functional prototype' is pretty wild for someone who grew up writing [myObject retain] in Objective-C.
Are these completely ground-breaking features? I think it's more what Apple has historically done which is to not be first into a space, but to really nail the UX. At least, that's the promise – we'll have to see how these tools perform!
> And it's local and on device.
Does that explain why you don't have to worry about token usage? The models run locally?
> You don’t have to worry about the exact tokens that Foundation Models operates with, the API nicely abstracts that away for you [1]
I have the same question. Their Deep dive into the Foundation Models framework video is nice for seeing code using the new `FoundationModels` library but for a "deep dive", I would like to learn more about tokenization. Hopefully these details are eventually disclosed unless someone else here already knows?
[1] https://developer.apple.com/videos/play/wwdc2025/301/?time=1...
I guess I'd say "mu": from a dev perspective, you shouldn't care about tokens, ever. If your inference framework isn't abstracting that for you, your first task would be to patch it to do so.
To parent: yes, this is for local models, so insofar as worrying about tokens implies financial cost, yes.
Ish - it always depends how deep in the weeds you need to get. Tokenisation impacts performance, both speed and results, so details can be important.
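To make that concrete, here's a toy sketch (nothing to do with Apple's actual tokenizer, which isn't public): the same prompt yields very different token counts under different schemes, and generation time scales roughly with token count, so an abstraction that hides tokenization also hides real performance differences.

```python
# Toy comparison: token counts under two naive tokenization schemes.
# Purely illustrative; real tokenizers (BPE etc.) land somewhere between.

def word_tokenize(text):
    # One token per whitespace-separated word.
    return text.split()

def char_tokenize(text):
    # One token per character (worst case for context usage).
    return list(text)

prompt = "Plan a three day itinerary for Tokyo"
words = word_tokenize(prompt)
chars = char_tokenize(prompt)

# Rough latency model: on-device generation time scales with token count,
# so a 5x difference in tokens is a 5x difference in work.
print(len(words))
print(len(chars))
```

Same text, wildly different "cost" depending on how it's chopped up, which is why some devs still want visibility under the abstraction.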
I maintain a llama.cpp wrapper on everything from web to Android, and cannot quite wrap my mind around what more info you'd get from individual token IDs from the API, beyond what you'd get from wall-clock time and checking their vocab.
I don’t really see a need for token IDs alone, but you absolutely need per-token logprob vectors if you’re trying to do constrained decoding
Interesting point; my first reaction was "why do you need logprobs? We use constrained decoding for tool calls and don't need them"... which is actually false! Because we need to throw out the disallowed tokens' logprobs and then find the highest-logprob token that meets the constraints.
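A minimal sketch of that flow, with a hypothetical five-token vocabulary (real vocabularies have tens of thousands of entries): drop the logprobs of tokens the grammar disallows, renormalize the survivors, and pick the best remaining token.

```python
import math

# Hypothetical tiny vocabulary and the model's raw next-token logprobs.
vocab = ["{", "}", "\"name\"", ":", "hello"]
logprobs = [-0.1, -3.0, -1.5, -2.0, -0.7]

def constrained_pick(logprobs, allowed_ids):
    # Keep only tokens the grammar allows, then renormalize the survivors
    # so their probabilities sum to 1 (log-space: subtract the subset's
    # logsumexp). This is why you need the per-token logprob vector.
    subset = {i: logprobs[i] for i in allowed_ids}
    norm = math.log(sum(math.exp(lp) for lp in subset.values()))
    renorm = {i: lp - norm for i, lp in subset.items()}
    return max(renorm, key=renorm.get), renorm

# Say the JSON grammar allows only '{' or '"name"' next:
best, renorm = constrained_pick(logprobs, allowed_ids={0, 2})
print(vocab[best])  # "{" wins: highest logprob among the allowed tokens
```

Greedy selection shown for simplicity; for sampling you'd draw from the renormalized distribution instead of taking the max.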
I might be wrong, but I guess this will only work on iPhone 16 devices and the iPhone 15 Pro, which drastically limits your user base; you would still have to use an online API for most apps. I was hoping they'd provide a free AI API on their private cloud for other devices, even if also running small models.
If you start writing an app now, by the time it's polished enough to release, the iPhone 16 will already be a year-old phone, and there will be plenty of potential customers.
If your app is worthwhile, and gets popular in a few years, by that time iPhone 16 will be an old phone and a reasonable minimum target.
Skate to where the puck is going...
Developers could be adding a feature utilizing LLMs to their existing app that already has a large user base. This could be a matter of a few weeks from an idea to shipping the feature. While competitors use API calls to just "get things done", you are trying to figure out how to serve both iPhone 16 and older users, and potentially Android/web users if your product is also available elsewhere. I don't see how an iPhone 16-only feature helps anyone's product development, especially when the quality still remains to be seen.
Exactly; it can take at least a couple of years to get big/important apps to use iOS/macOS features. By then the iPhone 16 would be quite common.
If the new foundation models are on device, does that mean they’re limited to information they were trained on up to that point?
Or do they have the ability to reach out to the internet for up-to-the-moment information?
I hoped for a moment that "Containerization Framework" meant that macOS itself would be getting containers. Running Linux containers and VMs on macOS via virtualization is already pretty easy and has many good options. If you're willing to use proprietary applications to do this, OrbStack is the slickest, but Lima/Colima is fine, and Podman Desktop and Rancher Desktop work well, too.
The thing macOS really painfully lacks is not ergonomic ways to run Linux VMs, but actual, native containers-- macOS containers. And third parties can't really implement this well without Apple's cooperation. There have been some efforts to do this, but the most notable one is now defunct, judging by its busted/empty website[1] and deleted GitHub organization[2]. It required disabling SIP to work, back when it at least sort-of worked. There's one newer effort that seems to be alive, but it's also afflicted with significant limitations for want of macOS features[3].
That would be super useful and fill a real gap, meeting needs that third-party software can't. Instead, as wmf has noted elsewhere in these comments, it seems they've simply "Sherlock'd" OrbStack.
--
1: https://macoscontainers.org/
Hard same. I wonder if this does anything different to the existing projects that would mean one could use the WSL2 approach where containerd is running in the Linux micro-VM. A key component is the RPC framework - seems to be how orbstack's `macctl` command does it. I see mention of GRPC, sandboxes and containers in the binfmt_misc handling code, which is promising:
https://github.com/apple/containerization/blob/d1a8fae1aff6f...
> The thing macOS really painfully lacks is not ergonomic ways to run Linux VMs, but actual, native containers-- macOS containers
Linux container processes run on the host kernel with extra sandboxing. The container image is an easily sharable and runnable bundle.
macOS .app bundles are kind of like container images.
You can sign them to ensure they are not modified, and put them into the “registry” (App Store).
The Swift ABI ensures it will likely run against future macOS versions, like the Linux system APIs.
There is a sandbox system to restrict file and network access. Any started processes inherit the sandbox, like containers.
One thing missing is fine grained network rules though - I think the sandbox can just define “allow outbound/inbound”.
Obviously “.app”s are not exactly like container images , but they do cover many of the same features.
You're kind of right. But at the same time they are nowhere close. The beauty of Linux containerization is that processes can be wholly ignorant that they are not in fact running as root. The containers get, what appear to them, to be the whole OS to themselves.
You don't get that in macOS. It's more of a jail than a sandbox. For example, as an app you can't, as far as I know, shell out and install homebrew and then invoke homebrew and install, say, postgres, and run it, all without affecting the user's environment. I think that's what people mean when they say macOS lacks native containers.
What would these be useful for?
Providing isolated environments for CI machines and other build environments!
If the sandboxing features a native containerization system relied on were also exposed via public APIs, those could also potentially be leveraged by developer tools that want to have/use better sandboxing on macOS. Docker and BuildKit have native support for Windows containers, for instance. If they could also support macOS the same way, that would be cool for facilitating isolated macOS builds without full-fat VMs. Tools like Dagger could then support more reproducible build pipelines on macOS hosts.
It could also potentially provide better experiences for tools like devcontainers on macOS as well, since sharing portions of your filesystem to a VM is usually trickier and slower than just sharing those files with a container that runs under your same kernel.
For many of these use cases, Nix serves very well, giving "just enough" isolation for development tasks, but not too much. (I use devenv for this at work and at home.) But Nix implementations themselves could also benefit from this! Nix internally uses a sandbox to help ensure reproducible builds, but the implementation on macOS is quirky and incomplete compared to the one on Linux. (For reasons I've since forgotten, I keep it turned off on macOS.)
Clean build environments for CICD workflows, especially if you're building/deploying many separate projects and repos. Managing Macs as standalone build machines is still a huge headache in 2025.
What's wrong with Cirrus CLI and Tart built on Apple's Virtualization.framework?
Tart is great! This is probably the best thing available for now, though it runs into some limitations that Apple imposes for VMs. (Those limitations perhaps hint at why Apple hasn't implemented this; it seems they don't really want people to be able to rent out many slices of Macs.)
One clever and cool thing Tart actually does that sort of relates to this discussion is that it uses the OCI format for distributing OS images!
(It's also worth noting that Tart is proprietary. Some users might prefer something that's either open-source, built-in, or both.)
I might misunderstand the project, but I wish there was a secure way for me to execute github projects. Recently, the OS has provided some controls to limit access to files, etc. but I'd really like a "safe boot" version that doesn't allow the program to access the disk or network.
The firewall tools are too clunky (and IMHO unreliable).
Same thing containers/jails are useful for on Linux and *BSD, without needing to spin up an entirely separate kernel to run in a VM to handle it.
MacOS apps can already be sandboxed. In fact it's a requirement to publish them to the Mac App Store. I agree it'd be nice to see this extended to userland binaries though.
You can't really sandbox development dependencies in any meaningful way. I want to throw everything and the kitchen sink into one container per project, not install a specific version of Python, Node, Perl or what have you globally/namespaced/whatever. Currently there's no good solution to that problem, save perhaps for a VM.
uv doesn't provide strong isolation; a package you install using uv can attempt to delete random files in your home folder when you import it, for example.
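To make that concrete, a deliberately harmless demo (a stand-in for "delete random files in your home folder"): top-level code in any module runs with your full user privileges the moment it's imported, so a version/package manager alone is not a security boundary.

```python
import importlib
import os
import sys
import tempfile

# Write a module whose import-time code touches the filesystem --
# a stand-in for anything a malicious package could do on `import`.
workdir = tempfile.mkdtemp()
with open(os.path.join(workdir, "sneaky.py"), "w") as f:
    f.write(
        "import os, tempfile\n"
        "# Runs at import time, with the importing user's privileges:\n"
        "open(os.path.join(tempfile.gettempdir(), 'pwned.txt'), 'w').close()\n"
    )

sys.path.insert(0, workdir)
importlib.import_module("sneaky")  # merely importing runs the code above

marker = os.path.join(tempfile.gettempdir(), "pwned.txt")
print(os.path.exists(marker))  # True: the import had side effects
```

A container (or any real sandbox) would confine that write to the container's filesystem; an installer that only pins versions does nothing to stop it.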
People use containers server side in Linux land mostly... Some desktop apps (flatpak is basically a container runtime) but the real draw is server code.
Do you think people would be developing and/or distributing end user apps via macOS containers?
ie: You want to build a binary for macOS from your Linux machine. Right now, it is possible but you still need a macOS license and to go through hoops. If you were able to containerize macOS, then you create a container and then compile your program inside it.
No, that's not at all how that would work. You're not building a macOS binary natively under a Linux kernel.
Orchestrating macOS only software, like Xcode, and software that benefits from Environment integrity, like browsers.
It's not that macoscontainers is empty, it's that the site is https://darwin-containers.github.io
Read more about it here - https://github.com/darwin-containers
The developer is very responsive.
One of Apple's biggest value props to other platforms is environment integrity. This is why their containerization / automation story is worse than e.g. Android.
In case others are confused about the term "Foundation Models":
"Foundation Models" is an Apple product name for a framework that taps into a bunch of Apple's on-device AI models.
I'm still a little disappointed. It seems those models are only available for the iPhone 16 series and iPhone 15 Pro. According to Mixpanel that's only 25% of all iOS devices, and even less if taking into account iPadOS. You will still have to use some other online model if you want to cover all iOS 26 users, because I doubt Apple will approve your app if it only works on those Apple Intelligence devices.
Why should I bother then as a 3rd-party developer? Sure, it's nice not having API costs for 25% of users, but those models are very small (equivalent of Qwen2.5 4B or so) and their online models are supposedly equivalent to Llama Scout. Those models are already very cheap online, so why bother with a more complicated code base? Maybe in two years, once more iOS users replace their phones, but I'm unlikely to use this for iOS development in the next year.
This would be more interesting if all iOS 26 devices at least had access to their server models.
Uptake of iPhone 16+ devices will be much more than 25% by the time someone develops the next killer app using these tools, which will no doubt spur sales anyway.
App development could be as quickly as a few weeks. If the only "killer apps" we have seen in the past three years are the ChatGPT kind, I'm not holding my breath for a brand new "killer app" that runs only on iPhone 16+.
Okay, the AI stuff is cool, but that "Containerization framework" mention is kinda huge, right? I mean, native Linux container support on Mac could be a game-changer for my whole workflow, maybe even making Docker less of a headache.
FWIW, here are the repos for the CLI tool [1] and backend [2]. Looks like it is indeed VM-based container support (as opposed to WSLv1-style syscall translation or whatever):
Containerization provides APIs to:
[...]
- Create an optimized Linux kernel for fast boot times.
- Spawn lightweight virtual machines.
- Manage the runtime environment of virtual machines.
[1] https://github.com/apple/container
[2] https://github.com/apple/containerization

I'm kinda ignorant about the current state of Linux VMs, but my biggest gripe with VMs is that OS kernels kind of assume they have access to all the RAM the hardware has - unlike the reserve/commit scheme processes use for memory.
Is there a VM technology that can make Linux aware that it's running in a VM, and be able to hand back the memory it uses to the host OS?
Or maybe could Apple patch the kernel to do exactly this?
Running Docker in a VM always has been quite painful on Mac due to the excess amount of memory it uses, and Macs not really having a lot of RAM.
That's called memory ballooning and is supported by KVM on Linux. Proxmox, for example, can do that. It does need support on both the host and the guest.
It's still a problem for containers-in-VMs. You can in theory do something with either memory ballooning or (more modern) memory hotplugging, but the dance between the OS and the hypervisor takes a relatively long time to complete, and Linux just doesn't handle it well (eg. it inevitably places unmovable pages into newly reserved memory, meaning it can never be unplugged). We never found a good way to make applications running inside the VM able to transparently allocate memory. You can overprovision memory, and hypervisors won't actually allocate it on the host, and that's the best you can do, but this also has problems since Linux tends to allocate a bunch of fixed data structures proportional to the size of memory it thinks it has available.
> Is there a VM technology that can make Linux aware that it's running in a VM, and be able to hand back the memory it uses to the host OS?
Isn't this an issue of the hypervisor? The guest OS is just told it has X amount of memory available, whether this memory exists or not (hence why you can overallocate memory for VMs), whether the hypervisor will allocate the entire amount or just what the guest OS is actually using should depend on the hypervisor itself.
> or just what the guest OS is actually using should depend on the hypervisor itself.
How can the hypervisor know which memory the guest OS is actually using? It might have used some memory in the past and now no longer needs it, but from the POV of the hypervisor it might as well be used.
This is a communication problem between hypervisor and guest OS, because the hypervisor manages the physical memory but only the guest OS known how much memory should actually be used.
A generic VMM cannot, but these are specific VMMs, so they can likely load dedicated kernel-mode drivers into the well-known guest to get that information back out.
Just looked it up - and the answer is 'balloon drivers', which are special drivers loaded by the guest OS that can request and return unused pages to the host hypervisor.
Apparently Docker for Mac and Windows uses these, but in practice Docker containers tend to grow quite large in terms of memory, so not quite sure how well it works; it certainly overallocates compared to running Docker natively on a Linux host.
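A toy model of the balloon mechanism, with made-up numbers and no relation to any real hypervisor API: the host asks the guest's balloon driver to inflate, the driver pins guest pages so the guest kernel stops using them, and the host can reclaim the backing memory; deflating returns pages to the guest.

```python
# Toy balloon-driver model. Real implementations (e.g. virtio-balloon)
# work on pages inside the guest kernel; this just tracks megabyte counts.

class GuestVM:
    def __init__(self, total_mb):
        self.total_mb = total_mb
        self.ballooned_mb = 0   # memory currently handed back to the host

    def usable_mb(self):
        # What the guest kernel can actually use right now.
        return self.total_mb - self.ballooned_mb

    def inflate_balloon(self, mb):
        # Host is under pressure: the guest driver pins `mb` of guest
        # memory so the hypervisor can reclaim the backing pages.
        mb = min(mb, self.usable_mb())
        self.ballooned_mb += mb
        return mb

    def deflate_balloon(self, mb):
        # Guest is under pressure: ask for pinned memory back.
        mb = min(mb, self.ballooned_mb)
        self.ballooned_mb -= mb
        return mb

guest = GuestVM(total_mb=8192)
guest.inflate_balloon(2048)      # host reclaims 2 GB
print(guest.usable_mb())         # 6144
guest.deflate_balloon(1024)      # guest takes 1 GB back
print(guest.usable_mb())         # 7168
```

The "dance" the parent comment mentions is exactly these inflate/deflate round trips; the slowness comes from the real guest kernel having to find whole reclaimable pages, which this toy glosses over.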
The short answer is yes, Linux can be informed to some extent but often you still want a memory balloon driver so that the host can “allocate” memory out of the VM so the host OS can reclaim that memory. It’s not entirely trivial but the tools exist, and it’s usually not too bad on vz these days when properly configured.
It’s one reason I don’t like WSL2. When you compile something that needs 30 GB of RAM, the only thing you can do is terminate the WSL2 VM to get that RAM back.
Since late 2023, WSL2 has supported "autoMemoryReclaim", nominally still experimental, but works fine for me.
add:

    [experimental]
    autoMemoryReclaim=gradual

to your .wslconfig
See: https://learn.microsoft.com/en-us/windows/wsl/wsl-config
I just noticed the addition of the container cask when I ran “brew update”.
I chased the package’s source and indeed it’s pointing to this repo.
You can install and use it now on the latest macOS (not 26). I just ran “container run nginx” and it worked alright it seems. Haven’t looked deeper yet.
There’s some problem with networking: if you try to run multiple containers, they won’t see each other. Could probably be solved by running a local VPN or something.
WSLv1 never supported a native docker (AFAIK, perhaps I'm wrong?)
That said, I'd think Apple would actually be much better positioned to try the WSL1 approach. I'd assume Apple's OS is a lot closer to Linux than Windows is.
This doesn't look like WSL1. They're not running Linux syscalls to the macOS kernel, but running Linux in a VM, more like the WSL2[0] approach.
[0] https://devblogs.microsoft.com/commandline/announcing-wsl-2/...
In the end they'll probably run into the same issues that killed WSL1 for Microsoft: the Linux kernel has enormous surface area, and lots of pretty subtle behaviour, particularly around the stuff that is most critical for containers, like cgroups and user namespaces. There isn't an externally usable test suite that could be used to validate Microsoft's implementation of all these interfaces, because... well, why would there be?
Maintaining a working duplicate of the kernel-userspace interface is a monumental and thankless task, and especially hard to justify when the work has already been done many times over to implement the hardware-kernel interface, and there's literally Hyper-V already built into the OS.
Yeah, it probably would be feasible to dust off the FreeBSD Linux compatibility layer[1] and turn that into native support for Linux apps on Mac.
I think Apple’s main hesitation would be that the Linux userland is all GPL.
If they built as a kernel extension it would probably be okay with gpl.
There’s a huge opportunity for Apple to make kernel development for xnu way better.
Tooling right now is a disaster — very difficult to build a kernel and test it (eg in UTM, etc.).
If they made this better and took more of an OSS openness posture like Microsoft, a lot of incredible things could be built for macOS.
I’ll bet a lot of folks would even port massive parts of the kernel to rust for them for free.
It's impossible to have "native" support for Linux containers on macOS, since the technology inherently relies on Linux kernel features. So I'm guessing this is Apple rolling out their own Linux virtualization layer (same as WSL). Probably still an improvement over the current mess, but if they just support LXC and not Docker then most devs will still need to install Docker Desktop like they do today.
Apple has had a native hypervisor for some time now. This is probably a baked in clone of something like https://mac.getutm.app/ which provides the stuff on top of the hypervisor.
In case you're wondering, the Hypervisor.framework C API is really neat and straightforward:

1. Creating and configuring a virtual machine:

    hv_vm_create(HV_VM_DEFAULT);

2. Allocating guest memory:

    void* memory = mmap(...);
    hv_vm_map(memory, guest_physical_address, size,
              HV_MEMORY_READ | HV_MEMORY_WRITE | HV_MEMORY_EXEC);

3. Creating virtual CPUs:

    hv_vcpu_create(&vcpu, HV_VCPU_DEFAULT);

4. Setting registers:

    hv_vcpu_write_register(vcpu, HV_X86_RIP, 0x1000); // instruction pointer
    hv_vcpu_write_register(vcpu, HV_X86_RSP, 0x8000); // stack pointer

5. Running guest code:

    hv_vcpu_run(vcpu);

6. Handling VM exits:

    hv_vcpu_exit_reason_t reason;
    hv_vcpu_read_register(vcpu, HV_X86_EXIT_REASON, &reason);
One of the reasons OrbStack is so great is because they implement their own hypervisor: https://orbstack.dev/
Apple’s stack gives you low-level access to ARM virtualization, and from there Apple has high-level convenience frameworks on top. OrbStack implements all of the high-level code themselves.
How does it compare to apple’s hv?
Better filesystem support (https://orbstack.dev/blog/fast-filesystem) and memory utilization (https://orbstack.dev/blog/dynamic-memory)
Using a hypervisor means just running a Linux VM, like WSL2 does on Windows. There is nothing native about it.
Native Linux (and Docker) support would be something like WSL1, where Windows kernel implemented Linux syscalls.
Hyper-V is a type 1 hypervisor, so Linux and Windows are both running as virtual machines but they have direct access to hardware resources.
It's possible that Apple has implemented a similar hypervisor here.
Surely if Windows kernel can be taught to respond to those syscalls, XNU can be taught it even easier. But, AIUI the Windows kernel already had a concept of "personalities" from back when they were trying to integrate OS/2 so that zero-to-one for XNU could be a huge lift, not the syscalls part specifically
XNU similarly has a concept of "flavors" and uses FreeBSD code to provide the BSD flavor. Theoretically, either Linux code or a compatibility layer could be implemented in the kernel in a similar way. The former won't happen due to licensing.
> the Windows kernel already had a concept of "personalities" from back when they were trying to integrate OS/2 so that zero-to-one for XNU could be a huge lift, not the syscalls part specifically
XNU is modular, with its BSD servers on top of Mach. I don’t see this as being a strong advantage of NT.
Exactly. So it wouldn't necessarily be easier. NT is almost a microkernel.
Yep. People consistently underestimate the great piece of technology NT is, it really was ahead of its time. And a shame what Microsoft is doing with it now.
Was it ahead? I am not sure. There was lots of research on microkernels at the time, and NT was a good compromise between a mono- and a microkernel. It was an engineering product of its age. A considerably good one. It is still the best popular kernel today. Not because it is the best possible with today's resources, but because nobody else cares about core OS design anymore.
I think it is the Unix side that decided to bury their heads in the sand. We got Linux. It is free (of charge and of licensing). It supported files, basic drivers and sockets. It got commercial support for servers. It was all Silicon Valley needed for startups. Anything else is a cost. So nobody cared. Most of the open-source microkernel research slowly died after Linux. There is still some with the L4 family.
Now we are overengineering our stacks to get closer to microkernel capabilities that Linux lacks, using containers. I don't want to say it is ripe for disruption because it is hard and, again, nobody cares (except some network and security equipment, but that's a tiny fraction).
> Was it ahead? I am not sure.
You say this, but then proceed to state that it had a very good design back then informed by research, and still is today. Doesn't that qualify? :-)
NT brought a HAL, proper multi-user ACLs, subsystems in user mode (that alone is amazing, even though they sadly never really gained momentum), preemptive multitasking. And then there's NTFS, with journaling, alternate streams, and shadow copies, and heaps more. A lot of it was very much ahead of UNIX at the time.
> nobody else cares about core OS design anymore.
Agree with you on that one.
> You say this, but then proceed to state that it had a very good design back then informed by research, and still is today. Doesn't that qualify? :-)
I meant that NT was a product that matched the state of the art OS design of its time (90s). It was the Unix world that decided to be behind in 80s forever.
NT was ahead not because it is breaking ground and bringing in new design aspects of 2020s to wider audiences but Unix world constantly decides to be hardcore conservative and backwards in OS design. They just accept that a PDP11 simulator is all you need.
It is similar to how NASA got stuck with 70s/80s design of Shuttle. There was research for newer launch systems but nobody made good engineering applications of them.
> The Containerization framework enables developers to create, download, or run Linux container images directly on Mac. It's built on an open-source framework optimized for Apple Silicon and provides secure isolation between container images
That's their phrasing, which suggests to me that it's just a virtualization system. Linux container images generally contain the kernel.
> Linux container images generally contain the kernel.
No, containers differ from VMs precisely in requiring dependency on the host kernel.
Hmm, so they do. I assumed that because you pull in a Linux distro, that distro's kernel was used too, but I guess not. Perhaps they have done some sort of improvement where one Linux kernel running via the hypervisor is shared by all containers. Still can't see them trying to emulate Linux calls, but who knows.
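For the curious, here's a minimal sketch of what an OCI image manifest carries (field names follow the OCI image spec; the digests are fake placeholders): filesystem layers plus a config blob, and notably no kernel, which is why the host, or a shared helper VM, has to supply one.

```python
import json

# Skeleton of an OCI image manifest: layer tarballs plus a config blob.
# There is no kernel anywhere in here -- the image borrows the kernel
# of whatever host (or helper VM) runs it.
manifest = {
    "schemaVersion": 2,
    "mediaType": "application/vnd.oci.image.manifest.v1+json",
    "config": {
        "mediaType": "application/vnd.oci.image.config.v1+json",
        "digest": "sha256:0000",  # fake digest, for illustration only
    },
    "layers": [
        {
            "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
            "digest": "sha256:1111",  # rootfs files only, no kernel
        },
    ],
}
print(json.dumps(manifest, indent=2))
```

(A distro base image can still ship kernel *packages* in its rootfs, as a sibling comment notes, but they're inert files; nothing boots them.)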
> I assumed that because you pull in a Linux distro, that distro's kernel was used too,
That's how Docker works on WSL2: it runs on top of a virtualised Linux kernel. WSL2 is pretty tightly integrated with Windows itself, still a Linux VM though. It seems kinda weird for Apple to reinvent the wheel for that kind of thing for containers.
> That's how Docker works on WSL2: it runs on top of a virtualised Linux kernel. WSL2 is pretty tightly integrated with Windows itself, still a Linux VM though. It seems kinda weird for Apple to reinvent the wheel for that kind of thing for containers.
Can't edit my posts on mobile, but realized that's, what's the word, not useful... But yeah, sharing the kernel between containers while otherwise keeping them isolated allegedly allows them to have VM-esque security without the overhead of separate VMs for each image. There's a lot more to it, but you get the idea.
They usually do contain a kernel because package managers are too stupid to realise it’s a container, so they install it anyway.
The screenshot in TFA pretty clearly shows docker-like workflows pulling images, showing tags and digests and running what looks to be the official Docker library version of Postgres.
Every container system is "docker-like". Some (like Podman) even have a drop-in replacement for the Docker CLI. Ultimately there are always subtle differences which make swapping between Docker <> Podman <> LXC or whatever else impossible without introducing messy bugs in your workflow, so you need to pick one and stick to it.
If you've not tried it recently, I suggest give the latest version of podman another shot. I'm currently using it over docker and a lot of the compatibility problems are gone. They've put in massive efforts into compatibility including docker compose support.
Yeah, from a quick glance the options are 1:1 mapped, so an

    alias docker='container'

should work, at least for basic and common operations.

What about macOS being derived from BSD? Isn’t that where containers came from: BSD jails?
I know the container ecosystem largely targets Linux just curious what people’s thoughts are on that.
OS X pulls some components of FreeBSD into kernel space, but not all (and those are very old at this point). It also uses various BSD bits for userspace.
Good read from the horse's mouth:
https://developer.apple.com/library/archive/documentation/Da...
BSD jails are architected wholly differently from what something like Docker provides.
Jails are first-class citizens that are baked deep into the system.
A tool like Docker relies on multiple Linux features/tools to assemble/create isolation.
Additionally, iirc, the logic for FreeBSD jails never made it into the Darwin kernel.
Someone correct me please.
> BSD jails are architected wholly differently from what something like Docker provides.
> Jails are first-class citizens that are baked deep into the system.
Both very true statements and worth remembering when considering:
> Additionally, iirc, the logic for FreeBSD jails never made it into the Darwin kernel.
You are quite correct, as Darwin is based on XNU[0], which itself has roots in the Mach[1] microkernel. Since XNU[0] is an entirely different OS architecture than that of FreeBSD[3], jails[4] do not exist within it.
The XNU source can be found here[2].
0 - https://en.wikipedia.org/wiki/XNU
1 - https://en.wikipedia.org/wiki/Mach_(kernel)
2 - https://github.com/apple-oss-distributions/xnu
3 - https://cgit.freebsd.org/src/
4 - https://man.freebsd.org/cgi/man.cgi?query=jail&apropos=0&sek...
Thank you for the links I will take a closer look at XNU. It’s neat to see how these projects influence each other.
> what something like Docker provides
Docker isn't providing any of the underlying functionality. BSD jails and Linux cgroups etc aren't fundamentally different things.
Jails were explicitly designed for security; cgroups are more generalized, more about resource control, and containers leverage namespaces, capabilities, and AppArmor/SELinux to accomplish what they do.
> Jails create a safe environment independent from the rest of the system. Processes created in this environment cannot access files or resources outside of it.[1]
While you can accomplish similar tasks, they are not equivalent.
Assume Linux containers are jails, and you will have security problems. And on the flip side, k8s pods share UTS, IPC, and Network namespaces, yet have independent PID and FS namespaces.
Depending on your use case they may be roughly equivalent, but they are fundamentally different approaches.
[1] https://freebsdfoundation.org/freebsd-project/resources/intr...
"Container" is sort of synonymous with "OCI-compatible container" these days, and OCI itself is basically a retcon standard for Docker (runtime, images, etc.). So from that perspective every "container system" is necessarily "docker-like", and that means Linux namespaces and cgroups.
With a whole generation forgetting they came first in big-iron UNIX like HP-UX.
Interesting. My experience w/ HP-UX was in the 90s, but this (Integrity Virtual Machines) was released in 2005. I might call out FreeBSD Jails (2000) or Solaris Zones (2005) as an earlier and a more significant case respectively. I appreciate the insight, though, never knew about HP-UX.
HP-UX Vault, released with HP-UX 10.24 in 1996:
https://en.m.wikipedia.org/wiki/HP-UX
What you searched for is an evolution of it.
Does it really matter, tho?
Another reason it matters is they might have done it differently which could inspire future improvements. :)
I like to read bibliographies for that reason—to read books that inspired the author I’m reading at the time. Same goes for code and research papers!
Some people think it matters to properly learn history, instead of urban myths.
History is one thing, who-did-it-first is often just a way to make a point in faction debates. In the broader picture, it makes little difference IMHO.
WSL throughput is not enough for file intensive operations. It is much easier and straightforward to just delete windows and use Linux.
Using the Linux filesystem has almost no performance penalty under WSL2 since it is a VM. Docker Desktop automatically mounts the correct filesystem. Crossing the OS boundary for Windows files has some overhead of course, but that's not the use case WSL2 is optimized for.
With WSL2 you get the best of both worlds: a system with perfect driver and application support, and a Linux-native environment. Hybrid GPUs, webcams, lid sensors, etc. all work without any configuration effort. You get good battery life. You can run Autodesk or Photoshop, but at the same time you can run Linux apps with almost no performance loss.
FWIW I get better battery life with ubuntu.
Are you comparing against the default vendor image that's filled with adware or a clean Windows install with only drivers? There is a significant power use difference and the latter case has always been more power efficient for me compared to the Linux setup. Powering down Nvidia GPU has never fully worked with Linux for me.
How? What's your laptop brand and model? I've never had better battery life with any machine using ubuntu.
WSL2 doesn't have a syscall-translation layer; WSL1 did, but that approach wasn't feasible, so WSL2 is basically running VMs with the Hyper-V hypervisor.
Apple looks like it's skipped the failed WSL1 and gone straight for the more successful WSL2 approach.
If they implemented the Linux syscall interface in their kernel they absolutely could.
Aren't the syscalls a constant moving target? Didn't even Microsoft fail at keeping up with them in WSL?
Linux is exceptional in that it has stable syscall numbers and guarantees stability. This is largely why statically linked binaries (and containers) "just work" on Linux, meanwhile Windows and Mac OS inevitably break things with an OS update.
Microsoft frequently tweaks syscall numbers, and they make it clear that developers must access functions through e.g. NTDLL. Mac OS at least has public source files used to generate syscall.h, but they do break things, and there was a recent incident where Go programs all broke after a major OS update. Now Go uses libSystem (and dynamic linking)[2].
Not Linux syscalls, they are a stable interface as far as the Linux kernel is concerned.
They're not really a moving target (since some distros ship ancient kernels, most components will handle lack of new syscalls gracefully), but the surface is still pretty big. A single ioctl() or write() syscall could do a billion different things and a lot of software depends on small bits of this functionality, meaning you gotta implement 99% of it to get everything working.
> Meet Containerization, an open source project written in Swift to create and run Linux containers on your Mac. Learn how Containerization approaches Linux containers securely and privately. Discover how the open-sourced Container CLI tool utilizes the Containerization package to provide simple, yet powerful functionality to build, run, and deploy Linux Containers on Mac.
> Containerization executes each Linux container inside of its own lightweight virtual machine.
That’s an interesting difference from other Mac container systems. It also (more obviously) uses Rosetta 2.
Podman Desktop, and probably other Linux-containers on macOS tools, can already create multiple VMs, each hosting a subset of the containers you run on your Mac.
What seems to be different here, is that a VM per each container is the default, if not only, configuration. And that instead of mapping ports to containers (which was always a mistake in my opinion), it creates an externally routed interface per machine, similar to how it would work if you'd use macvlan as your network driver in Docker.
Both of those defaults should remove some sharp edges from the current Linux-containers on macOS workflows.
The ground keeps shrinking for Docker Inc.
They sold Docker Desktop for Mac, but that might start being less relevant and licenses start to drop.
On Linux there’s just the cli, which they can’t afford to close since people will just move away.
Docker Hub likely can’t compete with the registries built into every other cloud provider.
There is already a paid alternative, Orbstack, for macOS which puts Docker for Mac to shame in terms of usability, features and performance. And then there are open alternatives like Colima.
Used OrbStack for some time; it made my dev team’s M1s run our Kubernetes pods in a much lighter fashion. Love it.
How does it compare to Podman, though?
Podman works absolutely beautifully for me; with other platforms, I tripped over weird corner cases.
That is why they are now into the reinventing application servers with WebAssembly kind of vibe.
It’s really awful. There’s a certain size at which you can pivot and keep most of your dignity, but for Docker Inc., it’s just ridiculous.
It's cool but also not as revolutionary as you make it sound. You can already install Podman, Orbstack or Colima right? Not sure which open-source framework they are using, but to me it seems like an OS-level integration of one of these tools. That's definitely a big win and will make things easier for developers, but I'm not sure if it's a gamechanger.
All those tools use a Linux VM (whether managed by Qemu or VZ) to run the actual containers, though, which comes with significant overhead. Native support for running containers -- with no need for a VM -- would be huge.
Still needs a VM. It'll be running more VMs than something like orbstack, which I believe runs just one for the docker implementation. Whether that means better or worse performance we'll find out.
there's still a VM involved to run a Linux container on a Mac. I wouldn't expect any big performance gains here.
Yes, it seems like it's actually a more refined implementation than what currently exists. Call me pleasantly surprised!
The framework that container uses is built in Swift and also open sourced today, along with the CLI tool itself: https://github.com/apple/containerization
It looks like nothing here is new: we have all the building blocks already. What Apple has done is package it all nicely, which is nothing to discount: there's a reason people buy managed services over raw metal for hosting their services, and having a batteries-included development environment is worth a premium over having to assemble it on your own.
The containerization experience on macOS has historically been underwhelming in terms of performance. Using Docker or Podman on a Mac often feels sluggish and unnecessarily complex compared to native Linux environments. Recently, I experimented with Microsandbox, which was shared here a few weeks ago, and found its performance to be comparable to that of native containers on Linux. This leads me to hope that Apple will soon elevate the developer experience by integrating robust containerization support directly into macOS, eliminating the need for third-party downloads.
Docker at least runs a linux vm that runs all those containers. Which is a lot of needless overhead.
The equivalent of Electron for containers :)
I’ve been using Colima for a long while with zero issues, and that leverages the older virtualization framework.
yeah -- I saw it's built on "open source foundations", do you know what project this is?
My guess is Podman. They released native hypervisor support on macOS last year. https://devclass.com/2024/03/26/podman-5-0-released-with-nat...
Being able to drop Docker Desktop would be great. We're using Podman on MacOS now in a couple places, it's pretty good but it is another tool. Having the same tool across MacOS and Linux would be nice.
Migrate to Orbstack now, and get a lot of sanity back immediately. It’s a drop-in replacement, much faster, and most importantly, gets out of your way.
There's also Rancher Desktop (https://rancherdesktop.io/). Supports moby and containerd; also optionally runs kubernetes.
I have to drop docker desktop at work and move to podman.
I'm the primary author of an amalgamation of GitHub's scripts-to-rule-them-all with Docker Compose, so my colleagues can just type `script/setup` and `script/server` (and more!) and the underlying scripts handle the rest.
Apple including this natively is nice, but I won't be able to use this because my scripts have to work on Linux and probably WSL.
Colima is my guess, only thing that makes sense here if they are doing a qemu vm type of thing
That's my guess too... Colima, but probably doing a VM using the Virtualization framework. I'll be more curious if you can select x86 containers, or if you'll be limited to arm64/aarch64. Not that it really makes that much of a difference anymore, you can get pretty far with Linux Arm containers and VMs.
Should be easy enough, look for the one with upstream contributions from Apple.
Oh, wait.
They Sherlocked OrbStack.
Well, Orbstack isn't really anything special in terms of its features, it's the implementation that's so much better than all the other ways of spinning up VMs to run containers on macos. TBH, I'm not 100% sure 2025 Apple is capable anymore of delivering a more technically impressive product than orbstack ...
That's a good thing though right?
It would be better for the OrbStack guy if they bought it.
Apple sees some nice code under a pushover license and they just can’t help themselves.
Interestingly it looks like Apple has rewritten much of the Docker stack in Swift rather than using existing Go code.
Microsoft did it first to VirtualBox / VMware Workstation, though.
That is what I have been using since 2010, until WSL came to be, it has been ages since I ever dual booted.
Orbstack has been pretty bulletproof
It's a VM just like WSL... So yeah.
WSL 2 involves a VM. WSL 1, which is still maintained and usable, doesn't.
https://learn.microsoft.com/en-us/windows/wsl/compare-versio...
Ok, I've squeezed containerization into the title above. It's unsatisfactory, since multiple announced-things are also being discussed in this thread, but "Apple's kitchen-sink announcement from WWDC this year" wouldn't be great either, and "Apple supercharges its tools and technologies for developers to foster creativity, innovation, and design" is right out.
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...
> Apple Announces Foundation Models and Containerization frameworks, etc.
This sounds like apple announced 2 things, AI models and container related stuff I'd change it to something like:
> Apple Announces Foundation Models, Containerization frameworks, more tools
The article says that what was announced is "foundation model frameworks", hence the awkward twist in the title, to get two frameworkses in there.
Some 15 years ago, a friend of mine said to me, "mark my words, Apple will eventually merge OSX with iOS on the iPad". And with every passing keynote since then, it seemed Apple has been inching towards that prophecy, and today the iPad has become practically a MacBook Air with a touch screen. Unless you're a video editor, a programmer who needs resources to compile, or a 3D artist, I don't see how you'd need anything other than an iPad.
The fact that they haven't done it in 15 years should be an indication that they don't intend to do it at all. Remember that in the same time period Apple rebuilt every Macbook from scratch from the chipset up. Neither the hardware nor software is a barrier to them merging the two platforms. It's that the ecosystems are fundamentally incompatible. A true "professional" device needs to offer the user full control, and Apple isn't giving up this control on an i-Device. The 30% cut is simply too lucrative.
If anyone wants to read up on how much effort Apple actually went through to keep Apple Silicon Macs open, take a look here: https://asahilinux.org/docs/platform/security/#per-container...
Secure Boot on other platforms is all-or-nothing, but Apple recognizes that Mac users should have the freedom to choose exactly how much to peel back the security, and should never be forced to give up more than they need to. So for that reason, it's possible to have a trusted macOS installation next to a less-trusted installation of something else, such as Asahi Linux.
Contrast this with others like Microsoft who believe all platforms should be either fully trusted or fully unsupported. Google takes this approach with Android as well. You're either fully locked in, or fully on your own.
> You're either fully locked in, or fully on your own.
I'm not sure what you mean by that. You can trivially root a Pixel factory image. And if you're talking about how they will punish you for that by removing certain features: Apple does that too (but to a lesser extent).
https://github.com/cormiertyshawn895/RecordingIndicatorUtili...
On Android devices with AVB (so basically everything nowadays), once the bootloader is unlocked, so many things already either lock you out or degrade your service in various ways. For example, Netflix will downgrade you to 480p, Google Pay will stop working, many apps will just straight up disappear from the Play Store because SafetyNet will stop passing (especially on newer devices with hardware attestation), banking apps (most notably Cash App) will often stop working, many other third-party apps that don't even have anything to do with banking will still lock you out, etc.
On many Android devices, unlocking the boot loader at any point will also permanently erase the DRM keys, so you will never again be able to watch high resolution Netflix (or any other app that uses Widevine), even if you relocked the bootloader and your OS passed verified boot checks.
On a Mac, you don't need to "unlock the bootloader" to do anything. Trust is managed per operating system. As long as you initially can properly authenticate through physical presence, you totally can install additional operating systems with lower levels of trust and their existence won't prevent you from booting back into the trusted install and using protected experiences such as Apple Pay. Sure, if you want to modify that trusted install, and you downgrade its security level to implement this, then those trusted experiences will stop working (such as Apple Pay, iPhone Mirroring, and 4K Netflix in Safari, for instance), but you won't be rejected by entire swathes of the third-party app ecosystem and you also won't lose the ability to install a huge fraction of Mac apps (although iOS and iPadOS apps will stop working). You also won't necessarily be prevented from turning the security back up once you're done messing around, and gaining every one of those experiences back.
So sure, you can totally boil it down to "Apple still punishes you, only a bit less", but not only do they not even punish your entire machine the way Microsoft and Google do, but they even only punish the individual operating system that has the reduced security, don't punish it as much as Microsoft and Google do, and don't permanently lock things out just because the security has ever been reduced in the past.
Do keep in mind though, the comparison to Android is a bit unfair anyway because Apple's equivalent to the Android ecosystem is (roughly; excluding TV and whatever for brevity) iPhone and iPad, and those devices have never and almost certainly will never offer anything close to a bootloader unlock. I just had used it as an example of the all or nothing approach. Obviously Apple's iDevice ecosystem doesn't allow user tampering at all, not even with trusted experiences excluded.
Fun fact though: The Password category in System Settings will disappear over iPhone Mirroring to prevent the password from being changed remotely. Pretty cool.
That is a good point. I wish dual booting with different security settings was possible on Android as well. The incentives for Google to implement that aren't really there though.
Out of interest, are you currently using android (or fork) or iOS?
I used Android until around January last year, when I switched to iPhone because it works better with Mac (which I'd switched back to about a month prior, after having had enough of around four years of dealing with Windows's bullshit). Not that Android worked well with Windows... I just didn't even have the idea in my head that devices could work well together at all. AirDrop changed my mind! (And all the other niceties, like Do Not Disturb syncing, and so on...)
I used to tweak/mod Android and most recently preferred customizing the OEM install over forks. I stopped doing that when TWRP ran something as OpenRecoveryScript and immediately wiped the phone without giving me any opportunity to cancel. My most recent Android phone I never bothered to root. I may never mod Android again.
This is a pretty wild take.
Its reasonable to install a different OS on Android, even if some features don't work. I've done this, my friends and family have done this, I've seen it IRL.
I've never seen anyone do this on iPhone in my entire life.
But I flipped and I'm a Google hater. Expensive phones and no aux port. At least I can get cheap androids still.
If anyone wants to read up on all the features Apple didn't implement from Intel Macs that made Linux support take so long, here is a list of UEFI features that represents only a small subset of the missing support relative to AMD and Intel chipsets: https://en.wikipedia.org/wiki/UEFI#Features
Alternatively, read about iBoot. Haha, just kidding! There is no documentation for iBoot, unlike there is for uBoot and Clover and OpenCore and SimpleBoot and Freeloader and systemd-boot. You're just expected to... know. Yunno?
To be fair, this is how homebrew for Apple devices has always worked. You've always had to effectively reverse engineer the platform in order to write privileged code. Although I get the argument that if Apple were explicitly trying to support alternative operating systems they probably could have done more to make it easy, really what they were doing with this was first and foremost enabling additional use cases for macOS, and then maybe silently doing it in a way that third parties would also be able to benefit from. The Asahi wiki does a bit of a better job of explaining this, but the suspicion is that Apple did this not necessarily to make it easier for alternative operating systems to exist but to prevent the Mac from needing to be jailbroken when alternative operating systems were bound to happen anyway.
It's not how homebrew worked on Intel Macs, or even PowerMacs[0] either. It's a change made with the Apple Silicon lineup - I cannot speak on Apple's behalf to tell you why they did that. But I can blame UEFI as the reason why the M3 continues to have pitiful Linux support when brand-new AMD and Intel chips have video drivers and power management on Day One.
The EFI environment does provide some basic drivers for the boot environment, but they all go away once the OS loads, except for a handful of functions such as EFI variable management. (Linux can also reuse a framebuffer originally obtained from EFI for a very limited form of video support - efifb - but that’s not proper video support.) So EFI doesn’t get credit for video drivers or power management.
For power management, you can however give some credit to ACPI, which is not directly related to UEFI (it predates it), but is likewise an open standard, and is generally found on the same devices as UEFI (i.e. PCs and ARM servers). ACPI also provides the initial gateway to PCIe, another open standard; so if you have a discrete video card then you can theoretically access it without chipset-specific drivers (but of course you still need a driver for the card itself).
But for onboard video, and I believe a good chunk of power management as well, the credit goes to drivers written for Linux by the hardware vendors.
Sorry, I should have specified Apple Silicon rather than just "Apple devices". Obviously the devices that used widely supported CPUs running pretty much widely supported firmware were pretty easy to install non-Apple things on. My Mid-2015 A1398 ran a triple boot between macOS, Windows and Arch Linux thanks to rEFInd.
the only macbook I’ve tried to put linux on was a t2 machine, and it still doesn’t sleep/suspend right, so I’m a bit skeptical that apple is really leading the way here, but maybe I’ve just not touched any recent windows devices either
To be fair, sleep/suspend has been a rather infamously difficult problem for Linux when it comes to devices that weren't designed to run Linux. I think the Macs with T2 chips were a bit weird anyway and I wonder if they had already been working on Apple Silicon Macs that far back and that's why the T2 became a thing?
Apple is also rather notorious for tinkering with Intel's ACPI files, for better or worse. Suspend is finnecky enough on hardware that supports it, and probably outright impossible if your CPU power states disagree with what the software is expecting.
They don’t want to overtake their desktop device market. If the UI fully converges, then all you have is an iPad with a keyboard across all devices (laptops, desktops).
I think practically everyone is better off with a laptop. iPad is great if you're an artist using the pencil, or just consuming media on it. Otherwise a macbook is far more powerful and ergonomic to use.
I think perhaps you are overestimating the computing needs of the majority of the population. Get one of the iPad cases with a keyboard and an iPad is in many ways a better laptop.
But the majority won't pay extra for an ipad and a keyboard, when they can pay less for an air with everything included...
I'm not sure - I just looked casually at some options and it appears one can find an iPad between $700-$900 for a pretty solid model, which includes the $250 folio keyboard. The base model MBA starts at $999. So depends on whether you want a traditional laptop or a "computing device."
Or maybe a stand and separate keyboard. Better ergonomics than a laptop that way with similar portability.
Any keyboard you recommend? I'm looking around myself.
Something wireless would be nice for portability IMO, e.g. Apple or Logitech Bluetooth. Security considerations there though.
I wouldn't want a numpad. A track point would be ape.
I struggle with keyboard recommendations b/c I'm not fully satisfied lol.
I have an iPad and really like it, but no, it is not.
Several small things combined make it really different to the experience that I have with a desktop OS. But it is nice as side device
I'm guessing you are coming at it from the perspective of a laptop user and likely a power user. The majority of the population just needs to scroll social media, message some friends, send an email or two, do a little shopping, maybe write a document or two. For this crowd an iPad is plenty. When I was a software developer - yeah, I had a Mac Pro on my desk and a MBP I carried when I traveled. Now as a real estate agent, an iPad is plenty for when I'm on the go.
I used to think that, not having used an iPad. Now I carry a work-issued iPad with 5G and it's actually pretty convenient for remote access to servers. I wouldn't want to spend a day working on it, but it's way faster than pulling out a laptop to make one tiny change on a server. It's also great for taking notes at meetings/conferences.
It's irritatingly bad at consuming media and browsing the web. No ad blocking, so every webpage is an ad-infested wasteland. There are so many ads in YouTube and streaming music. I had no idea.
It's also kind of a pain to connect to my media library. Need to figure out a better solution for that.
So, as a relatively new iPad user it's pleasantly useful for select work tasks. Not so great at doomscrolling or streaming media. Who knew?
There's native ad blocking on iOS and has been for a while—I've found that to significantly enhance the usability of the device. I use Wipr[0], other options are available.
I use Wipr on my phone, the experience is a lot worse than ublock origin on desktop...
Use Orion Browser. It allows installing Firefox/Chrome extensions. Install Firefox's uBlock Origin.
Try the Brave browser for YouTube. I used Jellyfin for my media library and that seemed to work fine for tv and movies.
I just got a MacBook and haven't touched my iPad Pro since. I would think I could make a change faster on a MacBook than on an iPad if they were both in my bag. Although I do miss the cellular data that the iPad has.
> practically everyone is better off with a laptop
The majority of the world are using their phones as a computing device.
And as someone with a MacBook and iPad, the latter is significantly more ergonomic.
I prefer MacBook to iPad most of the time. The only use case for iPad for me where it shines is when I need to use a pencil.
I don't understand why my MacBook doesn't have a touchscreen. I'm switching to an iPad Pro tomorrow. I use Superwhisper to talk to it 90% of the time anyway.
My theory is because of the hinge, which is a common point of failure on laptops. Either you are putting extra strain on it by having someone constantly touching the screen, and some users just mash their fingers into touch screens. Or users want a fully openable screen to mimic a tablet format, and those hinges always seem to fail quicker. Every touchscreen laptop I've had eventually has had the hinge fail.
There seems to be some kind of incompatibility between antiglare and oleophobic coatings that may also contribute.
Every single touch screen laptop I’ve seen has huge reflection issues, practically being mirrors. My assumption is that in order for the screen to not get nasty with fingerprints in no time, touchscreen laptops need oleophobic coating, but to add that they have to use no antiglare coating.
Personally I wouldn’t touch my screen often enough to justify having to contend with glare.
Apple is capable of solving it if they want to. They don't want to (yet at least).
Because MacBooks have subpar displays, at least the M4 Air does. The iPad Pro is a better value.
Yeah, I think the majority of users, even in an office environment, would be better off with an iPad in 99% of cases. All standard office stuff, like presentations, documents, and similar, is going to run better on an iPad. There are fewer footguns; users are less likely to open 300 tabs just because they can.
If you are a developer or a creative however, then a Mac is still very useful.
I don't use an iPad much, but it's been interesting to watch from afar how it's been changing over these years.
They could have gone the direction of just running MacOS on it, but clearly they don't want to. I have a feeling that the only reason MacOS is the way it is, is because of history. If they were building a laptop from scratch, they would want it more in their walled garden.
I'm curious to see what a "power user" desktop with windowing and files, and all that stuff that iPad is starting to get, ultimately looks like down this alternative evolutionary branch.
It's obvious, isn't it? It will look like a desktop, except Apple decides what apps you can run and takes its 30% tax on all commerce.
Yeah, it's like we're watching two parallel evolution paths: macOS dragging its legacy along, and iPadOS trying to reinvent "productivity" from first principles, within Apple's tight design sandbox.
I really wish there was some sort of hybrid device. I often travel by foot/bike/motorbike and space comes at a premium. I'd have a Microsoft Surface if Windows was not so unbearable.
On the other hand, I have come to love having a reading/writing/sketching device that is completely separate from my work device. I can't get roped into work and emails and notifications when I just want to read in bed. My iPad Mini is a truly distraction-free device.
I also think it would be hard to have a user experience that works great both for mobile work and sitting-at-a-desk work. I returned my Microsoft Surface because of a save dialog in a sketching app. I did not want to do file management because drawing does not feel like a computing task. On the other hand, I do want to deal with files when I'm using 3 different apps to work on a website's files.
Whether or not they eventually fuse, I don't know—I doubt it. But the approach they've taken over the past 15 years to gradually increase the similarities in user experience, while not trying to force a square peg in a round hole, have been the best path in terms of usability.
I think Microsoft was a little too eager to fuse their tablet and desktop interface. It has produced some interesting innovations in the process but it's been nowhere near as polished as ipadOS/macOS.
ipad hardware is a full blown M chip. There's no real hardware limitation that stops the iPad from running macOS, but merging it cannibalizes each product line's sales
The new windowing feature basically cannibalizes MacBook Air.
A Macbook Air is cheaper than an iPad Pro with a keyboard though. Not to mention you still can't run apps from outside the app store, and most of these new features we're hoping work as well as they do on MacOS, but given that background tasks had to be an API, I doubt they will.
iPad+keyboard is also awkwardly top heavy and not very well suited for lap use. That might cease to be an issue with sufficiently dense batteries bringing down the weight of the iPad though.
I still find iPadOS frustrating for certain "pro" workflows. File management, windowing, background tasks - all still feel half-baked compared to macOS. It's like Apple's trying to protect the simplicity of iOS while awkwardly grafting on power-user features
> The iPad has become practically a MacBook Air with a touch screen. Unless you were a video editor, programmer who needs resources to compile or a 3D artist, I don't see how you'd need anything other than an iPad.
No! It's not - and it's dangerous to propagate this myth. There are so many arbitrary restrictions on iPad OS that don't exist on MacOS. Massive restrictions on background apps - things like raycast (MacOS version), Text Expander, cleanshot, popclip, etc just aren't possible in iPad OS. These are tools that anyone would find useful. No root/superuser access. I still can't install whatever apps I want from whatever sources I want. Hell, you can't even write and run iPadOS apps in a code editor on the iPad itself. Apple's own editor/development tool - Xcode - only runs on MacOS.
The changes to window management are great - but iPad and iPadOS are still extremely locked down.
But when you have so many customers buying and using both, seems like it'd be bad business for them to fully merge those lines.
With Microsoft opening Windows's kernel to the Xbox team, and a possible macOS-iPadOS unification, we are reaching multiple levels of climate changes in Hell. It's hailing!
> I don't see how you'd need anything other than an iPad.
For the same price, you still get a better mac.
Does an iPad allow for multiple users?
Yes, but only if it's enrolled in MDM, bizarrely enough.
I wish Apple provided the MDM, rather than relying on a random consumer ecosystem of dodgy companies who all charge $3–18 per machine per month, which is a lot.
Auth should be Apple Business Manager; image serving should be passive directories / cloud buckets.
Apple launched their own solution last year (maybe it was the year before).
Haven’t tried it though, still using JamF.
I wish they’d focus on just enabling actual functionality on iPad - like can I have Xcode please? And a shell?
I dgaf what the UI looks like. It’s fine.
Nothing Apple can do to iPadOS is going to fix the fundamental problem that:
1. iPadOS has a lot of software either built for the "three share sheets to the wind" era of iPadOS, or lazily upscaled from an iPhone app, and
2. iPadOS does not allow users to tamper with the OS or third-party software, so you can't fix any of this broken mess.
Video editing and 3D would be possible on iPadOS, but for #1. Programming is genuinely impossible because of #2. All the APIs that let Swift Playgrounds do on-device development are private APIs and entitlements that third-parties are unlikely to ever get a provisioning profile for. Same for emulation and virtualization. Apple begrudgingly allows it, but we're never going to get JIT or hypervisor support[0] that would make those things not immediately chew through your battery.
[0] To be clear, M1 iPads supported hypervisor; if you were jailbroken on iPadOS 14.5 and copied some files over from macOS you could even get full-fat UTM to work. It's just a software lockout.
The video on Containerization.framework, and the Container tool, is live [0].
It looks like each container will run in its own VM, that will boot into a custom, lightweight init called vminitd that is written in Swift. No information on what Linux kernel they're using, or whether these VMs are going to be ARM only or also Intel, but I haven't really dug in yet [1].
Looks like there isn't much to take away from this; here are a few bullet points:
Apple Intelligence models primarily run on-device, potentially reducing app bundle sizes and the need for trivial API calls.
Apple's new containerization framework is based on virtual machines (VMs) and not a true 'native' kernel-level integration like WSL1.
Spotlight on macOS is widely perceived as slow, unreliable, and in significant need of improvement for basic search functionalities.
iPadOS and macOS are converging in terms of user experience and features (e.g., windowing), but a complete merger is unlikely due to Apple's business model, particularly App Store control and sales strategies.
The new 'Liquid Glass' UI design evokes older aesthetics like Windows Aero and earlier Aqua/skeuomorphism, indicating a shift away from flat design.
Full summary (https://extraakt.com/extraakts/apple-intelligence-macos-ui-o...)
App Store control is something that EU is challenging, including on iPads. So while there’s no macOS APIs on ipadOS, I can totally see 3rd party solutions running macOS apps (and Linux or Windows, too) in a VM and outputting the result as now regular iPad windowed apps.
> including over 250,000 APIs that enable developers to integrate their apps with Apple’s hardware and software features.
This doesn’t sound impressive, it sounds insane.
I'm cautious. Apple's history with developer tools is hit or miss. And while Xcode integrating ChatGPT sounds helpful in theory, I wonder how smooth that experience really is.
Oh, Apple is doing windows Aero now? Wonder how long that one'll last.
Related ongoing threads:
Containerization is a Swift package for running Linux containers on macOS - https://news.ycombinator.com/item?id=44229348 - June 2025 (158 comments)
Container: Apple's Linux-Container Runtime - https://news.ycombinator.com/item?id=44229239 - June 2025 (11 comments)
I don't understand the foundation models here. Are these new LLMs trained by Apple, along the lines of Qwen?
iPad update is going to encourage a new series of folks trying to use iPads for general programming. I'm curious how it goes this time around. I'm cautiously optimistic
Isn't it still impossible to run any dev tools on the iPad?
IIRC Swift Playgrounds goes pretty deep -- a full LLVM compiler for Swift and you can use any platform API -- but you can't build something for distribution. The limitations are all at the Apple policy level.
Not quite. As another user mentioned, there's Swift Playgrounds which is complete enough that you can even upload apps made in it to the App Store. Aside from that, there are also IDEs like Pythonista for creating Python-based apps and others for Lua, JavaScript, etc. many of which come with their own frameworks for making native iOS/iPadOS interfaces.
I can assume that they are going to bring the Container stuff to iPad at some point. That would unlock so many things...
Does this mean we will no longer need Docker Desktop or colima?
That's what it sounds like: https://developer.apple.com/videos/play/wwdc2025/346
Not if musl is standard
That's just for the statically-linked vminitd binary in the VM I believe. Containers should still be running whatever's packaged into them.
I wonder what happened to Siri. Not a single mention anywhere?
I actually loved Siri when it first came out. It felt magical back then (in a way)
They also just announced that Shortcuts can use these endpoints (or Private Cloud Compute or ChatGPT).
Will they ever update Terminal.app?
They did!!! At least color options. Just announced at platform state of the union
Yes! In another comment of mine, that was the main thing I mentioned, haha.
edit: For those curious, https://youtu.be/51iONeETSng?t=3368.
- New theme inspired by Liquid Glass
- 24-bit colour
- Powerline fonts
Unlikely to happen soon. It’s maintained by one engineer who is very against anything resembling iTerm2.
Just use iTerm2 (Warp or Kitty are two other options out of many) and be done w/it; why would Apple even worry about this when so few people who care about terminal applications even think twice about it?
I've tried all of them, including ones that you and others haven't mentioned, like Rio. I stand by wanting Terminal.app simply updated with better colour support; then it's one less alternative program to get.
WebKit is also being swiftified, as mentioned on the platforms state of the union.
As in they're integrating Swift into the WebKit project, or exposing Swift-y wrappers over WebKit itself?
There is probably going to be a session later this week; the reference seemed to imply they are integrating Swift into the WebKit project for new development.
Interesting, I wonder if that pushes Swift on Linux further given other projects (webkitgtk etc).
Most likely not. For Apple, what matters for Swift on Linux is being a good server language for app developers who want to share code between app and server, with Apple no longer caring to sell macOS for servers.
Everything else they would rather see devs stay on their platforms, see the official tier 1 scenarios on swift.org.
> Every Apple Developer Program membership includes 200GB of Apple hosting capacity for the App Store. Apple-Hosted Background Assets can be submitted separately from an app build.
Is this the first time Apple has offered something substantial for the App store fees beyond the SDK/Xcode and basic app distribution?
Is it a way to give developers a reason to limit distribution to only the official App Store, or will this be offered regardless of what store the app is downloaded from?
> Is this the first time Apple has offered something substantial for the App store fees beyond the SDK/Xcode and basic app distribution?
They've offered 25hrs/mo of Xcode Cloud build time for the last couple years.
Background Assets have existed for years. I’m not sure that 200GB figure is new.
I watched the video and it seems they are statically linking atop musl to build their lightweight VM layer. I guess the container app itself might use glibc, but will the musl build for the VM itself cause downstream performance issues? I'm no expert in virtualization to be able to understand if this should be a concern or not.
See also:
- https://edu.chainguard.dev/chainguard/chainguard-images/abou...
Excited to try these out and see benchmarks. Expectations for on device small local model should be pretty low but let’s see if Apple cooked up any magic here.
some benchmarks here: https://machinelearning.apple.com/research/apple-foundation-...
Hopefully not bound to SwiftUI like seemingly everything else Apple Intelligence so far. But an on-device LLM (private) would be real nice to have.
The api looks like "give it a string prompt, async get a string back", so not tied to any particular UI Framework.
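Based on the API names shown in Apple's sessions, the shape is roughly the following. This is a sketch, not the definitive API: it assumes the `LanguageModelSession` type and `respond(to:)` method from the new `FoundationModels` framework as presented at WWDC; exact signatures may differ in the shipping SDK.

```swift
import FoundationModels

// Create a session backed by the on-device model and ask for a
// plain-text reply. No UI framework involved anywhere.
let session = LanguageModelSession()
let response = try await session.respond(
    to: "Suggest a one-day itinerary for Kyoto."
)
print(response.content)
```

Since it's just strings in and an awaited value out, it should be callable from UIKit, AppKit, or plain server-side Swift as easily as from SwiftUI.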
"The framework has native support for Swift, so developers can easily access the Apple Intelligence model with as few as three lines of code."
Bad news.
Does this mean we can use an LLM for free when developing iOS apps?
I like that there's support for locally-run models on Xcode.
I wish I thought that the Game Porting Toolkit 3 would make a difference, but I think Apple's going to have to incentivize game studios to use it. And they should; Apple Silicon is good enough to run a lot of games.
... when are they going to have the courage to release MacOS Bakersfield? C'mon. Do it. You're gonna tell me California's all zingers? Nah. We know better.
I sure hope they provide an accessibility option to turn down translucency to improve contrast or this UI is a non-starter for me. Without using it, this new UI looks like it may favor design over usability. Why don’t they do something more novel and let user tweak interface to their liking?
They’ve had Reduce Transparency (under Accessibility) for a long time now. It still works.
I hope they don't turn Liquid Glass into Aqua... which I hated. The only time I started to like the iOS interface was iOS 7 with flat design. I hope they don't turn this into old, skeuomorphic, Aqua-like UI by time.
It's the year of Linux on the desktop!
TIL macOS doesn't have native containers, just VMs.
I don't use macOS but had just kinda assumed it would, by virtue of its shared unixy background with Linux.
Don't containers imply a Linux kernel interface? Hence you can only have truly native containers on Linux, or else use containers in a VM or some kind of Wine-like translation layer.
Is there a beta we can install to try out these models yet?
This press release says it will be available “starting today” through developer program https://www.apple.com/newsroom/2025/06/apple-supercharges-it...
They mention Kata, so is this using Kata Containers underneath instead of their Hypervisor.framework?
I'm confused.
Can someone who uses Xcode daily compare it to, say, Cursor or VS Code? Just curious how Apple is keeping up.
Xcode so far is very rudimentary: miles behind VS Code in autocomplete. Autocomplete is very small, single-line, and suggests very, very rarely, and no features other than autocomplete exist.
Very good to see Xcode LLM improvements!
> I use VSCode Go daily + XCode Swift 6 iOS 18 daily
Several years ago Xcode also had "jump to definition" and a few other features.
All this focus on low power gaming makes me think Apple wants to get in on the Steam Deck hype.
Apple is in a reasonably good place to make gaming work for them.
Their hardware across the board is fairly powerful (definitely not top-end), they have a good API stack, especially with Metal, and they have systems at all levels, including TV. If they were to just make a standard controller, or just say "the PS5 DualSense is our choice", they could have a nice little slice for themselves.
As I understand it, Apple has a long history of entitlement and burning bridges with every major game developer while making collaboration extremely painful. They were in a much better place to make gaming work 10 years ago when major gaming studios were still interested in working with them.
Just let me have JIT! My jailbroken iPad Pro can emulate Wii at 4k without getting warm. Unfortunately you have to hack around enabling JIT on newer ios releases.
Until Apple-ported games are able to be installed from Steam instead of the App Store, you can count me out.
They better have a partnership with Sony in the works, then. Valve and Apple's approach to supporting video games diverged a decade ago. Hearing "Steam" and "Apple" uttered in the same breath is probably giving people panic attacks already.
> New Design with Liquid Glass
Yes, bringing back Aqua! I even see blue in their examples.
Does the privacy preserving aspect of this mean that Apple Intelligence can be invoked within an app, but the results provided by Apple Intelligence are not accessible to the app to be transmitted to their server or utilized in other ways? Or is the privacy preservation handled in a different way?
I think they just mean private from Apple. I don’t see how they can keep it private from the developer if it’s integrated into the app
What model are they bundling? Something apple-custom? How capable is it?
They described their home-grown models last year: https://machinelearning.apple.com/research/introducing-apple...
I'm assuming this is an updated version of those.
Apple has their own models under the hood I believe. I remember from like a year or two ago they had an open line called "ELM" (Efficient Language Model), but I'm not sure if that's what they're actually using.
I am excited to see what the benchmarks look like though, once it's live.
They're also working with Anthropic on a coding platform: https://www.macrumors.com/2025/05/02/apple-anthropic-ai-codi...
Not sure about that Liquid Glass idea.
Ultimately UI widgets are rooted in reality (switches, knobs, doohickeys), and liquid glass is Salvador Dalí-esque.
Imagine driving a car and the gear shifter was made of liquid glass… people would hit more grannies than a self-driving Tesla.
There is almost no information under the link
iPadOS and OSX continue to converge into one platform.
Multi-user iPadOS when?
When they figure out how to make it not dent sales of individual devices. If you and your spouse could easily share one around the house for different purposes but still having each of your personal apps and settings, you might not buy two!
I think this may be overestimating how often people buy tablets. My wife has an iPad Air 1 or 2, so it's close to 10 years old and mostly sits in a drawer. I had a VERY old iPad 2 that I held off on replacing because I wanted to wait for a multi-user iPad.
I finally gave up and bought a Mini6 a year or two ago, which gets.... also minimal use. And I'm sure not buying ANOTHER tablet we're not going to use.
If they were multi-user I actually think we'd both get more value out of it, and upgrade our one device more often.
> If you and your spouse could easily share one around the house for different purposes but still having each of your personal apps and settings, you might not buy two!
I get it, but an iPad starts at $349; often available for less.
At this point, an iPad is no different than a phone—most people wouldn't share a single tablet.
Laptops and desktops that run macOS, Linux, Windows which are multiuser operating systems have largely become single-user devices.
> an iPad starts at $349; often available for less.
It's less about the cost and more about having to have another stupid device to charge, update, and keep track of, when a tablet is not a device that gets used enough by any one person to be worth all that. It would be much more convenient to have a single device on a coffee or end table which all family members could use when they need to do more than you can do on a phone.
> Laptops and desktops that run macOS, Linux, Windows which are multiuser operating systems have largely become single-user devices.
Maybe. Probably 90% of work laptops are single-user, I'm sure. But for home computers, multi-user can be very useful. And it's better than ever to use laptops as dumb terminals, since all most people's stuff is in the cloud. It's not nearly as much trouble to get your secondary user account on a spare laptop in the living room to be useful as it was in the Windows XP days. Just having a browser that's signed into your stuff, plus Messages or Whatsapp, and maybe Slack/Discord/etc. is enough.
> most people wouldn't share a single tablet.
Since iPads have never supported doing so in a sane way, that unfounded assertion is just as likely due to the fact that it's a terrible experience today, since if you share one today, someone else will be accidentally marking your messages as read, you'll be polluting their browser or YouTube history, etc.
It's also the kind of dismissive claim true Apple believers tend to trot out when someone points out a shortcoming: "Nobody wants to use a touchscreen laptop!" "Nobody wants USB-C on an iPhone when Lightning is slightly smaller!" "Nobody needs an HDMI port or SD slot on a MacBook Pro!" "Nobody needs a second port on the 12-inch MacBook!" Most of the above things have come true except the touch laptop, and somehow it hasn't hurt anyone, but the "nobody wants..." crew immediately stops when Apple finally [re-]embraces something
We use iPads interchangeably. All personal apps like banking are on phones. Some apps that only I would use such as for the roomba and car are on both.
Having profiles for the kids however would be nice though. But most apps have that built in themselves.
Thank goodness… this will hopefully help keep app bundle sizes down, and allow developers to avoid calling AI APIs for trivial stuff like summaries.
back to "glass" UI element/design? Early 2000s is back, I guess.
Edit: surprised apple is dumping resources into gaming, maybe they are playing the long game here?
After reading the book "Apple in China", it’s hilarious to observe the contrast between Apple as a ruthless, amoral capitalist corporation behind the scenes and these WWDC presentations...
> New Design with Liquid Glass
Looks like software UI design – just like fashion, film, architecture and many other fields I'm sure – has now officially entered the "nothing new under the sun" / "let's recycle ideas from xx years ago" stage.
https://en.wikipedia.org/wiki/Aqua_%28user_interface%29
To be clear, this is just an observation, not a judgment of that change or the quality of the design by itself. I was getting similar vibes from the recent announcement of design changes in Android.
To me it looks more like Windows Vista's "Aero" than OS X's "Aqua".
And I couldnt be happier to see it back. I have not been a fan of the flattening of UI design over the last 15 years.
But the opposite of "flat" is not "transparent".
This was posted in another HN thread about Liquid Glass: https://imgur.com/a/6ZTCStC . I'm sure Apple will tweak the opacity before it goes live, but this looks horribly insane to me.
Agreed, people have said perhaps its Apple's way of bringing VR vibes to the UI, showing layers of UI elements.
But I'm not so sure if I want transparent.
They explicitly mention this is (paraphrasing) "bringing elements from visionOS to all your devices" in the video in TFA.
I'll just want the option to turn it off because it will use extra CPU cycles just existing.
I remember the catastrophe of Windows Vista, and how you needed a capable GPU to handle the glass effect. Otherwise, one of your (Maybe two) CPU cores would have to process all that overhead.
bleary eyed, waking up while trying to find my reading glasses would make that interface essentially useless.
It also looks like KDE 4.
Maybe this is a consequence of the Frutiger Aero trend, and users miss the time when user interfaces were designed to be cool instead of only useful.
Current interfaces are not aimed at being optimally useful. Padding everywhere as of today means more time scrolling and wasted screen space. Animations everywhere means a lot of wasted time watching pixels moving instead of the computer/phone giving us control immediately after it did the thing we (maybe) asked for. Hiding scrollbars is a nightmare in general in desktop OSes but is the default (once lost half an hour setting up a proxy because the "save" button was hidden behind a scrollbar).
Usability feels like it has only gone downhill since Windows 7. (On the other hand, Windows has plenty of accessibility features that help a lot in restoring usability.)
I love that we're getting some texture back. UI has been so boring since iOS 7.
Sebastiaan de With of Halide fame did a writeup about this recently, and I think he makes some great points.
Interesting, I never made the connection between dashboard widgets UI and early iPhone UI. It does make sense, early iPhone had a UI that was glossier and more colorful than "metallic" aqua.
Open the link and search for "physicality is the new skeuomorphism".
Read on and:
> They are completely dynamic: inhabiting characteristics that are akin to actual materials and objects. We’ve come back, in a sense, to skeuomorphic interfaces — but this time not with a lacquer resembling a material. Instead, the interface is clear, graphic and behaves like things we know from the real world, or might exist in the world. This is what the new skeuomorphism is. It, too, is physicality.
Well worth reading for the retrospective of Apple's website taking a twenty year journey from flatland and back.
They’re describing material design, which Google popularized. Skeuomorphism with things that could exist in the real world, avoid breaking the laws of physics, etc. Which then morphed into flat design as things like drop shadows were seen as dated. You are here.
I kind of hate it. Every use of it in the videos shown so far has moments where it's so transparent as to have borderline unreadable contrast.
Same. And white on light blue is just as bad. Looks like I’ll be using more accessibility features.
This is the first time I have ever thought “maybe I don’t want to update my phone“. Entirely because of the look.
In Settings -> Accessibility -> Display, you can enable Increase Contrast or Reduce Transparency to get rid of some of the worse glass effects, and Settings -> Accessibility -> Motion, you can enable Reduce Motion to get rid of the some of the light effects for content passing under glass buttons.
I agree with you, I hope they quickly tweak this into something more readable. There could be a really nice mid ground here.
I used to find these changes compelling but now I think they are mostly a pain in the ass or questionable.
Proof of a well-designed UI is stability, not change.
Reads to me strongly of an effort to give traditional media something shiny to put above the headline and keep the marketing engine running.
If you read the press release, you can see it's 100% about marketing and nothing else.
Apple will spend 10x the effort to tell you why a useless feature is necessary before they look at user feedback.
I’m usually on board with Apple UI changes but something about all the examples they showed today just looked really cheap.
My only guess is this style looks better while using the product but not while looking at screenshots or demos built off Illustrator or whatever they’re using.
In fact, Apple once did a version of Aqua that did an overengineered materials-based rasterization at runtime, including a physically correct glass effect.
It was too slow and was later optimized away to run off of pre-rendered assets with some light typical style engine procedural code.
Feels like someone just dusted off the old vision now that the compute is there.
Just one or two years ago I remember a handful of articles popping up that Gen Z was really into Frutiger Aero, that's the first thing I thought of, with the nature themes and skeuomorphic UI elements.
https://www.yahoo.com/lifestyle/why-gen-z-infatuated-frutige...
Back when Jobs was introducing one of the Mac OS X versions, there was a line that stuck with me.
Showing off the pulsating buttons he said something like "we have these processors that can do billions of calculations of second, we might as well use them to make it look great".
And yet a decade later, they were undoing all of that to just be flat and boring. I'm glad they are using the now trillions of calculations a second to bring some character back into these things.
He was selling. The audience were sales. OS's were fully matured at that point. Computers were something you buy at a store. It was a selling point.
A decade later they were handling the windfall that came with smartphone ascendancy: the emergence of an entirely new design language for touch-screen UI. Skeuomorphism was slowing all that down.
Making it all flat meant making it consistent, which meant making it stable, which meant scalability. iOS7 made it so that even random developers' apps could play along and they needed a lot of developers playing along.
Liquid Glass is not adding a dimension. It is still flat UI, sadly. They just gave the edges of the window a glass like effect. There's also animation ("liquid" part). Overall, very disappointing.
The world flip flops from flat to 3D UI design every few years.
We were in a flat era for the last several years, this kicks off the next 3D era.
HN should have a conference-findings thread for something like WWDC, with priority impact rankings
P4: Foundation models will get newbies involved, but aren't ready to displace other model providers.
P4: New containers are ergonomic when sub-second init is required, but otherwise no virtualization news.
P2: Concurrency now visible in Instruments and debuggable, high-performance tracing avoids sampling errors; are we finally done with our 4+ years of black-box guesswork? (Not to mention concurrency backtracking to main-thread-by-default as a solution.)
P5: UI look-and-feel changes across all platforms conceal the fact that there are very few new APIs.
Low content overall: Scan the platforms, and you see only L&F, app intents, widgets. Is that really all? (thus far?) - It's quite concerning.
Also low quality: online links point nowhere, and half-baked technologies are filling presentation slots: Swift+Java interop is nowhere near usable, other topics just point to API documentation, and "code-along" sessions restate other sessions.
Beware the new upgrade forcing function: adding to the memory requirements of AI, the new concurrency tracing seems to require M4+ level device support.
> This year, App Intents gains support for visual intelligence. This enables apps to provide visual search results within the visual intelligence experience, allowing users to go directly into the app from those results.
How about starting with reliably, deterministically, and instantly (say <50ms) finding obvious things like installed apps when searching by a prefix of their name? As a second criterion, I would like to find files by substrings of their name.
Spotlight is unbelievably bad and has been unbelievably bad for quite a few years. It seems to return things slowly, in erratic order (the same search does not consistently give the same results) and unreliably (items that are definitely there regularly fail to appear in search results).
Fwiw, spotlight in MacOS seems to be getting a major revamp too (basing this on the WWDC livestream, but there seems to be a note about it on their blog[0] too), pushing it a bit more in the direction of tools like Alfred or Raycast, and allegedly also being faster (but that's marketing speak of course, so we'll see when Fall comes).
[0]: https://www.apple.com/newsroom/2025/06/macos-tahoe-26-makes-...
“How about starting with reliably, deterministically, and instantly (say <50ms) finding obvious things like <…> searching by a prefix of their name? As a second criterion, I would like to find files by substrings of their name”
Even I can build, and have built, search functionality like this. Deterministically. No LLMs or "AI" needed. In fact, for satisfying the above criteria, this kind of implementation is still far more reliable.
I've also written search code like this. It's trivial, at least at the scale of installed apps and such on a single computer.
AI makes it strictly worse. I do not want intelligence. I want to type, for example, "saf" and have Safari appear immediately, in the same place, every time, without popping into a different place as I'm trying to click it because a slower search process decided to displace the result. No "temperature", no randomness, no fancy crap.
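At the scale of installed apps, the deterministic ranking being asked for really is a few lines. A sketch in Swift, with a hypothetical `search` helper (not any actual Spotlight API): prefix matches always rank above substring matches, and each group is sorted alphabetically, so the same query yields the same list in the same order every time.

```swift
// Deterministic launcher search: no temperature, no randomness.
func search(_ query: String, in apps: [String]) -> [String] {
    let q = query.lowercased()
    guard !q.isEmpty else { return [] }

    // Apps whose names start with the query come first, alphabetically.
    let prefixHits = apps
        .filter { $0.lowercased().hasPrefix(q) }
        .sorted()

    // Then apps that merely contain the query somewhere, alphabetically.
    let substringHits = apps
        .filter { $0.lowercased().contains(q) && !$0.lowercased().hasPrefix(q) }
        .sorted()

    return prefixHits + substringHits
}

// search("saf", in: ["Safari", "Notes", "Safari Technology Preview", "Files"])
// → ["Safari", "Safari Technology Preview"], every single time
```

With a few hundred entries this runs in well under a millisecond; an index or trie is overkill, and there is nothing here that can reorder results between keystrokes.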
I have no idea what happened to my Mac in the last month, but for some reason Spotlight isn't able to search for any app by name anymore. Like, if I search for Safari, it will show me results for everything except the Safari app. Even tried searching for Safari.app and still no results. It can't find any apps.
k
Apple's integration of AI into its MacOS is the one reason why I am considering a switch back to Linux after my current laptop dies.
If that’s the one reason, have you considered just… not using the AI features?
Sure, you can for now. But what about when you're forced to use them?
I find it offensive to have any generative AI code on my computer.
I promise you there is Linux code that has been tab-completed with Copilot or similar, perhaps even before ChatGPT ever launched
That is true. I actually was ambiguous in my post, because I meant code that generates stuff, not that was generated by AI, even though I don't like the latter, either.
I think I know what you meant. You mean you don't want code that runs generative AI in your computer? But, what you wrote could also mean you don't want any code running that was generated by AI. Even with open source, your computer will be running code generated by AI as most open source projects are using it. I suspect it will be nearly impossible to avoid. Most open source projects will accept AI generated code as long as it's been reviewed.
Good point, and you were right. I was ambiguous. I meant a system that generates stuff, not stuff that was generated by AI. But I'd rather not use stuff that was generated by AI, either. But you are also right. That will become impossible, and probably already is. Not a very nice world, I think. Best thing to do then is to minimize it, and avoid computers as much as possible....
So, then don’t do that? It’s not like it’s automatically generating code without you asking.
I didn't say "generating code", I meant I find it offensive to have any code sitting on my computer that generates code, whether I use it or not. I prefer minimalism: just have on my computer what I will use, and I have a limited data connection which means even more updates with useless code I won't use.
> I find it offensive to have any generative AI code on my computer.
Settings → Apple Intelligence and Siri → toggle Apple Intelligence off.
It's not enabled by default. But in case you accidentally turned it on, turning it off gets you a bunch of disk space back as the AI stuff is removed from the OS.
Some people are just looking for a reason to be offended.
The theatrics of being *forced* to use completely optional, opt-in features have been a staple of discussions regarding Apple for years.
Every year, macOS and iPadOS look superficially more and more similar, but they remain distinct in their interfaces, features, etc. But the past 15 years have been "we'll be *forced* to only use Apple-vetted software, just like the App Store!"
And yeah, the Gatekeeper mechanism got less straight-forward to get around in macOS 15, but … I don't know, someone will shoot me down for this, but it's been a long 15 years to be an Apple user with all that noise going on around you from people who really don't have the first clue what they're talking about — and on HN, no less.
They can come back to me when what they say actually happens. Until then, fifteen dang years.
Not forced to use, forced to download and waste 2GB of disk space.
I presume you're talking about Apple Intelligence.
It's not forced. It's completely optional. It has to be downloaded.
And if you activate it, then change your mind, you get the disk space back when you turn it off.
I have a limited connection, and don't want to update my computer with AI garbage.
So don't. You have to tell the computer to download Apple Intelligence. It doesn't just happen on its own.
Just don't push the Yes button when it offers.
Well, I thought it came with the OS update, so I guess I was mistaken then.
With a single toggle, you can turn off Apple Intelligence
See (System) Settings
But I can't toggle off downloading it, which is 2GB on my limited connection and 2GB of MY disk space.
This reads like the crotchety and persnickety 60-somethings in the 1990s who said the internet was a passing and annoying fad.
I was musing before sleep days ago about how maybe the internet still is just a fad. We’ve had a few decades of it, yeah, but maybe in the future people will look at it as boring tech just like I viewed VCRs or phones when I was growing up. Maybe we’re still addicted to the novelty of it, but in the future it fades into the background of life.
I’ve read stories about how people were amazed at calling each other and would get together or meet at the local home with a phone installed, a gathering spot, make an event about it. Now it’s boring background tech.
We kind of went through a phase of this with the introduction of webcams. Omegle, Chatroulette, it was a wild Wild West. Now it's normalized, standard for work with the likes of Zoom, with FaceTiming just being normal.
A few years ago I would've said you were incredibly cynical, but nowadays with so much AI slop around social media and just tonnes of bad content I tend to agree with you.
Now the Cyberpunk pen and paper RPG seems prophetic if you turn your head sideways a bit https://chatgpt.com/share/684762cc-9024-800e-9460-d5da3236cd...
I think younger me would think the same. It's not even the AI slop or bad content, but also the intrusive tracking, data collection, and the commercialization of interests. I just feel gross participating.
I do think there is a lot of valid criticism of the internet. I certainly don't think it's an annoying fad but I do think it has caused a lot of bad things for humanity. In some ways, life was much better without it, even though there are some benefits.
It is impossible to have a negative opinion of AI without silly comments like this just one step removed from calling you a boomer or a Luddite. Yes all technological progress is good and if you don’t agree you’re a dumb hick.
AI maximalists are like those 100 years ago that put radium everywhere, even in toothpaste, because new things are cool and we’re so smart you need to trust us they won’t cause any harm.
I’ll keep brushing my teeth with baking soda, thank you very much.
I am a Luddite, but I think that's a good thing. I don't mind the negative comments at all. I get them all the time.
On the other side of that are the people screaming that AI is murder.
There are lots of folks like this, and it's getting exhausting that they make being anti-AI their sole defining character trait: https://www.reddit.com/r/ArtistHate
It's also exhausting to see endless new applications of AI, even worse IMO.
Actually, most "AI" cults blindly worship their own ignorance:
https://www.youtube.com/watch?v=sV7C6Ezl35A
The ML hype cycle has happened before... but this time everyone is adding more complexity to obfuscate the BS. There is also a funny callback to YC in the Lisp story, and to why your karma still gets incinerated if you point out its obvious limitations in a thread.
Have a wonderful day, =3
Good move. Not sure whether they are exposing other modalities as well?
I guess LLM and AI are forbidden words in Apple language. They do their utmost to avoid these words.
Because they don't own it, or the models they (don't) own aren't good enough for a standalone brand? Sure seems like it.