Apple moving to ARM for Mac in 2021: Analyst
macrumors.com

This is an analyst's opinion, and is not supported by Apple on the record. "We expect" != "Apple confirms". Relevant quote from the article:
> We expect that Apple's new products in 12-18 months will adopt processors made by 5nm process, including the new 2H20 5G iPhone, new 2H20 iPad equipped with mini LED, and new 1H21 Mac equipped with the own-design processor. We think that iPhone 5G support, iPad's adoption of innovative mid-size panel technology, and Mac's first adoption of the own-design processor are all Apple's critical product and technology strategies. Given that the processor is the core component of new products, we believe that Apple had increased 5nm-related investments after the epidemic outbreak. Further, Apple occupying more resources of related suppliers will hinder competitors' developments.
That said, as far as I know, Ming-Chi Kuo has a pretty good track record on his Apple predictions.
No disagreement here, but the title (now fixed) stated as fact something that wasn't yet known to be fact.
It's interesting that the actual quote from the investor call is that it's a processor designed in house, and doesn't call out ARM.
IMO, an x86_64 chip makes way more sense. The patents are about to expire. Removing nearly all of the legacy-mode-only cruft (which is not as much as you might think, but tends to be in the critical data path) and making a chip that runs at least x86_64 user-mode code would align with how they removed 32-bit support in Catalina.
The patents for x86_64 might be expiring soon, but SSE3/4 and AVX1/2/512 are newer. I'd imagine there is a lot of performance-critical code that makes use of those extensions, and that's just the vector stuff. The x86 architecture has added a lot of other new extensions in the past 20 years as well.
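For concreteness, here's a minimal C sketch (x86-only, using the GCC/Clang __builtin_cpu_supports builtin; the kernel names are made up) of the runtime-dispatch pattern performance-critical libraries build around these extensions. A hypothetical Apple x86_64 chip that dropped AVX2 wouldn't break this code, but it would push every such library onto its slow path:

    #include <stddef.h>
    #include <stdio.h>

    /* portable fallback a real library would keep next to its AVX2 kernel */
    static float sum_scalar(const float *a, size_t n) {
        float s = 0.0f;
        for (size_t i = 0; i < n; i++)
            s += a[i];
        return s;
    }

    int main(void) {
        float data[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        __builtin_cpu_init(); /* populate the CPU feature table */
        if (__builtin_cpu_supports("avx2"))
            puts("AVX2 present: dispatch to the vector kernel");
        else
            puts("no AVX2: dispatch to the scalar fallback");
        printf("sum = %f\n", sum_scalar(data, 8));
        return 0;
    }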
Yeah, but for that, Apple very well might have enough patents in the CPU design space to negotiate a license at this point.
> legacy mode only cruft (which is not as much as you might think, but tends to be in the critical data path)
I'm curious about what you're thinking of here. In fact, almost all user-mode code paths are running out of the uOp cache in modern devices, completely decoupled from the legacy stuff. And even in the kernel, doing locking and mode switching on the normal paths doesn't hit any major fallbacks. There's a ton of microcode and other legacy handling for odd stuff for sure, but really not on performance loads.
One example: the segmentation hardware needs to be evaluated in the TLB lookup path between L1 and L2. Even special-casing base=0/length=4G (and taking the slow path otherwise) still means an extra mux there, which is a minor but real burden in the designs I've heard about.
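For anyone following along, here's a rough C picture (not RTL, just an illustration) of the check being described — even in 64-bit flat mode, the hardware has to prove the segment is base=0 with an unlimited extent before it can skip the adder and limit check sitting on the translation path:

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        uint64_t base;  /* segment base */
        uint64_t limit; /* segment limit */
    } segment_t;

    static uint64_t linear_address(segment_t seg, uint64_t offset, bool *fault) {
        /* the fast path from above: flat base=0, length=4G segment */
        if (seg.base == 0 && seg.limit == UINT32_MAX) {
            *fault = false;
            return offset; /* no add needed -- but the mux choosing this path still costs */
        }
        /* slow path: limit check plus a full-width add */
        *fault = (offset > seg.limit);
        return seg.base + offset;
    }

    int main(void) {
        bool fault;
        segment_t flat = {0, UINT32_MAX};
        return (int)(linear_address(flat, 0x1000, &fault) != 0x1000 || fault);
    }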
Also, the instruction decode cases for 16-bit mode are still in the main instruction decoder and not ucode, AFAIK. They're almost the same encoding, and there's not enough ucode space for it all, but removing those cases from the muxes there would help power consumption. Yes, you run out of the uOp cache a lot of the time, but not as much as you might think, and AFAIK the instruction decoder is still cranking away in the background because you want it to be immediately available as soon as an instruction is not in the uOp cache. That means there are power-efficiency gains to be had there.
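A toy example of why those 16-bit cases cost mux work: the same opcode byte implies a different immediate width depending on the mode, so the length and operand muxes have to account for both. Opcode 0xB8 is "mov (e)ax, imm":

    #include <stdio.h>

    /* instruction bytes: B8 78 56 34 12 */
    static void decode_mov_imm(const unsigned char *p, int mode16) {
        if (mode16) /* 16-bit mode: 2-byte immediate, 3-byte instruction */
            printf("mov ax, 0x%02x%02x\n", p[2], p[1]);
        else        /* 32-bit mode: 4-byte immediate, 5-byte instruction */
            printf("mov eax, 0x%02x%02x%02x%02x\n", p[4], p[3], p[2], p[1]);
    }

    int main(void) {
        const unsigned char insn[] = {0xB8, 0x78, 0x56, 0x34, 0x12};
        decode_mov_imm(insn, 1); /* mov ax, 0x5678 */
        decode_mov_imm(insn, 0); /* mov eax, 0x12345678 */
        return 0;
    }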
I'm incredibly late with this reply and I completely understand that you may never see it, but I'd be extremely interested to dump myself in the deep end of stuff that's on the same wavelength as what you're describing, if you have any suggestions for resources I might be able to follow up on. Thanks!
How hard would it be to re-use a lot of the ALU, MMU, and other components from the Apple A line of ARM64 chips with a different decoder and pipeline? Pretty much all modern chips "emulate" their instruction set anyway with the real core being a proprietary uop machine.
ALU - easy, it's pretty much orthogonal to ISA layout
MMU - they're pretty different
I def bet that if they're making an x86 chip, it shares a lot of RTL with their A series cores, but the distinction is probably more like they have a shared library of a lot of primitives, and have pretty different uarchs built from them.
Would they do something crazy like x86-64 usermode and aarch64 kernel mode? You might be able to share more of the MMU and so on with that - though given the memory-ordering differences it would still be difficult.
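To make the memory-ordering problem concrete, here's a tiny C11 sketch: on x86 (TSO) the two stores below are never observed out of order even if both were relaxed, while on AArch64 they can be, so running x86_64 user mode on an ARM core would mean inserting barriers (or release/acquire pairs like the one shown) around essentially every shared-memory access:

    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_int data;
    static atomic_int flag;

    void producer(void) {
        atomic_store_explicit(&data, 42, memory_order_relaxed);
        /* release ordering: x86 stores give you this for free, ARM does not */
        atomic_store_explicit(&flag, 1, memory_order_release);
    }

    int consumer(void) {
        if (atomic_load_explicit(&flag, memory_order_acquire))
            return atomic_load_explicit(&data, memory_order_relaxed); /* guaranteed 42 */
        return -1; /* flag not published yet */
    }

    int main(void) {
        producer(); /* single-threaded here; imagine these running on two cores */
        printf("%d\n", consumer());
        return 0;
    }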
The thought of running embedded x86_64 excites me.
They’re also pretty close buddies with AMD right now, who they share a fab (TSMC) with. Could we be seeing something akin to an Apple-flavored Ryzen?
On one hand, everyone who cares about the newest, smallest nodes either has their own fab (Samsung, Intel) or uses TSMC. Like AMD and Nvidia share TSMC, and they are far from buddy-buddy.
On the other though, AMD legitimately does have fairly close ties to Apple. Jim Keller has bounced around a lot, but Apple and AMD are where he started new major uarchs. And Hygon Dhyana and the game consoles show that AMD is more than willing to work with high-volume OEMs on semi-custom designs, particularly to empower their security architectures. Yes, Intel includes custom logic for security, but not to the same degree as AMD. I think Intel includes all of their customers' custom logic on most of the masks but fuses off or otherwise hides the functionality; AMD goes hog wild with custom masks.
You've given me a bunch to think about, thanks! I hadn't really considered AMD here.
It's just going to be a low-end Mac Mini that's really nothing but a high-end AppleTV. The pieces are already there!
This is huge for Raspberry Pis. Once ARM is more standardized, more Docker containers and binaries will be available for ARM as well.
What isn't standardized about ARM that Apple moving their PC segment to ARM would fix? Pretty much every cell phone on earth already uses ARM. And if Apple did move their desktop stuff to ARM, it's unlikely they wouldn't use some derivative (i.e. non-standard, effectively custom) ARM instruction set. They already do that with their phone processors.
Out of curiosity, what are the non-standard instructions?
Similar to how many big names drive standards: if you take a look at the instruction sets in use, Apple's A13 processors seem to implement a newer instruction set revision than even what Qualcomm is using.
A13 Bionic uses ARMv8.3-A. Qualcomm 865 (which is newer than A13) uses ARMv8.2-A.
My guess is they influenced ARM to fold the v8.2->v8.3 feature deltas into the spec. Sure, it's not non-standard, but it seems like most vendors pick and choose which instructions to implement from each ARM spec revision.
What that means is that the compiler target effectively needs to be specific to each processor. If it's not, your code might not be fully optimized.
Those version numbers literally mean they're part of the spec. That other companies are shipping out-of-date ARM revisions doesn't mean Apple has magic proprietary instructions. Hell, IIRC the first we heard about the new ARMv8.3 instructions was a Qualcomm white paper.
Anyway, this is no different from targeting x86_64 - you can compile targeting the most recent ISA, or you can run on more hardware. No one says AVX2 is a proprietary extension, but it sure as heck won't work on an older x86_64 CPU.
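As a concrete example (assuming an ACLE-conformant toolchain; compile with something like -march=armv8.3-a to light up the fast path): FJCVTZS, the JavaScript-style double-to-int32 conversion, is one of the ARMv8.3-A additions, and code guards on it exactly the way x86 code guards on __AVX2__:

    #include <stdint.h>
    #include <stdio.h>
    #ifdef __ARM_FEATURE_JCVT /* defined when targeting ARMv8.3-A or later */
    #include <arm_acle.h>
    #endif

    static int32_t to_js_int32(double d) {
    #ifdef __ARM_FEATURE_JCVT
        return __jcvt(d); /* single FJCVTZS instruction */
    #else
        /* rough fallback for older ISAs; real JS ToInt32 wraps mod 2^32 */
        return (int32_t)(int64_t)d;
    #endif
    }

    int main(void) {
        printf("%d\n", to_js_int32(3.9));
        return 0;
    }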
Alright, you're right - it is part of the spec. Non-standard is the wrong terminology. My point being though, Apple tends to be ahead of others in terms of adopting (for better or for worse) new standards, and their influence in the market can sometimes make those changes happen at a spec level (i.e. tell ARM they want a new instruction, and magically it'll be in the spec).
The original comment's point is this would be good for the RPi org... whose boards use comparatively ancient ARM processors (even the brand-new RPi 4). I don't see how Apple entering the ARM space on desktop is relevant at all.
Oh yeah, I don't see how it impacts the RPi at all.
RPi benefits from general adoption of ARM in performance markets, since low-performance ARM chips benefit from general improvements to compiler ARM backends.