Nvidia to make Arm-based PC chips in major new challenge to Intel (reuters.com)
Well, I have the first model of laptop to use an Nvidia Arm chip: the Toshiba AC100, which rocked an Nvidia Tegra 2 back in 2010.
It launched with Android, but I helped put GNU/Linux on it (got my first mainline authorship with it), and daily drove it for a year at university (back then not everyone had laptops).
It was a hell of a device, and to this day I still haven't found anything close to it: 10 inches, 900g, 10-hour battery life (which was unthinkable on PCs back then), a removable battery so when traveling I just carried two, and extremely sturdy (I used to toss it in the air, and dropped it a few times without worrying) [1]. That being said, even back then it was seriously limited, and I would max out the RAM just by running ssh + quasselclient. By killing quasselclient I could free enough RAM to launch Firefox on a simple web page, but that's it.
[1] It wasn't made to be sturdy; it's just that there was basically no weight, the PCB was very small, and it had pretty huge bezels.
The Tegra series is still ongoing, it's just marketed at self-driving vehicles.
You can get dev boards, but they are hilariously expensive, and I'd guess the idle/low load performance pales in comparison to other vendors these days.
The Tegra series also lives on in all the Nintendo Switches of the world ;)
It's also used in the Nvidia Shield, which is basically an overpowered Android TV stick.
I had a Tegra 2 tablet. It couldn't play video that my year-older phone could. It also didn't have NEON, so I was stuck using old versions of everything.
> Microsoft's plans take aim at Apple, which has nearly doubled its market share in the three years since releasing its own Arm-based chips in-house for its Mac computers.
I know the Mac has always had single-digit market share (but still a healthy amount as far as money for Apple goes), but doubling still seems like quite an achievement?
I am curious whether this is actually from an increase in Mac sales or a decrease in PC sales with Mac just staying stable? Or a mix of both. I will need to look this up. (Side note: I HATE when we see something as unhelpful as "doubled" when they could have included some numbers at least.)
On the topic of the article, I was kinda surprised to see that Microsoft has some initiatives for Windows on Arm. I know it was technically a thing, but it seemed like a thing we just stopped hearing about?
Do they have an answer to Rosetta so the transition can be mostly seamless (for everyone except developers, if the M series is any indication...)?
Also, I have to wonder how many pre-built Windows computers are still sold vs. buyers moving to non-traditional platforms like an iPad?
I am curious because gaming will likely never move to ARM. Unless I have missed it, I have never seen ARM in a system that you can build yourself. Even Apple's ARM Mac Pro is only questionably "customizable" after the fact. I just don't see most PC gamers giving up the upgradability.
> gaming will likely never move to ARM.
Oh, it can if Nvidia, AMD and Microsoft push it.
Most of the back catalog can probably run fine emulated, though you may want to stick to x86 for those older CPU-bound sim games that aren't going to get a recompile.
You missed the part after it. The first problem is software, which, given what Apple managed with Rosetta, I think can be addressed.
But the big problem is hardware. Are we going to see customizable ARM systems, or are all ARM systems going to be SoCs that are basically just consoles, maybe with an expansion port for something other than graphics?
I am asking because to my knowledge we have not seen this yet. But upgradability and building your own computer is a big reason that people choose a PC vs a console.
Is this a limitation of ARM (and could Nvidia, AMD, and Microsoft just go down that same path), or is it just a limitation of how it has been implemented so far?
If it is perfectly feasible, would we still see the big performance improvements like we are seeing on the M chips with everything combined?
> But upgradability and building your own computer is a big reason that people choose a PC vs a console.
I hate to break it to PC builders, but DIMMs are starting to max out, signal integrity over those pins is becoming a problem. Before long, CPU RAM is probably gonna be soldered and packaged with the CPU anyway.
The architecture won't necessarily be like Apple though. The CPU/GPU could be their own tiles, meaning we could have GPU-less SKUs that still use PCIe GPUs. And the SoCs or whatever they are can still be socketed.
> Do they have an answer to Rosetta so the transition can be mostly seamless (for everyone except developers, if the M series is any indication...)?
Microsoft had an AoT x86->ARM (and now, amd64->aarch64) binary translation layer before Apple's Rosetta had that capability (at least, publicly).
Yes, but it's still terrible, with a much bigger performance loss than Rosetta, partially because the M chips have some tricks for better emulation.
The performance is actually much better than most people from that skillset originally expected, and it generally gets better performance from arbitrary code paths/machine logic than Rosetta does.
Rosetta outperforms it because of hardware assistance (such as the M chips' optional x86-style TSO memory ordering), not because Microsoft's implementation is bad (let alone "terrible").
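For the curious, you can observe the translation layer from inside a process. A minimal sketch using the documented IsWow64Process2 API (available since Windows 10 1709); build it as an x64 binary and run it on an ARM64 Windows box:

    /* Sketch only: reports whether this process runs natively or translated. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        USHORT processMachine = 0, nativeMachine = 0;

        /* Reports the ISA the process was built for and the host's real ISA. */
        if (!IsWow64Process2(GetCurrentProcess(), &processMachine, &nativeMachine)) {
            fprintf(stderr, "IsWow64Process2 failed: %lu\n", GetLastError());
            return 1;
        }

        /* IMAGE_FILE_MACHINE_UNKNOWN here means "not translated" (native). */
        if (processMachine == IMAGE_FILE_MACHINE_UNKNOWN)
            printf("running natively (host: 0x%04x)\n", nativeMachine);
        else if (nativeMachine == IMAGE_FILE_MACHINE_ARM64)
            printf("x86/x64 binary being translated on an ARM64 host\n");
        return 0;
    }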
> I am curious whether this is actually from an increase in Mac sales or a decrease in PC sales with Mac just staying stable?
Once the M1 came out everybody HAD to upgrade, so Apple got a huge boost for a couple years as people cycled out their x86 stuff.
Latest data shows Apple's computer sales slumping much harder than the PC industry in general.
You stopped hearing about it because the chips Qualcomm made were dreadful.
IIRC, ARM64 Windows running in a VM on a Mac greatly exceeded the performance of native Windows hardware with those Qualcomm chips.
Most gaming is on ARM (mobile) now I think. Most of the revenue too.
From the stats I could gather, I really doubt macOS market share increased substantially in usage. What I think happened is that they doubled their usual sales figures, which is unsurprising if you look at the buildup of not-so-great hardware they released post-2015, before AS. Their machines had annoyances and were very uncompetitive with comparable offerings because of soldered RAM/SSD pricing.
Then they released AS, and the first round of hardware looked competitive because at least they seemed to have something different, a real advantage worth paying more for. At least that was the marketing. I believe it got many people to update very old hardware that had been kept running because Apple's offerings seemed so out of touch; then some others got interested to "complete" their iOS devices, and even traditional PC users got sucked in for the novelty factor or the battery life argument. In practice, if the second-hand market is anything to go by, many went back to other machines, and the market is inundated with underpowered, overpriced, close-to-entry-level machines (people figured out the hard way that 8GB of RAM is very tight for a post-2020 computer, no matter how good your software optimisation is...).
Now the second release of AS was disappointing to say the least (pretty bad considering the price hikes), and I think many are holding off to see what they can do with the M3. So sales have dropped a lot, at least as much as all other OEMs, if not more (especially in comparison to previous years).
So I think what they call market share is actually the sales number, which doesn't mean much. If you account for all the hardware that got retired plus all the hardware sitting unused (waiting to be sold or otherwise), macOS market share has been sloping slightly upward at best. Mostly stable in practice.
When you look at the sales numbers, it is almost 80% laptops. They barely sell any of what you would call a "PC". It makes sense, since laptops are the only place where AS has any advantages, and they're also the only way to use your extra-expensive "computer" as a status symbol. This is also what most employers buy for their staff because, unless you need real power (where AS is almost disqualified from the get-go), it makes everything easier. They used to sell a lot of iMacs (especially the 27" version) because they were very convenient, but they don't have those anymore, so...
If anything, macOS is becoming less relevant as a computing platform by the day and more of a luxury alternative brand. So its market is becoming less relevant by the day too. I think Apple is on the path to become to computing what Campagnolo is to cycling...
Traditional PCs are not going anywhere and for way more reasons than just upgradability (your car is not technically upgradeable but is made with components from many different competing suppliers).
Given how much I love my Mac Mini M2 Pro, with its perfectly silent operation and its incredibly small form factor combined with great graphics performance, I can imagine that having similar machines available for Windows and Linux would be very attractive.
We can get that with x86.
We just don't, because it's cheaper to clock the chips higher and burn more power, and to keep large graphics separate from the CPU.
Intel's/AMD's attempts at addressing this (Broadwell/Skylake with eDRAM, Vega M, AMD's Van Gogh and Dragon Crest) were universally shot down by laptop OEMs. I don't know why, but they were probably just being cheap.
I had the NUC with Vega M, the so called Hades Canyon, and it was awesome. Pricy but awesome. It was my media and gaming PC in the living room. For a while it was also the VR machine too with Steam-based VR working great.
Van Gogh is in the Steam Deck.
Yeah, but a little late, and the successor was seemingly canceled.
Let's not exaggerate here. I've got the M2 Pro Mac Mini, and it is nearly silent, but depending on your workload you still generate nontrivial heat and the fans do need to run to keep it cool. Maybe I notice more because I build a lot of software, but this is not some magic bullet that avoids the need for traditional heat management.
Very much dependent on your workload. At least on my M1 Pro MacBook: I spend my day doing JavaScript and Rust dev, and the fans spin up maybe once every few weeks, usually when a process is accidentally in an infinite loop and consuming far more resources than it should, or occasionally when I need to compile something really big. And it only gets warm when I use it on my bed, leaving it with no airflow at all.
The fan never spins up for me (does the Mac Mini have a fan?), but then again I just do web development all day. I have my super-powered machine (hot and noisy) in the back room (Ryzen 5950X + NVIDIA RTX) that I remote into if I need to do a render.
I don't think web dev is particularly taxing. I never hear the fans go off while working (also a web dev), but try playing a game and they spin up after a minute or two, especially when playing recent games via Apple's Game Porting Toolkit. Diablo IV makes them go whrrrr almost right after the menus.
On an M2 MacBook Pro with the lid closed.
We've had them direct from Intel since at least 2013: https://en.wikipedia.org/wiki/Next_Unit_of_Computing
Except OP said, "combined with great graphics performance": the NUCs never had good graphics performance... and their CPU performance was actually not that great either.
The same can literally be said about M1/M2 Macs. Graphics performance compared to anything discrete is terrible. My 3090Ti wipes the floor with an M2, despite being last generation's model. M2's CPU performance only looks good when divided by power consumption. Whole categories of Intel and AMD chips outperform it, just at higher power usage.
It's way better than other integrated GPUs, though. And performance per watt matters a lot for heat and noise. I eventually gave up my gaming PC because it was so loud and hot, switching to GeForce Now instead.
I own an M1 and like it. But loading it up feels like loading up any other NUC-style machine; I'm not sure on what axis it's better. Folks talk up the memory bandwidth, but in Apple's design, none of the CPU, GPU, or NPU has access to the full memory bandwidth. Not even close. Apple's caches are fast at 3 cycles, but AMD's are 10x larger. Apple's GPU claims are further complicated by the fact that they want everyone to use Metal, as opposed to OpenGL, Vulkan, or DirectX, on their platform, and compatibility and speed suffer as a result. Heat and noise are indeed important; from my perspective it simply seems that Apple has chosen a particular spot on the curve with an attractive outcropping thanks to early integration of on-package DRAM, but otherwise nothing special. Other manufacturers will ship on-package DRAM. AMD and NVIDIA have been shipping on-package HBM to the datacenter for an age.
If anything, it seems that Apple's much larger product, the iPhone, had a lot more to do with the M1/M2 design than high-end desktop equipment. On-package DRAM is common in cell phones to reduce board size, and the M1 is a scaled-up A-series SoC. I think it's a great product for Apple. But as someone who's supported Mac, Linux, and Windows machines at a university for a decade, it doesn't feel appreciably different from Intel NUCs over that timespan. AMD's 7840U is setting a new standard for power and performance on the PC side right now. And the two will continue to leapfrog.
I have trouble finding objective flops/watts or similar tests, but subjectively, Apple Silicon is fast, quiet, and long-lasting in a way that no laptop has ever been for me. It's a dramatic upgrade in performance, noise, battery life, and heat vs all the previous laptops I've ever owned or used, including Intel MacBooks, ThinkPads, Surface Books, Chromebooks, small biz machines, gaming machines, and countless others.
"Sweet spot" is exactly right. It's not the best at anything but the best balanced that I've ever used, by a huge margin.
I don't think it's magic either and I hope that Intel, AMD, Microsoft, Google, Nvidia etc. can deliver a similar package in a future laptop. For now though they seem hopelessly behind, at least among the products I've used.
Also, among those companies, Nvidia is the only one I'd trust to do a decent end user experience. Maybe Google to some degree, but they'd end up sunsetting the product after a generation or two.
Microsoft tried to do the same with the Surface line, but every single one I've used sucks. The tablets are way too heavy and clunky. The laptops overheated and couldn't even charge while playing games. Windows has a ton of ads.
Apple's integration of all the things really makes it a standout in today's commoditized and enshittified world, IMO. It's not just tech specs but how the product feels to use at the end of the day.
I used my M1 Mac Mini (16GB / 512GB) for a month, then switched back to my 5800X3D / 3090Ti system, and even the desktop seemed twice as fast. Applications screamed. Code compiled 5x faster. Probably an unfair comparison. Not much difference in what I paid, though.
My most recent laptop purchase was a $300 AMD Ryzen 3, to which I added 64GB of ECC DRAM for another $300ish. Zero complaints. It's faster than the M1 at every game the two can both play.
I run Linux on both. I'm not really tied to platform - if I can run Linux on it and it performs better, I'm game.
Sure, but performance isn't the primary (or at least not the only) consideration for many users. The M1/M2 was never top at raw performance, just balanced for perf/watt.
eDRAM Broadwell was respectable.
Unsurprisingly, Intel made it for Apple, and they were pretty much the only OEM that used it.
Anyway, I think Intel sold some NUCs with them, but they were very expensive.
Intel sold it as Iris Plus graphics and Intel NUC units with Iris Plus were available as part of the product lineup: https://ark.intel.com/content/www/us/en/ark/products/graphic...
> Nvidia has quietly begun designing central processing units (CPUs) that would run Microsoft’s (MSFT.O) Windows operating system...
> Advanced Micro Devices (AMD.O) also plans to make chips for PCs with Arm technology...
> Qualcomm plans to reveal more details about a flagship chip..
(That would be the Nuvia core, I assume)
Really, just read the whole thing. It's a brief but juicy report.
Anyway, I wonder if Nvidia is going to make an SoC or a discrete CPU. It seems like an either-or proposition, as a big CPU with a small iGPU (like AMD/Intel make) doesn't make much sense for Nvidia.
> Advanced Micro Devices (AMD.O) also plans to make chips for PCs with Arm technology
Technically, the Platform Security Processor that's integrated in most (all?) Zen CPUs is already an ARM chip, though I guess that they mean actual general purpose CPUs.
AMD already has a long history of ARM chips; some examples:
https://www.anandtech.com/show/7724/it-begins-amd-announces-...
https://www.anandtech.com/show/7990/amd-announces-k12-core-c...
None of that was released.
Prior to switching to Apple Silicon, Apple prepared the path with their effort to push universal binaries by default.
Software made yesterday was already prepared to run on their new silicon.
Rosetta was only a transition helper, not meant to be a permanent solution.
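(For anyone unfamiliar: a universal binary is a single executable carrying slices for both ISAs, and ordinary code picks its slice at build time. A minimal sketch, with the usual build commands in the comment:)

    /* Build a fat binary and inspect it (standard macOS toolchain):
     *
     *   clang -arch x86_64 -arch arm64 hello.c -o hello
     *   lipo -info hello
     *
     * The kernel picks the matching slice at launch; under Rosetta the
     * x86_64 slice gets translated instead. */
    #include <stdio.h>

    int main(void)
    {
    #if defined(__arm64__) || defined(__aarch64__)
        puts("arm64 slice");
    #elif defined(__x86_64__)
        puts("x86_64 slice");
    #else
        puts("some other architecture");
    #endif
        return 0;
    }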
Microsoft didn't do any of that, and still doesn't; their leadership is clueless and dangerous.
If Microsoft doesn't put in the effort, it'll never work
Let's hope there's no secret agreement to exclude Linux (?AMD AI?)
I wish Valve would encourage developers to submit ARM binaries to prepare for the future...
Why is only Apple able to pull it off? Why this lack of care from everybody else?
Meanwhile.. https://www.huaweicentral.com/harmonyos-to-launch-for-pc-win...
>Why is only Apple able to pull it off? Why this lack of care from everybody else?
Apple got burnt, multiple times, by both Intel and Nvidia. That set them on a complete warpath to move to ARM, where they control the chips. As part of that warpath, their goal was to drop x86 entirely, so they needed the translation layer.
Microsoft has no need to drop x86 entirely; in fact, x86 will remain a good part of the market for the foreseeable future. Who knows, maybe RISC-V suddenly takes off: even Qualcomm has begun pushing a cool billion dollars towards RISC-V development because Arm/SoftBank attempted to make them destroy their IP. Heck, Arm/SoftBank is basically trying to destroy the ARM market for profit by terminating Qualcomm's licenses by 2025.
That is one way to put it. Another way to look at it is that Apple had unreasonable demands for those companies, considering their goals and strategies. They tried to strongarm them and failed. Now they market their stuff as the second coming even though it is actually inferior in many ways.
Nvidia and Intel are not necessarily nice, irreproachable companies, but if there is one company known to be a major d*ck to its suppliers, it must be Apple. You only need to look at how they treat their developers to understand, and the only reason suppliers are not treated worse is that Apple actually needs them. I am sure Intel and Nvidia are perfectly OK with Apple doing their own stuff, even if they lost a bit of business. Not unlike a tech support guy who finally got rid of a decently well-paying but majorly annoying customer, I guess. Sometimes the money isn't worth it...
> Microsoft didn't do any of that, and still doesn't; their leadership is clueless and dangerous
It did with .NET but most software isn't .NET I think.
.NET supported Apple Silicon way before ARM for Windows; that shouldn't be overlooked. It's part of the issue with Microsoft: their lack of care/anticipation.
Anticipation of what? Microsoft has "supported" ARM as far back as Windows RT, but developers and integrators don't care. The hardware didn't (and still doesn't) sell, and every usable core design is either from the stone age or completely proprietary. Somehow, x86 SoCs are easier to iterate on.
.NET supports Apple Silicon for the same reason it supports AArch64 Linux: it's a real user platform. Windows-on-ARM really is not, and it won't get the attention it needs until attractive hardware is ready to ship.
> Anticipation of what?
> it won't get the attention it needs until attractive hardware is ready to ship.
Dunno what to say!
Windows RT is the perfect example: the software was not ready, and still isn't.
It's only been 11 months since Visual Studio could run on ARM.
>It's only been 11 months since Visual Studio could run on ARM.
It's only been 3 years since Visual Studio itself has run as a 64-bit process on x86 ;)
I think the biggest holdback for VS on ARM was that there were no good build boxes out there. With VS for ARM, Microsoft released their own ARM dev kit hardware loaded with 32 gigs of RAM and a Snapdragon 8cx. Everyone else selling ARM hardware for Windows skimps with like 4GB of RAM.
VS could, however, cross-compile to ARM32 and later ARM64 for quite a while.
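Assuming a stock VS install with the ARM64 build tools component, the whole cross-compile flow fits in a comment (a sketch; adjust paths/components for your VS version):

    /* hello.c, cross-compiled from an x64 host to an ARM64 target with MSVC:
     *
     *   > vcvarsall.bat x64_arm64
     *   > cl hello.c
     *   > dumpbin /headers hello.exe | findstr machine
     */
    #include <stdio.h>

    int main(void)
    {
        puts("hello from ARM64 Windows");
        return 0;
    }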
.NET Core and Mono both supported AArch64 targets long before Apple Silicon existed, too. I think characterizing Microsoft as "lazy" in regards to RISC is wrong from a pragmatic point of view.
I think unlike Apple, Microsoft doesn’t commit to Arm. And because it doesn’t commit it doesn’t truly invest like Apple in ensuring success.
I hope these ARM chips are Apple-level ARM designs. Apple has an ARM license that allows them to completely design their own cores. Only Apple's ARM designs have cutting edge performance. Everyone else (Qualcomm, Samsung, etc) uses stock, or nearly stock, ARM designs off-the-shelf that, performance-wise, aren't at all competitive with AMD/Intel. I'd love a Surface Pro with an M1/M2 chip, but the Qualcomm ARM chips are dogs.
The problem with the current Qualcomm cores is that they are cheap and low power, but the new Nuvia cores should alleviate this.
...But be careful what you wish for. There have been some promising custom ARM core designs (Samsung's Mongoose series, Nvidia's Denver/Carmel) that all ended up being worse than tweaked ARM-designed cores.
Others (Marvell's SMT ThunderX series, Fujitsu's HPC A64FX) were too niche, and ultimately discontinued.
Also, based on the M2's rather limited gains, some are suggesting that the M1 was an anomalously good design, and that Apple can't necessarily keep that massive edge.
Didn't they also lose one of their key employees? That was arguably in 2019, before the M1 came out, so I don't know how much of an impact Gerard Williams actually had, but I do wonder if they can keep the team together.
(Edit: Just googled, Gerard Williams is now at Nuvia and in the middle of a lawsuit with Apple)
Yeah, and Nuvia (now Qualcomm) aren't the only ones who sniped them.
Apple's effort to make a good modem also reportedly failed, so it's not like their chip team are miracle workers.
Not that I am skeptical. Apple has a long history of making good premium SoCs.
This has probably been the biggest issue Apple has faced post M1. They've lost lots of top people to other companies trying to design performance cores like Apple's. That and their SoC development resources are spread across far more products than they used to be.
Everyone keeps going on about ARM, but isn't the difference mostly memory latency and throughput? Less waiting for memory means fewer wasted clock cycles. And probably also hardcore engineering in choosing the right tradeoffs? The x86 "penalty" seems to be mostly a few percent from what I've read. Yes, x86 wastes some transistors on legacy instructions, but that doesn't fully explain the years-long gap between Apple and Intel and AMD. (Although AMD seems to be catching up now, and Intel a bit too with their HBM2 chips.) It's also TSMC and Apple working together on designing a great chip.
First step is convincing Windows development community that Windows on ARM actually matters.
Until then, whatever Apple has hardly matters to the over-80% of the worldwide desktop market that runs Windows.
And after how UWP was managed, there aren't that many that care.
Not so much an issue now: fewer native apps left, and current x86 emulation is great.
Yet you don't see anyone rushing to buy Windows ARM laptops.
I suspect, based on almost nothing (a vibe, perhaps), that this will be an iteration of the Denver core. Previous versions were not amazing on power, IIRC, and I don't think they got major design wins with it. I think at one point (2010-ish?) they considered an x86 processor based on this core as well. How much it remains a dynamic-recompilation-based "software ISA" that could be x86 or ARM or RISC-V, I have no idea.
I hope so. Then I can put some weird OS on it that nobody uses because I really miss the security-through-obscurity I got back in the day running Linux on PowerPC before either were popular. Nobody was going to bother writing shellcode for that shit back then :P
A Linux kernel on ARM is by far the most widely deployed pairing in history.
Indeed, it is not "some weird OS [] that nobody uses." :D
I do wonder if we will see these chips available for purchase to install in standard-form-factor, ATX-style systems? This is something I haven't seen ARM crack into yet.
I think ATX-style machines are very niche. Gamers love them, and 3D artists, but outside of that I think no one wants them anymore. I think ARM compatibility for games is a hard sell in the near term (although Blender, Maya, Photoshop, and Final Cut Pro have all been ported to ARM to run on Apple chips).
I think the market that NVIDIA should be chasing with ARM CPUs + good GPUs is business machines. Our company is filled with very poorly performing Windows Surface devices; outside of those who insist on Macs, this is what people get. Companies are spending a lot on these, and they desperately need better performance while also running cool with long battery life.
Nvidia should really be able to do both. Their integrated solutions like Orin scale up to a certain point, but the demand for modular compute is still enormous on the server side (and in the enthusiast market). Given that CUDA will support both going forward, I don't think there's a technical incentive to kill that market.
My guess is that Nvidia will have a line of mobile cores/chipsets for integrators that want them, while also offering PCIe-enabled boards for gamers and enthusiasts. Even Apple can't outrun the demand for a PCIe-enabled machine, and they don't even support eGPUs. Nvidia's incentive to abandon ATX (or at least modularity) is even slimmer.
I think you’re right about the class of device. MS can’t just treat ARM-based products the way it does now and make a leap in terms of end-user experience - wondering if this changes in the future.
I'm really curious if unified CPU/GPU chips are the future for laptop/desktop hardware. Mac is now unified across its product line, consoles are unified, phones are unified. My limited understanding, though, is that unified memory means giving up either high speed (for the CPU) or high bandwidth (for the GPU). Is that correct?
You can have your cake and eat it with a wide LPDDR5X bus. This is what Apple does, and it's fine. Other specialized chips (like Tenstorrent's accelerators) do this too.
I think an AMD 7900-like approach, where the memory controllers/cache sit on tiny external chiplets, is particularly practical. It's efficient and economical. I hope AMD (and others) repeat this with laptop CPUs.
GDDR is not a good fit for laptops anyway because it's so power hungry. GPUs and the Xbox/PlayStation use it just because it's the absolute cheapest bandwidth per dollar, at the cost of everything else.
HBM is very expensive and being hoovered up by the AI market. I wouldn't count on seeing it in consumer stuff again.
It's more of a low latency (DDR/LPDDR) vs. high bandwidth (GDDR/HBM) tradeoff.
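The arithmetic makes the tradeoff concrete: peak bandwidth is just bus width times transfer rate. A back-of-the-envelope sketch in C (the configurations are illustrative round numbers, not vendor-verified specs):

    /* Peak theoretical bandwidth = (bus bits / 8) * MT/s. Illustrative only. */
    #include <stdio.h>

    static double gb_per_s(int bus_bits, double mt_per_s)
    {
        return bus_bits / 8.0 * mt_per_s / 1000.0; /* bytes x MT/s -> GB/s */
    }

    int main(void)
    {
        /* typical dual-channel laptop memory */
        printf("128-bit LPDDR5-6400 : ~%.0f GB/s\n", gb_per_s(128, 6400));
        /* wide unified-memory bus, M1 Pro-class */
        printf("256-bit LPDDR5-6400 : ~%.0f GB/s\n", gb_per_s(256, 6400));
        /* discrete-GPU-class GDDR6X */
        printf("384-bit GDDR6X-19500: ~%.0f GB/s\n", gb_per_s(384, 19500));
        return 0;
    }

(~102, ~205, and ~936 GB/s respectively. The wide-LPDDR option gets GPU-ish bandwidth at DRAM-ish latency and power, which is the whole trick.)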
Orin Nano could already be a fine desktop if you can accept Linux4Tegra as your one and only option.
$499 and way more GPU than one would need. Released March 2022 with reasonably competent 6x Cortex-A78 cores.
I don't understand the aarch64 hotness. Is it because Apple did it? At the end of the day it's just a different instruction set; what makes that such an advantage?
People are misunderstanding the source of Apple's advantage.
That's what I think but I'm assuming that I'm naive. Apple's advantage is the vertical integration, putting memory on-package, and buying the best node.
Even that is too clever IMO. I don't think there's much gain in integration or on-package memory.
There are definitely gains with the on-package memory, but that's only the tip of the spear. The tight integration of hardware and software is also an advantage: look at how Apple Silicon is optimized for faster retain/release of NSObjects, with less memory overhead. This is a function performed by every piece of software on the system, but executed several times faster than on x64 because Apple can customize the hardware to fit their model of computing. Earlier discussion of this: https://news.ycombinator.com/item?id=25203924
The only other designer of ARM chips at this scale is Qualcomm, and they stick to very general-purpose designs (chips that conform to known designs and can be decent for all of their customers), the exception being some minor one-off tweaks for Microsoft's ARM laptops. Intel and AMD are in the same boat: they can do new and innovative things in hardware, but it doesn't really mean anything unless software is optimized for it. And if software is never optimized for it, was it worth the engineering investment?
Of course, Apple is also offloading a lot of work from the CPU cores into specialized on-die units for machine learning, video codecs, etc., along with very decent GPUs. No, it's not all strictly CPU stuff, but it does all matter in the end.
Not all of what Apple is doing is everyone's cup of tea, and no, it's not "the best performance in the world", but it's hard to find better performance in the same power envelope.
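To make the retain/release point concrete: ARC-managed code performs atomic reference-count updates constantly, so uncontended atomic latency shows up in everything. A rough microbenchmark of just that primitive (a stand-in for the idea, not Apple's actual objc_retain/objc_release):

    /* Hammers an uncontended atomic inc/dec pair, the core of refcounting. */
    #include <stdatomic.h>
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        atomic_long refcount = 1;
        const long iters = 100000000;

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < iters; i++) {
            atomic_fetch_add_explicit(&refcount, 1, memory_order_relaxed); /* retain  */
            atomic_fetch_sub_explicit(&refcount, 1, memory_order_release); /* release */
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
        printf("%.2f ns per retain/release pair\n", ns / (double)iters);
        return 0;
    }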
Is this a continuation of the existing Grace Hopper SoC [0]?
[0] - https://www.nvidia.com/en-gb/data-center/grace-hopper-superc...
A Grace Hopper workstation sounds cool but I would expect Nvidia PC chips to be an evolution of Orin.
Are we rooting for an ecosystem where there are next to no standards? Not even a standard boot protocol? The world is going to be full of devices with one-off SoCs and terrible after-release support.
Why is it taking so long to launch competitive ARM Windows laptops?
Apple is already on its third iteration of the M series, and there's still no proper competition for the MacBooks.
Because the best chip designs in the world are made by Apple, Intel, and AMD. ARM's chip designs are years behind those three. Apple has shown ARM can have excellent performance, but they are the only ones to do it so far. Android users have shown that they don't care about buying new phones with CPUs that are slower than 3-year-old iPhones.
Because faster CPU on phones mostly doesn't matter.
I have a midrange Android phone that I use to watch videos, listen to music and podcasts, internet browsing, chat, banking. The only thing I don't do on my phone is play games. I never once caught myself thinking "boy, this phone is slow, a faster CPU would do wonders here". It doesn't ever even feel hot to the touch.
I bet it can run most mobile games available just fine. What the hell are people running on their phones that needs a better CPU?
The fact that my 6yo iPhone 8 is still as fast as a 3yo Pixel phone is definitely part of why I can keep my iPhones far longer than any other phone I've ever had. You rarely see 6yo Android phones still in use, because they'd be as slow as a 9yo iPhone.
I have a OnePlus 6T acquired in 2019 that is absolutely fine to this day. I only replaced it because they dropped support for new OS updates.
I installed DivestOS on it and use it as a backup phone (i.e.: I take it with me to the gym, and keep my main phone at home). If they had kept support, I wouldn't even have replaced it.
Single-threaded (the best indicator of device responsiveness) Geekbench scores:
* 2018 OnePlus 6T: 521
* 2017 iPhone 8: 1020
My older phone is literally twice as fast. From a performance perspective it will be usable far longer than the OnePlus.
I have a Redmi Note 7 with a 351 single-core score, and I am perfectly happy with it. I just replaced the original firmware with LineageOS.
And yet, despite the benchmark numbers, my point still stands. For all the usage I listed above, my old Android is perfectly serviceable. Not once while using it have I thought a better CPU might speed something up, because nothing was slow. The phone never feels hot in use.
When I try a new phone, I don't see any noticeable difference in speed. Everything loads more or less the same.
A better CPU (even one twice as fast) on a smartphone brings virtually no benefit at this point. Half of "this is pretty fast" is still "pretty fast".
Benchmarks aren't perfect, but this shows the reality of Apple vs. non-Apple ARM cores. An iPhone 12, from 3 years ago, has a 13% single-thread performance advantage over the top-end Pixel 8 Pro.
Geekbench:
* Pixel 8 Pro: Single: 1760 Multi: 4442
* iPhone 15 Pro: Single: 2894 Multi: 7192
* iPhone 12: Single: 1995 Multi: 4401
Everyone but Apple uses ARM-designed cores. Why are ARM-designed cores behind? Go check what the comp is for a CPU designer at Apple, and then go check what it is at ARM. Suddenly, you'll be overcome with a feeling of clarity.
It takes four years or more to design a CPU. The M1 came out three years ago.
Anyone remember XScale? Intel once upon a time had an ARM division. They (foolishly) sold the entire division to Marvell.
Yeah, Intel got XScale (née StrongARM) from Digital Equipment Corp as part of the settlement of DEC's lawsuit claiming the Pentium infringed Alpha patents. Getting StrongARM in a settlement maybe meant Intel never really wanted it, and they sold it to Marvell. Intel didn't want Alpha either, which was sold to Compaq, the last-resort buyer of all things great and small.
The Qualcomm+Nuvia PC ARM chip is being announced tomorrow: "Snapdragon X".
Of course, ARM is suing them over it. Oh well.
Nvidia has been making Tegras for ages now, and I'm pretty sure some of them ran Windows at some point. So this doesn't seem like big news, especially if they don't have an OEM in their pocket yet.
Neat, since Qualcomm seems intent on producing unusable chips.