Alibaba Open Source XuanTie RISC-V Cores, Introduces In-House Armv9 Server Chip
(fuse.wikichip.org)
Intel is gonna be in deep trouble in a couple of years. They are the PowerPC of the modern era. IDK what held them back from reimagining the architecture for less power-hungry processors and tooling. With Macs switching to ARM and Microsoft doing something similar, I think within the next 5-10 years ARM is going to take over from the Intel architecture in general.
Macs are still a relatively small (but profitable) chunk of the computing world. The ground Intel has lost in desktop and mobile is more than made up for by their utter dominance in server. Intel also has a lot of plays that should bear fruit in holding or retaking ground on consumer compute in the next few years:
1) Arc GPUs. Looks like they will be seriously disruptive.
2) Hedging their process node bets by using TSMC too. The only magic in the wins AMD and Apple have had the last few years have been in them being effectively a manufacturing process node ahead of Intel.
3) Intel will likely be first to market with getting a large set of on-package memory that serves as a bridge between CPU cache and DRAM in terms of latency. Think 8GB+
I am no Intel partisan. My core interest is actually more in seeing performant, open (that is, blob-free [1][2]) hardware. RISC-V and Power10 [3][4] are what I am looking at in that regard.
I expect the reports of Intel's impending descent to be largely exaggerated. Still, it is good drama to fuel a hearty compute war. That is to the benefit of all, so have at it.
1: https://raptorcs.com/content/base/faq.html 2: https://www.osnews.com/story/133093/review-blackbird-secure-desktop-a-fully-open-source-modern-power9-workstation-without-any-proprietary-code/ 3: https://en.wikipedia.org/wiki/Power10 4: https://www.hpcwire.com/2021/09/08/ibm-introduces-power10-based-server-the-power-e1080-targets-hybrid-cloud/
> The ground Intel has lost in desktop and mobile is more than made up for by their utter dominance in server. Intel also has a lot of plays that should bear fruit in holding or retaking ground on consumer compute in the next few years.
The basic problem is that Intel hasn't lost ground in mobile, it's not even on the playing field, a huge strategic blunder. That leaves it defending its dominant positions on server and desktop / laptop. At the same time billions of $ pour into TSMC from mobile and those same facilities are now being used to make desktop and server CPUs that compete with Intel.
Its execution has been poor too but this massive strategic issue is a bigger problem.
Macs are a small market, but Apple definitely sets trends. Once they adopt a technology, people "know" it works and it's safe to switch.
If servers can run on cheap ARM hardware that uses less electricity and runs far cooler without much of a performance difference, that's a massive improvement. Electricity savings would more than justify a switch. Apple has shown that it's possible at the desktop/laptop level. Now they or someone else can prove it works at the server level as well.
You are giving too much praise to Apple. They never set trends in the datacenter world.
Arm servers have been sold for some years now. AWS, Oracle and others already made the switch. And the fastest supercomputer (Fugaku, Japan) uses ARM.
I'm building server software for my startup. My development environment is a Linux Docker container running on a Linux VM running on an x86 Mac. It's running the same binary as the Docker container running in Kubernetes on an x86 AWS server.
The fact that Apple is making ARM mainstream means that if I switch to ARM, so will a lot of others, which makes the ARM port of my stack a first-class consideration.
Suddenly the Graviton cores AWS has are becoming a whole lot more viable. I personally plan to move our API instances over to them once the Erlang VM's ARM port matures a bit.
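For what it's worth, the mechanics of that switch with a containerized stack mostly come down to publishing multi-arch images. A rough sketch, assuming Docker's buildx is set up; "myorg/api" is just a placeholder name, not anything from the posts above:
    # build and push one tag containing both x86-64 and arm64 variants
    docker buildx build --platform linux/amd64,linux/arm64 -t myorg/api:latest --push .
Kubernetes nodes (Graviton or x86) then pull whichever architecture matches them from the same tag.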
In CPU choices, when Apple has led (PowerPC), nobody followed. Their other CPU choices have hardly been brave. 6502? Not the first. 680x0? Followers. With Intel they were way behind, and all their laptop and desktop gear was a generation behind what other companies were using.
Phones are a bit of a mixed bag, but their computing gear has always been a bit old hat since the 1980s.
The M1 is the first time since PowerPC they took a risk. My guess is that it will turn out the same as PowerPC. I really want a Linux-powered ARM workstation but Apple aren't going to make that.
> In CPU choices, when Apple has led (PowerPC), nobody followed.
PowerPC was developed by IBM. Apple got involved when Motorola could not deliver a faster 68k to beat Intel's 486 which was passing 50MHz. So they joined IBM along with Motorola and formed the AIM alliance (Apple, IBM, Motorola) to build better processors.
> The M1 is the first time since PowerPC they took a risk.
There's little to no risc (buh-dum-tish) in moving to Arm these days. It's a well supported and understood architecture and found everywhere.
> I really want a Linux-powered ARM workstation but Apple aren't going to make that.
Apple has the power to steer its own ecosystem. When AMD was working on its Arm A series server processors I kept thinking "This sounds like a backwards approach destined to fail. Why not start by making a performant Arm SoC with 2-4 cores and GPU with a TDP of 10-20W? Target it at consoles/tv/laptops/desktop computing and jump start the desktop Arm market, which will naturally lead to demand for Arm servers." The idea was an Arm SoC that could fill the gap between low power/performance Arm SoCs for mobile/embedded and the power-hungry yet performant x86 chips. Basically an AMD version of the M1. That could have really changed things, but the big issue AMD would face is: where's the Arm desktop software ecosystem? That's why Apple can take these "risks"; they have full control over the whole stack.
It's not that Motorola failed to deliver a faster 68k, it's that IBM's actions forced them to invest in PowerPC at a time when they should have been focused on advancing 68k. The 68040 was quite competitive at the time, with performance that beat the Pentium. And any time a company is forced to split resources across multiple product lines, none of the projects will be as successful as a more focused competitor. Since Intel had gobs more revenue from x86 than Motorola had from both PowerPC and 68k lines combined, Intel was able to invest more in development of x86 with multiple teams, without the distraction of supporting dual architectures. Intel's progress accelerated while Motorola's limited resources were diluted across projects that didn't have any common infrastructure.
The complexity of developing validation tests suites for PowerPC alone would have sucked up all the software resources inside of Motorola at the time, as all the old 68k OSes and support code, etc had to be rebuilt from scratch for PowerPC. Not at all a small undertaking.
They need to steer it so it can address more than 32 GB of memory. On any particular day I can chew that up with a couple of fat VMs running legacy stuff that customers still need supported but I don't want a physical machine hanging around to work on.
You can order a MacBook Pro with M1 Max with 64 GB of RAM right now.
Apple not the first with 6502?
The 6502 was introduced at Wescon in September 1975.
Apple I was out in April 1976. That's seven months.
The KIM1, a board made by MOS to demonstrate their 6502 chip, was also released in April 1976. Even they didn't beat Apple to it.
Commodore Pet was December 1977. Rockwell AIM65 was 1978. Acorn System 1 was March 1979. Atari 400 was November 1979
In short: I don't know what the heck you're talking about.
Similarly, the Lisa was a very early 68000 machine. The Amiga and Atari ST were years after the Lisa and Mac. Only very expensive workstations from HP, Apollo and Sun were before the Apple Lisa.
In the era of Steve Jobs's second stint at Apple as the interim head, the Mac was designed on the very expensive Sun workstation.
So, they are batting 50% either being first with a dead end (6502, PowerPC) or being late (68000, Intel). Not exactly a stellar record of innovation in CPU adoption.
If the pattern holds the M1 is due to be a dead end.
Irrelevant, Palm did it better.
No one got a Newton, but every mid-boss or manager got a Palm.
But Palm didn’t drive the CPU direction of the industry as GP claimed.
Neither did the Newton.
> Their other CPU choices have hardly been brave.
It basically says that the "Intel Inside" logo on laptops has become meaningless.
> Once they adopt a technology, people "know" it works and it's safe to switch.
Not always. Firewire, for example.
Afaik it flies in fighter jets, experimental military UAVs and also civil satellites. Just not so common in consumer tech anymore.
Besides that, it's fun to debug with on older systems, or even new ones if they have it. If not, it can be added for anywhere from 30 to 50 universal credit units via 1-to-4-port FireWire 800 PCIe 1x-4x cards. It's fun to have in a homelab. You can even tunnel IP over it.
For most purposes, electricity is almost a rounding error in datacenter asset management concerns. Server compute is expensive. It is all about performance. ARM is just starting to get competitive in that ballpark. So we will see.
All of these ARM server chips are going to be very low on the pecking order for TSMC fab time, so they won't have the edge that Apple had by buying up the first slot on the latest manufacturing node. Even being a process node ahead, M1 only just squeaks by with comparable single threaded performance to Zen 3 cores, so ARM still has some catching up to do in the performance realm.
"For most purposes, electricity is almost a rounding error in datacenter asset management concerns."
I don't think this is true. It is not only power consumption but also power backup. If you can use smaller diesel engines and smaller battery packs this will lower the cost.
And why do you think datacenters have been raising temperatures? It saves a ton of money: https://www.datacenterknowledge.com/archives/2008/10/14/goog... HP estimated they saved $8 million. That is not a rounding error.
A server that uses less power will also generate less heat.
I have to admit it was years ago that I worked for a company owning datacenters, but at that time the highest costs were always power and connectivity.
That article cites a 100,000 sq. ft. data center as the source for that $8 million savings estimate, along with roughly a 30% power savings figure. No mention of the period, so presumably it would be a TCO for the servers going into the initial buildout.
100,000 sq. ft. is roughly enough space for about 250,000 1U servers. Say conservatively the servers are in the ballpark of $2,000 each. That's $500 million just in server hardware costs. If the total electric cost is around 3*$8 = ~$24 million over their lifetime, then we're talking about <5% of just the server expenses. Never mind the facility and staff costs (and, as you've said, connectivity).
So maybe not a rounding error, but way down the priorities list.
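A quick back-of-envelope check of those numbers, using the same assumptions as above ($2,000 per 1U server, and the $8M saving being roughly a third of the power bill):
    echo $((250000 * 2000))    # ~$500,000,000 in 1U server hardware
    echo $((3 * 8000000))      # ~$24,000,000 total electricity if the $8M saving is ~1/3 of it
    # 24M / 500M is roughly 4.8% of the server hardware spend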
These are AC costs not total costs and the $8m is probably an annual figure - although it's not clear.
Yeah, unfortunately it's a bit vague on the period. It is not unreasonable to assume an annual figure, in which case you're looking at something closer to 20% of the server costs for A/C power (assuming an average life of about 4 years).
Problem is, that sounds high... the coefficient of performance for chillers is around 4 to 7 [1]. That puts chillers at roughly 15-30% of the total energy demand, though datacenters often budget up to about 60% of equipment power demand for cooling. Total energy-related expenses for datacenters tend to be around 10-15% of total costs [2]. So it would be odd to have such high cooling costs. A TCO over 4 years seems more in line with typical figures.
1: https://www.energy.gov.au/sites/default/files/hvac-factsheet-chiller-efficiency.pdf 2: https://www.missioncriticalmagazine.com/ext/resources/MC/Home/Files/PDFs/(TUI3011B)SimpleModelDetermingTrueTCO.pdf
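The chiller arithmetic, for anyone who wants to sanity-check it: chiller draw is roughly IT load divided by COP, so e.g. with GNU units:
    units '1 / 4' '%'    # -> 25   (chiller draw as a % of IT load at COP 4)
    units '1 / 7' '%'    # -> ~14  (at COP 7)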
Are you speaking from experience doing data center asset management or are you just speculating? I don’t know about data centers, but for colocation, electricity cost is far from a rounding error. Also, cooling is a major factor that is directly related to power consumption, as pointed out by siblings.
See [1]; figure 1 there is a fairly typical high-level data center TCO breakdown. Total energy is by far the lowest, at around 10-15%. Site infrastructure, IT infrastructure, and staff are the real cost priorities.
There's a caveat to that in the planning phase of a data center: A lot of the site infrastructure costs are a function of the total power requirement. So if - when building out a data center - you can get more power efficiency, then that does translate to a significant cost savings.
Cooling tends to be somewhere between a 20 and 60% add on to the direct power consumption of a server.
1: https://www.missioncriticalmagazine.com/ext/resources/MC/Home/Files/PDFs/(TUI3011B)SimpleModelDetermingTrueTCO.pdf
In Ireland the national grid is under strain due to electricity usage from data centres, so much so that there is discussion of denying permission for new ones. I very much doubt that the electricity costs are negligible. It's also one of the reasons data centres are in Ireland: the relatively mild climate reduces heating/cooling costs.
https://www.irishtimes.com/news/politics/data-centres-could-...
Will be interesting to see where datacenters wind up in Europe this next decade. My money is on Norway.
Norway seems poised to build out significantly. They're actively inviting datacenters [1]. They also seem to have lower electric costs for large businesses [2] vs Ireland [3]. FAANG are already populating the Nordics too [4].
1: https://www.datacenterdynamics.com/en/news/norway-wants-data-centers-to-locate-there-and-be-more-sustainable-too/ 2: https://www.statista.com/statistics/595859/electricity-industry-price-norway/ 3: https://www.statista.com/statistics/595806/electricity-industry-price-ireland/ 4: https://www.zdnet.com/pictures/the-nordic-datacenter-boom/
Isn't more of the reason political (taxes etc.)?
The same climate benefits apply to northern England and Scotland, but Great Britain's grid has around 10 times the capacity.
(Ireland's grid's peak demand was 6.8GW, Great Britain's 63GW.)
Although the same argument then applies for siting the datacentres in continental Europe, where the grid is around 667GW. Perhaps this is part of Facebook's reasoning for a second datacentre in Denmark.
(Most of Denmark, including Facebook's existing datacentre, is connected to the continental European grid. The eastern islands are part of the Scandinavian grid.)
[1] https://www.thelocal.dk/20211013/facebook-eyes-second-danish...
As far as I'm aware, efficiency is key for datacenters. It can lower direct electricity costs, but also the cost of cooling. I've always been under the impression that the cost of running is much more than the cost of the hardware which is why they are willing to pay so much for the best/most efficient hardware?
The average consumer cost for 1kWh of electricity in the EU is 20¢. A well-used server might consume 500W on average.
    units '500W * 1 year * 0.20 €/kWh' '€'
    € 876
I'm sure businesses get a discount, but we haven't paid for cooling yet. This is hardly a rounding error.
> I'm sure businesses get a discount
Not as large as some may expect once you factor in power delivery guarantees and redundancy. But basically yes. Every single hyperscaler has been pointing out electricity as one of the largest items in their TCO, second only to hardware cost.
Which units is that? Mine does return stuff in Euros but with a fraction.
GNU Units version 2.19 with readline, with utf8, locale en_IE, and no customizations.
    units '500W * 1 year * 0.20/kWh' '1'
would also work. Do you have a custom ~/.units file?
    user@xirl> units '500W * 1 year * 0.20/kWh' '1'
        * 876.58128
        / 0.0011407955
IMHO server marketshare is much more fickle than the consumer one.
For consumers, compatibility is a big issue and hard to switch unless a myriad of things work properly.
For servers, if just a single commonly used system (e.g. mariadb or wordpress) works and you can do it 10% cheaper, companies can replace large quantities of hardware to a completely different architecture.
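And checking whether that commonly used system already ships for ARM is usually a one-liner; e.g. for the official MariaDB image (purely illustrative, not an endorsement of any particular stack):
    docker manifest inspect mariadb:latest | grep '"architecture"'
    # lists amd64, arm64/v8, etc. if multi-arch variants are published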
Server marketshare has been fairly sticky -- despite AMD offering low-risk, same ISA migration, and attractive pricing, the server market was relatively slow to adopt Zen.
OTOH, new HPC systems have been quick to adopt Zen in the form of dual-socket EPYC nodes.
Actually if you look at the current top 10 supercomputers, only 2 are Intel. The rest are custom ARM or RISC CPUs (in Japan and China), IBM Power9, or AMD EPYC.
HPC is a special industry in that regard. HPC is known to take well-managed technical risks such as building custom CPUs (A64FX in Fugaku) or using less popular architectures (SPARC, Power). If one can be certain that, with enough coding hours, codes can be adapted to run on those platforms and will end up faster, more exotic hardware will be bought or considered. Nation-level HPC projects are not conservative; industrial HPC maybe a bit more so.
My take on this is that it's inertia. I have a large fleet of servers that have been Intel since forever. I see no reason to migrate them today as things work reasonably well. We upgrade the hardware every 4-5 years. So during the next upgrade I'll choose AMD for sure, there is no question of that. ARM will be a serious competitor and RISC-V, too, that's for sure - but they are just not there yet when we talk about maximum performance (in our case, energy consumption is not the main concern)
I think this is largely just lag between orders for new Zen parts and the capacity coming online. I would expect to see a lot of Zen processors in fleets being built today.
> I think this is largely just lag between orders for new Zen parts and the capacity coming online.
That may be a piece of it, but I think there's a little more. AMD released the first generation of Epyc processors in June 2017. They didn't see much adoption for years.[1]
[1]: https://cdn.mos.cms.futurecdn.net/vono9miWH83ZVeqRfE9CCV-970...
>the server market was relatively slow to adopt Zen.
Is this finally an accepted truth now? I have been banging on for years about AMD not doing well enough in server. They will still gain a little more market share with Zen 4, but now Intel is coming back. The perfect opportunity window is closing.
AMD doesn't have enough product to go around. And this was true even before COVID.
Yeah, but big companies have long lifecycle and bundling contracts that make them more resistant to change.
I was thinking about the fact that more and more companies are renting compute from e.g. AWS - so for them switching to a different architecture can be as easy as changing a single parameter in a config file, if the cloud provider offers it a bit cheaper.
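To make that concrete: on AWS the "single parameter" is essentially the instance type, plus picking an arm64 build of your image/AMI. A hedged sketch with placeholder IDs, not a real deployment:
    # x86: --instance-type m5.large ; Graviton: --instance-type m6g.large
    aws ec2 run-instances --image-id ami-0123456789abcdef0 \
        --instance-type m6g.large --count 1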
> The only magic in the wins AMD and Apple have had the last few years have been in them being effectively a manufacturing process node ahead of Intel.
While I mostly agree with that, the fact that Intel are also moving to a chiplet-style architecture that AMD adopted beforehand means I think there are at least _some_ other bits AMD was ahead of Intel on aside from purely the process node.
I'm nowhere near as knowledgeable about this stuff as others, but from my understanding from Dr Ian Cutress' articles and YouTube channel, Intel is somewhat following AMD in this area. I could be misunderstanding, of course.
No, that's pretty accurate. Though it could be argued that it was only the move to the smaller process node that necessitated a chiplet architecture (as a means of spreading the thermal density, and to improve yield in a more hostile process).
Also, Intel's tiling tech is more versatile than chiplets. Though AMD is adopting TSMC's SoIC, which should be comparable. https://www.hardwaretimes.com/intel-believes-its-tiles-are-a...
From what I'm seeing, assuming that RISC-V CPUs are blob-free and open just because the ISA is open is illusory at best and wilful lying at worst.
I agree that an open ISA does not equate to blob-free hardware. And many RISC-V designs out there will have royalties and device blobs.
But the open ISA levels the playing field and allows for upstart hardware designers to make compatible hardware that is royalty and/or blob free.
When you have the complete verilog source code for the CPU core it's pretty hard to hide anything.
Following the upload of the C910 core source code 24 hours ago, Olof Kindgren is live-tweeting getting it going on an FPGA using the existing FuseSoC framework. He made significant progress in the first session before going to bed. It would not surprise me if it's working in the next 24 hours.
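For the curious, that FuseSoC flow boils down to something like the following; the library name, repo URL packaging, and FPGA target here are illustrative guesses, not the exact commands from his thread:
    # register the core library, then synthesize for an FPGA target
    fusesoc library add openc910 https://github.com/T-head-Semi/openc910
    fusesoc run --build --target=<your_fpga_board> <core_name>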
> When you have the complete verilog source code for the CPU core
That's a big "when". The road from an ISA to a core is long, and unless the core is copylefted (this isn't), then you aren't going to get the source code for something someone else manufactured either.
Copyleft won't help you there either if the license issuer is also the producer of the physical chip: Licenses are regulating what third parties can do with the source, they don't restrict the issuer.
They do actually restrict the issuer in most cases. Linus can't just strip Linux of other developers' parts because they are tightly interconnected. He could relicense a 30-year-old version of Linux, but it would be useless.
I should just have been more precise and said copyright holder instead of issuer. Linus doesn't hold the copyright for most of the Linux source, hence he can't relicense without explicit OKs from the other developers.
But how can I be sure the chip I have in my hand is built upon this code and that no blob has been added?
By (1) purchasing anonymously from retail sources, and
(2) having researchers with anonymous retail samples verify through decapping and inspection. I believe der8auer in Germany does die shots -- it would not be a bad idea to kick start this kind of research for community assurance purposes.
It's difficult to prove there's no hidden logic, but it's also not trivial to hide the complex logic needed to introduce covert undetectable vulnerabilities (probably around things like the RNG source or crypto).
Also, this assumes you're a high-value target; otherwise it's mostly going too far.
It still removes a serious obstacle to getting a blob-free CPU where the user can have control over what it does. The ISA is no longer a "magic sauce" to be protected and patented and therefore hidden behind closed source.
Well, if they wind up with blobs I think they will at least be more limited. The "Management Engine" style ones are the most egregious. Ones for the sake of hardware video encode/decode are a bit more forgivable, though still regrettable.
And there is certainly growing interest in blob-free computing [1], so some at least will exist to fill that demand. There is some hope for video with Linux landing blob-free hardware encode/decode very quickly the last couple of years [2].
1: https://www.crowdsupply.com/mnt/reform/updates/post-campaign-orders 2: https://www.youtube.com/watch?v=E9JLxjYlIWg
Did you forget about Amazon and Google Arm servers? It's not just Apple, everyone is moving away from x86.
Indeed.
AWS launched their own ARM based processor, Graviton a while ago.
Now they have begun nudging customers toward them, including in the serverless fleet, by making them available at a lower price point.
Not too different from their parent, Amazon, pushing their own branded products, cutting out the middleman.
ARM and AMD are both making steady gains in the server space as well though. AMD is already offered by all the hyperscalers, and AWS recently introduced ARM as well.
> utter dominance in server
Things are hesitantly shifting towards a more AMD-oriented lineup now. The enormous kickbacks and discounts don't (always) outweigh the increasing performance differences.
> The only magic in the wins AMD and Apple have had the last few years have been in them being effectively a manufacturing process node ahead of Intel.
Except for all the features on the chips, differing design philosophies, and peripheral interconnects, sure, the only difference is the manufacturing node. But what have the Romans ever done for us?
> 1) Arc GPUs
I've been hearing the same story for 10+ yrs on how Intel is going to disrupt the GPU market any day now. For real.
Remember Larrabee?
It's not going to happen. Intel is not good at disruption. They're good at several things. But not disruption.
"This time fer sure, Rocky!"
Seriously. "More cache on die?" reminds me of the scene in "Smokin' Aces" ... "we did that. all of it."
Intel is so totally ossified that the top I-dunno-how-many layers will have to be brutally cleaved off before the remaining pieces can hope to be productive again.
Arc is way more conventional than Larrabee though.
All they have to do is bring these to market without bugs and they'll sell like hotcakes.
Ctrl+F'd Graviton and found disappointingly few results.
The server marketplace is shifting, slowly, but it is happening.
I agree with your assessment in general, but why do think Arc GPUs will be "seriously disruptive"? If anything, they seem to be too late to the party.
> 1) Arc GPUs. Looks like they will be seriously disruptive.
I'm curious to read more about this. Everything I've seen about Intel's new discrete GPUs makes it sound like it'll be more of the same when compared to AMD and nvidia, and potentially not even as powerful.
Intel can also play the "government please help us, we're critical to national security" card...
> IDK what held them back from reimagining the architecture
They'd been successful for decades, were raking in cash, and remembered the last time they tried something new, AKA Itanium, AKA the Itanic, AKA the most mocked architecture of all time. It's a failing, but I can totally understand how they got there.
Classic innovator's dilemma. They're retreating up-market.
Are you comparing ARM to x86 or Intel chips specifically? Intel could design their own ARM chips…it’s an architecture family with a SUPER wide variety of implementations.
Everyone seems to be excited about the newer AMD chips which are x86.
ARM is nice for low power, low performance workloads like embedded, phones, and laptops.
Aren't pretty much all modern x86 CPUs ARM at their core anyway, or am I missing something? The big deal about Apple silicon is how it's an all-in-one, and obviously having things compiled for ARM in the first place helps.
> ARM at their core
Most modern "CISC" processors internally are composed of RISC-like elements with instruction decoders in front.
ARM is a specific almost-RISC instruction set architecture.
They've tried a lot of "reimagining." It never works. No one wants Intel on mobile, despite Intel's mobile chips being absolutely great when they were actually releasing them, and no one wants a genuinely high-performance chip if it means they have to make alterations to how their software is written.
I have a lot of criticisms of Intel, but "held back from reimagining" isn't quite a great one.
Their low-power stuff was garbage. And Itanium went unsupported and was half-baked.
Intel knew where their money makers were: x86 servers. Their attention and investment showed a lack of foresight, which, combined with their famously bloated engineering teams, is a story as old as time itself.
When has Intel reimagined products that have low power and low price? Intel has been notorious for power-hungry and overpriced stuff.
Didn't Poulsbo and friends have negative prices ("contra-revenue") and yet still nobody wanted them?
For those who, like me, never heard of Poulsbo, it's a chipset for Intel's first generation of Atom processors.
Poulsbo was disastrously bad, quirky, and impractical.
It took all the x86 platform warts and replaced them with even harder-to-get-right "Poulsbo warts" like SFI, etc.
And to think they had it in their hands in 1997 with StrongARM
I think Alder Lake is very promising. Big-small CPU designs, even on x86, seem like the way to go.
The way to go for what?
Mobile/laptops, e.g. the sector where Apple/MS are switching. Big-small is the way to get high power efficiency.
low power usage mobile stuff which still packs some processing power.
for Intel to not go out of business...?
Intel had the best high performance ARM processors for a long time (StrongARM, acquired from DEC).
Timing is hard.
How much of ARM's renaissance comes from the iPhone and Qualcomm's SoCs?
I don't recall hearing about ARM in other places really... maybe the occasional home router or cable set-top box...
Yep, the inertia and economy of scale come from there, and with the pendulum in full swing, servers have seemed inevitable.
But timing is hard here too, and lots of things that seem inevitable take many attempts and a long time to happen. ARM server chips have a recent history marked by struggle; there have been several over the last decade. Remember e.g. AMD in 2012? https://arstechnica.com/information-technology/2012/10/amd-a... Or this ARM Ltd announcement: https://www.tomshardware.com/news/Server-CPU-Xeon-Opteron-AR...
Low-end ARM chips are everywhere as generic microcontrollers in things you probably only barely think about as having electronic components at all. The next vacuum cleaner I buy will probably have an ARM in it.
But that's mostly a phenomenon of this new smartphone era. That market used to be completely dominated by 8-bit or 16-bit Atmels and PICs (and probably still is, but ARM is gaining market share fast).
It's not just smartphone knock-on, although that does help. The chips in question aren't smartphone class. When I've spoken to people about it, it's more a matter of toolchain, price, and compute power: the tools are familiar (it's just GCC, not something manufacturer-specific), at interesting volumes they're within spitting distance of the others for price, and if it turns out you need more CPU than you originally thought there's usually miles of headroom. The idea of "just go straight to ARM from the start, it's not worth bothering with microcontrollers" is the mantra they go by.
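To the toolchain point: the "it's just GCC" flow for a small Cortex-M part really is a couple of commands. A minimal sketch; the part choice and file names are placeholders, and the linker script and startup code a real project needs are omitted:
    # compile for a Cortex-M0-class micro and produce a flashable binary
    arm-none-eabi-gcc -mcpu=cortex-m0 -mthumb -Os --specs=nosys.specs -o blink.elf blink.c
    arm-none-eabi-objcopy -O binary blink.elf blink.bin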
There are still a load of jobs you wouldn't do that for, but the list is short and getting shorter.
Given the current semiconductor shortages I don’t think Intel will have any trouble selling chips within the next couple years.
Sure... but what happens on the other side of these unusual market conditions?
AMD survived for a long time with worse chips and someone else's fab. Intel has their own fabs and the runway needed to get back in the game.
They were at the top of the pyramid, that's what happened.
Same as when Microsoft had IE6, or when the French phone market was a one-provider deal, or when BlackBerry was eating the smartphone market.
They stop innovating. They milk the cow.
It works for some time, then a new guy arrives in town: Firefox for IE6, Free for France Telecom, iPhone for BlackBerry...
Apple is using their own cores, Google is using their own cores, Amazon is using their own cores, and I bet Microsoft is next... Eventually these new chips will hit the desktop and then we'll see x86 start to sink.
Amazon are using their own cores as in the Arm Neoverse series?
Apple are the leaders in terms of arm microarchitecture.
I don't know about Google/Amazon having their own CPU cores, but they do have their own chips.
I'd counter with the opposite sentiment. If Intel's foundry play pays off, they will still be big and important. All these companies designing chips need a place to manufacture them. While Intel's own chips might be decreasing in marketshare, they still have an opportunity to get a tiny sliver of everyone's pie... and that pie keeps getting bigger.
Contract manufacturing won't have anywhere near as fat a margin as their CPU lineup has had the last decade though. That'd be a hard thing to sell to the shareholders.
Intel is counting on being nationalized.
Pretty much; they are getting a lot of support from the US government. To the point that Samsung and TSMC have to open their order books so the US can "inspect" them. Most likely to snatch away clients or get better insight into material usage.
pretty realistic unfortunately
I’d bet money on it. Or if not nationalized, funded like DOD contractors.
I wonder if Intel will see the obvious from a safe distance and play to their strengths instead of dying for pride.
Imagine if Intel would swallow their pride and start building ARM SoCs… they could show their engineering prowess and use x86 synergies to their advantage. As a buyer, I'd buy ARM CPUs from Intel if they also provided me a slow path for legacy x86 workloads that I haven't migrated yet. Also, ARM is just the ISA, so once a customer locks in to Intel's ARM CPU ecosystem, I'm sure they'll come up with lots of special additions that the customer would then be hesitant to walk away from.
Or, they could be this decade’s IBM and make x86 look like the next generation’s mainframes - niche, even great in some dimensions but out of mainstream use.
It's easy to mistake the obvious computing market for the whole of the computing market. Servers dominate the computing market, x86 still dominates the server world, and Intel is still competitive on the server front.
> Servers dominate the computing market
I would wager mobile semiconductor revenue worldwide is bigger than server semiconductor revenue worldwide.
But I agree with you that Intel is still dominant in the server world.
That feels like a bit of a stretch; their Core CPUs are still very competitive, and they are planning on manufacturing ARM chips as well.
The world is rapidly being consumed by ARM; it will be interesting to see how this plays out, from the hardware offered on the market to what devs are targeting when conducting builds.
Didn't Intel have a huge hack where their secret processor data was leaked out? I wouldn't be surprised if all these new chips got inspired by that.
> In-House Armv9?
https://semianalysis.com/the-semiconductor-heist-of-the-cent...
(discussed recently: https://news.ycombinator.com/item?id=28329731)
Because ARMv9-A is very new, publicly announced only this year, it does not seem likely that Alibaba could have licensed it from the rogue former ARM subsidiary; more likely it came from the parent company.
IIRC, the problems with ARM China started long before the launch of ARMv9-A. Alibaba has certainly received technical information some time before the public announcement, but it is unlikely that any IP would have been transferred to ARM China before that.
>> In-House Armv9?
They meant an in-house ARMv9 design, as in Amazon's in-house Graviton 2. In reality it is an ARM Neoverse N2 design, although likely clocked slightly lower due to TDP constraints.
I don't think the ARM China issues have been solved (and possibly never will be). So the ARMv9 design is likely coming from ARM UK. Alibaba also operates in SEA and is expanding outside of China.
Here are the RISC-V cores: https://github.com/T-head-Semi
One of their repos has a bunch of patches/scripts to be able to run Android on the CPUs[1]. (XuanTie C910)
The linked video (https://occ-oss-prod.oss-cn-hangzhou.aliyuncs.com/share/risc...) is impressive! Open source Android running on an Open Source CPU.
As a person learning Chinese, I find it super frustrating that Chinese brand names are only mentioned in Pinyin, and to make it worse, without tones. With tones, it would have been xuántiě, which makes it possible to properly pronounce without extra context. In Chinese characters it's 玄铁 in simplified and 玄鐵 in traditional script. The meaning is "reddish-black iron". I'm curious if it should be taken literally or if it's a cultural reference of some sort.
These names usually come from Chinese mythology and Wuxia, which carries a sense of national/cultural pride:
https://en.wikipedia.org/wiki/Chinese_mythology
https://en.wikipedia.org/wiki/Wuxia
Xuantie probably came from Wuxia, and is widely used in pop culture (games especially) for metals that have special mystical properties.
武侠 (Wuxia) culture is big in Alibaba. Everyone has a 花名 (alias / nickname) in the company. In the early days, people just picked names from 金庸 (Jin Yong)'s novels. Jack Ma's 花名 is 风清扬, who is a great sword master in one of the novels. Source: I had a short stint there.
Picking 玄铁, which also comes from Jin Yong's novel, seems rather natural in that context.
Jin Yong's novels are very popular in Chinese speaking countries and regions. Most readers won't associate them with China. I don't think it has anything to do with national pride. Culture pride? I don't see much either.
Yes, I agree that it is the natural thing to do. Maybe building / reaffirming cultural identity through these names is what I was trying to convey.
By your logic a World of Warcraft reference is also nationalistic. Western engineers can make the most convoluted cultural references possible with no criticism, but something as innocuous as naming a processor after a fictional metal attracts accusations of nationalism.
I must apologize if I made you feel like it is a criticism. I think national/cultural pride is a pretty positive thing, at least in China.
Americans have their fair share of nationalistic names.
The original Xbox was codenamed Midway for obvious reasons. :^)
Maybe a reference from a novel [1]. Rough Google translation of the summary: "The black iron is the material recorded in Jin Yong's novels. It is dark in color with a faint red light. It is extremely heavy, has a high melting point, and has a magnetic force."
I wonder if they will sell the Yitian 710 directly, since I'm not sure how large foreign use of their cloud is (I guess I was out of the loop on cloud providers in China). I would love to have a machine that could compile monstrosities like Chromium and Firefox in < 10 minutes.
Since all these high-core-count, cheap ARM processors are only available via cloud providers, I might end up using spot instances to spin up very large build servers, but that feels like overkill for what are currently just personal projects (NixOS stuff).
I don’t think any cloud provider will be selling these custom chips, it’s not their business model. You can buy ARM servers already (Ampere Altra https://www.anandtech.com/show/16315/the-ampere-altra-review)
So I assume that they didn't open source their leading RISC-V core because they were giving up on RISC-V, but instead because they have something better, more powerful in the pipeline.
Does that mean that we are going to see soon a data center class RISC-V chip with hundreds of RISC-V cores? That would really be something.
Could it be that they genuinely want people to use these designs and grow an ecosystem around it?
When there are more eyeballs looking at something it can become better.
Does anyone have any data on benchmarks for the RISC-V cores? I wasn't able to find any. I'm specifically interested in the C906 (as in the Allwinner D1, which exists as a dev board[0] for ~$120) and the C910, which is apparently multicore. I'd love to know how they compare to SiFive's offerings and similar-class ARM chips.
The Hot Chips presentation a year ago when the C910 was announced shows it comparable to ARM A73.
https://www.anandtech.com/show/15991/hot-chips-2020-live-blo...
I should have one of the eval boards in mid November and I'll be able to do real-world tests then.
The C906 is a little bit faster than a Raspberry Pi Zero, except it has a fairly useful vector unit which can double or triple the speed of many things.
I published memcpy and strcpy benchmarks on Nezha six months ago:
https://hoult.org/d1_memcpy.txt https://hoult.org/d1_strcpy.txt
I also have results for it in my primes benchmark. It beats out a U54 at the same clock speed and is not far off the higher clocked A53 in a Pi 3.
The current "Nezha" board at $99 is obviously expensive compared to a Pi Zero. SiPeed are promising a board with with same D1 SoC with 512 MB RAM for under $20 within the next month.
https://twitter.com/SipeedIO/status/1443486484112183298
The C906 is very comparable to SiFive U54 (as in the HiFive Unleashed, and the Microsemi "Icicle" FPGA) except it has a vector unit and a much better DRAM interface than the FU540 had. But the D1 is only single core.
The C910 is comparable to the SiFive U84, which has not yet been seen in public in actual silicon.
awesome, thanks for the info. I just purchased one of the Nezha D1 boards assuming it would have relatively poor performance. Still hopefully interesting to play around with.
Every time I see a headline with RISC-V in it I get excited, but then immediately disappointed that it still isn't a widely accessible product.
It's getting there.
There are getting to be a good number of SoCs reaching actual silicon in small batches, and available on relatively expensive boards ($100 to $665). They work. You can buy them.
Hopefully some of them go into mass production, which should drop the production cost for the actual chips to $5 or so and boards to something like Pi prices (there are dozens of companies that can make boards once chips are available).
Having the core and chip designed and progressed to working test silicon means by far the hardest part is already done.
Price then comes down to production volume.
I think the lack of excitement is misplaced! RISC-V is already amazingly successful, and I would argue it is already the most widely available technology.
Tens of high quality OSS HDL implementations that target FPGAs and ASICs. Many tens of successful hardware implementations, already shipping multiple billions of cores.
The board Bruce mentions is an amazing value. It's a board in that form factor on which you can bring up graphical Linux, and the distros are already targeting it.
You can get a 4-stage, 160 MHz RV32IMC with 400 KB of SRAM for $1 in the form of an ESP32-C3. Dev boards are $15.
What does it mean to open-source chips? Does that mean I can bring the source to fabs and fabricate them?
It's not a chip, it's a CPU core. You still need to add a lot of things to make a chip: interface for DRAM, something like PCIe for input and output. And you have to do layout, timing calculations, and all that kind of thing.
But yes, if you have sufficient funding and expertise you can take their CPU core -- approximately as good as the performance cores in the Samsung Galaxy S8 or the Raspberry Pi 4 -- and, without permission or payment or even notification, use it in your own chips.
The Verilog code for the 4 RISC-V cores of various sizes is licensed with the Apache License.
They are some of the best existing RISC-V cores. One of them was the fastest existing RISC-V core when it was introduced.
You can use the Verilog code to either synthesize it for a FPGA and run it in a FPGA board with a large enough FPGA, or if you have access to an ASIC manufacturing process, you can synthesize it for that process and include the RISC-V cores together with whatever else is needed in a custom IC, without paying any royalties.
Using one of these cores for a FPGA board seems very attractive, because most other open-source cores that are available have a much lower performance.
They have also provided versions of the gcc compiler, of the glibc standard C library, of the boot loader, of the Android Bionic standard library and a few other software packages that are needed to run programs for Linux or Android on these RISC-V cores, which have many extensions over the base RISC-V specification.
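For reference, targeting these cores from Linux looks like any other cross-compile once a RISC-V toolchain is installed. A minimal sketch using a stock distro toolchain and qemu-user, which covers the base RV64GC ISA but not the vendor extensions that the T-Head-provided gcc adds:
    # static-link so the binary runs under qemu-user without a sysroot
    riscv64-linux-gnu-gcc -static -march=rv64gc -O2 -o hello hello.c
    qemu-riscv64 ./hello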
Alibaba appears to have played with RISC-V for a few years, but even though they succeeded in designing the fastest such cores, they have eventually decided to use ARMv9-A for their real high-performance server CPUs.
I don't think so. You still need at least "millions of dollars" and some experienced dev staff (most likely snatched from the big guys) if you plan on making anything competitive with Intel, ARM, or Apple.
Is C910 out in the wild? Are there any dev boards with it?
https://www.aliexpress.com/item/1005003395978459.html
There were only 10 publicly available from the first batch, out of I think a total of 80.
I managed to snag one. Hopefully I'll have it mid November.
Does that mean they are giving up on RISC-V? The future is ARM for them, it seems.
Where are the Gflops/W?
Gamechanger
Did anyone instantly flash back to the Jack Ma & Elon Musk conference panel "WTF?" moment and think: no way Jack is running this company, at all?
Well, to be fair, both were BS-ing around. The difference is that Musk is a native English speaker, so you had the impression that he knew what the hell he was saying.
It's pretty clear to anyone who is not a fanboy that both (as is normal in these situations, since they're CEOs) have no idea what they're saying.
TBH, a lot of Chinese people sound off speaking English when they're not native speakers. I don't think he's actually stupid; it's a consequence of the big differences between the languages.
Jack Ma has not been the CEO of Alibaba for quite a while now.
A new 'Weex', like Alibaba before it, boasting to the sky.
>>if we compare it to some recent measurements of Ampere Altra, AMD EPYC, and Intel Xeon, taken by Andrei over at AnandTech, the Yitian 710 should be fairly competitive at 128 cores.
Impressive. I did not expect RISC-V to progress so quickly. Intel is much weaker than we all thought after all. Hats off to these clever engineers.
"The Yitian 710 integrates 128 custom-designed ARMv9 cores"
That one isn't a RISC-V chip.
To be fair these are fabbed at TSMC and TSMC has been kicking Intel's behind for some time now.