TSMC to customers: It's time to stop using older nodes and move to 28nm
anandtech.com

Thanks TSMC... when will you release an HV version of 28nm? Oh... never, because you transitioned to bottom poly at 40nm.
How is your automotive eFlash at 28nm... oh, you're still working on it (since 2018)?
Well, guess we won't have many display drivers (or displays) or autos then, or maybe marketing should pull their heads out and smell the roses.
I mean, I get it. Most things should transition to 28nm on 300mm wafers for process & equipment reasons, but in order to do that, many of them need the right process to exist, and the foundries are concentrating only on the latest nodes that make margin $$, so they don't develop critical features even for 28nm. Could your customers redesign their entire architecture and packaging? Yes, but it will take years to decades to prove reliability.
I'll note that Apple's rumored OLED on Si for MR/AR is likely on a giant 80-90nm process in 300mm at TSMC so for money and volume, they'll do most anything... even build capacity.
Seems like the automotive industry is gonna be the one to have to blink here. I'm not intimately involved in the details of the industry, but didn't the automotive industry already try to pass their risk onto TSMC by cancelling contracts early on in covid, and weren't they told to get to the back of the line when they wanted their chip orders again?
If TSMC has enough demand to sell everything they make, they don’t really need to take specific client needs into account
What does blink mean? Do your chips work when it is -70C? Cars are used in arctic environments, so automakers have special tests to ensure they work in cold temperatures most of us will never see. Do your chips work when it is +65C? The inside of a parked car will get that hot, and the car is expected to work. How long will you keep the new chips in production? The longer the better: we need to provide spare parts for everything for at least 10 years, so if you go out of production we need to fill a warehouse with the final production run, just in case the things start mass failing in a few years.
I think the auto industry is looking to see if they can bypass the whole above mess with their own fabs. They don't need fancy processes; they need something reliable that they can depend on for years. The cost of a fab, though, means they need to worry about antitrust, as they can't go it alone.
I think OP is saying that TSMC is saying “Okay auto manufacturers, we don’t want to manufacture chips that meet all of your requirements when we have so much order volume, so you’ll have to drop the requirements or find a different fab.” So the auto industry making their own fabs is exactly what TSMC wants them to do.
And the automakers will likely balk at spending many dozens of billions of capital outside of their core competency.
Since all the other legacy fabs are also at capacity for the next few years.
It's rapidly becoming part of their core competency, no different than steel panel manufacturing.
If the auto manufacturers don’t want chips to be part of their core competencies, then they’ll probably need to go back to building cars like they did thirty years ago.
Legacy auto is screwed. They are getting smaller and have fixed costs and fixed cultures that are going to be hard to remove. At the same time this makes it hard to recruit the types of people you need to turn the ship around.
This thread is a good example of the legacy auto mentality of blaming a supplier, instead of taking responsibility for the situation they are in.
This is about all auto. Or do you think EVs won't experience cold climates?
Doubtful the story you heard is correct.
Too many mainstream articles on semiconductors are just vague, uncheckable facts combined with filler that seems to have been suggested by Intel's PR department, even when Intel is totally irrelevant. Intel needs delicious subsidies!
> Intel is totally irrelevant
Intel still has the fastest chip in the world with the i9-12900KS. They are also the second-largest semiconductor company in the world by revenue, only barely passed by Samsung just last year. [1]
I know Intel is not hip, but to call them irrelevant is some serious reality distortion.
--
[1] https://www.eetasia.com/wp-content/uploads/sites/2/2022/04/I...
Sorry. I was responding in the context of automotive. There have been so many articles about the automotive chip shortage mentioning Intel. It seems like filler by uninformed journalists. Intel is not a top dog in auto, and most of Intel's auto-qualified parts are in the Altera line, actually made by TSMC.
Intel is still very very relevant overall to server, networking, and consumer space. Huge revenues!
Oh yeah, I've noticed that as well. Mainstream media loves talking about the general chip shortage and investment into 5nm and 3nm fabs in the same article, whether it is Intel or TSMC. The media (and the general population) seems to be fully convinced that a chip is a chip.
I honestly don't really understand 95% of what you've written here, but could this mean an end of godawful touch screens in cars?
No, it means there will be even worse touchscreens because now automakers can't afford silicon that might actually power a tablet.
That's scary - I have a < 1 yr old Tesla Model Y, and it is embarrassing how underpowered the tablet that controls basically every part of the car is.
Teslas have pretty responsive displays.
The last vehicle I bought, I turned down several vehicles because many of the internal environmental controls were through a touchscreen, and the touchscreen was so slow and bad I was honestly astonished that they were selling it.
The salesman tried to downplay it, but I still walked out of the Ford dealership 100% because of their shitty controls/touchscreen.
Ended up buying a car with old fashioned buttons and knobs, much happier.
FWIW, Tesla is one of the few companies that actually put decent processors in their cars: in lieu of a cruddy, off-the-shelf ARM CPU, they use Intel (and more recently, Ryzen) x86-based machines. Their performance relative to the Cortex processors running in competitors is enormous, so any sluggishness must be down to the volume of processes Tesla is actually running onboard.
Isn't one of the big problems with Tesla vehicles that they use COTS hardware?
I'm sure I saw something about how they used commercial-grade touchscreens instead of automotive grade, so they feel far nicer / more responsive than other cars' but fail much faster, as they aren't designed to handle the repeated heating/cooling cycles that cars experience.
I don't need my car to drive itself with technology that doesn't work and requires me, pedestrians, cyclists, and other drivers as test subjects. I don't need my car to be powerful enough to compute simulations of the Earth's atmosphere. I just want a basic fucking car with good enough safety features. This fetishization of shoving chips into every square centimeter is ridiculous. I want to use my computer and drive my car, not drive my computer with all of the attendant software issues that'll inevitably crop up.
How can they not afford a 300 dollar tablet in a 40k dollar car?
A $300 tablet drops support in 90 days to a year. A $300 tablet equivalent in a car has a trailing maintenance requirement for service and parts for years (5-10?).
A $300 tablet dies quickly in the automotive environment, with outdoors-like variation in temperature and humidity. Tablet manufacturers literally write in their product sheets "do not leave the device in a car or it may get damaged". If you leave an iPad in direct sun for a month and it dies, it's expensive for you; if the same happens to a car console, that's expensive for the manufacturer, which has to repair it under warranty.
The temperature range for iPad listed by Apple is 0 to +35 C for operation and -20 to +45 C for storage. The required temperature range for electronics on automotive dashboards (which may be exposed to direct sun) is -40 to 90 C, which effectively requires different materials, which means you can't even reuse most components.
Assuming you're selling 5 million of a part where the previous generation is 5-10 years old, $1.5 billion is not a lot of money for development, validation, materials, manufacturing, logistics, dealer training, integration with future models, and a decade of OTA updates.
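Those numbers amortize out like this (the $1.5B and 5M figures are from the comment above; this is just a sanity-check sketch, not real program accounting):

```python
# Amortize a hypothetical $1.5B program budget over 5M units sold.
program_cost = 1.5e9   # development, validation, logistics, OTA support ($)
units = 5_000_000      # parts shipped over the product's life

per_unit = program_cost / units
print(f"${per_unit:.0f} per unit")  # $300 per unit
```

At ~$300 of program cost baked into a $40k vehicle, that's under 1% of the sticker price.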
Because it becomes a $3000 tablet, in today's chip-shortage environment, to meet certification for automotive use.
Oh damn. Thanks.
Automotive is the extreme case. For now, it would be good if many non-automotive chips shifted to 28nm.
That would leave some decent spare capacity for automotive.
And maybe once a lot of such chips are done in 28nm, it would make sense for TSMC to invest more in optimizing the process for them.
Interesting, as 28nm is closing in on a decade of age - https://omdia.tech.informa.com/OM016176/28nm-to-be-a-long-li...
But yes, many nodes are long-lived. So the nodes they are no longer expanding will have been around one heck of a time, and I can imagine the equipment producing current output has repair/servicing costs, as well as materials costs, that are starting to make smaller nodes more cost-effective for them as a manufacturer. We may well see legacy older nodes start becoming more expensive for suppliers to access going forward. Some chips may prove less suitable, or existing designs less accommodating, to just shrink: from my understanding, if you have a 40nm chip design, just running it on 28nm without any changes is not possible? Or certainly not as straightforward. Then there is the validation/testing for certification that the customer will need to do, and for some chips, that may prove more costly than sticking with the proven existing nodes.
So it will be interesting to see how this plays out. I'm not aware of any real standouts, but I'm mindful that for some companies using existing older nodes, things will not be as clear-cut as many will think.
> just running that on 28nm without any changes is not possible?
From what I understand, "node" in this context doesn't work like a printer resolution; it's more like a Lego kit of transistors and such that TSMC has tuned to the layer heights / counts / materials they plan to deposit. Since these details change between nodes, the 2D shapes aren't portable. You have to swap components.
You understand correctly. (Source: 20 years of asic design)
I'm also interested in the migration costs. Perhaps there will be a market for a migration service... though the TAM on that is probably small enough that it can't support a product-first approach.
Migration services exist today. Almost any of the ASIC backend service companies would take a contract to do a fab port. (Won't be cheap, though)
But these are services, not products. There's no one-size-fits-all way to move design Z from process X to process Y, for arbitrary values of X, Y and Z, nor is there ever likely to be.
And even once it's done, there are still costs to verify and test the results, which isn't cheap either.
>And even once it's done, there are still costs to verify and test the results, which isn't cheap either.
I can imagine it's even more so in the automotive industry, where there are strict safety regulations and guidelines in place which must be met.
Yes, it certainly does seem like an opportunity for a specialist company to offer a full service, but aspects like that would usually be handled for a node shrink in collaboration with the node provider. One aspect: take, say, a microcontroller used in sensitive equipment. The level of certification and validation, from government/mil certification down to insurance certification (Lloyd's of London have a lot of standards in many fields that have to be ticked, from my experience in the oil industry alone), can bury many a product. Heck, even your CE/FCC certification is not cheap, so any change, such as node size, will see that whole process repeated - hence not all chip makers rush for the latest and greatest node when the existing one works and is proven. Equally, some equipment may well see issues on smaller nodes due to environmental factors: smaller nodes are more susceptible to things like radiation, where some exotic space particle that was smaller than the feature at larger sizes may well start to become an issue at smaller ones. Hence the many reasons legacy nodes are still in play and utilised.
We may well end up, in a decade or two, with suppliers having to use China or Russia for production as the only way to get access to the nodes they need. It's not like things like that haven't happened before: think of how NASA for a while was dependent on old tried-and-tested Russian rockets for space launches. Just hope this is not a future problem that is allowed to creep up and hit us all.
Which is to say that there isn't the profitability (or even the equipment) to build new capacity for such old nodes.
I'm disappointed that nobody really bothers to work on the cost reduction side of older nodes. I kind of thought we could easily and cheaply turn out, say, 32nm chips by the million.
I'm not an expert, but my layman's understanding is that the lower sizes and new production techniques decrease power consumption.
The price of ongoing power consumption probably dwarfs the price of the chip itself in terms of cost of ownership in most cases.
E.g. an i7 draws 65W at idle, or about 1.5kWh/day. That's ~$0.20/day where I live, about $6/month or ~$73/year. Max draw is ~4x that. I've probably paid more in power to run my CPU than I paid for the CPU itself.
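The arithmetic, for anyone who wants to plug in their own numbers (the 65W figure is the comment's claim; the $/kWh is an assumed rate):

```python
# Rough cost-of-ownership estimate for CPU power draw.
idle_watts = 65        # claimed idle draw
price_per_kwh = 0.13   # assumed electricity rate, $/kWh

kwh_per_day = idle_watts * 24 / 1000          # energy per day
cost_per_day = kwh_per_day * price_per_kwh
print(f"{kwh_per_day:.2f} kWh/day, ${cost_per_day:.2f}/day, "
      f"${cost_per_day * 365:.0f}/year")
# 1.56 kWh/day, $0.20/day, $74/year
```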
Most chips run rather cool; user-facing CPUs in laptops/desktops/servers are a bit of an exception.
> E.g. an i7 draws 65W at idle, or about 1.5kWh/day
I don't know which chip you're talking about but a 12900K idles at ten watts.
He is talking about an i7 on an older node, since he is comparing the lower production cost of older nodes with their higher usage cost (electricity). I don't know which Intel generation would correspond to 32nm, but that was the node discussed.
That would be the venerable 2600K of Sandy Bridge. It idled at five watts.
Sigh.
There are things like PMICs that get little to no benefit from moving to 28nm. I doubt that will happen. But there are also plenty of designs that are stuck on an older mature node that would get some benefit but have no financial incentive to move, or that require some specialty node that is not on offer, which TSMC is currently working on (contrary to comments here suggesting they don't give a toss about it).
So the whole thing is basically about balancing fab capacity. And there is no better time to do it. You are either stuck waiting for capacity on an old node, or you move to a 2Xnm node where new GigaFabs are being built and capacity planning is much better. Do you want your $thousands to even $millions product to be on hold because a (what used to be) $2 chip can't be fabbed?
It reminds me of how this backwards way of thinking seems to have infested everyone's heads; that stuff has to be, or make use of, the very latest technology available, or it will be useless junk that barely works.
There was a lot of that talk about the Newport Wafer Fab being sold to China. I remember it produces in-demand automotive stuff, but not the process size(s), if that's even relevant with the different technology. Shame I can't find anything about it on http://youknowsit.co.uk/ innit.
Every infotainment system I have ever used has been useless junk that barely worked.
Dumb question: if something is being manufactured on 60nm today, can you not just take the same design and start manufacturing it on 28nm? Or do you need to literally redesign the entire circuit?
An oversimplified ELI5 version is that simply shrinking the size changes the relative electrical characteristics of the pieces of the circuits which may cause the chip to no longer function correctly. This is partly because of physics, since making components of the circuit smaller and smaller affects its electrical properties and increases unwanted electrical effects, and partly because smaller processes require changes in how the chip is made which, in turn, changes the electrical properties of circuit components even if they are the same shape and size as the original.
In almost all cases, switching processes means a complete redesign. 28nm vs 60nm isn't just "we can draw smaller parts now". Lots of other things (silicon doping levels, choice of metal for the routing layers, operating voltage levels, thickness of the insulating layers, ...) change too, requiring design changes to keep the old design working the same way it used to.
A very small set of processes allow one way reuse e.g. you can build a tsmc28hpm design directly in the tsmc28hpc process (but you can't build a 28hpc design in the 28hpm process) - but, for instance, neither of those is directly compatible with the tsmc28hpc+ (note the 'plus' in the name) process. And all 3 of those have the same feature size and are made by the same foundry.
Also, 28nm pays a huge price in mask complexity etc over 40/65nm. So its cost per area is much higher. This works out to a win if your area gets smaller... or a massive loss if it doesn't. And if you just draw the same features with a different process, guess what, your chip cost just doubled (or whatever it is, depending on the particular processes) on the "more cost effective" process.
Chip designs have multiple layers of abstraction to represent a "circuit". Moving from 60nm to 28nm, your RTL is probably fine, but the physical layout will need some reworking as the transistors have different characteristics, SRAM different latencies, etc. It could be more cost-effective to also rework some of your RTL, depending on your volume.
IBM introduced copper interconnects in 1997; the designs before that date were aluminum. IBM is now doing the same thing with "gate-all-around" that will replace FinFET, which itself replaced planar.
You can't take a tube radio and manufacture it with opamps without substantial design changes.
Samsung is starting to use GAA right now, and TSMC will use it for the next process node.
https://www.theregister.com/2021/05/06/ibm_2nm_semiconductor...
https://www.ibm.com/ibm/history/ibm100/us/en/icons/copperchi...
No, you'd need to invest a lot of labor to port the design over. Factors like area, power, and gate voltages can be completely different even between technologies on the same node (for example, GlobalFoundries 28nm is likely completely different from TSMC 28nm, aside from the feature size).
You need to redesign the entire circuit. If it is completely standard digital (e.g. no RAM, no Flash, no 3V IO, no ESD structure), then you can likely port it without too much pain. However, you still have to change the entire package/bond pattern, and completely requalify the part with a customer who sees no advantage. Minimum port/design cost $1M and expect mask fees of at least $2M.
So at 30% margins on a $0.33 part, you can expect to break even 30M sales later!
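That break-even math, as a sketch (all figures from the comment above):

```python
# Break-even volume for a node port, using the parent's figures.
port_cost = 1_000_000   # minimum port/design cost ($1M)
mask_cost = 2_000_000   # mask fees (at least $2M)
unit_price = 0.33       # selling price per part ($)
margin = 0.30           # gross margin

profit_per_unit = unit_price * margin                  # ~$0.10/part
breakeven = (port_cost + mask_cost) / profit_per_unit
print(f"break even after ~{breakeven / 1e6:.0f}M units")  # ~30M units
```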
For memory, you always need to redesign the part.
Memory is a delicate balance between "we need to store the data strongly" and "we need to write the data quickly" and "make it small".
Memory is one of the things that fell off of Moore's Law quite a while ago.
Neat trivia: Intel recently backported a microarchitecture to a bigger node to cope with fab issues at the time. The 11th gen "Rocket Lake" parts are a 14nm backport of the 10nm Ice Lake processors.
From what I understand, doing so was no small effort and is considered a strong triumph for the team that accomplished it.
24 hours before Raspberry Pi announce the Pico W on 40nm...
Why don't they find a way to move these workflows to developing economies that are trying to increase their exposure to chip manufacturing (e.g. India)?
How hard is it to migrate a design from 40nm to 28nm? Can this be automated?
Depends on the design.
Did you throw RTL at a layout engine and let it figure it out? Pretty damn close to automated and could get to automated with a little upfront elbow grease by a company specializing in such things.
Heavy analog design? It's going to be a lot more work.
Maybe my intuition is completely wrong here, but would the analog case be simplified if the target node+technology was chosen to have trace widths exactly 1/2 the size of the node being transitioned from?
I.e. the traces would have essentially the same standing-wave tuning requirements when modelled as waveguides; would catch a harmonic of the original frequency when acting as antennae; etc.
Analog isn't just RF; PHYs for weird protocols are a giant component of the space, as well as power monitoring/management. The changes in how voltage/resistance/capacitance/etc. work at each node for a given layout are the heavy lift.
Additionally, you very, very rarely have the antenna on chip, and the analog bits even for RF are more signal conditioning that isn't typically modeled like waveguides, but instead more like those old analog plug board computers, simply integrated onto a chip.
> Additionally, you very, very rarely have the antenna on chip
I didn't mean that there would be components intentionally serving as antennae in a design; more that you might be choosing analog trace lengths in e.g. a modem, or SDR ADC, to minimize harmful analog-domain interference at your bus frequency — i.e. to increase SNR, you're trying to make your traces be as little like an antenna as possible for the frequency bands they're carrying signal in, because you can't just band-pass that interference away.
The nice thing about shrinking by half, in such designs (I would think) is that if you've already "tuned" your trace paths to a quiet band (for the country the component is being licensed in), then the harmonic frequencies of that band will also be quiet. Otherwise the band's fundamental frequency wouldn't be considered quiet!
(See also: why the unlicensed commercial-use spectrum was allocated to 2.4GHz, and then to 5GHz. 2.4GHz is an obvious choice, already useless for long-range communication due to water in the atmosphere; the other is its equally-useless first harmonic. But the great thing about choosing the first harmonic in particular, is that transmitting at 5GHz isn't putting short-range harmonic noise onto any lower bands that weren't already noisy due to existing commercial use of the fundamental frequency; so you won't suddenly find your other-band devices working worse in the presence of 5GHz transmitters than they already worked due to 2.4GHz transmitters.)
> I didn't mean that there would be components intentionally serving as antennae in a design
Not an antenna per se, but RF ICs often contain silicon inductors tuned for the operating frequency of the transceiver. Shrinking those down would retune the circuit for a higher frequency.
It's not the traces that are the issue but the transistors and resistors. As h_fe goes up new parasitic modes have to be limited. Resistor geometry changes as do the values. Ratiometric design helps but isn't a panacea. And for something like low noise or complex exotic devices you might not even be able to make them with the passes in the new process.
How common are those two classes?
You'd have to ask a fab probably to get real numbers, but the chips that people care about that haven't migrated to a newer node trend towards having at least some mixed analog components, more so than the set of all chips being made. Otherwise a shrink probably would have already made sense, since 28nm isn't even the cheapest node gate-for-gate but is already a bit chonky. It gets cheaper with even smaller nodes.
Depending on the clock frequency of the chip, a change in the length of signal carrying lines (i.e. from one gate to the next) can already result in a phase/timing shift that ruins the circuit. It is not uncommon to have data lanes that are longer than necessary, just to keep signals on different lines in sync.
Even if you ignore all the other complications when switching nodes (and there are a LOT!), this alone prevents simple downscaling of circuits. It is very likely that after downscaling at least part of the interconnects have to be rerouted.
As for automation of that task: it's the traveling salesman problem in disguise. Which means that you CAN automate it, and there exists software for that purpose, but the results are hardly optimal and most likely leave quite a lot of possible performance on the table.
Add to that all the other necessary changes when switching nodes, and it becomes fairly obvious that switching nodes, even if only porting 1:1, is a massive effort that can easily span YEARS.
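A toy illustration of the timing problem (the delay model here is deliberately crude - a lumped gate delay plus an RC wire term growing with length squared - and every number is invented; real static timing analysis is far more involved):

```python
# Does a path still meet the clock period after a naive geometric shrink?
def path_meets_timing(gate_delay_ps, wire_len_um, rc_ps_per_um2, clk_period_ps):
    # Distributed RC wire delay grows roughly with length squared.
    wire_delay = rc_ps_per_um2 * wire_len_um ** 2
    return gate_delay_ps + wire_delay <= clk_period_ps

# Invented "old node" path: 300 ps of gates, 1000 um of wire, 1 GHz clock.
print(path_meets_timing(300, 1000, 0.0005, 1000))  # True (300 + 500 = 800 ps)

# Shrink wires to half length, but thinner wires are more resistive per um,
# so the RC coefficient rises; the same path can stop closing timing.
print(path_meets_timing(250, 500, 0.004, 1000))    # False (250 + 1000 = 1250 ps)
```

The point being: a shrink doesn't uniformly speed everything up, so the router has to redo its work.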
Yes, much more than from 28 to 22 or 16. 40nm+ are planar transistors, and below are FinFETs
The FinFET transition was a bit later than you remember: Intel switched at 22nm, TSMC at 16nm. TSMC's 28nm wasn't their last planar transistor node, but it's a sweet spot of performance vs cost.
TSMC, Open Source your 28nm PDK!
Looks like TSMC is preparing its own grave. When everybody wants to invest in new foundries, they are telling customers to go away.
They're the Apple of foundries.
They focus on high-end, high-margin business and segment their customers accordingly.
There aren't many investing in new >28nm foundries.
India is investing in 28-65nm https://www.deccanherald.com/opinion/indias-semiconductor-mi...
Russia and China may well still be developing fabs for such node sizes as they catch up.
EDIT ADD https://www.techspot.com/news/94233-russia-plans-manufacture... "Russia plans to manufacture chips locally on a 28 nm node by 2030"; currently they are on 90nm.
I wonder if it would make sense to keep producing > 28nm with smaller margins in order to make it harder for this competition to catch up and keep a large moat.
If TSMC stops 28+ within a few years, nobody will need it by 2030 with such a huge gap in supply.