Memory Supercycle: How AI’s HBM Hunger Is Squeezing DRAM (and What to Own)


elongated_musk


In the scramble to feed AI accelerators, memory makers are prioritising HBM capacity, starving commodity DDR5/LPDDR and even some NAND controller SKUs. Distributors report double-/triple-ordering and inventory weeks have plunged. DRAM prices and producer margins are spiking—classic shortage dynamics—and for a stretch even “plain vanilla” memory could out-earn HBM on capital efficiency. History, however, says shortage cycles revert fast. This piece unpacks what is happening inside fabs, who benefits or loses, three scenarios for 2025–26, and the cleanest ways to express the view while managing cyclical risk.

I. Opening

The headlines belong to high-bandwidth memory (HBM), but the profit shift may first appear where few were looking: commodity DRAM. As hyperscalers lock in years of advanced memory supply for AI needs, wafer starts and packaging capacity are quietly migrating away from vanilla DDR. The result looks like a textbook shortage—prices up, inventories down, purchasing managers double-ordering in panic (Reuters, 2025). A mid-tier server OEM recently quipped that they “can’t source DDR5 at any price” as lead times stretched from weeks to months. The only question for portfolios is whether this is a genuine supercycle or just a short, sharp squeeze.

II. What Just Changed

A massive capacity reallocation is underway: memory manufacturers have shifted more of their production lines to HBM for AI accelerators, which in turn strains supply of standard memory chips. HBM devices not only consume extra process steps (through-silicon vias, wafer thinning, chip stacking) but also soak up disproportionately more wafer capacity per bit. In fact, producing a given amount of HBM can require roughly three times the wafer output of conventional DRAM (TrendForce, 2025), owing to the die-area overhead of TSVs and yield losses in stacking and assembly. This focus on high-margin HBM (and latest-gen DDR5 for servers) is starving legacy products – e.g. DDR4 and certain flash controllers – as fab time and advanced substrates are diverted to AI-centric chips. Assembly/test bottlenecks (for advanced packaging and multi-die stacking) further crowd out standard module production.
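The arithmetic behind that reallocation is worth making explicit. A minimal sketch, using the ~3× wafer-per-bit penalty cited above but purely illustrative figures for industry wafer starts and the HBM share (both are assumptions, not sourced data):

```python
# Back-of-envelope: how shifting wafer starts to HBM shrinks commodity supply.
# The ~3x wafer-per-bit penalty for HBM is the TrendForce figure cited above;
# total wafer starts and the HBM share are illustrative assumptions.

def commodity_supply_impact(total_starts, hbm_share, wafer_penalty=3.0):
    """Return (commodity bit loss, HBM bit-equivalents gained), both as
    fractions of the pre-shift commodity bit supply. One commodity wafer
    is treated as one bit-unit."""
    commodity_loss_pct = hbm_share                 # wafers removed outright
    hbm_bit_equiv_pct = hbm_share / wafer_penalty  # what those wafers yield as HBM
    return commodity_loss_pct, hbm_bit_equiv_pct

loss, gained = commodity_supply_impact(total_starts=1_500_000, hbm_share=0.20)
print(f"Commodity DRAM bits: -{loss:.1%}; HBM bit-equivalents: +{gained:.1%}")
# Shifting 20% of starts removes 20% of commodity bits but yields only
# ~6.7% of bit-equivalents as HBM, so total DRAM bit output falls ~13%.
```

The asymmetry is the whole story: every wafer that migrates to HBM subtracts a full wafer of commodity bits but adds back only a third of a wafer's worth, so aggregate bit supply contracts even though headline capacity is unchanged.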

On the demand side, device makers and cloud providers are scrambling to secure inventory. A replacement cycle in data centres and surprisingly resilient PC/phone demand have coincided with the AI boom, exacerbating the shortage of non-HBM memory (Reuters, 2025). Reports from distributors indicate safety stockpiling and “double/triple ordering” reminiscent of past manias (Reuters, 2025). Key metrics underscore the sudden tightness: mainstream DRAM contract prices jumped by nearly 18–23% in Q4 2025 alone (TrendForce, 2025), and spot prices have soared – by September, spot DRAM was almost 3× higher year-on-year, after barely rising last spring (TechInsights, 2025).

Figure: DRAM spot price-per-bit year-on-year growth in 2025. After a modest uptick in early 2025, memory prices went vertical over the summer as AI-driven demand collided with constrained supply. By late 2025, spot DRAM prices were nearly +187% YoY, a surge not seen in years. Such rapid price inflation reflects a classic supply squeeze, with buyers bidding up scarce bits amid dwindling inventories. (TechInsights, 2025)

Meanwhile, the average inventory of DRAM at suppliers has collapsed to only ~8 weeks (current quarter) from ~10 weeks a year ago – and from a glut-level 31 weeks in early 2023 (Reuters, 2025).

Figure: Quarterly DRAM inventory levels (weeks of supply). After peaking in early 2023, global DRAM stockpiles have been drawn down sharply – to single-digit weeks by late 2025. This drawdown reflects both deliberate production cuts during the prior glut and the recent demand surge. With such thin buffers in the channel, any extra orders translate swiftly into shortages and price hikes. Low inventory also means memory firms can run factories hot with minimal risk of oversupply – at least until new capacity comes online. (TechInsights, 2025)

III. The Economics — Why Non‑HBM Can Win (for now)

Paradoxically, the “boring” DRAM segments may enjoy the best near-term profitability. HBM is technically complex and capital-intensive: each HBM bit requires more lithography, more testing, and expensive high-density substrates, driving a higher cost per bit. In a tight market, however, a simple DDR5 chip can see its price (and margin) climb much faster relative to its cost base. Indeed, as suppliers pour capex into HBM, the sudden scarcity of standard DRAM has let them command premium pricing on legacy products. Operating margins are responding in kind. By Q3 2025 Samsung was earning ~60% margin on HBM – but around 40% even on commodity DRAM, a figure that is rising fast (Reuters, 2025). Analysts now project that if current trends run a few more quarters, plain DDR5’s profitability could overtake that of HBM by early 2026. In other words, for all the hype, HBM’s absolute dollar profit per chip is higher – but on a per-factory or ROI basis, generic memory might briefly deliver a higher return on capital.
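The return-on-capital argument above can be sketched numerically. All inputs here are illustrative assumptions (the article does not disclose per-wafer revenue or capex figures); the point is the mechanism, not the specific numbers: HBM's higher margin per bit can be offset by its ~3× wafer penalty and much higher capex per unit of capacity.

```python
# Sketch: return on capital of commodity DDR5 vs HBM, per wafer start.
# All figures are illustrative assumptions, not sourced data. The mechanism:
# HBM earns more per wafer, but needs far more capex per wafer of capacity.

def annual_roi(revenue_per_wafer, op_margin, capex_per_wafer_capacity):
    """Annual operating profit per wafer start divided by the capex
    required to add one wafer/month of capacity (simplified)."""
    return revenue_per_wafer * op_margin / capex_per_wafer_capacity

# Commodity DDR5 on a largely depreciated line: modest revenue, a spiking
# ~40% margin, low incremental capex (assumptions).
ddr5 = annual_roi(revenue_per_wafer=8_000, op_margin=0.40,
                  capex_per_wafer_capacity=15_000)

# HBM: several times the revenue per wafer and a ~60% margin, but far
# higher capex for TSV, stacking, and test capacity (assumptions).
hbm = annual_roi(revenue_per_wafer=28_000, op_margin=0.60,
                 capex_per_wafer_capacity=90_000)

print(f"DDR5 ROI: {ddr5:.1%}/yr vs HBM ROI: {hbm:.1%}/yr")
```

Under these assumed inputs the depreciated DDR5 line edges out HBM on return on capital even though HBM generates more absolute profit per wafer, which is exactly the "boring memory wins for now" dynamic described above.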

This phenomenon has precedent. In past shortage cycles, peak earnings often accrue to the laggards – the older nodes or less “sexy” products that suddenly become scarce goldmines until capacity catches up. A DRAM line making pedestrian PC DIMMs can, in a shortage, enjoy windfall ASPs with depreciated equipment. Meanwhile the cutting-edge product (HBM) carries a huge investment load and cannot further raise prices without demand pushback. The current dynamic hints at such a scenario: HBM3e still commanded ~4× the price of DDR5 in mid-2025 (TrendForce, 2025), but that gap is narrowing as DDR5 spikes. For now, tight mix (favoring server modules) and higher pricing are boosting non-HBM memory profits faster than expected – a welcome respite for diversified suppliers. Caution: These fat margins won’t last forever; history shows that memory markets tend to self-correct once new supply comes on stream or hoarding unwinds. But for the next couple of quarters, plain DRAM is riding high on the coattails of AI.

IV. Winners & Losers

Beneficiaries: The obvious winners are the memory producers themselves – especially diversified DRAM/NAND vendors with the agility to swing capacity. Firms like Samsung, SK hynix, and Micron, which can allocate wafer starts between HBM and commodity bits, are positioned to maximise whatever is in short supply. (Notably, Samsung – a relative laggard in HBM tech – is ironically reaping outsized gains from its heavy exposure to “conventional” DRAM during this shortage (Reuters, 2025)). Memory makers’ stock prices have rallied accordingly in 2025, anticipating a profit upcycle. Beyond the chipmakers, the capital equipment and materials supply chain is another clear beneficiary. Wafer Fab Equipment (WFE) companies tied to DRAM/HBM production – for example, suppliers of etch and deposition tools for creating DRAM cell arrays and TSV interconnects, or makers of metrology and high-precision bonding equipment for stacking memory dies – see strong order flow. As memory firms scramble to add capacity for DDR5 and HBM (often requiring cutting-edge 1β/1γ process technology and advanced packaging), they are making big purchases of tools. Similarly, assembly/test equipment vendors (e.g. thermal compression bonders, high-speed memory testers, and 2.5D packaging kit) are in demand to support the more complex HBM packaging steps. In one illustration of capital intensity, expanding DDR5 fab output by just 10k wafers/month can cost on the order of $10 billion in new equipment (Company guidance, 2025) – implying a boon for tool makers if multiple projects get underway.

Materials and components suppliers should not be overlooked. Semiconductor materials companies providing high-purity chemicals, gases, photoresists, and CMP consumables will benefit from higher fab utilisation and new expansions. Niche segments like specialty glass and quartz (used in etching equipment), or makers of silicon interposers and substrates (for HBM and advanced packaging), face booming orders – and in some cases, their capacity constraints become part of the bottleneck. For instance, ABF substrate lead times have lengthened as high-end GPU and HBM modules compete for limited substrate output. All told, the upstream ecosystem that supports memory manufacturing finds itself with fuller order books and better pricing power than it’s seen in years.

Pressured: On the losing side, parts of the electronics value chain are feeling the squeeze from surging memory costs and patchy supply. Server OEMs and module assemblers are one obvious group under pressure. Companies that build servers or sell memory modules (DIMMs) to end-users now face both higher input prices and difficulties securing enough chips. Their margins get pinched because contract arrangements often fix server prices in advance, yet memory modules suddenly cost double. We’ve already seen some module vendors delay new product launches that were slated for late 2025, pushing them into 2026 in hopes that pricing stabilises (trade press, 2025). Likewise, PC and smartphone makers reliant on specific DRAM or LPDDR grades have had to scramble. If a particular timing bin or legacy DDR4 chip becomes unavailable, OEMs must either pay exorbitant spot rates or redesign devices for alternatives – potentially delaying product cycles. Some consumer electronics brands are outright raising retail prices or cutting specs; for example, UK-based Raspberry Pi announced price increases on its single-board computers after memory costs rose ~120% YoY. In general, any device vendor without deep supply agreements risks memory shortages causing production hiccups or margin erosion.

Second-order effects are rippling outward too. Cloud service providers – the hyperscalers driving AI demand – face a juggling act for capital spending. Each AI server requires not just expensive GPUs but also large pools of memory (both HBM and DDR); a shortfall in one can stall deployments of the other. There are murmurs that some cloud projects are gated by memory availability – essentially, AI accelerator boards waiting on sufficient HBM stacks or DDR5 modules. This could force clouds to defer data center rollouts or redirect capex, with knock-on effects for their suppliers. Another knock-on: foundries and chip designers are getting caught in the crossfire. As memory gobbles up more of electronics BOM costs, hardware makers are pressuring other component suppliers (like CPU, GPU, and ASIC vendors) for price concessions to offset skyrocketing memory bills (SMIC, 2025). This means logic chip makers with less bargaining power – including some fabless firms and smaller foundries – may have to accept lower prices/margins on their products to keep overall system costs in check. Additionally, the specialist packaging providers (e.g. TSMC’s CoWoS or OSATs doing HBM stacks) are running at capacity – any bottleneck or delay there can slow down deliveries of finished AI modules, frustrating everyone downstream. In short, while memory makers and their suppliers celebrate, the pain is felt by those who buy memory or depend on its timely availability.

V. How Long Can This Last? Supercycle vs. Classic Shortage

Is this the start of a memory supercycle or just a flash in the pan? Bulls argue that AI’s secular demand and structural lags in capacity investment could keep memory tight well into 2026. Indeed, suppliers themselves are voicing unusually prolonged optimism: SK hynix, for instance, cites a “structural constraint” on DRAM supply (as production shifts to HBM) and expects a prolonged up-cycle, having already presold its entire 2026 output (Company guidance, 2025). With hyperscalers signing multi-year supply agreements and content per AI server rising inexorably, one could imagine a longer-than-normal cycle, perhaps reinforced by geopolitical uncertainties (export controls, China’s push for self-sufficiency, etc.). Additionally, exogenous wildcards – e.g. a major earthquake in a production hub or a power outage (events that have rocked memory markets before) – could prolong the shortage by suddenly knocking out capacity. Some industry observers also note that government export restrictions (such as curbs on advanced chip sales to certain countries) might inadvertently keep supply tight in allowed markets, as manufacturers allocate inventory carefully. These factors provide a structural backdrop that could support higher-for-longer pricing if multiple things go wrong for supply or right for demand.

History, however, urges caution: memory shortages have a habit of burning hot and fast. The term “supercycle” has been thrown around, but skeptics like long-time analyst Jim Handy wryly note that it’s overused – this looks more like a classic shortage that usually lasts a year or two (TechInsights, 2025). Already we see the hallmark signs: double-ordering, inflated backlogs, and a parabolic price rise – dynamics that often precede a reversal once customers slam the brakes or new fabs ramp up. The typical DRAM cycle from trough to peak to mean reversion has historically been ~2–3 years. In this case, the trough was late 2022/early 2023 (when inventories were bloated and prices crashed). We’re now a good year into the upswing. Supply response is on the way: memory firms are guiding higher capex for 2026, new production lines (and nodes like 1β, 1γ) will add output, and crucially, HBM4 is on the horizon by 2025–26. The next-gen HBM could ease some pressure by delivering more GB per stack (thus more bits per wafer). Furthermore, today’s demand drivers could cool: hyperscalers might moderate spending if AI workloads don’t monetize quickly, and device makers that over-bought in panic could suddenly find themselves with excess stock (the infamous double-order hangover). There’s also the risk of policy intervention – for example, governments could release strategic stockpiles or negotiate industry accords to stabilise prices (not unheard of in memory history).

So we envision two broad possibilities: a true supercycle where structural demand keeps outpacing supply for an extended period (18–24+ months of tightness), versus a short-lived squeeze that corrects within ~6–12 months as the market catches up. Which prevails depends on those signposts we discuss next. Notably, even the bullish camp concedes that eventually memory is cyclical; the debate is about timing. TechInsights, for one, forecasts an industry downturn by 2027, implying this spike will be long over by then. Our take: the present shortage has some unique drivers (AI, geopolitics) that could extend it, but odds are it will follow a familiar script – sharp up, then sharp down. Prepare accordingly.

VI. Three Scenarios with Signposts

Three scenarios bracket the range of outcomes; each comes with signposts to monitor.

  • Bull Case (extended tightness): Memory remains tight through mid-2026. AI demand continues to outstrip all forecasts, with successive waves of GPU deployments (for both training and inference) pushing HBM and DDR5 consumption higher. Despite aggressive capex, new capacity from Samsung, SK hynix, and Micron only trickles in late-2026, keeping supply/demand off-kilter. DRAM prices climb further in early 2026 (albeit at a slowing pace), and then plateau at elevated levels. In this scenario, DRAM average selling prices (ASPs) could rise another ~20–30% over the next 2–3 quarters, and stay near peak for a year. Producer margins would remain near record highs well into 2026, and memory stocks would likely outperform broader semis. Importantly, even as HBM supply improves gradually, commodity memory still holds value — e.g. DDR5 prices may ease mildly but not crash. This bull case might see the memory market entering a “supercycle” akin to 2017–18, but on steroids. Signposts for Bull: Inventory stays extremely low (single-digit weeks) at both suppliers and OEMs; no meaningful order cancellations; hyperscalers keep expanding AI capacity unabated (and perhaps sign more multi-year purchase agreements locking in supply). If we see, say, a major cloud prepay for 2027 memory, or lead times for critical parts extend further (e.g. 6+ months for certain DDR5 modules), it supports the bull narrative of sustained shortage.
  • Base Case (moderating from late-2025): A more balanced outcome where the current shortage peaks by end-2025 and gradually eases through 2026. Under this scenario, we might already be seeing the steepest price increases now; by H2 2025 or early 2026, additional capacity (new fab tech, debottlenecked packaging lines, ramp of HBM4) starts making a dent. Demand growth, while robust, begins to normalize: hyperscalers digest what they’ve bought, and some AI projects get delayed (or achieve more memory efficiency via model optimisations). DRAM ASPs could flatten or inch down by mid-2026, though likely remaining above pre-boom levels. The market avoids a hard crash — instead we get a soft landing where pricing gently corrects as supply catches up by late-2026. Margins for memory firms retreat from peak but stay healthy (say, fall from 60% back to 40% range). This base case essentially views the current cycle as a typical two-year upswing that tops out within 12–18 months of the trough. Signposts for Base: Watch for inventory stabilisation. Also, any comments from memory executives about “demand adjusting” or capex being pulled forward into 2025 would hint that they foresee moderation.
  • Bear Case (overshoot and slump): The most negative scenario sees today’s shortage flipping into oversupply by 2026. This could happen if demand was pulled forward and then drops: for instance, if many data center projects front-loaded memory orders in 2025 but then stall out, or if global economy weakens, hitting PC/phone sales. Meanwhile, memory makers might over-invest — by mid-2026 a wave of new HBM and DRAM capacity could come online just as orders slow, leading to a glut. In this bear case, DRAM prices would likely roll over sharply sometime in 2026, potentially reversing a large portion of the 2025 run-up. We could see high-teens percentage price declines sequentially (as often happens when inventory corrects). Margins would compress quickly, and some less diversified players might swing back to losses by 2027. Essentially, this would be a classic boom-bust: the “supercycle” proves a head-fake, and memory enters a downcycle by late 2026. Signposts for Bear: Look for early warning signs like order cancellations or a rising cancel-to-book ratio at suppliers — if many orders in late-2025/early-2026 get nixed, a demand air-pocket is forming. A rapid increase in channel inventory (e.g. if distributor weeks-of-stock jump unexpectedly) would also foreshadow a glut. Additionally, any sudden policy moves — say the US restricting AI data center growth or China unexpectedly ramping its domestic DRAM output — could flip the script bearish. If HBM4 yields are excellent and those stacks flood the market, that could quickly alleviate HBM constraints and spill over to commodity DRAM oversupply. In short, the bear case comes to pass if the industry overestimates medium-term demand or underestimates its own capacity to expand.
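The three scenarios above imply quite different ASP trajectories. A minimal sketch of what each path might look like as a quarterly price index — the quarter-on-quarter growth rates are illustrative assumptions chosen to be roughly consistent with the ranges in the text (bull: further gains then plateau; base: flatten and drift down; bear: sharp sequential declines), not forecasts:

```python
# Sketch: quarterly DRAM ASP index paths under the three scenarios.
# QoQ growth rates below are illustrative assumptions, not forecasts.

scenarios = {
    # quarter-on-quarter ASP changes, Q1'26 .. Q4'26
    "bull": [0.12, 0.10, 0.02, 0.00],     # further gains, then plateau
    "base": [0.08, 0.02, -0.02, -0.04],   # peak early, soft landing
    "bear": [0.05, -0.15, -0.18, -0.15],  # overshoot, then slump
}

def asp_path(qoq_changes, start_index=100.0):
    """Compound a list of QoQ changes into an index path (start = 100)."""
    path, level = [], start_index
    for g in qoq_changes:
        level *= 1 + g
        path.append(round(level, 1))
    return path

for name, qoq in scenarios.items():
    print(f"{name:>4}: {asp_path(qoq)}")
```

Even this toy model makes the risk asymmetry visible: the bear path gives back the entire assumed 2026 gain and then some within three quarters, which is why the signposts below matter more than the point forecasts.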

Common signposts to monitor: Regardless of scenario, a few key indicators will tell us which path we’re on. These include:

  • DRAM pricing trends — both spot and contract. If price increases are still accelerating into early 2026, that favours the bull case; flattening or volatility suggests base/bear.
  • Inventory levels — track weeks of inventory at major suppliers and in the distributor channel. Sustained <10 weeks points to continued tightness; a rise back above ~12–15 weeks would indicate easing.
  • Capex and procurement announcements — e.g. hyperscaler memory capex disclosures or new AI cluster rollouts (with their memory content). If cloud giants signal a pause or completion of big AI builds, demand might slow. Conversely, any news of multi-year supply deals (e.g. OpenAI or AWS signing 3-year HBM contracts) would support a longer cycle.
  • New tech ramps — watch the HBM4 timeline and yield news, as well as any novel packaging breakthroughs that increase output. Successful early ramp of next-gen memory in 2025–26 could relieve shortages (bearish indicator).
  • Order book dynamics — keep an eye on lead times and backlog at memory module makers and controller IC vendors. If lead times start shrinking or if customers begin to delay orders (or double-orders get exposed), the tide is turning. A spike in cancel-to-book ratios or a sudden fall in pricing power in late 2025 would foreshadow the inflection.
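The signposts above can be folded into a crude monitoring heuristic. The thresholds mirror the list (e.g. <10 weeks of inventory signals tightness, above ~12 weeks signals easing), but the weights and scoring scheme are illustrative assumptions, not a calibrated model or a trading rule:

```python
# Crude signpost scorer for the three scenarios. Thresholds mirror the
# bullet list above; the weights are illustrative assumptions.

def classify_cycle(spot_qoq_pct, inventory_weeks, cancel_to_book, lead_time_trend):
    """Return 'bull', 'base', or 'bear' from four monitored indicators.

    spot_qoq_pct: latest quarter-on-quarter spot DRAM price change (%)
    inventory_weeks: weeks of inventory at suppliers/channel
    cancel_to_book: order cancellations divided by new bookings
    lead_time_trend: 'extending', 'stable', or 'shrinking'
    """
    score = 0
    score += 1 if spot_qoq_pct > 10 else (-1 if spot_qoq_pct < 0 else 0)
    score += 1 if inventory_weeks < 10 else (-1 if inventory_weeks > 12 else 0)
    score += -2 if cancel_to_book > 0.1 else 0  # cancellations weigh heavily
    score += {"extending": 1, "stable": 0, "shrinking": -1}[lead_time_trend]
    if score >= 2:
        return "bull"
    if score <= -1:
        return "bear"
    return "base"

# Late-2025 readings from the text: prices up ~20% QoQ, ~8 weeks of
# inventory, negligible cancellations, lead times still extending.
print(classify_cycle(20, 8, 0.02, "extending"))  # -> bull
```

Plugged with the article's late-2025 readings, the heuristic unsurprisingly lands on "bull"; its real use would be watching when the same inputs start flipping the output to "base" or "bear".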

VII. Portfolio Playbook

How can investors express this memory supercycle (or squeeze) theme while managing the inherent cyclicality? A few strategies emerge:

Go long the producers — but selectively. The clearest winners are the big memory manufacturers, yet one must be mindful of their differences. The ideal candidates are those with a balanced portfolio of HBM and commodity DRAM (and NAND) who can ride the current margin boom but also pivot as trends change. For instance, a company heavily skewed to HBM might not see as much immediate upside (since HBM supply is constrained by yields/capex), whereas one with plenty of older-node DRAM capacity can cash in now. However, pure commodity players could suffer most when the cycle turns. Thus, prefer diversified giants that enjoy the best of both: surging DDR5 margins today and strong HBM roadmap for tomorrow. These firms also typically have the scale and balance sheet to weather volatility.

Equip for the boom. A basket of semiconductor equipment stocks focused on memory is another avenue. As noted, this cycle is spurring fresh capex in both front-end fab and back-end packaging/test. Consider exposure to the tools that enable DRAM scaling and HBM manufacturing. This includes etch and deposition equipment (critical for modern DRAM layers and for drilling TSV holes in HBM), lithography tools (especially as DRAM moves to EUV for 1γ and beyond), metrology/inspection (to control these complex processes), and assembly/test machinery (for stacking and verifying HBM cubes, high-speed memory testers, etc.). Many of these names tend to do well when memory makers loosen their purse strings. An ETF or a custom basket overweighting memory-centric equipment could capture this upside while diversifying single-stock risk.

Materials and upstream picks. Similarly, one can look at key materials suppliers that benefit from rising volume of memory production. For example, companies making photoresists, specialty chemicals, or wafers used in DRAM fabrication might see increased orders. Others supplying packaging materials — such as the ABF substrates and bonding materials needed for HBM — stand to gain pricing leverage in a constrained market. Ensure these picks serve both the HBM and standard memory segments to cover all bases. Some firms in gases or silicones cater to logic and memory alike, providing a broader hedge.

Throughout, stay vigilant on the classic cycle indicators: monitor inventory data, ASP momentum, and even equipment order backlogs (leading indicator of oversupply if they go parabolic). The goal is to ride the uptrend but exit before the party ends. Finally, set a time horizon and thesis checkpoint — e.g. “this is a 12-month trade unless X or Y happens sooner.” Treat it as a tactical play that could evolve into a secular theme, but don’t marry the story. Memory is still a cyclical business at heart, no matter how AI-charged the narrative.

VIII. What to Watch Next Quarter

As we move into upcoming quarters, keep a close eye on real-time indicators that will show whether the memory squeeze is intensifying or starting to unwind:

  • Memory price indices: Follow DRAM spot and contract price trackers (e.g. TrendForce/TechInsights monthly reports). Are prices still climbing double-digits quarter-on-quarter, or leveling off? A plateau or decline in pricing would be the first sign that balance is improving.
  • Inventory levels: Watch the reported inventory weeks at both memory producers and distributors. Many manufacturers give inventory metrics in earnings; similarly, channel checks on distributor inventory can be telling. A continued fall into, say, <8 weeks suggests ongoing shortage, whereas a stabilization or uptick could indicate the worst is past.
  • Hyperscaler signals: Any announcements by major cloud players about new AI data center deployments — especially details on memory content per server or total capex — will be crucial. For instance, if Google or Microsoft say they’re doubling HBM per TPU pod, that’s bullish demand; if they announce a “pause” on expansion, that’d be bearish. Also, look at contract durations they sign for memory procurement.
  • HBM4 and new tech updates: Stay alert for news on HBM4 yields and volume. If SK hynix or Samsung report a successful qualification of HBM4 with significantly higher capacity (or better yield), it means relief is on the horizon. Likewise, updates on advanced packaging capacity (e.g. TSMC CoWoS expansion) will matter — easing packaging bottlenecks could unleash more supply of AI chips (and thus require more memory, a double-edged sword).
  • Lead times & order behavior: Check in with OEMs and component distributors about lead times for memory modules, SSD controllers, etc. If lead times are still extending (e.g. pushing 40+ weeks for certain parts), the crunch is alive. Conversely, any reports of order cancellations, rising cancellation ratios, or dealers suddenly having excess stock of previously scarce chips would be red flags that the cycle is turning.
  • Manufacturer guidance: Finally, the tone of memory maker guidance next quarter will be telling. Listen for comments on supply plans, capital expenditures, and customer behavior. If companies start hinting at “improving supply by Q3” or “customers becoming cautious,” it might be time to re-evaluate long positions.

In essence, the next quarter or two will provide critical data to confirm whether we’re in for a prolonged ride or approaching the inflection sooner.

IX. Conclusion

The AI hardware boom is swiftly morphing into a memory boom — but investors should remember that memory is still memory: cycles matter. Today’s HBM land grab has created an unexpected bonanza in the once-sleepy DRAM market, with shortages driving up prices across the board. It’s a story of the leading edge dragging up the trailing edge — when HBM steals the limelight, plain DRAM steals the margin. However, as in every past cycle, rising prices sow the seeds of their own undoing via new capacity and tempered demand. The prudent play is to profit from the squeeze while keeping a vigilant eye on the exit. In the end, whether this is a true supercycle or just a super-sized head-fake, the keys will be the same signals that ended all previous memory booms.

References:

  • Reuters (2025). “Chip crunch: how the AI boom is stoking prices of less trendy memory.” (Oct 20, 2025).
  • Reuters (2025). “Nvidia-supplier SK Hynix bets on chip ‘super cycle’ after booking record profit.” (Oct 29, 2025).
  • Reuters (2025). “Samsung beefs up advanced chip output after memory chip sales hit record high.” (Oct 30, 2025).
  • Reuters (2025). “SMIC says worries over memory shortage prompt customers to hold back Q1 orders.” (Nov 14, 2025).
  • TrendForce (2025). “Memory Makers Halt Quotes as China Faces ‘Daily Pricing’ — HBM capacity crunch.” (TrendForce News, Oct 27, 2025).
  • TrendForce (2025). “DRAM Quotes Shift to Monthly as AI Demand Strains Supply (4Q25 update).” (TrendForce News, Nov 3, 2025).
  • TechInsights (2025). Memory Pricing Report — October 2025. (TechInsights data cited via media).