For most of computing history, hardware scarcity has been the rule rather than the exception, from early transistor and memory constraints to repeated DRAM, storage, GPU, and chip shortages driven by disasters, supply chain fragility, and sudden demand surges. Each wave forced users to delay upgrades, pushed manufacturers into allocation and redesign, and quietly shaped software toward efficiency, portability, and tolerance for uneven hardware. Over time, these pressures made owning physical hardware feel risky and unpredictable, while shared infrastructure felt safer. Cloud computing did not win simply because it was cheaper or trendier, but because it absorbed scarcity, pooled risk, and turned hardware shortages into scheduling problems instead of existential ones, even as it introduced new tradeoffs in cost, control, and concentration of power.
DISCLAIMER: This post is a mostly auto-generated summary of a long conversation that happened last weekend between me and the GPT while I was trying to understand how hardware shortages shape software. I am not an expert in the history of computer science or manufacturing, but I write software for a living and, if you haven’t heard the news, right now (December 2025) there’s almost no DDR5 RAM available for purchase.

Some history
In the 1950s and 1960s, scarcity was not an exception, it was the operating condition. Computing was expensive, components were manufactured in comparatively small volumes, and the real constraint was not just performance but the simple fact that memory, storage, and reliable parts were limited enough that engineers designed systems and wrote programs as if every byte and every minute mattered, because they did. This culture of scarcity is easy to forget when a modern laptop arrives with more memory than entire buildings of early machines, but the habits it created have a long afterlife in software.
The first modern shortage that feels familiar to anyone who has watched today’s markets came when memory became a commodity and the market began to swing. In the late 1980s, the DRAM squeeze was so severe that contemporary accounts describe prices that made a megabyte feel like a luxury good, with chip prices reported around $505 per megabyte at the peak, and relief not really arriving until around mid-1989.1 Another period analysis framed the same episode as a price shock that dragged on long enough to reshape purchasing plans, not just for hobbyists but for companies that had to keep shipping systems through allocation and delays.2
The split between OEM shortages and retail shortages was already visible. Large OEMs could negotiate supply, accept allocation, and redesign builds around what was available, while retail buyers were hit in the most visceral way: empty shelves and brutally inflated street prices. The effect on how computers were used was immediate: organizations stretched upgrade cycles, kept more terminals alive with smaller memory footprints, and treated multitasking as a privilege rather than a default. The effect on software was quieter but lasting: developers were pushed toward tighter memory use, more careful caching, and a renewed respect for “good enough” features that did not require ever-larger resident memory, which in practice meant more modular programs, more optional components, and more attention to what happens when the machine is under pressure.
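The caching discipline that these squeezes rewarded is easy to show in modern terms. Below is a minimal sketch, in present-day Python rather than anything from the era, of a size-bounded LRU cache; the class name and cap are my own illustration. The point is that capping entries keeps the resident footprint predictable instead of letting a cache grow with the workload:

```python
from collections import OrderedDict

class BoundedCache:
    """A least-recently-used cache with a hard entry cap.

    Bounding the cache keeps memory use predictable on small machines,
    at the cost of occasionally recomputing evicted values.
    """

    def __init__(self, max_entries):
        self.max_entries = max_entries
        self._data = OrderedDict()

    def get(self, key, default=None):
        if key in self._data:
            self._data.move_to_end(key)  # mark as recently used
            return self._data[key]
        return default

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self.max_entries:
            self._data.popitem(last=False)  # evict the least recently used entry

cache = BoundedCache(max_entries=2)
cache.put("a", 1)
cache.put("b", 2)
cache.put("c", 3)      # exceeds the cap, so "a" is evicted
print(cache.get("a"))  # None: recompute if you still need it
print(cache.get("c"))  # 3
```

The trade is explicit: evicted values may have to be recomputed later, but memory use never exceeds the budget, which is exactly the kind of bargain scarce DRAM forces.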
By the mid-1990s, shortages were less about an entire industry being young and more about modern supply chains discovering how fragile they could be. One example that captures this shift is a 1995 incident in Malaysia, where a fire on Penang Island is described as triggering a power outage so severe that semiconductor factories were shut down for nearly three weeks, a reminder that even a localized accident could ripple outward into worldwide supply conditions.1 For OEMs this translated into schedule risk and hard choices about which product lines to prioritize, while for retail buyers it translated into the same old story: delayed availability and higher pricing for upgrades. On the software side, the lesson was not only “use less memory” but also “expect uneven hardware”, which encouraged compatibility work, more robust fallbacks, and a continuing bias toward designs that could run acceptably on a wide spread of configurations.
The 1999 Taiwan earthquake became another turning point because it collided with an already tense market and pushed price signals into public view. An Associated Press report carried by Deseret News described memory costs moving fast enough that 64 megabytes of RAM, called a typical amount in a $1,000 computer, had gone from about $40 to about $100, with analysts warning it could reach $150 the next month.3 The same report also makes the retail versus OEM split painfully clear, warning that computers might appear on store shelves with less memory at the same prices, or that add-on memory would cost more, if it was available at all.3
Trade publications watching the supply chain up close described how the quake’s effects could spread beyond DRAM into peripherals, and how fears of allocation and price increases could propagate through the entire electronics ecosystem, because panic buying and cautious procurement are themselves a kind of accelerant.4 The software impact here had a distinct late-1990s flavor: developers were already shipping heavier graphical interfaces and richer applications, but a sudden squeeze on RAM pushed users and IT departments back toward pragmatic configurations, and it rewarded software that degraded gracefully, that did not assume abundant memory, and that could still do real work when the machine was loaded.
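That “degrade gracefully” instinct still translates directly into code. Here is a hypothetical sketch, my own example rather than any specific program from 1999: try the fast in-memory path first, and fall back to a streaming pass with a small, fixed working set when memory runs out.

```python
from collections import Counter

def word_counts(path):
    """Count word frequencies, degrading gracefully when memory is tight."""
    try:
        # Fast path: slurp the whole file at once. Assumes ample RAM.
        with open(path, encoding="utf-8") as f:
            return Counter(f.read().split())
    except MemoryError:
        # Fallback: stream line by line. Slower, but the working set
        # scales with the vocabulary, not with the size of the file.
        counts = Counter()
        with open(path, encoding="utf-8") as f:
            for line in f:
                counts.update(line.split())
        return counts
```

Both paths return the same answer; the difference is that the program keeps doing real work when the machine is loaded instead of assuming abundant memory.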
Then came the 2011 Japan earthquake and tsunami, which highlighted a different kind of vulnerability: deep, specialized dependencies in the electronics supply chain. A contemporary industry report warned that disruptions could create significant shortages and rising prices across a wide set of components, naming NAND flash, DRAM, microcontrollers, standard logic, and LCD panels among the areas at risk.5 The shock was also closely tied to specific industrial nodes, including the ecosystem around major semiconductor firms, and the recovery story of Renesas became a case study in how difficult it is to restore highly specialized manufacturing at speed once equipment and infrastructure have been knocked out of place.6
For OEMs, the Japan event was about redesign, substitution, and multi-month procurement planning, because missing a small component can stop an entire product from shipping. For retail buyers, the effects were often indirect: higher prices, slower refresh cycles, fewer discounts, and delayed availability. For software, the meaning was increasingly tied to embedded and industrial computing: when supply is uncertain, product teams value software that can extend the life of existing hardware, support multiple component variants, and keep performance predictable even when the underlying bill of materials shifts.
In 2013, the DRAM market relearned how a single factory can matter. After a fire at SK Hynix’s DRAM fabrication plant in Wuxi, TrendForce reported that the incident caused a month-long halt in production, that global DRAM supply decreased by about 10 percent in a single month, and that commodity DRAM prices had risen by nearly 20 percent since the fire, with the average contract price for 4GB DRAM reaching as high as US$33 in the second half of November.7 TrendForce also noted that SK Hynix’s supply to PC OEMs began showing signs of shortage in November, a line that neatly captures how OEM pain often arrives before retail emptiness becomes obvious.7
In parallel, TrendForce described how the incident inevitably created shortages in graphics DRAM categories such as GDDR3 and GDDR5, which is the kind of downstream effect that users tend to notice only later, when a graphics card that should be ordinary suddenly becomes scarce.8 The computer use impact was again practical: companies delayed upgrades, PC configurations shifted, and data center buyers leaned harder on long-term contracts. The software impact was more subtle but powerful, especially in the cloud era: when memory becomes expensive, teams become more disciplined about memory footprints, caching strategies, and the costs of inefficient abstractions, because those costs multiply at scale.
The next squeeze was less about a disaster and more about a sustained imbalance. TrendForce reported in early 2017 that the contract price of DDR3 4GB modules had risen above US$25, and framed the market as continuing to experience tight supply even during what would traditionally be an off-peak period.9 It also reported that contract prices of server DRAM modules at the start of the first quarter of 2017 had already increased by over 25 percent on average from the same point in the prior quarter, a signal that the pressure was landing hardest where scale and performance demands were highest.10 A separate TrendForce update described consolidated DRAM revenue rising 13.4 percent from the prior quarter, attributing it in part to a roughly 30 percent hike in PC DRAM contract prices, which is another way of saying that the price signal was not a blip, it was systemic.11
For OEMs, this kind of multi-quarter squeeze encourages early negotiations, long-term purchasing agreements, and platform decisions that minimize exposure. For retail buyers, it translates into fewer bargains and the slow realization that the list price of a new PC does not buy what it did a year earlier. For software, it is the period when memory efficiency stops being a niche concern and becomes an economic one: developers are pushed toward better profiling, fewer wasteful layers, and more attention to how applications behave under constrained memory, not because it is aesthetically pleasing, but because it is cheaper to run.
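To make the profiling point concrete: Python’s standard tracemalloc module, for example, reports the peak allocation of a code path, which is the number that capacity planning, and ultimately the DRAM bill, actually cares about. The workload below is invented for illustration.

```python
import tracemalloc

def build_table(n):
    # A deliberately memory-hungry workload: n small tuples in a list.
    return [(i, i * i) for i in range(n)]

tracemalloc.start()
rows = build_table(100_000)
current, peak = tracemalloc.get_traced_memory()  # bytes: (live now, high-water mark)
tracemalloc.stop()

# Peak bytes, not average, is what determines the machine you have to buy.
print(f"current: {current / 1024:.0f} KiB, peak: {peak / 1024:.0f} KiB")
```

Watching the high-water mark per code path is how “memory efficiency” stops being an aesthetic preference and becomes a line item.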
At roughly the same time, a different shortage made the headlines and changed daily behavior for a very visible audience: GPU buyers. PC Gamer summarized the mining surge by reporting that cryptocurrency miners purchased over 3 million add-in board graphics cards worth around $776 million in 2017, and that the year “culminated with a shortage of graphics cards and grossly inflated prices.”12 Kotaku captured the retail reality in plain numbers, describing high-end GPUs out of stock across major retailers and giving a concrete example: a GTX 1080 that normally retails for $550 going for over $1,000 on third-party marketplaces.13
TechSpot’s pricing analysis gave the more structured picture, describing a market where, around January 2018, buyers were sometimes facing prices more than double the list price, and where the story was not only mining demand but also rising memory costs inside graphics cards, which pushed manufacturing costs upward and helped keep retail pricing inflated.14 Another PC Gamer analysis argued that crypto mining was not the only culprit, pointing to the broader context of memory pricing and supply pressure, and it is a useful reminder that shortages often stack rather than arrive alone.15
The OEM versus retail divide here was striking. Large buyers and system integrators could sometimes secure supply through relationships and volume, while individual consumers were left chasing restocks, paying premiums, or giving up. The effect on how computers were used was immediate: upgrades were postponed, gaming rigs were rebuilt around older cards, and researchers who needed GPUs for machine learning often did not buy hardware at all, they rented it. The effect on software was the quiet acceleration of tools and practices that assumed remote, elastic compute, because if local GPUs are scarce and expensive, the cloud becomes the place where experimentation continues. When the mining cycle turned and prices began easing, reporting described GPUs becoming affordable again as crypto demand fell, which was the other half of the lesson: hardware markets can be brutally cyclical, and software strategies that depend on owning scarce hardware can be fragile.16
By the time the world entered the COVID era, the shortage pattern had expanded beyond a single component category and into a broad stress test of global logistics. Even in more recent supply chain reporting, writers refer back to the period as “COVID-induced shortages” that the industry was still recovering from, a reminder that the shock was not only about factories but also about transportation, forecasting, and how quickly demand can move when work and entertainment shift into the home overnight.17 In software, this era accelerated habits that were already forming: remote-first development, heavier reliance on SaaS tools, and a deeper trust in cloud infrastructure as something you can scale without waiting for a procurement cycle to clear.
Now, the shortage story has moved to the heart of modern AI. In May 2024, Data Center Dynamics reported that SK Hynix confirmed its high-bandwidth memory chips were sold out for 2024 and almost sold out for 2025, describing HBM as a critical component in AI chips because it provides faster processing speeds and lower power consumption than traditional memory designs.18 Tom’s Hardware reported the same basic constraint and extended it, describing HBM supply from SK hynix and Micron as sold out until late 2025, and framing the situation as demand for HBM exceeding supply, which is exactly what a new bottleneck looks like when the rest of the chain is ready but one critical part is not.19
Data Center Knowledge made the constraint concrete by explaining that HBM sits in the GPU package itself, and that without it, GPUs cannot be assembled because the memory has to be added at the manufacturing stage, which means this is not a component that can be swapped in later like a DIMM on a motherboard.17 A separate AFP report distributed via TechXplore similarly described SK Hynix’s entire 2024 production of high-end memory chips as sold out, with most of the next year’s line gone as well, tying the shortage directly to the scale of demand for cutting edge AI hardware.20
This is where the OEM versus retail split becomes almost philosophical. HBM is not something most consumers buy at a store, yet retail users feel the consequences anyway, because the scarcity is upstream of the compute services they depend on. The price signal shows up as reserved capacity, long waiting lists for the most desirable accelerators, and a growing advantage for the cloud platforms and hyperscalers that can lock in supply. The software response is also unusually direct: when the bottleneck is memory bandwidth and capacity inside accelerators, developers are pushed toward more efficient training methods, more careful batching, more compression and quantization, and a stronger focus on inference efficiency, because wasting memory is no longer merely sloppy, it is an avoidable tax on scarce capacity.
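Quantization in particular is a direct trade of precision for scarce memory. Here is a toy pure-Python version of symmetric int8 quantization; real frameworks quantize per tensor or per channel with calibration, so this only shows the arithmetic: store one shared float scale plus one signed byte per weight instead of four bytes each.

```python
def quantize_int8(values):
    """Map floats to int8 codes plus one shared scale (symmetric scheme)."""
    scale = max(abs(v) for v in values) / 127 or 1.0  # avoid a zero scale
    return [round(v / scale) for v in values], scale

def dequantize(codes, scale):
    """Recover approximate floats from the int8 codes."""
    return [c * scale for c in codes]

weights = [0.82, -1.27, 0.031, 0.5]
codes, scale = quantize_int8(weights)  # 1 byte per weight instead of 4
restored = dequantize(codes, scale)

max_err = max(abs(w - r) for w, r in zip(weights, restored))
assert all(-127 <= c <= 127 for c in codes)  # fits in a signed byte
assert max_err <= scale / 2 + 1e-12          # rounding error is bounded
```

The reconstruction error is bounded by half the scale, so for well-behaved weight distributions a roughly 4x memory saving costs very little accuracy, which is exactly the calculus scarce HBM imposes.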
If you follow this story across decades, the march toward cloud computing looks less like a marketing victory and more like an adaptive response to repeated scarcity. Each shortage taught the industry to pool resources: first through time sharing, then through virtualization, then through containerization and managed services, and finally through a cloud market that turns hardware availability into an API call instead of a warehouse problem. The upside is real: resilience improves when workloads can move, capital costs shrink when you rent instead of buy, and innovation can continue even when a specific part becomes hard to get. The downside is also real: dependence shifts from a parts supplier to a platform supplier, bargaining power concentrates, and the costs you avoided up front can return later as lock-in, unpredictable billing, compliance headaches, or simply the discomfort of knowing that the scarce hardware is now controlled by a smaller number of gatekeepers.
In the end, shortages did not only change what was available, they changed what seemed normal. They made leaner software valuable again and again, they rewarded architectures that tolerate scarcity, and they helped push computing toward shared infrastructure, where the question is not “can I buy the part,” but “can I get capacity.” That is one reason cloud adoption has been so steady, even when the cloud is not the cheapest or the simplest option: it is often the option that keeps moving when the physical world slows down.
1. Vice, “The Year That the Entire Computer Industry Ran Out of Memory.” https://www.vice.com/en/article/the-year-that-the-entire-computer-industry-ran-out-of-memory/
2. Simson Garfinkel archive, “High Price DRAMs” (PDF). https://www.simson.net/clips/1989/1989.BCS.HIGH-PRICE-DRAMS.pdf
3. Deseret News (Associated Press), “Computer prices on rise amid chip shortage, Taiwan quake is most recent blow to ailing industry” (Oct 15, 1999). https://www.deseret.com/1999/10/15/19470664/computer-prices-on-rise-amid-chip-shortage-br-taiwan-quake-is-most-recent-blow-to-ailing-industry
4. EDN, “DRAM prices rise sharply following Taiwan quake” (Oct 1, 1999). https://www.edn.com/dram-prices-rise-sharply-following-taiwan-quake/
5. EDN, “Japanese Earthquake to Impact Component Supply and Pricing” (Mar 15, 2011). https://www.edn.com/japanese-earthquake-to-impact-component-supply-and-pricing/
6. IEEE Spectrum, “How Japanese Chipmaker Renesas Recovered From the Earthquake.” https://spectrum.ieee.org/how-japanese-chipmaker-renesas-recovered-from-the-earthquake
7. TrendForce, “SK Hynix’s Wuxi Fab to Recover Fully from Fire Damage by mid January 2014” (Dec 12, 2013). https://www.trendforce.com/presscenter/news/20131212-7964.html
8. TrendForce, “Impact from SK Hynix’s Fire Incident Continues, Supplies of Graphics Memory also Affected” (Sep 27, 2013). https://www.trendforce.com/presscenter/news/20130927-7809.html
9. TrendForce, “Contract Price of DDR3 4GB Modules Reaches Above US$25…” (Jan 4, 2017). https://www.trendforce.com/presscenter/news/20170104-9629.html
10. TrendForce, “Tight Supply Causes Contract Prices of Server DRAM Modules to Rise Over 25%…” (Jan 10, 2017). https://www.trendforce.com/presscenter/news/20170110-9634.html
11. TrendForce, “1Q17 Global DRAM Revenue Rose by 13.4% From Prior Quarter…” (May 18, 2017). https://www.trendforce.com/presscenter/news/20170518-9752.html
12. PC Gamer, “Cryptocurrency miners bought 3 million graphics cards worth $776 million in 2017” (Feb 27, 2018). https://www.pcgamer.com/cryptocurrency-miners-bought-3-million-graphics-cards-worth-776-million-in-2017/
13. Kotaku, “The Great Graphics Card Shortage Of 2018” (Jan 23, 2018). https://kotaku.com/the-great-graphics-card-shortage-of-2018-1822346367
14. TechSpot, “Analyzing Graphics Card Pricing: July 2018” (Jul 19, 2018). https://www.techspot.com/article/1662-graphics-card-pricing-q3-2018/
15. PC Gamer, “Why crypto mining wasn’t the only culprit for wild GPU prices.” https://www.pcgamer.com/why-crypto-mining-wasnt-the-only-culprit-for-wild-gpu-prices/
16. Ars Technica, “Declining cryptocurrency prices are making graphics cards affordable again” (Jul 2018). https://arstechnica.com/gaming/2018/07/declining-cryptocurrency-prices-are-making-graphics-cards-affordable-again/
17. Data Center Knowledge, “HBM Chip Shortage: A New Bottleneck in the Data Center Supply Chain” (Aug 8, 2024). https://www.datacenterknowledge.com/supply-chain/hbm-chip-shortage-a-new-bottleneck-in-the-data-center-supply-chain
18. Data Center Dynamics, “SK Hynix confirms HBM chips sold out for 2024, limited supply left for 2025” (May 2, 2024). https://www.datacenterdynamics.com/en/news/sk-hynix-confirms-hbm-chips-sold-out-for-2024-limited-supply-left-for-2025/
19. Tom’s Hardware, “HBM supply from SK hynix and Micron sold out until late 2025” (May 2, 2024). https://www.tomshardware.com/pc-components/gpus/hbm-supply-from-sk-hynix-and-micron-sold-out-until-late-2025
20. TechXplore (AFP), “SK Hynix says high end AI memory chips almost sold out through 2025” (PDF). https://techxplore.com/news/2024-05-sk-hynix-high-ai-memory.pdf