1GB Raspberry Pi 5, and memory-driven price rises
raspberrypi.com — 130 points by shrx 8 hours ago
It’s sad to see the one area of life that has long resisted inflation (computing) now succumb to inflationary forces. Other than emergency situations such as COVID-19, I’m used to seeing prices going down over time for computers and their components. It’s one of the rare bright spots when everything else is escalating in price, and now that’s disappearing.
Firstly, this is not due to inflation. The price increase is explicitly (per the article even) due to increased market demand that is causing raised prices.
Secondly, computing has always been subject to inflation; it cannot escape it. You may not notice it, perhaps due to the increase in performance, but the cost of parts within the same tier has definitely risen if you look over a long enough period to smooth out short-term pricing swings.
> the cost of parts definitely has risen in the same tiers if you look over a long enough period
This is especially apparent if you’re a hardware manufacturer and have to buy the same components periodically, since the performance increase that consumers see doesn’t appear.
Inflation simply refers to the rate at which prices are increasing. It's agnostic as to the origin (any single cause or combination of demand increase, supply shortage, money printing, price fixing, etc.).
Inflation isn’t just "prices increasing". It’s the sustained, broad-based rise in the overall price level. Your comment treats any price increase as inflation, but economists draw a pretty clear line here: a relative price change (say, eggs getting more expensive because of a supply shock) isn’t the same thing as inflation. You can have sector-specific increases (as in this case, with RAM) that are independent of changes in the general price level.
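The distinction can be made concrete with a toy consumption basket (weights and changes entirely made up for illustration): a sector-specific doubling barely moves the overall price level, which is what inflation actually measures.

```python
# Toy basket: a 100% jump in one small sector vs. the overall price level.
# All weights and changes are invented for illustration.
weights = {"food": 0.30, "housing": 0.40, "electronics": 0.05, "other": 0.25}
change  = {"food": 0.00, "housing": 0.00, "electronics": 1.00, "other": 0.00}

# Weighted average price change across the whole basket
overall = sum(weights[s] * change[s] for s in weights)
print(f"overall price level change: {overall:.1%}")  # 5.0%
```

Electronics doubling in price (a relative price change) shows up as only a 5% move in the general price level under these assumed weights.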
Only if you phrase it devoid of any context.
And if the definition was that loose to begin with, then the original comment is even more incorrect since there have been multiple rounds of demand/scarcity led pricing increases.
I've seen prices for memory, SSDs, thunderbolt hubs, and thunderbolt/high end USB cables, flatline or get worse over the last 3 or so years.
Most in-demand electronics got worse post-2020 and haven't recovered.
That's why I just buy something when I need it or when I think the price is reasonable, because nowadays, if I wait for something to get cheaper like I used to do in the 90s-00s, chances are it's gonna get even more expensive as time passes, not cheaper.
The days when you would wait 6-12 months and get the same thing for 50% off or a new thing with 50% more performance for the same price are over, when there's only one major semiconductor fab making everything, 3 RAM makers, 3 FLASH makers, 2 GPU vendors, 2 CPU vendors, controlling all supply, and I'm competing with datacenters for it.
In general, your dollar buys a _Crazy_ amount of compute...but over the last 30 years or so, RAM has spiked several times (Taiwan plant fire) and suffered from several market driven spikes (DDRx shortages, Apple's crazy pricing structure)
There was the issue of hard disk prices for years after the floods in Thailand in 2011.
GPU prices were horrendous when crypto happened (it settled into a persistent problem, but it was still because of crypto).
DDR4 jumped because manufacturers started shifting focus to DDR5, even before the current news.
I could probably find more examples but hey
> time for computers and their components
Seems it has been the opposite for some components like GPUs though for years (well before the AI boom)
Speaking as someone who used to buy them regularly to support a PC gaming hobby stretching back to the original glQuake -- GPUs were on average very reasonably priced prior to the crypto boom that preceded the AI boom.
So it's technically not AI "ruining everything" here, but there was a nice, long before-time of reasonable pricing.
It was always subject to inflationary forces due to money printing like everything else, it was just the one place where natural deflation due to improving technology was temporarily enough to offset it
Memory price fluctuations due to market demand and monetary inflation - the increase in quantity of fiat money, diluting its value - are two separate and unrelated things.
It's not inflation tho? It's just a rise in demand.
Seriously doubt it.
When Sam Altman buys 40% of global DRAM wafer production, that looks like a demand increase to the market.
What has changed in the memory landscape/AI workloads in recent months compared to the summer or spring?
Apparently OpenAI locked down 40% of the global DRAM supply for their Stargate project, which then caused everyone else to start panic-buying, and now we're here: https://pcpartpicker.com/trends/price/memory/
It's kind of depressing to see that it takes just one asshole to screw the entire electronic market. If you read this, Sam, FU.
I got one of the Newegg circulars in my email advertising a sweet little uATX AMD server board and got to thinking that my home FreeBSD server could use a CPU bump and more memory. As soon as I saw how much 128GB of DDR5 ECC would cost my jaw dropped and noped the fuck out. The cheapest 32GB modules are around $300 and upwards of $500. Thought I was going to gift myself early this Christmas. Depressing indeed.
Indeed, it makes mini computers with soldered ram actually end up being quite cheap by comparison. HP will currently sell you 128GB AMD or Nvidia boxes for 1.7-2.8k depending on your flavor of choice. Not ECC though.
...while supplies last. Which won't be long when people do exactly that (hey, that mini PC is now cheaper than building a similar setup).
Exactly this.
I'd been planning to upgrade my desktop as a christmas present for myself.
Now I have the cash and was looking at buying my PCPartPicker list, the cost of the 64GB DDR5-6000 RAM I planned to buy has gone from £300-400 to £700-800+, a difference of almost the price of the 9070 XT I just bought to go in the computer.
I guess I'll stick with my outdated AM4/X370 setup and make the best of the GPU upgrade until RAM prices stop being a complete joke.
literally every market is like that. if you've got market-cap amounts of money and place a market buy order for all of it, you'll quickly learn what slippage is.
That really isn't unprecedented. We need high RAM prices for manufacturers to expand fabs, supply overshoots demand because the AI bubble will contract to some extent, and then we'll have cheap RAM once again. Classic cycle.
> We need high RAM prices for manufacturers to expand fab
Manufacturers aren't dumb; they lost a lot of money in the last cycle and aren't playing that game anymore. No additional capacity is planned. OEMs are simply redirecting existing capacity towards high-margin products (HBM) instead of chasing fragile demand.
The proles will get dumb screens tethered to their sanctioned models; and we will be grateful!
I understand hating on people like Musk who destroy human lives, but what is Sam Altman doing?
Because of the copyright of images, or just because he bought RAM?
> Apparently OpenAI locked down 40% of the global DRAM supply for their Stargate project
That sounds like a lot, and almost unbelievable, but the scales of all of this kind of sits in that space, so what do I know.
Nonetheless, where are you getting this specific number and story from? I've seen it echoed before, but no one has been able to trace it to any sort of reliable source that doesn't boil down to "secret insider writing on Substack".
Samsung directly announced that OpenAI expects to procure up to 900,000 DRAM wafers every month. That number being 40% of global supply comes from third party analysis, but the market is going to notice nearly a million wafers being diverted each month however you slice it. That's a shitload of silicon.
https://news.samsung.com/samsung-and-openai-announce-strateg...
https://www.tomshardware.com/pc-components/dram/openais-star...
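The 40% claim can at least be sanity-checked for internal consistency. Both inputs below are the figures from the linked reports, not independently verified facts:

```python
# Back-of-the-envelope check of the "40% of global supply" claim.
# Both numbers are taken from the reporting above, not verified.
openai_wafers = 900_000   # DRAM wafers/month OpenAI reportedly expects to procure
claimed_share = 0.40      # third-party estimate of OpenAI's share of global output

implied_global = openai_wafers / claimed_share
print(f"implied global DRAM output: {implied_global:,.0f} wafers/month")
```

An implied global output of roughly 2.25 million DRAM wafer starts per month is at least in the plausible ballpark for the industry, which is why the 40% figure keeps circulating despite the thin sourcing.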
> Samsung directly announced that OpenAI expects to procure up to 900,000 DRAM wafers every month
The article says: "OpenAI’s memory demand projected to reach up to 900,000 DRAM wafers per month", but not by when, or what current demand is. If this is based on OpenAI's >$1T of announced capex over the next 5 years, it's not clear that money will ever actually materialize.
That DDR5-4800 2x16GB price trend is crazy. It tripled from August/September until now.
Even DDR4. Just checked: I bought a non-ECC 1x32GB stick for my homelab on August 25th, priced at €78 on Amazon. The same offer is now €229. Yeah, I guess I'll wait before upgrading to 64GB then.
It reminds me very much of the crypto mining craze, when there was a run on GPUs and one couldn't be had for any less than 5x its MSRP. I know that eventually passed and so too will this, but it still sucks if you had been planning to purchase RAM or anything needing it.
I don't think DDR4 is even being manufactured anymore, so the rush is clearing out that inventory for good.
It is still being manufactured. Older memory standards continue to be manufactured long after they stop being used in computers, e.g. for use in embedded devices.
I don't even get this trend; wouldn't OpenAI be buying ECC RAM only anyway? Who in their right mind runs this much infrastructure on non-ECC RAM??? Makes no sense to me. Same with GPUs: they aren't buying your 5090s. People's perception is wild to me.
OpenAI bought out Samsung's and SK Hynix's DRAM wafers in advance, so they'll prioritize producing whatever OpenAI wants to deploy, whether that's DDR/LPDDR/GDDR/HBM, with or without ECC. That means far fewer wafers for everything else, so even if you want a different spec you're still shit out of luck.
You forgot to mention that everyone else also raised their prices because, you know, who doesn't like free money.
Last year I bought two 8GB DDR3L RAM sticks made by Gloway for around $8 each; now the same stick is priced around $22, a 175% increase in price.
SSD makers are also increasing their prices, but that started one or two years ago, and they did it again recently (of course).
It looks like I won't be buying any first-hand computers/parts until prices return to normal.
ECC memory is a bit like RAID: A consumer-level RAM stick will (traditionally) have 8 8-bit-wide chips operating basically in RAID-0 to provide 64-bit-wide access, whereas enterprise-level RAM sticks will operate with 9 8-bit-wide chips in something closer to RAID-4 or -5.
But they are all exactly the same chips. The ECC magic happens in the memory controller, not the RAM stick. Anyone buying ECC RAM for servers is buying on the same market as you building a new desktop computer.
> Anyone buying ECC RAM for servers is buying on the same market as you building a new desktop computer.
Even when the sticks are completely incompatible with each other? I think servers tend to use RDIMM, desktops use UDIMM. Personally I'm not seeing as steep an increase in (b2b) RDIMMs compared to the same stores selling UDIMM (b2c), but I'm also looking at different stores tailored towards different types of users.
At the chip level there’s no difference as far as I’m aware, you just have 9 bits per byte rather than 8 bits per byte physically on the module. More chips but not different chips.
> you just have 9 bits per byte rather than 8 bits per byte physically on the module. More chips but not different chips.
For those who aren't well versed in the construction of memory modules: take a look at your DDR4 memory module, you'll see 8 identical chips per side if it's a non-ECC module, and 9 identical chips per side if it's an ECC module. That's because, for every byte, each bit is stored in a separate chip; the address and command buses are connected in parallel to all of them, while each chip gets a separate data line on the memory bus. For non-ECC memory modules, the data line which would be used for the parity/ECC bit is simply not connected, while on ECC memory modules, it's connected to the 9th chip.
(For DDR5, things are a bit different, since each memory module is split in two halves, with each half having 4 or 5 chips per side, but the principle is the same.)
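The 64+8 layout described above is what makes SEC-DED (single-error-correct, double-error-detect) possible: 7 Hamming check bits plus 1 overall parity bit fit exactly in the 8 extra bits the ninth chip contributes per 64-bit beat. A toy sketch of the principle (this is an illustration, not real memory-controller logic):

```python
# Toy SEC-DED Hamming code over a 64-bit word: 7 check bits + 1 overall
# parity bit = 72 bits, matching a x72 ECC DIMM's ninth chip.
def encode(data):
    """data: list of 64 ints (0/1) -> 72-bit codeword (list of 0/1)."""
    k = len(data)
    r = 0
    while (1 << r) < k + r + 1:   # r = 7 for k = 64
        r += 1
    n = k + r                     # 71 positions, 1-indexed
    code = [0] * (n + 1)          # index 0 holds the overall parity bit
    it = iter(data)
    for pos in range(1, n + 1):
        if pos & (pos - 1):       # not a power of two -> data position
            code[pos] = next(it)
    for i in range(r):            # check bit at position 2^i covers all
        p = 1 << i                # positions whose index has bit i set
        code[p] = sum(code[pos] for pos in range(1, n + 1) if pos & p) % 2
    code[0] = sum(code) % 2       # overall parity, for double-error detection
    return code

def decode(code):
    """Returns (data_bits, status) with status 'ok'/'corrected'/'double'."""
    code = code[:]
    n = len(code) - 1
    syndrome = 0
    for i in range(n.bit_length()):
        p = 1 << i
        if sum(code[pos] for pos in range(1, n + 1) if pos & p) % 2:
            syndrome |= p         # syndrome spells out the flipped position
    overall = sum(code) % 2
    if syndrome and overall:      # single-bit error: flip it back
        code[syndrome] ^= 1
        status = "corrected"
    elif syndrome:                # syndrome set but parity even: two flips
        status = "double"
    else:
        status = "ok"
    data = [code[pos] for pos in range(1, n + 1) if pos & (pos - 1)]
    return data, status
```

One flipped bit anywhere in the 72-bit codeword is located and corrected by the syndrome; two flips are detected (parity still even, syndrome nonzero) but not correctable, which is exactly the SEC-DED guarantee server memory controllers provide.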
I seriously doubt that single bit errors on the scale of OpenAI workloads really matters very much, particularly for a domain that is already noisy.
Till they hit your program memory. We just had a really interesting incident where one of the Ceph nodes didn't fail outright but started acting erratically, bringing the whole cluster to a crawl, once a failing RAM module produced uncorrectable errors.
And that was only caught because we had ECC. Without it we'd have been replacing drives, because the metrics made it look like one of the OSDs was slowing to a crawl, which usually means a dying drive.
Of course, the chance of that is pretty damn small, but their scale is also pretty damn big.
ECC modules use the same chips as non ECC modules so it eats into the consumer market too.
Good point! But they are slightly more energy-hungry. At these scales, I wonder if Stargate could go with one fewer nuclear reactor simply by switching to non-ECC RAM.
Penny-wise and pound foolish. Non-ECC RAM might save on the small amount of RAM power, but if a bit-flip causes a failed computation then an entire forwards/backwards step – possibly involving several nodes – might need to be redone.
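The scale argument can be made rough-and-ready quantitative. Every number below is an assumption for illustration (published DRAM fault rates vary by orders of magnitude depending on the study), but it shows why "rare" bit flips stop being rare across a fleet:

```python
# Hedged sketch: expected bit flips per day across a large non-ECC fleet.
# FIT rate, capacity, and node count are all assumed, not measured values.
fit_per_gbit   = 50          # assumed failures per 1e9 device-hours per Gbit
gbits_per_node = 8 * 1024    # assume 1 TB of DRAM per node, in Gbit
nodes          = 10_000      # assumed fleet size
hours          = 24

flips = fit_per_gbit * gbits_per_node * nodes * hours / 1e9
print(f"expected bit flips per day across the fleet: {flips:,.0f}")
```

Under these assumptions that's on the order of a hundred silent flips a day; each one that lands in live state can, as in the Ceph anecdote above, cost far more engineer-hours than ECC ever would.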
Linus Torvalds was recently on Linus Tech Tips to build a new computer and he insisted on ECC RAM. Torvalds is convinced that memory errors are a much greater stability problem than commonly acknowledged, and he's spent an inordinate amount of time chasing phantom bugs because of them.
>but if a bit-flip causes a failed computation then an entire forwards/backwards step – possibly involving several nodes – might need to be redone.
Which for the most part would be an irrelevant cost of doing business compared to the huge savings from non-ECC, and how inconsequential it is if some ChatGPT computation fails...
The 5090 is the same chip as the workstation RTX 6000.
Of course OpenAI is also not buying that but B200 DGX systems, but that is still the same process at TSMC.
On the flipside, LLMs are so inconsistent you might argue ECC is a complete waste of money. But OpenAI wasting money is hardly anything new.
ECC RAM's utility is overblown. Major companies often use off-the-shelf, non-enterprise parts for huge server installations, including regular RAM. The rare bit flip is hardly a major concern at their scale, and for their specific purposes.
Most server CPUs require RDIMMs, and while non-ECC RDIMMs exist, they are not a high-volume product and are intended for workstations rather than servers. The used parts market would look very different if there were lots of large-scale server deployments using non-ECC memory modules.
Do you have a source for this?
I would not want to rerun a whole run just because of bit flips and bit flips become a lot more relevant the more servers you need.
What will happen once the bubble pops and OpenAI can no longer pay for all the useless stuff it ordered?
Ideally the consumer market gets flooded with surplus at cost or below server grade hardware flowing out in going out of business fire sales.
not much use for the 100GB+ AI boards or server RAM for consumers. Though homelab folks will be thrilled.
Enterprise-wise, used servers have always been kind of cheap (at least compared to MSRP, or even the post-discount price), simply because enough companies like the warm feeling of a warranty on their equipment and yeet it after 5 years.
Nowadays old-gen server hardware can be a viable alternative to a new HEDT or workstation, which would typically use top-of-the-line consumer parts. The price and performance are both broadly comparable.
Isn't the typical server much noisier than, e.g., a high-end desktop (HEDT) with Noctua fans?
Depends how big the fans are. Tiny 1U rack-mountable hardware = lots of noise; huge fans = near silent with better heat removal capacity.
No. Up to you to cool. I use an Epyc based system as a home server and you can’t hear it. At a previous employer we built a cluster out of these and just water cooled them. Very easy.
This is a chassis and fan problem not a CPU problem. Some devices do need their own cooling if your case is not a rack mount. E.g. if you have a mellanox chip those run hot unless you cool them specifically. In rackmount use case that happens anyway.
Oof the RAM in my computers is apparently worth more than I paid for the entire thing...
I don't really blame them, but my question is: if RAM prices go down, will RPi drop its prices? My experience with other companies suggests not.
Price is an optimization problem: if you raise prices and profits increase, your product was likely too cheap. If you raise prices and profits decrease ("lol I'm not paying $XYZ for an rpi when the clone is $ABC"), you are charging too much.
There are myriad other factors that go into this, especially just general inflation, which will likely fill the price gap by the time memory costs go down anyway.
These scenarios end up being testers to see what people will pay. If people are buying your product at a ridiculous price, why drop it?
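The "price as optimization" framing above can be sketched with a toy linear demand curve (every number here is invented; real demand curves are neither known nor linear):

```python
# Toy price optimization: raising price sheds buyers but raises margin.
# Demand curve and unit cost are invented for illustration.
def profit(price, unit_cost=20.0):
    demand = max(0.0, 1000 - 8 * price)   # assumed linear demand
    return (price - unit_cost) * demand

# Scan candidate prices and pick the profit-maximizing one
best = max(range(20, 126), key=profit)
print(best, profit(best))
```

Below the optimum, raising the price increases profit ("product was too cheap"); above it, buyers defect to the clone faster than the extra margin compensates, which is exactly the test-the-market dynamic the comment describes.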
Nothing wrong with this. Some applications really are compute bound and don't need much RAM, such as a homemade surveillance camera system I have, presently running on a couple of Raspi 4s. Suppose I wanted to upgrade to Raspi 5, why spend extra money on RAM that's not needed? These things run headless with the only GUI exposed via web server.
What surprises me the most is that the 1GB option is even viable, though I can imagine this will be for IoT users who shove Pis into things for embedded work, where a kernel with a few user-space programs, maybe alongside a container, does everything.
> What surprises me the most is the 1GB option is even viable...
There are plenty of non-IoT use cases that are viable with 1GB of general-purpose compute. Hell, I rented an obscenely cheap 512MB VPS until recently, and only abandoned it because its ancient kernel version was a security risk.
Most of my RPi tasks are not memory-bound
Well to be honest, I'm doing just fine with my 1 GB Pi3B home server. Sure, another gigabyte wouldn't hurt, but I'm able to run influxd, zigbee2mqtt, telegraf, grafana, homeassistant (containerized), mpd and navidrome on it without issues.
Probably, but I fail to see a use case that doesn't need more than 1GB yet can't already be handled by a Pi 3B or 4.
I have a Pi 3B in a 3D printer, and compiling the software, or even simply running apt upgrade, feels like it takes forever. Most day-to-day operations work just fine though.
At work we have a display with a Pi 3 (not B) connected, just showing websites in rotation. Websites with even a simple animation are laggy, and startup takes a few minutes.
Both of these use cases need no more than 1 GB of RAM, but I want the speed of a 4 or 5.
Usually it's just "same thing but faster". CPU is 2-3x faster, and even boot speed is faster, so it can be handy to not have to wait so long to run updates, compile something, reboot, etc.
What do you think, when will the ram prices come back down again? Years, months?
The article mentions "the $10 Raspberry Pi Zero". I feel this is rewriting history. The Raspberry Pi Zero was $5 when it was released back in 2015. It was mostly out of stock, but I did manage to get one unit at that price eventually.
Nowadays you can no longer get the Raspberry Pi Zero for less than €12 or so. I consider the $5 Raspberry Pi Zero to be among the best values ever on the market, and nothing else has come close.
$5 in 2015 is worth $7 in 2025 dollars. Combine that with higher memory prices and overall increases in supply chain costs/tariffs, and I really don’t see $10 as being that bad.
RPi Locator is a great service if you're looking to buy a Pi you can afford.
The clear winners of AI are memory makers.
And Nvidia.
And TSMC (and ASML).
It's shovels all the way down.
I bet the nerds making the PCBs, the jellybean parts and connectors are making mint as well.
Nvidia just got hit by Broadcom / Google on TPUs. There's also AMD behind its back. Not so simple.
If Google and AMD are the biggest threats to CUDA's monopoly, I'd argue Nvidia has nothing to worry about.
Stock price does not say the same :'(
Oh, you think it's undervalued?
If you believe the US government will reopen the door to China sales, then yes. Highly rumored to be a thing soon.
can't just make a new fab in a year and capitalise on the spike, and most big investors know it.
They "can". It happened during COVID and they got burned by it, so they're not taking the bait anymore.
Starting to hate OpenAI. Them and their Trillion Dollar deals with data centers and gpu manufacturers
Those price increases seem pretty reasonable given the shitty situation. I bought a Jetson 8GB a few weeks ago for $350 CAD from Amazon, I just checked that same listing and it's now $430.