Everyone is talking about shipping data centres to space as the only way to scale the compute we are going to need for AI in the near future. The core argument? Space solves the energy bottlenecks that we will eventually face on Earth as we scale, and that in many cases are already hitting us.
Elon Musk was on Dwarkesh Patel’s podcast (which I still like to refer to as The Lunar Society) sharing his thoughts on this. Let me share a few of the highlights from that conversation to build some shared context (I wouldn’t want you to have to listen to a two-hour podcast to get the most out of this post):
According to Elon, the primary driver for moving data centres to space isn’t real estate; it’s the sheer lack of available power on Earth. To power future AI models, Musk envisions needing terawatt-scale energy. For context, he pointed out that the entire US currently averages about half a terawatt of power.
The main advantages of having data centres in space are that, in his words, “it’s always sunny in space”, so there is no need for batteries because generation is constant; there is zero atmospheric loss (the solar energy reaching the panels hasn’t been attenuated by the atmosphere); and cheaper hardware can be used for the solar cells because you don’t have to battle inclement weather.
“Those who live in software land are about to have a hard lesson in hardware. Scaling on Earth means navigating brutal regulatory hurdles to build power plants, securing land permits, and sourcing massive amounts of electrical transformers, all of which act as severe bottlenecks.” – Elon Musk at Dwarkesh’s
He even ventures to make one of his (always accurate) predictions: “My prediction is that it will be by far the cheapest place to put AI. It will be space in 36 months or less”, and he refines, “probably closer to 30 months”, stating that space will become “the most economically compelling place to put AI.”
We should of course contextualise all these claims around the recent announcement of xAI joining SpaceX: space enables “ridiculous improvements” in AI scaling, positioning SpaceX as a potential major AI hyperscaler with high-frequency launches.
I think Elon’s intentions are clear. He wants to power xAI from space leveraging Starlink without having to worry about Earth regulation and expensive turbines.
He has pulled off crazier plans before through his maniacal sense of urgency: affordable electric vehicles, self-driving cars, and a reusable spaceship that was nothing but a sci-fi dream a few decades ago. But how much of this space data centre talk is marketing, and how much is a technically and economically grounded plan?
Follow me down this new rabbit hole.
Let’s kick this off by acknowledging the status quo. From what I’ve been researching, there seems to be a single recognised data centre in space: a LEO satellite carrying a single H100 GPU, launched in November 2025, as reported by this (obviously biased) site.
They also acknowledge how “China has already launched an initial cluster (12 satellites in May 2025) described as the start of a ‘Three-Body Computing Constellation’ but that cluster still needs to scale to thousands of nodes to reach the kind of distributed supercomputer those announcements implied.”
I highly recommend reading this FAQ for a grounded view of what space data centres entail, and how they may be better suited for Earth-observation and near-space computing than providing general-purpose AI inference to Earth.
As they clearly acknowledge on this site, “space is hard: launches are expensive, cooling requires radiators and careful thermal design, and radiation hardening raises costs. Because of those constraints, most other satellites with onboard compute are still experiments or marketing exercises and don’t qualify as full operational data centres yet.”
Along with Elon, Gavin Baker also appeared on a recent podcast sharing a list of reasons why he thinks that, from first principles, data centres should be deployed in space instead of on Earth.
Abundant and Continuous Solar Power: In space, satellites can remain in sunlight 24 hours a day, unlike on Earth, where solar is intermittent. Sunlight is about 30% more intense above the atmosphere, and combined with the absence of night and weather, a panel in orbit collects roughly six times more energy per day than the same panel on Earth. This eliminates the need for batteries, a massive cost factor in Earth deployments, making solar the lowest-cost energy source available in the solar system.
Free and Efficient Cooling: Cooling accounts for a huge portion of a data centre’s mass, cost, and complexity on Earth (e.g., HVAC systems, CDUs, and liquid cooling). In space, you can simply attach radiators to the dark side of the satellite, rejecting heat towards near-absolute-zero temperatures. This makes cooling essentially free and far simpler, slashing costs dramatically. I still remember when I was working in energy efficiency and PUE was the key metric everyone was trying to optimise.
Superior Networking Speed: On Earth, racks in data centres are connected via fiber optics, which carry laser light through glass cables. In space, linking satellites with lasers through vacuum is inherently faster than through fiber, creating a more coherent and efficient network overall. Or so it seems.
Reduced Latency for Inference: For real-time applications like AI inference, space-based data centres enable direct satellite-to-device communication (e.g., via Starlink’s direct-to-cell tech). This bypasses Earth’s multi-hop routing (cell tower to base station to fiber to data centre and back), resulting in lower latency and a better user experience. I have to admit that I cringed a bit when I read this (professional bias).
He emphasizes that from these physics-based first principles, space data centres are always superior to Earth-based ones, assuming launch costs continue to drop. But let’s be honest: that assumption is doing a lot of heavy lifting.
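Baker’s solar numbers are easy to sanity-check with a back-of-envelope calculation. All the figures below are my own rough assumptions, not from the podcast: roughly 1,361 W/m² of sunlight in orbit versus about 1,000 W/m² peak at the surface, with around 5.5 equivalent full-sun hours per day at a good terrestrial site.

```python
# Back-of-envelope check of the "roughly six times more solar energy" claim.
# All figures are rough, illustrative assumptions (not from the podcast):
ORBIT_IRRADIANCE_W_M2 = 1361   # solar constant above the atmosphere
GROUND_PEAK_W_M2 = 1000        # peak surface irradiance on a clear day
GROUND_FULL_SUN_HOURS = 5.5    # equivalent full-sun hours/day, good site

orbit_wh_per_m2_day = ORBIT_IRRADIANCE_W_M2 * 24   # "always sunny" in orbit
ground_wh_per_m2_day = GROUND_PEAK_W_M2 * GROUND_FULL_SUN_HOURS

print(orbit_wh_per_m2_day / ground_wh_per_m2_day)  # ≈ 5.9
```

So the “six times” figure is plausible as daily energy per panel, even though the instantaneous intensity gap is only around 30%.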
To complement the foundation with some economic framing, here’s an interesting tweet from Tomas Pueyo that breaks down at a high level the fixed costs of on-the-ground and space data centres, in order to understand the level of savings that can be achieved. There’s also this great write-up from the comma.ai team about what it takes to own and operate a $5M data centre on Earth (slightly off-topic, but I thought it would be a great reference for those of you who don’t know in depth what it takes to operate a data centre).
If putting data centres in orbit is so great, why haven’t we done it already? On the other side of the argument we have pieces like this one from Andrew Yoon that revisits last year’s study from Google on the viability of doing AI in space. “The authors envision a constellation of 81 satellites flying in close proximity, and argue that if the cost of launching stuff into low earth orbit fell to $200/kg, it could be competitive with an equivalent ground-based data centre. They project this might happen around 2035 if SpaceX’s Starship program succeeds.”
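To get a feel for why that $200/kg threshold matters, here’s a trivial bit of arithmetic. The satellite mass and today’s per-kilogram price are my own assumed, illustrative figures, not numbers from the study:

```python
def launch_cost_usd(satellite_mass_kg, price_per_kg):
    """Launch cost for one compute satellite, ignoring everything else."""
    return satellite_mass_kg * price_per_kg

SAT_MASS_KG = 2000    # assumed mass of one compute satellite, kg
TODAY_PER_KG = 2500   # rough current price to LEO, $/kg (assumed)
STUDY_PER_KG = 200    # threshold from the Google study, $/kg

print(launch_cost_usd(SAT_MASS_KG, TODAY_PER_KG))  # $5,000,000 today
print(launch_cost_usd(SAT_MASS_KG, STUDY_PER_KG))  # $400,000 at $200/kg
```

Under these assumptions, at today’s prices the launch alone costs about as much as comma.ai’s entire ground data centre; at $200/kg it becomes a rounding error next to the hardware itself.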
If you listen to the podcast you’ll see that the key thing Elon is trying to de-risk is the cost of shipping compute nodes to space, and the frequency with which SpaceX can do it.
However, there’s more to this. Training and serving frontier AI at scale takes hundreds of thousands of GPUs. This translates into “hundreds of millions of satellites in orbit. Satellite deployments at this scale would dramatically increase the risk of Kessler syndrome: a cascading explosion of debris crippling our access to space.”
What’s more, satellites can’t be upgraded at scale: if a new (better) chip architecture arrives, there is no easy way to swap out satellite nodes. And if AI ends up being a bubble, and demand doesn’t catch up with the amount of compute being deployed (as happened with dark fiber and other infrastructure-heavy tech bubbles in history), we may end up with dark data centres in space too, worsening the Kessler-syndrome risk from the previous point.
That’s a whole mess of arguments about why data centres in space are feasible, but what is the physical reality? Let’s dive into some of the main physical bottlenecks we will face as we try to put data centres in space.
One of the most persistent misconceptions is that because space is cold, cooling a data centre is easy. In reality, a vacuum is the ultimate insulator. On Earth, data centres cool themselves via convection and conduction; they pump chilled air or water over the servers to carry the heat away. In the vacuum of space, convection is physically impossible.
The only way to dissipate heat in space is through thermal radiation (emitting infrared light into the void). This mechanism is governed by the Stefan-Boltzmann law, which says that the power radiated is proportional to the surface area and the fourth power of the temperature. This means that to keep AI accelerators at their optimal operating temperature, huge radiators (or chips that tolerate much higher temperatures) would be required to dissipate the heat, with the corresponding increase in payload.
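To put numbers on this, here is a minimal sketch of the Stefan-Boltzmann sizing. The emissivity, radiator temperature, and the decision to ignore absorbed sunlight and Earth’s infrared are all my simplifying assumptions:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area_m2(power_w, temp_k, emissivity=0.9, sides=2):
    """Idealised radiator area needed to reject `power_w` at `temp_k`.
    Ignores absorbed sunlight and Earth IR, so this is a lower bound."""
    flux = emissivity * SIGMA * temp_k**4   # W per m^2 of radiating surface
    return power_w / (flux * sides)

# 1 MW of IT load, double-sided radiators running at ~300 K (27 °C):
print(radiator_area_m2(1e6, 300))   # ≈ 1,200 m^2 of panel
```

Roughly 1,200 square metres of radiator per megawatt, and remember that this is the optimistic lower bound: running the panels hotter shrinks them (thanks to the T⁴ term) but forces the chips to tolerate higher temperatures.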
As I was listening to Elon talk about chips in space I had only one word in my mind: “radiation, radiation, radiation”. As an electrical engineer by training, when I was in college I was always scared of high-RF and space circuit design, for obvious reasons: high-frequency and radiation are brutal for circuits.
You might hear the argument that “AI is stochastic, so a few bit flips don’t matter.” This is a fatal misunderstanding of how AI training works. When a high-energy cosmic particle strikes a 3-nanometer transistor, it causes a Single Event Upset (SEU), flipping a 0 to a 1. In standard software, this might just crash the program. But in AI, it causes what the industry calls a Silent Data Corruption (SDC).
A 2021 paper by Meta and Google titled “Silent Data Corruptions at Scale” detailed how these undetected hardware errors pass corrupted math directly into the application layer. If a cosmic ray flips a bit in the exponent of a floating-point number during a training run, a benign number can instantly become massive. This causes a gradient explosion, silently poisoning the neural weights of the entire multi-million-dollar training run without ever triggering an error code. If SDCs are already happening on Earth, imagine how often they could happen under orbital radiation.
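You can see how violent an exponent-bit flip is with a few lines of Python. The specific value and bit position here are just an illustration of the mechanism, not a claim about which bits actually get hit:

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Flip one bit in the IEEE-754 float32 representation of x."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", x))
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))
    return flipped

# A single cosmic-ray-style upset of the top exponent bit (bit 30):
print(flip_bit(0.5, 30))   # 0.5 becomes ~1.7e38: instant gradient explosion
```

One flipped bit turns a perfectly ordinary activation into a number near the float32 maximum, which is exactly the kind of silent corruption the paper describes.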
I understand that initially space data centres are planned exclusively for inference, where data corruption may not be as catastrophic, but still… Building radiation-hardened chips is expensive and relies on older, larger process nodes (with the corresponding performance hit), and I don’t know to what extent we can build mechanisms to mitigate these errors on off-the-shelf chips (although I have to admit this is a really cool open problem that I would love to work on :) ).
Proponents of space data centres correctly note that light travels roughly 30% slower inside a glass fiber-optic cable than it does in a vacuum. From a purely physics-based “first principle”, linking satellites via laser sounds vastly superior. But this ignores bandwidth density and the brutal physics of signal dispersion.
Inside a terrestrial data centre, AI clusters are connected by millions of parallel fiber strands, moving Terabytes across the pod to keep the GPUs fed. In space, firing lasers between satellites requires perfectly aligning optical transceivers across the void. As Google detailed in their November 2025 Project Suncatcher research paper, achieving terrestrial-level DWDM (Dense Wavelength-Division Multiplexing) bandwidth via space lasers is incredibly difficult because the optical signal disperses over distance.
To make this work for AI training without losing the signal, Google’s preprint paper modeled a notional 81-satellite cluster that couldn’t just float freely; the satellites had to fly in an incredibly tight formation, just 100 to 200 meters apart. Not only does this require constant, fuel-burning maneuvers to prevent the billion-dollar nodes from colliding, but you still have to get that data back to Earth. High-bandwidth laser downlinks are notoriously susceptible to atmospheric interference. If a thick cloud system parks itself over your ground station, your latency advantage instantly vanishes.
I know, I know, I went into training again. But even for inference, think of the time and bandwidth required to move huge amounts of data from Earth into space, or between satellites. Depending on the use case, these transmission limitations may become a problem for day-to-day AI use.
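A quick sketch of what those limitations look like in practice. The 1 TB payload and the link rates are arbitrary illustrative choices, not figures from any of the papers above:

```python
def transfer_time_min(data_bytes, link_bps):
    """Ideal transfer time, ignoring protocol overhead, weather outages,
    and ground-station visibility windows (all of which make it worse)."""
    return data_bytes * 8 / link_bps / 60

ONE_TB = 1e12  # an illustrative dataset or model-checkpoint size, bytes
print(transfer_time_min(ONE_TB, 10e9))    # 10 Gbps optical link: ~13 min
print(transfer_time_min(ONE_TB, 100e9))   # 100 Gbps optical link: ~1.3 min
```

Fine for batch workloads; painful if your use case needs to shuttle fresh data up and results back down continuously.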
So, where does this leave us? Elon Musk is absolutely right about one fundamental thing: the energy bottleneck on Earth is a severe, existential threat to the scaling of AI.
What if, instead of shipping current chips to space, we tried to architect a new computing paradigm that is not as power-hungry as a general-purpose matrix-multiplication accelerator like a GPU? I’ve been thinking about this for some time now with the emergence of companies like Extropic and their thermodynamic sampling units, all the work around photonic chips, and non-deterministic computing (astute readers will have noticed that I purposely left quantum computing out of this bag… at least for now).
I think it is time for me and this newsletter to dig deeper into these new computing paradigms and what might make it possible to avoid shipping Nvidia chips into space.