Orbital Data Centers (ODCs) surged into the spotlight after SpaceX filed for a constellation of one million satellites and StarCloud described a 5 GW orbital data center powered by a 4×4 km solar array. Processing data in space is not new: satellites already run GPU-class compute to filter and compress data before downlink. What is new is the scale of the proposed systems and the idea of using them for terrestrial workloads. This is where the physics argument becomes essential. For a deeper dive, download our Insight Note.
Physics: The Ceiling on Orbital Hype
The case for ODCs rests on two claims: abundant solar power in orbit and the notion that “space is cold,” eliminating cooling and water constraints. Both claims require context.
In our Insight Note, we introduce the PHAMOC framework: power, heat, area, mass, orbit, and cost. These factors compound. More power means more heat. More heat requires more radiator area. More area increases mass. More mass increases launch cost. Orbit determines solar exposure and drag. Cost accumulates at every step.
A couple of points stand out.
- Generating power in space requires large solar arrays. Solar panels capture roughly a quarter of incident solar irradiance after efficiency losses, so scaling to MW-class compute drives array sizes into the thousands of square meters. Orbit selection matters: Sun-Synchronous Orbit (SSO) maximizes sunlight but does not eliminate eclipse.
- Cooling is harder in space, not easier. Vacuum is an insulator, so all heat must be radiated away, and radiators must run cooler than the chips they support, which drives large surface areas. A back-of-the-envelope sizing sketch for both points follows below.
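To see how the PHAMOC terms compound, here is a minimal back-of-the-envelope sketch of the power-to-heat-to-area-to-mass-to-cost chain for a single MW-class satellite. Every input (panel efficiency, radiator temperature and emissivity, areal densities, launch price) is an illustrative assumption, not a design value.

```python
# Minimal sketch of how the PHAMOC factors compound for one satellite.
# All inputs below are illustrative assumptions, not design values.

STEFAN_BOLTZMANN = 5.67e-8  # W / m^2 / K^4

def odc_rollup(compute_power_w,
               solar_flux_w_m2=1361,     # solar constant near Earth; no eclipse margin
               array_efficiency=0.25,    # assumed end-to-end panel efficiency
               radiator_temp_k=300,      # assumed radiator temperature
               emissivity=0.9,           # assumed radiator emissivity
               array_kg_per_m2=2.0,      # assumed areal density of the array
               radiator_kg_per_m2=5.0,   # assumed areal density of the radiator
               launch_usd_per_kg=1500):  # assumed launch price to LEO
    """Roll power -> heat -> area -> mass -> launch cost for one satellite."""
    # Power: array area needed to feed the compute payload.
    array_m2 = compute_power_w / (solar_flux_w_m2 * array_efficiency)
    # Heat and area: essentially all compute power becomes waste heat, and the
    # radiator is sized by Stefan-Boltzmann at the chosen temperature,
    # ignoring absorbed sunlight and Earth infrared for simplicity.
    radiator_m2 = compute_power_w / (emissivity * STEFAN_BOLTZMANN * radiator_temp_k**4)
    # Mass: arrays and radiators only; structure, avionics, and the compute
    # hardware itself are excluded, so this is a lower bound.
    mass_kg = array_m2 * array_kg_per_m2 + radiator_m2 * radiator_kg_per_m2
    # Cost: launch only.
    launch_usd = mass_kg * launch_usd_per_kg
    return array_m2, radiator_m2, mass_kg, launch_usd

array_m2, radiator_m2, mass_kg, launch_usd = odc_rollup(1_000_000)  # 1 MW payload
print(f"array {array_m2:,.0f} m^2, radiator {radiator_m2:,.0f} m^2, "
      f"mass {mass_kg:,.0f} kg, launch ${launch_usd/1e6:.0f}M")
```

Under these assumptions a 1 MW payload already needs roughly 3,000 m² of array and 2,400 m² of radiator, and the mass of those two surfaces alone approaches 18 tonnes. Tightening any single input ripples through every later term, which is exactly the compounding the framework describes.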
Physics sets the economic boundary. The aspiration of 100 kW/tonne contrasts with the 15–25 kW/tonne of modern satellites. This 4–7× gap forces scale through volume: more satellites, more mass, more cost.
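What "scale through volume" means in practice can be sketched by converting a fixed compute target into mass on orbit and satellite count at different power densities. The per-satellite mass and launch price below are assumptions chosen only to make the ratios visible.

```python
# Illustrative only: how power density (kW per tonne) drives constellation
# mass, satellite count, and launch spend for a fixed compute target.

def constellation_size(target_power_kw, kw_per_tonne,
                       sat_mass_tonnes=40,        # assumed per-satellite mass
                       launch_usd_per_kg=1500):   # assumed launch price to LEO
    mass_tonnes = target_power_kw / kw_per_tonne          # total mass on orbit
    satellites = mass_tonnes / sat_mass_tonnes            # count at fixed satellite mass
    launch_usd = mass_tonnes * 1000 * launch_usd_per_kg   # launch spend only
    return mass_tonnes, satellites, launch_usd

for density_kw_per_t in (15, 25, 100):  # today's satellites vs. the 100 kW/tonne aspiration
    mass, sats, cost = constellation_size(1_000_000, density_kw_per_t)  # 1 GW target
    print(f"{density_kw_per_t:>3} kW/t -> {mass:>8,.0f} t, {sats:>5,.0f} sats, "
          f"${cost/1e9:>4.0f}B launch")
```

At today's densities, a 1 GW target implies 40,000–67,000 tonnes on orbit under these assumptions; at the 100 kW/tonne aspiration it is 10,000 tonnes. The ratios, not the absolute dollar figures, are the point.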
The Scaling Penalty of Orbital Compute: The Myth of Low-Latency Space
ODCs are distributed compute networks. High compute density per satellite reduces constellation size. Low density increases it. As constellation size grows, so does coordination latency. This is the horizontal latency tax: the time required for cross‑satellite synchronization across orbital planes.
We also have to account for the cost and complexity of optical transceiver modules for inter‑satellite connectivity. Each satellite can only maintain a limited number of optical links at any instant. Connectivity becomes a gating parameter for scalability. It constrains how fast tasks can synchronize and how much traffic the mesh can absorb. Architecting a scalable ODC is therefore not a trivial exercise. It is a deeply complex systems‑engineering problem shaped by physics, connectivity, and orbital dynamics. For a deeper look at how LEO latency behaves in practice, and why latency advantages depend heavily on distance and workload, see my earlier analysis on LEO vs. terrestrial fiber.
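One way to put rough numbers on the horizontal latency tax, assuming a grid-style mesh in which each satellite holds four optical links (two intra-plane, two cross-plane), is sketched below. The hop spacing, switching delay, and constellation shapes are assumptions, not measured parameters.

```python
# Rough sketch of the horizontal latency tax in a grid-style ISL mesh.
# Topology, hop spacing, and switching delay are illustrative assumptions.

C_KM_PER_MS = 299_792.458 / 1000  # speed of light in km per millisecond

def one_way_latency_ms(hops, hop_distance_km, switch_delay_ms=0.05):
    """Propagation plus per-hop switching for a multi-hop ISL path."""
    return hops * (hop_distance_km / C_KM_PER_MS + switch_delay_ms)

# Vertical hop: ground to a 400 km shell, propagation only.
print(f"vertical hop to 400 km: {400 / C_KM_PER_MS:.2f} ms")

# Horizontal paths: in a P-plane x S-per-plane grid with four links per
# satellite, a worst-case path is roughly P/2 + S/2 hops. Assume ~1,500 km
# between optically linked neighbours.
for planes, per_plane in [(12, 24), (24, 40), (48, 60)]:
    hops = planes // 2 + per_plane // 2
    latency = one_way_latency_ms(hops, hop_distance_km=1500)
    print(f"{planes * per_plane:>5} sats, ~{hops:>2} hops: {latency:6.1f} ms one way")
```

Even a single 1,500 km hop costs about 5 ms of propagation, roughly three orders of magnitude above the microsecond-scale hops inside a terrestrial cluster, and worst-case paths across larger meshes reach hundreds of milliseconds.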
This is why ODCs are better suited for embarrassingly parallel inference and space‑native data reduction. Large‑scale AI training, which depends on tightly coupled communication, is not a good fit.
The myth of “low latency to space” ignores this. The vertical hop to 400 km is short. The horizontal mesh across hundreds or thousands of satellites is not.
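To make the training mismatch concrete, here is a sketch of the communication time for a single ring all-reduce of a gradient exchange, comparing an assumed terrestrial fabric with an assumed optical ISL mesh. The model size, node count, link rates, and per-step path latencies are all assumptions.

```python
# Illustrative: one gradient all-reduce over a terrestrial fabric vs. an ISL mesh.
# Model size, node count, link rates, and per-step latencies are assumptions.

def ring_allreduce_time_s(model_bytes, nodes, link_gbps, step_latency_s):
    """Bandwidth term of a ring all-reduce plus latency for its 2*(N-1) steps."""
    bytes_per_node = 2 * (nodes - 1) / nodes * model_bytes   # standard ring cost
    bandwidth_s = bytes_per_node / (link_gbps * 1e9 / 8)     # link rate in bytes/s
    latency_s = 2 * (nodes - 1) * step_latency_s             # one path per ring step
    return bandwidth_s + latency_s

GRAD_BYTES = 10e9   # assumed 10 GB gradient exchange
NODES = 64          # assumed number of participating nodes

terrestrial = ring_allreduce_time_s(GRAD_BYTES, NODES, link_gbps=400, step_latency_s=5e-6)
orbital = ring_allreduce_time_s(GRAD_BYTES, NODES, link_gbps=100, step_latency_s=5e-3)
print(f"terrestrial fabric: {terrestrial:.2f} s per all-reduce")
print(f"orbital ISL mesh:   {orbital:.2f} s per all-reduce")
```

Every optimizer step pays that cost, so under these assumptions the orbital mesh spends roughly five times longer communicating than an already well-connected terrestrial cluster. Loosely coupled inference and data reduction avoid the penalty entirely.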

Concluding Thoughts: The Demand Problem at the Heart of ODCs
SpaceX references inference and edge computing as target applications. Others mention training. Few identify concrete use cases or paying customers. The most compelling applications remain space-native: missile defense, Earth observation (EO), synthetic aperture radar (SAR), and signals intelligence (SIGINT). These benefit from in-situ processing that reduces downlink bandwidth by 60–80%.
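The value of that reduction is easiest to see as downlink time. The sketch below assumes a sensor producing a fixed raw daily volume and a fixed downlink rate; the volume and link rate are placeholder figures, with only the 60–80% range taken from the text.

```python
# Illustrative: downlink hours saved by reducing sensor data on orbit.
# The raw daily volume and link rate are assumed figures.

def downlink_hours(daily_raw_gb, reduction, link_gbps=1.2):
    """Hours of link time per day to move what survives on-board reduction."""
    remaining_bits = daily_raw_gb * 8e9 * (1 - reduction)
    return remaining_bits / (link_gbps * 1e9) / 3600

for reduction in (0.0, 0.6, 0.8):
    hours = downlink_hours(daily_raw_gb=2000, reduction=reduction)  # e.g. a SAR payload
    print(f"{int(reduction * 100):>2}% reduced on orbit -> {hours:4.1f} link-hours per day")
```

With only a handful of ground-station passes per day, the difference between nearly four link-hours and under one can decide whether the downlink budget closes at all.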
A recent report highlighted that even SpaceX quietly acknowledged the commercial uncertainty around ODCs. SpaceX’s pre‑IPO S‑1 filing explicitly states that its orbital AI compute initiative is early‑stage, technically complex, relies on unproven technologies, and may not achieve commercial viability. This is consistent with our analysis: the physics may allow ODCs to exist, but the economics depend entirely on whether there are customers willing to pay for them.
For terrestrial workloads, the economics are challenging. A representative 1 GW ODC constellation is estimated at ~$51B vs ~$16–20B for a terrestrial hyperscale facility. ODCs must justify this premium through differentiated value, not cost parity.
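Framed per watt, the estimates quoted above imply the following premium; the figures are taken at face value, and the per-watt normalization is the only addition.

```python
# Per-watt framing of the capex estimates quoted above (1 GW normalization).

estimates_usd_billion = {
    "orbital 1 GW constellation": 51,
    "terrestrial hyperscale (low end)": 16,
    "terrestrial hyperscale (high end)": 20,
}

for name, capex_billion in estimates_usd_billion.items():
    usd_per_watt = capex_billion * 1e9 / 1e9  # 1 GW = 1e9 W
    print(f"{name:<33} ${capex_billion}B  (~${usd_per_watt:.0f}/W)")

print(f"capex premium: {51 / 20:.1f}x to {51 / 16:.1f}x")
```

A roughly 2.5–3× capex premium has to be recovered through value the ground cannot offer, which is why the workload question dominates.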
The discussion of ODCs must start with applications. Without clear revenue-generating workloads, the physics and the economics remain unforgiving.