SpaceX and Blue Origin race toward orbit as scientists question the physics



The argument is seductive in its simplicity: AI needs more energy than terrestrial networks can supply, so data centers must be put into orbit, where the sun never sets and electricity is free. SpaceX, Blue Origin and a growing constellation of startups are now racing to make that vision a reality. The problem, according to the scientists and engineers who would have to make the physics work, is that the vision skips several chapters on thermodynamics, economics, and orbital mechanics that have not yet been written.

SpaceX requested permission from the Federal Communications Commission on Jan. 30 to launch up to one million satellites into low-Earth orbit, each carrying computing hardware that together would form what the company described as a constellation with “unprecedented computing power to drive advanced artificial intelligence models.” The satellites would operate at altitudes between 500 and 2,000 kilometers, in orbits designed to maximize time in sunlight, and would route traffic through SpaceX’s existing Starlink network. The company also asked for an exemption from the FCC’s standard deployment milestones, which typically require half a constellation to be operational within six years.

Seven weeks later, Blue Origin filed its own application. Project Sunrise proposes 51,600 satellites in sun-synchronous orbits between 500 and 1,800 kilometers, complemented by the previously announced TeraWave constellation of 5,408 satellites providing ultra-high-speed optical backhaul. While SpaceX’s presentation emphasized raw scale, Blue Origin’s emphasized architecture: The system would perform calculations in orbit and transmit the results to Earth via TeraWave’s mesh network.

The startup ecosystem is moving even faster. Starcloud, formerly Lumen Orbit, raised $170 million at a $1.1 billion valuation in March, becoming the fastest unicorn in Y Combinator history just 17 months after completing the program. The company launched its first satellite carrying an Nvidia H100 GPU in November 2025 and filed an FCC application in February for a constellation of up to 88,000 satellites. Aethero, a defense-focused startup that builds space-grade computers with Nvidia Orin NX chips wrapped in radiation shielding, raised $8.4 million and is testing hardware in orbit this year.

The business case rests on a genuine problem. Global data center electricity consumption reached approximately 415 terawatt-hours in 2024, and the International Energy Agency projects it could exceed 1,000 TWh by 2026, with accelerated AI servers driving 30 percent annual growth. In Virginia alone, data centers consume 26 percent of the total electricity supply. Ireland’s share could reach 32 percent by the end of the year. Grid limitations are real, permitting delays are real, and political resistance to building more onshore capacity is real.

What’s also real, scientists maintain, is the physics that makes orbital computing spectacularly difficult at any meaningful scale. The most fundamental challenge is heat. In space there is no air to carry heat away from the processors, only radiative cooling, which requires large surfaces. Dissipating just one megawatt of waste heat while keeping the electronics at a stable 20 degrees Celsius requires approximately 1,200 square meters of radiator, roughly four tennis courts’ worth. A data center of several hundred megawatts, the minimum threshold of commercial relevance, would require radiators thousands of times larger than any ever installed on the International Space Station.
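Where the four-tennis-court figure comes from is straightforward Stefan-Boltzmann arithmetic. The sketch below sizes an ideal two-sided radiator panel held at 20 degrees Celsius and facing deep space; the perfect emissivity and the omission of absorbed sunlight and Earth infrared are illustrative simplifications, and relaxing either only makes the required area larger.

```python
# Radiator sizing sketch (illustrative simplifications, not a design spec).
# Stefan-Boltzmann law: rejected power = sides * emissivity * sigma * A * T^4
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(heat_watts, temp_c=20.0, emissivity=1.0, sides=2):
    """Panel area needed to reject heat_watts at a panel temperature of temp_c.

    Assumes an ideal radiator facing deep space; absorbed sunlight and
    Earth infrared are ignored, so real hardware needs more area than this.
    """
    temp_k = temp_c + 273.15
    flux_w_per_m2 = sides * emissivity * SIGMA * temp_k ** 4
    return heat_watts / flux_w_per_m2

print(f"{radiator_area_m2(1e6):,.0f} m^2")  # ~1,200 m^2 for one megawatt
```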

Radiation presents the second structural problem. Low Earth orbit exposes unprotected chips to cosmic rays and trapped particles that induce bit flips and permanent circuit damage. Radiation hardening adds 30 to 50 percent to hardware costs and reduces performance by 20 to 30 percent. The alternative, triple modular redundancy, means launching three copies of each chip, with three times the cooling, three times the electricity, and three times the mass. Starcloud’s approach of flying commercial GPUs with external shielding is an interesting experiment, but no one has shown it to work at scale or over hardware lifespans measured in years rather than months.
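Triple modular redundancy itself is conceptually simple, which is part of why its cost multiplier is so stark: every computation runs three times and a voter keeps the majority answer. Below is a minimal, hypothetical sketch of the voting step, a conceptual illustration rather than anyone’s flight software.

```python
from collections import Counter

def tmr_vote(results):
    """Majority vote over three redundant copies of the same computation.

    A radiation-induced bit flip in one copy is outvoted by the other two;
    if no two copies agree, the computation must be retried or flagged.
    """
    assert len(results) == 3, "triple modular redundancy needs three copies"
    value, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority -- multiple copies corrupted")
    return value

print(tmr_vote([42, 42, 298]))  # one copy hit by a bit flip -> still returns 42
```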

Latency is the third constraint. A million satellites spread across orbital shells between 500 and 2,000 kilometers cannot achieve the tight coupling necessary for training frontier models, where communication latencies between nodes must remain in the microsecond range. Low Earth orbit imposes minimum latencies of several milliseconds for inter-satellite links and 60 to 190 milliseconds for Earth-orbit round trips, compared to 10 to 50 milliseconds for terrestrial content delivery networks. That makes orbital infrastructure potentially viable for some inference workloads, but not for training, which is where the vast majority of AI computing demand currently lies.
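Those floors follow directly from the speed of light. The short sketch below computes the bare propagation delay for illustrative distances, before any routing, queuing, or protocol overhead is added on top.

```python
C_KM_PER_MS = 299_792.458 / 1000.0  # speed of light, km per millisecond

def propagation_ms(distance_km):
    """One-way speed-of-light propagation delay in milliseconds."""
    return distance_km / C_KM_PER_MS

# Ground <-> 550 km satellite, straight up and back: the absolute floor.
print(f"round trip to 550 km altitude: {2 * propagation_ms(550):.1f} ms")
# One hop on an optical crosslink between satellites ~1,000 km apart.
print(f"1,000 km inter-satellite hop: {propagation_ms(1000):.1f} ms "
      f"(~{propagation_ms(1000) * 1e3:,.0f} microseconds)")
```

Even a single crosslink hop sits thousands of times above the microsecond-scale coupling a training cluster requires.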

Then there is the cost. IEEE Spectrum estimated that a one-gigawatt orbital data center would cost more than $50 billion, about three times the cost of an equivalent ground-based facility, including five years of operation. Google has said launch costs must fall below $200 per kilogram before space computing starts to make economic sense. SpaceX’s current Starlink launches run at approximately $1,000 to $2,000 per kilogram. Some analysts maintain that the true threshold for competing with terrestrial build-outs is $20 to $30 per kilogram, a figure that no credible projection places within the next two decades. The economics look even less favorable when set against the deep-tech financing landscape on the ground, where onshore infrastructure projects can draw on established supply chains and proven unit economics.
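To see why the dollars-per-kilogram thresholds matter, the rough sketch below converts a launch price into launch cost per watt of orbital power. The watts-per-kilogram value for a complete satellite (solar arrays, radiators, and compute combined) is purely an illustrative assumption, not a number from any filing.

```python
def launch_cost_per_watt(price_per_kg, watts_per_kg=100.0):
    """Launch spend per watt of orbital power, under an assumed specific power.

    watts_per_kg is an illustrative guess for a whole satellite (solar
    arrays + radiators + compute); real designs vary widely.
    """
    return price_per_kg / watts_per_kg

# Roughly today's Starlink-class price, Google's bar, and the skeptics' bar.
for price_per_kg in (2_000, 200, 30):
    print(f"${price_per_kg}/kg -> ${launch_cost_per_watt(price_per_kg):.2f} per watt, launch alone")
```

Under that assumption, today’s prices put the launch bill alone in the tens of dollars per watt before a single server is powered on, which is one way to read Google’s $200-per-kilogram line.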

Even OpenAI’s Sam Altman, who explored a multibillion-dollar investment in rocket manufacturer Stoke Space as a potential SpaceX competitor for orbital data centers, has publicly called the concept “ridiculous” for the current decade. Altman told reporters that rough calculations of launch costs relative to ground energy costs simply don’t work yet, and asked pointedly how anyone plans to fix a broken GPU in space.

The astronomical community raises an entirely different objection. The vast majority of the roughly 1,000 public comments on SpaceX’s FCC filing urged the commission not to proceed. If approved, the constellation would put more satellites than visible stars in the sky during much of the night throughout the year, and would further militarize and commercialize an orbital environment already straining under the weight of existing mega-constellations.

None of this means that orbital data centers will never exist. SpaceX’s Starship, if it hits its cost goals, could fundamentally change the mass-to-orbit economics that currently make the concept unviable. Starcloud’s incremental approach of flying small payloads and iterating on radiation performance is the kind of engineering path that occasionally produces breakthroughs. And the terrestrial grid limitations that drive the interest are not going away.

But the gap between applying to the FCC for a million satellites and making orbital computing economically competitive with a warehouse full of GPUs in Iowa is not measured in years. It is measured in physics problems that the current pace of investment in AI infrastructure cannot simply buy its way past. There are no shortcuts, no matter how many billionaires are willing to try. The question scientists are asking is not whether space data centers are theoretically possible. It is why, given the magnitude of the unsolved engineering, anyone treats them as a short-term solution to a problem that requires short-term answers. It turns out that the sky is not the limit. The radiator is.


