Orbital AI Data Centers: Economic Competitiveness Timeline

At what point — if ever — will orbital AI data centers become economically competitive with terrestrial alternatives?

Growing demand for AI compute is straining terrestrial infrastructure: suitable sites, grid capacity, and permitting are all increasingly constrained. Meanwhile, launch costs are falling and solar power in orbit is abundant. This analysis models the total cost of ownership (TCO) for both orbital and terrestrial AI compute over 2026–2040 across optimistic, central, and conservative scenarios, examining when — and whether — the two cost curves converge.

Summary

For Tier 1-2 inference workloads (small-to-medium models, batch inference), the optimistic scenario approaches cost parity by 2040 (~1x), while the central scenario ends at ~1.5x and the conservative at ~3.3x. The optimistic trajectory (~1.1x by 2035, ~1x by 2040) rests on aggressive but not implausible assumptions: Starship at $100/kg by 2030, lightweight satellites, a 5.9-year effective lifetime, and SpaceX-level financing declining from 10% to 8% WACC. Optimistic ratios further assume SpaceX-level vertical integration (internal launch pricing, shared Starlink ground infrastructure, heritage manufacturing); non-SpaceX operators would face substantially higher ratios across all scenarios. Training, RAG, and inference workloads exceeding a single satellite's NVLink domain (Tier 3: NVL144+ configurations) are outside the scope of this analysis due to inter-satellite bandwidth constraints.

Confidence note: The three most impactful orbital parameters — effective satellite lifetime, platform manufacturing cost, and orbital WACC — all have Low or Medium confidence, lacking production data or financing precedent. The headline conclusion is robust to the well-sourced parameters (GPU cost, terrestrial energy, terrestrial infrastructure) but sensitive to revisions in these poorly-sourced ones. See Critical Inputs for the full confidence breakdown. Scale note: All cost ratios assume multi-GW-scale deployment. Near-term deployment economics — where fixed costs amortize over a small base — are out of scope and would be materially worse for orbital (see Limitations).

The remaining gap is driven by three factors — two of which (effective lifetime and cost of capital) have Low-to-Medium confidence and no operational validation: (1) the effective lifetime penalty — orbital hardware delivers 2.2–5.9 capacity-weighted years of service (including deployment delay) versus 4–6 years for terrestrial GPU depreciation; (2) the cost of capital spread — orbital assets face 8–20% WACC (declining over time, central: 13.5%→10% by 2040) versus 5–10% for terrestrial, amplified by CRF over shorter lifetimes; (3) the GPU space adaptation premium. Variable energy cost represents only ~6-7% of terrestrial TCO (about 694 $/kW_IT/year), making orbital's primary advantage — free solar energy — a minor factor, insufficient to offset these structural penalties.

Beyond cost, orbital compute is initially constrained to Tier 1 and Tier 2 inference workloads. Frontier MoE models requiring 64+ GPU NVLink domains for wide expert parallelism cannot be served across satellites at demonstrated inter-satellite link bandwidth (~800 Gbps vs ~14,400 Gbps per GPU for NVLink 5). Projected ISL capabilities via DWDM (10-40 Tbps per link in close-formation clusters at 100-200 m spacing) would approach current NVLink 5 per-link bandwidth — but these projections are undemonstrated in space, the terrestrial bar is also advancing (NVLink 6 at 28.8 Tbps per GPU, shipping H2 2026), and the all-to-all communication pattern of expert parallelism poses topological challenges beyond raw point-to-point bandwidth. See the inference networking analysis for the full bandwidth trajectory and feasibility assessment.

Model

The quantitative model computes amortized TCO per kW_IT per year for both orbital and terrestrial AI compute, with time-varying inputs over 2026-2040 across three scenarios (optimistic/central/conservative). All metrics are normalized to kW_IT (IT load power, GPUs only) in 2025 USD.

Orbital TCO = (launch_cost + GPU_cost + platform_manufacturing) × CRF(orbital_WACC, effective_lifetime) + fixed_opex

Terrestrial TCO = GPU_cost × CRF(terr_WACC, GPU_life) + infrastructure × CRF(terr_WACC, 15yr) + power_capex × CRF(terr_WACC, 20yr) + variable_energy × 8760hr × PUE + non_energy_opex

CRF (Capital Recovery Factor) converts one-time capex into equivalent annual cost accounting for the cost of capital. The model uses separate WACC values for orbital (central: 13.5% in 2026, declining to 10% by 2040) and terrestrial (central: 7%) to reflect their different risk profiles. The orbital central WACC of 13.5% (rather than a naive 15%) reflects a structural adjustment removing ~1.5pp of double-counting with risks already captured in the effective lifetime parameter.
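CRF is simple enough to verify directly. A minimal Python sketch, using the central inputs quoted above, reproduces the two amortization rates that drive the comparison:

```python
def crf(wacc: float, years: float) -> float:
    """Capital Recovery Factor: the annual payment per $1 of capex
    at a given discount rate over a given amortization period."""
    return wacc / (1 - (1 + wacc) ** -years)

# Orbital central, 2026: 13.5% WACC over a 3.8-year effective lifetime
print(round(crf(0.135, 3.8), 2))  # 0.35 -- each $1 of capex costs $0.35/year
# Terrestrial GPUs: 7% WACC over a 5-year useful life
print(round(crf(0.07, 5), 2))     # 0.24
```

The spread between these two rates (0.35 vs 0.24 per dollar per year) is the mechanism by which shorter lifetimes and higher WACC amplify orbital capex throughout the analysis.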

All input values are sourced from individual research pages. The interactive model table at the bottom of this page shows all computed values across scenarios.
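As a sanity check, the two TCO formulas can be evaluated directly from the 2026 central column of the model table. This sketch hardcodes the table's 2026 inputs (including the precomputed 694 $/kW_IT/year energy figure and 0.95 availability) and recovers the headline 2026 ratio; small discrepancies versus the table come from rounding in the published inputs.

```python
def crf(wacc, years):
    """Capital Recovery Factor: annual cost per $1 of capex."""
    return wacc / (1 - (1 + wacc) ** -years)

# --- Orbital TCO, 2026 central (inputs from the model table, $/kW_IT) ---
launch, gpu_orbital, platform = 83_239, 37_375, 18_000
orbital_capex = launch + gpu_orbital + platform               # 138,614
orbital_tco = (orbital_capex * crf(0.135, 3.8) + 200) / 0.95  # + fixed opex, / availability

# --- Terrestrial TCO, 2026 central ---
terrestrial_tco = (
    32_500 * crf(0.07, 5)     # GPUs, 5-year useful life
    + 12_500 * crf(0.07, 15)  # infrastructure, 15-year life
    + 200 * crf(0.07, 20)     # power-asset capex, 20-year life
    + 694                     # variable energy x 8760 h x PUE (precomputed)
    + 750                     # non-energy opex
)

print(round(orbital_tco / terrestrial_tco, 1))  # 4.8
```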

Excluded: The model omits several cost categories — NRE, ground-segment capex, debris liability, and SDC quality degradation — all of which would add cost to the orbital side, so their omission biases the computed ratios in orbital's favor (see Aggregate direction of excluded costs under Limitations).

Key Findings

1. Optimistic Scenario Approaches Parity by 2040

| Year | Optimistic | Central | Conservative |
|------|------------|---------|--------------|
| 2026 | 1.9x | 4.8x | 9.4x |
| 2030 | 1.2x | 2.3x | 5.6x |
| 2035 | 1.1x | 1.7x | 3.9x |
| 2040 | 1x | 1.5x | 3.3x |

Note: Year-to-year movement is driven by launch cost, platform manufacturing cost (learning curves), and orbital WACC (declining as operational history accumulates). Effective lifetime, fixed opex, structural overhead, and orbital PUE are held constant. The 2035–2040 ratios are somewhat conservative for orbital as a result, since effective lifetime should improve with design iteration (see [Limitations](#limitations)).

The ratio declines as launch costs fall and WACC compresses, converging toward a floor set by the effective lifetime penalty and the irreducible GPU cost shared with terrestrial. The three parameters with the largest impact on this ratio — effective satellite lifetime, platform manufacturing cost, and orbital WACC — all have Low-to-Medium confidence (see Critical Inputs). The optimistic scenario approaches parity (~1x by 2040) as WACC compression and manufacturing learning compound, but the central and conservative scenarios remain well above parity. See the cost parity analysis for the full timeline.

2. GPU Cost Dominates Both Sides

At 7,926 $/kW_IT/year (central), amortized GPU cost is 74% of terrestrial TCO. On the orbital side, amortized GPU cost ranges from ~26% of orbital TCO in 2026 (when launch cost dominates) to ~75% by 2040 (when launch and platform costs have fallen via learning curves). Since GPU cost per kW_IT is essentially identical whether deployed on Earth or in orbit (plus a modest 8-30% space adaptation premium), this dominant shared cost cannot create an advantage for either deployment context. The competition reduces to non-GPU costs — where terrestrial has a decisive advantage.
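The shares quoted above follow directly from the model-table values; a quick sketch (TCO denominators taken from the table):

```python
def crf(wacc, years):
    return wacc / (1 - (1 + wacc) ** -years)

# Terrestrial: amortized GPU cost as a share of 2026 TCO (10,762 $/kW_IT/year)
print(round(32_500 * crf(0.07, 5) / 10_762, 2))          # 0.74

# Orbital: amortized GPU share in 2026 (13.5% WACC) vs 2040 (10% WACC),
# using the model-table TCO denominators
gpu_orbital = 37_375  # 32,500 x 1.15 space-adaptation premium
print(round(gpu_orbital * crf(0.135, 3.8) / 51_619, 2))  # 0.26
print(round(gpu_orbital * crf(0.10, 3.8) / 16_367, 2))   # 0.75
```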

3. Energy Savings Are Real but Insufficient Alone

Terrestrial energy cost is 694 $/kW_IT/year (central, 2026) — only ~7% of TCO. Eliminating this saves roughly that amount. In the optimistic scenario, this saving combined with low platform and launch costs narrows the gap to ~1.1x — but the residual premium persists because of the effective lifetime penalty and cost of capital spread. In the central scenario, the energy saving is overwhelmed by the CRF-amplified capex penalty: orbital amortizes capex at 13.5% WACC in 2026 (Low confidence — no financing precedent, adjusted down from 15% to remove double-counting with effective lifetime) over 3.8 years (Medium confidence — model-derived, no operational data), yielding CRF = 0.35, versus terrestrial's 7% WACC over 5 years for GPUs (CRF ≈ 0.24). This creates a large annual cost gap that dwarfs the energy saving.

4. Launch Cost Becomes Irrelevant by 2040

Launch cost dominates orbital capex in 2026 (83,239 $/kW_IT, ~60% of capex) but becomes a minor line item by 2040 (3,022 $/kW_IT, ~6% of capex). Further launch cost reduction has diminishing returns because GPU cost and platform manufacturing become the dominant capex components.
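The trend can be read straight off the model table's launch-cost and total-capex rows:

```python
# Launch cost as a share of orbital capex, from the model table ($/kW_IT)
launch = {2026: 83_239, 2040: 3_022}
capex = {2026: 138_614, 2040: 49_122}
for year in (2026, 2040):
    print(year, round(launch[year] / capex[year], 2))  # 2026: 0.6, 2040: 0.06
```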

5. Effective Lifetime Is the Key Remaining Cost Lever

The effective lifetime — capacity-weighted years of service accounting for degradation, failures, and deployment delay — remains the primary driver of the orbital cost premium (OAT swing: ~1.1x, see Finding 7). The central estimate of 3.8 years is a weakly informed model output with Medium confidence: physical lifetime is well-grounded in fleet data, but the space GPU attrition rate (especially destructive SEL) spans >6 orders of magnitude in the literature — the central value should be understood as likely somewhere in the range of 2–6 years, not a precisely calibrated figure. Because effective lifetime is the dominant sensitivity lever, this uncertainty propagates strongly into all downstream TCO results. With CRF-based amortization, the lifetime effect is amplified: at the 2026 central WACC of 13.5%, extending the effective lifetime from 3.8 to 5.9 years reduces the CRF from ~0.35 to ~0.26 (a ~27% cut in annual amortized capex); combined with the projected WACC decline to 10%, the CRF falls to ~0.23, a ~35% cut.

6. Inference Networking Constrains Workload Scope

Frontier MoE models (DeepSeek R1 671B, 60%+ of frontier models use MoE) require 64+ GPUs in a single NVLink domain (1.8 TB/s per GPU, 130 TB/s aggregate). The bandwidth picture has two layers — demonstrated and projected — and both sides of the orbital-terrestrial comparison are advancing:

Demonstrated (today): Google's Suncatcher bench-demonstrated 800 Gbps (0.1 TB/s) per optical inter-satellite link pair — an ~18× gap versus current NVLink 5 (14.4 Tbps per GPU). At this level, ISLs are comparable to InfiniBand 400G: sufficient for pipeline parallelism but insufficient for tensor or expert parallelism.

Projected (both sides advancing): Google states the required ISL bandwidth is "on the order of 10 Tbps," achievable via COTS DWDM (9.6-12.8 Tbps per aperture) in a close-formation cluster with satellites at 100–200 m spacing; our analysis estimates an upper bound of 10-40 Tbps with spatial multiplexing. At the projected level, per-link ISL bandwidth would approach current NVLink 5 (14.4 Tbps) — but the terrestrial bar is simultaneously advancing: NVIDIA's Rubin NVL72 (H2 2026) doubles per-GPU NVLink to 3.6 TB/s (28.8 Tbps), and the roadmap extends to NVL576 and NVL1152 with co-packaged optics. Neither the projected ISL bandwidth nor these future terrestrial networks are flight-demonstrated or production-deployed, so a symmetric comparison should note that both represent engineering projections, not current capability. The projected ISL capabilities are more likely to narrow the gap for point-to-point links between satellite pairs, but the aggregate all-to-all bandwidth of an NVL72 fabric (130-260 TB/s across all 72 endpoints simultaneously) has no ISL analogue.
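The bandwidth ratios in this section are straightforward arithmetic; collecting the cited per-endpoint figures in one place:

```python
# Per-endpoint bandwidth figures cited in this section, in Tbps
nvlink5 = 14.4            # NVLink 5, per GPU
nvlink6 = 28.8            # NVLink 6, per GPU (Rubin NVL72, H2 2026 roadmap)
isl_demo = 0.8            # demonstrated optical ISL (Suncatcher bench, 800 Gbps)
isl_projected = (10, 40)  # projected DWDM ISL range per link (undemonstrated)

print(round(nvlink5 / isl_demo))             # 18  -- today's gap vs NVLink 5
print(round(nvlink6 / isl_demo))             # 36  -- gap vs the 2026 terrestrial bar
print(round(nvlink5 / isl_projected[0], 2))  # 1.44 -- projected ISL near per-link parity
```

Note these are point-to-point figures only; the aggregate all-to-all fabric bandwidth of an NVL72 system has no counterpart in this comparison.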

See the inference networking analysis for the full bandwidth comparison table and feasibility assessment. Under current demonstrated technology, orbital is limited to Tier 1 and Tier 2 workloads: small-to-medium models that fit within a single satellite's NVLink domain, batch inference, and pipeline-parallel serving across satellites — with tensor and expert parallelism remaining infeasible at demonstrated ISL bandwidth.

Two factors could relax these constraints: (1) DWDM inter-satellite links reaching multi-Tbps bandwidth would enable wider parallelism across satellites, and (2) a satellite housing enough GPUs with an internal NVLink switch fabric could support a tightly-coupled domain within a single spacecraft. One estimate (Handmer) suggests ~200 H100-equivalent GPUs per satellite, which exceeds the 72-GPU NVL72 count — though GPU count alone does not guarantee an NVL72-class fully-connected NVLink domain, which also requires switch ASICs, high-bandwidth cabling, and additional power and mass.

Model compression closes the capability gap over time (frontier capabilities reach consumer GPUs in 6-12 months), but orbital always serves models 1-2 generations behind the terrestrial frontier.

Moreover, the terrestrial networking bar is accelerating. NVIDIA's GTC 2026 roadmap extends beyond NVL144 to NVL576 and NVL1152 (multi-rack systems using co-packaged optics), with inference-specific hardware disaggregation (GPU + Groq LPU + storage accelerators coordinated across rack types within a pod achieving 10 PB/s internal bandwidth). Competitive inference is evolving from a GPU problem to a systems-level integration problem that has no orbital analogue — reinforcing the Tier 3 gap. See the inference networking analysis for the full assessment.

7. Sensitivity: Lifetime Dominates, Financing Matters

A one-at-a-time (OAT) analysis — varying each input from optimistic to conservative while holding all others at central — reveals the individual impact of each parameter on the TCO ratio at 2035 (baseline ~1.7x). The top 7 parameters by approximate swing:

| Rank | Parameter | Swing (Δ ratio) | Confidence |
|------|-----------|-----------------|------------|
| 1 | Effective satellite lifetime | ~1.1 | Medium |
| 2 | Platform manufacturing cost | ~0.9 | Low |
| 3 | Launch cost | ~0.5 | Medium |
| 4 | GPU useful life* | ~0.5 | High |
| 5 | Orbital WACC | ~0.4 | Low |
| 6 | GPU cost premium | ~0.3 | Low |
| 7 | Terrestrial WACC | ~0.3 | High |

*GPU useful life direction is reversed: longer terrestrial GPU life widens the gap (lower terrestrial amortization), so the optimistic end of the range corresponds to shorter GPU life.

Effective satellite lifetime remains the most impactful single parameter. Platform manufacturing cost is the second-largest driver; with manufacturing learning curves now modeled (5%/year central decline), its 2035 range is $3.5K-$29.2K. The WACC parameters (ranks 5 and 7) are individually significant; orbital WACC alone carries a ~0.4x swing. See the sensitivity analysis for the full table.

8. Technology Readiness

The economic analysis assumes orbital compute systems exist at scale. As of early 2026, multiple players have moved beyond concepts to hardware: Kepler Communications launched the first operational distributed on-orbit computing service (10 satellites, March 2026), Starcloud-1 completed the first AI training in orbit (single H100, November 2025), K2 Space is preparing to launch its 20 kW Gravitas platform (scheduled late March 2026), and China's Xingshidai deployed 12 AI satellites (May 2025). SpaceX has disclosed plans for 100 kW AI Sat Mini satellites (~100 per Starship launch), and Google's Suncatcher targets a two-satellite prototype in early 2027.

However, the existence of orbital compute demonstrations does not validate the assumptions in our economic model. All operational systems are low-power edge-compute demonstrations (0.7–28 kW) — not the 100+ kW satellites our cost model assumes. The gap between a 60 kg single-GPU satellite and a 1–5 ton, 100 kW compute satellite is not merely a matter of scaling up: the model's key input assumptions — thermal management at >1 kW/kg rejection rates, power systems at 100+ kW with multi-year reliability, mass-production of satellite buses at $8K-$35K/kW_IT, and bus failure rates of 0.5–2.5%/year — have not been validated at the power levels the economics require. No satellite has demonstrated radiative cooling at 100 kW scale, operated a high-power EPS for multiple years, or been produced in the quantities needed to achieve the manufacturing cost learning curves the model assumes. Readers should treat the model's inputs as engineering projections informed by component-level data and analogy to lower-power systems, not as validated system-level parameters. Multi-GW orbital compute deployment remains unlikely before 2035, making the 2030 optimistic ratio (~1.2x) economically computed but practically irrelevant. See the cost parity timeline for the full rollout analysis.

Critical Inputs

The leaf values that most influence the conclusion, with their ranges and confidence. Impact ranking is from the OAT sensitivity analysis (see Finding 7).

| Input | Central | Range | Confidence | Impact (OAT swing) |
|-------|---------|-------|------------|--------------------|
| Effective satellite lifetime | 3.8 years | 2.2-5.9 | Medium (physical lifetime well-grounded in fleet data — Iridium 20+ yr precedent, OneWeb 0.3% failure/4-7 yr, Castet-Saleh Weibull infant mortality; bus loss rate empirically anchored but requires judgment premium for high-power systems; GPU attrition terrestrial baseline peer-reviewed but space SEL rate genuinely uncharacterized — NASA SEL database shows >6 OOM rate variation with no predictive trends; no orbital compute operational data; deployment delay 3-6 months consuming GPU economic life) | ~1.1 — dominant driver |
| Platform mfg cost (2035) | $11,275/kW_IT | $3.5K-$29.2K | Low (no production data; blog models and startup pricing; learning curves applied) | ~0.9 |
| Launch cost (2035) | $150/kg | $35-$500 | Medium (Starship unproven) | ~0.5 — diminishing |
| GPU useful life | 5 years | 4-6 | High (observed depreciation) | ~0.5 (helps terrestrial) |
| Orbital WACC | 13.5%→10% (2026→2040) | 8-20% (declining) | Low (no precedent; adjusted for double-counting) | ~0.4 |
| GPU cost premium | 1.15x | 1.08-1.30x | Low | ~0.3 |
| Terrestrial WACC | 7% | 5-10% | High (observed: Equinix 5.9%, DLR 6.5%) | ~0.3 |
| Orbital PUE | 1.05 | 1.035-1.10 | Medium (component specs from Vicor + NASA subsystem allocation; no system-level data) | Low (~0.05) |
| GPU cost per kW_IT | $32,500/kW_IT | $25K-$40K | High (observed pricing) | ~0.05 (affects both sides equally) |
| Terrestrial energy cost | $0.066/kWh | $0.041-$0.098 | High (observed) | ~0.1 |
| Inference domain size | 16 GPUs/domain | 8-72 | Medium | N/A (workload scope, not cost) |

The three parameters combining the highest impact with the weakest sourcing — effective satellite lifetime (~1.1 swing, Medium confidence), platform manufacturing cost (~0.9 swing, Low), and orbital WACC (~0.4 swing, Low) — all lack primary sources or production data. Platform manufacturing cost is particularly uncertain: the wide optimistic-to-conservative range ($3.5K-$29.2K at 2035) may still understate the true uncertainty, as even the conservative estimate depends on manufacturing-learning assumptions for hardware types that have never been produced. Orbital PUE was re-sourced after source accuracy reviews removed all original evidence items; current estimates are based on Vicor space-grade converter specifications, VPT GaN converter data, NASA's SOA PMAD survey, and NASA-affiliated spacecraft subsystem allocation guides — component-level data, not system-level measurements (see Limitations). The financing parameters (WACC) are jointly significant: if both move favorably for orbital simultaneously, their combined effect approaches effective lifetime in magnitude. Terrestrial parameters (GPU cost, energy cost, infrastructure, terrestrial WACC) are better anchored by observed market data. See the source quality assessment for the full classification.

Limitations

Workload scope. The TCO comparison applies to Tier 1 and Tier 2 inference workloads only — small-to-medium model inference, batch inference, and high compute-to-data-ratio workloads that fit within a single satellite or small cluster. Training, RAG, large-context retrieval, and frontier MoE inference requiring wide expert parallelism are infeasible with current inter-satellite link technology. The cost ratios are meaningful only for this subset of inference.

Latency. LEO adds ~4-8 ms round-trip latency (ground-to-satellite-to-ground), excluding orbital compute from latency-sensitive interactive inference workloads (real-time chatbots, code completion, voice assistants). This further narrows the addressable workload beyond the networking tier restrictions — even within Tier 1 and Tier 2, only batch and latency-tolerant inference is viable.

Deployment scale scope. The model assumes multi-GW-scale deployment, which amortizes NRE, ground-segment capex, and other fixed costs over a large installed base. Near-term deployment economics — where the amortization base is small — are out of scope. This is a material omission: the report acknowledges that NRE alone reaches $1,000–5,000/kW_IT at an initial 1 GW deployment, and that the aggregate excluded-cost adjustment would be "substantially larger" at that scale (see Aggregate direction of excluded costs below). Since multi-GW orbital deployment is unlikely before 2035, and current operational systems are 0.7–28 kW, the year-by-year cost ratios presented for the early period (2026–2030) reflect a mature-scale cost structure that would not apply to realistic first deployments. Readers should interpret early-year ratios as answering "what would orbital cost if deployed at scale in that year," not "what would the first orbital deployment cost in that year" — the latter would be materially worse for orbital.

Source quality on key inputs. The four lowest-confidence orbital parameters — effective satellite lifetime, platform manufacturing cost, orbital WACC, and orbital PUE — all lack primary sources or production data. Platform cost estimates depend on Mach33's analysis of Starlink-heritage pricing (an industry blog modeling exercise, not demonstrated costs) and Starpath panel pricing (a startup without production deliveries). The optimistic scenario's solar cost of $5/W assumes a ~20x compression from the NASA SBSP study's $100/W; this may eventually happen but is a manufacturing-learning assumption, not a demonstrated baseline. Orbital PUE is now sourced from Vicor space-grade converter specifications and NASA subsystem allocation guides, but no system-level orbital compute PUE has been measured. Some evidence anchors are weak for the weight placed on them: HN discussion summaries and NextBigFuture for Falcon 9/Starship economics serve as exploratory pointers but are insufficient to anchor central estimates for outcome-determining parameters. Orbital mechanics parameters (eclipse duration, beta angles) are derived from standard physics formulas validated against five real dawn-dusk SSO missions; the computational model used for cross-checking was generated by an LLM and should be treated as a computation check, not an independent empirical source.

Operator archetype asymmetry — the model reflects SpaceX's internal cost. The orbital optimistic and partly central cases model a SpaceX/xAI-style vertically integrated operator: internal launch pricing (~2-4x below customer price, based on Falcon 9 precedent), shared Starlink ground infrastructure (~170+ stations), SpaceX-heritage satellite manufacturing, and SpaceX balance sheet financing (10% WACC). Any other organization attempting orbital compute would face substantially higher costs across all four dimensions. No other entity currently combines volume launch capability, mega-constellation operations experience, and AI compute demand under one corporate umbrella — developing comparable vertical integration is a high bar. The terrestrial side is benchmarked to broader hyperscaler/market economics. A like-for-like comparison of best-in-class orbital vs best-in-class terrestrial (where hyperscalers achieve below-market energy and infrastructure costs) would likely show a wider gap than the optimistic scenario suggests. Conversely, a non-SpaceX orbital operator paying customer launch prices, building dedicated ground infrastructure, and financing at venture rates would face TCO ratios substantially above our conservative scenario.

WACC double-counting has been addressed. The central orbital WACC was adjusted from a naive 15% to 13.5% to remove ~1.5pp of overlap with risks already captured in the effective lifetime parameter (shorter asset life, catastrophic loss). The remaining WACC spread (technology novelty, revenue uncertainty, financing precedent) is not double-counted. See the WACC analysis for the full decomposition.

Effective lifetime compresses distinct mechanisms with very different evidence quality. Physical durability, failure attrition, economic obsolescence, deployment delay, and design-life constraints are folded into one scalar. These have different mitigation paths and improvement timelines, but the model cannot distinguish between them. Critically, the sub-inputs have very different confidence levels: physical lifetime and SDC overhead are well-grounded in fleet data; bus loss rate is empirically anchored but requires judgment premium for high-power systems; deployment delay (3-6 months) is based on engineering estimates; and GPU attrition in space — especially the destructive SEL rate for H100/B200 — is the weakest link, with NASA's statistical SEL database showing rates spanning >6 orders of magnitude. The central effective lifetime of 3.8 years is a weakly informed indicative estimate — likely somewhere in the range of 2–6 years — not a precisely calibrated figure. Because this is the analysis's dominant sensitivity lever, the resulting uncertainty propagates strongly into all TCO and parity results. See the operational lifetime page for the full evidence quality decomposition.

Time-invariance. Effective lifetime, fixed opex, structural overhead, and orbital PUE are held constant across 2026–2040. In reality, all should improve with operational experience. Launch cost, platform manufacturing cost, and orbital WACC are now time-varying — WACC declines as operational history accumulates (central: 13.5%→10% by 2040, following the offshore wind precedent). The remaining flat parameters mean the later-year ratios are somewhat conservative for orbital, since effective lifetime in particular should improve with design iteration. GPU cost per kW_IT is also time-invariant; if competition drives $/kW_IT down, both TCOs fall but the orbital premium widens.

Correlated parameters. The OAT sensitivity analysis varies one parameter at a time, but key variables are correlated: lower launch cost arrives with higher manufacturing scale; proven operations should compress WACC; longer effective life and lower failure rates should co-occur. The bundled scenarios partially capture this, but important cross-correlations remain unexplored.

Aggregate direction of excluded costs. All four acknowledged exclusions — NRE, ground-segment capex, debris liability, and SDC quality degradation — add cost to the orbital side, meaning the modeled ratios understate the true orbital premium. Bounding the aggregate at GW-scale deployment (2035): NRE ($50–400/kW_IT capex, annualized ~$18–146/kW_IT/year), ground-segment capex ($50–750M total, annualized ~$7–99/kW_IT/year), debris liability ($150–600/kW_IT/year), and SDC quality degradation (unquantified — the 1.3% throughput overhead is modeled in effective lifetime, but the broader quality impact on inference accuracy is not). Excluding the unquantified SDC quality impact, the aggregate of NRE, ground segment, and debris liability adds approximately +0.1x to +0.3x to the modeled TCO ratio — moving the central 2035 ratio from ~1.7x to ~1.8–2.0x and the optimistic 2035 ratio from ~1.1x to ~1.2–1.4x. Debris liability dominates at GW scale; NRE and ground segment are secondary. SDC quality degradation could add further cost if quantified. At the more plausible initial deployment scale of ~1 GW, NRE alone could reach $1,000–5,000/kW_IT (annualized ~$360–1,820/kW_IT/year), and the aggregate adjustment would be substantially larger — most significant in the first years of deployment when amortization bases are smallest.
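The annualized NRE figures follow from applying the orbital CRF to the capex bounds. A sketch using the central amortization rate (13.5% WACC, 3.8-year lifetime) reproduces the quoted ranges to within a few percent, suggesting the report's figures used a slightly different rate:

```python
def crf(wacc, years):
    return wacc / (1 - (1 + wacc) ** -years)

annualize = crf(0.135, 3.8)  # central orbital amortization rate, ~0.35/year

# GW-scale NRE bounds: $50-400/kW_IT capex
print([round(c * annualize) for c in (50, 400)])       # [18, 141]
# Initial ~1 GW scale: $1,000-5,000/kW_IT capex
print([round(c * annualize) for c in (1_000, 5_000)])  # [353, 1767]
```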

Terrestrial energy supply constraints. The core proponent argument for orbital compute is not that it will be cheaper, but that terrestrial capacity will hit physical limits (grid, land, permitting) before AI demand is met. A side-page analysis examines this argument in detail. The finding: supply constraints are genuine (8-year grid interconnection queues, gas turbines sold out through 2030, PJM capacity prices up 10x), but the diversity of terrestrial supply responses (BTM gas, solar+battery, aeroderivative turbines, nuclear PPAs, grid reform) creates a cost ceiling. Even in the conservative scenario, blended terrestrial electricity costs peak at ~$0.11/kWh. More fundamentally, the model-derived break-even analysis shows that from 2030 onward, no terrestrial energy price would make orbital competitive — the orbital cost premium is structural (effective lifetime, cost of capital, GPU adaptation), not energy-driven. Energy cost is only ~6-7% of terrestrial TCO.

Ground-segment weather availability is out of scope. The model omits ground-segment capex (~$50-750M) and does not incorporate the operational constraints of LEO optical downlinks — cloud cover interrupts optical links and motivates multiple geographically diverse ground stations; NASA's LCRD uses adaptive optics, weather monitoring, and two optical ground stations for availability; TBIRD's record 200 Gbps downlink occurred during ~5-minute LEO passes. Ground-segment weather availability is declared out of scope for this cost analysis because optical ground links may not be a hard requirement: RF downlinks (such as Starlink's existing Ka/Ku-band infrastructure) provide weather-independent connectivity at lower per-link bandwidth, and a SpaceX-operated orbital compute constellation could leverage existing Starlink ground infrastructure (~170+ stations) rather than building a dedicated optical ground network. The weather availability problem documented in the ground-segment constraints side page is real for dedicated optical-only architectures, but the assumption that optical is the only viable downlink technology overstates the constraint for an operator with access to RF ground infrastructure.

Orbital PUE relies on component-level data, not system-level measurements. The orbital PUE estimate (central: 1.05) is sourced from Vicor's space-grade DC-DC converter efficiency specifications (91-96% per stage) and NASA-affiliated spacecraft subsystem power allocation guides. No system-level PUE measurement exists for an orbital compute satellite, since none have been built at this scale. The low sensitivity impact (~0.05x OAT swing) means this does not materially affect the conclusion.

Scope. The model analyzes cost only, not addressable market. No demand-side analysis estimates what fraction of inference workloads falls into each networking tier. The "no terrestrial alternative" market (military, maritime, disaster response) is not analyzed, but is unlikely to drive an orbital compute business case: these users can reach terrestrial data centers via satellite Internet (e.g., Starlink) at far lower cost than fielding orbital compute infrastructure, and deploying a comms satellite fleet is orders of magnitude simpler than deploying an orbital data center fleet. Data sovereignty concerns are similarly unlikely to favor orbital — a nation that lacks the infrastructure for terrestrial data centers is not positioned to operate an orbital compute constellation.

Orbital vs Terrestrial AI Compute TCO

| Parameter | Unit / formula | 2026 | 2028 | 2030 | 2032 | 2035 | 2040 |
|-----------|----------------|------|------|------|------|------|------|
| Launch cost to LEO | $/kg | 2,500 | 1,200 | 500 | 360 | 150 | 75 |
| Solar array specific power | W/kg | 150 | 150 | 150 | 150 | 150 | 150 |
| Radiative cooling specific power | W_rejected/kg | 77 | 77 | 77 | 77 | 77 | 77 |
| Compute hardware mass | kg/kW_IT | 6 | 6 | 6 | 6 | 6 | 6 |
| Platform manufacturing cost | $/kW_IT | 18,000 | 16,250 | 14,675 | 13,250 | 11,275 | 8,725 |
| Orbital GPU cost premium | multiplier | 1.15 | 1.15 | 1.15 | 1.15 | 1.15 | 1.15 |
| Effective satellite lifetime | years | 3.8 | 3.8 | 3.8 | 3.8 | 3.8 | 3.8 |
| Orbital fixed opex | $/kW_IT/year | 200 | 200 | 200 | 200 | 200 | 200 |
| Structural overhead | multiplier | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 |
| Orbital PUE | ratio | 1.05 | 1.05 | 1.05 | 1.05 | 1.05 | 1.05 |
| Max eclipse per orbit | minutes | 21 | 21 | 21 | 21 | 21 | 21 |
| Annual eclipse hours (no battery) | hours/year | 412 | 412 | 412 | 412 | 412 | 412 |
| Terrestrial infrastructure cost | $/kW_IT | 12,500 | 12,500 | 12,500 | 12,500 | 12,500 | 12,500 |
| Terrestrial energy cost | $/kWh | 0.07 | 0.08 | 0.07 | 0.07 | 0.07 | 0.06 |
| Terrestrial PUE | ratio | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
| Terrestrial power-asset capex | $/kW_IT | 200 | 350 | 500 | 575 | 650 | 750 |
| Terrestrial non-energy opex | $/kW_IT/year | 750 | 750 | 750 | 750 | 750 | 750 |
| GPU cost per kW_IT | $/kW_IT | 32,500 | 32,500 | 32,500 | 32,500 | 32,500 | 32,500 |
| GPU useful life | years | 5 | 5 | 5 | 5 | 5 | 5 |
| Orbital WACC | fraction | 0.14 | 0.13 | 0.13 | 0.12 | 0.11 | 0.10 |
| Terrestrial WACC | fraction | 0.07 | 0.07 | 0.07 | 0.07 | 0.07 | 0.07 |
| Solar array mass | kg/kW_IT = (orbital_PUE × 1 kW) / solar_specific_power | 7 | 7 | 7 | 7 | 7 | 7 |
| Thermal system mass | kg/kW_IT = (orbital_PUE × 1 kW) / cooling_specific_power | 13.6 | 13.6 | 13.6 | 13.6 | 13.6 | 13.6 |
| Battery mass | kg/kW_IT = usable_min / 60 / (DOD × EOL × path_eff) / specific_energy | 0 | 1.2 | 4.9 | 5.2 | 5.6 | 5.6 |
| Total satellite mass | kg/kW_IT = (solar + thermal + compute + battery) × structural_overhead | 33.3 | 34.8 | 39.5 | 39.8 | 40.3 | 40.3 |
| Optimal battery duration | minutes = argmin(penalized_TCO_per_operating_hour) | 0 | 4.5 | 18.5 | 19.5 | 21 | 21 |
| Orbital availability | fraction = 1 - eclipse_downtime(battery_duration) / 8760 | 0.95 | 0.97 | 1 | 1 | 1 | 1 |
| Launch cost | $/kW_IT = total_satellite_mass × launch_cost_per_kg | 83,239 | 41,753 | 19,728 | 14,324 | 6,043 | 3,022 |
| Orbital GPU cost | $/kW_IT = gpu_cost_per_kw × gpu_cost_premium | 37,375 | 37,375 | 37,375 | 37,375 | 37,375 | 37,375 |
| Orbital total capex | $/kW_IT = launch_cost + orbital_gpu_cost + platform_mfg_cost | 138,614 | 95,378 | 71,778 | 64,949 | 54,693 | 49,122 |
| Orbital capital recovery factor | fraction = CRF(orbital_wacc, effective_lifetime) | 0.35 | 0.35 | 0.35 | 0.34 | 0.34 | 0.33 |
| Orbital capex (amortized) | $/kW_IT/year = orbital_capex × CRF(orbital_wacc, effective_lifetime) | 48,991 | 33,375 | 24,866 | 22,274 | 18,472 | 16,167 |
| Orbital TCO | $/kW_IT/year = (orbital_capex_amortized + orbital_fixed_opex) / availability | 51,619 | 34,765 | 25,137 | 22,506 | 18,672 | 16,367 |
| Terrestrial GPU cost (amortized) | $/kW_IT/year = gpu_cost_per_kw × CRF(terrestrial_wacc, gpu_useful_life) | 7,926 | 7,926 | 7,926 | 7,926 | 7,926 | 7,926 |
| Terrestrial infrastructure (amortized) | $/kW_IT/year = terrestrial_infra_cost × CRF(terrestrial_wacc, 15 years) | 1,372 | 1,372 | 1,372 | 1,372 | 1,372 | 1,372 |
| Terrestrial power-asset capex (amortized) | $/kW_IT/year = power_asset_capex × CRF(terrestrial_wacc, 20 years) | 18.9 | 33 | 47.2 | 54.3 | 61.4 | 70.8 |
| Terrestrial energy cost (variable) | $/kW_IT/year = variable_energy_cost × 8760 hours × PUE | 694 | 723 | 703 | 675 | 636 | 578 |
| Terrestrial non-energy opex | $/kW_IT/year = non_energy_opex (page-backed) | 750 | 750 | 750 | 750 | 750 | 750 |
| Terrestrial TCO | $/kW_IT/year = gpu_amortized + infra_amortized + power_capex_amortized + energy + non_energy_opex | 10,762 | 10,805 | 10,800 | 10,778 | 10,746 | 10,698 |
| Orbital / Terrestrial TCO ratio | ratio = orbital_tco / terrestrial_tco | 4.8 | 3.2 | 2.3 | 2.1 | 1.7 | 1.5 |
