Orbital AI Data Centers: Economic Competitiveness Timeline
At what point — if ever — will orbital AI data centers become economically competitive with terrestrial alternatives?
Growing demand for AI compute is straining terrestrial infrastructure: suitable sites, grid capacity, and permitting are all increasingly constrained. Meanwhile, launch costs are falling and solar power in orbit is abundant. This analysis models the total cost of ownership (TCO) for both orbital and terrestrial AI compute over 2026–2040 across optimistic, central, and conservative scenarios, examining when — and whether — the two cost curves converge.
Summary
For Tier 1-2 inference workloads (small-to-medium models, batch inference), the optimistic scenario approaches cost parity by 2040 (~1x); the central and conservative scenarios do not clearly reach it. Under aggressive but not implausible assumptions (Starship at $100/kg by 2030, lightweight satellites, 5.9-year effective lifetime, SpaceX-level financing declining from 10% to 8% WACC), the optimistic ratio reaches ~1.1x by 2035 and ~1x by 2040. The central scenario reaches ~1.5x by 2040, and the conservative ~3.3x. Training, RAG, and inference workloads exceeding a single satellite's NVLink domain (Tier 3: NVL144+ configurations) are outside the scope of this analysis due to inter-satellite bandwidth constraints. Optimistic ratios assume SpaceX-level vertical integration (internal launch pricing, shared Starlink ground infrastructure, heritage manufacturing); non-SpaceX operators would face substantially higher ratios across all scenarios.
Confidence note: The three most impactful orbital parameters — effective satellite lifetime, platform manufacturing cost, and orbital WACC — all have Low or Medium confidence, lacking production data or financing precedent. The headline conclusion is robust to the well-sourced parameters (GPU cost, terrestrial energy, terrestrial infrastructure) but sensitive to revisions in these poorly-sourced ones. See Critical Inputs for the full confidence breakdown. Scale note: All cost ratios assume multi-GW-scale deployment. Near-term deployment economics — where fixed costs amortize over a small base — are out of scope and would be materially worse for orbital (see Limitations).
The remaining gap is driven by three factors — two of which (effective lifetime and cost of capital) have Low-to-Medium confidence and no operational validation: (1) the effective lifetime penalty — orbital hardware delivers 2.2–5.9 capacity-weighted years of service (including deployment delay) versus 4–6 years for terrestrial GPU depreciation; (2) the cost of capital spread — orbital assets face 8–20% WACC (declining over time, central: 13.5%→10% by 2040) versus 5–10% for terrestrial, amplified by CRF over shorter lifetimes; (3) the GPU space adaptation premium (8–30%). Variable energy cost represents only ~6-7% of terrestrial TCO (about 694 $/kW_IT/year), making orbital's primary advantage — free solar energy — a minor factor, insufficient to offset these structural penalties.
Beyond cost, orbital compute is initially constrained to Tier 1 and Tier 2 inference workloads. Frontier MoE models requiring 64+ GPU NVLink domains for wide expert parallelism cannot be served across satellites at demonstrated inter-satellite link bandwidth (~800 Gbps vs ~14,400 Gbps per GPU for NVLink 5). Projected ISL capabilities via DWDM (10-40 Tbps per link in close-formation clusters at 100-200 m spacing) would approach current NVLink 5 per-link bandwidth — but these projections are undemonstrated in space, the terrestrial bar is also advancing (NVLink 6 at 28.8 Tbps per GPU, shipping H2 2026), and the all-to-all communication pattern of expert parallelism poses topological challenges beyond raw point-to-point bandwidth. See the inference networking analysis for the full bandwidth trajectory and feasibility assessment.
Model
The quantitative model computes amortized TCO per kW_IT per year for both orbital and terrestrial AI compute, with time-varying inputs over 2026-2040 across three scenarios (optimistic/central/conservative). All metrics are normalized to kW_IT (IT load power, GPUs only) in 2025 USD.
Orbital TCO = (launch_cost + GPU_cost + platform_manufacturing) × CRF(orbital_WACC, effective_lifetime) + fixed_opex
Terrestrial TCO = GPU_cost × CRF(terr_WACC, GPU_life) + infrastructure × CRF(terr_WACC, 15yr) + power_capex × CRF(terr_WACC, 20yr) + variable_energy × 8760hr × PUE + non_energy_opex
CRF (Capital Recovery Factor) converts one-time capex into equivalent annual cost accounting for the cost of capital. The model uses separate WACC values for orbital (central: 13.5% in 2026, declining to 10% by 2040) and terrestrial (central: 7%) to reflect their different risk profiles. The orbital central WACC of 13.5% (rather than a naive 15%) reflects a structural adjustment removing ~1.5pp of double-counting with risks already captured in the effective lifetime parameter.
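For concreteness, here is a minimal sketch of the standard CRF (annuity) formula as used throughout this model; the printed values match the 2026 central column in the model table below:

```python
def crf(wacc: float, years: float) -> float:
    """Capital recovery factor: the fraction of a one-time capex
    that must be recovered each year at discount rate `wacc`."""
    return wacc * (1 + wacc) ** years / ((1 + wacc) ** years - 1)

print(crf(0.135, 3.8))  # orbital 2026 central: ~0.353 (shown as 0.35)
print(crf(0.07, 5.0))   # terrestrial GPUs: ~0.244 (shown as 0.24)
```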
All input values are sourced from individual research pages. The interactive model table at the bottom of this page shows all computed values across scenarios.
Excluded: The model omits several cost categories, all of which fall on the orbital side; their omission biases the output in orbital's favor:
- NRE (non-recurring engineering) for satellite platform development, radiation qualification, and ground segment development. At multi-GW-scale deployment, NRE amortizes to ~$50–400/kW_IT, but at the more plausible initial scale of 1 GW, NRE could reach $1,000–5,000/kW_IT — material in early deployment years.
- Ground-segment capex — construction of dedicated optical ground terminals, fiber backhaul, and orchestration infrastructure (~$50–750M for a global network).
- Debris liability — at GW scale with 3–5% annual catastrophic failure rates, 300+ uncontrolled satellites per year could trigger mandatory active debris removal. At $500K–$2M per active debris removal (ADR) mission, this adds $150–600/kW_IT/year — potentially doubling the central fixed opex estimate. This cost is regulatory and uncertain but directionally significant.
- SDC quality degradation — elevated radiation in LEO increases silent data corruption rates in inference outputs. The operational-lifetime model captures the throughput overhead of SDC mitigation (ECC, scrubbing, checkpoint/restart) as a 1.3% fixed throughput reduction. The broader concern is quality degradation — undetected SDCs that produce subtly wrong inference results, reducing the effective value per kW_IT. This quality impact is harder to quantify and is not modeled, but could be significant for safety-critical or high-accuracy inference workloads.
Key Findings
1. Optimistic Scenario Approaches Parity by 2040
| Year | Optimistic | Central | Conservative |
|---|---|---|---|
| 2026 | 1.9x | 4.8x | 9.4x |
| 2030 | 1.2x | 2.3x | 5.6x |
| 2035 | 1.1x | 1.7x | 3.9x |
| 2040 | 1x | 1.5x | 3.3x |
Note: Year-to-year movement is driven by launch cost, platform manufacturing cost (learning curves), and orbital WACC (declining as operational history accumulates). Effective lifetime, fixed opex, structural overhead, and orbital PUE are held constant. The 2035–2040 ratios are somewhat conservative for orbital as a result, since effective lifetime should improve with design iteration (see [Limitations](#limitations)).
The ratio declines as launch costs fall and WACC compresses, converging toward a floor set by the effective lifetime penalty and the irreducible GPU cost shared with terrestrial. The three parameters with the largest impact on this ratio — effective satellite lifetime, platform manufacturing cost, and orbital WACC — all have Low-to-Medium confidence (see Critical Inputs). The optimistic scenario approaches parity (~1x by 2040) as WACC compression and manufacturing learning compound, but the central and conservative scenarios remain well above parity. See the cost parity analysis for the full timeline.
2. GPU Cost Dominates Both Sides
At 7,926 $/kW_IT/year (central), amortized GPU cost is 74% of terrestrial TCO. On the orbital side, amortized GPU cost ranges from ~26% of orbital TCO in 2026 (when launch cost dominates) to ~75% by 2040 (when launch and platform costs have fallen via learning curves). Since GPU cost per kW_IT is essentially identical whether deployed on Earth or in orbit (plus a modest 8-30% space adaptation premium), this dominant shared cost cannot create an advantage for either deployment context. The competition reduces to non-GPU costs — where terrestrial has a decisive advantage.
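To see the share arithmetic concretely, a quick check against the model table below (the `crf` helper repeats the formula defined above):

```python
def crf(wacc, years):
    return wacc * (1 + wacc) ** years / ((1 + wacc) ** years - 1)

gpu = 32_500 * 1.15                    # orbital GPU capex with 15% premium, $/kW_IT
# Orbital TCO denominators are taken from the model table below.
print(gpu * crf(0.135, 3.8) / 51_619)  # 2026: ~0.26 of orbital TCO
print(gpu * crf(0.10, 3.8) / 16_367)   # 2040: ~0.75 of orbital TCO
```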
3. Energy Savings Are Real but Insufficient Alone
Terrestrial energy cost is 694 $/kW_IT/year (central, 2026) — only ~7% of TCO. Eliminating this saves roughly that amount. In the optimistic scenario, this saving combined with low platform and launch costs narrows the gap to ~1.1x — but the residual premium persists because of the effective lifetime penalty and cost of capital spread. In the central scenario, the energy saving is overwhelmed by the CRF-amplified capex penalty: orbital amortizes capex at 13.5% WACC in 2026 (Low confidence — no financing precedent, adjusted down from 15% to remove double-counting with effective lifetime) over 3.8 years (Medium confidence — model-derived, no operational data), yielding CRF = 0.35, versus terrestrial's 7% WACC over 5 years for GPUs (CRF ≈ 0.24). This creates a large annual cost gap that dwarfs the energy saving.
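To make the magnitudes concrete, a back-of-envelope comparison using the 2026 central values (GPU capex only; launch and platform capex widen the orbital gap further):

```python
def crf(wacc, years):
    return wacc * (1 + wacc) ** years / ((1 + wacc) ** years - 1)

gpu = 32_500                                       # $/kW_IT, shared by both sides
orbital_gpu_annual = gpu * 1.15 * crf(0.135, 3.8)  # ~13,200 $/kW_IT/yr (15% premium)
terr_gpu_annual = gpu * crf(0.07, 5)               # ~7,900 $/kW_IT/yr
capex_gap = orbital_gpu_annual - terr_gpu_annual   # ~5,300 $/kW_IT/yr
energy_saving = 694                                # terrestrial variable energy, $/kW_IT/yr
print(capex_gap / energy_saving)                   # ~7.6: the CRF gap alone dwarfs the saving
```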
4. Launch Cost Becomes Irrelevant by 2040
Launch cost dominates orbital capex in 2026 (83,239 $/kW_IT, ~60% of capex) but becomes negligible by 2040 (3,022 $/kW_IT, ~6% of capex). Further launch cost reduction has diminishing returns because GPU cost and platform manufacturing have become the dominant capex components.
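The underlying shares, computed directly from the capex rows of the model table below:

```python
print(83_239 / 138_614)  # 2026: launch is ~0.60 of orbital capex
print(3_022 / 49_122)    # 2040: ~0.06
```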
5. Effective Lifetime Is the Key Remaining Cost Lever
The effective lifetime — capacity-weighted years of service accounting for degradation, failures, and deployment delay — remains the primary driver of the orbital cost premium (OAT swing: ~1.1x, see Finding 7). The central estimate of 3.8 years is a weakly informed model output with Medium confidence: physical lifetime is well-grounded in fleet data, but the space GPU attrition rate (especially destructive SEL) spans >6 orders of magnitude in the literature — the central value should be understood as likely somewhere in the range of 2–6 years, not a precisely calibrated figure. Because effective lifetime is the dominant sensitivity lever, this uncertainty propagates strongly into all downstream TCO results. With CRF-based amortization, the lifetime effect is amplified: at the 2026 central WACC of 13.5%, extending the effective lifetime from 3.8 to 5.9 years reduces the CRF from 0.35 to ~0.26, cutting annual amortized capex by ~27%; at the optimistic 10% WACC, the same extension takes the CRF from ~0.33 to ~0.23, a ~29% cut.
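The amplification is easy to verify directly; a short sweep over the lifetime range at the two WACC values discussed above (standalone, repeating the CRF formula defined earlier):

```python
def crf(wacc, years):
    return wacc * (1 + wacc) ** years / ((1 + wacc) ** years - 1)

for wacc in (0.135, 0.10):
    for life in (2.2, 3.8, 5.9):  # conservative / central / optimistic lifetimes
        print(f"WACC {wacc:.1%}, {life} yr: CRF = {crf(wacc, life):.2f}")
# 13.5% WACC: 2.2 yr -> 0.56, 3.8 yr -> 0.35, 5.9 yr -> 0.26
# 10.0% WACC: 2.2 yr -> 0.53, 3.8 yr -> 0.33, 5.9 yr -> 0.23
```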
6. Inference Networking Constrains Workload Scope
Frontier MoE models (e.g., DeepSeek R1 at 671B parameters; over 60% of frontier models use MoE) require 64+ GPUs in a single NVLink domain (1.8 TB/s per GPU, 130 TB/s aggregate). The bandwidth picture has two layers — demonstrated and projected — and both sides of the orbital-terrestrial comparison are advancing:
Demonstrated (today): Google's Suncatcher bench-demonstrated 800 Gbps (0.1 TB/s) per optical inter-satellite link pair — an ~18× gap versus current NVLink 5 (14.4 Tbps per GPU). At this level, ISLs are comparable to InfiniBand 400G: sufficient for pipeline parallelism but insufficient for tensor or expert parallelism.
Projected (both sides advancing): Google states the required ISL bandwidth is "on the order of 10 Tbps," achievable via COTS DWDM (9.6-12.8 Tbps per aperture) in a close-formation cluster with satellites at 100–200 m spacing; our analysis estimates an upper bound of 10-40 Tbps with spatial multiplexing. At the projected level, per-link ISL bandwidth would approach current NVLink 5 (14.4 Tbps) — but the terrestrial bar is simultaneously advancing: NVIDIA's Rubin NVL72 (H2 2026) doubles per-GPU NVLink to 3.6 TB/s (28.8 Tbps), and the roadmap extends to NVL576 and NVL1152 with co-packaged optics. Neither the projected ISL bandwidth nor these future terrestrial networks are flight-demonstrated or production-deployed, so a symmetric comparison should note that both represent engineering projections, not current capability. The projected ISL capabilities are more likely to narrow the gap for point-to-point links between satellite pairs, but the aggregate all-to-all bandwidth of an NVL72 fabric (130-260 TB/s across all 72 endpoints simultaneously) has no ISL analogue.
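The per-link arithmetic behind these claims, for reference (point-to-point only; as noted above, the aggregate all-to-all fabric bandwidth of an NVL72 has no ISL analogue):

```python
# All figures in Tbps, from the sources cited above.
nvlink5 = 14.4              # per GPU, current NVLink 5
nvlink6 = 28.8              # per GPU, Rubin NVL72 (H2 2026)
isl_demo = 0.8              # demonstrated optical ISL (Suncatcher bench test)
isl_low, isl_high = 10, 40  # projected DWDM ISL, close-formation cluster

print(nvlink5 / isl_demo)   # 18.0: today's gap vs NVLink 5
print(isl_low / nvlink6)    # ~0.35: projected low end still trails NVLink 6
print(isl_high / nvlink6)   # ~1.39: even the high end barely exceeds it
```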
See the inference networking analysis for the full bandwidth comparison table and feasibility assessment. Under current demonstrated technology, this limits orbital to:
- Tier 1 (high feasibility): Models up to ~70B on 1-8 GPUs within a single satellite
- Tier 2 (high feasibility on monolithic satellites): Large dense models and frontier MoE with wide EP (e.g., DeepSeek R1 at EP=64) on 8-72 GPUs — fit within a monolithic 72-GPU satellite's internal NVLink domain. On distributed small-satellite architectures (8-16 GPUs each), Tier 2 requires cross-satellite parallelism with ISL bandwidth constraints.
- Tier 3 (low feasibility): NVL144+ workloads exceeding a single satellite's NVLink domain — require cross-satellite parallelism at bandwidth levels not yet demonstrated in space
Two factors could relax these constraints: (1) DWDM inter-satellite links reaching multi-Tbps bandwidth would enable wider parallelism across satellites, and (2) a satellite housing enough GPUs with an internal NVLink switch fabric could support a tightly-coupled domain within a single spacecraft. One estimate (Handmer) suggests ~200 H100-equivalent GPUs per satellite, which exceeds the 72-GPU NVL72 count — though GPU count alone does not guarantee an NVL72-class fully-connected NVLink domain, which also requires switch ASICs, high-bandwidth cabling, and additional power and mass.
Model compression closes the capability gap over time (frontier capabilities reach consumer GPUs in 6-12 months), but orbital always serves models 1-2 generations behind the terrestrial frontier.
Moreover, the terrestrial networking bar is accelerating. NVIDIA's GTC 2026 roadmap extends beyond NVL144 to NVL576 and NVL1152 (multi-rack systems using co-packaged optics), with inference-specific hardware disaggregation (GPU + Groq LPU + storage accelerators coordinated across rack types within a pod achieving 10 PB/s internal bandwidth). Competitive inference is evolving from a GPU problem to a systems-level integration problem that has no orbital analogue — reinforcing the Tier 3 gap. See the inference networking analysis for the full assessment.
7. Sensitivity: Lifetime Dominates, Financing Matters
A one-at-a-time (OAT) analysis — varying each input from optimistic to conservative while holding all others at central — reveals the individual impact of each parameter on the TCO ratio at 2035 (baseline ~1.7x). The top 7 parameters by approximate swing:
| Rank | Parameter | Swing (Δ ratio) | Confidence |
|---|---|---|---|
| 1 | Effective satellite lifetime | ~1.1 | Medium |
| 2 | Platform manufacturing cost | ~0.9 | Low |
| 3 | Launch cost | ~0.5 | Medium |
| 4 | GPU useful life* | ~0.5 | High |
| 5 | Orbital WACC | ~0.4 | Low |
| 6 | GPU cost premium | ~0.3 | Low |
| 7 | Terrestrial WACC | ~0.3 | High |
*GPU useful life direction is reversed: longer terrestrial GPU life widens the gap (lower terrestrial amortization), so the optimistic end of the range corresponds to shorter GPU life.
Effective satellite lifetime remains the most impactful single parameter. The WACC parameters (ranks 5 and 7) are individually significant — orbital WACC alone has a ~0.4x swing. Platform manufacturing cost is the second-largest driver; with manufacturing learning curves now modeled (5%/year central decline), its 2035 range is $3.5K-$29.2K. See the sensitivity analysis for the full table.
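A schematic version of the OAT sweep is sketched below, using a simplified, hypothetical ratio function with illustrative 2035 central values; it omits battery sizing, availability derating, power-asset dynamics, and energy-price paths, so its swings will not exactly match the table above:

```python
def crf(wacc, years):
    return wacc * (1 + wacc) ** years / ((1 + wacc) ** years - 1)

def tco_ratio(p):
    orb_capex = p["mass"] * p["launch"] + 32_500 * p["gpu_prem"] + p["platform"]
    orbital = orb_capex * crf(p["orb_wacc"], p["eff_life"]) + 200  # + fixed opex
    terrestrial = (32_500 * crf(p["terr_wacc"], p["gpu_life"])     # GPUs
                   + 12_500 * crf(p["terr_wacc"], 15)              # infrastructure
                   + 61 + 636 + 750)  # power capex, energy, non-energy opex (2035)
    return orbital / terrestrial

central = dict(mass=40.3, launch=150, gpu_prem=1.15, platform=11_275,
               orb_wacc=0.1125, eff_life=3.8, terr_wacc=0.07, gpu_life=5)
ranges = {"eff_life": (5.9, 2.2), "platform": (3_500, 29_200),
          "launch": (35, 500), "orb_wacc": (0.08, 0.20)}

print(f"baseline: {tco_ratio(central):.2f}")  # ~1.74, close to the table's 1.7
for name, (opt, cons) in ranges.items():      # optimistic vs conservative, others held central
    lo, hi = (tco_ratio({**central, name: v}) for v in (opt, cons))
    print(f"{name}: {lo:.2f} -> {hi:.2f} (swing ~{abs(hi - lo):.2f})")
```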
8. Technology Readiness
The economic analysis assumes orbital compute systems exist at scale. As of early 2026, multiple players have moved beyond concepts to hardware: Kepler Communications launched the first operational distributed on-orbit computing service (10 satellites, March 2026), Starcloud-1 completed the first AI training in orbit (single H100, November 2025), K2 Space is preparing to launch its 20 kW Gravitas platform (scheduled late March 2026), and China's Xingshidai deployed 12 AI satellites (May 2025). SpaceX has disclosed plans for 100 kW AI Sat Mini satellites (~100 per Starship launch), and Google's Suncatcher targets a two-satellite prototype in early 2027.
However, the existence of orbital compute demonstrations does not validate the assumptions in our economic model. All operational systems are low-power edge-compute demonstrations (0.7–28 kW) — not the 100+ kW satellites our cost model assumes. The gap between a 60 kg single-GPU satellite and a 1–5 ton, 100 kW compute satellite is not merely a matter of scaling up: the model's key input assumptions — thermal management at >1 kW/kg rejection rates, power systems at 100+ kW with multi-year reliability, mass-production of satellite buses at $8K-$35K/kW_IT, and bus failure rates of 0.5–2.5%/year — have not been validated at the power levels the economics require. No satellite has demonstrated radiative cooling at 100 kW scale, operated a high-power EPS for multiple years, or been produced in the quantities needed to achieve the manufacturing cost learning curves the model assumes. Readers should treat the model's inputs as engineering projections informed by component-level data and analogy to lower-power systems, not as validated system-level parameters. Multi-GW orbital compute deployment remains unlikely before 2035, making the 2030 optimistic ratio (~1.2x) economically computed but practically irrelevant. See the cost parity timeline for the full rollout analysis.
Critical Inputs
The leaf values that most influence the conclusion, with their ranges and confidence. Impact ranking is from the OAT sensitivity analysis (see Finding 7).
| Input | Central | Range | Confidence | Impact (OAT swing) |
|---|---|---|---|---|
| Effective satellite lifetime | 3.8 years | 2.2-5.9 | Medium (physical lifetime well-grounded in fleet data — Iridium 20+ yr precedent, OneWeb 0.3% failure/4-7 yr, Castet-Saleh Weibull infant mortality; bus loss rate empirically anchored but requires judgment premium for high-power systems; GPU attrition terrestrial baseline peer-reviewed but space SEL rate genuinely uncharacterized — NASA SEL database shows >6 OOM rate variation with no predictive trends; no orbital compute operational data; deployment delay 3-6 months consuming GPU economic life) | ~1.1 — dominant driver |
| Platform mfg cost (2035) | $11,275/kW_IT | $3.5K-$29.2K | Low (no production data; blog models and startup pricing; learning curves applied) | ~0.9 |
| Launch cost (2035) | $150/kg | $35-$500 | Medium (Starship unproven) | ~0.5 — diminishing |
| GPU useful life | 5 years | 4-6 | High (observed depreciation) | ~0.5 (helps terrestrial) |
| Orbital WACC | 13.5%→10% (2026→2040) | 8-20% (declining) | Low (no precedent; adjusted for double-counting) | ~0.4 |
| GPU cost premium | 1.15x | 1.08-1.30x | Low | ~0.3 |
| Terrestrial WACC | 7% | 5-10% | High (observed: Equinix 5.9%, DLR 6.5%) | ~0.3 |
| Orbital PUE | 1.05 | 1.035-1.10 | Medium (component specs from Vicor + NASA subsystem allocation; no system-level data) | Low (~0.05) |
| GPU cost per kW_IT | $32,500/kW_IT | $25K-$40K | High (observed pricing) | ~0.05 (affects both sides equally) |
| Terrestrial energy cost | $0.066/kWh | $0.041-$0.098 | High (observed) | ~0.1 |
| Inference domain size | 16 GPUs/domain | 8-72 | Medium | N/A (workload scope, not cost) |
The three highest-impact, weakest-sourced parameters — effective satellite lifetime (~1.1 swing, Medium confidence), platform manufacturing cost (~0.9 swing, Low), and orbital WACC (~0.4 swing, Low) — all lack primary sources or production data. Platform manufacturing cost is particularly uncertain: the 4.4x range from optimistic to conservative may still understate the true uncertainty, as even the conservative estimate depends on manufacturing-learning assumptions for hardware types that have never been produced. Orbital PUE was re-sourced after source accuracy reviews removed all original evidence items; current estimates are based on Vicor space-grade converter specifications, VPT GaN converter data, NASA's SOA PMAD survey, and NASA-affiliated spacecraft subsystem allocation guides — component-level data, not system-level measurements (see Limitations). The financing parameters (WACC) are jointly significant: if both move favorably for orbital simultaneously, their combined effect approaches effective lifetime in magnitude. Terrestrial parameters (GPU cost, energy cost, infrastructure, terrestrial WACC) are better anchored by observed market data. See the source quality assessment for the full classification.
Side Pages
- Terrestrial Energy Supply Constraints — Examines the proponent argument that terrestrial capacity will hit physical limits before AI demand is met. Finding: constraints are genuine but diverse supply responses create a cost ceiling below the level needed for orbital competitiveness.
- Satellite GPU Capacity Scaling — How many GPUs per satellite? Industry converges on 100 kW (~72 GPUs) as the near-term design point, with mass estimates spanning 1–5.4 tons. Thermal management becomes the binding constraint above 100 kW. Monolithic vs distributed architectures offer different tradeoffs for workload coupling vs reliability.
- Ground Segment Constraints — Operational constraints on delivering orbital compute as a service via optical links. Optical ground links achieve only 60-69% session success with two stations (NASA LCRD data); 99.9% availability requires ~10 globally distributed stations with adaptive optics. LEO passes last 5-7 minutes, constraining service to batch/high-compute-to-data workloads unless constellation-scale coverage is achieved. Ground-segment weather availability is out of scope for the main TCO model — optical is not the only option, as RF downlinks (e.g., Starlink Ka/Ku-band infrastructure) provide weather-independent connectivity. Ground segment capex ($50-750M) is excluded from the main model.
- In-Orbit Servicing Feasibility — Speculative analysis of whether robotic GPU module replacement could extend satellite lifetime. Current servicing is limited to GEO station-keeping; the mature constellation model projects $50-200K/visit but depends on collectively undemonstrated assumptions (reusable servicers, standardized interfaces, Starship cargo costs). If all assumptions are met, could narrow the central TCO ratio by 15-30% in the 2035+ timeframe.
Limitations
Workload scope. The TCO comparison applies to Tier 1 and Tier 2 inference workloads only — small-to-medium model inference, batch inference, and high compute-to-data-ratio workloads that fit within a single satellite or small cluster. Training, RAG, large-context retrieval, and frontier MoE inference requiring wide expert parallelism are infeasible with current inter-satellite link technology. The cost ratios are meaningful only for this subset of inference.
Latency. LEO adds ~4-8 ms round-trip latency (ground-to-satellite-to-ground), excluding orbital compute from latency-sensitive interactive inference workloads (real-time chatbots, code completion, voice assistants). This further narrows the addressable workload beyond the networking tier restrictions — even within Tier 1 and Tier 2, only batch and latency-tolerant inference is viable.
Deployment scale scope. The model assumes multi-GW-scale deployment, which amortizes NRE, ground-segment capex, and other fixed costs over a large installed base. Near-term deployment economics — where the amortization base is small — are out of scope. This is a material omission: the report acknowledges that NRE alone reaches $1,000–5,000/kW_IT at an initial 1 GW deployment, and that the aggregate excluded-cost adjustment would be "substantially larger" at that scale (see Aggregate direction of excluded costs below). Since multi-GW orbital deployment is unlikely before 2035, and current operational systems are 0.7–28 kW, the year-by-year cost ratios presented for the early period (2026–2030) reflect a mature-scale cost structure that would not apply to realistic first deployments. Readers should interpret early-year ratios as answering "what would orbital cost if deployed at scale in that year," not "what would the first orbital deployment cost in that year" — the latter would be materially worse for orbital.
Source quality on key inputs. The four lowest-confidence orbital parameters — effective satellite lifetime, platform manufacturing cost, orbital WACC, and orbital PUE — all lack primary sources or production data. Platform cost estimates depend on Mach33's analysis of Starlink-heritage pricing (an industry blog modeling exercise, not demonstrated costs) and Starpath panel pricing (a startup without production deliveries). The optimistic scenario's solar cost of $5/W assumes a ~20x compression from the NASA SBSP study's $100/W; this may eventually happen but is a manufacturing-learning assumption, not a demonstrated baseline. Orbital PUE is now sourced from Vicor space-grade converter specifications and NASA subsystem allocation guides, but no system-level orbital compute PUE has been measured. Some evidence anchors are weak for the weight placed on them: HN discussion summaries and NextBigFuture for Falcon 9/Starship economics serve as exploratory pointers but are insufficient to anchor central estimates for outcome-determining parameters. Orbital mechanics parameters (eclipse duration, beta angles) are derived from standard physics formulas validated against five real dawn-dusk SSO missions; the computational model used for cross-checking was generated by an LLM and should be treated as a computation check, not an independent empirical source.
Operator archetype asymmetry — the model reflects SpaceX's internal cost. The orbital optimistic and partly central cases model a SpaceX/xAI-style vertically integrated operator: internal launch pricing (~2-4x below customer price, based on Falcon 9 precedent), shared Starlink ground infrastructure (~170+ stations), SpaceX-heritage satellite manufacturing, and SpaceX balance sheet financing (10% WACC). Any other organization attempting orbital compute would face substantially higher costs across all four dimensions. No other entity currently combines volume launch capability, mega-constellation operations experience, and AI compute demand under one corporate umbrella — developing comparable vertical integration is a high bar. The terrestrial side is benchmarked to broader hyperscaler/market economics. A like-for-like comparison of best-in-class orbital vs best-in-class terrestrial (where hyperscalers achieve below-market energy and infrastructure costs) would likely show a wider gap than the optimistic scenario suggests. Conversely, a non-SpaceX orbital operator paying customer launch prices, building dedicated ground infrastructure, and financing at venture rates would face TCO ratios substantially above our conservative scenario.
WACC double-counting has been addressed. The central orbital WACC was adjusted from a naive 15% to 13.5% to remove ~1.5pp of overlap with risks already captured in the effective lifetime parameter (shorter asset life, catastrophic loss). The remaining WACC spread (technology novelty, revenue uncertainty, financing precedent) is not double-counted. See the WACC analysis for the full decomposition.
Effective lifetime compresses distinct mechanisms with very different evidence quality. Physical durability, failure attrition, economic obsolescence, deployment delay, and design-life constraints are folded into one scalar. These have different mitigation paths and improvement timelines, but the model cannot distinguish between them. Critically, the sub-inputs have very different confidence levels: physical lifetime and SDC overhead are well-grounded in fleet data; bus loss rate is empirically anchored but requires judgment premium for high-power systems; deployment delay (3-6 months) is based on engineering estimates; and GPU attrition in space — especially the destructive SEL rate for H100/B200 — is the weakest link, with NASA's statistical SEL database showing rates spanning >6 orders of magnitude. The central effective lifetime of 3.8 years is a weakly informed indicative estimate — likely somewhere in the range of 2–6 years — not a precisely calibrated figure. Because this is the analysis's dominant sensitivity lever, the resulting uncertainty propagates strongly into all TCO and parity results. See the operational lifetime page for the full evidence quality decomposition.
Time-invariance. Effective lifetime, fixed opex, structural overhead, and orbital PUE are held constant across 2026–2040. In reality, all should improve with operational experience. Launch cost, platform manufacturing cost, and orbital WACC are now time-varying — WACC declines as operational history accumulates (central: 13.5%→10% by 2040, following the offshore wind precedent). The remaining flat parameters mean the later-year ratios are somewhat conservative for orbital, since effective lifetime in particular should improve with design iteration. GPU cost per kW_IT is also time-invariant; if competition drives $/kW_IT down, both TCOs fall but the orbital premium widens.
Correlated parameters. The OAT sensitivity analysis varies one parameter at a time, but key variables are correlated: lower launch cost arrives with higher manufacturing scale; proven operations should compress WACC; longer effective life and lower failure rates should co-occur. The bundled scenarios partially capture this, but important cross-correlations remain unexplored.
Aggregate direction of excluded costs. All four acknowledged exclusions — NRE, ground-segment capex, debris liability, and SDC quality degradation — add cost to the orbital side, meaning the modeled ratios understate the true orbital premium. Bounding the aggregate at GW-scale deployment (2035): NRE ($50–400/kW_IT capex, annualized ~$18–146/kW_IT/year), ground-segment capex ($50–750M total, annualized ~$7–99/kW_IT/year), debris liability ($150–600/kW_IT/year), and SDC quality degradation (unquantified — the 1.3% throughput overhead is modeled in effective lifetime, but the broader quality impact on inference accuracy is not). Excluding the unquantified SDC quality impact, the aggregate of NRE, ground segment, and debris liability adds approximately +0.1x to +0.3x to the modeled TCO ratio — moving the central 2035 ratio from ~1.7x to ~1.8–2.0x and the optimistic 2035 ratio from ~1.1x to ~1.2–1.4x. Debris liability dominates at GW scale; NRE and ground segment are secondary. SDC quality degradation could add further cost if quantified. At the more plausible initial deployment scale of ~1 GW, NRE alone could reach $1,000–5,000/kW_IT (annualized ~$360–1,820/kW_IT/year), and the aggregate adjustment would be substantially larger — most significant in the first years of deployment when amortization bases are smallest.
Terrestrial energy supply constraints. The core proponent argument for orbital compute is not that it will be cheaper, but that terrestrial capacity will hit physical limits (grid, land, permitting) before AI demand is met. A side-page analysis examines this argument in detail. The finding: supply constraints are genuine (8-year grid interconnection queues, gas turbines sold out through 2030, PJM capacity prices up 10x), but the diversity of terrestrial supply responses (BTM gas, solar+battery, aeroderivative turbines, nuclear PPAs, grid reform) creates a cost ceiling. Even in the conservative scenario, blended terrestrial electricity costs peak at ~$0.11/kWh. More fundamentally, the model-derived break-even analysis shows that from 2030 onward, no terrestrial energy price would make orbital competitive — the orbital cost premium is structural (effective lifetime, cost of capital, GPU adaptation), not energy-driven. Energy cost is only ~6-7% of terrestrial TCO.
Ground-segment weather availability is out of scope. The model omits ground-segment capex (~$50-750M) and does not incorporate the operational constraints of LEO optical downlinks — cloud cover interrupts optical links and motivates multiple geographically diverse ground stations; NASA's LCRD uses adaptive optics, weather monitoring, and two optical ground stations for availability; TBIRD's record 200 Gbps downlink occurred during ~5-minute LEO passes. Ground-segment weather availability is declared out of scope for this cost analysis because optical ground links may not be a hard requirement: RF downlinks (such as Starlink's existing Ka/Ku-band infrastructure) provide weather-independent connectivity at lower per-link bandwidth, and a SpaceX-operated orbital compute constellation could leverage existing Starlink ground infrastructure (~170+ stations) rather than building a dedicated optical ground network. The weather availability problem documented in the ground-segment constraints side page is real for dedicated optical-only architectures, but the assumption that optical is the only viable downlink technology overstates the constraint for an operator with access to RF ground infrastructure.
Orbital PUE relies on component-level data, not system-level measurements. The orbital PUE estimate (central: 1.05) is sourced from Vicor's space-grade DC-DC converter efficiency specifications (91-96% per stage) and NASA-affiliated spacecraft subsystem power allocation guides. No system-level PUE measurement exists for an orbital compute satellite, since none have been built at this scale. The low sensitivity impact (~0.05x OAT swing) means this does not materially affect the conclusion.
Scope. The model analyzes cost only, not addressable market. No demand-side analysis estimates what fraction of inference workloads falls into each networking tier. The "no terrestrial alternative" market (military, maritime, disaster response) is not analyzed, but is unlikely to drive an orbital compute business case: these users can reach terrestrial data centers via satellite Internet (e.g., Starlink) at far lower cost than fielding orbital compute infrastructure, and deploying a comms satellite fleet is orders of magnitude simpler than deploying an orbital data center fleet. Data sovereignty concerns are similarly unlikely to favor orbital — a nation that lacks the infrastructure for terrestrial data centers is not positioned to operate an orbital compute constellation.
Orbital vs Terrestrial AI Compute TCO
| Parameter | 2026 | 2028 | 2030 | 2032 | 2035 | 2040 |
|---|---|---|---|---|---|---|
| Launch cost to LEO ($/kg) | 2,500 | 1,200 | 500 | 360 | 150 | 75 |
| Solar array specific power (W/kg) | 150 | 150 | 150 | 150 | 150 | 150 |
| Radiative cooling specific power (W_rejected/kg) | 77 | 77 | 77 | 77 | 77 | 77 |
| Compute hardware mass (kg/kW_IT) | 6 | 6 | 6 | 6 | 6 | 6 |
| Platform manufacturing cost ($/kW_IT) | 18,000 | 16,250 | 14,675 | 13,250 | 11,275 | 8,725 |
| Orbital GPU cost premium (multiplier) | 1.15 | 1.15 | 1.15 | 1.15 | 1.15 | 1.15 |
| Effective satellite lifetime (years) | 3.8 | 3.8 | 3.8 | 3.8 | 3.8 | 3.8 |
| Orbital fixed opex ($/kW_IT/year) | 200 | 200 | 200 | 200 | 200 | 200 |
| Structural overhead (multiplier) | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 |
| Orbital PUE (ratio) | 1.05 | 1.05 | 1.05 | 1.05 | 1.05 | 1.05 |
| Max eclipse per orbit (minutes) | 21 | 21 | 21 | 21 | 21 | 21 |
| Annual eclipse hours, no battery (hours/year) | 412 | 412 | 412 | 412 | 412 | 412 |
| Terrestrial infrastructure cost ($/kW_IT) | 12,500 | 12,500 | 12,500 | 12,500 | 12,500 | 12,500 |
| Terrestrial energy cost ($/kWh) | 0.07 | 0.08 | 0.07 | 0.07 | 0.07 | 0.06 |
| Terrestrial PUE (ratio) | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
| Terrestrial power-asset capex ($/kW_IT) | 200 | 350 | 500 | 575 | 650 | 750 |
| Terrestrial non-energy opex ($/kW_IT/year) | 750 | 750 | 750 | 750 | 750 | 750 |
| GPU cost per kW_IT ($/kW_IT) | 32,500 | 32,500 | 32,500 | 32,500 | 32,500 | 32,500 |
| GPU useful life (years) | 5 | 5 | 5 | 5 | 5 | 5 |
| Orbital WACC (fraction) | 0.135 | 0.13 | 0.13 | 0.12 | 0.11 | 0.10 |
| Terrestrial WACC (fraction) | 0.07 | 0.07 | 0.07 | 0.07 | 0.07 | 0.07 |
| Solar array mass (kg/kW_IT) = (orbital_PUE × 1 kW) / solar_specific_power | 7 | 7 | 7 | 7 | 7 | 7 |
| Thermal system mass (kg/kW_IT) = (orbital_PUE × 1 kW) / cooling_specific_power | 13.6 | 13.6 | 13.6 | 13.6 | 13.6 | 13.6 |
| Battery mass (kg/kW_IT) = usable_min / 60 / (DOD × EOL × path_eff) / specific_energy | 0 | 1.2 | 4.9 | 5.2 | 5.6 | 5.6 |
| Total satellite mass (kg/kW_IT) = (solar + thermal + compute + battery) × structural_overhead | 33.3 | 34.8 | 39.5 | 39.8 | 40.3 | 40.3 |
| Optimal battery duration (minutes) = argmin(penalized_TCO_per_operating_hour) | 0 | 4.5 | 18.5 | 19.5 | 21 | 21 |
| Orbital availability (fraction) = 1 - eclipse_downtime(battery_duration) / 8760 | 0.95 | 0.97 | 1 | 1 | 1 | 1 |
| Launch cost ($/kW_IT) = total_satellite_mass × launch_cost_per_kg | 83,239 | 41,753 | 19,728 | 14,324 | 6,043 | 3,022 |
| Orbital GPU cost ($/kW_IT) = gpu_cost_per_kw × gpu_cost_premium | 37,375 | 37,375 | 37,375 | 37,375 | 37,375 | 37,375 |
| Orbital total capex ($/kW_IT) = launch_cost + orbital_gpu_cost + platform_mfg_cost | 138,614 | 95,378 | 71,778 | 64,949 | 54,693 | 49,122 |
| Orbital capital recovery factor (fraction) = CRF(orbital_wacc, effective_lifetime) | 0.35 | 0.35 | 0.35 | 0.34 | 0.34 | 0.33 |
| Orbital capex, amortized ($/kW_IT/year) = orbital_capex × CRF(orbital_wacc, effective_lifetime) | 48,991 | 33,375 | 24,866 | 22,274 | 18,472 | 16,167 |
| Orbital TCO ($/kW_IT/year) = (orbital_capex_amortized + orbital_fixed_opex) / availability | 51,619 | 34,765 | 25,137 | 22,506 | 18,672 | 16,367 |
| Terrestrial GPU cost, amortized ($/kW_IT/year) = gpu_cost_per_kw × CRF(terrestrial_wacc, gpu_useful_life) | 7,926 | 7,926 | 7,926 | 7,926 | 7,926 | 7,926 |
| Terrestrial infrastructure, amortized ($/kW_IT/year) = terrestrial_infra_cost × CRF(terrestrial_wacc, 15 yr) | 1,372 | 1,372 | 1,372 | 1,372 | 1,372 | 1,372 |
| Terrestrial power-asset capex, amortized ($/kW_IT/year) = power_asset_capex × CRF(terrestrial_wacc, 20 yr) | 18.9 | 33 | 47.2 | 54.3 | 61.4 | 70.8 |
| Terrestrial energy cost, variable ($/kW_IT/year) = variable_energy_cost × 8760 hr × PUE | 694 | 723 | 703 | 675 | 636 | 578 |
| Terrestrial non-energy opex ($/kW_IT/year) = non_energy_opex (page-backed) | 750 | 750 | 750 | 750 | 750 | 750 |
| Terrestrial TCO ($/kW_IT/year) = gpu_amortized + infra_amortized + power_capex_amortized + energy + non_energy_opex | 10,762 | 10,805 | 10,800 | 10,778 | 10,746 | 10,698 |
| Orbital / Terrestrial TCO ratio = orbital_tco / terrestrial_tco | 4.8 | 3.2 | 2.3 | 2.1 | 1.7 | 1.5 |
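As a sanity check, a minimal reproduction of the 2026 central column, assuming the unrounded WACC of 13.5% and the ~0.953 availability implied by the amortized rows (values in $/kW_IT):

```python
def crf(wacc, years):
    return wacc * (1 + wacc) ** years / ((1 + wacc) ** years - 1)

# --- orbital, 2026 ---
capex = 83_239 + 32_500 * 1.15 + 18_000            # launch + GPUs (15% premium) + platform
orbital = (capex * crf(0.135, 3.8) + 200) / 0.953  # amortize, add fixed opex, derate by availability

# --- terrestrial, 2026 ---
terrestrial = (32_500 * crf(0.07, 5)     # GPUs, ~7,926
               + 12_500 * crf(0.07, 15)  # infrastructure, ~1,372
               + 200 * crf(0.07, 20)     # power-asset capex, ~19
               + 694                     # variable energy x PUE (table row)
               + 750)                    # non-energy opex

print(f"{orbital:,.0f} vs {terrestrial:,.0f}: ratio {orbital / terrestrial:.1f}x")  # ~4.8x
```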
Pages
Input Questions
- What is the mass per kW_IT (kg/kW_IT) for AI compute hardware adapted for orbital deployment?
- What is the maximum eclipse duration per orbit (minutes) and annual eclipse exposure for an orbital compute satellite in a dawn-dusk sun-synchronous orbit at ~575 km, and what are the implications for battery sizing or planned downtime?
- What is the cost per kW_IT for current-generation AI accelerator hardware?
- What is the expected useful life of AI accelerator hardware?
- What are the scale-up and scale-out networking requirements for AI inference, how is domain size evolving, and what does this imply for orbital feasibility?
- What is the cost per kilogram to deliver payload to LEO, and how will it evolve from 2026 through 2040?
- What are the fixed annual operating costs per kW_IT ($/kW_IT/year) for orbital compute, excluding failure-driven replacement (which is captured in effective lifetime)?
- What cost premium does space adaptation add to AI compute hardware?
- What is the effective capacity-weighted lifetime (years) of an orbital compute satellite, accounting for GPU degradation, catastrophic failures, and regulatory constraints?
- What is the manufacturing cost per kW_IT ($/kW_IT) for the non-compute satellite components?
- What is the PUE for compute-focused satellites in LEO?
- What is the weighted average cost of capital (WACC) for orbital AI compute infrastructure?
- What is the achievable thermal rejection rate (W_rejected/kg) for radiative cooling systems in LEO, given AI compute operating temperatures?
- What is the quantitative failure rate multiplier for AI compute hardware in LEO relative to terrestrial data centers, considering thermal cycling fatigue, radiation-induced failures, and launch vibration damage?
- What is the achievable specific power (W/kg) for space-grade solar arrays suitable for orbital compute satellites?
- What is the structural overhead mass multiplier for orbital compute satellites?
- What is the variable electricity cost (fuel, O&M, grid procurement) for AI data centers, excluding BTM generation capex?
- What is the annual permanent failure rate of datacenter GPUs (H100/A100-class) in terrestrial operation?
- What is the all-in infrastructure cost per kW_IT for a terrestrial AI data center?
- What is the annual non-energy opex per kW_IT for hyperscale AI data centers?
- What is the power-generation capital cost per kW_IT for behind-the-meter generation at terrestrial AI data centers?
- What is the PUE for modern liquid-cooled AI data centers?
- What is the weighted average cost of capital (WACC) for terrestrial AI data center infrastructure?
Analyses
- When (if ever) does orbital AI compute TCO reach parity with terrestrial, and what are the key drivers and constraints?
- What is the capital cost per kW_IT ($/kW_IT) for deploying AI compute in orbit, including launch, platform manufacturing, and GPU hardware?
- What is the amortized total cost of ownership per kW_IT per year ($/kW_IT/year) for orbital AI compute?
- What is the total mass per kW_IT (kg/kW_IT) for an orbital compute satellite, combining solar arrays, thermal rejection, and compute hardware?
- What is the amortized total cost of ownership per kW_IT per year ($/kW_IT/year) for terrestrial AI compute?
Side Chapters
- Chip Manufacturing Constraints as a Structural Barrier to Orbital Compute
- Ground Segment Constraints for Orbital Compute Service Delivery
- In-Orbit Servicing Feasibility for Orbital Compute
- Satellite GPU Capacity Scaling: How Many GPUs Per Satellite?
- Solar Land Supply Constraints for Data Centers
- Terrestrial Energy Supply Constraints