Cost Parity Timeline

When (if ever) does orbital AI compute TCO reach parity with terrestrial, and what are the key drivers and constraints?

Answer

The optimistic scenario reaches ~1.1x by 2035, declining to ~1x by 2040 — approaching but not clearly reaching cost parity. Under aggressive but not implausible assumptions (Starship at $100/kg by 2030, lightweight satellites at ~13 kg/kW_IT, 5.9-year effective lifetime, SpaceX-level financing declining from 10% to 8% WACC), the optimistic scenario nears parity in the late 2030s as WACC compression and manufacturing learning curves compound.

The central scenario shows orbital at 1.5x terrestrial cost by 2040, and the conservative at 3.3x. The remaining gap is driven by three structural factors: the effective lifetime penalty (orbital hardware delivers fewer capacity-years, partly due to deployment delay consuming GPU economic life), the cost of capital spread (orbital assets carry a higher financing premium than terrestrial, though this narrows over time), and the GPU space adaptation premium.

Beyond cost, orbital compute's workload coverage depends on satellite architecture. Monolithic 72-GPU satellites with internal NVLink can serve all Tier 1 and Tier 2 workloads, including frontier MoE at EP=64. Distributed architectures of smaller satellites face inter-satellite link bandwidth constraints that limit them to Tier 1 workloads (within a single satellite) and degraded cross-satellite pipeline parallelism. In either case, Tier 3 workloads (NVL144+) approach or exceed single-satellite capacity.

Inputs

| Input | Question | Answer | Page |
| --- | --- | --- | --- |
| orbital-tco | What is the amortized TCO per kW_IT/year for orbital compute? | $9,221-$144,203/kW_IT/year (scenario/year dependent) | link |
| terrestrial-tco | What is the amortized TCO per kW_IT/year for terrestrial compute? | $6,593-$17,493/kW_IT/year (scenario/year dependent) | link |
| inference-networking-requirements | What are the networking requirements for AI inference? | 8-72 GPUs per tightly-coupled domain | link |

Analysis

The TCO Ratio Over Time

Important caveat: The year-to-year movement in this table is driven by three time-varying inputs: launch cost, platform manufacturing cost (via learning curves), and orbital WACC (declining as operational history accumulates). All other orbital parameters — effective lifetime, fixed opex, structural overhead, and orbital PUE — are held constant across 2026–2040. In reality, effective lifetime should improve with design iteration and opex should decline with fleet-management experience. Holding these flat makes the later years (2035–2040) somewhat conservative for orbital. Readers should not interpret the year-by-year outputs as a mature forecast of all relevant learning curves.

| Year | Optimistic | Central | Conservative |
| --- | --- | --- | --- |
| 2026 | 1.9 | 4.8 | 9.4 |
| 2028 | 1.4 | 3.2 | 8.0 |
| 2030 | 1.2 | 2.3 | 5.6 |
| 2032 | 1.1 | 2.1 | 4.9 |
| 2035 | 1.1 | 1.7 | 3.9 |
| 2040 | 1.0 | 1.5 | 3.3 |

The optimistic ratio drops rapidly from 1.9 in 2026 to 1.2 by 2030, then plateaus near 1.1x, easing toward 1.0x by 2040. The residual ~8% gap (at 2035) reflects the compounding effect of CRF-based amortization on the effective lifetime penalty, the cost of capital spread (10% orbital vs 5% terrestrial WACC in the optimistic case), and the GPU space adaptation premium. The slight improvement from 1.1 (2035) to 1.0 (2040) occurs because terrestrial energy costs continue falling in the optimistic scenario while orbital costs have already converged to their floor.

The central ratio declines more gradually, reaching 1.5 by 2040. The wider gap reflects the compounding effect of a heavier satellite mass budget (~33.3 vs ~13.3 kg/kW_IT in 2026), higher launch costs, a shorter effective lifetime (3.8 vs 5.9 years), and higher initial WACC (13.5% vs 10%). The central WACC declines from 13.5% to 10% by 2040, which partially offsets the lifetime penalty in later years.
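The mass-budget and launch-cost drivers compound multiplicatively. A back-of-envelope sketch (the kg/kW_IT figures are the 2026 values above; the $/kg prices are the 2035 optimistic/conservative OAT bounds from the sensitivity table later on this page, paired here purely for illustration, not as scenario bundles):

```python
# Back-of-envelope launch capex per kW of IT power:
#   launch capex = (satellite mass per kW_IT) x (launch price per kg).
# Illustrative pairing only; central-scenario prices are not shown here.
def launch_capex_per_kw(mass_kg_per_kw: float, price_per_kg: float) -> float:
    """One-time launch capex in $/kW_IT."""
    return mass_kg_per_kw * price_per_kg

light_cheap = launch_capex_per_kw(13.3, 35)    # light satellite, mature Starship pricing
heavy_costly = launch_capex_per_kw(33.3, 500)  # heavy satellite, high launch pricing

print(f"${light_cheap:,.0f} vs ${heavy_costly:,.0f} per kW_IT (one-time)")
```

The two endpoints differ by more than 35x, which is why launch cost matters greatly in early years but fades once $/kg falls, leaving the mass budget as the surviving lever.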

Why True Parity Remains Elusive: The Structural Cost Gap

The optimistic scenario's ~1.1x by 2035 shows that the gap has narrowed relative to the central case, but true cost parity (ratio below 1.0) is not reached in any scenario. This reflects three structural economic realities:

1. GPU cost is common to both sides and dominates both TCOs.

GPU hardware represents 74% of central terrestrial TCO and approximately 65-70% of central orbital amortized capex (once amortized over the effective lifetime). Since GPU cost per kW_IT is essentially the same in both deployment contexts (plus a modest 8-30% orbital premium), this dominant shared cost component cannot create a cost advantage for either side. The competition between orbital and terrestrial reduces to a comparison of their non-GPU costs -- and that comparison has narrowed but still favors terrestrial.

2. The effective lifetime penalty is the primary remaining cost driver.

Orbital compute hardware delivers fewer capacity-years than terrestrial: 2.2-5.9 effective years vs 4-6 years for terrestrial GPU depreciation. This is partly because four degradation mechanisms reduce delivered capacity (bus loss, GPU attrition, silent data corruption (SDC) overhead, and economic obsolescence) and partly because a deployment delay of 3-6 months (ground testing, launch, commissioning) consumes GPU economic life before the satellite begins producing revenue. Orbital must amortize its capex over this shorter period at a higher cost of capital. In the central 2026 case, orbital amortizes GPU cost at 13.5% WACC over 3.8 years (CRF ≈ 0.35) while terrestrial amortizes at 7% WACC over 5 years (CRF ≈ 0.24).
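That amortization comparison can be reproduced directly with the standard capital-recovery formula (inputs are the central 2026 figures quoted above):

```python
# Capital Recovery Factor: CRF(r, n) = r(1+r)^n / ((1+r)^n - 1),
# the equivalent annual cost per $1 of capex at WACC r over n years.
def crf(r: float, n: float) -> float:
    return r * (1 + r) ** n / ((1 + r) ** n - 1)

orbital = crf(0.135, 3.8)     # central 2026 orbital: 13.5% WACC, 3.8-yr effective life
terrestrial = crf(0.07, 5.0)  # central terrestrial GPU: 7% WACC, 5-yr depreciation

print(f"orbital CRF:     {orbital:.2f}")      # ~0.35
print(f"terrestrial CRF: {terrestrial:.2f}")  # ~0.24
print(f"amortization penalty: {orbital / terrestrial:.2f}x")  # ~1.45x
```

So before any launch or platform costs enter, each dollar of orbital GPU capex carries roughly 1.45x the annual amortization burden of a terrestrial dollar in the central 2026 case.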

This penalty stems from the harsh space environment: radiation degradation reduces solar panel and compute performance over time, thermal cycling fatigues components, and satellite failures require full replacement rather than component swaps. The penalty is partially offset by orbital's elimination of energy costs (~694 $/kW_IT/year central) and cooling overhead, but the offset is far from sufficient in the central case.

3. Orbital energy savings are real but insufficient to close the gap in the central case.

Terrestrial energy cost is only 694 $/kW_IT/year (central, 2026) -- roughly 6-7% of total terrestrial TCO. Eliminating this saves ~$580-$720/kW_IT/year (varying by year). In the optimistic scenario, this saving plus lower platform and launch costs narrows the gap, but the remaining ~8% premium (optimistic, 2035) reflects the CRF-amplified lifetime penalty, the cost of capital spread, and the GPU premium. In the central scenario, the ~$694/kW_IT/year energy saving (2026) is overwhelmed by the heavier mass budget, higher launch costs, shorter effective lifetime, and higher cost of capital described above.

Orbital compute does eliminate cooling costs too (captured in the near-unity orbital PUE), but modern terrestrial PUE of 1.10 means cooling overhead is only ~5% of power -- a trivial additional advantage.
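As a consistency check, the grid electricity price implied by the ~$694/kW_IT/year central figure can be backed out, assuming 24/7 operation at PUE 1.10 (the $/kWh rate is inferred here, not a stated input of the model):

```python
# Back out the implied electricity price from the central terrestrial
# energy cost, assuming continuous operation at the stated PUE.
HOURS_PER_YEAR = 8760

def implied_price_per_kwh(annual_cost_per_kw_it: float, pue: float) -> float:
    # Each kW of IT load draws `pue` kW at the meter, every hour of the year.
    return annual_cost_per_kw_it / (HOURS_PER_YEAR * pue)

print(f"${implied_price_per_kwh(694, 1.10):.3f}/kWh")  # ~$0.072/kWh
```

An implied rate near $0.07/kWh is a plausible industrial electricity price, which supports the internal consistency of the $694/kW_IT/year figure.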

Sensitivity: What Would Need to Change for True Parity?

With CRF-based amortization, the optimistic scenario reaches ~1.1x by 2035, leaving a residual gap of roughly 8%.

Closing this gap would require simultaneous favorable shifts across multiple parameters:

Effective lifetime extension (highest impact). Extending the optimistic effective lifetime from 5.9 to 7.0+ years would reduce the CRF, cutting annual amortized capex. This requires radiation-hardened chips (Nvidia Space-1), lower failure rates, shorter deployment delays, or in-orbit servicing to extend useful life.

WACC convergence (moderate impact, partially modeled). Orbital WACC now declines over time in the model (optimistic: 10%→8% by 2040). If it converges further toward terrestrial rates (5-7%), the ratio improves further. The 2040 optimistic WACC of 8% is already close to mature infrastructure rates, so further compression requires demonstrated multi-year operational track record.

Platform manufacturing cost reduction (moderate impact). The model now includes manufacturing learning curves (8%/year optimistic, 5%/year central, 2%/year conservative), reducing optimistic 2035 platform cost from $8,000 (2026) to ~$3,475. Further reduction below this would require breakthrough manufacturing processes beyond normal production learning.

For the central scenario, the gap is much larger: ~1.7x at 2035 and ~1.5x at 2040.

The most sensitive levers for the central case:

Effective lifetime extension (highest impact). Extending central effective lifetime from 3.8 to 5.9 years reduces CRF from 0.35 to ~0.23, cutting annual capex burden by ~35%. This is the single highest-impact improvement.

WACC reduction (partially modeled). Central orbital WACC already declines from 13.5% to 10% by 2040 in the model. Further reduction to SpaceX balance-sheet levels (8%) would provide additional benefit.

In-orbit servicing (potentially high impact, long timeline). If robotic in-orbit servicing could replace degraded GPU modules, the primary effect would be extending effective lifetime. This could push central effective lifetime from 3.8 to 4.5-5.0 years, reducing CRF-weighted amortized capex by ~12-19%. However, no capability for autonomous LEO component-level servicing has been demonstrated. Commercial LEO servicing at scale is unlikely before 2035. See In-Orbit Servicing Feasibility for the full analysis.

Combined favorable shift. The optimistic scenario already incorporates WACC compression (10%→8% by 2040) and manufacturing learning curves. It approaches parity at ~1x by 2040. Pushing beyond this to sub-1.0 ratios would require effective lifetime extension beyond 5.9 years (e.g., through in-orbit servicing or radiation-hardened chips) while other parameters remain at optimistic values. CRF(0.08, 7.0) = 0.192, which would bring the ratio below 1.0 — confirming that cost parity is theoretically reachable but requires beyond-optimistic lifetime outcomes.
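The CRF(0.08, 7.0) = 0.192 figure, and the sensitivity of the amortization factor to lifetime at the 2040 optimistic WACC, can be reproduced with the standard capital-recovery formula:

```python
# Capital recovery factor, as used throughout this page.
def crf(r: float, n: float) -> float:
    return r * (1 + r) ** n / ((1 + r) ** n - 1)

# Optimistic 2040 WACC of 8%, sweeping effective lifetime:
for life in (5.9, 7.0, 8.0):
    print(f"{life:.1f} yr -> CRF = {crf(0.08, life):.3f}")
# 5.9 yr -> CRF = 0.219
# 7.0 yr -> CRF = 0.192   (the value cited above)
# 8.0 yr -> CRF = 0.174
```

Each additional year of effective life shaves roughly 2-3 points off the annualized capex factor in this regime, which is why lifetime extension is the decisive lever for crossing below 1.0.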

The analysis confirms that the optimistic scenario can approach parity by the late 2030s with WACC compression and manufacturing learning, but true cost parity requires lifetime extension beyond current optimistic estimates. The central scenario remains substantially above parity, and the conservative remains far from parity regardless.

One-at-a-Time Sensitivity Analysis

The bundled scenarios (optimistic/central/conservative) vary all parameters together, making it difficult to isolate which parameters matter most. A one-at-a-time (OAT) analysis varies each input from its optimistic to conservative value while holding all other inputs at central, revealing each parameter's individual impact on the TCO ratio.

2035 reference year (baseline all-central: ~1.7x):

| Rank | Parameter | Optimistic value (2035) | Conservative value (2035) | Swing |
| --- | --- | --- | --- | --- |
| 1 | Effective satellite lifetime | 5.9 yr | 2.2 yr | ~1.1 |
| 2 | Platform manufacturing cost | $3,475/kW_IT | $29,175/kW_IT | ~0.9 |
| 3 | Launch cost to LEO | $35/kg | $500/kg | ~0.5 |
| 4 | GPU useful life | 6 yr | 4 yr | ~0.5 |
| 5 | Orbital WACC (2035 value) | 9% | 17% | ~0.4 |
| 6 | Orbital GPU cost premium | 1.08x | 1.30x | ~0.3 |
| 7 | Terrestrial WACC | 5% | 10% | ~0.3 |
| 8 | Terrestrial infrastructure cost | $8K/kW_IT | $20K/kW_IT | ~0.2 |
| 9+ | All other parameters | | | <0.11 each |

Note: Swing values are approximate. Platform manufacturing cost is now time-varying (learning curves), so the 2035 optimistic/conservative values differ from the 2026 base.

The OAT analysis confirms four findings:

1. Effective satellite lifetime remains the dominant parameter at all time horizons. Its ~1.1x swing at 2035 is the largest single-parameter impact. The confidence in this parameter is low-medium — no orbital compute satellite has operated, so the central 3.8-year estimate is derived from a structured bottom-up reliability model with separate terms for bus loss, GPU attrition, SDC overhead, economic obsolescence, deployment delay, and spares. The central value of 3.8 years should be understood as a weakly informed indicative estimate, likely somewhere in the range of 2–6 years.

2. GPU useful life reverses direction — it helps terrestrial. A longer GPU useful life (optimistic: 6 years) lowers terrestrial GPU amortization, widening the gap. A shorter life (conservative: 4 years) raises terrestrial costs, narrowing the gap. This is because GPU cost dominates terrestrial TCO, and orbital amortizes GPU cost over the satellite's effective lifetime regardless of the GPU depreciation schedule. Orbital benefits from shorter GPU useful life — an unusual situation where the "conservative" assumption favors orbital.

3. WACC is a significant driver. Orbital WACC (rank 5, ~0.4 swing) and terrestrial WACC (rank 7, ~0.3 swing) each contribute meaningful sensitivity. If both move favorably for orbital simultaneously (lower orbital WACC + higher terrestrial WACC), the combined effect approaches that of effective satellite lifetime. This underscores that financing conditions are a first-order input, not a second-order refinement.

4. Launch cost ranks third by 2035. At 2030, launch cost is the second most impactful parameter, but by 2035 it drops to third (~0.5 swing) and continues declining. This quantitatively confirms the qualitative finding that launch cost becomes less important as Starship matures — the remaining gap is increasingly dominated by factors (lifetime, manufacturing cost, financing) that launch cost reduction cannot address.

The top 8 parameters together account for ~95% of the total variation between optimistic and conservative bundled scenarios. Parameters ranked 9 and below (including terrestrial energy cost, PUE, solar/thermal mass, and eclipse duration) have swings below 0.11x and do not materially affect the parity conclusion.
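The OAT sweep itself is mechanically simple. The sketch below runs the procedure on a deliberately simplified stand-in cost model — every number in `tco_ratio`, `central`, and `ranges` is illustrative, not this page's actual model or inputs; only the procedure is the point. Even this toy version reproduces the qualitative finding that effective lifetime dominates:

```python
# Generic one-at-a-time (OAT) sweep: vary one parameter across its
# optimistic..conservative bounds while pinning all others at central,
# and record the swing in the output metric.
def crf(r, n):
    # Capital recovery factor: annualizes capex at WACC r over n years.
    return r * (1 + r) ** n / ((1 + r) ** n - 1)

def tco_ratio(p):
    # Toy proxy: orbital amortized capex vs terrestrial amortized capex + energy.
    orbital = (p["gpu"] * p["gpu_premium"] + p["platform"]
               + p["launch"] * p["mass"]) * crf(p["wacc_orb"], p["life"])
    terrestrial = (p["gpu"] + p["infra"]) * crf(p["wacc_ter"], p["gpu_life"]) + p["energy"]
    return orbital / terrestrial

central = {"gpu": 40_000, "gpu_premium": 1.15, "platform": 12_000,
           "launch": 150, "mass": 20, "wacc_orb": 0.12, "life": 3.8,
           "infra": 12_000, "wacc_ter": 0.07, "gpu_life": 5.0, "energy": 694}

# (optimistic, conservative) bounds for a few parameters -- illustrative values
ranges = {"life": (5.9, 2.2), "launch": (35, 500), "wacc_orb": (0.09, 0.17)}

for name, (opt, cons) in ranges.items():
    swing = abs(tco_ratio({**central, name: cons}) - tco_ratio({**central, name: opt}))
    print(f"{name:10s} swing: {swing:.2f}")   # lifetime shows the largest swing
```

The dominance of lifetime falls out of the structure: it enters through the CRF, which multiplies the entire orbital capex stack, whereas launch cost touches only one additive term.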

Cross-Scenario Comparison: Optimistic Orbital vs Conservative Terrestrial

The bundled scenarios compare like-for-like (optimistic orbital vs optimistic terrestrial, etc.), but the most policy-relevant question is: what if space technology succeeds while terrestrial headwinds persist? This cross-scenario pairing — optimistic orbital assumptions with conservative terrestrial assumptions — represents the strongest possible case for orbital compute.

| Year | Optimistic Orbital TCO ($/kW_IT/yr) | Conservative Terrestrial TCO ($/kW_IT/yr) | Ratio |
| --- | --- | --- | --- |
| 2026 | 13,015 | 17,291 | 0.75x |
| 2030 | 7,979 | 17,493 | 0.46x |
| 2035 | 7,064 | 17,431 | 0.41x |
| 2040 | 6,614 | 17,332 | 0.38x |

Ratio = optimistic orbital TCO / conservative terrestrial TCO. Values below 1.0 indicate orbital is cheaper in this cross-scenario pairing.
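The ratio column follows directly from the two TCO columns; a quick reproduction:

```python
# Reproduce the Ratio column: optimistic orbital TCO divided by
# conservative terrestrial TCO, both in $/kW_IT/year (values from the table).
rows = {2026: (13_015, 17_291), 2030: (7_979, 17_493),
        2035: (7_064, 17_431), 2040: (6_614, 17_332)}

for year, (orbital, terrestrial) in rows.items():
    print(f"{year}: {orbital / terrestrial:.2f}x")
# 2026: 0.75x
# 2030: 0.46x
# 2035: 0.41x
# 2040: 0.38x
```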

Important caveat: The sub-1.0 ratios in this table require simultaneously assuming the most favorable orbital outcomes (optimistic launch cost, lifetime, WACC, and manufacturing learning) and the most unfavorable terrestrial outcomes (high grid costs, capacity auction spikes, slow BTM deployment, elevated WACC). Neither side of this pairing represents a consensus expectation. Furthermore, the 2026 orbital values assume 100+ kW compute satellite systems that do not yet exist — no such system has been built, launched, or operated as of early 2026. These ratios illustrate the theoretical ceiling of orbital cost advantage, not a near-term deployable reality.

Under this pairing, orbital is cheaper from 2026 onward. However, this requires simultaneously assuming: (a) Starship achieves $100/kg by 2030, (b) orbital hardware achieves a 5.9-year effective lifetime, (c) SpaceX-level financing at 10% WACC, (d) manufacturing learning curves compress platform cost aggressively, and (e) terrestrial faces sustained headwinds — high grid costs, capacity auction spikes, slow BTM deployment, and elevated WACC. The optimistic orbital assumptions individually have Low confidence; the conservative terrestrial assumptions are plausible but represent a worst-case that most market participants would not bet on.

This cross-scenario comparison demonstrates that orbital compute's business case depends not just on orbital technology succeeding, but on terrestrial infrastructure simultaneously underperforming. If even one side of this pairing reverts toward its central estimate, orbital loses its cost advantage.

Technology Readiness and Industrial Rollout

The economic analysis above asks whether orbital compute could be cost-competitive — it does not ask whether multi-GW commercial deployment is physically achievable on the timelines where the optimistic scenario approaches parity.

As of early 2026, multiple players have moved beyond concepts to hardware: low-power demonstrations (Kepler, Starcloud-1, Xingshidai) are operational in orbit, and 100 kW-class prototypes (SpaceX, Suncatcher) have been announced.

This is substantially more industry activity than existed 12 months ago. However, the existence of these demonstrations should not be read as validation of the 100+ kW system assumptions underlying our economic model. The operational systems (Kepler, Starcloud-1, Xingshidai) are low-power edge-compute demonstrations (0.7–28 kW), operating at 50–150× lower power than the satellites our cost model assumes. The gap between a 60 kg single-GPU satellite and a 1–5 ton, 100 kW compute satellite is not merely a matter of scaling up — it involves unsolved or undemonstrated engineering at scale in thermal management (radiative cooling at >100 kW with no convection), power systems (100+ kW EPS with multi-year reliability), deployable structures (large solar arrays and radiators), and mass production (satellite bus manufacturing at the volumes and costs the model assumes). None of the model's key assumptions — bus failure rates, thermal rejection capacity, platform manufacturing cost, or effective lifetime — have been validated at the power levels where the economics are computed. See the satellite GPU capacity scaling side page for the full analysis of satellite sizing, thermal constraints, and architecture tradeoffs.

Historical precedent for space constellation scaling is instructive: Starlink, the fastest large-constellation buildout to date, took roughly five years from first test satellites (2018) to a multi-thousand-satellite operational fleet.

A compute satellite is substantially more complex than a communications satellite (integrating GPU hardware, thermal management, power systems at higher specific power), suggesting that the prototype-to-scale timeline would be longer than Starlink's, not shorter. The current wave of low-power demonstrations compresses the early phase but does not eliminate the engineering challenges at 100 kW scale. A plausible timeline: 2025–2026 low-power demonstrations (operational) → 2027–2028 100 kW prototypes (SpaceX, Suncatcher) → 2029–2031 first operational 100 kW batches (10–100 satellites, ~1–10 MW) → 2032–2035 significant scale (1,000+ satellites, ~100 MW–1 GW) → 2035+ multi-GW deployment.

The optimistic scenario's 2030 ratio (~1.2x) may be economically correct but practically irrelevant — multi-GW commercial orbital compute deployment by 2030 is implausible even with the current industry momentum.

The 2035 timeframe is more relevant, and by then the central scenario still shows a ~1.7x ratio. The economic case for orbital compute depends on achieving optimistic-case assumptions and compressing the normal 7–10 year prototype-to-scale timeline for complex space systems.

Cost of Capital and Financing

The TCO model uses the Capital Recovery Factor (CRF) to amortize capex, with separate WACC parameters for orbital and terrestrial assets. This captures the fundamentally different risk profiles:

| Factor | Terrestrial | Orbital |
| --- | --- | --- |
| WACC | 5–10% (central: 7%) | 8–20% in 2026, declining (central: 13.5%→10% by 2040) |
| Asset life | 15–20 yr (building) + 5–6 yr (GPU) | 5–7 yr physical; 2.2–5.9 yr effective (entire satellite) |
| Salvage value | Building has residual value; GPU sold on secondary market | Zero — deorbited and destroyed |
| Failure mode | Component replacement; no catastrophic total loss | Satellite failure = total loss of that unit |
| Financing precedent | Deep, liquid market for data center project finance | No precedent; venture/corporate balance sheet only |

The CRF converts a one-time capex into an equivalent annual cost that reflects the time value of money. At the central 2026 orbital WACC of 13.5% and 3.8-year effective lifetime, CRF = 0.35 — meaning each dollar of orbital capex costs $0.35 per year, compared to simple straight-line amortization (1/3.8 ≈ $0.26/year) or the terrestrial GPU CRF at 7% over 5 years ($0.24/year). The higher orbital CRF reflects both the shorter lifetime and the higher cost of capital. By 2040, the central CRF declines to 0.33 as WACC compresses from 13.5% to 10%.

The financing spread has a material impact on the ratio. In the OAT sensitivity analysis, the WACC parameters individually contribute 0.36x (orbital) and 0.26x (terrestrial) swings — jointly, they are comparable in impact to effective satellite lifetime. The WACC spread is particularly important in the optimistic scenario: at 10% orbital WACC vs 5% terrestrial, the financing differential alone accounts for a material portion of the optimistic cost premium.

See orbital WACC and terrestrial WACC for the derivation of these estimates.

The Networking Constraint: Workload Limitation

Even if cost parity were achieved, orbital compute faces a workload feasibility constraint. The inference-networking-requirements analysis identifies three workload tiers by NVLink domain size, with feasibility depending on satellite architecture:

Tier 1 (1-8 GPUs): Models up to ~70B parameters with quantization. Feasible on any satellite architecture — no inter-satellite networking needed for tightly-coupled computation. This covers most practical inference workloads today, including distilled frontier models.

Tier 2 (8-72 GPUs): Large dense models and frontier MoE with wide expert parallelism (e.g., DeepSeek R1 at EP=64). A monolithic 72-GPU satellite with internal NVLink serves these workloads entirely within a single satellite. Distributed architectures of smaller satellites cannot — they face a fundamental inter-satellite link bandwidth gap (NVLink provides 1.8 TB/s per GPU vs. 0.1 TB/s per ISL link, and the NVL72's 130 TB/s aggregate all-to-all topology has no ISL analogue).

Tier 3 (72+ GPUs): NVL144+ workloads. Approach or exceed even monolithic satellite capacity (~300-500 kW power ceiling). The terrestrial roadmap is pushing domains to NVL144, NVL576, and beyond.
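The bandwidth figures quoted in the Tier 2 description imply the following gap (the 1 TB transfer size below is illustrative, not from the source):

```python
# Per-GPU bandwidth gap between NVLink and an optical inter-satellite link,
# using the figures quoted above. The 1 TB transfer size is illustrative.
NVLINK_TBS_PER_GPU = 1.8  # TB/s per GPU over NVLink
ISL_TBS = 0.1             # TB/s per inter-satellite link

print(f"per-GPU gap: {NVLINK_TBS_PER_GPU / ISL_TBS:.0f}x")  # 18x

for fabric, bw_tbs in (("NVLink", NVLINK_TBS_PER_GPU), ("ISL", ISL_TBS)):
    print(f"{fabric}: {1.0 / bw_tbs:.1f} s to move 1 TB")
```

An 18x per-link shortfall is before accounting for the NVL72's 130 TB/s aggregate all-to-all fabric, which has no ISL analogue at all — so the real gap for expert-parallel traffic patterns is far larger than the per-link ratio suggests.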

For monolithic satellites, many current frontier inference workloads (Tier 1-2) are served — conditional on fitting within a single NVL72-style domain and on current batching/context assumptions. The workload limitation is on future NVL144+ demands and on long-context workloads that push domain sizes upward. For distributed architectures, the constraint is more severe. In either case, the terrestrial frontier advances: the newest, most capable models consistently require the largest domains. The rapid improvement of smaller models (frontier capabilities become runnable on consumer GPUs within 6-12 months) means the models orbital can serve are increasingly capable, but orbital is likely 1-2 generations behind the absolute terrestrial frontier.

What the Ratio Range Implies for the Business Case

The optimistic scenario reaches ~1.1x by 2035. This is a meaningful premium, but it is within the range that non-cost advantages could offset for specific use cases:

  1. Capacity deployment speed. If terrestrial data center buildout is bottlenecked by power grid interconnection (currently 8-year queues in PJM, the largest US regional grid operator), a ~1.1x cost ratio (optimistic 2035) may be justifiable — capacity has option value. By 2040, the optimistic ratio approaches 1x.

  2. Energy sovereignty. Nations or organizations without reliable grid access could justify a premium for self-contained compute capacity. The optimistic ratio's trajectory toward parity by 2040 makes this increasingly viable.

  3. Siting constraints. If terrestrial land/power availability becomes severely constrained, scarcity pricing could push effective terrestrial costs closer to orbital.

However, the deployment-speed argument faces a strong terrestrial counterargument: behind-the-meter (BTM) generation is the market's actual response to grid constraints. Per the terrestrial energy cost analysis, 56 GW of BTM generation is already planned for US data centers, deployable in 6–12 months at costs below orbital. The orbital speed advantage holds only if BTM itself becomes supply-constrained — a plausible scenario at extreme scale but not the current trajectory.

The central scenario (~1.5x by 2040) and conservative scenario (~3.3x), by contrast, remain well above parity, and these scenarios represent the more likely outcomes given current technology maturity. The cost-competitiveness thesis — the claim that orbital compute will be cheaper than terrestrial — is not supported in the central or conservative scenarios. The optimistic scenario approaches parity by 2040 but requires simultaneous favorable outcomes across multiple Low-confidence parameters.

Limitations and Model Caveats

Workload scope depends on satellite architecture. With monolithic 72-GPU satellites, many current frontier inference workloads (Tier 1 and Tier 2, conditional on fitting within a single NVL72-style domain and current batching/context assumptions) are served within a single satellite's internal NVLink domain. With distributed small-satellite architectures, the scope narrows to Tier 1 and cross-satellite pipeline parallelism. Tier 3 (NVL144+) approaches single-satellite capacity limits in either architecture. Training, RAG, and large-context retrieval are not addressed. The TCO comparison implicitly assumes workloads where orbital can deliver equivalent service quality — the cost ratios are meaningful only for that subset of inference.

Correlated parameters make OAT sensitivity incomplete. The OAT analysis varies one parameter at a time, but the most important low-confidence variables are likely correlated: lower launch cost arrives with higher manufacturing scale (compressing platform cost); proven operations should compress WACC over time; longer effective life and lower catastrophic loss rates should move together. The bundled scenarios (optimistic/central/conservative) partially capture these correlations, but important cross-correlations remain unexplored. For example, the optimistic scenario's combined effect is not simply the sum of individual OAT improvements — some favorable shifts enable others.

Some orbital parameters are held flat from 2026 to 2040. Effective lifetime, fixed opex, structural overhead, and orbital PUE are constant across the time horizon. In reality, all of these should improve with operational experience. Launch cost, platform manufacturing cost, and orbital WACC are time-varying (the latter now declines as operational history accumulates). The remaining flat parameters mean the 2035-2040 ratios remain somewhat conservative for orbital — if the technology succeeds, effective lifetime in particular would improve with design iteration, further compressing the gap.

Availability is treated as an annual average. The eclipse page notes that eclipses are concentrated in a seasonal window (~30-150 days around the solstice), while the orbital TCO page divides annual cost by average availability. This is reasonable for batch and deferrable workloads but may understate the service-quality penalty for always-on, latency-sensitive products that need consistent availability year-round. The analysis does not model the economic cost of seasonal availability variation.

Operator archetype is asymmetric. The orbital optimistic and partly central cases explicitly model a SpaceX/xAI-style vertically integrated operator with internal transfer pricing and SpaceX financing advantages (10% WACC, shared Starlink ground infrastructure, internal launch pricing). The terrestrial side is benchmarked to broader hyperscaler/market economics. A fairer comparison would either benchmark best-in-class orbital against best-in-class terrestrial (where hyperscalers like Google also achieve below-market infrastructure and energy costs), or third-party orbital against third-party terrestrial. The current framing somewhat favors the orbital optimistic case relative to its terrestrial comparand.

WACC double-counting has been addressed. The central orbital WACC was adjusted from 15% to 13.5% to remove ~1.5pp of overlap with risks already captured in the effective lifetime parameter (shorter asset life, catastrophic loss). See the WACC page for the full decomposition.

Effective lifetime is derived from a structured reliability model with separate terms for bus loss (0.5-2.5%/yr), GPU accelerator attrition (3.2-16.1%/yr), SDC overhead (1.3% fixed), economic obsolescence (caps physical life), deployment delay (3-6 months), and spares/graceful degradation (small recovery factor). This decomposition improves traceability — extending effective lifetime from 3.8 to 5.9 years could come from lower bus failure rates (design maturation), lower GPU attrition (radiation hardening, better thermal design), shorter deployment delays (streamlined integration), longer economic relevance (slower GPU improvement cadence), or in-orbit servicing. However, the TCO model still consumes a single effective-lifetime scalar, so the sensitivity analysis cannot isolate individual mechanism contributions.