Orbital Satellite Operational Lifetime
Answer
The expected operational lifetime of an orbital compute satellite is 5 years (central estimate), bounded by the FCC 5-year post-mission deorbit rule as the hard regulatory constraint. Physical hardware can plausibly survive 3.5 to 7 years in LEO, but GPU obsolescence and regulatory requirements converge to make 5 years the design target that the major industry players have adopted.
- Optimistic: 7 years -- If the FCC grants waivers for well-functioning satellites with active station-keeping, and radiation-hardened or space-designed chips (e.g., Nvidia Space-1) prove durable, the physical hardware could operate this long. Solar panel degradation at ~1-2%/yr in LEO leaves roughly 87-93% of initial power at year 7. However, GPU obsolescence becomes increasingly binding: a chip launched in year 0 will be 5-7 generations behind by year 7 if the annual release cadence holds.
- Central: 5 years -- This is the convergence point of several independent lines of evidence: the FCC 5-year deorbit rule, SpaceX's own filing (5-year operational life), Starcloud's stated 5-year lifespan, Starlink's ~5-year designed lifespan, and the hyperscaler GPU depreciation cycle (5-6 years). It is also roughly where radiation TID begins to challenge commercial silicon with moderate shielding.
- Conservative: 3.5 years -- In a scenario with elevated solar activity, higher-than-expected failure rates from thermal cycling and radiation SEEs, and rapid GPU obsolescence (1-year Nvidia cadence means 3+ newer generations available), operators may choose to deorbit and replace earlier. This aligns with the lower end of Starlink operational data, where some early batches showed 3-5% uncontrollable failure rates.
Evidence
Regulatory Constraints
FCC 5-year deorbit rule (effective Sept 2024) [evidence]: The FCC adopted rules requiring LEO satellite operators to complete disposal within 5 years of mission end (fcc-5yr-deorbit-rule). This replaced the prior 25-year guideline. New licensees after Sept 29, 2024 must comply. The FCC noted that "large constellations may warrant shorter periods." This is a hard legal constraint on operational lifetime -- satellites must either deorbit or begin active disposal within 5 years of end-of-mission.
SpaceX FCC filing: 5-year operational life [evidence]: SpaceX's January 2026 filing for up to 1 million orbital data center satellites at 500-2,000 km explicitly states the data centers "are expected to have an operational life of five years" (spacex-fcc-orbital-dc-filing). This is the operator's own design target, aligning with the FCC rule.
Starcloud: 5-year lifespan [evidence]: Starcloud CEO stated their "satellites should have a five-year lifespan given the expected lifetime of the Nvidia chips on its architecture" (starcloud-five-year-lifespan). This ties the satellite lifetime directly to chip lifetime rather than bus/structure lifetime.
Radiation Degradation
LEO TID environment [evidence]: With 3 mm Al shielding, total ionizing dose in LEO is <10 krad(Si) over a 3-year mission (researchgate-leo-radiation). Extrapolating linearly, a 5-year mission accumulates ~15-17 krad(Si), and a 7-year mission ~23 krad(Si).
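A minimal sketch of that linear extrapolation (linear dose accumulation is the simplifying assumption here; solar-cycle variation can shift the yearly rate):

```python
# Linear extrapolation of total ionizing dose (TID) behind ~3 mm Al in LEO.
# The 10 krad(Si)-over-3-years figure is the upper bound from the cited source;
# accumulating it linearly with mission duration is an assumption.
DOSE_3YR_KRAD = 10.0
rate_krad_per_year = DOSE_3YR_KRAD / 3.0  # ~3.3 krad(Si)/yr

for years in (3, 5, 7):
    print(f"{years}-year mission: ~{rate_krad_per_year * years:.1f} krad(Si)")
```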
Google Suncatcher TPU radiation testing [evidence]: Google's Trillium TPUs (v6e Cloud TPU) survived proton beam testing to ~2 krad(Si) before HBM subsystems showed irregularities -- nearly 3x the expected shielded 5-year mission dose of ~700 rad(Si) (google-suncatcher). This suggests commercial AI silicon can survive a 5-year LEO mission with adequate shielding, with meaningful margin.
Commercial silicon radiation tolerance [evidence]: Ordinary modern server CPUs would be "severely damaged by a tiny fraction" of rad-hard levels -- "potentially just a few krad" (per-aspera-space-compute). The RAD750 space processor tolerates 200,000-1,000,000 rad, but commercial chips are limited to single-digit krad. With shielding keeping LEO doses to ~3-5 krad over 5 years, commercial silicon operates near its margin.
Single event effects (SEEs) [evidence]: In LEO, trapped protons (especially in the South Atlantic Anomaly) are the greatest SEE threat. Commercial CMOS at 65nm and below is increasingly susceptible to single-event upsets and multi-bit upsets, requiring ECC and fault-tolerant design. Neural network weights are relatively tolerant of bit flips (per Elon Musk's observation), but control electronics and memory are not.
Hardware Failure Rates
H100 MTBF ~50,000 hours [evidence]: NVIDIA H100 GPUs have mean time between failures of approximately 50,000 hours (~5.7 years) in terrestrial conditions (epoch-gpu-failures). At scale (100K GPUs), this means one failure every 30 minutes, with ~9% annualized failure rate.
Meta Llama 3 cluster failures [evidence]: Meta experienced 419 failures in 54 days on a 16,384 H100 cluster, including 148 GPU hardware failures (0.9% of GPUs) and 72 HBM3 failures (0.44%) (meta-llama3-failures). This implies an annualized individual GPU failure rate of ~6%.
Large-scale ML cluster reliability [evidence]: Research shows MTTF for 1024-GPU jobs is 7.9 hours, approximately 2 orders of magnitude lower than 8-GPU jobs at 47.7 days (revisiting-ml-cluster-reliability). Hardware reliability scales inversely with GPU count.
Space environment amplification [opinion]: In orbit, GPU failure rates will likely be higher than terrestrial baselines due to thermal cycling (hot sun/cold eclipse every 90 minutes), radiation-induced degradation, vibration during launch, and inability to perform physical repairs. A reasonable estimate is 1.5-3x the terrestrial annualized failure rate, yielding ~14-27% annual GPU attrition in space.
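The arithmetic behind these failure-rate figures can be checked in a few lines; note that the 1.5-3x space multiplier is the opinion-level assumption from the paragraph above, not a measured value:

```python
# Fleet-level failure interval from per-GPU MTBF (cited H100 figure).
MTBF_HOURS = 50_000
FLEET_SIZE = 100_000
minutes_between_failures = MTBF_HOURS / FLEET_SIZE * 60
print(f"One failure every {minutes_between_failures:.0f} minutes fleet-wide")  # 30 minutes

# Annualize the Meta Llama 3 observation: 148 GPU failures on 16,384 GPUs in 54 days.
meta_annualized = 148 / 16_384 * (365 / 54)
print(f"Meta-derived annualized per-GPU failure rate: {meta_annualized:.1%}")  # ~6%

# Apply the assumed 1.5-3x space multiplier to the ~9%/yr terrestrial baseline,
# reproducing the ~14-27%/yr range stated above.
for multiplier in (1.5, 3.0):
    print(f"{multiplier}x multiplier -> ~{0.09 * multiplier:.1%}/yr in orbit")
```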
Solar Panel Degradation
GEO solar array measured degradation [evidence]: Telemetry from 11 GEO communications satellites showed GaAs cells degrade at 0.44-1.03%/yr and Si cells at 0.71-1.69%/yr (solar-degradation-geo-gaas-si). LEO radiation fluences are 5-10x lower than GEO for the same time period, suggesting lower degradation rates in LEO.
ISS solar array degradation [evidence]: The ISS P6 silicon photovoltaic arrays showed measured short-circuit current degradation of 0.2-0.5%/yr, below the predicted rate of 0.8%/yr (iss-solar-array-degradation). The ISS operates at ~400 km LEO.
Triple-junction cells with coverglass [evidence]: Modern triple-junction InGaP2/InGaAs/Ge cells with coverglass protection show annual degradation coefficients of ~0.2-0.5%/yr in LEO under standard conditions. Coverglass (100 um thick) reduces degradation by orders of magnitude versus bare cells.
Solar panel lifetime implication [evidence]: At 0.5-1.5%/yr degradation in LEO, a 5-year satellite retains ~92-97% of initial power; a 7-year satellite retains ~90-96%. Solar panels are NOT the binding constraint on satellite lifetime. A 10-15% margin in initial panel sizing comfortably covers 5-7 years of degradation.
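The retention figures follow from straightforward compounding of the annual degradation rate:

```python
# Remaining power fraction after compounding a constant annual degradation rate.
def retention(rate_per_year: float, years: int) -> float:
    return (1.0 - rate_per_year) ** years

for years in (5, 7, 10):
    worst = retention(0.015, years)  # 1.5 %/yr (pessimistic LEO rate)
    best = retention(0.005, years)   # 0.5 %/yr (optimistic LEO rate)
    print(f"Year {years}: {worst:.1%}-{best:.1%} of initial power remaining")
```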
Starlink Operational Data
Starlink designed lifespan: 5 years [evidence]: Starlink satellites are designed for a 5-year operational lifespan (starlink-deorbit-stats). Of 10,801 launched, ~1,391 (~13%) have re-entered, with early batches showing 3-5% uncontrollable failure rates.
Starlink deorbit activity 2024-2025 [evidence]: SpaceX deorbited 472 Starlink satellites between December 2024 and May 2025, and another 218 between June and November 2025. Many of these satellites were less than 5 years old, suggesting proactive fleet management and replacement with newer models rather than pure end-of-life disposal.
Starlink orbit lowering 2026 [evidence]: SpaceX is lowering all Starlink satellites from ~550 km to ~480 km in 2026, which ensures malfunctioning units deorbit faster via increased atmospheric drag. This demonstrates the operational philosophy of preferring faster disposal over extended lifetimes.
GPU Obsolescence
Nvidia 1-year release cadence [evidence]: Nvidia has shifted from a 2-year to a 1-year release cadence for datacenter GPUs: Hopper (2022), Blackwell (2024-25), Rubin (2026), Feynman (2028) (nvidia-one-year-cadence). Each generation delivers roughly 2-4x performance improvement for AI inference.
GPU depreciation schedules [evidence]: AWS, Google, and Microsoft use 6-year depreciation schedules for servers/GPUs (gpu-depreciation-schedules). Industry is converging toward 5-year useful life. Michael Burry has argued for 3-year or shorter depreciation reflecting faster obsolescence.
Dylan Patel on orbital GPU economics [evidence]: Dylan Patel (SemiAnalysis) noted that the testing, assembly, and launch process for orbital GPUs could consume 6+ months, representing "10% of your cluster's useful life" if GPU useful life is 5 years (dylan-patel-gpu-depreciation). This effectively shortens the in-orbit productive period.
"Operate, deorbit, replace" model [evidence]: Per Aspera analysis proposes that early orbital clouds will default to designing for a "5-7 year tour, then burn it up and launch Version N+1 with the latest silicon" (per-aspera-space-compute). This refresh cadence aligns with Moore's Law-like improvements and curbs long-lived debris.
Station-Keeping and Orbital Decay
Orbital decay at 500-700 km [evidence]: Atmospheric drag at 500-700 km LEO causes orbital decay of 13-29 meters per day during quiet solar conditions. During high solar activity, decay rates increase substantially. At 480 km (Starlink's planned altitude), uncontrolled satellites deorbit within months to a few years. At 700 km, passive decay takes decades, requiring active deorbit at end-of-life.
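For scale, the quoted quiet-sun decay rates translate to yearly altitude loss as follows (a first-order figure only, since real decay accelerates as the satellite descends into denser atmosphere):

```python
# Yearly altitude loss from the quoted quiet-sun drag decay rates.
# First-order only: decay accelerates as altitude (and thus air density) drops.
for metres_per_day in (13, 29):
    km_per_year = metres_per_day * 365 / 1000
    print(f"{metres_per_day} m/day -> ~{km_per_year:.1f} km/yr")
```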
Station-keeping fuel budget [evidence]: Starlink V2 Mini satellites use argon-fueled Hall thrusters with 2.4x the thrust and 1.5x the specific impulse of earlier krypton thrusters. Propellant mass for 5 years of station-keeping at 500-700 km is a meaningful fraction of satellite mass but is well within engineering capability, as demonstrated by the Starlink fleet.
Analysis
The Binding Constraint: Regulatory, Not Physical
The operational lifetime of an orbital compute satellite is determined by the intersection of four independent constraints, listed from most to least binding:
1. FCC 5-year deorbit rule (hard regulatory cap). This is the single most important constraint. Post-mission disposal must occur within 5 years of mission end. While "mission end" is not the same as "launch date" (a satellite could theoretically operate for 7 years then deorbit within 5 years of declaring end-of-mission), the FCC has signaled that large constellations "may warrant shorter periods." SpaceX's own filing adopts 5 years as the total operational life, likely reflecting both regulatory compliance and engineering practicality.
2. GPU obsolescence (economic constraint, ~3-5 years). With Nvidia releasing new architectures annually and each generation delivering 2-4x inference performance, a GPU launched today will be 2-3 generations behind within 3 years and 4-5 generations behind within 5 years. At some point, the economics of operating an obsolete chip in orbit become worse than deorbiting and launching current-generation silicon. For inference workloads (which are less demanding of cutting-edge hardware than training), this crossover likely occurs at 4-5 years. The "operate, deorbit, replace" model explicitly embraces this.
3. Radiation-induced degradation (physical constraint, ~5-7 years). Google's Suncatcher testing shows commercial AI silicon surviving ~3x the shielded 5-year dose, suggesting a physical radiation limit around 7-10 years with adequate shielding (~3 mm Al). However, this is a gradual degradation, not a cliff: performance degrades through accumulated parametric drift, increased error rates, and occasional SEE-induced failures. Running commercial silicon "at the margin" in years 5-7 would require increasingly aggressive error correction and graceful degradation strategies.
4. Solar panel degradation (non-binding, ~15-20 years). At 0.2-1.5%/yr in LEO, solar arrays retain >85% of initial power after 10 years. This is easily managed by oversizing arrays by 10-15% at launch. Solar panels are categorically not the limiting factor.
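The obsolescence gap in point 2 compounds quickly. A toy model, assuming exactly one generation per year at the quoted 2-4x per-generation gain (both simplifications, and the 4x end compounds very aggressively):

```python
# Toy model: how much faster is current-generation silicon than a chip
# launched at year 0, given one generation/year and 2-4x gain per generation?
for years in (3, 5, 7):
    low = 2 ** years   # 2x per generation
    high = 4 ** years  # 4x per generation
    print(f"Year {years}: current silicon is ~{low}x-{high}x the launched chip")
```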
Terrestrial GPU Failure Rates vs. Space
On Earth, the ~9% annualized GPU failure rate is manageable because failed GPUs are physically swapped. In orbit, each GPU failure represents permanent capacity loss. Over a 5-year mission with an assumed 1.5x space multiplier on failure rates:
- Terrestrial: ~9%/yr annualized, ~37% cumulative over 5 years (but replaced)
- Space: ~14%/yr estimated, ~53% cumulative over 5 years (permanent loss)
This means an orbital data center satellite launched with N GPUs would have roughly N/2 functioning GPUs at end-of-life under central assumptions. Redundancy and graceful degradation in inference workloads partially offset this (inference can tolerate individual GPU loss, unlike tightly-coupled training).
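A minimal check of the cumulative-attrition arithmetic, using the same constant-rate compounding assumption as the bullets above:

```python
# Cumulative GPU attrition over 5 years at a constant annualized failure rate,
# with no in-orbit replacement (survivors compound multiplicatively).
def cumulative_loss(annual_rate: float, years: int) -> float:
    return 1.0 - (1.0 - annual_rate) ** years

print(f"Terrestrial (9%/yr):  {cumulative_loss(0.09, 5):.1%} lost over 5 years")
print(f"Space (14%/yr, 1.5x): {cumulative_loss(0.14, 5):.1%} lost over 5 years")
```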
Why 5 Years Is the Design Point
The evidence converges on 5 years with remarkable consistency:
- FCC rule: 5 years max post-mission disposal
- SpaceX filing: 5-year operational life stated
- Starcloud CEO: 5-year lifespan based on chip lifetime
- Starlink precedent: 5-year designed lifespan
- Hyperscaler GPU depreciation: 5-6 years
- H100 MTBF: ~5.7 years (aligns with 5-year service life)
- Radiation tolerance: 5-year mission dose (~3-5 krad shielded) within commercial silicon capability
The optimistic case (7 years) requires: FCC waiver, space-hardened chips (Nvidia Space-1 Vera Rubin), and acceptance of 5+ generations of GPU obsolescence at an annual release cadence. The conservative case (3.5 years) reflects: elevated solar activity accelerating degradation, higher-than-expected failure rates, rapid GPU obsolescence making early replacement economically rational, and the Dylan Patel observation that 6+ months of pre-launch testing/assembly effectively shortens useful orbital life.
Key Uncertainty: The Role of Obsolescence
The most important open question is whether GPU obsolescence or physical degradation is more binding. If inference hardware commoditizes (prices fall, performance plateaus), then physical lifetime becomes binding and satellites might operate the full 5-7 years. If the current pace of AI hardware improvement continues (2-4x/generation annually), then economic obsolescence at 3-4 years becomes binding, and operators will rationally choose to deorbit and replace with current-generation silicon even before the FCC rule forces them to.
For cost-optimized at-scale deployment of inference workloads, the central 5-year estimate represents the point where regulatory, physical, and economic constraints simultaneously converge.