Sources

Source Quality Assessment for Load-Bearing Inputs

Key Sources

patel-2024-ai-bottlenecks

Dylan Patel — Deep dive on the 3 big bottlenecks to scaling AI compute

https://www.dwarkesh.com/p/dylan-patel

SemiAnalysis CEO on semiconductor bottlenecks, data center economics, and skepticism of space GPUs

States "Space GPUs aren't happening this decade." Estimates that renting the compute of a 1 GW data center costs ~$13B/year; Big Tech is committing ~$600B annually, with ~$1T in total supply-chain investment. Amazon can build data centers in as little as eight months. Argues scaling power in the US will not be a problem.

handmer-2026

I guess we're doing Moon factories now

https://caseyhandmer.wordpress.com/2026/02/10/i-guess-were-doing-moon-factories-now/

Argues orbital inference is economically viable because inference value far exceeds deployment cost premium

Contends inference value could be ~100x ground-based cost while space deployment costs only ~2x more, leaving substantial profit margins. Estimates ~10,000 Starship launches/year could deliver ~100 GW orbital power. Beyond that scale, manufacturing satellite mass in space (from lunar materials) becomes necessary. Claims beaming power from Earth to Moon is 1000x cheaper than alternative lunar power generation.

musk-2026

Elon Musk — "In 36 months, the cheapest place to put AI will be space"

https://www.dwarkesh.com/p/elon-musk

Musk argues orbital AI compute will be cheaper than terrestrial within 30-36 months

Claims orbit becomes the cheapest place for AI compute within 30-36 months. Solar panels achieve ~5x greater output in orbit (no atmosphere, no night, no batteries). Ground solar cells cost ~$0.25-0.30/W in China; space deployment reduces effective cost by 10x. Gas turbine production is sold out through 2030, and utility interconnect studies take 1+ year. Envisions 100+ GW/year deployment via ~10,000 annual Starship launches (20-30 Starships cycling every ~30 hours). Projects that AI compute launched to space annually will exceed cumulative Earth-based AI compute within five years.

Other Sources

vicor-newspace-dc-dc

High-efficiency DC-DCs for New Space applications

https://www.vicorpower.com/resource-library/articles/high-efficiency-dc-dcs-for-newspace-applications

Vicor Corporation technical article on space-grade DC-DC converter modules with specified efficiencies

Describes Vicor's Factorized Power Architecture (FPA) for NewSpace satellite power distribution. Reports space-qualified buck regulators achieve 67-95% efficiency; forward/flyback DC-DCs 47-87%. Specifies four radiation-tolerant COTS modules: BCM3423 bus converter (94V→33V, 94% efficiency), PRM2919 buck-boost regulator (96%), VTM2919 step-down to 2-4V (93%), VTM2919 step-down to 0.42-1.1V (91%). These are SAC (Sine Amplitude Converter) topology with zero-voltage/zero-current switching.

vpt-epc-gan-sgrb

VPT Introduces 120 Volt SGRB DC-DC Converter Featuring GaN Technology

https://www.vptpower.com/newsroom/press-releases/vpt-introduces-120-volt-sgrb-dc-dc-converter-featuring-gan-technology

Press release for radiation-hardened GaN-based DC-DC converter achieving 95% efficiency at 100 krad TID

VPT's SGRB12028S uses EPC Space GaN technology to achieve up to 95% efficiency at 120V input, 28V/400W output. Radiation hardened to 100 krad(Si) TID and 85 MeV/mg/cm2 SEE. VPT supplies converters to NASA, ESA, Lockheed Martin, Boeing, BAE Systems, Thales.

nasa-soa-power-2021

NASA State-of-the-Art of Small Spacecraft Technology — Chapter 3: Power

https://www.nasa.gov/wp-content/uploads/2021/10/3.soa_power_2021.pdf

NASA reference listing flight-proven PMAD systems with efficiency ratings from 86% to 98.5%

Table 3-6 lists TRL-9 PMAD systems: Pumpkin EPSM 1 (98.5%), AAC Clyde Space Starbuck Micro (97%), GomSpace P31U (96%), ISISPACE iEPS Type C (95%), DHV EPS Module (93%), EnduroSat EPS I (86%). Notes GaN improvements enabling higher switching rates and lower losses.

jwst-design-2023

The Design, Verification, and Performance of the James Webb Space Telescope

https://ntrs.nasa.gov/api/citations/20230009106/downloads/Design%20of%20JWST%20Final%20in%20Publication.pdf

Peer-reviewed NASA paper with as-measured JWST subsystem power budgets (Table 4)

JWST bus power during normal operations: ADCS 184W, C&DH 140W, Comms 170W, Thermal 437W, Propulsion 118W, EPS 64W, Harness 227W. Total bus 1369W vs. 660W science payload on 2029W observatory. Provides gold-standard as-measured spacecraft subsystem power data.

uah-spacecraft-design-101

Spacecraft Design 101 (UAH/NASA lecture)

http://matthewwturner.com/uah/IPT2010_spring/lectures_videos/01_Spacecraft_Design_101.pdf

NASA-affiliated university spacecraft design reference with subsystem power and mass allocation guides

Provides industry-standard subsystem power allocation guide for different spacecraft types. For "Other" missions: thermal control 33%, attitude control 11%, power electronics 2%, C&DH 15%, communications 30%, propulsion 4%, mechanisms 5% of bus power. Also provides mass allocation guide: power subsystem 21-35% of dry mass, structure 21-30%, ACS 7-13%.

laird-thermal-space

Thermal Pathways in Space

https://www.laird.com/resources/case-studies/thermal-pathways-in-space

Technical case study on heat dissipation in LEO using PCB-level thermal design

Thermal radiation is the only heat-rejection mechanism available in vacuum. Uses distributed radiant heat sinks at the PCB level with second-surface mirrors (fluoropolymers with vapor-deposited metal layers). LEO atomic oxygen and radiation rapidly degrade organic materials and alter thermal properties, making long-term stability a critical challenge.

handmer-2025-tweet

Casey Handmer — SpaceX orbital AI inference concept

https://x.com/CJHandmer/status/1997906033168330816

First-principles analysis of Starlink-derived orbital inference satellites

Proposes inference satellites derived from Starlink v3 in sun-synchronous orbit at 560 km. Each satellite: ~130 kW solar, ~200 H100-equivalent GPUs, 13,000 tokens/sec, ~$4M revenue/year at $10 per million tokens, ~60% ROI at $50,000/kW all-in cost. Key innovation: mounting GPUs directly on solar array modules (6 kW each) with local WiFi, distributing heat rather than concentrating it. At 1 kg/m² solar arrays, one Starship launch delivers ~30 MW. 1,000 launches = 30 GW. Economics work if revenue exceeds ~$4/kWh.
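Handmer's per-satellite economics can be sanity-checked in a few lines. Note the arithmetic only closes if the $10 token price is read as per *million* tokens; this is a sketch under that assumption.

```python
# Sanity-check the per-satellite economics in Handmer's sketch.
# Assumption: the $10 price is per million tokens (at $10 per token,
# 13,000 tok/s would imply trillions of dollars per year).
SECONDS_PER_YEAR = 365.25 * 24 * 3600

tokens_per_year = 13_000 * SECONDS_PER_YEAR   # ~4.1e11 tokens/year
revenue = tokens_per_year / 1e6 * 10          # ~$4.1M/year
capex = 130 * 50_000                          # 130 kW at $50k/kW all-in = $6.5M
roi = revenue / capex                         # ~0.63, consistent with "~60% ROI"

print(f"{tokens_per_year:.2e} tokens/yr, ${revenue/1e6:.1f}M/yr, ROI {roi:.0%}")
```

The quoted ~$4M/year and ~60% ROI both drop out of the per-million-tokens reading, which is some evidence for it.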

hn-xai-spacex-solar

Solar Power: Space vs. Earth — xAI joins SpaceX (HN discussion)

https://summarizer.secondthoughts.workers.dev/jobs/60ee7d4d-b465-422e-9101-5386aa22c98b/topics/5

HN debate on whether orbital solar-powered AI compute can compete with terrestrial solar

Proponents argue space offers continuous solar without weather or night, with panels paying back ~7-8x faster. Critics note ground-based solar remains far cheaper, global PV production is only ~1-2 TW/year vs the proposed 500-1000 TW/year scale, and hardware utilization drops to ~30% in space scenarios. Most commenters concluded orbital compute is not economically competitive with ground-based solutions.

hn-xai-spacex-resources

Resource Utilization and Scarcity — xAI joins SpaceX (HN discussion)

https://summarizer.secondthoughts.workers.dev/jobs/60ee7d4d-b465-422e-9101-5386aa22c98b/topics/19

HN debate on whether Earth faces genuine resource constraints justifying orbital data centers

Critics contend Earth has vast non-arable desert land and power limitations are political/infrastructural rather than fundamental. Proponents counter that space bypasses permitting, rolling blackouts, and grid constraints (19 GW shortage, 7-year turbine lead times). SpaceX-xAI vertical integration seen as competitive advantage.

hn-xai-spacex-thermodynamics

Thermodynamics of Space Cooling — xAI joins SpaceX (HN discussion)

https://summarizer.secondthoughts.workers.dev/jobs/60ee7d4d-b465-422e-9101-5386aa22c98b/topics/0

Technical analysis of Stefan-Boltzmann radiative cooling constraints

A single AI rack generates ~100 kW waste heat (equivalent to ISS power budget). ISS radiator system (1,000+ m², 6+ metric tons) dissipates only ~84 kW. Operating GPUs at 70°C rather than 20°C dramatically improves radiative efficiency due to T⁴ relationship. Critics say launch costs triple or quadruple per-rack when accounting for cooling infrastructure.
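The T⁴ sensitivity mentioned above can be quantified directly; a minimal sketch, assuming ideal blackbody behavior and radiation to deep space:

```python
# Radiated power scales as T^4 (Stefan-Boltzmann), so running GPUs hot
# markedly improves heat rejection per unit radiator area.
def radiative_gain(t_hot_c: float, t_cold_c: float) -> float:
    """Ratio of blackbody emissive power at two Celsius temperatures."""
    return ((t_hot_c + 273.15) / (t_cold_c + 273.15)) ** 4

# Raising the radiator from 20 C to 70 C rejects ~1.9x more heat per m^2.
print(f"{radiative_gain(70, 20):.2f}x")
```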

hn-xai-spacex-starship

Launch Economics and Starship — xAI joins SpaceX (HN discussion)

https://summarizer.secondthoughts.workers.dev/jobs/60ee7d4d-b465-422e-9101-5386aa22c98b/topics/4

Whether Starship cost reductions make orbital data centers viable

Entire proposal hinges on Starship achieving dramatic cost reductions. Even with reduced launch costs, mass for cooling, shielding, and hardware makes space data centers far more expensive. Manufacturing bottlenecks persist — current solar cell production ~1 TW/year vs proposed 500-1000 TW/year.

hn-xai-spacex-manufacturing

Space Manufacturing and Moon Bases — xAI joins SpaceX (HN discussion)

https://summarizer.secondthoughts.workers.dev/jobs/60ee7d4d-b465-422e-9101-5386aa22c98b/topics/10

Orbital vs lunar vs terrestrial alternatives, with strong skepticism toward orbital

ISS dissipates at most ~70 kW with 1,500 m² of radiators (6.5 metric tons) — less than a single AI rack requires. Commenters broadly dismiss space data centers as "insane" vs Earth-based infrastructure. Lunar facilities described as easier because waste heat can be conducted into the ground. Edge computing in space acknowledged as potentially viable.

hn-xai-spacex-maintenance

Technical Feasibility of Maintenance — xAI joins SpaceX (HN discussion)

https://summarizer.secondthoughts.workers.dev/jobs/60ee7d4d-b465-422e-9101-5386aa22c98b/topics/2

Hardware failure rates and impossibility of in-orbit maintenance

Failed satellites must be deorbited and replaced entirely. At scale, one-in-a-million failures become daily certainties. AI clusters' heavy interconnection means single failures cascade. Radiation-hardened hardware is several generations obsolete by deployment. Falcon Heavy delivers ~12 racks for ~$100M, tripling or quadrupling per-rack costs.

hn-xai-spacex-compute-demand

AI Capability and Compute Demand — xAI joins SpaceX (HN discussion)

https://summarizer.secondthoughts.workers.dev/jobs/60ee7d4d-b465-422e-9101-5386aa22c98b/topics/17

Whether AI compute demand growth justifies space-based infrastructure

Critics question whether proposal is "buzzword attachment to drive investment." Proponents argue terrestrial expansion faces regulatory and supply-chain bottlenecks. 100 kW per rack heat is fundamentally different from modest space telescope needs. Google also exploring space-based AI infrastructure.

hn-xai-spacex-radiators

Radiator Design and Physics — xAI joins SpaceX (HN discussion)

https://summarizer.secondthoughts.workers.dev/jobs/60ee7d4d-b465-422e-9101-5386aa22c98b/topics/15

Engineering approaches including droplet radiators, ammonia loops, pyramidal designs

Proposed solutions: higher GPU temperatures (70°C), ammonia coolant loops, droplet radiators, pyramidal designs. Radiator area ~3x the solar array area could maintain ~300 K. But mass penalties destroy the economic case. Consensus: solvable physics, prohibitive economics. Lunar facilities described as "1000x easier."

hn-xai-spacex-latency

Latency and Data Transmission — xAI joins SpaceX (HN discussion)

https://summarizer.secondthoughts.workers.dev/jobs/60ee7d4d-b465-422e-9101-5386aa22c98b/topics/11

Latency tolerance of AI workloads and bandwidth constraints

AI training is not latency-sensitive; batch inference could work via queuing. Skeptics raise bandwidth limitations and model checkpoint transfer costs. Most acknowledge concept is speculative but potentially viable within ~decade if terrestrial economics worsen.

mccalip-space-dc

Economics of Orbital vs Terrestrial Data Centers

https://andrewmccalip.com/space-datacenters

Detailed quantitative cost model comparing 1 GW orbital vs terrestrial over 5-year lifecycle

Orbital capex 2.1x terrestrial ($31.2B vs $14.8B for 1 GW). LCOE $891/MWh orbital vs $398/MWh terrestrial (2.24x gap). Launch costs dominate orbital budget at $22.2B of $31.2B. Assumes $1,000/kg to LEO. Radiator must maintain equilibrium below 75°C. Concludes economics are "not obviously stupid, and not a sure thing."
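The headline ratios in McCalip's model can be cross-checked against his component figures:

```python
# Cross-check the headline ratios in the 1 GW orbital-vs-terrestrial model.
orbital_capex, terrestrial_capex = 31.2e9, 14.8e9   # 5-year capex, USD
orbital_lcoe, terrestrial_lcoe = 891.0, 398.0       # $/MWh
launch_cost = 22.2e9                                 # launch share of orbital capex

capex_ratio = orbital_capex / terrestrial_capex      # ~2.11 -> "2.1x"
lcoe_ratio = orbital_lcoe / terrestrial_lcoe         # ~2.24 -> "2.24x gap"
launch_share = launch_cost / orbital_capex           # ~71% of orbital budget

print(f"capex {capex_ratio:.2f}x, LCOE {lcoe_ratio:.2f}x, launch {launch_share:.0%}")
```

The internal consistency (launch costs are ~71% of orbital capex) makes the $1,000/kg launch-price assumption the dominant sensitivity in the model.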

techcrunch-orbital-brutal

Why the economics of orbital AI are so brutal

https://techcrunch.com/2026/02/11/why-the-economics-of-orbital-ai-are-so-brutal/

Analysis of orbital vs terrestrial cost disparity (Feb 2026)

A 1 GW orbital data center would cost ~$42.4B — almost 3x terrestrial equivalent. Questions whether SpaceX's million-satellite approach can achieve viability.

peraspera-realities

Realities of Space-Based Compute

https://www.peraspera.us/realities-of-space-based-compute/

Comprehensive technical analysis of orbital compute across power, thermal, radiation, communications, and timeline

100 kW system requires 3-5 metric tons (solar ~930 kg, batteries ~500 kg, radiators ~1000+ kg). Timeline phases: "Crawl" (<10 kW, near-term), "Walk" (10-500 kW, 10-15 years), "Run" (MW scale, 2040s+). LEO latency 1-4 ms one-way. Commercial AI compute at MW-scale is "still decades away."

google-suncatcher

Exploring a space-based, scalable AI infrastructure system design

https://research.google/blog/exploring-a-space-based-scalable-ai-infrastructure-system-design/

Google's Project Suncatcher technical feasibility study

Sun-synchronous LEO at ~650 km. 81-satellite clusters with TPUs connected by free-space optical links. Bench demo: 800 Gbps per transceiver pair. Trillium v6e TPUs survived proton-beam testing to ~2 krad(Si), nearly 3x the expected shielded dose over a 5-year mission. Solar panels are up to 8x more productive in orbit. Two prototype satellites launching early 2027 with Planet Labs. Economic viability requires launch costs below $200/kg, projected for the mid-2030s.

spacecomputer-cooling

Cooling for Orbital Compute: A Landscape Analysis

https://blog.spacecomputer.io/cooling-for-orbital-compute/

Deep technical analysis of thermal management approaches at various scales

Stefan-Boltzmann: 1 m² at 80°C radiates ~850 W; at 127°C, ~1,450 W/m². Rule of thumb: 2.5 m² of radiator per kW rejected. ISS achieves 166 W/m² in practice. Liquid droplet radiators are up to 7x lighter than conventional designs (NASA research), achieving 450 W/kg. The EU-funded ASCEND study validated thermal feasibility but requires a 10x reduction in launcher emissions; targets a 50 kW proof-of-concept by 2031 and 1 GW by 2050.
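The quoted flux figures follow directly from the Stefan-Boltzmann law; a sketch assuming an ideal blackbody (the "~850 W" figure implies a slightly sub-unity emissivity):

```python
# Verify the radiative-flux figures quoted above (one-sided radiation
# to deep space, blackbody unless an emissivity is given).
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def flux(temp_c: float, emissivity: float = 1.0) -> float:
    """Radiated power per m^2 at a given surface temperature in Celsius."""
    return emissivity * SIGMA * (temp_c + 273.15) ** 4

print(f"{flux(80):.0f} W/m^2")   # ~880 -> "~850 W" at realistic emissivity ~0.96
print(f"{flux(127):.0f} W/m^2")  # ~1450 -> matches the quoted figure
```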

sci-am-space-dc

Space-Based Data Centers Could Power AI with Solar Energy — At a Cost

https://www.scientificamerican.com/article/data-centers-in-space/

Balanced assessment of orbital data center feasibility (Dec 2025)

Google estimates launch costs must fall below $200/kg by 2035. Benjamin Lee (UPenn): "Launch costs are dropping... but we would still require a very large number of launches." Saarland University found orbital facilities produce ~10x more emissions than terrestrial. As of late 2025, space data centers are "mostly an idea, a handful of small prototypes and a stack of ambitious slide decks."

aei-launch-costs

Moore's Law Meet Musk's Law: The Stunning Decline in Launch Costs

https://www.aei.org/articles/moores-law-meet-musks-law-the-underappreciated-story-of-spacex-and-the-stunning-decline-in-launch-costs/

Historical analysis of SpaceX's impact on launch cost trajectory

Pre-SpaceX average: ~$16,000/kg. Falcon 9: $2,500/kg (30x reduction vs Shuttle). Falcon Heavy: $1,500/kg. Starship expected ~$1,600/kg initially, potential $100-150/kg long-term. Musk aspirational: $10/kg. Citigroup 2040 projections: best case ~$30/kg, bear case ~$300/kg.

balerion-kilowatts

Kilowatts to Compute: Data Centers on Earth and in Orbit

https://balerionspace.substack.com/p/bsv-insights-0002-kilowatts-to-compute

Detailed comparison of orbital vs terrestrial economics at 40 MW scale

Terrestrial 40 MW facility costs ~$175M in electricity over 5-year GPU lifecycle. Starcloud claims orbital equivalent could cost "tens of millions" with no ongoing fuel/grid costs. 100 MW space solar requires 330,000 m² array. Emphasizes "time is now as valuable as cost" — terrestrial facilities face multi-year permitting delays while orbital enables incremental expansion.
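The 330,000 m² array claim is easy to cross-check. Assumptions here (not from the source): the ~1,361 W/m² solar constant above the atmosphere and ~22% end-to-end array efficiency.

```python
# Rough check of the claimed array area for 100 MW of space solar.
SOLAR_CONSTANT = 1361.0   # W/m^2 above the atmosphere
EFFICIENCY = 0.22         # assumed end-to-end array efficiency

area_m2 = 100e6 / (SOLAR_CONSTANT * EFFICIENCY)
print(f"{area_m2:,.0f} m^2")  # ~334,000 m^2, close to the quoted 330,000 m^2
```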

starcloud-nvidia

How Starcloud Is Bringing Data Centers to Outer Space

https://blogs.nvidia.com/blog/starcloud/

Starcloud's first orbital GPU launch and future plans

Starcloud-1 (60 kg, carrying an H100) launched Nov 2025 — a GPU ~100x more powerful than any previously operated in space. First LLM trained in space. Claims energy costs 10x cheaper than terrestrial, including launch. Targets a 5 GW facility with ~4 km solar/cooling panels.

blocksandfiles-starcloud

Starcloud pitches orbital datacenters as cheaper, cooler, and cleaner

https://blocksandfiles.com/2025/10/23/starcloud-orbiting-datacenters/

Critical analysis of Starcloud's economic claims

Starcloud claims a 20x cost advantage: a 40 MW terrestrial facility at $167M over 10 years vs $8.2M for Starcloud-2. But those figures exclude server/storage/networking hardware; with full system deployment included ($24B in hardware), the cost advantage shrinks to 0.007%.

thales-ascend

Thales Alenia Space — ASCEND Feasibility Study Results

https://www.thalesaleniaspace.com/en/press-releases/thales-alenia-space-reveals-results-ascend-feasibility-study-space-data-centers-0

EU-funded Horizon Europe study by consortium including Thales, ArianeGroup, Airbus, DLR, Orange, HPE validating orbital DC feasibility

ASCEND = Advanced Space Cloud for European Net zero emission and Data sovereignty; launched 2023 under Horizon Europe. Requires a launcher 10x less emissive over its lifecycle to meet CO2-reduction goals. Space DCs would not require water for cooling. Projects ROI of "several billion euros" by 2050. Targets 1 GW before 2050. Timeline: robotic demo 2026 (EROSS IOD), proof-of-concept 2031, initial deployment 2036, large-scale rollout thereafter. Modular space infrastructure assembled in orbit using robotic technology.

enr-grid-bottleneck

Grid Access, Not Land, Emerges as Bottleneck for Data Center Construction

https://www.enr.com/articles/62227-grid-access-not-land-emerges-as-bottleneck-for-data-center-construction

Analysis of grid interconnection as primary constraint on terrestrial expansion

Data center electricity demand could triple by end of decade. Multiyear waits for grid interconnection studies. Grid upgrades add tens of millions and extend preconstruction by 1+ years. Developers now required to include on-site generation and battery storage.

bloomberg-dc-decline

US Data Center Construction Drops as Permit, Power Delays Slow Projects

https://www.bloomberg.com/news/articles/2026-02-25/us-data-center-construction-fell-amid-permit-and-power-delays

First decline in US data center construction since 2020

Capacity under construction fell to 5.99 GW (end 2025) from 6.35 GW (end 2024). Nearly half of 140 projects planned for 2026 delayed to 2027.

spacex-xai-merger

SpaceX acquires xAI — orbiting data center plans

https://www.tomshardware.com/tech-industry/artificial-intelligence/spacex-acquires-xai-in-a-bid-to-make-orbiting-data-centers-a-reality-musk-plans-to-launch-a-million-tons-of-satellites-annually-targets-1tw-year-of-space-based-compute-capacity

SpaceX-xAI merger and orbital data center ambitions

SpaceX acquired xAI. Plans to launch 1 million tons of satellites annually. Targets 1 TW/year of space-based compute capacity. 100 kW compute per ton of satellite, adding 100 GW annually at full scale.

nbf-falcon9-true-cost

SpaceX Falcon 9 True Cost to Launch

https://www.nextbigfuture.com/2026/02/spacex-falcon-9-true-cost-to-launch-is-about-300-per-pound-which-is-25-of-selling-price-to-customers.html

Analysis of SpaceX's internal launch costs vs customer pricing

Internal marginal cost ~$629/kg (25% of $2,600/kg customer price). Total marginal launch cost ~$10.5-11M. Upper stage $7M, booster amortized $1M, propellant $250K.

nbf-starship-roadmap

SpaceX Starship Roadmap Lower Launch Costs by 100 Times

https://www.nextbigfuture.com/2025/01/spacex-starship-roadmap-to-100-times-lower-cost-launch.html

Cost per kg projections at various Starship reuse rates

Build cost ~$90M. At 6 flights: $94/kg; 20 flights: $33/kg; 50 flights: $19/kg; 70 flights: $14/kg. Per-flight marginal cost target $2M.
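The reuse-rate figures follow a simple amortization model. A hedged sketch: the $90M build cost and $2M marginal cost per flight are from the source, but the payload mass is an assumption here (150 t), so the per-kg numbers won't exactly reproduce the quoted ones.

```python
# Amortized launch cost per kg vs. vehicle reuse count.
# Assumption: 150 t payload to LEO (the source's payload figure is unstated).
def cost_per_kg(build_cost: float, flights: int,
                marginal_per_flight: float = 2e6,
                payload_kg: float = 150_000) -> float:
    """Spread the build cost over N flights, add marginal cost, divide by payload."""
    per_flight = build_cost / flights + marginal_per_flight
    return per_flight / payload_kg

for n in (6, 20, 50, 70):
    print(f"{n} flights: ${cost_per_kg(90e6, n):.0f}/kg")
```

As reuse count grows, the cost asymptotes to marginal cost divided by payload, which is why the roadmap's long-run figures cluster near the $2M-per-flight floor.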

dwarkesh-space-gpus

Notes on Space GPUs

https://www.dwarkesh.com/p/notes-on-space-gpus

Quantitative analysis of orbital datacenter satellite mass budgets

Stripped GB200 NVL72 at ~100 kg consuming 132 kW (~1,320 W/kg compute). With 200 W/kg solar, ~320 W/kg radiators at 60°C, and 25% chassis overhead: ~85 W/kg (~11.8 kg/kW) for the integrated satellite.
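The integrated figure can be roughly reconstructed from the component specific powers quoted above. This sketch lands near, but not exactly on, the source's ~11.8 kg/kW, suggesting the source carries additional mass terms beyond the three listed here.

```python
# Reconstruct the integrated specific-power estimate from component figures.
SOLAR_W_PER_KG = 200.0      # solar array specific power
RADIATOR_W_PER_KG = 320.0   # radiator, rejecting at 60 C
COMPUTE_W_PER_KG = 1320.0   # ~100 kg stripped rack drawing 132 kW
CHASSIS_OVERHEAD = 1.25     # +25% structure/harness

# Each kW of compute must be generated, consumed, and rejected as heat.
kg_per_kw = 1000 / SOLAR_W_PER_KG + 1000 / COMPUTE_W_PER_KG + 1000 / RADIATOR_W_PER_KG
kg_per_kw *= CHASSIS_OVERHEAD

print(f"{kg_per_kw:.1f} kg/kW -> {1000 / kg_per_kw:.0f} W/kg integrated")
```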

nasa-smallsat-power-soa

Small Spacecraft Technology State of the Art — Power Subsystems

https://www.nasa.gov/smallsat-institute/sst-soa/power-subsystems/

NASA survey of space solar array technologies with specific power data

Flown missions clustered ~30 W/kg. State-of-art rigid: up to 200 W/kg. ROSA: 100 W/kg. FOSA: 140 W/kg. Next-gen thin-film targets 500 W/kg (not flight-proven).

nvidia-gb200-specs

NVIDIA DGX GB200 NVL72 Hardware Specifications

https://docs.nvidia.com/dgx/dgxgb200-user-guide/hardware.html

Official specifications for GB200 NVL72 rack

~1,360 kg, 120-132 kW (115 kW liquid + 17 kW air cooled). 72 Blackwell GPUs, 36 Grace CPUs.

mach33-cooling

Debunking the Cooling Constraint in Space Data Centers

https://research.33fg.com/analysis/debunking-the-cooling-constraint-in-space-data-centers

Analysis challenging thermal management as fundamental blocker

Scaling from ~20 kW to ~100 kW: radiators 10-20% of total mass, ~7% of planform area. Solar arrays dominate footprint.

melagen-radiation-shielding

Radiation Shielding for Electronics

https://www.melagenlabs.com/learn/radiation-shielding-for-electronics-what-every-space-hardware-team-needs-to-know

Overview of radiation shielding approaches and mass trade-offs

LEO below 1,000 km needs minimal additional shielding for <10 krad. Hydrogen-rich polymers 3x better per unit mass than aluminum.

epoch-gpu-failures

Hardware failures won't limit AI scaling

https://epoch.ai/blog/hardware-failures-wont-limit-ai-scaling

GPU failure rates at scale and implications for AI training

H100 MTBF ~50,000 hours (~5.7 years). At 100K GPUs: one failure every 30 min. Note: the 50,000-hour figure conflates all 419 job interruptions (hardware + software + network) as "failures." The permanent GPU hardware failure rate is substantially lower — see terrestrial-gpu-failure-rate page for the refined estimate.
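The "one failure every 30 min" figure is the standard back-of-envelope fleet calculation, assuming independent failures at a constant rate:

```python
# Fleet-level failure arrival rate from a per-GPU MTBF.
MTBF_HOURS = 50_000   # per-GPU mean time between failures (all interruption types)
FLEET_SIZE = 100_000  # GPUs in the training cluster

failures_per_hour = FLEET_SIZE / MTBF_HOURS   # 2.0 failures/hour
minutes_between = 60 / failures_per_hour      # one failure every 30 minutes

print(f"{failures_per_hour} failures/hour, one every {minutes_between:.0f} min")
```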

meta-llama3-failures

Faulty H100 GPUs and HBM3 caused half of Llama 3 training failures

https://www.tomshardware.com/tech-industry/artificial-intelligence/faulty-nvidia-h100-gpus-and-hbm3-memory-caused-half-of-failures-during-llama-3-training

Journalism article summarizing Meta's Llama 3 failure data; see meta-llama3-paper for primary source

Tom's Hardware reporting on Meta's failure data. 419 failures in 54 days. 148 GPU failures (0.9%), 72 HBM3 failures (0.44%). Annualized GPU failure rate of ~6.1% represents an upper bound (all "Faulty GPU" events treated as permanent), not a central estimate. The primary source (meta-llama3-paper) provides more detailed categorization.
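The annualized upper bound can be reproduced from the 54-day counts. The 16,384-GPU fleet size is an assumption here, chosen to be consistent with the quoted 0.9% (148/16,384).

```python
# Annualize the 54-day Llama 3 GPU failure count (upper-bound estimate:
# treats every "Faulty GPU" event as a permanent failure).
FAILURES = 148
FLEET_SIZE = 16_384   # assumed fleet size, consistent with the quoted 0.9%
DAYS = 54

rate_54d = FAILURES / FLEET_SIZE        # ~0.90% over the training run
annualized = rate_54d * 365 / DAYS      # ~6.1%/year upper bound

print(f"{rate_54d:.2%} over {DAYS} days -> {annualized:.1%} annualized")
```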

hyperscaler-depreciation-sec

Hyperscaler Server Depreciation Changes (SEC Filings, 2020-2025)

(multiple SEC filings — see individual filing references in gpu-useful-life page)

Primary SEC filing evidence for server/GPU depreciation schedule changes at Amazon, Google, Microsoft, and Meta

Consolidated from 10-K/10-Q filings and earnings call transcripts. Amazon: 3→4yr (Jan 2020), 4→5yr (Jan 2022), 5→6yr (Jan 2024), **6→5yr reversal (Jan 2025, -$700M, citing AI acceleration)**. Google: 3→4yr (Jan 2021), 4→6yr (Jan 2023). Microsoft: 3→4yr (Jul 2020), 4→6yr (Jul 2022). Meta: 4→4.5yr (Q2 2022), 4.5→5yr (Q4 2022), 5→5.5yr (Jan 2025, -$2.9B depreciation). Amazon's 2025 reversal is the strongest evidence for 5-year convergence. Combined financial impact of all extensions exceeded $15B.

gpu-depreciation-schedules

Resetting GPU depreciation

https://siliconangle.com/2025/11/22/resetting-gpu-depreciation-ai-factories-bend-dont-break-useful-life-assumptions/

SiliconANGLE trade press analysis of GPU depreciation practices across hyperscalers (secondary — see hyperscaler-depreciation-sec for primary SEC filing evidence)

AWS/Google/Microsoft: 6-year depreciation. Industry converging toward 5-year via "value cascade" model. AI-native neoclouds use 4-5 year schedules. Provides the "value cascade" framework (training → inference → batch) as the analytical justification for 5-6 year life.

orbitaltoday-starlink-deorbit

Starlink Satellites Falling Out of Orbit

https://orbitaltoday.com/2026/02/28/starlink-satellites-falling-risks-statistics-analysis/

Statistics on Starlink deorbiting and failure rates

10,801 launched, 1,391 (~13%) re-entered. Designed ~5-year lifespan. Early batches 3-5% uncontrollable failure rates.

fcc-5yr-deorbit-rule

FCC Adopts New 5-Year Rule for Deorbiting Satellites

https://www.fcc.gov/document/fcc-adopts-new-5-year-rule-deorbiting-satellites-0

FCC rulemaking requiring LEO satellite disposal within 5 years

Effective September 2024. Large constellations may warrant shorter periods.

nvidia-space1-module

NVIDIA Space-1 Vera Rubin Module

(Payload newsletter)

Purpose-built space AI compute module

Up to 25x H100 AI-compute. Designed for low-SWaP. Not yet commercially available. Six launch customers announced.

jll-2026-dc-outlook

2026 Global Data Center Outlook

https://www.jll.com/en-us/insights/market-outlook/data-center-outlook

JLL data center market outlook including construction costs

Shell-and-core from $7.7M/MW (2020) to $10.7M/MW (2025), forecast $11.3M/MW (2026). AI tech fit-out adds $25M/MW.

turner-townsend-dcci-2025

Data Centre Construction Cost Index 2025-2026

https://www.turnerandtownsend.com/insights/data-centre-construction-cost-index-2025-2026/

Annual construction cost index covering 52 global markets

5.5% YoY increase (down from 9.0%). 7-10% AI premium. Tokyo ($15.2/W), Singapore ($14.5/W), Zurich ($14.2/W) most expensive.

semianalysis-gb200-tco

H100 vs GB200 NVL72 Training Benchmarks

https://newsletter.semianalysis.com/p/h100-vs-gb200-nvl72-training-benchmarks

SemiAnalysis TCO analysis of GB200 NVL72

GB200 NVL72 rack ~$3.1M (hyperscaler), ~$3.9M all-in. 120 kW/rack. 1.6-1.7x H100 per-GPU cost.

mckinsey-cost-of-compute

The cost of compute: A $7 trillion race

https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/the-cost-of-compute-a-7-trillion-dollar-race-to-scale-data-centers

McKinsey analysis of global data center investment requirements

$6.7T total DC capex by 2030 ($5.2T AI + $1.5T non-AI), "continued momentum" scenario. AI $5.2T breakdown: DC infrastructure $1.6T, IT equipment $3.3T, power $0.3T. By investor archetype: tech developers 60%/$3.1T, energizers 25%/$1.3T, builders 15%/$0.8T. Three scenarios: $3.7T (constrained, 78 GW) to $7.9T (accelerated, 205 GW).

epoch-hyperscaler-capex

Hyperscaler capex has quadrupled since GPT-4

https://epochai.substack.com/p/hyperscaler-capex-has-quadrupled

Epoch AI analysis of hyperscaler capital expenditure trends

Combined capex near $500B in 2025, 70%/year growth. Could reach $770B in 2026.

xai-colossus-expansion

xAI Colossus Hits 2 GW: 555,000 GPUs, $18B

https://introl.com/blog/xai-colossus-2-gigawatt-expansion-555k-gpus-january-2026

xAI Colossus expansion details

2 GW, 555,000 GPUs for ~$18B (~$9M/MW in GPU costs). Built in 122 days initially.

epochai-power-capacity

Global AI power capacity comparable to New York State

https://epochai.substack.com/p/global-ai-power-capacity-is-now-comparable

Analysis of global AI data center power capacity

AI data centers ~30 GW as of late 2025, total US data center ~40 GW.

goldman-sachs-dc-demand

AI to drive 165% increase in data center power demand by 2030

https://www.goldmansachs.com/insights/articles/ai-to-drive-165-increase-in-data-center-power-demand-by-2030

Goldman Sachs forecast of data center power demand

Projects 122 GW globally by end of 2030. 165% increase vs 2023.

semianalysis-pjm-bills

Are AI Datacenters Increasing Electric Bills?

https://newsletter.semianalysis.com/p/are-ai-datacenters-increasing-electric

PJM capacity market price dynamics and data center responsibility

PJM capacity prices jumped 9.3x. Removing datacenters reduced payments by $9.33B (64%). 67M residents face ~15% bill increase.

ieefa-pjm-10x

Data center growth spurs PJM capacity prices by factor of 10

https://ieefa.org/resources/projected-data-center-growth-spurs-pjm-capacity-prices-factor-10

IEEFA analysis of data center impact on PJM prices

Data centers responsible for 63% of capacity price increase, $9.3B in costs.

ge-vernova-backlog

GE Vernova 80-GW gas turbine backlog stretches into 2029

https://www.utilitydive.com/news/ge-vernova-gas-turbine-investor/807662/

Gas turbine supply constraints

80 GW backlog against 20 GW/year output. Sold out through 2030.

lazard-lcoe-2025

Lazard LCOE+ (June 2025)

https://www.lazard.com/media/eijnqja3/lazards-lcoeplus-june-2025.pdf

Annual LCOE benchmark for generation technologies

Combined-cycle gas $48-107/MWh. Gas peaking $149-251/MWh. CCGT costs at 10-year high.

introl-smr-timeline

SMR Nuclear Power for AI Data Centers

https://introl.com/blog/smr-nuclear-power-ai-data-centers-implementation

SMR deployment timeline and costs

FOAK $14,600/kW vs projected NOAK $2,800/kW. Google-Kairos: 500 MW, first unit 2030. Realistic timelines 7-10 years.

introl-liquid-cooling

Liquid Cooling vs Air Cooling for AI Data Centers

https://introl.com/blog/liquid-vs-air-cooling-ai-data-centers

Comparison of cooling technologies and PUE

Air PUE 1.4-1.8. Liquid PUE 1.05-1.15. Immersion PUE 1.02-1.03.

bnef-battery-costs-2025

Battery Storage Costs Hit Record Lows — BloombergNEF

https://about.bnef.com/insights/clean-energy/battery-storage-costs-hit-record-lows-as-costs-of-other-clean-power-technologies-increased-bloombergnef/

Global benchmark for 4-hour battery storage fell 27% YoY to $78/MWh

Installed battery capex ~$125/kWh (utility-scale). LCOS of $65/MWh. 27% year-over-year decline in 2025.

google-intersect-acquisition

Google acquires Intersect Power for $4.75B

https://www.utilitydive.com/news/google-intersect-power-co-located-energy-park-data-center-ferc/735198/

Co-located energy parks with solar, batteries, and gas backup for data centers

Quantum Energy Park in TX: 640 MW solar, 1.3 GWh battery storage, plus flexible gas backup. $20B targeted renewable infrastructure investment by end of decade.

hyperscaler-solar-2025

How Data Centers Redefined Energy and Power in 2025

https://www.datacenterknowledge.com/energy-power-supply/how-data-centers-redefined-energy-and-power-in-2025

Hyperscaler clean energy procurement and onsite power trends

Hyperscalers signed 40+ GW solar in 2025. Brookfield-Microsoft 10.5 GW deal. 30% of DC sites expected to use onsite power as primary by 2030.

duke-flexible-load-study

Flexible Load Integration for Utilities

https://www.renewableenergyworld.com/power-grid/grid-modernization/as-ai-and-data-center-power-demand-skyrockets-flexible-load-integration-becomes-a-critical-strategy-for-utilities/

Duke University study on grid capacity for curtailable large loads

Grid could integrate 76-126 GW new demand with 22-88 hours/year curtailment. <50 hours/year curtailment could accommodate ~100 GW.

epri-dcflex-results

EPRI DCFlex Data Center Flexibility — IEEE Spectrum

https://spectrum.ieee.org/dcflex-data-center-flexibility

Demonstrated 25% power reduction in AI data center with no SLA breach

256 NVIDIA GPUs, 25% reduction for 3 hours, 15-minute ramp. 10-40% modulation feasible. 40+ partners including Google, Meta, Microsoft, PJM.

google-demand-response-1gw

Google Data Center Demand Response Milestone

https://blog.google/innovation-and-ai/infrastructure-and-cloud/global-network/demand-response-data-center-milestone/

Google signs 1 GW of demand response contracts

Contracts with Entergy Arkansas, Minnesota Power, DTE Energy. Demand response used to accelerate grid interconnection.

ftai-power-cfm56

FTAI Aviation Launches FTAI Power

https://ir.ftaiaviation.com/news-releases/news-release-details/ftai-aviation-announces-launch-ftai-power-ftai-adapts-worlds

Converting retired CFM56 jet engines to 25 MW gas turbines for data centers

30-45 day conversion per engine. 100+ units/year (2.5+ GW/year). 1,000+ engines owned; 22,000+ produced globally. Production starts 2026.

boom-superpower-turbine

Boom Supersonic Superpower Gas Turbines

https://boomsupersonic.com/press-release/boom-supersonic-to-power-ai-data-centers-with-superpower-natural-gas-turbines-adds-300-million-in-new-funding

42 MW turbine derived from supersonic aviation technology

$1.25B+ backlog. Crusoe launch customer (29 units, 1.21 GW). 4+ GW/year production by 2030. Prototype core testing 2026.

baker-hughes-twenty20

Baker Hughes Gas Turbine Order for Data Centers

https://investors.bakerhughes.com/news-releases/news-release-details/baker-hughes-receives-gas-turbine-order-twenty20-energy-power-us

10 Frame 5 gas turbines (~250 MW) for data centers

Twenty20 Energy order for Georgia and Texas DCs. Initial delivery 2027. Multi-GW strategic agreement.

wartsila-data-center-orders

Wärtsilä Data Center Power Orders

https://www.wartsila.com/media/news/29-01-2026-wartsila-chosen-for-a-major-u-s-power-plant-project-addressing-critical-energy-demand-driven-by-data-center-development-3711601

~1 GW in reciprocating engine orders for US data centers

507 MW (27 engines, delivery 2027) + 429 MW (24 engines, late 2028/early 2029). 79 GW installed globally.

caterpillar-dc-orders

Caterpillar Gas Generator Data Center Agreements

https://www.caterpillar.com/en/news/corporate-press-releases/h/joule-caterpillar-wheeler.html

6+ GW in gas generator agreements for data center campuses

4 GW (Joule Capital, Utah) + 2 GW (AIP, West Virginia). 11.5% reciprocating engine market share. Fastest-growing segment.

utility-dive-solar-data-center

Solar as a Data Center Power Solution

https://www.utilitydive.com/news/data-center-power-problem-solar/758809/

BTM solar deployment timelines for data centers

Virginia Permit By Rule allows 18-24 month solar timeline. BTM solar constructable in months once permitted.

introl-nvl72-deployment

GB200 NVL72 Deployment: Managing 72 GPUs in Liquid-Cooled Configurations

https://introl.com/blog/gb200-nvl72-deployment-72-gpu-liquid-cooled

Detailed physical breakdown of the full NVL72 system components and mass

Full NVL72 system ships as four components: compute rack (~1,500 kg, 18 × 1U trays), NVLink switch rack (~800 kg, 9 switch trays), CDU (~400 kg, 200 L coolant), power distribution (~300 kg, 48 PSUs). Total ~3,000 kg, significantly more than the often-cited ~1,360 kg compute rack alone.
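
The mass breakdown above implies a per-rack launch cost; a quick sketch using illustrative $/kg rates (the Falcon 9 reusable, Starship customer, and Starship target figures are assumptions drawn from launch-cost estimates elsewhere in this list, not from this source):

```python
# Launch-cost arithmetic for a full NVL72 system, using the component
# masses reported by Introl. The $/kg launch rates are illustrative
# assumptions, not figures from this source.
component_mass_kg = {
    "compute_rack": 1500,        # 18 x 1U trays
    "nvlink_switch_rack": 800,   # 9 switch trays
    "cdu": 400,                  # incl. 200 L coolant
    "power_distribution": 300,   # 48 PSUs
}
total_mass_kg = sum(component_mass_kg.values())  # ~3,000 kg

launch_rates_usd_per_kg = {
    "falcon9_reusable": 1500,    # assumed current reusable rate
    "starship_customer": 600,    # assumed, from $90M / 150 t
    "starship_target": 30,       # assumed aspirational target
}
launch_cost_usd = {name: total_mass_kg * rate
                   for name, rate in launch_rates_usd_per_kg.items()}
```

At the assumed Falcon 9 rate, launching one full NVL72 is ~$4.5M; at an aspirational $30/kg it falls to ~$90K.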

mdpi-satellite-dc-dc

State-of-the-Art DC-DC Converters for Satellite Applications

https://www.mdpi.com/2226-4310/12/2/97

Survey of space-grade DC-DC converter technologies and mass characteristics

Satellite power system constitutes ~25% of total dry mass. Modern GaN/SiC converters achieving ~0.2-0.5 kg/kW at high power. Power harness/cabling is 10-25% of electrical power system mass.

nature-multilayer-shield

Multilayer radiation shield for satellite electronic components protection

https://www.nature.com/articles/s41598-021-99739-2

Optimized graded-Z shielding designs for satellites

Three-layer shields (Au/W/Al) provide 70% better electron protection than single aluminum. For protons, W/Pb/Ta achieves 50% dose reduction vs equivalent aluminum. Graded-Z reduces electron dose by >60% over single-material shields at same areal density.

researchgate-leo-radiation

Radiation analysis and mitigation framework for LEO small satellites

https://www.researchgate.net/publication/322649302

Radiation environment characterization and shielding requirements for LEO

Below 1.5 mm Al, trapped electrons dominate dose. Above 1.5 mm, trapped protons dominate. 3 mm Al attenuates TID to <10 krad(Si) for 3-year LEO mission. 0.5 mm Al sufficient for 1-year worst-case.

catalyst-scaling-pathways

AI scaling pathways: on grid, on edge, off grid, off planet (Catalyst podcast)

https://reader.secondthoughts.workers.dev/posts/2248/text

Latitude Media Catalyst podcast with Shayl Khan (EIP) and Jake Elder (EIP) comparing grid-connected, edge, off-grid, and orbital data center pathways

Frames four pathways for scaling AI compute: grid-connected hyperscale (incumbent, constrained by transmission 5-7+ years and social license), edge (<50 MW, speed advantage but cost disadvantage at subscale), off-grid (>1 TW opportunity in US Southwest per Stripe/Paces study, but reliability challenges — early projects below 90% uptime), and orbital (free solar power but only 5-15% of DC cost is energy; O&M and debris are harder constraints than thermal). 10-year forecast: 50-60% grid hyperscale, 10-15% off-grid, ~15% edge, 5-10% orbital. Both hosts are skeptical of Musk's 3-4 year orbital cost parity claim. Key insight: off-grid is an underexplored middle ground — why go to space before exhausting terrestrial off-grid options? Chip supply chain likely bottlenecks before either off-grid or orbital scale constraints bind. At GW scale, an orbital DC would be a ~4 km^2 orbiting asset, with a debris strike expected every hour at that size. O&M identified as the hardest unsolved problem for orbital DCs.
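
The "debris strike every hour" figure can be sanity-checked as flux times area; the flux value below is an assumed round number for small (mm-scale) debris at constellation altitudes, chosen to illustrate the arithmetic rather than taken from the podcast:

```python
# Mean time between debris strikes = 1 / (flux * area), assuming a
# uniform impact flux over the asset's cross-section.
# The flux value is an assumption for illustration only.
HOURS_PER_YEAR = 8766

def mean_hours_between_strikes(area_m2, flux_per_m2_yr):
    strikes_per_year = flux_per_m2_yr * area_m2
    return HOURS_PER_YEAR / strikes_per_year

# ~4 km^2 orbiting asset, assumed mm-scale debris flux ~2.2e-3 /m^2/yr
interval_h = mean_hours_between_strikes(4e6, 2.2e-3)  # ~1 hour
```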

starpath-solar-panels

Starpath Space ultra-lightweight solar panels (Payload Space newsletter)

https://reader.secondthoughts.workers.dev/posts/1576/view

Coverage of Starpath Space's Starlight Air panels at 73 g/m^2 and ~$15/watt

Starlight Air panels: 73 g/m^2, ~$15/watt (space-grade). Starlight Classic (thicker): ~$11.20/watt. PV crystalline structure in hundreds of nanometers, printed onto substrate fabric. 50 MW production facility planned; first deliveries 2026. Raised $12M seed in 2024.

spacex-fcc-million-satellite-filing

SpaceX files for million satellite orbital AI data center megaconstellation

https://www.datacenterdynamics.com/en/news/spacex-files-for-million-satellite-orbital-ai-data-center-megaconstellation/

SpaceX filed with the FCC for up to one million satellites to provide 100 GW of AI compute capacity

Filing projects launching one million tonnes of satellites annually to generate 100 GW of AI compute capacity. Scale would dwarf all existing satellite constellations combined.

blue-origin-project-sunrise

Blue Origin joins the orbital data center race

https://spacenews.com/blue-origin-joins-the-orbital-data-center-race/

Blue Origin filed FCC application on March 19, 2026 for "Project Sunrise," a 51,600-satellite orbital data center constellation

FCC filing for up to 51,600 satellites in sun-synchronous orbits at 500-1,800 km altitude. Orbital planes spaced 5-10 km apart, each containing 300-1,000 satellites. Optical intersatellite links with TeraWave broadband constellation.

starcloud-88k-constellation-fcc

Starcloud files plans for 88,000-satellite constellation

https://spacenews.com/starcloud-files-plans-for-88000-satellite-constellation/

FCC accepted Starcloud's March 2026 filing for up to 88,000 orbital data center satellites

FCC accepted filing March 13, 2026. 88,000 satellites at 600-850 km altitude in dusk-dawn sun-synchronous orbits. Orbital shell thickness up to 50 km for near-continuous solar power.

starcloud-first-ai-model-space

Nvidia-backed Starcloud trains first AI model in space

https://www.cnbc.com/2025/12/10/nvidia-backed-starcloud-trains-first-ai-model-in-space-orbital-data-centers.html

Starcloud trained Google's Gemma LLM on Starcloud-1 satellite in December 2025

Starcloud-1 launched Nov 2025 with H100 GPU — 100x more powerful than any prior space GPU. First LLM trained in orbit. Second satellite planned Oct 2026 with 100x power generation and Blackwell platform. Funded by Google and Andreessen Horowitz ($34M total).

electronics-cooling-arrhenius

Does a 10C Increase in Temperature Really Reduce the Life of Electronics by Half?

https://www.electronics-cooling.com/2017/08/10c-increase-temperature-really-reduce-life-electronics-half/

Technical analysis of Arrhenius equation limitations for electronics lifetime prediction

The "10C = half life" rule assumes activation energy ~0.7 eV; actual values range 0.3-1.0+ eV. Significant failure modes are not temperature-dependent (thermal cycling, vibration, humidity). Running GPUs at higher temperatures (as proposed for space at 70-80C) has complex reliability implications.

introl-orbital-dc-race-2026

Orbital Data Center Race 2026

https://introl.com/blog/orbital-data-centers-space-computing-race-2026

Comprehensive competitive landscape identifying 8+ companies, cost economics, and three-wave deployment timeline

Three companies with hardware in orbit: Kepler (10 optical relay sats), Axiom Space (2 DC nodes), Starcloud (H100, Nov 2025). Starcloud claims $0.005/kWh orbital energy vs $0.04-0.08/kWh terrestrial. McCalip calculator: orbital ~3x more per watt. Market forecast: $1.77B by 2029, $39.09B by 2035 (67.4% CAGR). Three waves: defense/ISR (2025-2030), AI training/premium cloud (2030-2035), potential mainstream (2035-2045).
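
The forecast's 67.4% CAGR can be verified from the endpoint values:

```python
# CAGR implied by $1.77B (2029) -> $39.09B (2035), a 6-year span.
def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

implied = cagr(1.77e9, 39.09e9, 2035 - 2029)  # ~0.675, matching 67.4%
```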

cnbc-electricity-prices-inflation

Electricity prices rising by double the rate of inflation

https://www.cnbc.com/2026/02/12/electricity-price-data-center-ai-inflation-goldman.html

Goldman Sachs analysis of electricity price inflation driven by data center demand

Electricity prices jumped 6.9% in 2025, more than double headline inflation of 2.9%. Data centers make up 40% of electricity demand growth. Prices expected to increase up to 40% by 2030. Wholesale costs up 267% near data center clusters.

rmi-pjm-speed-to-power

PJM's Speed to Power Problem and How to Fix It

https://rmi.org/pjms-speed-to-power-problem-and-how-to-fix-it/

RMI analysis of PJM interconnection delays stretching from <2 years to >8 years

Average time from interconnection application to commercial operation: under 2 years in 2008, over 8 years by 2025. Capacity market clearing prices jumped from $29/MW-day to $330/MW-day cap. Capacity bills rose from $2.2B to $16.1B. PJM serves 67 million people.

datacenterwatch-opposition-tracker

$64 billion of data center projects have been blocked or delayed amid local opposition

https://www.datacenterwatch.org/report

Comprehensive tracker of data center projects facing community opposition

$18B blocked; $46B delayed; $64B total affected. 142 activist groups across 24 states. Bipartisan opposition (55% Republican, 45% Democrat). Loudoun County ended by-right zoning March 2025.

latitude-btm-traction

Behind-the-meter generation is picking up traction

https://www.latitudemedia.com/news/behind-the-meter-generation-is-picking-up-traction/

Rapid growth of BTM power generation for data centers

46 data centers with a combined 56 GW plan BTM power, ~30% of all planned US DC capacity. 90% of BTM projects were announced in 2025 alone. McKinsey estimates 25-33% of incremental demand through 2030 will be met by BTM.

camus-grid-connection-delays

Why Does It Take So Long to Connect a Data Center to the Grid?

https://www.camus.energy/blog/why-does-it-take-so-long-to-connect-a-data-center-to-the-grid

Technical analysis of multi-year bottlenecks in grid connection

Interconnection queue swollen to 2,600 GW nationally. Median time to commercial operation approaching 5 years. Withdrawal rates reaching nearly 80%. AI DC demand projected to grow 3.5x from 2025 to 2030 (McKinsey: 156 GW).

powermag-transformer-shortage

Transformers in 2026: Shortage, Scramble, or Self-Inflicted Crisis?

https://www.powermag.com/transformers-in-2026-shortage-scramble-or-self-inflicted-crisis/

Analysis of transformer supply crisis constraining data center and grid buildout

Power transformer lead times averaging 128 weeks (~2.5 years); GSUs 144 weeks. 30% supply shortfall for power transformers in 2025; 47% for GSUs. Cost inflation 77-95% since 2019.

aetherflux-galactic-brain

Aetherflux enters orbital data center race

https://spacenews.com/space-based-solar-power-startup-aetherflux-enters-orbital-data-center-race/

Aetherflux plans "Galactic Brain" orbital DC node in Q1 2027

Founded by Baiju Bhatt (Robinhood co-founder). $60M raised. Power-beaming demo satellite launching 2026. "Galactic Brain" first orbital DC node targeted Q1 2027. Combines space-based solar power with compute.

sophia-space-seed

Sophia Space raises $10M for orbital computing

https://www.geekwire.com/2026/sophia-space-10m-space-computing-network/

Modular TILE platform combining solar power with passive radiative cooling

Tabletop-sized satellite modules combining solar + passive radiative cooling. Multiple tiles connect into racks for scalable LEO computing. First in-orbit demo late 2027 or early 2028. One of NVIDIA's six space computing launch partners.

spacenews-economics-focus

With attention on orbital data centers, the focus turns to economics

https://spacenews.com/with-attention-on-orbital-data-centers-the-focus-turns-to-economics/

SpaceNews analysis noting $61B in terrestrial DC construction with unproven orbital business case

$61B in terrestrial data center construction last year (record). Axiom Space and Spacebilt plan ISS installation in 2027. Central finding: "it's not yet clear if the business case for data centers in space holds up."

fortune-experts-not-so-fast

AI data centers in space are having a moment. Experts say: Not so fast

https://fortune.com/2026/02/19/ai-data-centers-in-space-elon-musk-power-problems/

Expert skepticism about orbital DC timelines

Kathleen Curlee (Georgetown CSET): 2030-2035 timeline unrealistic. 1 GW orbital power requires ~1 km^2 solar panels. Jeff Thornburg (SpaceX veteran): minimum 3-5 years before functional systems. Tech companies project $5T+ in terrestrial DC spending by 2030.

chinatalk-dc-cost-comparison

How Much AI Does $1 Get You in China vs America?

https://reader.secondthoughts.workers.dev/posts/1238/view

Detailed cost comparison of 400 MW data center in China vs US

Chinese DCs cost $5.5-6.5M/MW construction; US $8-12M/MW. 400 MW construction: China ~$2.4B vs US ~$4B. US electricity for 400 MW DC: ~$600M over 3 years; China ~$350M.

payload-falcon9-price-hike

The Promise of Low Launch Prices is Still Far Off

https://pyld.omeclk.com/portal/public/ViewCommInBrowser.jsp?Sv4%2BeOSSucwiV%2BSifRJiNeUHzeOgHitiuZt0k4LaAu%2FtGh9fCjOzTvcfB6f0uDKUE90KLtIX9m6H0VKSnmjQuA%3D%3DA

Payload Pro analysis of SpaceX's March 2026 price increase and competitive dynamics

SpaceX increased Falcon 9 dedicated launch price from $70M to $74M and rideshare from $6,500/kg to $7,000/kg. Notes lack of real alternatives and concludes access to orbit has gotten more expensive in recent years despite narrative of falling launch costs.

spacenexus-launch-economics

Space Launch Economics Analysis

https://spacenexus.us/launch-economics

Comprehensive database of current launch vehicle costs per kg with historical trend data

Falcon 9 reusable $1,500/kg, expendable $2,720/kg. Falcon Heavy $1,400/kg. Starship target $10-50/kg. Global launch market $9.1B (2024), forecast $32B by 2030. Historical cost from $54,500/kg (Shuttle) to $1,500/kg (Falcon 9 reusable).

citi-gps-space-2022

Citi GPS: Space -- The Dawn of a New Age

https://www.citigroup.com/global/insights/space_20220509

Citigroup 2022 research note projecting launch costs to $100/kg by 2040 with bull/bear scenarios

Projects launch costs declining 95% to ~$100/kg by 2040. Bull case $33/kg. Driven by reusability, scale, new materials, cost-efficient production. Space industry to reach $1T revenue by 2040.

spacenews-categorical-imperative

SpaceX and the categorical imperative to achieve low launch cost

https://spacenews.com/spacex-and-the-categorical-imperative-to-achieve-low-launch-cost/

Analysis of SpaceX pricing strategy showing cost savings not passed to customers

SpaceX sells Falcon 9 launches at major markup over internal cost. Cost savings fund Starlink development rather than benefit external customers. No competitive pressure to lower customer prices given market dominance.

indexbox-starship-90m

SpaceX Starship Launch Price Set at $90 Million for 2029 Mission

https://www.indexbox.io/blog/spacex-starship-launch-price-set-at-90-million-for-2029-mission/

First publicly known Starship customer price: $90M for Voyager Starlab launch in 2029

Starship priced at $90M for Voyager Technologies' Starlab station launch in 2029, versus $74M for a Falcon 9 with far less payload capacity. Implies a Starship customer price of ~$600/kg at 150 t capacity.

voyager-10k-starship-contract

Voyager Technologies 10-K Annual Report (SEC EDGAR)

https://www.sec.gov/Archives/edgar/data/1788060/000162828025026244/voyager-sx1.htm

Primary SEC filing documenting the $90M Starship launch contract for Starlab station deployment (2028-2029)

SEC filing confirms the $90M Starship contract for Starlab deployment, the first publicly documented Starship customer price. Starlab mass not disclosed; across the 59-150 t payload range, the implied price is $600-1,525/kg.
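
The implied price range follows directly from dividing the contract price by the payload bounds (59 t from the DLR telemetry-based estimate, 150 t from the commonly cited figure):

```python
# Implied Starship customer price per kg at the two payload bounds.
contract_usd = 90e6
low_payload_kg, high_payload_kg = 59_000, 150_000

price_high = contract_usd / low_payload_kg   # ~$1,525/kg at 59 t
price_low = contract_usd / high_payload_kg   # $600/kg at 150 t
```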

dlr-starship-analysis-2025

Comparison of SpaceX's Starship with winged heavy-lift launcher options for Europe

https://link.springer.com/article/10.1007/s12567-025-00625-8

Peer-reviewed DLR analysis using actual Starship flight data (CEAS Space Journal, 2025)

Current Starship reusable payload to LEO: ~59 tonnes (based on telemetry from first 4 flight tests). Future V3 reusable: ~115 tonnes. V3 expendable: ~188 tonnes. Payload fraction for fully reusable Starship: ~40%. The 59t current figure is dramatically lower than the 100-200t commonly assumed in cost projections.

jones-nasa-launch-cost-2025

The Impact of Reduced Space Launch Costs

https://arc.aiaa.org/doi/10.2514/6.2025-4073

NASA Ames cost-cadence dependency analysis for Starship (AIAA, 2025)

Low $/kg requires high launch cadence — circular dependency. At sustained high cadence: ~$30/kg. At moderate cadence over 30 years: ~$119/kg. At low cadence: ~$436/kg. Musk's $10/kg target is marginal cost only, excluding development cost recovery. The cadence-cost dependency is the key uncertainty in long-term projections.
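
The cadence-cost dependency is a simple amortization: marginal cost plus fixed program costs spread over annual launched mass. The fixed-cost ($2B/yr) and payload (100 t) figures below are assumptions chosen to roughly reproduce the paper's ~$30/kg high-cadence figure, not numbers from the paper itself:

```python
# $/kg = marginal $/kg + annual fixed cost / (launches/yr * payload kg).
# Fixed cost and payload mass are illustrative assumptions.
def cost_per_kg(launches_per_year, marginal_usd_per_kg=10,
                fixed_usd_per_year=2e9, payload_kg=100_000):
    amortized = fixed_usd_per_year / (launches_per_year * payload_kg)
    return marginal_usd_per_kg + amortized

high = cost_per_kg(1000)  # ~$30/kg at sustained high cadence
low = cost_per_kg(50)     # ~$410/kg at low cadence
```

The hyperbolic shape is the point: below some cadence, amortized development cost dominates and $/kg blows up regardless of marginal cost.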

citigroup-gps-space-2022

Space: The Dawn of a New Age (Citi GPS Report, May 2022)

https://www.citigroup.com/global/insights/space_20220509

Major financial institution research on space economy including launch cost projections

Base case: ~$100/kg by 2040. Bull case: ~$33/kg (100+ reuses). Bear case: ~$300/kg (10x reuses). Launch costs expected to fall ~95% by 2040. Space economy projected at $1T annual revenue by 2040.

adilov-launch-cost-decline-2022

An Analysis of Launch Cost Reductions for Low Earth Orbit Satellites

http://www.accessecon.com/Pubs/EB/2022/Volume42/EB-22-V42-I3-P130.pdf

Peer-reviewed econometric analysis of historical launch cost trends (Economics Bulletin, 2022)

2000-2020 per-kg launch costs decreased at average 5.5% annually (4.4% altitude-adjusted). Commercial satellites: 7.5% annual decrease. Pre-Starship trend data; projecting forward gives slower decline than most Starship-centric analyses assume.
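
Projecting the historical 5.5% annual decline forward (assuming the pre-Starship trend simply continues) gives a much gentler curve than Starship-centric forecasts; e.g., starting from ~$1,500/kg:

```python
# Compound annual decline: cost(t) = cost_0 * (1 - r)^t.
def projected_cost(cost_0, annual_decline, years):
    return cost_0 * (1 - annual_decline) ** years

# At the 2000-2020 trend rate of 5.5%/yr, $1,500/kg falls only to
# ~$485/kg after 20 years -- far above ~$100/kg-by-2040 projections.
in_20_years = projected_cost(1500, 0.055, 20)
```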

Is Starlink Solar Module the Answer to Power in Space?

https://www.linkedin.com/pulse/starlink-solar-module-answer-power-space-stan-herasimenka-7anfc

Reverse-engineering of Starlink Gen 1.x solar array: 18% silicon cells, 78-100 W/kg achieved, 40-60 kg array mass

Starlink Gen 1.x solar arrays estimated at 78-100 W/kg specific power using mass-produced 18% efficiency silicon half-cells at ~7,535 W total per satellite.

satnews-fractal-lab-iii

The Fractal Lab -- Part III

https://satnews.com/2026/02/24/the-fractal-lab-part-iii/

Three-tier solar specific power framework: flown ~30 W/kg, lab demonstrated ~200 W/kg, near-term projection ~100 W/kg

Presents a maturity framework for solar array technology: heritage fleet at ~30 W/kg, laboratory demonstrated up to 200 W/kg, and near-term achievable at ~100 W/kg for 2030s deployable systems at megawatt scale.

mdpi-leo-degradation

Degradation Modeling and Telemetry-Based Analysis of Solar Cells in LEO

https://www.mdpi.com/2076-3417/15/16/9208

Models Si solar cell power loss of 12.5% at 300 km and 7.8% at 700 km over six months; evaluates Si, GaAs, TJ, CIGS

Silicon solar cell power output decreases approximately 12.5% at 300 km and 7.8% at 700 km over six months. Dominant degradation mechanisms include trapped charged particles, atomic oxygen, and UV radiation.

terawatt-starlight-specs

Starlight Solar Panel Specifications (Terawatt/Starpath)

https://terawatt.space/

Starlight Air: 16% efficiency, 73 g/m^2, $15/W. Starlight Classic: 19% efficiency, 900 g/m^2, $11.20/W.

Starlight Air panels at 73 g/m^2 yield ~2,980 W/kg cell-level specific power. Starlight Classic at 900 g/m^2 yield ~287 W/kg cell-level. Both radiation-hardened for LEO through Mars.
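
The cell-level specific power figures follow from efficiency, the ~1,361 W/m^2 solar constant, and areal density:

```python
# Specific power (W/kg) = efficiency * solar constant / areal density.
SOLAR_CONSTANT_W_M2 = 1361

def specific_power_w_per_kg(efficiency, areal_density_g_m2):
    areal_density_kg_m2 = areal_density_g_m2 / 1000
    return efficiency * SOLAR_CONSTANT_W_M2 / areal_density_kg_m2

air = specific_power_w_per_kg(0.16, 73)       # ~2,980 W/kg
classic = specific_power_w_per_kg(0.19, 900)  # ~287 W/kg
```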

solar-degradation-geo-gaas-si

Solar array degradation on geostationary communications satellites

https://www.inderscience.com/info/inarticle.php?artid=90549

Telemetry from 11 GEO sats (1990-1998): GaAs 0.44-1.03%/yr degradation; Si 0.71-1.69%/yr

GEO GaAs cells degrade 0.44-1.03%/yr; Si cells 0.71-1.69%/yr. LEO radiation fluences 5-10x lower than GEO.

iss-solar-array-degradation

On-Orbit Performance Degradation of the International Space Station P6 Photovoltaic Arrays

https://ntrs.nasa.gov/api/citations/20030068268/downloads/20030068268.pdf

ISS silicon solar arrays: measured degradation 0.2-0.5%/yr, below predicted 0.8%/yr

ISS P6 silicon photovoltaic arrays showed measured short-circuit current degradation of 0.2-0.5%/yr at ~400 km LEO, below the predicted rate of 0.8%/yr.

satnews-physics-wall

The Physics Wall: Orbiting Data Centers Face a Massive Cooling Challenge

https://satnews.com/2026/03/17/the-physics-wall-orbiting-data-centers-face-a-massive-cooling-challenge/

SatNews analysis of radiative cooling challenges for orbital data centers, including radiator sizing, temperature tradeoffs, and active thermal control trends

Running radiators at 60C instead of 20C can reduce required area by half. Industry expected to move toward space-rated heat pumps by 2027. A centralized 1 GW orbital DC would require ~834,000 m^2 of radiators at 400K.
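
Radiator area scales as P/(eps*sigma*T^4) by the Stefan-Boltzmann law. The sketch below uses an idealized one-sided emitter with no environmental heat load (emissivity assumed), which is why it lands somewhat below the article's ~834,000 m^2 figure:

```python
# Radiator sizing via Stefan-Boltzmann: A = P / (eps * sigma * T^4).
# Idealized: one-sided emission, zero sink temperature, eps assumed.
SIGMA = 5.670374e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def radiator_area_m2(power_w, temp_k, emissivity=1.0):
    return power_w / (emissivity * SIGMA * temp_k ** 4)

area_400k = radiator_area_m2(1e9, 400)  # ~689,000 m^2 for 1 GW
# Higher temperature shrinks the radiator: (333 K / 293 K)^4 ~ 1.67,
# so 60C vs 20C cuts idealized area by ~40%; with a warm environmental
# sink the real-world saving approaches the article's factor of two.
ratio = radiator_area_m2(1e9, 293.15) / radiator_area_m2(1e9, 333.15)
```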

isnps-lightweight-radiators

Advanced Lightweight Heat Rejection Radiators for Space Nuclear Power Systems

https://isnps.unm.edu/reports/ISNPS_Tech_Report_97.pdf

NASA-funded research on Ti-water heat pipe panels ranging from 5.8-7.16 kg/m^2, with additive-manufactured embedded heat pipes achieving >70% fin efficiency at 2-3 kg/m^2

State-of-the-art heat rejection radiators with Ti-water heat pipe panels range from 5.8 kg/m^2 to 7.16 kg/m^2. NASA TFAWS 2024 demonstrated embedded branching network heat pipes at 2-3 kg/m^2 using additive manufacturing.

nasa-smallsat-thermal

7.0 Thermal Control - NASA State of the Art of Small Spacecraft Technology

https://www.nasa.gov/smallsat-institute/sst-soa/thermal-control/

NASA reference on thermal control subsystems for small spacecraft

Comprehensive survey of thermal control technologies for small spacecraft including passive radiators, heat pipes, and active thermal management systems.

toughsf-radiators

ToughSF: All the Radiators

http://toughsf.blogspot.com/2017/07/all-radiators.html

Reference survey of spacecraft radiator technologies, mass ranges from structural-panel designs to 12 kg/m^2 heavy deployable radiators

Spacecraft radiator weight varies from nearly nothing (structural panel reuse) to ~12 kg/m^2 for heavy deployable radiators. NASA target for advanced thermal management: 2 kg/m^2.

vera-rubin-nvl72-nvidia

NVIDIA Vera Rubin POD: Seven Chips, Five Rack-Scale Systems, One AI Supercomputer

https://developer.nvidia.com/blog/nvidia-vera-rubin-pod-seven-chips-five-rack-scale-systems-one-ai-supercomputer/

NVIDIA blog on Vera Rubin NVL72 rack architecture (~1,815 kg, 180-220 kW TDP, 72 Rubin GPUs + 36 Vera CPUs)

The VR NVL72 compute rack alone weighs ~4,000 lbs (~1,815 kg), housing 72 Rubin GPUs and 36 Vera CPUs across 18 compute trays plus 9 NVLink switch trays. System TDP is 180-220 kW.

semianalysis-vera-rubin

Vera Rubin - Extreme Co-Design: An Evolution from Grace Blackwell Oberon

https://newsletter.semianalysis.com/p/vera-rubin-extreme-co-design-an-evolution

SemiAnalysis deep dive on VR NVL72 architecture, power delivery, and NVLink 6 switch trays

VR NVL72 maintains same NVLink switch tray count as GB200. Power delivery uses four 110 kW power shelves. Compute tray uses Strata board with IBC modules stepping from 50 VDC to 12 VDC, then VRMs to ~1 VDC.

mach33-energy-parity

Orbital Compute Energy will be Cheaper than Earth by 2030

https://research.33fg.com/analysis/orbital-compute-energy-will-be-cheaper-than-earth-by-2030

Mach33 analysis deriving $/W for satellite power & cooling subsystems from Starlink V2 Mini baseline

Starlink V2 Mini hardware costs ~$650/kg. Power & cooling subsystem (~400 kg, 42.8 kW) yields ~$6.1/W. Compute-optimized Starlink derivative achieves ~$5.0/W.
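
The $/W figure is hardware cost per kg times subsystem mass, divided by subsystem power:

```python
# Power & cooling subsystem $/W from the Starlink V2 Mini baseline.
hardware_usd_per_kg = 650
subsystem_mass_kg = 400
subsystem_power_w = 42_800

usd_per_w = hardware_usd_per_kg * subsystem_mass_kg / subsystem_power_w
# ~$6.1/W, matching the analysis
```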

spacenews-solar-bottleneck

Modernizing the satellite supply chain by breaking the solar power bottleneck

https://spacenews.com/modernizing-the-satellite-supply-chain-by-breaking-the-solar-power-bottleneck/

Analysis of solar panel supply as key satellite manufacturing bottleneck

Solar panel supply identified as a critical bottleneck for satellite manufacturing scale-up.

Cost-Saving Method Yields Solar Cells for Exploration, Gadgets

https://spinoff.nasa.gov/Spinoff2016/ee_5.html

NASA spinoff on MicroLink substrate-reuse approach; traditional space cell costs $400-500 per 4x8cm cell

Traditional space-qualified solar cell measuring 4x8 cm costs $400-500 apiece including flight qualification. Substrate accounts for ~40% of total cell material cost.

nasa-high-power-dc-dc

A 1 MW, 100 kV, less than 100 kg space based dc-dc power converter

https://ntrs.nasa.gov/citations/19920067913

NASA study of high-power space-based DC-DC converter at 11.9 kW/kg

Describes a 1 MW, 100 kV space-based DC-DC converter with estimated system mass of 83.8 kg, giving 11.9 kW/kg (or ~0.084 kg/kW).

arena-space-lasers

Making Space Lasers Boring

https://arenamagazine.substack.com/p/making-space-lasers-boring

Notes that Starlink demonstrated satellite design requirements are within reach of consumer electronics components

SpaceX demonstrated satellite design can use consumer electronics components. Interior chambers sealed and maintained at consistent temperatures, reducing need for expensive space-grade components.

ieee-h100-space

NVIDIA's H100 GPU Takes AI Processing to Space

https://spectrum.ieee.org/nvidia-h100-space

IEEE Spectrum coverage of Starcloud-1 deploying a terrestrial-grade H100 in orbit

Documents the first terrestrial, data-center-class GPU (H100) deployed in orbit aboard Starcloud-1 (November 2025), 100x more powerful than any prior space GPU.

militaryaerospace-radhard-cost

Radiation-hardened space electronics enter the multi-core era

https://www.militaryaerospace.com/computers/article/16709760/radiation-hardened-space-electronics-enter-the-multi-core-era

Analysis of rad-hard component costs vs commercial equivalents

Rad-hard power ICs that cost ~$2 in commercial volume sell for over $2,000 in space-grade versions (~1,000x multiplier). Testing costs often swamp material costs.

microchip-cots-newspace

Decrease Time to Market and Cost for the NewSpace Market by Using Radiation-Tolerant Solutions Based on COTS Devices

https://www.microchip.com/en-us/about/news-releases/products/decrease-time-to-market-and-cost-for-the-newspace-market-by-using-radiation-tolerant-solutions-based-on-cots-devices

Microchip's radiation-tolerant COTS approach for NewSpace applications

Radiation-tolerant MCUs deliver cost savings of up to 75% over rad-hard MCUs. Targets NewSpace operators who find traditional space-qualified components too expensive and slow.

meta-sdc-reliability

How Meta keeps its AI hardware reliable

https://engineering.fb.com/2025/07/22/data-infrastructure/how-meta-keeps-its-ai-hardware-reliable/

Meta's analysis of silent data corruptions in AI training and inference at scale

SDCs in inference lead to incorrect results affecting thousands of consumers. AI training workloads are sometimes considered self-resilient to SDCs, but only for a limited subset of manifestations.

blocventures-satellite-compute

The road to high-performance and robust satellite compute

https://blocventures.com/the-road-to-high-performance-and-robust-satellite-compute/

Analysis of COTS vs rad-hard electronics for NewSpace LEO satellites

LEO satellites below Van Allen belt have relatively low cumulative radiation exposure (<30 krad). Starlink operates with more risk tolerance because constellation-level redundancy absorbs individual failures.

nvidia-one-year-cadence

Nvidia Draws GPU System Roadmap Out To 2028

https://www.nextplatform.com/2025/03/19/nvidia-draws-gpu-system-roadmap-out-to-2028/

Nvidia shifted from 2-year to 1-year release cadence for datacenter GPUs

Hopper (2022), Blackwell (2024/25), Rubin (2026), Feynman (2028). Major architecture every 2 years, updates yearly. Each generation delivers ~2-4x inference performance improvement.

orbital-dc-race-2026

The Orbital Data Center Race: Every Major Player, Timeline, and Economic Reality in 2026

https://medium.com/@marc.bara.iniesta/orbital-data-centers-part-ii-spacexs-million-satellite-bet-cfd4e2bdcf66

Comprehensive survey of orbital DC players, regulatory filings, and economic analyses

Market valued at $1.77B by 2029, $39B by 2035 (67.4% CAGR). Three-wave deployment timeline: defense/ISR (2025-2030), AI training (2030-2035), mainstream (2035-2045).

revisiting-ml-cluster-reliability

Revisiting Reliability in Large-Scale Machine Learning Research Clusters

https://arxiv.org/abs/2410.21680

Meta FAIR paper: 11 months, 24K A100 GPUs, 150M+ GPU-hours with explicit transient vs permanent failure taxonomy

Analyzes Meta's RSC-1 (16K GPUs) and RSC-2 (8K GPUs) A100 clusters over 11 months, 4M jobs, 150M+ GPU-hours. Failure rate 6.50 per thousand node-days (RSC-1) vs 2.34 (RSC-2). Identifies "lemon nodes" (1.2% of fleet) causing 13% of daily job impacts; 28.2% GPU-caused. Explicitly distinguishes transient vs permanent failures. GPU swap data shows RSC-1 rate ~3x RSC-2. MTTF: 7.9 hours for 1024-GPU jobs, 1.8 hours for 16,384-GPU jobs.
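
Under an idealized model where every GPU fails independently at a constant rate, job MTTF scales inversely with job size; the paper's measured numbers decay more gently than this, which the sketch makes visible:

```python
# Idealized MTTF scaling: mttf(N) = mttf_ref * N_ref / N, assuming
# independent, identically distributed per-GPU failures.
def scaled_mttf_hours(mttf_ref_h, n_ref, n):
    return mttf_ref_h * n_ref / n

# From the measured 7.9 h at 1,024 GPUs, pure 1/N scaling predicts
# ~0.49 h at 16,384 GPUs; the paper reports 1.8 h, i.e. large jobs
# fail less often than naive extrapolation suggests.
predicted_h = scaled_mttf_hours(7.9, 1024, 16_384)
```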

satnews-insurance-congestion

Satellite Insurers Driving Costs in a Hyper-Congested Orbital Environment

https://satnews.com/2026/02/08/satellite-insurers-driving-costs-in-a-hyper-congested-orbital-environment/

SatNews analysis of rising space insurance costs in congested LEO

LEO insurance premiums now 5-10% of mission total budget. WEF projects $42.3B in congestion-related costs over next decade across $3.03T total space infrastructure value (~1.4%).

wef-debris-cost-2026

Clear Orbit, Secure Future: A Call to Action on Space Debris

https://reports.weforum.org/docs/WEF_Clear_Orbit_Secure_Future_2026.pdf

WEF 2026 report projecting space debris costs to industry over next decade

Total congestion costs $25.8B-$42.3B over next decade, representing ~1.4% of $3.03T total space infrastructure value. Maneuver costs alone $560M. Non-catastrophic failure costs $11.1B.

The Little-Known Secret That Could Cost Elon Musk $8.2 Billion a Year

https://www.fool.com/investing/2024/02/22/spacex-secret-could-cost-musk-82-billion-a-year/

Analysis of Starlink satellite replacement costs given 5-year lifespan

Starlink satellite manufacturing cost ~$500K each; launch cost ~$3M per satellite via Falcon 9. With a 5-year lifespan across a 42,000-satellite constellation, annual replacement cost is ~$8.2B.

SpaceX's Impact on Satellite Launch Insurance

https://telecomworld101.com/spacex-launch-insurance/

Analysis of SpaceX's decision not to insure Starlink satellites

SpaceX does not insure Starlink satellites. Mega-constellation quantity functions as its own insurance. SpaceX does secure launch insurance for most Falcon 9 missions.

payload-debris-costs

WEF's Space Debris Report Projects Significant Costs

https://payloadspace.com/wefs-space-debris-report-projects-significant-costs/

Payload Space coverage of WEF debris cost report

Anomaly costs $14.2B-$30.7B over next decade. Maneuver costs alone $560M. Total ~1.4% of projected space infrastructure value.

thunder-said-dc-economics

Economic costs of data-centers?

https://thundersaidenergy.com/downloads/data-centers-the-economics/

Data center economics analysis with opex breakdown for 30 MW facility

30 MW data center requires ~$100M/year opex (~$3,333/kW/year). Standard capex ~$10M/MW; AI-heavy up to $40M/MW ($40,000/kW). Over half of AI DC capex is GPUs.

cushman-wakefield-dc-cost-2025

U.S. Data Center Development Cost Guide 2025

https://www.cushmanwakefield.com/en/united-states/insights/data-center-development-cost-guide

Cushman & Wakefield survey of data center development costs across 19 US markets

Costs range from $9.3M/MW (San Antonio) to $15M/MW (Reno), average $11.7M/MW. Texas markets consistently lowest cost. Excludes IT equipment, land acquisition, and soft costs.

dgtl-infra-dc-cost-breakdown

How Much Does It Cost to Build a Data Center?

https://dgtlinfra.com/how-much-does-it-cost-to-build-a-data-center/

Detailed breakdown of data center construction costs by component

Total development costs $7-12M/MW. Electrical 40-45%, HVAC/cooling ~20%, powered shell 17-21%, building fit-out 20-25%. Per-sqft: $600-1,100/sqft total.

alpha-matica-dc-cost-structure

Deconstructing the Data Center: A Look at the Cost Structure Igniting the AI Boom

https://www.alpha-matica.com/post/deconstructing-the-data-center-a-look-at-the-cost-structure-1

Alpha Matica analysis of 100 MW hyperscale data center CapEx breakdown

100 MW hyperscale DC total CapEx $3.4B-$5.5B ($34-55/W including IT hardware). Infrastructure-only $900M-$1.5B ($9-15M/MW).

mckinsey-beyond-compute

Beyond compute: Infrastructure that powers and cools AI data centers

https://www.mckinsey.com/industries/industrials/our-insights/beyond-compute-infrastructure-that-powers-and-cools-ai-data-centers

McKinsey analysis: 25% ($1.3T) of $6.7T global DC investment goes to power/cooling infrastructure

25% of $6.7T total global data center investment through 2030 goes to power generation, transmission, cooling, and electrical equipment. With projected 219 GW demand, implies ~$5,900/kW.

introl-cdu-cost-analysis

Cooling Distribution Units: Liquid Cooling Infrastructure for AI Data Centers

https://introl.com/blog/cooling-distribution-units-cdu-liquid-cooling-ai-data-center-2025

CDU cost analysis: $75K-150K per 500 kW unit; CDU market growing from $1B to $7.7B at 33% CAGR

CDUs priced at $75K-150K per 500 kW unit. Piping installation $50-100 per linear foot. Cold plates and manifolds $5K-10K per server.

truelook-dc-construction-costs

Data Center Construction Costs Explained: Where Your Budget Really Goes

https://www.truelook.com/blog/data-center-construction-costs

Cost analysis showing MEP at 50% of budgets, cooling at 20% of mechanical

MEP systems consume up to 50% of total budgets. Electrical at 40-45%. Cooling systems at 43.2% of mechanical infrastructure spending in 2024. Air cooling $1.5-2M/MW; liquid cooling $3-4M/MW.

yale-dc-electricity-rates

Home electricity bills are skyrocketing. For data centers, not so much.

https://yaleclimateconnections.org/2026/01/home-electricity-bills-are-skyrocketing-for-data-centers-not-so-much/

Analysis showing K-shaped electricity pricing: residential up 25%, commercial up only 3%

Residential prices rose 25% (2020-2024). Commercial prices rose only 3% over two years. Data centers consume more power but pay proportionally less through negotiated PPAs and industrial tariffs.

cnbc-footing-ai-bill

Who is really footing the AI energy bill?

https://www.cnbc.com/2026/03/13/ai-data-centers-electricity-prices-backlash-ratepayer-protection.html

Debate about data center electricity costs and ratepayer impact

US residential electricity prices rose from $0.1276/kWh (2020) to $0.1744/kWh (Feb 2026), a ~37% increase. Projected $0.1901/kWh by September 2027.

volts-pjm-explainer

What is PJM and why is everyone so mad about it?

https://www.volts.wtf/p/what-is-pjm-and-why-is-everyone-so

David Roberts (Volts) explainer on PJM capacity market dynamics and data center impact

Data centers were 40% of costs in the December 2025 auction for 2027/28. Pennsylvania Governor Shapiro called it "the largest unjust wealth transfer in the history of US energy markets."

sciencedirect-dc-lcoe-comparison

Energy solutions for data center: Comparative analysis of LCOE and recent developments

https://www.sciencedirect.com/science/article/pii/S2352484725005803

Solar+battery storage as lowest-cost option for data centers at $25.11/MWh

Solar+battery storage found lowest cost at $25.11/MWh ($0.025/kWh), though sensitive to CAPEX, capacity factors, and firmness requirements.

pv-magazine-solar-ppa-playbook

AI datacenters rewrite the solar PPA playbook

https://pv-magazine-usa.com/2026/03/13/ai-datacenters-rewrite-the-solar-ppa-playbook/

Solar PPA prices rising due to hyperscaler demand

P25 solar prices rose 3.2% in Q4 2025, up ~9% year-over-year, as hyperscaler demand compresses available supply.

premai-parallelism-guide-2026

Multi-GPU LLM Inference: TP vs PP vs EP Parallelism Guide (2026)

https://blog.premai.io/multi-gpu-llm-inference-tp-vs-pp-vs-ep-parallelism-guide-2026/

Comprehensive practical guide to multi-GPU inference parallelism strategies with specific GPU counts, bandwidth thresholds, and efficiency data

Llama 405B requires minimum 8x H100 in FP8. DeepSeek R1 (671B MoE) requires 8x H100 minimum. TP scaling: TP=2 85-95% efficiency, TP=8 56-75%. PP uses point-to-point transfers requiring far less bandwidth than TP. NVLink mandatory for TP beyond TP=2.
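The minimum-GPU claims above follow from weight memory alone; a minimal sketch (our own helper, assuming 80 GB per H100 and 1 byte/param in FP8) shows why 8x H100 is the practical floor even though the weights alone fit in fewer:

```python
import math

def min_gpus_for_weights(n_params: float, bytes_per_param: float, gpu_mem_gb: float) -> int:
    """Lower bound on GPU count from weight memory alone
    (ignores KV cache, activations, and framework overhead)."""
    weight_gb = n_params * bytes_per_param / 1e9
    return math.ceil(weight_gb / gpu_mem_gb)

# Llama 405B in FP8 (1 byte/param) on 80 GB H100s: weights alone need 405 GB.
print(min_gpus_for_weights(405e9, 1, 80))  # -> 6 from weights alone; KV cache and
                                           #    runtime overhead push practice to 8
```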

nvidia-wide-ep-nvl72

Scaling Large MoE Models with Wide Expert Parallelism on NVL72 Rack Scale Systems

https://developer.nvidia.com/blog/scaling-large-moe-models-with-wide-expert-parallelism-on-nvl72-rack-scale-systems/

NVIDIA technical blog: EP32 achieves 1.8x throughput vs EP8; requires 130 TB/s aggregate NVLink bandwidth

Wide-EP on DeepSeek R1 with EP=32 achieves 1.8x more output tokens/sec/GPU than EP=8. Without 130 TB/s NVLink bandwidth, large-scale EP would be impractical.

nvidia-dynamo-moe-inference

How NVIDIA GB200 NVL72 and NVIDIA Dynamo Boost Inference Performance for MoE Models

https://developer.nvidia.com/blog/how-nvidia-gb200-nvl72-and-nvidia-dynamo-boost-inference-performance-for-moe-models/

Disaggregated serving for MoE models showing 6x throughput gains with wide EP on NVL72

Disaggregated serving (prefill/decode separation) achieved 6x throughput gain. Optimal DeepSeek R1 decode uses 64 GPUs in wide-EP within single NVLink domain.

NVIDIA NVLink and NVSwitch Supercharge Large Language Model Inference

https://developer.nvidia.com/blog/nvidia-nvlink-and-nvidia-nvswitch-supercharge-large-language-model-inference/

NVSwitch delivers 1.5x inference throughput for Llama 70B; quantifies per-query data transfer

Single Llama 70B inference query requires up to 20 GB of TP synchronization data per GPU. NVSwitch-equipped H100 achieved 168 tok/s/GPU vs 112 tok/s/GPU without NVSwitch (1.5x).

Scaling AI Inference Performance and Flexibility with NVIDIA NVLink and NVLink Fusion

https://developer.nvidia.com/blog/scaling-ai-inference-performance-and-flexibility-with-nvidia-nvlink-and-nvlink-fusion/

72-GPU NVLink domain maximizes revenue and performance for inference workloads

Analysis showing full 72-GPU NVLink domain delivers optimal inference revenue and performance across frontier model workloads.

semianalysis-inferencex-v2

InferenceX v2: NVIDIA Blackwell Vs AMD vs Hopper

https://newsletter.semianalysis.com/p/inferencex-v2-nvidia-blackwell-vs

All top-tier labs use disaggregated serving with wide EP; detailed DeepSeek R1 deployment configs

All top-tier labs (OpenAI, Anthropic, xAI, Google DeepMind, DeepSeek) use disaggregated inferencing and wide expert parallelism. EP64 places 4 experts/layer/GPU vs EP8 at 32 experts/layer/GPU.

nebius-gb200-interconnect

Leveraging high-speed, rack-scale GPU interconnect with NVIDIA GB200 NVL72

https://nebius.com/blog/posts/leveraging-nvidia-gb200-nvl72-gpu-interconnect

TP groups always contained within single NVL72 rack

Technical deep-dive confirming TP groups require fastest interconnect and are always contained within a single NVL72 rack.

nvidia-moe-frontier-models

Mixture of Experts Powers the Most Intelligent Frontier AI Models

https://blogs.nvidia.com/blog/mixture-of-experts-frontier-models/

10x MoE performance on NVL72 vs H200; 60%+ of frontier models use MoE

Since early 2025, over 60% of open-source frontier model releases use MoE. NVL72 achieves 10x performance improvement for MoE vs HGX H200.

nvidia-rubin-cpx-nvl144

NVIDIA Unveils Rubin CPX: A New Class of GPU Designed for Massive-Context Inference

https://nvidianews.nvidia.com/news/nvidia-unveils-rubin-cpx-a-new-class-of-gpu-designed-for-massive-context-inference

NVL144 with 100TB memory, 1.7 PB/s bandwidth, designed for million-token context

Vera Rubin NVL144 CPX doubles domain to 144 GPUs with NVLink 6.0 at 3.6 TB/s per GPU. 100TB fast memory, 1.7 PB/s bandwidth. Rubin Ultra (2027) goes to NVLink 7.0.

lmsys-gb200-deepseek-part1

Deploying DeepSeek on GB200 NVL72 (Part I)

https://lmsys.org/blog/2025-06-16-gb200-part-1/

2.7x decode throughput improvement on NVL72

2.7x decode throughput improvement using 12 decode + 2 prefill nodes within NVL72 for DeepSeek R1.

lmsys-gb200-deepseek-part2

Deploying DeepSeek on GB200 NVL72 with PD and Large Scale EP (Part II)

https://lmsys.org/blog/2025-09-25-gb200-part-2/

3.8x prefill and 4.8x decode speedup with NVFP4 MoE on 48 decode ranks

SGLang on GB200 NVL72 achieved 26,156 input tokens/sec/GPU (prefill) and 13,386 output tokens/sec/GPU (decode) for DeepSeek R1 with FP8 attention and NVFP4 MoE.

epoch-consumer-gpu-gap

Frontier AI capabilities can be run at home within a year or less

https://epoch.ai/data-insights/consumer-gpu-model-gap

6-12 month lag before frontier capabilities run on single consumer GPU

Frontier AI capabilities become runnable on single consumer GPU (RTX 4090, ~24 GB VRAM) within 6-12 months. Small open models improve faster (+125 ELO/year) than frontier models (+80 ELO/year).

ai-dc-networking-gpu-clusters

AI Data Center Networking: How GPU Clusters Are Changing Network Design

https://www.thenetworkdna.com/2026/03/ai-data-center-networking-how-gpu.html

Technical analysis of TP, PP, DP communication patterns and bandwidth requirements

Data parallelism is embarrassingly parallel (no cross-replica communication). Pipeline parallelism uses predictable point-to-point flows. Tensor parallelism uses all-to-all AllGather and ReduceScatter collectives.

airandspaceforces-oos-2026

US Bets on On-Orbit Satellite Servicing with 4 Missions in 2026

https://www.airandspaceforces.com/us-on-obit-satellite-servicing-4-missions-2026/

Four DoD-funded on-orbit servicing demonstrations in GEO planned for 2026

SpaceLogistics MRV with DARPA RSGS robotic arm, Astroscale U.S. hydrazine refueling, Tetra-5 autonomous RPOD/refueling, and Kamino hydrazine transfer. All target GEO. SpaceLogistics president notes 10-20 GEO satellites reach end of life annually from fuel depletion. Over 500 high-value GEO satellites currently operational.

breakingdefense-spacelogistics-2022

SpaceLogistics sees potential defense market for orbital life-extension spacecraft

https://breakingdefense.com/2022/03/spacelogistics-sees-potential-defense-market-for-orbital-life-extension-spacecraft/

MEV pricing details and transition to MRV/MEP platforms

MEV lease rates approximately $13M/year (based on Intelsat SEC filings). MEV service cost is "half to a quarter" of $300-500M satellite replacement cost. MEP pricing "dramatically less" than MEV. MEP is dishwasher-sized, provides 6 years of electric propulsion. MRV carries multiple MEPs. Five operators hold MEP "seat reservations."

spacenews-oos-road-to-market

Increasingly feasible, on-orbit servicing has a challenging road to market

https://spacenews.com/increasingly-feasible-on-orbit-servicing-challenging-road-to-market/

Analysis of commercial viability challenges for on-orbit servicing, especially in LEO

LEO satellites cost ~$500K with 3-5 year lifespans; replacement is more economical than servicing. Operators enhancing satellite autonomy and propulsion reduces OOS demand. Commercial viability likely emerges through government support. In-orbit assembly and maintenance "only in the long term, driven by large-scale infrastructure projects."

satnews-starfish-52m

US Space Force Awards Starfish Space $52.5 Million for Proliferated LEO Deorbit Services

https://satnews.com/2026/01/21/us-space-force-awards-starfish-space-52-5-million-for-proliferated-leo-deorbit-services/

First operational LEO Deorbit-as-a-Service contract for PWSA constellation

$52.5M contract for deorbit services using Otter vehicle. ESPA-class (~200 kg) with autonomous RPOD. First operational vehicles launch late 2026. Features CETACEAN (computer vision), CEPHALOPOD (autonomous guidance), Nautilus (universal capture mechanism for non-equipped satellites).

breakingdefense-starfish-otter-2

Space Force buys second Otter spacecraft to power satellites on orbit

https://breakingdefense.com/2026/02/space-force-buys-second-otter-spacecraft-to-power-satellites-on-orbit/

$54.5M contract for second GEO Otter, following $37M first GEO Otter contract

Second Otter vehicle for GEO at $54.5M, delivery 2028. First GEO Otter ready for 2026 launch. Operates as auxiliary propulsion for station-keeping or relocation. Docks with unmodified satellites. Earlier $37M contract from 2024 funded first GEO vehicle.

sciencedirect-oos-economic-value

Economic value analysis of on-orbit servicing for geosynchronous communication satellites

https://www.sciencedirect.com/science/article/abs/pii/S0094576520307165

Quantitative economic model for OOS viability thresholds in GEO

On-orbit servicing commercially viable when client satellite initial cost exceeds $242M and servicing architecture cost is below $140M. GEO-focused analysis. Provides NPV framework for comparing servicing vs. replacement under various cost assumptions.
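The servicing-vs-replacement tradeoff can be sketched as a toy single-client NPV rule; this is illustrative only (names, discount rate, and life-extension period are our assumptions), not the paper's model, which derives the $242M/$140M thresholds:

```python
def prefer_servicing(sat_cost: float, servicing_cost: float,
                     extra_years: float, discount: float = 0.08) -> bool:
    """Toy NPV rule: pay servicing_cost today and defer the replacement
    purchase by extra_years, vs. replacing today."""
    npv_replace_now = sat_cost
    npv_service = servicing_cost + sat_cost / (1 + discount) ** extra_years
    return npv_service < npv_replace_now

# A $500M satellite serviced for $140M with 5 extra years at 8% favors servicing;
# a $300M satellite does not.
print(prefer_servicing(500e6, 140e6, 5), prefer_servicing(300e6, 140e6, 5))
```

Even this toy version reproduces the paper's qualitative result: servicing only pays for expensive client satellites and cheap servicing architectures.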

marketsandmarkets-oos

On-Orbit Satellite Servicing Industry worth $5.1 billion by 2030

https://www.marketsandmarkets.com/PressReleases/on-orbit-satellite-servicing.asp

Market forecast for on-orbit satellite servicing through 2030

Market valued at $2.4B (2023), projected $5.1B by 2030 (11.5% CAGR). GEO dominates. Robotic servicing segment leads. Key companies: Maxar, Astroscale, SpaceLogistics, Airbus, Thales. Active debris removal fastest-growing segment.

cordis-eross-iod

European Robotic Orbital Support Services In-Orbit Demonstration (EROSS IOD)

https://cordis.europa.eu/project/id/101082464

EU Horizon Europe funded robotic servicing demonstration led by Thales Alenia Space

Targets 2026 demonstration of rendezvous, capture, docking, refuelling, and payload exchange between two cooperative spacecraft. Builds on EROSS and EROSS+ research. Validates technologies for robotic in-space servicing including coordinated close rendezvous and autonomous robotic operations with a poly-articulated arm.

aiaa-sophia-space

Sophia Space's Orbital Data Centers Will Cool, Compute, and Conquer Space-Based Computing

https://aerospaceamerica.aiaa.org/institute/no-fans-needed-sophia-spaces-orbital-data-centers-will-cool-compute-and-conquer-space-based-computing/

AIAA Aerospace America profile of Sophia Space TILE architecture and SOOS operating system

TILE modules are 1m x 1m x 1cm with integrated solar and passive radiative cooling. SOOS (Sophia Orbital Operating System) routes around failed tiles, handles firmware upgrades and security patches. 30-year datacenter lifecycle with ~6-year hardware refresh cycles. Three tiers: single tile, ~40-tile clusters, ~2,500-tile full orbital data centers. Company handles launch, maintenance, and deorbiting of obsolete modules.

sciencedirect-modular-reconfigurable-spacecraft

Modular self-reconfigurable spacecraft: Development status, key technologies, and application prospect

https://www.sciencedirect.com/science/article/abs/pii/S0094576523001297

Chinese Academy of Sciences review of modular reconfigurable spacecraft technologies

Identifies key technologies for on-orbit module replacement: standardized interfaces (mechanical, electrical, thermal), autonomous docking, fault isolation, reconfiguration planning. Current TRL 3-4 for autonomous module swap. Modular designs can replace failed modules with spares for low-cost, fast-response on-orbit repair.

militaryembedded-modular-gpu

Modernizing Mission Compute: Enabling AI Through Modular GPU Expansion

https://militaryembedded.com/ai/big-data/guest-blog-modernizing-mission-compute-enabling-ai-through-modular-gpu-expansion

GPU-based XMC modules for modular compute upgrades in military platforms

GPU-based XMC (Switched Mezzanine Card) modules enable adding GPU acceleration without modifying base carrier card or backplane. Compute capability can be upgraded while maintaining common system architecture. Designed for human-serviced environments, not autonomous in-orbit operations.

bcsatellite-mev

Another MEV Rescue Mission

https://www.bcsatellite.net/blog/another-mev-rescue-mission/

Coverage of MEV-2 docking with Intelsat 10-02 satellite for life extension

MEV-2 docked with Intelsat 10-02 in GEO in April 2021. Provides station-keeping by attaching to satellite engine nozzle. Five-year initial service contract.

space-com-starfish-otter

Starfish Space's Otter satellite will attempt first-ever commercial docking in LEO

https://www.space.com/space-exploration/launches-spacecraft/starfish-spaces-otter-satellite-will-attempt-1st-ever-commercial-docking-in-low-earth-orbit-this-year

Coverage of Otter Pup 2 mission launching June 2025 for LEO docking demonstration

Otter Pup 2 launched on SpaceX Transporter 14 in June 2025 to rendezvous and dock with a D-Orbit ION satellite. First-ever commercial satellite docking attempt in LEO. Three operational Otter vehicles scheduled for 2026 launch for NASA, Space Force, and Intelsat.

nasa-osam-1

On-orbit Servicing, Assembly, and Manufacturing 1 (OSAM-1)

https://www.nasa.gov/mission/on-orbit-servicing-assembly-and-manufacturing-1/

NASA mission page for cancelled OSAM-1 robotic satellite servicing demonstration

OSAM-1 was designed to demonstrate on-orbit satellite refueling and robotic assembly (SPIDER payload for 10m beam construction). Cancelled March 2024 due to technical, cost, and schedule challenges.

wikipedia-osam-1

OSAM-1 - Wikipedia

https://en.wikipedia.org/wiki/OSAM-1

Wikipedia article on NASA's cancelled OSAM-1 mission

OSAM-1 cancelled February 29, 2024 due to continued technical, cost, and schedule challenges. Budget grew significantly beyond initial estimates. Related OSAM-2 (Archinaut) concluded in 2023 without flight demonstration.

nasa-sbsp-study

Space-Based Solar Power Study (NASA, 2024)

https://ntrs.nasa.gov/citations/20240002440

NASA study on space-based solar power costs including traditional space solar cell pricing

Traditional space solar cells cost approximately $100/W in volume production of 200-1,000 kW quantities. This includes bare cells only, not the full array assembly with structure, wiring, and deployment mechanisms.

bnef-lcoe-2026

BloombergNEF 1H 2026 LCOE Update

https://about.bnef.com/blog/1h-2026-levelized-cost-of-energy-update/

BNEF 2026 levelized cost of energy benchmarks including solar+storage

Combined solar+storage delivered at $57/MWh average in 2025 (87 GW deployed). Fixed-axis solar benchmark rose to $39/MWh. BNEF forecasts 30% solar LCOE reduction and 25% battery storage reduction by 2035.

semianalysis-dc-anatomy-electrical

Datacenter Anatomy Part 1: Electrical Systems

https://newsletter.semianalysis.com/p/datacenter-anatomy-part-1-electrical

SemiAnalysis deep-dive on data center electrical infrastructure: MV switchgear, transformers, generators, UPS, PDU

Standard power distribution: MV switchgear, step-down transformers (to 415V AC), diesel generators, ATS, UPS with 5-10 min battery storage, PDUs. Microsoft uses standardized 3 MW generator and 3 MVA transformer pods for modularity.

semianalysis-tokens-to-burgers

From Tokens to Burgers

https://newsletter.semianalysis.com/p/from-tokens-to-burgers

SemiAnalysis analysis of Colossus 2 data center economics including PUE modeling

Uses PUE of 1.15 for Colossus 2 modeling (400 MW AI data center in Memphis). Detailed cost model for AI inference economics.

introl-ocp-2025-analysis

OCP 2025: Liquid Cooling Trends for AI Data Centers

https://introl.com/blog/ocp-2025-liquid-cooling-ai-data-centers

Introl analysis of liquid cooling trends from OCP 2025, including retrofit and new-build cost comparisons

Retrofitting to support 40 kW racks costs $50K-100K per rack; building new 100 kW infrastructure costs $200K-300K per rack. Direct-to-chip liquid cooling commands ~65% of the liquid cooling market in 2026.

gridlab-gas-turbine-costs-2025

The New Reality of Power Generation: An Analysis of Increasing Gas Turbine Costs in the U.S.

https://gridlab.org/wp-content/uploads/2025/09/GridLab_Gas-Turbine-Costs-Report-1.pdf

GridLab/Energy Futures Group/Halcyon analysis of gas turbine costs from IRP/CPCN filings

Recent CCGT projects routinely exceed $2,000/kW_gen, far above EIA baseline assumptions of ~$900-1,000/kW_gen. Simple-cycle projects range $728-$1,965/kW_gen. Elevated costs driven by data center demand competing for turbine supply.

eia-capital-cost-aeo2025

Capital Cost and Performance Characteristics for Utility-Scale Electric Power Generating Technologies (AEO2025)

https://www.eia.gov/analysis/studies/powerplants/capitalcost/pdf/capital_cost_AEO2025.pdf

Sargent & Lundy report for EIA with overnight capital cost estimates for 19 generator types

H-class CCGT at ~$921/kW_gen base. Aeroderivative simple-cycle at ~$1,606/kW_gen. Solar+4hr battery at $2,175/kW_gen. Location adjustments 0.98-1.21x.

nrel-battery-cost-2025

Cost Projections for Utility-Scale Battery Storage: 2025 Update

https://docs.nrel.gov/docs/fy25osti/93281.pdf

NREL bottom-up cost model for utility-scale battery storage

4-hour battery at $334/kWh (2024). Projections to $147-$339/kWh by 2035, $108-$307/kWh by 2050.

nanostar-methodology

NANOSTAR Systems Engineering Methodology

https://nanostar-project.gitlab.io/main/source/preliminary-design/systems.html

Spacecraft design methodology with subsystem mass allocation tables by mission type

LEO satellite mass allocations: structure 27%, thermal 2%, ADCS 6%, propulsion 3%, power 21%, comms 2%, C&DH 5%, payload 31%.

nasa-harness-llis

Spacecraft Electrical Harness Design Practice (NASA LLIS)

https://llis.nasa.gov/lesson/722

NASA lessons learned on wiring harness design, including mass fraction data

Spacecraft wiring harness is 7-10% of dry mass for conventional satellites, potentially 10-30% for power-intensive designs.

nasa-mass-growth

Probabilistic Mass Growth Uncertainties (NASA)

https://ntrs.nasa.gov/api/citations/20130013736/downloads/20130013736.pdf

NASA study on spacecraft mass growth from SRR to launch

Average mass growth 28-30%. Growth allowances by subsystem: structure 20%, wire harness 15%, propulsion 20%, mechanisms 20%.

proba2-launch-orbit

Launch and Orbit — PROBA2 Science Center

https://proba2.sidc.be/about/launch

ESA solar observation satellite in dawn-dusk SSO at 725 km with real operational eclipse data

Eclipse season ~80 days/year (Nov-Jan); maximum eclipse duration 18 minutes per orbit at peak around the December solstice. Continuous sunlight for the remainder of the year.

wikipedia-beta-angle

Beta angle — Wikipedia

https://en.wikipedia.org/wiki/Beta_angle

Reference on beta angle and its effect on satellite eclipse exposure

Beta_crit = arcsin(R_earth / (R_earth + h)). At 575 km, beta_crit ≈ 66.5°. Earth's obliquity ~23.45°. Describes beta angle geometry and its effect on eclipse/sunlight exposure for LEO satellites.
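The critical-beta formula above is a one-liner to evaluate (equatorial radius value is our assumption):

```python
import math

R_EARTH = 6378.0  # km, equatorial radius

def beta_crit_deg(alt_km: float) -> float:
    """Critical beta angle above which a circular orbit sees no eclipse:
    beta_crit = arcsin(R_earth / (R_earth + h))."""
    return math.degrees(math.asin(R_EARTH / (R_EARTH + alt_km)))

print(f"{beta_crit_deg(575):.1f} deg")  # ~66.5 deg, matching the entry
```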

starcloud-whitepaper

Why we should train AI in space — Starcloud White Paper

https://starcloudinc.github.io/wp.pdf

Starcloud's technical pitch for orbital AI compute

Claims capacity factor >95% in dawn-dusk SSO. Proposes 88,000 satellites at 600-850 km in dawn-dusk SSO.

Space Batteries: How SpaceX Designs Batteries for Satellites

https://www.batterypoweronline.com/news/space-batteries-how-spacex-designs-batteries-for-satellites/

Technical breakdown of SpaceX Starlink satellite battery design

NMC 2170 cells, >230 Wh/kg pack level, ~11 kWh per 500 kg satellite. Max DOD ~50%, targeting 5,000 full-cycle equivalents. 90% capacity retention at 2,000 full cycles.

saft-ves16

VES16 Batteries for LEO and GEO Satellites (Saft)

https://spw.aerospace.org/files/2021/07/2019_04_02_IV-e_Borthomieu.pdf

Space-qualified Li-ion cell with extensive LEO flight heritage

>155 Wh/kg cell level, >65,000 cycles at 30-50% DOD over 12 years LEO. 80+ spacecraft in orbit.

eaglepicher-lp33037

LP 33037 60Ah Space Cell (EaglePicher)

https://satsearch.co/products/eaglepicher-technologies-lp-33037-60ah-space-cell

Space-qualified Li-ion cell for LEO/MEO/GEO missions

60 Ah, >40,000 LEO cycles at 40% DOD over 10 years. Prismatic design. 600+ satellites.
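The cycle counts quoted for these cells can be compared against mission needs with a quick upper bound of one charge-discharge cycle per orbit (period value is an assumption; eclipse seasons and dawn-dusk orbits see far fewer cycles):

```python
def leo_cycles(mission_years: float, period_min: float = 100.0) -> float:
    """Upper bound on battery cycles: one per orbit over the mission life."""
    orbits_per_year = 365.25 * 24 * 60 / period_min
    return mission_years * orbits_per_year

print(round(leo_cycles(10)))  # ~52,600 over 10 years, which is why the cells
                              # above quote 40,000-65,000 cycle ratings
```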

amprius-satellite

How High-Energy Batteries are Enhancing Satellite Operations (Amprius)

https://amprius.com/satellite-batteries/

Next-generation silicon-anode batteries for satellite applications

SiCore cells up to 450-500 Wh/kg cell level. 80% more energy than traditional graphite-anode Li-ion. Cycle life at these densities unproven for LEO.

erau-eclipse-computation

Computation of Eclipse Time for Low-Earth Orbiting Small Satellites

https://commons.erau.edu/cgi/viewcontent.cgi?article=1412&context=ijaaa

Academic paper on eclipse duration calculation methodology for LEO

Eclipse duration is a function of altitude, Earth radius, and beta angle. At 800 km standard LEO (beta = 0°), max eclipse is ~35 min per orbit.
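The ~35 min figure falls out of the orbital period times the shadow fraction; a minimal sketch, assuming a circular orbit and a cylindrical Earth shadow (constants are ours):

```python
import math

MU = 398600.4418   # km^3/s^2, Earth's gravitational parameter
R_EARTH = 6378.0   # km

def max_eclipse_min(alt_km: float) -> float:
    """Max eclipse per orbit for a circular orbit at beta = 0:
    period x asin(R_earth / a) / pi, with a cylindrical shadow."""
    a = R_EARTH + alt_km
    period_min = 2 * math.pi * math.sqrt(a ** 3 / MU) / 60
    shadow_fraction = math.asin(R_EARTH / a) / math.pi
    return period_min * shadow_fraction

print(f"{max_eclipse_min(800):.1f} min")  # ~35 min, matching the entry
```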

mdpi-power-bus-management

Power Bus Management Techniques for Space Missions in Low Earth Orbit

https://www.mdpi.com/1996-1073/14/23/7932

Analysis of spacecraft power bus topologies and efficiency

Shunt regulation >96% efficiency. BCR+BDR combined round-trip path ~80-87% efficient for charge-discharge cycle.

round-trip-efficiency

A Comprehensive Guide to Round Trip Efficiency in Batteries

https://www.anernstore.com/blogs/diy-solar-guides/round-trip-efficiency-batteries

Technical explanation of battery round-trip efficiency

Li-ion RTE 90-95% cell level. LiFePO4 above 92%. Higher C-rates reduce RTE.
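Chaining the regulator and cell figures from the two entries above shows how an 80-87% round-trip path efficiency arises; the specific component values below are illustrative assumptions:

```python
def round_trip_path_efficiency(bcr: float = 0.95,
                               battery_rte: float = 0.93,
                               bdr: float = 0.95) -> float:
    """Efficiency of energy routed through the battery: charge regulator
    x battery round-trip x discharge regulator (illustrative values)."""
    return bcr * battery_rte * bdr

print(f"{round_trip_path_efficiency():.2f}")  # ~0.84, inside the 80-87% range above
```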

wikipedia-sso

Sun-synchronous orbit — Wikipedia

https://en.wikipedia.org/wiki/Sun-synchronous_orbit

Technical overview of SSO orbit characteristics

Dawn-dusk orbit rides the terminator between day and night; solar panels remain in sunlight for most of the year. Beta angle varies seasonally.

researchgate-dawn-dusk-beta

Beta angle variation for a 600 km SSO dawn-dusk orbit

https://www.researchgate.net/figure/b-angle-variation-for-a-600-km-SSO-dawn-dusk-orbit_fig44_319252682

Scientific diagram showing beta angle variation over a year for dawn-dusk SSO

For 600 km dawn-dusk SSO, beta angle stays near 90° most of the year with brief excursions to lower beta around solstices.

valueinvesting-eqix

Equinix WACC, Cost of Equity, Cost of Debt and CAPM

https://valueinvesting.io/EQIX/valuation/wacc

Financial modeling estimates of Equinix's weighted average cost of capital

Equinix WACC estimated at 5.9%, cost of debt 4.65%.

alphaspread-dlr

Digital Realty Trust Discount Rate - WACC & Cost of Equity

https://www.alphaspread.com/security/nyse/dlr/discount-rate

Financial modeling estimates of Digital Realty's WACC

DLR WACC 6.5-8.76% depending on methodology. Deutsche Bank used 6.5%. Cost of debt 5.0-5.5%, cost of equity 7.46%.

alphaspread-googl

Alphabet (GOOGL) Discount Rate - WACC & Cost of Equity

https://www.alphaspread.com/security/nasdaq/googl/discount-rate

Financial modeling estimates of Alphabet's WACC

Alphabet WACC 5.5-7.5%. Cost of equity ~7.47%, cost of debt ~5.15%.
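The WACC decompositions in these entries follow the standard after-tax weighting; a sketch using the Alphabet figures above (the 90/10 capital split and 21% tax rate are our assumptions):

```python
def wacc(w_equity: float, w_debt: float, cost_equity: float,
         cost_debt: float, tax_rate: float = 0.21) -> float:
    """Standard after-tax weighted average cost of capital."""
    return w_equity * cost_equity + w_debt * cost_debt * (1 - tax_rate)

print(f"{wacc(0.90, 0.10, 0.0747, 0.0515):.1%}")  # ~7.1%, inside the 5.5-7.5% range
```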

alphabet-bonds

Alphabet Issues 100-Year Bond as Tech Giants Seek AI Funding

https://www.trendingtopics.eu/google-ai-bond-2/

Alphabet's multi-currency bond issuance including century bond and spreads over Treasuries

3-year bonds at 27 bps over Treasuries, 40-year at 95 bps. Long-term debt rose from $10.9B to $46.5B (2024-2025).

fortune-tech-borrowing

Google, Meta, and Oracle Are on a $1 Trillion Borrowing Spree

https://fortune.com/2026/03/07/big-tech-trillion-dollar-borrowing-ai-century-bonds/

Analysis of hyperscaler debt issuance for AI infrastructure

$121B issued in 2025. Credit ratings: MSFT AAA, GOOGL Aa2, AMZN AA-, META Aa3, ORCL Baa2. $1.5T projected total need.

equinix-green-bonds

Equinix Issues Additional EUR 1.15 Billion in Green Bonds

https://www.equinix.com/newsroom/press-releases/2024/11/equinix-continues-to-expand-sustainability-initiatives-with-additional-1-15-billion-in-green-bonds

Equinix green bond coupon rates and effective interest costs

EUR 650M at 3.25% due 2031, EUR 500M at 3.625% due 2034. Also $750M at 5.500% due 2034.

credaily-dc-cap-rates

Data Centers Lead REIT Investment Surge With Low Cap Rates

https://www.credaily.com/briefs/data-centers-lead-reit-investment-surge-with-low-cap-rates/

Data center cap rates relative to other commercial real estate

Data center implied cap rate 4.4% in 2025, lowest across all CRE.

rclco-dc-investment

Data Centers: Capitalizing on the Data Explosion

https://www.rclco.com/publication/data-centers-capitalizing-on-the-data-explosion/

Data center investment returns including cap rates, IRRs, and development yields

Stabilized cap rates 4.25-6.25%, unleveraged IRRs 7.0-8.5%. Development leveraged IRRs 12-19%.

nareit-balance-sheets

REIT Balance Sheet Metrics

https://www.reit.com/news/blog/media/new-data-show-solid-balance-sheets-and-net-operating-income-amid-higher-longer

REIT industry weighted average interest rates and leverage

Weighted average interest rate on REIT debt 4.1% (Q1 2024). 89.6% fixed-rate, 6.4-year maturity.

wolfstreet-hyperscaler

Hyperscalers Plan $700 Billion in AI-Related Capex in 2026

https://wolfstreet.com/2026/02/07/amzn-goog-msft-meta-orcl-plan-700-billion-in-largely-ai-related-capex-in-2026-heres-where-the-cash-comes-from/

Hyperscaler cash flow, debt, and capex funding sources

2025 operating cash flow: GOOGL $165B, AMZN $139B, MSFT $136B, META $115B (combined $575B).

mellon-ai-debt

Record-Breaking AI-Related Debt Issuance in 2025

https://www.mellon.com/insights/insights-articles/record-breaking-ai-related-debt-issuance-in-2025.html

AI infrastructure debt issuance trends and credit implications

$121B hyperscaler debt in 2025, $90B+ in last 3 months. CDS costs rising since June 2025.

information-bigtech-debt

Big Tech Could Borrow Hundreds of Billions Each

https://reader.secondthoughts.workers.dev/posts/1459/view

S&P credit capacity analysis for hyperscaler AI infrastructure

S&P estimates each major hyperscaler could borrow ~$200B while retaining credit rating.

introl-hyperscaler-capex

Hyperscaler CapEx Hits $600B in 2026

https://introl.com/blog/hyperscaler-capex-600b-2026-ai-infrastructure-debt-january-2026

Hyperscaler capital expenditure and debt financing volumes

Big Five raised $108B in debt in 2025 (3.4x historical average). Capital intensity 45-57% of revenue.

accordant-dc-irr

Numbers Supporting 30%+ IRR in Hyperscale Data Center Development

https://www.accordantinvestments.com/blog/the-numbers-supporting-30-irr-returns-in-hyperscale-data-center-development

Hyperscale development economics and return expectations

Ground-up hyperscale targets 25-40% gross IRRs over 3-4 year holds. 10-15 year NNN leases.

caseclosed-starlink

Starlink: Is This Time Different?

https://caseclosed.substack.com/p/starlink-is-this-time-different

LEO satellite constellation financing history and bankruptcy track record

Iridium, OneWeb, ORBCOMM, Globalstar, Teledesic all went bankrupt. Only Starlink avoided bankruptcy, via SpaceX cross-subsidy.

iridium-next-financing

Iridium Signs Coface Facility Agreement / Iridium Refinancing Announcements

https://investor.iridium.com/10-04-2010-iridium-signs-coface-facility-agreement

Iridium NEXT constellation financing: original 2010 Coface-backed credit facility, 2019 commercial refinancing, and 2023 SOFR-based refinancing

Original $1.8B Coface-backed facility (95% ECA guarantee): $1.54B fixed at 4.96%, $0.26B at LIBOR + 1.95%. Refinanced Nov 2019 at LIBOR + 3.75% ($1.45B). Refinanced Sep 2023 to SOFR + 2.25%.

telesat-lightspeed-financing

Telesat Completes $2.54 Billion Funding for Lightspeed

https://www.telesat.com/press/press-releases/2024/09/telesat-completes-2-54-billion-funding-agreements-for-telesat-lightspeed/

Government-backed financing terms for Telesat's LEO constellation

C$2.14B federal loan at CORRA + 4.75%, C$400M Quebec loan. ~US$750M savings vs commercial borrowing.

iridium-wacc-2025

Iridium Communications WACC Analysis

https://www.gurufocus.com/term/wacc/IRDM

WACC decomposition for mature satellite operator Iridium

WACC 7.45%. Cost of equity 8.72% (beta 0.60-0.71), cost of debt 5.34%.

eutelsat-wacc-2025

Eutelsat Communications WACC

https://www.alphaspread.com/security/lse/0jni/discount-rate

WACC components for merged GEO/LEO satellite operator

WACC 6.23%. Cost of equity 6.53%, cost of debt 5.13%.

eutelsat-oneweb-eca-financing

Eutelsat gets nearly 1 billion euros in French-backed ECA financing

https://spacenews.com/eutelsat-gets-nearly-1-billion-euros-in-french-backed-eca-financing/

French state-backed export credit financing for OneWeb LEO satellite procurement

Eutelsat signed ~€975M in ECA financing backed by French state (Bpifrance Assurance Export) for 440 replacement LEO satellites from Airbus. Combined with €1.5B shareholder raise (French government >€700M, UK government €163M), total financing covers estimated €2.2B needed for OneWeb constellation replenishment.

viasat-fy25-financials

Viasat Annual Report FY25

https://investors.viasat.com/static-files/26695466-1245-4ca9-a161-b308e459c03c

Viasat's debt structure as a leveraged satellite operator

Debt/equity 1.65-1.88. 9.0% Senior Secured Notes. S&P B+ negative outlook.

spacex-financial-profile

SpaceX Revenue, Valuation & Funding

https://sacra.com/c/spacex/

SpaceX financial profile including Starlink revenue

$800B valuation (Dec 2025). Revenue $15.5B, Starlink $10B. 2018 term loan at LIBOR + 4.25%.

space-vc-returns

A Different Space Race: Raising Capital (McKinsey)

https://www.mckinsey.com/industries/aerospace-and-defense/our-insights/a-different-space-race-raising-capital-and-accelerating-growth-in-space

Space venture capital return expectations

Space VCs cite 10-15x ROIC minimum. Over $47B private capital invested since 2015.

offshore-wind-wacc

Finance Innovations Can Halve Cost of Capital for Offshore Wind

https://www.gwec.net/news/new-innovations-in-finance-can-half-the-cost-of-capital-for-offshore-wind

How blended finance reduces offshore wind WACC over time

Baseline 10-12% WACC driven to 6-7% over decades. Philippines: 11.72% to 6.54%.

leo-insurance-market

Satellite Launches Up, Insurance Takeup Down

https://www.businessinsurance.com/satellite-launches-up-insurance-takeup-down/

Space insurance market data showing LEO self-insurance trend

Of ~10,000 active satellites, only ~300 insured (mostly GEO). Fewer than 50 LEO satellites insured.

industry-wacc-benchmarks-2025

Cost of Capital by Industry: Benchmarks 2025

https://www.phoenixstrategy.group/blog/cost-of-capital-industry-benchmarks

Cross-industry WACC ranges

Technology 8.5-12.0%, Energy & Natural Resources 9.0-13.5%, Real Estate 5.5-8.5%.

gs-dc-power-demand-2025

AI to Drive 165% Increase in Data Center Power Demand by 2030

https://www.goldmansachs.com/insights/articles/ai-to-drive-165-increase-in-data-center-power-demand-by-2030

Goldman Sachs Research projects data center power demand reaching ~122 GW globally by end of 2030.

GS forecasts 165% increase in data center power demand by 2030 vs 2023, driven by AI (27% of market by 2027). Current global usage ~55 GW; projected ~122 GW by 2030.

ieefa-pjm-capacity-prices

Projected Data Center Growth Spurs PJM Capacity Prices by Factor of 10

https://ieefa.org/resources/projected-data-center-growth-spurs-pjm-capacity-prices-factor-10

IEEFA analysis of how data center demand drove PJM capacity prices from $28.92 to $329.17/MW-day.

PJM capacity prices rose ~10x from 2024/25 to 2026/27. Data centers responsible for 63% of the 2025/26 price increase.

rmi-gas-turbine-constraints

Gas Turbine Supply Constraints Threaten Grid Reliability

https://rmi.org/gas-turbine-supply-constraints-threaten-grid-reliability-more-affordable-near-term-solutions-can-help/

RMI analysis of gas turbine supply chain bottlenecks, rising costs, and alternative solutions.

Three OEMs supply 75%+ of gas turbines; lead times extended to 2028-2030. Utilities' planned gas capacity doubled from 25 GW to 45 GW by 2030.

grist-btm-gas-2026

Data Centers Are Scrambling to Power the AI Boom with Natural Gas

https://grist.org/energy/data-centers-natural-gas-methane-behind-the-meter/

Grist investigation of 46+ data centers deploying 56 GW of BTM gas generation.

46 data centers with 56 GW combined BTM capacity identified. 1,000+ GW gas-fired power in development globally.

eesi-dc-energy-bills

Data Center Power Demands Are Contributing to Higher Energy Bills

https://www.eesi.org/articles/view/data-center-power-demands-are-contributing-to-higher-energy-bills

EESI analysis of how data center demand is driving electricity price increases.

US avg electricity price rose 27% from 2019 to 2025. New gas plant costs tripled since 2022.

bnef-dc-power-106gw

U.S. Data Center Power Demand Could Reach 106 GW by 2035: BloombergNEF

https://www.utilitydive.com/news/us-data-center-power-demand-could-reach-106-gw-by-2035-bloombergnef/806972/

BNEF raised its US data center demand forecast 36% to 106 GW by 2035.

BNEF forecast US data center power demand at 106 GW by 2035, up 36% from April 2025 forecast.

mckinsey-dc-power-2030

Data Centers and AI: How the Energy Sector Can Meet Power Demand

https://www.mckinsey.com/industries/private-capital/our-insights/how-data-centers-and-the-energy-sector-can-sate-ais-hunger-for-power

McKinsey projects 219 GW global data center demand by 2030, with 156 GW for AI workloads.

McKinsey forecasts a 3.5x increase in data center capacity demand 2025-2030. US demand to more than triple, from 25 GW to 80+ GW by 2030.

epoch-ai-power-30gw

Global AI Power Capacity Is Now Comparable to Peak Power Usage of New York State

https://epoch.ai/data-insights/ai-datacenter-power

Epoch AI estimates total AI data center power capacity reached ~30 GW by end of 2025.

AI data center power capacity reached ~30 GW by Q4 2025. Computing capacity growing at ~3.3x per year since 2022.

iea-energy-and-ai-2025

Energy Demand from AI - IEA Energy and AI Report

https://www.iea.org/reports/energy-and-ai/energy-demand-from-ai

IEA projects global data center electricity consumption doubling to ~945 TWh by 2030.

Current data center electricity ~415 TWh (1.5% global). Base case projects doubling to 945 TWh by 2030.

camus-grid-interconnection

Why Does It Take So Long to Connect a Data Center to the Grid?

https://www.camus.energy/blog/why-does-it-take-so-long-to-connect-a-data-center-to-the-grid

Camus Energy explains grid interconnection bottlenecks and timelines.

Grid interconnection timelines rose from under 2 years (2008) to over 8 years (2025).

ge-vernova-backlog-2025

GE Vernova Expects to End 2025 with an 80-GW Gas Turbine Backlog

https://www.utilitydive.com/news/ge-vernova-gas-turbine-investor/807662/

GE Vernova gas turbine production targets and backlog stretching to 2029.

GE Vernova targets 20 GW annualized production by mid-2026, stretch to 24 GW by mid-2028. Backlog extends to 2029.

fervo-geothermal-2025

Fervo Energy Raises $462M Series E for Geothermal Development

https://fervoenergy.com/fervo-energy-raises-462-million-series-e-to-accelerate-geothermal-development-and-meet-surging-energy-demand-with-clean-firm-power/

Fervo raised $462M to accelerate enhanced geothermal, targeting 500 MW by 2028.

Cape Station: 100 MW by 2026, 500 MW by 2028. Tripled drilling speed, halved per-well costs.

gasturbinehub-market-2025

2025: The Year the Gas Turbine Market Quietly Rewired Itself

https://gasturbinehub.com/2025-the-year-the-gas-turbine-market-quietly-rewired-itself/

Industry analysis of gas turbine market transformation driven by data center demand.

6-7 year OEM delivery horizons. 20-30% EPC cost rise since 2021. Gas turbine manufacturing at ~90% utilization.

dc-geothermal-frontier

How Geothermal Energy Is Gaining Ground in AI Data Center Power Strategies

https://www.datacenterfrontier.com/energy/article/55339222/how-geothermal-energy-is-gaining-ground-in-ai-data-center-power-strategies

Analysis of geothermal economics for data centers including 1 GW modeled project.

Current geothermal LCOE ~$88/MWh with tax credits; projected $50-60/MWh by 2035. US has ~3,400 GW potential.

spacex-ai-sat-mini-spacenews

SpaceX offers details on orbital data center satellites

https://spacenews.com/spacex-offers-details-on-orbital-data-center-satellites/

SpaceNews reporting on SpaceX AI Sat Mini specifications and megawatt-class plans

AI Sat Mini: 100 kW, ~1 ton, ~180 m wingspan, custom D3 chip designed to run hot with radiation protection. ~100 per Starship launch.

spacex-ai-sat-mini-daniel-marin

AI Sat Mini: SpaceX's 180-meter-long orbital data centers

https://danielmarin.naukas.com/2026/03/23/ai-sat-mini-los-centros-de-datos-orbitales-de-spacex-de-180-metros-de-longitud/

Daniel Marin's technical analysis of AI Sat Mini dimensions and deployment

~1 ton, ~180 m wingspan (exceeding ISS 108.5 m), SSO, ~100 per Starship V3 launch.

starship-payload-specs

Starship of SpaceX - eoPortal

https://www.eoportal.org/other-space-activities/starship-of-spacex

Technical specifications for Starship payload capacity

9 m diameter fairing, 18 m height, ~1,100 m³ volume, 100+ metric tons to LEO.
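Taken together with the AI Sat Mini figures above (100 kW and ~1 t per satellite, ~100 per launch), these payload specs admit a quick consistency check. A back-of-envelope sketch using only the numbers already cited in these entries:

```python
# Back-of-envelope check: AI Sat Mini manifest vs. Starship capacity.
SAT_POWER_KW = 100          # per satellite (spacex-ai-sat-mini-spacenews)
SAT_MASS_T = 1.0            # ~1 metric ton per satellite
SATS_PER_LAUNCH = 100       # ~100 per Starship launch
LEO_CAPACITY_T = 100        # 100+ t to LEO (starship-payload-specs)
LAUNCHES_PER_YEAR = 10_000  # Musk/Handmer high-cadence scenario

manifest_mass_t = SAT_MASS_T * SATS_PER_LAUNCH             # 100 t, fits 100+ t capacity
power_per_launch_mw = SAT_POWER_KW * SATS_PER_LAUNCH / 1_000       # 10 MW per launch
annual_power_gw = power_per_launch_mw * LAUNCHES_PER_YEAR / 1_000  # 100 GW/yr

print(manifest_mass_t, power_per_launch_mw, annual_power_gw)  # 100.0 10.0 100.0
```

The 100 GW/yr result matches the orbital power scale cited in handmer-2026 and musk-2026.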

nvl72-rack-physical-specs

Is Your Data Center Ready for the NVIDIA GB200 NVL72?

https://www.sunbirddcim.com/blog/your-data-center-ready-nvidia-gb200-nvl72

Physical specifications of the GB200 NVL72 rack

0.6 m × 1.07 m × 2.24 m, 1,360 kg, 120 kW, 72 Blackwell GPUs.

nasa-atcs-overview

NASA ISS Active Thermal Control System Overview

https://www.nasa.gov/wp-content/uploads/2021/02/473486main_iss_atcs_overview.pdf

Technical reference for the ISS external cooling system

70 kW maximum rejection via 6 radiator ORUs (~460 m² total), ammonia loops, ~13,000 kg system mass.
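Combining these ATCS figures with the GB200 NVL72 rack specs cited above gives a rough sense of the radiator problem. A sketch assuming ISS-class rejection density carries over, which is pessimistic for higher-temperature modern radiators:

```python
# Rough scaling: radiator area to reject one GB200 NVL72 rack's 120 kW
# at ISS ATCS-class rejection density. Illustrative only.
iss_reject_kw = 70.0
iss_area_m2 = 460.0
density_w_m2 = iss_reject_kw * 1_000 / iss_area_m2   # ~152 W/m2

rack_kw = 120.0                                      # GB200 NVL72 rack power
area_needed_m2 = rack_kw * 1_000 / density_w_m2      # ~789 m2 per rack
print(round(density_w_m2), round(area_needed_m2))
```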

celeroton-space-thermal

Celeroton Space Thermal Management Systems

https://celeroton.com

ESA-funded heat pump technology for space radiator temperature boosting

Boosting from 80°C to 150°C reduces radiator area ~60%, COP 3-5, 5-10% compute power cost.
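The ~60% area-reduction claim is consistent with the Stefan-Boltzmann law, under which radiator area for a fixed heat load scales as 1/(T_rad^4 − T_sink^4). A sanity-check sketch; the sink temperature is an assumption, not from the source:

```python
# Sanity check of the "~60% radiator area reduction" via Stefan-Boltzmann:
# A ∝ Q / (T_rad^4 - T_sink^4) for a fixed heat load Q.
T1 = 80 + 273.15    # baseline radiator temperature, K
T2 = 150 + 273.15   # heat-pump-boosted temperature, K
T_SINK = 250.0      # assumed effective sink temperature, K (not from source)

def rel_area(T):
    """Relative radiator area for a fixed heat load."""
    return 1.0 / (T**4 - T_SINK**4)

reduction = 1 - rel_area(T2) / rel_area(T1)
print(f"{reduction:.0%}")   # ~59%, consistent with the cited ~60%
```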

nasa-rosa-gateway

NASA Gateway Power and Propulsion Element

https://www.nasa.gov/missions/artemis/gateway/a-powerhouse-in-deep-space-gateways-power-and-propulsion-element/

Gateway PPE 60 kW ROSA deployment specifications

2 ROSA wings, 60 kW total, 100-120 W/kg, 40 kW/m³ stowed power density.

megaflex-sbir

MegaFlex Scale-Up to 175 kW/Wing

https://www.sbir.gov/sbirsearch/detail/388526

NASA SBIR for Northrop Grumman MegaFlex solar array

Up to 200 W/kg, 175 kW per wing, fan-fold circular deployment, TRL 5-6.

nasa-300kw-solar-array-structures

Solar Array Structures for 300 kW-Class Spacecraft

https://ntrs.nasa.gov/citations/20140000360

NASA study validating 300 kW solar array structural feasibility

Designed and ground-tested for Solar Electric Propulsion missions.

k2-gravitas-orbital-today

K2 Space to Launch Satellite for Orbital Data Centers

https://orbitaltoday.com/2026/03/23/k2-space-to-launch-satellite-that-could-pave-the-way-for-orbital-data-centers/

K2 Space Gravitas satellite launch and Giga-Class platform development

Gravitas ~2 tons, 40 m wingspan, 20 kW, launching March 2026. Giga-Class: 110 kW, 15,000 kg payload.

google-suncatcher-research

Exploring a space-based, scalable AI infrastructure system design

https://research.google/blog/exploring-a-space-based-scalable-ai-infrastructure-system-design/

Google Research blog on the Suncatcher orbital AI compute architecture

81-satellite formation, 1.6 Tbps ISLs demonstrated, Trillium v6e TPUs, prototype 2027.

nonuniform-tensor-parallelism

Nonuniform-Tensor-Parallelism: Mitigating GPU failure impact for Scaled-up LLM Training

https://arxiv.org/abs/2504.06095

Meta paper on fault tolerance with GPU replacement timeline data and spare capacity requirements

Hardware failure recovery takes ~5 days for physical replacement ("perhaps on the low-side for replacing high-demand hardware"). Clusters spend 81% of their time with >0.1% of GPUs failed. DP-DROP requires ~11.4% spare capacity; NTP reduces this to ~2%. TP8 at a 0.4% failure rate: >99% availability. TP64: ~80%.
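The TP64 figure can be roughly reproduced with a naive independence model: a tensor-parallel group of g GPUs is fully healthy with probability (1−p)^g. This is a simplification, not the paper's availability model — NTP lets degraded groups keep running, which is why the paper's TP8 availability exceeds the naive estimate:

```python
# Naive model: a tensor-parallel group of g GPUs is fully healthy with
# probability (1 - p) ** g, assuming independent per-GPU failures.
p = 0.004  # per-GPU failure rate from the paper

for g in (8, 64):
    print(g, round((1 - p) ** g, 3))   # ~0.968 for TP8, ~0.774 for TP64
```

TP64 lands near the cited ~80%; the gap at TP8 (naive 96.8% vs cited >99%) is the degraded-mode operation NTP provides.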

jacklin-small-satellite-failure-rates

Small-Satellite Mission Failure Rates (NASA)

https://ntrs.nasa.gov/citations/20190002705

NASA study of satellite reliability across mass categories

After controlling for design maturity, micro- and minisatellites are equally reliable across mass categories (~98%).

payload-space-isam-2025

The State of ISAM 2025

https://payloadspace.com/the-state-of-isam-2025/

Industry survey of in-space servicing, assembly, and manufacturing readiness

Servicing TRL 7-9, assembly TRL 4-7, manufacturing TRL 5-7.

gitai-iss-demo

GITAI S2 ISS External ISAM Demonstration

https://gitai.tech

First autonomous robotic ISAM tasks outside the ISS (March 2024)

Dual robotic arm achieved TRL 7 for autonomous assembly tasks.

ascend-project-specs

ASCEND - Advanced Space Cloud for European Net zero emission

https://ascend-horizon.eu/activities/

EU Horizon Europe feasibility study for orbital data center infrastructure

800 kW blocks, 10 MW MVP by 2036, ~115 kg/kW specific mass, requires in-orbit assembly.

starcloud-satellite-progression

Starcloud plans its next moves after training first AI model in space

https://www.geekwire.com/2025/starcloud-power-training-ai-space/

Starcloud satellite roadmap from single-GPU to 100 kW

Starcloud-1 (1 GPU, operational), Starcloud-2 (multi-GPU, 2027), Starcloud-3 (100 kW, Starship).

sophia-space-tile

Sophia Space TILE Architecture

https://www.geekwire.com/2026/sophia-space-launches-orbital-data-center-plans/

Modular 1 m × 1 m compute tiles with integrated cooling

92% power-to-compute efficiency, passive cooling, ground test 2026, orbit 2027-2028.

kepler-comms-tranche1

Kepler Communications Tranche 1 Distributed Compute Cluster

https://www.kepler.space

First operational distributed on-orbit computing service

10 satellites, 4× Jetson Orin each, 100 Gbps ISLs, operational March 2026.

china-xingshidai

Xingshidai AI Satellite Constellation

https://www.datacenterdynamics.com/en/news/chinese-ai-satellite-constellation-launches-12-satellites/

China's first operational AI satellite constellation (May 2025)

12 satellites, 744 TOPS each, 8B model, 100 Gbps laser ISLs, target 2,800 satellites.

darpa-nom4d

DARPA NOM4D Program

https://www.darpa.mil/research/programs/novel-orbital-and-moon-manufacturing-materials-and-mass-efficient-design

Novel orbital manufacturing and assembly demonstrations

Phase 3 orbital demos in 2026: 1.4 m truss construction and carbon fiber polymerization.

semianalysis-silicon-shortage

The Great AI Silicon Shortage

https://open.substack.com/pub/semianalysis/p/the-great-ai-silicon-shortage

SemiAnalysis analysis of front-end wafer capacity as the dominant AI compute bottleneck (March 2026)

Documents structural shift from power-constrained to silicon-supply-constrained AI compute. NVIDIA locked majority of logic, memory, and component supply. N3 fully utilized through 2027.

semianalysis-memory-mania

Memory Mania: How a Once-in-Four-Decades Shortage Is Fueling a Memory Boom

https://reader.secondthoughts.workers.dev/posts/835/view

SemiAnalysis analysis of HBM capacity dynamics and dual memory shortage

HBM wafer capacity 5x-ing in 4 years but supply shortfall persists at 5-9% through 2027. Commodity DRAM also in ~7% deficit.

epoch-packaging-bottleneck

Advanced packaging and HBM — not logic dies — were the bottlenecks on AI chip production in 2025

https://epochai.substack.com/p/advanced-packaging-and-hbm-not-logic

Epoch AI analysis showing AI consumed ~90% of packaging/HBM but only 12% of logic in 2025

CoWoS and HBM are almost entirely consumed by AI demand; logic serves broader markets.

epoch-scaling-2030

Can AI scaling continue through 2030?

https://epoch.ai/blog/can-ai-scaling-continue-through-2030

Epoch AI assessment of chip manufacturing vs power as binding constraint

Projects ~100M H100-equiv GPUs by 2030. Concludes power binds before chips.

luminix-euv

ASML EUV Shipment Projections

(Luminix industry analysis)

Independent analysis corroborating ASML production rates

48 EUV systems shipped in 2025. Consensus: 64-67 for 2026, 80-85 for 2027.

asml-1kw-source

ASML Set to Boost Chip Output 50% by 2030

https://wccftech.com/asml-set-to-boost-chip-output-by-ramping-euv-power-to-a-kilowatt/

ASML 1kW EUV source upgrade to boost throughput 50%

Source power from 600W to 1kW enables 330 wph (from 220). Upgrade packages for existing tools.

tsmc-demand-gap

TSMC Advanced Node Demand Gap

(TSMC earnings, TrendForce, Fusion Worldwide)

TSMC reports demand 3x available advanced-node capacity

Demand "about three times" available capacity; suppliers could sell "20-50% more" if it existed.

nvidia-q4-fy2026

Nvidia FY2026 Q4 results — $68B revenue

https://fortune.com/2026/02/25/nvidia-nvda-earnings-q4-results-jensen-huang/

NVIDIA financial results showing $216B annual revenue and $95B supply commitments

FY2026 data center revenue $197.3B. Supply commitments nearly doubled to $95.2B.

uvation-h100-availability

H100 Availability: The Silent Crisis

https://uvation.com/articles/h100-availability-the-silent-crisis-threatening-enterprise-ai-plans

Analysis of 4-tier GPU allocation hierarchy and enterprise access barriers

Enterprises face 6-12 month waits. Grey market at $25-40/hour.

coreweave-nvidia

CoreWeave Deep Dive

https://introl.com/blog/coreweave-gpu-cloud-ai-infrastructure-deep-dive-2025

CoreWeave's preferential NVIDIA allocation via $250M investment

First to deploy GB200 NVL72 and GB300. Fleet: 250K+ GPUs.

silicon-analysts-share

NVIDIA GPU Market Share 2024-2026

https://siliconanalysts.com/analysis/nvidia-ai-accelerator-market-share-2024-2026

NVIDIA market dominance and competitive dynamics

87% peak share (2024), declining to ~75% (2026). H100: $3,320 cost, $28,000 price.

saudi-gpu-deal

NVIDIA sending 18,000 GPUs to Saudi Arabia

https://www.tomshardware.com/pc-components/gpus/nvidia-sending-18-000-ai-gpus-to-saudi-arabias-state-backed-ai-data-centers-in-wake-of-cancelled-export-rules

US-approved GPU sale to Saudi Arabia requiring government authorization

~18,000 GB300 GPUs for HUMAIN/G42 500MW datacenter.

amd-openai-deal

AMD-OpenAI 6 GW multi-year GPU deal

https://logisticsviewpoints.com/2025/10/06/amd-and-openai-sign-long-term-gpu-deployment-agreement-strategic-and-supply-chain-considerations/

AMD deploys Instinct GPUs to OpenAI with equity component

6 GW MI450 starting H2 2026. AMD issued ~10% equity warrant.

introl-secondary-market

Secondary GPU Markets Guide 2025

https://introl.com/blog/secondary-gpu-markets-buying-selling-used-hardware-guide-2025

Secondary GPU market pricing and structure

H100 at 50-85% of new. 300+ new GPU cloud providers in 2025.

google-meta-tpu

Google-Meta Multibillion-Dollar TPU Rental Deal

https://reader.secondthoughts.workers.dev/posts/1482/view

Meta signs multi-year TPU deal with Google

Signals chip supply constraints driving diversification among largest buyers.

sk-hynix-shortage

Chip wafer shortage through 2030 — SK Hynix chief

https://www.networkworld.com/article/4146270/chip-wafer-shortage-will-run-through-2030-as-ai-demand-overwhelms-supply-sk-hynix-chief.html

SK Group chairman projects wafer shortage through 2030

Wafer deficit >20%, requiring 4-5 years of capacity building.

hbm-export-controls

High-Bandwidth Memory: Critical Gaps in Export Controls

https://reader.secondthoughts.workers.dev/posts/712/view

HBM industry concentration — 3 companies control 97% of production

SK Hynix (53-62%), Samsung (35%), Micron (11%).

payload-nvidia-space1

Nvidia Unveils Space-1 Vera Rubin Module

(Payload newsletter, March 2026)

NVIDIA purpose-built space compute at GTC 2026

25x H100 compute. Six launch customers.

introl-2026

Orbital Data Center Competitive Landscape 2026

(Introl, 2026)

Mapping of orbital operators to chip procurement strategies

Operators split across commercial NVIDIA, custom silicon, and custom fab strategies.

lbnl-solar-land-2022

Land Requirements for Utility-Scale PV (Bolinger & Bolinger, 2022)

https://emp.lbl.gov/publications/land-requirements-utility-scale-pv

Definitive empirical study of solar land-use intensity across >90% of US utility PV

Fixed-tilt median: 2.8 acres/MW_DC; tracking median: 4.2 acres/MW_DC. Power density improved 43-52% from 2011-2019.
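These acres-per-MW medians convert directly into the W/m² power-density metric used elsewhere in the land-use literature. A small conversion sketch:

```python
# Convert between the two common solar land-use metrics:
# acres per MW_DC and W_DC per square meter.
ACRE_M2 = 4046.86  # square meters per acre

def w_per_m2(acres_per_mw):
    return 1e6 / (acres_per_mw * ACRE_M2)

print(round(w_per_m2(2.8)))  # fixed-tilt median -> ~88 W/m2
print(round(w_per_m2(4.2)))  # tracking median   -> ~59 W/m2
```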

nrel-land-use-2013

Land-Use Requirements for Solar Power Plants in the US (Ong et al., 2013)

https://docs.nrel.gov/docs/fy13osti/56290.pdf

Original NREL baseline study on solar land requirements

Total capacity-weighted land use: 7.3 acres/MW_AC (direct), 8.9 acres/MW_AC (total).

jordaan-solar-land-metrics-2025

Quantifying Land-Use Metrics for Solar PV Projects in the Western US

https://www.nature.com/articles/s43247-025-02862-5

2025 empirical analysis of 719 solar projects in Western Interconnection

Average capacity-based land-use efficiency of 24.7 W/m² (~4.9 acres/MW).

doe-solar-futures-2021

DOE Solar Futures Study (2021)

https://www.energy.gov/eere/solar/solar-futures-study

Comprehensive DOE study on solar deployment scenarios through 2050

1,570 GW_DC by 2050 requires ~10.3M acres (0.5% of US). Concludes land will not limit deployment.

nrel-federal-lands-solar-2025

Vast Federal Lands Have Potential for Renewable Energy (NREL, 2025)

https://www.nrel.gov/news/program/2025/vast-federal-lands-have-potential-for-renewable-energy-but-only-a-small-fraction-is-needed.html

NREL study quantifying 5,750 GW of solar potential on federal lands

44 million acres of federal land; central scenarios deploy 51-84 GW on <2M acres by 2035.

blm-western-solar-plan

BLM Western Solar Plan — 31 Million Acres

https://www.powermag.com/blm-considering-31-million-acres-of-u-s-public-lands-for-solar-power-development/

BLM designation of federal land for solar across 11 western states

31 million acres designated (up from 19M). Within 15 miles of high-voltage transmission.

breakthrough-solar-land-2024

Is Utility-Scale Solar Stealing Our Food? Think Again

https://thebreakthrough.org/issues/food-agriculture-environment/is-utility-scale-solar-stealing-our-food-think-again

Analysis of solar's tiny footprint relative to US farmland

Solar occupied 336,090 acres in 2020, <0.04% of 897M acres of farmland.

smartenergyusa-solar-land-lease-2026

How Much Do Solar Companies Pay to Lease Land?

https://www.smartenergyusa.com/blog/how-much-do-solar-companies-pay-to-lease-land/

Regional breakdown of US solar land lease rates

National range $250-$1,000/acre/year. Typical escalation 1.5-2.5%/year.

bisnow-dc-land-prices-2024

Rising Land Prices Spell Trouble For Data Center Developers

https://www.bisnow.com/national/news/data-center/rising-land-prices-spell-trouble-for-some-data-center-developers-124988

Per-acre data center land prices across major US markets

NoVA at $2-3.75M/acre; Silicon Valley $5-6M; land historically <10% of DC cost but rising.

datacenters-com-land-prices-2025

Data Center Land Deals: Why Prices Are Skyrocketing

https://www.datacenters.com

Average US data center land pricing trends

Average $244K/acre (50+ acre parcels, 2024), up 23% YoY.

credaily-dc-land-market-2026

Data Centers Dominate US Land Market

https://www.credaily.com/briefs/data-centers-dominate-us-land-market/

Data centers outbidding other land uses in key markets

Amazon $700M for single site. 64% of new DC capacity in frontier markets.

lancaster-farming-dc-solar-va-2025

Data Centers Clobber Solar in Quest for Virginia Farmland

https://www.lancasterfarming.com/farming-news/conservation/data-centers-clobber-solar-in-quest-for-virginia-farmland/article_ce12c9f1-5f32-4d71-abff-ab886655f1d9.html

Data center developers displacing solar in Virginia farmland competition

Solar leases at 10x ag rents still can't match DC purchase prices.

pv-mag-opposition-zoning-2025

US Renewable Energy Rollout Slows Amid Local Opposition

https://www.pv-magazine.com/2025/07/15/us-renewable-energy-rollout-slows-amid-local-opposition-zoning-laws/

459 counties in 44 states with severe renewable energy restrictions

262 solar projects contested in 2024; 31 canceled. 16% increase in restrictions.

virginia-mercury-solar-rejected-2024

Data Centers Approved, Solar Farms Rejected in Rural Virginia

https://virginiamercury.com/2024/12/03/data-centers-approved-solar-farms-rejected-what-is-going-on-in-rural-virginia/

Virginia counties rejecting more solar MW than approved in 2024

First time more solar rejected than approved, while DCs approved in same communities.

datacenterwatch-64b-blocked-2025

$64B of Data Center Projects Blocked or Delayed

https://www.datacenterwatch.org/report

Tracking opposition to data center projects

$18B blocked, $46B delayed (2024-2025). 142 activist groups in 24 states.

enr-grid-not-land-bottleneck

Grid Access, Not Land, Emerges as Bottleneck

https://www.enr.com/articles/62227-grid-access-not-land-emerges-as-bottleneck-for-data-center-construction

ENR analysis confirming power delivery is the binding constraint

222 GW announced DC capacity vs 147 GW deliverable = 75 GW gap.

exowatt-dispatchable-solar

Powering AI at Scale: Modular Dispatchable Solar for Data Centers

https://www.exowatt.com/blog/powering-ai-at-scale-modular-dispatchable-solar-for-data-centers-3

Analysis of desert land potential for DC solar in US Southwest

Hundreds of thousands of acres could support 1,200+ GW of DC capacity.

pv-magazine-agrivoltaics-lcoe-2026

Agrivoltaics LCOE Premium Study

https://www.pv-magazine.com/2026/02/11/scientists-say-land-preservation-costs-should-be-factored-into-agrivoltaics-lcoe-calculations/

German study showing 4-148% LCOE premium for agrivoltaics

Agricultural value too small to offset higher system costs; land preservation is the rationale.

seia-wood-mackenzie-2026

SEIA/Wood Mackenzie US Solar Market Insight (2026)

https://www.seia.org/research-resources/solar-market-insight-report

Industry solar installation tracking

43 GW installed in 2025 (279 GW cumulative). 769 GW projected by 2036.

nrel-atb-2024-solar

NREL Annual Technology Baseline 2024 — Utility-Scale PV

https://atb.nrel.gov/electricity/2024/utility-scale_pv

Solar cost, efficiency, and capacity factor projections through 2050

Panel efficiency to 28% by 2050 via tandem cells. 7-15% CF improvement by 2035.

pv-mag-solar-per-acre-2022

More Solar Per Acre (PV Magazine, 2022)

https://pv-magazine-usa.com/2022/01/20/more-solar-per-acre-50-more-panels-and-30-more-electricity-over-the-past-decade/

Analysis of 52% power density improvement from 2011-2019

Fixed-tilt 52% improvement, tracking 43%, driven by higher-efficiency modules.

eia-capacity-factors

EIA Solar Capacity Factors by State

https://www.eia.gov/todayinenergy/detail.php?id=39832

Official US solar capacity factor data by state and region

National average ~25%. Arizona 29.1%. Southwest 26-28%. Northeast 14-16%.
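Capacity factors translate to annual energy via E = capacity × 8,760 h × CF. A minimal sketch using the state figures above:

```python
# Annual energy yield implied by a solar capacity factor:
# E [MWh/yr] = capacity [MW] * 8760 h * CF.
HOURS_PER_YEAR = 8760

def annual_mwh(capacity_mw, cf):
    return capacity_mw * HOURS_PER_YEAR * cf

print(round(annual_mwh(1, 0.291)))  # 1 MW in Arizona   -> ~2549 MWh/yr
print(round(annual_mwh(1, 0.15)))   # 1 MW in Northeast -> ~1314 MWh/yr
```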

act-cchps-space

Heat Pipes In Space: How CCHPs Are Used In Spacecraft Thermal Control

https://www.1-act.com/resources/blog/heat-pipes-in-space-cchps/

Advanced Cooling Technologies overview of constant conductance heat pipes for spacecraft

CCHPs transport thermal energy several meters in microgravity. Practical designs up to ~15 feet (~4.6 m). Aluminum extrusions with ammonia working fluid are the standard. Can be bent into 2D and 3D configurations. Transport capacity depends on diameter, working fluid, and satellite architecture.

broadstaff-dc-staffing-levels

Data Center Staffing Levels: How Many People Does a Facility Need?

https://broadstaffglobal.com/data-center-staffing-levels-how-many-people-does-a-facility-need

Industry benchmarks for data center staffing density by facility size

Small (1-5 MW): 8-15 staff. Medium (5-20 MW): 15-35 staff. Large (20+ MW): 35+ staff. A 12 MW facility requires ~20 FTEs; a 40 MW facility ~45. Hyperscale (100+ MW) achieves lower FTE/MW through automation. Cites Uptime Institute staffing forecast.

ncsl-dc-incentives

Policy Snapshot: Data Center Incentives

https://www.ncsl.org/fiscal/policy-snapshot-data-center-incentives

National Conference of State Legislatures overview of state data center tax incentive programs

37 states offer DC tax incentives as of 2025. Five states (Alabama, Iowa, Montana, Nevada, Oklahoma) explicitly offer property tax relief. Job creation requirements range from 5 to 50 jobs. Iowa granted property tax exemptions beginning 2027. Louisiana allows 20-30 year tax breaks for $200M+ investments.

abitos-dc-tax-incentives

Tax Incentives for Building and Operating Data Centers

https://abitos.com/tax-incentives-data-centers-2025/

AbitOs overview of state-level data center tax incentives including property tax abatements

Incentives typically last 10-20 years; Alabama up to 30 years. Property tax abatements administered at local level, varying widely within states. Nevada: up to 75% personal property tax abatement for 10-20 years. Minnesota: permanent property tax exemption on equipment. Virginia: $732M in subsidies (2024). Texas: $1B+ in subsidies (2025). Most packages are individually negotiated.

pan-2005-norris-landzberg-sac

Solder Joint Reliability Acceleration Model (TI E2E Forum, citing Pan et al. 2005)

https://e2e.ti.com/support/data-converters-group/data-converters/f/data-converters-forum/730473/solder-joint-reliability-acceleration-model

Norris-Landzberg equation parameters for SAC305 lead-free solder

Defines the standard acceleration factor model for solder joint fatigue. SAC305 parameters: n=2.65, m=0.136, Ea/k=2185 K. SnPb parameters: n=1.9, m=0.33, Ea/k=1414 K.
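The cited parameters slot into the standard Norris-Landzberg model, shown here in its common form (sign and ratio conventions vary slightly across authors):

```latex
% Norris-Landzberg cycles-to-failure model:
N_f = C\, f^{-m}\, (\Delta T)^{-n}\,
      \exp\!\left(\frac{E_a}{k\,T_{\max}}\right)
% Acceleration factor between test and field conditions:
AF = \frac{N_{f,\mathrm{field}}}{N_{f,\mathrm{test}}}
   = \left(\frac{f_{\mathrm{test}}}{f_{\mathrm{field}}}\right)^{m}
     \left(\frac{\Delta T_{\mathrm{test}}}{\Delta T_{\mathrm{field}}}\right)^{n}
     \exp\!\left[\frac{E_a}{k}
       \left(\frac{1}{T_{\max,\mathrm{field}}}
           - \frac{1}{T_{\max,\mathrm{test}}}\right)\right]
```

With the SAC305 values above (n = 2.65, m = 0.136, Ea/k = 2185 K), the ΔT ratio term dominates for large test-to-field temperature swings.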

chen-2014-sac305-bga-fatigue

Thermal Cycling Life Prediction of Sn-3.0Ag-0.5Cu Solder Joint (PMC 2014)

https://pmc.ncbi.nlm.nih.gov/articles/PMC4121147/

Weibull fatigue life data for SAC305 BGA solder joints under thermal cycling

BGA with SAC305 at -40°C to +125°C showed Weibull characteristic life of 3,104 cycles. Pan acceleration factor of 35.5 from test ΔT=165°C to field ΔT=60°C.

pmc-2019-satellite-thermal-cycling

Temperature Sensor Assisted Lifetime Enhancement of Satellite Embedded Systems (PMC 2019)

https://pmc.ncbi.nlm.nih.gov/articles/PMC6891388/

Thermal cycling effects on satellite electronics with on-orbit data from SwissCube

SwissCube CubeSat measured 60°C external ΔT in LEO. Modified Coffin-Manson exponents: q=1-3 for solder. Temperature-aware task mapping achieves 8x lifetime improvement.

esa-ogs-tenerife

Observatorio del Teide — ESA Science & Technology

https://sci.esa.int/web/smart-1/-/36326-observatorio-del-teide

ESA page on Observatorio del Teide site: altitude 2,393m, above cloud level

ESA's Optical Ground Station at Observatorio del Teide, Tenerife, sits at 2,393 m altitude, above the first inversion layer and the cloud level — optimal conditions for Earth-to-space optical communications.

esa-thermal-control

Current and Future Techniques for Spacecraft Thermal Control (ESA Bulletin)

https://www.esa.int/esapub/bulletin/bullet87/paroli87.htm

ESA overview of spacecraft thermal control technologies and operating temperature requirements

Generic electronics: -20°C to +70°C; batteries: -5°C to +20°C; louvre mechanisms achieve ±5°C regulation accuracy.

electronics-cooling-1996-space

Thermal Control of Space Electronics (Electronics Cooling Magazine)

https://www.electronics-cooling.com/1996/09/thermal-control-of-space-electronics/

Reference on spacecraft electronics temperature control with specific operating ranges

Spacecraft electronics typically -10°C to +50°C; max junction temp goal 110°C; radiator performance up to 350 W/m² at 40°C.

pmc-2024-thermal-fatigue-review

Thermal Fatigue Failure of Micro-Solder Joints in Electronic Packaging Devices: A Review (Materials, 2024)

https://pmc.ncbi.nlm.nih.gov/articles/PMC11123225/

Comprehensive review of thermal fatigue failure mechanisms in electronic packaging solder joints

70% of electronic device failures originate in packaging and assembly; thermomechanical fatigue is the leading cause (~55%) of PCBA failures.

linux-see-cots-soc-2025

When Radiation Meets Linux: Analyzing Soft Errors in Linux on COTS SoCs under Proton Irradiation

https://arxiv.org/html/2503.03722v2

Proton irradiation testing of three COTS Linux SoCs with on-orbit rate calculations

NXP i.MX 8M Plus (14nm FinFET) calculated crash rate 0.44-0.78/year at ISS orbit. 14nm showed 5-14x lower cross-section vs 40nm.

ball-sheets-sel-7nm-finfet-2021

Single-Event Latchup in a 7-nm Bulk FinFET Technology (IEEE TNS, 2021)

https://ieeexplore.ieee.org/document/9324760/

First characterization of SEL in 7nm bulk FinFET showing increased sensitivity

7nm FinFET has 3x shallower trench isolation, increasing SEL sensitivity. Holding voltage as low as 0.85V. Confirmed by 64 MeV proton beam testing.

sel-destructive-fraction

Reliability Impacts of Non-Destructive Single-Event Latch-up in COTS (NASA, 2025)

https://ntrs.nasa.gov/api/citations/20250006971/downloads/Reliability%20Impacts%20of%20Non-Destructive%20SEL%20in%20COTS.pdf

NASA study quantifying the reliability impact of SEL in COTS components

~50% of commercial CMOS parts susceptible to SEL; ~50% of those are immediately destructive. Establishes SEL as a major COTS reliability concern.

xilinx-versal-7nm-see-2022

7nm FinFET technology heavy ion SEL evaluation using Xilinx Versal (IEEE, 2022)

https://ieeexplore.ieee.org/document/9954564/

SEL testing of 7nm Versal FPGA showing design rules can eliminate SEL

No SEL at LET up to 80 MeV-cm²/mg with design rules. PS SEFI rate ~1/year in LEO. Demonstrates radiation-aware design can fully mitigate SEL in advanced nodes.

seu-rate-5nm-7nm-scaling

SEU Cross-Section Trends for D-FFs at 5-nm and 7-nm Bulk FinFET (ResearchGate)

https://www.researchgate.net/publication/365952507

Anomalous order-of-magnitude increase in SEU cross-section at 5nm vs 7nm FinFET

5nm SEU cross-section is an order of magnitude higher than 7nm for equivalent RHBD, due to disproportionate changes in SET pulse-widths.

nextbigfuture-suncatcher-2025

Google Project Suncatcher to Put TPUs for AI in Space in 2027

https://www.nextbigfuture.com/2025/11/google-project-suncatcher-to-put-tpus-for-ai-in-space-in-2027.html

Additional detail on Google Suncatcher HBM radiation sensitivity metrics

HBM sensitivity: one uncorrectable ECC event per ~50 rad proton exposure; with shielding, ~1 error per 10M inferences. "Likely acceptable for inference."

oliveira-2022-cubesat-radiation

Comparison of cubesat and microsat catastrophic failures in function of radiation and debris impact risk (Scientific Reports, 2022)

https://pmc.ncbi.nlm.nih.gov/articles/PMC9825371/

Quantifies COTS vs rad-hard catastrophic radiation failure rates in LEO

COTS catastrophic radiation failure: ~10⁻³/device/year at ISS orbit; rad-hard: ~10⁻⁵ (100x lower). Radiation dominates debris risk for COTS.

sciencedirect-seu-commercial-leo

Single Event Upset — ScienceDirect Topics

https://www.sciencedirect.com/topics/earth-and-planetary-sciences/single-event-upset

Reference compilation of SEU performance ranges for commercial vs rad-hard electronics in LEO

Commercial electronics SEU rate 10⁻³ to 10⁻⁷ errors/bit/day; rad-hard: 10⁻⁸ to 10⁻¹¹.
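
Per-bit upset rates become concrete when multiplied by a memory size. A back-of-envelope sketch with an assumed 1 GB (8×10⁹ bit) memory, which is not a figure from the source:

```python
BITS = 8 * 10**9  # assumed 1 GB memory for illustration

# Per-bit/day SEU rates quoted above, spanning the commercial and rad-hard ranges.
for label, rate in [("commercial, worst", 1e-3), ("commercial, best", 1e-7),
                    ("rad-hard, worst", 1e-8), ("rad-hard, best", 1e-11)]:
    print(f"{label}: {BITS * rate:,.4g} upsets/day")
```

The commercial worst case implies millions of upsets per day per gigabyte, so ECC and scrubbing are mandatory at the bad end of the quoted range; the rad-hard best case is under one upset per decade.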

spacex-falcon-users-guide-2025

Falcon User's Guide, Version 8 (March 2025)

https://www.spacex.com/assets/media/falcon-users-guide-2025-05-09.pdf

Official SpaceX payload planning document specifying vibration, acoustic, and shock environments

Random vibration MPE 5.13 g_rms (20-2000 Hz); acoustic 131.4 dB OASPL with blankets; separation shock 300-1000 g SRS; quasi-static up to 6.0 g axial.

nasa-gsfc-vibration-levels

Benefits of Spacecraft Level Vibration Testing (NASA GSFC, 2015)

https://ntrs.nasa.gov/api/citations/20150020490/downloads/20150020490.pdf

NASA Goddard vibration testing methodology and standard levels

GEVS workmanship minimum 6.8 g_rms; qualification 14.1 g_rms for <50 lb components; current projects 8.7-15.8 g_rms.

solder-joint-reliability-review-2019

Reliability issues of lead-free solder joints in electronic devices (PMC review, 2019)

https://pmc.ncbi.nlm.nih.gov/articles/PMC6735330/

Comprehensive review of solder joint failure mechanisms under thermal cycling and vibration

~20% of electronic failures from vibration; ~55% from thermal. BGA corner joints fail first under vibration. Failure mode shifts ductile→brittle at higher intensity.

mil-hdbk-344-ess

Environmental Stress Screening (Wikipedia / MIL-HDBK-344A summary)

https://en.wikipedia.org/wiki/Environmental_stress_screening

Overview of ESS methodology for precipitating latent defects in electronics

~80% of latent defects thermally sensitive, ~20% vibration-sensitive; combined screening catches >90%.

combined-vibration-thermal-bga

Failure study of Sn37Pb PBGA solder joints using temperature cycling, random vibration and combined tests (2019)

https://www.sciencedirect.com/science/article/abs/pii/S0026271418309478

Study comparing single-environment vs combined thermal+vibration effects on BGA solder joints

Thermal-to-vibration sequence is harsher than vibration-to-thermal. Pre-cracks from thermal cycling reduce vibration reliability. Effects not simply additive.

smallsat-reliability-spacenews-2020

Smallsat reliability increasing (SpaceNews, 2020)

https://spacenews.com/smallsat-reliability-increasing/

Smallsat reliability trends from the 34th Annual Small Satellite Conference

87% mission success for smallsats 2009-2018; 96% for 220-500 kg class. Most failures in first 60 days.

SpaceX Has Reduced Starlink Failure Rate To 0.2% Reveals Early Data

https://wccftech.com/spacex-starlink-failure-rate-early-data/

Starlink failure rate improvement across satellite generations

V0.9: 13% failure; V1.0 batch 2: 3%; latest batch: 0.2%. Dramatic improvement from design maturation.

Starlink Launch Statistics (Jonathan McDowell)

https://planet4589.org/space/con/star/stats.html

Comprehensive tracking of all Starlink satellite launches, failures, and orbital status

11,641 launched as of Mar 2026; 178 early deorbits; Gen2 V2 Mini >99% control; 9,347 working as of Dec 2025.

spacecube-cots-iss

SpaceCube Overview and Use of COTS Parts in Space (NASA NEPP)

https://nepp.nasa.gov/workshops/etw2020/talks/18-JUN-THU/1030-Petrick-NEPP-ETW-20205002774-SpaceCube-COTS.pdf

NASA presentation on SpaceCube COTS processor performance on ISS

Eight COTS PowerPCs error-free >99.99% over 4 years on ISS; RHBSW overhead <1.3%; 3.3x performance over rad-hard.

mil-hdbk-217-factors

MIL-HDBK-217 Microcircuit Tables

https://www.sqconline.com/mil-hdbk-217-microcircuit-tables

Standard reliability prediction factors for microcircuits including environmental multipliers

Space flight π_E = 0.5 (same as ground benign); commercial-grade π_Q = 10x vs military. Space environment rated as benign for non-radiation failure modes.

bouwmeester-2022-cubesat

Improving CubeSat reliability: Subsystem redundancy or improved testing? (Delft, 2022)

https://www.sciencedirect.com/science/article/pii/S0951832021007584

Study of CubeSat reliability improvement strategies

EPS causes >40% of failures after 30 days; comms ~26-30%. Improved testing beats redundancy. Most failures are immaturity, not environment-induced.

STMicroelectronics and SpaceX celebrate a decade-long partnership key to Starlink

https://newsroom.st.com/media-center/press-item.html/t4741.html

Disclosure of COTS chip supply chain behind Starlink constellation

5+ billion COTS chips shipped to SpaceX; STM32 MCUs, BiCMOS for phased arrays; >5M chips/day delivery rate.

tafazoli-2009-spacecraft-failures

A study of on-orbit spacecraft failures (Acta Astronautica, 2009)

https://www.sciencedirect.com/science/article/abs/pii/S0094576508003019

Analysis of 156 on-orbit failures across 129 spacecraft from 1980-2005

AOCS caused 32% and power 27% of failures (59% combined). Gyroscopes alone caused 17%. First 30 days: power 31.5%, AOCS 31.7%, TTC 16%.

kim-castet-saleh-2012-eps

Spacecraft electrical power subsystem: Failure behavior, reliability, and multi-state failure analyses (RESS, 2012)

https://www.sciencedirect.com/science/article/abs/pii/S0951832011002055

Detailed statistical analysis of EPS failures in LEO and GEO

EPS fails less frequently but more fatally in LEO than GEO. After 10 years, EPS accounts for 44.1% of all failures. 29% of LEO EPS failures in electrical distribution.

SpaceX's Semi-Annual Update on Starlink Network Health, Failure Rate, Collision Risk

https://www.kratosspace.com/constellations/articles/spacex-semi-annual-update-on-starlink-network-health-failure-rate-collision-risk

Analysis of SpaceX's FCC semi-annual constellation status report (H2 2023)

SpaceX took 14 satellites out of operation in 6 months, all retaining collision avoidance. Only 1 truly failed. 73 reentered during the period.

oneweb-failures-2023

OneWeb fleet reliability: 4 of 634 satellites failed in orbit

https://www.spaceintelreport.com/oneweb-eutelsat-call-for-tighter-satellite-disposal-regulations-4-of-onewebs-634-satellites-have-failed-in-orbit/

OneWeb fleet reliability data showing <1% failure rate

4/634 OneWeb satellites failed in orbit (0.63% cumulative, ~0.2%/yr annualized) as of mid-2023. Eutelsat CEO expressed confidence to extend operational life.

tid-7nm-finfet-ro-2021

Supply Voltage Dependence of Ring Oscillator Frequencies for TID Exposures for 7-nm Bulk FinFET (IEEE TNS, 2021)

https://ieeexplore.ieee.org/document/9445089

7nm FinFET TID ring oscillator frequency response

7nm bulk FinFET shows <1% circuit degradation at 380 krad — ~500x margin over shielded 5-year LEO dose of ~0.75 krad.

tid-seu-synergy-soi-sram-2022

Effects of TID on SEU Cross-Section of SOI SRAMs (MDPI Electronics, 2022)

https://www.mdpi.com/2079-9292/11/19/3188

TID-SEU synergistic effects in modern vs older SRAM technologies

Modern SOI 6T SRAM shows only 15% SEU cross-section increase at 800 krad; 7T SRAM decreases by 60%. Contrasts with 1000x increase in micrometer-scale SRAMs.

gpu-sdc-fit-2025

Silent Data Errors in GPUs: FIT Rates and Vulnerability Analysis (2025)

https://computerresearch.org/index.php/computer/article/view/102474

Quantitative GPU SDC FIT rates and fault propagation analysis

GPU SDC rate: 8.15×10⁻³ FIT/device (one error per 14,000 device-hours). Cosmic rays cause 61.7% of faults. Error rates increase 17-32% at full computational capacity.

meta-sdc-fleet-2022

Silent Errors in Production Data Centers (Meta Engineering Blog, 2022)

https://engineering.fb.com/2022/03/17/production-engineering/silent-errors/

Meta's fleet-wide SDC detection methodology and prevalence data

~3.6% of CPUs cause SDCs. Root causes: "born defective, become defective (aging), or timing variability." In-production testing detects 70% within 15 days.

google-cores-dont-count-2021

Cores that don't count (Google, HotOS 2021)

https://sigops.org/s/conferences/hotos/2021/papers/hotos21-s01-hochschild.pdf

Google's discovery of "mercurial cores" producing silent data corruption

"A few mercurial cores per several thousand machines" produce silent corrupt execution errors due to minor manufacturing defects. Manifest sporadically, long after installation.

llm-soft-error-vulnerability-2025

Analysis of LLM Vulnerability to GPU Soft Errors (arXiv, 2025)

https://arxiv.org/html/2601.19912

Instruction-level fault injection analysis of LLM inference soft error impact

Most single bit-flip errors masked in LLM inference (~70-85%). Vulnerability strongly position-dependent: low-order bits <1% SDC, high-order bits 23-24%. Larger models show higher masking.

nusat-tid-leo-2025

TID Measurements in Small Satellites in LEO using LabOSat-01 (arXiv, 2025)

https://arxiv.org/html/2503.09520v1

In-situ TID measurements from dosimeters on ÑuSat satellites in polar LEO

Measured 0.5-1.9 krad over ~3 years in polar LEO (~490 km) depending on shielding. 2.9mm Al: ~1.9 krad; 5.7mm: ~0.6 krad.

epoch-hw-failures-2024

Hardware failures won't limit AI scaling (Epoch AI, 2024)

https://epoch.ai/blog/hardware-failures-wont-limit-ai-scaling

Analysis of GPU failure impact on scaling with spare buffer calculations

1M GPU cluster needs only 480 spare nodes (0.3%) with 1-day replacement. Failures won't limit scaling even at 1M+ GPU clusters.
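
Epoch's spare-buffer arithmetic is essentially Little's law: the steady-state number of nodes in the repair pipeline equals the failure arrival rate times the replacement time. A sketch with assumed inputs (the per-node failure rate here is back-solved for illustration, not taken from the post):

```python
def spare_nodes(n_nodes: int, failures_per_node_year: float, replacement_days: float) -> float:
    """Expected nodes awaiting replacement at steady state (Little's law)."""
    return n_nodes * failures_per_node_year * replacement_days / 365.0

# Assumed: 1M GPUs packaged as 125,000 8-GPU nodes, ~1.4 node failures/year,
# and the post's 1-day replacement assumption.
buffer = spare_nodes(125_000, 1.4, 1.0)
print(round(buffer))  # a few hundred nodes, ~0.4% of the fleet
```

The buffer scales linearly in replacement time: a 1-week logistics loop would multiply the required spares by seven, which is the relevant knob for orbital servicing comparisons.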

constellation-spare-strategy-2025

Spare Strategy Analysis and Design for Large-Scale Satellite Constellation Using Markov Chain (Georgia Tech, 2025)

https://arxiv.org/html/2509.09957

Multi-echelon spare inventory optimization for mega-constellations

Indirect strategy (parking orbit spares + batch resupply) achieves 53% cost reduction vs direct replacement. Baseline failure rate: 0.05/satellite/year.

nvidia-nvlink6-specs

NVLink & NVSwitch for Advanced Multi-GPU Communication

https://www.nvidia.com/en-us/data-center/nvlink/

Official NVIDIA NVLink product page with NVLink 6 specs

NVLink 6 provides 3.6 TB/s per GPU (2x NVLink 5), 260 TB/s aggregate for Vera Rubin NVL72. 36 NVLink Switch chips with bidirectional SerDes at double lane rate.

megascale-infer-sigcomm

MegaScale-Infer: Serving Mixture-of-Experts at Scale with Disaggregated Expert Parallelism

https://arxiv.org/abs/2504.02263

ByteDance production MoE serving system (SIGCOMM 2025)

Disaggregates attention and FFN modules with M2N communication. 4.2x higher throughput than NCCL, 1.9x per-GPU throughput, 1.5-2x cost reduction. Supports heterogeneous clusters (PCIe + NVLink).

deepep-communication-lib

DeepEP: An Efficient Expert-Parallel Communication Library

https://github.com/deepseek-ai/DeepEP

DeepSeek's open-source all-to-all communication library for MoE

High-throughput mode: 153 GB/s NVLink, 43-58 GB/s RDMA. Low-latency mode: 77-194 µs dispatch. Hook-based communication-computation overlap.

moetuner-expert-placement

MoETuner: Optimized Mixture of Expert Serving with Balanced Expert Placement and Token Routing

https://arxiv.org/abs/2502.06643

ILP-based expert placement optimization for multi-node MoE

36% tail latency reduction, 17.5% end-to-end speedup on 16 H200 GPUs across 2 InfiniBand nodes. Minimizes inter-GPU communication via routing-aware placement.

lmsys-large-scale-ep

Deploying DeepSeek with PD Disaggregation and Large-Scale Expert Parallelism on 96 H100 GPUs

https://lmsys.org/blog/2025-05-05-large-scale-ep/

Production deployment of wide EP over InfiniBand

DeepSeek-V3 with EP72 across 12 H100 nodes via InfiniBand. 52.3k input tok/s per node, within 5.6% of official profile. Two-batch overlap for latency masking.

vllm-large-scale-ep

vLLM Large Scale Serving: DeepSeek @ 2.2k tok/s/H200 with Wide-EP

https://vllm.ai/blog/large-scale-serving

Multi-node InfiniBand MoE deployment benchmark

2,200 output tok/s per H200 in multi-node InfiniBand deployments. Uses DeepEP kernels and dual-batch overlap (DBO).

nvidia-dgx-b200-specs

NVIDIA DGX B200 Specifications

https://www.runpod.io/articles/guides/nvidia-dgx-b200

DGX B200 physical and power specifications

10U server, 8 Blackwell B200 GPUs, ~14.3 kW max power, 1,440 GB total HBM3e.

nvidia-hgx-b200-pcf

NVIDIA HGX B200 Product Carbon Footprint Summary

https://images.nvidia.com/aem-dam/Solutions/documents/HGX-B200-PCF-Summary.pdf

HGX B200 baseboard mass and carbon footprint data

HGX B200 baseboard ~32 kg for 8 GPUs. Contains GPU packages, HBM3e, VRMs, PCB, NVLink interconnects.

lcrd-spie-2024

NASA's LCRD Experiment Program: Characterization and Initial Operations

https://ntrs.nasa.gov/citations/20240001299

SPIE paper on LCRD operational results

59% session success (full period), 69% (later period), 79% excluding weather. Weather availability ~80% per station. Two optical ground stations.

lcrd-nasa-year

NASA's Laser Communications Relay: A Year of Experimentation

https://www.nasa.gov/missions/tech-demonstration/nasas-laser-communications-relay-a-year-of-experimentation/

LCRD first-year operations overview

Heavy weather fronts can take ground stations offline for days; snowstorms, wildfires, and mudslides have all caused closures.

lcrd-eoportal

STPSat6-LCRD — eoPortal

https://www.eoportal.org/satellite-missions/stpsat6-lcrd

Comprehensive LCRD technical reference

Two ground stations (Table Mountain CA, Haleakala HI), complementary weather. Ka-band RF backup (622 Mbps down). Adaptive optics with deformable mirrors.

tbird-mit

TeraByte InfraRed Delivery (TBIRD) — MIT Lincoln Laboratory

https://www.ll.mit.edu/r-d/projects/terabyte-infrared-delivery-tbird

TBIRD mission overview and records

200 Gbps downlink demonstrated. 4.8 TB in single 5-minute pass. 100x faster than typical city internet.

tbird-eoportal

TBIRD System — eoPortal

https://www.eoportal.org/satellite-missions/tbird

TBIRD orbital and system parameters

525 km SSO on 6U CubeSat. 7-minute passes. >40 degree elevation passes every 1-2 days. 2 TB on-board storage.
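
The TBIRD figures can be sanity-checked with simple link arithmetic; a sketch using the rate and pass duration from the two entries above:

```python
def pass_volume_tb(rate_gbps: float, seconds: float) -> float:
    """Data delivered in one pass, in decimal terabytes."""
    return rate_gbps * seconds / 8.0 / 1000.0

ceiling = pass_volume_tb(200.0, 5 * 60)  # 200 Gbps sustained for a full 5-minute pass
print(ceiling)                           # 7.5 TB theoretical ceiling
print(4.8 / ceiling)                     # reported 4.8 TB implies ~64% effective duty cycle
```

The gap between the 7.5 TB ceiling and the reported 4.8 TB is consistent with acquisition time, elevation limits, and fade-driven retransmission eating part of the pass.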

leo-contact

LEO satellite contact time and data volume analysis

https://www.researchgate.net/publication/261480453

LEO pass geometry analysis

~6 minutes per pass at 500 km with 10-degree minimum elevation. ~4 contacts per day per station.

ogs-network-jocn

Ground Station Network Optimization for Space-to-Ground Optical Communication Links

https://opg.optica.org/jocn/abstract.cfm?uri=jocn-7-12-1148

Five-year cloud data analysis for ground station network sizing

Single-site 25-80% availability. German 8-station: 84.7%. European: ~99.9%. Intercontinental 9+: ~100%.

ogs-gso-feeder

Ground Segment Design for Broadband Geostationary Satellite with Optical Feeder Link

https://opg.optica.org/jocn/abstract.cfm?uri=jocn-7-4-325

GEO feeder link ground station requirements

~10 stations needed for 99.9% link availability. Site diversity is the primary mitigation for cloud cover.

ogs-europe-arxiv

Performance Analysis of Varied Optical Ground Station Network Configurations

https://arxiv.org/html/2410.23470v2

European OGS network scaling analysis

Single station 83.75%, 7 stations 96.56%, optimized 6-station 99.58%. 1,536 handovers/year. Cloud correlations r<0.02 between most pairs.
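
A useful check on these numbers: under a naive independence assumption, network availability is one minus the product of per-site outage probabilities. A sketch using the 83.75% single-site figure and treating all sites as identical, which the study does not assume:

```python
def independent_availability(p_site: float, n_sites: int) -> float:
    """P(at least one site is clear) if site outages were independent."""
    return 1.0 - (1.0 - p_site) ** n_sites

print(f"{independent_availability(0.8375, 7):.6f}")  # independence upper bound
```

The reported 96.56% for seven stations sits well below this bound, so independence should be read strictly as an upper bound: residual correlations and site-to-site differences dominate the real diversity gain.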

ogs-australia

Update on German and Australasian Optical Ground Station Networks

https://arxiv.org/html/2402.13282v2

Australasian OGS network availability analysis

8-node Australasian: 99.98% availability. 3-node Australian: 97%.

ogs-network-tenerife

DLR Optical Ground Station Networks (Tenerife data)

https://arxiv.org/html/2402.13282v2

Tenerife ground station cloud statistics

Mean cloud cover 0.30, link probability ~70%. Observatorio del Teide at 2,400m above cloud layer.

atmo-effects

Atmospheric Effects on Satellite-Ground Free Space Optical Transmissions

https://www.mdpi.com/2076-3417/12/21/10944

Comprehensive treatment of atmospheric turbulence for optical links

Beam wander, spreading, scintillation from refractive index variations. Cn2 profiles and Hufnagel-Valley model.

atmo-ao-tbit

Tbit/s Line-Rate Satellite Feeder Links Enabled by Coherent Modulation and Full-Adaptive Optics

https://pmc.ncbi.nlm.nih.gov/articles/PMC10282091/

Demonstration of Tbit/s free-space optical with AO

1.008 Tbit/s over 53.42 km. Full AO provides 24.7 dB median power gain. Scintillation index 1-4. Power fluctuations >20 dB despite AO.

tbird-spie

On-Orbit Demonstration of 200-Gbps Laser Communication Downlink from TBIRD

https://ntrs.nasa.gov/citations/20230000434

TBIRD link budget and scintillation testing

Worst-case scintillation index 1.0 at 20-30 degree elevation. 3-7 microradian pointing accuracy.

cailabs-ogs

Cailabs Optical Ground Stations (TILBA-OGS)

https://www.cailabs.com/aerospace-defense/laser-communications/optical-ground-stations/

Commercial optical ground station with turbulence correction

Multi-Plane Light Conversion for atmospheric turbulence correction. 10+ Gbps bidirectional. Remotely operable. SES testing.

ses-cailabs-pr

SES Partners with Cailabs to Test Next-Generation Laser Communication Technology

https://www.ses.com/press-release/ses-partners-cailabs-test-next-generation-laser-communication-technology

SES press release (September 2025): TILBA-OGS L10 stations, full-duplex 10 Gbps, MPLC turbulence correction, remote operation

SES testing Cailabs TILBA-OGS L10 optical ground stations for commercial integration. Full-duplex 10 Gbps, MPLC atmospheric turbulence correction, remote operability for scalable global deployment.

backhaul-fiber

Satellite Ground Station Fiber Backhaul Requirements

https://www.stackinfra.com/resources/thought-leadership/using-satellites-for-backhaul-data/

Fiber backhaul architecture for satellite ground stations

DWDM fiber supporting 10-100+ Gbps over up to 100 km between ground stations and data centers.

dtn-pace

NASA's Near Space Network Enables PACE Mission DTN Operations

https://www.nasa.gov/communicating-with-missions/delay-disruption-tolerant-networking/

First NASA Class-B operational DTN deployment

PACE mission: 34 million DTN bundles, 100% success rate. Store-and-forward for intermittent links.

semianalysis-gtc-2026

GTC 2026 — The Inference Kingdom Expands

https://newsletter.semianalysis.com/p/nvidia-the-inference-kingdom-expands

SemiAnalysis GTC 2026 recap covering Nvidia's inference architecture roadmap

Covers Groq LPU acquisition/integration, Rubin NVL144 Kyber rack (72 NVLink 7 switches, 28.8 Tbps per switch), NVL576/NVL1152 multi-rack CPO systems, attention-FFN disaggregation for MoE inference. Key finding: copper-based all-to-all networking within racks approaching physical limits (20,736 differential pairs for NVL288 backplane); CPO required for multi-rack scaling. Nvidia expanding lock-in across compute, networking, storage, and software layers.

jensen-huang-lex-2026

Jensen Huang: NVIDIA — The $4 Trillion Company & the AI Revolution | Lex Fridman Podcast #494

https://lexfridman.com/jensen-huang-transcript

March 2026 interview covering Nvidia's rack-scale co-design philosophy and inference architecture

Jensen argues inference is computationally harder than training ("thinking is harder than reading"), explicitly rejecting the idea that inference can be commoditized on simple hardware. NVLink-72 exists to make 4-10T parameter MoE models run "as if on one GPU." Vera Rubin pod: 10 PB/s internal scale bandwidth, ~1,100 GPUs, 60 exaflops per pod. NVIDIA shipping ~200 pods/week. 1-year architecture cadence with 10x token efficiency improvement per year. On space: acknowledges cooling challenge ("no conduction, no convection"), describes space compute as practical today only for edge imaging, says he's "cultivating space" while prioritizing terrestrial low-hanging fruit (idle grid power).

ftai-power-launch

FTAI Aviation Announces the Launch of FTAI Power

https://ir.ftaiaviation.com/news-releases/news-release-details/ftai-aviation-announces-launch-ftai-power-ftai-adapts-worlds

FTAI Power launches 25 MW aeroderivative turbine adapted from CFM56 engine

25 MW unit adapted from CFM56 engine. Over 22,000 CFM56 engines produced. Capacity to deliver 100+ units annually. Targets data center and industrial power markets.

lazard-lcoe-2025

Lazard Releases 2025 Levelized Cost of Energy+ Report

https://www.lazard.com/news-announcements/lazard-releases-2025-levelized-cost-of-energyplus-report-pr/

Annual LCOE benchmark report (June 2025)

U.S. utility-scale solar LCOE: $38-78/MWh, average $58/MWh (down 4% YoY). With PTC: $20-45/MWh. Comprehensive generation cost benchmark across technologies.

ember-battery-cost-2025

How Cheap is Battery Storage?

https://ember-energy.org/latest-insights/how-cheap-is-battery-storage/

Ember analysis of global battery storage costs (December 2025)

Global (ex-US, ex-China) all-in BESS project capex ~$125/kWh, comprising ~$75/kWh core equipment (shipped from China) + ~$50/kWh installation. Translates to LCOS ~$65/MWh. Core BESS equipment fell 40% in 2024.

nrel-battery-cost-2025

Cost Projections for Utility-Scale Battery Storage: 2025 Update

https://docs.nrel.gov/docs/fy25osti/93281.pdf

NREL U.S. battery storage cost benchmark

2024 U.S. 4-hour lithium-ion battery system overnight capital cost: $334/kWh. Energy-related costs: $241/kWh; power-related costs: $372/kW. Projections: $147-339/kWh by 2035.

constellation-tmi-restart

DOE loans Constellation $1B to restart Three Mile Island nuclear unit

https://www.utilitydive.com/news/doe-loan-constellation-crane-nuclear-restart/805923/

DOE loan for TMI Unit 1 restart (Utility Dive, Nov 2025)

835 MW TMI Unit 1 (renamed Crane Clean Energy Center) restart at ~$1.6B cost. $1B DOE loan closed November 2025. 20-year PPA with Microsoft. Timeline: expected 2027 (ahead of original 2028 schedule).

meta-nuclear-deals-2026

Meta Announces Nuclear Energy Projects, Unlocking Up to 6.6 GW

https://about.fb.com/news/2026/01/meta-nuclear-energy-projects-power-american-ai-leadership/

Meta signs nuclear power deals with Vistra, TerraPower, and Oklo (January 2026)

Up to 6.6 GW by 2035. Vistra: 2,176 MW from existing Perry/Davis-Besse plants + 433 MW uprates. TerraPower: up to 8 Natrium reactors (2.8 GW), 2 by 2032. Oklo: up to 1.2 GW Aurora fast-reactor campus in Ohio.

chatgpt-pro-eclipse-audit

Orbital Eclipse/Shade Model Audit for a Proposed LEO "Orbital AI Datacenter"

https://research-viewer.pages.dev/orbital-ai-datacenters-3/local-sources/chatgpt-pro-eclipse-audit/

LLM-generated computation (ChatGPT Pro / GPT-5.4 Pro, March 2026) applying standard orbital mechanics to dawn-dusk SSO eclipse exposure at 500–650 km. Used as a computation check, not as an independent empirical source.

Applies standard cylindrical-shadow eclipse model with J2 precession to derive eclipse behavior at 500–650 km dawn-dusk SSO. Results: ~95 eclipse days/year, ~21 min max eclipse, ~95.4% annual sunlight at 575 km. The primary value of this source is its cross-validation against five real missions (TerraSAR-X, Sentinel-1, MicroSCOPE, PROBA-2, IRIS) — the match between the model's predictions and reported mission data confirms that the underlying physics is correctly applied. The computation itself is routine orbital mechanics; the LLM served as a calculator, not as an authority. Eclipse-free SSO requires ~1,390 km altitude.
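
The audit's headline numbers follow from the standard cylindrical-shadow model it applies. A minimal re-derivation sketch (circular orbit, spherical Earth; the β-angle history of a real dawn-dusk SSO, which sets the ~21-minute maximum, is not modeled here):

```python
import math

MU = 398600.4418   # Earth GM, km^3/s^2
R_E = 6378.137     # Earth equatorial radius, km

def eclipse_fraction(alt_km: float, beta_deg: float) -> float:
    """Fraction of a circular orbit inside a cylindrical Earth shadow at beta angle."""
    r = R_E + alt_km
    x = math.sqrt(alt_km**2 + 2.0 * R_E * alt_km) / (r * math.cos(math.radians(beta_deg)))
    return math.acos(x) / math.pi if x < 1.0 else 0.0

def period_min(alt_km: float) -> float:
    return 2.0 * math.pi * math.sqrt((R_E + alt_km) ** 3 / MU) / 60.0

# Worst case beta=0: roughly a third of a ~96-minute orbit in shadow at 575 km.
print(round(eclipse_fraction(575.0, 0.0) * period_min(575.0), 1))
# High-beta dawn-dusk geometry: no eclipse at all above ~66.5 degrees at this altitude.
print(eclipse_fraction(575.0, 80.0))
```

Because a dawn-dusk SSO spends most of the year above the no-eclipse β threshold, the annual-average sunlight fraction lands near the audit's ~95%, even though the β=0 worst case is far harsher.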

cui-two-gpus-2025

Story of Two GPUs: Characterizing the Resilience of Hopper H100 and Ampere A100 GPUs

https://arxiv.org/abs/2503.11901

Peer-reviewed SC '25 paper: 2.5 years, 11.7M GPU-hours comparing A100 and H100 GPU resilience

Longitudinal study on NCSA Delta: 448 A100s over 895 days (9.6M GPU-hours) and 608 H100s over 146 days (2.1M GPU-hours). H100 memory MTBE 3.2x worse per-GPU than A100 (88,768 vs 283,271 hours). H100 critical hardware dramatically improved (zero GSP, PMU, NVLink errors). Row remapping mitigates 92% of uncorrectable errors but spare rows capped at 512. 8 row remapping failures observed on H100. Node availability ~99.3-99.4%. Recommends 5% overprovisioning for 99.9% job availability.

meta-llama3-paper

The Llama 3 Herd of Models

https://arxiv.org/abs/2407.21783

Primary Meta paper with detailed GPU failure categorization during Llama 3 405B training

466 job interruptions (47 planned, 419 unexpected) in 54 days on 16,384 H100 GPUs. Detailed categories: 148 Faulty GPU, 72 HBM3, 54 Software Bug, 35 Network, 32 Host Maintenance, 19 SRAM, 17 GPU System Processor, etc. 78% hardware-attributed. Only 3 manual interventions needed; automation handled the rest. Achieved >90% effective training time. This is the primary source for the data reported secondarily by Tom's Hardware (meta-llama3-failures).

microsoft-superbench

SuperBench: Improving Cloud AI Infrastructure Reliability with Proactive Validation

https://arxiv.org/abs/2402.06194

USENIX ATC '24 best paper: proactive GPU node validation in Azure across hundreds of thousands of GPUs

2+ years in Azure, validated hundreds of thousands of GPUs. 10.36% of nodes defective (failure or performance regression, not exclusively permanent). Baseline MTBI 17.5 hours, improved to 22.61x with proactive validation. 38.1% of incidents previously required >1 day to resolve. Row remapping with >10 correctable errors shows 77.8% higher regression chance.

nebius-fault-tolerant-2025

Fault-tolerant training: How we build reliable clusters for distributed AI workloads

https://nebius.com/blog/posts/how-we-build-reliable-clusters

Nebius engineering blog with production GPU cluster MTBF and spare capacity data

Peak MTBF 56.6 hours (169,800 GPU-hours) on 3,000-GPU H100/H200 cluster. Average MTBF 33.0 hours. MTTR 12 minutes. Maintains dedicated spare GPU buffer per customer with both dedicated and floating spare modes.

gpu-useful-life-2025

Why GPU Useful Life Is the Most Misunderstood Variable in AI Economics

https://www.stanleylaman.com/signals-and-noise/gpus-how-long-do-they-really-last

Analysis of GPU depreciation schedules vs actual operational lifetimes across hyperscalers

Depreciation schedule extensions: Microsoft/Google 4→6 years, Meta 4→5.5 years, Amazon reversed 6→5 years ($700M hit). Google TPUs at 100% after 7-8 years. Azure K80s ran 9 years, P100s 7 years. Distinguishes economic obsolescence from physical failure.

trendforce-gpu-lifespan-2024

Datacenter GPUs May Have an Astonishingly Short Lifespan of Only 1 to 3 Years

https://www.trendforce.com/news/2024/10/31/news-datacenter-gpus-may-have-an-astonishingly-short-lifespan-of-only-1-to-3-years/

TrendForce report on Google architect's claim about GPU lifespans

Unnamed Google/Alphabet architect claims datacenter GPUs last 1-3 years at 60-70% utilization. At heavy AI workloads, 1-2 years. Cites 700W TDP thermal stress. Likely refers to economic useful life rather than physical failure given contradicting depreciation evidence.

iridium-lifetime-extension-spacenews

Iridium adds five years to constellation lifetime estimate

https://spacenews.com/iridium-adds-five-years-to-constellation-lifetime-estimate/

Iridium NEXT constellation (launched 2017-2019) expected to operate to at least 2035 (17.5+ years), with first-gen Iridium precedent of 20+ years

CEO Matt Desch announced February 2024 that engineering assessment extended Iridium NEXT expected life from 12.5-year design life to at least 2035 (17.5+ years). First-generation Iridium satellites with similar design life lasted 20+ years, limited by fuel depletion rather than component failure. As of March 2026, zero reported failures across all Iridium NEXT satellites (7-9 years of operation).

castet-saleh-2009-satellite-reliability

Satellite and satellite subsystems reliability: Statistical data analysis and modeling

https://www.sciencedirect.com/science/article/abs/pii/S0951832009001094

Peer-reviewed analysis of 1,584 satellite reliabilities (1990-2008) finding infant mortality pattern (Weibull beta=0.45), contradicting wear-out assumption

Demonstrated satellite failures follow infant mortality pattern (Weibull beta=0.4521, theta=2,607 years via MLE), meaning failure rate decreases with age. Satellites surviving early life become progressively more reliable. Used SpaceTrak database. Identified AOCS and telemetry as primary failure-driving subsystems. Dataset dominated by large traditionally-manufactured satellites.
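
The Weibull parameters make the infant-mortality claim concrete: with shape below one, the hazard rate falls with age. A sketch using the paper's MLE fit:

```python
import math

BETA, THETA = 0.4521, 2607.0   # Weibull shape and scale (years), the paper's MLE fit

def reliability(t_years: float) -> float:
    return math.exp(-((t_years / THETA) ** BETA))

def hazard(t_years: float) -> float:
    return (BETA / THETA) * (t_years / THETA) ** (BETA - 1.0)

print(round(reliability(15.0), 3))  # ~0.907: roughly 91% survive 15 years
print(hazard(1.0) > hazard(10.0))   # True: failure rate decreases with age
```

Beta < 1 is exactly why burn-in and early-life screening pay off for satellites: a unit that survives its first months is statistically the best unit in the fleet.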

oneweb-stats-mcdowell

OneWeb Launch Statistics (McDowell)

https://planet4589.org/space/con/ow/stats.html

Independent tracking showing 656 OneWeb satellites with only 2 failures (0.3% cumulative) over 4-7 years

656 launched, 654 in orbit, 2 total down (early deorbits). 637 fully operational. 0.3% cumulative failure rate (~0.05-0.08%/yr annualized). Design life 7+ years. Managed by Eutelsat since 2022 merger. Airbus signed contract for 100 extension satellites.
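The annualized band can be recovered from the cumulative figures; a quick check, assuming the ~0.05-0.08%/yr range simply divides the cumulative rate by the 4-7 years the satellites have been on orbit:

```python
# Sanity check on the OneWeb failure-rate figures cited above.
launched, failed = 656, 2
cumulative = failed / launched             # cumulative failure fraction
print(f"cumulative: {cumulative:.2%}")     # ~0.30%

# Annualized over the assumed 4-7 years of on-orbit exposure:
for years in (4, 7):
    print(f"annualized over {years} yr: {cumulative / years:.3%}")
# falls in the ~0.04-0.08%/yr range, matching the quoted band
```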

SpaceX Deorbits Nearly 500 Starlink Satellites in 6 Months

https://space4peace.org/mass-burn-spacex-deorbits-nearly-500-starlink-satellites-in-6-months/

SpaceX deorbited 472 satellites in 6 months (Dec 2024-May 2025), most less than 5 years old

472 satellites deorbited, 430 of them first-generation. "Most of the satellites that reentered the atmosphere did so less than five years after beginning operations." SpaceX did not explain why; likely related to the component flaw identified in early V1 satellites' ferrite transformers.

SpaceX Rapidly Retiring and Incinerating Old Satellites

https://www.cgaa.org/article/spacex-rapidly-retiring-and-incinerating-old-starlink-satellites

Compilation of Starlink proactive retirement data showing 500+ first-gen satellites retired due to identified component flaw

SpaceX deorbiting 4-5 satellites/day. 100 retired February 2024 due to an identified potential flaw in early V1 hardware involving high-melting-point parts such as ferrite transformers. Over 500 first-generation models retired. Satellites were "currently maneuverable and serving users effectively" at time of retirement.

saft-ves16-leo-battery

Saft: Overcoming the challenges of LEO satellite batteries

http://www.satmagazine.com/story.php?number=464410536

Space-qualified Li-ion cells achieve 65,000+ cycles (12 years), providing 2.4x margin for 5-year LEO missions

Saft VES16 Li-ion: 65,000+ cycles at 30-50% DoD over 12 years. 5-year LEO mission requires ~27,000 cycles. Li-ion provides ~2.4x cycle margin. Prior Ni-Cd/Ni-H2 batteries lasted only 5-7 years. Batteries are no longer the binding constraint for LEO satellite lifetime.
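The ~27,000-cycle requirement and ~2.4x margin are straightforward to reconstruct; a sketch assuming one charge/discharge cycle per orbit and a ~97-minute LEO orbit period (our assumption, not Saft's stated basis):

```python
# Reconstructing the cycle-count arithmetic behind the margin claim.
orbit_min = 97                               # assumed LEO orbit period
cycles_per_day = 24 * 60 / orbit_min         # ~14.8 cycles/day
mission_cycles = cycles_per_day * 365 * 5    # 5-year mission
print(f"5-year LEO cycles: {mission_cycles:,.0f}")   # ~27,000

margin = 65_000 / mission_cycles             # demonstrated vs. required
print(f"cycle margin: {margin:.1f}x")        # ~2.4x
```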

newspace-systems-rw

NewSpace Systems Reaction Wheels

https://www.newspacesystems.com/products/reaction-wheels/

Reaction wheels with 3M+ failure-free hours in orbit across 800+ units

T065 wheel (first flown 2014) has 3 million+ failure-free hours, no reported SEUs. 800+ wheels sold, baselined on 4 constellation programs. Demonstrates high reliability of modern ADCS components.

ladbury-2025-sel-statistics

Statistical Analysis of Historical SEL Test Data to Provide A Priori Risk Estimates

https://ntrs.nasa.gov/api/citations/20240007573/downloads/TNS2024-v4.pdf

NASA multi-center analysis of JPL+CERN SEL databases finding ~50% COTS parts susceptible, rates spanning >6 orders of magnitude, no predictive trends

~50% of unhardened CMOS parts SEL-susceptible (stable over 20+ years). ~50% of SEL events destructive. SEL rates span >6 orders of magnitude. "A few percent of parts" have rates exceeding once per month in benign environments. No consistent trends with vendor, process, or function. Proton screening "often ineffective" for SEL. Heavy-ion test at LET ≥30 MeV-cm²/mg can bound SEL rate at <once per 10.5 years (90% confidence).
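Bounds of the form "less than once per N years at 90% confidence" from a null heavy-ion test come from the standard zero-event Poisson limit on the SEL cross-section; a sketch with an illustrative test fluence (the fluence is our assumption, not a value from the paper):

```python
import math

# Zero-event Poisson bound: if a heavy-ion test at fluence F (ions/cm^2)
# observes no SEL, the cross-section is bounded at confidence CL by
#   sigma < -ln(1 - CL) / F.
# Multiplying sigma_max by the on-orbit particle flux then bounds the
# in-flight SEL rate (flux not modeled here).
CL = 0.90
fluence = 1e7                        # assumed test fluence, ions/cm^2
sigma_max = -math.log(1 - CL) / fluence
print(f"sigma < {sigma_max:.2e} cm^2 at {CL:.0%} confidence")  # ~2.3e-7
```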

karp-hart-2018-sel-planar-to-finfet

Single-Event Latch-Up: Increased Sensitivity From Planar to FinFET

https://ieeexplore.ieee.org/document/8141939/

Seminal paper demonstrating increased SEL sensitivity in FinFET vs planar CMOS via proton/neutron testing and TCAD simulation

64-MeV proton and neutron testing plus TCAD simulation show increased SEL sensitivity in FinFET, attributed to shallow trench isolation ~3x shallower than in planar processes, which raises the gain of the parasitic SCR structure. Predicts all FinFET technologies with similar STI parameters will experience increased SEL sensitivity.

pieper-2022-sel-vulnerability-7nm

Single-Event Latchup Vulnerability at the 7-nm FinFET Node

https://ieeexplore.ieee.org/document/9764419/

SEL at 7nm manifests as "limited current increases" (micro-latchup) with holding voltage near Vdd

Evaluates SEL at 7nm FinFET under alpha, neutrons, heavy ions. SEL effects are "limited current increases" rather than hard shorts. Holding voltage strongly temperature-dependent, within 100 mV of nominal supply.

pieper-2022-micro-latchup-7nm

Micro-Latchup Location and Temperature Characterization in a 7-nm Bulk FinFET Technology

https://ieeexplore.ieee.org/document/9954525/

Thermal imaging of micro-latchup events in 7nm FinFET showing random locations and 140°C local temperatures

Micro-latchup events at random locations across die. Temperature rises from room temp to 140°C within latchup region. Multiple micro-latchups can cluster, causing significant IC-level current increases. Characterizes a phenomenon unique to advanced FinFET nodes.

tsmc-2024-sel-rate-prediction-finfet

Single Event Latch-up (SEL) Rate Prediction Methodology in Bulk FinFET Technology

https://ieeexplore.ieee.org/document/10702151/

TSMC develops SEL rate prediction methodology for FinFET, confirming SEL characterization remains an active foundry-level problem

TSMC researchers propose SEL rate prediction methodology for bulk FinFET at varied voltage and temperature. "High consistency with experimental results within 90% statistical confidence." Specific nodes and quantitative results behind paywall. Confirms TSMC is actively working on SEL characterization for FinFET processes.

nesc-2024-sel-jedec-presentation

NESC Task: Single-Event Latch-up in Commercial Electronics — Risk Assessment and Mitigation

https://ntrs.nasa.gov/api/citations/20240006090/downloads/2024-05-SEL-JEDECSAE-V11.pdf

NASA NESC presentation confirming no formal guidance exists for COTS SEL evaluation as of 2024

Identifies key knowledge gaps in SEL risk assessment. "No formal NASA guidance exists for reliability evaluation of COTS exposed to radiation, or regarding validated mitigation approaches." Aims to develop practical engineering guidelines for COTS parts susceptible to recoverable SEL.

nesc-2025-post-sel-reliability

Reliability Impacts of Non-Destructive Single-Event Latch-ups in Commercial Electronics

https://ntrs.nasa.gov/api/citations/20250006971/downloads/Reliability%20Impacts%20of%20Non-Destructive%20SEL%20in%20COTS.pdf

NASA NESC testing shows no reliability degradation after hundreds of non-destructive SEL events in small analog devices

Four COTS device types experienced hundreds to thousands of non-destructive SEL events, then passed 1,000-hour life testing at maximum operating temperature with no degradation. However, all tested devices are small analog/power parts at older process nodes; results may not generalize to complex 4nm digital ICs.

SpaceX lowering orbits of 4,400 Starlink satellites for safety

https://www.space.com/space-exploration/satellites/spacex-lowering-orbits-of-4-400-starlink-satellites-for-safetys-sake

SpaceX plans to lower Starlink operational altitude from 550 km to 480 km in 2026, increasing drag and propellant consumption

~4,400 satellites descending from 550 km to 480 km throughout 2026. Reduces ballistic decay time by >80%. Motivated by space safety concerns. Increases propellant consumption for drag makeup, potentially shortening fuel-limited life.

volts-dc-flexibility-2026

For data centers, a little flexibility goes a long way

https://www.volts.wtf/p/for-data-centers-a-little-flexibility

Volts podcast transcript (March 2026) with Camus CEO Astrid Atkinson and Princeton Zero Lab's Jesse Jenkins on data center grid flexibility

Discusses findings from a joint Camus/Princeton Zero Lab/Encord study on flexible data center interconnection. Optimal power flow modeling across 6 sites within one utility's territory found that only 7-35 hours/year of curtailment (≤0.4% of hours) unlocks enough transmission capacity for 500 MW data centers — bypassing years-long grid upgrade timelines. Longest curtailment events were 5-16 hours, suitable for battery ride-through. Proposes a two-part model: (1) flexible interconnection (accept occasional curtailment instead of waiting for transmission upgrades), and (2) "power parks" assembling portfolios of solar, wind, battery, and VPP resources across the broader grid. Halcyon tracks 85 GW of gas plant additions currently planned across the U.S. Jesse Jenkins estimates the opportunity cost of delayed data center deployment at ~$7B per GW per year. Google can shift data center load between sites in seconds. Average data center utilization is ~40% of nameplate. ERCOT's flexible load interconnection rules are in progress but not yet implemented.
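Two of the quoted numbers are easy to sanity-check; the per-month restatement of the delay cost is our arithmetic, not a figure from the transcript:

```python
# Checking the "≤0.4% of hours" framing of the curtailment figures.
hours_per_year = 24 * 365                 # 8,760
frac_low = 7 / hours_per_year
frac_high = 35 / hours_per_year
print(f"{frac_low:.2%} to {frac_high:.2%} of hours curtailed")  # 0.08%-0.40%

# Jenkins' ~$7B per GW per year of delayed deployment, per month:
monthly_cost_musd = 7e9 / 12 / 1e6
print(f"~${monthly_cost_musd:.0f}M per GW per month of delay")
```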

newspaceeconomy-h100-radiation-analysis

An Analysis of Radiation Protection in the NVIDIA H100 GPU

https://newspaceeconomy.ca/2025/11/03/an-analysis-of-radiation-protection-in-the-nvidia-h100-gpu/

Analysis noting H100 has no defense against SEL and is not radiation-hardened or radiation-tolerant

H100 has comprehensive ECC across HBM3, caches, and register files but "has no defense against [SEL]" and "is in no way 'radiation-hardened' or 'radiation-tolerant.'" Describes COTS-in-space mitigation approaches (current monitoring, spot shielding, redundancy). General analysis piece, not based on empirical H100 testing.