Sources

Key Sources

patel-2024-ai-bottlenecks

Dylan Patel — Deep dive on the 3 big bottlenecks to scaling AI compute

https://www.dwarkesh.com/p/dylan-patel

SemiAnalysis CEO on semiconductor bottlenecks, data center economics, and skepticism of space GPUs

States "Space GPUs aren't happening this decade." Estimates a 1 GW data center costs ~$13B/year in rental compute expenses, Big Tech committing ~$600B annually with ~$1T total supply chain investment. Amazon can build data centers in as little as eight months. Argues scaling power in the US will not be a problem.

handmer-2026

I guess we're doing Moon factories now

https://caseyhandmer.wordpress.com/2026/02/10/i-guess-were-doing-moon-factories-now/

Argues orbital inference is economically viable because inference value far exceeds deployment cost premium

Contends inference value could be ~100x ground-based cost while space deployment costs only ~2x more, leaving substantial profit margins. Estimates ~10,000 Starship launches/year could deliver ~100 GW orbital power. Beyond that scale, manufacturing satellite mass in space (from lunar materials) becomes necessary. Claims beaming power from Earth to Moon is 1000x cheaper than alternative lunar power generation.

musk-2026

Elon Musk — "In 36 months, the cheapest place to put AI will be space"

https://www.dwarkesh.com/p/elon-musk

Musk argues orbital AI compute will be cheaper than terrestrial within 30-36 months

Claims orbit becomes the cheapest place for AI compute within 30-36 months. Solar panels achieve ~5x greater output in orbit (no atmosphere, no night, no batteries). Ground solar cells cost ~$0.25-0.30/W in China; space deployment reduces the effective cost by ~10x. Gas turbine production is sold out through 2030, and utility interconnect studies take 1+ year. Envisions 100+ GW/year deployment via ~10,000 annual Starship launches (20-30 Starships cycling every ~30 hours). Projects that AI compute launched to space annually will exceed cumulative Earth-based AI compute within five years.
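
A rough consistency check of these figures (Python; the arithmetic is mine, all inputs are from the interview summary). At a ~30-hour turnaround, 10,000 annual launches imply roughly 34 vehicles, slightly above the quoted 20-30, and ~10 MW of orbital capacity delivered per launch:

```python
# Rough consistency check on the cadence and deployment figures quoted above.
launches_per_year = 10_000
turnaround_hours = 30
hours_per_year = 8760

flights_per_ship = hours_per_year / turnaround_hours   # ~292 flights per vehicle per year
ships_needed = launches_per_year / flights_per_ship    # ~34 vehicles at that turnaround
mw_per_launch = 100_000 / launches_per_year            # 100 GW/yr spread over the launches

print(f"~{flights_per_ship:.0f} flights/ship/yr, ~{ships_needed:.0f} ships, "
      f"~{mw_per_launch:.0f} MW of capacity per launch")
```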

Sources

laird-thermal-space

Thermal Pathways in Space

https://www.laird.com/resources/case-studies/thermal-pathways-in-space

Technical case study on heat dissipation in LEO using PCB-level thermal design

Radiation via emissivity is the sole heat dissipation mechanism in vacuum. Uses distributed radiant heat sinks at PCB level with second-surface mirrors (fluoropolymers with vapor-deposited metal layers). LEO atomic oxygen and radiation rapidly degrade organic materials and alter thermal properties, making long-term stability a critical challenge.

handmer-2025-tweet

Casey Handmer — SpaceX orbital AI inference concept

https://x.com/CJHandmer/status/1997906033168330816

First-principles analysis of Starlink-derived orbital inference satellites

Proposes inference satellites derived from Starlink v3 in sun-synchronous orbit at 560 km. Each satellite: ~130 kW solar, ~200 H100-equivalent GPUs, 13,000 tokens/sec, ~$4M revenue/year at $10 per million tokens, ~60% ROI at $50,000/kW all-in cost. Key innovation: mounting GPUs directly on solar array modules (6 kW each) with local WiFi, distributing heat rather than concentrating it. At 1 kg/m² solar arrays, one Starship launch delivers ~30 MW; 1,000 launches = 30 GW. Economics work if revenue exceeds ~$4/kWh.
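
A back-of-envelope reproduction of the per-satellite economics (Python sketch; reading the $10 price as dollars per million tokens, the interpretation consistent with the ~$4M/yr revenue figure):

```python
# Back-of-envelope reproduction of the per-satellite economics in the thread.
SECONDS_PER_YEAR = 365 * 24 * 3600

tokens_per_sec = 13_000
solar_kw = 130
all_in_cost_per_kw = 50_000     # $/kW, from the thread
price_per_million_tokens = 10   # $ per million tokens

tokens_per_year = tokens_per_sec * SECONDS_PER_YEAR            # ~4.1e11 tokens
revenue = tokens_per_year / 1e6 * price_per_million_tokens     # ~$4.1M/yr
capex = solar_kw * all_in_cost_per_kw                          # ~$6.5M per satellite
print(f"revenue ~${revenue/1e6:.1f}M/yr, capex ~${capex/1e6:.1f}M, "
      f"simple annual return ~{revenue/capex:.0%}")
```

This lands at ~63%, matching the thread's ~60% ROI figure.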

hn-xai-spacex-solar

Solar Power: Space vs. Earth — xAI joins SpaceX (HN discussion)

https://summarizer.secondthoughts.workers.dev/jobs/60ee7d4d-b465-422e-9101-5386aa22c98b/topics/5

HN debate on whether orbital solar-powered AI compute can compete with terrestrial solar

Proponents argue space offers continuous solar without weather/night, panels paying back ~7-8x faster. Critics note ground-based solar remains far cheaper, global PV production is only ~1-2 TW/year vs the proposed 500-1000 TW/year scale, and hardware utilization drops to ~30% in space scenarios. Most concluded orbital compute is not economically competitive with ground-based solutions.

hn-xai-spacex-resources

Resource Utilization and Scarcity — xAI joins SpaceX (HN discussion)

https://summarizer.secondthoughts.workers.dev/jobs/60ee7d4d-b465-422e-9101-5386aa22c98b/topics/19

HN debate on whether Earth faces genuine resource constraints justifying orbital data centers

Critics contend Earth has vast non-arable desert land and power limitations are political/infrastructural rather than fundamental. Proponents counter that space bypasses permitting, rolling blackouts, and grid constraints (19 GW shortage, 7-year turbine lead times). SpaceX-xAI vertical integration seen as competitive advantage.

hn-xai-spacex-thermodynamics

Thermodynamics of Space Cooling — xAI joins SpaceX (HN discussion)

https://summarizer.secondthoughts.workers.dev/jobs/60ee7d4d-b465-422e-9101-5386aa22c98b/topics/0

Technical analysis of Stefan-Boltzmann radiative cooling constraints

A single AI rack generates ~100 kW of waste heat (comparable to the ISS power budget). The ISS radiator system (1,000+ m², 6+ metric tons) dissipates only ~84 kW. Operating GPUs at 70°C rather than 20°C dramatically improves radiative efficiency due to the T⁴ relationship. Critics note that accounting for cooling infrastructure triples or quadruples per-rack launch costs.

hn-xai-spacex-starship

Launch Economics and Starship — xAI joins SpaceX (HN discussion)

https://summarizer.secondthoughts.workers.dev/jobs/60ee7d4d-b465-422e-9101-5386aa22c98b/topics/4

Whether Starship cost reductions make orbital data centers viable

Entire proposal hinges on Starship achieving dramatic cost reductions. Even with reduced launch costs, mass for cooling, shielding, and hardware makes space data centers far more expensive. Manufacturing bottlenecks persist — current solar cell production ~1 TW/year vs proposed 500-1000 TW/year.

hn-xai-spacex-manufacturing

Space Manufacturing and Moon Bases — xAI joins SpaceX (HN discussion)

https://summarizer.secondthoughts.workers.dev/jobs/60ee7d4d-b465-422e-9101-5386aa22c98b/topics/10

Orbital vs lunar vs terrestrial alternatives, with strong skepticism toward orbital

ISS dissipates at most 70 kW with 1,500 m² of radiators (6.5 metric tons), less than a single AI rack's heat load. Commenters broadly dismiss space data centers as "insane" vs Earth-based infrastructure. Lunar facilities described as easier because heat can be rejected into the ground. Edge computing in space acknowledged as potentially viable.

hn-xai-spacex-maintenance

Technical Feasibility of Maintenance — xAI joins SpaceX (HN discussion)

https://summarizer.secondthoughts.workers.dev/jobs/60ee7d4d-b465-422e-9101-5386aa22c98b/topics/2

Hardware failure rates and impossibility of in-orbit maintenance

Failed satellites must be deorbited and replaced entirely. At scale, one-in-a-million failures become daily certainties. AI clusters' heavy interconnection means single failures cascade. Radiation-hardened hardware is several generations obsolete by deployment. Falcon Heavy delivers ~12 racks for ~$100M, tripling or quadrupling per-rack costs.

hn-xai-spacex-compute-demand

AI Capability and Compute Demand — xAI joins SpaceX (HN discussion)

https://summarizer.secondthoughts.workers.dev/jobs/60ee7d4d-b465-422e-9101-5386aa22c98b/topics/17

Whether AI compute demand growth justifies space-based infrastructure

Critics question whether the proposal is "buzzword attachment to drive investment." Proponents argue terrestrial expansion faces regulatory and supply-chain bottlenecks. 100 kW of heat per rack is fundamentally different from modest space telescope needs. Google is also exploring space-based AI infrastructure.

hn-xai-spacex-radiators

Radiator Design and Physics — xAI joins SpaceX (HN discussion)

https://summarizer.secondthoughts.workers.dev/jobs/60ee7d4d-b465-422e-9101-5386aa22c98b/topics/15

Engineering approaches including droplet radiators, ammonia loops, pyramidal designs

Proposed solutions: higher GPU temperatures (70°C), ammonia coolant loops, droplet radiators, pyramidal designs. Radiator area ~3x solar panel dimensions could maintain ~300K. But mass penalties destroy the economic case. Consensus: solvable physics, prohibitive economics. Lunar facilities described as "1000x easier."

hn-xai-spacex-latency

Latency and Data Transmission — xAI joins SpaceX (HN discussion)

https://summarizer.secondthoughts.workers.dev/jobs/60ee7d4d-b465-422e-9101-5386aa22c98b/topics/11

Latency tolerance of AI workloads and bandwidth constraints

AI training is not latency-sensitive; batch inference could work via queuing. Skeptics raise bandwidth limitations and model checkpoint transfer costs. Most acknowledge the concept is speculative but potentially viable within roughly a decade if terrestrial economics worsen.

mccalip-space-dc

Economics of Orbital vs Terrestrial Data Centers

https://andrewmccalip.com/space-datacenters

Detailed quantitative cost model comparing 1 GW orbital vs terrestrial over 5-year lifecycle

Orbital capex 2.1x terrestrial ($31.2B vs $14.8B for 1 GW). LCOE $891/MWh orbital vs $398/MWh terrestrial (2.24x gap). Launch costs dominate orbital budget at $22.2B of $31.2B. Assumes $1,000/kg to LEO. Radiator must maintain equilibrium below 75°C. Concludes economics are "not obviously stupid, and not a sure thing."
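
The launch line item implies a satellite mass budget; a minimal sketch (Python) using only the figures above:

```python
# Implied mass behind the launch-dominated capex in the model above.
launch_budget_usd = 22.2e9    # launch share of the $31.2B orbital capex
launch_cost_per_kg = 1_000    # $/kg to LEO, the model's assumption
plant_kw = 1e6                # 1 GW

launched_kg = launch_budget_usd / launch_cost_per_kg
print(f"~{launched_kg/1e6:.1f} million kg launched, ~{launched_kg/plant_kw:.0f} kg/kW, "
      f"LCOE gap {891/398:.2f}x")
```

That is ~22 kg of satellite per kW of capacity, which is the knob the launch budget is most sensitive to.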

techcrunch-orbital-brutal

Why the economics of orbital AI are so brutal

https://techcrunch.com/2026/02/11/why-the-economics-of-orbital-ai-are-so-brutal/

Analysis of orbital vs terrestrial cost disparity (Feb 2026)

A 1 GW orbital data center would cost ~$42.4B — almost 3x terrestrial equivalent. Questions whether SpaceX's million-satellite approach can achieve viability.

peraspera-realities

Realities of Space-Based Compute

https://www.peraspera.us/realities-of-space-based-compute/

Comprehensive technical analysis of orbital compute across power, thermal, radiation, communications, and timeline

100 kW system requires 3-5 metric tons (solar ~930 kg, batteries ~500 kg, radiators ~1000+ kg). Timeline phases: "Crawl" (<10 kW, near-term), "Walk" (10-500 kW, 10-15 years), "Run" (MW scale, 2040s+). LEO latency 1-4 ms one-way. Commercial AI compute at MW-scale is "still decades away."

google-suncatcher

Exploring a space-based, scalable AI infrastructure system design

https://research.google/blog/exploring-a-space-based-scalable-ai-infrastructure-system-design/

Google's Project Suncatcher technical feasibility study

Sun-synchronous LEO at ~650 km. 81-satellite clusters with TPUs connected by free-space optical links. Bench demo: 800 Gbps per transceiver pair. Trillium v6e TPUs survived proton beam testing to ~2 krad(Si), nearly 3x shielded 5-year dose. Solar panels 8x more productive in orbit. Two prototype satellites launching early 2027 with Planet Labs. Economic viability requires launch costs below $200/kg, projected mid-2030s.

spacecomputer-cooling

Cooling for Orbital Compute: A Landscape Analysis

https://blog.spacecomputer.io/cooling-for-orbital-compute/

Deep technical analysis of thermal management approaches at various scales

Stefan-Boltzmann: 1 m² at 80°C radiates ~850 W; at 127°C ~1,450 W/m². Rule of thumb: 2.5 m² radiator per kW rejected. ISS achieves 166 W/m² in practice. Liquid Droplet Radiators up to 7x lighter than conventional (NASA research), achieving 450 W/kg. ESA ASCEND validated thermal feasibility but requires 10x reduction in launcher emissions; targets 50 kW proof-of-concept by 2031, 1 GW by 2050.
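
A minimal Stefan-Boltzmann check of the per-square-meter figures (Python; emissivity and environmental heat loads are not modeled, so these are ideal upper bounds):

```python
# Ideal radiative flux at the two temperatures quoted above.
SIGMA = 5.67e-8  # W/m^2/K^4

def ideal_flux_w_m2(temp_c: float) -> float:
    """Blackbody emission from one side of a radiator at temp_c."""
    return SIGMA * (temp_c + 273.15) ** 4

for temp_c in (80, 127):
    print(f"{temp_c} C: ~{ideal_flux_w_m2(temp_c):.0f} W/m^2 ideal")
# ~880 W/m^2 at 80 C and ~1,450 W/m^2 at 127 C ideal; real emissivity (~0.85-0.9)
# and sun/Earth heat loads pull usable flux toward the quoted ~850 W/m^2, and
# integrated systems (ISS: 166 W/m^2 in practice) fall much further below the ideal.
```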

sci-am-space-dc

Space-Based Data Centers Could Power AI with Solar Energy — At a Cost

https://www.scientificamerican.com/article/data-centers-in-space/

Balanced assessment of orbital data center feasibility (Dec 2025)

Google estimates launch costs must fall below $200/kg by 2035. Benjamin Lee (UPenn): "Launch costs are dropping... but we would still require a very large number of launches." Saarland University found orbital facilities produce ~10x more emissions than terrestrial. As of late 2025, space data centers are "mostly an idea, a handful of small prototypes and a stack of ambitious slide decks."

aei-launch-costs

Moore's Law Meet Musk's Law: The Stunning Decline in Launch Costs

https://www.aei.org/articles/moores-law-meet-musks-law-the-underappreciated-story-of-spacex-and-the-stunning-decline-in-launch-costs/

Historical analysis of SpaceX's impact on launch cost trajectory

Pre-SpaceX average: ~$16,000/kg. Falcon 9: $2,500/kg (30x reduction vs Shuttle). Falcon Heavy: $1,500/kg. Starship expected ~$1,600/kg initially, potential $100-150/kg long-term. Musk aspirational: $10/kg. Citigroup 2040 projections: best case ~$30/kg, bear case ~$300/kg.

balerion-kilowatts

Kilowatts to Compute: Data Centers on Earth and in Orbit

https://balerionspace.substack.com/p/bsv-insights-0002-kilowatts-to-compute

Detailed comparison of orbital vs terrestrial economics at 40 MW scale

Terrestrial 40 MW facility costs ~$175M in electricity over 5-year GPU lifecycle. Starcloud claims orbital equivalent could cost "tens of millions" with no ongoing fuel/grid costs. 100 MW space solar requires 330,000 m² array. Emphasizes "time is now as valuable as cost" — terrestrial facilities face multi-year permitting delays while orbital enables incremental expansion.

starcloud-nvidia

How Starcloud Is Bringing Data Centers to Outer Space

https://blogs.nvidia.com/blog/starcloud/

Starcloud's first orbital GPU launch and future plans

Starcloud-1 (60 kg, H100) launched Nov 2025 — 100x more powerful GPU than any previous space operation. First LLM trained in space. Claims energy costs 10x cheaper than terrestrial including launch. Targets 5 GW facility with ~4 km solar/cooling panels.

blocksandfiles-starcloud

Starcloud pitches orbital datacenters as cheaper, cooler, and cleaner

https://blocksandfiles.com/2025/10/23/starcloud-orbiting-datacenters/

Critical analysis of Starcloud's economic claims

Starcloud claims a 20x cost advantage: 40 MW terrestrial = $167M over 10 years vs Starcloud-2 = $8.2M. But the figures exclude server/storage/networking hardware; once full system deployment is included ($24B in hardware), the claimed cost advantage shrinks to ~0.007%.

thales-ascend

Thales Alenia Space — ASCEND Feasibility Study Results

https://www.thalesaleniaspace.com/en/press-releases/thales-alenia-space-reveals-results-ascend-feasibility-study-space-data-centers-0

EU-funded Horizon Europe study by consortium including Thales, ArianeGroup, Airbus, DLR, Orange, HPE validating orbital DC feasibility

ASCEND = Advanced Space Cloud for European Net zero emission and Data sovereignty; launched 2023 under Horizon Europe. Requires a launcher 10x less emissive over its lifecycle to meet the CO2 reduction goals. Space DCs would not require water for cooling. Projects ROI of "several billion euros" by 2050. Targets 1 GW before 2050. Timeline: robotic demo 2026 (EROSS IOD), proof-of-concept 2031, initial deployment 2036, large-scale rollout thereafter. Modular space infrastructure assembled in orbit using robotic tech.

enr-grid-bottleneck

Grid Access, Not Land, Emerges as Bottleneck for Data Center Construction

https://www.enr.com/articles/62227-grid-access-not-land-emerges-as-bottleneck-for-data-center-construction

Analysis of grid interconnection as primary constraint on terrestrial expansion

Data center electricity demand could triple by end of decade. Multiyear waits for grid interconnection studies. Grid upgrades add tens of millions and extend preconstruction by 1+ years. Developers now required to include on-site generation and battery storage.

bloomberg-dc-decline

US Data Center Construction Drops as Permit, Power Delays Slow Projects

https://www.bloomberg.com/news/articles/2026-02-25/us-data-center-construction-fell-amid-permit-and-power-delays

First decline in US data center construction since 2020

Capacity under construction fell to 5.99 GW (end 2025) from 6.35 GW (end 2024). Nearly half of 140 projects planned for 2026 delayed to 2027.

spacex-xai-merger

SpaceX acquires xAI — orbiting data center plans

https://www.tomshardware.com/tech-industry/artificial-intelligence/spacex-acquires-xai-in-a-bid-to-make-orbiting-data-centers-a-reality-musk-plans-to-launch-a-million-tons-of-satellites-annually-targets-1tw-year-of-space-based-compute-capacity

SpaceX-xAI merger and orbital data center ambitions

SpaceX acquired xAI. Plans to launch 1 million tons of satellites annually. Targets 1 TW/year of space-based compute capacity. 100 kW compute per ton of satellite, adding 100 GW annually at full scale.

nbf-falcon9-true-cost

SpaceX Falcon 9 True Cost to Launch

https://www.nextbigfuture.com/2026/02/spacex-falcon-9-true-cost-to-launch-is-about-300-per-pound-which-is-25-of-selling-price-to-customers.html

Analysis of SpaceX's internal launch costs vs customer pricing

Internal marginal cost ~$629/kg (25% of $2,600/kg customer price). Total marginal launch cost ~$10.5-11M. Upper stage $7M, booster amortized $1M, propellant $250K.

nbf-starship-roadmap

SpaceX Starship Roadmap Lower Launch Costs by 100 Times

https://www.nextbigfuture.com/2025/01/spacex-starship-roadmap-to-100-times-lower-cost-launch.html

Cost per kg projections at various Starship reuse rates

Build cost ~$90M. At 6 flights: $94/kg; 20 flights: $33/kg; 50 flights: $19/kg; 70 flights: $14/kg. Per-flight marginal cost target $2M.
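
A rough reconstruction of these per-kg figures (Python sketch). The ~$90M build cost and ~$2M marginal flight cost are quoted above; the ~200 t payload to LEO is an assumption of mine, chosen because it reproduces the 20- and 50-flight numbers closely, so treat the endpoints as approximate:

```python
# Rough reconstruction of the NBF per-kg projections above.
BUILD_COST = 90e6      # $ per vehicle, from the article
MARGINAL = 2e6         # $ per flight target, from the article
PAYLOAD_KG = 200_000   # assumed payload to LEO (not stated in the summary)

for flights in (6, 20, 50, 70):
    cost_per_flight = BUILD_COST / flights + MARGINAL
    print(f"{flights:>2} reuses: ~${cost_per_flight / PAYLOAD_KG:.0f}/kg")
```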

dwarkesh-space-gpus

Notes on Space GPUs

https://www.dwarkesh.com/p/notes-on-space-gpus

Quantitative analysis of orbital datacenter satellite mass budgets

Stripped GB200 NVL72 at ~100 kg consuming 132 kW (~1,452 W/kg compute). With 200 W/kg solar, ~320 W/kg radiators at 60°C, 25% chassis overhead: ~85 W/kg (~11.8 kg/kW) integrated satellite.
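
A rough rebuild of that mass budget (Python; the specific-power inputs are the post's, the bookkeeping and rounding are mine) lands near the quoted figure:

```python
# Integrated specific-power budget from the components quoted above.
compute_w_per_kg = 1_452   # stripped NVL72
solar_w_per_kg = 200
radiator_w_per_kg = 320    # at 60 C
overhead = 1.25            # chassis / structure factor

kg_per_kw = (1000 / compute_w_per_kg
             + 1000 / solar_w_per_kg
             + 1000 / radiator_w_per_kg) * overhead
print(f"~{kg_per_kw:.1f} kg/kW, ~{1000 / kg_per_kw:.0f} W/kg integrated")
# ~11 kg/kW (~91 W/kg), in the same ballpark as the post's ~11.8 kg/kW (~85 W/kg);
# the small gap likely reflects items not itemized in the summary above.
```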

nasa-smallsat-power-soa

Small Spacecraft Technology State of the Art — Power Subsystems

https://www.nasa.gov/smallsat-institute/sst-soa/power-subsystems/

NASA survey of space solar array technologies with specific power data

Flown missions clustered ~30 W/kg. State-of-art rigid: up to 200 W/kg. ROSA: 100 W/kg. FOSA: 140 W/kg. Next-gen thin-film targets 500 W/kg (not flight-proven).

nvidia-gb200-specs

NVIDIA DGX GB200 NVL72 Hardware Specifications

https://docs.nvidia.com/dgx/dgxgb200-user-guide/hardware.html

Official specifications for GB200 NVL72 rack

~1,360 kg, 120-132 kW (115 kW liquid + 17 kW air cooled). 72 Blackwell GPUs, 36 Grace CPUs.

mach33-cooling

Debunking the Cooling Constraint in Space Data Centers

https://research.33fg.com/analysis/debunking-the-cooling-constraint-in-space-data-centers

Analysis challenging thermal management as fundamental blocker

Scaling from ~20 kW to ~100 kW: radiators 10-20% of total mass, ~7% of planform area. Solar arrays dominate footprint.

melagen-radiation-shielding

Radiation Shielding for Electronics

https://www.melagenlabs.com/learn/radiation-shielding-for-electronics-what-every-space-hardware-team-needs-to-know

Overview of radiation shielding approaches and mass trade-offs

LEO below 1,000 km needs minimal additional shielding for <10 krad. Hydrogen-rich polymers 3x better per unit mass than aluminum.

epoch-gpu-failures

Hardware failures won't limit AI scaling

https://epoch.ai/blog/hardware-failures-wont-limit-ai-scaling

GPU failure rates at scale and implications for AI training

H100 MTBF ~50,000 hours (~5.7 years). At 100K GPUs: one failure every 30 min. Annualized ~9%.
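
The fleet-level figure is just MTBF divided by fleet size; a minimal check (Python):

```python
# With roughly independent failures, time between failures scales as MTBF / N.
mtbf_hours = 50_000   # per-GPU H100 MTBF quoted above
fleet_size = 100_000

minutes_between_failures = mtbf_hours / fleet_size * 60
print(f"~{minutes_between_failures:.0f} minutes between failures across "
      f"{fleet_size:,} GPUs")
# The ~9% annualized figure is quoted from the post, not derived here.
```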

meta-llama3-failures

Faulty H100 GPUs and HBM3 caused half of Llama 3 training failures

https://www.tomshardware.com/tech-industry/artificial-intelligence/faulty-nvidia-h100-gpus-and-hbm3-memory-caused-half-of-failures-during-llama-3-training

Meta's failure data from Llama 3 training on 16,384 H100 cluster

419 failures in 54 days. 148 GPU failures (0.9%), 72 HBM3 failures (0.44%). One failure every 3 hours.

gpu-depreciation-schedules

Resetting GPU depreciation

https://siliconangle.com/2025/11/22/resetting-gpu-depreciation-ai-factories-bend-dont-break-useful-life-assumptions/

GPU depreciation practices across hyperscalers

AWS/Google/Microsoft: 6-year depreciation. Industry converging toward 5-year via "value cascade" model. AI-native neoclouds use 4-5 year schedules.

Starlink Satellites Falling Out of Orbit

https://orbitaltoday.com/2026/02/28/starlink-satellites-falling-risks-statistics-analysis/

Statistics on Starlink deorbiting and failure rates

10,801 launched, 1,391 (~13%) re-entered. Designed ~5-year lifespan. Early batches 3-5% uncontrollable failure rates.

fcc-5yr-deorbit-rule

FCC Adopts New 5-Year Rule for Deorbiting Satellites

https://www.fcc.gov/document/fcc-adopts-new-5-year-rule-deorbiting-satellites-0

FCC rulemaking requiring LEO satellite disposal within 5 years

Effective September 2024. Large constellations may warrant shorter periods.

nvidia-space1-module

NVIDIA Space-1 Vera Rubin Module

(Payload newsletter)

Purpose-built space AI compute module

Up to 25x H100 AI-compute. Designed for low-SWaP. Not yet commercially available. Six launch customers announced.

jll-2026-dc-outlook

2026 Global Data Center Outlook

https://www.jll.com/en-us/insights/market-outlook/data-center-outlook

JLL data center market outlook including construction costs

Shell-and-core from $7.7M/MW (2020) to $10.7M/MW (2025), forecast $11.3M/MW (2026). AI tech fit-out adds $25M/MW.

turner-townsend-dcci-2025

Data Centre Construction Cost Index 2025-2026

https://www.turnerandtownsend.com/insights/data-centre-construction-cost-index-2025-2026/

Annual construction cost index covering 52 global markets

5.5% YoY increase (down from 9.0%). 7-10% AI premium. Tokyo ($15.2/W), Singapore ($14.5/W), Zurich ($14.2/W) most expensive.

semianalysis-gb200-tco

H100 vs GB200 NVL72 Training Benchmarks

https://newsletter.semianalysis.com/p/h100-vs-gb200-nvl72-training-benchmarks

SemiAnalysis TCO analysis of GB200 NVL72

GB200 NVL72 rack ~$3.1M (hyperscaler), ~$3.9M all-in. 120 kW/rack. 1.6-1.7x H100 per-GPU cost.

mckinsey-cost-of-compute

The cost of compute: A $7 trillion race

https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/the-cost-of-compute-a-7-trillion-dollar-race-to-scale-data-centers

McKinsey analysis of global data center investment requirements

$5.2T for 125 GW by 2030. Servers ~$3.5T, electrical/mechanical ~$0.8T, power generation ~$0.4T. Implies ~$42M/MW average.

epoch-hyperscaler-capex

Hyperscaler capex has quadrupled since GPT-4

https://epochai.substack.com/p/hyperscaler-capex-has-quadrupled

Epoch AI analysis of hyperscaler capital expenditure trends

Combined capex near $500B in 2025, 70%/year growth. Could reach $770B in 2026.

xai-colossus-expansion

xAI Colossus Hits 2 GW: 555,000 GPUs, $18B

https://introl.com/blog/xai-colossus-2-gigawatt-expansion-555k-gpus-january-2026

xAI Colossus expansion details

2 GW, 555,000 GPUs for ~$18B (~$9M/MW in GPU costs). Built in 122 days initially.

epochai-power-capacity

Global AI power capacity comparable to New York State

https://epochai.substack.com/p/global-ai-power-capacity-is-now-comparable

Analysis of global AI data center power capacity

AI data centers ~30 GW as of late 2025, total US data center ~40 GW.

goldman-sachs-dc-demand

AI to drive 165% increase in data center power demand by 2030

https://www.goldmansachs.com/insights/articles/ai-to-drive-165-increase-in-data-center-power-demand-by-2030

Goldman Sachs forecast of data center power demand

Projects 122 GW globally by end of 2030. 165% increase vs 2023.

semianalysis-pjm-bills

Are AI Datacenters Increasing Electric Bills?

https://newsletter.semianalysis.com/p/are-ai-datacenters-increasing-electric

PJM capacity market price dynamics and data center responsibility

PJM capacity prices jumped 9.3x. Removing datacenters reduced payments by $9.33B (64%). 67M residents face ~15% bill increase.

ieefa-pjm-10x

Data center growth spurs PJM capacity prices by factor of 10

https://ieefa.org/resources/projected-data-center-growth-spurs-pjm-capacity-prices-factor-10

IEEFA analysis of data center impact on PJM prices

Data centers responsible for 63% of capacity price increase, $9.3B in costs.

ge-vernova-backlog

GE Vernova 80-GW gas turbine backlog stretches into 2029

https://www.utilitydive.com/news/ge-vernova-gas-turbine-investor/807662/

Gas turbine supply constraints

80 GW backlog against 20 GW/year output. Sold out through 2030.

lazard-lcoe-2025

Lazard LCOE+ (June 2025)

https://www.lazard.com/media/eijnqja3/lazards-lcoeplus-june-2025.pdf

Annual LCOE benchmark for generation technologies

Combined-cycle gas $48-107/MWh. Gas peaking $149-251/MWh. CCGT costs at 10-year high.

introl-smr-timeline

SMR Nuclear Power for AI Data Centers

https://introl.com/blog/smr-nuclear-power-ai-data-centers-implementation

SMR deployment timeline and costs

FOAK $14,600/kW vs projected NOAK $2,800/kW. Google-Kairos: 500 MW, first unit 2030. Realistic timelines 7-10 years.

introl-liquid-cooling

Liquid Cooling vs Air Cooling for AI Data Centers

https://introl.com/blog/liquid-vs-air-cooling-ai-data-centers

Comparison of cooling technologies and PUE

Air PUE 1.4-1.8. Liquid PUE 1.05-1.15. Immersion PUE 1.02-1.03.

bnef-battery-costs-2025

Battery Storage Costs Hit Record Lows — BloombergNEF

https://about.bnef.com/insights/clean-energy/battery-storage-costs-hit-record-lows-as-costs-of-other-clean-power-technologies-increased-bloombergnef/

Global benchmark for 4-hour battery storage fell 27% YoY to $78/MWh

Installed battery capex ~$125/kWh (utility-scale). LCOS of $65/MWh. 27% year-over-year decline in 2025.

google-intersect-acquisition

Google acquires Intersect Power for $4.75B

https://www.utilitydive.com/news/google-intersect-power-co-located-energy-park-data-center-ferc/735198/

Co-located energy parks with solar, batteries, and gas backup for data centers

Quantum Energy Park in TX: 640 MW solar, 1.3 GWh battery storage, plus flexible gas backup. $20B targeted renewable infrastructure investment by end of decade.

hyperscaler-solar-2025

How Data Centers Redefined Energy and Power in 2025

https://www.datacenterknowledge.com/energy-power-supply/how-data-centers-redefined-energy-and-power-in-2025

Hyperscaler clean energy procurement and onsite power trends

Hyperscalers signed 40+ GW solar in 2025. Brookfield-Microsoft 10.5 GW deal. 30% of DC sites expected to use onsite power as primary by 2030.

duke-flexible-load-study

Flexible Load Integration for Utilities

https://www.renewableenergyworld.com/power-grid/grid-modernization/as-ai-and-data-center-power-demand-skyrockets-flexible-load-integration-becomes-a-critical-strategy-for-utilities/

Duke University study on grid capacity for curtailable large loads

Grid could integrate 76-126 GW new demand with 22-88 hours/year curtailment. <50 hours/year curtailment could accommodate ~100 GW.

epri-dcflex-results

EPRI DCFlex Data Center Flexibility — IEEE Spectrum

https://spectrum.ieee.org/dcflex-data-center-flexibility

Demonstrated 25% power reduction in AI data center with no SLA breach

256 NVIDIA GPUs, 25% reduction for 3 hours, 15-minute ramp. 10-40% modulation feasible. 40+ partners including Google, Meta, Microsoft, PJM.

google-demand-response-1gw

Google Data Center Demand Response Milestone

https://blog.google/innovation-and-ai/infrastructure-and-cloud/global-network/demand-response-data-center-milestone/

Google signs 1 GW of demand response contracts

Contracts with Entergy Arkansas, Minnesota Power, DTE Energy. Demand response used to accelerate grid interconnection.

ftai-power-cfm56

FTAI Aviation Launches FTAI Power

https://ir.ftaiaviation.com/news-releases/news-release-details/ftai-aviation-announces-launch-ftai-power-ftai-adapts-worlds

Converting retired CFM56 jet engines to 25 MW gas turbines for data centers

30-45 day conversion per engine. 100+ units/year (2.5+ GW/year). 1,000+ engines owned; 22,000+ produced globally. Production starts 2026.

boom-superpower-turbine

Boom Supersonic Superpower Gas Turbines

https://boomsupersonic.com/press-release/boom-supersonic-to-power-ai-data-centers-with-superpower-natural-gas-turbines-adds-300-million-in-new-funding

42 MW turbine derived from supersonic aviation technology

$1.25B+ backlog. Crusoe launch customer (29 units, 1.21 GW). 4+ GW/year production by 2030. Prototype core testing 2026.

baker-hughes-twenty20

Baker Hughes Gas Turbine Order for Data Centers

https://investors.bakerhughes.com/news-releases/news-release-details/baker-hughes-receives-gas-turbine-order-twenty20-energy-power-us

10 Frame 5 gas turbines (~250 MW) for data centers

Twenty20 Energy order for Georgia and Texas DCs. Initial delivery 2027. Multi-GW strategic agreement.

wartsila-data-center-orders

Wärtsilä Data Center Power Orders

https://www.wartsila.com/media/news/29-01-2026-wartsila-chosen-for-a-major-u-s-power-plant-project-addressing-critical-energy-demand-driven-by-data-center-development-3711601

~1 GW in reciprocating engine orders for US data centers

507 MW (27 engines, delivery 2027) + 429 MW (24 engines, late 2028/early 2029). 79 GW installed globally.

caterpillar-dc-orders

Caterpillar Gas Generator Data Center Agreements

https://www.caterpillar.com/en/news/corporate-press-releases/h/joule-caterpillar-wheeler.html

6+ GW in gas generator agreements for data center campuses

4 GW (Joule Capital, Utah) + 2 GW (AIP, West Virginia). 11.5% reciprocating engine market share. Fastest-growing segment.

utility-dive-solar-data-center

Solar as a Data Center Power Solution

https://www.utilitydive.com/news/data-center-power-problem-solar/758809/

BTM solar deployment timelines for data centers

Virginia Permit By Rule allows 18-24 month solar timeline. BTM solar constructable in months once permitted.

introl-nvl72-deployment

GB200 NVL72 Deployment: Managing 72 GPUs in Liquid-Cooled Configurations

https://introl.com/blog/gb200-nvl72-deployment-72-gpu-liquid-cooled

Detailed physical breakdown of the full NVL72 system components and mass

Full NVL72 system ships as four components: compute rack (~1,500 kg, 18 × 1U trays), NVLink switch rack (~800 kg, 9 switch trays), CDU (~400 kg, 200 L coolant), power distribution (~300 kg, 48 PSUs). Total ~3,000 kg, significantly more than the often-cited ~1,360 kg compute rack alone.

mdpi-satellite-dc-dc

State-of-the-Art DC-DC Converters for Satellite Applications

https://www.mdpi.com/2226-4310/12/2/97

Survey of space-grade DC-DC converter technologies and mass characteristics

Satellite power system constitutes ~25% of total dry mass. Modern GaN/SiC converters achieving ~0.2-0.5 kg/kW at high power. Power harness/cabling is 10-25% of electrical power system mass.

nature-multilayer-shield

Multilayer radiation shield for satellite electronic components protection

https://www.nature.com/articles/s41598-021-99739-2

Optimized graded-Z shielding designs for satellites

Three-layer shields (Au/W/Al) provide 70% better electron protection than single aluminum. For protons, W/Pb/Ta achieves 50% dose reduction vs equivalent aluminum. Graded-Z reduces electron dose by >60% over single-material shields at same areal density.

researchgate-leo-radiation

Radiation analysis and mitigation framework for LEO small satellites

https://www.researchgate.net/publication/322649302

Radiation environment characterization and shielding requirements for LEO

Below 1.5 mm Al, trapped electrons dominate dose. Above 1.5 mm, trapped protons dominate. 3 mm Al attenuates TID to <10 krad(Si) for 3-year LEO mission. 0.5 mm Al sufficient for 1-year worst-case.

catalyst-scaling-pathways

AI scaling pathways: on grid, on edge, off grid, off planet (Catalyst podcast)

https://reader.secondthoughts.workers.dev/posts/2248/text

Latitude Media Catalyst podcast with Shayle Kann (EIP) and Jake Elder (EIP) comparing grid-connected, edge, off-grid, and orbital data center pathways

Frames four pathways for scaling AI compute: grid-connected hyperscale (incumbent, constrained by transmission timelines of 5-7+ years and social license), edge (<50 MW, speed advantage but cost disadvantage at subscale), off-grid (>1 TW opportunity in the US Southwest per a Stripe/Paces study, but reliability challenges, with early projects below 90% uptime), and orbital (free solar power, but only 5-15% of DC cost is energy; O&M and debris are harder constraints than thermal). 10-year forecast: 50-60% grid hyperscale, 10-15% off-grid, ~15% edge, 5-10% orbital. Both hosts are skeptical of Musk's 3-4 year orbital cost parity claim. Key insight: off-grid is an underexplored middle ground; why go to space before exhausting terrestrial off-grid options? Chip supply chain likely bottlenecks before either off-grid or orbital scale constraints bind. At GW scale, an orbital DC would be a ~4 km^2 orbiting asset, with a debris strike expected every hour at that size. O&M identified as the hardest unsolved problem for orbital DCs.

starpath-solar-panels

Starpath Space ultra-lightweight solar panels (Payload Space newsletter)

https://reader.secondthoughts.workers.dev/posts/1576/view

Coverage of Starpath Space's Starlight Air panels at 73 g/m^2 and ~$15/watt

Starlight Air panels: 73 g/m^2, ~$15/watt (space-grade). Starlight Classic (thicker): ~$11.20/watt. PV crystalline structure in hundreds of nanometers, printed onto substrate fabric. 50 MW production facility planned; first deliveries 2026. Raised $12M seed in 2024.

spacex-fcc-million-satellite-filing

SpaceX files for million satellite orbital AI data center megaconstellation

https://www.datacenterdynamics.com/en/news/spacex-files-for-million-satellite-orbital-ai-data-center-megaconstellation/

SpaceX filed with the FCC for up to one million satellites to provide 100 GW of AI compute capacity

Filing projects launching one million tonnes of satellites annually to generate 100 GW of AI compute capacity. Scale would dwarf all existing satellite constellations combined.

blue-origin-project-sunrise

Blue Origin joins the orbital data center race

https://spacenews.com/blue-origin-joins-the-orbital-data-center-race/

Blue Origin filed FCC application on March 19, 2026 for "Project Sunrise," a 51,600-satellite orbital data center constellation

FCC filing for up to 51,600 satellites in sun-synchronous orbits at 500-1,800 km altitude. Orbital planes spaced 5-10 km apart, each containing 300-1,000 satellites. Optical intersatellite links with TeraWave broadband constellation.

starcloud-88k-constellation-fcc

Starcloud files plans for 88,000-satellite constellation

https://spacenews.com/starcloud-files-plans-for-88000-satellite-constellation/

FCC accepted Starcloud's March 2026 filing for up to 88,000 orbital data center satellites

FCC accepted filing March 13, 2026. 88,000 satellites at 600-850 km altitude in dusk-dawn sun-synchronous orbits. Orbital shell thickness up to 50 km for near-continuous solar power.

starcloud-first-ai-model-space

Nvidia-backed Starcloud trains first AI model in space

https://www.cnbc.com/2025/12/10/nvidia-backed-starcloud-trains-first-ai-model-in-space-orbital-data-centers.html

Starcloud trained Google's Gemma LLM on Starcloud-1 satellite in December 2025

Starcloud-1 launched Nov 2025 with H100 GPU — 100x more powerful than any prior space GPU. First LLM trained in orbit. Second satellite planned Oct 2026 with 100x power generation and Blackwell platform. Funded by Google and Andreessen Horowitz ($34M total).

electronics-cooling-arrhenius

Does a 10C Increase in Temperature Really Reduce the Life of Electronics by Half?

https://www.electronics-cooling.com/2017/08/10c-increase-temperature-really-reduce-life-electronics-half/

Technical analysis of Arrhenius equation limitations for electronics lifetime prediction

The "10C = half life" rule assumes activation energy ~0.7 eV; actual values range 0.3-1.0+ eV. Significant failure modes are not temperature-dependent (thermal cycling, vibration, humidity). Running GPUs at higher temperatures (as proposed for space at 70-80C) has complex reliability implications.

introl-orbital-dc-race-2026

Orbital Data Center Race 2026

https://introl.com/blog/orbital-data-centers-space-computing-race-2026

Comprehensive competitive landscape identifying 8+ companies, cost economics, and three-wave deployment timeline

Three companies with hardware in orbit: Kepler (10 optical relay sats), Axiom Space (2 DC nodes), Starcloud (H100, Nov 2025). Starcloud claims $0.005/kWh orbital energy vs $0.04-0.08/kWh terrestrial. McCalip calculator: orbital ~3x more per watt. Market forecast: $1.77B by 2029, $39.09B by 2035 (67.4% CAGR). Three waves: defense/ISR (2025-2030), AI training/premium cloud (2030-2035), potential mainstream (2035-2045).

cnbc-electricity-prices-inflation

Electricity prices rising by double the rate of inflation

https://www.cnbc.com/2026/02/12/electricity-price-data-center-ai-inflation-goldman.html

Goldman Sachs analysis of electricity price inflation driven by data center demand

Electricity prices jumped 6.9% in 2025, more than double headline inflation of 2.9%. Data centers make up 40% of electricity demand growth. Prices expected to increase up to 40% by 2030. Wholesale costs up 267% near data center clusters.

rmi-pjm-speed-to-power

PJM's Speed to Power Problem and How to Fix It

https://rmi.org/pjms-speed-to-power-problem-and-how-to-fix-it/

RMI analysis of PJM interconnection delays stretching from <2 years to >8 years

Average time from interconnection application to commercial operation: under 2 years in 2008, over 8 years by 2025. Capacity market clearing prices jumped from $29/MW-day to $330/MW-day cap. Capacity bills rose from $2.2B to $16.1B. PJM serves 67 million people.

datacenterwatch-opposition-tracker

$64 billion of data center projects have been blocked or delayed amid local opposition

https://www.datacenterwatch.org/report

Comprehensive tracker of data center projects facing community opposition

$18B blocked; $46B delayed; $64B total affected. 142 activist groups across 24 states. Bipartisan opposition (55% Republican, 45% Democrat). Loudoun County ended by-right zoning March 2025.

latitude-btm-traction

Behind-the-meter generation is picking up traction

https://www.latitudemedia.com/news/behind-the-meter-generation-is-picking-up-traction/

Rapid growth of BTM power generation for data centers

46 data centers with combined 56 GW plan BTM power, ~30% of all planned US DC capacity. 90% of BTM projects announced in 2025 alone. McKinsey estimates 25-33% of incremental demand through 2030 met by BTM.

camus-grid-connection-delays

Why Does It Take So Long to Connect a Data Center to the Grid?

https://www.camus.energy/blog/why-does-it-take-so-long-to-connect-a-data-center-to-the-grid

Technical analysis of multi-year bottlenecks in grid connection

Interconnection queue swollen to 2,600 GW nationally. Median time to commercial operation approaching 5 years. Withdrawal rates reaching nearly 80%. AI DC demand projected to grow 3.5x from 2025 to 2030 (McKinsey: 156 GW).

powermag-transformer-shortage

Transformers in 2026: Shortage, Scramble, or Self-Inflicted Crisis?

https://www.powermag.com/transformers-in-2026-shortage-scramble-or-self-inflicted-crisis/

Analysis of transformer supply crisis constraining data center and grid buildout

Power transformer lead times averaging 128 weeks (~2.5 years); GSUs 144 weeks. 30% supply shortfall for power transformers in 2025; 47% for GSUs. Cost inflation 77-95% since 2019.

aetherflux-galactic-brain

Aetherflux enters orbital data center race

https://spacenews.com/space-based-solar-power-startup-aetherflux-enters-orbital-data-center-race/

Aetherflux plans "Galactic Brain" orbital DC node in Q1 2027

Founded by Baiju Bhatt (Robinhood co-founder). $60M raised. Power-beaming demo satellite launching 2026. "Galactic Brain" first orbital DC node targeted Q1 2027. Combines space-based solar power with compute.

sophia-space-seed

Sophia Space raises $10M for orbital computing

https://www.geekwire.com/2026/sophia-space-10m-space-computing-network/

Modular TILE platform combining solar power with passive radiative cooling

Tabletop-sized satellite modules combining solar + passive radiative cooling. Multiple tiles connect into racks for scalable LEO computing. First in-orbit demo late 2027 or early 2028. One of NVIDIA's six space computing launch partners.

spacenews-economics-focus

With attention on orbital data centers, the focus turns to economics

https://spacenews.com/with-attention-on-orbital-data-centers-the-focus-turns-to-economics/

SpaceNews analysis noting $61B in terrestrial DC construction with unproven orbital business case

$61B in terrestrial data center construction last year (record). Axiom Space and Spacebilt plan ISS installation in 2027. Central finding: "it's not yet clear if the business case for data centers in space holds up."

fortune-experts-not-so-fast

AI data centers in space are having a moment. Experts say: Not so fast

https://fortune.com/2026/02/19/ai-data-centers-in-space-elon-musk-power-problems/

Expert skepticism about orbital DC timelines

Kathleen Curlee (Georgetown CSET): 2030-2035 timeline unrealistic. 1 GW orbital power requires ~1 km^2 solar panels. Jeff Thornburg (SpaceX veteran): minimum 3-5 years before functional systems. Tech companies project $5T+ in terrestrial DC spending by 2030.

chinatalk-dc-cost-comparison

How Much AI Does $1 Get You in China vs America?

https://reader.secondthoughts.workers.dev/posts/1238/view

Detailed cost comparison of 400 MW data center in China vs US

Chinese DCs cost $5.5-6.5M/MW construction; US $8-12M/MW. 400 MW construction: China ~$2.4B vs US ~$4B. US electricity for 400 MW DC: ~$600M over 3 years; China ~$350M.

payload-falcon9-price-hike

The Promise of Low Launch Prices is Still Far Off

https://pyld.omeclk.com/portal/public/ViewCommInBrowser.jsp?Sv4%2BeOSSucwiV%2BSifRJiNeUHzeOgHitiuZt0k4LaAu%2FtGh9fCjOzTvcfB6f0uDKUE90KLtIX9m6H0VKSnmjQuA%3D%3DA

Payload Pro analysis of SpaceX's March 2026 price increase and competitive dynamics

SpaceX increased Falcon 9 dedicated launch price from $70M to $74M and rideshare from $6,500/kg to $7,000/kg. Notes lack of real alternatives and concludes access to orbit has gotten more expensive in recent years despite narrative of falling launch costs.

spacenexus-launch-economics

Space Launch Economics Analysis

https://spacenexus.us/launch-economics

Comprehensive database of current launch vehicle costs per kg with historical trend data

Falcon 9 reusable $1,500/kg, expendable $2,720/kg. Falcon Heavy $1,400/kg. Starship target $10-50/kg. Global launch market $9.1B (2024), forecast $32B by 2030. Historical cost from $54,500/kg (Shuttle) to $1,500/kg (Falcon 9 reusable).

citi-gps-space-2022

Citi GPS: Space -- The Dawn of a New Age

https://www.citigroup.com/global/insights/space_20220509

Citigroup 2022 research note projecting launch costs to $100/kg by 2040 with bull/bear scenarios

Projects launch costs declining 95% to ~$100/kg by 2040. Bull case $33/kg. Driven by reusability, scale, new materials, cost-efficient production. Space industry to reach $1T revenue by 2040.

spacenews-categorical-imperative

SpaceX and the categorical imperative to achieve low launch cost

https://spacenews.com/spacex-and-the-categorical-imperative-to-achieve-low-launch-cost/

Analysis of SpaceX pricing strategy showing cost savings not passed to customers

SpaceX sells Falcon 9 launches at major markup over internal cost. Cost savings fund Starlink development rather than benefit external customers. No competitive pressure to lower customer prices given market dominance.

indexbox-starship-90m

SpaceX Starship Launch Price Set at $90 Million for 2029 Mission

https://www.indexbox.io/blog/spacex-starship-launch-price-set-at-90-million-for-2029-mission/

First publicly known Starship customer price: $90M for Voyager Starlab launch in 2029

Starship priced at $90M for the Voyager Technologies Starlab station launch in 2029, compared with $74M for a Falcon 9 carrying far less payload. Implies a Starship customer price of ~$600/kg at 150 t capacity.

Is Starlink Solar Module the Answer to Power in Space?

https://www.linkedin.com/pulse/starlink-solar-module-answer-power-space-stan-herasimenka-7anfc

Reverse-engineering of Starlink Gen 1.x solar array: 18% silicon cells, 78-100 W/kg achieved, 40-60 kg array mass

Starlink Gen 1.x solar arrays estimated at 78-100 W/kg specific power using mass-produced 18% efficiency silicon half-cells at ~7,535 W total per satellite.

satnews-fractal-lab-iii

The Fractal Lab -- Part III

https://satnews.com/2026/02/24/the-fractal-lab-part-iii/

Three-tier solar specific power framework: flown ~30 W/kg, lab demonstrated ~200 W/kg, near-term projection ~100 W/kg

Presents a maturity framework for solar array technology: heritage fleet at ~30 W/kg, laboratory demonstrated up to 200 W/kg, and near-term achievable at ~100 W/kg for 2030s deployable systems at megawatt scale.

mdpi-leo-degradation

Degradation Modeling and Telemetry-Based Analysis of Solar Cells in LEO

https://www.mdpi.com/2076-3417/15/16/9208

Models Si solar cell power loss of 12.5% at 300 km and 7.8% at 700 km over six months; evaluates Si, GaAs, TJ, CIGS

Silicon solar cell power output decreases approximately 12.5% at 300 km and 7.8% at 700 km over six months. Dominant degradation mechanisms include trapped charged particles, atomic oxygen, and UV radiation.

terawatt-starlight-specs

Starlight Solar Panel Specifications (Terawatt/Starpath)

https://terawatt.space/

Starlight Air: 16% efficiency, 73 g/m^2, $15/W. Starlight Classic: 19% efficiency, 900 g/m^2, $11.20/W.

Starlight Air panels at 73 g/m^2 yield ~2,980 W/kg cell-level specific power. Starlight Classic at 900 g/m^2 yield ~287 W/kg cell-level. Both radiation-hardened for LEO through Mars.
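
The cell-level figures follow directly from efficiency, the AM0 solar constant, and areal density; a short derivation (Python):

```python
# Cell-level specific power from the efficiencies and areal densities listed above.
SOLAR_CONSTANT = 1361  # W/m^2 in space (AM0)

panels = {
    "Starlight Air":     (0.16, 0.073),  # (efficiency, kg/m^2)
    "Starlight Classic": (0.19, 0.900),
}

for name, (eff, kg_per_m2) in panels.items():
    w_per_m2 = SOLAR_CONSTANT * eff
    print(f"{name}: ~{w_per_m2:.0f} W/m^2, ~{w_per_m2 / kg_per_m2:,.0f} W/kg")
# Air: ~218 W/m^2 and ~2,980 W/kg; Classic: ~259 W/m^2 and ~287 W/kg.
# Cell-level only; array-level specific power is lower once structure,
# harness, and deployment hardware are included.
```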

solar-degradation-geo-gaas-si

Solar array degradation on geostationary communications satellites

https://www.inderscience.com/info/inarticle.php?artid=90549

Telemetry from 11 GEO sats (1990-1998): GaAs 0.44-1.03%/yr degradation; Si 0.71-1.69%/yr

GEO GaAs cells degrade 0.44-1.03%/yr; Si cells 0.71-1.69%/yr. LEO radiation fluences 5-10x lower than GEO.

iss-solar-array-degradation

On-Orbit Performance Degradation of the International Space Station P6 Photovoltaic Arrays

https://ntrs.nasa.gov/api/citations/20030068268/downloads/20030068268.pdf

ISS silicon solar arrays: measured degradation 0.2-0.5%/yr, below predicted 0.8%/yr

ISS P6 silicon photovoltaic arrays showed measured short-circuit current degradation of 0.2-0.5%/yr at ~400 km LEO, below the predicted rate of 0.8%/yr.

satnews-physics-wall

The Physics Wall: Orbiting Data Centers Face a Massive Cooling Challenge

https://satnews.com/2026/03/17/the-physics-wall-orbiting-data-centers-face-a-massive-cooling-challenge/

SatNews analysis of radiative cooling challenges for orbital data centers, including radiator sizing, temperature tradeoffs, and active thermal control trends

Running radiators at 60°C instead of 20°C can reduce the required area by half. Industry expected to move toward space-rated heat pumps by 2027. A centralized 1 GW orbital DC would require ~834,000 m^2 of radiators at 400 K.
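
A Stefan-Boltzmann scale check on the GW-level figure (Python; the emissivity values are assumptions, and the article's ~834,000 m^2 lands between the two non-ideal cases):

```python
# Radiator area needed to reject 1 GW at 400 K for a few emissivity assumptions.
SIGMA = 5.67e-8  # W/m^2/K^4

def radiator_area_m2(heat_w, temp_k, emissivity):
    return heat_w / (emissivity * SIGMA * temp_k ** 4)

for eps in (1.0, 0.85, 0.80):
    area = radiator_area_m2(1e9, 400, eps)
    print(f"emissivity {eps}: ~{area:,.0f} m^2 to reject 1 GW at 400 K")
```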

isnps-lightweight-radiators

Advanced Lightweight Heat Rejection Radiators for Space Nuclear Power Systems

https://isnps.unm.edu/reports/ISNPS_Tech_Report_97.pdf

NASA-funded research on Ti-water heat pipe panels ranging from 5.8-7.16 kg/m^2, with additive-manufactured embedded heat pipes achieving >70% fin efficiency at 2-3 kg/m^2

State-of-the-art heat rejection radiators with Ti-water heat pipe panels range from 5.8 kg/m^2 to 7.16 kg/m^2. NASA TFAWS 2024 demonstrated embedded branching network heat pipes at 2-3 kg/m^2 using additive manufacturing.

nasa-smallsat-thermal

7.0 Thermal Control - NASA State of the Art of Small Spacecraft Technology

https://www.nasa.gov/smallsat-institute/sst-soa/thermal-control/

NASA reference on thermal control subsystems for small spacecraft

Comprehensive survey of thermal control technologies for small spacecraft including passive radiators, heat pipes, and active thermal management systems.

toughsf-radiators

ToughSF: All the Radiators

http://toughsf.blogspot.com/2017/07/all-radiators.html

Reference survey of spacecraft radiator technologies, mass ranges from structural-panel designs to 12 kg/m^2 heavy deployable radiators

Spacecraft radiator weight varies from nearly nothing (structural panel reuse) to ~12 kg/m^2 for heavy deployable radiators. NASA target for advanced thermal management: 2 kg/m^2.

vera-rubin-nvl72-nvidia

NVIDIA Vera Rubin POD: Seven Chips, Five Rack-Scale Systems, One AI Supercomputer

https://developer.nvidia.com/blog/nvidia-vera-rubin-pod-seven-chips-five-rack-scale-systems-one-ai-supercomputer/

NVIDIA blog on Vera Rubin NVL72 rack architecture (~1,815 kg, 180-220 kW TDP, 72 Rubin GPUs + 36 Vera CPUs)

VR NVL72 rack weighs ~4,000 lbs (~1,815 kg) for the compute rack unit alone, housing 72 Rubin GPUs and 36 Vera CPUs across 18 compute trays plus 9 NVLink switch trays. System TDP is 180-220 kW.

semianalysis-vera-rubin

Vera Rubin - Extreme Co-Design: An Evolution from Grace Blackwell Oberon

https://newsletter.semianalysis.com/p/vera-rubin-extreme-co-design-an-evolution

SemiAnalysis deep dive on VR NVL72 architecture, power delivery, and NVLink 6 switch trays

VR NVL72 maintains same NVLink switch tray count as GB200. Power delivery uses four 110 kW power shelves. Compute tray uses Strata board with IBC modules stepping from 50 VDC to 12 VDC, then VRMs to ~1 VDC.

mach33-energy-parity

Orbital Compute Energy will be Cheaper than Earth by 2030

https://research.33fg.com/analysis/orbital-compute-energy-will-be-cheaper-than-earth-by-2030

Mach33 analysis deriving $/W for satellite power & cooling subsystems from Starlink V2 Mini baseline

Starlink V2 Mini hardware costs ~$650/kg. Power & cooling subsystem (~400 kg, 42.8 kW) yields ~$6.1/W. Compute-optimized Starlink derivative achieves ~$5.0/W.
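
The $/W figure is a one-line calculation from the quoted inputs (Python):

```python
# Arithmetic behind the ~$6.1/W figure above.
hardware_cost_per_kg = 650   # $/kg, Starlink V2 Mini baseline
subsystem_mass_kg = 400      # power & cooling subsystem
subsystem_power_kw = 42.8

usd_per_watt = hardware_cost_per_kg * subsystem_mass_kg / (subsystem_power_kw * 1000)
print(f"~${usd_per_watt:.2f}/W for the power & cooling subsystem")
```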

spacenews-solar-bottleneck

Modernizing the satellite supply chain by breaking the solar power bottleneck

https://spacenews.com/modernizing-the-satellite-supply-chain-by-breaking-the-solar-power-bottleneck/

Analysis of solar panel supply as key satellite manufacturing bottleneck

Solar panel supply identified as a critical bottleneck for satellite manufacturing scale-up.

Cost-Saving Method Yields Solar Cells for Exploration, Gadgets

https://spinoff.nasa.gov/Spinoff2016/ee_5.html

NASA spinoff on MicroLink substrate-reuse approach; traditional space cell costs $400-500 per 4x8cm cell

Traditional space-qualified solar cell measuring 4x8 cm costs $400-500 apiece including flight qualification. Substrate accounts for ~40% of total cell material cost.

nasa-high-power-dc-dc

A 1 MW, 100 kV, less than 100 kg space based dc-dc power converter

https://ntrs.nasa.gov/citations/19920067913

NASA study of high-power space-based DC-DC converter at 11.9 kW/kg

Describes a 1 MW, 100 kV space-based DC-DC converter with estimated system mass of 83.8 kg, giving 11.9 kW/kg (or ~0.084 kg/kW).

arena-space-lasers

Making Space Lasers Boring

https://arenamagazine.substack.com/p/making-space-lasers-boring

Notes that Starlink demonstrated satellite design requirements are within reach of consumer electronics components

SpaceX demonstrated satellite design can use consumer electronics components. Interior chambers sealed and maintained at consistent temperatures, reducing need for expensive space-grade components.

ieee-h100-space

NVIDIA's H100 GPU Takes AI Processing to Space

https://spectrum.ieee.org/nvidia-h100-space

IEEE Spectrum coverage of Starcloud-1 deploying a terrestrial-grade H100 in orbit

Documents the first terrestrial, data-center-class GPU (H100) deployed in orbit aboard Starcloud-1 (November 2025), 100x more powerful than any prior space GPU.

militaryaerospace-radhard-cost

Radiation-hardened space electronics enter the multi-core era

https://www.militaryaerospace.com/computers/article/16709760/radiation-hardened-space-electronics-enter-the-multi-core-era

Analysis of rad-hard component costs vs commercial equivalents

Rad-hard power ICs that cost ~$2 in commercial volume sell for over $2,000 in space-grade versions (~1,000x multiplier). Testing costs often swamp material costs.

microchip-cots-newspace

Decrease Time to Market and Cost for the NewSpace Market by Using Radiation-Tolerant Solutions Based on COTS Devices

https://www.microchip.com/en-us/about/news-releases/products/decrease-time-to-market-and-cost-for-the-newspace-market-by-using-radiation-tolerant-solutions-based-on-cots-devices

Microchip's radiation-tolerant COTS approach for NewSpace applications

Radiation-tolerant MCUs deliver cost savings of up to 75% over rad-hard MCUs. Targets NewSpace operators who find traditional space-qualified components too expensive and slow.

meta-sdc-reliability

How Meta keeps its AI hardware reliable

https://engineering.fb.com/2025/07/22/data-infrastructure/how-meta-keeps-its-ai-hardware-reliable/

Meta's analysis of silent data corruptions in AI training and inference at scale

SDCs in inference lead to incorrect results affecting thousands of consumers. AI training workloads are sometimes considered self-resilient to SDCs, but only for a limited subset of manifestations.

blocventures-satellite-compute

The road to high-performance and robust satellite compute

https://blocventures.com/the-road-to-high-performance-and-robust-satellite-compute/

Analysis of COTS vs rad-hard electronics for NewSpace LEO satellites

LEO satellites below Van Allen belt have relatively low cumulative radiation exposure (<30 krad). Starlink operates with more risk tolerance because constellation-level redundancy absorbs individual failures.

nvidia-one-year-cadence

Nvidia Draws GPU System Roadmap Out To 2028

https://www.nextplatform.com/2025/03/19/nvidia-draws-gpu-system-roadmap-out-to-2028/

Nvidia shifted from 2-year to 1-year release cadence for datacenter GPUs

Hopper (2022), Blackwell (2024/25), Rubin (2026), Feynman (2028). Major architecture every 2 years, updates yearly. Each generation delivers ~2-4x inference performance improvement.

orbital-dc-race-2026

The Orbital Data Center Race: Every Major Player, Timeline, and Economic Reality in 2026

https://medium.com/@marc.bara.iniesta/orbital-data-centers-part-ii-spacexs-million-satellite-bet-cfd4e2bdcf66

Comprehensive survey of orbital DC players, regulatory filings, and economic analyses

Market valued at $1.77B by 2029, $39B by 2035 (67.4% CAGR). Three-wave deployment timeline: defense/ISR (2025-2030), AI training (2030-2035), mainstream (2035-2045).

revisiting-ml-cluster-reliability

Revisiting Reliability in Large-Scale Machine Learning Research Clusters

https://arxiv.org/html/2410.21680v2

MTTF for 1024-GPU jobs is 7.9 hours; hardware reliability scales inversely with GPU count

MTTF for 1024-GPU jobs is 7.9 hours, approximately 2 orders of magnitude lower than 8-GPU jobs at 47.7 days. Comprehensive failure taxonomy from 11 months of data across 24K A100 GPUs.
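
A quick check of the inverse-scaling relationship implied by these measurements (Python; assumes roughly independent per-GPU failures):

```python
# If per-GPU failures are roughly independent, job MTTF falls as 1/N_GPUs.
mttf_8gpu_hours = 47.7 * 24          # measured MTTF for 8-GPU jobs
predicted_1024 = mttf_8gpu_hours * 8 / 1024
print(f"predicted ~{predicted_1024:.1f} h for 1,024-GPU jobs vs measured 7.9 h")
```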

satnews-insurance-congestion

Satellite Insurers Driving Costs in a Hyper-Congested Orbital Environment

https://satnews.com/2026/02/08/satellite-insurers-driving-costs-in-a-hyper-congested-orbital-environment/

SatNews analysis of rising space insurance costs in congested LEO

LEO insurance premiums now 5-10% of mission total budget. WEF projects $42.3B in congestion-related costs over next decade across $3.03T total space infrastructure value (~1.4%).

wef-debris-cost-2026

Clear Orbit, Secure Future: A Call to Action on Space Debris

https://reports.weforum.org/docs/WEF_Clear_Orbit_Secure_Future_2026.pdf

WEF 2026 report projecting space debris costs to industry over next decade

Total congestion costs $25.8B-$42.3B over next decade, representing ~1.4% of $3.03T total space infrastructure value. Maneuver costs alone $560M. Non-catastrophic failure costs $11.1B.

The Little-Known Secret That Could Cost Elon Musk $8.2 Billion a Year

https://www.fool.com/investing/2024/02/22/spacex-secret-could-cost-musk-82-billion-a-year/

Analysis of Starlink satellite replacement costs given 5-year lifespan

Starlink satellite manufacturing cost ~$500K each. Launch cost ~$3M per satellite via Falcon 9. With 5-year lifespan across 42,000-satellite constellation, annual replacement cost ~$8.2B/year.

SpaceX's Impact on Satellite Launch Insurance

https://telecomworld101.com/spacex-launch-insurance/

Analysis of SpaceX's decision not to insure Starlink satellites

SpaceX does not insure Starlink satellites. Mega-constellation quantity functions as its own insurance. SpaceX does secure launch insurance for most Falcon 9 missions.

payload-debris-costs

WEF's Space Debris Report Projects Significant Costs

https://payloadspace.com/wefs-space-debris-report-projects-significant-costs/

Payload Space coverage of WEF debris cost report

Anomaly costs $14.2B-$30.7B over next decade. Maneuver costs alone $560M. Total ~1.4% of projected space infrastructure value.

thunder-said-dc-economics

Economic costs of data-centers?

https://thundersaidenergy.com/downloads/data-centers-the-economics/

Data center economics analysis with opex breakdown for 30 MW facility

30 MW data center requires ~$100M/year opex (~$3,333/kW/year). Standard capex ~$10M/MW; AI-heavy up to $40,000/kW ($40M/MW). Over half of AI DC capex is GPUs.
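
To put the opex and capex figures on a common per-kW basis, a minimal unit-conversion sketch (the dollar figures are the ones quoted above):

```python
# Normalize the quoted data center figures to $/kW for easier comparison.
opex_per_year = 100e6          # USD/year for a 30 MW facility
facility_kw = 30_000           # 30 MW in kW

opex_per_kw_year = opex_per_year / facility_kw
print(f"Opex: ${opex_per_kw_year:,.0f}/kW/year")              # ~$3,333/kW/year

standard_capex_per_kw = 10e6 / 1_000                           # $10M/MW -> $/kW
ai_heavy_capex_per_kw = 40_000                                 # quoted directly in $/kW
print(f"Standard capex: ${standard_capex_per_kw:,.0f}/kW")     # $10,000/kW
print(f"AI-heavy capex: ${ai_heavy_capex_per_kw:,.0f}/kW")     # $40,000/kW (= $40M/MW)
```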

cushman-wakefield-dc-cost-2025

U.S. Data Center Development Cost Guide 2025

https://www.cushmanwakefield.com/en/united-states/insights/data-center-development-cost-guide

Cushman & Wakefield survey of data center development costs across 19 US markets

Costs range from $9.3M/MW (San Antonio) to $15M/MW (Reno), average $11.7M/MW. Texas markets consistently lowest cost. Excludes IT equipment, land acquisition, and soft costs.

dgtl-infra-dc-cost-breakdown

How Much Does It Cost to Build a Data Center?

https://dgtlinfra.com/how-much-does-it-cost-to-build-a-data-center/

Detailed breakdown of data center construction costs by component

Total development costs $7-12M/MW. Electrical 40-45%, HVAC/cooling ~20%, powered shell 17-21%, building fit-out 20-25%. Per-sqft: $600-1,100/sqft total.
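
Applying the quoted component shares to a hypothetical build gives rough dollar ranges per component; the 10 MW size and the $10M/MW midpoint are illustrative assumptions, not figures from the source:

```python
# Illustrative allocation of development cost across components.
# Share ranges come from the summary above; facility size and $/MW are assumed.
facility_mw = 10
cost_per_mw = 10e6              # midpoint of the $7-12M/MW range
total_cost = facility_mw * cost_per_mw

shares = {
    "Electrical":       (0.40, 0.45),
    "HVAC/cooling":     (0.20, 0.20),
    "Powered shell":    (0.17, 0.21),
    "Building fit-out": (0.20, 0.25),
}

for component, (low, high) in shares.items():
    print(f"{component:16s}: ${low*total_cost/1e6:.0f}M - ${high*total_cost/1e6:.0f}M")
```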

alpha-matica-dc-cost-structure

Deconstructing the Data Center: A Look at the Cost Structure Igniting the AI Boom

https://www.alpha-matica.com/post/deconstructing-the-data-center-a-look-at-the-cost-structure-1

Alpha Matica analysis of 100 MW hyperscale data center CapEx breakdown

100 MW hyperscale DC total CapEx $3.4B-$5.5B ($34-55/W including IT hardware). Infrastructure-only $900M-$1.5B ($9-15M/MW).

mckinsey-beyond-compute

Beyond compute: Infrastructure that powers and cools AI data centers

https://www.mckinsey.com/industries/industrials/our-insights/beyond-compute-infrastructure-that-powers-and-cools-ai-data-centers

McKinsey analysis: 25% ($1.3T) of $6.7T global DC investment goes to power/cooling infrastructure

Roughly $1.3T of the $6.7T in total global data center investment projected through 2030 goes to power generation, transmission, cooling, and electrical equipment. Spread across a projected 219 GW of demand, that implies ~$5,900/kW.
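
The ~$5,900/kW figure follows directly from dividing the ~$1.3T power-and-cooling share by the projected 219 GW of demand:

```python
# Derivation of the ~$5,900/kW figure quoted above.
power_cooling_investment = 1.3e12      # USD through 2030
projected_demand_kw = 219 * 1e6        # 219 GW in kW

cost_per_kw = power_cooling_investment / projected_demand_kw
print(f"Implied infrastructure cost: ${cost_per_kw:,.0f}/kW")   # ~$5,900/kW
```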

introl-cdu-cost-analysis

Cooling Distribution Units: Liquid Cooling Infrastructure for AI Data Centers

https://introl.com/blog/cooling-distribution-units-cdu-liquid-cooling-ai-data-center-2025

CDU cost analysis: $75K-150K per 500 kW unit; CDU market growing from $1B to $7.7B at 33% CAGR

CDUs priced at $75K-150K per 500 kW unit. Piping installation $50-100 per linear foot. Cold plates and manifolds $5K-10K per server.

truelook-dc-construction-costs

Data Center Construction Costs Explained: Where Your Budget Really Goes

https://www.truelook.com/blog/data-center-construction-costs

Cost analysis showing MEP at 50% of budgets, cooling at 20% of mechanical

MEP systems consume up to 50% of total budgets. Electrical at 40-45%. Cooling systems at 43.2% of mechanical infrastructure spending in 2024. Air cooling $1.5-2M/MW; liquid cooling $3-4M/MW.

yale-dc-electricity-rates

Home electricity bills are skyrocketing. For data centers, not so much.

https://yaleclimateconnections.org/2026/01/home-electricity-bills-are-skyrocketing-for-data-centers-not-so-much/

Analysis showing K-shaped electricity pricing: residential up 25%, commercial up only 3%

Residential prices rose 25% (2020-2024); commercial prices rose only 3% over two years. Data centers consume more power but pay proportionally less through negotiated PPAs and industrial tariffs.

cnbc-footing-ai-bill

Who is really footing the AI energy bill?

https://www.cnbc.com/2026/03/13/ai-data-centers-electricity-prices-backlash-ratepayer-protection.html

Debate about data center electricity costs and ratepayer impact

US residential electricity prices rose from $0.1276/kWh (2020) to $0.1744/kWh (Feb 2026), 36% increase. Projected $0.1901/kWh by September 2027.

volts-pjm-explainer

What is PJM and why is everyone so mad about it?

https://www.volts.wtf/p/what-is-pjm-and-why-is-everyone-so

David Roberts (Volts) explainer on PJM capacity market dynamics and data center impact

Data centers were 40% of costs in the December 2025 auction for 2027/28. Pennsylvania Governor Shapiro called it "the largest unjust wealth transfer in the history of US energy markets."

sciencedirect-dc-lcoe-comparison

Energy solutions for data center: Comparative analysis of LCOE and recent developments

https://www.sciencedirect.com/science/article/pii/S2352484725005803

Solar+battery storage as lowest-cost option for data centers at $25.11/MWh

Solar+battery storage found lowest cost at $25.11/MWh ($0.025/kWh), though sensitive to CAPEX, capacity factors, and firmness requirements.

pv-magazine-solar-ppa-playbook

AI datacenters rewrite the solar PPA playbook

https://pv-magazine-usa.com/2026/03/13/ai-datacenters-rewrite-the-solar-ppa-playbook/

Solar PPA prices rising due to hyperscaler demand

P25 (25th-percentile) solar PPA prices rose 3.2% in Q4 2025 and are up ~9% year-over-year as hyperscaler demand compresses available supply.

premai-parallelism-guide-2026

Multi-GPU LLM Inference: TP vs PP vs EP Parallelism Guide (2026)

https://blog.premai.io/multi-gpu-llm-inference-tp-vs-pp-vs-ep-parallelism-guide-2026/

Comprehensive practical guide to multi-GPU inference parallelism strategies with specific GPU counts, bandwidth thresholds, and efficiency data

Llama 405B requires a minimum of 8x H100 in FP8; DeepSeek R1 (671B MoE) likewise requires at least 8x H100. TP scaling efficiency: 85-95% at TP=2, 56-75% at TP=8. PP uses point-to-point transfers requiring far less bandwidth than TP. NVLink is effectively mandatory for TP beyond TP=2.
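
The minimum-GPU figures are driven by memory: weights at the serving precision plus KV cache must fit in aggregate HBM. A minimal sketch of the weights-only lower bound (80 GB is the H100's actual HBM capacity; the usable-fraction headroom is an assumption, and practical minimums such as the 8x H100 quoted above are higher once KV cache and activations are included):

```python
import math

# Weights-only lower bound on GPU count; real deployments need additional
# headroom for KV cache, activations, and framework overhead.
def min_gpus_for_weights(params_billion, bytes_per_param, hbm_gb=80, usable_fraction=0.85):
    weight_gb = params_billion * bytes_per_param   # e.g. 405B params * 1 byte (FP8) = 405 GB
    return math.ceil(weight_gb / (hbm_gb * usable_fraction))

# Llama 405B in FP8: ~405 GB of weights -> at least 6 H100s for weights alone;
# with KV cache at useful batch sizes this rises to the 8x H100 minimum quoted above.
print(min_gpus_for_weights(405, 1.0))
```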

nvidia-wide-ep-nvl72

Scaling Large MoE Models with Wide Expert Parallelism on NVL72 Rack Scale Systems

https://developer.nvidia.com/blog/scaling-large-moe-models-with-wide-expert-parallelism-on-nvl72-rack-scale-systems/

NVIDIA technical blog: EP32 achieves 1.8x throughput vs EP8; requires 130 TB/s aggregate NVLink bandwidth

Wide-EP on DeepSeek R1 with EP=32 achieves 1.8x more output tokens/sec/GPU than EP=8. Without 130 TB/s NVLink bandwidth, large-scale EP would be impractical.
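
The 130 TB/s aggregate figure is simply the per-GPU NVLink bandwidth summed across the rack (1.8 TB/s per Blackwell GPU on NVLink 5, times 72 GPUs):

```python
# Aggregate NVLink bandwidth of an NVL72 rack.
gpus_per_rack = 72
nvlink_bw_per_gpu_tbs = 1.8      # TB/s per GPU (NVLink 5 on Blackwell)

aggregate_tbs = gpus_per_rack * nvlink_bw_per_gpu_tbs
print(f"Aggregate NVLink bandwidth: {aggregate_tbs:.0f} TB/s")   # ~130 TB/s
```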

nvidia-dynamo-moe-inference

How NVIDIA GB200 NVL72 and NVIDIA Dynamo Boost Inference Performance for MoE Models

https://developer.nvidia.com/blog/how-nvidia-gb200-nvl72-and-nvidia-dynamo-boost-inference-performance-for-moe-models/

Disaggregated serving for MoE models showing 6x throughput gains with wide EP on NVL72

Disaggregated serving (prefill/decode separation) achieved a 6x throughput gain. Optimal DeepSeek R1 decode uses 64 GPUs in wide-EP within a single NVLink domain.

NVIDIA NVLink and NVSwitch Supercharge Large Language Model Inference

https://developer.nvidia.com/blog/nvidia-nvlink-and-nvidia-nvswitch-supercharge-large-language-model-inference/

NVSwitch delivers 1.5x inference throughput for Llama 70B; quantifies per-query data transfer

A single Llama 70B inference query requires up to 20 GB of TP synchronization data per GPU. NVSwitch-equipped H100 systems achieved 168 tok/s/GPU vs 112 tok/s/GPU without NVSwitch (1.5x).
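
The ~20 GB figure is plausible from first principles: under tensor parallelism each transformer layer performs two all-reduces over hidden-dimension activations, so per-GPU synchronization traffic scales with prompt length. A rough reconstruction (Llama 70B's 80 layers and 8,192 hidden size are the model's actual dimensions; the ring all-reduce factor, FP16 activations, and ~4K-token query length are assumptions for illustration):

```python
# Rough estimate of per-GPU tensor-parallel synchronization traffic for one query.
hidden_size = 8192          # Llama 70B hidden dimension
num_layers = 80             # Llama 70B transformer layers
bytes_per_activation = 2    # FP16
allreduces_per_layer = 2    # after attention output and MLP output projections
ring_factor = 2             # ring all-reduce moves ~2x the tensor size per GPU
query_tokens = 4096         # assumed prompt length processed during prefill

bytes_per_token = (hidden_size * bytes_per_activation * allreduces_per_layer
                   * num_layers * ring_factor)
total_gb = bytes_per_token * query_tokens / 1e9
print(f"~{bytes_per_token/1e6:.1f} MB of sync traffic per token, ~{total_gb:.0f} GB per query")
```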

Scaling AI Inference Performance and Flexibility with NVIDIA NVLink and NVLink Fusion

https://developer.nvidia.com/blog/scaling-ai-inference-performance-and-flexibility-with-nvidia-nvlink-and-nvlink-fusion/

72-GPU NVLink domain maximizes revenue and performance for inference workloads

Analysis showing full 72-GPU NVLink domain delivers optimal inference revenue and performance across frontier model workloads.

semianalysis-inferencex-v2

InferenceX v2: NVIDIA Blackwell Vs AMD vs Hopper

https://newsletter.semianalysis.com/p/inferencex-v2-nvidia-blackwell-vs

All top-tier labs use disaggregated serving with wide EP; detailed DeepSeek R1 deployment configs

All top-tier labs (OpenAI, Anthropic, xAI, Google DeepMind, DeepSeek) use disaggregated inferencing and wide expert parallelism. EP64 places 4 experts/layer/GPU vs EP8 at 32 experts/layer/GPU.
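
The experts-per-GPU figures follow from dividing the model's routed expert count by the EP degree; the sketch below assumes DeepSeek V3/R1's 256 routed experts per MoE layer:

```python
# Experts placed on each GPU at a given expert-parallel (EP) degree.
routed_experts_per_layer = 256    # DeepSeek V3/R1 routed experts per MoE layer

for ep_degree in (8, 32, 64):
    experts_per_gpu = routed_experts_per_layer // ep_degree
    print(f"EP{ep_degree}: {experts_per_gpu} experts/layer/GPU")
# EP8: 32, EP32: 8, EP64: 4 -- matching the figures quoted above
```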

nebius-gb200-interconnect

Leveraging high-speed, rack-scale GPU interconnect with NVIDIA GB200 NVL72

https://nebius.com/blog/posts/leveraging-nvidia-gb200-nvl72-gpu-interconnect

TP groups always contained within single NVL72 rack

Technical deep-dive confirming that TP groups require the fastest interconnect and are always contained within a single NVL72 rack.

nvidia-moe-frontier-models

Mixture of Experts Powers the Most Intelligent Frontier AI Models

https://blogs.nvidia.com/blog/mixture-of-experts-frontier-models/

10x MoE performance on NVL72 vs H200; 60%+ of frontier models use MoE

Since early 2025, over 60% of open-source frontier model releases use MoE. NVL72 achieves 10x performance improvement for MoE vs HGX H200.

nvidia-rubin-cpx-nvl144

NVIDIA Unveils Rubin CPX: A New Class of GPU Designed for Massive-Context Inference

https://nvidianews.nvidia.com/news/nvidia-unveils-rubin-cpx-a-new-class-of-gpu-designed-for-massive-context-inference

NVL144 with 100TB memory, 1.7 PB/s bandwidth, designed for million-token context

Vera Rubin NVL144 CPX doubles domain to 144 GPUs with NVLink 6.0 at 3.6 TB/s per GPU. 100TB fast memory, 1.7 PB/s bandwidth. Rubin Ultra (2027) goes to NVLink 7.0.

lmsys-gb200-deepseek-part1

Deploying DeepSeek on GB200 NVL72 (Part I)

https://lmsys.org/blog/2025-06-16-gb200-part-1/

2.7x decode throughput improvement on NVL72

2.7x decode throughput improvement using 12 decode + 2 prefill nodes within NVL72 for DeepSeek R1.

lmsys-gb200-deepseek-part2

Deploying DeepSeek on GB200 NVL72 with PD and Large Scale EP (Part II)

https://lmsys.org/blog/2025-09-25-gb200-part-2/

3.8x prefill and 4.8x decode speedup with NVFP4 MoE on 48 decode ranks

SGLang on GB200 NVL72 achieved 26,156 input tokens/sec/GPU (prefill) and 13,386 output tokens/sec/GPU (decode) for DeepSeek R1 with FP8 attention and NVFP4 MoE.

epoch-consumer-gpu-gap

Frontier AI capabilities can be run at home within a year or less

https://epoch.ai/data-insights/consumer-gpu-model-gap

6-12 month lag before frontier capabilities run on single consumer GPU

Frontier AI capabilities become runnable on a single consumer GPU (RTX 4090, ~24 GB VRAM) within 6-12 months. Small open models improve faster (+125 ELO/year) than frontier models (+80 ELO/year).
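
The lag and the two improvement rates together suggest how large the capability gap is at any moment: if small open models gain ~125 ELO/year, a 6-12 month lag corresponds to a gap of roughly 60-125 ELO. A minimal sketch of that relationship (the relationship itself is an assumption for illustration, not a formula from the source):

```python
# Implied capability gap between frontier models and what fits on one consumer GPU,
# treating lag as (gap / small-model improvement rate). Illustrative only.
small_model_elo_per_year = 125

for lag_months in (6, 12):
    implied_gap_elo = small_model_elo_per_year * lag_months / 12
    print(f"{lag_months}-month lag -> ~{implied_gap_elo:.0f} ELO gap")
```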

ai-dc-networking-gpu-clusters

AI Data Center Networking: How GPU Clusters Are Changing Network Design

https://www.thenetworkdna.com/2026/03/ai-data-center-networking-how-gpu.html

Technical analysis of TP, PP, DP communication patterns and bandwidth requirements

Data parallelism is embarrassingly parallel during inference (no cross-replica communication). Pipeline parallelism uses predictable point-to-point flows between adjacent stages. Tensor parallelism relies on bandwidth-intensive AllGather and ReduceScatter collectives.
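
A quick way to see why pipeline parallelism needs far less bandwidth than tensor parallelism (as the parallelism guide above also notes): per token and per transformer layer, TP all-reduces hidden-size activations twice, while PP only hands the hidden-size activation across each stage boundary once. A sketch with Llama-70B-like dimensions (hidden 8,192, 80 layers; the FP16 activations, 4-stage pipeline, and 2x ring all-reduce factor are assumptions):

```python
# Per-token communication volume: tensor parallelism vs pipeline parallelism.
hidden_size = 8192
num_layers = 80
bytes_per_activation = 2        # FP16
pp_stages = 4                   # assumed pipeline depth

# TP: two all-reduces per layer; ring all-reduce moves ~2x the tensor size per GPU
tp_bytes = hidden_size * bytes_per_activation * 2 * num_layers * 2

# PP: one point-to-point activation transfer per stage boundary
pp_bytes = hidden_size * bytes_per_activation * (pp_stages - 1)

print(f"TP: ~{tp_bytes/1e6:.1f} MB/token, PP: ~{pp_bytes/1e3:.0f} KB/token "
      f"(~{tp_bytes/pp_bytes:,.0f}x difference)")
```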