Terrestrial Data Center PUE
What is the PUE for modern liquid-cooled AI data centers?
Answer
Modern liquid-cooled AI data centers achieve PUE values ranging from 1.02 to 1.20, depending on cooling architecture. The central estimate for a new-build, liquid-cooled AI facility is PUE 1.10. Optimistic deployments using full immersion cooling reach 1.03, while retrofitted or hybrid-cooled facilities typically land around 1.20.
The industry is converging on direct-to-chip liquid cooling as the standard for AI racks above 35 kW, with immersion cooling gaining share at the frontier. New-build AI data centers universally adopt liquid cooling; NVIDIA mandates it for the GB200 NVL72 at 120 kW/rack.
Analysis
Convergence toward a standard range
The AI data center industry is converging on PUE 1.05-1.15 for new liquid-cooled facilities:
Direct-to-chip liquid cooling is the dominant approach, commanding ~47% of the liquid cooling market as of late 2025 [introl-liquid-cooling.3]. It achieves PUE 1.05-1.15 [introl-liquid-cooling.1] and is the cooling method NVIDIA mandates for the Blackwell and Rubin platforms [introl-nvl72-deployment.1].
Immersion cooling achieves the lowest PUE (1.02-1.05) but remains a smaller share of deployments due to higher complexity, custom enclosure requirements, and challenges with serviceability.
Air cooling is effectively obsolete for AI workloads above 35 kW/rack. Current-generation AI racks (GB200 NVL72) draw 120 kW and next-generation racks (Vera Rubin NVL72) will draw 180-220 kW; the airflow sketch after this list shows why air cannot keep up at those densities.
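A rough heat-balance sketch makes the point concrete. The assumptions here are mine, not from the cited sources: standard air properties and a typical 15 K server inlet-to-outlet temperature rise.

```python
# Back-of-envelope check: airflow needed to remove rack heat with air alone.
# Assumptions (editorial, not from the cited sources): air c_p = 1005 J/(kg*K),
# density 1.2 kg/m^3, and a typical 15 K inlet-to-outlet temperature rise.
AIR_CP = 1005.0       # J/(kg*K)
AIR_DENSITY = 1.2     # kg/m^3
DELTA_T = 15.0        # K, assumed server air delta-T

def required_airflow_cfm(rack_kw: float) -> float:
    """Volumetric airflow (CFM) needed to carry away rack_kw of heat."""
    mass_flow_kg_s = rack_kw * 1000.0 / (AIR_CP * DELTA_T)
    m3_per_s = mass_flow_kg_s / AIR_DENSITY
    return m3_per_s * 2118.88  # m^3/s -> cubic feet per minute

for kw in (35, 120, 200):
    print(f"{kw:>3} kW rack -> ~{required_airflow_cfm(kw):,.0f} CFM")
# ~4,100 CFM at 35 kW is already at the edge of practical rack airflow;
# ~14,100 CFM at 120 kW is far beyond it.
```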
Why PUE 1.10 is the central estimate
The central estimate of 1.10 reflects:
- Hyperscaler fleet averages of 1.09-1.16 include older air-cooled facilities. New AI-specific builds outperform fleet averages.
- SemiAnalysis modeling uses 1.15 for a facility with adiabatic cooling assist, which is slightly conservative for pure liquid-cooled deployments.
- ChinaTalk modeling uses 1.11 as representative for a modern AI data center, which aligns closely with the central estimate.
- Direct-to-chip systems routinely achieve 1.05-1.15; the midpoint is ~1.10 (see the consistency check after this list).
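A minimal consistency check of the 1.10 figure against the ranges cited in this section; the figures are from the section itself, the check is editorial.

```python
# Sanity-check the 1.10 central estimate against ranges cited above.
# All figures come from this section; the check itself is editorial.
CENTRAL = 1.10
cited_ranges = {
    "direct-to-chip liquid cooling": (1.05, 1.15),
    "new-build liquid-cooled facilities": (1.05, 1.12),
    "headline answer range": (1.02, 1.20),
}
for name, (lo, hi) in cited_ranges.items():
    assert lo <= CENTRAL <= hi, f"{CENTRAL} outside {name}"
    print(f"1.10 falls within {name}: {lo}-{hi}")

print(f"DLC midpoint: {(1.05 + 1.15) / 2:.2f}")  # 1.10
```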
PUE overhead components at 1.10
At PUE 1.10, the non-IT overhead is 10% of IT load. No single source provides a component-level breakdown for a liquid-cooled facility at this PUE, but an approximate allocation can be derived from equipment specifications: the GB200 NVL72 power architecture achieves 97% conversion efficiency [introl-nvl72-deployment.4], implying ~3% loss for power conversion alone, and modern UPS systems add another 1-5% loss depending on mode (ENERGY STAR). The remainder goes to cooling pumps, CDUs, and facility systems. A representative allocation (verified in the sketch after this list):
- Cooling distribution pumps and CDUs: ~4-5% (editorial estimate)
- Power conversion losses (UPS, PDU, transformers): ~3-4% (consistent with 97% rack-level efficiency [introl-nvl72-deployment.4] plus upstream UPS/transformer losses)
- Lighting, security, facility systems: ~1-2% (editorial estimate)
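A quick arithmetic check that the allocation reproduces PUE ~1.10; the splits are the midpoints of the editorial ranges above.

```python
# Verify the component allocation sums to the ~10% overhead implied
# by PUE 1.10. Splits are midpoints of the editorial ranges above,
# expressed as fractions of IT load.
overhead_of_it_load = {
    "cooling pumps + CDUs":            0.045,  # midpoint of 4-5%
    "power conversion (UPS/PDU/xfmr)": 0.035,  # midpoint of 3-4%
    "lighting, security, facility":    0.015,  # midpoint of 1-2%
}
implied_pue = 1.0 + sum(overhead_of_it_load.values())
print(f"implied PUE: {implied_pue:.3f}")  # 1.095, i.e. ~1.10
```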
Implications for orbital comparison
PUE 1.10 means a terrestrial facility draws 1.10 kW of total power for every 1.00 kW of IT load. Per the component breakdown above, cooling accounts for roughly 4-5% of IT load, or about 4% of total facility power (since at PUE 1.10, IT load is 1.00/1.10 ≈ 91% of total). This sets a high bar for orbital data centers: eliminating cooling overhead entirely saves only ~4% of total power, not the ~40% that air-cooled facilities from a decade ago (PUE ~1.7) would have suggested. The case for orbital data centers must rest on power generation cost advantages, not cooling efficiency gains.
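The conversion from share-of-IT-load to share-of-total-power, as a sketch; inputs are the figures from this section.

```python
# Cooling's share of total facility power at PUE 1.10.
PUE = 1.10
cooling_of_it = 0.045              # ~4-5% of IT load (editorial estimate above)
it_of_total = 1.0 / PUE            # IT share of total facility power
cooling_of_total = cooling_of_it / PUE
print(f"IT share of total power:      {it_of_total:.1%}")       # 90.9%
print(f"cooling share of total power: {cooling_of_total:.1%}")  # ~4.1%

# Contrast with a decade-ago air-cooled facility at PUE ~1.7, where
# overhead (mostly cooling) was 0.7 / 1.7 of total power.
print(f"legacy overhead share:        {0.7 / 1.7:.1%}")         # ~41.2%
```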
Evidence
Cooling technology PUE ranges
| Cooling type | PUE range | Notes |
|---|---|---|
| Traditional air cooling | 1.40-1.80 | Industry average ~1.41 (IEA) |
| Rear-door heat exchanger | 1.20-1.35 | Hybrid approach for retrofits |
| Direct-to-chip liquid (DLC) | 1.05-1.15 | Dominant liquid cooling technology [introl-liquid-cooling.3] |
| Single-phase immersion | 1.02-1.10 | GRC ICEraQ reports <1.03 |
| Two-phase immersion | 1.01-1.05 | Highest efficiency, highest complexity |
Consistent with the table above: air cooling yields PUE 1.4-1.8, direct-to-chip liquid cooling 1.05-1.15, and immersion as low as 1.02-1.03. Direct-to-chip cooling holds a dominant ~47% share of the liquid cooling market as of late 2025 [introl-liquid-cooling.3], with Microsoft beginning fleet deployment across Azure campuses.
Hyperscaler reported fleet PUE (2024)
| Provider | Fleet-average PUE | Best site |
|---|---|---|
| Google | 1.09 | - |
| Meta | 1.09 | 1.08 reported in some sources |
| AWS | 1.15 | 1.04 (Europe) |
| Microsoft | 1.16 | - |
These are fleet averages from the most recent directly reported company disclosures (sustainability reports, earnings calls) and may differ slightly from values in older third-party compilations. New-build AI-specific facilities achieve lower PUE than these fleet averages.
ChinaTalk reports hyperscaler average PUEs of AWS 1.15, Google 1.10, Microsoft 1.18, and Meta 1.08, and treats ~1.11 as representative for a modern AI facility. (Note: these values are from ChinaTalk's publication date and differ slightly from the table above, which uses the most recent directly reported figures. The discrepancies (e.g., Google 1.10 vs. 1.09, Meta 1.08 vs. 1.09, Microsoft 1.18 vs. 1.16) reflect different reporting periods and rounding.)
SemiAnalysis uses a PUE of 1.15 for its Colossus 2 modeling (a 400 MW AI data center in Memphis).
New builds vs. retrofits
New-build AI data centers designed for liquid cooling from the ground up achieve PUE 1.05-1.12. They eliminate the overhead of maintaining parallel air-cooling infrastructure and can optimize facility power distribution for liquid-cooled racks.
Retrofitted facilities face higher PUE (1.15-1.25) due to hybrid cooling architectures, suboptimal airflow management around remaining air-cooled equipment, and legacy power distribution inefficiencies. Retrofitting to support 40 kW racks costs $50K-100K per rack; building new 100 kW infrastructure costs $200K-300K per rack.
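Normalizing those costs per kilowatt makes the comparison cleaner; the per-rack figures are from the paragraph above, the per-kW normalization is editorial.

```python
# Normalize retrofit vs. new-build cooling infrastructure cost to $/kW.
# Per-rack figures are from this section; the normalization is editorial.
scenarios = {
    "retrofit (40 kW rack)":   {"kw": 40,  "cost_range": (50_000, 100_000)},
    "new build (100 kW rack)": {"kw": 100, "cost_range": (200_000, 300_000)},
}
for name, s in scenarios.items():
    lo, hi = (cost / s["kw"] for cost in s["cost_range"])
    print(f"{name}: ${lo:,.0f}-${hi:,.0f} per kW")
# retrofit:  $1,250-$2,500 per kW of rack capacity
# new build: $2,000-$3,000 per kW; the premium buys density headroom
# and lower PUE, not cheaper capacity per dollar.
```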
NVIDIA-mandated cooling specs
NVIDIA mandates liquid cooling for the GB200 NVL72: inlet temperature 20-25 °C, flow rate 80 L/min, pressure drop <1.5 bar. The system dissipates 120 kW continuously. Deviation from these parameters triggers automatic throttling that can reduce performance by 60%.
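A quick heat-balance check on those numbers, assuming water-like coolant properties (which the sources do not specify):

```python
# Coolant temperature rise implied by the GB200 NVL72 spec:
# Q = m_dot * c_p * delta_T, solved for delta_T.
# Assumes water-like coolant (c_p = 4186 J/(kg*K), density ~1 kg/L);
# real facility coolants (e.g., glycol mixes) differ somewhat.
heat_load_w = 120_000               # 120 kW continuous (from spec)
flow_l_per_min = 80                 # mandated flow rate
c_p = 4186.0                        # J/(kg*K), water
mass_flow = flow_l_per_min / 60.0   # kg/s at ~1 kg/L
delta_t = heat_load_w / (mass_flow * c_p)
print(f"coolant delta-T at full load: {delta_t:.1f} K")  # ~21.5 K
# With a 20-25 C inlet, the outlet runs roughly 42-47 C at full load,
# which is why the inlet window and flow rate are mandated rather
# than advisory.
```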