Terrestrial Data Center PUE

Answer

Modern liquid-cooled AI data centers achieve PUE values ranging from 1.02 to 1.20, depending on cooling architecture. The central estimate for a new-build, liquid-cooled AI facility is PUE 1.10. Optimistic deployments using full immersion cooling reach 1.03, while retrofitted or hybrid-cooled facilities typically land around 1.20.
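
To make the spread concrete, here is a minimal sketch of the total facility power each scenario implies. The 100 MW IT load is an illustrative figure, not one taken from the sources.

```python
# Back-of-envelope: total facility power implied by each PUE scenario.
# IT_LOAD_MW is an illustrative value, not from the sources.
IT_LOAD_MW = 100.0

scenarios = {
    "optimistic (full immersion)": 1.03,
    "central (new-build liquid)": 1.10,
    "pessimistic (retrofit/hybrid)": 1.20,
}

for name, pue in scenarios.items():
    total_mw = IT_LOAD_MW * pue          # PUE = total power / IT power
    overhead_mw = total_mw - IT_LOAD_MW  # non-IT power (cooling, distribution, etc.)
    print(f"{name}: total {total_mw:.1f} MW, overhead {overhead_mw:.1f} MW")
```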

The industry is converging on direct-to-chip liquid cooling as the standard for AI racks above 35 kW, with immersion cooling gaining share at the frontier. New-build AI data centers universally adopt liquid cooling; the GB200 NVL72 mandates it at 120 kW/rack.

Evidence

Cooling technology PUE ranges

| Cooling type | PUE range | Notes |
|---|---|---|
| Traditional air cooling | 1.40-1.80 | Industry average ~1.41 (IEA) |
| Rear-door heat exchanger | 1.20-1.35 | Hybrid approach for retrofits |
| Direct-to-chip liquid (DLC) | 1.05-1.15 | ~65% of liquid cooling market in 2026 |
| Single-phase immersion | 1.02-1.10 | GRC ICEraQ reports <1.03 |
| Two-phase immersion | 1.01-1.05 | Highest efficiency, highest complexity |

Source: introl-liquid-cooling (Introl blog, liquid cooling for AI data centers). Air PUE 1.4-1.8; liquid cooling PUE 1.05-1.15; immersion PUE 1.02-1.03.
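
As an illustration of what these ranges mean in energy terms, the sketch below computes annual non-IT energy at the midpoint of each PUE range in the table. The 10 MW IT load is an assumption for illustration only.

```python
# Illustrative comparison: annual non-IT energy per cooling technology,
# using the midpoint of each PUE range from the table above.
# The 10 MW IT load is an assumption, not a figure from the sources.
IT_LOAD_MW = 10.0
HOURS_PER_YEAR = 8760

pue_midpoints = {
    "traditional air": (1.40 + 1.80) / 2,
    "rear-door heat exchanger": (1.20 + 1.35) / 2,
    "direct-to-chip liquid": (1.05 + 1.15) / 2,
    "single-phase immersion": (1.02 + 1.10) / 2,
    "two-phase immersion": (1.01 + 1.05) / 2,
}

for tech, pue in pue_midpoints.items():
    overhead_mwh = IT_LOAD_MW * (pue - 1.0) * HOURS_PER_YEAR
    print(f"{tech}: PUE {pue:.3f}, ~{overhead_mwh:,.0f} MWh/yr overhead")
```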

Hyperscaler reported fleet PUE (2024)

| Provider | Fleet-average PUE | Best site |
|---|---|---|
| Google | 1.09 | - |
| Meta | 1.09 | 1.08 reported in some sources |
| AWS | 1.15 | 1.04 (Europe) |
| Microsoft | 1.16 | - |

These are fleet averages that include older air-cooled facilities. New-build AI-specific facilities achieve lower PUE than these fleet averages.

Source: ChinaTalk, "How Much AI Does $1 Get You in China vs America?" Reports hyperscaler average PUEs: AWS 1.15, Google 1.10, Microsoft 1.18, Meta 1.08. Notes that a PUE of ~1.11 is representative for a modern AI facility. (ChinaTalk, Feb 2026)

Source: SemiAnalysis, "From Tokens to Burgers". Uses PUE of 1.15 for Colossus 2 modeling (400 MW AI data center in Memphis). (SemiAnalysis, Jan 2026)

New builds vs. retrofits

New-build AI data centers designed for liquid cooling from the ground up achieve PUE 1.05-1.12. They eliminate the overhead of maintaining parallel air-cooling infrastructure and can optimize facility power distribution for liquid-cooled racks.

Retrofitted facilities face higher PUE (1.15-1.25) due to hybrid cooling architectures, suboptimal airflow management around remaining air-cooled equipment, and legacy power distribution inefficiencies. Retrofitting to support 40 kW racks costs $50K-100K per rack; building new 100 kW infrastructure costs $200K-300K per rack. (Introl, OCP 2025 analysis)
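
Normalizing those per-rack figures to dollars per kW of rack power makes the comparison cleaner; a quick sketch, using only the numbers quoted above:

```python
# Normalizing the quoted per-rack costs to $/kW of rack power.
# Rack power figures (40 kW retrofit, 100 kW new build) are from the text.
cases = {
    "retrofit (40 kW rack)": (50_000, 100_000, 40),
    "new build (100 kW rack)": (200_000, 300_000, 100),
}

for name, (lo, hi, kw) in cases.items():
    print(f"{name}: ${lo / kw:,.0f}-{hi / kw:,.0f} per kW")
# retrofit: $1,250-2,500/kW; new build: $2,000-3,000/kW;
# on a per-kW basis the gap is narrower than per-rack numbers suggest.
```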

NVIDIA mandated cooling specs

NVIDIA mandates liquid cooling for the GB200 NVL72: inlet temperature 20-25°C, flow rate 80 L/min, pressure drop <1.5 bar. The rack dissipates 120 kW of heat continuously. Deviation from these specs triggers automatic throttling that can reduce performance by 60%. (Introl, GB200 NVL72 deployment guide)
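
A back-of-envelope thermal check on these specs: with the mandated 80 L/min flow and assuming water-like coolant properties (real loops typically run a water-glycol mix, whose density and heat capacity differ slightly), the 120 kW heat load implies roughly a 21 K coolant temperature rise.

```python
# Sanity check: coolant temperature rise implied by the mandated specs.
# Assumes pure-water coolant properties; water-glycol mixes differ slightly.
HEAT_LOAD_KW = 120.0      # GB200 NVL72 continuous heat output
FLOW_L_PER_MIN = 80.0     # mandated flow rate
DENSITY_KG_PER_L = 1.0    # assumption: pure water
CP_KJ_PER_KG_K = 4.186    # assumption: pure water

mass_flow_kg_s = FLOW_L_PER_MIN / 60.0 * DENSITY_KG_PER_L
delta_t_k = HEAT_LOAD_KW / (mass_flow_kg_s * CP_KJ_PER_KG_K)
print(f"coolant temperature rise: ~{delta_t_k:.1f} K")  # ~21.5 K
# With a 20-25 C inlet, that implies a ~41-46 C return temperature.
```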

Analysis

Convergence toward a standard range

The AI data center industry is converging on PUE 1.05-1.15 for new liquid-cooled facilities.

Why PUE 1.10 is the central estimate

The central estimate of 1.10 reflects four considerations (a numeric cross-check is sketched after the list):

  1. Hyperscaler fleet averages of 1.09-1.16 include older air-cooled facilities. New AI-specific builds outperform fleet averages.
  2. SemiAnalysis modeling uses 1.15 for a facility with adiabatic cooling assist, which is slightly conservative for pure liquid-cooled deployments.
  3. ChinaTalk modeling uses 1.11 as representative for a modern AI data center, which aligns closely with the central estimate.
  4. Direct-to-chip systems routinely achieve 1.05-1.15; the midpoint is ~1.10.
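
A quick numeric cross-check, using only the point estimates quoted in the Evidence section:

```python
# Cross-check: central tendency of the point estimates cited above.
# These are the figures quoted in the Evidence section, nothing more.
cited = {
    "SemiAnalysis (Colossus 2 model)": 1.15,
    "ChinaTalk (modern AI facility)": 1.11,
    "DLC range midpoint": (1.05 + 1.15) / 2,  # = 1.10
}

mean = sum(cited.values()) / len(cited)
print(f"mean of cited estimates: {mean:.3f}")
# ~1.12, consistent with a 1.10 central estimate once the conservative
# SemiAnalysis figure (adiabatic cooling assist) is discounted.
```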

PUE overhead components at 1.10

At PUE 1.10, the non-IT overhead is 10% of IT load, allocated roughly as ~5% for the cooling system (pumps, coolant distribution units, heat rejection) and ~5% for power conversion and distribution losses (UPS, transformers) plus minor loads such as lighting.
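
A minimal sketch of that decomposition; the 5%/5% split is the rough allocation above, not a measured breakdown.

```python
# Decomposing PUE 1.10 into overhead components. The ~5%/5% split
# follows the rough allocation above; the exact breakdown is an estimate.
IT_LOAD_KW = 1000.0

overheads = {
    "cooling (pumps, CDUs, heat rejection)": 0.05,
    "power conversion/distribution (UPS, transformers) + misc": 0.05,
}

total_kw = IT_LOAD_KW * (1.0 + sum(overheads.values()))
print(f"implied PUE: {total_kw / IT_LOAD_KW:.2f}")  # 1.10
for name, frac in overheads.items():
    print(f"  {name}: {IT_LOAD_KW * frac:.0f} kW")
```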

Implications for orbital comparison

PUE 1.10 means a terrestrial facility needs 1.10 kW of total power for every 1.00 kW of IT load. The cooling overhead is small: only ~5% of IT load for the cooling system itself. This sets a high bar for orbital data centers: eliminating cooling overhead saves only ~5% of total power, not the ~40% that air-cooled facilities from a decade ago would have suggested. The case for orbital data centers must rest on power generation cost advantages, not cooling efficiency gains.
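
A worked comparison of the two baselines. The legacy case assumes a decade-old air-cooled facility at PUE ~1.7 with overhead dominated by cooling, which is an assumption consistent with the 1.4-1.8 air-cooled range above, not a sourced figure.

```python
# Worked comparison: share of total facility power saved by eliminating
# cooling, modern liquid-cooled vs. decade-old air-cooled baselines.
def cooling_share_of_total(pue: float, cooling_frac_of_it: float) -> float:
    """Fraction of total facility power consumed by cooling.

    cooling_frac_of_it is cooling power relative to IT load (0.05 = 5%).
    """
    return cooling_frac_of_it / pue

modern = cooling_share_of_total(pue=1.10, cooling_frac_of_it=0.05)
legacy = cooling_share_of_total(pue=1.70, cooling_frac_of_it=0.70)
print(f"modern liquid-cooled: ~{modern:.1%} of total power")  # ~4.5%
print(f"legacy air-cooled:    ~{legacy:.1%} of total power")  # ~41%
```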