Compute Hardware Mass per kW_IT

Answer

Central estimate: 5.0 kg/kW_IT for AI compute hardware adapted for orbital deployment, including GPUs, NVLink switches, power conversion electronics, structural chassis, and cabling -- but excluding cooling systems (radiators, heat pipes) and solar arrays, which are addressed by separate parameters.

Evidence

Terrestrial baseline: full rack mass

  1. [evidence] The GB200 NVL72 system arrives in four separate components: compute rack weighing 1,500 kg, NVLink Switch rack at 800 kg, CDU (coolant distribution unit) at 400 kg, and power distribution unit at 300 kg. Total ~3,000 kg. (introl-nvl72-deployment, Introl deployment guide). The system consumes 120-132 kW. This gives a full terrestrial rack mass of ~23-25 kg/kW_IT.

  2. [evidence] A separate passage in the same source states the GB200 NVL72 "weighs 3,000 kilograms and requires 2.4 megawatts of cooling capacity" for a 120 kW IT load (introl-nvl72-deployment, alternate figure in same source). The cooling figure is implausible for a single rack and is likely a facility-level or mistyped number, but the 3,000 kg mass figure is consistent across sources.

  3. [evidence] The Vera Rubin NVL72 rack weighs "roughly 4,000 lbs" (~1,815 kg) -- this appears to be for the compute rack unit alone (not including separate switch rack, CDU, or power distribution), housing 72 Rubin GPUs and 36 Vera CPUs across 18 compute trays plus 9 NVLink switch trays (vera-rubin-nvl72-nvidia). VR NVL72 system TDP is 180-220 kW.

  4. [evidence] The DGX H100 system (8 GPUs, ~10 kW) weighs 130.45 kg total, including chassis, fans, PSUs, NVSwitch, CPUs, and storage (nvidia-gb200-specs / NVIDIA DGX H100 datasheet). This gives ~13 kg/kW_IT for a non-liquid-cooled server node.

Stripped compute-only mass

  1. [evidence] Dwarkesh Patel reports: "a stripped down GB200 NVL72 with no cooling equipment is around 100 kg. They draw 132 kW of power." At face value this is ~1,320 W/kg, or ~0.76 kg/kW for the bare compute boards alone; the post's own ~1,452 W/kg figure would imply a slightly lower mass of ~91 kg (dwarkesh-space-gpus). This is the most optimistic figure, reflecting only the GPU/CPU modules and NVLink boards without any chassis, power conversion, cooling, or structural support. It also excludes power distribution and the NVLink switch rack.

  2. [opinion] Dwarkesh adds 10% overhead for intersatellite lasers and then assumes 25% of total satellite mass must be chassis, arriving at 85 W/kg (~11.8 kg/kW) for the whole integrated satellite including solar arrays, radiators, chassis, and communications (dwarkesh-space-gpus). This is a complete satellite figure, not compute-hardware-only.
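At face value, the stripped-rack claim works out as follows:

```python
# Sanity check on the "stripped NVL72" claim: ~100 kg of bare boards drawing 132 kW.
mass_kg, power_kw = 100, 132
w_per_kg = power_kw * 1000 / mass_kg   # specific power
kg_per_kw = mass_kg / power_kw         # inverse metric used throughout this doc
print(f"{w_per_kg:.0f} W/kg, {kg_per_kw:.2f} kg/kW_IT")  # 1320 W/kg, 0.76 kg/kW_IT
```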

NVLink switch mass

  1. [evidence] The NVLink Switch rack for GB200 NVL72 weighs 800 kg for 72 GPUs drawing 120-132 kW (introl-nvl72-deployment). This is ~6.1-6.7 kg/kW_IT for the switch fabric alone in terrestrial packaging. The switches themselves (ASICs + PCBs) are a small fraction of this; most mass is chassis, connectors, copper backplane cables, and power delivery. A stripped switch assembly for space might weigh 20-30% of the terrestrial version.

  2. [evidence] The GB200 NVL72 has 9 NVLink switch trays, each containing 2 NVLink Switch ASICs. The VR NVL72 maintains the same NVLink switch tray count with NVLink 6 ASICs (semianalysis-vera-rubin). Switch tray PCBs are upgraded to 32 layers with high-end CCL material, suggesting each tray is a dense, heavy PCB assembly.
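The stripped-switch estimate above can be made explicit. The 20-30% retention fraction is the text's own assumption for a space redesign, not a measured figure:

```python
# Hedged estimate of a space-stripped NVLink switch fabric, assuming 20-30%
# of the 800 kg terrestrial switch rack mass survives (per the text above).
terrestrial_switch_kg = 800
it_power_kw = (120, 132)

for f in (0.20, 0.30):
    mass = terrestrial_switch_kg * f
    print(f"{f:.0%} retained: {mass:.0f} kg, "
          f"{mass / it_power_kw[1]:.1f}-{mass / it_power_kw[0]:.1f} kg/kW_IT")
```

This puts the stripped switch fabric at roughly 160-240 kg, or ~1.2-2.0 kg/kW_IT.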

Power conversion and distribution mass

  1. [evidence] GB200 NVL72 power distribution unit weighs 300 kg and power conversion is done in four 30 kW power shelves converting 480V 3-phase AC to 54V DC at 97% efficiency (introl-nvl72-deployment). This is ~2.3-2.5 kg/kW for terrestrial AC-DC conversion -- not needed in space (where power arrives as DC from solar arrays).

  2. [evidence] VR NVL72 power delivery uses four 110 kW power shelves (each 3U, containing six 18.3 kW PSU modules) stepping down 415-480 VAC to 50 VDC (semianalysis-vera-rubin). At the compute tray level, 50 VDC enters via busbar, then IBC modules on the Strata board step down to 12 VDC, then VRMs step down to ~1 VDC for GPUs. This multi-stage architecture would be partially retained in space (the DC-DC stages from bus voltage to point-of-load are inherent to the boards and not removable).

  3. [evidence] For space applications, a NASA study describes a 1 MW, 100 kV space-based DC-DC converter with estimated system mass of 83.8 kg, giving 11.9 kW/kg (or ~0.084 kg/kW) (nasa-high-power-dc-dc). This is an advanced high-power design, not representative of current commercial products.

  4. [evidence] The MDPI review of satellite DC-DC converters reports GaN/SiC converters achieving 0.2-0.5 kg/kW at moderate power levels, with satellite power systems comprising ~25% of total dry mass. Power harness and cabling adds 10-25% on top of the power electronics mass (mdpi-satellite-dc-dc).

  5. [evidence] A PCDU (Power Conditioning and Distribution Unit) example: the COLOSSUS model weighs 7.6 kg for 1.6 kW delivery capacity = ~4.75 kg/kW. This is a low-power satellite PCDU, not representative of a 100+ kW system where specific mass improves significantly with scale.
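Normalizing the cited converter figures to a hypothetical 130 kW_IT bus makes the spread visible. The 10-25% harness adder is the MDPI range; the 130 kW scale is an assumption for illustration:

```python
# Compare the cited DC-DC specific-mass figures at a hypothetical 130 kW bus.
converters_kg_per_kw = {
    "NASA 1 MW study": 0.084,
    "GaN/SiC (low)":   0.2,
    "GaN/SiC (high)":  0.5,
    "COLOSSUS PCDU":   7.6 / 1.6,   # small-sat class; poor proxy at this scale
}
bus_kw = 130
for name, spec in converters_kg_per_kw.items():
    base = spec * bus_kw
    low, high = base * 1.10, base * 1.25   # MDPI power-harness adder
    print(f"{name}: {base:.0f} kg electronics, {low:.0f}-{high:.0f} kg with harness")
```

The GaN/SiC range (roughly 26-65 kg of electronics before harness) is the most relevant anchor for the space-adapted estimate; the COLOSSUS figure illustrates how badly small-sat PCDUs scale.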

Space compute module data

  1. [evidence] NVIDIA's Space-1 Vera Rubin Module is "purpose-built to perform in the harsh, low-SWaP environment of space" and delivers "up to 25x more AI-compute" than an H100. It is not yet commercially available. No specific mass, weight, or power figures have been published (payload-space1-geeksquad, nvidia-space1-module). Jensen Huang acknowledged the cooling challenge: "in space there's no conduction, there's no convection, there's just radiation."

  2. [evidence] Starcloud-1 satellite weighs 60 kg total and carries one NVIDIA H100 GPU consuming ~0.7 kW (starcloud-nvidia-blog). The total satellite mass is 60 kg for 0.7 kW_IT, giving ~86 kg/kW_IT for the entire satellite (including bus, solar, comms, structure -- not just compute). The compute hardware itself is a small fraction.

  3. [opinion] Casey Handmer estimates a Starlink-derived satellite could produce ~130 kW of electrical power and host ~200 H100-equivalent GPUs on the main bus, each consuming ~700 W (handmer-space-inference). He suggests GPUs could be installed "directly to the solar module" in a distributed architecture. Note that 200 GPUs at 700 W is ~140 kW of peak draw, slightly above the ~130 kW of generation, so sustained IT load would be somewhat lower (roughly 100-130 kW_IT).
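As a quick check on the Handmer sketch (the 130 kW and 700 W figures are from the evidence item above):

```python
# Peak GPU draw vs available generation for the Starlink-derived concept.
gpus, w_per_gpu, gen_kw = 200, 700, 130
peak_kw = gpus * w_per_gpu / 1000
print(f"peak GPU draw {peak_kw:.0f} kW vs {gen_kw} kW generated "
      f"-> utilization cap {gen_kw / peak_kw:.0%}")
```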

Satellite system mass budgets

  1. [opinion] Andrew McCalip's orbital datacenter model assumes mass allocation of roughly 33% compute, 33% power systems, and 33% thermal management, with total system mass of ~100,000 kg/MW (= 100 kg/kW) for deployed infrastructure including all subsystems (mccalip-space-datacenters). The compute-only portion at 33% would be ~33 kg/kW_IT total satellite mass attributed to compute -- but this includes structural allocation and is a full-system budget, not bare hardware.

  2. [opinion] Elon Musk on designing chips for space: "roughly if you increase the operating temperature by 20% in degrees Kelvin, you can cut your radiator mass in half. So running at a higher temperature is helpful... I just designed it to run hot and I think you pretty much do it the same way that you do things on earth, apart from make it run hotter. The solar array is most of the weight on the satellite" (elon-musk-dwarkesh-interview). This suggests compute hardware is a minority of total satellite mass.
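The radiator claim in the Musk quote follows from the Stefan-Boltzmann law: radiated power scales as T^4, so for a fixed heat load the required radiator area (and, roughly, mass) scales as 1/T^4. A quick check:

```python
# Worked check of the quote: +20% absolute temperature -> ~half the radiator mass,
# assuming radiator mass is proportional to area and area scales as 1/T^4.
t_ratio = 1.20
mass_factor = 1 / t_ratio ** 4
print(f"radiator mass factor: {mass_factor:.2f}")  # ~0.48, i.e. roughly halved
```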

Emerging rack mass trends

  1. [evidence] Vera Rubin NVL72 at ~1,815 kg (4,000 lbs) rack weight for 180-220 kW TDP represents ~8.3-10.1 kg/kW_IT for the integrated rack (compute trays + NVLink switch trays + power shelves + chassis + liquid cooling manifolds + busbars). This is the most mass-efficient rack-scale AI system announced to date, owing to cableless design and modular architecture (vera-rubin-nvl72-nvidia).

  2. [evidence] VR NVL72 compute tray is 100% liquid cooled with internal manifolds and cold plate modules on every component. Fans are removed entirely (semianalysis-vera-rubin). The liquid cooling hardware within the rack (manifolds, cold plates, QDs) adds mass that would be replaced by different thermal coupling in space.

Analysis

Decomposing the terrestrial rack for space

The GB200 NVL72 terrestrial system at ~3,000 kg / 120-132 kW gives ~23-25 kg/kW_IT. To estimate space-adapted compute hardware mass, we must identify what can be removed and what must be added:

Removable (terrestrial-only) components:

  1. CDU (coolant distribution unit): 400 kg
  2. Power distribution unit and AC-DC power shelves: ~300 kg (power arrives as DC in space)
  3. Rack enclosure, rails, fans, liquid-cooling manifolds, and data-center fittings: ~220-380 kg (inferred as the remainder)

Total removable: ~920-1,080 kg

Retained (compute-essential) components:

  1. Bare compute boards (GPUs, CPUs, NVLink boards, on-board IBC/VRM stages): ~100 kg
  2. Stripped NVLink switch assembly: ~160-240 kg (20-30% of the 800 kg terrestrial switch rack)
  3. Interconnect cabling and backplane copper not already counted: remainder of the range

Retained subtotal: ~250-400 kg for 120-132 kW = ~1.9-3.3 kg/kW_IT

Added for space:

  1. Space-grade DC-DC conversion (bus voltage to 50 VDC): ~26-65 kg at 0.2-0.5 kg/kW for ~130 kW
  2. Power harness and cabling: ~10-25% on top of the power electronics mass
  3. Structural chassis, conformal coating, vibration damping, and radiation-tolerance allowance: ~75-160 kg

Added subtotal: ~110-250 kg

Total space-adapted compute hardware

Combining retained + added: ~360-650 kg for ~130 kW_IT

This gives 2.8-5.0 kg/kW_IT for space-adapted compute hardware, including GPUs, NVLink switches, DC-DC conversion, structural chassis, and cabling -- but excluding radiators, solar arrays, batteries, and communications.
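The combination above, as arithmetic:

```python
# Retained compute-essential mass plus space-specific additions over ~130 kW_IT.
retained_kg = (250, 400)
added_kg = (110, 250)
it_kw = 130

low = (retained_kg[0] + added_kg[0]) / it_kw
high = (retained_kg[1] + added_kg[1]) / it_kw
print(f"{low:.1f}-{high:.1f} kg/kW_IT")  # ~2.8-5.0
```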

Cross-checks

  1. McCalip's full-system budget (~100 kg/kW with ~33% allocated to compute) implies ~33 kg/kW of total satellite mass attributed to compute; the 5 kg/kW bare-hardware central estimate fits comfortably inside that allocation once structure and integration margins are added back.
  2. Dwarkesh's ~11.8 kg/kW for a complete integrated satellite (compute + solar + radiators + chassis + comms) is consistent with compute hardware alone occupying ~2.8-5.0 kg/kW of that budget.
  3. The VR NVL72 terrestrial rack at ~8-10 kg/kW bounds the estimate from above: a space build that sheds liquid-cooling hardware and AC-DC conversion should come in below a fully integrated terrestrial rack.

Key uncertainties

  1. The "100 kg stripped NVL72" figure from Dwarkesh is unverified. It is attributed to unnamed sources. If the actual bare-board mass is 150-200 kg (which would be more consistent with the DGX H100 scaling), the optimistic end shifts upward.

  2. NVLink switch mass for space is highly uncertain. The terrestrial switch rack (800 kg) is dominated by chassis and connectors. For space, a redesigned switch fabric could be much lighter, but the dense copper backplane cabling is inherently heavy and may not be easily eliminated.

  3. Space qualification adds mass. Conformal coatings, potting compounds, radiation-tolerant component screening, and vibration damping all add mass. A 15-25% mass penalty vs. terrestrial stripped hardware is typical for space qualification.

  4. Next-generation hardware (Vera Rubin, Rubin Ultra) will improve power density. VR NVL72 delivers 10x more performance per watt than GB200 (vera-rubin-nvl72-nvidia). If power efficiency improves faster than mass, the kg/kW_IT metric will decrease with each generation.

  5. NVIDIA Space-1 module specifications are unknown. If NVIDIA publishes mass and power data for this module, it would provide the first direct measurement of space-adapted AI compute density. The 25x AI-compute improvement over H100 in a space-optimized package could significantly shift the estimate.
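A small sensitivity sketch for uncertainties 1 and 3 above. The non-board mass (~405 kg, taken as the decomposition midpoint of ~505 kg minus the claimed 100 kg of boards) is held fixed as an illustrative assumption, while board mass and qualification penalty vary:

```python
# Sensitivity of the space-adapted estimate to the unverified bare-board mass
# (100 vs 150 vs 200 kg) and a 15-25% space-qualification mass penalty.
non_board_kg, it_kw = 405, 130   # illustrative midpoint assumption
for board_kg in (100, 150, 200):
    for penalty in (1.15, 1.25):
        total = (board_kg + non_board_kg) * penalty
        print(f"boards={board_kg} kg, +{penalty - 1:.0%} qual: "
              f"{total / it_kw:.1f} kg/kW_IT")
```

Under these assumptions the estimate spans roughly 4.5-5.8 kg/kW_IT, bracketing the 5.0 central value.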

Summary table

Configuration                            Mass (kg)   Power (kW_IT)   kg/kW_IT   Notes
GB200 NVL72 full terrestrial             3,000       120-132         23-25      Includes CDU, power dist, everything
GB200 compute rack only (terrestrial)    1,500       ~110            ~14        Excludes switch rack, CDU, PDU
VR NVL72 rack (terrestrial)              ~1,815      180-220         8-10       Next-gen, higher density
DGX H100 node (terrestrial)              130         10              13         8-GPU server with PSUs and fans
GB200 NVL72 bare boards only             ~100        132             ~0.76      Unverified; no switches/chassis/power
Space-adapted (optimistic)               ~325        130             2.5        Aggressive stripping + GaN DC-DC
Space-adapted (central)                  ~650        130             5.0        Realistic near-term
Space-adapted (conservative)             ~1,040      130             8.0        First-gen with margins
Starcloud-1 full satellite               60          0.7             86         Entire satellite for 1 GPU
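A consistency check that recomputes the kg/kW_IT column from the mass and power columns (range entries use midpoints):

```python
# Recompute kg/kW_IT for a few summary-table rows from mass and power.
rows = [
    ("GB200 NVL72 full terrestrial", 3000, 126),   # midpoint of 120-132 kW
    ("VR NVL72 rack (terrestrial)",  1815, 200),   # midpoint of 180-220 kW
    ("Space-adapted (central)",       650, 130),
    ("Starcloud-1 full satellite",     60, 0.7),
]
for name, kg, kw in rows:
    print(f"{name}: {kg / kw:.1f} kg/kW_IT")
```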