Compute Hardware Mass per kW_IT

What is the mass per kW_IT (kg/kW_IT) for AI compute hardware adapted for orbital deployment?

Answer

Central estimate: 6.0 kg/kW_IT for AI compute hardware adapted for orbital deployment, including GPUs, NVLink switches, power conversion electronics, structural chassis, and cabling — but excluding cooling systems (radiators, heat pipes) and solar arrays, which are addressed by separate parameters.

Analysis

Decomposing the terrestrial rack for space

The GB200 NVL72 terrestrial system at ~3,000 kg / ~120 kW gives ~25 kg/kW_IT. To estimate space-adapted compute hardware mass, we must identify what can be removed and what must be added:

Removable (terrestrial-only) components: the CDU (~400 kg), the AC-DC power shelves and power distribution unit (~300 kg), the heavy steel rack frame, the in-rack liquid-cooling manifolds and cold plates, and the air-cooling fans in the switch rack (see the component-to-category mapping below).

Total removable: ~920-1,080 kg

Retained (compute-essential) components: GPU dies and HBM stacks, Grace CPU modules, NVLink switch ASICs and their tray PCBs, on-board VRMs and IBC modules, and the copper NVLink backplane cabling (itemized in the mapping below).

Retained subtotal: ~350-650 kg for ~120 kW = ~2.9-5.4 kg/kW_IT

Added for space: DC-DC conversion from bus voltage to 50 VDC, the power harness within the compute assembly, a lightweight structural chassis for launch loads and thermal mounting, and space-qualification measures (conformal coating, potting, vibration damping).

Added subtotal: ~125-260 kg

Total space-adapted compute hardware

Combining retained + added: ~475-910 kg for ~120 kW_IT

This gives 4.0-7.6 kg/kW_IT for space-adapted compute hardware, including GPUs, NVLink switches, DC-DC conversion, structural chassis, and cabling — but excluding radiators, solar arrays, batteries, and communications. The central estimate of 6.0 kg/kW_IT sits in the upper half of this range, reflecting the higher bare-board floor established by the B200 cross-check.
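As a quick check on the arithmetic, a minimal sketch of the decomposition above (the figures are the rough subtotals quoted in this section, not measured values):

```python
# Decomposition arithmetic for space-adapted compute hardware (ranges quoted above).
retained_kg = (350, 650)   # compute-essential boards, switches, backplane cabling
added_kg = (125, 260)      # space-added DC-DC conversion, harness, lightweight chassis
power_kw_it = 120          # GB200 NVL72 IT load

total_kg = (retained_kg[0] + added_kg[0], retained_kg[1] + added_kg[1])
kg_per_kw = tuple(round(m / power_kw_it, 1) for m in total_kg)
print(total_kg)    # (475, 910) kg
print(kg_per_kw)   # (4.0, 7.6) kg/kW_IT; the central estimate is taken at 6.0
```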

NVL72 component-to-category mapping

The decomposition above strips the NVL72 to "retained" and "added" components but does not name every subcomponent in the rack. For transparency, here is how the full NVL72 bill of materials maps to this page's "compute mass" category vs. the structural overhead parameter (see structural-overhead):

Included in compute mass (this page): GPU dies and HBM stacks, Grace CPU modules, NVLink switch ASICs and their tray PCBs, on-board VRMs and IBC modules (inherent to compute boards), copper NVLink backplane cables between compute and switch trays, space-added DC-DC conversion from bus voltage to 50 VDC, power harness within the compute assembly, and the lightweight structural chassis that houses compute trays for launch and thermal mounting.

Included in structural overhead (separate parameter): Satellite-level wiring harness (power feeds from solar arrays to compute assembly), ADCS, propulsion, inter-satellite laser links and ground communications terminals, flight computer / C&DH, and residual bus structure beyond the compute chassis.

Removed entirely (not in either category): The terrestrial CDU (400 kg), AC-DC power shelves (300 kg), heavy steel rack frame, liquid cooling manifolds and cold plates within the rack, and air-cooling fans in the switch rack. These are replaced by space-specific equivalents modeled elsewhere — radiators and heat transport in the thermal parameter, solar arrays in the power parameter.

Terrestrial components omitted from space adaptation: Each NVL72 compute tray includes 400G ConnectX NICs, BlueField DPUs, and NVMe storage interfaces for terrestrial networking and local storage. These are not needed in the orbital architecture (inter-satellite communication uses dedicated laser terminals counted in structural overhead; local NVMe storage is unnecessary for inference workloads). Their mass is small (~1-2 kg per tray, ~18-36 kg total) and falls within the uncertainty of the retained-board estimate. Similarly, BMC (baseboard management controller) interfaces and top-of-rack Ethernet management switches serve terrestrial fleet management and would be replaced by lighter spacecraft C&DH, counted in structural overhead. No components are double-counted between this page and the structural overhead parameter.

Cross-checks

The derived 4.0-7.6 kg/kW_IT range is cross-checked against the independent data points collected under Evidence and in the summary table below: the ~32 kg HGX B200 baseboard sets a bare-board floor near 4 kg/kW_IT, terrestrial DGX servers land at ~10-13 kg/kW_IT, the VR NVL72 rack at ~8-10 kg/kW_IT, and the unverified Dwarkesh stripped-rack figure (~0.76 kg/kW_IT) marks an optimistic outlier.

Key uncertainties

  1. The "100 kg stripped NVL72" figure from Dwarkesh is unverified. It is attributed to unnamed sources. If the actual bare-board mass is 150-200 kg (which would be more consistent with the DGX H100 scaling), the optimistic end shifts upward.

  2. NVLink switch mass for space is highly uncertain. The terrestrial switch rack (800 kg) is dominated by chassis and connectors. For space, a redesigned switch fabric could be much lighter, but the dense copper backplane cabling is inherently heavy and may not be easily eliminated.

  3. Space qualification adds mass. Conformal coatings, potting compounds, radiation-tolerant component screening, and vibration damping all add mass. A 15-25% mass penalty vs. terrestrial stripped hardware is typical for space qualification; the sensitivity sketch after this list shows how this penalty interacts with the bare-board and switch-mass uncertainties above.

  4. Next-generation hardware (Vera Rubin, Rubin Ultra) will improve power density. VR NVL72 delivers 10x more performance per watt than GB200. If power efficiency improves faster than mass, the kg/kW_IT metric will decrease with each generation.

  5. NVIDIA Space-1 module specifications are unknown. If NVIDIA publishes mass and power data for this module, it would provide the first direct measurement of space-adapted AI compute density. The 25x AI-compute improvement over H100 in a space-optimized package could significantly shift the estimate.
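To make the first three uncertainties concrete, here is a rough sensitivity sketch over the boards-plus-switch portion of the mass. The grid values are illustrative assumptions drawn from the ranges above, and the space-added DC-DC, harness, and chassis from the decomposition are deliberately excluded, so the outputs sit below the full 4.0-7.6 kg/kW_IT range:

```python
# Sensitivity of boards-plus-switch mass to the bare-board figure, the
# stripped-switch fraction, and the space-qualification penalty.
power_kw_it = 120
for bare_boards_kg in (100, 200, 288):        # Dwarkesh claim vs. HGX B200 scaling
    for switch_fraction in (0.2, 0.3):        # stripped switch share of the 800 kg rack
        for qual_penalty in (0.15, 0.25):     # space-qualification mass growth
            total_kg = (bare_boards_kg + 800 * switch_fraction) * (1 + qual_penalty)
            print(f"boards={bare_boards_kg:>3} kg, switch={switch_fraction:.0%}, "
                  f"qual=+{qual_penalty:.0%} -> {total_kg / power_kw_it:.1f} kg/kW_IT")
```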

Summary table

| Configuration | Mass (kg) | Power (kW_IT) | kg/kW_IT | Notes |
|---|---|---|---|---|
| GB200 NVL72 full terrestrial | 3,000 | ~120 | ~25 | Includes CDU, power dist, everything |
| GB200 compute rack only (terrestrial) | 1,500 | ~110 | ~14 | Excludes switch rack, CDU, PDU |
| VR NVL72 rack (terrestrial) | ~1,815 | 180-220 | 8-10 | Next-gen, higher density |
| DGX B200 node (terrestrial) | ~142 | ~14.3 | ~10 | 8 B200 GPUs, 10U server |
| DGX H100 node (terrestrial) | 130 | 10 | 13 | 8-GPU server with PSUs and fans |
| HGX B200 baseboard only | ~32 | ~8 | ~4.0 | Bare GPU baseboard: GPUs, HBM, VRMs, PCB only |
| GB200 NVL72 bare boards (Dwarkesh) | ~100 | ~132 | ~0.76 | Unverified, likely understated; inconsistent with HGX B200 scaling (~288 kg for 72 GPUs) |
| Space-adapted (optimistic) | ~520 | 130 | 4.0 | HGX B200 baseboard floor + GaN DC-DC |
| Space-adapted (central) | ~780 | 130 | 6.0 | Upper half of derived range |
| Space-adapted (conservative) | ~1,170 | 130 | 9.0 | First-gen with margins |
| Starcloud-1 full satellite | 60 | 0.7 | 86 | Entire satellite for 1 GPU |
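The kg/kW_IT column follows directly from the mass and power columns; a minimal sketch using central values (ranges collapsed to single numbers):

```python
# Recomputes kg/kW_IT for selected summary-table rows from mass and power.
rows = {
    "GB200 NVL72 full terrestrial": (3000, 120),
    "GB200 compute rack only":      (1500, 110),
    "DGX B200 node":                (142, 14.3),
    "DGX H100 node":                (130, 10),
    "HGX B200 baseboard only":      (32, 8),
    "GB200 NVL72 bare boards":      (100, 132),
    "Space-adapted (central)":      (780, 130),
    "Starcloud-1 full satellite":   (60, 0.7),
}
for name, (mass_kg, power_kw_it) in rows.items():
    print(f"{name}: {mass_kg / power_kw_it:.1f} kg/kW_IT")
```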

Evidence

Terrestrial baseline: full rack mass

  1. The GB200 NVL72 system arrives in four separate components: compute rack weighing 1,500 kg, NVLink Switch rack at 800 kg, CDU (coolant distribution unit) at 400 kg, and power distribution unit at 300 kg. Total ~3,000 kg. The system consumes ~120 kW. This gives a full terrestrial rack mass of ~25 kg/kW_IT (summed in the sketch after this list).

  2. A separate source states the GB200 NVL72 "weighs 3,000 kilograms and requires 2.4 megawatts of cooling capacity" for 120 kW IT load. The 3,000 kg figure is consistent across sources.

  3. The Vera Rubin NVL72 rack weighs "roughly 4,000 lbs" (~1,815 kg) -- this appears to be for the compute rack unit alone (not including separate switch rack, CDU, or power distribution), housing 72 Rubin GPUs and 36 Vera CPUs across 18 compute trays plus 9 NVLink switch trays. VR NVL72 system TDP is 180-220 kW.

  4. The GB200 NVL72 compute rack weighs ~1,360 kg and draws ~120 kW (72 Blackwell GPUs, 36 Grace CPUs). This is the compute rack alone, not the full system which includes separate switch rack, CDU, and power distribution. At ~120 kW, this gives ~11.3 kg/kW_IT for the terrestrial compute rack with liquid cooling and chassis.
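A minimal sketch summing the four shipping units from item 1 and normalizing to the IT load, alongside the compute-rack-only figure from item 4:

```python
# Sums the GB200 NVL72 shipping units and normalizes to the ~120 kW IT load.
units_kg = {"compute rack": 1500, "NVLink switch rack": 800,
            "CDU": 400, "power distribution": 300}
power_kw_it = 120

full_system_kg = sum(units_kg.values())
print(full_system_kg, full_system_kg / power_kw_it)   # 3000 kg, 25.0 kg/kW_IT
print(1360 / power_kw_it)                             # ~11.3 kg/kW_IT (compute rack, item 4)
```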

Stripped compute-only mass

  1. Dwarkesh Patel reports a stripped GB200 NVL72 at ~100 kg. The source's headline figure of ~1,452 W/kg includes a 10% power allowance for intersatellite laser communications on top of the IT load; the compute-only figure is ~1,320 W/kg (~0.76 kg/kW). This is the most optimistic figure, reflecting only the GPU/CPU modules and NVLink boards without any chassis, power conversion, cooling, or structural support, and it does not include power distribution, NVLink switch racks, or communications mass.

  2. Dwarkesh adds 10% overhead for intersatellite lasers and then assumes 25% of total satellite mass must be chassis, arriving at 85 W/kg (~11.8 kg/kW) for the whole integrated satellite including solar arrays, radiators, chassis, and communications. This is a complete satellite figure, not compute-hardware-only.

NVLink switch mass

  1. The NVLink Switch rack for GB200 NVL72 weighs 800 kg for 72 GPUs drawing ~120 kW. This is ~6.7 kg/kW_IT for the switch fabric alone in terrestrial packaging. The switches themselves (ASICs + PCBs) are a small fraction of this; most mass is chassis, connectors, copper backplane cables, and power delivery. A stripped switch assembly for space might weigh 20-30% of the terrestrial version; this is quantified in the sketch after this list.

  2. The GB200 NVL72 has 9 NVLink switch trays, each containing 2 NVLink Switch ASICs. The VR NVL72 maintains the same NVLink switch tray count with NVLink 6 ASICs. Switch tray PCBs are upgraded to 32 layers with high-end CCL material, suggesting each tray is a dense, heavy PCB assembly.
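A minimal sketch of the 20-30% stripping assumption applied to the 800 kg terrestrial switch rack; the retained fraction itself is the speculative assumption from item 1, not a measured figure:

```python
# Stripped NVLink switch mass if 20-30% of the 800 kg terrestrial rack survives,
# normalized to the ~120 kW IT load.
terrestrial_switch_kg, power_kw_it = 800, 120
for fraction in (0.20, 0.30):
    stripped_kg = terrestrial_switch_kg * fraction
    print(f"{fraction:.0%}: {stripped_kg:.0f} kg -> {stripped_kg / power_kw_it:.2f} kg/kW_IT")
# ~160-240 kg, i.e. roughly 1.3-2.0 kg/kW_IT of the space-adapted budget
```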

Power conversion and distribution mass

  1. GB200 NVL72 power distribution unit weighs 300 kg and power conversion is done in four 30 kW power shelves converting 480V 3-phase AC to 54V DC at 97% efficiency. This is ~2.3-2.5 kg/kW for terrestrial AC-DC conversion -- not needed in space (where power arrives as DC from solar arrays).

  2. VR NVL72 power delivery uses four 110 kW power shelves (each 3U, containing six 18.3 kW PSU modules) stepping down 415-480 VAC to 50 VDC. At the compute tray level, 50 VDC enters via busbar, then IBC modules on the Strata board step down to 12 VDC, then VRMs step down to ~1 VDC for GPUs. This multi-stage architecture would be partially retained in space (the DC-DC stages from bus voltage to point-of-load are inherent to the boards and not removable).

  3. For space applications, a NASA study describes a 1 MW, 100 kV space-based DC-DC converter with estimated system mass of 83.8 kg, giving 11.9 kW/kg (or ~0.084 kg/kW). This is an advanced high-power design, not representative of current commercial products; a bracketing sketch after this list compares it against the terrestrial shelf density above.

  4. The MDPI review of satellite DC-DC converters reports that satellite power systems constitute ~33% of total dry mass. The paper surveys space-grade DC-DC converter topologies and radiation-hardened designs but does not provide specific kg/kW figures for GaN/SiC converters at the power levels relevant to orbital compute.

  5. [analysis] The MDPI paper does not provide specific PCDU mass examples at the power levels relevant to orbital compute (100+ kW). Scaling from traditional satellite PCDUs (typically <10 kW) to 100+ kW systems would likely improve specific mass significantly, but no flight-heritage data exists at this scale. (Editorial commentary on source gaps, not sourced evidence.)
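The evidence above gives only endpoints for space DC-DC conversion mass; a crude bracketing sketch at a ~130 kW_IT load follows. Neither endpoint is flight-proven at this scale, so the bracket is purely illustrative:

```python
# Brackets DC-DC conversion mass at ~130 kW_IT between the advanced NASA design
# (~0.084 kg/kW) and terrestrial AC-DC power-shelf density (~2.5 kg/kW).
power_kw_it = 130
for label, kg_per_kw in [("NASA advanced DC-DC", 0.084),
                         ("terrestrial AC-DC shelf density", 2.5)]:
    print(f"{label}: {kg_per_kw * power_kw_it:.0f} kg")
# ~11 kg to ~325 kg; the 'added for space' subtotal of ~125-260 kg (which also
# covers harness and chassis) falls inside this bracket.
```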

Space compute module data

  1. NVIDIA's Space-1 Vera Rubin Module is "purpose-built to perform in the harsh, low-SWaP environment of space" and delivers "up to 25x more AI-compute" than an H100. It is not yet commercially available. No specific mass, weight, or power figures have been published. Jensen Huang acknowledged the cooling challenge: "in space there's no conduction, there's no convection, there's just radiation."

  2. Starcloud-1 satellite weighs 60 kg total and carries one NVIDIA H100 GPU consuming ~0.7 kW. The total satellite mass is 60 kg for 0.7 kW_IT, giving ~86 kg/kW_IT for the entire satellite (including bus, solar, comms, structure -- not just compute). The compute hardware itself is a small fraction.

  3. Casey Handmer estimates a Starlink-derived satellite could produce ~130 kW of electrical power and host ~200 H100-equivalent GPUs on the main bus, each consuming ~700 W. He suggests GPUs could be installed "directly to the solar module" in a distributed architecture. At 200 GPUs x 700 W, this implies ~140 kW_IT -- slightly exceeding his ~130 kW solar budget, suggesting either fewer GPUs or lower per-GPU power in practice.
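A quick consistency check on the Handmer configuration in item 3 (200 GPUs at ~700 W against a ~130 kW solar budget):

```python
# Checks the GPU count against the solar budget for the Handmer configuration.
gpus, watts_per_gpu, solar_kw = 200, 700, 130
it_kw = gpus * watts_per_gpu / 1000
print(it_kw)                                   # 140 kW_IT, ~8% over the ~130 kW budget
print(int(solar_kw * 1000 / watts_per_gpu))    # ~185 GPUs would fit within 130 kW
```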

Satellite system mass budgets

  1. Andrew McCalip's orbital datacenter model assumes mass allocation of roughly 33% compute, 33% power systems, and 33% thermal management, with total system mass of ~100,000 kg/MW (= 100 kg/kW) for deployed infrastructure including all subsystems. The compute-only portion at 33% would be ~33 kg/kW_IT total satellite mass attributed to compute -- but this includes structural allocation and is a full-system budget, not bare hardware; the sketch after this list compares it with this page's hardware-only estimate.

  2. Elon Musk on designing chips for space: "roughly if you increase the operating temperature by 20% in degrees Kelvin, you can cut your radiator mass in half. So running at a higher temperature is helpful... I just designed it to run hot and I think you pretty much do it the same way that you do things on earth, apart from make it run hotter. The solar array is most of the weight on the satellite". This suggests compute hardware is a minority of total satellite mass.
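For scale, a sketch comparing the McCalip system-level allocation from item 1 with this page's hardware-only central estimate; the interpretation of the gap is an inference, not a sourced figure:

```python
# Compares the 33% compute share of a ~100 kg/kW system budget with the
# 6.0 kg/kW_IT bare-hardware central estimate from this page.
system_kg_per_kw, compute_share = 100, 1 / 3
compute_allocation = system_kg_per_kw * compute_share
print(f"{compute_allocation:.0f} kg/kW_IT allocated to compute at system level")
print("6.0 kg/kW_IT bare compute hardware (this page)")
# The remaining ~27 kg/kW presumably covers structure, integration, and margin
# booked under other parameters rather than bare compute hardware.
```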

B200-class hardware mass

  1. The DGX B200 is a 10U server weighing ~142 kg, with 8 Blackwell B200 GPUs consuming up to ~14.3 kW at full load. This gives ~10 kg/kW_IT for a complete terrestrial 8-GPU server, consistent with the DGX H100's 13 kg/kW_IT. — nvidia-dgx-b200-specs (RunPod DGX B200 guide)

  2. The HGX B200 baseboard — the GPU board assembly containing 8 B200 GPUs, HBM3e stacks, VRMs, NVLink interconnects, and PCB — weighs approximately 32 kg and draws up to ~8 kW. This gives ~4 kg/kW_IT for the bare GPU baseboard before DC-DC conversion, server chassis, cabling, or any space-specific adaptations. — nvidia-hgx-b200-pcf
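The scaling behind the summary table's cross-check note is straightforward; a sketch using the ~32 kg baseboard mass from item 2:

```python
# Scales the ~32 kg, 8-GPU HGX B200 baseboard to the 72 GPUs of an NVL72 system.
baseboard_kg, gpus_per_board, target_gpus = 32, 8, 72
scaled_kg = baseboard_kg * target_gpus / gpus_per_board
print(scaled_kg)   # 288 kg of bare GPU baseboards for 72 GPUs, vs. the ~100 kg
                   # stripped-NVL72 figure attributed to Dwarkesh
```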

Vera Rubin NVL72 rack mass

  1. Vera Rubin NVL72 at ~1,815 kg (4,000 lbs) rack weight for 180-220 kW TDP represents ~8.3-10.1 kg/kW_IT for the integrated rack (compute trays + NVLink switch trays + power shelves + chassis + liquid cooling manifolds + busbars). This is the most mass-efficient rack-scale AI system announced to date, owing to cableless design and modular architecture.

  2. VR NVL72 compute tray is 100% liquid cooled with internal manifolds and cold plate modules on every component. Fans are removed entirely. The liquid cooling hardware within the rack (manifolds, cold plates, QDs) adds mass that would be replaced by different thermal coupling in space.