Compute Hardware Mass per kW_IT
What is the mass per kW_IT (kg/kW_IT) for AI compute hardware adapted for orbital deployment?
Answer
Central estimate: 6.0 kg/kW_IT for AI compute hardware adapted for orbital deployment, including GPUs, NVLink switches, power conversion electronics, structural chassis, and cabling — but excluding cooling systems (radiators, heat pipes) and solar arrays, which are addressed by separate parameters.
- Optimistic: 4.0 kg/kW_IT — The HGX B200 baseboard alone weighs ~4 kg/kW_IT nvidia-hgx-b200-pcf.1, setting a hard floor for bare GPU hardware mass before any additions. This optimistic case assumes next-generation packaging (Vera Rubin era) with GaN/SiC power conversion and minimal structural overhead achieves this floor — plausible only with purpose-built space compute modules at scale where DC-DC conversion, cabling, and structural mass are absorbed into the baseboard design. Skepticism note: This optimistic value equals the bare baseboard mass with zero margin for any space-specific additions. The bottom-up analysis below shows space additions (DC-DC converter, power harness, structural chassis, radiation shielding, spacecraft interface) totaling 1.0–2.2 kg/kW_IT for current technology. Achieving 4.0 kg/kW_IT at the system level would require packaging advances that fully absorb these additions — plausible for next-generation integrated designs but undemonstrated. A more conservative optimistic floor of ~5.0 kg/kW_IT would allow minimal space additions while still representing aggressive optimization. The difference (1.0 kg/kW_IT) has a modest impact on total satellite mass and TCO.
- Central: 6.0 kg/kW_IT — Stripped Blackwell/Rubin-class hardware (terrestrial liquid cooling, AC-DC conversion, and heavy steel chassis removed) with NVLink switches, space-grade DC-DC conversion, lightweight structural enclosure, and cabling harness. This sits in the upper half of the bottom-up derived range (4.0-7.6 kg/kW_IT), reflecting that the bare-board mass floor is higher than the unverified Dwarkesh estimate suggests (see B200 cross-check below).
- Conservative: 9.0 kg/kW_IT — Includes heavier space-qualified power electronics, more conservative structural margins for launch vibration and thermal cycling, and partial redundancy in power distribution. Closer to what first-generation orbital systems will actually achieve.
Analysis
Decomposing the terrestrial rack for space
The GB200 NVL72 terrestrial system at ~3,000 kg / ~120 kW gives ~25 kg/kW_IT. To estimate space-adapted compute hardware mass, we must identify what can be removed and what must be added:
Removable (terrestrial-only) components:
- CDU (coolant distribution unit): 400 kg -- entirely removed; space cooling uses radiators (separate parameter)
- AC-DC power shelves: 300 kg -- removed; solar arrays provide DC directly
- Liquid cooling loops within rack (manifolds, cold plates, fluid): estimated ~100-200 kg -- replaced by heat pipes or cold plates coupled to external radiators
- Heavy steel rack frame: estimated ~100-150 kg -- replaced by lightweight aluminum/composite space structure
- Fans (in NVLink switch rack): minor, ~20-30 kg
Total removable: ~920-1,080 kg
Retained (compute-essential) components:
- Bare compute boards (GPUs, CPUs, HBM, PCBs, VRMs): The Dwarkesh estimate of ~100 kg for a stripped GB200 NVL72 dwarkesh-space-gpus.1 is likely too low. Cross-checking against the HGX B200 baseboard nvidia-hgx-b200-pcf.1 (~32 kg for 8 GPUs, ~4 kg/kW_IT baseboard-only), scaling to 72 GPUs gives ~288 kg for compute baseboards alone. Accounting for architectural differences between HGX and NVL72, and for Grace CPU modules, a more conservative bare-board estimate is ~200-350 kg.
- NVLink switch board assemblies (ASICs + PCBs + connectors): estimated ~100-200 kg (stripped from 800 kg terrestrial switch rack)
- DC-DC conversion (bus voltage to point-of-load): inherent to boards, ~0 additional standalone mass
- Copper backplane cables (NVLink interconnects between compute trays and switch trays): estimated ~50-100 kg
- Board-level connectors and passive components: included in board mass above
Retained subtotal: ~350-650 kg for ~120 kW = ~2.9-5.4 kg/kW_IT
Added for space:
- DC-DC converter from solar bus voltage (~100-150 V) to 50 VDC rack bus: satellite power systems constitute ~33% of total dry mass mdpi-satellite-dc-dc.1, but at 100+ kW scale, specific mass should improve. Estimated ~0.3-0.6 kg/kW based on the NASA 1 MW converter study nasa-high-power-dc-dc.1 scaled down, giving ~35-70 kg for 120 kW
- Power harness and cabling: ~10-20 kg
- Structural chassis for launch vibration (space-qualified Al/composite): ~50-100 kg (lighter than terrestrial steel rack, but must survive launch loads and provide thermal mounting)
- Radiation shielding (spot shielding for sensitive components): ~10-30 kg
- Spacecraft interface (mounting, thermal coupling to radiator loop): ~20-40 kg
Added subtotal: ~125-260 kg
Total space-adapted compute hardware
Combining retained + added: ~475-910 kg for ~120 kW_IT
This gives 4.0-7.6 kg/kW_IT for space-adapted compute hardware, including GPUs, NVLink switches, DC-DC conversion, structural chassis, and cabling — but excluding radiators, solar arrays, batteries, and communications. The central estimate of 6.0 kg/kW_IT sits in the upper half of this range, reflecting the higher bare-board floor established by the B200 cross-check.
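The bottom-up budget above can be reproduced with a short sketch. All (low, high) component masses are this page's estimated ranges, not measured values:

```python
# Sketch of the bottom-up mass budget for space-adapted compute hardware.
# Every range below is an estimate from this page, not a measurement.
RACK_POWER_KW = 120  # GB200 NVL72 IT load

retained = {                                  # compute-essential hardware
    "bare compute boards": (200, 350),
    "NVLink switch assemblies": (100, 200),
    "NVLink copper backplane cables": (50, 100),
}
added = {                                     # space-specific additions
    "DC-DC converter (bus to 50 VDC)": (35, 70),
    "power harness and cabling": (10, 20),
    "structural chassis": (50, 100),
    "radiation spot shielding": (10, 30),
    "spacecraft interface": (20, 40),
}

def subtotal(budget):
    return sum(v[0] for v in budget.values()), sum(v[1] for v in budget.values())

r_lo, r_hi = subtotal(retained)               # 350-650 kg
a_lo, a_hi = subtotal(added)                  # 125-260 kg
lo, hi = r_lo + a_lo, r_hi + a_hi             # 475-910 kg
print(f"{lo}-{hi} kg -> "
      f"{lo / RACK_POWER_KW:.1f}-{hi / RACK_POWER_KW:.1f} kg/kW_IT")
```

The central 6.0 kg/kW_IT sits in the upper half of the resulting 4.0-7.6 kg/kW_IT range; the conservative 9.0 case adds qualification and redundancy margins beyond it.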
NVL72 component-to-category mapping
The decomposition above strips the NVL72 to "retained" and "added" components but does not name every subcomponent in the rack. For transparency, here is how the full NVL72 bill of materials maps to this page's "compute mass" category vs. the structural overhead parameter (see structural-overhead):
Included in compute mass (this page): GPU dies and HBM stacks, Grace CPU modules, NVLink switch ASICs and their tray PCBs, on-board VRMs and IBC modules (inherent to compute boards), copper NVLink backplane cables between compute and switch trays, space-added DC-DC conversion from bus voltage to 50 VDC, power harness within the compute assembly, and the lightweight structural chassis that houses compute trays for launch and thermal mounting.
Included in structural overhead (separate parameter): Satellite-level wiring harness (power feeds from solar arrays to compute assembly), ADCS, propulsion, inter-satellite laser links and ground communications terminals, flight computer / C&DH, and residual bus structure beyond the compute chassis.
Removed entirely (not in either category): The terrestrial CDU (400 kg), AC-DC power shelves (300 kg), heavy steel rack frame, liquid cooling manifolds and cold plates within the rack, and air-cooling fans in the switch rack. These are replaced by space-specific equivalents modeled elsewhere — radiators and heat transport in the thermal parameter, solar arrays in the power parameter.
Terrestrial components omitted from space adaptation: Each NVL72 compute tray includes 400G ConnectX NICs, BlueField DPUs, and NVMe storage interfaces for terrestrial networking and local storage. These are not needed in the orbital architecture (inter-satellite communication uses dedicated laser terminals counted in structural overhead; local NVMe storage is unnecessary for inference workloads). Their mass is small (~1-2 kg per tray, ~18-36 kg total) and falls within the uncertainty of the retained-board estimate. Similarly, BMC (baseboard management controller) interfaces and top-of-rack Ethernet management switches serve terrestrial fleet management and would be replaced by lighter spacecraft C&DH, counted in structural overhead. No components are double-counted between this page and the structural overhead parameter.
Cross-checks
HGX B200 baseboard nvidia-hgx-b200-pcf.1: ~32 kg for 8 GPUs at ~8 kW = ~4 kg/kW_IT for the bare GPU baseboard only (no chassis, no power supply, no cooling). This is the strongest available floor: it represents the irreducible mass of GPU packages, HBM stacks, VRMs, and PCB. Adding DC-DC conversion, chassis, cabling, shielding, and launch accommodation on top of 4 kg/kW yields 5.5-7.5 kg/kW_IT — consistent with our central-to-conservative range.
DGX B200 nvidia-dgx-b200-specs.1: 10U server, ~14.3 kW, ~10 kg/kW_IT for a complete terrestrial server. Stripping AC-DC power supply and cooling fans but adding space-specific components gives ~7-9 kg/kW_IT, consistent with our conservative case.
The Dwarkesh "100 kg bare boards" figure dwarkesh-space-gpus.1 (~0.76 kg/kW for a stripped GB200 NVL72) is inconsistent with the HGX B200 scaling: 9 × 32 kg = ~288 kg for equivalent GPU baseboards alone, nearly 3x the Dwarkesh figure. The discrepancy may reflect different definitions of "stripped" (Dwarkesh may count only GPU/CPU die + HBM, not the full baseboard assembly with VRMs and connectors). We treat the Dwarkesh figure as an unverified lower bound that likely understates bare-board mass.
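The inconsistency follows from simple scaling of the ~32 kg, 8-GPU HGX B200 baseboard figure:

```python
# Scale the 8-GPU HGX B200 baseboard (~32 kg) up to the NVL72's 72 GPUs.
hgx_board_kg, gpus_per_board, nvl72_gpus = 32, 8, 72
scaled_kg = hgx_board_kg * nvl72_gpus / gpus_per_board
# 288 kg for equivalent baseboards vs the unverified ~100 kg Dwarkesh figure
print(scaled_kg, round(scaled_kg / 100, 1))
```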
The GB200 NVL72 compute rack at ~1,360 kg / ~120 kW = ~11.3 kg/kW_IT nvl72-rack-physical-specs.1 includes liquid cooling hardware, chassis, and power distribution. For space, removing liquid cooling and heavy chassis (~30-40%) but adding structural reinforcement gives ~7-9 kg/kW_IT, consistent with our conservative case.
The VR NVL72 at ~8.3-10.1 kg/kW_IT vera-rubin-nvl72-nvidia.2 (VR NVL72 ~1,815 kg rack at 180-220 kW TDP) is the full integrated rack with liquid cooling and heavy power shelves. Stripping cooling and AC-DC conversion should reduce this by 40-60%, giving 3.3-6.1 kg/kW_IT — the lower end of this range requires aggressive mass savings consistent with our optimistic case.
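The stripping estimate is direct arithmetic; the 40-60% reduction is this page's assumption, not a measured figure:

```python
# VR NVL72 stripping sketch: remove 40-60% of the integrated rack's
# specific mass (liquid cooling + AC-DC power shelves), an assumed range.
vr_kg_per_kw = (8.3, 10.1)     # full terrestrial rack at 180-220 kW TDP
keep = (0.4, 0.6)              # fraction retained after stripping (assumed)
low, high = vr_kg_per_kw[0] * keep[0], vr_kg_per_kw[1] * keep[1]
print(round(low, 1), round(high, 1))  # stripped range in kg/kW_IT
```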
McCalip's 33 kg/kW mccalip-space-dc.1 (the ~33% compute fraction of a ~100 kg/kW total satellite mass budget) is much higher because it includes allocated structural and integration mass at the satellite level. Our figure covers the compute subsystem hardware itself.
Key uncertainties
The "100 kg stripped NVL72" figure from Dwarkesh is unverified and attributed to unnamed sources. If the actual bare-board mass is 150-200 kg or more (the HGX B200 cross-check implies ~288 kg for equivalent baseboards), the optimistic end shifts upward.
NVLink switch mass for space is highly uncertain. The terrestrial switch rack (800 kg) is dominated by chassis and connectors. For space, a redesigned switch fabric could be much lighter, but the dense copper backplane cabling is inherently heavy and may not be easily eliminated.
Space qualification adds mass. Conformal coatings, potting compounds, radiation-tolerant component screening, and vibration damping all add mass. A 15-25% mass penalty vs. terrestrial stripped hardware is typical for space qualification.
Next-generation hardware (Vera Rubin, Rubin Ultra) will improve power density. VR NVL72 delivers 10x more performance per watt than GB200. If power efficiency improves faster than mass, the kg/kW_IT metric will decrease with each generation.
NVIDIA Space-1 module specifications are unknown. If NVIDIA publishes mass and power data for this module, it would provide the first direct measurement of space-adapted AI compute density. The 25x AI-compute improvement over H100 in a space-optimized package could significantly shift the estimate.
Summary table
| Configuration | Mass (kg) | Power (kW_IT) | kg/kW_IT | Notes |
|---|---|---|---|---|
| GB200 NVL72 full terrestrial | 3,000 | ~120 | ~25 | Includes CDU, power dist, everything |
| GB200 compute rack only (terrestrial) | 1,500 | ~110 | ~14 | Excludes switch rack, CDU, PDU |
| VR NVL72 rack (terrestrial) | ~1,815 | 180-220 | 8-10 | Next-gen, higher density |
| DGX B200 node (terrestrial) | ~142 | ~14.3 | ~10 | 8 B200 GPUs, 10U server |
| DGX H100 node (terrestrial) | 130 | 10 | 13 | 8-GPU server with PSUs and fans |
| HGX B200 baseboard only | ~32 | ~8 | ~4.0 | Bare GPU baseboard: GPUs, HBM, VRMs, PCB only |
| GB200 NVL72 bare boards (Dwarkesh) | ~100 | ~132 | ~0.76 | Unverified, likely understated — inconsistent with HGX B200 scaling (~288 kg for 72 GPUs) |
| Space-adapted (optimistic) | ~520 | 130 | 4.0 | HGX B200 baseboard floor + GaN DC-DC |
| Space-adapted (central) | ~780 | 130 | 6.0 | Upper half of derived range |
| Space-adapted (conservative) | ~1,170 | 130 | 9.0 | First-gen with margins |
| Starcloud-1 full satellite | 60 | 0.7 | 86 | Entire satellite for 1 GPU |
Evidence
Terrestrial baseline: full rack mass
The GB200 NVL72 system arrives in four separate components: compute rack weighing 1,500 kg, NVLink Switch rack at 800 kg, CDU (coolant distribution unit) at 400 kg, and power distribution unit at 300 kg. Total ~3,000 kg. The system consumes ~120 kW. This gives a full terrestrial rack mass of ~25 kg/kW_IT.
A separate source states the GB200 NVL72 "weighs 3,000 kilograms and requires 2.4 megawatts of cooling capacity" for 120 kW IT load. The 3,000 kg figure is consistent across sources.
The Vera Rubin NVL72 rack weighs "roughly 4,000 lbs" (~1,815 kg) -- this appears to be for the compute rack unit alone (not including separate switch rack, CDU, or power distribution), housing 72 Rubin GPUs and 36 Vera CPUs across 18 compute trays plus 9 NVLink switch trays. VR NVL72 system TDP is 180-220 kW.
The GB200 NVL72 compute rack weighs ~1,360 kg and draws ~120 kW (72 Blackwell GPUs, 36 Grace CPUs). This is the compute rack alone, not the full system which includes separate switch rack, CDU, and power distribution. At ~120 kW, this gives ~11.3 kg/kW_IT for the terrestrial compute rack with liquid cooling and chassis.
Stripped compute-only mass
Dwarkesh Patel reports a stripped GB200 NVL72 at ~100 kg. The source's headline figure of ~1,452 W/kg includes a 10% overhead for intersatellite laser communications; the bare compute-only figure is ~1,320 W/kg (~0.76 kg/kW). This is the most optimistic figure, reflecting only the GPU/CPU modules and NVLink boards without any chassis, power conversion, cooling, or structural support. This figure does not include power distribution, NVLink switch racks, or communications overhead.
Dwarkesh adds 10% overhead for intersatellite lasers and then assumes 25% of total satellite mass must be chassis, arriving at 85 W/kg (~11.8 kg/kW) for the whole integrated satellite including solar arrays, radiators, chassis, and communications. This is a complete satellite figure, not compute-hardware-only.
NVLink switch mass
The NVLink Switch rack for GB200 NVL72 weighs 800 kg for 72 GPUs drawing ~120 kW. This is ~6.7 kg/kW_IT for the switch fabric alone in terrestrial packaging. The switches themselves (ASICs + PCBs) are a small fraction of this; most mass is chassis, connectors, copper backplane cables, and power delivery. A stripped switch assembly for space might weigh 20-30% of the terrestrial version.
The GB200 NVL72 has 9 NVLink switch trays, each containing 2 NVLink Switch ASICs. The VR NVL72 maintains the same NVLink switch tray count with NVLink 6 ASICs. Switch tray PCBs are upgraded to 32 layers with high-end CCL material, suggesting each tray is a dense, heavy PCB assembly.
Power conversion and distribution mass
GB200 NVL72 power distribution unit weighs 300 kg and power conversion is done in four 30 kW power shelves converting 480V 3-phase AC to 54V DC at 97% efficiency. This is ~2.3-2.5 kg/kW for terrestrial AC-DC conversion -- not needed in space (where power arrives as DC from solar arrays).
VR NVL72 power delivery uses four 110 kW power shelves (each 3U, containing six 18.3 kW PSU modules) stepping down 415-480 VAC to 50 VDC. At the compute tray level, 50 VDC enters via busbar, then IBC modules on the Strata board step down to 12 VDC, then VRMs step down to ~1 VDC for GPUs. This multi-stage architecture would be partially retained in space (the DC-DC stages from bus voltage to point-of-load are inherent to the boards and not removable).
For space applications, a NASA study describes a 1 MW, 100 kV space-based DC-DC converter with estimated system mass of 83.8 kg, giving 11.9 kW/kg (or ~0.084 kg/kW). This is an advanced high-power design, not representative of current commercial products.
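The gap between the NASA design point and this page's near-term assumption is worth making explicit. A sketch, where the 0.3-0.6 kg/kW figure is this page's estimate rather than a sourced spec:

```python
# Specific-mass check for the space DC-DC stage. The NASA 1 MW study's
# 83.8 kg converter is far lighter per kW than this page's near-term
# commercial assumption of 0.3-0.6 kg/kW.
nasa_kg_per_kw = 83.8 / 1000.0        # ~0.084 kg/kW (advanced design)
assumed_kg_per_kw = (0.3, 0.6)        # this page's near-term estimate
rack_kw = 120                         # GB200 NVL72 IT load
mass_range_kg = tuple(round(r * rack_kw) for r in assumed_kg_per_kw)
print(nasa_kg_per_kw, mass_range_kg)  # converter mass consistent with ~35-70 kg
```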
The MDPI review of satellite DC-DC converters reports that satellite power systems constitute ~33% of total dry mass. The paper surveys space-grade DC-DC converter topologies and radiation-hardened designs but does not provide specific kg/kW figures for GaN/SiC converters at the power levels relevant to orbital compute.
[analysis] The MDPI paper does not provide specific PCDU mass examples at the power levels relevant to orbital compute (100+ kW). Scaling from traditional satellite PCDUs (typically <10 kW) to 100+ kW systems would likely improve specific mass significantly, but no flight-heritage data exists at this scale. (Editorial commentary on source gaps, not sourced evidence.)
Space compute module data
NVIDIA's Space-1 Vera Rubin Module is "purpose-built to perform in the harsh, low-SWaP environment of space" and delivers "up to 25x more AI-compute" than an H100. It is not yet commercially available. No specific mass, weight, or power figures have been published. Jensen Huang acknowledged the cooling challenge: "in space there's no conduction, there's no convection, there's just radiation."
Starcloud-1 satellite weighs 60 kg total and carries one NVIDIA H100 GPU consuming ~0.7 kW. The total satellite mass is 60 kg for 0.7 kW_IT, giving ~86 kg/kW_IT for the entire satellite (including bus, solar, comms, structure -- not just compute). The compute hardware itself is a small fraction.
Casey Handmer estimates a Starlink-derived satellite could produce ~130 kW of electrical power and host ~200 H100-equivalent GPUs on the main bus, each consuming ~700 W. He suggests GPUs could be installed "directly to the solar module" in a distributed architecture. At 200 GPUs x 700 W, this implies ~140 kW_IT -- slightly exceeding his ~130 kW solar budget, suggesting either fewer GPUs or lower per-GPU power in practice.
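A quick arithmetic check on Handmer's figures, using only the numbers quoted above:

```python
# Consistency check: 200 H100-class GPUs at ~700 W each vs ~130 kW solar.
gpus, w_per_gpu, solar_kw = 200, 700, 130
it_kw = gpus * w_per_gpu / 1000               # implied IT load in kW
gpus_that_fit = solar_kw * 1000 // w_per_gpu  # GPU count the budget supports
print(it_kw, it_kw > solar_kw, gpus_that_fit)
```

The implied 140 kW IT load exceeds the 130 kW solar budget, which would support about 185 GPUs at 700 W.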
Satellite system mass budgets
Andrew McCalip's orbital datacenter model assumes mass allocation of roughly 33% compute, 33% power systems, and 33% thermal management, with total system mass of ~100,000 kg/MW (= 100 kg/kW) for deployed infrastructure including all subsystems. The compute-only portion at 33% would be ~33 kg/kW_IT total satellite mass attributed to compute -- but this includes structural allocation and is a full-system budget, not bare hardware.
Elon Musk on designing chips for space: "roughly if you increase the operating temperature by 20% in degrees Kelvin, you can cut your radiator mass in half. So running at a higher temperature is helpful... I just designed it to run hot and I think you pretty much do it the same way that you do things on earth, apart from make it run hotter. The solar array is most of the weight on the satellite". This suggests compute hardware is a minority of total satellite mass.
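Musk's rule of thumb follows from the Stefan-Boltzmann law: radiated power scales as T^4, so for a fixed heat load the required radiator area (and, to first order, mass) scales as 1/T^4. A quick check:

```python
# Stefan-Boltzmann check: raising radiator temperature 20% (in kelvin)
# multiplies radiated flux per unit area by 1.2**4, so required
# radiator area/mass drops by roughly half.
flux_ratio = 1.2 ** 4
mass_ratio = 1 / flux_ratio
print(flux_ratio, mass_ratio)  # roughly 2x flux, so ~half the radiator mass
```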
B200-class hardware mass
The DGX B200 is a 10U server weighing ~142 kg, with 8 Blackwell B200 GPUs consuming up to ~14.3 kW at full load. This gives ~10 kg/kW_IT for a complete terrestrial 8-GPU server, consistent with the DGX H100's 13 kg/kW_IT. — nvidia-dgx-b200-specs (RunPod DGX B200 guide)
The HGX B200 baseboard — the GPU board assembly containing 8 B200 GPUs, HBM3e stacks, VRMs, NVLink interconnects, and PCB — weighs approximately 32 kg and draws up to ~8 kW. This gives ~4 kg/kW_IT for the bare GPU baseboard before DC-DC conversion, server chassis, cabling, or any space-specific adaptations. — nvidia-hgx-b200-pcf
Emerging rack mass trends
Vera Rubin NVL72 at ~1,815 kg (4,000 lbs) rack weight for 180-220 kW TDP represents ~8.3-10.1 kg/kW_IT for the integrated rack (compute trays + NVLink switch trays + power shelves + chassis + liquid cooling manifolds + busbars). This is the most mass-efficient rack-scale AI system announced to date, owing to cableless design and modular architecture.
VR NVL72 compute tray is 100% liquid cooled with internal manifolds and cold plate modules on every component. Fans are removed entirely. The liquid cooling hardware within the rack (manifolds, cold plates, QDs) adds mass that would be replaced by different thermal coupling in space.