Compute Hardware Mass per kW_IT
Answer
Central estimate: 5.0 kg/kW_IT for AI compute hardware adapted for orbital deployment, including GPUs, NVLink switches, power conversion electronics, structural chassis, and cabling -- but excluding cooling systems (radiators, heat pipes) and solar arrays, which are addressed by separate parameters.
- Optimistic: 2.5 kg/kW_IT -- Aggressive stripping of a next-generation system (Vera Rubin era), custom space-optimized packaging, high-power-density GaN/SiC power conversion, and minimal structural overhead. Plausible only with purpose-built space compute modules at scale.
- Central: 5.0 kg/kW_IT -- Stripped Blackwell/Rubin-class hardware with NVLink switches, space-grade DC-DC conversion, lightweight structural enclosure, and cabling harness. Represents a realistic near-term space-adapted system.
- Conservative: 8.0 kg/kW_IT -- Includes heavier space-qualified power electronics, more conservative structural margins for launch vibration and thermal cycling, and partial redundancy in power distribution. Closer to what first-generation orbital systems will actually achieve.
Evidence
Terrestrial baseline: full rack mass
[evidence] The GB200 NVL72 system ships as four separate units: a compute rack (1,500 kg), an NVLink Switch rack (800 kg), a coolant distribution unit (CDU, 400 kg), and a power distribution unit (300 kg) -- ~3,000 kg in total (introl-nvl72-deployment, Introl deployment guide). The system consumes 120-132 kW, giving a full terrestrial rack mass of ~23-25 kg/kW_IT.
[evidence] A separate passage in the same source states the GB200 NVL72 "weighs 3,000 kilograms and requires 2.4 megawatts of cooling capacity" for a 120 kW IT load (introl-nvl72-deployment, alternate figure in same source). The cooling-capacity figure is implausible for a 120 kW rack and is likely a source error, but the 3,000 kg mass is consistent across sources.
[evidence] The Vera Rubin NVL72 rack weighs "roughly 4,000 lbs" (~1,815 kg) -- this appears to be for the compute rack unit alone (not including separate switch rack, CDU, or power distribution), housing 72 Rubin GPUs and 36 Vera CPUs across 18 compute trays plus 9 NVLink switch trays (vera-rubin-nvl72-nvidia). VR NVL72 system TDP is 180-220 kW.
[evidence] The DGX H100 system (8 GPUs, ~10 kW) weighs 130.45 kg total, including chassis, fans, PSUs, NVSwitch, CPUs, and storage (nvidia-gb200-specs / NVIDIA DGX H100 datasheet). This gives ~13 kg/kW_IT for a non-liquid-cooled server node.
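The terrestrial baselines above reduce to simple ratios. A short check, using the masses and quoted IT power ranges from the evidence items (the system names are labels for this sketch, not official designations):

```python
# Sanity check of the terrestrial kg/kW_IT baselines cited above.
# Masses (kg) and IT power ranges (kW) are the figures quoted in the evidence.
systems = {
    "GB200 NVL72 (all four units)": (3000.0, 120.0, 132.0),
    "VR NVL72 (compute rack)": (1815.0, 180.0, 220.0),
    "DGX H100 (8-GPU node)": (130.45, 10.0, 10.0),
}

specific_mass = {}
for name, (mass_kg, p_lo_kw, p_hi_kw) in systems.items():
    # The higher IT power gives the lower kg/kW bound, and vice versa.
    specific_mass[name] = (mass_kg / p_hi_kw, mass_kg / p_lo_kw)
    lo, hi = specific_mass[name]
    print(f"{name}: {lo:.1f}-{hi:.1f} kg/kW_IT")
```

This reproduces the ~23-25, ~8.3-10.1, and ~13 kg/kW_IT figures used throughout.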
Stripped compute-only mass
[evidence] Dwarkesh Patel reports: "a stripped down GB200 NVL72 with no cooling equipment is around 100 kg. They draw 132 kW of power" -- i.e. ~1,320 W/kg, or equivalently ~0.76 kg/kW for the bare compute boards alone (dwarkesh-space-gpus). This is the most optimistic figure, reflecting only the GPU/CPU modules and NVLink boards without any chassis, power conversion, cooling, or structural support; it also excludes the power distribution and NVLink switch racks.
[opinion] Dwarkesh adds 10% overhead for intersatellite lasers and then assumes 25% of total satellite mass must be chassis, arriving at 85 W/kg (~11.8 kg/kW) for the whole integrated satellite including solar arrays, radiators, chassis, and communications (dwarkesh-space-gpus). This is a complete satellite figure, not compute-hardware-only.
NVLink switch mass
[evidence] The NVLink Switch rack for GB200 NVL72 weighs 800 kg for 72 GPUs drawing 120-132 kW (introl-nvl72-deployment). This is ~6.1-6.7 kg/kW_IT for the switch fabric alone in terrestrial packaging. The switches themselves (ASICs + PCBs) are a small fraction of this; most mass is chassis, connectors, copper backplane cables, and power delivery. A stripped switch assembly for space might weigh 20-30% of the terrestrial version.
[evidence] The GB200 NVL72 has 9 NVLink switch trays, each containing 2 NVLink Switch ASICs. The VR NVL72 maintains the same NVLink switch tray count with NVLink 6 ASICs (semianalysis-vera-rubin). Switch tray PCBs are upgraded to 32 layers with high-end CCL material, suggesting each tray is a dense, heavy PCB assembly.
Power conversion and distribution mass
[evidence] GB200 NVL72 power distribution unit weighs 300 kg and power conversion is done in four 30 kW power shelves converting 480V 3-phase AC to 54V DC at 97% efficiency (introl-nvl72-deployment). This is ~2.3-2.5 kg/kW for terrestrial AC-DC conversion -- not needed in space (where power arrives as DC from solar arrays).
[evidence] VR NVL72 power delivery uses four 110 kW power shelves (each 3U, containing six 18.3 kW PSU modules) stepping down 415-480 VAC to 50 VDC (semianalysis-vera-rubin). At the compute tray level, 50 VDC enters via busbar, then IBC modules on the Strata board step down to 12 VDC, then VRMs step down to ~1 VDC for GPUs. This multi-stage architecture would be partially retained in space (the DC-DC stages from bus voltage to point-of-load are inherent to the boards and not removable).
[evidence] For space applications, a NASA study describes a 1 MW, 100 kV space-based DC-DC converter with estimated system mass of 83.8 kg, giving 11.9 kW/kg (or ~0.084 kg/kW) (nasa-high-power-dc-dc). This is an advanced high-power design, not representative of current commercial products.
[evidence] The MDPI review of satellite DC-DC converters reports GaN/SiC converters achieving 0.2-0.5 kg/kW at moderate power levels, with satellite power systems comprising ~25% of total dry mass. Power harness and cabling adds 10-25% on top of the power electronics mass (mdpi-satellite-dc-dc).
[evidence] A PCDU (Power Conditioning and Distribution Unit) example: the COLOSSUS model weighs 7.6 kg for 1.6 kW delivery capacity = ~4.75 kg/kW. This is a low-power satellite PCDU, not representative of a 100+ kW system where specific mass improves significantly with scale.
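The power-conversion data points above span roughly two orders of magnitude in specific mass. Computing each from the cited mass and power figures makes the spread explicit (labels are shorthand for this sketch):

```python
# Specific mass of each power-conversion example above, in kg per kW.
converters = {
    "NASA 1 MW / 100 kV study": 83.8 / 1000.0,
    "GaN/SiC satellite converters (low)": 0.2,
    "GaN/SiC satellite converters (high)": 0.5,
    "COLOSSUS PCDU (1.6 kW class)": 7.6 / 1.6,
    "GB200 terrestrial AC-DC (not needed in space)": 300.0 / 132.0,
}
for name, kg_per_kw in converters.items():
    print(f"{name}: {kg_per_kw:.3f} kg/kW")
```

The central estimate below uses the GaN/SiC band (0.2-0.5 kg/kW), which sits between the advanced NASA design and the low-power PCDU.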
Space compute module data
[evidence] NVIDIA's Space-1 Vera Rubin Module is "purpose-built to perform in the harsh, low-SWaP environment of space" and delivers "up to 25x more AI-compute" than an H100. It is not yet commercially available. No specific mass, weight, or power figures have been published (payload-space1-geeksquad, nvidia-space1-module). Jensen Huang acknowledged the cooling challenge: "in space there's no conduction, there's no convection, there's just radiation."
[evidence] Starcloud-1 satellite weighs 60 kg total and carries one NVIDIA H100 GPU consuming ~0.7 kW (starcloud-nvidia-blog). The total satellite mass is 60 kg for 0.7 kW_IT, giving ~86 kg/kW_IT for the entire satellite (including bus, solar, comms, structure -- not just compute). The compute hardware itself is a small fraction.
[opinion] Casey Handmer estimates a Starlink-derived satellite could produce ~130 kW of electrical power and host ~200 H100-equivalent GPUs on the main bus, each consuming ~700 W (handmer-space-inference). He suggests GPUs could be installed "directly to the solar module" in a distributed architecture. At 200 x 0.7 kW this implies ~140 kW_IT, roughly matching the ~130 kW electrical supply.
Satellite system mass budgets
[opinion] Andrew McCalip's orbital datacenter model assumes mass allocation of roughly 33% compute, 33% power systems, and 33% thermal management, with total system mass of ~100,000 kg/MW (= 100 kg/kW) for deployed infrastructure including all subsystems (mccalip-space-datacenters). The compute-only portion at 33% would be ~33 kg/kW_IT total satellite mass attributed to compute -- but this includes structural allocation and is a full-system budget, not bare hardware.
[opinion] Elon Musk on designing chips for space: "roughly if you increase the operating temperature by 20% in degrees Kelvin, you can cut your radiator mass in half. So running at a higher temperature is helpful... I just designed it to run hot and I think you pretty much do it the same way that you do things on earth, apart from make it run hotter. The solar array is most of the weight on the satellite" (elon-musk-dwarkesh-interview). This suggests compute hardware is a minority of total satellite mass.
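Musk's 20%-hotter rule of thumb follows from the Stefan-Boltzmann law: radiated power per unit area scales as T^4 (ignoring the small environmental sink term), so for a fixed heat load the required radiator area, and roughly its mass, scales as 1/T^4. A minimal check:

```python
# Stefan-Boltzmann scaling: radiated power per unit area goes as T^4,
# so a 20% rise in absolute temperature multiplies it by 1.2^4 ~ 2.07,
# roughly halving the radiator area (and mass) needed for a fixed load.
gain = 1.2 ** 4
print(f"power per unit area at +20% T: {gain:.2f}x")
print(f"radiator mass fraction retained: {1 / gain:.2f}")
```

This is why hot-running chip design trades compute-hardware constraints against radiator mass, a separate parameter in this model.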
Emerging rack mass trends
[evidence] Vera Rubin NVL72 at ~1,815 kg (4,000 lbs) rack weight for 180-220 kW TDP represents ~8.3-10.1 kg/kW_IT for the integrated rack (compute trays + NVLink switch trays + power shelves + chassis + liquid cooling manifolds + busbars). This is the most mass-efficient rack-scale AI system announced to date, owing to cableless design and modular architecture (vera-rubin-nvl72-nvidia).
[evidence] VR NVL72 compute tray is 100% liquid cooled with internal manifolds and cold plate modules on every component. Fans are removed entirely (semianalysis-vera-rubin). The liquid cooling hardware within the rack (manifolds, cold plates, QDs) adds mass that would be replaced by different thermal coupling in space.
Analysis
Decomposing the terrestrial rack for space
The GB200 NVL72 terrestrial system at ~3,000 kg / 120-132 kW gives ~23-25 kg/kW_IT. To estimate space-adapted compute hardware mass, we must identify what can be removed and what must be added:
Removable (terrestrial-only) components:
- CDU (coolant distribution unit): 400 kg -- entirely removed; space cooling uses radiators (separate parameter)
- Power distribution unit with AC-DC power shelves: 300 kg -- removed; solar arrays provide DC directly
- Liquid cooling loops within rack (manifolds, cold plates, fluid): estimated ~100-200 kg -- replaced by heat pipes or cold plates coupled to external radiators
- Heavy steel rack frame: estimated ~100-150 kg -- replaced by lightweight aluminum/composite space structure
- Fans (in NVLink switch rack): minor, ~20-30 kg
Total removable: ~920-1,080 kg
Retained (compute-essential) components:
- Bare compute boards (GPUs, CPUs, HBM, PCBs, VRMs): ~100 kg per Dwarkesh estimate (evidence #5)
- NVLink switch board assemblies (ASICs + PCBs + connectors): estimated ~100-200 kg (stripped from 800 kg terrestrial switch rack)
- DC-DC conversion (bus voltage to point-of-load): inherent to boards, ~0 additional standalone mass
- Copper backplane cables (NVLink interconnects between compute trays and switch trays): estimated ~50-100 kg
- Board-level connectors and passive components: included in board mass above
Retained subtotal: ~250-400 kg for 120-132 kW = ~1.9-3.3 kg/kW_IT
Added for space:
- DC-DC converter from solar bus voltage (~100-150 V) to 50 VDC rack bus: at 0.2-0.5 kg/kW with GaN/SiC (evidence #12), ~25-65 kg for 130 kW
- Power harness and cabling (10-25% of power electronics mass): ~5-15 kg
- Structural chassis for launch vibration (space-qualified Al/composite): ~50-100 kg (lighter than terrestrial steel rack, but must survive launch loads and provide thermal mounting)
- Radiation shielding (spot shielding for sensitive components): ~10-30 kg
- Spacecraft interface (mounting, thermal coupling to radiator loop): ~20-40 kg
Added subtotal: ~110-250 kg
Total space-adapted compute hardware
Combining retained + added: ~360-650 kg for ~130 kW_IT
This gives 2.8-5.0 kg/kW_IT for space-adapted compute hardware, including GPUs, NVLink switches, DC-DC conversion, structural chassis, and cabling -- but excluding radiators, solar arrays, batteries, and communications.
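The removable / retained / added budget above can be tallied directly. All (low, high) kg ranges are the estimates from this analysis, not measured values:

```python
# Mass budget for a space-adapted NVL72-class system (~120-132 kW_IT),
# reproducing the removable / retained / added decomposition above.
# Each entry is a (low, high) estimate in kg.
removable = {  # terrestrial-only hardware, deleted for space
    "CDU": (400, 400),
    "PDU + AC-DC power shelves": (300, 300),
    "rack liquid cooling loops": (100, 200),
    "steel rack frame": (100, 150),
    "fans": (20, 30),
}
retained = {   # compute-essential hardware
    "bare compute boards": (100, 100),
    "NVLink switch board assemblies": (100, 200),
    "copper backplane cables": (50, 100),
}
added = {      # space-specific hardware
    "bus DC-DC conversion (GaN/SiC)": (25, 65),
    "power harness and cabling": (5, 15),
    "launch-rated structure": (50, 100),
    "spot radiation shielding": (10, 30),
    "spacecraft interface": (20, 40),
}

def total(budget):
    return (sum(v[0] for v in budget.values()),
            sum(v[1] for v in budget.values()))

rem_lo, rem_hi = total(removable)              # expected ~920-1,080 kg
ret_lo, ret_hi = total(retained)               # expected ~250-400 kg
add_lo, add_hi = total(added)                  # expected ~110-250 kg
tot_lo, tot_hi = ret_lo + add_lo, ret_hi + add_hi
print(f"space-adapted total: {tot_lo}-{tot_hi} kg")
print(f"specific mass: {tot_lo / 130:.1f}-{tot_hi / 130:.1f} kg/kW_IT")
```

Running this confirms the ~360-650 kg total and the 2.8-5.0 kg/kW_IT range at a nominal 130 kW_IT.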
Cross-checks
The Dwarkesh "100 kg bare boards" figure (evidence #5) at ~0.76 kg/kW_IT is a floor -- it excludes NVLink switches, power conversion, chassis, and all structural mass. Our optimistic case (2.5 kg/kW_IT) is ~3.3x the bare-board figure, a reasonable overhead for space packaging.
The DGX H100 at 130 kg / 10 kW = 13 kg/kW_IT includes PSUs, fans, chassis, SSDs, and NVSwitch (evidence #4). For space, we remove PSUs (~30% of mass) and fans (~5%), but add structural reinforcement. A proportional estimate gives ~8-9 kg/kW_IT, consistent with our conservative case.
McCalip's 33 kg/kW (evidence #17) for the compute fraction of total satellite mass is much higher because it includes allocated structural and integration mass at the satellite level. Our figure is for the compute subsystem hardware itself.
The VR NVL72 at ~8.3-10.1 kg/kW_IT (evidence #19) is the full integrated rack with liquid cooling and heavy power shelves. Stripping cooling and AC-DC conversion should reduce this by 40-60%, giving 3.3-6.1 kg/kW_IT -- consistent with our range.
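Each cross-check above reduces to a one-line calculation. The strip fractions are the assumptions stated in the text, not measured values:

```python
# Numeric form of the cross-checks above.
bare_floor = 100.0 / 132.0                  # Dwarkesh bare boards, kg/kW_IT
overhead = 2.5 / bare_floor                 # optimistic case vs. bare boards
print(f"bare-board floor: {bare_floor:.2f} kg/kW_IT ({overhead:.1f}x to optimistic)")

dgx = 130.0 / 10.0                          # DGX H100 node, kg/kW_IT
dgx_space = dgx * (1.0 - 0.30 - 0.05)       # strip PSUs (~30%) and fans (~5%)
print(f"DGX-derived space estimate: {dgx_space:.1f} kg/kW_IT")

vr = (1815.0 / 220.0, 1815.0 / 180.0)       # VR NVL72 rack, kg/kW_IT
vr_stripped = (vr[0] * 0.4, vr[1] * 0.6)    # keep 40-60% after stripping
print(f"VR-derived space estimate: {vr_stripped[0]:.1f}-{vr_stripped[1]:.1f} kg/kW_IT")
```

All three independent anchors land within or just above the 2.5-8.0 kg/kW_IT range proposed in the Answer.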
Key uncertainties
The "100 kg stripped NVL72" figure from Dwarkesh is unverified. It is attributed to unnamed sources. If the actual bare-board mass is 150-200 kg (which would be more consistent with the DGX H100 scaling), the optimistic end shifts upward.
NVLink switch mass for space is highly uncertain. The terrestrial switch rack (800 kg) is dominated by chassis and connectors. For space, a redesigned switch fabric could be much lighter, but the dense copper backplane cabling is inherently heavy and may not be easily eliminated.
Space qualification adds mass. Conformal coatings, potting compounds, radiation-tolerant component screening, and vibration damping all add mass. A 15-25% mass penalty vs. terrestrial stripped hardware is typical for space qualification.
Next-generation hardware (Vera Rubin, Rubin Ultra) will improve power density. VR NVL72 delivers 10x more performance per watt than GB200 (vera-rubin-nvl72-nvidia). If power efficiency improves faster than mass, the kg/kW_IT metric will decrease with each generation.
NVIDIA Space-1 module specifications are unknown. If NVIDIA publishes mass and power data for this module, it would provide the first direct measurement of space-adapted AI compute density. The 25x AI-compute improvement over H100 in a space-optimized package could significantly shift the estimate.
Summary table
| Configuration | Mass (kg) | Power (kW_IT) | kg/kW_IT | Notes |
|---|---|---|---|---|
| GB200 NVL72 full terrestrial | 3,000 | 120-132 | 23-25 | Includes CDU, power dist, everything |
| GB200 compute rack only (terrestrial) | 1,500 | ~110 | ~14 | Excludes switch rack, CDU, PDU |
| VR NVL72 rack (terrestrial) | ~1,815 | 180-220 | 8-10 | Next-gen, higher density |
| DGX H100 node (terrestrial) | 130 | 10 | 13 | 8-GPU server with PSUs and fans |
| GB200 NVL72 bare boards only | ~100 | 132 | ~0.76 | Unverified, no switches/chassis/power |
| Space-adapted (optimistic) | ~325 | 130 | 2.5 | Aggressive stripping + GaN DC-DC |
| Space-adapted (central) | ~650 | 130 | 5.0 | Realistic near-term |
| Space-adapted (conservative) | ~1,040 | 130 | 8.0 | First-gen with margins |
| Starcloud-1 full satellite | 60 | 0.7 | 86 | Entire satellite for 1 GPU |