💧 Data Centres — Liquid Cooling

Liquid Cooling Infrastructure:
CDU Sizing, Manifold Design
and Facility Modifications

Selecting a liquid cooling technology is the first decision. Designing the facility infrastructure that makes it work reliably at scale is the real engineering challenge — CDU hydraulics, row manifold sizing, coolant chemistry, pipe material selection, integration with existing chilled water plants, and the facility modifications that no one budgets for until construction has started.

📅 Jun 2025 ⏱ 16 min read ✍️ KVRM Engineering Team 📐 ASHRAE TC 9.9 / ASME B31.9

Liquid cooling in data centres has moved rapidly from hyperscaler experiment to mainstream deployment. Direct liquid cooling (DLC) cold plates are now standard specification on GPU servers from all major OEMs. Rear-door heat exchangers are being retrofitted into existing high-density rows. Immersion tanks are being purpose-built into new AI data halls. What has not kept pace is the facility engineering understanding of the secondary infrastructure — the coolant distribution units, manifolds, pipework, chemical treatment systems, and building modifications that sit between the IT load and the heat rejection plant.

This article addresses that gap. It is written for MEP engineers, data centre facility managers, and project managers who are past the technology selection stage and need to design, specify, and commission the liquid cooling infrastructure correctly the first time.

The Liquid Cooling Infrastructure Hierarchy

Understanding the full infrastructure stack prevents the most common design error — specifying the CDU without modelling the entire hydraulic circuit from chiller to cold plate. Every element in the chain is hydraulically coupled; a sizing error or pressure drop miscalculation at any level propagates through the entire system.

Level 1 — Heat Rejection Plant

Chiller, cooling tower, dry cooler, or adiabatic cooler. Sets the supply temperature to the CDU secondary inlet. Determines whether free cooling is achievable and at what ambient conditions. The most consequential facility design decision for long-term operating cost.

Level 2 — Coolant Distribution Unit (CDU)

The hydraulic and thermal interface between the facility water loop and the IT coolant loop. Contains pump sets, plate heat exchanger, expansion vessel, chemical dosing, flow metering, and controls. Located in the data hall or adjacent plant room.

Level 3 — Row Manifold

Distributes coolant from the CDU to individual rack connections along a row. Supply and return headers running above or below the rack row. Each rack connection has an isolation valve, flow control valve, and quick-connect coupling. The manifold is the most maintenance-critical component in the system.

Level 4 — Rack and Cold Plate

Flexible hoses connect the manifold quick-couplings to the rack manifold. Within the rack, a manifold tree distributes coolant to individual cold plates mounted on CPUs, GPUs, memory, and VRMs. Cold plate thermal resistance determines junction temperature — the most critical parameter for GPU sustained performance.

Every litre per minute of coolant flow, every Pascal of pressure drop, and every degree Celsius of temperature rise must be accounted for from the chiller evaporator to the GPU cold plate and back. A system that is undersized at any level degrades performance at every level above it.
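The coupling across levels can be illustrated as a temperature cascade from the facility water at the CDU secondary inlet up to the GPU die. The sketch below stacks the temperature rises at each level of the hierarchy; all input values are illustrative assumptions, not design data.

```python
# Illustrative temperature cascade from facility water to GPU junction.
# Every numeric input here is an assumption for illustration only.

def junction_temp(t_facility_supply,      # °C, facility water at CDU secondary inlet
                  phx_approach,           # K, CDU plate heat exchanger approach
                  coolant_rise_to_plate,  # K, coolant heating upstream of this cold plate
                  r_th_plate,             # K/W, cold plate + TIM thermal resistance
                  p_chip):                # W, chip heat dissipation
    """Stack up the temperature rises at each level of the hierarchy."""
    t_coolant_at_plate = t_facility_supply + phx_approach + coolant_rise_to_plate
    return t_coolant_at_plate + r_th_plate * p_chip

# Example: 18°C facility water, 2 K PHX approach, 5 K rise along the loop,
# 0.04 K/W cold plate resistance, 700 W GPU.
tj = junction_temp(18.0, 2.0, 5.0, 0.04, 700.0)
print(f"Estimated junction temperature: {tj:.1f} °C")  # 18 + 2 + 5 + 28 = 53.0 °C
```

The point of the exercise: an extra few kPa of pressure drop or an undersized header shows up as extra kelvin at every level above it, ending at the GPU junction.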

CDU Engineering: Hydraulics, Heat Transfer, and Redundancy

The CDU is the heart of the liquid cooling system. It is not a commodity item — every parameter must be specified by the MEP engineer, not left to the CDU vendor to interpret from a vague brief.

Thermal Sizing

// CDU Plate Heat Exchanger Thermal Duty

Q_PHX = m_pri × Cp_pri × ( T_pri_out − T_pri_in )
      = m_sec × Cp_sec × ( T_sec_out − T_sec_in )

// Primary loop (IT coolant side) — typical DLC system
// Supply to cold plates: 20°C  |  Return from cold plates: 40°C  |  ΔT = 20°C

m_pri = Q_IT / ( Cp × ΔT_pri )
       = 500 kW / ( 4.18 kJ/kg·K × 20 K )
       = 5.98 kg/s  →  ~21.5 m³/hr

// Secondary loop (facility water side)
// Supply: 18°C  |  Return: 32°C  |  ΔT = 14°C  (tighter ΔT → higher flow rate)

m_sec = 500 / ( 4.18 × 14 )
       = 8.54 kg/s  →  ~30.7 m³/hr

// Safety factor on CDU duty: 1.15–1.20 (allow for future GPU TDP increase)
Q_CDU_rated = 500 × 1.20 = 600 kW
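Note that the worked figures above use the specific heat of pure water (4.18 kJ/kg·K). An inhibited glycol mix, recommended later in this article, has a lower Cp, so the required flow rises for the same duty. A minimal sketch of the same sizing with the fluid properties as parameters; the glycol property values are typical textbook assumptions, not vendor data.

```python
# CDU loop flow sizing: Q = m·Cp·ΔT rearranged for mass and volume flow.
# Fluid properties are typical assumed values, not vendor data.

def loop_flow(q_kw, cp_kj_per_kg_k, delta_t_k, density_kg_m3=1000.0):
    """Return (mass flow kg/s, volume flow m³/hr) for a given thermal duty."""
    m_dot = q_kw / (cp_kj_per_kg_k * delta_t_k)    # kg/s
    v_dot = m_dot / density_kg_m3 * 3600.0         # m³/hr
    return m_dot, v_dot

# Pure water, as in the worked example above: 500 kW, ΔT = 20 K
m_w, v_w = loop_flow(500.0, 4.18, 20.0)            # ≈ 5.98 kg/s, ≈ 21.5 m³/hr

# 30% propylene glycol (assumed Cp ≈ 3.85 kJ/kg·K, ρ ≈ 1025 kg/m³)
m_g, v_g = loop_flow(500.0, 3.85, 20.0, density_kg_m3=1025.0)

print(f"water : {m_w:.2f} kg/s, {v_w:.1f} m³/hr")
print(f"glycol: {m_g:.2f} kg/s, {v_g:.1f} m³/hr")  # mass flow rises ~9% with glycol
```

If the pump and pipework were sized on water properties but the system is filled with glycol, the extra flow demand and higher viscosity erode the hydraulic margin from both ends.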

Hydraulic Sizing — Primary Loop Pump

// Primary loop pump head — sum of all pressure drops

H_pump = ΔP_PHX + ΔP_manifold + ΔP_hoses + ΔP_cold_plate + ΔP_valves

// Typical DLC system budget at design flow rate
// PHX (plate heat exchanger)         :  30–50 kPa
// Row manifold supply + return        :  15–25 kPa
// Flexible hoses (supply + return)    :  10–20 kPa
// Cold plate assembly                 :  40–80 kPa  ← OEM data sheet value
// Isolation and balancing valves      :  20–30 kPa
// ─────────────────────────────────────────────────
// Total system ΔP (typical range)     : 115–205 kPa  →  ~1.2–2.1 bar

// Add 10–15% margin for fouling and future growth
H_pump_design = 200 kPa × 1.15 = 230 kPa  →  specify 250 kPa pump head

Cold plate ΔP is the dominant term: OEM cold plate assemblies for GPU servers specify pressure drops of 40–100 kPa at rated flow. This single component often exceeds all other hydraulic losses combined. Always obtain the cold plate pressure drop at the design flow rate from the server OEM’s data sheet before sizing CDU pumps — never estimate this value.
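The pump head budget translates directly into pump power via P = Q × ΔP / η. A quick sketch using the worked figures above; the 60% pump efficiency is an assumption for illustration.

```python
# Pump shaft power from flow and head. Pump efficiency is an assumed value.

def pump_power_kw(v_dot_m3_hr, delta_p_kpa, eta=0.6):
    """Pump shaft power (kW) at a given volume flow and differential pressure."""
    q = v_dot_m3_hr / 3600.0             # m³/s
    hydraulic_w = q * delta_p_kpa * 1e3  # W  (Pa · m³/s = W)
    return hydraulic_w / eta / 1e3       # kW, assumed pump η = 60%

# Primary loop from the worked example: 21.5 m³/hr at the 250 kPa specified head
p = pump_power_kw(21.5, 250.0)
print(f"Pump shaft power ≈ {p:.2f} kW")  # ≈ 2.49 kW
```

This is why an inflated cold plate ΔP estimate is not harmless conservatism: every extra 50 kPa of specified head adds roughly 20% to the primary pump energy, continuously, for the life of the system.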

CDU Redundancy Architecture

CDU redundancy must match the data centre tier requirement. Unlike chillers, which can be sized N+1 at the plant level, CDUs serving a specific group of racks must be redundant at the CDU level — a chiller N+1 arrangement does nothing for the IT load if the CDU serving that load zone fails.

| Configuration | Redundancy Level | Failure Behaviour | Maintenance Capability | Typical Use |
|---|---|---|---|---|
| Single CDU per zone | None | Full outage on CDU failure | Hot-swap impossible | Lab / development environments only |
| N+1 CDU per zone | N+1 | N CDUs carry full load | One CDU maintainable live | Tier III production — standard specification |
| 2N CDU (active-standby) | 2N | Standby takes over in <30 s | Either CDU maintainable live | Tier IV / financial / defence |
| Distributed CDU (per-row) | Partial | Failure affects one row only | Row isolated; others unaffected | Large hyperscale halls; phased deployment |

Row Manifold Design

The row manifold is the distribution spine of the liquid cooling system — the component that connects the CDU to every rack in a row. It is also the most frequently underspecified element, with manifold sizing often delegated to a mechanical contractor who is not familiar with the hydraulic interdependencies of a multi-rack liquid cooling system.

Manifold Header Sizing Principle

A row manifold header must be sized so that the pressure drop along the header is small relative to the pressure drop through each branch (cold plate circuit). If header ΔP is comparable to branch ΔP, racks at the far end of the manifold receive significantly less flow than racks near the CDU connection — a classic hydraulic imbalance problem that causes temperature non-uniformity across the row and GPU throttling at the distal end.

// Manifold header sizing — velocity and pressure drop check

// Design rule: header velocity ≤ 1.5 m/s (avoids erosion, limits ΔP)

A_header = Q_total / v_max
          = ( 21.5 / 3600 ) m³/s  /  1.5 m/s
          = 3.98 × 10⁻³ m²  →  DN80 pipe (ID 80.9 mm, A = 5.14 × 10⁻³ m²)

// Actual velocity with DN80: v = 21.5/3600 / 5.14×10⁻³ = 1.16 m/s  ✓

// Pressure drop per metre — Darcy-Weisbach (water, smooth stainless pipe)
ΔP/L = f × ( ρ × v² / 2 ) / D
       // f ≈ 0.018 (turbulent, smooth pipe, Re ~95,000)
       = 0.018 × ( 1000 × 1.16² / 2 ) / 0.081
       = ~149 Pa/m  →  for 20 m header: ΔP_header ≈ 3.0 kPa

// Header ΔP (3.0 kPa) << cold plate ΔP (40–80 kPa) — hydraulic balance achieved ✓
// If header ΔP > 10% of branch ΔP → upsize to next DN or add balancing valves
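The uniform-flow figure above is in fact conservative: flow in the header falls after each rack take-off, so ΔP per metre decreases along the row. The sketch below sums Darcy-Weisbach losses segment by segment; the rack count, spacing, and fixed friction factor are assumptions for illustration.

```python
import math

# Cumulative header pressure drop along a row manifold, accounting for the
# flow decreasing after each rack take-off. Darcy-Weisbach with a fixed
# friction factor; rack count and spacing are assumed for illustration.

def header_dp_kpa(q_total_m3_hr, n_racks, spacing_m, d_inner_m,
                  f=0.018, rho=1000.0):
    """Sum ΔP over each header segment; segment i carries only the flow
    destined for the racks downstream of it."""
    q_branch = q_total_m3_hr / n_racks / 3600.0  # m³/s drawn per rack
    area = math.pi / 4.0 * d_inner_m ** 2
    dp = 0.0
    for i in range(n_racks):
        q = q_branch * (n_racks - i)             # remaining flow in this segment
        v = q / area
        dp += f * (rho * v ** 2 / 2.0) / d_inner_m * spacing_m
    return dp / 1e3                              # kPa

# 21.5 m³/hr total, 10 racks at 2 m spacing, DN80 (ID 80.9 mm)
dp = header_dp_kpa(21.5, 10, 2.0, 0.0809)
print(f"Header ΔP ≈ {dp:.2f} kPa")  # below the 3.0 kPa uniform-flow estimate
```

The same loop structure makes it easy to re-run the check for a longer row or a smaller pipe diameter before committing the manifold specification.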

Branch Connection and Balancing

Each rack branch connection from the manifold header requires three components that are frequently omitted from preliminary designs:

  • 01

    Manual Isolation Valve (ball valve, full-bore)

    Allows individual rack isolation for maintenance or swap without depressurising the entire manifold. Must be full-bore to avoid adding significant additional pressure drop. Locate on both supply and return branches, accessible from the front or side aisle without moving adjacent racks. This is non-negotiable for live operations — without it, any rack-level maintenance requires a full system shutdown.

  • 02

    Automatic Balancing Valve (PICV or manual commissioning valve)

    Pressure-independent control valves (PICVs) automatically maintain design flow through each rack branch regardless of pressure variations elsewhere in the manifold — the preferred solution for large deployments where rack-by-rack commissioning is impractical. Manual commissioning valves are acceptable for small deployments (<10 racks) provided each branch is flow-balanced during commissioning with a calibrated flow meter.

  • 03

    Dry-Break Quick-Connect Coupling

    The connection between the manifold branch and the rack flexible hose. Dry-break (non-spill) couplings are mandatory — standard push-to-connect couplings release coolant on disconnection, contaminating IT equipment. Stäubli, Colder Products (CPC), and Parker Hannifin are the primary suppliers. Specify by name and part series in the procurement documents — generic “quick-connect” specifications result in incompatible couplings from different suppliers arriving on site.

Manifold Material and Joint Selection

| Material | Suitable Coolants | Max Temp (°C) | Joint Type | Relative Cost | Notes |
|---|---|---|---|---|---|
| 316L Stainless Steel | All water-glycol; distilled water | 200+ | Orbital weld / press-fit | Medium-high | Preferred for permanent headers. Orbital welding mandatory for clean systems — no threaded joints in the liquid cooling primary loop. |
| Copper (C106) | Water-glycol; avoid deionised water | 150 | Press-fit / braze | Medium | Deionised water leaches copper ions — use only with inhibited glycol solutions. Well-understood installation trade; widely available fittings. |
| HDPE / PP-R | All water-based; deionised water | 60 (HDPE) / 95 (PP-R) | Butt fusion / electrofusion | Low | Excellent chemical resistance. Electrofusion joints are fully reliable when made correctly — requires certified installer. Avoid for high-temperature secondary loops. |
| PVDF (Kynar) | Deionised water; aggressive chemistries | 120 | Butt weld / socket fusion | High | Specified where ultrapure water or aggressive coolant additives are used. Common in semiconductor and pharma-adjacent data centre applications. |

No threaded joints in the primary IT coolant loop: Threaded pipe joints (NPT, BSP) are not acceptable in any pipework that passes through or above IT equipment. The leak risk from threaded joints — particularly under thermal cycling — is unacceptable in a live data hall. All joints in the primary loop must be orbital welded (stainless), press-fit (copper, stainless), or fused (HDPE/PP-R). This must be stated explicitly in the specification — contractors default to threaded fittings if not otherwise directed.

Coolant Chemistry and Water Treatment

The choice of coolant chemistry is as consequential as the choice of pipe material — they must be matched. A water treatment error in a liquid cooling system does not merely reduce efficiency; it destroys cold plates, CDU heat exchangers, and pump impellers through galvanic corrosion, microbiological fouling, and scale deposition.

Coolant Options

Inhibited Propylene Glycol Solution (25–40%)

The most widely used coolant for liquid cooling primary loops. Provides freeze protection and corrosion inhibition. Compatible with copper, stainless steel, and aluminium cold plates (if the inhibitor package is aluminium-compatible — specify explicitly). Recharge the inhibitor package every 2–3 years. pH target: 7.5–9.0. Conductivity: 500–2,000 µS/cm.

Deionised (DI) Water

Used in systems where electrical isolation between coolant and IT equipment is required (e.g. some direct-to-chip immersion systems). Very low conductivity (<1 µS/cm) prevents electrolytic corrosion — but only if maintained at that purity. DI water in contact with copper or iron rapidly becomes a corrosive ion carrier. Requires HDPE or stainless steel pipework, continuous resistivity monitoring, and ion exchange resin bed maintenance.

Water with Corrosion Inhibitor Package

For systems operating above 10°C minimum ambient with no freeze risk (enclosed data halls with maintained heating). Molybdate, azole, and silicate inhibitor packages provide corrosion protection without glycol’s viscosity penalty — lower pump energy at equivalent flow rates. Requires quarterly water analysis and inhibitor top-up programme.

Manufacturer-Specified Engineered Coolants

Some cold plate OEMs (Asetek, CoolIT, Fujitsu) specify proprietary coolant formulations and will void the warranty if non-specified fluids are used. Always check the server OEM coolant compatibility matrix before specifying any coolant — this review must happen during design, not after procurement. Switching coolants in a live system requires a full flush and recommission.

Water Treatment Parameters and Monitoring

// Liquid Cooling Primary Loop — Water Quality Targets
// (Inhibited glycol system, mixed copper/stainless/aluminium circuit)

pH              : 7.5 – 9.0        // Below 7.5 → copper corrosion accelerates
Conductivity    : 500 – 2,000 µS/cm // Monitor inhibitor concentration
Dissolved O₂    : < 0.1 mg/L       // Closed system — O₂ causes pitting corrosion
Total hardness  : < 150 mg/L CaCO₃ // Scale risk on heat exchanger plates
Iron (Fe)       : < 0.3 mg/L       // Indicator of ferrous corrosion in system
Copper (Cu)     : < 0.1 mg/L       // Indicator of copper leaching
Bacteria (TVC)  : < 1,000 CFU/mL   // Legionella risk if stagnant zones exist
Glycol conc.    : 25 – 40% by vol.  // Verify freeze point for site winter minimum

// Testing frequency: quarterly for established systems
// Testing frequency: monthly for first year of new installation
// After any system breach or top-up: immediate retest
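The targets above lend themselves to an automated check against each quarterly lab report. A minimal sketch; the limits mirror the table, and the sample dictionary keys are illustrative names, not a standard schema.

```python
# Check a coolant lab sample against the primary-loop water quality targets.
# Limits mirror the table above; parameter key names are illustrative.

LIMITS = {
    "ph":           (7.5, 9.0),
    "conductivity": (500.0, 2000.0),  # µS/cm
    "dissolved_o2": (0.0, 0.1),       # mg/L
    "hardness":     (0.0, 150.0),     # mg/L CaCO3
    "iron":         (0.0, 0.3),       # mg/L
    "copper":       (0.0, 0.1),       # mg/L
    "bacteria_tvc": (0.0, 1000.0),    # CFU/mL
    "glycol_pct":   (25.0, 40.0),     # % by volume
}

def out_of_spec(sample: dict) -> list:
    """Return the parameters in the sample that fall outside the targets."""
    return [k for k, v in sample.items()
            if k in LIMITS and not (LIMITS[k][0] <= v <= LIMITS[k][1])]

sample = {"ph": 7.2, "conductivity": 1400.0, "iron": 0.5, "glycol_pct": 30.0}
print(out_of_spec(sample))  # ['ph', 'iron'] → investigate copper corrosion risk
```

Trending the out-of-spec list over successive samples is what turns the quarterly test from a compliance exercise into early warning of corrosion or inhibitor depletion.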

Galvanic compatibility is a system-level decision: A liquid cooling circuit containing aluminium cold plates, copper manifolds, and stainless steel CDU components creates a multi-metal galvanic cell in the coolant. Without the correct inhibitor package — specifically one formulated for aluminium protection — aluminium cold plates corrode rapidly. This is not theoretical: field failures of aluminium cold plates within 18 months of commissioning due to incorrect coolant specification are documented across multiple hyperscaler deployments. Specify the inhibitor package by chemistry, not brand name.

Facility Modifications for Liquid Cooling Retrofit

Retrofitting liquid cooling into an existing air-cooled data centre requires modifications across every building system — electrical, mechanical, civil, and fire protection. The scope is consistently underestimated at project approval stage, which is the primary cause of budget overruns on liquid cooling retrofit programmes.

  • 01

    Raised Floor Penetrations and Pipe Routing

    Liquid cooling supply and return headers must route from the CDU room to the data hall and then along each row. In raised floor halls, pipes are typically run below the raised floor in the plenum — but the plenum is the air supply path, and pipe penetrations through the raised floor at each row disrupt the airflow to residual air-cooled equipment. Model the airflow impact before committing to below-floor routing. Overhead pipe routing (in cable trays or dedicated pipe hangers) is an alternative that avoids floor penetrations but requires structural capacity verification for the added pipe and fluid weight.

  • 02

    CDU Room or Alcove — Space and Services

    Each CDU requires a footprint of 0.6–1.2 m² plus maintenance clearance of 1.0 m on service access sides. A zone of 10 racks served by two N+1 CDUs requires approximately 4–6 m² of dedicated CDU space. This space must have: floor drain for coolant spillage; 415V power feed on UPS-backed bus (5–15 kW per CDU); ambient temperature monitoring; secondary containment for the CDU volume. In retrofit projects, this space is almost never available and must be found by repurposing an existing utility area or designating an aisle end bay.

  • 03

    Chilled Water or Warm Water Connection

    The CDU secondary side must connect to the facility heat rejection plant. In a retrofit, the existing chilled water plant may be at 6/12°C — suitable for DLC cold plate systems but potentially over-cooling for systems designed for warm water operation. Where the CDU is designed for 25–40°C secondary supply, connecting to a 6°C chilled water system wastes free cooling opportunity and over-stresses the glycol mixture. A buffer tank and mixing loop may be required to decouple the CDU from the chiller and enable warm water operation even when the chiller is running for air-cooled residual zones.

  • 04

    Leak Detection System

    Liquid in a data hall requires a dedicated leak detection system. Sensing cables installed beneath the raised floor along the full length of all supply and return headers, at CDU base trays, and under rack manifolds. Water sensor panels with zone identification — not single point detection. Alarm integrated with BMS with automatic CDU pump shutdown on confirmed leak. This is an insurance requirement in most colocation facilities and is increasingly a client specification requirement in enterprise data centres. Sensor cable must be installed before the raised floor tiles are relaid over the pipes.

  • 05

    Air Cooling Transition — Managing the Mixed Environment

    In a partial liquid cooling retrofit, the hall contains both air-cooled and liquid-cooled racks simultaneously. The liquid-cooled racks produce significantly less heat to the room air — but the CRAC units that serve that zone do not know this. Without reconfiguration, CRACs serving a liquid-cooled row will reduce cooling output (sensed by lower return air temperature) and inadvertently reduce cooling to adjacent air-cooled racks. CRAC setpoints and zone boundaries must be reconfigured as liquid cooling is progressively rolled out, and airflow modelling should be updated to reflect the changed thermal map after each deployment phase.

  • 06

    Fire Suppression Recertification

    The introduction of liquid coolant into the data hall changes the fire load and the suppressant requirements. The existing clean agent suppression system design — agent quantity, nozzle layout, room volume calculation — was certified for a specific room configuration. Adding CDUs, pipe runs, and potentially glycol-based coolant requires a new fire risk assessment and, in most cases, recertification of the suppression system by the system designer and the insurer. Budget for this. It is a 6–12 week process and cannot be bypassed.
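The mixed-environment effect described in item 05 can be quantified with a simple heat split: DLC cold plates capture only part of the server heat into the liquid loop (capture fractions around 70–85% are commonly cited), and the remainder still loads the room air. A sketch, with the capture fraction as an assumption:

```python
# Residual air-side heat load in a mixed air/liquid data hall.
# The liquid-capture fraction is an assumption; cold plate coverage varies.

def room_air_load_kw(air_cooled_kw, liquid_cooled_kw, capture_fraction=0.80):
    """Heat still rejected to room air: all of the air-cooled load plus the
    uncaptured remainder of the liquid-cooled load."""
    return air_cooled_kw + liquid_cooled_kw * (1.0 - capture_fraction)

# 300 kW of legacy air-cooled racks plus 500 kW of DLC racks at 80% capture
print(f"CRAC load ≈ {room_air_load_kw(300.0, 500.0):.0f} kW")  # 400 kW
```

Running this calculation after each deployment phase gives the CRAC setpoint review a concrete target rather than relying on observed return air temperatures alone.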

Integration with Existing Chilled Water Plant

Most retrofit liquid cooling deployments must integrate with an existing chilled water system that was designed for air-cooled CRAC units — not for the different temperature and flow characteristics of a liquid cooling CDU. This integration is a hydraulic design exercise that is frequently handed to the CDU vendor with inadequate information, resulting in poor system performance after commissioning.

The Delta-T Problem

Existing chilled water plants are typically designed for a 6/12°C supply/return temperature (ΔT = 6°C) serving CRAC units. Liquid cooling CDUs designed for warm water operation want 25–40°C supply. Connecting a warm-water CDU directly to a 6°C chilled water header wastes cooling energy, potentially causes condensation on CDU components, and provides no opportunity for free cooling. The correct solution is a hydraulic separation with a buffer vessel — a 1,000–5,000 litre buffer tank with a mixing valve that allows the CDU secondary loop to operate at a higher temperature setpoint, independent of the chiller supply temperature.
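The buffer tank setpoint follows from a mass-weighted mixing balance: T_mix = x·T_return + (1−x)·T_chw, solved for the recirculation fraction x. The sketch below uses the temperatures quoted in this article and assumes equal specific heat on both streams, which is a simplification.

```python
# Mixing-loop recirculation fraction to hold a warm-water CDU supply
# setpoint from a cold chilled-water source. Assumes equal Cp on both
# streams (a simplification); temperatures follow the text.

def bypass_fraction(t_setpoint, t_chw, t_return):
    """Fraction of CDU secondary return recirculated through the buffer;
    the remainder is fresh chilled water injected by the mixing valve."""
    return (t_setpoint - t_chw) / (t_return - t_chw)

# 25°C supply setpoint from 6°C chilled water, 32°C CDU secondary return
x = bypass_fraction(25.0, 6.0, 32.0)
print(f"Recirculate {x:.0%} of return; inject {1 - x:.0%} chilled water")
```

The high recirculation fraction is the point of the exercise: only a small chilled water injection is needed, which is exactly the decoupling that lets the chiller run at its design ΔT for the residual air-cooled zones.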

Free Cooling Integration

The long-term operating cost advantage of warm-water liquid cooling comes from free cooling — using dry coolers or cooling towers to reject heat without running chillers during cool ambient conditions. Integrating free cooling into a retrofit requires a three-way valve and controls strategy that switches between: mechanical cooling (chiller) during hot weather; partial free cooling (mixing) during mild weather; and full free cooling (dry cooler or cooling tower only) during cool periods. The controls logic must handle smooth transitions between modes without coolant temperature transients that cause IT equipment thermal throttling events.
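The three-mode switching logic above can be sketched as a small state function with hysteresis, which is the standard way to avoid chattering between modes when ambient temperature hovers near a threshold. The threshold and hysteresis values below are illustrative assumptions, not a tested controls specification.

```python
# Free-cooling mode selection with hysteresis to damp mode chattering.
# Threshold and hysteresis values are illustrative assumptions only.

def select_mode(t_ambient, current_mode,
                full_free_below=20.0, mechanical_above=30.0, hysteresis=1.5):
    """Return 'free', 'mixed', or 'mechanical' for a warm-water CDU setpoint."""
    # Widen the band in favour of the current mode so small ambient
    # fluctuations near a threshold do not trigger repeated transitions.
    lo = full_free_below + (hysteresis if current_mode == "free" else 0.0)
    hi = mechanical_above - (hysteresis if current_mode == "mechanical" else 0.0)
    if t_ambient <= lo:
        return "free"
    if t_ambient >= hi:
        return "mechanical"
    return "mixed"

print(select_mode(18.0, "mixed"))       # free
print(select_mode(21.0, "free"))        # free  (hysteresis holds the mode)
print(select_mode(29.0, "mechanical"))  # mechanical (hysteresis holds the mode)
```

A real implementation would add minimum dwell times and ramp-rate limits on the three-way valve, since it is the coolant temperature transient during the transition, not the mode change itself, that causes IT throttling events.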

Free cooling hours for Indian locations: Based on ASHRAE weather data — Delhi achieves approximately 4,200 hours/year of full or partial free cooling at 35°C CDU supply setpoint; Pune approximately 5,100 hours/year; Hyderabad approximately 3,800 hours/year. At 1 MW of liquid-cooled IT load, even partial free cooling reducing chiller operation by 50% saves approximately ₹80–120 lakh per year in electricity cost at current Indian tariff rates — providing clear ROI justification for the dry cooler capital investment.
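The savings figure can be cross-checked with a rough chiller energy model. The COP, tariff, and utilisation values below are my assumptions for a sanity check, not site data:

```python
# Rough annual chiller-energy saving from partial free cooling.
# COP, tariff, and reduction fraction are assumptions for a sanity check.

def annual_saving_lakh(it_load_kw, chiller_cop, reduction_fraction,
                       tariff_inr_per_kwh, hours=8760):
    chiller_kw = it_load_kw / chiller_cop                # compressor power
    saved_kwh = chiller_kw * reduction_fraction * hours  # energy avoided per year
    return saved_kwh * tariff_inr_per_kwh / 1e5          # ₹ lakh (1 lakh = 1e5)

# 1 MW IT load, chiller COP 4.0, 50% chiller-hour reduction, ₹8/kWh
s = annual_saving_lakh(1000.0, 4.0, 0.5, 8.0)
print(f"≈ ₹{s:.0f} lakh/year")  # lands within the ₹80–120 lakh range quoted
```

The model ignores pump and dry cooler fan energy, so treat it as an upper-bound sanity check rather than a business case.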

Commissioning and Handover Protocol

Liquid cooling systems require a structured commissioning process that is distinct from standard HVAC commissioning. The consequences of a poorly commissioned liquid cooling system — a leak, a pressure surge, or an imbalanced flow — are more severe than for an HVAC system because the coolant is in direct proximity to energised IT equipment.

| # | Commissioning Stage | Activity | Accept Criterion |
|---|---|---|---|
| 01 | Hydrostatic Pressure Test | Pressurise primary loop to 1.5× design working pressure with water (no IT equipment connected). Hold for 1 hour. | Zero pressure drop. Visual inspection: no weeping joints, no deformation. |
| 02 | Flush and Clean | Circulate clean water at high velocity through all headers and branches before connecting to the CDU heat exchanger. Flush to bypass. Sample at drain for particulate count and visual clarity. | Flush water particle count <5 mg/L. Visually clear. Prevents cold plate fouling from construction debris. |
| 03 | Coolant Fill and Chemistry Baseline | Fill with specified coolant. Take initial water sample. Record pH, conductivity, glycol concentration, inhibitor levels. | All parameters within specification. Documented baseline for ongoing monitoring comparison. |
| 04 | CDU Functional Test (No IT Load) | Run CDU at design flow rate. Verify pump operation, flow meter reading, pressure differential, temperature sensors, alarm outputs. | Design flow ±5%. Alarms function on signal injection. Controls respond to setpoint changes within 30 seconds. |
| 05 | Branch Flow Balancing | Measure flow at each rack branch connection using ultrasonic clamp-on or in-line flow meter. Adjust balancing valves to achieve design flow ±10% at each branch. | All branches within ±10% of design flow. Record balanced valve positions for O&M documentation. |
| 06 | IT Load Performance Test | Connect servers and apply 100% CPU/GPU load using stress test software. Monitor inlet and outlet temperatures at CDU, manifold, and cold plate level for minimum 4 hours. | Cold plate outlet temperature below OEM maximum. GPU junction temperature below thermal throttle threshold. No alarms. |
| 07 | Leak Detection System Test | Drip 50 mL of water onto each sensing cable zone. Verify alarm activates and zone is correctly identified at BMS panel. | 100% of sensing cable zones alarming within 60 seconds of water contact. |
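The stage 05 acceptance criterion is mechanical to verify by hand across dozens of branches; a short script against the recorded flow readings makes the check repeatable. The measured values below are illustrative.

```python
# Stage 05 acceptance check: all branch flows within ±10% of design flow.
# Measured readings here are illustrative values, not commissioning data.

def balance_report(design_lpm, measured_lpm, tol=0.10):
    """Return (passed, out_of_tolerance) where out_of_tolerance lists
    (branch index, fractional deviation) for every branch beyond ±tol."""
    out = [(i, (m - design_lpm) / design_lpm)
           for i, m in enumerate(measured_lpm)
           if abs(m - design_lpm) / design_lpm > tol]
    return (len(out) == 0, out)

ok, out = balance_report(36.0, [35.1, 36.8, 34.9, 31.0, 37.2])
print("PASS" if ok else f"FAIL — rebalance branches {[i for i, _ in out]}")
```

The same report, re-run against the as-left valve positions, is a natural artefact to attach to the O&M documentation required by the acceptance criterion.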

Conclusion: Engineering the Plumbing That Powers AI

Liquid cooling infrastructure is not glamorous engineering. CDU sizing, manifold hydraulics, glycol chemistry, and pipe joint specifications do not generate the same excitement as the AI hardware they serve. But they are the reason that hardware either runs at full performance for five years or suffers from hot GPU throttling, corroded cold plates, and coolant leaks in the first year of operation.

The fundamental discipline required is straightforward: model the complete hydraulic circuit before specifying any component, specify coolant chemistry and pipe materials as a matched system, and commission methodically with IT load applied. Data centres that follow this approach — engaging MEP engineers with liquid cooling expertise at design stage rather than discovering the requirements during construction — deliver liquid cooling projects on schedule, on budget, and at the performance targets the IT team was promised.

As GPU TDPs continue climbing and liquid cooling transitions from high-density exception to standard practice, the MEP engineers who understand this infrastructure in depth will be the ones designing the AI data centres of the next decade.

Designing Liquid Cooling Infrastructure for Your Data Centre?

KVRM provides complete liquid cooling MEP design — CDU sizing, manifold hydraulic calculations, coolant chemistry specification, facility modification scoping, and commissioning protocols for DLC and immersion cooling deployments across India and the Gulf region.

Request a Free Consultation →
KVRM Engineering Team

Data Centre MEP · Liquid Cooling · Hydraulic Design · ASHRAE TC 9.9
