2016 Japan Forum – Deploying Liquid-Cooled IT and Maximizing the Re-use of Waste Heat

27 July, 2016 | Presentation

Tahir Cader, Hewlett Packard Enterprise

With the widespread availability of smartphones, and the rapid proliferation of connected smart systems such as automobiles, homes, and wearable devices, the demand for computational capacity and network bandwidth continues to grow. Over the last decade, data centers, particularly hyperscale data centers, have pursued a path of keeping rack power levels down while growing their horizontal data center footprints. There are signs that some of this is changing. For example, virtual-reality applications adopted by social media companies will increasingly rely on high-power GPUs; when these are densely packed into chassis and racks, rack power will rise and challenge the ability to air-cool such racks effectively.

In the high performance computing (HPC) space, the picture is a little “clearer.” HPC applications demand low system latency, meaning that all servers and racks in a cluster must be close together; this latency constraint is a primary driver of server, chassis, and rack density. As we move from petascale to exascale computing, and as space and power envelopes are imposed on data centers, rack power densities will continue to climb aggressively. Finally, computational device power roadmaps, e.g., CPU, GPU, and memory DIMM roadmaps, all show fast-climbing power levels. To cope, data centers are adopting more aggressive cooling solutions such as liquid cooling. This presentation reviews the current state of the industry, covers the key drivers toward liquid cooling, and provides several examples of how liquid cooling is being implemented in large, prestigious production data centers today. The presentation also covers the movement toward maximizing the capture and re-use of waste heat from liquid-cooled data centers.
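The waste-heat re-use mentioned above is commonly quantified with The Green Grid's Energy Reuse Effectiveness (ERE) metric, which extends PUE by crediting recovered heat. The following is a minimal illustrative sketch; the deployment numbers are hypothetical and not taken from the presentation:

```python
def ere(it_kw: float, overhead_kw: float, reused_kw: float) -> float:
    """Energy Reuse Effectiveness:
    ERE = (total facility energy - reused energy) / IT energy.
    With no reuse, ERE equals PUE; with reuse it can drop below 1.0.
    """
    return (it_kw + overhead_kw - reused_kw) / it_kw

# Hypothetical 1 MW liquid-cooled deployment with 100 kW of cooling/power
# overhead (PUE = 1.10), capturing 600 kW of waste heat for district heating.
print(f"PUE = {ere(1000.0, 100.0, 0.0):.2f}")    # no reuse: 1.10
print(f"ERE = {ere(1000.0, 100.0, 600.0):.2f}")  # with reuse: 0.50
```

Because liquid cooling delivers waste heat as warm water rather than warm air, a far larger fraction of it is practical to capture, which is what pushes ERE well below PUE in deployments like the hypothetical one above.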