Managing the Entry of Liquid Cooling into the Enterprise Data Center

4 March 2019
Asetek
Unlike most data center operators, who oversee racks of moderate density with relatively low average server utilization, the enterprise tech operator must also satisfy demand for very high, HPC-like compute throughput. This entry of HPC-like loads into the traditional enterprise is driven in large part by the evolution of hyperscale and cloud businesses requiring AI, machine learning and data analytics. Search functionality requiring real-time analytics and higher compute intensity mimics HPC within the enterprise data center. Many HPC clusters have adopted liquid cooling to address the resulting higher wattages at both the node and rack level while maintaining the dense interconnects needed for lower latency and optimal response time.

For the enterprise data center, however, many liquid cooling approaches are one-size-fits-all solutions, which makes it difficult to move to liquid cooling on an as-needed basis. What is required is an architecture that is flexible across a variety of heat rejection scenarios, is cost effective and can be implemented without disruption.

Asetek's direct-to-chip liquid cooling provides a distributed cooling architecture that addresses the full range of heat rejection scenarios. It is based on low-pressure, redundant pumps and a sealed liquid path within each server node. Unlike centralized pumping systems, it places coolers (integrated pumps/cold plates) within server or blade nodes, where they replace the CPU/GPU air heat sinks and remove heat with hot water. The liquid cooling circuit in the server can also incorporate memory, voltage regulators (VRs) and other high-wattage components into this low-PSI pumping circuit. Distributed pumping is the foundation for flexibility on the heat-capture side. On the heat-rejection side, the Asetek architecture enables adaptation to existing air-cooled data centers and evolution to fully liquid-cooled facilities.

Adding liquid cooling with no impact on data center infrastructure can be done with Asetek's ServerLSL™, a server-level Liquid Assisted Air Cooling (LAAC) solution. With ServerLSL, the redundant liquid pump/cold plates are paired with a heat exchanger (HEX, essentially a radiator) that also sits in the node. Via the HEX, the captured heat is exhausted into the data center, where existing HVAC systems handle it. ServerLSL can be viewed as a tool for quickly incorporating the highest-wattage CPUs/GPUs, and racks can contain a mix of liquid-cooled and air-cooled nodes.

Available in 2018, InRackLAAC™ places a shared HEX in a 2U "box" that is connected to a "block" of up to 12 servers. Because the HEX is removed from the individual nodes, greater component density is possible.

When facilities water is routed to the racks, Asetek's RackCDU™ options enable a much greater impact on data center OPEX. RackCDU D2C (Direct-to-Chip) captures between 60 and 80 percent of server heat into liquid, reducing data center cooling costs by over 50 percent and allowing 2.5x-5x increases in data center server density. RackCDUs have additional OPEX advantages in heat management: because hot water (up to 40ºC) is used to cool, the data center does not require expensive CRACs and cooling towers and can instead use inexpensive dry coolers. With RackCDU, the collected heat is moved via a sealed liquid path to heat exchangers in the CDU, which transfer it into the facilities water.
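To see what those capture percentages mean in practice, here is a minimal back-of-the-envelope sketch in Python of how much heat moves into the liquid loop versus how much remains for room air handling. The 30 kW rack load is an illustrative assumption, not an Asetek figure; only the 60-80 percent capture range comes from the article.

```python
# Back-of-the-envelope heat split for a D2C-cooled rack.
# Assumption (illustrative): a 30 kW rack; capture fractions are the
# 60-80 percent range cited above.

RACK_POWER_KW = 30.0  # assumed total IT load per rack

for capture_fraction in (0.60, 0.70, 0.80):
    to_liquid_kw = RACK_POWER_KW * capture_fraction  # carried away by facilities water
    to_air_kw = RACK_POWER_KW - to_liquid_kw         # residual load on room HVAC
    print(f"capture {capture_fraction:.0%}: "
          f"{to_liquid_kw:.1f} kW to liquid, {to_air_kw:.1f} kW left to air")
```

Even at the low end of the range, the air-side load on existing HVAC drops by more than half, which is what makes the density increases above plausible without reworking room-level cooling.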
RackCDUs come in two types, giving additional flexibility to data center operators. InRackCDU™ is mounted in the rack along with the servers; occupying 4U, it connects to nodes via Zero-U, PDU-style manifolds in the rack. Alternatively, VerticalRackCDU™ consists of a Zero-U, rack-level CDU (Cooling Distribution Unit) mounted as a 10.5-inch extension at the rear of the rack. The distributed pumping architecture delivers flexibility at the server, rack, cluster and site levels in heat capture, coolant distribution and heat rejection that other approaches do not.
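As a closing illustration of the heat-rejection side, the sketch below uses the standard relation Q = ṁ·c_p·ΔT to estimate the facilities-water flow a RackCDU-style heat exchanger would need to carry a given captured load. The 21 kW load (70 percent of an assumed 30 kW rack) and the 10ºC water-side temperature rise are illustrative assumptions, not Asetek specifications.

```python
# Estimate facilities-water flow needed to carry a captured heat load,
# using Q = m_dot * c_p * dT. All specific values below are assumptions.

CP_WATER = 4186.0  # specific heat of water, J/(kg*K)

def required_flow_lpm(heat_w: float, delta_t_k: float) -> float:
    """Water flow in liters/minute needed to absorb heat_w watts
    with a delta_t_k kelvin temperature rise across the CDU."""
    kg_per_s = heat_w / (CP_WATER * delta_t_k)  # mass flow, kg/s
    return kg_per_s * 60.0                      # ~1 kg of water per liter

# Example: 21 kW captured into liquid, 10 K rise on the facilities side.
print(f"{required_flow_lpm(21_000, 10.0):.1f} L/min")  # ≈ 30.1 L/min
```

Flows of this order are well within what rack-level manifolds and facilities-water loops routinely handle, which is why warm-water, direct-to-chip cooling can reject tens of kilowatts per rack with dry coolers rather than chillers.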