Meeting the challenge of rising heat loads in data centres

With increasing heat loads in data centres, cooling solutions with a rack-based view become necessary. Shown here are Liebert overhead cooling modules on the ceiling and vertical top coolers mounted on the racks themselves.
Escalating heat loads in data centres pose cooling challenges that cannot always be met by traditional approaches. ROBERTO FELISI explores the issues.

For over 40 years 'Moore's Law', Intel co-founder Gordon Moore's observation that the number of transistors per square inch on an integrated circuit would double at regular intervals, has proved correct. However, the miniaturisation of integrated circuits and the increase in processing speed brought by microprocessors have inevitably led to increased heat dissipation, a problem exacerbated by the introduction of blade-server technology. It is a generally understood operating principle that every watt consumed is dissipated as heat.

In the most common data-centre architecture, server cabinets stand on a raised floor and are cooled by chilled air circulating through perforated floor tiles.

Limit

There is a limit to the cooling capacity of a single perforated tile, generally taken to be about 3 kW. Assuming that a server cabinet can draw chilled air from more than one floor tile, the achievable limit is likely to be 5 kW per cabinet. The problem created by blade-server technology is that, in theory, 84 blade servers can be stacked in one cabinet, with a potential cooling requirement of 21 kW.

Historically, the cooling solution would have been based solely on the average heat load in W/m2 across the room, but this approach no longer applies. The changing dynamics of the data centre, with a mix of equipment types and differing utilisation of that equipment, mean that the range of heat loads across the data centre will widen. The effect is emphasised where the heat load is concentrated in areas housing clusters of blade servers. Until fairly recently, heat loads did not generally exceed 8 kW per rack, but loads exceeding 30 kW are now to be expected.
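The arithmetic above can be sketched in a few lines. The 3 kW-per-tile and 84-blades-per-rack figures come from the article; the 250 W-per-blade draw is an illustrative assumption chosen so that a full rack matches the article's 21 kW example.

```python
import math

TILE_CAPACITY_KW = 3.0   # approximate cooling capacity of one perforated tile
BLADES_PER_RACK = 84     # theoretical maximum cited in the article
WATTS_PER_BLADE = 250    # assumed for illustration: 84 x 250 W = 21 kW

def rack_heat_load_kw(blades: int, watts_per_blade: float) -> float:
    """Every watt consumed is dissipated as heat."""
    return blades * watts_per_blade / 1000.0

def tiles_needed(load_kw: float, tile_capacity_kw: float = TILE_CAPACITY_KW) -> int:
    """Number of perforated tiles a rack at this load would need to draw from."""
    return math.ceil(load_kw / tile_capacity_kw)

load = rack_heat_load_kw(BLADES_PER_RACK, WATTS_PER_BLADE)
print(load, tiles_needed(load))  # 21.0 kW would need 7 tiles at 3 kW each
```

Since a cabinet can realistically draw from only one or two tiles (about 5 kW), a fully populated blade rack overwhelms under-floor cooling by roughly a factor of four.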
The cooling solution must, then, be based on flexible, targeted cooling. The first step in any solution is to optimise current resources. The most common data-centre design uses the hot-aisle/cold-aisle approach. In this configuration, ensure that empty slots within racks are closed with blanking panels, that there are no gaps between racks and that cable apertures in the floor are sealed.

Cooling-system units should have compressors capable of stepped unloading or fully variable capacity, so that they deliver the required level of cooling without cycling the compressor on and off. This not only saves energy but also reduces compressor wear. Effective communication between units in the system is also beneficial, ensuring that different units do not counteract one another. Once these measures are in place, specific additional cooling measures can be considered.

Additional cooling

Additional cooling solutions should take a rack-based view and focus on ensuring the proper cooling of individual racks. They should also provide the flexibility to respond to changes in the location of high-density heat sources within the data centre as equipment type and usage evolve.

One solution is to house blade servers in a sealed environment with high-capacity cooling, with redundancy for emergency cooling needs and, preferably, fire-detection and extinguishing capability. Closed-loop rack-cooling units of this type cool the server rather than the room, and digital-scroll compressor technology enables cooling capacity to be matched to the server heat load.

Supplemental cooling to provide additional capacity in conjunction with the under-floor system may also be considered. This can take the form of modules in an overhead cooling system, with piping to carry the coolant, which does not consume valuable floor space.
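The stepped-unloading idea described above can be illustrated with a minimal sketch. This is not a real controller: the step values and the 20 kW unit size are hypothetical, and the logic simply picks the smallest available capacity step that meets the current heat load, so the compressor modulates rather than cycling fully on and off.

```python
def select_capacity_step(load_kw: float, steps_kw: list[float]) -> float:
    """Return the smallest capacity step that covers the load.

    If even the largest step cannot cover the load, return the largest
    step (the unit simply runs flat out).
    """
    for step in sorted(steps_kw):
        if step >= load_kw:
            return step
    return max(steps_kw)

# e.g. a hypothetical 20 kW unit with 25/50/75/100% unloading steps
steps = [5.0, 10.0, 15.0, 20.0]
print(select_capacity_step(12.0, steps))  # runs at the 15 kW step
```

A fully variable-capacity compressor (such as the digital-scroll type mentioned above) removes the steps entirely and tracks the load continuously; the principle of avoiding on/off cycling is the same.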
By pre-piping the facility, cooling modules with flexible connecting pipes can be added or moved in response to variable high-density heat sources. By delivering supplemental cooling in specific areas where high-density systems are located, or by operating directly within the racks, flexible solutions enable the computing power in data centres to be intensified without extensive redesign or enlargement of existing infrastructure.

Roberto Felisi is with Emerson Network Power Ltd, Globe Park, Marlow, Bucks SL7 1YG.