Tier III data centre achieves new tier in energy performance

Fujitsu
The unprepossessing entrance to Fujitsu’s North London Data Centre, the only fully certified Tier III data centre in Europe, gives no hint of its exceptional energy efficiency and highly dependable support services.

Sophisticated data centres require totally reliable supporting services, but substantial reductions in energy consumption can still be achieved; at Fujitsu’s North London Data Centre the M&E energy consumption has been cut by more than half, as explained to Ken Sharpe.

Fujitsu’s North London Data Centre is not only the only fully certified Tier III data centre in Europe; it also has less than half the M&E services energy consumption of a standard design for such a building, thanks to a holistic design approach by consultants Red Engineering Design.

The Tier classification system for data centres, which has four levels, was devised by The Uptime Institute of the USA to define performance standards such as redundant capacity, distribution paths for data communication, and the effect of maintenance work.

Tier II and Tier III both require N+1 active capacity equipment, such as chillers and pumps, to support the IT load.

Tier III, unlike Tier II, also requires each piece of equipment, whether an entire chiller, a pump or an individual valve, to be maintainable or replaceable with no disruption to the operation of the centre. Also unlike Tier II, Tier III requires two distribution paths for power, cooling and data, both within the centre and with the outside world.
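By way of illustration, the sketch below shows how the number of duty units for an N+1 installation might be worked out; the 3 MW load and 1.5 MW chiller size are hypothetical, not figures from this project.

```python
import math

def n_plus_one_units(design_load_kw: float, unit_capacity_kw: float) -> int:
    """Number of units for N+1: enough to meet the design load (N)
    plus one redundant unit that can cover maintenance or failure."""
    n = math.ceil(design_load_kw / unit_capacity_kw)
    return n + 1

# Hypothetical example: a 3 MW cooling load served by 1.5 MW chillers
# needs N = 2 duty units, so an N+1 installation has 3 chillers.
print(n_plus_one_units(3000, 1500))  # -> 3
```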

The design of a Tier III data centre is also shaped by the requirement that on-site generation be regarded as the primary source of electrical power, a requirement addressed in a novel way on this project.

The level of energy efficiency achieved at Fujitsu’s North London Data Centre, for which the overall build and fit-out costs were approximately £44 million, and the high operational integrity required might seem incompatible. However, Lee Prescott of Red Engineering Design does not see it that way at all. 

Having more chillers, pumps and other equipment than required to meet the design load enables, with the aid of inverter control of motors and compressors, more equipment to be run at part load rather than fewer items at higher loads. The square and cube laws that apply to fans and pumps then come into their own to reduce energy consumption.
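The saving is easy to illustrate with the cube law. The sketch below compares meeting a given duty with three fans at full speed against sharing it across four fans, the N+1 set, at 75% speed; the figures are illustrative only and ignore fixed losses and motor and inverter efficiencies.

```python
def fan_power_fraction(speed_fraction: float) -> float:
    """Cube-law approximation: fan or pump power scales with the cube of
    speed (fixed losses and drive efficiencies are ignored here)."""
    return speed_fraction ** 3

# Hypothetical comparison: deliver a given airflow with 3 fans at full speed,
# or share it across 4 fans each running at 75% speed.
three_at_full = 3 * fan_power_fraction(1.00)   # 3.00 units of power
four_at_part  = 4 * fan_power_fraction(0.75)   # 4 * 0.42 = 1.69 units
print(f"Relative power: {four_at_part / three_at_full:.0%}")  # about 56%
```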

Red Engineering Design also thought very carefully through all energy-usage patterns to reduce overall energy consumption. One example, of which more below, is the use of heat pumps drawing energy from the chilled-water return, which will have absorbed heat from the servers, to supply heating coils in the air-handling units — reducing the heat to be rejected and the load on the chillers. 

A generally accepted measure of the energy efficiency of a data centre is the PUE (power usage effectiveness), which compares the total energy use of the site with the IT power requirement. Total energy use being double that of the IT equipment is regarded as good; most data centres achieve 2.3 to 2.6. The Fujitsu data centre has been designed to a design-day PUE of 1.41, with an annual average PUE, taking into account free cooling, of 1.27.
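As a rough illustration of the arithmetic, the sketch below applies the definition to a hypothetical 1 MW IT load running continuously; only the PUE values come from the article.

```python
def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Power usage effectiveness: total site energy divided by IT energy.
    1.0 would mean every kWh goes to the IT load; 2.0 means the supporting
    services use as much energy again as the IT equipment."""
    return total_facility_kwh / it_kwh

# Hypothetical annual figures for a 1 MW IT load running all year.
it_energy = 1000 * 8760          # kWh per year
print(f"{pue(it_energy * 2.5, it_energy):.2f}")   # 2.50, typical of many centres
print(f"{pue(it_energy * 1.27, it_energy):.2f}")  # 1.27, the annual average quoted here
```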

The cooling strategy is based on chilled water to computer-room air-conditioning (CRAC) units with heat rejection using cooling towers. The more common approach is direct-expansion units in the data halls or chilled water with air-cooled chillers, perhaps with free cooling. 

Lee Prescott explained that, as long as a strict maintenance regime is in place, the use of cooling towers for heat rejection poses no risk to the operational resilience of this data centre.

The centre was previously a single-storey warehouse and a 2-storey office complex. It has been refurbished to provide technical space comprising six data halls, two comms rooms and a build room. 

The initial study was based on an average IT load density of 750 W/m², and the base engineering systems have provision for high-density equipment, at a load density of 1500 W/m², in up to 20% of the data halls.
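The blended figure works out as follows; the 5000 m² of technical space used in the last line is purely hypothetical, as the article does not state the floor area.

```python
# Blended design load density for the data halls, using the split quoted in
# the article: 80% of the area at 750 W/m² and 20% at 1500 W/m².
standard_fraction, standard_density = 0.8, 750    # W/m²
high_fraction, high_density = 0.2, 1500           # W/m²

average_density = (standard_fraction * standard_density
                   + high_fraction * high_density)
print(average_density)  # 900.0 W/m²

# With a purely hypothetical 5000 m² of technical space, this would imply
# an IT design load of about 4.5 MW.
print(average_density * 5000 / 1e6, "MW")  # 4.5 MW
```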

Chilled water for the M&E infrastructure is provided by three McQuay chillers using R134a and having variable-speed drives. Only two chillers are required to meet the load, so there is N + 1 redundancy — which is exploited by running all chillers at part load to improve energy efficiency. A failed chiller would automatically be isolated from the chilled-water system. 

Heat rejection is via three Marley open-circuit cooling towers. To ensure continuity of operation if the mains water supply should fail, sufficient water is stored for at least three days at peak summer load. 
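The article does not give the stored volume, but a rough sizing sketch along the following lines shows the scale involved; the peak heat-rejection figure and the blowdown and drift allowance are assumptions, not project data.

```python
# Rough sizing sketch for cooling-tower make-up water storage.
heat_rejected_kw = 5000          # assumed peak summer heat rejection
latent_heat_kj_per_kg = 2450     # latent heat of vaporisation of water
blowdown_drift_factor = 1.3      # crude allowance for blowdown and drift

evaporation_kg_per_s = heat_rejected_kw / latent_heat_kj_per_kg
makeup_kg_per_s = evaporation_kg_per_s * blowdown_drift_factor
three_days_s = 3 * 24 * 3600

storage_m3 = makeup_kg_per_s * three_days_s / 1000  # 1000 kg per m³
print(f"{storage_m3:.0f} m³")  # -> 688 m³ under these assumptions
```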

Chilled-water temperatures are higher than usual practice to increase cooling plant efficiency and maximise the potential of free cooling. 

Lee Prescott explains that the heat-rejection capacity of a cooling tower depends mainly on the wet-bulb temperature, rather than the higher dry-bulb temperature, so water leaves a cooling tower at a lower temperature than it does from a dry cooler. The leaving condenser-water temperature varies with ambient conditions, enabling the chillers to achieve COPs of up to 14.8. The water-cooled chillers were carefully selected to achieve a minimum COP of 8 even during design-day conditions, compared with 3 to 3.4 for a typical air-cooled chiller. In addition, more air-cooled chillers would have been required, and they would have needed more space. Relatively high flow and return temperatures are used because most of the heat gain in the data rooms is sensible, so there is no requirement for dehumidification.
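The effect of COP on running cost is simple to see; the 3 MW cooling load below is hypothetical, while the COP figures are those quoted above.

```python
def chiller_power_kw(cooling_load_kw: float, cop: float) -> float:
    """Electrical input needed to deliver a given cooling load at a given COP."""
    return cooling_load_kw / cop

# Hypothetical 3 MW cooling load served at the COPs quoted in the article.
load = 3000  # kW
for label, cop in [("water-cooled, design day", 8.0),
                   ("water-cooled, favourable ambient", 14.8),
                   ("typical air-cooled", 3.2)]:
    print(f"{label}: {chiller_power_kw(load, cop):.0f} kW electrical")
# water-cooled, design day: 375 kW
# water-cooled, favourable ambient: 203 kW
# typical air-cooled: 938 kW
```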

A plate heat exchanger connected to each chiller, the cooling towers and the CRAC circuits provides the means of delivering free cooling — both total and partial. 

Whenever condenser water from the cooling tower is colder than the return water from the data halls, there is the opportunity for free cooling, which is available for over 50% of the year. 

The free cooling, which is under the control of the Trend building-management system, is an extremely efficient and cost-effective engineering solution compared with large banks of separate free-cooling plant. The efficiently produced chilled water is also used efficiently by the CRAC units. These units deliver cooled air downwards into the floor void, from where it exits into the cold aisles. Return air passes into the top of the CRAC units, which have a ‘collar’ on top to increase their height so that the return air is drawn from higher in the room, where it is warmer, increasing heat-exchange efficiency.
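A much simplified version of the changeover decision might look like the sketch below. This is not the actual Trend control strategy; the chilled-water setpoint is an assumed figure and a realistic approach-temperature margin is ignored.

```python
def free_cooling_mode(condenser_water_c: float,
                      chw_return_c: float,
                      chw_flow_setpoint_c: float) -> str:
    """Simplified changeover logic, not the installed BMS strategy.
    Free cooling is possible whenever the cooling-tower water is colder than
    the chilled-water return; it is 'total' if the tower water can pull the
    chilled water all the way down to its flow setpoint."""
    if condenser_water_c >= chw_return_c:
        return "mechanical cooling only"
    if condenser_water_c <= chw_flow_setpoint_c:
        return "total free cooling"
    return "partial free cooling (chillers trim to setpoint)"

print(free_cooling_mode(10.0, 20.0, 14.0))  # total free cooling
print(free_cooling_mode(16.0, 20.0, 14.0))  # partial free cooling (chillers trim to setpoint)
print(free_cooling_mode(22.0, 20.0, 14.0))  # mechanical cooling only
```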

The performance of the CRAC units is matched to the load using variable-speed secondary chilled-water pumps and a 2-port control valve. Less water is pumped round the system, significantly reducing the energy consumption of pumps. 

Similarly, the CRAC units have variable-speed fans so that, to meet the maximum load, the fans in all units, including the redundant capacity, can run at reduced speed. The fans slow further when the data centre is operating at low load.

All the technical spaces have a separate conditioned supply of fresh air from two (N+N) air-handling units to pressurise these spaces and improve cleanliness by preventing the ingress of particles of dirt and dust. 

Heating for the fresh air is provided by three Dynaciat water-to-water heat pumps. Two can meet the requirement, so there is one redundant heat pump (N+1). The energy source for these heat pumps is the return chilled water, a proportion of the flow being drawn off; water for heating is delivered at 50°C. Lee Prescott explains that the only cost of generating this heat is the lower efficiency of the heat pumps compared with the main chillers.
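The accounting behind that remark can be sketched as follows; the heating demand and the heat-pump COP are assumed figures, and the chiller COP is the design-day value quoted earlier.

```python
# Illustrative energy accounting for recovering heat from the chilled-water
# return with a water-to-water heat pump (figures hypothetical).
heating_demand_kw = 100     # heat delivered to the AHU heating coils (assumed)
cop_heat_pump = 4.0         # heating COP of the heat pumps (assumed)
cop_chiller = 8.0           # cooling COP of the main chillers (design day)

hp_input = heating_demand_kw / cop_heat_pump          # 25 kW electrical
heat_drawn_from_chw = heating_demand_kw - hp_input    # 75 kW taken off the chillers
chiller_saving = heat_drawn_from_chw / cop_chiller    # 9.4 kW electrical saved

net_cost = hp_input - chiller_saving
print(f"Net electrical cost of {heating_demand_kw} kW of heat: {net_cost:.1f} kW")
# -> 15.6 kW, far below the 100 kW a direct electric heater would draw
```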

The CRAC units operate with a sensible cooling ratio of unity, and all the latent load is handled in the fresh air. The fresh-air humidification requirement is met by cold-water atomised-spray humidifiers from JS Humidifiers. While the capital cost is higher than for an electrode-boiler system, the energy costs are much lower; the only energy required is electrical power for the air compressor and booster pump. Water is treated using silver ions. This method of humidification is adiabatic, which has a cooling effect, so the supply air may sometimes need pre-heating, which is sourced from the heat-recovery systems.
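The cooling effect is straightforward psychrometrics; the 2 g/kg humidification duty in the sketch below is an assumed figure.

```python
# Approximate adiabatic cooling when spray humidifiers evaporate water into
# the fresh-air stream: the latent heat comes from the air's sensible heat.
latent_heat = 2450   # kJ per kg of water evaporated
cp_air = 1.005       # kJ/(kg.K), specific heat of air

moisture_added_g_per_kg = 2.0   # assumed humidification duty
delta_t = (moisture_added_g_per_kg / 1000) * latent_heat / cp_air
print(f"Supply air cools by roughly {delta_t:.1f} K")  # about 4.9 K
```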

Security of power supply is vital in a Tier III data centre. There are two incoming 11 kV supplies, either one of which can meet the centre’s requirements of 7.5 MVA. There are two separate substations for the mechanical plant, again for security of operation. 

In the event of a total failure of the mains supply, there are six diesel rotary uninterruptible power supply (DRUPS) units in an N+2 configuration (with space for a future seventh unit) to meet the total load, including all mechanical services. The DRUPS standby engines are fuelled by diesel, and sufficient fuel is stored on site to cover a three-day bank-holiday weekend.

Batteries are not used to cover the break in supply between the mains failing and the standby engines taking over the load. Instead, the DRUPS units bridge the gap. Mains power keeps a kinetic energy store spinning, which in turn drives an alternator. If the mains fails, the kinetic energy store keeps the alternator turning while the diesel engine starts and runs up to speed before taking up the drive.
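A back-of-envelope sketch of the ride-through time is shown below; the flywheel inertia, speed range and unit load are all assumed values, not data for the units installed here.

```python
import math

# Rough ride-through estimate for a kinetic energy store (figures hypothetical;
# real DRUPS units and their flywheel parameters differ).
moment_of_inertia = 2000                     # kg.m², assumed flywheel inertia
speed_rpm_full, speed_rpm_min = 3000, 2700   # assumed usable speed range
load_kw = 1500                               # load carried while the engine starts

omega_full = speed_rpm_full * 2 * math.pi / 60
omega_min = speed_rpm_min * 2 * math.pi / 60
usable_energy_kj = 0.5 * moment_of_inertia * (omega_full**2 - omega_min**2) / 1000

ride_through_s = usable_energy_kj / load_kw
print(f"{ride_through_s:.1f} s to start and load the diesel engine")  # about 12.5 s
```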

The environmental control of the office areas will be much more familiar. Daikin VRVIII equipment provides energy-recovery heating and cooling, so that some areas can be cooled at the same time as others are heated. The fresh-air supply for the offices is controlled to meet the requirement using carbon-dioxide sensors to monitor air quality. 

The aging boiler and its associated perimeter heating system have been removed, and domestic hot water is now provided by point-of-use electric water heaters. 

The team at Red Engineering Design has designed a large number of data centres in the UK and overseas, of which Fujitsu’s North London Data Centre is the most sophisticated. Not only is its projected energy consumption for the supporting M&E services less than 40% of that of a conventionally serviced centre, but it was achieved at a cost below the minimum rule-of-thumb £/m² figure for a Tier III facility.

The facility has been awarded ‘Most sustainable refurbishment’ in the BSJ Sustainability awards, something the whole team is very proud to have achieved. The project has also won an award for Innovation in the Mega Data Centre from Data Centre Dynamics.

Cooling Towers
To deal with the massive cooling requirement, cooling towers were chosen to reject heat from the data suites.
Chilled-water circuit
Free cooling is an important part of the energy strategy, and partial free cooling for the data halls is available for over half of the year using large plate heat exchangers in the chilled-water circuits.