Data centres that can be made to order

This is one of 12 cooling units built by Eaton-Williams that cool a 500 m² data hall using free cooling as much as possible.

How do you build a data centre in a third of the time that has traditionally been required — and with a good PUE? Ken Sharpe found out how Colt Data Centre Services has adopted the approach of the oil industry.

Building a data centre in the traditional way is a capital-intensive process. A new building is erected, an existing warehouse-style building is adapted for the purpose, or existing office space is used. The next stage is to provide the services for the IT equipment that will be installed later.

Those services include cooling systems to remove the huge amounts of heat produced by equipment in the closely packed server racks.

Standby power is essential to meet the high levels of uptime required. Mains power has to be made available, increasingly using busbars that can be tapped into.

Data halls need lighting, even though they are seldom occupied.

Finally, data cabling has to be brought to the rack positions.

The traditional approach to building a larger data centre that is ready for occupancy takes about 14 months, according to David Duane, senior consultant with Colt Technology Services. His company is a major operator of data centres, with 19 across Europe, including the UK.

And that kind of data centre is what Colt Technology Services built, until Guy Ruddock joined the company. He is now vice president, operations for Colt Data Centre Services. He came from the oil industry, where he had experience of building offshore rigs using a modular approach.

Guy Ruddock asked why the same approach could not be applied to building data centres, and a team was set up to develop a design that could be customer-ready three to four times more quickly than the traditional approach, using the best of the available technology. That build speed means a data centre can be completed in an astoundingly short four months, as is now being achieved at Colt’s London 3 data centre, 25 miles from London.

The 14-acre site has a 25 000 m² building, a former cold-storage warehouse, capable of accommodating up to 13 000 m² of usable IT space.

Colt moved onto the site in 2007 and upgraded the incoming electrical power supply of 8 MVA by adding a further 25 MVA at 132 kV to meet all future development. There are also four points of entry for fibre-optic cables.

Work started on two halls within the main building in 2007. They were built in the traditional way and completed in 2010.

Then the new approach was implemented; it has already seen the completion of four 500 m² data halls, double stacked. There is space for a further 14 halls of 500 m² each and four 375 m² halls, all double stacked.

These high-performance halls are designed to achieve a PUE (power usage effectiveness) of 1.21. That figure means that every 1 kW consumed by the IT systems is supported by 210 W for support services, including UPS and cooling.
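As a quick check of that figure, here is a minimal sketch of the PUE arithmetic; the variable names are illustrative and the numbers are simply those quoted above.

```python
# Minimal sketch of the PUE arithmetic quoted above (illustrative only).
it_load_kw = 1.00       # power drawn by the IT equipment
support_load_kw = 0.21  # UPS, cooling, lighting and other support services

pue = (it_load_kw + support_load_kw) / it_load_kw
print(f"PUE = {pue:.2f}")  # 1.21
```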

The key to the speed of construction of these new data halls is something the building-services industry is familiar with: prefabrication and off-site construction, though not on this scale. Nor does space need to be ready and waiting for a client; anyone who wants space at a Colt site can be guaranteed it will be ready in just four months.

Modules for a new hall are built in Newcastle and are 16 m long, 3 m wide and 4 m high, the largest size that can be delivered by road in Europe without requiring a police escort. Twelve modules make a data hall.

These modules are fully fitted out with services such as overhead busbars to provide electricity to the racks, deep floor voids penetrated only by structural columns (there are no columns to support floor panels), smoke detectors, a fire-suppression system and lighting.

Plant such as cooling units, transformers, switchgear and uninterruptible power supplies is also built off site. When the hall itself has been assembled, these services are arranged around its perimeter, so they do not impinge on activities within the hall. What is more, all maintenance on these services can be carried out outside the hall itself. This is a Tier 3+ data centre.

Only 20 people are needed on site to put together such a data hall, compared with perhaps 120 for the traditional approach.

This 500 m² data hall at Colt’s London 3 data centre is assembled from 12 modules built off site in Newcastle and delivered to site, reducing the build time to about four months, compared with 14 months for a traditional approach.

Eaton-Williams became involved in the modular concept in late 2008 to develop the cooling systems. That was at the conceptual stage, and the cooling units it developed are 100% bespoke.

Jeff Muir, engineering manager with Eaton-Williams, explains, ‘Eaton-Williams engineered a high-quality, flexible and innovative solution specifically designed to meet Colt’s demanding performance specification. With power costs a major concern for data centres, optimising energy efficiency was a key objective and addresses Colt’s environmental concerns.’

The cooling strategy is designed to maximise the use of free cooling with ambient air. The ambient air is drawn into the main building that encloses the data suites and is then drawn into the cooling units by three large EC plug fans.

The design conditions are an internal temperature of 18 to 27°C and relative humidity of 20 to 80%; wider or narrower conditions can be set from the microprocessor display during commissioning.

Each 500 m² modular data hall has 12 CTF cooling units around its perimeter to provide N+1 resilience. They alternate with cabinets housing the UPSs. The CTF units are built at Eaton-Williams’s factory in Stoke-on-Trent. For ease of installation on site, all external connections are plug and socket. Eaton-Williams also makes the cabinets for the UPSs and fits them out.

There are three stages to the cooling strategy, according to the temperature and humidity of the outside air.

For over 8200 h of the year, free cooling using ambient air meets the entire cooling requirement. There are two approaches; which is used is determined by the need to prevent relative humidity in the data hall from falling below an acceptable level.

If there is no problem with relative humidity, direct free cooling is used. This is the optimum cooling mode and only requires running the main fans to deliver filtered fresh air to the data hall. As the outside temperature falls, fresh air and warm air are mixed to maintain the required supply-air temperature and relative humidity. Room pressure relief is provided internally by the unit to allow hot air from the room to escape to the outside of the building.

If introducing fresh air into the space would result in relative humidity becoming unacceptably low, such as when the absolute humidity of the air is very low, indirect free cooling is provided.

Outside air is passed over a coil to cool glycol, which is then pumped to another coil to cool air drawn from the space. The cooled air is then returned to the space.

For the relatively few hours of the year when fresh air alone cannot provide sufficient cooling, mechanical refrigeration comes into play to provide DX cooling using R407C. Fresh air alone can meet the cooling requirements at outside temperatures up to 24°C. Even with outside air at 27°C, some free cooling can be achieved by pre-cooling return air from the data hall using a run-around coil; the air is then further cooled by the refrigeration system. The system modulates the variable-refrigerant-flow compressors and the EC variable-speed fans to match the cooling requirement and minimise energy consumption. There are two compressors in each CTF unit, with separate refrigeration circuits and electronic expansion valves.
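To make the three-stage strategy concrete, the sketch below expresses the mode selection in Python. The thresholds (24°C from the paragraph above, 20% relative humidity from the design conditions) and the function itself are illustrative assumptions, not the actual Eaton-Williams controller logic.

```python
# Illustrative sketch of the three-stage cooling-mode selection described
# above. Thresholds and structure are assumptions for illustration only.

FREE_COOLING_LIMIT_C = 24.0   # fresh air alone can satisfy the load up to here
MIN_ROOM_RH_PERCENT = 20.0    # lower bound of the design humidity band

def select_cooling_mode(outside_temp_c, predicted_room_rh_percent):
    """Return the cooling mode for the current ambient conditions."""
    if outside_temp_c <= FREE_COOLING_LIMIT_C:
        if predicted_room_rh_percent >= MIN_ROOM_RH_PERCENT:
            # Stage 1: filtered fresh air delivered straight to the hall,
            # mixed with warm return air at low ambient temperatures.
            return "direct free cooling"
        # Stage 2: outside air cools a glycol circuit, which in turn cools
        # recirculated room air, so dry ambient air never enters the hall.
        return "indirect free cooling"
    # Stage 3: DX refrigeration (R407C), with a run-around coil
    # pre-cooling return air where the ambient still allows it.
    return "mechanical (DX) cooling"

print(select_cooling_mode(15.0, 45.0))  # direct free cooling
print(select_cooling_mode(15.0, 12.0))  # indirect free cooling
print(select_cooling_mode(28.0, 50.0))  # mechanical (DX) cooling
```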

Cooling air is delivered through the floor void at a slight positive pressure and emerges through floor grilles supplied by Eaton-Williams’s sister company Colman. Cool air is drawn through the server racks and emerges into an enclosed hot aisle, from where it is returned to the cooling units.

Compared with the traditional style of data centre, this approach to cooling is expected to reduce the energy costs for a 500 m² module by over £300 000 a year.

As more modular data centres are added to the site, improvements based on experience can be made. While the CTF units have seen little change between the installation of the first and second modular data centres, improvements have been made to the control algorithms to optimise the PUE. One change has been to control the speed of the fans according to the temperature difference between the ambient air and the return air. Those improvements have also been applied to the first project.
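A minimal sketch of that kind of fan-speed control is shown below. The speed limits, temperature breakpoints and linear mapping are assumptions for illustration only, not the actual Colt/Eaton-Williams control algorithm.

```python
# Hypothetical sketch: modulating EC fan speed from the difference between
# return-air and ambient temperature. All numbers are illustrative.

MIN_FAN_SPEED = 0.30   # keep a minimum airflow through the hall
MAX_FAN_SPEED = 1.00
DELTA_T_AT_MIN = 12.0  # °C difference at which minimum speed suffices
DELTA_T_AT_MAX = 2.0   # °C difference at which full speed is needed

def fan_speed(return_air_c: float, ambient_c: float) -> float:
    """Return a fan-speed fraction: the smaller the temperature
    difference, the harder the fans must work to reject the heat."""
    delta_t = max(return_air_c - ambient_c, 0.0)
    if delta_t >= DELTA_T_AT_MIN:
        return MIN_FAN_SPEED
    if delta_t <= DELTA_T_AT_MAX:
        return MAX_FAN_SPEED
    # Linear interpolation between the two operating points.
    span = DELTA_T_AT_MIN - DELTA_T_AT_MAX
    return MAX_FAN_SPEED - (delta_t - DELTA_T_AT_MAX) / span * (MAX_FAN_SPEED - MIN_FAN_SPEED)

print(round(fan_speed(return_air_c=35.0, ambient_c=20.0), 2))  # 0.3
print(round(fan_speed(return_air_c=30.0, ambient_c=26.0), 2))  # 0.86
```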

The use of the modular concept is not confined to Colt’s London 3 data centre; it can be applied to a customer’s choice of site anywhere in Europe.

And what of the opinion of Guy Ruddock, the architect of the modular concept? He says, ‘Cooling is an essential ingredient in our modular-data-centre solution. After having worked with Eaton-Williams for many years, we are excited that they are part of our modular development. Their ability to meet our demanding performance specifications and their highly efficient and reliable cooling solutions means that we have the ideal partner to deliver and develop modular solutions to our customers.’

Technical services such as cooling and uninterruptible power supplies for Colt’s modular data halls are built off site and installed around the perimeter of the hall.