25 March 2019
Data centers have gone from being largely unnoticed to one of the most important pieces of infrastructure in the global digital economy. They host everything from financial records to Netflix movies, and the industry built around them is now worth many billions of dollars. Designing, building and supporting data centers requires strategic planning and careful construction, in order to keep clients’ mission-critical data secure and available 24/7.
There are many factors that must be addressed when designing and building a data center. For starters, it’s all about power - finding it and managing it.
Data centers require an incredible amount of electricity to operate, and securing that electricity often requires direct coordination with regional utilities. Energy infrastructure needs to be shifted, power lines need to be run and redundancies need to be established. The most secure data centers have two separate utility feeds, so that if something happens to one of the lines (say, an unexpected squirrel attack), the center doesn’t immediately lose all of its functionality.
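To see why a second feed is worth the trouble, here is a minimal sketch of the availability math, assuming (purely for illustration, not a figure from this article) that each feed is independently available 99.9% of the time:

```python
# Hedged sketch: expected downtime with one vs. two independent utility feeds.
# The 99.9% per-feed availability is an illustrative assumption.

def combined_availability(a_feed: float, n_feeds: int) -> float:
    """Probability that at least one of n independent feeds is up."""
    return 1 - (1 - a_feed) ** n_feeds

single = 0.999                              # one feed (assumed)
dual = combined_availability(single, 2)     # two independent feeds

hours_per_year = 8760
print(f"single feed downtime: {(1 - single) * hours_per_year:.2f} h/yr")
print(f"dual feed downtime:   {(1 - dual) * hours_per_year:.4f} h/yr")
```

Under these assumptions a second independent feed cuts expected downtime from hours per year to well under a minute; the real benefit depends on how independent the two feeds actually are.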
Coordinating that takes a lot of effort, and often the clout of a large corporation to get anywhere. But even the big players need to check the policies of utilities and local governments in any area where they plan to build a data center, to make sure they will actually be able to establish those redundant feeds. Without that redundancy, a data center is vulnerable to power outages that could result not only in the loss of critical customer data, but also in lasting damage to the operator’s brand.
The price and availability of that power are also incredibly important considerations, because a data center is a large, constant draw. With a significant amount of power going into computing, and even more going into cooling, it’s no surprise that data centers consume more than 1.8% of all electricity used in the United States. Here too, companies planning data centers need to work with local governments and utilities on subsidies and deals that can make that energy easier to afford.
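To get a rough sense of the money involved, here is a back-of-envelope sketch; the 5 MW facility draw and $0.07/kWh rate are illustrative assumptions, not figures from this article:

```python
# Hedged sketch: annual electricity cost for a facility of a given size.
# Both inputs below are illustrative assumptions.

def annual_energy_cost(draw_kw: float, price_per_kwh: float) -> float:
    """Cost in dollars of running a constant load for one year."""
    hours_per_year = 24 * 365
    return draw_kw * hours_per_year * price_per_kwh

cost = annual_energy_cost(draw_kw=5_000, price_per_kwh=0.07)
print(f"${cost:,.0f} per year")
```

Even a penny-per-kilowatt-hour difference in the negotiated rate moves this figure by hundreds of thousands of dollars a year, which is why those utility deals matter.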
Much of the support infrastructure in data centers is focused on making sure that their power cannot be interrupted. According to research by the Ponemon Institute, the average cost of a data center outage in 2016 stood at $740,357, up 38% from when the report was first developed in 2010. That’s $8,851 per minute of lost revenue and unproductive employees (“e-mail is down, time to play Fortnite!”). A great deal of engineering attention has therefore been paid to keeping data centers operational in any kind of crisis.
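As a quick sanity check on the Ponemon figures above, dividing the average outage cost by the per-minute cost gives the implied length of an average outage:

```python
# Hedged sketch: implied average outage duration from the cited figures.
avg_outage_cost = 740_357   # USD, average outage cost (2016 report)
cost_per_minute = 8_851     # USD per minute of downtime

implied_minutes = avg_outage_cost / cost_per_minute
print(f"implied average outage: {implied_minutes:.1f} minutes")
```

Roughly an hour and a half of downtime per incident, on average, under these figures.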
Uninterruptible power supplies (UPS) - powerful batteries that can start providing power almost instantaneously - are critical for this effort. They ensure that in an emergency, power comes back on in milliseconds, instead of the seconds or minutes that could result in the loss of data or functionality for thousands of computer systems. But most UPS systems don’t serve as backup power for long. They simply don’t have the storage capacity to power a data center for more than a matter of minutes. To keep data centers fully running without utility power, data center operators usually turn to large diesel-powered generators, stocked with 24-48 hours of fuel at all times, in case they’re needed.
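A minimal sketch of why the UPS only needs to bridge a short gap, using assumed (not sourced) capacity and load figures:

```python
# Hedged sketch: how long a battery bank can carry a facility's critical load.
# The 500 kWh UPS bank and 2,000 kW load are illustrative assumptions.

def ups_runtime_minutes(capacity_kwh: float, load_kw: float) -> float:
    """Minutes a fully charged UPS bank can hold a constant load."""
    return capacity_kwh / load_kw * 60

runtime = ups_runtime_minutes(capacity_kwh=500, load_kw=2_000)
print(f"UPS holds the load for {runtime:.0f} minutes")

# Diesel generators, which take seconds to start, then carry the facility
# for as long as the fuel stock lasts.
```

The UPS bridges the seconds between the outage and generator start; the generators' 24-48 hours of fuel cover everything after that.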
All of this redundancy is required because of the incredible amount of energy that data centers use. But the other key factor in a data center’s success is the efficiency with which that energy is used. That starts with the organizational strategy used for cooling.
Data centers are carefully planned structures. Every square foot needs to contribute to the wider goals of powerful and efficient computing. You can’t just slam server racks together, because their placement needs to fit in with the cooling system used to prevent overheating.
Data centers run hot, and today’s advances in High-Performance Computing (HPC) mean that they are using as much as five times more energy than they used to. This makes a cooling solution one of the most important decisions that a data center operator can make.
By far the most common data center cooling method involves airflow, using HVAC systems to control and lower the temperature as efficiently as possible. These systems typically combine computer room air conditioning (CRAC) units with hot-aisle/cold-aisle rack layouts that keep hot exhaust air from mixing with the cold supply air.
While liquid cooling has historically been the domain of enterprise mainframes and academic supercomputers, it is being deployed more and more in data centers. More demanding workloads driven by mobile, social media, AI and the IoT are leading to increased power demands, and data center managers are scrambling to find more efficient alternatives to air-based cooling systems.
The liquid cooling approach can be hundreds of times more efficient and use significantly less power than typical HVAC cooling systems. But the data center market is still waiting for some missing pieces of the puzzle, including industry standards for liquid-cooling solutions and an easy way for air-cooled data centers to make the transition without having to manage two cooling systems at once. Still, liquid cooling will likely become the norm in years to come, as the growing need for more efficient cooling shows no signs of slowing.
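One physical reason liquid cooling wins: water carries vastly more heat per unit volume than air. A quick sketch with standard textbook property values at room temperature:

```python
# Hedged sketch: heat carried per cubic metre per kelvin of temperature rise,
# for air vs. water. Property values are standard approximations at ~25 C.

def volumetric_heat_capacity(density_kg_m3: float, cp_j_kgk: float) -> float:
    """Heat capacity per unit volume, in J per m^3 per K."""
    return density_kg_m3 * cp_j_kgk

air = volumetric_heat_capacity(1.2, 1005)     # ~1.2 kJ/m^3/K
water = volumetric_heat_capacity(997, 4186)   # ~4.2 MJ/m^3/K

print(f"water carries ~{water / air:.0f}x more heat per unit volume")
```

This compares heat carried per unit volume of coolant, not whole-system efficiency, which also depends on pumps, fans and heat exchangers - hence the article's more conservative "hundreds of times" figure.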
Building a data center is about executing an extremely complex plan, with input from experts in wide-ranging fields. Firms thinking about building their own data center should consult with experts who have dealt with these specific difficulties before, to make sure that all of these core areas can be built without incident. Modern data centers are planned down to the last wire in Building Information Modeling (BIM) applications and similar software, so that the outcome is as close to guaranteed as possible before the first wall is erected. Data centers are the key arteries of the digital economy, funneling data between consumers, companies, governments and citizens. That takes a lot of energy!