The equipment that IT puts into data centers today is stressing power and cooling systems in ways that were not fathomed when most data centers were built. In an era of “greener” living, it would seem we should be reducing power consumption, not increasing it. In reality, IT requires fewer watts per unit of computing power and per gigabyte of storage than it did years ago, yet it continues to need more power and cooling for the data center. This irony has its roots in what is currently the battle cry of most IT shops: Consolidation.
In today’s data centers, consolidation is essential. The operating costs of a highly distributed and “siloed” infrastructure can quickly eat into any business’ profitability, so the answer is to reduce the number of resources to manage, thus reducing operating expenses. But here’s the kicker: You can reduce the size and number of moving parts but you cannot take away the increasing need to process and store information. What you end up with is more storage and computing power in a far smaller space.
Packing more storage and computing capability into a smaller space is efficient: it extends the lifespan of the current facility by freeing space to absorb growth, it minimizes the number of resources to operate and manage, and it allows IT to do more with less. However, a heat and power load that was once spread across the data center is now compressed into a much smaller footprint. Making the hardware smaller, more powerful, and more efficient drives the density of power and heat to critical levels.
Do the math
Before we delve into “why should I care” and “what should I do about it,” consider some basic data-center environmental math. Consuming electrical power (watts) produces heat (commonly measured in British Thermal Units, or BTUs) at a rate of approximately 3.413 BTUs per hour for every watt consumed. Cooling is required to remove that heat from the data center as power is consumed, and one ton of cooling removes approximately 12,000 BTUs per hour. So in theory, if your cooling capacity matches your gross power consumption, you should be able to maintain a constant temperature in your data center. However, many IT managers are finding that this is not the case.
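To make that arithmetic concrete, here is a minimal sketch of the conversion for an assumed 100 kW equipment load; the load figure is purely illustrative, and only the 3.413 BTU/hr-per-watt and 12,000 BTU/hr-per-ton conversions come from the paragraph above.

```python
# Rough sizing sketch: convert an assumed IT load into heat output and
# the cooling tonnage needed to remove it. The 100 kW load is illustrative.

BTU_PER_WATT_HOUR = 3.413   # ~3.413 BTU/hr of heat per watt consumed
BTU_PER_TON_HOUR = 12_000   # one ton of cooling removes ~12,000 BTU/hr

def cooling_tons_required(load_watts: float) -> float:
    """Return the tons of cooling needed to remove the heat from a given load."""
    heat_btu_per_hour = load_watts * BTU_PER_WATT_HOUR
    return heat_btu_per_hour / BTU_PER_TON_HOUR

if __name__ == "__main__":
    load_watts = 100_000  # assumed 100 kW of IT equipment
    print(f"Heat output: {load_watts * BTU_PER_WATT_HOUR:,.0f} BTU/hr")
    print(f"Cooling required: {cooling_tons_required(load_watts):.1f} tons")
```

At that assumed load, roughly 28 tons of cooling are needed just to break even on heat removal.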
Most data center cooling systems today were deployed in a manner that assumes the load will not only be spread out evenly, but more importantly, that the load in any given area will never be far greater than its relative share of the total data-center space. Additionally, traditional cooling systems are designed and deployed as part of the permanent building systems.
According to the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), a typical data center with dense form-factor servers and storage devices averages 5,000 watts per square foot of equipment space, up from slightly more than 2,000 watts for the same space in 2002, representing more than a 2X increase in heat density in four years. In 1998, equipment was not as dense, and even the more densely packed environments averaged approximately 800 watts per square foot. That represents more than a sixfold increase in power per square foot in roughly eight years, and the increases are not slowing down.
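For reference, those figures work out to the following multiples; this is a trivial check using only the ASHRAE numbers already cited above.

```python
# Quick check of the density growth figures quoted above (watts per sq ft).
density_1998, density_2002, density_2006 = 800, 2_000, 5_000

print(f"2002 -> 2006: {density_2006 / density_2002:.2f}x")  # 2.50x in four years
print(f"1998 -> 2006: {density_2006 / density_1998:.2f}x")  # 6.25x in roughly eight years
```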
Considering the average server and storage equipment refresh cycle of three to five years, the power-consuming, heat-generating components of a data center are likely to turn over three or four times during the normal lifespan of a cooling system, which makes it difficult to intelligently plan and design that cooling system. A healthy business will grow and will require more computing power and storage capacity. Relocating a data center is not only difficult, but extremely expensive to plan and execute in a manner that minimizes downtime. However, continuing to rely on older technology that cannot keep up with the technical demands of the enterprise is not an option either.
Once the equipment is installed, several key factors drive heat production and power inefficiency. Chief among them: it takes power to move the heat created by the consumption of power. A study conducted by the Lawrence Berkeley National Laboratory showed that 49% of the total power consumed by the data center was non-computational (lighting, power distribution, UPS, generators, fans, and the cooling system itself). Even greater heat densities will multiply these effects, which is why today’s data-center planners need to make some wise decisions now, decisions that must assume the problem will get much worse.
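To put that figure in perspective, here is a quick sketch of what a 49% non-computational share implies per watt of computing load; the split reported by the study is the only input.

```python
# How much overhead power accompanies each watt of computational load,
# assuming the 49% non-computational share reported in the study.

non_computational_share = 0.49  # lighting, power distribution, UPS, fans, cooling
computational_share = 1 - non_computational_share

overhead_per_it_watt = non_computational_share / computational_share
total_per_it_watt = 1 / computational_share

print(f"Overhead per IT watt: {overhead_per_it_watt:.2f} W")          # ~0.96 W
print(f"Total facility draw per IT watt: {total_per_it_watt:.2f} W")  # ~1.96 W
```

In other words, at that split the facility draws nearly two watts, and must later remove the resulting heat, for every watt that actually reaches a server or storage system.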
There are several methods to combat these problems. For example, many companies simply spread the equipment out over a larger number of racks. This ends up consuming the same amount of power, but the density issues go away. The downside to this is that additional raised floor space is often needed and the space to create it may not be available. Additionally, this can result in large data centers that are sparsely populated and thus inefficient from a computing-power-per-square-foot perspective.
Power/cooling tips
Assuming you have enough cooling capacity for the equipment you have installed today, consider the placement of your equipment first. Adopting an alternating hot-aisle/cold-aisle layout can correct many problems in a typical data center. The principle is simple: understand the airflow of your server/storage equipment (where it takes cool air in and where it exhausts hot air). Typically, cool air is drawn in at the front and hot air is expelled out the back. By placing equipment with the same airflow characteristics in the same rack rows, you can create hot and cold aisles: rows face front-to-front across the cold aisles and back-to-back across the hot aisles, so that one rack’s hot exhaust is not drawn into the cold side of another. Ample vented floor tiles at the base of each rack in the cold aisle will provide the cool air the equipment needs.
While the hot/cold-aisle principle is a best practice, sometimes it falls short. This is particularly true for systems that reside near the top of the rack, and the reason is twofold. First, heat rises, so the bulk of the colder air ends up cooling systems that sit lower in the rack. Second, the velocity of the airflow being expelled at the rear can cause hot air to balloon back over the top of the rack and be drawn through the server/storage system on the cold side. In some cases the air eventually becomes superheated inside the chassis, which ultimately causes a failure or forces a controlled shutdown of the equipment.
Two ways to fix this situation are velocity control and zone enclosure. With velocity control, additional fans are installed at specific points in the data center to direct hot and cold airflow where it needs to go. In most cases, the velocity of the air blown through the floor and out the vent tiles is not enough to get an adequate amount of cold air to the top of the rack, especially when there are more vent tiles on the floor than the fans can effectively feed. Simply adding vent tiles without understanding how much air you can push through the floor can result in a sharp drop in your cooling ability.
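The dilution effect is easy to see with simple division; in the sketch below, the CRAC airflow figure and the tile counts are illustrative assumptions, not measured values.

```python
# Illustration of why adding vent tiles can dilute underfloor airflow.
# The CRAC fan capacity and tile counts are assumed, illustrative numbers.

crac_airflow_cfm = 12_000  # assumed total airflow the CRAC fans push under the floor

for tile_count in (20, 40, 80):
    cfm_per_tile = crac_airflow_cfm / tile_count
    print(f"{tile_count} vented tiles -> ~{cfm_per_tile:,.0f} CFM per tile")
```

In practice the drop-off is worse than straight division suggests, because underfloor static pressure also falls as the open area grows.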
Zone enclosure takes the alternating hot and cold aisles to an even greater level of isolation. With this method, the aisles are completely walled off from the top of the rack to the ceiling, creating isolated rooms in the data center, typically with doors at one or both ends for operator access to the front and back of the racks. While somewhat radical in design, it can be a cost-effective way of containing hot and cold airflow in smaller data centers. The thing to watch is airborne debris created by the construction process and the increased physical activity in the data center while the walls are built. Still, in many cases hot/cold aisles, and even the more radical zone-enclosure method, are band-aids when measured against the power and cooling requirements of the next generation of storage and computing equipment.
Water, water everywhere
Water is another possible solution. Water can move up to 3,500 times the amount of heat that the same volume of air can. Chilled water systems can also eliminate the need for mechanical systems on the data center floor. The adoption of chilled water as a commonplace cooling method in the data center will take some time, as many find it difficult to fathom pipes of running water snaking through their data centers. However, water was once a primary cooling method for large systems and has been proven to be reliable and safe in the data center.
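That 3,500X figure follows directly from the volumetric heat capacities of the two fluids; the property values in the sketch below are standard room-temperature approximations rather than numbers from the article.

```python
# Compare how much heat a given volume of water can carry versus the same
# volume of air, using approximate room-temperature properties.

water_density = 1000.0      # kg/m^3
water_specific_heat = 4186  # J/(kg*K)

air_density = 1.2           # kg/m^3 (sea level, ~20 C)
air_specific_heat = 1005    # J/(kg*K)

water_volumetric = water_density * water_specific_heat  # J/(m^3*K)
air_volumetric = air_density * air_specific_heat        # J/(m^3*K)

ratio = water_volumetric / air_volumetric
print(f"Water carries ~{ratio:,.0f}x the heat of the same volume of air")
```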
Water cooling of older monolithic systems was relatively easy to achieve, since those large machines gave system designers plenty of real estate for routing cooling lines. Super-dense computing platforms do not afford that luxury, so another way to apply water cooling is necessary.
Water cooling will become common at the rack level, and it can be implemented through a number of options, each with a varying level of complexity and ease of retrofit. In some cases, a heat exchanger is located within the rack and water is circulated in a closed loop inside the rack, cooled by traditional forced air from computer room air conditioning (CRAC) units. In other cases, the heat exchanger is located outside the rack; water is still pumped through a closed loop and chilled externally by a traditional CRAC unit. Both methods are hybrid air/water systems, but they can move more heat than air-only systems and can be retrofitted into a data center without significant changes to the existing structure.
The last example of water cooling uses a completely external chiller system and cools the electronics within a rack through a closed-loop chilled-water system with separate supply and return lines, usually routed through the data center floor. The chiller and heat exchanger are often located outside the data center and may use evaporative or refrigerated cooling. In the next-generation data center, this method may be the only viable way of moving the amount of heat tomorrow’s (and in some cases today’s) systems will generate.
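As a rough sense of scale for such a loop, the sketch below estimates the water flow needed to carry away a rack’s heat using the relationship heat load = flow rate x specific heat x temperature rise; the 30 kW rack load and the 10-degree supply/return difference are illustrative assumptions.

```python
# Estimate the chilled-water flow rate needed to remove a rack's heat load,
# using Q = m_dot * c_p * delta_T. The load and temperature rise are assumed.

rack_load_watts = 30_000    # assumed 30 kW rack
delta_t_celsius = 10.0      # assumed supply/return temperature difference
water_specific_heat = 4186  # J/(kg*K)

mass_flow_kg_per_s = rack_load_watts / (water_specific_heat * delta_t_celsius)
liters_per_minute = mass_flow_kg_per_s * 60  # ~1 kg of water per liter

print(f"Required flow: {mass_flow_kg_per_s:.2f} kg/s (~{liters_per_minute:.0f} L/min)")
```

Moving the same heat with air alone, at the same temperature rise, would take roughly 3,500 times the volume of fluid, which is the core argument for bringing water closer to the rack.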
The first line of defense will always be awareness. Data center planners, facility engineers, and the server/storage/communication engineers must work in concert to forecast and plan deployment of equipment in the data center. Having an independent facility assessment by a third-party partner can help you understand how to ease the cooling burden today and adequately plan for it tomorrow. Power and cooling will become the most significant considerations in all data centers, and the foundations for dealing with those considerations should be laid now.