Cooling Systems in Data Centers
STULZ KNOW-HOW
The exponential growth in data processing has driven the continuous evolution of cooling strategies for data centers. Energy efficiency, carbon footprint reduction, and adaptability to high-density environments are key factors driving innovation in HVAC systems. We are at a crossroads that could lead to a threefold increase in electricity demand for data centers between 2020 and 2030.
ENERGY IMPACT OF DATA CENTERS
Information and communication technology is demanding more and more energy, posing a major challenge for electricity generation and distribution infrastructures.
Moreover, energy production must rely on technologies that do not contribute to CO₂ emissions or other greenhouse gases, in order to avoid increasing the carbon footprint.
It is estimated that in 2024 data centers consumed around 460 TWh of electricity, some 1–2% of global demand, and short- to medium-term projections suggest that by 2030 this figure could exceed 900 TWh.
Spain is expected to become one of the key hotspots for new data center developments over the next five years, with several planned hyperscale projects that could triple the current installed capacity, positioning the country as a major digital hub in Europe. This is undoubtedly a challenge that all market stakeholders are already addressing, from data center construction and operation to the supporting infrastructure.
Within a data center, the highest energy consumption comes from the IT equipment itself, such as data processing servers, storage systems, and networking devices. However, it’s important to note that nearly all this energy is dissipated in the form of heat.
Depending on the type of cooling system used and the characteristics of the demand, cooling systems can account for between 30% and 50% of total energy consumption. This highlights the need to optimize cooling system efficiency.
It’s also important to consider that the characteristics of IT equipment (maximum allowable temperature, power density, compatibility with liquid cooling, acceptable humidity range, etc.) largely define the appropriate cooling system.
In this regard, we are also facing a technological shift toward systems with such high power density that traditional air-based cooling solutions can no longer cope, due to the physical limitations of air as a cooling medium. As a result, we’re now seeing the emergence of systems using direct liquid cooling, full immersion, or hybrid solutions that combine liquid and air-based cooling.
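To make the physical limitation of air concrete, here is a minimal sketch of the airflow a rack demands as its load grows. It assumes sea-level air properties and a typical 10 K temperature rise across the servers; real designs vary with altitude, supply temperature, and server fan curves.

```python
# Airflow needed to remove a given rack heat load with air.
# Assumes sea-level air (density ~1.2 kg/m3, cp ~1005 J/(kg.K))
# and a fixed server air temperature rise (delta_t_k).

RHO_AIR = 1.2      # kg/m^3, approximate air density
CP_AIR = 1005.0    # J/(kg.K), specific heat of air

def required_airflow_m3h(rack_kw: float, delta_t_k: float = 10.0) -> float:
    """Volumetric airflow (m3/h) to remove rack_kw with a delta_t_k air rise."""
    watts = rack_kw * 1000.0
    m3_per_s = watts / (RHO_AIR * CP_AIR * delta_t_k)
    return m3_per_s * 3600.0

for kw in (5, 15, 50, 100):
    print(f"{kw:>4} kW rack -> {required_airflow_m3h(kw):,.0f} m3/h of air")
```

At 50–100 kW per rack the required airflow runs into the tens of thousands of m³/h, which is exactly where liquid-based approaches take over.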
DATA CENTER COOLING REQUIREMENTS
Before listing the existing systems, let's review the key parameters that need to be controlled in cooling systems, as these will guide us in choosing the right solution.
First, we must consider the setpoint conditions for temperature and relative humidity required for proper server operation.
These parameters depend heavily on the type of servers. In standard equipment, the air leaving the servers is typically between 30 °C and 35 °C, but it is increasingly common to find servers whose exhaust air reaches up to 45 °C.
We can refer to the ASHRAE TC 9.9 guidelines for recommendations on temperature and humidity ranges.
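As a rough illustration, the commonly cited "recommended" dry-bulb band in the ASHRAE TC 9.9 thermal guidelines is about 18–27 °C. The humidity limits are defined via dew point and vary by equipment class and guideline edition, so the band in the sketch below is a simplified placeholder; consult the guidelines directly for design values.

```python
# Illustrative check of supply-air conditions against a simplified envelope.
# The 18-27 degC dry-bulb band matches the commonly cited ASHRAE TC 9.9
# "recommended" range; the RH band is a simplified stand-in, since the
# actual limits are defined via dew point and vary by class and edition.

def in_recommended_envelope(temp_c: float, rh_percent: float) -> bool:
    temp_ok = 18.0 <= temp_c <= 27.0
    rh_ok = 20.0 <= rh_percent <= 80.0   # placeholder for dew-point limits
    return temp_ok and rh_ok

print(in_recommended_envelope(24.0, 45.0))  # True: well inside the band
print(in_recommended_envelope(30.0, 45.0))  # False: too warm for "recommended"
```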
In addition to temperature and humidity conditions, another key factor that will determine the cooling system is the thermal load density to be dissipated from the servers.
Depending on this density, some systems may be unsuitable because they are unable to dissipate such high thermal loads. In this regard, an initial selection can be made based on the thermal load density per rack that each technology can handle.
Once a system has been selected based on the thermal load dissipation needs, energy efficiency must be taken into account to reduce power consumption and the carbon footprint. The goal is to lower the PUE (Power Usage Effectiveness) through efficiency measures such as air-side free cooling, chilled water free cooling, demand-based airflow reduction, and so on.
In some locations, or as part of a company's sustainability strategy, water consumption may need to be reduced or eliminated entirely. The reference value for this consumption is the WUE (Water Usage Effectiveness), which relates the annual amount of water used to the kWh consumed by the IT equipment. This will determine whether systems like direct or indirect free cooling with adiabatic cooling can be used.
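Both metrics reduce to simple ratios. The sketch below only illustrates the arithmetic; the annual figures are hypothetical.

```python
# PUE = total facility energy / IT energy (dimensionless, ideal -> 1.0).
# WUE = annual site water use (litres) / IT energy (kWh), in L/kWh.

def pue(total_facility_kwh: float, it_kwh: float) -> float:
    return total_facility_kwh / it_kwh

def wue(water_litres: float, it_kwh: float) -> float:
    return water_litres / it_kwh

# Hypothetical annual figures for a small facility:
it_energy = 8_000_000          # kWh consumed by servers, storage, network
facility_energy = 11_200_000   # kWh including cooling, UPS losses, lighting
water_used = 14_000_000        # litres, e.g. from adiabatic cooling

print(f"PUE = {pue(facility_energy, it_energy):.2f}")   # 1.40
print(f"WUE = {wue(water_used, it_energy):.2f} L/kWh")  # 1.75
```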
COOLING SYSTEMS FOR DATA CENTERS
We now look at the equipment that directly serves the data center: units that condition the air in terms of temperature and humidity in air-based solutions, or that circulate water or coolant in liquid cooling systems.
This section does not cover the chilled water generation and distribution systems that supply these units. We will begin with the systems used for lower thermal load densities and move toward those designed for higher-density applications.
1. CRAC/CRAH: A Standard in Perimeter Cooling
CRAC (Computer Room Air Conditioning) and CRAH (Computer Room Air Handler) systems have been the dominant solution for data center cooling for years.
- CRAC: Units that use compressors for cooling, operating similarly to traditional air conditioning systems.
- CRAH: Units that cool with chilled water heat exchangers, eliminating compressors in the unit itself (they sit in the central chiller plant) and allowing for greater energy efficiency.
Both systems distribute cold air under a raised floor or through ductwork, maintaining strict control over the data center’s temperature and humidity. Although they remain a reliable solution, their effectiveness decreases as heat densities per rack increase.
2. Fanwalls: Homogeneous and Flexible Air Distribution
Fanwall systems represent an evolution in air distribution within data centers. Their design consists of high-efficiency EC fan modules arranged at the rear or sides of the room, generating a uniform airflow that adapts to cooling needs.
Benefits:
- Elimination of hot spots by improving airflow distribution.
- Reduced power consumption thanks to speed-controlled EC fans that match airflow to demand.
- Greater adaptability to changes in thermal load.
This system is ideal for infrastructures aiming to optimize airflow without implementing entirely new solutions.
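The savings from demand-based airflow reduction follow directly from the fan affinity laws: for an ideal fan on a fixed system curve, flow scales linearly with speed while shaft power scales roughly with the cube of speed. A minimal sketch:

```python
# Fan affinity laws: flow ~ speed, power ~ speed^3 (ideal fan,
# fixed system curve), so part-load airflow is disproportionately cheap.

def fan_power_fraction(flow_fraction: float) -> float:
    """Approximate power fraction for a given flow fraction (cube law)."""
    return flow_fraction ** 3

for flow in (1.0, 0.9, 0.8, 0.7, 0.5):
    print(f"{flow:4.0%} airflow -> ~{fan_power_fraction(flow):5.1%} fan power")
```

Running at 70% airflow therefore needs only about a third of the fan power, which is why fanwalls with granular speed control pay off.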
3. Air Handling Units (AHU) with Direct and Indirect Free Cooling
To increase efficiency and reduce energy consumption, many facilities opt for customized AHUs with Free Cooling, which use outdoor air and adiabatic cooling instead of relying solely on mechanical systems.
- Direct Free Cooling: Introduces filtered outside air into the data center when temperature and humidity allow.
- Indirect Free Cooling: Separates outdoor and indoor air using heat exchangers, avoiding contamination of the data center environment.
These systems can significantly reduce the PUE and support sustainability strategies by minimizing mechanical cooling and water consumption.
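As an illustration of the control logic, the sketch below picks a cooling mode from outdoor conditions. The thresholds, humidity band, and heat-exchanger approach are assumptions for illustration only; real controllers work with enthalpy or dew-point limits and site-specific setpoints.

```python
# Illustrative free cooling mode selection. All thresholds are assumed
# values for the sketch, not vendor or standard figures.

def select_cooling_mode(outdoor_c: float, outdoor_rh: float,
                        supply_c: float = 24.0) -> str:
    if outdoor_c <= supply_c and 20.0 <= outdoor_rh <= 80.0:
        # Cool outside air within humidity limits: bring it in directly.
        return "direct free cooling"
    if outdoor_c <= supply_c - 3.0:   # allow for heat-exchanger approach
        # Cool but too dry/humid: exchange heat without mixing air streams.
        return "indirect free cooling"
    return "mechanical cooling"

print(select_cooling_mode(12.0, 50.0))  # direct free cooling
print(select_cooling_mode(12.0, 90.0))  # indirect free cooling
print(select_cooling_mode(30.0, 40.0))  # mechanical cooling
```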
4. In-Row and Rear Door Cooling: Localized Solutions for High Density
In-Row and Rear Door cooling systems have become popular in high-density data center configurations, where heat dissipation per rack exceeds 15–20 kW.
- In-Row Cooling: Cooling units placed between server racks that work in close proximity to the equipment, reducing the distance for cold air to travel and improving thermal efficiency.
- Rear Door Cooling: Heat exchanger panels mounted at the rear of the racks that capture heat before it enters the room, allowing for more effective dissipation.
These systems reduce the mixing of hot and cold air, optimizing thermal management in high-computing-density infrastructures. They can be combined with the aforementioned systems.
5. Liquid Cooling: The Future of High-Density Cooling
As thermal densities continue to rise, liquid cooling has become a key solution for hyperscale data centers and high-performance computing applications.
Types of Liquid Cooling:
- Direct-to-Chip Liquid Cooling: Cold plates mounted directly on processors dissipate heat through a liquid coolant circuit.
- Immersion Cooling: Servers are fully immersed in a dielectric fluid, completely eliminating the need for air-based cooling.
Advantages:
- Reduced energy consumption by reducing or eliminating server fans and optimizing heat transfer.
- Ability to handle racks with power demands in the 30–100 kW range and beyond (see the sketch below).
- Less physical space required compared to traditional air-based HVAC systems.
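That headroom comes from water's far higher volumetric heat capacity. As a rough comparison, assuming plain water at typical loop temperatures and a 10 K coolant rise (real direct-to-chip loops often use water/glycol mixes and smaller temperature rises):

```python
# Coolant flow to absorb a rack load: water vs air for the same heat.
# Assumes water at ~25 degC (cp ~4186 J/(kg.K), density ~997 kg/m3)
# and sea-level air; both with a 10 K coolant temperature rise.

def water_flow_m3h(rack_kw: float, delta_t_k: float = 10.0) -> float:
    cp, rho = 4186.0, 997.0                      # J/(kg.K), kg/m^3
    kg_per_s = rack_kw * 1000.0 / (cp * delta_t_k)
    return kg_per_s / rho * 3600.0

def air_flow_m3h(rack_kw: float, delta_t_k: float = 10.0) -> float:
    cp, rho = 1005.0, 1.2
    kg_per_s = rack_kw * 1000.0 / (cp * delta_t_k)
    return kg_per_s / rho * 3600.0

kw = 100.0
print(f"{kw:.0f} kW rack: {water_flow_m3h(kw):.1f} m3/h water "
      f"vs {air_flow_m3h(kw):,.0f} m3/h air")
# ~8.6 m3/h of water vs ~29,900 m3/h of air for the same heat load
```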
Conclusions
Each of these cooling solutions addresses specific needs within data centers. While CRAC/CRAH systems and Fanwalls remain reliable options, the evolution of workloads is driving the adoption of AHUs with Free Cooling, localized solutions such as In-Row and Rear Door cooling, and disruptive technologies like Liquid Cooling. The appropriate selection will depend on factors such as thermal load density, availability of water resources, and energy efficiency goals.
As data processing demand continues to grow, cooling will remain a key pillar in data center design and operation, driving innovations that enable greater sustainability and performance.