
Efficiency in the data center

 

STULZ KNOW-HOW

One of the most topical issues in the data center sector is achieving operational efficiency: efficiency understood both as minimizing energy consumption and as treating the conditioned space effectively and uniformly.

In this article we review the most common measures, or metrics, used in the sector to define efficiency objectively, including some recently introduced ones linked to the global climate situation.

In addition, we address other issues relevant to maximizing these metrics, such as current changes in the service conditions to be maintained and the free cooling systems now appearing in data center designs.

 

METRICS FOR DATA CENTER ENERGY MANAGEMENT

What do they indicate and where do they come from?

The first metrics that come to mind in the sector are PUE and DCiE.

PUE is the acronym for Power Usage Effectiveness, and is the value obtained by dividing the energy consumed by all the facilities in the data center by the energy delivered to the IT equipment.

PUE = Total energy consumption required by the data center / Total energy consumption required by the IT equipment.

DCiE is the acronym for Data Center infrastructure Efficiency, and is the inverse of the previous metric, i.e. the total energy consumption required by the IT equipment divided by the total consumption of the data center, usually expressed as a percentage.

Therefore, these are indicators that relate the total consumption of the data center to the specific consumption of the IT equipment. As detailed in various documents from the metric's originator, The Green Grid, IT equipment energy includes the energy associated with all IT equipment (computing, storage and networking equipment) along with ancillary equipment (switches, monitors, and workstations/laptops used to control the data center).

Total facility energy, on the other hand, includes all IT equipment energy plus the energy used by power delivery components (UPS systems, switchgear, generators, power distribution, batteries and distribution losses external to the IT equipment), HVAC system components (cooling units, cooling towers, pumping systems, computer room air handlers and computer room air conditioning units (CRACs)) and other miscellaneous loads, such as data center lighting.
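To make the arithmetic behind these definitions concrete, here is a minimal sketch that sums the component loads listed above and computes both metrics. All figures are hypothetical annual values, chosen only to illustrate the calculation:

```python
# Minimal sketch of the PUE and DCiE arithmetic described above.
# All figures are hypothetical annual energies in kWh.

it_equipment_kwh = 1_000_000    # computing, storage and networking gear
power_delivery_kwh = 120_000    # UPS, distribution and battery losses
hvac_kwh = 350_000              # cooling units, towers, pumps, CRACs
misc_kwh = 30_000               # lighting and other miscellaneous loads

total_facility_kwh = it_equipment_kwh + power_delivery_kwh + hvac_kwh + misc_kwh

pue = total_facility_kwh / it_equipment_kwh          # 1.50 in this example
dcie = it_equipment_kwh / total_facility_kwh * 100   # 66.7 %, the inverse of PUE

print(f"PUE  = {pue:.2f}")
print(f"DCiE = {dcie:.1f} %")
```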

As mentioned above, this metric was created back in 2007 by The Green Grid, a non-profit IT industry organization, and it is the most widespread method for measuring consumption and efficiency in data centers. It was published as a standard under ISO/IEC 30134-2:2016 and, in Europe, is currently covered by EN 50600-4-2:2019.

How is it measured? What values are considered optimal?

The standards and their accompanying documentation establish how to measure it, defining up to four categories of PUE according to the level of precision or quality sought in the measurement.

The main differences between these categories lie in:

  • The way the energy measurement interval is defined, which can be, in the strictest case, 15 minutes or less and, in the most relaxed case, up to monthly periods (see the sketch after this list).
  • The energy measurement point, which can range from the energy connections of the infrastructure to the connection point of the IT systems.
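To illustrate how the measurement interval shapes a reported PUE, here is a minimal sketch. It assumes paired energy-meter readings (total facility kWh, IT kWh) accumulated at the chosen interval; the function name and all figures are hypothetical:

```python
# Minimal sketch of interval-based PUE reporting from paired
# (total facility kWh, IT kWh) meter readings. Figures are hypothetical.

def interval_pue(readings):
    """Average PUE over a reporting period from (total_kwh, it_kwh) pairs."""
    total = sum(t for t, _ in readings)
    it = sum(i for _, i in readings)
    return total / it

# Stricter category: frequent (e.g. 15-minute) energy samples.
samples_15min = [(410.0, 270.0), (395.0, 268.0), (430.0, 272.0)]
print(f"PUE from 15-minute samples: {interval_pue(samples_15min):.2f}")

# Most relaxed category: a single pair of monthly meter readings.
monthly_totals = [(1_180_000.0, 790_000.0)]
print(f"PUE from monthly totals:    {interval_pue(monthly_totals):.2f}")
```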

The PUE is therefore an indicative measure that tells us the energy efficiency of the data center from a strictly energy-related point of view. And, as we know, measuring and establishing energy monitoring guidelines is the first step in any improvement process.

The PUE ranges between 1 and infinity. A value of 1 would indicate 100% efficiency, the ideal and purely theoretical case. Most sector studies on data centers consider values below 2.0 to represent average efficiency, while extremely efficient infrastructures can reach values around 1.2.

OTHER METRICS TO CONSIDER

Although the energy metrics above are the predominant ones in the sector, given the global environmental situation, complementary metrics are beginning to be proposed by different sectors and companies to account for the environmental dimension of the resources used in conditioning and managing the data center. It should be borne in mind that approximately 1% of the world's energy is already used in data centers, and that the techniques and resources used in their operation have expanded.

Among the metrics that are gradually being adopted across the sector, the following stand out:

  • Greenhouse Gas Emissions (GHG). This establishes, in its different scopes, the emissions of gases into the atmosphere linked to the operation of the data center. "Scope 1" considers the direct emissions produced from sources controlled or owned by the data center organization. "Scope 2" covers location-based emissions, i.e. those associated with the electricity grids at the data center's location, within a defined geographical area and a defined period. And "Scope 3" covers other indirect emissions, e.g. from the value chain, business travel and waste management linked to the data center. Within the corporate social responsibility of many companies and the associated decarbonization plans, these are already relevant points for reflection and action.
  • Carbon Usage Effectiveness (CUE), which measures the carbon emissions of the data center. It relates the annual CO2 emissions of the data center to the annual energy demand of the IT equipment. It covers similar ground to Scopes 1 and 2 above, but is normalized to the IT load of the data center.
  • Water Usage Effectiveness (WUE), which relates the annual water consumption of the data center to the energy consumed by the IT equipment (both CUE and WUE are illustrated in the sketch after this list).
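The following minimal sketch shows the arithmetic of the two normalized metrics above. The grid emission factor and all other figures are hypothetical, chosen only to illustrate the calculation:

```python
# Minimal sketch of the CUE and WUE arithmetic. All figures are
# hypothetical annual values.

total_facility_kwh = 1_500_000   # annual energy use of the whole data center
it_equipment_kwh = 1_000_000     # annual energy delivered to IT equipment
grid_kgco2_per_kwh = 0.25        # assumed emission factor of the local grid
water_litres = 2_000_000         # annual site water usage (e.g. adiabatic cooling)

# CUE: CO2 emitted by the total facility energy, normalized to the IT load.
cue = total_facility_kwh * grid_kgco2_per_kwh / it_equipment_kwh

# WUE: annual water usage normalized to the IT load.
wue = water_litres / it_equipment_kwh

print(f"CUE = {cue:.3f} kgCO2/kWh")   # 0.375 kg of CO2 per IT kWh
print(f"WUE = {wue:.2f} L/kWh")       # 2.00 litres per IT kWh
```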

 

TREATMENT SYSTEMS, OPERATION AND ENERGY EFFICIENCY

Based on the above, it is clear that the sector is evolving along a dual path: optimizing consumption and meeting energy needs in the most environmentally coherent way. These strategies include the following trends, in which we are directly involved:

Use of free cooling

Data centers operate more efficiently when their cooling systems are optimized. Here, the most appropriate approach is free cooling, which consists of using low outdoor temperatures to cool the facilities. Its adoption is increasing thanks to its benefits in energy efficiency and operating costs. It is no coincidence that the latest specialized sector publications stress its relevance and provide guidelines for its selection and sizing. Among the technologies described and recommended, depending on the type of data center to be cooled, are the following possibilities:

  • Direct expansion free cooling
  • Free cooling with a dual coil for direct expansion and chilled water
  • Free cooling with direct refrigerant pumping in direct expansion systems
  • Indirect evaporative cooling
  • Direct evaporative cooling
  • Adiabatic free cooling combined with the room air conditioner

There is no single way to select the right free cooling system for a specific data center. Many factors, such as geographical location, user requirements, building type or expected operating costs, can influence the selection and sizing. STULZ has specialized technical consultants who will advise you on the most suitable solution for the specific site under development.
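As a rough illustration of why geographical location weighs so heavily in this selection, the following sketch estimates how many hours per year outdoor air alone could carry the cooling load. The switchover threshold and the synthetic temperature profile are hypothetical placeholders; a real study would use site weather data and the manufacturer's switchover conditions:

```python
# Rough sketch of estimating free cooling potential from hourly outdoor
# temperatures. The threshold and the synthetic profile are hypothetical.

import math

def synthetic_hourly_temps():
    """Crude sinusoidal year of hourly outdoor temperatures, in degrees C."""
    for hour in range(8760):
        seasonal = 12.0 - 10.0 * math.cos(2 * math.pi * hour / 8760)  # 2..22 C
        daily = 4.0 * math.sin(2 * math.pi * (hour % 24) / 24)        # +/- 4 C
        yield seasonal + daily

FREECOOLING_THRESHOLD_C = 18.0  # assumed: below this, outdoor air carries the load

hours = sum(1 for t in synthetic_hourly_temps() if t <= FREECOOLING_THRESHOLD_C)
print(f"Estimated free cooling hours: {hours} of 8760 ({hours / 87.60:.0f} %)")
```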

 

Temperature management

Another relevant measure, wherever the operation and typology of the data center allow it, is management of the interior temperature. Bearing in mind that most electronics manufacturers set upper thermal-stress limits of no more than 30-35 °C, there is some room to allow temperature variability while achieving adequate uniformity in operation. There are several points to consider:

  • The ASHRAE design guide (from ASHRAE Technical Committee 9.9 (TC 9.9), Mission Critical Facilities, Technology Spaces, and Electronic Equipment) is adapting and extending the operating range of data centers according to their criticality and typology, allowing supply air temperatures from 18 to 27 °C and very wide humidity limits of 20-70%.

  • Monitoring. It is increasingly common to multiply the monitoring points in the treated area in order to identify hot spots and their associated problems (a minimal sketch of such a check follows this list). ASHRAE sets clear guidelines on the location and type of probes needed for the correct energy performance of the installation.
  • More common techniques such as hot and cold air containment (physical barriers that create preferential, more efficient directional airflows) can be complemented with local treatments beneficial to the data center.
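To show the kind of check this monitoring enables, here is a minimal sketch that flags sensor readings outside the supply air envelope quoted above. The sensor names and readings are hypothetical placeholders:

```python
# Minimal sketch of flagging hot spots from room sensors against the
# 18-27 C / 20-70 % RH envelope quoted above. Sensor names and
# readings are hypothetical.

TEMP_RANGE_C = (18.0, 27.0)
HUMIDITY_RANGE_PCT = (20.0, 70.0)

readings = {
    "rack_a1_inlet": (24.5, 45.0),   # (temperature C, relative humidity %)
    "rack_b3_inlet": (28.9, 38.0),   # hot spot: above the temperature range
    "rack_c2_inlet": (22.0, 16.5),   # too dry: below the humidity range
}

for sensor, (temp_c, rh_pct) in readings.items():
    issues = []
    if not TEMP_RANGE_C[0] <= temp_c <= TEMP_RANGE_C[1]:
        issues.append(f"temperature {temp_c} C outside {TEMP_RANGE_C}")
    if not HUMIDITY_RANGE_PCT[0] <= rh_pct <= HUMIDITY_RANGE_PCT[1]:
        issues.append(f"humidity {rh_pct} % outside {HUMIDITY_RANGE_PCT}")
    status = "; ".join(issues) if issues else "within envelope"
    print(f"{sensor}: {status}")
```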

STULZ has the technical support to make the implementation and monitoring of these measures a reality in your projects, optimizing the commissioning of the air conditioning and facilitating its operation and maintenance.