Treat the cause not the symptom: Data centres and energy

By Marcin Bala, CEO of telecoms network specialist Salumanus
How engineers can better manage energy usage to cool data centres more effectively

As data centres expand, much of the industry's attention in recent years has gone to cooling technology. However, it's important to remember that while better cooling provides symptomatic relief, it doesn't address the root cause of the problem: heat generation. Here, Marcin Bala, CEO of telecoms network specialist Salumanus, explains how engineers can take a better approach to cooling in data centres.

Short of submerging a data centre in icy waters, relocating it to colder climes such as Norway, or piping its waste heat into a district heating network, operators typically need extensive cooling systems to extract that heat. It's no surprise, then, that the data centre cooling market is projected to reach a value of $15.7bn by 2025.

These cooling systems account for anywhere from 30 to 50 per cent of a data centre's total electricity use and, according to the IEA, data centres and data transmission networks are responsible for 1-1.5 per cent of global electricity use. Given that Europe's energy crisis is far from over, with many analysts predicting it could last until 2024, reducing heat output may hold the answer to lowering both costs and emissions.

Heat and power

The relationship between power consumption and heat output is linear: nearly all of the power consumed by an IT device is converted into heat. This means that if you reduce power consumption, you directly reduce heat output, and with it the amount of cooling capacity required.
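
As a rough illustration, the sizing logic can be written in a few lines of Python. This is a minimal sketch that assumes, for simplicity, that all IT power becomes heat; the function name and the 1 MW figure are illustrative, not taken from any specific tool:

    # Nearly all IT power becomes heat, so the heat to remove
    # tracks the IT load almost one to one.
    def required_cooling_kw(it_load_kw: float, heat_fraction: float = 1.0) -> float:
        """Cooling capacity (kW) needed for a given IT load (kW)."""
        return it_load_kw * heat_fraction

    # A 1 MW IT load implies roughly 1 MW of heat removal.
    print(required_cooling_kw(1_000))  # 1000.0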

For example, take a common piece of hardware used in nearly all data centres: an optical transceiver module, such as the 100 Gigabit Ethernet QSFP28 100G CWDM4. This device consumes around 3.5 W, and we have customers that purchase more than 10,000 units every year. We recently developed a third generation of this device that reduces power consumption to between 2.5 and 2.7 W. Multiplying this 0.8 W saving per unit by 10,000 units running 24 hours a day, 365 days a year, cuts energy consumption by at least 70 MWh per year.
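
For readers who want to check the arithmetic, here it is as a short Python sketch, using the conservative end of the quoted range (the variable names are illustrative):

    units = 10_000
    saving_w = 3.5 - 2.7                 # 0.8 W per transceiver, conservative end
    hours_per_year = 24 * 365            # always-on operation
    saving_mwh = units * saving_w * hours_per_year / 1_000_000  # Wh -> MWh
    print(f"{saving_mwh:.0f} MWh per year")  # ~70 MWh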

Scaling sustainably

However, there are even bigger heat savings to be made. As well as reducing the power consumption of each device, data centre managers and optical networking engineers should look at ways of reducing the total number of devices on the network.

As bandwidth requirements increase, so does the amount of rack space used. The problem is that, because most data centres are still air cooled, racks are intentionally underutilised to prevent them from overheating, which points towards unsustainable expansion of floor space in the coming years.

According to Research and Markets, the global data centre market is expected to grow by 73 per cent over the next four years. Key drivers include the development of hyperscale facilities of over 200 MW, growth in developing markets, and the introduction of advanced liquid cooling technologies.

While trends like immersion liquid cooling and making data centres ‘sustainable by design’ will become more cost effective over time, they are currently capital-intensive undertakings: smaller hyperscale facilities can cost $200mn, and the largest sites can run to around $1bn or more. In the short term, operators can instead focus on improving power efficiency and reducing the complexity of their network architectures.

For example, given that a standard switch has 32 QSFP28 ports, installing 10,000 100G transceivers would require around 313 switches. We can reduce that number drastically by upgrading from 100G to 400G transceivers: because each 400G port carries the bandwidth of four 100G ports, the same total bandwidth needs only 2,500 transceivers, and therefore around 79 switches.
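
The port arithmetic, as a minimal Python sketch using the figures above (variable names are illustrative):

    import math

    ports_per_switch = 32
    links_100g = 10_000

    switches_100g = math.ceil(links_100g / ports_per_switch)   # 313
    # each 400G port replaces four 100G ports
    links_400g = math.ceil(links_100g / 4)                     # 2,500
    switches_400g = math.ceil(links_400g / ports_per_switch)   # 79
    print(switches_100g, switches_400g)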

In terms of energy, a 100G switch consumes around 600 W, so the four switches replaced by a single 400G switch draw 2.4 kW between them, whereas the one 400G switch consumes 1.3 kW: a saving of 1.1 kW per replacement. Scaled across the fleet, around 313 100G switches drawing roughly 188 kW give way to around 79 400G switches drawing roughly 103 kW. That continuous saving of about 85 kW, multiplied by 24 hours a day and 365 days a year, cuts energy consumption by around 750 MWh per year. Add the earlier transceiver saving and we’re looking at a total of around 815 MWh a year from these two small changes.
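
And the energy side of the calculation, again as an illustrative sketch rather than a measured result:

    watts_100g_switch = 600
    watts_400g_switch = 1_300
    hours_per_year = 24 * 365

    fleet_100g_kw = 313 * watts_100g_switch / 1_000   # ~188 kW
    fleet_400g_kw = 79 * watts_400g_switch / 1_000    # ~103 kW
    saving_mwh = (fleet_100g_kw - fleet_400g_kw) * hours_per_year / 1_000
    print(f"~{saving_mwh:.0f} MWh per year")          # ~745 MWh, ~750 with rounding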

Not only would this significantly reduce the cooling capacity required, it would also considerably cut power consumption itself, which is especially important given the current energy crisis in Europe.

Another benefit of switching to next-generation transceivers is scalability and futureproofing. Moving from 100G CWDM4 technology to Single Lambda technology such as 100G FR1 allows a 100G device to be connected directly to a 400G transceiver such as the QSFP-DD 400G 4FR1.

Because single-lambda technology also underpins the 800G and 1.6T speeds we expect to see in the future, the hardware will have a longer usable life and generate less e-waste, while remaining backwards compatible with current 100G devices. This allows data centres to scale rack space efficiently in the coming years, meeting higher bandwidth requirements while keeping cooling demand to a minimum.
