It’s an increasingly important issue: there are around 10,600 data centres globally, consuming around 5% of global energy, and in Europe alone they are responsible for 6.3 million tonnes of CO2 each year. As a result, we are seeing much greater experimentation and innovation in the use of green power and in deploying new technology that consumes less power.
Many technology players are deploying the Internet of Things (IoT), artificial intelligence (AI) and machine learning (ML) to bring down the Power Usage Effectiveness (PUE) metric of data centres. Using AI, it has been possible to reduce the energy used for cooling by as much as 40 per cent, while cutting overall energy use by as much as 17 per cent.
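To see why cooling savings move the needle, recall how PUE is defined: total facility energy divided by the energy consumed by the IT equipment alone, with 1.0 as the theoretical ideal. A minimal sketch of the arithmetic, using illustrative figures rather than any real facility’s data:

```python
# Illustrative PUE calculation (hypothetical figures, not real facility data).
IT_LOAD_MWH = 100.0        # energy used by servers, storage and networking
COOLING_MWH = 50.0         # energy used by cooling before optimisation
OTHER_OVERHEAD_MWH = 10.0  # lighting, power distribution losses, etc.

def pue(it_load: float, overhead: float) -> float:
    """PUE = total facility energy / IT equipment energy (ideal = 1.0)."""
    return (it_load + overhead) / it_load

before = pue(IT_LOAD_MWH, COOLING_MWH + OTHER_OVERHEAD_MWH)
after = pue(IT_LOAD_MWH, COOLING_MWH * 0.6 + OTHER_OVERHEAD_MWH)  # 40% cooling cut

print(f"PUE before: {before:.2f}")  # 1.60
print(f"PUE after:  {after:.2f}")   # 1.40
```

In this illustration, total facility energy falls from 160 MWh to 140 MWh, a reduction of around 12.5 per cent, in the same ballpark as the 17 per cent overall saving cited above.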
The advantages of a green data centre also support a strong environmental, social and governance (ESG) performance, which is becoming more important to businesses in terms of the benefits they can pass on to their end customers and to the wider environment.
More sustainable data centres reflect a convergence of environmental responsibility, economic viability and technological innovation, but delivering them brings challenges and barriers that need to be addressed.
While AI and new computing infrastructure are helping companies to deliver growth and improved productivity, their use means that many companies could end up missing their own emissions targets.
For many, the boom in AI is coming at the expense of the tech industry’s climate aspirations.
Google has admitted that AI is threatening its environmental targets. According to the company, its data centres, which are at the heart of its AI operations, have seen greenhouse gas emissions rise by 48% since 2019, and it has warned of “significant uncertainty” around its ability to reach its target of net zero emissions by 2030, that is, reducing the overall amount of CO2 emissions it is responsible for to zero.
Microsoft, the biggest financial backer of ChatGPT developer OpenAI, has also warned that its 2030 net zero “moonshot” might not succeed owing to its AI strategy.
So, why does AI pose such a significant threat to tech companies’ green goals?
Data centres are a core component in training and operating AI models such as OpenAI’s GPT-4, with banks of servers processing the vast amounts of data that underpin AI systems. All that processing requires large amounts of electricity, and because much of today’s AI model training still relies on fossil fuel-powered energy, it generates a vast amount of CO2.
Figures from the International Energy Agency show that total electricity consumption from data centres could double from 2022 levels to 1,000 TWh (terawatt hours) in 2026, while research firm SemiAnalysis calculates that AI will result in data centres using 4.5% of global energy generation by 2030. And while many tech companies are moving to renewable sources of energy, that leaves less renewable capacity for other energy users, who instead become increasingly reliant on fossil fuels.
Data centres have a larger carbon footprint than the aviation industry, but AI is clearly a part of our future. So, what's the solution?
Efficiency and performance
In response, companies like AMD and NVIDIA are working to enable modern data centres that are more efficient while delivering increased performance and security.
Speaking earlier this year, AMD CEO Lisa Su said that the explosion in AI LLMs was putting energy efficiency at the forefront of the problems confronting the data centre industry, and that power would be the new limiting factor for the industry going forward.
With many new data centres being built, and with training a single model already requiring thousands of GPUs that consume tens of thousands of megawatt-hours, the continued growth of AI LLMs could eventually demand several gigawatts of power for a single training run.
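To put those figures in context, a back-of-the-envelope estimate shows how quickly training energy reaches the tens of thousands of megawatt-hours Su describes. The GPU count, power draw and duration below are hypothetical, chosen only for illustration:

```python
# Back-of-the-envelope training-energy estimate (all figures hypothetical).
NUM_GPUS = 20_000        # accelerators dedicated to one training run
POWER_PER_GPU_KW = 0.7   # average draw per GPU, including cooling overhead
TRAINING_DAYS = 90       # wall-clock duration of the run

hours = TRAINING_DAYS * 24
energy_mwh = NUM_GPUS * POWER_PER_GPU_KW * hours / 1_000  # kWh -> MWh

print(f"Estimated training energy: {energy_mwh:,.0f} MWh")  # ~30,240 MWh
```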
According to Su, AMD, like many other companies, is focused on improving power efficiency and is developing new silicon, in AMD’s case a 3nm Gate All Around (GAA) transistor, as well as advanced packaging and interconnects that enable more power-efficient and cost-effective modular designs.
Focused on delivering more efficiency, NVIDIA unveiled its Blackwell platform, which enables organisations to build and run real-time generative AI on trillion-parameter large language models at up to 25 times lower cost and energy consumption than its predecessor.
“For three decades we’ve pursued accelerated computing, with the goal of enabling transformative breakthroughs like deep learning and AI,” said Jensen Huang, CEO of NVIDIA. “Generative AI is the defining technology of our time. Blackwell is the engine to power this new industrial revolution.”
Blackwell has been adopted by the likes of Amazon Web Services, Dell Technologies, Google, Meta, Microsoft, OpenAI, Oracle, Tesla and xAI.
Commenting, Demis Hassabis, co-founder and CEO of Google DeepMind, said that while the transformative potential of AI was incredible and would help to solve some of the world’s most important scientific problems, technology like Blackwell was vital and would “provide the critical compute needed to help the world’s brightest minds chart new scientific discoveries.”
AMD has also been working to accelerate server energy efficiency, setting itself the goal of delivering a 30x increase in energy efficiency for the AMD processors and accelerators powering servers for AI training over the five-year period to 2025. That represents a more than 2.5x acceleration of the 2015-2020 industry trend, as measured by worldwide energy consumption for these computing segments.
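A quick way to appreciate the scale of that goal is to convert it into an implied annual improvement rate; this is simple arithmetic on the published target, not a figure AMD itself quotes:

```python
# Implied annual efficiency improvement from a 30x gain over five years.
TARGET_GAIN = 30.0
YEARS = 5

annual_rate = TARGET_GAIN ** (1 / YEARS)
print(f"Implied improvement: {annual_rate:.2f}x per year "
      f"(~{(annual_rate - 1) * 100:.0f}% annually)")  # ~1.97x, ~97%
```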
However, energy efficiency gains from process node advances are becoming smaller and less frequent, so more of the improvement has to come from innovations in silicon architecture and packaging, in addition to the expected gains from silicon process technology.
Using more energy-efficient servers can mean fewer physical servers are required to meet computing demand, which has a cascading effect of avoided environmental impacts: fewer raw materials and less manufacturing, shipping, energy use and data centre space.
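A minimal sketch of that consolidation effect, using hypothetical workload and per-server figures:

```python
# Hypothetical server-consolidation estimate (illustrative figures only).
WORKLOAD_UNITS = 10_000      # abstract measure of required compute
OLD_PERF_PER_SERVER = 10     # workload units per legacy server
NEW_PERF_PER_SERVER = 25     # workload units per newer, more efficient server
POWER_PER_SERVER_KW = 0.5    # assumed similar per-server power draw

old_fleet = -(-WORKLOAD_UNITS // OLD_PERF_PER_SERVER)  # ceiling division
new_fleet = -(-WORKLOAD_UNITS // NEW_PERF_PER_SERVER)

print(f"Servers needed: {old_fleet} -> {new_fleet}")  # 1000 -> 400
print(f"Power avoided: {(old_fleet - new_fleet) * POWER_PER_SERVER_KW:.0f} kW")
```

Every server avoided also removes its embodied impacts (raw materials, manufacturing, shipping) on top of the roughly 300 kW of running power in this example.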
Last month AMD announced plans to acquire server maker ZT Systems for $4.9 billion as the company looked to expand its portfolio of artificial intelligence chips and hardware.
The company’s decision to buy ZT Systems is a response both to the increased computing requirements of AI and to the trend of companies like NVIDIA and AMD moving to deliver whole server systems.
"AI systems are our number one strategic priority,” said Su and by acquiring ZT Systems AMD would be, “in a better position to more quickly test and roll out its latest AI graphics processing units (GPUs) at the scale cloud computing giants require.”
Sustainability in data centres will deliver significant economic advantages: reduced operational costs, such as electricity bills and other operating expenses; substantial long-term savings over the lifecycle of a data centre; and enhanced competitiveness, attracting environmentally conscious customers, partners and investors.
Likewise, the environmental benefits of sustainable data centres are profound. By using renewable energy sources and by improving energy efficiency, data centres will be able to significantly reduce their carbon emissions, contributing to climate change mitigation.
Next generation power
With the growing demand for AI and high-performance computing (HPC) systems comes the need for a next generation of data centre power solutions.
Navitas Semiconductor, a developer of GaNFast gallium nitride (GaN) and GeneSiC silicon carbide (SiC) power semiconductors, has released a 4.5 kW AI data centre power supply reference design with optimised GaNSafe and Gen-3 ‘Fast’ (G3F) SiC power components. It is described as a response to next-generation AI GPUs such as NVIDIA’s Blackwell B100 and B200, each of which requires over 1 kW for high-power computation, three times more than traditional CPUs. The improved power density and efficiency of these next-gen GaN and SiC solutions can deliver sustainability benefits, specifically CO2 reductions from higher system efficiency and ‘dematerialisation’.
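For a rough sense of what those ratings mean at server level, here is a hypothetical sizing exercise; the GPU count and host overhead are assumptions, not figures from Navitas’s design documentation:

```python
# Hypothetical PSU sizing for an AI server (illustrative, not a real design).
import math

GPUS_PER_SERVER = 8
POWER_PER_GPU_KW = 1.0  # Blackwell-class GPUs draw over 1 kW each
HOST_OVERHEAD_KW = 2.0  # CPUs, memory, fans, networking (assumed)
PSU_RATING_KW = 4.5     # the rating of the Navitas reference design

total_kw = GPUS_PER_SERVER * POWER_PER_GPU_KW + HOST_OVERHEAD_KW
psus_needed = math.ceil(total_kw / PSU_RATING_KW)

print(f"Server load: {total_kw:.1f} kW -> {psus_needed} x {PSU_RATING_KW} kW PSUs")
# Server load: 10.0 kW -> 3 x 4.5 kW PSUs (before any N+1 redundancy)
```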
“AI is dramatically accelerating the power requirements of data centres, processors and anywhere AI is going in the decades to come, creating a significant challenge for our industry,” said Gene Sheridan, CEO of Navitas Semiconductor. “Our latest GaNFast technology is delivering the highest power density and efficiency the world has ever seen…the perfect solution for the Blackwell AI processors and beyond.”
Compound semiconductor applications are also seen as having an important role in delivering more energy-efficient data centres.
“We are generating more data than ever before. Estimates suggest that globally we create hundreds of zettabytes of data each year and a single zettabyte (a billion trillion bytes) is so large that it would take a billion powerful home computers to store this amount of data,” said Martin McHugh, CEO at the CSA Catapult.
“Thankfully, we have data centres to do this for us. But in solving one problem, we have created another in that data centres are so energy intensive,” McHugh said. “Improving the energy efficiency of data centres is of utmost importance and we can do this by using compound semiconductor applications.”
Compound semiconductors are being used to improve the distribution of power as it flows from the energy grid into a data centre.
“PSUs are critical to this, converting incoming alternating current (AC) into direct current (DC). Each server rack in a data centre is often equipped with multiple PSUs which use silicon carbide (SiC) compound semiconductors. SiC devices are smaller and more efficient and produce less heat,” according to McHugh.
The challenge for businesses is to squeeze every bit of efficiency out of SiC, and with roughly 4,500 PSUs used in a typical hyperscale data centre, even small gains can have a significant overall impact.
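To illustrate that, here is a rough sketch of what a modest conversion-efficiency improvement could save across a fleet of that size; the per-PSU load and efficiency figures are assumptions, not measurements:

```python
# Fleet-wide savings from a small PSU efficiency gain (hypothetical figures).
NUM_PSUS = 4_500       # PSUs in a typical hyperscale facility (per the text)
LOAD_PER_PSU_KW = 3.0  # assumed average DC load delivered by each PSU
HOURS_PER_YEAR = 8_760

def input_energy_mwh(efficiency: float) -> float:
    """Annual AC energy drawn from the grid to deliver the fleet's DC load."""
    return NUM_PSUS * LOAD_PER_PSU_KW * HOURS_PER_YEAR / efficiency / 1_000

saved = input_energy_mwh(0.96) - input_energy_mwh(0.975)  # 96% -> 97.5%
print(f"Annual energy saved: {saved:,.0f} MWh")  # ~1,895 MWh
```

Under these assumptions, a 1.5-percentage-point efficiency gain saves nearly 1,900 MWh a year, roughly the annual electricity use of several hundred homes.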
Solid-state transformers (SSTs), built using compound semiconductors, are another emerging technology that could revolutionise the power electronics industry, according to McHugh.
“Unlike traditional transformers, SSTs use power electronics to precisely control power flow. SSTs could provide significant energy savings, produce less heat, and adapt quickly to changing load conditions. They are also much smaller, so would be ideally suited to data centres where every square meter is valuable.”
Compound semiconductors are also used to build integrated photonic devices that rapidly exchange and transport data between servers. These devices replace traditional copper interconnects and enable faster speeds, higher bandwidth and improved energy efficiency.
Conclusion
The sharp increase in energy demand from AI data centres is putting pressure on climate targets, and while sources of renewable energy are growing, they are not growing fast enough to meet that demand.
Breakthroughs in AI technology are enabling companies to do more with less. A DeepMind project called Chinchilla showed that researchers could train AI models using radically less computing power, but that did not lead to a reduction in electricity use: the same amount of electricity was simply used to build even better AI systems.
And that is a problem that AI and data centres need to square if we are to hit environmental targets.
It has a name, too: the Jevons paradox, named after the economist William Stanley Jevons, who observed that James Watt’s improvements to the steam engine allowed much less coal to be used. However, rather than reducing the amount of coal burned, demand surged as new uses for coal were discovered.
So, we seem to be in exactly that same predicament with AI today, but as we have seen there are solutions to hand.