AI’s Energy Footprint
By 2025, AI could be drawing as much electricity as a mid-sized European nation, and demand is still accelerating. Unsurprisingly, significant media attention has focused on the rising power demand of AI applications as a whole.
And yet, despite this growing attention to AI’s energy footprint, the actual evolution of its power demand over recent years remains unclear – which has implications for data centre operators. While companies like Microsoft and Google acknowledged rising electricity use and carbon emissions in their 2024 environmental reports, attributing this trend primarily to AI, they disclosed only aggregate data centre figures. As a result, it is not possible to isolate the specific contribution of AI workloads from other computing operations.
A new study [1] published in Joule by Alex de Vries-Gao, a PhD candidate at VU Amsterdam and founder of Digiconomist, tackles this head-on by offering a granular, bottom-up estimate of AI’s global electricity consumption based on semiconductor production data. Rather than relying on opaque corporate disclosures, which the author admits have become increasingly scarce, de Vries-Gao reconstructs a detailed model of AI power demand through supply chain analysis. Central to his methodology is the capacity of Taiwan Semiconductor Manufacturing Company’s (TSMC) chip-on-wafer-on-substrate (CoWoS) packaging technology, a key bottleneck in the production of high-end AI accelerators.
“With useful information regarding AI’s power demand becoming increasingly scarce, academic research has repeatedly stressed the urgent need for better data,” de Vries-Gao writes.
Tracing power to packaging
The study focuses on Nvidia and AMD, which together consumed more than half of TSMC’s CoWoS capacity in 2023 and 2024. Nvidia alone used up to 48% of that capacity in 2024. By reverse-engineering how many AI accelerator devices could be manufactured from a given number of wafers – considering interposer sizes and yield rates – the paper estimates Nvidia and AMD together produced several million high-power AI chips, with a combined thermal design power (TDP) of 3.8 GW. This means that “without further production output in 2025, AI accelerator modules produced by Nvidia and AMD alone could consume more electricity than a country such as Ireland in 2025,” he writes.
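For readers who want to follow the arithmetic, here is a minimal back-of-the-envelope sketch of the wafers-to-power conversion in Python. Every input below is an illustrative placeholder, not the paper’s exact figure: the real model derives wafer allocations from TSMC capacity disclosures and varies yields and TDP by product line.

```python
# Back-of-the-envelope: packaged wafers -> accelerator modules -> aggregate TDP.
# All inputs are illustrative assumptions, NOT the paper's exact figures.

product_lines = {
    # name: (wafers, usable_chips_per_wafer, module_tdp_watts)
    "hopper_class":    (150_000, 29, 700),    # e.g. an H100-class module, ~700 W
    "blackwell_class": (60_000,   7, 1_000),  # low early yields, ~1 kW-class module
    "mi300_class":     (30_000,  20, 750),    # hypothetical AMD allocation
}

total_modules, total_tdp_w = 0, 0.0
for name, (wafers, chips, tdp_w) in product_lines.items():
    modules = wafers * chips
    total_modules += modules
    total_tdp_w += modules * tdp_w
    print(f"{name}: {modules:,} modules, {modules * tdp_w / 1e9:.2f} GW")

print(f"total: {total_modules:,} modules, {total_tdp_w / 1e9:.1f} GW aggregate TDP")
# With these placeholder inputs: ~5.4M modules and ~3.9 GW, in the same
# ballpark as the paper's ~3.8 GW estimate for Nvidia and AMD combined.
```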
Nor are these estimates speculative. TSMC confirmed during earnings calls that its CoWoS capacity was overwhelmed by AI-related demand: “We have very tight capacity and cannot even meet customers’ need[s]…Today is all AI focused,” the company said in Q4 2024.
To account for the entire AI hardware landscape, including Google’s undisclosed custom chips, de Vries-Gao scales the total potential TDP to 6.7 GW – representing all CoWoS usage. Factoring in supporting infrastructure – networking, storage, cooling – he projects a total system-level TDP of 12 GW, based on published system specifications: “Typical AI systems such as the DGX H100/H200 and DGX B200, have a TDP at least 79% higher than the TDP of the AI accelerator modules alone,” the paper notes.
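The step from modules to systems is a single multiplier. A minimal sketch, using the paper’s 6.7 GW module-level estimate and the at-least-79% system uplift it derives from DGX specifications:

```python
module_tdp_gw = 6.7      # estimated TDP of all CoWoS-packaged AI accelerator modules
system_overhead = 0.79   # DGX-style systems draw >= 79% more than their accelerators alone

system_tdp_gw = module_tdp_gw * (1 + system_overhead)
print(f"system-level TDP: {system_tdp_gw:.1f} GW")  # ~12.0 GW, matching the paper
```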
Applying a realistic hardware utilisation rate of 65% and a modern data centre power usage effectiveness (PUE) of 1.2, the study puts the effective power draw of AI systems in 2025 at between 5.3 and 9.4 GW. Over a full year, that equates to 46–82 TWh – comparable to the annual electricity consumption of Switzerland, Austria, or Finland.
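The range itself is easy to reproduce. The paper’s exact scenario definitions aren’t restated here, but one plausible reading – taking the 6.7 GW module-level figure as the low case and the 12 GW system-level figure as the high case – recovers the published numbers almost exactly:

```python
HOURS_PER_YEAR = 8_760

def effective_power_gw(tdp_gw, utilisation=0.65, pue=1.2):
    """Effective grid draw: hardware rarely runs at full TDP, while data centre
    overhead (cooling, power conversion) adds load on top of IT power."""
    return tdp_gw * utilisation * pue

for label, tdp in [("accelerator modules only", 6.7), ("full systems", 12.0)]:
    p = effective_power_gw(tdp)
    print(f"{label}: {p:.1f} GW -> {p * HOURS_PER_YEAR / 1_000:.0f} TWh/yr")
# -> roughly 5.2 GW / 46 TWh and 9.4 GW / 82 TWh; the paper reports
#    5.3-9.4 GW and 46-82 TWh (small differences are rounding in the inputs).
```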
Rising capacity, rising risk
TSMC’s plans to double CoWoS capacity again in 2025 could push cumulative AI power demand to unprecedented levels. “At this rate, the cumulative power demand of AI accelerator modules…could reach 12.8 GW by the end of 2025,” de Vries-Gao calculates. The study projects that if device production continues to scale with packaging capacity, the total power demand of AI systems deployed between 2023 and 2025 could reach 23 GW by the end of 2025 – more than the global power demand of Bitcoin mining, and, if sustained around the clock, equivalent to roughly half of the 415 TWh the IEA estimates all non-crypto data centres consumed in 2024.
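The 23 GW headline figure follows from the same machinery as before. A short sketch, assuming the same at-least-79% system uplift applies to the cumulative module figure:

```python
cumulative_module_tdp_gw = 12.8   # paper's cumulative accelerator-module TDP, end of 2025
system_overhead = 0.79            # same >=79% system uplift as for the DGX examples

system_tdp_gw = cumulative_module_tdp_gw * (1 + system_overhead)
annual_twh = system_tdp_gw * 8_760 / 1_000   # if run continuously at full TDP
print(f"{system_tdp_gw:.0f} GW system-level")             # ~23 GW
print(f"{annual_twh:.0f} TWh/yr vs 415 TWh (IEA, 2024)")  # ~201 TWh, about half
```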
Fork in the road?
Yet the path ahead isn’t purely exponential. Yield challenges with next-generation packaging (CoWoS-L), potential market saturation, and geopolitical constraints could slow deployment. Nvidia’s own Blackwell series faced yield difficulties in 2024, leading the paper to conservatively assume just seven usable chips per wafer for this line – down from 28–29 for earlier generations.
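Because packaging capacity is the binding constraint, the yield assumption feeds linearly into every downstream power figure. A tiny sketch with a hypothetical wafer allocation (the yields are the paper’s, the wafer count is not):

```python
# Yield sensitivity: usable chips per wafer scale producible modules
# (and hence added TDP) linearly for a fixed packaging allocation.
wafers = 10_000  # illustrative allocation for one product line

for generation, chips_per_wafer in [("earlier generations", 28), ("Blackwell (assumed)", 7)]:
    print(f"{generation}: {wafers * chips_per_wafer:,} modules")
# -> 280,000 vs 70,000 modules: a 4x drop in output from the same wafers
```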
Moreover, infrastructural limits are becoming visible. In early 2025, Google warned in a CNBC interview that the US was already facing a “power capacity crisis” in its AI race with China. Emerging AI models such as China’s DeepSeek R1 are claimed to rival ChatGPT while running on less powerful hardware and more efficient code. But the study cautions against treating such developments as relief for the power grid.
“Any positive effects on AI power demand as a result of efficiency gains may be negated by rebound effects,” writes de Vries-Gao, referring to the dynamic where increased efficiency leads to more usage, not less – the so-called Jevons paradox. Indeed, the “bigger is better” logic driving AI model development persists, with ever-larger parameter counts and training datasets demanding more compute power, not less.
A call for transparency
The paper underscores the urgent need for regulatory mandates around energy disclosure in AI. While the European Union’s AI Act includes provisions for energy reporting, these only apply to model training (not inference) and won’t take effect until August 2025. Inference, the study notes, “accounted for most of Google’s AI electricity costs from 2019 to 2021” and likely plays an even larger role today.
In the meantime, researchers like de Vries-Gao must continue to rely on indirect yet technically grounded estimation methods to understand AI’s growing energy footprint. “This analysis provides a starting point for further investigation in an otherwise opaque industry,” he concludes. “While a growing reliance on fossil fuels threatens to undermine climate goals, effective policy responses first require urgent transparency.”
[1] https://www.cell.com/joule/fulltext/S2542-4351(25)00142-4