Elon Musk’s Memphis Supercluster recently went online, and with 100,000 liquid-cooled H100 GPUs on board, this data center will undoubtedly eat up a lot of power. With each H100 consuming at least 700 watts, Musk’s AI data center will need upwards of 70 megawatts just to run all 100,000 GPUs concurrently, and that’s before adding the supporting servers, networking, and cooling equipment. Surprisingly, Musk is powering the facility with 14 massive mobile generators while he works out power supply agreements with local utilities.
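As a quick sanity check, the GPU-only figure works out like this (a minimal sketch using the 700 W per-H100 draw cited above; actual consumption varies by SKU and workload):

```python
# Back-of-the-envelope GPU-only power draw for the Memphis Supercluster.
# 700 W is the per-H100 figure cited in the article, not a measured value.
H100_WATTS = 700
NUM_GPUS = 100_000

gpu_only_mw = H100_WATTS * NUM_GPUS / 1_000_000  # watts -> megawatts
print(f"GPU-only draw: {gpu_only_mw:.0f} MW")  # prints "GPU-only draw: 70 MW"
```

That 70 MW covers only the accelerators themselves; cooling, CPUs, and networking push the real requirement higher.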
Dylan Patel, an AI and semiconductor analyst who heads SemiAnalysis, initially argued on X that it should have been impossible for Musk to run the Memphis Supercluster because of power constraints. He said only 7 MW was being drawn from the grid, which would power only around 4,000 GPUs. The Tennessee Valley Authority (TVA) will deliver 50 MW to the facility if xAI signs a pending deal, but it can only do so by August 1 at the earliest. Patel also observed that the 150 MW substation on the xAI site is still under construction and won’t be finished until Q4 2024.
However, after examining satellite imagery, Patel soon posted a follow-up tweet once he realized how Musk did it: with 14 VoltaGrid mobile generators connected to what appear to be four mobile substations.
Each of these semi-trailer-sized generators can provide 2.5 MW of power, meaning Musk already has an impressive 35 MW of generator capacity on site. Combine that with the 8 MW the Memphis Supercluster is drawing from the grid and you get 43 MW in total, which should be enough to run 32,000 H100 GPUs with some power limits in place, since the budget also has to cover cooling, CPUs, motherboards, and other infrastructure.
If the Tennessee Valley Authority delivers the 50 MW that Musk needs at the start of August, he’d have enough power to potentially run 64,000 GPUs concurrently, and again, that only accounts for GPU power requirements. Patel estimates that around 155 MW is needed to run all 100,000 GPUs, and xAI will need the substation running to get it. So, either the substation is ahead of schedule, or Musk will deploy more mobile generators to make up the shortfall.
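The site-wide budget described above can be sketched the same way (assuming the article’s figures: 14 generators at 2.5 MW each, 8 MW from the grid, and Patel’s rough 155 MW all-in estimate for 100,000 GPUs):

```python
# Rough site power budget, using the figures reported in the article.
GEN_MW = 2.5       # per VoltaGrid mobile generator
NUM_GENS = 14
GRID_MW = 8        # current grid draw cited above

available_mw = NUM_GENS * GEN_MW + GRID_MW        # 35 + 8 = 43 MW on site
full_cluster_mw = 155                             # Patel's all-in estimate for 100k GPUs
per_gpu_all_in_kw = full_cluster_mw * 1000 / 100_000  # ~1.55 kW per GPU incl. overhead

# GPUs supportable today at that all-in rate, before any power limiting:
supported = int(available_mw * 1000 / per_gpu_all_in_kw)
print(f"{available_mw:.0f} MW supports roughly {supported:,} GPUs at full power")
```

The result lands under 30,000 GPUs at full power, which is consistent with the claim that running 32,000 H100s on 43 MW requires power limits.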
AI data centers need a ton of electricity
Massive power consumption and its impact on global warming are the main issues AI data centers face today. The data center GPUs sold in 2023 alone consume more power than 1.3 million average American households combined, pushing electricity grids to their limits. And we can’t just build more power plants to meet a data center’s needs; we must also consider additional infrastructure, such as high-tension power lines and substations, to get electricity from the plant to the servers.
Aside from the time and cost of building power plants for AI computing, we also have to consider greenhouse gas emissions. Although the mobile generators Musk deployed at his Memphis Supercluster run on natural gas, which burns cleaner than coal or oil, they still spew carbon into the atmosphere while operating. Google recently revealed that its carbon footprint has jumped 48% since 2019 because of data center energy demands, and we can expect xAI to follow suit unless Musk switches to a cleaner form of energy production.
Musk is pushing xAI to be at the forefront of AI development, using every means available to achieve this. However, we hope that using mobile generators is a temporary solution. The Memphis Supercluster needs to transition to a cleaner energy source, which the TVA can deliver. Since the utility uses a mix of nuclear, hydroelectric, and fossil fuel plants, xAI’s carbon footprint should be smaller if it sourced its power from the TVA rather than relying on mobile generators that run on natural gas alone.