How data centers can learn to turn off and help the grid

By Jason Plautz | 08/19/2025 06:36 AM EDT

Using less electricity in response to high demand has long been a nonstarter for big tech companies. That might be changing.

A Google data center in Georgia. David Goldman/AP

For years, calculating the power needs for data centers has revolved around five numbers: nine, nine, nine, nine and nine.

High-performance companies sought enough power to keep their massive banks of computers and chips running 99.999 percent of the time, which they said is necessary to keep up with the nonstop digital economy. Even the lowest-grade data centers were seeking enough power to run more than 99 percent of the time, leaving utilities and grid operators scrambling for new generation as developers built bigger facilities.

But faced with a power crunch, data center developers are exploring whether everyone would be better served if they didn’t require quite so much juice all the time.

This month, Google announced a pair of agreements with utilities to plug in data centers with demand-response capabilities — helping those buildings use less power during periods of high demand on the grid. The grid operator covering the central U.S. is seeking approval for a plan that would let data centers connect to the grid more quickly if they agree to limit power use when requested.

The White House also suggested that greater flexibility could help get data centers on the grid as part of its sweeping AI Action Plan. The plan, released last month, called for “new and novel ways for large power consumers to manage their power consumption during critical grid periods,” although it did not delve into specifics.

“The developers and the hyperscalers who can achieve the type of load growth they want to achieve to win the AI race are the ones who are figuring out the answer to the demand question,” said Allison Clements, a former commissioner on the Federal Energy Regulatory Commission who now works for a data center advisory firm. “Whoever figures out how to be more flexible will have a market advantage.”

An influential study published by Duke University this year found that there was enough existing generation on the grid to add nearly 100 gigawatts of new data center load — provided those data centers could turn down their power demand for less than 44 hours every year. If the data centers could employ demand response for 1 percent of the year, or less than four total days, there would be 126 GW of extra room.
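For a sense of the arithmetic behind those thresholds: 0.5 percent of the 8,760 hours in a year is just under 44 hours, and 1 percent is about 88 hours, a little under four days. A minimal sketch of the conversion, illustrative only and using the gigawatt figures cited above:

```python
# Back-of-the-envelope arithmetic behind the flexibility figures cited above.
# The gigawatt values come from the article; the rest simply converts annual
# curtailment shares into hours and days.

HOURS_PER_YEAR = 8760

for share, headroom_gw in [(0.005, 100), (0.01, 126)]:
    hours = share * HOURS_PER_YEAR   # 43.8 hours and 87.6 hours
    days = hours / 24                # about 1.8 and 3.7 days
    print(f"{share:.1%} of the year = {hours:.1f} hours "
          f"(about {days:.1f} days) -> roughly {headroom_gw} GW of headroom")
```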

Since that study was published, its author, Tyler Norris, a fellow at Duke’s Nicholas Institute for Energy, Environment and Sustainability, said the idea has only gained traction. Pilot programs have shown viability, and the Google arrangements, he said, show how utilities and hyperscalers can build in demand response from the start.

“Necessity is the mother of invention. And right now we’re in a supply-constrained environment where it takes a substantial amount of time to get generation and transmission expansions,” Norris said. “To the extent that demand response can be exchanged for an accelerated interconnection or some speed upside, that’s the holy grail for hyperscalers.”

A separate study published last month by researchers at MIT found that data center flexibility lowers overall system costs by reducing demand when system costs are at their highest and by averting investments in new generation. In Texas — one of the three markets the researchers examined — the cost savings were as high as 5 percent. The study, however, also found that adding flexible data centers in markets with less renewable energy could keep some fossil fuel resources on the grid and drive up emissions.

Google’s model

Demand response doesn’t mean that the data centers will turn off entirely. Norris’ paper assumes facilities can still have at least 50 percent of their power online. In other scenarios, tech companies could simply shift their existing workload to data centers in other regions.

And with data centers increasingly focused on training and developing large language models to support artificial intelligence, companies say there are even more opportunities to divert workload when the grid needs it. A pilot project in Arizona, for example, found that reducing workload at data centers by 25 percent across three hours did not result in the loss of any valuable work.

Not all data center operations are created equal. Creating and training a model — when computers scan through massive quantities of text and data to find patterns — may be treated as less time-sensitive work that can be shifted to different times or locations based on grid needs. That would leave room for essential data center operations to stay on the grid, and for utilities to maintain power for the homes and businesses they serve.
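A minimal sketch of that idea, assuming a hypothetical scheduler that defers flexible training jobs during grid events while keeping latency-sensitive work online and holding at least half the facility's load, per the floor in Norris' analysis. The job names, classes and stress signal below are illustrative, not any operator's actual system.

```python
# Illustrative only: a toy scheduler that defers flexible (e.g., training) jobs
# during a grid event while keeping latency-sensitive work running. The job
# classes, threshold and "grid_stress" flag are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class Job:
    name: str
    flexible: bool     # True for deferrable work such as model training
    power_mw: float

def plan_power(jobs: list[Job], grid_stress: bool, min_share: float = 0.5) -> float:
    """Return planned load in MW, curtailing flexible jobs under grid stress
    while keeping at least `min_share` of total load online."""
    total = sum(j.power_mw for j in jobs)
    if not grid_stress:
        return total
    essential = sum(j.power_mw for j in jobs if not j.flexible)
    return max(essential, min_share * total)

jobs = [Job("inference", flexible=False, power_mw=40),
        Job("training", flexible=True, power_mw=60)]
print(plan_power(jobs, grid_stress=True))  # 50.0: flexible work deferred to the floor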

“The more we look at the operations of AI, there’s an opportunity to be a resource to the grid,” said David Porter, vice president for electrification and sustainable energy strategy at the Electric Power Research Institute (EPRI). “Depending on the workload, there is a level of flexibility that allows the data center not to be just an offtaker of energy.”

Google’s new agreements with the Tennessee Valley Authority and Indiana Michigan Power (I&M) could show the way. According to the tech company, the arrangements will target machine learning workloads at data centers in the two service territories, which it said can facilitate AI growth even where power generation and transmission are constrained.

Andrew Williamson, director of regulatory services for I&M, said in an interview that the agreement reflected a “collaborative fashion” of working with the tech giant, which powered up a $2 billion data center in Fort Wayne, Indiana, in November 2024. While I&M has worked on demand response with other customers, Williamson said data centers present a unique challenge because of the constant demands of the internet.

“An industrial site can shift production to different periods, the customer probably isn’t demanding the service right there, 24/7,” Williamson said.

Information about how much power Google will reduce, the length of reductions, the date of implementation and how the company will be compensated is redacted from public filings for confidentiality reasons.

Ben Inskeep, program director for the Indiana-based Citizens Action Coalition, an environmental and consumer group, said in an email that the group plans to intervene in the special contract case “to advocate for greater transparency and a fair deal for Hoosier ratepayers.”

But Williamson said the demand response program will allow I&M to avoid investing in new generation and should present a model for future contracts as more data centers open in the Midwest.

“This is in our interest and in theirs,” Williamson said. “Having this commitment is a valuable tool so that when the grid does get stressed, we’ve got a single place to go to relieve a meaningful amount of demand on the system, which in turn helps us support reliability and lower costs.”

Scott Brooks, a spokesperson for TVA, said that demand response programs “are an important way we work with local power companies and customers on days of peak energy use to protect the system.” However, he said, with “tremendous” projected energy demand and with tech companies requesting more than 11,000 megawatts of new load, the arrangement would not avert the need for additional generation.

“The Tennessee Valley region is an attractive place to work, live, and play and that’s not slowing down anytime soon,” Brooks said.

‘A vast amount of low-hanging fruit’

Severin Borenstein, who chairs the board of governors for the California ISO, said the grid operator has been looking at what role demand response can play in getting a flood of data centers onto the grid quickly. That’s part of a broader discussion of how grid operators can take advantage of demand flexibility to help integrate more renewables and deal with increasing load.

“Nationally, we are vastly underutilizing demand flexibility,” said Borenstein, a professor at the University of California, Berkeley. “There’s a vast amount of low-hanging fruit that would allow us to balance these systems.”

The need to get hyperscalers on board quickly, he said, “should lead us more broadly to integrate demand response and flexibility, and do it in a smarter way.”

While the idea seems sound, not all tech companies would want to agree to limits, even for a few hours. But some regulators are looking at ways to incentivize the arrangements. EPRI has organized hyperscalers, grid operators and utilities under a program it calls DCFlex, which explores how flexible data centers could be added to the grid more quickly.

Southwest Power Pool, the grid operator serving 14 states, plans to offer expedited grid connections to data centers that offer more flexibility or agree to bring their own backup power. The proposal — which still needs federal approval — is meant to address tech companies’ complaints about slow grid connections and offer more security to the grid.

PJM Interconnection, the grid operator covering the mid-Atlantic and Great Lakes regions, hinted that a similar proposal could be coming. In a letter outlining plans to quickly draft new rules for data centers, the operator mentioned that the initiative could include discussion of “resource adequacy tools, including demand response.”

In a filing last month with North Carolina state regulators, Duke Energy wrote that it places performance requirements on potential large load customers, including “mandated interruptible requirements for a specified period of time.”

And some data center developers are building grid flexibility in from the start. Verrus, a new developer, is advertising a platform that it says can squeeze up to 30 percent more energy out of its grid connections while making its data centers more responsive to grid needs. That includes built-in backup battery systems that can be deployed on short notice, pulling some or all of a center’s load off the grid as needed.
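Verrus has not published its control logic, but the general pattern it describes, serving part of a facility's load from on-site batteries during a grid event, can be sketched roughly as below. All figures and function names here are hypothetical.

```python
# Illustrative only: a toy dispatcher for the pattern described above. When the
# grid signals stress, serve as much of the facility's load as possible from
# on-site batteries and draw the remainder from the grid.

def dispatch(load_mw: float, battery_mw: float, battery_mwh: float,
             hours: float, grid_stressed: bool) -> tuple[float, float]:
    """Return (grid_draw_mw, battery_output_mw) for a dispatch window."""
    if not grid_stressed:
        return load_mw, 0.0
    # Battery output is limited by both its power rating and its stored energy.
    battery_output = min(load_mw, battery_mw, battery_mwh / hours)
    return load_mw - battery_output, battery_output

# Example: a 100 MW facility with a 60 MW / 120 MWh battery during a 3-hour event.
print(dispatch(100, 60, 120, 3, grid_stressed=True))  # (60.0, 40.0)
```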

Jeff Bladen, head of energy for Verrus, said in an interview that the design “has the grid in mind at the beginning” and is meant to be in “constant communication with the grid.” Bladen, who previously worked for two large grid operators and on Meta’s energy team, said the work will not only help utilities, but can show how building large data centers in a more responsive way can boost productivity even for the tech companies.

“By getting grid flexibility from these assets, it becomes more economically productive,” Bladen said. “This is not interrupting workloads in a way that reduces the productivity of these massive capital assets. We’re pursuing this as a way to be a net positive.”