Cloud repatriation hits its stride

analysis
May 9, 2025 | 5 mins
Cloud Computing | Hybrid Cloud | Multicloud

With AI consuming every spare dollar in IT budgets, enterprises are scrutinizing every workload to determine its optimal infrastructure.

For the past decade, the cloud was the ultimate destination for forward-thinking IT leaders. Hyperscale providers sold a compelling promise: agility, scalability, and always-on innovation. CIOs pushed cloud-first mandates, and for a time, moving workloads to AWS, Azure, or Google Cloud seemed like the most logical step for companies of every size.

But 2025 feels different. Repatriation, once a quiet undercurrent, has surged into the mainstream. The driving force behind this movement? Artificial intelligence. AI isn't just another workload type. Its need for specialized compute, from GPUs to high-bandwidth networking and massive storage, has fundamentally challenged the economics that justified mass cloud migrations in the first place.

Don't take my word for it; listen to cloud giant AWS. The New Stack reports:

In a recent U.K. Competition and Markets Authority (CMA) hearing, AWS challenged the notion that "once customers move to the cloud, they never return to on-premises." They pointed to specific examples of customers moving workloads back to on-premises systems, acknowledging customers' flexibility in their infrastructure choices. Despite hyperscalers' earnings growing fast, there is a rising concern about the sustainability of that growth.

AI: A new budget superpower

Many enterprises are now confronting a stark reality. AI is expensive, not just in terms of infrastructure and operations, but in the way it consumes entire IT budgets. Training foundation models or running continuous inference pipelines requires resources an order of magnitude greater than the average SaaS or data analytics workload. As competition in AI heats up, executives are asking tough questions: Is every app in the cloud still worth its cost? Where can we redeploy dollars to speed up our AI road map?

We're witnessing IT teams pore over their cloud bills with renewed vigor. Brownfield apps with predictable usage are squarely under the microscope. Does it really make sense to pay premium cloud prices when legacy data centers or colocation facilities can handle steady workloads at a fraction of the cost? For many, the answer is increasingly no, and those workloads are starting to find their way back home.

The hyperscalers aren't oblivious. AWS, Microsoft, and Google are seeing their most sophisticated enterprise clients not just slow cloud migrations but actively repatriate workloads. These are often the workloads with the steadiest, most predictable resource profiles: the kind that are easiest to budget for when owned outright but hard to justify at on-demand public cloud prices.

Simultaneously, a new breed of AI infrastructure providers is rising, offering bare metal, GPU-as-a-service, or colocation solutions purpose-built for machine learning. These platforms attract business by being more transparent, customizable, and affordable for enterprises tired of chasing discounts and deciphering the complexity of hyperscaler pricing. The hyperscalers are responding with hybrid and multicloud offerings, even working to allow easier migration, better reporting, and more granular consumption-based pricing.

Still, there's an acknowledgment in the boardrooms of Seattle and Silicon Valley: The easy growth is gone. Enterprises now want flexibility, especially when core business transformation depends on AI investment. Cloud providers must be more than arm's-length landlords; they must become close partners, prepared to meet client workloads both on-prem and in the cloud, depending on what makes the most sense that quarter.

Repatriation doesn't signal the end of cloud, but rather the evolution toward a more pragmatic, hybrid model. Cloud will remain vital for elastic demand, rapid prototyping, and global scale; no on-premises solution can beat cloud when workloads spike unpredictably. But for the many applications whose requirements never change and whose performance is stable year-round, the lure of lower-cost, self-operated infrastructure is too compelling in a world where AI now absorbs so much of the IT spend.

In this new landscape, IT leaders must master workload placement, matching each application not only to its technical requirements but also to business and financial imperatives. Sophisticated cost management tools are on the rise, and the next wave of cloud architects will be those as fluent in finance as they are in Kubernetes or Terraform.
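To make that placement arithmetic concrete, here is a minimal, hypothetical sketch of the kind of comparison a cost-aware architect might run: estimating the monthly on-demand cloud cost of a steady workload against the amortized cost of owned or colocated hardware. All prices, workload figures, and function names are illustrative assumptions, not benchmarks or any vendor's actual rates.

```python
# Hypothetical workload-placement sketch: compare monthly on-demand cloud cost
# against amortized on-prem/colo cost for a steady, predictable workload.
# Every number below is an illustrative assumption, not a real quote.

from dataclasses import dataclass


@dataclass
class Workload:
    name: str
    vcpu_hours_per_month: float   # steady compute consumption
    storage_gb: float             # persistent storage footprint


def monthly_cloud_cost(w: Workload,
                       price_per_vcpu_hour: float = 0.05,
                       price_per_gb_month: float = 0.10) -> float:
    """On-demand cloud estimate, using assumed list prices."""
    return (w.vcpu_hours_per_month * price_per_vcpu_hour
            + w.storage_gb * price_per_gb_month)


def monthly_onprem_cost(w: Workload,
                        hardware_capex: float = 120_000.0,
                        amortization_months: int = 36,
                        monthly_ops_overhead: float = 1_500.0) -> float:
    """Owned/colocated estimate: amortized capex plus power, space, and staff."""
    return hardware_capex / amortization_months + monthly_ops_overhead


def recommend_placement(w: Workload) -> str:
    cloud = monthly_cloud_cost(w)
    onprem = monthly_onprem_cost(w)
    target = "on-prem/colo" if onprem < cloud else "public cloud"
    return (f"{w.name}: cloud ~${cloud:,.0f}/mo, on-prem ~${onprem:,.0f}/mo "
            f"-> {target}")


if __name__ == "__main__":
    # A brownfield app with flat, predictable usage: the profile most often repatriated.
    steady_app = Workload("billing-batch",
                          vcpu_hours_per_month=200_000,
                          storage_gb=50_000)
    print(recommend_placement(steady_app))
```

With these assumed numbers, the steady workload pencils out cheaper on owned infrastructure; a spiky or short-lived workload would flip the recommendation, which is exactly the placement judgment the article describes.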

Expect the next few years to feature:

  • Continued pressure on hyperscalers: Demands for transparency, flexible pricing, and hybrid support aren't going away. Providers that don't respond risk losing their best (and most profitable) enterprise customers.
  • The normalization of workload mobility: Moving between cloud and on-prem will become routine, not exceptional.
  • Budget reallocation at scale: Enterprises will double down on cost optimization, not just to save money but to free up the resources AI demands.

AI isn't just another line item; it's the force reshaping cloud economics and triggering a widespread reconsideration of where and how enterprises run their most important workloads. To stay relevant, hyperscalers must evolve, offering realistic pricing and embracing hybrid deployment. For CIOs, the new north star is optimization, of costs as well as business value. Repatriation, once a tactical move, is now a strategic lever in a world where AI's potential requires every available dollar and ounce of efficiency.

David Linthicum

David S. Linthicum is an internationally recognized industry expert and thought leader. Dave has authored 13 books on computing, the latest of which is An Insider's Guide to Cloud Computing. Dave's industry experience includes tenures as CTO and CEO of several successful software companies, and upper-level management positions in Fortune 100 companies. He keynotes leading technology conferences on cloud computing, SOA, enterprise application integration, and enterprise architecture. Dave writes the Cloud Insider blog for InfoWorld. His views are his own.
