Breaking the cloud monopoly

analysis
Apr 22, 2025 | 5 mins

Being all-in with the hyperscalers isn't working anymore. Enterprises are diversifying their data platforms for the AI era.

Credit: William Warby

The dominance of hyperscalers AWS, Google Cloud, and Microsoft Azure has shaped the cloud landscape for more than a decade. Enterprises flocked to these platforms to simplify IT operations, lower costs, and drive innovation. For a while, it worked. The allure of scalability, convenience, and a centralized platform to power workloads was hard to resist.

However, times are changing. Many organizations are rethinking their reliance on hyperscalers for existing and future workloads, particularly those powered by artificial intelligence. This movement comes down to one simple truth: Enterprises need more control over their data and what’s being done with it. Cost, data sovereignty, and the freedom to innovate without operational constraints are driving enterprises to look beyond the major players.

This paradigm shift doesn’t suggest the complete demise of hyperscalers; they’ll remain pivotal players for specific use cases. However, more and more enterprises are breaking free and embracing heterogeneous platforms to reduce costs, regain control, and power AI innovation with local-first, repatriated data strategies and AI-driven systems. This shift is one of the most significant trends in enterprise IT in the past five years.

Let’s explore why this trend to diversify is gaining momentum and what it means for the future of enterprise platforms.

The reality of cost pressures

Cloud platforms were sold as cost-savers when they rose to prominence. Today, the reality of cloud economics is hitting hard. Enterprises are realizing that hyperscalers usually can’t offer the same savings or margin control as on-premises infrastructure or specialized platforms. A 2022 report by Andreessen Horowitz estimated that public software companies lose as much as $100 billion in market value because of high dependency on cloud platforms. Similarly, the Barclays CIO survey showed that the percentage of organizations planning to repatriate workloads has risen dramatically, from 43% in 2020 to 83% in 2024.

For years, IT leaders believed those who said “you’re crazy if you don’t start in the cloud.” That advice made sense for greenfield projects requiring rapid deployment, but enterprises that scaled existing platforms found their cloud costs grew unpredictably. A total cost of ownership analysis shows that maintaining workloads on hyperscalers often results in price spikes as companies are forced to scale up compute resources, bandwidth, or storage. Data-intensive workloads, such as those for AI training or analytics, can balloon cloud expenses even further. AI computing demands significant resources, making rented compute and storage an expensive long-term proposition.
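To make that dynamic concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it (rental rate, cluster size, utilization, purchase price, monthly operating cost) is a hypothetical placeholder, not vendor pricing; the point is only the shape of the curves, and a real TCO exercise would substitute actual quotes.

```python
# Illustrative-only cost sketch: cumulative cost of renting GPU compute vs.
# buying equivalent hardware and running it on-premises.
# All numbers are hypothetical assumptions, not vendor pricing.

CLOUD_RATE_PER_GPU_HOUR = 3.00   # assumed hourly rental rate per GPU
GPUS = 8                         # assumed size of the training cluster
UTILIZATION = 0.70               # fraction of each month the cluster is busy
HOURS_PER_MONTH = 730

ONPREM_CAPEX = 250_000           # assumed purchase price for the same cluster
ONPREM_OPEX_PER_MONTH = 4_000    # assumed power, cooling, and support

def cloud_cost(months: int) -> float:
    """Cumulative rental cost: you pay for every busy hour, indefinitely."""
    return CLOUD_RATE_PER_GPU_HOUR * GPUS * HOURS_PER_MONTH * UTILIZATION * months

def onprem_cost(months: int) -> float:
    """Cumulative owned cost: capital expense up front, then flat monthly opex."""
    return ONPREM_CAPEX + ONPREM_OPEX_PER_MONTH * months

for months in (6, 12, 24, 36):
    print(f"{months:>2} months: cloud ${cloud_cost(months):>9,.0f}  "
          f"on-prem ${onprem_cost(months):>9,.0f}")
```

Under these assumed numbers, renting is cheaper for roughly the first two and a half years, after which the owned cluster wins. Heavier utilization or longer training runs pull that crossover earlier, which is exactly the pattern behind many repatriation decisions.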

Why ownership matters

Data control has emerged as a leading pain point for enterprises using hyperscalers. Businesses that keep the critical data powering their processes, compliance efforts, and customer services on hyperscaler platforms often lack easy, on-demand access to it. Many hyperscaler providers impose limits or lack full data portability, an issue compounded by vendor lock-in, or at least the perception of it. SaaS services have notoriously opaque data retrieval processes that make it challenging to migrate to another platform or repurpose data for new solutions.

Organizations are also realizing the intrinsic value of keeping data closer to home. Real-time data processing is critical to running operations efficiently in finance, healthcare, and manufacturing. Some AI tools require rapid access to locally stored data, and depending on hyperscaler APIs or integrations creates a bottleneck. Meanwhile, compliance requirements in regions with strict privacy laws, such as the European Union, dictate stricter data sovereignty strategies.

With the rise of AI, companies recognize the opportunity to leverage AI agents that work directly with local data. Unlike traditional SaaS-based AI systems that must transmit data to the cloud for processing, local-first systems can operate within organizational firewalls and maintain complete control over sensitive information. This solves both the compliance and speed issues.
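As a rough illustration of the local-first pattern, the sketch below sends a sensitive document to a model served inside the firewall rather than to a hyperscaler API. It assumes a locally hosted, OpenAI-compatible chat endpoint (several local serving stacks expose one); the URL, model name, and file path are placeholders, not a prescribed setup.

```python
# Local-first sketch: summarize a sensitive document with a locally hosted
# model so the raw text never leaves the corporate network.
# The endpoint URL and model name below are assumed placeholders.
import requests

LOCAL_ENDPOINT = "http://localhost:8000/v1/chat/completions"  # assumed local server
MODEL_NAME = "llama-3-8b-instruct"                            # assumed local model

def summarize_locally(document_text: str) -> str:
    payload = {
        "model": MODEL_NAME,
        "messages": [
            {"role": "system", "content": "Summarize internal documents concisely."},
            {"role": "user", "content": document_text},
        ],
    }
    # The request stays on the local network; nothing is sent to a public cloud.
    response = requests.post(LOCAL_ENDPOINT, json=payload, timeout=120)
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    with open("quarterly_forecast.txt", encoding="utf-8") as f:
        print(summarize_locally(f.read()))
```

The same pattern extends to agents that read databases, file shares, or logs in place: the model and the data sit on the same side of the firewall, so compliance reviews cover infrastructure the organization already controls.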

Hybrid and heterogeneous platforms

Homogeneous enterprise platforms entirely dominated by one cloud provider will soon be a thing of the past. The future lies in hybrid and highly heterogeneous infrastructures that balance hyperscaler services with local-first systems, specialized platforms, and even on-premises strategies for repatriated workloads.

This heterogeneity isn’t just theoretical. Git, paired with GitHub, has long shown the value of combining local-first technology with cloud-based collaboration. New AI models designed to run locally, such as Meta’s Llama or DeepSeek, show that cutting-edge applications can move off the cloud. These advancements enable low-cost, local ownership without compromising functionality. As more CIOs and IT leaders adopt these approaches, expect collaboration tools, conflict-free replicated data type (CRDT) systems, and other local-first techniques to increase significantly in relevance and availability.
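For readers who have not met CRDTs, the toy grow-only counter below (a minimal sketch, not tied to any particular library) shows the core idea behind local-first collaboration: each replica updates its own slot without coordination, and any two replicas converge by taking element-wise maximums when they eventually sync.

```python
# Toy G-Counter CRDT: each replica increments only its own entry, and any two
# states merge by element-wise max, so replicas converge no matter the order
# in which updates arrive. A teaching sketch, not a production library.
from collections import defaultdict

class GCounter:
    def __init__(self, replica_id: str):
        self.replica_id = replica_id
        self.counts: dict[str, int] = defaultdict(int)

    def increment(self, amount: int = 1) -> None:
        """Local update: no coordination with other replicas required."""
        self.counts[self.replica_id] += amount

    def merge(self, other: "GCounter") -> None:
        """Commutative, associative, idempotent merge: take the max per replica."""
        for replica, count in other.counts.items():
            self.counts[replica] = max(self.counts[replica], count)

    def value(self) -> int:
        return sum(self.counts.values())

# Two replicas diverge while offline, then sync in either order and agree.
laptop, server = GCounter("laptop"), GCounter("server")
laptop.increment(3)
server.increment(5)
laptop.merge(server)
server.merge(laptop)
assert laptop.value() == server.value() == 8
```

Production systems use richer CRDTs for text, lists, and maps, but the property is the same: work happens locally first, and the cloud becomes a sync channel rather than the system of record.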

Hyperscalers will continue to play a role in enterprise IT. Public cloud platforms are essential for elastic scaling of workloads, data backends, and thousands of other functions. But they are no longer the default choice. Companies are adopting long-term strategies that balance cloud utility with the control and cost-savings of on-premises, local-first, and alternative systems.

In the coming years, enterprises will fundamentally reinvent their relationships with data and digital infrastructure. The competitive edge will belong to organizations that can balance cloud capabilities with locally controlled data to innovate more quickly, meet compliance requirements, and maintain lean operating costs.

Hyperscalers have served us well, but they were never meant to be the only solution. Enterprise IT is diverse, and the platforms supporting it must be just as varied. Whether driven by AI workloads, compliance demands, or cost-control pressures, organizations now have the tools, incentives, and strategies to make those changes happen.

David Linthicum

David S. Linthicum is an internationally recognized industry expert and thought leader. Dave has authored 13 books on computing, the latest of which is An Insider’s Guide to Cloud Computing. Dave’s industry experience includes tenures as CTO and CEO of several successful software companies, and upper-level management positions in Fortune 100 companies. He keynotes leading technology conferences on cloud computing, SOA, enterprise application integration, and enterprise architecture. Dave writes the Cloud Insider blog for InfoWorld. His views are his own.
