Experts offer advice on strategically approaching multicloud development, from balancing features from disparate clouds to mastering observability and automation.
If the features of one cloud environment are a business benefit, deploying multiple clouds has to be even better, right?
It's true that a multicloud architecture promises to give you the best of all possible worlds, letting you take advantage of the specialized features of multiple cloud providers. But there's a catch: it's true only if your development practices are ready for the challenge.
Writing code for multiple clouds is a strategic, architectural, and operational shift from traditional cloud computing. From container orchestration to observability to internal tooling, every part of the development process needs to evolve to match the complexity of your infrastructure.
We spoke to engineering leaders and architects who are getting it right, and who admit they sometimes get it wrong. Here's what they've learned.
Plan your multicloud attack
Before your development teams write a single line of code destined for multicloud environments, you need to know why you're doing things that way, and that lives in the realm of management.
"Multicloud is not a developer issue," says Drew Firment, chief cloud strategist at Pluralsight. "It's a strategy problem that requires a clear cloud operating model that defines when, where, and why dev teams use specific cloud capabilities." Without such a model, Firment warns, organizations risk spiraling into high costs, poor security, and, ultimately, failed projects. To avoid that, companies must begin with a strategic framework that aligns with business goals and clearly assigns ownership and accountability for multicloud decisions.

[Infographic: Running a multicloud environment offers clear benefits in terms of features and flexibility, but it's a complex process. Here are five things you need to know. Credit: IDG]
This process shouldn't just be top-down. Heather Davis Lam, founder and CEO of Revenue Ops, emphasizes the need for cross-functional communication. "Talk to each other," she says. "Multicloud projects involve developers, ops, security, sometimes even legal. Problems usually come from miscommunication, not bad code. Regular check-ins and honest conversations go a long way."
This planning process should settle the question of why multicloud is a good idea for your enterprise, and how to make the best use of the specific platforms within your infrastructure.
"The ultimate paradox of multicloud is how to optimize cloud capabilities without creating cloud chaos," Firment says. "The first rule of thumb is to abstract the core shared services that are common across clouds, while isolating cloud-specific services that deliver unique customer value. For example, use a standard authentication and compute layer across all clouds while using AWS to optimize the cost and performance of queries on large datasets using Amazon S3 and Athena."
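To make that rule of thumb concrete, here is a minimal Python sketch of what the isolation might look like: the rest of the codebase calls one generic query function, while the AWS-specific Athena logic lives in a single module that can be swapped out. It assumes boto3 is installed and that the Athena database and S3 output location (hypothetical names below) already exist.

```python
# aws_analytics.py -- the only module that knows queries run on Athena.
# Database and bucket names are hypothetical; swap in your own resources.
import time

import boto3

_athena = boto3.client("athena", region_name="us-east-1")

def run_large_dataset_query(sql: str) -> list[dict]:
    """Generic entry point the rest of the app calls; AWS-specific inside."""
    execution = _athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": "analytics_db"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )
    query_id = execution["QueryExecutionId"]

    # Poll until the query finishes (simplified; real code would back off).
    while True:
        state = _athena.get_query_execution(QueryExecutionId=query_id)[
            "QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query ended in state {state}")

    rows = _athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    # Athena returns the header as the first row; convert the rest to dicts.
    header = [col["VarCharValue"] for col in rows[0]["Data"]]
    return [
        {key: col.get("VarCharValue") for key, col in zip(header, row["Data"])}
        for row in rows[1:]
    ]
```

Callers never see boto3 or Athena; if the analytics workload moves to another provider, only this module changes.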
Generic vs. specific cloud environments
The question of when and how to write code that's strongly tied to a specific cloud provider and when to write cross-platform code will occupy much of the thinking of a multicloud development team. "A lot of teams try to make their code totally portable between clouds," says Davis Lam. "That's a nice idea, but in practice, it can lead to over-engineering and more headaches." Davis Lam warns against abstracting infrastructure to the point that development slows and complexity increases. "If you or your team find yourselves building extra layers just so that this will work anywhere, it's a good moment to pause."
Patrik Dudits, senior software engineer at Payara Services, agrees. He sees excessive abstraction as a common but misguided attempt at uniformity: "One common mistake is trying to limit your architecture to the 'lowest common denominator' of cloud features. In practice, embracing the strengths of each cloud is a more successful strategy."
Dudits advocates for designing systems with autonomy in mind, where services can operate independently in their respective clouds rather than being yoked together by a need for identical implementation.
This principle of autonomy, rather than strict uniformity, also plays a central role in how Matt Dimich, VP of platform engineering enablement at Thomson Reuters, approaches multicloud design. "Our goal is to be able to have agility in the platform we run our applications on, but not total uniformity," he says. "There is innovation in less expensive, faster compute every year, and the quicker we can take advantage of that, the more value we can deliver to our customers." Dimich stresses a balanced approach: leveraging the native services of individual cloud providers where it makes sense while staying alert to the risk of tight coupling.
Pluralsight's Firment also sees the need for balance, returning to his rule of thumb of abstracting the shared services that are common across clouds while isolating the cloud-specific services that deliver unique customer value. You might standardize authentication and compute layers everywhere, for instance, while leaning on AWS-specific tools like Amazon S3 and Athena to optimize queries on large datasets.
Similarly, Davis Lam suggests dividing business logic and infrastructure. "Keep the core business logic portable. APIs, containerized apps, shared languages like Python or Node: that's where portability really matters," she says. "But when it comes to infrastructure or orchestration, I'd say lean into what the specific cloud does best."
Dudits agrees: "Multiple clouds are leveraged because there is clear advantage for a specific task within an intended application," he says. "Simply mirroring the same stack across providers rarely achieves true resilience and often introduces new complexity."
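As a rough illustration of that split, the sketch below keeps the core calculation in a plain Python function with no cloud imports, then wraps it in two thin entry points: an AWS Lambda handler and a containerized HTTP endpoint (Flask is assumed here) that could run on any provider's Kubernetes service. The function and route names are hypothetical.

```python
# pricing.py -- pure business logic, no cloud SDKs imported here.
def quote_total(items: list[dict], tax_rate: float = 0.08) -> float:
    """Hypothetical core calculation that stays identical on every cloud."""
    subtotal = sum(item["unit_price"] * item["quantity"] for item in items)
    return round(subtotal * (1 + tax_rate), 2)


# lambda_entry.py -- thin AWS-specific wrapper around the portable core.
def handler(event, context):
    return {"total": quote_total(event["items"])}


# http_entry.py -- equally thin wrapper for a containerized deployment
# (Flask assumed) that can run on AKS, EKS, GKE, or on-prem Kubernetes.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.post("/quote")
def quote():
    return jsonify(total=quote_total(request.get_json()["items"]))
```

Because nothing in quote_total knows about Lambda, Flask, or Kubernetes, moving the calculation to another provider means writing one more thin wrapper, not rewriting the logic.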
Writing cross-platform code
What's the key to making that core business logic as portable as possible across all your clouds? The container orchestration platform Kubernetes was cited by almost everyone we spoke to.
Radhakrishnan Krishna Kripa, lead DevOps engineer at Ansys, has helped build Kubernetes-based platforms that span Azure, AWS, and on-prem environments. "Use Kubernetes and Docker containers to standardize deployments," he says. "This helps us write code once and run it in AKS, AWS EKS, or even on-prem clusters with minimal changes."
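One way to picture that "write once, run wherever there's a cluster" idea: the official Kubernetes Python client talks to AKS, EKS, or an on-prem cluster through the same API, with only the kubeconfig context changing. This is a minimal sketch; the context names are hypothetical and assume clusters already registered in your local kubeconfig.

```python
# Minimal sketch using the official `kubernetes` Python client.
# Context names are hypothetical entries in your local kubeconfig.
from kubernetes import client, config

CLUSTERS = ["aks-prod", "eks-prod", "onprem-lab"]

def list_app_deployments(namespace: str = "default") -> None:
    for context in CLUSTERS:
        # Same code path for every provider; only the context differs.
        config.load_kube_config(context=context)
        apps = client.AppsV1Api()
        deployments = apps.list_namespaced_deployment(namespace)
        names = [d.metadata.name for d in deployments.items]
        print(f"{context}: {names}")

if __name__ == "__main__":
    list_app_deployments()
```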
Sidd Seethepalli, CTO and co-founder of Vellum, echoes that view. "We rely on Kubernetes rather than provider-specific services, allowing us to deploy consistently anywhere a Kubernetes cluster exists." Vellum uses templated Helm charts to abstract away cloud-specific configurations and employs tools like KOTS to simplify deployment customization.
For Neil Wylie, principal solutions architect at Myriad360, Kubernetes is just the foundation. "Building on Kubernetes allows me to standardize application definitions and deployments using Helm, typically automating the rollout via a GitOps workflow with tools such as ArgoCD," he says. This approach offers "true workload mobility" while ensuring consistent, validated deployments through CI/CD pipelines.
Speaking of CI/CD, the tools that power your development pipelines matter just as much as the infrastructure your code runs on. Kripa recommends standardizing pipelines using cloud-neutral tools like GitHub Actions and Terraform Cloud. "Design your pipelines to be cloud-neutral," he says. "We primarily use Azure, but tools like GitHub Actions allow us to manage builds and infrastructure across multiple environments with a consistent workflow." This consistency helps reduce the burden on developers when moving between providers or deploying to hybrid environments.
No matter how much you standardize your code, however, you'll still have to interact with the APIs and SDKs of individual cloud providers. Anant Agarwal, co-founder and CTO at Aidora, has a pattern for doing that without sacrificing portability: adapter layers. "We treat every cloud API or SDK like a dependency: We wrap it in an internal library and expose a clean, generic interface to the rest of the codebase," Agarwal says. This approach keeps cloud-specific logic isolated and swappable, making core application logic easier to maintain and more resistant to platform lock-in.
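A rough sketch of that adapter pattern in Python: the application depends only on an internal ObjectStore interface, and the boto3-backed class is one swappable adapter behind it. The class, bucket, and function names are hypothetical; an Azure Blob Storage or GCS adapter would implement the same two methods.

```python
# storage.py -- the generic interface the rest of the codebase depends on.
from typing import Protocol

import boto3

class ObjectStore(Protocol):
    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...


class S3ObjectStore:
    """AWS adapter; the only place boto3 appears in the codebase."""

    def __init__(self, bucket: str) -> None:
        self._bucket = bucket
        self._s3 = boto3.client("s3")

    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

    def get(self, key: str) -> bytes:
        response = self._s3.get_object(Bucket=self._bucket, Key=key)
        return response["Body"].read()


# Application code accepts any ObjectStore, so swapping providers means
# writing one more adapter class, not touching the business logic.
def archive_report(store: ObjectStore, report_id: str, contents: bytes) -> None:
    store.put(f"reports/{report_id}.pdf", contents)
```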
The open-source community is also helping fill in the gaps, especially where proprietary cloud features have historically created friction. "I like to keep an eye on the CNCF landscape to see the emerging projects. Generally, what you notice is that it's exactly these 'sticky' points that the new projects try to solve for," says Wylie, pointing to the Serverless Workflow project as an example.
Conquering multicloud complexity
As has no doubt become clear, heterogeneous multicloud environments are complex, and your development process will need to accommodate that. Visibility is particularly important, and getting it right starts with centralizing your logs and alerts. "We route all logs to a unified observability platform (Datadog), and create a consolidated view," says Aidora's Agarwal. "Perfect coverage is tough with newer tools, but centralization helps us triage incidents fast and keep visibility across cloud providers."
Payara's Dudits emphasizes a similar approach. "We recommend investing in a central, provider-neutral dashboard for high-level metrics across your multi-cloud estate," he says. "This unified view helps developers and ops teams quickly spot issues across providers, even if deeper diagnostics are still done through provider-specific tools."
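A minimal sketch of that kind of centralization, using only Python's standard logging module: every service emits structured JSON tagged with where it ran, so whichever observability platform the stream is forwarded to can group and filter by provider. The environment variable and field names below are assumptions, not a prescribed schema.

```python
import json
import logging
import os

class CloudJsonFormatter(logging.Formatter):
    """Emit one JSON object per log line, tagged with where it ran."""

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "message": record.getMessage(),
            "level": record.levelname,
            "logger": record.name,
            # Hypothetical env vars set by your deployment tooling.
            "cloud_provider": os.getenv("CLOUD_PROVIDER", "unknown"),
            "region": os.getenv("CLOUD_REGION", "unknown"),
            "service": os.getenv("SERVICE_NAME", "unknown"),
        })

handler = logging.StreamHandler()  # stdout, picked up by a central log shipper
handler.setFormatter(CloudJsonFormatter())
logging.basicConfig(level=logging.INFO, handlers=[handler])

logging.getLogger("payments").info("refund issued")
```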
For Revenue Ops' Davis Lam, good logging is one of the most critical tools in a multicloud environment. "It's tough enough to debug one cloud. When you're working across three or four, good logging and monitoring can save you hours, or days, of work. Get it right early," she says. But she cautions against collecting logs and setting alerts just for the sake of it. "A big tip is to think about what should actually retry and what should just fail and alert someone. Not every failure should automatically trigger a retry loop or fallback. Sometimes it's better to let a process stop and get someone's attention."
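That retry-or-alert decision can be captured in a small helper like the following sketch, where only failures listed as transient are retried and everything else fails loudly. The exception classes and the alert hook are placeholders for whatever your stack actually uses.

```python
import time

# Placeholder for a real alerting hook (PagerDuty, Slack webhook, etc.).
def alert_on_call(message: str) -> None:
    print(f"ALERT: {message}")

def call_with_policy(fn, *, transient=(TimeoutError, ConnectionError),
                     attempts: int = 3, delay: float = 2.0):
    """Retry only failures we expect to be temporary; surface the rest."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except transient as exc:
            if attempt == attempts:
                alert_on_call(f"still failing after {attempts} attempts: {exc}")
                raise
            time.sleep(delay * attempt)  # simple linear backoff
        except Exception as exc:
            # Anything unexpected should stop and get a human's attention,
            # not spin in a retry loop.
            alert_on_call(f"non-transient failure: {exc}")
            raise
```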
Automation is another tool that can tame multicloud development environments. "Deployment processes need to be bulletproof because coordinating across providers is error-prone," Agarwal says. "We automate everything using GitHub Actions to ensure schema changes, code deploys, and service updates go out in sync."
Agarwal also notes that internal AI tools can streamline complex multicloud workflows. "We've turned our internal playbooks into a custom GPT that answers context-specific questions like 'Where do I deploy this service?' or 'Which provider handles file uploads?' instantly," he says. "To reduce friction further, we've codified the same rules into Cursor so developers get inline guidance right inside their IDE."
Ultimately, the biggest takeaway might be to simply plan for failure. "The more clouds and services you tie together, the more chances there are for something to break, usually in the spots where they connect," says Davis Lam. "So things like API timeouts, auth tokens expiring, or just weird latency spikes become more common. You'll want to expect those kinds of failures, not treat them as rare events."
"At the end of the day, multicloud development is messy, but if you expect that and plan for it, you'll write better, stronger code," she adds. "Assume things will break and build with that in mind. It's not pessimistic, it's realistic."


