Andrew C. Oliver
Contributing Writer

Cloud outages show multicloud is essential

analysis
Feb 27, 2020 | 6 mins

Outages are inevitable and vendors are unreliable. You can't move fast enough unless you already have your service running on two or more clouds.


Something is rotten in the state of Denmark—in all of Europe, actually—and Amazon has been tight-lipped about it. It seems there might have been a hack or a well-executed denial-of-service attack. I realize this was in October, but Google autocomplete suggests that "AWS DDoS attack" be followed by a year. These things happen frequently.

Denial-of-service attacks are as old as, if not older than, the internet—and so is the lack of candor on the part of your data center operator or hosting provider. The thing that protected us all in the past from watching the whole net go black is the same thing that will protect us again: multiple data centers run by different providers. That is to say, multicloud.

A multicloud strategy starts with the obvious: deploying (or maintaining your ability to deploy) on multiple vendors' clouds. Meaning you keep your software on AWS and Azure and maybe even on GCP. You forgo using any vendor services that might prevent your ability to move, and you pursue a data architecture that allows you to scale across data centers.

Single cloud advantages and drawbacks

Relying on a single vendor's cloud allows you to eat the buffet of sometimes lower-cost alternatives from the cloud provider. Adding these is usually seamless. Meaning, if you're an AWS customer, you use Amazon Elasticsearch Service instead of building your own search cluster. If you're on Google, you can use their document database, Google Cloud Datastore, instead of rolling your own.

However, as with every vendor platform strategy, there is a cost: your freedom. Okay, that sounds heavy, but hear me out. Sure, your cloud vendor is cheaper now—but will it always be? Moreover, will that service one day be unceremoniously canceled as the cloud vendor shifts strategy? They may never even really announce it. And what if your region's AWS data center goes down, slows, or becomes unreliable for an extended period of time? Can you take the loss?

Some of these vendor-provided services (especially Amazon's) are forks of more famous open source alternatives that are supposed to maintain API compatibility. However, they're famously a release or more behind. That is generally okay for slow-to-upgrade large enterprises. However, "generally" isn't always.

Even larger enterprises must move quickly when circumstances necessitate it. If there is a big security flaw that can't be patched in the current release, you move. If there is something in the next release that is absolutely required for higher scale—and you need that—or some other feature needed for your own next release, then being on your cloud vendor's schedule puts you behind the curve.

When consuming cloud vendor services, it is important to ask what every actor or scriptwriter asks: "What's their motivation?" Sure, they might want the extra 30% markup above their IaaS offering, but more likely, they want to keep you on their platform and get every last one of your compute dollars.

However, as reliable as each cloud vendor has become, none of them has become completely reliable. There are multiple regional and even multi-region outages each year. Some last for a while. If you can't just up and install your code somewhere else (or better yet, have it there already as part of your process), then when disaster strikes—and it will—you're just waiting.

Finally, when it's time to negotiate pricing, how flexible do you think your cloud vendor is going to be if they know you can't leave?

(Full disclosure: I work for Couchbase. They have partnerships with multiple cloud vendors including Amazon.)

Multicloud advantages and drawbacks

A multicloud strategy necessitates both vendor-neutral and more resilient architectural choices. This means more up-front complexity. It means negotiating, at times, with multiple vendors. It also means ensuring the integration points between the technologies exist and exist securely.

However, a multicloud strategy gives you more freedom and security than using a single provider. We learned this during the platform wars, when many companies standardized first on mainframes and then on DEC, HP, and Sun before trying to standardize on Windows NT.

Single-vendor platforms often fail to live up to their promise. Remember that in the 1990s, and even into the early 2000s, Microsoft's technologies were often well-integrated but immature. Then came rapid changes. Seasoned developers remember the data access technologies, DAO, RDO, OLE DB, and ADO, which were all released and advocated in rapid succession. Let's not even speak of the .NET transition and the mis-marketing (i.e., Windows.NET) that occurred. It isn't just Microsoft. I started my career writing OS/2 device drivers. Then IBM launched Warp 4 and it warped out of existence.

Despite the up-front costs of platform independence, companies that pursue it tend to produce more resilient architectures. These companies adopt standard interfaces between applications. They pick best-of-breed technologies that fit the use case, as opposed to just whatever the platform is pushing (remember Visual SourceSafe?). Best of all, when a vendor proves to be an unreliable partner—or jacks up the price too much—platform-independent companies have the freedom to exit.

Minimum requirements for a multicloud strategy

The biggest requirement for multicloud is to rely on open standards and industry standards for key touch points. Here are some of the obvious ones:

  • Kubernetes. The open source container management platform is now the industry standard for deploying services. If you are creating standard Kubernetes deployments that run on your laptop, they should run on multiple cloud providers.
  • Open source. Use open source tools and technologies for your core architecture. This ensures that as platform strategies change, you can opt for a different path.
  • Open standards. This isn't to say that you need to get really involved in the way your application server clusters itself, but all of the touch points with other software should follow open and vendor-neutral industry standards (e.g., JSON).
  • Caution toward branded services. If you need a fixed IP and various DNS services, Amazon brands its versions of these pretty common network tools. Of course, you don't need to run your own distributed DNS, and you do have to use your provider's means of providing a fixed IP. That doesn't really lock you in, as it is just configuration and works the same way on Azure and GCP. However, you should be a bit more circumspect when using a machine learning service, for instance.
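To make the Kubernetes point concrete, here is a minimal sketch of a provider-agnostic Deployment manifest. The service name and container image are hypothetical placeholders; the point is that it uses only the core `apps/v1` API, with no cloud-specific annotations or extensions:

```yaml
# Minimal, provider-agnostic Deployment: no cloud-specific
# annotations, storage classes, or load-balancer extensions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service                 # hypothetical service name
spec:
  replicas: 3                      # run three identical pods
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - name: my-service
        image: registry.example.com/my-service:1.0   # hypothetical image
        ports:
        - containerPort: 8080      # port the app listens on
```

Because nothing here references a particular vendor, `kubectl apply -f deployment.yaml` works the same against a laptop cluster (minikube or kind) as against EKS, AKS, or GKE; anything cloud-specific, such as a load balancer, stays in separate, swappable manifests.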

In the end, "Just do it!" There is no way to ensure you can move quickly except to already have your service running on two or more clouds. Even if you're mostly going to direct traffic to one cloud for various cost or accounting reasons, you should have some standbys and tests on another provider. Then when the inevitable outage or eventual financial shakedown happens, you're already there.