Hardware-backed confidential computing in Microsoft Azure now includes protected environments for VMs, containers, and GPUs, without the need to write specialized code.
One of the biggest challenges facing any enterprise using the public cloud is the fact that it's public. Yes, your applications run in isolated virtual machines and your data sits in its own virtual storage appliances, but there's still a risk of data exposure. In a multitenant environment, you can't be certain that memory is freed safely, so your data could leak across the boundaries between your systems and others.
That's why businesses keep a close watch on their regulatory compliance and often keep sensitive data on premises. That lets them feel sure they're managing personally identifiable information securely (or at least in private), along with any data that is subject to regulations.
However, keeping data on-premises means forgoing the cloud's scalability and global reach. As a result, you're working with isolated islands of information, where you can't develop deeper insights or where you're forced to regularly download data from the cloud to build smaller local models.
Economically that's a problem, because egress charges for cloud-hosted data can be expensive. And that's before you've invested in MPLS links to your cloud provider to ensure private, low-latency connectivity. There's an additional issue: now you will need a larger security organization to keep that data safe.
How can you be confident in the security of your cloud-hosted data when you don't have access to the same level of monitoring, threat intelligence, or security experience as the cloud providers? If we look at modern silicon, it turns out there is a middle way: confidential computing.
Confidential computing advances
A few years ago, I wrote about how Microsoft used Intel's secure extensions to its processor instruction sets to provide a foundation for confidential computing in Azure. Since then, the confidential computing market has taken several steps forward.
The initial implementations allowed you to work only with a chunk of encrypted memory, ensuring that even if VM isolation failed, that chunk of memory could not be read by another VM. Today you can encrypt the entire working memory of a VM or hosted service, and there is now a broader choice of silicon, with confidential computing support from AMD and Arm as well as Intel.
Another important development is that Nvidia has added confidential computing features to its GPUs. This allows you to build machine learning models using confidential data, as well as protecting the data used for mathematical modeling. Using GPUs at scale allows us to treat the cloud as a supercomputer, and adding confidential computing capabilities to those GPUs allows clouds to partition and share that compute capability more efficiently.
Simplifying confidential computing on Azure
Microsoft Azure's confidential computing capabilities are evolving right along with the hardware. Azure's confidential computing platform began life as a way of providing protected, encrypted memory for data. With the latest updates, which Microsoft announced at Ignite 2023, it now provides protected environments for VMs, containers, and GPUs. And there's no need to write specialized code; instead, you can now encapsulate your code and data in a secure, isolated, and encrypted space.
This approach lets you use the same applications on both regulated and unregulated data, simply targeting the appropriate VM hosts. There's a bonus in that the use of confidential VMs and containers allows you to lift and shift on-premises applications to the cloud, while maintaining regulatory compliance.
Azure confidential VMs with Intel TDX
The new Azure confidential VMs run on the latest Xeon processors, using Intel's Trust Domain Extensions (TDX). TDX supports attestation techniques to ensure the integrity of your confidential VMs, along with tools to manage keys: you can bring your own keys or rely on platform-managed keys. There's plenty of OS support too, with Windows Server (and desktop options) as well as initial Linux support from Ubuntu, with Red Hat and SUSE to come.
Microsoft is starting to roll out a preview of these new confidential VMs across one European and two US Azure regions, with a second European region arriving in early 2024. There's plenty of memory and CPU in these new VMs, as they're intended for hefty workloads, especially memory-hungry ones.
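As a sketch of what deployment could look like, the Azure CLI lets you request a confidential VM by setting its security type. The resource names, region, image, and VM size below are illustrative assumptions, and the available confidential sizes vary by region and CLI version, so check the current Azure documentation before relying on them.

```shell
# Hypothetical example: create a confidential VM with the Azure CLI.
# Resource group, VM name, image, region, and size are illustrative
# assumptions; confirm supported confidential sizes in your region.
az group create --name cc-demo-rg --location westeurope

az vm create \
  --resource-group cc-demo-rg \
  --name cc-demo-vm \
  --size Standard_DC4es_v5 \
  --image Ubuntu2204 \
  --security-type ConfidentialVM \
  --os-disk-security-encryption-type VMGuestStateOnly \
  --enable-vtpm true \
  --enable-secure-boot true
```

The key switch is `--security-type ConfidentialVM`; the OS disk encryption type and virtual TPM settings control how much of the VM's state is protected and whether it can produce attestation evidence.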
Azure confidential VMs with GPU support
Adding GPU support to confidential VMs is a big change, as it expands the available compute capabilities. Microsoft's implementation is based on Nvidia H100 GPUs, which are commonly used to train, tune, and run AI models for tasks such as computer vision and language processing. The confidential VMs let you use private information as a training set: for example, training a product-evaluation model on prototype components before a public unveiling, or training a diagnostic tool on X-rays or other medical imagery.
Instead of embedding a GPU in a VM and then encrypting the whole VM, Azure keeps the encrypted GPU separate from your confidential computing instance, using encrypted messaging to link the two. Both operate in their own trusted execution environments (TEEs), ensuring that your data remains secure.
Conceptually this is no different from using an external GPU over Thunderbolt or another PCIe connection. Microsoft can allocate GPU resources as needed, with the GPU TEE ensuring that its dedicated memory and configuration are secured. You can use Azure to get a security attestation before releasing confidential data to the secure GPU, further reducing the risk of compromise.
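To make the attestation-gated release pattern concrete, here is a minimal Python sketch. It assumes a JWT-shaped attestation token and made-up claim names (`isolation-tee`, `attestation-type`), which are illustrative only and not the exact schema any attestation service returns; a real client would also verify the token's signature against the attestation service's published signing keys before trusting any claim.

```python
# Sketch: release confidential data only after checking attestation claims.
# Token shape and claim names are illustrative assumptions, not a real schema.
import base64
import json


def _b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs do."""
    return base64.urlsafe_b64encode(data).decode().rstrip("=")


def make_demo_token(claims: dict) -> str:
    """Build an unsigned, JWT-shaped token for local experimentation only."""
    header = _b64url(json.dumps({"alg": "none"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    return f"{header}.{payload}."


def _payload(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT-shaped token."""
    seg = token.split(".")[1]
    seg += "=" * (-len(seg) % 4)  # restore the stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(seg))


def may_release_data(token: str) -> bool:
    """Gate data release on the attested environment type.

    In production, verify the token signature first; an unverified claim
    proves nothing.
    """
    tee = _payload(token).get("isolation-tee", {})
    return tee.get("attestation-type") == "confidential-gpu"
```

The point of the pattern is ordering: the workload obtains and checks evidence about the GPU's TEE before any key or dataset crosses the encrypted link, so a misconfigured or unprotected GPU never sees the data.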
Confidential containers on Kubernetes
More confidential computing tools are moving into Microsoft's managed Kubernetes service, Azure Kubernetes Service (AKS), with support for confidential containers. Unlike full confidential VMs, these run inside shared host servers, and they're built on top of AMD's hardware-based confidential computing extensions. AKS's confidential containers are an implementation of the open-source Kata Containers project, using Kata's utility VMs (UVMs) to host secure pods.
You run confidential containers in these UVMs, allowing the same AKS host to support both secure and insecure containers, accessing hardware support through the underlying Azure hypervisor. Like the confidential VMs, these confidential containers can host existing workloads, bringing in existing Linux containers.
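From the workload's point of view, opting into confidential isolation is a one-line change to the pod spec via its runtime class. This is a hypothetical sketch: the `runtimeClassName` value below is an assumption based on the AKS confidential containers preview, and the container image is a placeholder, so verify both against the current AKS documentation.

```yaml
# Hypothetical pod spec: the runtimeClassName value is an assumption based
# on the AKS confidential containers preview; check current docs before use.
apiVersion: v1
kind: Pod
metadata:
  name: cc-demo-pod
spec:
  runtimeClassName: kata-cc-isolation   # schedules the pod into a Kata UVM
  containers:
    - name: app
      image: nginx   # illustrative placeholder image
```

Because the isolation boundary is selected per pod, secure and ordinary workloads can share the same node pool without changes to the container images themselves.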
These latest updates to Azure's confidential computing capabilities remove the roadblocks to bringing existing regulated workloads to the cloud, providing a new on-ramp to delivering scalable and burst use of secure computing environments. Yes, there are additional configuration and management steps around key management and ensuring that your VMs and containers have been attested, but those are things you should do when working with sensitive information on premises as well as in the cloud.
Confidential computing needs to be seen as essential when we're working with sensitive and regulated information. By adding these features to Azure, and by supporting them in the underlying silicon, Microsoft is making the cloud a more attractive option for healthcare and finance companies alike.


