Serdar Yegulalp
Senior Writer

Why you should use Docker and OCI containers

feature
Jan 15, 2025 | 9 mins
Containers | Docker | Kubernetes

Learn how lightweight, portable, self-contained operating system containers improve software development, application deployment, and business agility.

Credit: Drew Graham

A book published in 1981, called Nailing Jelly to a Tree, describes software as "nebulous and difficult to get a firm grip on." That was true in 1981, and it is no less true more than four decades later. Software, whether it is an application you bought or one that you built yourself, remains hard to deploy, hard to manage, and hard to run.

Docker containers, and the OCI standard for containers and their runtimes, provide a way to get a grip on software. You can use containers to package an application in such a way that its deployment and runtime issues (how to expose it on a network, how to manage its use of storage, memory, and I/O, how to control access permissions) are handled outside of the application itself, and in a way that is consistent across all "containerized" apps. You can run your container on any Linux- or Windows-compatible host that has a container runtime installed.
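As a sketch of what this packaging looks like in practice, here is a minimal Dockerfile for a hypothetical Python web service. The base image, file names, and port are illustrative assumptions, not anything from a particular application:

```dockerfile
# Hypothetical example: package a Python web app and its dependencies
# into a self-contained image. All names here are illustrative.
FROM python:3.12-slim

WORKDIR /app

# Install the app's library dependencies inside the image,
# so the host needs nothing but a container runtime.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# Document the port the app listens on; actual network exposure
# is decided at run time, outside the application itself.
EXPOSE 8000

CMD ["python", "app.py"]
```

Note that network exposure and resource limits live outside the image: you would build with `docker build -t myapp .` and then decide the port mapping at launch, e.g. `docker run -p 8000:8000 myapp`, without touching the application code.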

Containers offer many other benefits beyond encapsulation, isolation, portability, and control. Containers are small compared to virtual machines, measurable in megabytes versus gigabytes. They start instantly. They have their own built-in mechanisms for versioning and component reuse. They can be easily shared via registries like the public Docker Hub or a private repository.

Containers are also immutable, which has both security and operational benefits. Any changes to a container must be deployed as an entirely new, differently versioned container.

In this article, we'll explore how containers make it easier to both build and deploy software. You'll learn what issues containers address and how they address them, when containers are the right answer to a problem, and when they're not.

Life before containers

For many years now, enterprise software has typically been deployed either on "bare metal" (i.e., installed on an operating system that has complete control over the underlying hardware) or in a virtual machine (installed on an operating system that shares the underlying hardware with other "guest" operating systems). Naturally, installing on bare metal made the software painfully difficult to move around and update, two constraints that made it hard for IT to respond nimbly to changes in business needs.

Then virtualization came along. Virtualization platforms (also known as hypervisors) enabled multiple virtual machines to share a single physical system, with each virtual machine emulating the behavior of an entire system (complete with its own operating system, storage, and I/O) in an isolated fashion. IT could now respond more effectively to changes in business requirements, because VMs could be cloned, copied, migrated, and spun up or down to meet demand or conserve resources.

Virtual machines also helped cut costs, because more VMs could be consolidated onto fewer physical machines. Legacy systems running older applications could be turned into VMs and physically decommissioned to save even more money.

But virtual machines still have their share of problems. Virtual machines are large (measured in gigabytes), with each one containing a full operating system. Only so many virtualized apps can be consolidated onto a single system. Provisioning a VM still takes a fair amount of time. Finally, the portability of VMs is limited. After a certain point, VMs cannot deliver the kind of speed, agility, and savings that fast-moving businesses require.

Docker and the OCI standard

Containers were conceived as a way to bundle up and organize a clutch of native isolation capabilities in Linux, such as namespaces and control groups (cgroups), which let processes run in isolation with constrained resources. But those capabilities were difficult to use in concert; if you wanted anything like what we'd recognize today as container-like behavior, you'd have to do a fair amount of manual heavy lifting.

Docker, launched in 2013, made it easy to automate all the things one had to do to containerize apps. Docker's success as a project, and later as a company monetizing the project, made the Docker approach to containers something of a de facto standard. Over the next few years, container adoption proliferated to the point that competing implementations started to crop up with dueling ideas about how best to implement them.

Eventually, a common standard emerged. The Open Container Initiative (OCI) specification, formalized in 2017, featured contributions from Docker and its competitors. Docker the company is now a shadow of its former self, but Docker the product and Docker the open source project live on, and the OCI standard survives and thrives on its own.

Benefits of containers and containerization

Containers work a little like VMs, but in a far more specific and granular way. They isolate a single application and its dependencies (all of the external software libraries the app requires to run) both from the underlying operating system and from other containers.

All the containerized apps share a single, common operating system (either Linux or Windows), but they are compartmentalized from one another and the system at large. The operating system provides the needed isolation mechanisms to make this compartmentalization happen. Containers wrap those mechanisms in a convenient set of interfaces and metaphors for the developer.
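As a hedged illustration of those interfaces, here is how Docker's CLI surfaces the kernel's isolation mechanisms as simple flags. The image name, ports, and limit values are illustrative assumptions:

```shell
# Run a hypothetical "myapp" image with compartmentalization applied
# from outside the application. Each flag maps onto an OS mechanism:
#   --memory / --cpus : cgroup memory and CPU limits
#   --pids-limit      : cap the number of processes inside the container
#   --read-only       : mount the container's root filesystem read-only
#   -p                : publish a container port from its network namespace
#                       onto a host port
docker run -d --name myapp \
  --memory=256m --cpus=0.5 \
  --pids-limit=100 --read-only \
  -p 8080:8080 \
  myapp:1.0
```

The application itself needs no knowledge of any of this; the same image can be run elsewhere with different limits and port mappings.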

The benefits of containers show up in many places. Below are some of the major advantages of using containers over VMs or bare metal.

Containers use system resources more efficiently

Instances of containerized apps use far less memory than virtual machines, they start up and stop more quickly, and they can be packed far more densely on their host hardware. All of this leads to less spending on IT.

The cost savings will vary depending on what apps are in play and how resource-intensive they are, but containers invariably work out as more efficient than VMs. It's also possible to save on the cost of software licensing because you need far fewer operating system instances to run the same workloads.

Containers enable faster software delivery cycles

Enterprise software must respond quickly to changing conditions. That means both easy scaling to meet demand and easy updating to add new features as the business requires.

Containers make it easy to put new versions of software, with new business features, into production quickly, and to quickly roll back to a previous version if you need to. They also make it easier to implement strategies like blue/green deployments.
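A minimal sketch of what such a rollback looks like, assuming a hypothetical service image tagged by version in a registry (all names here are illustrative):

```shell
# Roll forward: replace the running container with the new version.
docker pull registry.example.com/shop:2.1.0
docker stop shop && docker rm shop
docker run -d --name shop -p 80:8080 registry.example.com/shop:2.1.0

# Roll back: if 2.1.0 misbehaves, restart from the previous image tag.
# No rebuild, no reinstall; the old version is already packaged.
docker stop shop && docker rm shop
docker run -d --name shop -p 80:8080 registry.example.com/shop:2.0.3
```

Because each version is a complete, immutable image, rolling back is just starting the previous artifact rather than reversing an in-place upgrade.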

Containers enable application portability

Where you run an enterprise application matters: behind the firewall, for the sake of keeping things close by and secure, or out in a public cloud, for easy public access and high elasticity of resources. Because containers encapsulate everything an application needs to run (and only those things), they allow applications to be shuttled easily between environments. Any host with a container runtime installed, whether that machine is a developer's laptop or a public cloud instance, can run a container, assuming it has enough resources for that particular containerized application.

Containerization simplifies microservices

Containers make it easier to build software along forward-thinking lines, so you're not trying to solve tomorrow's problems with yesterday's development methods.

One of the software patterns containers simplify is microservices, where applications are constituted from many loosely coupled components. By decomposing traditional, "monolithic" applications into separate services, microservices allow the different parts of a line-of-business app to be scaled, modified, and serviced separately, by separate teams and on separate timelines, if that suits the needs of the business.

Containers aren't required to implement microservices, but they are perfectly suited to the microservices approach and to agile development processes generally.

Problems containers don't solve

The first thing to keep in mind about containers is the same piece of advice that applies to any software technology: It isn't a silver bullet. Containers by themselves can't solve every problem. Let's look at a few particular problems containers don't solve.

Containers won't fix your security issues

Software in a container can be more secure by default than software run on bare metal, but that's like saying a house with locked doors is more secure than a house with unlocked doors. It doesn't say anything about the condition of the neighborhood, the visible presence of valuables tempting to a thief, the routines of the people living there, and so on. Containers can add a layer of security, but only as part of a general program of securing an application in context.

Containers don't turn applications into microservices

If you containerize an existing app, that can reduce its resource consumption and make it easier to deploy. But it doesn't automatically change the design of the application, or how it interacts with other applications. Those benefits only come through developer time and effort, not just a mandate to move everything into containers.

If you put an old-school monolithic or SOA-style application in a container, you end up with, well, an old-school application in a container. That doesn't make it any more useful to your work; if anything, it might make it less useful.

Containers by themselves don't have the mechanisms to compose microservice-style apps. You need a higher level of orchestration to accomplish this. Kubernetes is the most common example of such an orchestration system. A more minimal solution, Docker swarm mode, can be used to manage many Docker containers across multiple Docker hosts.
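As a hedged sketch of what that orchestration layer adds, here is a minimal Kubernetes Deployment for one microservice in a larger app. The service name, image, and replica count are illustrative assumptions:

```yaml
# Hypothetical example: declare a desired state for one microservice,
# and let Kubernetes keep that many container replicas running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service
spec:
  replicas: 3                # scale this service independently of the others
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
      - name: orders
        image: registry.example.com/orders:1.0.0
        ports:
        - containerPort: 8080
```

Scheduling, restarts, scaling, and service discovery are handled by the orchestrator, which is exactly the composition machinery that plain containers lack.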

Containers don't replace virtual machines

One persistent myth of containers is that they make virtual machines obsolete. Many apps that used to run in a VM can be moved into a container, but that doesn't mean all of them can or should. If you're in an industry with heavy regulatory requirements, for instance, you might not be able to swap containers for VMs, because VMs provide more isolation than containers.

The case for containers

Enterprise development as a field is notorious for being hidebound and slow to react to change. Enterprise developers chafe against such constraints all the time: the limitations imposed by IT, the demands of the business at large, and so on. Containers give developers more of the freedom they crave, while simultaneously providing ways to build business apps that respond quickly to changing business conditions.

Serdar Yegulalp

Serdar Yegulalp is a senior writer at InfoWorld. A veteran technology journalist, Serdar has been writing about computers, operating systems, databases, programming, and other information technology topics for 30 years. Before joining InfoWorld in 2013, Serdar wrote for Windows Magazine, InformationWeek, Byte, and a slew of other publications. At InfoWorld, Serdar has covered software development, devops, containerization, machine learning, and artificial intelligence, winning several B2B journalism awards including a 2024 Neal Award and a 2025 Azbee Award for best instructional content and best how-to article, respectively. He currently focuses on software development tools and technologies and major programming languages including Python, Rust, Go, Zig, and Wasm. Tune into his weekly Dev with Serdar videos for programming tips and techniques and close looks at programming libraries and tools.
