
What’s the big deal about containers?

1/25/2018 by Sam Cohen

You may have heard about containers, Docker, or IBM Secure Service Containers and wondered what the big deal is.

Let’s discuss the background of containers and find out why customers are interested in them.

The idea behind containers is an extension of dedicating different servers to customer workloads. The original server, the mainframe, is one large server running multiple customer workloads on a transaction processor. If a customer had trouble running multiple business processes in one transaction processor, the mainframe could run multiple transaction processors, one for each business process.

As distributed computing became popular, business processes were spread among more and more physical servers. Supporting the business process servers and applications became more difficult, especially since multiple servers had to be updated at the same time whenever code changed. Automation tools were developed to roll out code updates, but there was still a chance (often a good chance) that not every server would actually receive the correct code, or that an update would be garbled in transmission, leaving a broken server.

As virtualization became popular in the distributed computing world (mainframes have had virtualization since 1967), fewer physical servers were needed, but the total number of server images did not decrease. Virtualization did not address the need to manage a complex distribution of business processes across multiple server images.

Containers aim to reduce the number of server images and simplify the mechanism for rolling out business process updates. A complete release package is created with all of the business process code and its supporting operating system libraries. The package is then locked down so that no ad hoc changes can be made to it.

The package is called a container because it contains the business process code at a specific release level. (Strictly speaking, Docker calls the locked-down package an image and a running instance of it a container, but the terms are often used interchangeably.) Any update to the business process code requires building a new package.
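As a minimal sketch of such a package, a Docker image can be described in a Dockerfile; the application name, paths, and base image here are hypothetical:

    # Base image supplies just enough OS libraries; no kernel, no device drivers
    FROM ubuntu:16.04

    # The business process code at this release level
    COPY payroll-app/ /opt/payroll/

    # Command to run when the container starts
    CMD ["/opt/payroll/run.sh"]

Building it with "docker build -t payroll:1.0 ." produces the locked-down package; shipping new code means building again under a new tag, such as payroll:1.1.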

Container management software, such as Docker, loads a container and makes it visible to the host operating system, whether that’s Windows or Linux, including Linux running in an LPAR or under z/VM on IBM Z. If you need to release a new level of business process code, you build a new container, deactivate the old container, and activate the new one. If you find a problem, you can deactivate the new container and reactivate the old one. Later, you can unload the old container so it doesn’t take up disk space.
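On a Docker host, that cycle might look like the following; the container and image names are again hypothetical:

    # Deactivate the old container and activate the new release
    docker stop payroll-v1
    docker run -d --name payroll-v2 payroll:1.1

    # If the new release misbehaves, fall back to the old one
    docker stop payroll-v2
    docker start payroll-v1

    # Once the new release is proven, reclaim the disk space
    docker rm payroll-v1
    docker rmi payroll:1.0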

What’s the difference between a container environment and traditional virtualization? In traditional virtualization, a complete operating system runs in every virtual machine. In a container environment, one complete operating system handles multiple containers representing multiple business processes or workloads, and every container shares that one host kernel. Each container carries just enough operating system components to serve its business process, with no need for things like device drivers. In addition, each container is isolated from every other container and from the underlying operating system, leading to better application stability.
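You can see that sharing firsthand on a Linux host with Docker installed: both of these commands report the same kernel release, because the container has no kernel of its own (the base image choice is arbitrary):

    uname -r
    docker run --rm ubuntu:16.04 uname -r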

This does not mean that containers are a panacea. Careful planning is still needed for operating system updates (which affect every container on the host), container management software updates (which likewise affect all containers), and container rollouts, as well as for the memory and CPU requirements of each container’s workload.
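Docker can at least make those memory and CPU requirements explicit when a container is activated; a sketch, reusing the hypothetical names from above:

    # Cap this workload's share of host memory and CPU
    docker run -d --name payroll-v2 --memory 512m --cpus 1.5 payroll:1.1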

The IBM Z environment, with its ability to dynamically move Linux images between logical partitions and physical machines under z/VM Single System Image (SSI), provides a higher level of availability with fewer virtual machines than a distributed environment does.
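That movement is a live guest relocation issued from z/VM; roughly, with hypothetical guest and SSI member names, it looks like this:

    VMRELOCATE MOVE USER LINUX01 TO MEMBER2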

When IBM Z is combined with the stabilizing effect of containerized business process applications, you may find that you can reduce both the number of application images for a business process and the number of virtual machines.

Feel free to contact LRS if you’d like to explore how to exploit your private cloud by simplifying your server farm with IBM Z.

About the author

Sam Cohen is a System Z Consultant for LRS IT Solutions. Deep mainframe experience and a sharp focus on customer needs have marked Sam’s 40+ years in information technology. Sam has worked with mainframe clients across the US, implementing customer solutions based on z/OS, VSE, z/VM, and Linux. Because the mainframe is always part of a total solution, Sam also has deep networking experience along with knowledge of enterprise storage solutions.