Cashing in on Containers

Asanka Abeysinghe, VP, Solutions Architecture, WSO2

Containers for managing software and services are quickly moving into the mainstream. In fact, 76 percent of enterprises responding to a 2016 survey conducted by ClusterHQ stated that they had deployed containers in production environments—up from 38 percent the previous year. This growth is no surprise when you consider that among those organizations with container deployments, 72 percent report that using containers enabled them to meet or exceed business or IT objectives.

Moreover, the migration to a container-based application delivery model within enterprises is encouraged by Gartner Inc. The firm stated in a July 2016 Gartner Blog Network post, “Gartner asserts that applications deployed in containers are more secure than applications deployed on the bare OS.”

Working with enterprises across a range of industries, I have seen the benefits they are already realizing from containerization first-hand. At the top of the list is their ability to maintain a competitive edge, since the flexible and adaptive architecture style fosters innovation within the enterprise. It also enables rapid application development and continuous delivery, given that the enterprise can use containers to split applications into subcomponents, each performing a specific task. This helps them cater to consumer demand in an efficient and timely manner.

 Containerization is clearly the next-generation mode of running applications, already demonstrating the ability to save enterprise IT organizations time and money 

Another benefit is that containerization is a highly scalable approach, enabling enterprise IT organizations to utilize hardware more efficiently and minimize waste. Because a new container instance can be spun up as and when required, and there typically will be more instances of a given server application on a specific piece of hardware, enterprises can maximize hardware usage during average as well as peak hours. Additional benefits from a tech perspective include less downtime, high availability, and ease of scaling due to time efficiencies between developing, testing, and deploying applications.

Looking closer, the benefits are really driven at two stages of the implementation: development and production. Development is straightforward, with little or no complexity, because containers depend little on the underlying application infrastructure. They also enable security and governance services to be handled independently outside of the containers, further helping to reduce complexity.

At the implementation stage, containerization offers greater portability, since it allows developers to leverage different cloud platforms based on cost and performance. Additionally, developers don’t need to waste time on setting up sandboxes; instead, they can quickly recreate an instance from a prebuilt container, which will in turn improve productivity.

Enabling Rapid Development and Hassle-Free Production

Within software development, containers are streamlining the lifecycle—from development, to testing, to production—and reshaping how organizations go to market.

The ease of creating an environment is an essential aspect of container-based development. It helps developers to build environments in seconds, as well as automate the development and test processes. Developers and test engineers can wipe out existing container-based deployments and rebuild these quickly based on their needs. Additionally, the lightweight nature and limited resource usage of containers allow developers to build complex distributed setups even in their sandbox environments. These aspects of containerization enable greater efficiency and agility while encouraging teams to be creative.

At the same time, container-based production systems free people from having to worry about problems with memory outages or session management. The quick startup time of containers allows developers to spin up containers on demand. And adopting a microservice architecture encourages running one application per container (1:1), with containers spun up and shut down in response to consumer requests. For example, let's consider a web app that's deployed in a container. For each web user request, you can create a container, service the user, and then shut down the app after the session ends or times out.
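The per-request lifecycle above can be sketched in plain Python. This is a toy model, not a real container runtime; the class and function names are illustrative assumptions.

```python
# Toy model of the 1:1 per-request container lifecycle: spin up,
# serve the request, then shut down when the session ends.

class Container:
    def __init__(self, image):
        self.image = image
        self.running = True  # container start-up is fast, modeled as instant

    def stop(self):
        self.running = False


def handle_request(image, request):
    """Spin up one container per request, serve it, then shut it down."""
    container = Container(image)
    try:
        response = f"served {request} from {container.image}"
    finally:
        container.stop()  # shut down after the session ends or times out
    return container, response


container, response = handle_request("webapp:latest", "GET /home")
print(response)           # served GET /home from webapp:latest
print(container.running)  # False: the container was shut down after serving
```

In a real deployment, the request router would delegate the spin-up to the container runtime or orchestrator rather than manage lifecycles directly.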

Creating an Effective Container Strategy

The use of containers provides enterprises several benefits, most notably high scalability and cost efficiency. Because a single server can host multiple containers running various applications, each independent of the others, the approach is extremely lightweight and offers some degree of portability. Containerization also requires fewer resources (memory, CPU, disk) and is cheaper, since many containers can be added to a single server rather than running one server per application.

However, some approaches to containerization are more effective than others. Whether starting a new container implementation or extending an existing one, there are three key strategies enterprises should consider to ensure they are maximizing the potential benefits of containerization.

Maintain proper governance and standards. Enterprises must pick and choose what they want to run in a container rather than blindly applying this concept to every application. And when apps are containerized, each container should be governed by a proper set of standards. For instance, applications that fit well into containers should be stateless, have a single function, and run only for short time periods. Enterprises should weigh these factors when deciding which applications must run in containers.

Automating the creation and deployment of containers brings governance. Notably, a continuous integration server—for example, Jenkins, Bamboo, CruiseControl, or Microsoft Visual Studio Team Services (VSTS)—combined with a deployment management tool, such as Puppet, Chef, Ansible, or shell scripts, can be used to build the automation framework. Applications, server runtimes, or any other artifacts that need to be packaged inside the containers can be stored in repositories, such as version control systems or registries, so that the appropriate artifacts can be selected during the build process.
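As a minimal sketch of that automation step, the snippet below composes the commands a CI job might run to pick a versioned artifact, package it in an image, and push it to a registry. The registry URL, app name, and version are hypothetical assumptions, and the commands are assembled rather than executed.

```python
# Sketch of a CI-driven build-and-push job for a container image.
# All names (registry host, app, version) are illustrative.

def build_pipeline(app, version, registry="registry.example.com"):
    """Return the ordered shell commands for a containerized build job."""
    tag = f"{registry}/{app}:{version}"
    return [
        f"git checkout tags/{version}",  # select the artifact from version control
        f"docker build -t {tag} .",      # package the application in a container image
        f"docker push {tag}",            # store the image in the registry for deployment
    ]


for cmd in build_pipeline("orders-service", "1.4.2"):
    print(cmd)
```

A real pipeline would add steps such as running tests before the build and signing the image before the push, but the build-tag-push core stays the same.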

Ensure an efficient discovery mechanism. Even though the underlying technology and concepts of containers are starkly different from those of a virtual setup, there are also many similarities. One is the availability of a file system, or registry, that can be accessed over a network as with any computer system. So there should be a very efficient discovery mechanism that enables the enterprise to easily add or remove containers as required.

To further illustrate the concept, let’s consider the typical example of a system that’s configured for the containers to spin up on demand. Based on the request type, the system should identify the application, as well as the container that has wrapped the application, that needs to be started. A container registry and a lookup system can help the request router and the container management system to pick the correct container image.
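The lookup described above can be illustrated with a simple in-memory registry that maps a request type to the container image wrapping the matching application. All image names and request types here are assumptions for illustration; a production system would use a real registry service.

```python
# Illustrative registry lookup: the request router resolves an incoming
# request type to the container image that should be started.

CONTAINER_REGISTRY = {
    "web":   "registry.example.com/webapp:2.1",
    "api":   "registry.example.com/api-gateway:1.7",
    "batch": "registry.example.com/report-runner:0.9",
}


def resolve_image(request_type):
    """Pick the correct container image for an incoming request type."""
    image = CONTAINER_REGISTRY.get(request_type)
    if image is None:
        raise KeyError(f"no container registered for request type {request_type!r}")
    return image


print(resolve_image("api"))  # registry.example.com/api-gateway:1.7
```

Adding or removing a container type then amounts to updating a single registry entry, which is what makes the discovery mechanism efficient.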

Include a container management layer for complex scenarios. Feature revisions and additions are important for containers to become increasingly scalable within an enterprise. And for these container-based applications to maintain scalability, enterprises must include an orchestration layer to enable interactions between containers. Container orchestration tools, such as Docker Swarm, Kubernetes, and Mesos, offer scheduling and clustering capabilities that are required to manage interaction between containers.

However, orchestration functionality is only required if the deployment is a complex one, and it should be incorporated using an iterative approach to keep the implementation simple. Let's consider, for example, a large telecommunications provider that is using a Docker Swarm-based Container-as-a-Service (CaaS) framework. This approach helps the company implement the governance of its containers, integrate with continuous integration, rapidly build environments, and enable auto-scaling in the production environment.
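The auto-scaling decision an orchestrator such as Docker Swarm or Kubernetes automates can be reduced to a simple calculation: scale the replica count to demand, clamped to a safe range. The thresholds below are illustrative assumptions, not defaults from any real tool.

```python
import math

# Toy version of an orchestrator's scaling rule: replicas follow load,
# bounded by a minimum (availability) and maximum (cost/capacity) limit.

def desired_replicas(requests_per_sec, per_replica_capacity=100,
                     minimum=1, maximum=10):
    """Scale replica count to demand, clamped to [minimum, maximum]."""
    needed = math.ceil(requests_per_sec / per_replica_capacity)
    return max(minimum, min(maximum, needed))


print(desired_replicas(250))  # 3 replicas at peak load
print(desired_replicas(20))   # 1 replica during quiet hours
```

Real orchestrators layer scheduling, health checks, and rolling updates on top of this core loop, which is why the article recommends adopting them only once the deployment is genuinely complex.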

Understanding Containers’ Limitations

A key limitation of deploying containers is that they are useful and relevant only for short-lived applications, since the configuration management doesn't encourage the hot updates generally required by long-running applications. Changes can be made at runtime, but there's no straightforward way to maintain an existing container for an extended period, i.e., more than a day. There's also a possible security risk with orchestration if malicious code in one container is passed on to others when containers talk to each other.

Still, containerization is clearly the next-generation mode of running applications, already demonstrating the ability to save enterprise IT organizations time and money. With the pros significantly outweighing the cons, it's likely that more enterprises will join the majority group of 72 percent already reaping the benefits.
