By now, unless you’ve been vacationing on a remote island off the grid, you’ve seen containers gain incredible momentum, with organizations adopting them to increase productivity, agility, and innovation. Developers quickly understood the benefits of containers:
Simplicity. They are super easy to use. All you need is Docker and a laptop and you are off and running.
Lightweight. Containers include only the libraries, binaries, and configuration files needed to run the application.
Portability. Containers can run on top of virtual machines or on bare-metal servers. They can run on-premises or in the cloud. Developers can code their applications, package them in a container, and then move the container and application easily across environments.
But what about operations? What types of challenges emerge when these new or repackaged containerized applications need to move from the developer’s laptop to production environments?
We recently held group discussions at VMworld US and Barcelona on just this topic. What we found was that, although momentum and mindshare among developers are extremely high, the typical IT admin, or even VI admin, is quite new to container technology. Though many attendees knew their dev teams were experimenting or developing with containers, they were just starting to gather information on what running containerized applications in production would mean for them.
What should they consider? What are the risks? And what solutions are available to help them not only run these applications in production, but also give development teams the tools and infrastructure they need to innovate, while protecting SLAs and meeting compliance requirements?
For operations, containers are much more challenging to run in production. At the end of the day, an application running in a container is still an application. As such, you still need enterprise-grade networking, security, data persistence, health and performance monitoring, logging, backup, disaster recovery, high availability, and so on. Running an application in a container does not eliminate any of these requirements; in fact, it often makes them harder to meet. Let’s look at why.
Often, in all the excitement about containers, security gets lost. First, containers do not contain. By default, users are not namespaced, so a process that breaks out of a container has the same privileges on the host as it did in the container: if you were root in the container, you are root on the host. Second, images may not be clean. Most development begins with container images pulled down from public registries, and vulnerabilities can easily be packaged up along with the application.
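One basic mitigation is to avoid running as root inside the container in the first place. A minimal Dockerfile sketch (the base image tag and the `app` user/group names are illustrative assumptions, not from the original post):

```dockerfile
# Illustrative sketch: build the image around an unprivileged user
FROM alpine:3.19

# Create a dedicated, non-root service account inside the image
RUN addgroup -S app && adduser -S app -G app

# The running container's processes now execute as "app", not root,
# so a breakout does not automatically grant root on the host
USER app

CMD ["sh", "-c", "id"]
```

This does not make containers "contain" on its own, but it shrinks the blast radius if a process does escape.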
Can you be sure the image comes from a trusted source or repository? Has it been scanned for known vulnerabilities? How can you manage something you can’t see? Packaging an application in a container solves one problem for operations (you know what is in it), but a much harder problem emerges: understanding the dependencies of a containerized application.
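One common way to reduce that uncertainty is to pin base images to an immutable content digest rather than a mutable tag, so the image you vetted is exactly the image you build on. A hedged sketch (the digest below is a placeholder, not a real value):

```dockerfile
# Pull by digest: the content hash uniquely identifies the image layers,
# so an upstream re-tag cannot silently change what you build on.
# <sha256-digest> is a placeholder; substitute the digest your registry reports.
FROM alpine@sha256:<sha256-digest>
```

Combined with scanning images for known CVEs before they are promoted to a trusted internal registry, this gives operations a verifiable chain from source to production.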
Networking can be tricky, especially when containerized applications are part of a hybrid architecture and need to connect to other services running in containers or in VMs.
Tying back to visibility, it is crucial to understand where containers are running and what their dependencies are. Being able to quickly define and enforce affinity and anti-affinity policies is especially important for regulated industries such as retail, health care, and financial services.
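As one concrete illustration of what an anti-affinity policy looks like in practice, here is a sketch in Kubernetes manifest syntax (Kubernetes is not mentioned in the post; the `payments` name and image are hypothetical, and vSphere-based platforms express the same idea through their own placement rules):

```yaml
# Hypothetical manifest fragment: spread replicas of "payments" across
# separate hosts so a single host failure cannot take down every copy
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments
spec:
  replicas: 3
  selector:
    matchLabels:
      app: payments
  template:
    metadata:
      labels:
        app: payments
    spec:
      affinity:
        podAntiAffinity:
          # Hard rule: never co-locate two "payments" pods on one node
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: payments
              topologyKey: kubernetes.io/hostname
      containers:
        - name: payments
          image: payments:1.0
```

The same pattern inverted (affinity instead of anti-affinity) keeps chatty services close together, which is often what compliance or latency requirements dictate.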
The challenges are many, but coupling a strong container infrastructure, like vSphere Integrated Containers from VMware, with proven enterprise-grade management solutions, like vRealize, will enable you to confidently move containerized applications from developer laptops to production.
Stay tuned as we continue discussing these challenges and how operations can best get ready for the containerized IT world.