Alon Dulce, a BGV intern and MBA graduate of the Kellogg School of Management, shares his perspective on the next phase of IT infrastructure innovation around containers.
The adoption of containers is poised to become one of the most disruptive trends to impact IT infrastructure since virtualization. Companies are adopting containers at an exponential rate, especially large enterprises where data center costs are a significant portion of the bottom line (see chart below).
Docker has been the pioneer and poster child for this trend. Some pundits claim that containers will disrupt traditional virtualization software, while others suggest that containers and traditional virtual machines will coexist. What is the truth, and what are the implications for the next phase of innovation in this area?
A container is a software package that bundles together the entire runtime environment needed to execute an application: the application itself, its dependencies, libraries and other binaries, and the configuration files to run it. By packaging an application in a container, differences in OS distributions and underlying infrastructure are abstracted away.
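As a concrete illustration, that bundled runtime environment is typically described in a short Dockerfile; the base image, application file, and dependency list below are hypothetical placeholders, not from the original post:

```dockerfile
# Base layer: a minimal Linux image (hypothetical choice)
FROM python:3-slim

# Bundle the application's dependencies into the image
COPY requirements.txt .
RUN pip install -r requirements.txt

# Bundle the application itself and its configuration files
COPY app.py config.yaml ./

# Define how the containerized application is started
CMD ["python", "app.py"]
```

Once built, the resulting image carries everything the application needs, so it runs the same way on any host with a container runtime, regardless of the host's OS distribution.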
This has several benefits:
- More efficient use of resources than VMs – the claimed improvement is somewhere between a factor of four and six (read: fewer servers = less $$ on servers and therefore on data center power costs)
- Faster deployment times, which lead to lower costs (whether for cloud or on-premise resources) and provide an easier environment in which to manage and deploy applications (read: easier management = less $$ on maintenance)
- Finally, since containers are easy to use and lightweight, anyone can develop an app, upload it to the cloud in a container, and gain instant application portability across devices and OSs by pulling ready-to-run containerized applications from the cloud. This may allow the elimination of platform-specific development in the near future (read: less $$ on duplicate development efforts to accommodate several devices/OSs).
We believe that the extent of adoption of containers will be driven by factors such as:
- Standardization – This is being tackled by Docker (by its sheer popularity and its collaborations with Google, RedHat, and other players on the open source libcontainer, making it a de-facto standard for Linux-based containers). Google is also moving its engineers to libcontainer instead of its own lmctfy library.
- Management – Container management tools are not yet competitive with VM management tools such as VMware's vCenter or Microsoft's System Center, which are used to manage virtualized infrastructure. The trends to watch in containers are governance, management, and monitoring of container "farms," similar to how multiple VMs are managed in the data center. Elastic load balancing, performance monitoring, failover management, and auto-scaling all sit within this management ecosystem, which provides end-to-end deployment and management capabilities to end users.
- Security – This is a huge concern for many enterprises. Since containers share an OS and many binaries, and many applications and containers run with superuser privileges, a compromised container can spread to the OS and onwards. Gartner recommends running de-privileged containers, or containers inside a VM, where there are security concerns. We see many companies trying to address these security holes, but container providers themselves are also making substantial organic efforts. As a result, a standalone "security for containers" product may not be a viable business, and may instead become part of the management platform.
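The de-privileged approach Gartner recommends can be sketched with standard Docker run flags; the image name and user ID below are illustrative assumptions, not a complete hardening guide:

```
# Run as an unprivileged user (UID 1000, illustrative) instead of root,
# drop all Linux capabilities, block privilege escalation, and make the
# container filesystem read-only
docker run \
  --user 1000:1000 \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  --read-only \
  my-app:latest   # hypothetical image name
```

Even if such a container is compromised, the attacker lands in a process with no root privileges and no capabilities, which limits how far the compromise can spread toward the shared OS.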
- OS – Another limitation of the majority of container solutions is that they are almost all Linux-based, while many enterprises require Windows-based development, data centers, and so on. ContainerX just launched the first Windows Container-as-a-Service platform, and there is room to grow there.
Furthermore, containers are unlikely to replace virtual machines in every use case. Many enterprises, small and large, are adopting containers, but that doesn't mean they're neglecting VMs. Containers, at least for now, have a different scope than VMs: they are specifically tailored to run a single application with fewer resources than a VM requires. This means that enterprises are using containers in addition to VMs to maximize processing power when running multiple applications or when deploying applications (such as the Google product suite mentioned before). We also see some enterprises deploying containers within VMs.
Next Innovation Opportunity
Containers are here to stay, so the interesting question is where the next innovation opportunity lies, i.e., which container-related areas make sense for innovation and future VC funding. Looking at the evidence above, two themes emerge:
First, containers for other operating systems, with a focus on Windows. In the Linux sphere, there are already several leading startups backed by significant players (Docker with Google, RedHat, Parallels, etc.).
Second, and more important, is the technology that will allow containers to mature and compete with VMs: security, management, and analytics tools such as Twistlock, Panamax.io, Containn, Galactic Exchange, Pachyderm.io, and others. This is where we believe the greatest future innovation potential lies.
In conclusion, while Docker was the pioneer in the container space, we believe there is more innovation still to come, albeit in different shapes and forms from other startups – innovation necessary to drive broader enterprise adoption in production systems beyond DevOps.