DockerCon sailed through Seattle recently, leaving in its wake a new swath of rapid adopters and a trail of related company and product announcements. Docker itself produced perhaps the most exciting announcements of all with the launch of the Docker Store, a searchable marketplace for validated software and tools in the Docker format, plus the launch of version 1.12 of its software, currently in public beta.
But the most important message delivered during the event came from Docker’s CEO, Ben Golub, who stated during his keynote address that upwards of 70 percent of enterprise companies have now implemented containerization.
That is a drastic upturn, but then the excitement behind Docker is increasing daily, with news of deployments at ADP and GE only serving to fuel that trend. The Seattle event was sold out -- its 4,000 attendees represent twice the number who attended DockerCon last year and an eight-fold increase from three years ago. And according to the company, Docker’s rapid adoption has resulted in more than 460,000 containerized applications. Does that level of container-based penetration signal the beginning of the end for the Puppet and Chef IT automation platforms? I believe so.
The Puppet and Chef frameworks came into vogue in 2013. Each was a robust, timely release that helped facilitate better continuous delivery and integration, but as most users know, they were also clunky -- hard to implement and difficult to use.
Over the past year, killer innovations delivered by Docker (such as Swarm and the Docker Trusted Registry), Kubernetes and Amazon’s EC2 Container Service (ECS, a high-performance container management service for Docker containers on managed Amazon EC2 instances) have targeted Puppet’s and Chef’s weaknesses. As a result, both platforms are now a bit less relevant in the enterprise IT world. In addition, Google's Container Engine service offers similar cluster management capabilities for Docker containers.
Ultimately, Puppet and Chef failed because of what originally made the platforms attractive -- open source ecosystems. Unfortunately, these ecosystems are plagued with problems like bitrot, unforgiving structures, and platforms that require significant configuration.
Unhappy Puppet and Chef users typically complained that neither platform worked right out of the box. Both required installing numerous modules, which often failed during installation, forcing users to reinstall them over and over until they finally worked. Ultimately, working with Puppet and Chef created too many internal and external conflicts for enterprise CTOs to produce significant ROI.
Succeeding with Docker
Staying ahead of the technology curve is a bit like leading a 10K race -- your competitors are either in front of you or close behind. If they are in front, the tendency is to follow their path; if they are behind, their natural inclination is to follow your steps.
Don’t fall into this trap.
In today’s world best practices are often discovered internally, but not shared publicly. So you may have knowledge that competitor A is running Puppet Enterprise and generating massive ROI, but what you don’t know are the exact configurations that company’s IT team implemented and what level of testing was required to satisfy the CTO’s KPIs.
Forge your own path! Only you can determine what success looks like in your organization. That being said, I would like to offer a couple of evergreen suggestions for succeeding with Docker in an enterprise environment.
1. Turn your monolithic application into microservices. Pick one application that is made up of several modules and services. Have your team whiteboard the entire platform and all of its discrete services and APIs (if that hasn’t already been done), then facilitate a conversation about how to containerize each of those services as a stand-alone container. A good analogy came from the CTO of ADP: if a monolithic application is a chicken, its containerized services are chicken nuggets. Containerization = "nuggetization," so to speak.
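To make that "nuggetization" concrete, here is a minimal, hypothetical docker-compose.yml sketch -- the service names and images are invented for illustration, not drawn from any real application:

```yaml
# Hypothetical sketch: each former module of the monolith becomes
# its own container. Service names and images are placeholders.
version: "2"
services:
  web:                       # front end, formerly the monolith's UI layer
    image: example/web:1.0
    ports:
      - "80:8080"
    depends_on:
      - api
  api:                       # business logic, now a stand-alone service
    image: example/api:1.0
    environment:
      - DB_HOST=db
  db:                        # persistence, formerly an embedded database
    image: postgres:9.5
```

Each service can now be built, versioned, scaled and redeployed independently -- the whiteboard exercise above is essentially about deciding where these service boundaries fall.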
2. Engage Swarm. Consider a minimum of three clusters (Test/Dev, Pre-Prod, Prod), and institute role-based access control (RBAC) to determine which developers get access to which cluster functions. This streamlines workflows, conforms to continuous integration and delivery best practices, and helps avoid scenarios where a junior dev accidentally deletes all of the containers. Swarm also offers features generally reserved for cloud or clustering technologies, such as high availability, self-healing and fault tolerance in the face of hardware or node failures.
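As a rough sketch of standing up the new Swarm mode shipped in Docker 1.12, the commands look like the following -- the IP address, hostnames, join token and image name are all placeholders, and the fine-grained RBAC mentioned above comes from Docker Datacenter / Universal Control Plane rather than Swarm itself:

```shell
# On the first manager node: initialize Swarm mode (Docker 1.12+).
docker swarm init --advertise-addr 10.0.0.1

# On each worker node: join the cluster using the token that
# `swarm init` printed (the token below is a placeholder).
docker swarm join --token SWMTKN-1-<token> 10.0.0.1:2377

# Label nodes by environment so workloads stay in their cluster tier.
docker node update --label-add env=prod node-prod-1
docker node update --label-add env=test node-test-1

# Run a replicated service constrained to production-labeled nodes;
# Swarm reschedules replicas automatically if a node fails.
docker service create --name api \
  --constraint 'node.labels.env == prod' \
  --replicas 3 example/api:1.0
```

The `--replicas` and `--constraint` flags are where the self-healing and placement guarantees described above come from: Swarm continuously reconciles the declared state against the cluster's actual state.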
3. Expose developers who are accustomed to Windows to Linux. Reaping the maximum benefit from the Docker platform requires comfort with Linux. The Docker Universal Control Plane and the Windows Docker client may help ease that pain, but the short-term investment in helping developers acquire Linux proficiency will pay dividends -- not only in integrating Docker and maximizing ROI, but also with future tool platforms, which are increasingly Linux-based.
Will Docker suffer the same drop-off in users in three years that Puppet and Chef are experiencing now? Or does it have enough chutzpah to become a ubiquitous centerpiece for enterprise CTOs? Only time will tell, but as of right now Docker is doing all the right things, and the market forces seem uniquely aligned.
Certainly the market trend of continued democratization of containerization through Container-as-a-Service (CaaS) improvements such as Docker Datacenter points to sustainability. CaaS improvements allow enterprise IT teams to manage thousands of containers in an intuitive manner without tapping the DevOps department. General IT support can now help manage the containers, leaving the devs to do what they do best -- develop -- rather than manage containers.
Lastly, Microsoft's effort to support Docker and contribute to the platform should not be taken lightly. In fact, Microsoft was the title sponsor for DockerCon.
During the conference, Microsoft’s Azure CTO Mark Russinovich demonstrated Microsoft SQL Server running on Linux, inside a Docker container, on Azure Stack, in a hybrid Docker Swarm cluster managed by Docker Datacenter running in the Azure public cloud. Admittedly this was a fairly complex stack, which is likely why it was the first demo to fail. Attending CTOs should take note of that failure, as it was likely logged by Docker and should be used to improve the platform going forward.
As an attendee of DockerCon and a huge fan of the open source community, I for one am hoping Docker reaches that unique place of product ubiquity for both professional and personal reasons. In fact, I can’t wait to see what the attendance numbers from DockerCon 2017 look like. Stay tuned!
This article is published as part of the IDG Contributor Network.