Taming wild containers and clusters on the hybrid cloud DevOps frontier

BrainBlog for Morpheus Data by Jason English

Pull up to the fire, pardner. Let me tell you about a time before cloud, when we used to wrangle as many as 10 different prod and pre-prod environments in our data center, loaded with hundreds of VMs, largely by looking at system-level metrics and logs to see which racked servers we’d need to update or reboot.

The wild west days of operating physical IT infrastructure for use by application development teams—before the cloud abstracted everything—really aren’t that far in the past. Even as recently as a decade ago, most executives probably thought of the cloud as a place for hosting SaaS apps, rather than an operating model for application environments.

Today, we must support DevSecOps patterns across both private and public clouds, each with its own complexities. Operations teams are being asked to provide management and observability front ends, databases, and other services, many of which are still delivered as virtual machines, while more and more arrive as customized Docker containers and ephemeral Kubernetes-orchestrated pods.

Now hold on a minute—it sure sounds like we are still living in the wild west! How can platform operations teams corral so many moving parts for application delivery teams, and bring this heterogeneous hybrid cloud herd into the future?

Drawing ops into the dev container wrangling business

The early stages of cloud-native development happened in fits and starts, as developers sought ways to package applications in highly portable containers that could run practically anywhere, encapsulating dependencies such as code, OS packages, and libraries.

For the first time, developers could get past some of the operational constraints of legacy IT approaches. They could realistically provision their own target applications: just pull a container image from a registry that looks close enough, install their code, and spin it up on their own test server, in the data center, or on AWS or Azure, often in less than a second.
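To make that workflow concrete, here is a minimal sketch using the Docker SDK for Python, assuming a local Docker daemon is running and the docker Python package is installed; the image tag and command are purely illustrative.

```python
import docker

# Connect to the local Docker daemon using the standard environment settings.
client = docker.from_env()

# Pull a public base image that "looks close enough" to the target app's needs.
client.images.pull("python:3.12-slim")

# Spin up a throwaway container from that image. Once the image is cached
# locally, the container itself starts in well under a second. A real team
# would mount or bake in their own application code instead of this one-liner.
container = client.containers.run(
    "python:3.12-slim",
    command=["python", "-c", "print('hello from a disposable test environment')"],
    detach=True,
)

container.wait()                  # let the short-lived process finish
print(container.logs().decode())  # hello from a disposable test environment
container.remove()                # tear the environment down when done
```

The same few calls work whether the daemon lives on a laptop, a data center VM, or a cloud instance, which is exactly what made this self-service approach so appealing to developers.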

Kubernetes (or K8s) came along and offered even more promise for development teams to control their own destiny by letting them deploy and orchestrate complete cloud-native environments, with internal networking, security, and data handling features, from within the release pipeline.
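As a rough sketch of what deploying from within the release pipeline can look like, here is an illustrative snippet using the official Kubernetes Python client, assuming an existing cluster reachable through a local kubeconfig; the deployment name, image, and replica count are placeholder assumptions.

```python
from kubernetes import client, config

# Load cluster credentials the same way kubectl does (from ~/.kube/config).
config.load_kube_config()

# Describe a small Deployment: two replicas of a stock web server image.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "demo-web", "labels": {"app": "demo-web"}},
    "spec": {
        "replicas": 2,
        "selector": {"matchLabels": {"app": "demo-web"}},
        "template": {
            "metadata": {"labels": {"app": "demo-web"}},
            "spec": {
                "containers": [
                    {
                        "name": "web",
                        "image": "nginx:1.27",
                        "ports": [{"containerPort": 80}],
                    }
                ]
            },
        },
    },
}

# Ask the cluster's control plane to create it; Kubernetes then handles
# scheduling, restarts, and pod-to-pod networking on the team's behalf.
apps = client.AppsV1Api()
apps.create_namespaced_deployment(namespace="default", body=deployment)
```

In a real pipeline these values would be templated per release, and the same API would also create the services, network policies, and volumes that give the environment its internal networking, security, and data handling features.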

However, anyone who has ever had to install and set up a K8s cluster, much less maintain those clusters post-release in a real production environment, will tell you that wrangling containers across multiple releases and distributions, in a way that supports an entire enterprise, just ain’t that simple. Time for ops teams to ride to the rescue…

Read the entire BrainBlog here.

Principal Analyst & CMO, Intellyx. Twitter: @bluefug