BrainBlog for Avesha by Jason English
Cloud native application delivery – and Kubernetes orchestration technology specifically – is finally past the early adoption territory where only cutting-edge vendors dared to tread.
Even well-established companies are weighing their options for breaking up existing applications into containerized workloads atop highly elastic, on-demand infrastructure. Kubernetes provides an ideal way to orchestrate the deployment and deprovisioning of ephemeral microservices, whether in an on-premises data center or in public cloud infrastructure.
Unfortunately, while a savvy group of early adopters is shipping applications with impressive performance and value, many larger enterprises that waited to enter the game are running into problems as they scale their new Kubernetes deployments beyond a single cluster or infrastructure type.
While Kubernetes freed us from infrastructure constraints through right-sized application workloads, the authentication and usage requirements of the many different types of users and services – or tenants – that depend on K8s-orchestrated applications are anything but one-size-fits-all.
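To make that tenant problem concrete, here is a minimal sketch, assuming the standard client-go library, of how a platform team might carve out per-tenant namespaces with different resource quotas. The tenant names and quota values are hypothetical illustrations, not Avesha's implementation or a recommendation from the whitepaper.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// createTenant provisions a namespace and a ResourceQuota for one tenant.
// Each tenant gets different limits, reflecting that usage needs are not uniform.
func createTenant(ctx context.Context, client kubernetes.Interface, tenant, cpu, memory string) error {
	// The namespace acts as the tenant's isolation boundary.
	ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: tenant}}
	if _, err := client.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{}); err != nil {
		return err
	}

	// A per-tenant ResourceQuota caps the compute its workloads can request.
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: tenant + "-quota", Namespace: tenant},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourceRequestsCPU:    resource.MustParse(cpu),
				corev1.ResourceRequestsMemory: resource.MustParse(memory),
			},
		},
	}
	_, err := client.CoreV1().ResourceQuotas(tenant).Create(ctx, quota, metav1.CreateOptions{})
	return err
}

func main() {
	// Assumes a kubeconfig at the default location; adjust for in-cluster use.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	// Two hypothetical tenants with very different resource envelopes.
	_ = createTenant(ctx, client, "tenant-analytics", "8", "16Gi")
	_ = createTenant(ctx, client, "tenant-web", "2", "4Gi")
}
```

Even this toy example shows the shape of the problem: quotas, credentials, and placement rules multiply with every tenant, and keeping them consistent across multiple clusters and infrastructure types is where scaling pains begin.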
Download the BrainBlog / Whitepaper asset here. (Registration required.)