We’ve seen many of the technology advances described in the Star Trek milieu become reality over the last 50 years, from personal communication devices and instant translators, to GMOs, medical robots, 3D printing and weapons that stun.
But, ah yes, the matter transporter. By far, the invention that captivates us most.
If you could say “Beam Me Up” and have Scotty or Geordi instantly transport your whole body from a hostile planet back to the spaceship in orbit, with mind intact, how cool would THAT be?
Most of us would be thrilled to transport our way past the next traffic jam.
Unfortunately, it doesn’t look like we’ll be seeing matter transport arrive anytime soon for personal use, but it already exists for a critical part of your application experience. Look no further than the load balancer appliance, once confined to a pizza box unit in your on-premises server rack.
From physical to virtual appliances – and beyond
Traditional data center applications relied on a single hardware load balancer to make multiplexed switching decisions for performance or security reasons, routing user traffic among several dependent applications on the network.
The applications, and the load balancer that served them, were hardly portable at all. (Yes, you could still turn them off, box them up and roll them onto a truck and install them in another facility, but they’d be offline for days or weeks.)
The rise of virtualization over the last 15 years brought about a software-defined revolution in how we think about infrastructure. Virtually any computing device once housed in a metal case, and the wires that connected it, started becoming available as software-defined appliances and networks.
At the heart of many of the world’s high-availability internet applications, a load balancer and firewall appliance still remained. Once purchased and licensed only as hardware with a defined capacity, these critical devices grew in functionality into today’s application delivery controllers (ADCs) as they took on more of the routing, data filtering and authentication/authorization work once left to the applications themselves.
While there are still many original hardware load balancers productively churning away in racks, the modern ADC has become better known as a virtual software-defined appliance that can be provisioned and consumed as a service.
Fast forward to today’s cloud-native delivery models, and now you can deliver and manage a custom-tailored application experience (or AX, the quality-of-service counterpart to UX) that meets customer needs anywhere in the world — instantly and without constraints.
CI/CD in the cloud is the ticket for the AX trip
IT Operations teams can no longer simply stay aboard their own data center ship and keep the lights on. They are now enlisted in the DevOps movement and partnered across the CI/CD pipeline — as they support the tool chains of more distributed, agile teams that are delivering application functionality.
For their part, application architects and development teams are doing more Ops-like things: selecting and defining load balancing services and ADCs as part of their own work streams, and coding them right into their delivery environments.
Cloud-native development is accelerating this change in architecture. ADCs need to become as portable, and instantly transportable, as the applications themselves. Cloud-native apps generate hybrid IT workloads that can move between private and public cloud services, run in ephemeral containers, and support stateless microservices processes.
This makes management and automation of ADCs far more complex to achieve. We are reaching a point where the infrastructure stack is too complex for software architects and engineers to manage while still focusing on the business logic.
Using the best practice of Infrastructure as Code (IaC), the optimal AX can be defined in cloud environments or on bare metal, by team members with tuning expertise using a tool like Ansible, Chef or Jenkins. (See a concise guide to implementing multiple load balancers in Ansible from Kemp here.)
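The IaC idea can be sketched outside any particular tool: describe the desired virtual service declaratively and let code render a per-application ADC definition. This minimal Python sketch uses purely illustrative field names (`listen_port`, `scheduling`, `real_servers` and so on are assumptions for this example, not any vendor’s schema):

```python
# Minimal Infrastructure-as-Code sketch: render a load balancer
# ("virtual service") definition from a declarative application spec.
# All field names here are illustrative, not a specific vendor's schema.

def render_virtual_service(app):
    """Build one ADC virtual-service config for one application."""
    return {
        "name": f"vs-{app['name']}",
        "listen_port": app.get("port", 443),
        "scheduling": app.get("scheduling", "least-connections"),
        "tls_offload": True,  # terminate TLS at the ADC, not the app
        "real_servers": [
            {"address": addr, "port": app.get("backend_port", 8080)}
            for addr in app["backends"]
        ],
    }

app = {"name": "storefront", "backends": ["10.0.1.10", "10.0.1.11"]}
vs = render_virtual_service(app)
print(vs["name"], len(vs["real_servers"]))
```

In practice a tool like Ansible would take a spec like this and push it to the ADC’s management API, keeping the load balancer definition versioned alongside the application code.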
One ADC per application? Make it so.
The ability to define AX as code in the cloud, with flexible scaling and more advanced security features to customize the ADC for the application, has driven us to a one-load-balancer-per-application-instance future state.
Yes, the application, and the assurance of its secure, performance-optimized experience, can now be beamed up together to cloud-native instances, with shared logic intact.
Having a bespoke load balancer provides the additional benefit of reducing the blast radius of any traffic problems or attack scenarios to the single app instance, rather than allowing any ripple effect on the performance of other app instances and infrastructure.
Users from anywhere in the world can be directed by a unique ADC instance to the closest possible implementation of the application with the desired performance and cost settings. Regardless of where they are visiting from, they are transported to this optimal region only for the time they are using the application.
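The region-steering behavior described above can be sketched as a simple proximity pick: given a user’s coordinates and the set of currently healthy regional deployments, route to the nearest one. This is a deliberately simplified illustration — real ADC/GSLB steering also weighs measured latency, cost and health-check state, and the region list here is hypothetical:

```python
import math

# Sketch of proximity-based steering: send the user to the nearest
# healthy region. Region names and coordinates are hypothetical.

REGIONS = {  # name -> (latitude, longitude)
    "us-east": (38.9, -77.0),
    "eu-west": (53.3, -6.3),
    "ap-south": (19.1, 72.9),
}

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def nearest_region(user_loc, healthy):
    """Pick the closest healthy region for this user's session."""
    return min(healthy, key=lambda name: haversine_km(user_loc, REGIONS[name]))

# A user near London is steered to eu-west while that region is healthy;
# if eu-west drops out of the healthy set, the same user fails over.
print(nearest_region((51.5, -0.1), ["us-east", "eu-west", "ap-south"]))
print(nearest_region((51.5, -0.1), ["us-east", "ap-south"]))
```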
What was once technically impossible can now happen millions of times every day. But there’s still one non-technical hurdle to overcome.
Licensing and management: The final frontiers
If the licensing model makes the cost per instance of a transportable app prohibitive, we’re still stuck at the starting gate. As soon as app/AX pairs start scaling out in production, costs can balloon out of control.
One AX vendor, Kemp, offers a way past this hurdle with a ‘pay only for the load balancing you use’ pricing model, rather than a per-application or per-instance licensing model.
Another key consideration is the presence of many different forms of load balancer and firewall appliances already in use within any reasonably sized enterprise IT shop supporting existing applications. Whether the existing apps and their associated ADCs were built in the past or gained via acquisition, their interdependencies may work well enough to support business needs.
For some production apps, IT leadership should take an ‘if it ain’t broke, don’t fix it’ approach and simply monitor the environment carefully. For other apps, businesses may need to keep them running as well as possible while prioritizing their AX modernization plans on the roadmap.
Kemp also operates Kemp 360 Central, a global multi-cloud AX service that provides management and analytics for its own LoadMaster ADCs, whether virtual or hardware-based, while also providing monitoring and analytics for other load balancers such as F5 and HAProxy in a single management view.
The Intellyx Take
The transportable nature of today’s applications and the AX that accompanies them — as they beam into hybrid IT environments, across multiple clouds and heterogeneous data centers — has made the globe much smaller.
We’re truly living in the future, as we witness the rise of a more globally aware view of application experience management.
How cool is that? Well, it’s still not nearly as cool as transporting your own body somewhere else, but at least there are no worries about coming out on the other end as a mixed-up quivering mass of genetics.
Transporting applications and AX as code is happening here and now. So, ready for transport?
© 2020 Intellyx LLC. Intellyx retains final control over the content of this article. At the time of writing, Kemp Technologies is an Intellyx customer. None of the other companies mentioned here are Intellyx customers. Image composite sources: sammydavisdog and Scott Swigart, flickr.