Cloud-Native Security and Performance: Two Sides of the Same Coin

Brainblog for Sysdig, by Jason Bloomberg

You’re running Kubernetes in a production environment, and you need to apply a patch — perhaps to a commercial application, an open source component or even a container image. How long should it take to implement that patch in production? Thirty days? One day? One hour?

Remember, cloud-native environments are supposed to respond to change in real time. Such a response isn’t simply scaling up or down as needed. It’s also essential to respond to security threats and performance issues as close to real time as possible.

Containers, and the microservices they support, are also ephemeral. They can live for five minutes or even less, and yet unpatched microservices are just as dangerous as longer-lived code.

Patching must also fit within your application lifecycle. Teams with a mature DevOps process typically release code weekly, daily or even hourly. Their ability to apply a patch and roll it to production quickly reduces their security risk significantly. Are your operators up to the task?
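
To make the point concrete, here is a minimal sketch of what “rolling a patch to production” can look like in Kubernetes, using the Kubernetes Python client. The deployment name, namespace and image tag are hypothetical placeholders, not anything specific to Sysdig or to any real environment.

    # Minimal sketch: point a Deployment at a patched image and wait for the rollout.
    # Assumes the "kubernetes" Python client is installed and a kubeconfig is available.
    import time
    from kubernetes import client, config

    config.load_kube_config()           # use load_incluster_config() inside a pod
    apps = client.AppsV1Api()

    DEPLOYMENT = "web"                  # hypothetical deployment name
    NAMESPACE = "default"
    PATCHED_IMAGE = "registry.example.com/web:1.2.4"  # image rebuilt with the patch applied

    # Strategic-merge patch: swap in the patched image; Kubernetes performs a rolling update.
    apps.patch_namespaced_deployment(
        name=DEPLOYMENT,
        namespace=NAMESPACE,
        body={"spec": {"template": {"spec": {"containers": [
            {"name": DEPLOYMENT, "image": PATCHED_IMAGE}
        ]}}}},
    )

    # Poll until every desired replica is updated and ready (or give up after ~5 minutes).
    for _ in range(60):
        dep = apps.read_namespaced_deployment(DEPLOYMENT, NAMESPACE)
        desired = dep.spec.replicas or 0
        if (dep.status.updated_replicas or 0) >= desired and \
           (dep.status.ready_replicas or 0) >= desired:
            print("Patched image rolled out.")
            break
        time.sleep(5)

The same step is routinely done from the command line with kubectl set image and kubectl rollout status; either way, in a mature pipeline it takes minutes, not days.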

Poor Security Management Becomes a Performance Problem

The connections between security and performance go well beyond cloud-native environments. Denial-of-service (DoS) attacks obviously target a site’s performance, as do cryptojacking and, to a lesser extent, ransomware attacks.

In many cases, performance monitoring gives an organization its first indication that such an attack is underway. However, monitoring isn’t just for recognizing attacks in progress. It also plays an important role in preventing attacks, especially in cloud-native environments.

To understand this point, think about how organizations have traditionally handled patch management, especially for software infrastructure: A vendor (or open source project) releases a patch. The enterprise applies the patch in a test environment that only loosely resembles production. After running a range of integration tests over the course of days or even weeks, the ops team may be ready to deploy the patch into production.

Another likely scenario: The ops team may collect several patches to different pieces of software, hoping to test and deploy them all at once. In the meantime, the clock keeps ticking, giving attackers even more opportunity for mischief.

The reason that applying patches in an enterprise production environment takes so long is that IT leadership perceives patch management as a high-risk activity. Patches may cause a piece of software to misbehave or fail outright, and the complex interdependencies among both applications and infrastructure components compound the risk of failure unpredictably.

Read the entire article here.

Sysdig is an Intellyx customer.
