How to ensure high availability for pods in Kubernetes

I was thinking the other day about Kubernetes pods, and how they’re like little spaceships floating around the cluster. But what happens if one of those spaceships suddenly vanishes? Poof! Gone! That’s a real problem. So I started wondering: how can we ensure our pods are always there, ready to do their job, even when things go wrong? It’s like trying to keep a juggling act going while someone’s moving the floor around you…

Let me tell you about this tool called Karpenter. It’s like a super-efficient hotel manager for our Kubernetes worker nodes, always trying to arrange the “guests” (our applications) most cost-effectively. Sometimes, this means moving guests from one room to another to save on operating costs. In Kubernetes terminology, we call this “consolidation.”
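
To make “consolidation” concrete, here’s a minimal sketch of how it’s switched on, assuming Karpenter’s v1 NodePool API (the pool name is a placeholder, and older beta versions spell these fields slightly differently):

apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default  # hypothetical pool name
spec:
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized  # let Karpenter repack under-used nodes
    consolidateAfter: 1m  # grace period before a node is considered for consolidation
  # node template (instance types, requirements, etc.) omitted for brevity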

The dancing pods challenge

Here’s the thing: We have this wonderful hotel manager (Karpenter) who’s doing a fantastic job, keeping costs down by constantly optimizing room assignments. But what about our guests (the applications)? They might get a bit dizzy with all this moving around, and sometimes, their important work gets disrupted.

So, the question is: How do we keep our applications running smoothly while still allowing Karpenter to do its magic? It’s like trying to keep a circus performance going while the stage crew rearranges the set in the middle of the act.

Understanding the moving parts

Before we explore the solutions, let’s take a peek behind the scenes and see what happens when Karpenter decides to relocate our applications. It’s quite a fascinating process:

First, Karpenter puts up a “Do Not Disturb” sign (technically called a taint) on the node it wants to clear. Then, it finds new accommodations for all the applications. Finally, it carefully moves each application to its new location.

Think of it as a well-choreographed dance where each step must be perfectly timed to avoid any missteps.
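
If you’re curious, you can watch this from the outside while consolidation runs. Listing every node’s taints is a handy way to spot the “Do Not Disturb” sign (the exact taint key varies by Karpenter version, but it carries the karpenter.sh prefix):

kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints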

The art of high availability

Now for the exciting part: we have some clever tricks up our sleeves to ensure our applications keep running smoothly:

  1. The buddy system: The first rule of high availability is simple: never go it alone! Instead of running a single instance of your application, run at least two. It’s like having a backup singer: if one voice falters, the show goes on. In Kubernetes, we do this by setting replicas: 2 in our deployment configuration.
  2. Strategic placement: Here’s a neat trick: we can tell Kubernetes to spread our application copies across different physical machines. It’s like not putting all your eggs in one basket. We use something called “Pod Topology Spread Constraints” for this. Here’s how it looks in practice:
topologySpreadConstraints:  # goes under the pod template's spec
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: your-app  # must match your own pods' labels
  3. Setting boundaries: Remember when your parents set rules about how many cookies you could eat? We do something similar in Kubernetes with PodDisruptionBudgets (PDBs). We tell Kubernetes, “Hey, you must always keep at least 50% of my application instances running.” This prevents our hotel manager from getting too enthusiastic about rearranging things.
  4. The “Do Not Disturb” sign: For those special cases where we absolutely don’t want an application to be moved, we can put up a permanent “Do Not Disturb” sign using the karpenter.sh/do-not-disrupt: "true" annotation (see the sketch just after this list). It’s like having a VIP guest who gets to keep their room no matter what.
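
Here’s a minimal sketch of that last trick on a standalone pod (the pod name and image are placeholders, not from any real setup):

apiVersion: v1
kind: Pod
metadata:
  name: vip-guest  # hypothetical name
  annotations:
    karpenter.sh/do-not-disrupt: "true"  # Karpenter will not voluntarily evict this pod
spec:
  containers:
    - name: worker
      image: busybox:1.36  # placeholder image
      command: ["sleep", "3600"]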

The complete picture

The beauty of this system lies in how all the pieces work together. Think of it as a safety net with multiple layers:

  • Multiple instances ensure basic redundancy.
  • Strategic placement keeps instances separated.
  • PodDisruptionBudgets prevent too many moves at once.
  • And when necessary, we can completely prevent disruption.

A real example

Let me paint you a picture. Imagine you’re running a critical web service. Here’s how you might set it up:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: critical-web-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: critical-web-service
  template:
    metadata:
      labels:
        app: critical-web-service
      annotations:
        karpenter.sh/do-not-disrupt: "false"  # movement is allowed; the PDB below keeps it safe
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: critical-web-service
      containers:
        - name: web
          image: nginx:1.27  # placeholder image for the example
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: critical-web-service-pdb
spec:
  minAvailable: 50%
  selector:
    matchLabels:
      app: critical-web-service
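
Once both objects are applied, a couple of standard kubectl checks confirm the setup is doing its job:

kubectl get poddisruptionbudget critical-web-service-pdb  # ALLOWED DISRUPTIONS shows how many pods may move
kubectl get pods -l app=critical-web-service -o wide      # the NODE column should show two different nodes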

The result

With these patterns in place, our applications become incredibly resilient. They can handle node failures, scale smoothly, and even survive Karpenter’s optimization efforts without any downtime. It’s like having a self-healing system that keeps your services running no matter what happens behind the scenes.

High availability isn’t just about having multiple copies of our application; it’s about thoughtfully designing how those copies are managed and maintained. By understanding and implementing these patterns, we are not just running applications in Kubernetes; we are crafting reliable, resilient services that can weather any storm.

The next time you deploy an application to Kubernetes, think about these patterns. They might just save you from that dreaded 3 AM wake-up call about your service being down!
