Troubleshooting

Playing detective with dead Kubernetes nodes

It arrives without warning, a digital tap on the shoulder that quickly turns into a full-blown alarm. Maybe you’re mid-sentence in a meeting, or maybe you’re just enjoying a rare moment of quiet. Suddenly, a shriek from your phone cuts through everything. It’s the on-call alert, flashing a single, dreaded message: NodeNotReady.

Your beautifully orchestrated city of containers, a masterpiece of modern engineering, now has a major power outage in one of its districts. One of your worker nodes, a once-diligent and productive member of the cluster, has gone completely silent. It’s not responding to calls, it’s not picking up new work, and its existing jobs are in limbo. In the world of Kubernetes, this isn’t just a technical issue; it’s a ghosting of the highest order.

Before you start questioning your life choices or sacrificing a rubber chicken to the networking gods, take a deep breath. Put on your detective’s trench coat. We have a case to solve.

First on the scene, the initial triage

Every good investigation starts by surveying the crime scene and asking the most basic question: What the heck happened here? In our world, this means a quick and clean interrogation of the Kubernetes API server. It’s time for a roll call.

kubectl get nodes -o wide

This little command is your first clue. It lines up all your nodes and points a big, accusatory finger at the one in the Not Ready state.

NAME                    STATUS     ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP     OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
k8s-master-1            Ready      master   90d   v1.28.2   10.128.0.2       34.67.123.1     Ubuntu 22.04.1 LTS   5.15.0-78-generic   containerd://1.6.9
k8s-worker-node-7b5d    NotReady   <none>   45d   v1.28.2   10.128.0.5       35.190.45.6     Ubuntu 22.04.1 LTS   5.15.0-78-generic   containerd://1.6.9
k8s-worker-node-fg9h    Ready      <none>   45d   v1.28.2   10.128.0.4       35.190.78.9     Ubuntu 22.04.1 LTS   5.15.0-78-generic   containerd://1.6.9

There’s our problem child: k8s-worker-node-7b5d. Now that we’ve identified our silent suspect, it’s time to pull it into the interrogation room for a more personal chat.

kubectl describe node k8s-worker-node-7b5d

The output of describe is where the juicy gossip lives. You’re not just looking at specs; you’re looking for a story. Scroll down to the Conditions and, most importantly, the Events section at the bottom. This is where the node often leaves a trail of breadcrumbs explaining exactly why it decided to take an unscheduled vacation.

Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Mon, 13 Oct 2025 09:55:12 +0200   Mon, 13 Oct 2025 09:45:30 +0200   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Mon, 13 Oct 2025 09:55:12 +0200   Mon, 13 Oct 2025 09:45:30 +0200   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Mon, 13 Oct 2025 09:55:12 +0200   Mon, 13 Oct 2025 09:45:30 +0200   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Mon, 13 Oct 2025 09:55:12 +0200   Mon, 13 Oct 2025 09:50:05 +0200   KubeletNotReady              container runtime network not ready: CNI plugin reporting error: rpc error: code = Unavailable desc = connection error

Events:
  Type     Reason                   Age                  From                       Message
  ----     ------                   ----                 ----                       -------
  Normal   Starting                 25m                  kubelet                    Starting kubelet.
  Warning  ContainerRuntimeNotReady 5m12s (x120 over 25m) kubelet                    container runtime network not ready: CNI plugin reporting error: rpc error: code = Unavailable desc = connection error

Aha! Look at that. The Events log is screaming for help. A repeating warning, ContainerRuntimeNotReady, points to a CNI (Container Network Interface) plugin having a full-blown tantrum. We’ve moved from a mystery to a specific lead.
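
Since the events point squarely at the CNI, a sensible next step is to check whether the CNI agent pod on that node is even alive. Most CNI plugins (Calico, Flannel, Cilium, and friends) run as a DaemonSet in kube-system, so a rough check looks like this; which pod you’re hunting for depends on your plugin:

# List what kube-system has running on the suspect node and look for a crashing CNI pod
kubectl get pods -n kube-system -o wide --field-selector spec.nodeName=k8s-worker-node-7b5d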

The usual suspects, a rogues’ gallery

When a node goes quiet, the culprit is usually one of a few repeat offenders. Let’s line them up.

1. The silent saboteur, network issues

This is the most common villain. Your node might be perfectly healthy, but if it can’t talk to the control plane, it might as well be on a deserted island. Think of the control plane as the central office trying to call its remote employee (the node). If the phone line is cut, the office assumes the employee is gone. This can be caused by firewall rules blocking ports, misconfigured VPC routes, or a DNS server that’s decided to take the day off.
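
Before blaming the network out loud, it’s worth confirming the theory from the node itself. A minimal sketch, assuming the control plane address from the earlier listing (10.128.0.2) and the common API server port 6443; substitute your own endpoint:

# From the suspect node: is the API server port reachable at all?
nc -vz 10.128.0.2 6443

# Which route and interface would that traffic actually use?
ip route get 10.128.0.2

# And is DNS resolution working at all on this node?
dig +short kubernetes.io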

2. The overworked informant, the kubelet

The kubelet is the control plane’s informant on every node. It’s a tireless little agent that reports on the node’s health and carries out orders. But sometimes, this agent gets sick. It might have crashed, stalled, or be struggling with misconfigured credentials (like expired TLS certificates) that keep it from authenticating with the mothership. If the informant goes silent, the node is immediately marked as a person of interest.

You can check on its health directly on the node:

# SSH into the problematic node
ssh user@<node-ip>

# Check the kubelet's vital signs
systemctl status kubelet

A healthy output should say active (running). Anything else, and you’ve found a key piece of evidence.
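
Since expired TLS certificates are a classic way for the kubelet to lose its badge, it’s worth checking the client certificate’s expiry while you’re on the node. The path below is the usual kubeadm default, so treat it as an assumption and adjust for your setup:

# When does the kubelet's client certificate expire? (kubeadm default path, adjust as needed)
sudo openssl x509 -noout -enddate -in /var/lib/kubelet/pki/kubelet-client-current.pem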

3. The glutton, resource exhaustion

Your node has a finite amount of CPU, memory, and disk space. If a greedy application (or a swarm of them) consumes everything, the node itself can become starved. The kubelet and other critical system daemons need resources to breathe. Without them, they suffocate and stop reporting in. It’s like one person eating the entire buffet, leaving nothing for the hosts of the party.

A quick way to check for gluttons is with:

kubectl top node <your-problem-child-node-name>

If you see CPU or memory usage kissing 100%, you’ve likely found your culprit.
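
If the node really is being eaten alive, the next question is by whom. A rough way to narrow it down, assuming metrics-server is installed for the command above and that you can still SSH into the node:

# Which pods are scheduled on the suspect node?
kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=k8s-worker-node-7b5d

# The node-local view: memory, disk, and the hungriest processes
ssh user@<node-ip> 'free -h; df -h /; ps aux --sort=-%mem | head -n 10'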

The forensic toolkit: digging deeper

If the initial triage and lineup didn’t reveal the killer, it’s time to break out the forensic tools and get our hands dirty.

Sifting through the diary with journalctl

The journalctl command is your window into the kubelet’s soul (or, more accurately, its log files). This is where it writes down its every thought, fear, and error.

# On the node, tail the kubelet's logs for clues
journalctl -u kubelet -f --since "10 minutes ago"

Look for recurring error messages, failed connection attempts, or anything that looks suspiciously out of place.
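
When the log volume gets overwhelming, a little filtering helps. A rough filter, with keywords that are only a starting point:

# Surface the loudest complaints from the last hour
journalctl -u kubelet --since "1 hour ago" --no-pager | grep -iE "error|fail|cni" | tail -n 20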

Quarantining the patient with drain

Before you start performing open-heart surgery on the node, it’s wise to evacuate the civilians. The kubectl drain command gracefully evicts all the pods from the node, allowing them to be rescheduled elsewhere.

kubectl drain k8s-worker-node-7b5d --ignore-daemonsets --delete-emptydir-data

This isolates the patient, letting you work without causing a city-wide service outage.
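
One detail worth remembering: drain also cordons the node, so once you’ve fixed whatever was wrong, you have to explicitly invite it back to work:

# Let the repaired node accept new pods again
kubectl uncordon k8s-worker-node-7b5d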

Confirming the phone lines with curl

Don’t just trust the error messages. Verify them. From the problematic node, try to contact the API server directly. This tells you if the fundamental network path is even open.

# From the problem node, try to reach the API server endpoint
curl -k https://<api-server-ip>:<port>/healthz

If you get ok, the basic connection is fine. If it times out or gets rejected, you’ve confirmed a networking black hole.
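
The kubelet also exposes its own local health endpoint, which helps you separate “the node can’t reach the control plane” from “the kubelet itself is unwell”. By default it listens on localhost port 10248, though that port is configurable, so treat it as an assumption:

# From the node itself: is the kubelet's own health endpoint answering?
curl -s http://localhost:10248/healthz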

Crime prevention: keeping your nodes out of trouble

Solving the case is satisfying, but a true detective also works to prevent future crimes.

  • Set up a neighborhood watch: Implement robust monitoring with tools like Prometheus and Grafana. Set up alerts for high resource usage, disk pressure, and node status changes. It’s better to spot a prowler before they break in.
  • Install self-healing robots: Most cloud providers (GKE, EKS, AKS) offer node auto-repair features. If a node fails its health checks, the platform will automatically attempt to repair it or replace it. Turn this on. It’s your 24/7 robotic police force.
  • Enforce city zoning laws: Use resource requests and limits on your deployments. This prevents any single application from building a resource-hogging skyscraper that blocks the sun for everyone else (a quick example follows this list).
  • Schedule regular health checkups: Keep your cluster components, operating systems, and container runtimes updated. Many Not Ready mysteries are caused by long-solved bugs that you could have avoided with a simple patch.
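
To make the zoning-laws point concrete, here is one way to bolt requests and limits onto an existing workload without editing YAML. The deployment name and the numbers are purely illustrative; size them for your actual application:

# Hypothetical deployment name and sizes, shown only as an example
kubectl set resources deployment my-app --requests=cpu=250m,memory=256Mi --limits=cpu=500m,memory=512Mi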

The case is closed, for now

So there you have it. The rogue node is back in line, the pods are humming along, and the city of containers is once again at peace. You can hang up your trench coat, put your feet up, and enjoy that lukewarm coffee you made three hours ago. The mystery is solved.

But let’s be honest. Debugging a Not Ready node is less like a thrilling Sherlock Holmes novel and more like trying to figure out why your toaster only toasts one side of the bread. It’s a methodical, often maddening, process of elimination. You start with grand theories of network conspiracies and end up discovering the culprit was a single, misplaced comma in a YAML file, the digital equivalent of the butler tripping over the rug.

So the next time an alert yanks you from your peaceful existence, don’t panic. Remember that you are a digital detective, a whisperer of broken machines. Your job is to patiently ask the right questions until the silent, uncooperative suspect finally confesses. After all, in the world of Kubernetes, a node is never truly dead. It’s just being dramatic and waiting for a good detective to find the clues, and maybe, just maybe, restart its kubelet. The city is safe… until the next time. And there is always a next time.

Essential Skills for Troubleshooting in DevOps and SRE

Have you ever felt like you’re trying to solve an unsolvable puzzle when troubleshooting a complex system? Welcome to the world of DevOps and Site Reliability Engineering (SRE), where every mystery is an opportunity to improve. Think of yourself as a detective, unraveling the secrets of computer systems and networks. Your tools? Knowledge, curiosity, and a systematic approach to problem-solving.

Let’s explore the essential skills you need to master troubleshooting and thrive in the exciting world of DevOps and SRE.

The Troubleshooting Landscape. A Puzzle That Keeps Changing

As technology evolves, systems become more intricate, like trying to piece together a puzzle that keeps shifting. Troubleshooting in this environment is more critical than ever. It’s not just about fixing what breaks; it’s about truly understanding the dynamic interplay of software, hardware, and networks that power our digital world.

Think of it this way: every system failure is a new mystery waiting to be solved. To excel in this field, you need to cultivate a unique blend of technical know-how and creative problem-solving skills.

The Troubleshooter’s Toolkit. Essential Skills for Success

1. Thinking Like Sherlock. A Systematic Approach to Problem-Solving

Let’s start with the basics: every great troubleshooter is systematic. Like Sherlock Holmes, you gather evidence, form hypotheses, and test them one at a time. Guesswork won’t get you far.

First, clearly define the problem. What’s happening, and what should be happening? When did the issue begin? Once you have a solid grasp, gather your clues: logs, metrics, error messages, and network traffic. Look for patterns or anomalies. Form hypotheses based on your findings, then test each systematically until the root cause is revealed. It’s like piecing together a story, where each clue brings you closer to the solution.

2. The Tech Polymath. Broad Technical Knowledge

Troubleshooting requires a breadth of technical knowledge. While you don’t need to be an expert in every area, having a working understanding of key technologies will broaden your ability to diagnose and resolve issues:

  • Operating Systems: Get comfortable with Linux, Windows, and even a few specialized systems.
  • Networking: Know how data flows through networks, and grasp concepts like protocols and the OSI model.
  • Cloud Infrastructure: Be familiar with platforms like AWS, Azure, and Google Cloud.
  • Databases: Understand the basics of relational and non-relational databases, along with common issues.
  • Application Stacks: Know how components like web servers and application servers work together.

The more you know, the more connections you can make when problems arise. Think of it as expanding your toolkit—having the right tool for the job can make all the difference.

3. The Digital Detective’s Arsenal. Mastering Debugging Tools and Techniques

Just as a detective needs magnifying glasses and forensic kits, troubleshooters need their own set of specialized tools. Some of the most valuable tools you should master include:

  • Log Analysis: Learn to dissect logs with tools like the ELK stack (Elasticsearch, Logstash, Kibana).
  • Network Monitoring: Get proficient with tcpdump, Wireshark, and nmap to troubleshoot network-related issues (a one-liner example follows this list).
  • Profilers: Use profiling tools to detect performance bottlenecks in applications.
  • Monitoring and Observability Tools: Platforms like Prometheus, Grafana, and Datadog are indispensable for keeping an eye on system health.
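
To make the network-monitoring bullet a little less abstract, here is the kind of quick capture you might start with when chasing a flaky connection. The interface name and port are assumptions; adjust them to your environment:

# Capture 50 packets to or from the API server port on eth0 (interface and port are assumptions)
sudo tcpdump -i eth0 -nn port 6443 -c 50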

These tools are powerful, but remember: their effectiveness depends on how and when you use them. Knowing what to look for, and how to interpret what you find, is key to solving complex issues.

4. Digging Deep. The Art of Root Cause Analysis

When it comes to troubleshooting, surface-level fixes are like band-aids on broken bones. To be effective, you need to go beyond fixing symptoms and dig deep into root cause analysis. Ask yourself: Why did this problem happen? What chain of events led to this failure? Is there a deeper design flaw or a misconfiguration?

By addressing the root cause, you not only fix the current issue but prevent it from recurring. In the long run, this approach saves time and effort while making your systems more robust.

5. The Crystal Ball. Proactive Problem Prevention

The best troubleshooters don’t just react to problems; they prevent them. It’s like having a crystal ball that helps you foresee potential issues before they spiral out of control. How do you do this?

  • Monitoring: Set up comprehensive monitoring systems to keep tabs on your infrastructure.
  • Alerting: Configure smart alerts that notify you when something might go wrong.
  • Chaos Engineering: Intentionally introduce failures to identify weaknesses in your system—stress-testing for the unexpected.

By being proactive, you ensure that small issues don’t grow into large-scale disasters.

The DevOps and SRE Perspective. Beyond Technical Skills

Troubleshooting isn’t just about technical expertise; it’s also about how you interact with your team and approach problems holistically.

1. Teamwork and Communication, Your Key to Success

In DevOps and SRE, collaboration is essential. You’ll work with cross-functional teams, from developers to security experts. Effective communication ensures that everyone stays on the same page, and the faster information flows, the faster issues get resolved.

  • Knowledge Sharing: Always be willing to share what you learn with others, whether through documentation, informal discussions, or training sessions. It’s like being part of a detective agency where everyone’s combined experience makes solving mysteries easier.
  • Clear Documentation: Whenever you solve a problem, document it. You’ll thank yourself later when the issue resurfaces or a teammate needs the solution.

2. The Robot’s Assistant, Embrace Automation

Automation is your tireless assistant. By automating routine tasks, you can focus on the bigger mysteries. Here’s how automation supercharges troubleshooting:

  • Automated Diagnostics: Write scripts that gather system data and run common checks automatically (a bare-bones sketch follows this list).
  • Runbooks: Develop automated runbooks for frequent issues. Think of them as step-by-step guides that speed up incident response.
  • Incident Response Automation: Automate responses to certain types of incidents, giving you valuable time to focus on more complex problems.
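
As a bare-bones illustration of the automated-diagnostics idea, a script like the one below can be run at the start of an incident to gather the basics in one pass. Every check here is generic and meant as a sketch, not a prescription:

#!/usr/bin/env bash
# collect-basics.sh: grab a quick snapshot of cluster and host health
# (extend with whatever checks your environment actually needs)
set -euo pipefail

echo "== Nodes =="
kubectl get nodes -o wide

echo "== Recent warning events =="
kubectl get events --all-namespaces --field-selector type=Warning | tail -n 20

echo "== Local resources =="
free -h
df -h /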

3. The Eternal Student, Never Stop Learning

The tech world changes constantly, and as a troubleshooter, you must keep evolving. Embrace continuous learning:

  • Stay Updated: Follow new tools, technologies, and best practices in the DevOps and SRE communities.
  • Learn from Incidents: Every problem you solve is a learning opportunity. Analyze post-mortems to identify patterns and areas for improvement.
  • Share Knowledge: Teaching others not only helps them but reinforces your understanding.

The more you learn, the sharper your troubleshooting skills become.

Real-World Adventures. Troubleshooting in Action

Let’s apply what we’ve discussed to a couple of real-world scenarios:

Scenario 1: The Case of the Mysterious Slowdown

Imagine your web application suddenly starts running slowly, and users are complaining. Here’s how you could approach the problem:

  • Gather Data: Start by collecting logs, monitoring metrics, and database query times.
  • Form Hypotheses: Could it be a server overload? A network bottleneck? An inefficient database query?
  • Test Methodically: Begin with quick checks, like server load, and move to deeper analyses like database profiling.
  • Collaborate: Work with the development team to identify recent code changes.
  • Root Cause: You discover that a new feature introduced an inefficient query.
  • Fix and Prevent: Optimize the query and add performance tests to avoid future issues.

Scenario 2: The Midnight Alert Storm

It’s 2 AM, and your alert system is going wild. Multiple services are down. Here’s how to tackle it:

  • Quick Assessment: Identify the affected services and their dependencies.
  • Triage: Prioritize critical services.
  • Use Your Toolkit: Run network diagnostics, analyze logs, and check monitoring tools.
  • Collaborate: Wake up key team members and coordinate the response.
  • Fix: Track down a misconfigured network setting that caused cascading failures.
  • Post-Mortem: Conduct a thorough review to prevent similar issues in the future.

Your Journey to Troubleshooting Mastery

Troubleshooting in DevOps and SRE is an art that blends systematic thinking, deep technical knowledge, and a proactive mindset. Each problem is an opportunity to learn, improve, and make systems more reliable.

Whether you’re new to DevOps or a seasoned SRE, focus on these key areas:

  • Systematic problem-solving
  • Broad technical knowledge
  • Mastery of debugging tools
  • Root cause analysis
  • Proactive problem prevention
  • Collaboration and communication
  • Automation skills
  • Continuous learning

With these skills in your arsenal, you’ll not only solve today’s problems, you’ll also help build more resilient and efficient systems for tomorrow. Embrace the challenges, stay curious, and remember: every troubleshooting adventure is a step toward mastery.