Your metrics are lying

It’s 3 AM. The pager screams, a digital banshee heralding doom. You stumble to your desk, eyes blurry, to find a Slack channel ablaze with panicked messages. The checkout service is broken. Customers are furious. Revenue is dropping.

You pull up the dashboards, your sacred scrolls of system health. Everything is… fine. P95 latency is a flat line of angelic calm. CPU usage is so low it might as well be on a tropical vacation. The error count is zero. According to your telemetry, the system is a picture of perfect health.

And yet, the world is on fire.

Welcome to the great lie of modern observability. We’ve become masters at measuring signals while remaining utterly clueless about the story they’re supposed to tell. This isn’t a guide about adding more charts to your dashboard collection. It’s about teaching your system to stop mumbling in arcane metrics and start speaking human. It’s about making it tell you the truth.

The seductive lie of the green dashboard

We were told to worship the “golden signals”: latency, traffic, errors, and saturation. They’re like a hospital patient’s vital signs. They can tell you if the patient is alive, but they can’t tell you why they’re miserable, what they argued about at dinner, or if they’re having an existential crisis.

Our systems are having existential crises all the time.

  • Latency lies when the real work is secretly handed off to a background queue. The user gets a quick “OK!” while their request languishes in a forgotten digital purgatory.
  • Traffic lies when a buggy client gets stuck in a retry loop, making it look like you’re suddenly the most popular app on the internet.
  • Errors lie when you only count the exceptions you had the foresight to catch, ignoring the vast, silent sea of things failing in ways you never imagined.

Golden signals are fine for checking if a server has a pulse. But they are completely useless for answering the questions that actually keep you up at night, like, “Why did the CEO’s demo fail five minutes before the big meeting?”

The truth serum: Semantic Observability

The antidote to this mess is what we’ll call semantic observability. It’s a fancy term for a simple idea: instrumenting the meaning of what your system is doing. It’s about capturing the plot, not just the setting.

Instead of just logging Request received, we record the business-meaningful story:

  • Domain events: The big plot points. UserSignedUp, CartAbandoned, InvoiceSettled, FeatureFlagEvaluated. These are the chapters of your user’s journey.
  • Intent assertions: What the system swore it would do. “I will try this payment gateway up to 3 times,” or “I promise to send this notification to the user’s phone.”
  • Outcome checks: The dramatic conclusion. Did the money actually move? Was the email really delivered? This is the difference between “I tried” and “I did.”
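What do these three kinds of records actually look like on the wire? Here is a minimal sketch of semantic events as structured log records. The event names and fields are illustrative, not a standard schema:

```python
import json
import time

def semantic_event(kind: str, name: str, **fields) -> str:
    """Serialize one semantic event: kind is 'domain', 'intent', or 'outcome'."""
    record = {"kind": kind, "event": name, "ts": time.time(), **fields}
    return json.dumps(record)

# A domain event: a plot point in the user's journey.
print(semantic_event("domain", "CartAbandoned", user_id="u-42", cart_value=59.90))

# An intent assertion: what the system promises to do.
print(semantic_event("intent", "PaymentAttempt", gateway="primary", max_retries=3))

# An outcome check: what actually happened.
print(semantic_event("outcome", "PaymentSettled", gateway="primary", attempts=2))
```

The point isn't the serialization format; it's that every record names a business-meaningful fact that a query can later join on.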

Let’s revisit our broken checkout service. Imagine a user is buying a book right after you’ve flipped on a new feature flag for a “revolutionary” payment path.

With classic observability, you see nothing. With semantic observability, you can ask your system questions like a detective interrogating a witness:

  • “Show me all the customers who tried to check out in the last 30 minutes but didn’t end up with a successful order.”
  • “Of those failures, how many had the new shiny-payment-path feature flag enabled?”
  • “Follow the trail for one of those failed orders. What was the last thing they intended to do, and what was the actual, tragic outcome?”

Notice we haven’t mentioned CPU once. We’re asking about plot, motive, and consequence.
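The first two detective questions reduce to a filter and a group-by over semantic events. A sketch with in-memory records (the field names and flag variant are hypothetical):

```python
from collections import Counter

# Hypothetical semantic events for recent checkout attempts.
attempts = [
    {"customer": "c1", "flag": "shiny-payment-path", "order_created": False},
    {"customer": "c2", "flag": "shiny-payment-path", "order_created": False},
    {"customer": "c3", "flag": "control",            "order_created": True},
    {"customer": "c4", "flag": "control",            "order_created": False},
]

# "Who tried to check out but never ended up with a successful order?"
failures = [a for a in attempts if not a["order_created"]]

# "Of those failures, how many had the new flag enabled?"
by_flag = Counter(a["flag"] for a in failures)
print(by_flag)  # the new payment path dominates the failures
```

In production this would be a query against your trace or event store rather than a Python list, but the shape is the same: filter on outcome, group by the attributes you had the foresight to attach.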

Your detective’s toolkit (minimal OTel patterns)

You don’t need a fancy new vendor to do this. You just need to use your existing OpenTelemetry tools with a bit more narrative flair.

  1. Teach your spans to gossip: Don’t just create a span; stuff its pockets with juicy details. Use span attributes to carry the context. Instead of just a request_id, add feature.flag.variant, customer.tier, and order.value. Make it tell you if this is a VIP customer buying a thousand-dollar item or a tire-kicker with a free-tier coupon.
  2. Mark the scene of the crime: Use events on spans to log key transitions. FraudCheckPassed, PaymentAuthorized, EnteringRetryLoop. These are the chalk outlines of your system’s behavior.
  3. Connect the dots: For asynchronous workflows (like that queue we mentioned), use span links to connect the cause to the effect. This builds a causal chain so you can see how a decision made seconds ago in one service led to a dumpster fire in another.
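Taken together, the three patterns describe the narrative payload a span should carry. Here is the shape in plain Python (a real implementation would use the OpenTelemetry SDK's span attributes, events, and links; every name below is illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class SemanticSpan:
    """The story a span should carry (illustrative, not the OTel SDK)."""
    name: str
    attributes: dict = field(default_factory=dict)   # 1. the gossip
    events: list = field(default_factory=list)       # 2. the chalk outlines
    links: list = field(default_factory=list)        # 3. causal connections

checkout = SemanticSpan(
    name="checkout",
    attributes={
        "feature.flag.variant": "shiny-payment-path",
        "customer.tier": "vip",
        "order.value": 1024.00,
    },
)
checkout.events.append("FraudCheckPassed")
checkout.events.append("PaymentAuthorized")

# Later, the async worker that drains the queue links back to the checkout
# span, so the delayed effect stays connected to its cause.
worker = SemanticSpan(name="notification-worker", links=[checkout.name])
```

In real OpenTelemetry, a link points at a span context (trace ID plus span ID) rather than a name, but the principle is identical: the worker's trace carries a pointer to the decision that caused it.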

Rule of thumb: If a human is asking the question during an incident, a machine should be able to answer it with a single query.

The case of intent vs. outcome

This is the most powerful trick in the book. Separate what your system meant to do from what actually happened.

  • The intent: At the start of a process, emit an event: NotificationIntent with details like target: email and deadline: t+5s.
  • The outcome: When (or if) it finishes, emit another: NotificationDelivered with latency: 2.5s and channel: email.

Now, your master query isn’t about averages. It’s about broken promises: “Show me all intents that don’t have a matching successful outcome within their SLA.”

Suddenly, your SLOs aren’t some abstract percentage. They are a direct measure of your system’s integrity: its intent-satisfied rate.
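The broken-promise query is a simple anti-join: intents with no matching, on-time outcome. A sketch, assuming a shared `id` pairs each intent with its outcome (the IDs and fields are hypothetical):

```python
# Hypothetical intent and outcome events, paired by id.
intents = [
    {"id": "n1", "event": "NotificationIntent", "target": "email", "deadline": 5.0},
    {"id": "n2", "event": "NotificationIntent", "target": "email", "deadline": 5.0},
    {"id": "n3", "event": "NotificationIntent", "target": "sms",   "deadline": 5.0},
]
outcomes = {
    "n1": {"event": "NotificationDelivered", "latency": 2.5},
    "n3": {"event": "NotificationDelivered", "latency": 9.0},  # delivered, but late
}

def broken_promises(intents, outcomes):
    """Intents with no matching outcome inside their deadline."""
    broken = []
    for intent in intents:
        outcome = outcomes.get(intent["id"])
        if outcome is None or outcome["latency"] > intent["deadline"]:
            broken.append(intent["id"])
    return broken

broken = broken_promises(intents, outcomes)
satisfied_rate = 1 - len(broken) / len(intents)  # the SLO in one line
print(broken, satisfied_rate)
```

Note that a late outcome counts as a broken promise just like a missing one; "I eventually did it" is not the same promise as "I will do it within 5 seconds."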

Your first 30 days as a telemetry detective

Week 1: Pick a single case. Don’t boil the ocean. Focus on one critical user journey, like “User adds to cart -> Pays -> Order created.” List the 5-10 key “plot points” (domain events) and 3 “promises” (intent assertions) in that story.

Week 2: Plant the evidence. Go into your code and start enriching your existing traces. Add those gossipy attributes about feature flags and customer tiers. Add events. Link your queues.

Week 3: Build your “Why” query. Create the one query that would have saved you during the last outage. Something like, “Show me degraded checkouts, grouped by feature flag and customer cohort.” Put a link to it at the top of your on-call runbook.

Week 4: Close the loop. Define an SLO on your new “intent-satisfied rate.” Watch it like a hawk. Review your storage costs and turn on tail-based sampling to keep the interesting stories (the errors, the weird edge cases) without paying to record every boring success story.
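One way to approximate the Week 4 sampling decision, assuming it runs after a trace is complete (which is what makes it "tail-based"). The 1% rate and hashing scheme are illustrative:

```python
import hashlib

def keep_trace(trace_id: str, has_error: bool, success_rate: float = 0.01) -> bool:
    """Tail-based decision: always keep the interesting stories,
    deterministically sample the boring successes."""
    if has_error:
        return True  # errors and weird edge cases always survive
    # Hash the trace id so every collector instance makes the same call
    # for the same trace, no coordination needed.
    bucket = int(hashlib.sha256(trace_id.encode()).hexdigest(), 16) % 10_000
    return bucket < success_rate * 10_000

kept = sum(keep_trace(f"trace-{i}", has_error=False) for i in range(10_000))
print(kept)  # roughly 1% of boring successes get through
```

Hashing the trace ID, rather than rolling a random number, means the decision is reproducible: re-running the sampler over the same traffic keeps the same traces.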

Anti-patterns to gently escort out the door

  • Dashboard worship: If your incident update includes a screenshot of a CPU graph, you owe everyone an apology. Show them the business impact, the cohort of affected users, the broken promise.
  • Logorrhea: The art of producing millions of lines of logs that say absolutely nothing. One good semantic event is worth a thousand INFO: process running logs.
  • Tag confetti: Using unbounded tags like user_id for everything, turning your observability bill into a piece of abstract art that costs more than a car.
  • Schrödinger’s feature flag: Shipping a new feature behind a flag but forgetting to record the flag’s decision in your telemetry. The flag is simultaneously the cause of and solution to all your problems, and you have no way of knowing which.

The moral of the story

Observability isn’t about flying blind without metrics. It’s about refusing to outsource your understanding of the system to a pile of meaningless averages.

Instrument intent. Record outcomes. Connect causes.

When your system can clearly explain what it tried to do and what actually happened, on-call stops feeling like hunting for ghosts in a haunted house and starts feeling like science. And you might even get a full night’s sleep.