
4 posts tagged with "observability"


Making Certificate Expiry Boring

· 21 min read
Ranjan Sakalley
Founder at base14


Certificate expiry issues are entirely preventable

On 18 November 2025, GitHub had an hour-long outage that affected the heart of their product: Git operations. The post-incident summary was brief and honest - the outage was triggered by an internal TLS certificate that had quietly expired, blocking service-to-service communication inside their platform. It's the kind of issue every engineering team knows can happen, yet it still slips through because certificates live in odd corners of a system, often far from where we normally look.

What struck me about this incident wasn't that GitHub "missed something." If anything, it reminded me how easy it is, even for well-run, highly mature engineering orgs, to overlook certificate expiry in their observability and alerting posture. We monitor CPU, memory, latency, error rates, queue depth, request volume - but a certificate that's about to expire rarely shows up as a first-class signal. It doesn't scream. It doesn't gradually degrade. It just keeps working… until it doesn't.

And that's why these failures feel unfair. They're fully preventable, but only if you treat certificates as operational assets, not just security artefacts. This article is about building that mindset: how to surface certificate expiry as a real reliability concern, how to detect issues early, and how to ensure a single date in a single file never brings down an entire system.
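As a taste of what treating expiry as a first-class signal can look like, here is a minimal sketch that probes a TLS endpoint and emits days-to-expiry as a number you can graph and alert on. The hostnames, the 30-day threshold, and the metric name are illustrative assumptions, not details from the GitHub incident or from any particular stack.

```python
import socket
import ssl
import time

def days_until_expiry(host: str, port: int = 443) -> float:
    """Return the number of days until the leaf TLS certificate for host:port expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            not_after = tls.getpeercert()["notAfter"]
    # 'notAfter' looks like 'Jun  1 12:00:00 2026 GMT'; convert it to epoch seconds.
    expires_at = ssl.cert_time_to_seconds(not_after)
    return (expires_at - time.time()) / 86400

# Hypothetical hosts and threshold: emit a gauge-style value and alert long before zero.
for host in ["api.example.com", "internal-gateway.example.com"]:
    remaining = days_until_expiry(host)
    status = "ok" if remaining > 30 else "EXPIRING SOON"
    print(f'tls_cert_days_remaining{{host="{host}"}} {remaining:.1f}  # {status}')
```

Run on a schedule by a cron job, a collector, or a synthetic check, the same probe covers internal service-to-service certificates, which is exactly where these failures tend to hide.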

Understanding What Increases and Reduces MTTR

· 5 min read
base14 Team
Engineering Team at base14

What makes recovery slower — and what disciplined, observable teams do differently.


In reliability engineering, MTTR (Mean Time to Recovery) is one of the clearest indicators of how mature a system — and a team — really is. It measures not just how quickly you fix things, but how well your organization detects, communicates, and learns from failure.

Every production incident is a test of the system's design, the team's reflexes, and the clarity of their shared context. MTTR rises when friction builds up in those connections — between tools, roles, or data. It falls when context flows freely and decisions move faster than confusion.
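As a concrete (and entirely made-up) illustration of how the metric behaves, the sketch below computes MTTR from a handful of hypothetical incidents; note how a single slow, confusing recovery dominates the average.

```python
from datetime import datetime, timedelta

# Hypothetical incidents as (detected_at, resolved_at) pairs.
incidents = [
    (datetime(2025, 1, 3, 10, 0),   datetime(2025, 1, 3, 10, 25)),  # 25 minutes
    (datetime(2025, 2, 14, 22, 10), datetime(2025, 2, 15, 1, 10)),  # 180 minutes
    (datetime(2025, 3, 9, 6, 30),   datetime(2025, 3, 9, 6, 45)),   # 15 minutes
]

# MTTR = total recovery time / number of incidents.
total_recovery = sum((resolved - detected for detected, resolved in incidents), timedelta())
mttr = total_recovery / len(incidents)
print(f"MTTR: {mttr}")  # 1:13:20, i.e. the one three-hour incident drags the average to ~73 minutes
```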

Why Unified Observability Matters for Growing Engineering Teams

· 11 min read
Ranjan Sakalley
Founder at base14

Last month, I watched a senior engineer spend three hours debugging what should have been a fifteen-minute problem. The issue wasn't complexity—it was context switching between four different monitoring tools, correlating timestamps manually, and losing their train of thought every time they had to log into yet another dashboard. If this sounds familiar, you're not alone. This is the hidden tax most engineering teams pay without realizing there's a better way.

Observability Theatre

· 11 min read
Ranjan Sakalley
Founder at base14

the·a·tre (also the·a·ter) /ˈθiːətər/ noun

: the performance of actions or behaviors for appearance rather than substance; an elaborate pretense that simulates real activity while lacking its essential purpose or outcomes

Example: "The company's security theatre gave the illusion of protection without addressing actual vulnerabilities."


Your organization has invested millions in observability tools. You have dashboards for everything. Your teams dutifully instrument their services. Yet when incidents strike, engineers still spend hours hunting through disparate systems, correlating timestamps manually, and guessing at root causes. When the CEO forwards a customer complaint asking "are we down?", that's often the first the dev team hears of the incident.