Cloudflare went down again. Time to reframe resilience.
Cloudflare crashed yesterday. X, ChatGPT, Uber—all down. Even DownDetector went offline because it runs on Cloudflare.
One company, one bug, hundreds of services dark for hours. This happened 30 days after AWS took down half the internet for 15 hours.
For years the trade-off seemed obvious: Centralized = reliable, proven, professional. Decentralized = slow, clunky, ideological. After two major outages in 30 days, I’m wondering if that still holds.
My hunch: CFOs are looking at these events differently now. Not “could this happen?” but “what did this cost us?”
The dependency is the vulnerability.
Lightning didn’t go down during the AWS outage. It couldn’t: there’s no AWS to fail, no Cloudflare to crash, no central coordinator whose DNS failure can cascade. Lightning routes payments through independent nodes; if one fails, you route around it. A Nostr relay dies? Connect to a different relay. No coordinator means nothing to coordinate and nothing to fail.
That’s how it worked during AWS. That’s how it worked yesterday.
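To make that failover concrete, here’s a minimal sketch of client-side relay failover, assuming a runtime with the standard WebSocket API; the relay URLs are placeholders, not real endpoints:

```typescript
// Minimal relay-failover sketch: if one relay is unreachable,
// try the next. No coordinator, no single point of failure.
// Relay URLs are illustrative placeholders.

const RELAYS = [
  "wss://relay.example-one.com",
  "wss://relay.example-two.com",
  "wss://relay.example-three.com",
];

// Open a WebSocket to one relay, rejecting on error or timeout.
function connect(url: string, timeoutMs = 5000): Promise<WebSocket> {
  return new Promise((resolve, reject) => {
    const ws = new WebSocket(url);
    const timer = setTimeout(() => {
      ws.close();
      reject(new Error(`timeout connecting to ${url}`));
    }, timeoutMs);
    ws.onopen = () => {
      clearTimeout(timer);
      resolve(ws);
    };
    ws.onerror = () => {
      clearTimeout(timer);
      reject(new Error(`failed to connect to ${url}`));
    };
  });
}

// Walk the relay list until one answers. A dead relay just means
// the client routes around it and moves on.
async function connectToAnyRelay(): Promise<WebSocket> {
  for (const url of RELAYS) {
    try {
      return await connect(url);
    } catch {
      console.warn(`relay unreachable, trying next: ${url}`);
    }
  }
  throw new Error("all relays unreachable");
}
```

One client, many interchangeable endpoints: the redundancy lives in the protocol, not in any single provider’s uptime.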
The architecture delivers protocol-level redundancy: no single point of failure, real resilience when centralized infrastructure fails. But the pitch typically leads with censorship resistance, sovereignty, and freedom from government control.
After yesterday, enterprises are asking “how do we reduce infrastructure dependency risk?” That’s a risk management question. It has budget. It gets put in RFPs.
So translate the architectural advantages into enterprise risk management language.
The new frame:
- “We’re decentralized” ➔ “We eliminate coordinator dependencies”
- “Censorship-resistant” ➔ “Protocol-level redundancy”
- “Freedom technology” ➔ “No single point of failure”
Same architecture. Different frame.
Infrastructure risk is fresh in everyone’s mind. Enterprise pain is real. Now is the moment to test this new messaging.