There’s a mental shortcut that quietly ruins decisions in business, health, investing, and any other domain you can imagine.
It sounds like this:
- “No one has complained, so customers must be happy.”
- “We’ve never had a breach, so our security is strong.”
- “I don’t see studies proving harm, so it’s safe.”
That’s the fallacy: confusing the absence of evidence with evidence of absence. And it’s one of the cleanest ways to walk confidently into a mess you didn’t prepare for.
Nassim Nicholas Taleb has been warning about this for years. He doesn’t just call it a logic error—he treats it as a risk management failure that shows up whenever we mistake what’s visible for what’s true.
The problem isn’t ignorance. It’s hidden data.
If you and I look around and don’t see something, our brains want closure. We want to conclude: “It’s not there.”
But reality doesn’t care about our need for tidy narratives.
Taleb’s work keeps returning to one idea: what you don’t see can be more important than what you do see, especially in domains shaped by rare events (Black Swans) and nonlinear consequences.
In The Black Swan, he illustrates how “no evidence” can simply mean you haven’t yet reached the moment when reality reveals itself. Think of the classic turkey problem: the turkey is fed every day and concludes life is safe, and the pattern holds reliably right up until Thanksgiving. The lack of prior “evidence” wasn’t safety. It was delayed information.
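The turkey’s reasoning can be sketched in a few lines of Python. This is purely illustrative: the 1,000-day horizon and the naive frequency estimate of “safety” are invented for the example, not anything Taleb specifies.

```python
# A naive frequentist turkey: it estimates tomorrow's safety from its
# own feeding history. Numbers are illustrative assumptions.
history = [1] * 999                      # 1 = fed, for 999 straight days
confidence = sum(history) / len(history) # fraction of "safe" days so far
print(f"Confidence on day 999: {confidence:.3f}")  # 1.000 -- maximal

# Day 1000 is Thanksgiving. Nothing in the data anticipated it.
history.append(0)                        # 0 = not fed
print(f"Outcome on day 1000: {history[-1]}")       # 0
```

The turkey’s confidence peaks at the exact moment its risk peaks, which is the whole point: a clean track record measures the past, not the mechanism that generates the future.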
Taleb’s name for it: “silent evidence”
One of Taleb’s most useful frames is silent evidence: outcomes that should be counted but aren’t, because they disappeared, failed quietly, or were never recorded.
That’s why survivorship bias is so persuasive. You see the winners and build a story. The losers are invisible, so your brain treats them as irrelevant.
In The Black Swan, Taleb explicitly names this the fallacy of silent evidence and ties it to survivorship bias.
And once you see it, you can’t unsee it:
- You read about unicorn founders, not the 10,000 founders who ran out of cash.
- You hear about “overnight success,” not the graveyard of near-identical attempts.
- You celebrate a strategy that “worked,” without asking how many times it failed off-camera.
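You can watch silent evidence distort a dataset with a toy simulation. All the numbers here are made up for illustration: firms succeed on pure luck, yet anyone who averages only the survivors will “discover” skill.

```python
import random

random.seed(42)

# Simulate firms whose yearly results are pure noise. A firm "survives"
# only if it gets a lucky year five times in a row. The thresholds and
# counts are arbitrary assumptions for the sketch.
N, YEARS = 10_000, 5
population, survivors = [], []
for _ in range(N):
    years = [random.random() for _ in range(YEARS)]
    score = sum(years) / YEARS           # the firm's average "performance"
    population.append(score)
    if all(y > 0.5 for y in years):      # survival requires luck every year
        survivors.append(score)

avg_all = sum(population) / len(population)
avg_surv = sum(survivors) / len(survivors)
print(f"{len(survivors)} of {N} firms survived")
print(f"average performance: all={avg_all:.2f}, survivors={avg_surv:.2f}")
# The survivors look systematically better even though every firm drew
# from the same distribution -- the losers are the silent evidence.
```

If you only ever meet the survivors, their above-average scores read as strategy. The simulation knows better: the edge is an artifact of who was left standing to be measured.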
Where this fallacy shows up at work (a lot)
1) “No complaints” ≠ “good experience”
Silence is not satisfaction. People churn quietly, adapt, downgrade expectations, or leave without drama. If you only track complaints, you’re tracking the loud minority.
2) “No incidents” ≠ “low risk”
A long streak of calm can be meaningless when the system is fragile. Taleb’s point isn’t that disasters happen daily. His point is that disasters can be rare and still dominate outcomes.
3) “No proof of harm” ≠ “safe enough”
In high-stakes domains (public health, environment, systemic risk), Taleb argues the burden of proof shouldn’t sit on the people shouting “danger.” If the downside is catastrophic, waiting for perfect evidence is a luxury you don’t have.
A simple way to think better: the “What’s Missing?” checklist
Next time you’re tempted to conclude “it doesn’t exist,” run this quick check:
- What would I expect to see if the risk/problem were real?
- Could that evidence be naturally hidden, unreported, or delayed?
- Who benefits if this stays invisible?
- What’s the downside if I’m wrong? Is it small… or ruinous?
- Am I looking at survivors and mistaking them for the full population?
This is the shift Taleb pushes: don’t just ask “What do we know?”
Ask “What are we systematically not seeing?”
Conclusion: Don’t let “clean” data make you reckless
The absence of evidence can mean many things: no measurement, no incentive to report, delayed effects, buried failures, or missing history. In Taleb’s world, that’s exactly where the biggest risks like to hide.
So, the next time someone says, “There’s no evidence,” don’t rush to relax.
Instead, ask the better question:
“Is there no evidence… or is the evidence silent?”