Practicing Wisdom — Issue #16

A distillation of the most interesting things I explored, learned, and thought about.

1. What I Learned This Time

Survival is not about being right—it’s about being present when you are.

The people and systems that win are not necessarily the smartest or earliest—they are the ones that stay in the game long enough to intersect with tail events.

In his Cheeky Pint conversation with Charlie Songhurst, Marc Andreessen highlights that most venture returns come from a handful of outliers. Missing one Google hurts more than losing money on dozens of failures. The asymmetry is so extreme that the only rational strategy is endurance—you keep investing, because the cost of absence is higher than the cost of error.
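
To make that asymmetry concrete, here is a toy sketch in Python with illustrative numbers of my own (not figures from the conversation): a fund writes 100 equal checks, 99 go to zero, and one returns 200x.

    # Toy power-law fund; the numbers are illustrative, not from the episode.
    check = 1.0                     # check size, in $ millions
    failures = 99                   # 99 investments go to zero
    outlier_multiple = 200          # the one winner returns 200x

    cost_of_errors = failures * check           # capital lost on the misses
    cost_of_absence = outlier_multiple * check  # return forgone by skipping the winner

    print(f"Lost on 99 failures:           ${cost_of_errors:.0f}M")
    print(f"Forgone by missing the winner: ${cost_of_absence:.0f}M")

One missed outlier costs more than every error combined, so the fund that keeps writing checks beats the fund that stops.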

That same pattern is now playing out in AI—but with a twist.

Ben Thompson’s argument reframes the current moment: we’re not just scaling models, we’re scaling agency. Agents reduce the number of people required to produce meaningful output. That means the upside from being “in the game” is expanding—because the leverage of any given participant is increasing. At the same time, we still don’t know how that upside will manifest.

Jeremy Grantham’s historical lens reminds us that every transformative technology looks obvious in hindsight and chaotic in real time. Railways, electricity, the internet—all produced enormous value, but in ways that destroyed many early participants.

The winners were not those who predicted the future correctly, but those structured to survive being wrong along the way.

That’s the part that feels underappreciated in AI today. We’re all trying to predict what AI will do. But the more antifragile question is: who is positioned to still be around when it does something unexpected?

Rubinstein’s piece on private credit is a useful counterpoint. These funds weren’t necessarily wrong about credit quality—but they may still suffer because their structure couldn’t withstand synchronized withdrawals. The failure mode isn’t incorrect analysis; it’s fragility under stress.

And then Stancil’s essay adds the human layer: systems don’t just evolve based on logic—they evolve based on perception, narrative, and coordination. Which means the path to those tail events is not just unpredictable—it’s socially distorted.

So you end up with a kind of inversion:

  • The future is clearly valuable (AI will create enormous economic surplus)

  • But the path is noisy, nonlinear, and socially driven

  • And therefore, the edge shifts from prediction → positioning

The question is no longer: What will AI do? The question is: What survives long enough to benefit from whatever it does?

That’s where antifragility comes in. Antifragile systems don’t just withstand volatility—they gain from it. In an AI world where outcomes will surprise us, the winners won’t be those optimized for a single scenario. They’ll be those that stay solvent, adaptive, and exposed to upside. In a world dominated by tail events, survival is strategy.
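
A minimal sketch of what “gain from volatility” means, using hypothetical numbers: take an option-like position with capped downside and open-ended upside, hold the average outcome fixed, and widen the spread of outcomes. The expected payoff rises, which is Jensen’s inequality at work.

    # Convex, option-like payoff: downside capped at zero, upside open-ended.
    def payoff(x):
        return max(x - 1.0, 0.0)

    calm    = [0.9, 1.0, 1.1]       # low-volatility outcomes, mean 1.0
    chaotic = [0.1, 1.0, 1.9]       # high-volatility outcomes, same mean

    for name, outcomes in [("calm", calm), ("chaotic", chaotic)]:
        expected = sum(payoff(x) for x in outcomes) / len(outcomes)
        print(f"{name}: expected payoff = {expected:.3f}")   # 0.033 vs 0.300

Same average world, nine times the expected payoff: the convex position is paid by volatility rather than hurt by it.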

Sources Referenced

Marc Andreessen and Charlie Songhurst on the Past, Present and Future of Silicon Valley — Cheeky Pint (link)

It’s the People, Stupid — Benn Stancil (link)

Redemption Day — Net Interest (link)

Agents Over Bubbles — Stratechery (link)

Valuing AI: Extreme Bubble, New Golden Era, or Both? — GMO Capital (link)

2. Key Distillations

  • “You don’t need to predict the future—you need to survive until it arrives.”

  • “In power-law worlds, absence is the only unforgivable mistake.”

  • “Fragility is not about being wrong—it’s about not being able to recover.”

  • “Leverage magnifies outcomes; endurance determines who receives them.”

  • “Antifragility is just optionality that compounds.”

3. One Contrarian Viewpoint

The biggest risk in AI isn’t being disrupted—it’s overcommitting too early.

The dominant narrative says move fast or get left behind, but history suggests the opposite danger is more common.

In every technological wave, early capital and early conviction often get destroyed—not because the thesis was wrong, but because it was prematurely concentrated.

The railway investors were right about railways. The dot-com investors were right about the internet. Many still lost everything.

Why? Because they optimized for being right, not for lasting long enough.

In AI, the temptation is to go all-in on a specific vision—models, infrastructure, applications. But if the path is uncertain (and it is), then concentration becomes fragility.

Better to be broadly exposed and financially durable than precisely correct and early.

4. One Investable Idea

Own Optionality in the AI Stack

If AI outcomes are uncertain but directionally positive, the best strategy is not prediction—it’s positioning across multiple possible futures.

That suggests three types of exposure:

  1. Infrastructure with staying power (compute, data centers, tooling) – survives most scenarios, even if timing varies

  2. Horizontal platforms that adapt (APIs, agent frameworks, orchestration layers) – benefit regardless of which use cases win

  3. Cash-flow businesses with AI upside, not dependence – can absorb disruption while selectively adopting it

Avoid structures that require one specific future to work. Favor those that can endure volatility, pivot as the landscape evolves, and remain in the game long enough to catch second- and third-order effects.

Thesis: The highest-return AI investments won’t be the most accurate predictions—they’ll be the most durable exposures to uncertainty.

5. From the Archives: A Recall Highlight

“The game is not to be right—it’s to still be playing when the compounding shows up.”

In a world shaped by tail events, the ultimate advantage isn’t insight, it’s endurance.

