
On Entropy (And Other Excuses for a Bad River Call)

By FelixD

The Finnish high-stakes legend Elmerixx famously said that at the highest levels, “Our job is to handle uncertainty.” This is a deceptively simple statement. It is also the most precise and complete description of the professional poker problem. When stripped of its romanticism, the job boils down to the management of a single physical property: entropy.

Entropy is a measure of surprise, uncertainty, or disorder. It’s the difference between a two-headed coin (zero entropy) and a fair coin (high entropy). Poker is a high-entropy system.
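The coin comparison can be made concrete with Shannon's formula, H = −Σ p·log₂(p). A minimal sketch in Python:

```python
import math

def shannon_entropy(probs):
    """Entropy in bits: H = -sum(p * log2(p)), skipping zero-probability terms."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Two-headed coin: no surprise, zero entropy.
print(shannon_entropy([1.0, 0.0]))   # 0.0 bits
# Fair coin: maximum entropy for two outcomes.
print(shannon_entropy([0.5, 0.5]))   # 1.0 bit
```

The fair coin is the worst case for prediction, which is exactly why it carries the most entropy.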

The physicist Arthur Eddington claimed that the Second Law of Thermodynamics, the law that entropy always increases, “holds the supreme position among the laws of Nature.” Our job is to operate within the confines of this supreme law.

The QRE World and the Ghost of GTO

Our operating environment is not the clean, logical world of a Nash Equilibrium (GTO). It is best described as a Quantal Response Equilibrium (QRE)—a messy, human system where players make probabilistic errors. The “skill” of a player in this model is their lambda (λ), a measure of their precision. A Nash Equilibrium is simply the limiting case of a QRE in which every player’s lambda goes to infinity.
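A quick sketch of what lambda does in a logit QRE: each action is chosen with probability proportional to exp(λ · EV). The EVs below are made-up numbers for illustration, not solver output.

```python
import math

def quantal_response(evs, lam):
    """Logit choice rule: P(action) is proportional to exp(lambda * EV).
    lambda = 0 plays uniformly at random; lambda -> infinity best-responds."""
    weights = [math.exp(lam * v) for v in evs]
    total = sum(weights)
    return [w / total for w in weights]

evs = [1.5, 1.0, -0.5]              # hypothetical EVs for three actions
print(quantal_response(evs, 0.0))   # uniform: a zero-lambda player ignores EV
print(quantal_response(evs, 10.0))  # nearly all mass on the best action
```

The higher the lambda, the closer the mixed strategy collapses onto the best response, which is why infinite lambda recovers Nash.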

This clarifies the job of “handling uncertainty.” It is about building an antifragile system—one that, as Nassim Taleb wrote, has “a love of errors, a certain class of errors”—to operate within this QRE world.

This job plays out across four domains of entropy.

1. Signal Entropy: The Uncertainty You Measure and Project

This is the core of the game at the table. Your strategy’s entropy is, as the father of information theory Claude Shannon put it, “the surprise value of a message.”

Offensively, our goal is to become a high-entropy machine. We want to put our opponents in spots where they feel helpless, where they can’t figure out what’s going on because our strategy is so balanced and surprising. This feeling of confusion we induce in others is the mark of a successful strategy.

But what happens when we are on the receiving end? This is where the antifragile response becomes critical. The feeling of strategic helplessness is not a personal failing; it is your mind’s intuitive detection of an opponent operating with a high lambda. It is the signal that their QRE has a very low Kullback-Leibler (KL) Divergence from a balanced, GTO baseline. Your confusion is the data telling you: “There is no obvious, low-entropy leak to attack here.”
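That “distance from a balanced baseline” can be made literal with KL divergence. A sketch, where the GTO baseline and both opponent profiles are hypothetical action frequencies invented for illustration:

```python
import math

def kl_divergence(p, q):
    """D_KL(p || q) in bits, for action-frequency distributions p and q."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

gto   = [0.55, 0.30, 0.15]   # hypothetical balanced baseline (fold/call/raise)
tight = [0.80, 0.15, 0.05]   # low-entropy over-folder: a visible leak
sharp = [0.56, 0.29, 0.15]   # near-baseline opponent: high lambda

print(kl_divergence(tight, gto))  # large divergence: something to attack
print(kl_divergence(sharp, gto))  # near zero: no obvious leak
```

A near-zero divergence is the formal version of the feeling described above: the opponent’s frequencies carry no exploitable signal.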

The fragile player panics or guesses. The robust player gets frustrated and over-folds. The antifragile player doesn’t panic; they listen to this signal. They use the feeling of being lost as an immediate trigger to abandon a flawed exploitative plan and revert to their own high-entropy, defensive GTO strategy. They convert a painful feeling into a correct, real-time strategic adjustment.

2. System Entropy: The Uncertainty of Your Own Skill

Eddington’s Second Law dictates that your skill set, left alone, will decay into disorder. The antifragile response is to treat the chaos of a shifting metagame not as a threat, but as a beneficial stressor that forces a necessary and profitable evolution of your skills. Always dedicate the time necessary to expand your skills.

3. Mental Entropy: The Uncertainty in Your Head

Tilt and distraction are states of high internal entropy. The antifragile response is to treat these internal shocks as diagnostic tools. Each moment of tilt is data that, properly processed, helps you build a more robust, lower-entropy mind for the future.

4. Outcome Entropy: The Uncertainty of Reality

Finally, there is the entropy of your financial results. This operates on two levels.

  • The Micro Level: Your skill (your high lambda) produces a more reliable edge, which means lower Outcome Entropy for you personally. This has a direct mathematical consequence: it allows for a higher Kelly Criterion fraction, leading to a faster geometric growth rate.
  • The Response to a Downswing: the chaos of losing forces you to improve, which lowers your future entropy and accelerates your growth.
  • The Macro Level: Zooming out, the entire distribution of winnings across a large player pool naturally settles into a Maximum Entropy Distribution. The antifragile player understands that the system’s default state is to place them in that unforgiving tail. This provides the ultimate motivation to relentlessly pursue tiny edges—it is a determined fight against statistical gravity.
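The micro-level claim can be checked directly with the standard Kelly formula, f* = (pb − q)/b. The 55% win rate on an even-money proposition below is a made-up edge for illustration:

```python
import math

def kelly_fraction(p, b):
    """Optimal bet fraction for a p-probability win paying b-to-1:
    f* = (p*b - (1 - p)) / b."""
    return (p * b - (1 - p)) / b

def growth_rate(f, p, b):
    """Expected log-growth per bet when wagering fraction f of bankroll."""
    return p * math.log(1 + f * b) + (1 - p) * math.log(1 - f)

f_star = kelly_fraction(0.55, 1.0)        # even-money bet, 55% win rate
print(f_star)                             # ≈ 0.10 of bankroll
print(growth_rate(f_star, 0.55, 1.0))     # positive log-growth at f*
print(growth_rate(0.20, 0.55, 1.0))       # over-betting: growth turns negative
```

A more reliable edge (higher p, tighter uncertainty around it) supports a larger f*, and f* is the unique fraction that maximizes geometric growth; betting double Kelly here already destroys it.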

Conclusion: The Job Description

This brings us to a precise definition of how a professional “handles uncertainty” in Absurdistan:

  1. Assess Lambda: Is this opponent skilled (high λ) or prone to error (low λ)?
  2. Assess Skew: If lambda is low, what is the “skew” in their distribution? Do they have a predictable, low-entropy tendency?
  3. Formulate a Response:
  • Against a low-lambda player with a known bias, deploy a specific, exploitative counter-strategy.
  • Against an opponent you assess to have a very high lambda, default to your high-entropy, GTO-based strategy—not to exploit them, but to protect yourself.
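The three steps above can be sketched as a toy decision rule. The lambda threshold and the labels are invented for illustration; in practice both assessments are noisy reads, not clean numbers.

```python
def formulate_response(lam, bias=None, lam_threshold=5.0):
    """Toy version of the assess-and-respond loop.
    lam: estimated opponent precision (lambda).
    bias: a known low-entropy tendency, if one has been identified."""
    if lam < lam_threshold and bias is not None:
        # Step 3a: low lambda with a known skew -> targeted exploit.
        return f"exploit: counter the {bias} tendency"
    # Step 3b: high lambda, or no identified skew -> defensive baseline.
    return "defend: revert to the high-entropy GTO baseline"

print(formulate_response(lam=1.0, bias="over-folding"))
print(formulate_response(lam=20.0))
```

Note the asymmetry: a low-lambda opponent with no identified bias still gets the defensive default, because an exploit without a target is just a leak of your own.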

Perhaps this all seems overly abstract. But there’s a certain utility to it. The mathematician John von Neumann famously advised Claude Shannon on what to call his new concept of uncertainty:

“You should call it entropy… no one knows what entropy really is, so in a debate you will always have the advantage.”

On the surface, this is just a wonderfully cynical joke, and a useful shield for us. If someone questions our complex models, we can hide behind the jargon. But the joke is far deeper and more revealing than that.

Von Neumann was being playful because he was already an expert on a different kind of entropy. The term had already made a long journey before Shannon ever touched it. It was first formalized in Thermodynamics by Boltzmann and Gibbs to describe heat and molecular disorder. Then, von Neumann himself co-opted it to describe uncertainty in Quantum Mechanics (what we now call von Neumann Entropy), long before Shannon used it for Information Theory.

The fact that this single concept—entropy—can be so powerfully applied across physics, quantum mechanics, information theory, and now, our analysis of a card game, is not a strange coincidence. It is a convergence.

It reveals that all these fields, at their core, are grappling with the same fundamental problems: incomplete information, predicting probabilistic systems, and distinguishing signal from noise. The strange overlap between the highest levels of physics and professional poker exists because we are, in our own chaotic way, facing the same universal challenges. We are all just trying to handle uncertainty.

And at the very least, thanks to von Neumann, we now have a very smart-sounding word for it to use in the next hand discussion.
