Python Binomial Distribution: A Practical, Developer‑Centric Guide

I still remember the first time a product manager asked me, “What’s the chance at least 4 of 6 users click this new button if the true click rate is about 0.6?” That question looks simple, but answering it well requires a clean mental model and reproducible code. When you’re evaluating feature rollouts, A/B tests, reliability checks, or quality inspections, you’re often counting successes in a fixed number of independent trials. That’s exactly what the binomial distribution is built for.

I’ll walk you through the binomial distribution from the ground up, but with a developer’s mindset. You’ll see the math in plain language, learn how to compute probabilities and expectations in Python, and then build the exact distribution table you need for decisions. I’ll also show you what commonly goes wrong in real projects, how to check your assumptions, and how to scale to modern workflows in 2026 without turning the logic into a black box. If you can answer, “How many successes do I expect out of n tries, and how surprising is a given result?” you’ll be able to justify better choices in product, engineering, and analytics.

A binary success story: the Bernoulli trial

The binomial distribution starts with a single trial that has two outcomes: success or failure. That’s a Bernoulli trial. Think of it like a signup form that either submits or errors. For each attempt, success happens with probability p, failure with probability 1 − p, and every attempt is independent.

Independence is the key guardrail. If one event changes the probability of the next event, the binomial model starts to drift. For example, if users see the same offer multiple times and become less likely to click each time, that’s not independent. You either need a different model or to restructure your data so each trial is reasonably independent.

When I model reliability, I picture a stream of identical checks: each check is a yes/no outcome with a stable probability. If that feels like your problem, you’re already in binomial territory.

From one trial to n trials: the binomial random variable

Now scale to n identical trials. The binomial random variable counts how many successes happen across those n trials. I’ll call that count r. For r successes and n − r failures, the base probability for any specific sequence of outcomes is:

p^r * (1 − p)^(n − r)

But there are many sequences that yield the same number of successes. The number of distinct arrangements is the binomial coefficient:

n! / ((n − r)! * r!)

So the probability mass function (pmf) becomes:

P(R = r) = (n! / ((n − r)! * r!)) * p^r * (1 − p)^(n − r)

If you’re building intuition, I suggest thinking of it as: “How many ways can I place r successes into n slots, multiplied by the probability of each arrangement.” That framing makes the combinatorics feel less abstract.
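To make that framing concrete, here is a minimal from-scratch pmf using the standard library's math.comb; the function name binomial_pmf is mine, and you can cross-check it against SciPy later.

```python
from math import comb

def binomial_pmf(r: int, n: int, p: float) -> float:
    # Number of arrangements of r successes in n slots,
    # times the probability of any one such arrangement.
    return comb(n, r) * (p ** r) * ((1 - p) ** (n - r))

# Six trials with p = 0.6: probability of exactly 4 successes
print(binomial_pmf(4, 6, 0.6))  # ≈ 0.31104
```

This is the whole distribution in three lines, which makes it a handy reference implementation when you want to verify library output.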

A concrete example: 6 trials with p = 0.6

Let’s use the example I started with: 6 independent attempts where a success has probability 0.6. The distribution table for r = 0 to 6 looks like this:

r       0         1         2         3         4         5         6
P(r)    0.004096  0.036864  0.138240  0.276480  0.311040  0.186624  0.046656

A few observations I use when sanity-checking results:

  • The probabilities add up to 1 (or very close when you use floating point).
  • The mass is concentrated around r = n * p, which here is 3.6.
  • Because p > 0.5, the mass shifts toward higher r values, so the distribution is left-skewed (its longer tail points toward low r).

Mean and variance are straightforward:

  • Mean = n * p
  • Variance = n * p * (1 − p)

So for n = 6 and p = 0.6, mean = 3.6 and variance = 1.44. I use these as quick diagnostics before I even look at the full table.
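Those two formulas are easy to verify directly from the pmf weights. A quick stdlib-only check (the variable names here are mine):

```python
from math import comb

n, p = 6, 0.6
pmf = [comb(n, r) * p**r * (1 - p)**(n - r) for r in range(n + 1)]

# Mean is the probability-weighted average of r; variance is the
# probability-weighted squared deviation from that mean.
mean = sum(r * pr for r, pr in zip(range(n + 1), pmf))
var = sum((r - mean) ** 2 * pr for r, pr in zip(range(n + 1), pmf))

print(mean)  # ≈ 3.6  (n * p)
print(var)   # ≈ 1.44 (n * p * (1 - p))
```

If these two numbers don't match n * p and n * p * (1 − p), something upstream is wrong before you ever look at individual probabilities.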

Computing the distribution in Python with SciPy

When I’m building analysis pipelines, I prefer SciPy for probability distributions. It’s stable, fast, and readable. Here’s a full example that prints the distribution table along with the mean and variance.

from scipy.stats import binom

# Define parameters
n = 6
p = 0.6

# All possible success counts
r_values = list(range(n + 1))

# Mean and variance
mean, var = binom.stats(n, p)

# Probability mass for each r
dist = [binom.pmf(r, n, p) for r in r_values]

# Print the table
print("r\tp(r)")
for r, pr in zip(r_values, dist):
    print(f"{r}\t{pr}")

print(f"mean = {mean}")
print(f"variance = {var}")

That code gives you the exact probabilities from the table above. I like this workflow because it is explicit and easy to test. You can drop it into a notebook, CI job, or internal report without much ceremony.

Practical tip: check the sum

Floating-point math can introduce tiny rounding errors. In a robust workflow, I usually add:

print(f"sum = {sum(dist)}")

You should see something like 0.999999999999 or 1.0. If you don’t, check whether r spans 0..n or if you passed the wrong n or p.

Visualizing the distribution with Matplotlib

For non-technical stakeholders, a chart is often more convincing than a table. A simple bar chart communicates the outcome probabilities immediately.

from scipy.stats import binom
import matplotlib.pyplot as plt

n = 6
p = 0.6

r_values = list(range(n + 1))
probabilities = [binom.pmf(r, n, p) for r in r_values]

plt.bar(r_values, probabilities)
plt.xlabel("Number of successes")
plt.ylabel("Probability")
plt.title("Binomial distribution: n=6, p=0.6")
plt.show()

When p = 0.5, the distribution becomes symmetric and visually resembles a normal curve as n grows. That’s the familiar normal approximation in action. For small n, it’s still discrete, but you can see the symmetry show up quickly.

Interpreting results like an engineer, not a statistician

Raw probabilities are useful, but decisions typically require thresholds. Here’s how I translate a binomial model into action:

  • Expected successes: n * p gives you the baseline outcome. Use it to set realistic targets.
  • Tail probabilities: “At least r successes” helps answer questions like “How likely is it that 5 or more of 6 systems pass?”
  • Risk bands: low-probability outcomes can highlight process instability or data quality issues.

In Python, you can compute cumulative probability directly with the cumulative distribution function (cdf) or survival function (sf). For example, to compute P(R ≥ 5):

from scipy.stats import binom

n = 6
p = 0.6

# P(R >= 5) = 1 - P(R <= 4)
prob_at_least_5 = 1 - binom.cdf(4, n, p)
print(prob_at_least_5)

I prefer sf for numeric stability when probabilities are tiny:

prob_at_least_5 = binom.sf(4, n, p)

This matters in systems engineering where extreme tail events are not just theoretical but operationally important.

Common mistakes I see in real code

Even experienced developers misapply the binomial distribution. Here are the top mistakes I still run into, and how I avoid them:

1) Treating dependent events as independent

If one trial affects the next, the binomial model breaks. Examples:

  • A rate limit after several successes
  • User fatigue after repeated prompts
  • A model’s predictions changing because it learns in real time

When you suspect dependence, you may need a different model or you need to redesign the measurement so the trials are independent.

2) Using the wrong p

Many teams set p based on a short sample or gut feeling. I encourage you to estimate p from data and attach a confidence interval. If p is uncertain, you can model it as a random variable or run sensitivity analysis across a range of p values.

3) Confusing “at most” and “at least”

It seems trivial, but I’ve seen dashboards with flipped tail probabilities that led to bad decisions. Use cdf for “at most” and sf for “at least,” and write a one-line comment in your code.
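A quick check worth adding next to that comment: for any integer k, "at most k" and "at least k + 1" must cover all outcomes, so cdf and sf are exact complements.

```python
from scipy.stats import binom

n, p, k = 6, 0.6, 3
at_most = binom.cdf(k, n, p)   # P(R <= 3): "at most"
at_least = binom.sf(k, n, p)   # P(R >= 4): "at least k + 1"

# The two tails must sum to 1; if they don't, a tail is flipped somewhere.
print(at_most + at_least)
```

Asserting this identity in a unit test is a cheap way to catch a flipped tail before it reaches a dashboard.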

4) Ignoring n changes

If your number of trials varies (say, number of users per cohort), you’re no longer comparing like-for-like. Normalize or compute probabilities at the correct n for each group.

5) Misreading p = 0.5 as “balanced outcomes”

A 0.5 probability does not mean equal outcomes in any given sample. With small n, you can still see lopsided results. If your team expects perfect balance, show them the actual distribution at your n.
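To show how lopsided a fair process can look at small n, here is the probability of a 2-or-fewer or 8-or-more split in 10 trials (the threshold choice is just for illustration):

```python
from scipy.stats import binom

n, p = 10, 0.5

# Probability of a "lopsided" sample: 2 or fewer, or 8 or more successes
lopsided = binom.cdf(2, n, p) + binom.sf(7, n, p)
print(lopsided)  # ≈ 0.109, i.e. roughly one sample in nine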

When you should use the binomial model — and when you shouldn’t

I recommend the binomial model when:

  • Each trial has only two outcomes.
  • The probability of success is stable across trials.
  • Trials are independent.
  • You care about the number of successes in a fixed number of trials.

I avoid it when:

  • Probabilities change from trial to trial (use a model with varying p).
  • Outcomes are not binary (use multinomial or another distribution).
  • Trials are not independent (consider Markov or hierarchical models).

If you’re unsure, sanity-check with small samples and compare the predicted distribution to actual counts. If the model consistently underestimates variance, your assumptions are probably wrong.

Performance and scaling notes for 2026 workflows

In 2026, I see teams increasingly run these analyses in automated pipelines with AI assistants generating dashboards or insights. That’s useful, but you should still keep an eye on numerical stability and performance.

Performance range guidance

  • Computing a binomial pmf table for n ≤ 1,000 is typically fast (on the order of milliseconds in a local Python process).
  • For n in the tens of thousands, computing every r value can become expensive. At that point, compute only the r values you need, or rely on cumulative functions.
  • If you’re running this in batch across many segments, vectorize inputs with NumPy to avoid Python loops.

Practical patterns I use

  • Cache computed distributions if n and p repeat across cohorts.
  • Use scipy.stats.binom.logpmf for very small probabilities to avoid underflow.
  • If you must handle large n and extreme p, consider normal or Poisson approximations and then verify with a sampled binomial check.

AI-assisted workflows, with guardrails

When an AI tool suggests code for statistical analysis, I still verify these items:

  • Are the assumptions stated clearly in the output?
  • Is “at least” or “at most” computed correctly?
  • Are input units consistent (n is integer, p in [0, 1])?
  • Are the results plausible based on mean and variance?

I treat AI-generated analysis as a first draft, not a final answer. It’s fast, but responsibility still sits with you.

Practical scenarios and edge cases

Here are a few examples that map cleanly to binomial reasoning:

Feature rollout success rate

You launch a feature to 200 users. You estimate p = 0.08 for conversion. How likely is it that at least 25 users convert?

from scipy.stats import binom

n = 200
p = 0.08

prob = binom.sf(24, n, p)  # P(R >= 25)
print(prob)

If that probability is low, you should expect lower conversions and adjust your goals before you commit to a release.

Automated test suite reliability

You run 120 integration tests nightly. You expect each test to pass with probability 0.98. What’s the chance you see 5 or more failures?

from scipy.stats import binom

n = 120
p = 0.98

# Failures are "successes" in the binomial sense if you model the failure rate
failure_prob = 1 - p
prob_5_or_more_failures = binom.sf(4, n, failure_prob)
print(prob_5_or_more_failures)

This framing helps you decide whether a given nightly failure count is noise or a strong signal.

Quality inspection in manufacturing

Out of 50 units, with an expected defect rate of 1%, what is the probability of 2 or more defects?

from scipy.stats import binom

n = 50
p = 0.01

prob_2_or_more = binom.sf(1, n, p)
print(prob_2_or_more)

If your expected rate yields too many “rare” events in practice, you might be underestimating the true defect probability.

Traditional vs modern approaches

When I compare older workflows to modern, AI-assisted ones, I focus on clarity and reproducibility rather than novelty. Here’s a quick mapping I share with teams:

Traditional approach                    Modern approach (2026)
Hand-derived tables in spreadsheets     Parameterized Python scripts in a repo
Static PDFs                             Live dashboards with reproducible code
Manual calculation checks               Automated tests that validate pmf sums and mean/variance
One-off analysis                        Reusable functions with type hints and CI
Human-only review                       AI-assisted draft + human verification

The modern approach is faster, but only if you keep the statistical meaning explicit. I strongly prefer a small, well-tested Python module that explains assumptions, rather than a massive notebook with ad hoc cells.

A reusable function for binomial tables

If you need binomial distribution tables regularly, encapsulate the logic in a function. Here’s a clean version I use in utilities:

from dataclasses import dataclass
from typing import List

from scipy.stats import binom


@dataclass
class BinomialSummary:
    n: int
    p: float
    r_values: List[int]
    probabilities: List[float]
    mean: float
    variance: float


def binomial_table(n: int, p: float) -> BinomialSummary:
    if n < 0:
        raise ValueError("n must be non-negative")
    if not (0 <= p <= 1):
        raise ValueError("p must be in [0, 1]")
    r_values = list(range(n + 1))
    probabilities = [binom.pmf(r, n, p) for r in r_values]
    mean, variance = binom.stats(n, p)
    # Normalize minor floating error
    total = sum(probabilities)
    if abs(total - 1.0) > 1e-12:
        probabilities = [v / total for v in probabilities]
    return BinomialSummary(n, p, r_values, probabilities, float(mean), float(variance))

The explicit input validation and normalization make downstream failures much easier to debug. This is the kind of small utility I drop into a shared analytics library so every team uses the same, reliable logic.

Edge behavior and numerical stability

If you push extreme values, you should be careful:

  • When p is very close to 0 or 1, many outcomes become numerically tiny. logpmf can keep values stable.
  • For large n (say 100,000+), direct computation of all r values is slow and unnecessary. Compute just the range you care about, or approximate with a normal or Poisson model and then validate with targeted pmf values.
  • Don’t forget integer constraints: n must be an integer, and r must be in 0..n. It’s easy to pass a float by accident when values come from user input or CSVs.

Here’s a quick example using logpmf:

from scipy.stats import binom
import math

n = 1000
p = 0.01
r = 0

log_prob = binom.logpmf(r, n, p)
prob = math.exp(log_prob)
print(prob)

That avoids underflow in cases where raw pmf might return 0.0 due to floating-point limits.

A simple mental model you can share with teammates

I often explain it this way: imagine n identical light switches that each have a probability p of turning on when you flip them. The binomial distribution tells you how many are likely to turn on, and how surprising any given number is. You don’t need to memorize formulas if you can explain the mechanism. This mental model makes it easy to communicate expected outcomes and uncertainty to non-technical stakeholders.

Deep dive: building intuition with simulations

When I introduce binomial logic to teams, I like to simulate it to remove the mystery. A quick Monte Carlo simulation validates the math and helps people trust the model.

import random
from collections import Counter

n = 6
p = 0.6
trials = 100000

counts = []
for _ in range(trials):
    successes = 0
    for _ in range(n):
        if random.random() < p:
            successes += 1
    counts.append(successes)

freq = Counter(counts)
for r in range(n + 1):
    print(r, freq[r] / trials)

You’ll see empirical frequencies that converge toward the pmf values. This is great for teaching, and it’s also a practical debugging tool when you want to sanity-check custom implementations.

Estimating p from data

The binomial distribution assumes p is known, but in real systems p often comes from data. The most straightforward estimate is the sample proportion:

p_hat = successes / n

That’s the maximum likelihood estimate. But it can be misleading when n is small or when successes are rare. Here’s a Python example that pairs p_hat with a simple confidence interval using the Wilson score method, which behaves better near 0 and 1.

import math

def wilson_interval(successes: int, n: int, z: float = 1.96):
    if n == 0:
        return (0.0, 1.0)
    p_hat = successes / n
    denom = 1 + (z**2 / n)
    center = p_hat + (z**2 / (2 * n))
    margin = z * math.sqrt((p_hat * (1 - p_hat) / n) + (z**2 / (4 * n**2)))
    lower = (center - margin) / denom
    upper = (center + margin) / denom
    return (lower, upper)

successes = 8
n = 120
print(wilson_interval(successes, n))

I like this because it forces the team to acknowledge uncertainty instead of treating p as a fixed constant. If the interval is wide, you shouldn’t be overly confident in any single probability estimate.

Working with multiple cohorts and varying p

In product analytics, it’s rare to have a single global p. You might have p per cohort, per country, or per device type. This means you need consistent code to compute binomial values for a vector of probabilities.

Here’s a NumPy-friendly pattern that avoids Python loops for per-cohort expectations:

import numpy as np

n = np.array([100, 120, 80, 150])
p = np.array([0.08, 0.11, 0.05, 0.09])

expected = n * p
variance = n * p * (1 - p)

print(expected)
print(variance)

For full distributions, you can still loop but keep the logic centralized. In practice, I’ll usually compute just a handful of tail probabilities rather than entire pmf tables when working at scale.
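SciPy's distribution functions broadcast over NumPy arrays, so even per-cohort tail probabilities don't need a Python loop. A sketch with made-up cohort thresholds:

```python
import numpy as np
from scipy.stats import binom

n = np.array([100, 120, 80, 150])
p = np.array([0.08, 0.11, 0.05, 0.09])
threshold = np.array([12, 18, 8, 20])  # alert if successes reach this count

# P(R >= threshold) for every cohort in one vectorized call
tail = binom.sf(threshold - 1, n, p)
print(tail)
```

This is the pattern I reach for when the cohort count grows into the thousands: one array call instead of one function call per cohort.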

Alternative approaches and approximations

There are times when an exact binomial calculation is either expensive or unnecessary. In those cases, I lean on approximations with guardrails.

Normal approximation

If n is large and p is not too close to 0 or 1, the binomial can be approximated by a normal distribution with mean n * p and variance n * p * (1 − p).

A quick rule of thumb: both n * p and n * (1 − p) should be at least around 10.

Poisson approximation

If n is large and p is small (rare events), the binomial can be approximated by a Poisson distribution with rate λ = n * p.

This is useful for counting rare defects or rare clicks when p is tiny, and it simplifies the math and computation. I still verify critical decisions with the exact binomial when feasible.

When approximations fail

If you’re in the middle ground (n moderately large, p near the extremes), approximations can be off enough to matter. That’s when I use exact methods and, if needed, log-space computations to avoid underflow.
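Here is a side-by-side sketch of the exact tail against the normal approximation (with a continuity correction) and the Poisson approximation, so you can see the gap for yourself; the parameters are purely illustrative.

```python
import math
from scipy.stats import binom, norm, poisson

n, p, k = 500, 0.03, 25  # question: P(R >= 25)?

exact = binom.sf(k - 1, n, p)

# Normal approximation with continuity correction
mu = n * p
sigma = math.sqrt(n * p * (1 - p))
normal_approx = norm.sf((k - 0.5 - mu) / sigma)

# Poisson approximation with rate lambda = n * p
poisson_approx = poisson.sf(k - 1, mu)

print(exact, normal_approx, poisson_approx)
```

Running comparisons like this on your own (n, p) ranges tells you quickly whether an approximation is safe for your decision thresholds.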

Decision thresholds and operational triggers

The binomial distribution becomes operationally powerful when you define thresholds. For example:

  • “If the probability of seeing 10 or fewer successes is below 1%, trigger an investigation.”
  • “If P(R ≥ r) exceeds 95%, we ship the feature.”

This is where people often confuse “rare under the model” with “impossible.” A low probability doesn’t mean a system is broken; it means you should check whether your assumptions still match reality.

Here’s a pattern I use for automated alerting with a two-sided check:

from scipy.stats import binom

def is_unusual(count, n, p, alpha=0.01):
    # Two-sided tail check
    lower = binom.cdf(count, n, p)       # P(R <= count)
    upper = binom.sf(count - 1, n, p)    # P(R >= count)
    return lower < (alpha / 2) or upper < (alpha / 2)

print(is_unusual(2, 50, 0.1))

That gives you a simple “unusual or not” flag without turning your pipeline into a statistics textbook.

Binomial vs. related distributions (quick clarity)

It helps to know where the binomial fits in the probability family tree:

  • Bernoulli: one trial, two outcomes.
  • Binomial: fixed number of independent Bernoulli trials.
  • Geometric: number of trials until the first success.
  • Negative binomial: number of trials until a fixed number of successes.
  • Hypergeometric: like binomial but without replacement (no independence).

If you sample without replacement from a finite population, the hypergeometric distribution is often more correct. But if the population is large relative to your sample, the binomial becomes a good approximation.
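You can see that approximation numerically by comparing the two distributions directly (illustrative numbers: a 10,000-item population with 5% defective, sampled 50 at a time):

```python
from scipy.stats import binom, hypergeom

N, K, n = 10_000, 500, 50  # population size, defectives in population, sample size
p = K / N                  # 0.05

# P(exactly 3 defects in the sample): without replacement vs. with replacement
exact_hyper = hypergeom.pmf(3, N, K, n)
binom_approx = binom.pmf(3, n, p)
print(exact_hyper, binom_approx)  # nearly identical because N >> n
```

Shrink N toward n and the gap widens, which is exactly when the binomial stops being a safe stand-in.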

More practical scenarios you can reuse

Below are a few additional scenarios that I’ve seen in production systems.

Email deliverability checks

Suppose your email system usually delivers with p = 0.97. You send 500 emails. What’s the chance fewer than 470 get delivered?

from scipy.stats import binom

n = 500
p = 0.97

prob_less_than_470 = binom.cdf(469, n, p)
print(prob_less_than_470)

This is useful for setting early warning thresholds in monitoring dashboards.

Fraud detection sampling

You inspect 40 transactions with an estimated fraud rate of 2%. What’s the chance you see 3 or more frauds?

from scipy.stats import binom

n = 40
p = 0.02

prob_3_or_more = binom.sf(2, n, p)
print(prob_3_or_more)

If that probability is extremely low, three frauds should trigger a deeper investigation into the upstream system.

Load testing pass rate

You run 30 load tests with a historical pass rate of 0.9. You observe only 22 passes. Is that abnormal?

from scipy.stats import binom

n = 30
p = 0.9
observed = 22

p_value_lower = binom.cdf(observed, n, p)
print(p_value_lower)

That gives a quick signal that your load test environment might be unstable or that the historical rate is no longer valid.

A binomial checklist I keep in code reviews

When I review analytics code that uses a binomial model, I scan for these items:

  • Is n fixed and clearly defined?
  • Is p documented and sourced from data?
  • Are independence assumptions reasonable?
  • Are tails computed with cdf or sf correctly?
  • Are edge cases (p=0, p=1, n=0) handled?
  • Is the outcome range limited to 0..n?

If any of these are missing, I ask for a revision. It takes minutes to fix but can prevent weeks of confusion later.

Production‑ready binomial utility module

If you want something closer to production grade, here’s a small module pattern I use. It includes validation, explicit tail methods, and log-space helpers.

from dataclasses import dataclass
from typing import List

from scipy.stats import binom


@dataclass
class BinomialSummary:
    n: int
    p: float
    r_values: List[int]
    probabilities: List[float]
    mean: float
    variance: float


def validate_params(n: int, p: float) -> None:
    if not isinstance(n, int):
        raise TypeError("n must be an integer")
    if n < 0:
        raise ValueError("n must be non-negative")
    if not (0.0 <= p <= 1.0):
        raise ValueError("p must be in [0, 1]")


def binomial_table(n: int, p: float) -> BinomialSummary:
    validate_params(n, p)
    r_values = list(range(n + 1))
    probabilities = [binom.pmf(r, n, p) for r in r_values]
    mean, variance = binom.stats(n, p)
    total = sum(probabilities)
    if total == 0:
        raise ValueError("probabilities sum to 0, check params")
    if abs(total - 1.0) > 1e-12:
        probabilities = [v / total for v in probabilities]
    return BinomialSummary(n, p, r_values, probabilities, float(mean), float(variance))


def prob_at_least(k: int, n: int, p: float) -> float:
    validate_params(n, p)
    if k <= 0:
        return 1.0
    if k > n:
        return 0.0
    return binom.sf(k - 1, n, p)


def prob_at_most(k: int, n: int, p: float) -> float:
    validate_params(n, p)
    if k < 0:
        return 0.0
    if k >= n:
        return 1.0
    return binom.cdf(k, n, p)


def log_pmf(r: int, n: int, p: float) -> float:
    validate_params(n, p)
    if r < 0 or r > n:
        return float("-inf")
    return float(binom.logpmf(r, n, p))

This makes your code base predictable and easy to test. The helper functions also reduce the chance of a team member accidentally using the wrong tail.

Testing binomial code in CI

If you’re building an analytics library, it’s worth adding small tests. These are quick sanity checks that catch common errors.

def test_binomial_sum_close_to_one():
    summary = binomial_table(10, 0.3)
    assert abs(sum(summary.probabilities) - 1.0) < 1e-10

def test_binomial_mean_variance():
    summary = binomial_table(12, 0.2)
    assert abs(summary.mean - (12 * 0.2)) < 1e-10
    assert abs(summary.variance - (12 * 0.2 * 0.8)) < 1e-10

def test_prob_at_least_bounds():
    assert prob_at_least(0, 5, 0.5) == 1.0
    assert prob_at_least(6, 5, 0.5) == 0.0

You don’t need many tests, but these prevent classic regressions when someone refactors the library.

Handling streaming data and shifting p

A subtle problem in real systems is that p can drift over time. If your input comes from streaming data, you should consider a rolling estimate of p, or use a Bayesian update with a prior distribution. Even if you don’t go full Bayesian, you can at least update p daily and recompute expectations.

A quick rolling update looks like this:

from collections import deque

def rolling_p(window: int, stream):
    q = deque(maxlen=window)
    for outcome in stream:
        q.append(outcome)
        p_hat = sum(q) / len(q)
        yield p_hat

This is not a perfect solution, but it’s better than treating a stale p as fixed when behavior changes month to month.
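If you do want the Bayesian route mentioned above, the conjugate update is tiny: a Beta(α, β) prior on p combined with binomial data yields a Beta(α + successes, β + failures) posterior. A sketch, with prior values chosen for illustration:

```python
def beta_update(alpha: float, beta: float, successes: int, n: int):
    # Conjugate Beta-Binomial update: add observed counts to the prior counts
    return alpha + successes, beta + (n - successes)

alpha, beta = 1.0, 1.0  # Beta(1, 1) = uniform prior on p
alpha, beta = beta_update(alpha, beta, successes=8, n=120)

posterior_mean = alpha / (alpha + beta)
print(posterior_mean)  # slightly shrunk toward the prior, vs. the raw 8/120
```

The appeal is that the update composes: run it daily on fresh counts and you get a running estimate of p that naturally weights recent evidence against the prior.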

Communicating results in plain language

Statistics only help if stakeholders understand them. I usually translate binomial outputs into simple language:

  • “We expect about 24 conversions out of 300 users. Seeing 10 or fewer would be very unlikely if the rate is still 8%.”
  • “If the true pass rate is 98%, seeing 5 or more failures in 120 tests is rare. That’s why we’re investigating the test environment.”

This framing aligns decisions with probabilities, not just numbers.

A quick comparison of binomial and A/B testing intuition

A/B tests often rely on approximations, but the binomial distribution gives you the raw counting intuition:

  • Each user is a trial.
  • A conversion is a success.
  • The number of conversions is binomial.

For large samples, people skip directly to z-tests or normal approximations. That’s fine, but the binomial still drives the reasoning. If you understand the binomial, you can sanity-check results from more complex methods.
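As a concrete sanity check, you can compare the exact binomial tail against the z-approximation for a conversion count (the numbers here are illustrative):

```python
import math
from scipy.stats import binom, norm

n, p = 1000, 0.05   # 1,000 users, assumed 5% baseline conversion rate
observed = 65       # conversions actually seen

exact = binom.sf(observed - 1, n, p)  # P(R >= 65) under the baseline

# z-approximation with continuity correction
z = (observed - 0.5 - n * p) / math.sqrt(n * p * (1 - p))
approx = norm.sf(z)

print(exact, approx)  # the two tails should land close together at this n
```

If the z-based number from a testing tool disagrees badly with the exact tail, that's a signal to check the tool's assumptions rather than trust it blindly.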

Performance considerations for large‑scale analysis

When you’re running binomial calculations at scale, performance bottlenecks appear in a few predictable places:

  • Recomputing the same distribution for identical n and p.
  • Iterating in Python loops rather than vectorized operations.
  • Using pmf repeatedly where cdf or sf would suffice.

If I’m processing thousands of cohorts, I’ll precompute a small set of common distributions, cache them, and only compute exact values for the cohorts that are outliers. This is usually enough to keep dashboards and pipelines responsive.

Guardrails for large n

For very large n, I sometimes move to log-space workflows end-to-end. For example, you can compute log pmf values and then normalize if you need a distribution. This prevents underflow and makes it possible to handle probabilities that would otherwise round to zero.

If you’re dealing with huge n and you care only about a narrow range (say, around the mean), compute only that range and skip the rest. That’s often a 100x win with no loss in decision quality.
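The narrow-range trick looks like this: evaluate the pmf only within a few standard deviations of the mean. The window width is a judgment call; ±8 standard deviations captures essentially all the mass.

```python
import math
import numpy as np
from scipy.stats import binom

n, p = 1_000_000, 0.001
mu = n * p
sigma = math.sqrt(n * p * (1 - p))

# Only evaluate r within +/- 8 standard deviations of the mean
lo = max(0, int(mu - 8 * sigma))
hi = min(n, int(mu + 8 * sigma))
r = np.arange(lo, hi + 1)
pmf = binom.pmf(r, n, p)

print(lo, hi, pmf.sum())  # the window holds essentially all the probability mass
```

Here you compute a few hundred pmf values instead of a million, which is where the large speedup comes from.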

A practical visual trick for stakeholders

When I need to communicate uncertainty, I plot both the pmf and the cumulative probability curve. The pmf tells them where the mass is, the cdf tells them how quickly the probability accumulates. It’s a simple way to explain “at most” and “at least” without using those exact words.
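A minimal version of that dual view puts the pmf bars and the cumulative curve on one chart with a second y-axis:

```python
import matplotlib.pyplot as plt
from scipy.stats import binom

n, p = 6, 0.6
r = list(range(n + 1))
pmf = binom.pmf(r, n, p)
cdf = binom.cdf(r, n, p)

fig, ax1 = plt.subplots()
ax1.bar(r, pmf)                      # where the mass is
ax1.set_xlabel("Number of successes")
ax1.set_ylabel("Probability")

ax2 = ax1.twinx()                    # second y-axis for the cumulative curve
ax2.plot(r, cdf, color="black", marker="o")
ax2.set_ylabel("Cumulative probability")

plt.title("pmf and cdf: n=6, p=0.6")
plt.show()
```

Stakeholders can read "how likely is each outcome" off the bars and "how likely is at most this many" off the curve, without any statistical vocabulary.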

What breaks: a quick troubleshooting guide

If your results look wrong, here are the first things I check:

  • Is p outside [0, 1] due to bad input parsing?
  • Did you accidentally pass n as a float?
  • Are you counting failures when you meant successes (or vice versa)?
  • Did you compute “at least” with cdf instead of sf?
  • Is the data actually independent, or do repeated events bias the outcome?

Almost every production issue I’ve seen traces back to one of these.

Practical ethics and responsibility

One more note: probabilities influence decisions about people, products, and quality. If you’re using a binomial model to justify a threshold that affects customers or workers, document your assumptions and keep the analysis transparent. The math is precise, but the inputs are often estimates. Be honest about that uncertainty.

Wrapping it together

Here’s the simplest summary I can give a teammate: the binomial distribution counts how many successes you get out of n independent tries when the chance of success stays the same. With a small amount of Python, you can compute exact probabilities, tail risks, expected values, and build charts for decision‑making. The math is clean, but the assumptions matter — independence and a stable p are the foundation.

If you keep your inputs honest, use the right tail functions, and validate against intuition and mean/variance, the binomial distribution becomes one of the most reliable tools in your analytics toolkit.

A final “why this matters” note

When teams skip the binomial and jump straight to tools or dashboards, they lose the grounding that makes those results believable. I’ve seen teams argue over conversion volatility or test flakiness simply because no one wrote down the underlying distribution. If you build a binomial model first, you can defend your decisions with clarity and confidence.

That’s why I still teach it and still reach for it — it’s the simplest model that solves an enormous number of real-world questions.
