Mind the Gap
Fathom’s Analysis of the State of AI Governance
In January, we published “Building for the Thinking Machine Age”—our attempt to make sense of where we are in the AI transition and Fathom’s role in it. We argued that AI represents a genuine industrial revolution, one arriving into a society already fraying at the seams, and that the institutions we need to navigate it don’t yet exist.
That essay was about the stakes. This one is about the terrain.
If we’re going to build the governance architecture for the AI century, we need to understand where the gaps actually are and why they persist. We’ve stepped back to chart the landscape: which domains have strong governance infrastructure and which don’t, where public concern is driving action and where it’s absent, and what causes the gaps that remain.
We’ve just released Peaks and Valleys: Mapping Gaps in the AI Governance Landscape, a strategic analysis of where those gaps lie. We’re publishing it as a starting point for conversation, including the conversations ahead at The Ashby Workshops this week.
This post briefly reviews the framework and the thinking behind it.
Gaps Across the Board
Our core finding is that governance activity isn’t well-matched to either democratic demand or expert-identified risk.
We mapped ten domains against two dimensions: how concerned the American public is, and how experts assess the risk. What emerged from this mapping is a systematic mismatch.
The domains where Americans most want action—children’s AI safety, workforce impact, companion chatbots—have among the weakest policy infrastructure. These are areas where harms are visible and political demand is clear. Children’s AI safety polls as the top concern across party lines. Yet governance remains thin relative to the public mandate.
Meanwhile, the domains that experts assess as highest risk—AI agents, CBRN, existential risk, open-source model accountability—have limited public visibility. This creates a political economy problem: policymakers face little constituent pressure to act on issues the public isn’t yet focused on, even when expert assessment suggests urgency.
The result is that we’re not governing to public demand, and we’re not governing to expert risk. We’re mostly just not governing.
Caption: The Governance Priority Matrix maps ten domains against public concern and expert-assessed risk. Circle size reflects current governance activity.
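For readers who think in code, here is a minimal sketch of how the matrix's three dimensions could be represented and queried. The domain names are drawn from this post, but every numeric score below is a placeholder for illustration only, not an assessment from the report.

```python
# Illustrative sketch of the Governance Priority Matrix's three dimensions.
# Scores are placeholders (0-1 scale), NOT the report's actual assessments.
from dataclasses import dataclass

@dataclass
class Domain:
    name: str
    public_concern: float       # how concerned the American public is
    expert_risk: float          # expert-assessed risk
    governance_activity: float  # current governance infrastructure ("circle size")

domains = [
    Domain("Children's AI safety", 0.9, 0.6, 0.2),  # placeholder scores
    Domain("AI agents",            0.3, 0.9, 0.1),  # placeholder scores
    Domain("Synthetic media",      0.7, 0.5, 0.8),  # placeholder scores
]

# Flag domains where demand or risk is high but governance activity lags behind.
for d in domains:
    if max(d.public_concern, d.expert_risk) - d.governance_activity > 0.4:
        print(f"Gap: {d.name}")
```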
Why Gaps Persist
Identifying what’s missing only gets you so far. The more important question is why—because diagnosis shapes intervention.
We identified six gap types that appear across domains:
Measurement gaps. Without agreed metrics for harms, you can’t set clear standards or verify compliance. You can’t manage what you can’t measure—and measurement frameworks for AI harms are still nascent in most domains.
Codification gaps. In some domains, authoritative frameworks simply don’t exist. AI agents are the clearest example: systems that can autonomously take real-world actions are deploying ahead of any governance framework.
Operational capacity gaps. Even where frameworks exist on paper, implementation infrastructure lags. Regulators need technical expertise and resources to enforce standards; regulated entities need guidance to operationalize compliance. This is the most universal gap type—it appears nearly everywhere.
Trust infrastructure gaps. Independent verification, clear liability frameworks, and democratic legitimacy are essential for governance credibility. In many domains, companies’ safety commitments can’t be independently validated—not because anyone is acting in bad faith, but because the verification infrastructure doesn’t exist yet.
Coordination gaps. Governance fragments across jurisdictions, sectors, and institutions. This is partly inevitable given the global nature of AI development, but better coordination mechanisms could help.
Market incentive gaps. When safety investment offers no competitive advantage, governance swims upstream. This gap is also nearly universal. Until the market rewards safety, we’re relying on goodwill alone.
These gaps interact. Measurement gaps are often foundational: without agreed metrics, codification stalls. Operational capacity gaps frequently stem from market incentive failures—when safety investment offers no competitive advantage, neither regulators nor firms build enforcement infrastructure. Some interventions are therefore higher leverage than others.
Unlocking Progress
Two findings stand out.
First, operational capacity and incentive gaps are nearly universal. This is encouraging for intervention design: investments in implementation infrastructure and incentive realignment aren’t just solving one problem—they can unlock progress across multiple domains simultaneously.
Second, progress can happen quickly when conditions align. Synthetic media went from largely ungoverned to extensively regulated in under three years—more than 40 state laws plus federal action via the TAKE IT DOWN Act. Visible harms, clear frameworks, and political will converged. The question for other domains is how to create those conditions.
Cross-Cutting Solutions
Because the same gap types repeat across domains, certain interventions can address multiple challenges at once:
Independent evaluation infrastructure moves beyond self-reported compliance to external verification, addressing trust gaps while supporting enforcement. This connects directly to Fathom’s work on Independent Verification Organizations.
Standardized measurement frameworks for AI harms would unblock codification across multiple domains. Priority areas include benchmarks for output accuracy, bias auditing protocols, and harm taxonomies that allow aggregation across deployments.
Centralized incident reporting, modeled on aviation or pharmaceutical systems, creates the evidence base for smarter policy. AI incidents are harder to detect than plane crashes, but the principle—systematic tracking that identifies patterns—applies.
Clarified liability standards make prevention commercially rational. The current absence of clear frameworks leaves all parties uncertain about exposure, which paradoxically may reduce caution rather than increase it.
Incentive realignment mechanisms—including deployer requirements, insurance frameworks, and disclosure standards—can shift commercial pressure toward governance investment. When safety becomes verifiable and tied to market consequences, responsible practice moves from competitive disadvantage to competitive requirement.
None of these are silver bullets. But each addresses multiple gap types, and together they represent the institution-building that the AI century requires.
An Invitation
In “Building for the Thinking Machine Age,” we wrote that the new bargains forged between governments and citizens, between markets and workers, and between innovation and security will determine whether the AI century benefits everyone or only those building the technology.
This report is our attempt to understand the terrain where those bargains will be struck. It’s not a policy wish list or a critique of existing efforts. It’s a framework—one we hope will be useful to anyone working on these challenges, and one we expect to improve with input.
If you’re working on AI governance—in government, civil society, or industry—we welcome the chance to compare notes. Where does this framework miss the mark? What would you add? What are we getting wrong? The gaps are too large and the timeline too short for any of us to work alone.
The full report is available here.
Onward,
Andrew


