Zero Knowledge Proofs: Verifying Your Deepest Secrets
The quest for a private, yet verifiable future
I’ve rewritten and reworked this article a few times over the years, each version reflecting a different stage of my relationship with privacy, cryptography, and what it means to build systems that protect their users rather than expose them. This essay isn’t meant to be definitive; it’s a snapshot of a journey (my own) through a field that is still defining itself, with plenty of ebbs and flows, across bears and bulls.
If anything, this is a map drawn while walking: smudged and blurry in places, overly nerdy and detailed in others, with the occasional drift into irresistible metaphors. But that’s the way zero‑knowledge entered my life: unexpected, disorienting, and deeply transformative.
This is a philosophical extension of a more educational piece published last year in Obscura’s blog (an infrastructure project for privacy networks that I used to be part of) and goes deeper into the history of the field, the underpinnings of privacy by design, the cultural and engineering struggles, and the renewed optimism made possible by the latest generation of polynomial commitment schemes and privacy-first networks.
If you’ve ever felt the tension between visionary math and messy human reality, you’ll probably recognize parts of yourself here.
Entering the Labyrinth: My First Encounters With Zero‑Knowledge
I didn’t “adopt” zero‑knowledge proofs. I stumbled into them the way a character in a fantasy novel stumbles into a hidden library: half by accident, half by destiny, guided by curiosity and a desire to understand how the world should work (also mildly terrified of staring at the cryptographic abyss of elliptic curves and polynomial commitment schemes).
Like many engineers, my early crypto work was practical, not philosophical. I liked building systems that behaved predictably, where I could write tests and run them locally to see a bunch of green checkmarks before hitting the commit button in my IDE. I liked writing code that did what it was supposed to do. But ZK disrupted that. It didn’t behave like software. It behaved like language: recursive, self‑referential, in need of dialog between nodes of a connected network.
My early days experimenting with Aleo, Mina, and Aztec were chaotic. The first circuits I built leaked information. The second ones didn’t compile. The third ones compiled but produced proofs that failed verification for reasons so obscure they may as well have been eldritch runes.
One morning I’d be writing documentation for my team, referencing online docs from a privacy Layer 1; then after lunch the very same page I had open in my browser wouldn’t load anymore. The work I had been doing for the past week was rendered useless because the developers on the other side had pushed breaking changes to the domain-specific language we were using. That’s where I learned what “bleeding edge” meant. It was sharp, and it made you bleed.
Still, something about the field wouldn’t let me go.
There was an energy in the ZK world that reminded me of certain sci‑fi novels, the ones where characters unlock strange mathematical symmetries that reshape physics. It felt like entering the Library of Alexandria or the multiverse of Permutation City. A place where ideas didn’t behave the way I expected. Where a few lines of code could reshape the way society operates.
Over time, the chaos started to make sense. Patterns emerged. Algebra became a topography I could navigate. And I realized something fundamental: in decentralized systems, privacy isn’t a nice‑to‑have; it’s structural. Without it, every application, every identity layer, every financial flow can collapse like buildings on shifting sands.
The labyrinth, confusing as it was, became my home.
Zero‑Knowledge Proofs: a Brief Recap
The story of zero‑knowledge proofs begins decades before blockchains existed. The early work of Goldwasser, Micali, and Rackoff didn’t come from fintech ambitions—it came from pure theory. They asked a philosophical question: Can knowledge be proven without being revealed? And against all intuition, the answer was YES.
The early systems were deeply interactive. A prover and a verifier danced through multiple rounds of challenges. It was mathematically beautiful but useless in a world where you want to post a single proof to a decentralized network to be verified.
Then the Fiat–Shamir heuristic changed everything. It turned that interactive ritual into a static object, a single proof, by replacing the verifier’s randomness with a hash of the transcript (a hash function modeled as a random oracle). Just like that, proofs could become part of distributed systems.
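To make the transform concrete, here is a minimal sketch of a Schnorr-style proof of knowledge of a discrete logarithm, made non-interactive via Fiat–Shamir. The parameters (a Mersenne prime, base 5) are toy choices for illustration only; a real system needs a prime-order group and careful domain separation.

```python
import hashlib
import secrets

# Toy Schnorr-style proof that we know x such that y = g^x mod p,
# made non-interactive via Fiat-Shamir: the verifier's random challenge
# is replaced by a hash of the public transcript. Insecure toy parameters.
p = 2**127 - 1          # a prime modulus (toy choice)
g = 5                   # a base element (toy choice)

def challenge(y: int, t: int) -> int:
    """Fiat-Shamir: derive the challenge from the public transcript."""
    data = f"{g}|{p}|{y}|{t}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % (p - 1)

def prove(x: int) -> tuple[int, int, int]:
    """Produce (y, t, s): one static proof object, no rounds of interaction."""
    y = pow(g, x, p)
    k = secrets.randbelow(p - 1)     # one-time nonce
    t = pow(g, k, p)                 # prover's commitment
    c = challenge(y, t)              # hashed stand-in for verifier randomness
    s = (k + c * x) % (p - 1)        # response
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    # Check g^s == t * y^c (mod p), which holds iff s = k + c*x.
    c = challenge(y, t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

y, t, s = prove(x=424242)
assert verify(y, t, s)           # anyone can check the static proof
assert not verify(y, t + 1, s)   # a tampered transcript fails
```

The key move is in `challenge`: the prover can no longer wait for the verifier's coin flips, because the "coin flips" are deterministically bound to the commitment it already published.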
But even then, ZK wasn’t ready.
Through the 1990s and early 2000s, researchers slowly chipped away at performance bottlenecks. Pairing‑friendly curves like BN254 and later BLS12‑381 emerged. FFT‑based polynomial commitment schemes grew efficient enough to use in practice. And proving systems evolved into families:
zk‑SNARKs: tiny proofs, fast verification, trusted setups.
zk‑STARKs: larger proofs, transparent setup, post‑quantum safety.
PLONK‑ish systems: universal setups, flexible circuits.
Recursive SNARKs: proofs that verify other proofs—mathematical inception.
By the time I formally stepped into the space in 2022, the field felt like a living organism, something that had evolved through decades of quiet intellectual struggle. Reading pairing equations for the first time felt like reading incantations. And yet, this “magic” ran on algebra, not mysticism.
I internalized something important: ZK isn’t a blockchain fad. It’s a continuation of a long academic lineage, an intellectual pursuit that only recently found its industrial moment, solving a pressing problem: how to keep secrets in a decentralized system.
Privacy by Design: Philosophy and Practice
“Privacy by design” is one of those phrases that sounds clean and corporate, but the real experience is messy, philosophical, and deeply human. It also comes from an actual framework designed in collaboration between Canadian and Dutch researchers in the ’90s. It has seven main principles:
Proactive not reactive; preventive not remedial
Privacy as the default setting
Privacy embedded into design
Full functionality – positive-sum, not zero-sum
End-to-end security – full lifecycle protection
Visibility and transparency – keep it open
Respect for user privacy – keep it user-centric
It became a form of discipline for me. It meant approaching every system with humility. It meant assuming that anything exposed will be exploited. It meant acknowledging that data, once leaked, can never be un‑leaked.
Blockchains, for all their strengths, are built on radical transparency. Every transaction, every state transition, every interaction is public, timestamped, indexed, and immortalized. It’s an architecture designed for institutional accountability, not individual dignity.
Most early Web3 builders didn’t realize this. Or they dismissed it, preaching that pseudonymity would be enough to substitute for privacy. Maybe they considered it a valid trade‑off, worth accepting in exchange for decentralization. I didn’t. And once I started building private systems, I saw how broken the default model truly was.
Privacy by design asks different questions:
What does the system really need to know?
What must be public for correctness? What can remain shielded?
How do we prevent metadata leakage, not just data leakage?
How can a user retain control of their own narrative?
The painful truth is that building privacy‑first systems is harder (way harder). You write more logic. You audit more assumptions. You think like an adversary. Tooling misbehaves. Proving is expensive. And users rarely appreciate the invisible effort.
But the reward is profound: systems that treat people with dignity. Systems that do not demand exposure as the price of participation.
That’s why I stayed.
Auditability: the missing link for real-world solutions
Privacy without auditability is like a locked box with no key: secure, perhaps, but useless for collaboration or trust. In my early ZK experiments, I learned this the hard way: shielding data felt empowering, but without ways to selectively reveal or verify, systems became black boxes that no one trusted, or, even worse, became outright illegal in most jurisdictions.
Auditability in zero-knowledge systems flips the script. It allows users to prove properties about their private data without exposing the data itself. This balance is what makes ZK truly usable. For instance, in financial applications on platforms like Aleo, you can audit compliance (e.g., proving transaction validity) while keeping amounts private. It’s the cryptographic equivalent of a confessional booth: confession without identification.
This duality (privacy preserved, auditability enabled) addresses real-world needs. Regulators can verify tax compliance without seeing your full ledger. Employers can confirm qualifications without invasive background checks. In building a private voting system prototype on Mina Protocol, I saw how recursive proofs allowed aggregate audits of votes while individual choices remained secret. It’s not magic; it’s succinct proofs compressing mountains of data into verifiable nuggets.
But auditability demands careful design. Poorly implemented, it can leak metadata. That’s why systems like Aztec’s zk.money incorporate viewing keys: optional windows into private state. The key insight: auditability should be opt-in, granular, and user-controlled.
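The viewing-key idea can be sketched in a few lines. In this toy model, the chain stores only a commitment to private state; handing an auditor the viewing key lets them check a claimed state against that commitment without making anything public to everyone else. Real systems like Aztec use encryption and ZK circuits rather than a bare hash; this is just the shape of the opt-in disclosure flow.

```python
import hashlib
import json
import secrets

# Toy viewing-key sketch: private state is public only as a commitment.
# Sharing the viewing key is an opt-in, user-controlled act of disclosure.
# A bare keyed hash stands in for real encryption + ZK machinery.

def commit(viewing_key: bytes, state: dict) -> str:
    """Bind the state to a digest; only this digest goes on-chain."""
    payload = json.dumps(state, sort_keys=True).encode()
    return hashlib.sha256(viewing_key + payload).hexdigest()

def audit(viewing_key: bytes, claimed_state: dict, onchain_commitment: str) -> bool:
    """An auditor with the key recomputes the commitment; a match
    confirms the claimed state is the one committed on-chain."""
    return commit(viewing_key, claimed_state) == onchain_commitment

vk = secrets.token_bytes(32)                 # held by the user
state = {"balance": 100, "asset": "ETH"}     # private state
onchain = commit(vk, state)                  # only the digest is public

assert audit(vk, state, onchain)                                  # honest disclosure
assert not audit(vk, {"balance": 10**6, "asset": "ETH"}, onchain)  # a lie fails
```

Note the granularity lever: nothing forces the user to share the key at all, and in a richer design each account, asset, or epoch could carry its own key.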
Without this counterpart, privacy tech risks isolation. With it, we build ecosystems where trust flows from math, not exposure. For deeper reading, check Groth’s seminal SNARK paper, which underpins many auditable ZK systems.
The Disillusionment Arc
I don’t want to romanticize the journey: there was a moment where I almost walked away.
Not because ZK was too hard (that’s part of its charm) but because I hit a cultural wall. Users didn’t care. Investors wanted quick wins. Tooling was immature. And many ZK projects felt almost cult-like, insular, ritualistic, brilliant, but disconnected from real-world adoption.
During that period, I drifted back toward AI. It felt alive. Experimental. Urgent. The problems in AI invited creativity. The problems in ZK required patience.
But as I explored AI more deeply, I realized something important: AI needs verifiability. It needs cryptographic trust. And ZK is one of the missing pieces.
When I returned to the ZK ecosystem in 2025, everything had changed. Proof systems were faster. Networks more coherent. Tooling had grown teeth. And suddenly, the dream of privacy by design didn’t feel utopian—it felt inevitable.
ZK in 2025
The ZK landscape of 2025 is nothing like what I entered in 2022. Back then, networks were half‑broken prototypes and ambitious research papers. Today, we finally have a stack that feels… real. Let me expand on the key players that shaped my recent work.
Aleo
Aleo stands out for its focus on programmable privacy. Using their Leo language, I built my first truly private smart contracts. It’s like writing code in a fog: computations happen without revealing inputs or state. Aleo’s zero-knowledge virtual machine (zkVM) makes it accessible for developers transitioning from traditional blockchains.
Mina Protocol
Mina takes a unique approach with its succinct blockchain: the entire chain is just 22KB, thanks to recursive ZK proofs. In my experiments, this meant lightning-fast syncs and verifications on mobile devices. It’s privacy through minimalism, proving the whole history without storing it.
Aztec
Aztec is the closest thing we have to a privacy‑native execution environment that still feels connected to Ethereum’s economic gravity. Noir evolved from an experimental DSL into a genuine developer language. The private DeFi patterns emerging on Aztec don’t feel duct‑taped; they feel like natural extensions of what Ethereum aspires to be.
Aztec’s research team, led by visionaries like co-founder Zac Williamson (whose papers on PLONK have pushed the cryptographic boundaries—check his work on ePrint), just announced their token going live in two weeks via a community-first auction.
And with the incoming Aztec token launch, the ecosystem is about to gain its first serious liquidity anchor. I see Aztec today the way early Ethereum devs described the frontier: a little rough, a little wild, but undeniably full of meaningful change.
Kohaku
If Aztec feels like the wild frontier, then Kohaku (freshly announced by Vitalik Buterin himself at DevConnect Argentina) represents the polished gateway to Ethereum-native privacy. Backed by the Ethereum Foundation, Kohaku is a toolkit designed to infuse privacy into every wallet, making it as effortless as sending ETH. If adoption follows, it could become the missing bridge: an SDK that lets existing wallets add shielded balances, stealth addresses, and private state reads without reinventing the wheel.
What excites me most is how Kohaku embodies “privacy by default.” No more niche apps or clunky UX—your MetaMask or Rainbow could soon handle private sends seamlessly. It’s built on mature ZK-SNARKs and integrates with tools like Helios for trustless RPCs, addressing real pain points like RPC surveillance. Vitalik’s endorsement feels like a turning point, signaling that 2025 is when ZK finally hits escape velocity.
Aligned Layer
Aligned Layer solves one of the worst bottlenecks in ZK systems: expensive on‑chain verification. Instead of baking specialized verification logic into every contract, Aligned lets you route proofs through a dedicated verification network. It’s elegant. It’s modular. And more importantly: it actually works.
With the bonus of being built by the Lambda Class team, one of the most impressive crypto-engineering teams I’ve ever seen in action. They have steadily helped many of the biggest projects in the ecosystem become efficient and cryptographically robust over the past ten years. Shout out to Fede, who just received an award from the Argentinian government and was a fundamental force in bootstrapping a new branch of cryptography education at the University of Buenos Aires: the Cryptography and Distributed Systems Research Center. This is a big milestone in advancing crypto in Latin America.
And, to my own surprise, the Lambda Class team has just announced TODAY at DevConnect an Argentinian stablecoin called Sur Coin.
EigenLayer
Restaking is divisive, but the reality is simple: shared security is powerful. ZK‑powered AVSs (Actively Validated Services) can inherit serious economic guarantees through EigenLayer. This matters because proving systems and privacy networks often depend on credible verification layers. Eigen gives them that backbone.
RISC Zero
I’ve always had a deep respect for Rust. It’s not the easiest language to adopt, but once you wrap your brain around the concept of “borrowing”, the primitives of the language start to shine a bit brighter, and you actually start making fewer mistakes and introducing fewer critical bugs into your codebase. RISC Zero brought ZK development into a world where I didn’t have to switch mental models entirely when designing around privacy.
You write Rust. You compile. You prove.
No handcrafted circuits for every small computation. No algebraic contortions (unless you really want to go deep into the optimization rabbit hole). For builders like me, this isn’t a convenience; it’s liberation.
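The workflow is worth sketching, even crudely. In zkVM systems, the prover runs an ordinary ("guest") program and emits a receipt binding the program's identity to its public output (the "journal"); anyone can verify the receipt without re-running the program. The toy below is not the real RISC Zero API: a keyed HMAC stands in for the SNARK (so it models a trusted prover, not zero-knowledge), and `PROVER_KEY`, `image_id`, and the guest function are all illustrative names, purely to show the data flow.

```python
import hashlib
import hmac
import json

# Toy mock of the zkVM receipt workflow (NOT the real RISC Zero API).
# An HMAC stands in for the proof; it models an attesting prover only.
PROVER_KEY = b"stand-in for the proving system"  # hypothetical

def image_id(guest_source: str) -> str:
    """Identify the program being proven, like a commitment to its code."""
    return hashlib.sha256(guest_source.encode()).hexdigest()

def prove(guest, guest_source: str, private_input):
    """Run the guest on a private input; return a receipt exposing only
    the public journal, bound to the program's image id."""
    journal = guest(private_input)
    msg = json.dumps([image_id(guest_source), journal]).encode()
    seal = hmac.new(PROVER_KEY, msg, hashlib.sha256).hexdigest()
    return {"image_id": image_id(guest_source), "journal": journal, "seal": seal}

def verify(receipt) -> bool:
    """Check the seal without re-running the guest or seeing its input."""
    msg = json.dumps([receipt["image_id"], receipt["journal"]]).encode()
    expected = hmac.new(PROVER_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(receipt["seal"], expected)

# Guest: "I know a number whose square ends in 21", revealing only that fact.
source = "def guest(x): return (x * x) % 100 == 21"
receipt = prove(lambda x: (x * x) % 100 == 21, source, private_input=39)
assert verify(receipt)
assert receipt["journal"] is True   # 39 * 39 = 1521, which ends in 21
```

The point of the shape: the verifier touches only `image_id`, `journal`, and `seal`. The private input (39) never leaves the prover, which is exactly the ergonomic win of writing guests in a general-purpose language instead of handcrafting circuits.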
Some use cases to get you proving and verifying
After years of building prototypes—games, leaderboards, private transactions, identity systems—I’ve learned to distinguish between use cases that matter and those that merely signal cool early innovation but don’t solve any core problem. So here are a few cases that you may want to explore in more depth:
Private Decentralized Finance
Not optional. Not decorative. Absolutely necessary. The idea that financial behavior should be public by default is absurd. ZK turns finance from a surveillance surface into a shield.
Selective Disclosure Identity (Credentials)
The world is moving beyond binary KYC. ZK enables new forms of identity: flexible, contextual, domain‑specific. I can prove I’m allowed to enter without revealing why. Check out zkPassport. And, for the more controversial conversations, Sam Altman’s WorldCoin.
ZK‑Games
This field has a special place in my heart. I’ve been a gamer my entire life, and zk-gaming was my initial gateway into the creative side of ZK. Imperfect-information games with hidden state and provable fairness. Worlds where strategy is private, but legality is public. It’s a new design space that traditional engines cannot touch. Check out Dark Forest as one of the first brave experiments on fully on-chain games. I’ve written a whole piece on Autonomous Worlds, if you’re interested in the frontier of virtual reality.
ZKML & Verifiable Computation
There’s a quieter frontier forming at the intersection of ZK and AI: verifiable inference, model privacy, proof‑of‑correctness for machine learning. Some projects feel vaporous; others feel like early portals to new computational realms.
If you would like to get your mind blown, I suggest you check out the newly emerging field of ZKML (Zero-Knowledge Machine Learning). Teams like Giza and Ezkl are trying to marry the matrix operations native to the world of LLMs with the impossible algebra of polynomial commitment schemes to achieve verifiable AI inference.
These experiments remind me of early spacefaring sci‑fi—full of prototypes and untested physics. But the direction feels right. The era of surveillance-by-default is crumbling; privacy is becoming the norm, not the exception.
Off-chain computation with on-chain attestation will redefine how decentralized systems scale. It’s the difference between blockchains as execution engines and blockchains as judges.
The Road Ahead: a Verifiable World
I see a trajectory defined by persistence, failure, accidental breakthroughs, and the kind of obsession that only deep technical fields inspire. I learned that impossible problems eventually become engineering tasks. That privacy, once dismissed as a niche concern, is becoming a structural necessity.
We are moving from systems that expose everything to systems that reveal almost nothing. From architectures built on surveillance to those built on consent. From centralized trust to cryptographic truth.
And I’m grateful I got lost in this labyrinth early. Because now, as privacy by design becomes real, I feel more convinced than ever that zero‑knowledge will shape the technological spine of the next era.
The future I imagine isn’t loud. It’s not a world of hyper‑visible blockchains or interfaces flooded with cryptographic jargon. It’s quieter; more like the technology described in Death’s End, the third book of Liu Cixin’s Three Body Problem saga, where tech is invisible, with translucent materials that serve both as architecture and active computational infrastructure.
In a world like that: computation is verifiable by default. Identity is provable and self-referential. Privacy is the baseline, not the exception. Systems are accountable but not voyeuristic. Exposure is a choice, not a requirement.
Zero‑knowledge proofs aren’t just technical artifacts; they’re moral ones. They allow honesty without strip‑mining personal data. They allow cooperation without surveillance.
If the 2010s were about blockchains, and the 2020s about AI, then the 2030s will hopefully be about trustless verification and global privacy.
The future is private, yet verifiable.
And we are building it, proof by proof, node by node.