Inspiration

Trust online is broken. Every platform makes you start from zero.

You apply for a job with five years of experience, strong references, and a proven track record. But the company uses a different system — so none of that transfers. You upload the same resume, fill out the same forms, start from scratch. You hire someone on a freelance platform with great reviews, then move the work off-platform to save fees — suddenly those reviews don't exist. You're trusting a stranger again.

And the most powerful trust signal humans have — word-of-mouth vouching — doesn't exist online at all. Your friend says "I've worked with this person, they're great." That vouch carries weight in real life, but there's no way to verify it, quantify it, or carry it into a job application, a contract, or a new platform.

With the rise of AI agents acting on behalf of humans, this problem is about to get much worse. When an agent applies for a job, submits code, negotiates a contract, or represents you in a meeting — who is accountable? Veridex was built to answer that question before it becomes a crisis.

We asked: what if your reputation were portable, stakeable, and worked for both you and your AI agents?

What it does

Veridex is a trust protocol built on World ID and Worldchain that turns fragmented online reputation into a single, verified identity — portable across jobs, platforms, and services. It has five layers:

  1. Proof of Personhood — Every user verifies through World ID. One real human, one identity. No bots, no duplicates. This is the foundation.

  2. Reputation Aggregation — We pull verified signals from GitHub (repos, commits, languages, collaboration patterns), LinkedIn exports, portfolio URLs, and supporting documents. A deterministic scoring pipeline (no LLM in the scoring loop) computes a Veridex score from six weighted components:

$$\text{Veridex Score} = \sum_{i=1}^{6} w_i \cdot S_i$$

where the six components $S_i$ are identity assurance, evidence depth, consistency, recency, employer outcomes, and staking weight, and $w_i$ is the fixed weight assigned to component $i$.

  3. Social Staking on Worldchain — Anyone can vouch for another user by staking real ETH on Worldchain. When that person delivers — completing contracts, earning reviews, building a track record — stakers earn a proportional reward. If trust is misplaced, stakers lose their stake. This creates a self-correcting reputation market with real skin in the game — settled transparently on World's infrastructure.

  4. Contracts & Reviews — Employers hire workers through escrowed contracts. The buy-in cost scales with the worker's stake backing — more trusted workers cost more to hire, but come with economic guarantees. After completion, employers leave reviews that directly affect the worker's score.

  5. Agent Credentials — Users register AI agents tied to their World ID-verified identity. Each agent receives a credential ID and a delegated Veridex score. Any third-party service can call our public API to verify the human behind an agent before granting access.
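The agent-verification flow in layer 5 can be sketched as a small client check. The endpoint path `GET /api/agent/:id` matches our integration spec, but the response fields and the score threshold shown here are illustrative assumptions, not the final schema:

```typescript
// Hypothetical client-side check of an agent credential before granting
// access. Field names and the threshold are illustrative assumptions.
interface AgentCredential {
  credentialId: string;   // issued when the agent is registered
  humanVerified: boolean; // true iff the parent identity passed World ID
  delegatedScore: number; // Veridex score delegated to this agent
}

// Pure decision rule: bound to a verified human AND enough delegated trust.
function shouldGrantAccess(cred: AgentCredential, minScore = 50): boolean {
  return cred.humanVerified && cred.delegatedScore >= minScore;
}

// One-call verification against the public API (Node 18+ global fetch).
async function verifyAgent(baseUrl: string, credentialId: string): Promise<boolean> {
  const res = await fetch(`${baseUrl}/api/agent/${credentialId}`);
  if (!res.ok) return false;
  return shouldGrantAccess((await res.json()) as AgentCredential);
}
```

Keeping the decision rule pure makes it trivial for integrators to unit-test their own access policy separately from the network call.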

How we built it

Frontend: Next.js 14 (App Router) with a custom glass morphism design system — powder blue backgrounds, translucent cards with backdrop blur, Fraunces/Inter typography. All styling uses design tokens for consistency across 15+ pages.

Backend: Express.js + TypeScript with JWT authentication, connected to Supabase (PostgreSQL) with Row Level Security. The reputation pipeline is fully deterministic — no LLMs in the scoring loop.

AI Integration: Gemini 3 Flash powers two features: (1) a contextual chat that lets anyone ask questions about a user grounded in their real verified data, and (2) evidence extraction from LinkedIn PDFs and portfolio documents.

Identity & Chain: World ID for proof-of-personhood verification — this is the root of the entire trust system. Worldchain handles staking, vouch settlement, and reward distribution. GitHub OAuth for developer signal ingestion, and MetaMask wallet connection for on-chain interactions.

Scoring Pipeline: Six components are computed from verified signals — identity assurance, evidence depth, consistency, recency, employer outcomes, and staking weight. Each component uses deterministic rules (commit thresholds, review sentiment, stake-weighted trust propagation) rather than LLM inference.
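The aggregation step reduces to the weighted sum given earlier. A minimal sketch, assuming component values are pre-normalized to [0, 100] and using illustrative (not production) weights:

```typescript
// Deterministic score aggregation: same inputs always produce the same
// score. Weights below are illustrative assumptions, not production values.
type Component =
  | "identity" | "evidence" | "consistency"
  | "recency" | "outcomes" | "staking";

const WEIGHTS: Record<Component, number> = {
  identity: 0.25, evidence: 0.2, consistency: 0.15,
  recency: 0.1, outcomes: 0.2, staking: 0.1, // sums to 1.0
};

function veridexScore(signals: Record<Component, number>): number {
  let score = 0;
  for (const c of Object.keys(WEIGHTS) as Component[]) {
    score += WEIGHTS[c] * signals[c]; // w_i * S_i
  }
  return Math.round(score);
}
```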

Challenges we ran into

World ID integration was the hardest part. The IDKit WASM bridge, QR code generation, and session management involve multiple moving parts. We spent significant time debugging the rp-context flow and WebSocket connections between the browser and the World App.

Merge conflicts across a distributed team were frequent — multiple teammates working on the same files (dashboard, onboarding, API routes) meant careful conflict resolution to preserve everyone's features while maintaining design consistency.

Deterministic scoring without LLMs required careful algorithm design. We needed the scoring to be auditable and reproducible — the same inputs always produce the same score — while still being meaningful. Balancing six components with different data availability (some users have GitHub, some don't, some have reviews, some only have stakes) required graceful degradation logic.
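One way to express that graceful degradation is to renormalize weights over whichever components are present, rather than scoring missing signals as zero. A sketch under the same illustrative weights as above (the exact rule is an assumption, not our production algorithm):

```typescript
// Graceful-degradation sketch: redistribute weight across the components a
// user actually has, so missing signals don't drag the score to zero.
// Weights are illustrative assumptions.
const BASE_WEIGHTS: Record<string, number> = {
  identity: 0.25, evidence: 0.2, consistency: 0.15,
  recency: 0.1, outcomes: 0.2, staking: 0.1,
};

function degradedScore(available: Record<string, number>): number {
  const present = Object.keys(available).filter((k) => k in BASE_WEIGHTS);
  const totalWeight = present.reduce((s, k) => s + BASE_WEIGHTS[k], 0);
  if (totalWeight === 0) return 0; // no verified signals at all
  // Renormalize so the present components' weights sum to 1.
  return Math.round(
    present.reduce((s, k) => s + (BASE_WEIGHTS[k] / totalWeight) * available[k], 0)
  );
}
```

For example, a user with only identity and staking signals of 80 each still scores 80, because the two remaining weights are rescaled to sum to 1.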

Real-time evidence extraction from LinkedIn PDFs and portfolio URLs needed to be fast enough for the onboarding flow while still being accurate. We used Gemini for extraction but kept it out of the scoring pipeline to maintain determinism.

Accomplishments that we're proud of

  • The staking mechanism works end-to-end on Worldchain — real ETH staking through MetaMask, on-chain verification on Worldchain, proportional reward distribution on contract completion, and stake-weighted trust propagation.

  • The AI chat is grounded in real data — when someone asks "What's this person's experience with React?", the response cites actual repo names, commit counts, and review quotes. No hallucination, no generic answers.

  • Agent credentials solve a real problem — as AI agents become more prevalent, the question "who is accountable for this agent?" needs an answer. Our public verification API lets any service trace an agent back to a World ID-verified human in one call.

  • The scoring pipeline is fully deterministic and auditable — every component of the Veridex score can be traced back to specific verified signals. No black boxes.

  • Role-aware UX — different user types see different dashboards, onboarding flows, and actions. The experience adapts to who you are and what you need.
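The proportional reward distribution in the first bullet can be modeled off-chain like this. On Worldchain the logic lives in the staking contract; this is an illustrative sketch using `bigint` wei amounts to avoid floating-point rounding:

```typescript
// Illustrative model of proportional reward distribution: each staker's
// share of the pool equals their stake divided by total stake. Amounts are
// in wei (bigint); integer division truncates, as Solidity's would.
interface Stake { staker: string; amount: bigint; }

function distributeRewards(stakes: Stake[], rewardPool: bigint): Map<string, bigint> {
  const total = stakes.reduce((s, st) => s + st.amount, 0n);
  const payouts = new Map<string, bigint>();
  if (total === 0n) return payouts; // nothing staked, nothing to pay
  for (const st of stakes) {
    // Multiply before dividing to preserve precision in integer math.
    payouts.set(st.staker, (rewardPool * st.amount) / total);
  }
  return payouts;
}
```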

What we learned

Trust is a multi-dimensional problem. A single number isn't enough — you need to show the breakdown. Identity verification, evidence depth, consistency over time, recency of activity, employer outcomes, and community staking all tell different parts of the story.

Staking changes behavior. When reviewers have to put money behind their reviews, the quality of feedback improves dramatically. Skin-in-the-game is a better spam filter than any algorithm.

Agent accountability is an unsolved problem. We went into this hackathon thinking agents were a nice-to-have feature. We came out believing that binding AI agents to verified human identity is one of the most important infrastructure problems for the next few years.

Design consistency matters more than individual feature polish. Having a cohesive visual language (the powder blue glass morphism system) across every page made the product feel 10x more polished than having individually beautiful but inconsistent screens.

What's next for Veridex

  • Agent credential standard — publish a simple integration spec: how sites receive a credential ID, what GET /api/agent/:id guarantees, and what humans are liable for. Make it easy for any service to verify an agent in one call.
  • Risk tiers per surface — let humans cap what an agent is allowed to "spend" in trust (e.g. browsing vs checkout vs contract signing) rather than pretending one score fits everything.
  • Audit trail for high-stakes actions — signed logs that link agent actions to a credential and parent human, enabling dispute resolution without overengineering governance on day one.
  • Employer and marketplace pilots — one real integration path that shows trust-gated access before bids, messages, or payouts.
  • World App-first flows — verification and agent credential issuance optimized for how people already prove personhood through World App, not a separate mobile build.
  • Portable reputation connectors — prioritize sources that carry client-paid outcomes (marketplaces, paid invoices) over vanity profiles, starting with one high-signal integration.
