The Problem
the job market is broken for students and new graduates. the unemployment rate for recent college graduates climbed to 5.7% by the end of 2025, with underemployment hitting 42.5%, the highest since 2020. for the first time in recent history, 22-to-27-year-olds face higher unemployment than the general workforce.
it takes an average of 42 applications to land a single interview, with only 2.4% of candidates making it past the screen. most applicants submit 100-200+ applications before getting an offer, and cold applications convert at a brutal 0.1-2% rate. that's hundreds of hours tweaking resumes, writing cover letters, and filling out the same forms over and over.
what if a team of ai agents could discover which companies are actively growing (not from job boards, but from real workforce movement data), tailor your resume, submit the application, handle recruiter emails, and prep you for the interview with a real-time voice mock session? what if the whole pipeline ran autonomously while you sleep?
that's tapn.
What tapn does
tapn is a multi-agent ai system that automates the entire job search lifecycle. six specialized ai agents work in sequence, coordinated by a central conductor. the user doesn't touch a single form.
each agent operates from its own workspace with core instruction files: agents.md (role, rules, procedures), user.md (the user's full job search profile from onboarding), and soul.md (personality and communication style). together these give each agent its identity, its knowledge of the user, and its mission.
the pipeline
scout → taylor → echo → hermes → aria
scout runs a data analysis pipeline on a 75,000-record workforce dataset (~835mb, 3 jsonl files) from live data technologies using duckdb in-memory queries. instead of scraping job boards, it analyzes actual hiring patterns and computes a custom workforce signal model with five engineered features per company:
- hire_velocity: time-decay weighted hire count (exponential decay, λ=0.02)
- acceleration_ratio: current vs. prior window hiring momentum
- role_diversity_index: distinct titles hired, signaling team buildout
- seniority_mix: junior vs. senior hire ratio
- net_growth: hires minus attrition
companies score 0-100 via weighted composite, date-anchored to max(started_at). scout processes ~500 companies in ~15 seconds, takes the top 10, browses their careers pages, and extracts full job descriptions.
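the scoring above can be sketched in a few lines. this is a minimal illustration, not scout's actual code: the λ=0.02 decay and the five feature names come from the writeup, but the weight mix and the min-max normalization are assumptions.

```python
import math
from datetime import date

LAMBDA = 0.02  # exponential decay rate from the signal model

def hire_velocity(hire_dates, anchor):
    # time-decay weighted hire count: a hire on the anchor date counts 1.0,
    # a hire 35 days earlier counts exp(-0.02 * 35) ≈ 0.50
    return sum(math.exp(-LAMBDA * (anchor - d).days) for d in hire_dates)

# illustrative weights; the actual mix isn't specified in the writeup
WEIGHTS = {
    "hire_velocity": 0.30,
    "acceleration_ratio": 0.25,
    "role_diversity_index": 0.15,
    "seniority_mix": 0.10,
    "net_growth": 0.20,
}

def composite_score(features, cohort_bounds):
    # min-max normalize each feature across the company cohort,
    # then combine into a weighted 0-100 composite
    score = 0.0
    for name, w in WEIGHTS.items():
        lo, hi = cohort_bounds[name]
        norm = (features[name] - lo) / (hi - lo) if hi > lo else 0.0
        score += w * norm
    return round(100 * score, 1)
```

anchoring to max(started_at) rather than today's date keeps scores comparable even when the dataset's freshest records are a few weeks old.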
taylor takes the base resume + jd, extracts keywords into three tiers, rephrases and reorders content, and generates an ats-optimized pdf via latex. cardinal rule: every item traces back to the original resume. no fabrication. ever.
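in tapn the keyword extraction is done by the agent itself, but the three-tier idea can be illustrated with a plain frequency heuristic. everything here (the regex, the stopword list, the tier cutoffs) is hypothetical:

```python
import re
from collections import Counter

def tier_keywords(jd_text, resume_text, top_n=30):
    # hypothetical tiering: tier 1 = frequent jd terms already on the resume,
    # tier 2 = frequent jd terms missing from it, tier 3 = the long tail
    words = re.findall(r"[a-z][a-z0-9+#.]{2,}", jd_text.lower())
    stop = {"and", "the", "with", "for", "you", "our", "are", "will"}
    freq = Counter(w for w in words if w not in stop)
    resume = set(re.findall(r"[a-z][a-z0-9+#.]{2,}", resume_text.lower()))
    tiers = {1: [], 2: [], 3: []}
    for rank, (w, _) in enumerate(freq.most_common(top_n)):
        if rank >= top_n // 2:
            tiers[3].append(w)
        elif w in resume:
            tiers[1].append(w)  # safe to emphasize: already grounded in the resume
        else:
            tiers[2].append(w)  # rephrase targets, never fabricated from scratch
    return tiers
```

the tier 1 / tier 2 split mirrors the cardinal rule: a keyword the resume doesn't already support can reshape phrasing, but it can never become a new claim.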
echo opens an isolated browser, fills the application form, uploads the tailored resume, and submits with url allowlists, data protection rules, and prompt injection detection baked in.
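a url allowlist check like echo's is simple to sketch. the hosts below are placeholders; the real list lives in echo's permission config:

```python
from urllib.parse import urlparse

# hypothetical allowlist; the real one is declared in echo's config
ALLOWED_HOSTS = {"boards.greenhouse.io", "jobs.lever.co"}

def url_allowed(url: str) -> bool:
    # echo refuses to navigate anywhere outside the allowlist,
    # including subdomains of allowed hosts
    host = urlparse(url).hostname or ""
    return host in ALLOWED_HOSTS or any(
        host.endswith("." + h) for h in ALLOWED_HOSTS
    )
```

matching on the parsed hostname (rather than substring-matching the raw url) avoids the classic bypass of `https://boards.greenhouse.io.evil.com`.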
hermes runs on cron (every 2hrs, 9-5 weekdays), monitoring gmail for interview invitations, checking calendar conflicts, scheduling interviews, and responding to recruiters automatically.
aria generates 15-20 role-specific interview questions, configures a real-time voice interviewer via elevenlabs conversational ai, and after the mock, scores the transcript across five dimensions with a full debrief.
tapn (the conductor) orchestrates everything, spawning agents, passing data (full jd flows through every stage untruncated), deduplicating against supabase, and summarizing via telegram. a react + vite dashboard provides real-time visibility into every agent action (~1s latency via supabase realtime).
The Build
built on openclaw, an open-source multi-agent framework. the gateway (websocket) manages agent sessions, spawning, routing, and coordinating all six agents. each agent is powered by claude sonnet with behavior defined through agents.md files. every agent has declared permissions in openclaw.json — isolated chromium browser profiles for scout/echo, bash for taylor (latex) and hermes (gmail/calendar via gog cli), bash + web for aria (elevenlabs api).
the entire system runs autonomously overnight through cron jobs and openclaw's heartbeat mechanism. heartbeat keeps the openclaw gateway alive and agent sessions persistent; without it, long-running pipelines would drop mid-execution. cron handles scheduling: the full pipeline fires at 1am pst nightly, follow-up checks run at 9am, and hermes monitors the inbox every 2 hours during business hours. tapn is genuinely submitting applications and processing interview invitations while the user sleeps. it's not a "click run and wait" tool; it's a set-it-and-forget-it system.
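that schedule maps onto a small crontab. the job names here are hypothetical, but the timing fields match the schedule described above (assuming the server runs in pst):

```shell
# full pipeline fires nightly at 1am pst
0 1 * * *       openclaw run tapn-pipeline
# follow-up checks at 9am
0 9 * * *       openclaw run tapn-followup
# hermes polls the inbox every 2 hours, 9am-5pm, weekdays only
0 9-17/2 * * 1-5  openclaw run hermes-inbox
```

`9-17/2` expands to 9, 11, 13, 15, 17, and `1-5` restricts the last job to monday through friday.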
scout's data pipeline runs on duckdb with pandas and numpy for feature engineering. the webapp is react + vite with react router, framer motion, ogl (webgl shaders), react three fiber (3d galaxy model), and gsap (cinematic animations). supabase handles the backend — postgres, auth, rls, and realtime subscriptions across four tables (profiles, agent_logs, applications, mock_interview_sessions). mock interviews use elevenlabs conversational ai via @elevenlabs/react with signed urls so the api key never touches the browser.
Challenges we ran into
agent-to-webapp communication — getting the dashboard to reflect agent actions in real time meant nailing supabase realtime channels and rls policies. one misconfigured policy had us debugging for hours before we found the service-role key wasn't being passed correctly in curl headers.
multi-agent debugging — when something breaks at echo, you're tracing back through three agents' worth of context. the full jd passthrough rule made this trickier: if any handoff truncated the job description, every downstream agent produced garbage.
duckdb optimization — deeply nested jsonl records required careful sql unnesting. our first version of scout_query.py ran for over two minutes; after optimizing with duckdb's native json handling, we got it down to ~15 seconds.
browser automation — every ats platform structures forms differently. file upload alone needed three fallback methods because no single approach worked across greenhouse, lever, indeed, and handshake.
Accomplishments that we're proud of
- a fully autonomous pipeline from raw workforce data to submitted application to voice mock interview, with zero manual input.
- the workforce signal model: five engineered features scoring 500 companies in 15 seconds felt like real data science.
- real-time voice mock interviews: aria dynamically writes a custom interviewer persona per role, patches it to elevenlabs, and the user walks into a conversation that feels like a real interview.
- the dashboard updating live as agents work, via supabase realtime.
- taylor's anti-fabrication system: in a world where every ai tool hallucinates, building an agent that structurally cannot lie on your resume felt like the right thing to do.
Findings and takeaways
multi-agent coordination is a completely different paradigm: the hardest part isn't making one agent smart, it's making six agents pass data without losing anything. workforce movement data is an incredibly underutilized signal; job boards are lagging indicators. browser automation humbled us: every platform is different, and file uploads are never standardized. supabase rls is powerful but unforgiving; one bad policy silently blocks everything. and the biggest takeaway: agent design is product design. agents.md, user.md, and soul.md aren't just prompts; they're the product.
What's next for tapn?
deploying to vercel for public access.
expanding platform integrations — especially linkedin, which we deliberately avoided because we didn't have time to build browser security robust enough for agents to access a user's personal account.
evolving the workforce signal model with glassdoor sentiment, linkedin headcount trends, and time-series analysis to predict hiring cycles.
multi-user scale — the architecture (supabase rls, per-user profiles, per-user user.md) was designed for multi-tenancy from day one.
tapn started as a datathon project. we want to turn it into the job search tool we wish we had.
