Project Story: Perplexity ReSearcher
Inspiration
The project was inspired by the gap between researchers' talent and their access to opportunity. As Creative AI co-Chair at NeurIPS 2025, I noticed researchers from São Paulo, Lagos, and Mumbai getting papers accepted but unable to afford the $4,700 required to attend. At the same time, AI companies spend $30,000+ per hire searching for the very expertise these authors had already demonstrated in peer-reviewed work.
Traditional hiring asks, "Do you have this skill?"—which is unverifiable. Research papers instead answer, "Can you reason through complex problems?"—with peer-reviewed evidence.
If we can match demonstrated reasoning (in papers) with real-world problems (in jobs), we can replace $30K recruiters with $5K sponsorships that fund researchers to attend and present. This aligns with Perplexity’s mission: helping people ask better questions and get evidence-based answers.
What it does
Perplexity ReSearcher connects companies with researchers by matching job descriptions to conference papers through reasoning-based analysis.
For Researchers
- Claim your accepted paper.
- The system analyzes your publications and extracts demonstrated expertise.
- Discover companies looking for similar problem-solving abilities.
- Accept sponsorships to attend and present your work.
For Organizations
- Paste a job description.
- The system extracts required domains and methods.
- Searches papers that demonstrate matching reasoning.
- Returns ranked, evidence-backed researcher profiles.
Result: Verified expertise replaces keyword filtering, enabling direct, transparent collaboration between researchers and recruiters.
How we built it
We built a full-stack AI-powered application using Next.js, integrating the Perplexity API (sonar-pro) for reasoning and skill extraction.
System Architecture
Frontend (Next.js + React + Tailwind)
        |
        | HTTP / API calls
        v
Backend (Next.js API Routes + Node.js)
  - Input validation
  - Business logic
  - Cache & in-memory DB
  - Calls Perplexity API
        |
        +----------------------------+
        |                            |
        v                            v
Perplexity API (sonar-pro)    In-memory DB (cache)
  - Skill extraction            - Stores profiles
  - Paper analysis              - TTL: 30 minutes
  - Job matching
Tech Stack
- Languages: TypeScript, JavaScript
- Framework: Next.js 14 (full-stack React framework)
- Frontend: React 18 with Tailwind CSS
- Backend: Next.js API Routes on Node.js
- AI Integration: Perplexity API (sonar-pro) for reasoning, skill extraction, and job-paper matching
- Storage: In-memory `Map` for temporary researcher data (30-minute TTL)
- Deployment: Vercel (serverless functions)
- Data Source: Google Scholar via Perplexity search
- Version Control: GitHub
- Environment Management: `.env` for secure API key handling
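The in-memory storage with a 30-minute TTL could look like the following minimal sketch. The `TtlCache` and `now` parameter names are illustrative, not the project's actual identifiers; `now` is injectable so expiry can be tested without waiting:

```typescript
// Minimal in-memory cache with a TTL, sketching the project's 30-minute
// researcher-data store. Entries are evicted lazily on read.
class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();

  // Default TTL: 30 minutes, matching the writeup.
  constructor(private ttlMs: number = 30 * 60 * 1000) {}

  set(key: string, value: V, now: number = Date.now()): void {
    this.store.set(key, { value, expiresAt: now + this.ttlMs });
  }

  get(key: string, now: number = Date.now()): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (now > entry.expiresAt) {
      this.store.delete(key); // stale: evict and report a miss
      return undefined;
    }
    return entry.value;
  }
}
```

Because the cache lives in serverless function memory on Vercel, it is best-effort: a cold start clears it, which is acceptable for a 30-minute demo cache.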
Perplexity API Integration
The Perplexity API powers all reasoning and analysis features.
- Researcher Profile Fetching
async fetchResearcherProfile(name, affiliation, limit = 10)
- Searches Google Scholar via Perplexity
- Extracts top N papers and author details
- Returns structured researcher profiles
- Researcher-Job Matching
async matchPaperToJob(paper, jobDescription)
- Compares paper content and job requirements
- Returns a score (0–100), reasoning summary, and citations
- Skill Extraction
async askAboutResearcher(question, researcher)
- Identifies technical domains and methods used
- Generates concise skill summaries
- Paper Recommendations
GET /api/researcher/recommendations?areas=<topics>
- Suggests related NeurIPS 2024 papers
- Ranks by relevance with reasoning explanations
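The researcher-job matching call above might be assembled as in this sketch. The endpoint and model follow Perplexity's chat-completions API; the `Paper` type, prompt wording, and helper names are assumptions for illustration, not the project's actual code:

```typescript
// Illustrative request builder for matchPaperToJob. The configuration
// mirrors the sonar-pro settings used in the project.
interface Paper {
  title: string;
  abstract: string;
}

const PERPLEXITY_URL = "https://api.perplexity.ai/chat/completions";

function buildMatchRequest(paper: Paper, jobDescription: string) {
  return {
    model: "sonar-pro",
    temperature: 0.2,
    return_citations: true,
    search_domain_filter: ["arxiv.org", "scholar.google.com", "github.com"],
    messages: [
      {
        role: "system",
        content:
          "Score how well the paper's demonstrated reasoning matches the job. " +
          "Reply with JSON: { score: 0-100, reasoning: string }.",
      },
      {
        role: "user",
        content: `Paper: ${paper.title}\n${paper.abstract}\n\nJob: ${jobDescription}`,
      },
    ],
  };
}

async function matchPaperToJob(paper: Paper, jobDescription: string, apiKey: string) {
  const res = await fetch(PERPLEXITY_URL, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(buildMatchRequest(paper, jobDescription)),
  });
  if (!res.ok) throw new Error(`Perplexity API error: ${res.status}`);
  return res.json();
}
```

Keeping the request builder pure makes the prompt easy to unit-test and iterate on without spending API tokens.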
API Configuration
{
"model": "sonar-pro",
"temperature": 0.2,
"return_citations": true,
"search_domain_filter": ["arxiv.org", "scholar.google.com", "github.com"]
}
Performance Optimizations
- Parallel Processing: All researcher analyses run concurrently (`Promise.all`)
- Caching: 30-minute in-memory TTL for repeated queries
- Smart Prompts: Concise structured prompts reduce latency and token cost
- Error Handling: Graceful degradation with retry logic
- Batching: Combines multiple API calls per researcher for efficiency
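The parallel-processing and graceful-degradation points can be sketched together: run every analysis concurrently, and let a single failure yield `null` instead of sinking the whole batch. `analyzeAll` is an illustrative name, with the Perplexity-backed call passed in as a function:

```typescript
// Run per-researcher analyses concurrently with Promise.all.
// A failed analysis resolves to null so the rest of the batch survives.
async function analyzeAll<T, R>(
  items: T[],
  analyze: (item: T) => Promise<R>
): Promise<(R | null)[]> {
  return Promise.all(
    items.map((item) => analyze(item).catch(() => null))
  );
}
```

Because `Promise.all` rejects on the first error, the per-item `.catch` is what turns "one researcher's lookup failed" into a partial result rather than a failed request.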
Usage
Researchers
- Enter name and affiliation
- Select correct profile (if multiple)
- View publications and recommended related work
Recruiters
- Enter job description
- Get ranked researcher matches with:
- Match score (0–100)
- Key skills and expertise
- Evidence-backed reasoning
- Relevant paper citations
Admin
- Access `/admin` for the researcher database
- Includes seeded profiles (e.g., Yann LeCun, Geoffrey Hinton, Fei-Fei Li)
Data Sources: OpenReview (simulated), arXiv, Google Scholar, GitHub
Core Innovation: Uses Perplexity to match reasoning ability, not keywords—every match is supported by citations.
Challenges we ran into
- OpenReview API Access: Required institutional permissions; simulated realistic structure for demo.
- Keyword vs Reasoning: Initial keyword approach lacked depth; reframed prompts to infer reasoning and expertise.
- Redefining Matches: Focused on “demonstrated reasoning ability” instead of direct problem-solution matching.
- Transparency: Added “Inspect Reasoning” to show match evidence with citations.
- Demo Data Quality: Created synthetic but realistic researcher profiles for ethical demonstration.
Accomplishments
- Built a reasoning graph that connects expertise to opportunities
- Created a transparent AI pipeline with verifiable citations
- Modeled an ~85% reduction in recruiting cost (a $5K sponsorship in place of a $30K recruiter fee)
- Delivered a functional two-sided portal for researchers and recruiters
- Demonstrated a multi-step reasoning process with interpretable outputs
- Showed that papers can autonomously generate funding opportunities
What we learned
- Peer Review is Verification: Papers validated by experts provide stronger proof than résumés.
- Research ≠ Final Solution: Papers show reasoning ability—what hiring truly needs.
- Problems Drive Knowledge: Following Popper’s view, progress comes from tackling evolving problems, not static roles.
- Citations Build Trust: Perplexity’s grounded answers make AI matching credible.
- Better Questions Improve Outcomes: “Who has reasoned through X?” is more powerful than “Who has X on their résumé?”
- Living Expertise Graph: Once published, a paper continuously participates in new reasoning matches.
Main takeaway: Perplexity ReSearcher is not just a hiring tool—it’s reasoning infrastructure for human expertise.
What's next for Perplexity ReSearcher
Immediate (0–3 months)
- Integrate official OpenReview API
- Pilot program at a major ML conference
- Add sponsorship payment via Stripe Connect
Medium Term (6–12 months)
- Expand to major venues: NeurIPS, ICML, ICLR, CVPR, ACL
- Add reasoning graph analytics and cross-conference expertise mapping
- Enable real-time collaboration recommendations
Long Term (2–3 years)
- Transform conferences into self-funding ecosystems via sponsorships
- Extend reasoning graph beyond academia (GitHub, patents, blogs)
- Establish “Reasoning as a Service” infrastructure for global expertise discovery
Built with curiosity. Powered by reasoning. Grounded in evidence. Papers autonomously generate funding opportunities.
Built With
- javascript
- next.js14
- nextjsapi
- perplexity-api-sonar-pro
- react-18
- tailwindcss
- typescript