Inspiration

Hiring is broken — recruiters spend hours combing through resumes, manually verifying skills, and trying to guess whether a candidate’s GitHub and LinkedIn actually reflect what they say on paper. We wanted to fix that.
ScreenAI was born out of a desire to automate, verify, and intelligently score candidates using AI, so recruiters can screen smarter and hire faster. With ultra-fast LLMs from Groq, we saw a chance to rethink the screening process entirely.


What it does

ScreenAI is an AI-powered candidate screening platform that:

  • Accepts resumes in PDF format and extracts structured data.
  • Automatically scrapes the candidate’s GitHub, LinkedIn, personal website, and broader online presence.
  • Uses Groq-powered LLMs to analyze the candidate’s profile, validate claims, and match them against job descriptions.
  • Assigns a fit score (0-100) with detailed breakdowns, highlighting strengths, gaps, and red flags.
  • Offers recruiters a dashboard to manage jobs and candidate resumes, with real-time processing updates via SSE.
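The fit score and its breakdown can be pictured as a small structured result. A minimal sketch, assuming an illustrative shape (`FitReport` and `scoreLabel` are hypothetical names, not ScreenAI's actual schema):

```typescript
// Hypothetical shape of one candidate's screening result.
interface FitReport {
  score: number;        // 0-100 job-fit score
  strengths: string[];  // e.g. "5 years of production React"
  gaps: string[];       // e.g. "no cloud infrastructure experience"
  redFlags: string[];   // e.g. "GitHub activity doesn't match resume claims"
}

// Illustrative helper: bucket a numeric score into a recruiter-facing label.
function scoreLabel(score: number): "strong" | "possible" | "weak" {
  if (score >= 75) return "strong";
  if (score >= 50) return "possible";
  return "weak";
}
```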

How we built it

Frontend:

  • Built using Next.js 14 (App Router), TypeScript, and Tailwind CSS.
  • Responsive UI with Radix UI, Lucide Icons, and custom animations (glassmorphism, transitions).
  • Seamless UX from resume upload to real-time analysis feedback.
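On the client, real-time feedback like this is typically consumed with the browser's `EventSource` API. A minimal sketch, where the route path and the `{ stage, pct }` payload shape are assumptions for illustration:

```typescript
// Parse one SSE message payload into a typed progress update.
// The { stage, pct } shape is an assumed example, not ScreenAI's exact schema.
interface ProgressUpdate {
  stage: string; // e.g. "parsing" | "scraping" | "analysis" | "done"
  pct: number;   // 0-100 progress for the current stage
}

function parseUpdate(data: string): ProgressUpdate {
  return JSON.parse(data) as ProgressUpdate;
}

// In the browser (sketch; the route path is hypothetical):
// const source = new EventSource("/api/candidates/123/analyze");
// source.onmessage = (e) => {
//   const { stage, pct } = parseUpdate(e.data);
//   updateLoadingUI(stage, pct);          // hypothetical UI hook
//   if (stage === "done") source.close();
// };
```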

Backend:

  • Next.js API routes for all processing, powered by Server-Sent Events (SSE) for live updates.
  • Groq API for blazing-fast LLM-based analysis (Llama 3.3 70B, Llama 3.1 8B, Gemma 2 9B).
  • PDF parser for extracting resume content.
  • Cheerio + APIs for scraping GitHub, LinkedIn, and portfolio data.
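An App Router route can stream SSE updates by returning a `ReadableStream` with the `text/event-stream` content type. A minimal sketch of the pattern, with assumed stage names and without the real parsing/scraping/analysis calls:

```typescript
// Sketch of a Next.js App Router route handler (e.g. app/api/analyze/route.ts)
// that streams progress events to the client over SSE.
export async function GET(): Promise<Response> {
  const encoder = new TextEncoder();

  const stream = new ReadableStream({
    async start(controller) {
      // Each SSE message is "data: <payload>\n\n".
      const send = (data: object) =>
        controller.enqueue(encoder.encode(`data: ${JSON.stringify(data)}\n\n`));

      // In the real pipeline these would bracket parsing, scraping, and
      // LLM analysis; here they are emitted directly for illustration.
      send({ stage: "parsing" });
      send({ stage: "scraping" });
      send({ stage: "analysis" });
      send({ stage: "done" });
      controller.close();
    },
  });

  return new Response(stream, {
    headers: {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache",
    },
  });
}
```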

Architecture:

  • Modular service-based architecture: parsing, scraping, and AI analysis as separate services.
  • Mock data fallback if no API key is provided (for easy demoing).
  • Secure file handling and clean directory structure.
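The mock-data fallback can be sketched as a guard in front of the real LLM call: if no API key is configured, return canned analysis so the demo works offline. Function names and the mock payload below are illustrative assumptions:

```typescript
// Hypothetical analysis result shape for this sketch.
interface Analysis {
  score: number;
  summary: string;
}

// If GROQ_API_KEY is unset, fall back to mock data so the app can be
// demoed without credentials; otherwise delegate to the real LLM call.
async function analyzeResume(
  resumeText: string,
  callGroq: (prompt: string) => Promise<Analysis>,
): Promise<Analysis> {
  if (!process.env.GROQ_API_KEY) {
    // Mock fallback: deterministic canned output for demos.
    return { score: 72, summary: "Mock analysis: strong frontend background." };
  }
  return callGroq(resumeText);
}
```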

Challenges we ran into

  • LinkedIn anti-scraping restrictions limited how much candidate data we could extract reliably.
  • Integrating SSE with a loading UI in Next.js required fine-tuning for seamless feedback.
  • Balancing analysis speed with model size: Llama 3.3 70B offers richer results but is heavier, so we routed lighter tasks to smaller models (Llama 3.1 8B, Gemma 2 9B).
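The speed-versus-size trade-off above amounts to routing each task to an appropriately sized model. A sketch of that routing, where the task names and the task-to-model mapping are assumptions (the model IDs are Groq's published identifiers):

```typescript
// Hypothetical task categories in the candidate-analysis pipeline.
type Task = "deep-analysis" | "claim-check" | "field-extraction";

// Route each task to a Groq-hosted model sized for it:
// heavy reasoning -> 70B, verification -> 9B, fast extraction -> 8B.
function modelFor(task: Task): string {
  switch (task) {
    case "deep-analysis":
      return "llama-3.3-70b-versatile"; // richest reasoning, slowest
    case "claim-check":
      return "gemma2-9b-it";            // mid-size validation
    case "field-extraction":
      return "llama-3.1-8b-instant";    // fast structured extraction
  }
}
```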

Accomplishments that we're proud of

  • End-to-end screening experience from resume upload to red flag detection — in under 2 minutes.
  • Job-fit scoring that’s actually explainable using AI-generated reasoning.
  • A clean, modern UI with fluid interactions and real-time insights.
  • Built-in job posting assistant using natural language prompts — it feels like magic.

What we learned

  • How to build fast and reliable pipelines using Groq's LLMs, and the massive difference inference speed makes in UX.
  • Deepened our experience in real-time frontend updates with SSE and AI data orchestration.
  • Gained practical insights on multi-model usage, where different LLMs shine at different parts of the candidate analysis process.
  • Got better at creating developer-friendly architectures, enabling future feature expansion.

What's next for ScreenAI

  • Advanced candidate comparison — stack-ranked comparisons between multiple applicants.
  • Custom model fine-tuning on internal recruiter feedback or company-specific hiring signals.
  • Integrated ATS support (e.g. Greenhouse, Lever).
  • Candidate feedback reports — so applicants know why they didn't move forward.
  • Mobile support and an extension to evaluate candidates directly from LinkedIn/GitHub.
  • A recruiter copilot — voice-powered assistant that suggests top candidates, flags risky hires, and summarizes profiles in seconds.
  • Streamlined background checks, so everything is done in one click.
