Market Ready

Inspiration

Students grind through projects, courses, and “resume tips,” yet still can’t answer the one question that matters: are they actually market-ready for internships and jobs? We wanted a system that doesn’t just give vibes; it demands proof, ties that proof to real market demand, and then produces a clear plan to close the gaps.

What it does

Market Ready helps a student quantify and improve readiness with a few connected modules:

Market-Ready Index (MRI): a single score that blends standards, demand, and evidence quality

GitHub Signal Auditor: reads GitHub activity and repo signals to infer verified skills + velocity + warnings

Sentinel Market Guard: checks market shifts (job demand changes) and pushes actionable alerts

Interactive 90-Day Pivot Kanban: a drag-and-drop board with an AI-generated 90-day plan + optional GitHub sync

Future-Shock Simulator: stress-tests a skill profile against accelerated change and flags “at-risk” skills

Recruiter Truth-Link: generates a shareable public profile link (proof + score) for recruiters/coaches
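As one illustration of the kind of signal the GitHub Signal Auditor works with, here is a minimal sketch of a commit-velocity metric. The function name and the windowing policy are our illustration, not the product’s exact logic:

```python
from datetime import datetime, timedelta, timezone

def commit_velocity(commit_dates, window_days=90):
    """Average commits per week over a recent window.

    commit_dates: timezone-aware datetimes (e.g. pulled from the GitHub API).
    Returns a single 'velocity' number that a downstream scorer could consume.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=window_days)
    recent = [d for d in commit_dates if d >= cutoff]
    return round(len(recent) / (window_days / 7), 2)
```

In the real auditor this would be one signal among several (repo diversity, recency, warnings), but it shows the shape: raw API data in, one comparable number out.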

How we built it

Frontend

Next.js 14 (App Router), React, TypeScript

Tailwind CSS + shadcn/ui for UI components

Backend

FastAPI (Python 3.11)

SQLAlchemy ORM + PostgreSQL + Alembic migrations

JWT auth (bearer token passed in a request header)
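To make the header-based JWT flow concrete, here is a self-contained HS256 sketch using only the standard library. In the actual backend a library such as PyJWT would handle this; the secret, helper names, and lack of expiry checks here are purely illustrative:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"dev-secret"  # illustrative; real deployments load this from config

def _b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign_token(payload: dict) -> str:
    """Build a compact JWS (header.payload.signature) signed with HMAC-SHA256."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    sig = _b64url(hmac.new(SECRET, header + b"." + body, hashlib.sha256).digest())
    return (header + b"." + body + b"." + sig).decode()

def verify_token(token: str):
    """Return the payload dict if the signature checks out, else None."""
    header, body, sig = token.encode().split(b".")
    expected = _b64url(hmac.new(SECRET, header + b"." + body, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None
    pad = b"=" * (-len(body) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(body + pad))
```

On each request, the API reads the token from the Authorization header, verifies it, and rejects the call if verification fails.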

External data + signals

GitHub API (public signals)

Adzuna (labor market demand signals)

O*NET/CareerOneStop-style standards for “non-negotiable” requirements

LLM API (OpenAI; optional Groq provider) for generative features
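For the labor-demand side, a minimal sketch of querying Adzuna’s job-search endpoint. The `adzuna_demand`/`parse_demand` helper names are ours, the endpoint shape follows Adzuna’s public API, and real code would add error handling, rate limiting, and caching:

```python
import json
import urllib.parse
import urllib.request

ADZUNA_SEARCH = "https://api.adzuna.com/v1/api/jobs/%s/search/1?%s"

def adzuna_demand(skill, app_id, app_key, country="us"):
    """Fetch the live posting count for one skill (network call)."""
    query = urllib.parse.urlencode({
        "app_id": app_id,
        "app_key": app_key,
        "what": skill,
        "results_per_page": 1,
    })
    with urllib.request.urlopen(ADZUNA_SEARCH % (country, query)) as resp:
        return parse_demand(json.load(resp))

def parse_demand(payload):
    """Adzuna search responses carry a total 'count' for the query."""
    return payload.get("count", 0)
```

Counts like these become the Market Demand inputs: a skill that matches thousands of live postings scores higher than one the market has stopped hiring for.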

The core scoring idea

We compute an MRI score as a weighted blend:

MRI = 0.40 × (Federal Standards) + 0.30 × (Market Demand) + 0.30 × (Evidence Density)

Where:

Federal Standards = completion of “non-negotiable” + “strong signal” checklist items

Market Demand = how many verified skills match what the market is hiring for

Evidence Density = diversity/recency of proofs + GitHub signal bonuses
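The blend above can be sketched directly, assuming each pillar has already been normalized to a 0–100 sub-score (the function name is illustrative):

```python
# Weights from the MRI formula: standards 40%, demand 30%, evidence 30%.
WEIGHTS = {"federal_standards": 0.40, "market_demand": 0.30, "evidence_density": 0.30}

def market_ready_index(federal_standards, market_demand, evidence_density):
    """Blend the three normalized pillars into one 0-100 MRI score."""
    score = (WEIGHTS["federal_standards"] * federal_standards
             + WEIGHTS["market_demand"] * market_demand
             + WEIGHTS["evidence_density"] * evidence_density)
    return round(score, 1)
```

For example, a student at 80/60/50 on the three pillars lands at 0.40·80 + 0.30·60 + 0.30·50 = 65.0.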

We also weigh self-attested proofs by proficiency (example policy):

Beginner = 50% credit

Intermediate = 75% credit

Professional = 100% credit
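The proficiency discount reduces to a lookup table (the table mirrors the example policy above; the function name is our illustration):

```python
# Example policy: self-attested proofs earn partial credit by claimed level.
PROFICIENCY_CREDIT = {"beginner": 0.50, "intermediate": 0.75, "professional": 1.00}

def proof_credit(base_points, proficiency):
    """Discount a self-attested proof's points by the claimed proficiency."""
    return base_points * PROFICIENCY_CREDIT[proficiency]
```

Discounted proof points then feed the Evidence Density pillar, so claiming "professional" without matching evidence quality doesn’t inflate the score for free.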

Challenges we ran into

Turning “proof” into something measurable: designing a scoring system that rewards evidence quality without being gameable

Signal noise: GitHub and job APIs contain messy, incomplete, or misleading data

AI safety + trust: making AI verification transparent (badges like Reviewing/Verified/Rejected) so users don’t treat AI as magic

Integration complexity: connecting scoring ↔ notifications ↔ kanban planning so it feels like one product, not separate demos

What we learned

A “career score” only matters if it’s paired with specific next actions

Market signals change fast—users need a monitor + alert loop, not a one-time dashboard

AI is most useful when it reduces friction (generate plans, map evidence, summarize gaps) while keeping humans in control

What’s next

Better skill-to-job matching with richer role taxonomies and embeddings

Stronger proof verification (more artifact types, clearer rubrics, audit trails)

Cohort/coaching mode for advisors and career centers

Personalized interview + project prompts generated directly from identified gaps
