DEADLINE EXTENDED BY 24 HOURS

📢 UPDATED DATASET FOR THE HACKATHON IS NOW LIVE!

The official dataset for the HackTheFest AI Bias Bounty Hackathon is now fully available.

🔗 Access it here: Download Dataset Folder Here

(Please review the README inside the folder and the Onboarding document carefully for field descriptions and structure.)

📘 Before diving in, please read through the Participant Onboarding PDF to understand what to build, expected deliverables, and evaluation criteria.

How it Works

  • The dataset released on Day 0 (loan_access_dataset) contains the full training data, including the loan_approved column.
  • Today, we've released a new evaluation file: test_csv, which contains new applicant records without the Loan_Approved column.
  • Your task is to use your trained model to generate predictions for this new test set.
  • Submit your final predictions as submission.csv using this format: ID | LoanApproved (a minimal end-to-end sketch follows below).
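
For concreteness, here is a minimal sketch of that workflow in Python with pandas and scikit-learn. The file names, feature handling, and model choice are assumptions (only the ID and LoanApproved output columns are fixed above), so check the README for the actual schema.

```python
# Minimal end-to-end sketch (file names, feature handling, and the model choice
# are assumptions; only the ID | LoanApproved output format is fixed above).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

train = pd.read_csv("loan_access_dataset.csv")   # Day 0 training data (path may differ)
test = pd.read_csv("test.csv")                   # the new evaluation file (path may differ)

target = "loan_approved"
features = [c for c in train.columns if c not in (target, "ID")]

# One-hot encode and keep the train/test design matrices aligned.
X_train = pd.get_dummies(train[features])
X_test = pd.get_dummies(test[features]).reindex(columns=X_train.columns, fill_value=0)

model = RandomForestClassifier(random_state=42)
model.fit(X_train, train[target])

pd.DataFrame({"ID": test["ID"], "LoanApproved": model.predict(X_test)}) \
  .to_csv("submission.csv", index=False)
```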

📁 Submission templates can be found in the Resources folder on our GitHub Repository.

📝 Submit your final project through:

We can't wait to see what you build! 🚀

If you have any questions, join us on Slack.

 

⚠️ IMPORTANT: You can participate as an individual or form a team of 2-4 members.

Still assembling your team? Start by registering here on Devpost. Devpost makes it easy to connect with others and form a team. Once your team is finalized, each member must complete the official registration form via the website to be fully registered for the hackathon.

Please ensure your entire team is ready before submitting the official registration form on our website. If your team details change after registering, contact us at support@hackthefest.com.

 

About the Challenge

Welcome to the AI Bias Bounty Hackathon, a community-powered competition focused on uncovering, documenting, and mitigating harmful bias and risk in AI systems. Whether you're a student, researcher, engineer, red teamer, or concerned techie, this event is your chance to contribute meaningfully to AI safety while building tools that will benefit the entire ecosystem.

The AI Bias Bounty Hackathon invites you to step into the role of an AI risk detective: investigate datasets, detect hidden bias, document real harm, and build models/tools that make AI safer for everyone. This is your opportunity to explore the dark corners of machine behavior, contribute to the world’s first open AI Risk Intelligence Framework, and create tools that will be used far beyond this event.

 

What is AI Bias Bounty?

AI Bias Bounty is a 48-hour, hands-on, impact-focused hackathon designed for researchers, engineers, students, and red teamers who care about responsible AI. It’s inspired by security bug bounty programs, but instead of vulnerabilities in code, we’re mapping bias, hallucination, discrimination, data risk, and misuse in real-world AI systems.

You’ll work with:

  • Our simulated financial datasets

  • A ready-to-use template for documenting risk

  • Starter kits and onboarding guides

You’ll test, detect, report, and contribute, and your work will live on as part of a global GitHub archive that’s open to the public.

 
Why Join?
  • Make real impact by helping shape how AI harm is identified and discussed.
  • Gain hands-on experience testing models, analyzing datasets, and building auditing tools.
  • Learn valuable skills in red teaming, bias analysis, and ethical model evaluation.
  • Contribute to a public AI risk database and earn prizes and recognition for your work.

 

Key Dates

  • Registration: June 4th - 27th, 2025
  • Kickoff Event: June 28th, 2025
  • Onboarding: June 28th - 30th, 2025
  • Hackathon: July 1st - 3rd, 2025
  • Judging Period: July 5th - 15th, 2025
  • Winners Announced: July 17th, 2025; Certificates Issued: July 23rd, 2025

Requirements

What You Will Build

Dataset Bias Detection + Model Tool

  1. Explore the dataset and train a classification model for prediction using techniques like Logistic Regression, Random Forest, Transformer models, or XGBoost.
  2. Audit your model for fairness, including intersectional analysis and identifying false positives/negatives by group/subgroup.
  3. Apply and evaluate bias mitigation techniques such as reweighting, data balancing, or adversarial methods to improve fairness without sacrificing performance.
  4. Use interpretability and fairness tooling such as SHAP (SHapley Additive exPlanations), LIME (Local Interpretable Model-agnostic Explanations), or Fairlearn to explain how features contribute to predictions and fairness outcomes (a combined sketch of steps 2-4 follows this list).
  5. Submit your trained model, source code, visuals, and a structured bias report documenting your process, analysis, and fairness interventions.
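
To make steps 2-4 concrete, the sketch below walks through a group-wise audit, one mitigation option (Fairlearn's reduction approach under a demographic-parity constraint, offered as a stand-in for the reweighting/balancing techniques named above), and SHAP attributions. The sensitive-attribute column ("Gender"), file path, and label encoding are assumptions; substitute the actual fields documented in the dataset README.

```python
# Minimal sketch of steps 2-4: group-wise audit, one mitigation option, and SHAP
# attributions. Column names ("Gender", "ID", "loan_approved") are assumptions.
import pandas as pd
import shap
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier
from fairlearn.metrics import (
    MetricFrame, selection_rate, false_positive_rate, false_negative_rate,
)
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

df = pd.read_csv("loan_access_dataset.csv")
y = df["loan_approved"]                 # assumes a binary 0/1 label; encode first if not
sensitive = df["Gender"]                # hypothetical sensitive attribute
X = pd.get_dummies(df.drop(columns=["loan_approved", "ID"]))

X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(
    X, y, sensitive, test_size=0.2, random_state=0, stratify=y
)

model = XGBClassifier(eval_metric="logloss")
model.fit(X_tr, y_tr)
pred = model.predict(X_te)

# Step 2: approval and error rates per subgroup, plus the largest group gap.
audit = MetricFrame(
    metrics={
        "selection_rate": selection_rate,
        "false_positive_rate": false_positive_rate,
        "false_negative_rate": false_negative_rate,
    },
    y_true=y_te, y_pred=pred, sensitive_features=s_te,
)
print(audit.by_group)
print(audit.difference())

# Step 3 (one option): retrain under a demographic-parity constraint and re-audit.
mitigator = ExponentiatedGradient(
    XGBClassifier(eval_metric="logloss"), constraints=DemographicParity()
)
mitigator.fit(X_tr, y_tr, sensitive_features=s_tr)
mitigated_pred = mitigator.predict(X_te)

# Step 4: SHAP attributions showing which features drive the model's decisions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
shap.summary_plot(shap_values, X_te)
```

Re-running the same MetricFrame audit on mitigated_pred, and passing a DataFrame of several sensitive features for intersectional subgroups, is a quick way to check whether fairness improved without sacrificing overall performance.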

Hackathon Sponsors

Prizes

3 non-cash prizes
1st Prize
1 winner

$1000 Azure OpenAI Credits
$100 Gift Card
Certificate of recognition

2nd Prize
1 winner

$600 Azure OpenAI Credits
$50 Gift Card
Certificate of recognition

3rd Prize
1 winner

$400 Azure OpenAI Credits
Certificate of recognition


Judges

Oleksandr Kondratiuk
CTO | akirolabs

Sudheer Obbu
Vice President | JPMorgan Chase

Jenna Cavelle
Founder | One Woman Show AI

Sarah Choudhary
CEO | ICE rideshare

Ganesh Harke
Vice President Technology | Citi

Iyanuoluwa Ajao
Senior Applied AI Engineer | Dataligence Labs

Yetunde Adekoya
Quantitative Risk Analyst | Citibank, N.A.

Madhu Ramanathan
Principal Group Engineering Manager | Microsoft

Alankar Agnihotri
Senior Product Manager - Head of Gemini in Auto | Google

Harpreet Singh
Executive Director | JPMorgan Chase & Co

Pratik Badri
VP, Quant Analytics Manager | JPMorgan Chase & Co

Laticbe Elijah
Yeshiva University

Rajesh Sura
Head of Data Engineering and Analytics | Amazon

Raghav Sharma
Machine Learning Engineer | Workday Inc

Praneetha Kotla
Lead RPA Developer | ERP Smartlabs

Judging Criteria

  • Accuracy of Bias Identification (30%)
    How effectively does the submission identify multiple types of bias, provide supporting evidence, and handle potential errors?
  • Model Design and Justification (20%)
    How appropriate is the chosen model for the task, and how well is the design and reasoning behind it explained?
  • Coverage of Bias Types (15%)
    To what extent does the submission address a wide range of bias categories, from minimal to comprehensive coverage?
  • Interpretability and Insight (15%)
    How clearly and insightfully does the submission explain the identified biases, especially through tools, visualizations, or interpretability techniques?
  • Mitigation Suggestions or Solutions (10%)
    Does the submission offer thoughtful and practical suggestions to reduce or mitigate the identified biases, including improvements to the model?
  • Presentation and Clarity (10%)
    How clearly is the submission presented through visuals, written documentation, and/or demonstrations?

Questions? Email the hackathon manager
