The Anthropic AI Hackathon, Toronto 2025 is a three-week, project-based hackathon where teams build on their own schedule throughout November and then present live on November 23rd at the University of Toronto's Bahen Centre for Information Technology. Participants will use the Claude API to create safe, creative, and human-centered AI systems across reasoning, accessibility, education, civic tech, and more.

Builders work on their own schedule during the month, but everything leads to Demo Day, where teams showcase their projects, meet sponsors, and compete for prizes.

Link to Hacker Guide and Schedule

Requirements

To be eligible for judging on November 23, each team must submit the following on Devpost before the deadline:

✅ 1. Project Demo Video / Presentation (max 5 minutes)

A clear walkthrough of your project, showing how it works and what problem it solves.

If you are entering Track 3, you should submit a PowerPoint presentation or slideshow instead of a video (a video is optional if you have a working product). For more details on Track 3 deliverables, see this document.

✅ 2. GitHub Repository

Include:
- All source code
- A readable README explaining setup + how to run your project
- Any datasets, prompts, or configuration files needed for evaluation

✅ 3. Written Project Summary (≤ 500 words)

Describe:
- The problem you’re solving
- Your solution + key features
- Why it matters (impact)
- How Claude is used

✅ 4. Track Selection

Choose the track(s) you're submitting to: Reasoning Systems, Human-Centered AI, or the Agentiiv Scholarship Challenge.

Hackathon Sponsors

Prizes

10 non-cash prizes
Best Overall Hack
1 winner

Awarded to the highest-scoring team across all tracks for pushing boundaries with technical depth, bold creativity, or redefining what's possible with Claude.

Reasoning Systems Track: First Place
1 winner

Awarded to the highest-scoring team in the Reasoning Systems Track for building tools that extend human thinking, reasoning, and creation.

Reasoning Systems Track: Second Place
1 winner

Awarded to the second highest-scoring team in the Reasoning Systems Track for building tools that extend human thinking, reasoning, and creation.

Reasoning Systems Track: Third Place
1 winner

Awarded to the third highest-scoring team in the Reasoning Systems Track for building tools that extend human thinking, reasoning, and creation.

Human-Centered AI Track: First Place
1 winner

Awarded to the highest-scoring team in the Human-Centered AI Track for using AI to solve real problems for real people.

Human-Centered AI Track: Second Place
1 winner

Awarded to the second highest-scoring team in the Human-Centered AI Track for using AI to solve real problems for real people.

Human-Centered AI Track: Third Place
1 winner

Awarded to the third highest-scoring team in the Human-Centered AI Track for using AI to solve real problems for real people.

Agentiiv Scholarship Challenge: First Place
1 winner

Awarded to the highest-scoring team in the Agentiiv Scholarship Challenge for building the smartest system to turn scholarship data into winning applications.

Agentiiv Scholarship Challenge: Second Place
1 winner

Awarded to the second highest-scoring team in the Agentiiv Scholarship Challenge for building the smartest system to turn scholarship data into winning applications.

Agentiiv Scholarship Challenge: Third Place
1 winner

Awarded to the third highest-scoring team in the Agentiiv Scholarship Challenge for building the smartest system to turn scholarship data into winning applications.

Judges

James Han

Judging Criteria

  • Innovation
    How original, creative, or insightful your idea is. Judges look for projects that push boundaries, explore new directions, or apply Claude in a novel way.
  • Technical Execution
    How well the project is engineered. This includes code quality, system design, robustness, and the depth of your implementation—not just UI polish.
  • Impact
    How meaningful the project is in the real world. Does it solve a real problem? Is the solution clear, useful, and well-justified?
  • Human Experience
    The clarity and quality of the user experience. Judges evaluate accessibility, usability, intuitive design, and how well the project aligns with human-centered or ethical principles.
  • Presentation
    How clearly you communicate your project during the demo. Strong storytelling, effective explanations, and a smooth demo all matter.

Questions? Email the hackathon manager
