DunkDex

Inspiration

When we were kids, Pokedexes felt like magic: a portable device that could scan a creature and instantly tell you everything about it. Pokemon may not be real, but the NBA gives us something close every night, with seven-foot-tall freaks of nature doing things that barely seem possible. We wanted to bring that childhood feeling into the real world, so we built DunkDex, a portable handheld basketball Pokedex that scans NBA players and reveals their information as if they were entries in a living sports encyclopedia.

We also wanted the project to feel fun, not just technical. That led us to add collectible player entries, retro-inspired visuals, and challenge-based tasks that turn the experience into something between a stats tool and a game.

What it does

DunkDex is a lightweight handheld system that lets users:

  • scan NBA players using a camera
  • identify players through facial recognition
  • unlock player entries in a Pokedex-style interface
  • view player information like height, weight, team, number, position, and college
  • complete randomized tasks to gamify the experience
  • check live and upcoming NBA game information

The goal was to make the device feel nostalgic, interactive, and portable while still being technically impressive.

How we built it

DunkDex was built through a combination of embedded hardware, computer vision, web development, and data integration.

Hardware

We built the handheld around an Arduino Uno Q and an ESP32-S3-EYE, along with a Logitech USB camera and an external LCD screen. We also designed and 3D printed a retro-style frame to give the device the look and feel of a real handheld console.

Originally, we planned to have the Arduino drive the display directly, but that became one of the biggest pivots of the project. Instead, we used the Arduino-hosted server as the brains of the system and relied on the ESP32 to display the interface, which ended up being a much more workable solution under hackathon time pressure.

Software and ML

Our CS team split into two major tracks:

  • Computer vision / player recognition
  • Frontend + backend / device interface

On the computer vision side, we trained and tuned a facial recognition pipeline using OpenCV and related tooling on curated NBA player datasets. To keep inference fast and realistic for embedded hardware, we limited our scope to playoff teams, which helped reduce the model size and improve responsiveness. We also wanted to avoid simply leaning on a black-box solution; part of the motivation was to better control the training process and be more intentional about dataset quality and fairness.
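At its core, the recognition step reduces to nearest-neighbor matching over face embeddings. A minimal NumPy sketch of that matching logic (the player names, 4-dimensional vectors, and similarity threshold here are invented for illustration; the real pipeline extracts embeddings with OpenCV first):

```python
import numpy as np

# Toy embedding database: in the real pipeline each vector comes from the
# face-recognition model trained on curated playoff-team datasets; these
# 4-dim vectors are purely illustrative.
PLAYER_DB = {
    "Player A": np.array([0.9, 0.1, 0.0, 0.2]),
    "Player B": np.array([0.1, 0.8, 0.3, 0.0]),
}

def identify(embedding, db=PLAYER_DB, threshold=0.7):
    """Return the best-matching player, or None if nothing is close enough."""
    best_name, best_score = None, -1.0
    for name, ref in db.items():
        # Cosine similarity between the query embedding and the stored one.
        score = float(np.dot(embedding, ref) /
                      (np.linalg.norm(embedding) * np.linalg.norm(ref)))
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

print(identify(np.array([0.88, 0.15, 0.05, 0.18])))  # prints "Player A"
```

The threshold is what lets the device say "no entry found" instead of forcing a bad match, which mattered for demo reliability.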

When our initial model and dataset underperformed, we had to make a major mid-project pivot: retrain, refine, and recalibrate halfway through development. That ended up being one of the best lessons of the weekend.

Backend and web interface

The software interface was built as a lightweight Flask server serving HTML, CSS, and JavaScript pages. The handheld experience used a menu-driven interface with separate pages for scanning, browsing player entries, tasks, and game information.

When a player is scanned:

  1. the camera captures an image
  2. the backend sends it through the recognition pipeline
  3. the recognized player is matched against our player database
  4. the player is marked as found in a JSON-backed state file
  5. the interface updates the DunkDex with that player's entry and sprite

We also created backend endpoints to support the device flow, including routes for recognition, state updates, and a snapshot/display pipeline for the screen.
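The scan flow above can be sketched as a small Flask app (the route name, state-file layout, and request shape are simplified illustrations; the real endpoint receives image bytes and runs them through the recognition pipeline first):

```python
# Minimal sketch of the scan endpoint: match a recognized player, mark them
# found in a JSON-backed state file, and return the updated entry status.
import json
from pathlib import Path

from flask import Flask, jsonify, request

app = Flask(__name__)
STATE_FILE = Path("dunkdex_state.json")  # JSON-backed "found players" state

def mark_found(player_id: str) -> dict:
    """Flip a player's entry to found and persist it."""
    state = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
    state[player_id] = {"found": True}
    STATE_FILE.write_text(json.dumps(state))
    return state

@app.route("/recognize", methods=["POST"])
def recognize():
    # In the real device flow the camera image goes through the CV pipeline;
    # here we accept the recognized id directly to keep the sketch short.
    player_id = request.json["player_id"]
    state = mark_found(player_id)
    return jsonify({"player": player_id, "found": state[player_id]["found"]})
```

Keeping the state in a flat JSON file (rather than a database) kept the backend trivial to debug on the device.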

Data

We pulled and organized player metadata such as:

  • height
  • weight
  • jersey number
  • team
  • position
  • college

We also integrated live basketball data through the NBA API so users could see active or upcoming games and find relevant teams and broadcasts. On top of that, we created 60+ unique in-app tasks to encourage exploration and replayability.
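The live-game view mostly comes down to reshaping the scoreboard response into display-sized lines. A sketch of that formatting step (the dict shape below is a simplified stand-in for the real API response, not the actual NBA API schema):

```python
def format_games(scoreboard: dict) -> list[str]:
    """Turn a scoreboard payload into short lines the handheld can display."""
    lines = []
    for game in scoreboard.get("games", []):
        home, away = game["home"], game["away"]
        status = game.get("status", "Scheduled")
        lines.append(f"{away} @ {home} ({status})")
    return lines

# Hypothetical payload shaped like a trimmed-down scoreboard response.
sample = {
    "games": [
        {"home": "BOS", "away": "NYK", "status": "Q3 54-49"},
        {"home": "DEN", "away": "LAL", "status": "7:30 PM ET"},
    ]
}
print(format_games(sample))  # ['NYK @ BOS (Q3 54-49)', 'LAL @ DEN (7:30 PM ET)']
```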

Challenges we ran into

This project involved a lot of pivots.

1. Our first model did not work well enough

Our initial recognition pipeline and dataset were not accurate enough for the experience we wanted. Midway through the event, we had to accept that it was not going to be good enough and retrain with a better setup. Starting over that late was painful, but it was the right decision.

2. Display hardware nearly derailed the project

One of our biggest technical struggles was getting the LCD display to interface properly with the Arduino. Around 7 hours into the competition, it became clear that our original display plan was failing and that we could not afford to keep forcing it.

We pivoted to a workaround: serve the interface through the Arduino-hosted web app, generate a display-friendly snapshot view, and send that output to the ESP32, which handled the screen more reliably. That introduced a bit of latency, but it allowed us to preserve the UI quality and actually ship a functional handheld experience.
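One way to sketch that hand-off: the server flattens the current UI state into a compact payload that the ESP32 polls and renders locally, rather than the Arduino pushing pixels to the LCD directly (the payload fields and page name below are our illustration, not the exact wire format):

```python
import json

def build_display_payload(state: dict, current_page: str) -> str:
    """Flatten the current UI state into a small JSON blob for the ESP32.

    The ESP32 polls this from the Arduino-hosted server and draws the screen
    itself, which keeps per-poll traffic tiny at the cost of some latency.
    """
    payload = {
        "page": current_page,
        "found_count": sum(1 for p in state.values() if p.get("found")),
        "total": len(state),
    }
    return json.dumps(payload)

state = {"p1": {"found": True}, "p2": {"found": False}}
print(build_display_payload(state, "pokedex"))
```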

3. Networking issues

Another major issue was connectivity. Local networks like MakeNJIT and the campus Wi-Fi were not cooperating with the board setup, so we had to pivot again and run the project off a mobile hotspot. That added friction during testing and deployment, but it ultimately gave us a stable enough environment to demo the device.

4. Camera integration and power

Getting the camera to communicate properly with the board was harder than expected. We found a tutorial that helped us move forward, but we also discovered we needed a powered USB hub to make the setup work consistently, which meant another last-minute hardware run during the hackathon.

What we learned

This project taught us a lot beyond just the tech stack.

  • Pivoting is a skill. Some of our most important decisions were not about building the original plan, but about recognizing when the original plan was failing.
  • Hardware constraints matter early. We learned the importance of validating displays, cameras, and board compatibility before building too much around them.
  • Embedded systems reward simplicity. Lightweight pages, smaller datasets, and scoped-down inference made a huge difference.
  • Starting over is sometimes faster than forcing a broken path. Retraining the model and redesigning the display pipeline were both hard decisions, but they saved the project.
  • Team splitting worked well. Having some people focused on vision and others focused on backend/UI let us move in parallel and keep momentum.

Accomplishments we're proud of

We are especially proud that DunkDex ended up being:

  • portable
  • lightweight
  • low-latency
  • fun to use
  • visually polished
  • actually functional on constrained hardware

A lot of hackathon projects stay as demos on laptops. We are proud that this felt like a real handheld product, complete with scanning, progression, live data, and a physical shell.

Key technologies

  • Arduino Uno Q
  • ESP32-S3-EYE
  • OpenCV
  • Flask
  • HTML / CSS / JavaScript
  • JSON state storage
  • NBA API
  • 3D printing
  • camera + USB hub hardware integration

What's next

DunkDex has a lot of room to grow.

Some of the next directions we are excited about:

  • expanding beyond playoff teams to cover the entire NBA
  • adding support for other sports leagues
  • adapting the concept for international tournaments like the World Cup
  • improving model accuracy and adding more players
  • refining the display pipeline to reduce latency even further
  • expanding the collectible/task system into a richer progression experience

DunkDex started as a nostalgic idea, but it ended up becoming a real lesson in embedded systems, machine learning, product design, and resilience under pressure.
