Inspiration

I wanted to build a Tron agent that makes smart, explainable decisions without relying on machine learning. The goal was an AI that “thinks” logically about space, territory, and traps, much as a human player would, while staying efficient and deterministic.

What it does

The agent plays Tron on an 18x20 toroidal grid, where the edges wrap around. Each turn it uses spatial reasoning, breadth-first search (BFS), and region control to pick a move: it evaluates safe zones, predicts opponent moves, and maximizes its own survival space while limiting the opponent’s. It also includes adaptive risk handling and trap-detection logic.
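On a torus, the only change to standard grid BFS is taking coordinates modulo the board size. A minimal sketch of wrap-around neighbors and BFS space counting (the function names and board representation here are my own assumptions, not the agent's actual code):

```python
from collections import deque

ROWS, COLS = 18, 20  # grid dimensions from the project description

def neighbors(r, c):
    """Four-connected neighbors with toroidal wrap-around."""
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        yield (r + dr) % ROWS, (c + dc) % COLS

def reachable_space(start, walls):
    """Count cells reachable from `start` via BFS, avoiding trails in `walls`."""
    seen = {start}
    queue = deque([start])
    count = 0
    while queue:
        cell = queue.popleft()
        count += 1
        for nxt in neighbors(*cell):
            if nxt not in walls and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return count
```

On an empty board every cell is reachable, so `reachable_space((0, 0), set())` returns 360; a head surrounded by trails on all four sides scores 1, which is how dead ends show up in the evaluation.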

How we built it

We built it using Python with a Flask server to communicate with the Tron judge engine. The AI logic was developed iteratively — starting from simple movement rules, then layering BFS-based space estimation, opponent simulation, and dynamic heuristics. The result is a fully deterministic but highly adaptive agent that can compete against trained models and random agents alike.
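The judge communication can be sketched as a small Flask endpoint. The route name, JSON payload shape, and `choose_move` helper below are illustrative assumptions; the real judge protocol and decision logic may differ:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def choose_move(state):
    """Placeholder for the heuristic decision logic (BFS space estimation, etc.)."""
    return "UP"

# Hypothetical route and payload; the actual judge protocol may differ.
@app.route("/move", methods=["POST"])
def move():
    state = request.get_json()  # e.g. {"board": [...], "you": [r, c], "opponent": [r, c]}
    return jsonify({"move": choose_move(state)})

# app.run(host="0.0.0.0", port=5000)  # started like this when serving the judge
```

Keeping the server layer this thin meant the AI logic could be developed and tested entirely offline, then dropped behind the endpoint unchanged.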

Challenges we ran into

One major challenge was handling the toroidal map, where moving off one edge teleports you to the other side. Ensuring BFS, collision detection, and region control all worked correctly under those conditions took a lot of debugging. Another challenge was balancing the heuristics — avoiding overfitting to specific scenarios while keeping the logic general enough to handle any opponent style.
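One way to keep that balancing tractable is to isolate the heuristic weights in a single evaluation function, so tuning never touches the search code. A minimal sketch (the weight values and names are illustrative assumptions, not the agent's actual tuning):

```python
# Illustrative weights, not the agent's actual tuning.
W_SPACE, W_DENY = 1.0, 0.5

def score_move(my_space, opp_space):
    """Higher is better: reward growing our reachable region, penalize the opponent's."""
    return W_SPACE * my_space - W_DENY * opp_space
```

For example, a move that leaves us 10 reachable cells and the opponent 5 scores 7.5 under these weights; skewing `W_DENY` upward makes the agent play more aggressively for territory denial.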

Accomplishments that we're proud of

We’re proud that our AI performs strongly without any machine learning or training data. It can strategically trap opponents, survive long in complex maps, and recover gracefully from disadvantageous positions. The BFS region analysis and opponent prediction modules turned out to be both efficient and effective.
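The region-analysis idea can be sketched as a Voronoi-style split, a common Tron heuristic: a multi-source BFS that counts the cells each player would reach first (the names and tie-breaking below are my own assumptions):

```python
from collections import deque

ROWS, COLS = 18, 20

def voronoi_split(me, opp, walls):
    """Multi-source BFS on the torus: count the cells each player reaches first.
    Equidistant cells go to whichever frontier expands first (here, `me`)."""
    dist = {me: (0, "me"), opp: (0, "opp")}
    queue = deque([me, opp])
    while queue:
        r, c = queue.popleft()
        d, owner = dist[(r, c)]
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nxt = ((r + dr) % ROWS, (c + dc) % COLS)
            if nxt not in walls and nxt not in dist:
                dist[nxt] = (d + 1, owner)
                queue.append(nxt)
    mine = sum(1 for _, owner in dist.values() if owner == "me")
    theirs = sum(1 for _, owner in dist.values() if owner == "opp")
    return mine, theirs
```

A trapped opponent shows up immediately: if the opponent's head is fenced in by trails, their count collapses to the size of the pocket while ours covers the rest of the board.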

What we learned

We learned how powerful heuristic reasoning can be when combined with clean logic and search-based decision-making. Implementing modular AI components (for movement, risk assessment, and prediction) made the agent easier to debug and extend. We also gained a deeper appreciation for algorithmic optimization and game-theoretic reasoning.

What's next for Case Closed

Next, we plan to hybridize the heuristic agent with reinforcement learning — letting a neural model fine-tune certain weights while keeping the logical backbone intact. We also aim to generalize the AI for other grid-based or adversarial games, and to visualize its decision process for explainable AI analysis.
