Inspiration

The inspiration for this project stems from the increasing interest in AI-driven game strategies and the exciting challenge of creating intelligent agents capable of competing at an expert level in dynamic environments. Push Battle presents a unique opportunity to test and improve AI techniques by combining elements of strategic decision-making, grid manipulation, and opponent prediction.

What it does

Push Battle is an AI competition built around a two-player strategy game played on an 8x8 grid. Players place pieces on the board, and each placement pushes adjacent pieces away. The objective is to align three of your pieces in a row while working within per-move time limits, move-validity rules, and the opponent's counterplay. Our AI agent aims to make optimal moves to win the game while adapting to the opponent's tactics and the constraints of the environment.
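The place-and-push mechanic can be sketched roughly as follows. This is a minimal illustration, not the official rules: it assumes a pushed piece moves one cell directly away from the placed piece, that a piece pushed off the board is removed, and that there are no chain pushes; the actual competition rules (e.g. edge wrapping) may differ.

```python
EMPTY = "."

def place_and_push(board, row, col, piece):
    """Place `piece` at (row, col) and push the 8 neighbors one cell outward.

    Simplified sketch: no chain pushes, and a push into an occupied
    square simply fails; a push off the 8x8 board removes the piece.
    """
    assert board[row][col] == EMPTY, "square must be empty"
    new = [r[:] for r in board]  # copy so the caller's board is untouched
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            nr, nc = row + dr, col + dc
            if 0 <= nr < 8 and 0 <= nc < 8 and new[nr][nc] != EMPTY:
                pr, pc = nr + dr, nc + dc  # destination: one cell further out
                if 0 <= pr < 8 and 0 <= pc < 8:
                    if new[pr][pc] == EMPTY:
                        new[pr][pc] = new[nr][nc]
                        new[nr][nc] = EMPTY
                else:
                    new[nr][nc] = EMPTY  # pushed off the board
    new[row][col] = piece
    return new
```

For example, placing a piece next to an opponent's piece shoves that piece one square further away, which is what makes three-in-a-row positions hard to hold.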

How we built it

We developed our AI agent using the provided API and starter code, focusing on optimizing decision-making algorithms. We explored various approaches to enhance the agent’s performance, including Minimax with Alpha-Beta Pruning, Monte Carlo Tree Search (MCTS), and Reinforcement Learning techniques. The AI evaluates the current game state, predicts the outcomes of possible moves, and selects the most strategic action based on both short-term tactics and long-term goals. To address the time limitations imposed by the competition, we implemented a Beam Monte Carlo approach that strikes a balance between exploration and computation time, ensuring decisions are made within the 5-second window.
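The Beam Monte Carlo idea above can be sketched roughly like this: keep only a fixed-width "beam" of candidate moves, score them with random playouts, and stop safely before the 5-second deadline. The helpers `legal_moves`, `apply_move`, and `random_playout` are placeholders for the real game interface, and the parameter values are illustrative rather than our tuned settings.

```python
import random
import time

def beam_monte_carlo(state, legal_moves, apply_move, random_playout,
                     beam_width=8, time_budget=4.5):
    """Pick a move by running random playouts on a narrow beam of candidates.

    legal_moves(state)     -> list of moves
    apply_move(state, m)   -> successor state
    random_playout(state)  -> +1 win / 0 draw / -1 loss for us
    (all three are placeholders for the real game interface)
    """
    deadline = time.monotonic() + time_budget  # leave headroom under 5 s
    moves = legal_moves(state)
    # Beam step: cap the number of candidates considered. Here they are
    # sampled at random; a heuristic ordering would pick stronger ones.
    beam = random.sample(moves, min(beam_width, len(moves)))
    scores = {m: 0.0 for m in beam}
    visits = {m: 0 for m in beam}
    while time.monotonic() < deadline:
        for move in beam:  # round-robin playouts across the beam
            if time.monotonic() >= deadline:
                break
            scores[move] += random_playout(apply_move(state, move))
            visits[move] += 1
    # Choose the move with the best average playout result.
    return max(beam, key=lambda m: scores[m] / max(visits[m], 1))
```

Narrowing the candidate set is what keeps the playout count per move high enough to be informative within the time budget.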

Challenges we ran into

One of the primary challenges we faced was optimizing the agent's decision-making within the tight 5-second time limit for each move. This required us to refine our algorithms for speed without sacrificing strategic depth. We also had to manage the trade-off between heuristic-based methods (fast but less precise) and more computationally expensive methods like MCTS (slower but more strategic). Balancing these two approaches so the agent could still make high-quality decisions under time pressure was a key hurdle.

Accomplishments that we're proud of

We are particularly proud of our AI's ability to adapt to various opponent strategies, making intelligent decisions even under strict time constraints. By combining several AI techniques, we were able to create a flexible agent capable of analyzing the board, predicting opponent moves, and adjusting its strategy accordingly to improve its chances of winning.

What we learned

This project deepened our understanding of AI and game theory, particularly in competitive, time-limited environments. We learned how to effectively apply search algorithms like Minimax and Monte Carlo Tree Search, and how to balance exploration and exploitation in decision-making. We also gained insights into the computational constraints of real-time AI and how to optimize algorithms to make quick, yet informed decisions.
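The exploration-exploitation balance mentioned above is commonly handled in MCTS with the UCB1 selection rule; a small sketch (standard formula, with an illustrative rather than tuned exploration constant):

```python
import math

def ucb1(total_reward, visits, parent_visits, c=1.41):
    """UCB1 score for a candidate move: average reward (exploitation)
    plus an exploration bonus that shrinks as the move is visited more."""
    if visits == 0:
        return float("inf")  # always try unvisited moves first
    return total_reward / visits + c * math.sqrt(math.log(parent_visits) / visits)
```

During the selection phase, the child with the highest UCB1 score is explored next, so rarely tried moves keep getting a chance without starving the best-known line.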

What's next for Push Battle

In the future, we plan to refine our AI’s ability to handle edge cases and further improve its adaptability to different opponents’ strategies. We will continue to optimize our algorithms, experimenting with hybrid approaches like combining Reinforcement Learning with MCTS to enhance both short-term decision-making and long-term strategic planning.
