An election voting simulation featuring 5 AI-powered villager agents with distinct personalities, each powered by the Qwen3-8B language model running on Modal. Politicians interact with villagers through dialogue and broadcasts to persuade them to vote.
This backend enables interactions between 2 politicians and 5 villager agents in a town hall election simulation. Each villager has unique personality traits, values, and voting preferences that can be influenced through conversation.
- Backend: FastAPI server handling agent interactions and state management
- Inference: Modal-hosted vLLM server running Qwen3-8B-FP8 for agent responses
- Storage: In-memory agent state with conversation history
- Sarah the Waitress - Working class, pro-welfare, liberal, anti-immigration (job concerns)
- Margaret the Librarian - Education-focused, liberal, middle class, pro-health and education spending
- Brother Thomas (Monk) - Spiritual, highly conservative, pro-health spending, selective immigration (religion-based)
- Officer James (Police) - Pro-defense budget, slightly conservative, anti-immigration
- Emily the Mother - Wealthy, liberal, pro-immigration, pro-school funding, anti-police funding
- Python 3.12+
- Modal account with authentication configured
- pip or poetry for dependency management
1. Clone the repository:

   ```bash
   git clone https://github.com/Honyant/TownVotingSimulator-NeoHackathon.git
   cd TownVotingSimulator-NeoHackathon
   ```

2. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Configure Modal authentication:

   ```bash
   modal token set
   ```

4. Deploy the Modal inference service:

   ```bash
   modal deploy modal_inference.py
   ```

5. Create a `.env` file with your Modal endpoint:

   ```bash
   cp .env.example .env
   # Edit .env and set MODAL_INFERENCE_URL to your deployed Modal endpoint
   ```

6. Start the FastAPI server:

   ```bash
   python main.py
   ```

The API will be available at http://localhost:8000.
- `GET /` - API information and available endpoints
- `GET /agents` - List all 5 villager agents
- `GET /agent/{agent_name}` - Get a specific agent's state and memory
- `POST /talk` - One-on-one conversation with a specific agent:

  ```json
  {
    "agent_name": "waitress",
    "politician_id": "politician_1",
    "message": "What are your thoughts on immigration?"
  }
  ```

- `POST /broadcast` - Broadcast a message to all agents:

  ```json
  {
    "politician_id": "politician_1",
    "message": "I propose increasing school funding by 20%",
    "topic": "budget"
  }
  ```

- `POST /townhall` - Town hall conversation with randomized agent responses:

  ```json
  {
    "politician_1_message": "I support open immigration",
    "politician_2_message": "I support controlled immigration",
    "topic": "immigration"
  }
  ```

- `GET /politicians` - Get current policies for both politicians
- `POST /politician/policy` - Update a politician's policy positions:

  ```json
  {
    "politician_id": "politician_1",
    "immigration_policy": "Open borders with background checks",
    "budget_policy": {
      "police": "decrease",
      "schools": "increase",
      "welfare": "increase",
      "health": "increase"
    }
  }
  ```

- `POST /vote` - Cast a vote from an agent:

  ```json
  {
    "agent_name": "waitress",
    "politician_id": "politician_1"
  }
  ```

- `GET /results` - Get current voting results
- `POST /reset` - Reset all agent memories and politician policies
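The `/vote` and `/results` endpoints boil down to a tally over the five agents. A minimal sketch of that logic (function names are illustrative, not the actual handlers):

```python
from collections import Counter

votes: dict[str, str] = {}  # agent_name -> politician_id

def cast_vote(agent_name: str, politician_id: str) -> None:
    # Re-voting overwrites the agent's previous choice.
    votes[agent_name] = politician_id

def results() -> dict[str, int]:
    tally = Counter(votes.values())
    return {"politician_1": tally.get("politician_1", 0),
            "politician_2": tally.get("politician_2", 0)}

cast_vote("waitress", "politician_1")
cast_vote("monk", "politician_2")
cast_vote("waitress", "politician_2")  # she changed her mind
print(results())  # → {'politician_1': 0, 'politician_2': 2}
```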
Politicians can address two main policy areas:
- Immigration: Who should be allowed to join the village?
- Budget: How to allocate spending across:
- Police/Defense
- Schools/Education
- Welfare
- Health
- Government
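A `budget_policy` payload sent to `POST /politician/policy` could be validated against these five areas. This is a hedged sketch; the accepted direction strings beyond `increase`/`decrease` (seen in the example payload above) are an assumption:

```python
BUDGET_AREAS = {"police", "schools", "welfare", "health", "government"}
DIRECTIONS = {"increase", "decrease", "maintain"}  # "maintain" is an assumed option

def validate_budget_policy(policy: dict[str, str]) -> list[str]:
    """Return a list of problems; an empty list means the payload is acceptable."""
    errors = []
    for area, direction in policy.items():
        if area not in BUDGET_AREAS:
            errors.append(f"unknown budget area: {area}")
        if direction not in DIRECTIONS:
            errors.append(f"unknown direction: {direction}")
    return errors

print(validate_budget_policy({"police": "decrease", "schools": "increase"}))  # → []
print(validate_budget_policy({"army": "boost"}))  # two errors
```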
Each agent has a detailed system prompt that defines their:
- Background and occupation
- Political leanings (liberal/conservative)
- Key policy priorities
- Concerns and fears
- Speaking style
- Persuadability factors
See agent_configs.py for full personality definitions.
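The entries in `agent_configs.py` plausibly combine these fields into a system prompt per agent. The keys, values, and persuadability scale below are assumptions for illustration only:

```python
# Hypothetical shape of one entry in agent_configs.py.
WAITRESS = {
    "name": "Sarah the Waitress",
    "leaning": "liberal",
    "priorities": ["welfare", "working-class wages"],
    "fears": ["losing her job to cheaper labor"],
    "style": "plain-spoken, practical",
    "persuadability": 0.6,  # 0 = immovable, 1 = easily swayed (assumed scale)
}

def build_system_prompt(cfg: dict) -> str:
    """Assemble a system prompt from the personality fields."""
    return (
        f"You are {cfg['name']}, a {cfg['leaning']} villager. "
        f"You care most about {', '.join(cfg['priorities'])}. "
        f"You worry about {', '.join(cfg['fears'])}. "
        f"Speak in a {cfg['style']} voice."
    )

prompt = build_system_prompt(WAITRESS)
```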
The backend uses Modal to host a vLLM server running the Qwen3-8B-FP8 model. The inference service:
- Runs on H100 GPU for fast inference
- Uses OpenAI-compatible API format
- Supports streaming responses
- Auto-scales based on demand
The Modal service is defined in modal_inference.py and can be deployed independently.
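Because the service is OpenAI-compatible, the backend's request to it presumably looks like a standard chat-completions body. A sketch of assembling one (the model identifier and role mapping are assumptions):

```python
import json

def build_chat_request(system_prompt: str, history: list[tuple[str, str]],
                       user_message: str) -> dict:
    """Build an OpenAI-style chat-completions body for the Modal endpoint."""
    messages = [{"role": "system", "content": system_prompt}]
    for speaker, text in history:
        # Agent turns map to "assistant"; everything else to "user" (assumed).
        role = "assistant" if speaker == "agent" else "user"
        messages.append({"role": role, "content": text})
    messages.append({"role": "user", "content": user_message})
    return {
        "model": "Qwen/Qwen3-8B-FP8",  # assumed model identifier
        "messages": messages,
        "stream": True,  # the service supports streaming responses
    }

body = json.dumps(build_chat_request("You are Sarah...", [], "Hello!"))
```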
Test the API using curl:
```bash
# List agents
curl http://localhost:8000/agents

# Talk to an agent
curl -X POST http://localhost:8000/talk \
  -H "Content-Type: application/json" \
  -d '{"agent_name": "waitress", "politician_id": "politician_1", "message": "Hello!"}'

# Broadcast to all agents
curl -X POST http://localhost:8000/broadcast \
  -H "Content-Type: application/json" \
  -d '{"politician_id": "politician_1", "message": "I support education!", "topic": "budget"}'

# Get voting results
curl http://localhost:8000/results
```

```
.
├── main.py             # FastAPI backend with all endpoints
├── agent_configs.py    # Agent personality definitions and system prompts
├── modal_inference.py  # Modal vLLM inference service
├── requirements.txt    # Python dependencies
├── .env                # Environment variables (Modal endpoint URL)
└── README.md           # This file
```
- First inference request may take 1-2 minutes due to Modal cold start (GPU spin-up)
- Subsequent requests are much faster (~1-5 seconds)
- Agent memories are stored in-memory and will be lost on server restart
- The simulation supports 2 politicians competing for 5 votes
MIT
Built for NeoHackathon using Modal, FastAPI, and Qwen3-8B