An AI-powered system that automatically analyzes hackathon projects to detect sponsor technology integrations and generates comprehensive, fair summaries.
This agent watches what teams build and translates it into human language for organizers, judges, sponsors, and non-technical teammates. It automatically analyzes repositories and API usage to understand which sponsors are truly integrated and how, then generates honest, sponsor-specific summaries, prize signals, and clear feedback—without teams having to write extra reports.
- 🤖 Autonomous AI Agent: Claude-powered agent that intelligently explores codebases
- 🔍 15 Sponsor Technologies: Detects AWS, Skyflow, Postman, Redis, Forethought, Finster AI, Senso, Anthropic, Sanity, TRM Labs, Coder, Lightpanda, Lightning AI, Parallel, and Cleric
- ⚡ Cloud Execution (NEW): Lightning AI integration runs projects in the cloud to validate integrations with real execution
- ⚡ Async Processing: Redis-based job queue for scalable analysis
- 📊 Structured Results: Saves to Sanity CMS for easy querying
- 💾 Optional S3 Storage: Persist repositories for re-analysis
- 🎯 Smart Scoring: 0-10 integration depth scores with detailed evidence
- 👀 Agent Observatory: Real-time visualization of agent analysis process
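To illustrate the structured results and 0-10 scoring above, a single sponsor entry might look roughly like the following. The field names here are hypothetical; the actual shape is defined by the schemas in backend/sanity-schemas/:

```javascript
// Hypothetical shape of one sponsor entry in an analysis result.
// Field names are illustrative, not the project's actual schema.
const sponsorResult = {
  sponsor: 'Redis',
  detected: true,
  integrationDepth: 8, // 0-10 scale
  summaryTechnical: 'Uses ioredis to back an async job queue for analysis tasks.',
  summaryPlain: 'The project uses Redis to line up analysis jobs and cache results.',
  evidence: [
    { file: 'backend/src/services/queue.js', snippet: 'new Redis(process.env.REDIS_URL)' },
  ],
  prizeEligible: true,
};

console.log(sponsorResult.sponsor, sponsorResult.integrationDepth);
```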
The system uses sponsor tools to build itself:
- Anthropic (Claude): Powers the AI agent that analyzes repos
- Lightning AI: Cloud execution environment for running and testing projects
- Sanity: Stores final analysis results as structured data
- Redis: Job queue for analysis tasks + caching results
- AWS S3: Stores cloned repos and analysis artifacts (optional)
- Node.js 20+
- Python 3.8+ (for Lightning AI integration)
- Redis server
- AI API key (Anthropic or OpenAI)
- Sanity account (optional)
- Lightning AI account (optional, for cloud execution)
- Clone the repository

  ```bash
  git clone https://github.com/yourusername/hackathon-automation-agent.git
  cd hackathon-automation-agent
  ```

- Install backend dependencies

  ```bash
  cd backend
  npm install
  ```

- Configure environment

  ```bash
  cp .env.example .env
  # Edit .env with your credentials
  ```

- Start Redis

  ```bash
  # macOS
  brew install redis && redis-server

  # Or Docker
  docker run -d -p 6379:6379 redis:alpine
  ```

- (Optional) Set up Lightning AI for cloud execution

  ```bash
  # Install Lightning SDK
  pip3 install lightning-sdk

  # Add to .env:
  # ENABLE_LIGHTNING_EXECUTION=true
  # LIGHTNING_USER_ID=your-user-id
  # LIGHTNING_API_KEY=your-api-key
  ```

  See backend/LIGHTNING_QUICKSTART.md for details.

- Run the backend

  ```bash
  npm run dev
  ```

  The server starts at http://localhost:3001.
Submit a repository for analysis:

```bash
curl -X POST http://localhost:3001/api/analyze \
  -H "Content-Type: application/json" \
  -d '{
    "githubUrl": "https://github.com/username/repo",
    "teamName": "Team Awesome",
    "projectName": "My Project"
  }'
```

Check status:

```bash
curl http://localhost:3001/api/status/YOUR_JOB_ID
```

Get results:
```bash
curl http://localhost:3001/api/results/YOUR_JOB_ID
```

- Submit Repository: POST the GitHub URL to `/api/analyze`
- Clone & Analyze: The agent clones the repo and explores it with tools:
  - Read files, search code, list directories
  - Parse dependencies (package.json, requirements.txt, etc.)
  - Detect imports and SDK usage
- AI Assessment: The Claude agent evaluates each sponsor integration:
  - Detection (yes/no)
  - Integration depth score (0-10)
  - Technical and plain-English summaries
  - Evidence (files, code snippets)
  - Prize eligibility
- Save Results: Push to Sanity CMS, cache in Redis
- Return Analysis: Complete sponsor breakdown available via the API
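The detection step above can be sketched as a pattern match over parsed dependencies and source text. This is a simplified, hypothetical illustration — the real patterns live in backend/src/sponsors/ and may be shaped differently:

```javascript
// Hypothetical sponsor pattern -- the real definitions in
// backend/src/sponsors/ may use a different structure.
const redisPattern = {
  name: 'Redis',
  dependencies: ['redis', 'ioredis'],
  importRegex: /require\(['"](?:io)?redis['"]\)|from ['"](?:io)?redis['"]/,
};

// Check a parsed package.json and a source snippet against one pattern,
// collecting the evidence that justifies the verdict.
function detectSponsor(pattern, pkg, source) {
  const deps = { ...pkg.dependencies, ...pkg.devDependencies };
  const depHit = pattern.dependencies.find((d) => d in deps);
  const importHit = pattern.importRegex.test(source);
  return {
    sponsor: pattern.name,
    detected: Boolean(depHit || importHit),
    evidence: [
      depHit && `package.json dependency: ${depHit}`,
      importHit && 'import/require of SDK found in source',
    ].filter(Boolean),
  };
}

const result = detectSponsor(
  redisPattern,
  { dependencies: { ioredis: '^5.3.0' } },
  "import Redis from 'ioredis';"
);
console.log(result); // detected, with two pieces of evidence
```

In the real pipeline this kind of static match only produces candidates; the Claude agent then reads the surrounding code to judge integration depth.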
When enabled, the agent goes further:
- Delegate to Lightning Agent: Main orchestrator hands off to specialized execution agent
- Clone in Cloud: Lightning AI spins up a Studio and clones the GitHub repo
- Install Dependencies: Runs `npm install` / `pip install` in the cloud
- Run Tests: Executes `npm test` / `pytest` in a real cloud environment
- Validate Integrations: Checks whether sponsor SDKs actually work
- Report Back: Enhanced scores based on real execution results
Example: A project claiming Postman integration gets a higher score if its tests actually run and make Postman API calls successfully.
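That execution-based adjustment can be modeled as a small scoring function. A hypothetical sketch (the real logic lives in the agent orchestrator and may weigh evidence differently):

```javascript
// Hypothetical: adjust a static-analysis score using cloud execution
// results, clamped to the 0-10 scale used throughout the project.
function adjustScore(staticScore, execution) {
  if (!execution.ran) return staticScore;           // no execution evidence
  if (execution.testsPassed && execution.sdkCallsObserved) {
    return Math.min(10, staticScore + 2);           // verified integration
  }
  if (!execution.testsPassed) {
    return Math.max(0, staticScore - 1);            // claimed but failing
  }
  return staticScore;
}

console.log(adjustScore(7, { ran: true, testsPassed: true, sdkCallsObserved: true })); // 9
```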
See backend/LIGHTNING_COMPLETE.md for full documentation.
hackathon-automation-agent/
├── backend/ # Node.js backend + AI agent
│ ├── src/
│ │ ├── agent/ # AI agent (tools, orchestrator, prompts)
│ │ ├── api/ # Express server, routes, processor
│ │ ├── services/ # GitHub, Redis, Sanity, S3
│ │ └── sponsors/ # Detection patterns for 15 sponsors
│ └── sanity-schemas/ # Sanity CMS schema definitions
└── README.md
The agent analyzes projects for these 15 sponsor technologies:
- AWS - Amazon Web Services (S3, Lambda, DynamoDB, etc.)
- Skyflow - Data privacy vault
- Postman - API testing and development
- Redis - In-memory data store
- Forethought - AI customer support
- Finster AI - Compliance AI
- Senso - Data platform
- Anthropic - Claude AI
- Sanity - Structured content CMS
- TRM Labs - Blockchain compliance
- Coder - Cloud IDE
- Lightpanda - Browser automation
- Lightning AI - ML platform
- Parallel - DeFi protocol
- Cleric - Workflow automation
See backend/README.md for detailed API documentation.
Built for hackathons, by hackathon participants! This is a meta-project—an agent built to help hackathon teams showcase their work.
MIT