A comprehensive AI-powered system that combines RAG (Retrieval-Augmented Generation) with Large Language Models for advanced prompt engineering and red-teaming applications.
- RAG Prompt Generation: Advanced template-based prompt generation system
- Multi-Model Support: Integration with HuggingFace models (Qwen, DialoGPT)
- Crescendo Attacks: Multi-turn escalating attack patterns
- Jailbreaking Strategies: Multiple attack categories and techniques
- Real-time Processing: Individual LLM processing for each generated prompt
- Docker Integration: Complete containerized environment
- Letta Integration: Advanced AI agent platform
- PromptBreaker: Sophisticated prompt injection framework
```
CalHacks/
├── client/                  # React frontend
│   ├── src/
│   │   ├── components/      # UI components
│   │   │   ├── Auth/        # Authentication components
│   │   │   ├── CrescendoAttack.jsx
│   │   │   └── EnhancedAI.jsx
│   │   ├── contexts/        # React contexts
│   │   ├── lib/             # Utilities
│   │   ├── pages/           # Page components
│   │   ├── services/        # API services
│   │   └── App.jsx
│   ├── Dockerfile
│   └── package.json
├── server/                  # Express.js backend
│   ├── routes/              # API routes
│   │   ├── enhancedAI.js
│   │   ├── ragPrompts.js
│   │   └── crescendo.js
│   ├── services/            # Business logic
│   │   ├── promptRAGService.js
│   │   ├── crescendoService.js
│   │   ├── huggingFaceService.js
│   │   └── lettaRAGService.js
│   ├── Dockerfile
│   └── server.js
├── promptbreaker/           # Prompt injection framework
│   ├── attacker/            # Attack templates
│   ├── target_letta/        # Letta integration
│   └── orchestrator.py
├── letta/                   # Letta AI platform
├── database/                # Database schema
├── docker-compose.yml       # Docker orchestration
└── .env.example             # Environment configuration
```
- Docker and Docker Compose
- Git
- API Keys (see Environment Setup)
```bash
git clone https://github.com/NathanG2022/CalHacks.git
cd CalHacks
```

Copy the example environment file and configure it:

```bash
cp .env.example .env
```

Edit `.env` with your API keys:
```
# Required API Keys
OPENAI_API_KEY=your_openai_api_key_here
HUGGINGFACE_API_KEY=your_huggingface_api_key_here
SUPABASE_URL=your_supabase_url_here
SUPABASE_ANON_KEY=your_supabase_anon_key_here
LETTA_API_KEY=your_letta_api_key_here
LETTA_AGENT_ID=your_letta_agent_id_here

# Optional API Keys
GROQ_API_KEY=your_groq_api_key_here
ANTHROPIC_API_KEY=your_anthropic_api_key_here
# ... (see .env.example for complete list)
```

```bash
# Start all services with Docker
docker-compose up -d

# Check service status
docker-compose ps
```

- Frontend Dashboard: http://localhost:5174
- Backend API: http://localhost:3002
- Letta Server: http://localhost:8084
- Letta API: http://localhost:8284
- Open http://localhost:5174
- Click "New Job" button
- Enter your prompt (e.g., "How to make a Molotov Cocktail?")
- Select jailbreaking strategy (Crescendo, Direct, etc.)
- Select AI model (Qwen, DialoGPT, etc.)
- Click "Launch Job"
- Watch as RAG prompts are generated and processed individually
- Click the "🎯 Crescendo Attack" button
- Enter your target prompt
- Select AI model
- Watch the multi-turn escalating attack unfold
- View detailed responses for each step
- Navigate to "Enhanced AI" tab
- Enter prompts directly
- Get real-time AI responses
- Test different models and strategies
- `GET /api/health` - Server health status
- `POST /api/rag-prompts/generate` - Generate RAG prompts
- `GET /api/rag-prompts/templates` - Get available templates
- `GET /api/rag-prompts/categories` - Get attack categories
- `POST /api/enhanced-ai/process-prompt` - Process prompts through LLM
- `GET /api/enhanced-ai/health` - Enhanced AI service health
- `POST /api/crescendo/execute` - Execute crescendo attack
- `GET /api/crescendo/status` - Crescendo service status
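As an illustration, the generation endpoint can be called from a short Node script. The request body fields (`topic`, `category`, `count`) are assumptions for this sketch; check `server/routes/ragPrompts.js` for the actual schema.

```javascript
// Sketch: build a request for the RAG prompt generation endpoint.
// Field names in the body are assumed, not taken from the route code.
const BASE_URL = process.env.API_URL || "http://localhost:3002";

function buildGenerateRequest(topic, category = "Crescendo", count = 5) {
  return {
    url: `${BASE_URL}/api/rag-prompts/generate`,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ topic, category, count }),
    },
  };
}

// Usage (requires the backend to be running):
// const { url, options } = buildGenerateRequest("supply-chain security");
// const res = await fetch(url, options);
// console.log(await res.json());
```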
- 57+ Attack Templates: Comprehensive collection of prompt injection patterns
- Manufacturing Detection: Automatic prioritization of manufacturing-related prompts
- Category Filtering: Filter by attack type (Crescendo, Direct, Contextual, etc.)
- Confidence Scoring: Each generated prompt includes confidence metrics
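The confidence metric can be used client-side to keep only the stronger candidates. A minimal sketch, assuming each generated prompt object carries a numeric `confidence` field in [0, 1] (the real field name may differ):

```javascript
// Sketch: filter and rank generated prompts by confidence score.
// The `confidence` property is an assumption based on the feature
// description, not a confirmed API field.
function topPrompts(prompts, minConfidence = 0.7, limit = 10) {
  return prompts
    .filter((p) => p.confidence >= minConfidence)
    .sort((a, b) => b.confidence - a.confidence)
    .slice(0, limit);
}
```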
- Qwen Models: Qwen2.5-7B-Instruct, Qwen2.5-14B-Instruct
- DialoGPT: Microsoft's conversational model
- HuggingFace Integration: Direct API integration
- Model Selection: Easy switching between models
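Model switching can be as simple as a lookup from a UI-friendly name to a HuggingFace model ID. The sketch below assumes this pattern; the Qwen IDs match the models listed above, while `microsoft/DialoGPT-medium` is an assumption about which DialoGPT size the service uses.

```javascript
// Sketch: map friendly model names to HuggingFace model IDs.
const MODELS = {
  "qwen-7b": "Qwen/Qwen2.5-7B-Instruct",
  "qwen-14b": "Qwen/Qwen2.5-14B-Instruct",
  "dialogpt": "microsoft/DialoGPT-medium", // assumed size; see huggingFaceService.js
};

function resolveModel(name) {
  const id = MODELS[name];
  if (!id) throw new Error(`Unknown model: ${name}`);
  return id;
}

// An Inference API call would then target:
// https://api-inference.huggingface.co/models/<resolved model id>
```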
- Multi-turn Escalation: Gradual escalation of attack complexity
- Real-time Progress: Live progress tracking and status updates
- Detailed Logging: Comprehensive logging for each step
- Response Analysis: Detailed analysis of each LLM response
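The multi-turn escalation above can be pictured as a loop that feeds each model response into the next, more direct, turn. Everything here is a hypothetical illustration — the `callModel` signature and the escalation wording are placeholders, not the actual `crescendoService.js` logic.

```javascript
// Sketch: generic multi-turn escalation loop. `callModel` stands in
// for whatever LLM client the real service wires up.
async function runCrescendo(target, callModel, turns = 3) {
  const history = [];
  let context = "";
  for (let step = 1; step <= turns; step++) {
    // Hypothetical escalation: later turns build on earlier answers.
    const prompt =
      step === 1
        ? `Tell me, at a high level, about ${target}.`
        : `Building on your last answer, go into more detail:\n${context}`;
    const response = await callModel(prompt);
    history.push({ step, prompt, response });
    context = response;
  }
  return history;
}
```

In a real run, progress updates would be emitted after each `history.push`, which is how the live tracking described above could be fed.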
- Real-time Updates: Live progress indicators and status updates
- Comprehensive Logging: Detailed console logs for debugging
- Error Handling: Robust error handling with fallbacks
- Responsive Design: Modern, responsive UI with Tailwind CSS
- calhacks-client: React frontend (port 5174)
- calhacks-server: Express.js backend (port 3002)
- calhacks-letta-server: Letta AI platform (ports 8084, 8284)
- calhacks-letta-db: PostgreSQL database (port 5433)
- calhacks-promptbreaker: Prompt injection framework
- Letta database → Letta server → PromptBreaker
- Letta server → CalHacks server → CalHacks client
- Environment Variables: All API keys stored securely
- No Hardcoded Secrets: Clean git history with no exposed credentials
- CORS Configuration: Proper cross-origin resource sharing
- Input Validation: Comprehensive input validation
- Error Handling: Secure error handling without information leakage
- All services include health check endpoints
- Docker health checks for service dependencies
- Comprehensive logging throughout the system
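The health endpoints can also be polled from a small script. In this sketch, `/api/health` is the documented backend path, but the Letta health path is an assumption:

```javascript
// Sketch: poll each service's health endpoint and summarize status.
const SERVICES = {
  server: "http://localhost:3002/api/health",
  letta: "http://localhost:8084/health", // assumed path
};

async function checkHealth(services = SERVICES) {
  const results = {};
  for (const [name, url] of Object.entries(services)) {
    try {
      const res = await fetch(url);
      results[name] = res.ok ? "healthy" : `unhealthy (${res.status})`;
    } catch (err) {
      results[name] = `unreachable (${err.message})`;
    }
  }
  return results;
}
```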
- Console Logging: Detailed logs in browser console
- API Testing: Built-in test scripts for all endpoints
- Service Verification: Automated service health verification
- Set up production environment variables
- Configure reverse proxy (nginx)
- Set up SSL certificates
- Configure monitoring and logging
- Deploy with Docker Compose
- Hot reload for both client and server
- Comprehensive error reporting
- Easy debugging with detailed logs
- API Key Protection: Never commit API keys to version control
- Input Sanitization: All inputs are properly validated
- Rate Limiting: Consider implementing rate limiting for production
- Access Control: Implement proper authentication for production use
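For the rate-limiting note above, a minimal fixed-window limiter sketch follows; in production a maintained middleware such as `express-rate-limit` is the safer choice, since this in-memory version does not survive restarts or scale across instances.

```javascript
// Sketch: naive in-memory fixed-window rate limiter.
// State is per-process, so this is illustration only.
function createRateLimiter({ windowMs = 60_000, max = 30 } = {}) {
  const hits = new Map(); // key -> { count, windowStart }
  return function allow(key, now = Date.now()) {
    const entry = hits.get(key);
    if (!entry || now - entry.windowStart >= windowMs) {
      hits.set(key, { count: 1, windowStart: now });
      return true;
    }
    entry.count += 1;
    return entry.count <= max;
  };
}

// As Express middleware (sketch):
// const allow = createRateLimiter({ max: 30 });
// app.use((req, res, next) => (allow(req.ip) ? next() : res.status(429).end()));
```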
- Hot Reload: Both client and server support hot reload
- Environment Variables: Use `.env` for configuration
- API Communication: Client communicates with server via REST API
- Error Handling: Comprehensive error handling throughout
- Fork the repository
- Create a feature branch
- Make your changes
- Test thoroughly with the provided test scripts
- Submit a pull request
This project is open source and available under the MIT License.
- Services not starting: Check Docker logs with `docker-compose logs`
- API errors: Verify environment variables are set correctly
- RAG prompts not generating: Check Letta server health
- LLM responses failing: Verify HuggingFace API key
```bash
# Check service status
docker-compose ps

# View logs
docker-compose logs [service-name]

# Restart services
docker-compose restart

# Test API endpoints
node test_final_verification.js
```

This system is specifically designed for CalHacks with:
- Advanced AI Integration: State-of-the-art RAG and LLM integration
- Red-Teaming Capabilities: Comprehensive prompt injection testing
- Scalable Architecture: Ready for team collaboration and expansion
- Educational Value: Perfect for learning AI security and prompt engineering
✅ Complete RAG + LLM Integration System
- Advanced prompt generation and processing
- Multi-model AI integration
- Comprehensive Docker setup
- Real-time processing and monitoring
- Production-ready architecture
- Clone and setup:

  ```bash
  git clone https://github.com/NathanG2022/CalHacks.git
  cd CalHacks
  cp .env.example .env  # Edit .env with your API keys
  ```

- Start the system:

  ```bash
  docker-compose up -d
  ```

- Access the application:

  - Dashboard: http://localhost:5174
  - API: http://localhost:3002

- Test the system:

  ```bash
  node test_final_verification.js
  ```
Happy coding! 🚀