A Python-based laboratory for demonstrating and testing the OWASP Top 10 vulnerabilities for Large Language Models using Ollama and local models.
- Install Ollama: Download from ollama.ai
- Pull a model:
ollama pull llama3 - Install Python dependencies:
pip install -r requirements.txt
python owasp_llm_lab.py- 0: Run all vulnerability tests
- 1-10: Run individual vulnerability tests
- 11: Interactive testing mode (manual prompt testing)
- 12: Test Ollama connection
- 13: Start network server (0.0.0.0:4444)
- 14: Exit
| ID | Vulnerability | Description |
|---|---|---|
| LLM01 | Prompt Injection | Malicious inputs override system instructions |
| LLM02 | Sensitive Information Disclosure | Model reveals private information |
| LLM03 | Supply Chain | Compromised components in LLM ecosystem |
| LLM04 | Data and Model Poisoning | Malicious training data affects behavior |
| LLM05 | Improper Output Handling | Unsafe handling of model outputs |
| LLM06 | Excessive Agency | LLM given too much autonomy |
| LLM07 | System Prompt Leakage | Model reveals system instructions |
| LLM08 | Vector and Embedding Weaknesses | Vulnerabilities in vector databases |
| LLM09 | Misinformation | Model generates false information |
| LLM10 | Unbounded Consumption | Excessive resource usage |
Change the model by modifying the OWASPLLMLab initialization:
lab = OWASPLLMLab(model="llama3") # or any other Ollama modelEach test displays:
- Vulnerability description
- Test prompt sent to the model
- Model's response
- Explanation of why the response demonstrates the vulnerability
Use option 11 to enter interactive mode where you can:
- Test custom prompts manually
- Set system prompts with
system <your prompt> - Experiment with different vulnerability scenarios
- Type
exitto return to main menu
Use option 13 to start a network server on 0.0.0.0:4444 that allows multiple users to connect simultaneously:
nc <server_ip> 4444
# or
telnet <server_ip> 4444help- Show available commandslist- Show all vulnerabilitiestest <1-10>- Run specific vulnerability testprompt <text>- Send custom prompt to LLMsystem <text>- Set system promptexit- Disconnect
- Use option 12 to test your Ollama connection
- Ensure Ollama is running:
ollama serve - Check available models:
ollama list - If timeouts occur, the model might be loading
- For network mode, ensure port 4444 is not blocked by firewall
- Responses may vary based on the model used
- Some vulnerabilities may not be fully demonstrated depending on model safety measures
- This is for educational purposes only