A Minecraft Paper plugin that provides an API wrapper for Ollama, enabling players and other plugins to interact with large language models directly from within the game.
- `/ollama say <prompt>` - Generate text using Ollama
- `/ollama code <prompt>` - Generate code with syntax highlighting
- `/ollama run <prompt>` - Generate and execute Minecraft commands
- `/ollama chat` - Start an interactive chat session
- `/ollama version` - Show plugin version and model info
- `/ollama status` - Show plugin and API status
- `/ollama reload` - Reload configuration
- `/ollama test` - Test API connection
- `/ollama debug <on|off>` - Toggle debug mode
- Simple Java API for text generation
- Chat completion support
- Event-driven architecture
- Context-aware responses using player activity logs
- Async processing with callbacks
- Java 21+
- Minecraft Paper 1.21.6+
- Ollama running locally
- Install Ollama:

  ```bash
  # Install Ollama
  curl -fsSL https://ollama.com/install.sh | sh

  # Start Ollama server
  ollama serve

  # Pull a model
  ollama pull llama3.2
  ```

- Build the plugin:

  ```bash
  git clone https://github.com/herobrinesystems/minecraft-ollama.git
  cd minecraft-ollama
  make build
  ```

- Set up the development server:

  ```bash
  make setup
  make start
  ```
The plugin is configured via `config.yml`:

```yaml
# Ollama API Settings
api:
  endpoint: "http://localhost:11434"
  model: "llama3.2"
  timeout: 30
  temperature: 0.7

# Chat Settings
chat:
  max_history: 10
  session_timeout: 30

# Command Execution
commands:
  enable_execution: true
  blocked_commands:
    - "stop"
    - "op"
    - "ban"

# Performance
performance:
  max_concurrent_requests: 5
  rate_limit: 10

# Debug
debug:
  enabled: false
```

Example commands:

```
/ollama say Write a creative story about a dragon
/ollama code Create a function to calculate fibonacci numbers
/ollama run Give me a diamond sword
/ollama chat
```
```java
// Simple text generation
OllamaAPI.generate("Hello, world!", response -> {
    if (!response.hasError()) {
        player.sendMessage(response.getResponse());
    }
});

// Chat completion
List<ChatMessage> messages = List.of(
    ChatMessage.user("How do I build a redstone clock?")
);
OllamaAPI.chat(messages, response -> {
    String reply = response.getContent();
    player.sendMessage(reply);
});

// Context-aware generation
OllamaAPI.generateWithContext("Help me with what I'm doing", player, response -> {
    // Uses recent player activity as context
    player.sendMessage(response.getResponse());
});
```

Other plugins can also listen for Ollama responses via events:

```java
@EventHandler
public void onOllamaResponse(OllamaResponseEvent event) {
    Player player = event.getPlayer();
    GenerateResponse response = (GenerateResponse) event.getResponse();

    // Custom handling of Ollama responses
    if (response.getTokensPerSecond() > 50) {
        player.sendMessage("That was a fast response!");
    }
}
```

Common development tasks are driven by make:

```bash
# Build plugin
make build
# Run tests
make test
# Development cycle (build + install + restart)
make dev
# Format code
make format
```

```bash
# Build and test in Docker
make docker-test
# View logs
docker-compose logs -f
```

```bash
# Interactive debug script
make debug
# Check status
make status
# View logs
make logs
```

OllamaAPI is the main API class for interacting with Ollama.
Methods:
- `generate(String prompt, Consumer<GenerateResponse> callback)`
- `chat(List<ChatMessage> messages, Consumer<ChatResponse> callback)`
- `generateWithContext(String prompt, Player player, Consumer<GenerateResponse> callback)`
- `testConnection(BiConsumer<Boolean, String> callback)`
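For example, `testConnection` can be called at startup to verify that the Ollama server is reachable. A minimal sketch, assuming the `Boolean` argument indicates success and the `String` carries a status or error message (that interpretation is an assumption):

```java
// Check connectivity on startup; the meaning of the callback arguments
// (success flag, status/error message) is an assumption, not confirmed by the API docs.
OllamaAPI.testConnection((connected, message) -> {
    if (connected) {
        Bukkit.getLogger().info("Connected to Ollama: " + message);
    } else {
        Bukkit.getLogger().warning("Ollama unreachable: " + message);
    }
});
```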
The ChatMessage class represents a single message in a conversation.
Static Methods:
- `ChatMessage.user(String content)`
- `ChatMessage.assistant(String content)`
- `ChatMessage.system(String content)`
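These factories can be combined to build a multi-turn conversation, for example seeding the model with a system prompt before the player's question. A short usage sketch based on the `chat` method shown above:

```java
// Build a short conversation: system prompt, prior turns, then the new question
List<ChatMessage> conversation = List.of(
    ChatMessage.system("You are a helpful Minecraft building assistant."),
    ChatMessage.user("What roof style suits a medieval house?"),
    ChatMessage.assistant("A steep gabled roof made of dark oak stairs works well."),
    ChatMessage.user("How do I add dormer windows to that roof?")
);

OllamaAPI.chat(conversation, response -> player.sendMessage(response.getContent()));
```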
GenerateResponse is the response returned from text generation.
Methods:
- `getResponse()` - Get generated text
- `hasError()` - Check for errors
- `getTokensPerSecond()` - Get performance metrics
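A typical callback checks `hasError()` before using the result; for example:

```java
OllamaAPI.generate("Summarize the server rules in one sentence", response -> {
    if (response.hasError()) {
        player.sendMessage("The model request failed, please try again later.");
        return;
    }
    player.sendMessage(response.getResponse());
    // Log the performance metric exposed by the response
    Bukkit.getLogger().info("Generation speed: " + response.getTokensPerSecond() + " tokens/s");
});
```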
OllamaRequestEvent is fired when a request is made to Ollama. It can be cancelled.
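Because the event can be cancelled, other plugins can veto requests before they reach Ollama. A minimal sketch, assuming the event implements Bukkit's standard Cancellable interface (the `setCancelled` call is that assumption):

```java
@EventHandler
public void onOllamaRequest(OllamaRequestEvent event) {
    // Example policy: disable LLM requests for players in a specific world
    if (event.getPlayer().getWorld().getName().equals("minigames")) {
        event.setCancelled(true); // assumes the standard Bukkit Cancellable API
        event.getPlayer().sendMessage("Ollama commands are disabled in this world.");
    }
}
```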
OllamaResponseEvent is fired when a response is received from Ollama.
Permission nodes:

- `ollama.use` - Basic plugin usage (default: true)
- `ollama.generate` - Text generation (default: true)
- `ollama.code` - Code generation (default: true)
- `ollama.chat` - Chat interactions (default: true)
- `ollama.run` - Command execution (default: op)
- `ollama.admin` - Admin commands (default: op)
- `ollama.debug` - Debug commands (default: op)
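Plugins that call the API on a player's behalf can respect these nodes with a standard Bukkit permission check, for example:

```java
// Respect the plugin's permission nodes before sending a request on a player's behalf
if (player.hasPermission("ollama.generate")) {
    OllamaAPI.generate("Describe the biome I'm standing in", response ->
        player.sendMessage(response.getResponse()));
} else {
    player.sendMessage("You don't have permission to use text generation.");
}
```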
```java
// Get the Ollama API instance
OllamaPlugin ollamaPlugin = (OllamaPlugin) Bukkit.getPluginManager().getPlugin("Ollama");
OllamaAPI api = ollamaPlugin.getOllamaAPI();

// Use in your plugin
api.generate("Generate a quest for my RPG plugin", response -> {
    // Handle the generated quest
});
```

Requests can also be modified before they are sent by listening for OllamaRequestEvent:

```java
@EventHandler
public void onOllamaRequest(OllamaRequestEvent event) {
    if (event.getRequest() instanceof GenerateRequest) {
        GenerateRequest request = (GenerateRequest) event.getRequest();

        // Add custom context
        String customContext = "Player is in region: " + getPlayerRegion(event.getPlayer());
        request.setSystem(customContext);
    }
}
```

- Rate Limiting: Configurable per-player rate limits (see the throttling sketch after this list)
- Caching: Optional response caching
- Async Processing: All API calls are asynchronous
- Connection Pooling: Reuses HTTP connections
- Resource Management: Automatic cleanup of expired sessions
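The per-player rate limit is enforced inside the plugin itself. The class below only illustrates the underlying idea (a timestamp-based cooldown keyed by player UUID) for plugins that want to apply their own throttle on top; it is not the plugin's actual implementation, and `RequestThrottle` is a hypothetical name:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative per-player throttle: at most one request per cooldown window.
// This is a sketch of the rate-limiting concept, not the plugin's own code.
public class RequestThrottle {
    private final Map<UUID, Long> lastRequest = new ConcurrentHashMap<>();
    private final long cooldownMillis;

    public RequestThrottle(long cooldownMillis) {
        this.cooldownMillis = cooldownMillis;
    }

    /** Returns true if the player may send a request now, and records the attempt. */
    public boolean tryAcquire(UUID playerId) {
        long now = System.currentTimeMillis();
        Long last = lastRequest.get(playerId);
        if (last != null && now - last < cooldownMillis) {
            return false; // still inside the cooldown window
        }
        lastRequest.put(playerId, now);
        return true;
    }
}
```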
Common issues:

- Plugin not loading:
  - Check Java version (requires 21+)
  - Verify Paper version (requires 1.21.6+)
  - Check server logs for errors
- Ollama not responding:
  - Ensure Ollama is running: `ollama serve`
  - Check endpoint configuration
  - Verify firewall settings
- Slow responses:
  - Check system resources
  - Try a smaller model
  - Adjust temperature settings
Enable debug mode for detailed logging:
```
/ollama debug on
```

Or in `config.yml`:

```yaml
debug:
  enabled: true
  log_requests: true
  verbose_errors: true
```

To contribute:

- Fork the repository
- Create a feature branch
- Make your changes
- Add tests
- Submit a pull request
Please follow the development guidelines.
This project is licensed under the MIT License - see the LICENSE file for details.
- Issues: GitHub Issues
- Documentation: Wiki
- Community: Discord
- Ollama: ollama.com
- Paper: papermc.io
- Herobrine Systems: herobrinesystems.com
Made with ❤️ by Herobrine Systems