
Add option for persistent Ollama model serving #74

@micr0-dev

Description


Add an option so that Ollama doesn't unload the model from RAM while Altbot is running. This would greatly improve response times with Ollama-based models, at the cost of increased continuous server load. It would be a good fit for very active Altbot instances, such as the main Altbot instance.
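A minimal sketch of how this could work: Ollama's generate API accepts a `keep_alive` field, and setting it to `-1` tells Ollama to keep the model loaded indefinitely instead of unloading it after the default idle timeout. The model name below is a placeholder, not Altbot's actual configuration.

```python
import json

# Sketch of a request body Altbot could send to Ollama's
# /api/generate endpoint. "keep_alive": -1 asks Ollama to keep
# the model resident in RAM between requests rather than
# unloading it after the default idle timeout.
payload = {
    "model": "llava",                  # placeholder model name
    "prompt": "Describe this image.",  # placeholder prompt
    "keep_alive": -1,                  # -1 = never unload the model
}
body = json.dumps(payload)
print(body)
```

Alternatively, the same behavior can be configured server-side by starting Ollama with the `OLLAMA_KEEP_ALIVE` environment variable, which would not require changes to each request Altbot sends.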
