Prevents Ollama from unloading the model from RAM while Altbot is running. This greatly improves response times with Ollama-based models, but it also increases continuous server load. Recommended for very active Altbot instances, such as the main Altbot instance.
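For reference, this behavior corresponds to Ollama's keep-alive mechanism: the `OLLAMA_KEEP_ALIVE` server environment variable (or the per-request `keep_alive` field in the API) controls how long a model stays loaded in memory after a request, and a value of `-1` keeps it loaded indefinitely. A minimal sketch of the server-side setting (the exact variable name Altbot exposes for this option may differ):

```shell
# Keep models loaded in RAM indefinitely instead of the default ~5 minutes.
# Set this in the environment of the Ollama server process.
OLLAMA_KEEP_ALIVE=-1
```

The trade-off is the one described above: the model occupies RAM (or VRAM) continuously, so this is best suited to hosts that serve requests frequently enough that unload/reload cycles would dominate latency.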