This repository was archived by the owner on Nov 21, 2025. It is now read-only.

Local LLMs via Ollama #44

Merged
micr0-dev merged 13 commits into main from feature/ollama
Apr 4, 2024

Conversation

@micr0-dev
Owner

Allows for local LLMs via Ollama

List of all available LLMs: https://ollama.com/library

Personally, I find that the local LLMs are faster but not as knowledgeable as Gemini. However, I have not tried the very large Llama models, like 13B and 70B.
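For anyone wanting to try this locally, below is a minimal sketch of how a client might query a running Ollama instance over its REST API. The endpoint (`http://localhost:11434/api/generate`) is Ollama's documented default; the model name `llama2` is just an illustrative choice from the library linked above, not necessarily what this PR uses.

```python
import json
import urllib.request

# Ollama's default local generate endpoint (assumes `ollama serve` is running)
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    # "stream": False asks Ollama to return one JSON object
    # instead of a stream of chunked responses
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The full generated text is in the "response" field
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("llama2", "Why is the sky blue?"))
```

Any model pulled with `ollama pull <name>` can be substituted for `llama2`, which is how the PR's flexibility across the Ollama library would surface to a user.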

@micr0-dev micr0-dev merged commit de2b281 into main Apr 4, 2024
@micr0-dev micr0-dev deleted the feature/ollama branch April 4, 2024 19:14
