This repository was archived by the owner on Nov 21, 2025. It is now read-only.

Added Local LLM Content Streaming #47

Merged
micr0-dev merged 4 commits into main from feature/localContentStream
Apr 6, 2024

Conversation

@micr0-dev (Owner)

Added content streaming for local LLMs. This allows faster access to the response before generation is complete, which is especially useful for slow or very large models. I find it very useful for 17B-and-up models, as you spend less time just waiting.
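The idea described above can be sketched roughly as follows. This is a minimal illustration, not the PR's actual implementation: `fake_local_llm`, `stream_response`, and the callback interface are all hypothetical stand-ins for however the local backend actually delivers tokens.

```python
def fake_local_llm(prompt):
    """Stand-in for a local model: yield the response one token at a time,
    the way a streaming backend would."""
    for token in ["Local ", "models ", "can ", "stream ", "too."]:
        yield token

def stream_response(prompt, on_token):
    """Forward each token to a callback as soon as it arrives,
    instead of blocking until the whole generation finishes."""
    chunks = []
    for token in fake_local_llm(prompt):
        on_token(token)      # e.g. append to the display immediately
        chunks.append(token)
    return "".join(chunks)   # full text, for callers that still want it

shown = []
full = stream_response("hi", shown.append)
# `shown` fills up token by token; `full` holds the completed response.
```

The benefit is latency to first output: the caller sees partial text after the first token rather than after the last one, which matters most when generation itself is slow.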

@micr0-dev micr0-dev merged commit 36f02d6 into main Apr 6, 2024
@micr0-dev micr0-dev deleted the feature/localContentStream branch April 6, 2024 01:09

Labels

None yet

Projects

None yet


1 participant