[project-demos]: Create demo for LlmTornado using Ollama #362
Merged
LlmTornado
Description
LlmTornado Examples is a web application demonstrating the LlmTornado library's capabilities for interacting with LLM models. It features multiple chat interfaces, including a simple streaming chat and an AI agent with function-calling (tools) support.
LlmTornado.mp4
One-Click Development Environment
Click the badge above to open the Ivy Examples repository in GitHub Codespaces with a preconfigured development environment.
Created Using Ivy
This web application was created using the Ivy framework.

Ivy is a web framework for building interactive web applications using C# and .NET. It unifies front-end and back-end into a single C# codebase, so you can build robust internal tools and dashboards with C# and LLM-assisted code generation on top of your existing database.
Interactive Examples for LlmTornado Library
This example demonstrates various capabilities of the LlmTornado library for interacting with LLM models. The application showcases streaming responses, function calling with tools, and agent-based interactions.
What This Application Does:
This implementation creates an LlmTornado Examples workspace where users can explore streaming chat and tool-calling agents against local Ollama models.
Technical Implementation:
- Queries Ollama's /api/tags endpoint to discover available models
- Uses Ivy's hooks (UseState, UseEffect) for reactive UI updates

Examples
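Model discovery boils down to parsing the JSON returned by Ollama's /api/tags endpoint. A minimal sketch (in Python rather than the app's C#, with an illustrative response body):

```python
import json

def list_model_names(tags_json: str) -> list[str]:
    """Extract model names from an Ollama /api/tags response body."""
    payload = json.loads(tags_json)
    return [m["name"] for m in payload.get("models", [])]

# Illustrative body in the shape Ollama's /api/tags endpoint returns.
sample = '{"models": [{"name": "llama3.2:1b"}, {"name": "mistral:latest"}]}'
print(list_model_names(sample))  # ['llama3.2:1b', 'mistral:latest']
```

The application performs the same discovery against the configured Ollama URL to populate its model dropdown.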
Simple Chat
Basic conversation interface with streaming responses. Perfect for simple Q&A interactions with any Ollama model.
Features:
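Streaming responses arrive as newline-delimited JSON chunks, each carrying a content fragment plus a done flag; the UI appends fragments as they arrive. A minimal sketch of that assembly, assuming Ollama's chat streaming shape (Python, for illustration only):

```python
import json

def assemble_stream(ndjson_lines):
    """Concatenate content fragments from an Ollama-style streaming chat response."""
    parts = []
    for line in ndjson_lines:
        chunk = json.loads(line)
        parts.append(chunk.get("message", {}).get("content", ""))
        if chunk.get("done"):
            break
    return "".join(parts)

# Illustrative chunks in the newline-delimited JSON shape Ollama streams.
chunks = [
    '{"message": {"content": "Hel"}, "done": false}',
    '{"message": {"content": "lo!"}, "done": true}',
]
print(assemble_stream(chunks))  # Hello!
```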
Agent with Tools
AI agent with function-calling capabilities. The agent can use tools to perform actions such as getting the current time, performing calculations, and fetching weather information.
Features:
Note: Not all models support function calling. The application will display a helpful message if the selected model doesn't support tools.
Tools
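Conceptually, the agent loop routes each model-issued tool call to a local handler and feeds the result back to the model. A minimal dispatch sketch mirroring the tools described above (hypothetical handler names, not the app's actual C# code; the weather handler is a stub):

```python
from datetime import datetime, timezone

# Hypothetical local handlers mirroring the agent's described tools.
def get_current_time() -> str:
    return datetime.now(timezone.utc).isoformat()

def calculate(expression: str) -> float:
    # Toy evaluator for the sketch only; a real agent should use a safe parser.
    return float(eval(expression, {"__builtins__": {}}, {}))

def get_weather(city: str) -> str:
    return f"Weather for {city} is unavailable in this offline sketch."

TOOLS = {"get_current_time": get_current_time,
         "calculate": calculate,
         "get_weather": get_weather}

def dispatch(tool_call: dict):
    """Route a model-issued tool call {name, arguments} to its handler."""
    return TOOLS[tool_call["name"]](**tool_call.get("arguments", {}))

print(dispatch({"name": "calculate", "arguments": {"expression": "2 * 21"}}))  # 42.0
```

In the real application this routing is handled by LlmTornado's tool-calling support, with the results returned to the model to compose its final answer.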
Prerequisites
- At least one pulled model (ollama pull llama3.2:1b or ollama pull llama3.2)

How to Run
Install and start Ollama:
Pull a model (in another terminal):
ollama pull llama3.2:1b   # or another model like llama3.2, mistral, gemma, etc.

Navigate to the example:
cd project-demos/llm-tornado

Restore dependencies:
Run the application:
Open your browser to the URL shown in the terminal (typically http://localhost:5010).

Configure Ollama (if needed): set the Ollama URL (default: http://localhost:11434) and select a model from the dropdown. The application will automatically discover available models.

How to Deploy
Deploy this example to Ivy's hosting platform:
Navigate to the example:
cd project-demos/llm-tornado

Deploy to Ivy hosting:
Configure environment variables in your deployment settings:
- Set OLLAMA_URL to your Ollama server URL (default: http://localhost:11434)
- Set OLLAMA_MODEL to your preferred model (default: llama3.2:1b)

This will deploy your LlmTornado examples with a single command.
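The configuration pattern is simple environment lookup with the documented fallbacks. A sketch of the equivalent logic (Python for illustration; the variable names and defaults come from the settings above):

```python
import os

def ollama_config(env=os.environ):
    """Read Ollama settings from the environment, falling back to the documented defaults."""
    return {
        "url": env.get("OLLAMA_URL", "http://localhost:11434"),
        "model": env.get("OLLAMA_MODEL", "llama3.2:1b"),
    }

print(ollama_config(env={}))  # falls back to the documented defaults
```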
Project Structure
Learn More
Tags
AI, LLM, Ollama, LlmTornado, Chat, Function Calling, Tools, Streaming, Agents, Local AI, Markdown