This project follows the Python AI Agent From Scratch tutorial by Tech With Tim. It demonstrates how to build a simple AI agent using Python, focusing on core concepts such as environment interaction, agent design, and reinforcement learning basics.
LangChain is a framework for developing applications powered by large language models (LLMs).
It simplifies and abstracts tasks such as prompt engineering, chaining multiple LLM calls, and connecting to APIs or databases.
LangChain is also open source and free to use.
The HuggingFace API key includes a free credits quota, so this is a convenient source of models to learn on.
I have made some variations from the tutorial's code:
I split `main.py` into smaller example scripts to see how the agent components work.
- First, a simple LLM example. This example does not use agent functionality yet.
SCRIPT
how_to_make_bread.py
LLM MODEL A model picked more or less at random from HuggingFace. According to the Mistral-Nemo-Base-2407 model card:
> The Mistral-Nemo-Base-2407 Large Language Model (LLM) is a pretrained generative text model of 12B parameters trained jointly by Mistral AI and NVIDIA.
I used a `langchain_huggingface` library endpoint to access the model. The `do_sample` parameter is set to `False` to use greedy decoding instead of random sampling, resulting in more consistent, deterministic output.
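A minimal sketch of this setup, assuming the `langchain_huggingface` endpoint API (the repo id and parameter values mirror the description above; this is not the script's exact code):

```python
# Sketch: query Mistral-Nemo-Base-2407 through a HuggingFace endpoint.
# Assumes HF_API_KEY is set in the environment.
import os

from langchain_huggingface import HuggingFaceEndpoint

llm = HuggingFaceEndpoint(
    repo_id="mistralai/Mistral-Nemo-Base-2407",
    huggingfacehub_api_token=os.environ["HF_API_KEY"],
    max_new_tokens=512,
    do_sample=False,  # greedy decoding -> more consistent, deterministic output
)

print(llm.invoke("How do I make bread? Answer in a few short steps."))
```

This block needs a live endpoint and a valid API key to run, so treat it as wiring/configuration rather than a tested program.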
PROMPT I used a one-line prompt hard-coded as a text string.
OUTPUT
The output is printed in the console. See example outputs: how_to_make_bread.md
- Next, a simple LLM chat example.
Conversations can be broken into messages. Here we have the user prompt plus a system prompt that tells the model how to behave (e.g. "You are a helpful cooking assistant."), gives it formatting instructions (e.g. "Provide clear, step-by-step instructions for recipes and cooking techniques when users ask specific questions."), and sets guidelines (e.g. "Respond only to the current question without referencing previous conversation context."). A conversation may also contain assistant messages.
This script uses the existing HuggingFaceEndpoint and LangChain's separate HumanMessage, AIMessage, and SystemMessage classes to simulate a chat conversation.
This script does not use a chat interface such as LangChain's ChatOpenAI or ChatHuggingFace classes, which send and receive messages in chat format; instead it passes the list of message objects to the plain text-completion HuggingFaceEndpoint.
Note these classes are different from ChatML, which is a message formatting protocol (chat template) developed by OpenAI for structuring multi-turn conversations. The chat classes often use such templates under the hood.
SCRIPT
how_to_make_cake.py
LLM MODEL Same model and endpoint as before (see the Mistral-Nemo-Base-2407 model card).
PROMPT
- User input: the script prompts you with `input("You: ")` and waits for you to type something.
- Message creation: when you type something, the script creates a `HumanMessage` object to represent your input in the conversation context.
- Context building: this message gets added to the messages list along with previous messages.
- Model processing: the entire conversation (including your input) gets sent to the HuggingFace model.
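A sketch of that chat loop, assuming LangChain's message classes and the same endpoint configuration described earlier (hypothetical, not the script's exact code):

```python
# Sketch: multi-turn chat by resending the full message history each turn.
# Assumes HF_API_KEY is set in the environment.
import os

from langchain_core.messages import AIMessage, HumanMessage, SystemMessage
from langchain_huggingface import HuggingFaceEndpoint

llm = HuggingFaceEndpoint(
    repo_id="mistralai/Mistral-Nemo-Base-2407",
    huggingfacehub_api_token=os.environ["HF_API_KEY"],
    do_sample=False,
)

# The system message sets behavior; later turns accumulate in the list.
messages = [SystemMessage(content="You are a helpful cooking assistant.")]
while True:
    user_input = input("You: ")
    if user_input.lower() in ("exit", "quit"):
        break
    messages.append(HumanMessage(content=user_input))  # add the user turn
    reply = llm.invoke(messages)                       # send the whole history
    messages.append(AIMessage(content=reply))          # keep the model turn in context
    print("Assistant:", reply)
```

Because the endpoint is stateless, "memory" here is nothing more than resending the growing `messages` list on every call. This block is interactive and needs a live endpoint, so it is a wiring sketch rather than a tested program.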
OUTPUT
The conversational output is printed in the console. See example outputs: how_to_make_cake.md
- Next, a simple agent example.
how_to_make_curry.py implements an agent workflow built on a general-purpose LangChain agent.
Agents: An AI agent is a system that uses an LLM to decide the control flow of an application.
This script uses the Thought-Action-Observation (TAO) pattern.
In this pattern, the agent reasons about the user's query ("Thought"), decides which tool to use ("Action"), executes the tool and observes the result ("Observation"), and continues this loop until it produces a final answer.
Agents iterate through a while loop until the objective is fulfilled.
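The loop itself is simple. A minimal, LLM-free sketch of the pattern, where the hypothetical `decide` function stands in for the model's reasoning step:

```python
# Minimal Thought-Action-Observation loop with a stubbed "reasoner".
# In a real agent, decide() would be an LLM call; here it is a toy rule
# so the control flow is visible.

def decide(query: str, observations: list[str]) -> tuple[str, str]:
    """Return (action, argument), or ("finish", answer) to stop."""
    if not observations:
        return ("search_recipe", query)  # Thought: we need a recipe first
    return ("finish", f"Answer based on: {observations[-1]}")

def search_recipe(dish: str) -> str:
    """Stand-in tool; a real one would call an API or database."""
    return f"Found a simple {dish} recipe."

TOOLS = {"search_recipe": search_recipe}

def run_agent(query: str) -> str:
    observations: list[str] = []
    while True:                                   # loop until objective fulfilled
        action, arg = decide(query, observations)  # Thought -> Action
        if action == "finish":
            return arg                             # final answer
        observations.append(TOOLS[action](arg))    # execute tool -> Observation

print(run_agent("curry"))  # -> Answer based on: Found a simple curry recipe.
```

The real agent replaces `decide` with a ReAct-prompted LLM call, but the while-loop skeleton is the same.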
SCRIPT
how_to_make_curry.py
AGENT
- LangChain Agent: Uses the `ZERO_SHOT_REACT_DESCRIPTION` agent type. It is a general-purpose agent that doesn't need training.
LLM MODEL Same model and endpoint as before (see the Mistral-Nemo-Base-2407 model card).
TOOLS
The agent uses custom tools defined in the script:
- RecipeSearchTool: Searches for recipes based on a query
- IngredientCheckTool: Checks availability of ingredients
- CookingStepTool: Provides instructions for cooking steps
The tools are defined using LangChain's `BaseTool` class.
PROMPT
This agent uses the Zero-Shot ReAct prompting technique, which combines "Reasoning" (Think) with "Acting" (Act).
- Structured Input/Output: Uses Pydantic models to validate and structure data. A `BaseModel` subclass declares "what data do I expect?"
- Interactive Chat: Clean console interface with emojis and formatting
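As an illustration of the Pydantic part, a hypothetical input schema (the field names are made up for this example, not taken from the script):

```python
# Sketch: a Pydantic model declaring "what data do I expect?" for a tool.
from pydantic import BaseModel, Field

class RecipeQuery(BaseModel):
    dish: str = Field(description="Name of the dish to search for")
    servings: int = Field(default=2, description="Number of servings")

query = RecipeQuery(dish="curry")  # types are validated on creation
print(query.servings)              # -> 2 (the default was applied)
```

Passing a wrong type (e.g. `servings="lots"`) raises a validation error instead of silently propagating bad data into the agent.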
OUTPUT
The conversational output is printed in the console. See example outputs: how_to_make_curry.md
- Verbose Mode: Shows the agent's thinking process
- Mortgage agent example.
mortgage_agent.py implements a different agent workflow, for mortgage comparison analysis.
SCRIPTS
mortgage_web_demo.py
mortgage_agent.py
AGENTS
These scripts use the smolagents library, in which the LLM writes its tool calls as executable Python code rather than as JSON, unlike most other agent libraries.
It also uses the think, act, and observe cycle.
The agent uses smolagents' CodeAgent, which combines:
- Tool calling (function calling)
- Code execution capabilities
- Multi-step reasoning
LLM MODEL claude-3-5-sonnet-20240620, accessed via the LiteLLMModel class.
The LiteLLMModel class uses the LiteLLM library, which provides a single interface to many model providers, whether hosted (such as Anthropic here) or served locally (for example via the Ollama framework).
An alternative is the InferenceClientModel class, which can work with any model hosted on HuggingFace, AWS, or other endpoints.
TOOLS
- analyze_loan_data: Analyzes existing mortgage loans
- calculate_restructure_options: Calculates potential restructuring scenarios
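A sketch of how such a tool is wired into a smolagents CodeAgent (the tool body and parameters are illustrative stand-ins, not the script's actual analysis logic):

```python
# Sketch: a smolagents tool plus a CodeAgent driven by Claude via LiteLLM.
# Assumes ANTHROPIC_API_KEY is set in the environment.
from smolagents import CodeAgent, LiteLLMModel, tool

@tool
def analyze_loan_data(principal: float, rate: float, years: int) -> str:
    """Analyzes an existing mortgage loan.

    Args:
        principal: Outstanding loan balance.
        rate: Annual interest rate as a percent.
        years: Remaining term in years.
    """
    monthly_interest = principal * (rate / 100 / 12)  # illustrative only
    return f"Approximate monthly interest: {monthly_interest:.2f}"

model = LiteLLMModel(model_id="anthropic/claude-3-5-sonnet-20240620")
agent = CodeAgent(tools=[analyze_loan_data], model=model)

print(agent.run("Analyze a 300000 loan at 4.5% with 25 years left."))
```

Rather than emitting a JSON tool call, the CodeAgent has the model write a small Python snippet that calls `analyze_loan_data(...)` directly, which is executed and observed each cycle. This needs a live Anthropic key, so treat it as configuration wiring rather than a tested program.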
PROMPT
This uses agent prompting that combines structured task decomposition with tool-directed execution and context-rich data provision.
OUTPUT
Comprehensive mortgage analysis with personalized comparisons. Run the demo: `streamlit run mortgage_web_demo.py`
Follow these steps to set up the environment:

1. Create a virtual environment:

   ```
   python -m venv .venv
   ```

2. Activate the virtual environment:

   - On Windows: `.venv\Scripts\Activate`
   - On macOS/Linux: `source .venv/bin/activate`

3. Install dependencies:

   ```
   pip install -r requirements.txt
   ```

4. Set up environment variables:

   - Create a `.env` file in the project root directory
   - Add your API keys to the `.env` file:

   ```
   HF_API_KEY="your_huggingface_api_key_here"
   ANTHROPIC_API_KEY="your_anthropic_api_key_here"
   ```
To get a HuggingFace API key:
- Go to HuggingFace
- Create an account or sign in
- Go to your profile settings
- Navigate to "Access Tokens"
- Create a new token with "read" permissions
- Copy the token and paste it into your `.env` file
To get an Anthropic API key:
- Go to Anthropic Console
- Create an account or sign in
- Navigate to "API Keys"
- Create a new API key
- Copy the key and paste it into your `.env` file
Note: Never commit your `.env` file to version control. It's already included in `.gitignore`.
Andrej Karpathy: Software Is Changing (Again)
- We’ve entered the era of “Software 3.0,” where natural language becomes the new programming interface and models do the rest.
See glossary.md for the AI and agent terminology used in this project.
This project is for educational purposes only.