Documentation Index
Fetch the complete documentation index at: https://docs.agentops.ai/llms.txt
Use this file to discover all available pages before exploring further.
OpenAI Agents SDK is a lightweight yet powerful framework for building multi-agent workflows. The SDK provides a comprehensive set of tools for creating, managing, and monitoring agent-based applications.
Core Concepts
- Agents: LLMs configured with instructions, tools, guardrails, and handoffs
- Handoffs: Allow agents to transfer control to other agents for specific tasks
- Guardrails: Configurable safety checks for input and output validation
- Tracing: Built-in tracking of agent runs, allowing you to view, debug and optimize your workflows
Install OpenAI Agents SDK
pip install openai-agents agentops
Add 2 lines of code
Make sure to call agentops.init before initializing any model clients (OpenAI, Cohere, CrewAI, etc.).
import agentops
agentops.init(<INSERT YOUR API KEY HERE>)
Set your API keys as environment variables in a .env file for easy access.
AGENTOPS_API_KEY=<YOUR API KEY>
OPENAI_API_KEY=<YOUR OPENAI API KEY>
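Loading a .env file is usually handled by a helper library such as python-dotenv (not part of this guide); as a sketch of what that loading amounts to, a minimal stdlib-only version might look like:

```python
import os

def load_dotenv(path=".env"):
    # Minimal illustrative .env loader: read KEY=VALUE lines into the
    # process environment, skipping blanks and comments. In practice,
    # python-dotenv (an assumption here, not named by this guide) is
    # the usual choice.
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip())
```

With the variables loaded, agentops.init() can pick up AGENTOPS_API_KEY from the environment without an explicit key argument.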
Read more about environment variables in Advanced Configuration.
Run your agents
Execute your program and visit app.agentops.ai/drilldown to observe your agents! 🕵️
After your run, AgentOps prints a clickable URL to the console that links directly to your session in the Dashboard.
Hello World Example
from agents import Agent, Runner
import agentops
# Initialize AgentOps
agentops.init()
agent = Agent(name="Assistant", instructions="You are a helpful assistant")
result = Runner.run_sync(agent, "Write a haiku about recursion in programming.")
print(result.final_output)
# Output:
# Code within the code,
# Functions calling themselves,
# Infinite loop's dance.
Handoffs Example
from agents import Agent, Runner
import asyncio
import agentops
# Initialize AgentOps
agentops.init()
spanish_agent = Agent(
name="Spanish agent",
instructions="You only speak Spanish.",
)
english_agent = Agent(
name="English agent",
instructions="You only speak English",
)
triage_agent = Agent(
name="Triage agent",
instructions="Handoff to the appropriate agent based on the language of the request.",
handoffs=[spanish_agent, english_agent],
)
async def main():
result = await Runner.run(triage_agent, input="Hola, ¿cómo estás?")
print(result.final_output)
# ¡Hola! Estoy bien, gracias por preguntar. ¿Y tú, cómo estás?
if __name__ == "__main__":
asyncio.run(main())
Functions Example
import asyncio
from agents import Agent, Runner, function_tool
import agentops
# Initialize AgentOps
agentops.init()
@function_tool
def get_weather(city: str) -> str:
return f"The weather in {city} is sunny."
agent = Agent(
name="Hello world",
instructions="You are a helpful agent.",
tools=[get_weather],
)
async def main():
result = await Runner.run(agent, input="What's the weather in Tokyo?")
print(result.final_output)
# The weather in Tokyo is sunny.
if __name__ == "__main__":
asyncio.run(main())
The Agent Loop
When you call Runner.run(), the SDK runs a loop until it gets a final output:
- The LLM is called using the model and settings on the agent, along with the message history.
- The LLM returns a response, which may include tool calls.
- If the response has a final output, the loop ends and returns it.
- If the response has a handoff, the agent is set to the new agent and the loop continues from step 1.
- Tool calls are processed (if any) and tool response messages are appended. Then the loop continues from step 1.
You can use the max_turns parameter to limit the number of loop executions.
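The steps above can be sketched as a simplified, self-contained model of the loop; this is not the SDK's actual implementation, and fake_llm is a stand-in for the real model call:

```python
def run_loop(agent, history, fake_llm, max_turns=10):
    for _ in range(max_turns):
        response = fake_llm(agent, history)          # step 1: call the LLM
        if "final_output" in response:               # final output ends the loop
            return response["final_output"]
        if "handoff" in response:                    # handoff switches agents,
            agent = response["handoff"]              # then the loop restarts
            continue
        for call in response.get("tool_calls", []):  # run tools, append results,
            history.append({"role": "tool", "content": call()})
    raise RuntimeError("max_turns exceeded")         # loop limit reached

# Demo: the fake model requests one tool call, then answers.
def fake_llm(agent, history):
    if any(m["role"] == "tool" for m in history):
        return {"final_output": "The weather in Tokyo is sunny."}
    return {"tool_calls": [lambda: "sunny"]}

print(run_loop("Assistant", [], fake_llm))
# The weather in Tokyo is sunny.
```

Setting max_turns low and triggering the RuntimeError is the simplified analogue of the loop-limit behavior described above.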
Final Output
Final output is the last thing the agent produces in the loop:
- If you set an output_type on the agent, the final output is when the LLM returns something of that type using structured outputs.
- If there’s no output_type (i.e., plain text responses), then the first LLM response without any tool calls or handoffs is considered the final output.
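The two rules above can be illustrated with a small sketch (not SDK code; the response dict shape and WeatherReport type are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class WeatherReport:
    city: str
    forecast: str

def is_final(response, output_type=None):
    if output_type is None:
        # Plain text: final when there are no tool calls or handoffs.
        return not response.get("tool_calls") and not response.get("handoff")
    # Structured output: final once the reply parses into the declared type.
    return isinstance(response.get("parsed"), output_type)

print(is_final({"content": "Hi!"}))                        # True
print(is_final({"tool_calls": [{}]}))                      # False
print(is_final({"parsed": WeatherReport("Tokyo", "sunny")},
               WeatherReport))                             # True
```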