Agents
Reference docs
This page contains reference documentation for Agents. See the docs for conceptual guides, tutorials, and examples on using Agents.
agents
Entrypoint to building Agents with LangChain.
create_agent
create_agent(
model: str | BaseChatModel,
tools: Sequence[BaseTool | Callable[..., Any] | dict[str, Any]] | None = None,
*,
system_prompt: str | SystemMessage | None = None,
middleware: Sequence[AgentMiddleware[StateT_co, ContextT]] = (),
response_format: ResponseFormat[ResponseT]
| type[ResponseT]
| dict[str, Any]
| None = None,
state_schema: type[AgentState[ResponseT]] | None = None,
context_schema: type[ContextT] | None = None,
checkpointer: Checkpointer | None = None,
store: BaseStore | None = None,
interrupt_before: list[str] | None = None,
interrupt_after: list[str] | None = None,
debug: bool = False,
name: str | None = None,
cache: BaseCache[Any] | None = None,
) -> CompiledStateGraph[
AgentState[ResponseT], ContextT, _InputAgentState, _OutputAgentState[ResponseT]
]
Creates an agent graph that calls tools in a loop until a stopping condition is met.
For more details on using create_agent,
visit the Agents docs.
| PARAMETER | DESCRIPTION |
|---|---|
| `model` | The language model for the agent. Can be a string identifier (e.g., `"anthropic:claude-sonnet-4-5-20250929"`) or a `BaseChatModel` instance. See the Models docs for more information. TYPE: `str \| BaseChatModel` |
| `tools` | A list of tools for the agent to call. Each entry may be a `BaseTool`, a plain callable, or a tool definition dict. See the Tools docs for more information. TYPE: `Sequence[BaseTool \| Callable[..., Any] \| dict[str, Any]] \| None` |
| `system_prompt` | An optional system prompt for the LLM, given as a `str` or a `SystemMessage`. TYPE: `str \| SystemMessage \| None` |
| `middleware` | A sequence of middleware instances to apply to the agent. Middleware can intercept and modify agent behavior at various stages. See the Middleware docs for more information. TYPE: `Sequence[AgentMiddleware[StateT_co, ContextT]]` |
| `response_format` | An optional configuration for structured responses, given as a `ResponseFormat` strategy, a schema type, or a dict schema. If provided, the agent will handle structured output during the conversation flow. Raw schemas will be wrapped in an appropriate strategy based on model capabilities. See the Structured output docs for more information. TYPE: `ResponseFormat[ResponseT] \| type[ResponseT] \| dict[str, Any] \| None` |
| `state_schema` | An optional state schema for the agent. When provided, this schema is used instead of the default `AgentState`. TYPE: `type[AgentState[ResponseT]] \| None` |
| `context_schema` | An optional schema for runtime context. TYPE: `type[ContextT] \| None` |
| `checkpointer` | An optional checkpoint saver object. Used for persisting the state of the graph (e.g., as chat memory) for a single thread (e.g., a single conversation). TYPE: `Checkpointer \| None` |
| `store` | An optional store object. Used for persisting data across multiple threads (e.g., multiple conversations / users). TYPE: `BaseStore \| None` |
| `interrupt_before` | An optional list of node names to interrupt before. Useful if you want to add a user confirmation or other interrupt before taking an action. TYPE: `list[str] \| None` |
| `interrupt_after` | An optional list of node names to interrupt after. Useful if you want to return directly or run additional processing on an output. TYPE: `list[str] \| None` |
| `debug` | Whether to enable verbose logging for graph execution. When enabled, prints detailed information about each node execution, state updates, and transitions during agent runtime. Useful for debugging middleware behavior and understanding agent execution flow. TYPE: `bool` |
| `name` | An optional name for the compiled graph. This name will be automatically used when adding the agent graph to another graph as a subgraph node, which is particularly useful for building multi-agent systems. TYPE: `str \| None` |
| `cache` | An optional cache object for the compiled graph. TYPE: `BaseCache[Any] \| None` |
| RETURNS | DESCRIPTION |
|---|---|
| `CompiledStateGraph[AgentState[ResponseT], ContextT, _InputAgentState, _OutputAgentState[ResponseT]]` | A compiled agent graph that can be invoked or streamed. |
| RAISES | DESCRIPTION |
|---|---|
| `AssertionError` | If duplicate middleware instances are provided. |
The agent node calls the language model with the messages list (after applying
the system prompt). If the resulting AIMessage
contains tool_calls, the graph will then call the tools. The tools node executes
the tools and adds the responses to the messages list as
ToolMessage objects. The agent node then calls
the language model again. The process repeats until no more tool_calls are present
in the response. The agent then returns the full list of messages.
Example

```python
from langchain.agents import create_agent


def check_weather(location: str) -> str:
    '''Return the weather forecast for the specified location.'''
    return f"It's always sunny in {location}"


graph = create_agent(
    model="anthropic:claude-sonnet-4-5-20250929",
    tools=[check_weather],
    system_prompt="You are a helpful assistant",
)

inputs = {"messages": [{"role": "user", "content": "what is the weather in sf"}]}
for chunk in graph.stream(inputs, stream_mode="updates"):
    print(chunk)
```
Structured output
ResponseFormat module-attribute
ResponseFormat = (
ToolStrategy[SchemaT] | ProviderStrategy[SchemaT] | AutoStrategy[SchemaT]
)
Union type for all supported response format strategies.
ToolStrategy dataclass
ToolStrategy(
schema: type[SchemaT] | UnionType | dict[str, Any],
*,
tool_message_content: str | None = None,
handle_errors: bool
| str
| type[Exception]
| tuple[type[Exception], ...]
| Callable[[Exception], str] = True,
)
Bases: Generic[SchemaT]
Use a tool calling strategy for model responses.
Initialize ToolStrategy with a schema, tool message content, and an error handling
strategy.
tool_message_content instance-attribute
tool_message_content: str | None = tool_message_content
The content of the tool message to be returned when the model calls an artificial structured output tool.
handle_errors instance-attribute
handle_errors: (
bool
| str
| type[Exception]
| tuple[type[Exception], ...]
| Callable[[Exception], str]
) = handle_errors
Error handling strategy for structured output via ToolStrategy.
- `True`: Catch all errors with the default error template
- `str`: Catch all errors with this custom message
- `type[Exception]`: Only catch this exception type, with the default message
- `tuple[type[Exception], ...]`: Only catch these exception types, with the default message
- `Callable[[Exception], str]`: Custom function that returns an error message
- `False`: No retry; let exceptions propagate
ProviderStrategy dataclass
Bases: Generic[SchemaT]
Use the model provider's native structured output method.
Initialize ProviderStrategy with schema.
| PARAMETER | DESCRIPTION |
|---|---|
| `schema` | Schema to enforce via the provider's native structured output. |
| `strict` | Whether to request strict provider-side schema enforcement. |
| METHOD | DESCRIPTION |
|---|---|
| `to_model_kwargs` | Convert to kwargs to bind to a model to force structured output. |