Integrate LangWatch into your Python application to start observing your LLM interactions. This guide covers the setup and basic usage of the LangWatch Python SDK.
Protip: want to get started even faster? Copy our llms.txt and ask an AI to do this integration for you.
First, you need a LangWatch API key. Sign up at app.langwatch.ai and find your API key in your project settings. The SDK will automatically use the LANGWATCH_API_KEY environment variable if it is set.
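For example, a minimal setup could look like the following (a sketch; the SDK is published on PyPI as `langwatch`, and the key value here is a placeholder):

```shell
# Install the SDK and make the API key available to your application
pip install langwatch
export LANGWATCH_API_KEY="sk-lw-..."  # replace with the key from your project settings
```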
Each message that triggers your LLM pipeline is captured as a whole in a Trace.
A Trace contains multiple Spans, which are the steps inside your pipeline.
A span can be an LLM call, a database query for a RAG retrieval, or a simple function transformation.
Different types of Spans capture different parameters.
Spans can be nested to capture the pipeline structure.
Traces can be grouped together on the LangWatch dashboard by sharing the same thread_id in their metadata, making the individual messages part of a conversation.
It is also recommended to provide the user_id metadata to track user analytics.
To capture an end-to-end operation, like processing a user message, you can wrap the main function or entry point with the @langwatch.trace() decorator. This automatically creates a root span for the entire operation.
```python
import langwatch
from openai import OpenAI

client = OpenAI()

@langwatch.trace()
async def handle_message():
    # This whole function execution is now a single trace
    langwatch.get_current_trace().autotrack_openai_calls(client)  # Automatically capture OpenAI calls
    # ... rest of your message handling logic ...
    pass
```
You can customize the trace name and add initial metadata if needed:
To instrument specific parts of your pipeline within a trace (like an LLM operation, RAG retrieval, or external API call), use the @langwatch.span() decorator.
```python
import langwatch
from langwatch.types import RAGChunk
import chainlit as cl  # cl.Message is used in the handler signature below

@langwatch.span(type="rag", name="RAG Document Retrieval")  # Add type and custom name
def rag_retrieval(query: str):
    # ... logic to retrieve documents ...
    search_results = [
        {"id": "doc-1", "content": "..."},
        {"id": "doc-2", "content": "..."},
    ]
    # Add specific context data to the span
    langwatch.get_current_span().update(
        contexts=[
            RAGChunk(document_id=doc["id"], content=doc["content"])
            for doc in search_results
        ],
        retrieval_strategy="vector_search",
    )
    return search_results

@langwatch.trace()
async def handle_message(message: cl.Message):
    # ...
    retrieved_docs = rag_retrieval(message.content)  # This call creates a nested span
    # ...
```
The @langwatch.span() decorator automatically captures the decorated
function’s arguments as the span’s input and its return value as the
output. This behavior can be controlled via the capture_input and
capture_output arguments (both default to True).
Spans created within a function decorated with @langwatch.trace() will automatically be nested under the main trace span. You can add additional type, name, metadata, and events, or override the automatic input/output using decorator arguments or the update() method on the span object obtained via langwatch.get_current_span().

For detailed guidance on manually creating traces and spans using context managers or direct start/end calls, see the Manual Instrumentation Tutorial.
An existing OpenTelemetry TracerProvider. If provided, LangWatch will use it
(adding its exporter) instead of creating a new one. If not provided,
LangWatch checks the global provider or creates a new one.
If True, suppresses the warning message logged when an existing global
TracerProvider is detected and LangWatch attaches its exporter to it instead
of overriding it.
LangWatch offers seamless integrations with a variety of popular Python libraries and frameworks. These integrations provide automatic instrumentation, capturing relevant data from your LLM applications with minimal setup.

Below is a list of currently supported integrations. Click on each to learn more about specific setup instructions and available features:
If you are using a library that is not listed here, you can still instrument your application manually. See the Manual Instrumentation Tutorial for more details. Since LangWatch is built on OpenTelemetry, it also supports any library or framework that integrates with OpenTelemetry. We are also continuously working on adding support for more integrations.
LangWatch automatically captures cost and token data for most LLM providers. If you’re missing costs or token counts, our cost tracking tutorial covers troubleshooting steps, model cost configuration, and manual token tracking setup.
How do I capture RAG (Retrieval Augmented Generation) contexts?
To monitor your RAG pipelines and track retrieved documents, see our RAG capturing guide. This enables specialized RAG evaluators and analytics on document usage patterns.
How can I make the input and output of a trace more human-readable, so the conversation is easier to follow?
Our input/output mapping guide shows how to properly structure chat messages, handle different data formats, and ensure your LLM conversations are captured correctly for analysis.
How do I add custom metadata and user information to traces?
Learn how to enrich your traces with user IDs, session data, and custom attributes in our metadata and attributes tutorial. This is essential for user analytics and filtering traces by custom criteria.
How can I capture a whole conversation?
To connect multiple traces into a conversation, you can use the thread_id metadata. See the metadata and attributes tutorial for more details.
How do I capture evaluations and guardrails tracing data?
Implement automated quality checks and safety measures with our evaluations and guardrails tutorial. Learn to create custom evaluators and integrate safety guardrails into your LLM workflows.
How can I manually instrument my application for more fine-grained control?
For custom frameworks or fine-grained control, our manual instrumentation guide covers creating traces and spans programmatically using context managers and direct API calls.
How do I integrate with existing OpenTelemetry setups?
LangWatch is OpenTelemetry-based, so it can be integrated seamlessly with any OpenTelemetry-compatible application. If you already use OpenTelemetry in your application, our OpenTelemetry integration tutorial explains how to configure LangWatch alongside existing telemetry infrastructure, including custom collectors and exporters.