# A2A Distributed Tracing

This project demonstrates Agent-to-Agent (A2A) communication between different agent frameworks, enabling distributed tracing and conversation across multiple agent implementations.
## Architecture

```text
┌─────────────────────────────────────────────────────────────────┐
│                   test_agent_conversation.py                    │
│                  (Orchestrates conversation)                    │
└────────────┬──────────────────────────────┬─────────────────────┘
             │                              │
             │ A2A Protocol                 │ A2A Protocol
             │ (JSON-RPC)                   │ (JSON-RPC)
             │                              │
    ┌────────▼────────┐             ┌────────▼────────┐
    │ LangChain Agent │             │   Google ADK    │
    │    Port 2024    │             │      Agent      │
    │ (langgraph dev) │             │    Port 8002    │
    │                 │             │    (uvicorn)    │
    └────────┬────────┘             └────────┬────────┘
             │                              │
             │ OpenTelemetry                │ OpenTelemetry
             │ Traces                       │ Traces
             │                              │
             └──────────────┬───────────────┘
                            │
                   ┌────────▼────────┐
                   │    LangSmith    │
                   │    (Tracing)    │
                   └─────────────────┘
```
## Project Structure

```text
A2A-distributed-tracing/
├── langgraph_agent/              # LangGraph-based agent
│   ├── agent.py                  # Main agent implementation
│   ├── test_agent.py             # Test script for this agent
│   └── langgraph.json            # LangGraph configuration
├── langchain_agent/              # LangChain v1-based agent
│   ├── agent.py                  # Main agent implementation
│   ├── test_agent.py             # Test script for this agent
│   └── langgraph.json            # LangGraph configuration
├── google_adk/                   # Google ADK-based agent
│   ├── agent.py                  # Main agent implementation
│   └── test_agent.py             # Test script for this agent
└── test_agent_conversation.py    # Multi-agent conversation test
```
## Agents

This project contains three different agent implementations, all communicating via the A2A protocol:

- **LangGraph Agent** (`langgraph_agent/`): Uses LangGraph's `StateGraph` directly
- **LangChain Agent** (`langchain_agent/`): Uses LangChain v1's `create_agent` API with middleware
- **Google ADK Agent** (`google_adk/`): Uses Google ADK's `to_a2a()` function

All agents are specialized in fluid dynamics and Navier-Stokes equations, and can communicate with a mathematics professor (the Google ADK agent) for calculations.
## Setup

1. **Install uv** (if not already installed):

   ```bash
   curl -LsSf https://astral.sh/uv/install.sh | sh
   ```

   Or using Homebrew on macOS:

   ```bash
   brew install uv
   ```

2. **Install dependencies:**

   ```bash
   uv sync
   ```

3. **Configure environment variables:** Create a `.env` file in the root directory:

   ```bash
   OPENAI_API_KEY=your_openai_api_key_here
   LANGSMITH_API_KEY=your_langsmith_api_key_here  # Optional, for distributed tracing
   LANGSMITH_PROJECT=a2a-distributed-tracing      # Optional, defaults to "a2a-distributed-tracing"
   ```
## Testing Individual Agents

Each agent folder contains a `test_agent.py` script to test that agent independently.

### LangGraph agent

```bash
cd langgraph_agent
uv run langgraph dev --port 2024
# In another terminal:
uv run python test_agent.py <assistant_id>
```

### LangChain agent

```bash
cd langchain_agent
uv run langgraph dev --port 2026
# In another terminal:
uv run python test_agent.py <assistant_id>
```

### Google ADK agent

```bash
uv run uvicorn google_adk.agent:a2a_app --host localhost --port 8002
# In another terminal:
uv run python google_adk/test_agent.py
```

## Multi-Agent Conversation Test

The `test_agent_conversation.py` script demonstrates A2A communication between agents:
1. **Start the LangChain agent:**

   ```bash
   cd langchain_agent
   uv run langgraph dev --port 2024
   ```

   Copy the `assistant_id` from the output.

2. **Start the Google ADK agent:**

   ```bash
   uv run uvicorn google_adk.agent:a2a_app --host localhost --port 8002
   ```

3. **Run the conversation test:**

   ```bash
   uv run python test_agent_conversation.py <langchain_assistant_id>
   ```

   Or with an environment variable:

   ```bash
   export LANGCHAIN_ASSISTANT_ID=<assistant_id>
   uv run python test_agent_conversation.py
   ```

The script will:
- Use `context_id` as `thread_id` to group traces in LangSmith
- Share the same `thread_id` between both agents for unified tracing
- Simulate a conversation between the LangChain and Google ADK agents
- Maintain conversation context across multiple rounds using the A2A `contextId`
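The context and thread handling described above can be sketched in a few lines. This is an illustrative sketch, not code from the actual scripts: `make_message` is a hypothetical helper and `"ctx-123"` stands in for a server-generated `contextId`.

```python
import uuid

def make_message(text, context_id=None):
    """Build the message object for an A2A message/send request (sketch)."""
    message = {
        "role": "user",
        "parts": [{"kind": "text", "text": text}],
        "messageId": str(uuid.uuid4()),
    }
    if context_id is not None:
        # Including contextId continues the previous interaction
        message["contextId"] = context_id
    return message

# Round 1: no contextId exists yet, so generate a shared thread_id up front.
thread_id = str(uuid.uuid4())
first = make_message("What is the Reynolds number for this flow?")

# The first response carries a contextId (placeholder value here). From then
# on both agents reuse it as the shared thread_id, so their traces land in
# the same LangSmith thread.
context_id = "ctx-123"
thread_id = context_id
follow_up = make_message("Ask the professor to compute it.", context_id)
```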
## Agent Details

### LangGraph Agent

- **Location**: `langgraph_agent/`
- **Port**: 2024
- **Implementation**: Uses LangGraph's `StateGraph` directly
- **System Prompt**: Specialized in fluid dynamics and Navier-Stokes equations
- **Features**: Direct `StateGraph` construction, custom state management
### LangChain Agent

- **Location**: `langchain_agent/`
- **Port**: 2024 (configurable)
- **Implementation**: Uses LangChain v1's `create_agent` API
- **System Prompt**: Specialized in fluid dynamics and Navier-Stokes equations
- **Features**:
  - Uses `create_agent` directly (no manual `StateGraph`)
  - Custom middleware for A2A message format conversion
  - Extensible with LangChain tools
### Google ADK Agent

- **Location**: `google_adk/`
- **Port**: 8002
- **Implementation**: Uses Google ADK's `to_a2a()` function
- **Functionality**: Calculator agent with mathematical operations
- **Features**:
  - Auto-generated agent card
  - Exposed via uvicorn
  - Acts as a "mathematics professor" for other agents
  - OpenTelemetry tracing to LangSmith (same project as the other agents)
## A2A Protocol

The project follows the A2A Protocol Specification for multi-turn conversations, specifically using `contextId` per section 3.4.2 (Multi-Turn Conversation Patterns).
### LangGraph and LangChain Agents

- **Endpoint**: `http://localhost:{port}/a2a/{assistant_id}`
- **Format**: Standard A2A protocol
- **Context ID**: Used inside `message.contextId` for multi-turn conversation continuity
- **Task ID**: Optionally included in `message.taskId` for follow-up messages referencing specific tasks
- **Metadata**: `thread_id` (set to the `context_id` value) added at the payload root level for LangSmith tracing
### Google ADK Agent

- **Endpoint**: `http://localhost:8002/` (root endpoint)
- **Format**: `to_a2a()`-specific format
- **Context ID**: Used inside `message.contextId` for multi-turn conversation continuity
- **Task ID**: Optionally included in `message.taskId` for follow-up messages
- **Message ID**: Inside the `message` object (not at the params level)
- **Metadata**: `thread_id` (set to the `context_id` value) added at the payload root level for LangSmith tracing
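The field-placement difference between the two formats is easiest to see side by side. The sketch below is assembled from the descriptions above; the message text and the `"ctx-123"` value are placeholders, not output from the project.

```python
import uuid

context_id = "ctx-123"  # Placeholder: would come from a previous response

# LangGraph / LangChain agents: messageId appears both inside the
# message object and at the params level.
langgraph_payload = {
    "jsonrpc": "2.0",
    "id": str(uuid.uuid4()),
    "method": "message/send",
    "params": {
        "message": {
            "role": "user",
            "parts": [{"kind": "text", "text": "Hello"}],
            "messageId": str(uuid.uuid4()),
            "contextId": context_id,
        },
        "messageId": str(uuid.uuid4()),
    },
    "metadata": {"thread_id": context_id},  # Payload root level
}

# Google ADK agent: messageId lives only inside the message object.
adk_payload = {
    "jsonrpc": "2.0",
    "id": str(uuid.uuid4()),
    "method": "message/send",
    "params": {
        "message": {
            "role": "user",
            "parts": [{"kind": "text", "text": "Hello"}],
            "messageId": str(uuid.uuid4()),
            "contextId": context_id,
        },
    },
    "metadata": {"thread_id": context_id},  # Payload root level
}
```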
### Multi-Turn Conversation Patterns

According to A2A spec section 3.4.2:

- **Context Continuity**: Task objects maintain conversation context through the `contextId` field
- **Follow-up Messages**: Clients can include `contextId` in subsequent messages to continue a previous interaction
- **Task References**: Clients can use `taskId` (with or without `contextId`) to continue or refine a specific task
- **Context Inheritance**: New tasks created within the same `contextId` can inherit context from previous interactions
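A sketch of how a client can carry these identifiers forward. The Task object below is a hypothetical response shape used for illustration, not actual output from the agents:

```python
import uuid

# Hypothetical JSON-RPC result containing a Task object
response = {
    "jsonrpc": "2.0",
    "id": "1",
    "result": {
        "id": "task-42",         # taskId
        "contextId": "ctx-123",  # conversation context
        "status": {"state": "completed"},
    },
}

task = response["result"]

# Follow-up that continues the same context and refines the same task
follow_up_message = {
    "role": "user",
    "parts": [{"kind": "text", "text": "Now solve it for turbulent flow"}],
    "messageId": str(uuid.uuid4()),
    "contextId": task["contextId"],  # context continuity
    "taskId": task["id"],            # task reference
}
```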
## Distributed Tracing

The project uses distributed tracing to track agent interactions across multiple agents, ensuring all traces are grouped in the same thread in LangSmith.

All agents use `thread_id` in metadata (set to the `context_id` value) to group traces in LangSmith. Both agents share the same `thread_id` to ensure unified tracing:
```python
import uuid

# First message (no contextId -- the server generates it).
# Use a shared thread_id for both agents.
thread_id = str(uuid.uuid4())

payload = {
    "jsonrpc": "2.0",
    "id": str(uuid.uuid4()),
    "method": "message/send",
    "params": {
        "message": {
            "role": "user",
            "parts": [{"kind": "text", "text": "Hello"}],
            "messageId": str(uuid.uuid4()),
        },
        "messageId": str(uuid.uuid4()),
    },
    "metadata": {"thread_id": thread_id},  # Groups traces in LangSmith
}

# Follow-up message (includes contextId inside the message object).
# Use context_id as thread_id once available.
thread_id = context_id  # From the previous response

payload = {
    "jsonrpc": "2.0",
    "id": str(uuid.uuid4()),
    "method": "message/send",
    "params": {
        "message": {
            "role": "user",
            "parts": [{"kind": "text", "text": "Follow-up"}],
            "messageId": str(uuid.uuid4()),
            "contextId": context_id,  # From the previous response
        },
        "messageId": str(uuid.uuid4()),
    },
    "metadata": {"thread_id": thread_id},  # Use context_id as thread_id
}
```

**Key Points:**
- `context_id` from A2A responses is used as `thread_id` in metadata
- Both agents share the same `thread_id` value to ensure traces are grouped together
- The `thread_id` is synchronized between agents when either receives a new `context_id`
### OpenTelemetry Tracing (Google ADK)

The Google ADK agent includes OpenTelemetry instrumentation that automatically sends traces to LangSmith:

- **Basic tracing**: Uses `langsmith.integrations.otel.configure()` for automatic tracing
- **Thread ID extraction**: Middleware extracts `thread_id` from request metadata and sets it as `langsmith.metadata.thread_id` in span attributes
- **Unified project**: All traces go to the same LangSmith project (`a2a-distributed-tracing` by default)
- **Complete visibility**: Captures agent conversations, tool calls, and model interactions
- **Thread grouping**: Uses the `langsmith.metadata.thread_id` attribute to group traces in the same thread
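The extraction step can be reduced to a small helper. The function name below is illustrative, and the span-attribute call appears only as a comment, since the real middleware hook depends on the server framework:

```python
from typing import Optional

def extract_thread_id(payload: dict) -> Optional[str]:
    """Pull thread_id from the payload-root metadata of an A2A request.

    Returns None when no metadata (or no thread_id) is present; in that
    case no langsmith.metadata.thread_id attribute is set on the span.
    """
    return (payload.get("metadata") or {}).get("thread_id")

# In the middleware, the extracted value would then be attached to the
# current OpenTelemetry span, e.g.:
#   span.set_attribute("langsmith.metadata.thread_id", thread_id)
```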
To enable tracing, set the `LANGSMITH_API_KEY` environment variable. The project name can be customized via `LANGSMITH_PROJECT` (defaults to `a2a-distributed-tracing`).

**Note**: The `openinference` package for Google ADK instrumentation may not be available yet. The code gracefully falls back to basic LangSmith tracing if the package is not installed.
This tracing setup allows you to:

- Track complete conversations across multiple agents
- Group related traces by `thread_id` (derived from `context_id`) in LangSmith
- View all agent interactions in the same thread for unified analysis
- Analyze agent-to-agent communication patterns
- Debug distributed agent interactions
- View detailed OpenTelemetry spans for Google ADK operations with `thread_id` in metadata
## Adding a New Agent

1. Create a new folder (e.g., `new_agent/`)
2. Implement the agent following the A2A protocol
3. Add a `test_agent.py` script
4. Update this README with the agent's details
## Adding Tools to Agents

- **LangChain Agent**: Add tools using the `@tool` decorator and pass them to `create_agent()`
- **LangGraph Agent**: Add nodes to the `StateGraph`
- **Google ADK Agent**: Add functions as tools to the `Agent`
## Changing Ports

- **LangGraph/LangChain agents**: Change the port in `uv run langgraph dev --port {port}`
- **Google ADK agent**: Change the port in `agent.py` or the uvicorn command
## Troubleshooting

**Agent not responding:**

- Check server logs for errors
- Verify the agent card endpoint is accessible
- Ensure all dependencies are installed: `uv sync`
- Check that environment variables are set correctly

**Dependency or import errors:**

- Reinstall dependencies: `uv sync`
- Use `uv run` to ensure the correct environment is used
