This project demonstrates Agent-to-Agent (A2A) conversations using LangGraph with the A2A protocol. It includes implementations in both Python and TypeScript, allowing you to explore A2A communication patterns across different language ecosystems.
The project showcases how two conversational AI agents can communicate with each other using the standardized A2A protocol. Each agent:
- Uses OpenAI's GPT-4o-mini for generating responses
- Maintains independent conversation state
- Communicates via JSON-RPC 2.0 formatted messages
- Responds to messages from other agents in a conversational loop
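Such a message can be sketched as a JSON-RPC 2.0 envelope. The `params` field names below (the `message`/`parts` shape) follow the A2A `message/send` convention but are illustrative; the exact schema is defined by the A2A specification:

```python
import json

# Sketch of a JSON-RPC 2.0 envelope for the A2A "message/send" method.
# Field names inside "params" are illustrative, not the project's exact payload.
payload = {
    "jsonrpc": "2.0",
    "id": "1",
    "method": "message/send",
    "params": {
        "message": {
            "role": "user",
            "parts": [{"kind": "text", "text": "Hello from Agent A!"}],
            "messageId": "msg-001",
        }
    },
}

print(json.dumps(payload, indent=2))
```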
```
agent2agent/
├── python/                      # Python implementation
│   ├── langgraph_agent.py       # Agent definition
│   ├── a2a_conversation.py      # Conversation orchestrator
│   ├── langgraph.json           # LangGraph configuration
│   ├── requirements.txt         # Python dependencies
│   └── README.md                # Python-specific docs
│
├── typescript/                  # TypeScript implementation
│   ├── src/
│   │   ├── langgraph_agent.ts   # Agent definition
│   │   └── a2a_conversation.ts  # Conversation orchestrator
│   ├── langgraph.json           # LangGraph configuration
│   ├── package.json             # Node.js dependencies
│   ├── tsconfig.json            # TypeScript configuration
│   └── README.md                # TypeScript-specific docs
│
└── README.md                    # This file
```
- Setup:

  ```shell
  cd python
  python -m venv venv
  source venv/bin/activate  # On macOS/Linux
  pip install -r requirements.txt
  ```

- Configure environment:

  ```shell
  cp .env.example .env
  # Add your OPENAI_API_KEY to .env
  ```

- Run agents:

  ```shell
  # Terminal 1
  langgraph dev --port 2024

  # Terminal 2
  langgraph dev --port 2025
  ```

- Run conversation:

  ```shell
  python a2a_conversation.py
  ```
See python/README.md for detailed Python setup instructions.
- Setup:

  ```shell
  cd typescript
  npm install
  ```

- Configure environment:

  ```shell
  cp .env.example .env
  # Add your OPENAI_API_KEY to .env
  ```

- Run agents:

  ```shell
  # Terminal 1
  npx @langchain/langgraph-cli dev --port 2024

  # Terminal 2
  npx @langchain/langgraph-cli dev --port 2025
  ```

- Run conversation:

  ```shell
  npm run conversation
  ```
See typescript/README.md for detailed TypeScript setup instructions.
- Python: The local dev server (`langgraph dev`) fully supports A2A protocol endpoints at `/a2a/{assistant_id}`
- TypeScript:
  - The A2A protocol is supported on LangGraph Cloud deployments
  - The local dev server (`langgraph dev`) does not expose A2A endpoints
  - For local TypeScript development, use the Threads API instead
Both implementations require:
- `OPENAI_API_KEY`: Your OpenAI API key
- `AGENT_A_ID`: Assistant ID from the first server (port 2024)
- `AGENT_B_ID`: Assistant ID from the second server (port 2025)
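A minimal `.env` might look like the following (placeholder values; each dev server prints or serves its assistant IDs):

```
OPENAI_API_KEY=sk-...
AGENT_A_ID=...
AGENT_B_ID=...
```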
Both implementations define a conversational agent using LangGraph's StateGraph:
- State: Maintains a list of messages using LangChain's `BaseMessage` types
- Node: Processes messages using OpenAI's GPT-4o-mini
- Configuration: Brief responses (max 100 tokens), temperature 0.7
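The message-accumulation pattern behind that state can be sketched in plain Python. This is a simplified stand-in for LangGraph's `add_messages` reducer and LangChain's `BaseMessage` types, not the project's actual code, and the model call is faked for illustration:

```python
from dataclasses import dataclass, field


@dataclass
class Message:
    # Simplified stand-in for LangChain's BaseMessage
    role: str
    content: str


@dataclass
class AgentState:
    # The conversation state: a running list of messages
    messages: list = field(default_factory=list)


def add_messages(existing: list, new: list) -> list:
    """Reducer: a node's returned messages are appended, not replacing state."""
    return existing + new


state = AgentState(messages=[Message("user", "Hello!")])
# A real node would invoke GPT-4o-mini here; we fake the reply.
reply = [Message("assistant", "Hi there!")]
state.messages = add_messages(state.messages, reply)
print(len(state.messages))  # → 2
```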
The orchestrator:
- Sends JSON-RPC 2.0 formatted messages between agents
- Uses the A2A protocol endpoint: `/a2a/{assistant_id}`
- Implements the `message/send` method
- Extracts responses from the `artifacts` array
- Simulates 3 rounds of back-and-forth conversation
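The extraction step can be sketched as follows, assuming a response whose reply text sits at `result.artifacts[0].parts[0].text`; the exact shape depends on the server's A2A response, so treat this structure as an assumption:

```python
def extract_reply(response: dict) -> str:
    """Pull the text reply out of a JSON-RPC A2A response (assumed shape)."""
    artifacts = response["result"]["artifacts"]
    return artifacts[0]["parts"][0]["text"]


# Example response mirroring the assumed structure:
response = {
    "jsonrpc": "2.0",
    "id": "1",
    "result": {
        "artifacts": [
            {"parts": [{"kind": "text", "text": "Hello back from Agent B!"}]}
        ]
    },
}
print(extract_reply(response))  # → Hello back from Agent B!
```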
- Swagger UI (Python): `http://localhost:2024/docs` or `http://localhost:2025/docs`
- LangGraph Studio: Available via the server output (e.g., `https://smith.langchain.com/studio?baseUrl=http://localhost:2025`)
- Note: The `/docs` endpoint is not available in the TypeScript dev server
| Feature | Python | TypeScript |
|---|---|---|
| A2A Local Dev | ✅ Supported | ❌ Not available (use Cloud) |
| A2A Cloud | ✅ Supported | ✅ Supported |
| Threads API | ✅ Available | ✅ Available |
| Swagger Docs | ✅ /docs endpoint | ❌ Use Studio UI |
| State Definition | Dataclass | Annotation API |
- Run conversation (Python):

  ```shell
  python a2a_conversation.py
  ```

- `npm run dev` - Watch mode for development
- `npm run build` - Build TypeScript to JavaScript
- `npm run conversation` - Run the A2A conversation simulation
See LICENSE for details.