This example demonstrates how to run the Google Calendar Agent with the Inference Gateway using Docker Compose. The setup includes both services configured to work together, providing a complete AI-powered calendar management solution.
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│                 │    │                 │    │                 │
│  A2A Debugger   │───▶│ Calendar Agent  │───▶│ Google Calendar │
│   CLI Tool      │    │  (A2A Server)   │    │      API        │
│                 │    │  (Port 8080)    │    │                 │
└─────────────────┘    └─────────────────┘    └─────────────────┘
         │                      │
         │                      │
         ▼                      ▼
┌─────────────────┐    ┌─────────────────┐
│                 │    │                 │
│ Task Management │    │   LLM Engine    │
│ • List tasks    │    │   (Optional)    │
│ • View history  │    │ For AI features │
│ • Submit tasks  │    │                 │
└─────────────────┘    └─────────────────┘
Direct Access Flow:
- A2A Debugger connects directly to the Calendar Agent's A2A server endpoint
- Submit Tasks: Send JSON-RPC 2.0 requests directly to the agent
- Task Management: List, monitor, and retrieve task details and conversation history
- Real-time Debugging: View agent responses, tool calls, and execution flow
- Agent Tools: Direct access to calendar operations without gateway overhead
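The "Submit Tasks" step above wraps a plain JSON-RPC 2.0 call. The following is a sketch of such a request in the A2A message/send shape; the method name, field names, and endpoint are assumptions here, so verify them against your agent's card before relying on them:

```shell
# Hypothetical JSON-RPC 2.0 payload in the A2A "message/send" shape;
# check the method and field names against your agent's card.
cat > request.json <<'EOF'
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "message/send",
  "params": {
    "message": {
      "role": "user",
      "parts": [{ "kind": "text", "text": "List my events for today" }],
      "messageId": "debug-1"
    }
  }
}
EOF

# Send it to the agent's A2A endpoint (port 8080 in the diagram above)
curl -s -X POST http://localhost:8080/ \
  -H "Content-Type: application/json" \
  -d @request.json
```

This is exactly what the debugger's tasks submit command does for you, which is why the CLI form is the more convenient way to work.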
A2A Debugger Commands:
# Test connection and get agent capabilities
docker compose run --rm a2a-debugger connect
docker compose run --rm a2a-debugger agent-card
# Submit tasks directly to the agent
docker compose run --rm a2a-debugger tasks submit "List my events for today"
docker compose run --rm a2a-debugger tasks submit "Create a meeting tomorrow at 2 PM"
docker compose run --rm a2a-debugger tasks submit-streaming "List my events for today?"
# Monitor and debug
docker compose run --rm a2a-debugger tasks list
docker compose run --rm a2a-debugger tasks get <task-id>
docker compose run --rm a2a-debugger tasks history <context-id>

Benefits of Direct Access:
- Faster Response: No gateway overhead for debugging
- Direct Tool Access: Immediate access to calendar operations
- Enhanced Debugging: Full visibility into agent internals
- Task Monitoring: Real-time task status and conversation history
- Development Workflow: Perfect for agent development and testing
- Google Calendar Agent: Manages calendar events with natural language processing
- Inference Gateway: High-performance LLM gateway supporting multiple providers
- Multi-Provider Support: OpenAI, Groq, Anthropic, DeepSeek, Cohere, Cloudflare
- Mock Mode: Run without Google Calendar integration for testing
- Health Checks: Built-in health monitoring for both services
- Automatic Restart: Services restart automatically on failure
- Docker and Docker Compose installed
- Google Calendar API credentials (unless running in mock mode)
- API keys for at least one LLM provider
# Navigate to the example directory
cd example
# Copy the environment template
cp .env.example .env

Edit the .env file and configure the required settings:
# Set mock mode to true
GOOGLE_MOCK_MODE=true
# Configure at least one LLM provider
GROQ_API_KEY=your_groq_api_key_here
A2A_AGENT_CLIENT_PROVIDER=groq
A2A_AGENT_CLIENT_MODEL=deepseek-r1-distill-llama-70b

# Disable mock mode
GOOGLE_MOCK_MODE=false
# Configure Google Calendar
GOOGLE_CALENDAR_ID=primary
GOOGLE_SERVICE_ACCOUNT_JSON={"type":"service_account","project_id":"..."}
# Configure LLM provider
GROQ_API_KEY=your_groq_api_key_here
A2A_AGENT_CLIENT_PROVIDER=groq
A2A_AGENT_CLIENT_MODEL=deepseek-r1-distill-llama-70b

# Using Task (recommended)
task up
# Or using Docker Compose directly
docker-compose up -d
# View logs
task logs
# or
docker-compose logs -f
# Check service status
task status
# or
docker-compose ps

# Test Inference Gateway
curl http://localhost:8080/health

Set A2A_EXPOSE=true on the Inference Gateway and bring up the containers.
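When scripting against the stack, it helps to wait for the health endpoint before issuing requests. A minimal sketch (the function name and retry count are arbitrary choices, not part of this example's tooling):

```shell
# Poll a health endpoint until it answers, or give up after N tries.
# Usage: wait_for_health <url> [tries]
wait_for_health() {
  url="$1"
  tries="${2:-30}"
  i=0
  until curl -fsS "$url" >/dev/null 2>&1; do
    i=$((i + 1))
    if [ "$i" -ge "$tries" ]; then
      return 1 # gave up
    fi
    sleep 1
  done
  return 0 # endpoint answered
}

wait_for_health http://localhost:8080/health 3 && echo "gateway is up"
```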
curl http://localhost:8080/v1/a2a/agents

# Test through Inference Gateway (non-streaming)
curl -X POST http://localhost:8080/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "deepseek/deepseek-chat",
"messages": [
{
"role": "user",
"content": "List my events for today"
}
]
}'
# Test through Inference Gateway (streaming)
curl -N -X POST http://localhost:8080/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "deepseek/deepseek-chat",
"messages": [
{
"role": "user",
"content": "List my events for today"
}
],
"stream": true
}'

For the easiest interaction experience, use the inference-gateway CLI, which provides an interactive chat interface:
# Start an interactive chat session with the agent (most convenient)
docker compose run --rm cli
# Alternative: Run a one-off command
docker compose run --rm cli agent "What events do I have today?"
# Alternative: Start in interactive chat mode
docker compose run --rm cli chat

CLI Benefits:
- Interactive Experience: Natural conversation flow instead of curl commands
- Automatic Formatting: Responses are properly formatted and easy to read
- Session Management: Maintains conversation context across multiple queries
- Real-time Streaming: See responses as they're generated
- Command History: Use arrow keys to navigate previous commands
- Error Handling: Clear error messages and retry options
The CLI is perfect for:
- Testing agent functionality interactively
- Debugging agent responses in real-time
- Validating that the agent integration is working correctly
- Daily usage without technical complexity
This example includes a Taskfile for easy management. Here are the available commands:
# Service management
task up # Start all services
task down # Stop all services
task restart # Restart all services
task status # Show service status
# Monitoring
task logs # Show logs for all services
task logs-gateway # Show logs for inference gateway only
task logs-agent # Show logs for calendar agent only
task health # Check health of all services
# Testing
task test-gateway # Test Inference Gateway
task test-agent # Test Calendar Agent directly
task agent-info # Get agent information
# Maintenance
task clean # Stop services and remove volumes
task clean-all # Stop services and remove everything
task pull # Pull latest images
# Modes
task demo # Start in demo mode
task prod # Start in production mode
task debug # Start with debug logging
# Validation
task validate-env # Check environment configuration

| Environment Variable | Description | Default | Required |
|---|---|---|---|
| GOOGLE_MOCK_MODE | Run without Google Calendar integration | false | No |
| GOOGLE_APPLICATION_CREDENTIALS | Path to credentials file | - | Yes* |
| GOOGLE_CALENDAR_ID | Target calendar ID | primary | No |
| GOOGLE_CALENDAR_TIMEZONE | Default timezone | UTC | No |
* Required unless GOOGLE_MOCK_MODE=true
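That footnote can be expressed as a quick standalone shell check, using the variable names from this example's .env (task validate-env is the built-in, fuller check):

```shell
# Fail early when real-mode credentials are missing.
# Mirrors the table above: credentials are required unless GOOGLE_MOCK_MODE=true.
check_calendar_env() {
  if [ "${GOOGLE_MOCK_MODE:-false}" = "true" ]; then
    return 0 # mock mode needs no Google credentials
  fi
  if [ -n "${GOOGLE_APPLICATION_CREDENTIALS:-}" ] || [ -n "${GOOGLE_SERVICE_ACCOUNT_JSON:-}" ]; then
    return 0
  fi
  echo "error: set GOOGLE_APPLICATION_CREDENTIALS or GOOGLE_SERVICE_ACCOUNT_JSON, or GOOGLE_MOCK_MODE=true" >&2
  return 1
}
```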
# Groq
GROQ_API_KEY=your_groq_api_key
A2A_AGENT_CLIENT_PROVIDER=groq
A2A_AGENT_CLIENT_MODEL=deepseek-r1-distill-llama-70b

# OpenAI
OPENAI_API_KEY=your_openai_api_key
A2A_AGENT_CLIENT_PROVIDER=openai
A2A_AGENT_CLIENT_MODEL=gpt-4o

# Anthropic
ANTHROPIC_API_KEY=your_anthropic_api_key
A2A_AGENT_CLIENT_PROVIDER=anthropic
A2A_AGENT_CLIENT_MODEL=claude-3-opus-20240229

# DeepSeek
DEEPSEEK_API_KEY=your_deepseek_api_key
A2A_AGENT_CLIENT_PROVIDER=deepseek
A2A_AGENT_CLIENT_MODEL=deepseek-chat

# Cohere
COHERE_API_KEY=your_cohere_api_key
A2A_AGENT_CLIENT_PROVIDER=cohere
A2A_AGENT_CLIENT_MODEL=command-r-plus

# Cloudflare
CLOUDFLARE_API_TOKEN=your_cloudflare_token
CLOUDFLARE_ACCOUNT_ID=your_account_id
A2A_AGENT_CLIENT_PROVIDER=cloudflare
A2A_AGENT_CLIENT_MODEL=@cf/meta/llama-3.1-8b-instruct

To set up Google Calendar API access:

- Go to the Google Cloud Console
- Create a new project or select an existing one
- Enable the Google Calendar API
- Create a Service Account
- Download the JSON credentials file
- Share your calendar with the service account email
- Set GOOGLE_SERVICE_ACCOUNT_JSON to the JSON content (single line)
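The last step needs the multi-line credentials file collapsed to a single line. One way to do that, assuming the downloaded key is named service-account.json (python3 is used because it is commonly available; jq -c . works equally well):

```shell
# Collapse the downloaded key file (assumed name: service-account.json)
# to one JSON line and append it to the .env file.
oneline=$(python3 -c 'import json,sys; print(json.dumps(json.load(sys.stdin)))' < service-account.json)
echo "GOOGLE_SERVICE_ACCOUNT_JSON=$oneline" >> .env
```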
# Through Inference Gateway (non-streaming)
curl -X POST http://localhost:8080/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "deepseek/deepseek-chat",
"messages": [
{
"role": "user",
"content": "What events do I have this week?"
}
]
}'
# Through Inference Gateway (streaming - real-time response)
curl -N -X POST http://localhost:8080/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "deepseek/deepseek-chat",
"messages": [
{
"role": "user",
"content": "What events do I have this week?"
}
],
"stream": true
}'

# Through Inference Gateway (non-streaming)
curl -X POST http://localhost:8080/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "deepseek/deepseek-chat",
"messages": [
{
"role": "user",
"content": "Schedule a team meeting tomorrow at 2 PM for 1 hour"
}
]
}'
# Through Inference Gateway (streaming - real-time response)
curl -N -X POST http://localhost:8080/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "deepseek/deepseek-chat",
"messages": [
{
"role": "user",
"content": "Schedule a team meeting tomorrow at 2 PM for 1 hour"
}
],
"stream": true
}'

# Through Inference Gateway (non-streaming)
curl -X POST http://localhost:8080/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "deepseek/deepseek-chat",
"messages": [
{
"role": "user",
"content": "Move my 2 PM meeting to 3 PM"
}
]
}'
# Through Inference Gateway (streaming - real-time response)
curl -N -X POST http://localhost:8080/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "deepseek/deepseek-chat",
"messages": [
{
"role": "user",
"content": "Move my 2 PM meeting to 3 PM"
}
],
"stream": true
}'

# Through Inference Gateway (non-streaming)
curl -X POST http://localhost:8080/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "deepseek/deepseek-chat",
"messages": [
{
"role": "user",
"content": "Cancel my meeting with John tomorrow"
}
]
}'
# Through Inference Gateway (streaming - real-time response)
curl -N -X POST http://localhost:8080/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "deepseek/deepseek-chat",
"messages": [
{
"role": "user",
"content": "Cancel my meeting with John tomorrow"
}
],
"stream": true
}'

# Check logs for errors
docker-compose logs
# Restart services
docker-compose down
docker-compose up -d

# Check service status
docker-compose ps
# Test connectivity
curl http://localhost:8080/health

Google Calendar issues:

- Verify credentials are correctly formatted
- Ensure the calendar is shared with the service account
- Check API quotas in the Google Cloud Console

LLM provider issues:

- Verify API keys are correct
- Check provider-specific rate limits
- Try a different model if the current one fails
Enable debug logging for more detailed output:
# In .env file
LOG_LEVEL=debug
SERVER_GIN_MODE=debug

# All services
docker-compose logs -f
# Specific service
docker-compose logs -f google-calendar-agent
docker-compose logs -f inference-gateway
# Last 100 lines
docker-compose logs --tail=100

# Stop services
docker-compose down
# Remove volumes and networks
docker-compose down -v
# Remove images (optional)
docker-compose down --rmi all

- Store API keys securely (use Docker secrets in production)
- Use HTTPS in production environments
- Regularly rotate API keys
- Limit Google Calendar permissions to necessary scopes
- Monitor API usage and set up alerts
For production deployments, consider:
- Using Docker secrets for sensitive data
- Setting up reverse proxy with SSL termination
- Implementing proper monitoring and logging
- Using managed databases for persistence
- Setting up automated backups
- Implementing health check endpoints
For issues and questions: