Monitor and log LLM API calls from OpenAI and Anthropic to the Coolhand analytics platform.
- Get API Key: Visit coolhandlabs.com to create a free account
- Install: `pip install coolhand`
- Configure: Set `COOLHAND_API_KEY` and `import coolhand` in your app
- Deploy: Your AI calls are now automatically monitored!
Set it and forget it! Monitor ALL AI API calls across your entire application with minimal configuration.
```python
import coolhand  # Auto-initializes and starts monitoring

# That's it! ALL AI API calls are now automatically monitored:
# ✅ OpenAI SDK calls
# ✅ Anthropic API calls
# ✅ ANY library making AI API calls via httpx

# Your existing code works unchanged:
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}]
)
# The request and response have been automatically logged to Coolhand!
```

Why Automatic Monitoring:
- Zero refactoring - No code changes to existing services
- Complete coverage - Monitors all AI libraries using httpx automatically
- Security built-in - Automatic credential sanitization
- Performance optimized - Negligible overhead via async logging
- Future-proof - Automatically captures new AI calls added by your team
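The "negligible overhead via async logging" point boils down to fire-and-forget: the request path only enqueues a record, and a background worker ships it later. A minimal sketch of that pattern, assuming a simple queue-plus-daemon-thread design (the `LogWorker` class is illustrative, not Coolhand's actual internals, which writes to a list where the real library would upload over HTTPS):

```python
import queue
import threading

class LogWorker:
    """Illustrative background worker: callers enqueue records and
    continue immediately; a daemon thread drains the queue."""

    def __init__(self):
        self.records = []  # stand-in for an HTTPS upload to Coolhand
        self._q = queue.Queue()
        self._thread = threading.Thread(target=self._drain, daemon=True)
        self._thread.start()

    def _drain(self):
        while True:
            record = self._q.get()
            if record is None:  # sentinel: stop draining
                break
            self.records.append(record)

    def log(self, record):
        self._q.put(record)  # returns immediately; no network wait

    def close(self):
        self._q.put(None)
        self._thread.join()

worker = LogWorker()
worker.log({"model": "gpt-4", "status": 200})
worker.close()
print(worker.records)  # [{'model': 'gpt-4', 'status': 200}]
```

The key property is that `log()` never blocks on I/O, so the cost added to each monitored request is roughly one queue insertion.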
| Variable | Required | Default | Description |
|---|---|---|---|
| `COOLHAND_API_KEY` | Yes | - | Your Coolhand API key for authentication |
| `COOLHAND_SILENT` | No | `true` | Set to `false` for verbose logging output |
```python
from coolhand import Coolhand

coolhand_client = Coolhand(
    api_key='your-api-key',
    silent=False,  # Enable verbose logging
)
```

```python
import coolhand
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)
# Request automatically logged to Coolhand!
```

```python
import coolhand
from anthropic import Anthropic

client = Anthropic()
response = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.content[0].text)
# Request automatically logged to Coolhand!
```

```python
import coolhand
from openai import OpenAI

client = OpenAI()
stream = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True
)
for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
# Complete streamed response automatically logged to Coolhand!
```

The monitor captures:
- Request Data: Method, URL, headers, request body
- Response Data: Status code, headers, response body
- Timing: Request timestamp, response timestamp, duration
- LLM-Specific: Model used, token counts, streaming status
Headers containing API keys are automatically sanitized for security.
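For illustration, a captured record with sanitized headers might look like the sketch below. The field names and the `sanitize_headers` helper are assumptions for this example, not Coolhand's actual schema or implementation:

```python
SENSITIVE_HEADERS = {"authorization", "x-api-key", "api-key"}

def sanitize_headers(headers):
    """Redact values of headers that commonly carry API keys."""
    return {
        name: ("[REDACTED]" if name.lower() in SENSITIVE_HEADERS else value)
        for name, value in headers.items()
    }

record = {
    "method": "POST",
    "url": "https://api.openai.com/v1/chat/completions",
    "request_headers": sanitize_headers({
        "Authorization": "Bearer sk-secret",
        "Content-Type": "application/json",
    }),
    "status_code": 200,
    "duration_ms": 1843.2,  # response timestamp minus request timestamp
    "model": "gpt-4",
    "streaming": False,
}
print(record["request_headers"]["Authorization"])  # [REDACTED]
```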
Coolhand monitors HTTP requests made via httpx, which is used by:
- OpenAI Python SDK
- Anthropic Python SDK
- Any other library using httpx for HTTP requests
- When you import `coolhand`, it automatically patches httpx
- Requests to OpenAI and Anthropic APIs are intercepted
- Request and response data are captured (credentials sanitized)
- Data is sent to Coolhand asynchronously
- Your application continues without interruption
For non-LLM endpoints, requests pass through unchanged with zero overhead.
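One plausible way to implement that pass-through is a cheap host check before any capture work happens. This sketch shows only the filtering logic; the `should_capture` name and the exact host set are assumptions, not Coolhand's real code:

```python
from urllib.parse import urlparse

# Hosts whose traffic would be captured; all other URLs pass through.
MONITORED_HOSTS = {"api.openai.com", "api.anthropic.com"}

def should_capture(url: str) -> bool:
    """Per-request check: only requests to LLM API hosts get logged."""
    return urlparse(url).hostname in MONITORED_HOSTS

print(should_capture("https://api.openai.com/v1/chat/completions"))  # True
print(should_capture("https://example.com/health"))                  # False
```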
Collect user feedback on LLM responses to improve your AI outputs. The FeedbackService lets you capture thumbs up/down ratings, explanations, and corrections.
Frontend Feedback Widget: For browser-based feedback collection, see coolhand-js - an accessible, lightweight JavaScript widget that leverages best UX practices to capture actionable user feedback on any AI output.
```python
from coolhand import Coolhand

# Initialize with your API key
ch = Coolhand(api_key='your-api-key')

# Submit positive feedback
ch.create_feedback({
    'llm_request_log_id': 12345,  # From Coolhand logs
    'like': True,
    'explanation': 'Very helpful response!'
})

# Using original output for fuzzy matching
ch.create_feedback({
    'original_output': 'The capital of France is London.',
    'like': False,
    'revised_output': 'The capital of France is Paris.'
})
```

| Field | Type | Required | Description |
|---|---|---|---|
| `like` | bool | Yes | Thumbs up (True) or down (False) |
| `llm_request_log_id` | int | No* | Coolhand log ID (exact match) |
| `llm_provider_unique_id` | str | No* | Provider's `x-request-id` (exact match) |
| `original_output` | str | No* | Original response text (fuzzy match) |
| `client_unique_id` | str | No* | Your internal identifier |
| `explanation` | str | No | Why the response was good/bad |
| `revised_output` | str | No | User's corrected version |
| `creator_unique_id` | str | No | ID of user providing feedback |
*At least one matching field is recommended to link feedback to the original request.
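To give a feel for how `original_output` fuzzy matching could link feedback back to a stored log entry, here is a sketch using `difflib`. The similarity threshold, the `best_fuzzy_match` helper, and the sample log rows are illustrative assumptions, not Coolhand's server-side algorithm:

```python
from difflib import SequenceMatcher

stored_logs = [  # hypothetical rows previously captured by the monitor
    {"id": 12345, "output": "The capital of France is London."},
    {"id": 12346, "output": "Paris has a population of about 2 million."},
]

def best_fuzzy_match(original_output, logs, threshold=0.8):
    """Return the log whose output is most similar, if above threshold."""
    best, best_ratio = None, 0.0
    for log in logs:
        ratio = SequenceMatcher(None, original_output, log["output"]).ratio()
        if ratio > best_ratio:
            best, best_ratio = log, ratio
    return best if best_ratio >= threshold else None

match = best_fuzzy_match("The capital of France is London.", stored_logs)
print(match["id"])  # 12345
```

Fuzzy matching tolerates minor differences (whitespace, truncation) between the text your UI displays and the text that was logged, at the cost of possible mismatches; when you have it, an exact identifier like `llm_request_log_id` is the safer link.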
```python
from coolhand import Coolhand

Coolhand(
    api_key='your-api-key',
    silent=False  # Enable verbose logging
)
```

Or via environment variable:

```shell
export COOLHAND_SILENT=false
```

Sign up for free at coolhandlabs.com to get your API key and start monitoring your LLM usage.
What you get:
- Complete LLM request and response logging
- Usage analytics and insights
- No credit card required to start
- API keys in request headers are automatically redacted
- No sensitive data is exposed in logs
- All data is sent via HTTPS to Coolhand servers
- Frontend (Feedback Collection Widget): coolhand-js - Frontend feedback widget for collecting user feedback on AI outputs
- Ruby: coolhand gem - Coolhand monitoring for Ruby applications
- Node.js: coolhand-node package - Coolhand monitoring for Node.js applications
- Questions? Create an issue
- Contribute? Submit a pull request
- Support? Visit coolhandlabs.com
Apache-2.0