Monitor and log LLM API calls from multiple providers (OpenAI, Anthropic, Google AI, and more) to the Coolhand analytics platform.
| Package | Environment | Purpose |
|---|---|---|
| `coolhand-node` | Node.js | Server-side monitoring and logging of LLM API calls |
| `coolhand` | Browser | Feedback widget for collecting user sentiment on AI outputs |
This package (coolhand-node) is the server-side SDK for monitoring LLM calls. For browser-based feedback collection widgets, see coolhand.
```bash
npm install coolhand-node
```

- Get API Key: Visit coolhandlabs.com and get an API key
- Install: `npm install coolhand-node`
- Initialize: Add `require('coolhand-node/auto-monitor')` to your main file
- Configure: Set `COOLHAND_API_KEY` in your environment variables
- Deploy: Your AI calls are now automatically monitored!
🔥 RECOMMENDED - Zero Configuration AI Monitoring
Note: Global monitoring works in Node.js server environments. For React frontend apps, see our React Integration Guide.
Set it and forget it! Monitor ALL AI API calls across your entire application with just one line of code, so you'll never be surprised by new LLM calls added to your production codebase.
```javascript
// Add this ONE line at the top of your main application file
require('coolhand-node/auto-monitor');

// That's it! ALL AI API calls are now automatically monitored:
// ✅ OpenAI SDK calls
// ✅ LangChain operations
// ✅ Anthropic API calls
// ✅ Custom AI libraries
// ✅ Direct fetch/axios requests to AI APIs
// ✅ ANY library making AI API calls

// NO code changes needed in your existing services!
```

Environment Variables:

```bash
# .env
COOLHAND_API_KEY=your_api_key_here
COOLHAND_DEBUG=false  # Set to true for debug mode
```

Or manual initialization:
```javascript
import { initializeGlobalMonitoring } from 'coolhand-node';

// Initialize once at application startup
initializeGlobalMonitoring({
  apiKey: 'your-api-key',
  debug: false
});

// Now ALL outbound AI API calls are automatically monitored
```

✨ Why Global Monitoring is Recommended:
- 🚫 Zero refactoring - No code changes to existing services
- 📊 Complete coverage - Monitors ALL AI libraries automatically
- 🔒 Security built-in - Automatic credential sanitization
- ⚡ Performance optimized - Negligible overhead
- 🛡️ Future-proof - Automatically captures new AI calls added by your team
For cases where you need explicit control over which AI calls are monitored:
```javascript
const Coolhand = require('coolhand-node');

// Initialize the monitor
const monitor = new Coolhand({
  apiKey: 'your-api-key',
  debug: false // Enable debug mode if needed
});
```

Collect feedback on LLM responses to improve model performance.
Frontend Feedback Widget: For browser-based feedback collection, see coolhand-js - an accessible, lightweight JavaScript widget that leverages best UX practices to capture actionable user feedback on any AI output.
```javascript
import { Coolhand } from 'coolhand-node';

const coolhand = new Coolhand({
  apiKey: 'your-api-key'
});

// Create feedback for an LLM response
const feedback = await coolhand.createFeedback({
  llm_request_log_id: 123,
  llm_provider_unique_id: 'req_xxxxxxx',
  client_unique_id: 'workorder-chat-456',
  creator_unique_id: 'user-789',
  original_output: 'Here is the original LLM response!',
  revised_output: 'Here is the human edit of the original LLM response.',
  explanation: 'Tone of the original response read like AI-generated open source README docs',
  like: true,
});
```

Field Guide: All fields are optional, but here's how to get the best results:
- `llm_request_log_id` 🎯 Exact Match - ID from the Coolhand API response when the original LLM request was logged. Provides exact matching.
- `llm_provider_unique_id` 🎯 Exact Match - The `x-request-id` from the LLM API response (e.g., `"req_xxxxxxx"`)
- `original_output` 🔍 Fuzzy Match - The original LLM response text. Provides fuzzy matching but isn't 100% reliable.
- `client_unique_id` 🔗 Your Internal Matcher - Connect to an identifier from your system for internal matching
- `revised_output` ⭐ Best Signal - End user revision of the LLM response. The highest-value data for improving quality scores.
- `explanation` 💬 Medium Signal - End user explanation of why the response was good or bad. Valuable qualitative data.
- `like` 👍 Low Signal - Boolean like/dislike. A lower-quality signal, but easy for users to provide.
- `creator_unique_id` 👤 User Tracking - Unique ID to match feedback to the end user who created it
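To make the field guide concrete, here is a minimal sketch of assembling a `createFeedback` payload from whatever signals your app has collected. The helper name `buildFeedbackPayload` is ours, not part of the SDK; it simply drops fields you never collected so only real data is sent:

```javascript
// Hypothetical helper (not part of coolhand-node): build a feedback
// payload from available signals, omitting fields that were not collected.
function buildFeedbackPayload({ logId, providerRequestId, clientId, userId,
                                original, revised, explanation, like }) {
  const payload = {
    llm_request_log_id: logId,                 // 🎯 exact match
    llm_provider_unique_id: providerRequestId, // 🎯 exact match (x-request-id)
    client_unique_id: clientId,                // 🔗 your internal matcher
    creator_unique_id: userId,                 // 👤 user tracking
    original_output: original,                 // 🔍 fuzzy match
    revised_output: revised,                   // ⭐ best signal
    explanation,                               // 💬 medium signal
    like,                                      // 👍 low signal
  };
  // Strip fields that were never collected.
  for (const key of Object.keys(payload)) {
    if (payload[key] === undefined) delete payload[key];
  }
  return payload;
}

// Example: only a thumbs-up and a client ID are available.
const payload = buildFeedbackPayload({ clientId: 'workorder-chat-456', like: true });
// payload → { client_unique_id: 'workorder-chat-456', like: true }
```

Even a payload with only `like` and `client_unique_id` is useful; richer fields like `revised_output` just raise the signal quality.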
📚 Framework Integration Guide - Complete documentation for all supported frameworks
Supported Frameworks: Works with any Node.js framework (Express.js, NestJS, Fastify, Koa, AWS Lambda, Vercel Functions), extensively tested with Next.js/T3 Stack
- Next.js / T3 Stack - ✅ Production-ready
- React Frontend - 🧪 Frontend integration patterns
- Express.js - 🧪 Needs testing
- NestJS - 🧪 Needs testing
- Fastify - 🧪 Needs testing
- Koa.js - 🧪 Needs testing
- Serverless (AWS Lambda, Vercel, Netlify) - 🧪 Needs testing
| Option | Type | Default | Description |
|---|---|---|---|
| `apiKey` | string | required | Your Coolhand API key for authentication |
| `silent` | boolean | `true` | Whether to suppress console output |
| `debug` | boolean | `false` | Enable debug mode (API calls will be mocked) |
| `patternsFile` | string | `undefined` | Path to custom API patterns file |
| Variable | Type | Default | Description |
|---|---|---|---|
| `COOLHAND_API_KEY` | string | required | Your Coolhand API key |
| `COOLHAND_SILENT` | `'true'` \| `'false'` | `'true'` | Whether to suppress console output |
| `COOLHAND_DEBUG` | `'true'` \| `'false'` | `'false'` | Enable debug mode |
| `COOLHAND_PATTERNS_FILE` | string | `undefined` | Path to custom API patterns file |
Same options as global monitoring, passed to the Coolhand constructor.
Full TypeScript support with exported types:
```typescript
import { Coolhand, CoolhandOptions, CoolhandCallData, CoolhandStats } from 'coolhand-node';

const monitor = new Coolhand({
  apiKey: 'your-api-key',
  silent: true,
  debug: false
});
```

The monitor captures:
- Request Data: Method, URL, headers, request body
- Response Data: Status code, headers, response body
- Metadata: Timestamp, protocol used
- LLM-Specific: Model used, token counts, temperature settings
Headers containing API keys are automatically sanitized for security.
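To illustrate what sanitization means in practice, here is a sketch of the general technique; the function and the header list are ours, not the SDK's actual implementation. HTTP header names are case-insensitive, so the comparison is done on lowercased keys:

```javascript
// Illustrative sketch of credential sanitization (not coolhand-node's
// internal code): replace values of sensitive headers with a placeholder.
const SENSITIVE_HEADERS = ['authorization', 'api-key', 'x-api-key'];

function sanitizeHeaders(headers) {
  const clean = {};
  for (const [name, value] of Object.entries(headers)) {
    clean[name] = SENSITIVE_HEADERS.includes(name.toLowerCase())
      ? '[REDACTED]'
      : value;
  }
  return clean;
}

const safe = sanitizeHeaders({
  Authorization: 'Bearer sk-secret',
  'content-type': 'application/json',
});
// safe.Authorization === '[REDACTED]'; content-type is left untouched
```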
The monitor works with any Node.js library that makes HTTP(S) requests to LLM APIs, including:
- OpenAI official SDK
- Anthropic SDK
- Google AI SDK
- LangChain
- Direct `fetch()` calls
- `https`/`http` module usage
- Any other HTTP client
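The reason any HTTP client is covered is that global monitoring wraps the request layer itself rather than individual SDKs. Here is a sketch of that monkey-patch pattern in the abstract (our illustration with a stub client, not coolhand-node's actual code):

```javascript
// Sketch of the wrap-and-delegate pattern global monitoring relies on:
// replace a client's request method with a wrapper that observes every
// outbound call, then delegates to the original unchanged.
function instrument(client, onCall) {
  const original = client.request.bind(client);
  client.request = (url, options) => {
    onCall({ url, method: (options && options.method) || 'GET' });
    return original(url, options); // delegate unchanged
  };
}

// Demo with a stub client standing in for http/https/fetch.
const seen = [];
const stubClient = { request: (url) => `response from ${url}` };
instrument(stubClient, (call) => seen.push(call));

const res = stubClient.request('https://api.openai.com/v1/chat/completions');
// seen[0].url is the intercepted URL; res is the untouched response
```

Because the wrapper delegates to the original method, calling code behaves exactly as before, which is why no changes are needed in existing services.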
Add support for custom AI providers by creating a patterns file:
```javascript
const monitor = new Coolhand({
  apiKey: 'your-api-key',
  patternsFile: './my-patterns.json'
});
```

Example patterns file (`my-patterns.json`):
```json
{
  "patterns": [
    {
      "name": "My Custom AI",
      "domains": ["api.mycustomai.com"],
      "paths": ["/v1/generate", "/v1/chat"],
      "headers": {
        "authorization": "[REDACTED]",
        "api-key": "[REDACTED]"
      }
    }
  ]
}
```

Track monitoring statistics in your application:
```javascript
const { getGlobalStats } = require('coolhand-node');

setInterval(() => {
  const stats = getGlobalStats();
  console.log(`AI Calls: ${stats.interceptedCalls}, Total Requests: ${stats.totalRequests}`);
}, 60000);
```

Enable debug mode for development and testing:
```javascript
// Global monitoring with debug mode
require('coolhand-node/auto-monitor'); // Set COOLHAND_DEBUG=true in .env

// Or instance-based with debug mode
const monitor = new Coolhand({
  apiKey: 'your-api-key',
  debug: true
});
```

When debug mode is enabled:
- API calls to Coolhand will be mocked
- Debug messages will show what would have been sent
- No data will be sent to Coolhand servers
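The mocking behavior described above can be sketched as a simple flag check in the logging path; this is our illustration of the idea, not the SDK's internals:

```javascript
// Illustrative sketch of debug-mode mocking: when debug is on, the
// payload is printed instead of being sent over the network.
function makeLogger({ debug, send }) {
  return (payload) => {
    if (debug) {
      console.log('[coolhand debug] would send:', JSON.stringify(payload));
      return { mocked: true };
    }
    return send(payload);
  };
}

let sentCount = 0;
const log = makeLogger({ debug: true, send: () => { sentCount += 1; } });
const result = log({ model: 'gpt-4o', tokens: 42 });
// result.mocked === true and sentCount stays 0: nothing left the process
```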
Access individual services for advanced use cases:
```javascript
import { PatternMatchingService, LoggingService } from 'coolhand-node';

// Use pattern matching independently
const patternService = new PatternMatchingService('./custom-patterns.json');
const match = patternService.matchesAPIPattern(requestOptions);

// Use logging service independently
const loggingService = new LoggingService({
  apiKey: 'your-key',
  silent: false,
  debug: false
});
```

🆓 Sign up for free at coolhandlabs.com to get your API key and start monitoring your LLM usage.
What you get:
- Complete LLM request and response logging
- Usage analytics and insights
- Feedback collection and quality scoring
- No credit card required to start
The monitor handles errors gracefully:
- Failed API logging attempts are logged to console but don't interrupt your application
- Invalid API keys will be reported but won't crash your app
- Network issues are handled with appropriate error messages
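The "fail without interrupting" behavior described above boils down to fire-and-forget logging. Here is a minimal sketch of that pattern (our illustration, not the SDK's code): failures are reported to the console but never propagate to the caller.

```javascript
// Sketch of graceful error handling around a logging transport:
// any failure is reported but swallowed, so the app keeps running.
async function safeLog(sendFn, payload) {
  try {
    await sendFn(payload);
    return true;
  } catch (err) {
    console.error('coolhand logging failed:', err.message);
    return false; // the application continues normally
  }
}

// A failing transport does not throw into application code.
const failingSend = async () => { throw new Error('network down'); };
safeLog(failingSend, { model: 'gpt-4o' }).then((ok) => {
  // ok === false; no exception escaped into the caller
});
```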
- API keys in request headers are automatically redacted
- No sensitive data is exposed in logs
- Debug mode prevents data from being sent to external servers
- Framework Integration Guide - Complete setup for all frameworks. (Well, some are more complete than others.)
- Global Monitoring Guide - Advanced global monitoring features. Even easier than asking your favorite LLM coding tool to do it for you.
- React Integration Guide - Frontend integration patterns. We won't ask about how you are planning to keep your API keys secret.
- Frontend (Feedback Collection Widget): coolhand-js - Frontend feedback widget for collecting user feedback on AI outputs
- Ruby: coolhand gem - Coolhand monitoring for Ruby applications
- Python: coolhand package - Coolhand monitoring for Python applications
- Questions? Create a discussion
- Issues? Report bugs
- Contribute? Submit a pull request