mcp-otel-go

OpenTelemetry (OTel) tracing and metrics for Go MCP servers. One function call instruments every method in a go-sdk server, following the OTel semantic conventions for MCP.

The go-sdk doesn't include observability out of the box, and existing OpenTelemetry integrations for MCP (MCPcat, Shinzo Labs) are TypeScript-only. This is the Go equivalent.

Who is this for?

You're building MCP servers in Go with the official go-sdk. You already have OTel infrastructure (Jaeger, Grafana Tempo, Prometheus, Datadog) and you want your MCP servers reporting into it. You shouldn't have to write custom instrumentation for every tool handler. Nothing else exists for Go today.

Install

go get github.com/olgasafonova/mcp-otel-go/mcpotel

Usage

server := mcp.NewServer(impl, opts)
server.AddReceivingMiddleware(mcpotel.Middleware(mcpotel.Config{
    ServiceName:    "my-mcp-server",
    ServiceVersion: "1.0.0",
}))

Three lines. Every incoming MCP method call now produces an OTel span and a duration histogram.

Two error surfaces, both covered

MCP tool errors split into two categories, and most instrumentation only catches one.

Protocol errors happen when the tool doesn't exist or params are invalid. The go-sdk returns these as normal Go errors. Easy to catch.

Application errors happen when your tool handler returns an error (database down, API timeout, bad input). The go-sdk wraps these into CallToolResult{IsError: true} and returns nil for the error. Your middleware sees a "successful" call. Your dashboard shows green. Your users see failures.

This middleware catches both. It inspects CallToolResult.IsError after every tools/call and marks the span as an error with the original error message.

What gets collected

Data                       | Example
---------------------------|------------------------------------------
Span per method call       | tools/call miro_create_sticky
Method name                | mcp.method.name = "tools/call"
Tool name                  | gen_ai.tool.name = "miro_create_sticky"
Resource URI               | mcp.resource.uri = "miro://board/123"
Prompt name                | gen_ai.prompt.name = "summarize"
Session ID                 | mcp.session.id = "abc123"
Error type (both surfaces) | error.type = "*errors.errorString"
Duration histogram         | mcp.server.operation.duration (seconds)

All attribute names follow the OTel semantic conventions for MCP.

What does NOT get collected

Privacy-safe by default. The middleware never records:

  • Tool arguments or return values
  • Resource content
  • Environment variables or file paths
  • IP addresses or user-identifiable information
  • Full error messages (only Go type names like *json.SyntaxError, not the message text)

Only method names, tool names, timing, error type names, and session IDs. Resource URIs are recorded by default but can be redacted (see below).

Privacy controls

Error messages from tool handlers can contain PII (e.g., "user john@example.com not found"). Resource URIs can contain user-identifiable paths (e.g., "user://john.doe/profile"). The middleware provides two redaction hooks to control what reaches your telemetry backend.

Error redaction (on by default)

By default, only the Go error type name is recorded (e.g., *json.SyntaxError), not the full error message. This is safe because type names are developer-defined and never contain user data.

// Default behavior: records "*json.SyntaxError", not "invalid field: email john@example.com"
mcpotel.Middleware(mcpotel.Config{
    ServiceName: "my-server",
})

Opt in to full error messages only if your errors are known to be PII-free:

mcpotel.Middleware(mcpotel.Config{
    ServiceName: "my-server",
    RedactError: mcpotel.ErrorMessageFull,
})

Or provide your own classifier:

mcpotel.Middleware(mcpotel.Config{
    ServiceName: "my-server",
    RedactError: func(err error) string {
        // Classify by error type, strip PII, or return a fixed string
        return "internal_error"
    },
})

URI redaction (opt-in)

Resource URIs are recorded in full by default. If your URIs contain user-identifiable paths, enable scheme-only recording:

mcpotel.Middleware(mcpotel.Config{
    ServiceName: "my-server",
    RedactURI:   mcpotel.URISchemeOnly, // "file:///home/john/secret.txt" → "file://"
})

Data controller responsibility

This middleware is a data processor. You, as the MCP server operator, are the data controller. You decide:

  • Which telemetry backend receives the data
  • How long spans and metrics are retained
  • Whether error messages or URIs need redaction for your use case
  • Compliance with GDPR, CCPA, or other applicable regulations

Session IDs are random protocol identifiers, not user identifiers. They become pseudonymous data only if your telemetry backend correlates them with user identity through other means.

Config

type Config struct {
    ServiceName    string                   // Required. OTel service.name
    ServiceVersion string                   // Optional. service.version
    TracerProvider trace.TracerProvider     // Optional. Defaults to otel.GetTracerProvider()
    MeterProvider  metric.MeterProvider     // Optional. Defaults to otel.GetMeterProvider()
    Filter         func(method string) bool // Optional. Return false to skip a method
    RedactError    func(err error) string   // Optional. Defaults to Go type name only
    RedactURI      func(uri string) string  // Optional. Nil = full URI recorded
}

Filtering methods

Skip instrumentation for noisy methods:

mcpotel.Middleware(mcpotel.Config{
    ServiceName: "my-server",
    Filter: func(method string) bool {
        return method != "notifications/initialized"
    },
})

Bring your own exporter

No opinions on where telemetry goes. Configure your providers at startup as usual:

exporter, err := otlptracegrpc.New(ctx)
if err != nil {
    log.Fatal(err)
}
tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exporter))
otel.SetTracerProvider(tp)

// The middleware picks up the global provider automatically
server.AddReceivingMiddleware(mcpotel.Middleware(mcpotel.Config{
    ServiceName: "my-server",
}))

Or pass providers explicitly:

server.AddReceivingMiddleware(mcpotel.Middleware(mcpotel.Config{
    ServiceName:    "my-server",
    TracerProvider: myCustomTP,
    MeterProvider:  myCustomMP,
}))

Try it locally

The examples/otlp/ demo exports traces and metrics over gRPC to localhost:4317. Any OTLP-compatible backend listening on that port will receive them; no code changes needed.

Generate telemetry (same for every backend):

# Terminal — run the OTLP example via MCP Inspector
npx @modelcontextprotocol/inspector go run ./examples/otlp

Connect in the Inspector UI and call the greet tool a few times. Then check your backend for:

  • tools/call greet spans with mcp.method.name, gen_ai.tool.name, mcp.session.id attributes
  • mcp.server.operation.duration histogram

Set OTEL_EXPORTER_OTLP_ENDPOINT to override the default localhost:4317.

otel-tui (no Docker, no setup)

Terminal UI that receives OTLP directly. Fastest way to see traces.

brew install ymtdzzz/tap/otel-tui   # macOS
# or: go install github.com/ymtdzzz/otel-tui@latest
otel-tui                             # listens on :4317

Jaeger (traces)

Web UI with trace waterfall diagrams and dependency graphs.

docker run -d -p 16686:16686 -p 4317:4317 jaegertracing/jaeger:latest

Open http://localhost:16686. Select the example-server service to see traces.

Grafana + Tempo + Prometheus (traces + metrics)

Full observability stack with dashboards. Create a docker-compose.yml:

services:
  tempo:
    image: grafana/tempo:latest
    command: ["-config.file=/etc/tempo.yaml"]
    volumes:
      - ./tempo.yaml:/etc/tempo.yaml
    ports:
      - "4317:4317"

  prometheus:
    image: prom/prometheus:latest
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    environment:
      - GF_AUTH_ANONYMOUS_ENABLED=true
      - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin

Add a minimal tempo.yaml:

server:
  http_listen_port: 3200

distributor:
  receivers:
    otlp:
      protocols:
        grpc:
          endpoint: "0.0.0.0:4317"

storage:
  trace:
    backend: local
    local:
      path: /tmp/tempo/blocks

Then start the stack:

docker compose up -d

Open http://localhost:3000, add Tempo as a data source (http://tempo:3200), and explore traces.

Datadog

Set your API key and site, then use the Datadog Agent as the OTLP collector:

docker run -d \
  -e DD_API_KEY=<your-api-key> \
  -e DD_SITE=datadoghq.com \
  -e DD_OTLP_CONFIG_RECEIVER_PROTOCOLS_GRPC_ENDPOINT=0.0.0.0:4317 \
  -p 4317:4317 \
  gcr.io/datadoghq/agent:latest

Traces and metrics appear in the Datadog APM dashboard.

Honeycomb

Cloud-native observability with a generous free tier. No Docker needed — send OTLP directly:

OTEL_EXPORTER_OTLP_ENDPOINT=https://api.honeycomb.io \
OTEL_EXPORTER_OTLP_HEADERS="x-honeycomb-team=<your-api-key>" \
go run ./examples/otlp

Grafana Cloud

Free tier includes traces and metrics. Get your OTLP endpoint and token from the Grafana Cloud portal:

OTEL_EXPORTER_OTLP_ENDPOINT=https://otlp-gateway-<zone>.grafana.net/otlp \
OTEL_EXPORTER_OTLP_HEADERS="Authorization=Basic <base64-encoded-credentials>" \
go run ./examples/otlp

Dependencies

  • github.com/modelcontextprotocol/go-sdk v1.3.0+
  • go.opentelemetry.io/otel v1.34.0+
  • No exporter dependencies. You bring your own.

License

MIT
