This repository was archived by the owner on Sep 17, 2025. It is now read-only.

trace: clashing RPCs share the same traceID #182

@odeke-em

Description

I built code for a demo for "OpenCensus for Python gRPC developers" at https://github.com/orijtech/opencensus-for-grpc-python-developers and a blog post at https://medium.com/@orijtech/opencensus-for-python-grpc-developers-9e460e054395

Given the code below:

Server

import grpc
import os
import time
from concurrent import futures

import defs_pb2_grpc as proto
import defs_pb2 as pb

# Tracing related imports
from opencensus.trace.exporters import stackdriver_exporter
from opencensus.trace.exporters.transports.background_thread import BackgroundThreadTransport
from opencensus.trace.samplers import always_on
from opencensus.trace.tracer import Tracer
from opencensus.trace.ext.grpc import server_interceptor

# Create the exporters:
# 1. Stackdriver
stackdriverExporter = stackdriver_exporter.StackdriverExporter(
            project_id=os.environ.get('OCGRPC_PROJECTID', 'census-demos'),
            transport=BackgroundThreadTransport)

class CapitalizeServer(proto.FetchServicer):
    def __init__(self, *args, **kwargs):
        super(CapitalizeServer, self).__init__()

    def Capitalize(self, request, context):
        tracer = Tracer(sampler=always_on.AlwaysOnSampler(), exporter=stackdriverExporter)
        with tracer.span(name='Capitalize') as span:
            data = request.data
            span.add_annotation('Data in', len=len(data))
            return pb.Payload(data=data.upper())

def main():
    # Setup and start the gRPC server with the OpenCensus integration/interceptor
    tracer_interceptor = server_interceptor.OpenCensusServerInterceptor(
            always_on.AlwaysOnSampler(), stackdriverExporter)

    server = grpc.server(
            futures.ThreadPoolExecutor(max_workers=10),
            interceptors=(tracer_interceptor,))
    proto.add_FetchServicer_to_server(CapitalizeServer(), server)
    server.add_insecure_port('[::]:9778')
    server.start()

    try:
        while True:
            time.sleep(60 * 60)
    except KeyboardInterrupt:
        server.stop(0)

if __name__ == '__main__':
    main()

Client

import grpc

import defs_pb2_grpc as proto
import defs_pb2 as pb

def main():
    channel = grpc.insecure_channel('localhost:9778')
    stub = proto.FetchStub(channel)

    while True:
        lineIn = input('> ')
        capitalized = stub.Capitalize(pb.Payload(data=bytes(lineIn, encoding='utf-8')))
        print('< %s\n'%(capitalized.data.decode('utf-8')))

if __name__ == '__main__':
    main()

However, while running multiple clients to simulate traffic against a popular service, I noticed that many distinct traces were bunched together when viewed in the Stackdriver UI:

[Screenshot: Stackdriver Trace UI, 2018-05-24 3:23 PM]

Bouncing around ideas with @sebright and @songy23, @sebright postulated that perhaps the same traceID was being shared, which, alas, does seem to be the problem, as per our log examinations:

[Screenshot: server logs, 2018-05-24 3:28 PM]

[Screenshot: server logs, 2018-05-24 3:27 PM]

This bunches together totally different traces from different clients that are running concurrently, which seems like a bug?
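For context on the expected behavior: each incoming RPC should be assigned its own trace ID, rather than reusing one minted once at server startup. Below is a minimal, self-contained sketch contrasting the two patterns. The `handle_rpc_*` helpers are hypothetical stand-ins, not the actual OpenCensus interceptor internals; the point is only that a trace ID created once and shared collapses concurrent requests into a single trace, while a fresh ID per request keeps them distinct:

```python
import threading
import uuid

# Buggy pattern: one trace ID minted once and reused by every RPC.
shared_trace_id = uuid.uuid4().hex

def handle_rpc_shared(results):
    results.append(shared_trace_id)  # every request reports the same ID

# Expected pattern: a fresh 32-hex-digit trace ID per incoming RPC.
def handle_rpc_fresh(results):
    results.append(uuid.uuid4().hex)

shared, fresh = [], []
threads = [threading.Thread(target=handle_rpc_shared, args=(shared,))
           for _ in range(5)]
threads += [threading.Thread(target=handle_rpc_fresh, args=(fresh,))
            for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(set(shared)))  # 1  -> all RPCs collapsed into one trace
print(len(set(fresh)))   # 5  -> each RPC gets its own trace
```

With the shared ID, Stackdriver has no way to tell the five concurrent requests apart, which matches the bunched-up traces in the screenshots above.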
