This repository was archived by the owner on Sep 17, 2025. It is now read-only.

Memory increase due to too many traces #590

@ChrisTerBeke

Description


Hi,

We're using this lib in several APIs running on GKE and exporting traces to StackDriver. We noticed that when sending all traces to StackDriver, traces are collected faster than they can be exported. As a result, the list of trace spans that still need to be sent grows over time, to the point where Kubernetes kills our pod for using too much memory.

While this is expected behavior given the current code, I wonder whether there is a cleaner way to handle it, for example by dropping spans once the list reaches a maximum size and logging a warning or error. This would prevent apps from 'leaking' memory.
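To illustrate the idea, here is a minimal sketch of a bounded span buffer. This is not the library's actual API; the class name, sizes, and methods are hypothetical, and it simply uses a `deque` with `maxlen` so that the oldest spans are dropped (with a warning) once the cap is reached:

```python
import logging
from collections import deque

logger = logging.getLogger(__name__)


class BoundedSpanBuffer:
    """Hypothetical bounded buffer for pending spans.

    Once the buffer is full, the oldest span is silently evicted by
    the deque and a warning is logged, so memory stays bounded even
    when spans are produced faster than they can be exported.
    """

    def __init__(self, max_spans=2048):
        self._spans = deque(maxlen=max_spans)
        self._dropped = 0  # count of spans discarded because the buffer was full

    def add(self, span):
        if len(self._spans) == self._spans.maxlen:
            # deque(maxlen=...) evicts the oldest entry on append
            self._dropped += 1
            logger.warning(
                "Span buffer full; dropping oldest span (%d dropped so far)",
                self._dropped,
            )
        self._spans.append(span)

    def drain(self):
        """Return all buffered spans (e.g. for export) and reset the buffer."""
        spans = list(self._spans)
        self._spans.clear()
        return spans
```

A fixed cap like this trades completeness for stability: some traces are lost under sustained overload, but the process no longer grows without bound until the OOM killer steps in.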

What are your thoughts about this?

p.s. this is probably the same as reported in #334, but it wasn't really resolved there.
