benchmarking: Create meaningful dashboards for apmbench results #7868

@marclop

Description

The new benchmarking framework will also need new dashboards. Since we're rebuilding them from scratch, we can look at what could be improved compared to the existing observability-benchmarks dashboard.

Dashboard consumers

The main goal is to allow us (Go devs) to track the APM Server's performance over time and gauge the impact that code changes have on ingest performance, but we should also think about how to make this data useful for sizing recommendations derived from our long-running benchmarks.

I can think of a couple of roles that may look into the dashboards (or docs that are updated from the dashboards):

  • Support engineers looking into a case where the APM Server (and perhaps Elasticsearch as well) may not be appropriately sized for the number of events/s received.
  • Solution Architects helping a customer plan their Elastic Stack deployment.

Dashboard metrics

Each user profile will be interested in different metrics; however, all parties are likely interested in an overview summary that shows a higher-level view of the data, broken down by APM Server / Elasticsearch size:

  • Aggregate throughput (events/s).
  • Performance regressions.
  • Resource usage.
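To make the overview concrete, here is a minimal sketch of how such summary numbers could be derived from raw benchmark result documents. The field names (`server_size`, `events_ingested`, `duration_s`), the sizes, and the 10% regression threshold are all hypothetical assumptions for illustration, not the actual apmbench result schema.

```python
# Sketch: deriving overview metrics from hypothetical benchmark result docs.
# Field names and the regression threshold are assumptions, not the real schema.
from collections import defaultdict

def aggregate_throughput(results):
    """Aggregate throughput (events/s) per APM Server size."""
    totals = defaultdict(lambda: [0, 0.0])  # size -> [events, seconds]
    for r in results:
        t = totals[r["server_size"]]
        t[0] += r["events_ingested"]
        t[1] += r["duration_s"]
    return {size: events / seconds for size, (events, seconds) in totals.items()}

def is_regression(previous_eps, current_eps, tolerance=0.10):
    """Flag a regression when throughput drops by more than `tolerance`."""
    return current_eps < previous_eps * (1 - tolerance)

results = [
    {"server_size": "1g", "events_ingested": 120_000, "duration_s": 60.0},
    {"server_size": "1g", "events_ingested": 118_000, "duration_s": 60.0},
    {"server_size": "8g", "events_ingested": 900_000, "duration_s": 60.0},
]
print(aggregate_throughput(results))          # events/s keyed by server size
print(is_regression(2000.0, 1700.0))          # drop > 10% -> flagged
```

In practice these rollups would live as dashboard aggregations over the indexed benchmark results rather than in application code, but the breakdown-by-size and threshold-based regression check are the same idea.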

Server team

As the main apm-server contributors, we'll be interested in tracking how the APM Server's resource usage evolves over time; these metrics are tracked in #7817.
