
benchmark result: rename iterations to repetitions #1398

@jgehrcke

Description


Micro-benchmark frameworks often perform I iterations of a given operation and measure the total time by comparing the start and end times of that entire run (that is, there are no further fine-grained timing insights for anything happening between those two points in time).

For example, Google Benchmark calls this concept "iterations": https://github.com/google/benchmark/blob/main/docs/user_guide.md#output-formats

From Conbench's point of view, we encourage users to submit 'multi-sample results', or maybe we should call them 'multi-repetition results'. Via the API and UI, however, we expose this concept as "iterations". Our "iterations" is not the same as their "iterations".

It would be a great communication/intuitiveness win to rename our "iterations" to "repetitions". There is a clear hierarchy: one repetition may comprise many iterations.
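To illustrate that hierarchy, here is a minimal sketch (not Conbench or Google Benchmark code; the function name and parameters are hypothetical): each repetition times a block of many iterations as a single start/end measurement, so N repetitions yield N duration samples.

```python
import time

def measure(func, iterations=1000, repetitions=6):
    """Hypothetical sketch: one repetition = one timed block of
    `iterations` calls. Returns `repetitions` duration samples."""
    samples = []
    for _ in range(repetitions):
        start = time.perf_counter()
        for _ in range(iterations):
            func()
        # One coarse-grained sample per repetition; nothing is
        # timed individually inside the iteration loop.
        samples.append(time.perf_counter() - start)
    return samples

durations = measure(lambda: sum(range(100)))
print(len(durations))  # one duration measurement per repetition
```

In this vocabulary, the inner loop count is what Google Benchmark calls "iterations", while the N samples submitted to Conbench would correspond to "repetitions".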

Quoting myself from #579:

I would then choose L small enough for t/L to still be robust, but allowing for doing N (e.g. 6) repetitions within a reasonable time frame, leading to N duration measurements.

@alistaire47 on #579 (comment):

And yeah, we should have some consistent opinion on how to translate repetitions vs iterations in this context; vocabulary is causing confusion.

Metadata

Labels

    UX - terminology (user-facing terminology: case, benchmark result, ...)
    api JSON schemas and spec (related to technical API specification, including JSON object schemas)
