Stats don't include warmup and calibration runs #52

@mdboom

Description

When pystats are collected inside the benchmarking infrastructure, stats collection is turned off during pyperf's warmup and calibration steps. This means that by the time the actual benchmarking happens and stats are turned on, the code may already be specialized, traced, and moved to tier 2, so we miss the trace creation attempt stats. For example, this is from json_dumps -- we are executing traces, but apparently creating none, which doesn't make sense until you realize they were created during "warmup":

|                       |   Count | Ratio |
|-----------------------|--------:|------:|
| Optimization attempts |       0 |       |
| Traces created        |       0 |       |
| Traces executed       |  30,720 |       |
| Uops executed         | 376,140 |       |

This effect has existed for some time and has been undercounting our specialization attempts as well, but it is most obviously visible for trace creation.

I think the easy fix is to always turn stats collection on, even during the warmup phases. That will inflate the overall stats counts, but the result is probably more realistic. A more involved change to pyperf would be to disable warmup and calibration entirely when doing a stats collection run.
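To make the "always on" option concrete, here is a minimal sketch of what the benchmark runner could do. It assumes a CPython build configured with `--enable-pystats`, which exposes `sys._stats_on()` and `sys._stats_off()`; the `run_with_stats` helper is hypothetical (not part of pyperf), and the `hasattr` guard lets the same wrapper run on regular builds:

```python
import sys

def run_with_stats(func, *args, **kwargs):
    """Run func with pystats enabled for its whole duration,
    including any warmup iterations it performs internally.

    Hypothetical helper, not a pyperf API. sys._stats_on() and
    sys._stats_off() only exist in CPython builds configured
    with --enable-pystats, so we guard with hasattr to stay
    runnable on ordinary builds.
    """
    stats_available = hasattr(sys, "_stats_on")
    if stats_available:
        sys._stats_on()
    try:
        return func(*args, **kwargs)
    finally:
        if stats_available:
            sys._stats_off()
```

With this shape, warmup and calibration iterations would be counted too, so trace-creation attempts during warmup show up in the totals instead of silently disappearing.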

@markshannon: Does this make sense? How much do we care about this problem?
