
Test server: controlling parallelism via dynamic delays  #389

@oschaaf

Description


For simulating overloaded backends, it would be useful to have a means of
increasing the injected delay as the number of requests being handled grows.

There's the downstream_rq_active gauge, which looks like it could serve as an
input to this. One way to achieve it would be to add configuration settings that
control the computation, so that latency is derived from the delta between the
current and target value of a gauge [1].

The final delay would then be (current-gauge-value - target-gauge-value) * delay-step-us, clamped at zero so that no delay is injected while the gauge is at or below its target.
To trigger the fault filter, the test server extension could synthesize an x-envoy-fault-delay-request: <computed value> header.

[1] example

          - name: test-server
            config:
              stats-based-delay:
                - gauge: "downstream_rq_active"
                  target-gauge-value: 100
                  delay-step-us: 1000
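A minimal C++ sketch of the proposed computation, under the assumption that the delta is taken as current minus target (so the delay grows with load) and clamped at zero. The function names `computeDelayUs` and `faultDelayHeaderValue` are hypothetical, not real Nighthawk APIs; note that per the Envoy fault filter docs, `x-envoy-fault-delay-request` carries a value in milliseconds, so the microsecond result is converted:

```cpp
#include <cstdint>
#include <string>

// Hypothetical sketch: compute the injected delay from a gauge reading.
// The parameter names mirror the config keys above (target-gauge-value,
// delay-step-us); none of this is an existing Nighthawk interface.
uint64_t computeDelayUs(uint64_t current_gauge_value,
                        uint64_t target_gauge_value,
                        uint64_t delay_step_us) {
  // No delay while the gauge is at or below its target.
  if (current_gauge_value <= target_gauge_value) {
    return 0;
  }
  return (current_gauge_value - target_gauge_value) * delay_step_us;
}

// Value to synthesize into the x-envoy-fault-delay-request header;
// the Envoy fault filter interprets that header in milliseconds.
std::string faultDelayHeaderValue(uint64_t delay_us) {
  return std::to_string(delay_us / 1000);
}
```

With the example config (target 100, step 1000us), a gauge reading of 150 active requests would yield a 50000us delay, synthesized as `x-envoy-fault-delay-request: 50`.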

It's tempting to generalize this more by integrating the CEL expression engine, but it looks like that is an order
of magnitude more work (I may be wrong about that, but that is my gut feeling after skimming some code).

Labels: enhancement (New feature or request)