
router: Avoid large up-front allocations #421

Merged: olix0r merged 2 commits into master from ver/lazy-cache-queue on Jan 31, 2020
Conversation

@olix0r (Member) commented Jan 31, 2020

We initially opted to pre-allocate the delay queue to avoid allocation
at runtime. However, practically, router capacities are much higher
than their typical usage; so we incur an extra ~1MB+ of allocation that
will never really be used (for outbound routers).

This change initializes the router's DelayQueue with only a single slot
by default. The capacity will be increased at runtime as needed.

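The sizing trade-off described above can be sketched with the standard library's `BinaryHeap` standing in for the proxy's `DelayQueue` (which lives in an external crate). The `ROUTER_CAPACITY` constant and both helper names here are hypothetical, for illustration only:

```rust
use std::collections::BinaryHeap;

// Hypothetical configured capacity; in practice a router's configured
// capacity is far larger than its typical number of live entries.
const ROUTER_CAPACITY: usize = 10_000;

// Eager strategy: reserve every slot up front, even if most go unused.
fn eager_queue() -> BinaryHeap<u64> {
    BinaryHeap::with_capacity(ROUTER_CAPACITY)
}

// Lazy strategy (what this change does): start with a single slot and
// let the collection grow, amortized O(1), as entries are inserted.
fn lazy_queue() -> BinaryHeap<u64> {
    BinaryHeap::with_capacity(1)
}

fn main() {
    let eager = eager_queue();
    let mut lazy = lazy_queue();

    // The eager queue pays for all slots immediately.
    assert!(eager.capacity() >= ROUTER_CAPACITY);

    // The lazy queue allocates almost nothing until items arrive...
    assert!(lazy.capacity() < ROUTER_CAPACITY);
    for t in 0..100 {
        lazy.push(t);
    }
    // ...and its capacity grows on demand to fit actual usage.
    assert!(lazy.capacity() >= 100);
}
```

Under this strategy, routers that see little traffic never pay for capacity they do not use; busy routers pay a handful of amortized reallocations instead.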
@olix0r olix0r requested a review from a team January 31, 2020 16:12
@olix0r olix0r self-assigned this Jan 31, 2020
@kleimkuhler (Contributor) left a comment
So `Cache.values` is created with a size of 0 and therefore does not allocate until an item is inserted. Should `expirations` also just use `DelayQueue::default()` so that both allocate for the first time when an item is inserted?

@olix0r (Member, Author) commented Jan 31, 2020

@kleimkuhler the thought was that we'll ~never create routers that have no services. Or that will be pretty rare. But it's not a particularly important optimization.

@kleimkuhler (Contributor) commented:

> @kleimkuhler the thought was that we'll ~never create routers that have no services. Or that will be pretty rare. But it's not a particularly important optimization.

Yep, makes sense. It was more coming from the fact that we already have an allocation to make when a service is added, since `values` defaults to 0 capacity. So it may make sense to change `values` to be `IndexMap::with_capacity(1)` as well.
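A minimal sketch of that suggestion, using the standard library's `HashMap` as a stand-in for the proxy's `IndexMap` (the key/value types and the `new_values` helper are hypothetical):

```rust
use std::collections::HashMap;

// Stand-in for the router cache's `values` map. Starting at capacity 1
// (rather than the 0-capacity default) means the map and the
// single-slot delay queue both allocate at construction, so adding the
// first service needs no further allocation for either structure.
fn new_values() -> HashMap<String, u32> {
    HashMap::with_capacity(1)
}

fn main() {
    let mut values = new_values();
    // At least one slot is reserved up front.
    assert!(values.capacity() >= 1);
    values.insert("route-a".to_string(), 1);
    assert_eq!(values.len(), 1);
}
```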

@kleimkuhler (Contributor) left a comment

Either way though looks good!

@olix0r olix0r merged commit 9faff18 into master Jan 31, 2020
@olix0r olix0r deleted the ver/lazy-cache-queue branch January 31, 2020 21:25
olix0r added a commit to linkerd/linkerd2 that referenced this pull request Feb 4, 2020
This release fixes a bug in the proxy's logging subsystem that could
cause the proxy to consume memory until the process is OOMKilled,
especially when the proxy was configured to log diagnostic information.

The proxy also now properly emits `grpc-status` headers when signaling
proxy errors to gRPC clients.

This release upgrades the proxy's Rust version and its `http` crate
dependency to address RUSTSEC-2019-0033 and RUSTSEC-2019-0034; the
`prost` crate dependency has also been patched to address RUSTSEC-2020-0002.

---

* internal: Introduce a locking middleware (linkerd/linkerd2-proxy#408)
* Update to Rust 1.40 with new Cargo.lock format (linkerd/linkerd2-proxy#410)
* Update http to v0.1.21 (linkerd/linkerd2-proxy#412)
* internal: Split retry, http-classify, and http-metrics (linkerd/linkerd2-proxy#409)
* Actually update http to v0.1.21 (linkerd/linkerd2-proxy#413)
* patch `prost` 0.5 to pick up security fix (linkerd/linkerd2-proxy#414)
* metrics: Make Counter & Gauge atomic (linkerd/linkerd2-proxy#415)
* Set grpc-status headers on dispatch errors (linkerd/linkerd2-proxy#416)
* trace: update `tracing-subscriber` to 0.2.0-alpha.4 (linkerd/linkerd2-proxy#418)
* discover: Warn on discovery error (linkerd/linkerd2-proxy#422)
* router: Avoid large up-front allocations (linkerd/linkerd2-proxy#421)
* errors: Set correct HTTP version on responses (linkerd/linkerd2-proxy#424)
* app: initialize tracing prior to parsing env vars (linkerd/linkerd2-proxy#425)
* trace: update tracing-subscriber to 0.2.0-alpha.6 (linkerd/linkerd2-proxy#423)
adleong pushed a commit to linkerd/linkerd2 that referenced this pull request Feb 4, 2020