
BUG : MAX 100 HTTP2 streams limit between two Envoy Proxy #3076

@prune998

Description


When using HTTP/2 streams (e.g. gRPC), there is a limit of 100 streams per TCP connection between two Envoy proxies.

This situation shows up, for example, when you have two pods with an Istio sidecar (which uses Envoy) talking to each other over gRPC.

Envoy creates ONE TCP connection to the upstream server per CPU. In my tests with 2 CPUs --> 2 TCP connections.
Envoy then multiplexes the HTTP/2 gRPC calls onto those two TCP connections.
Here are the scenarios for N clients, on servers with 1 CPU (so only one TCP connection between the Envoys):
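To make the arithmetic in the scenarios below concrete, here is a small illustrative sketch (not Envoy code) of how N client streams are admitted when a proxy multiplexes them onto a number of upstream TCP connections, each capped at some maximum of concurrent HTTP/2 streams:

```python
# Illustration only: stream admission under a per-connection cap.
# `connections` = upstream TCP connections (one per CPU in my tests),
# `max_streams` = concurrent HTTP/2 streams allowed per connection.
def admitted_and_queued(n_streams: int, connections: int, max_streams: int):
    capacity = connections * max_streams
    admitted = min(n_streams, capacity)
    queued = n_streams - admitted  # long-lived streams stay queued forever
    return admitted, queued

# One Envoy hop, 1 CPU, default cap of 1024: 500 streams all fit.
print(admitted_and_queued(500, 1, 1024))  # -> (500, 0)

# Two-Envoy hop where the inter-proxy connection is capped at 100:
# with 150 long-lived streams, 50 are queued and never progress.
print(admitted_and_queued(150, 1, 100))   # -> (100, 50)
```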

one Envoy

N client --> 1TCP/1Stream * N --> Envoy --> 1TCP/N Stream --> server

Everything works as expected; I can go up to 1024 streams, per Envoy's default. If I try more, I get some 503s, which is the expected behaviour.
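The 1024 ceiling in this single-Envoy case matches Envoy's default circuit-breaker `max_requests` value (that it is the cause here is my assumption). If needed, it can be raised per cluster; a v2 config sketch, with cluster name and threshold values purely illustrative:

```yaml
# Sketch: raising the request/stream circuit breaker on a cluster.
clusters:
- name: grpc_backend          # illustrative name
  connect_timeout: 1s
  http2_protocol_options: {}  # use HTTP/2 to the upstream
  circuit_breakers:
    thresholds:
    - priority: DEFAULT
      max_requests: 4096      # default is 1024, the ceiling seen above
```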

two Envoys

N client --> 1TCP/1Stream * N --> Envoy --> 1TCP/N Stream --> Envoy --> 1TCP/N Stream --> server

When N > 100, Envoy starts queuing the remaining streams. As these streams are long-lived, they never end, blocking the communication.
What happens is:
N client --> 1TCP/1Stream * N --> Envoy --> 1TCP/100 Stream --> Envoy Sidecar --> 1TCP/100 Stream --> server

Which means that if N > 100, I lose connections.

I can't find why the limit is 100, but I can reproduce this situation every time.
If I have 2 CPUs, I see 2 TCP connections with 100 streams in each...
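If the 100-stream ceiling is the `SETTINGS_MAX_CONCURRENT_STREAMS` value advertised on the inter-Envoy connection (my assumption, not confirmed), the knob that controls it in the v2 API is `max_concurrent_streams` in `Http2ProtocolOptions`. A sketch of raising it on the cluster pointing at the other Envoy:

```yaml
# Sketch, under the assumption above: raise the advertised HTTP/2
# concurrent-stream cap for the inter-proxy connection.
clusters:
- name: upstream_envoy        # illustrative name
  connect_timeout: 1s
  http2_protocol_options:
    max_concurrent_streams: 1024
```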

I opened an issue in Istio's repo: istio/istio#4940

I also first commented on issue #2941, which in fact is not related.
