
common/tcp: generic TCP connection pool for extensions#3747

Merged
htuch merged 6 commits into envoyproxy:master from turbinelabs:stephan/tcp-conn-pool
Jul 10, 2018

Conversation

@zuercher
Member

Provides an analog to Envoy::Http::ConnectionPool::Instance for generic
TCP connections. This is intended for use by extensions that desire
access to connection-pooled hosts from a Cluster load balancer.

Risk Level: low (not presently used)
Testing: unit tests
Docs Changes: n/a -- not user facing
Release Notes: n/a

Signed-off-by: Stephan Zuercher stephan@turbinelabs.io

Signed-off-by: Stephan Zuercher <stephan@turbinelabs.io>
@mattklein123
Member

@zuercher high-level question: Is the idea here that it's up to the application to decide when to give TCP connections back to the pool and whether they can be reused? I haven't looked at the code yet; I'm just trying to understand where this fits in and how much duplication there is with existing code (and wondering whether it would somehow be possible to use this for Redis as well, cc @danielhochman).

@zuercher
Member Author

Yes, the idea is to allow extensions to do connection re-use in the same way as the HTTP connection pools. It should be reusable for any protocol that has discrete requests and responses.

The TCP conn pool is a stripped-down copy of the HTTP1 conn pool. In principle, one could rewrite the HTTP1 conn pool in terms of this (layering the StreamDecoder/Encoder semantics on top). That would be a riskier change, obviously.

It might also make sense to remove ClusterManager::tcpConnForCluster and have those callers use a connection pool (and just never reuse the connections).
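The reuse model described above — hand out a connection for one discrete request/response exchange, then return it to the pool — can be sketched in heavily simplified, standalone form. This is an illustrative sketch only; none of these names (`ConnPool`, `Connection`, `newConnection`, `release`) correspond to Envoy's actual `Tcp::ConnectionPool` interface:

```cpp
// Illustrative sketch of the checkout/release pattern a generic TCP
// connection pool provides. Not Envoy's real API.
#include <cstddef>
#include <memory>
#include <string>
#include <utility>
#include <vector>

// Stand-in for an upstream TCP connection.
struct Connection {
  std::string host;
  bool open{true};
};

class ConnPool {
public:
  explicit ConnPool(std::string host) : host_(std::move(host)) {}

  // Hand out an idle connection if one exists; otherwise create a new one.
  std::unique_ptr<Connection> newConnection() {
    if (!idle_.empty()) {
      auto conn = std::move(idle_.back());
      idle_.pop_back();
      return conn;
    }
    ++created_;
    return std::make_unique<Connection>(Connection{host_, true});
  }

  // After one discrete request/response exchange, the caller returns the
  // connection. Healthy connections go back on the idle list for reuse;
  // closed ones are simply dropped.
  void release(std::unique_ptr<Connection> conn) {
    if (conn->open) {
      idle_.push_back(std::move(conn));
    }
  }

  std::size_t created() const { return created_; }

private:
  std::string host_;
  std::vector<std::unique_ptr<Connection>> idle_;
  std::size_t created_{0};
};
```

Any protocol with framed, discrete requests and responses can check out a connection per exchange and release it afterward, so a second request reuses the first request's connection rather than opening a new one; a connection that was closed mid-exchange never re-enters the pool.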

@zuercher
Member Author

So I think I missed some stuff. I should probably write an integration test that uses this connection pool to do something trivial. Feel free to hold off giving it more than high-level thought until then.

…on owner, integration test

Signed-off-by: Stephan Zuercher <stephan@turbinelabs.io>
@zuercher
Member Author

zuercher commented Jun 29, 2018

I think it's functional now that *ahem* it actually connects. I added an integration test as well.

@mattklein123 mattklein123 self-assigned this Jul 2, 2018

std::string tcp_conn_pool_config;

// Trivial Filter that obtains connections from a TCP connection pool each time onData is called
Contributor


Optional idea: I'd like to see the basic TCP proxy session use the connection pool even if there are no performance gains yet because connections are never reused. That way we can eventually add prefetched connections and health checks without a big change-over. What do you think?

Member Author


Absolutely. I'd like to do that separately, if there's no pressing reason to do it in one go.

Contributor


Cool. I look forward to the change, and I'm all for landing the big chunk of new code separately from pretty massive functional changes to TCP proxying :-)

Member


I just looked through. This all seems reasonable, but my main complaint is that it's basically all code almost directly copied from the HTTP connection pool case. How can we share this code in the future? Can we open some follow-up issues on things we might like to do here? Some things I can think of:

  1. Use this code for tcp_proxy per @alyssawilk (+1 from me)
  2. Use this code in redis_proxy (by passing the right LB context I'm guessing this would be very easy)
  3. Somehow build the HTTP connection pool on top of this one (at least the HTTP/1.1 one)?
  4. Other ideas?

Member Author


I'm happy to open up tickets to do those and work through them, interleaved with the other stuff I'm working on.

I'm least sure of 2, because I don't really understand the protocol very well, but it doesn't seem like it should be a problem.

I think the trickiest part of building an HTTP1 conn pool on top of this TCP conn pool is making the CodecClient have the same lifetime as the connection.

Member


OK. Can we open a tracking ticket for all of this? Also looks like we need a master merge.

Member Author


Sure. I'll make a ticket and take a pass to make sure this is in shape.

Member Author


See #3818.

zuercher added 2 commits July 3, 2018 16:05
Signed-off-by: Stephan Zuercher <stephan@turbinelabs.io>
Signed-off-by: Stephan Zuercher <stephan@turbinelabs.io>
zuercher added 2 commits July 9, 2018 13:03
Signed-off-by: Stephan Zuercher <stephan@turbinelabs.io>
Signed-off-by: Stephan Zuercher <stephan@turbinelabs.io>
@zuercher
Member Author

@mattklein123 OK to merge this, or should I find another maintainer to look at it?

@htuch htuch merged commit c3d2f37 into envoyproxy:master Jul 10, 2018
@zuercher zuercher deleted the stephan/tcp-conn-pool branch July 19, 2018 20:54