Although the ring hashing load balancer ("ketama") was primarily implemented
for the upcoming redis support, as a bonus this commit adds HTTP consistent
hash routing as well, since it's pretty simple and it's been requested several
times.
@lyft/network-team this is missing docs, but in case you want to start reviewing, the code changes are done. Will report back when the docs are done.
/**
 * @return Optional<uint64_t> an optional hash value to route on given a set of HTTP headers.
 *         A hash value might not be returned if for example the specified HTTP header does not

const std::vector<HostPtr>& hosts) {
  log_trace("ring hash: building ring");
  if (hosts.empty()) {
    ring_.clear();
Fix: this needs to go above; add a test.
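The empty-host check the comment refers to could be hoisted to the top of the rebuild, roughly as in this minimal sketch (`RingEntry`, `buildRing`, and the string-based virtual-node hashing are hypothetical stand-ins, not Envoy's actual types):

```cpp
#include <algorithm>
#include <cstdint>
#include <functional>
#include <string>
#include <vector>

// Hypothetical sketch of the ring-building flow: return early for an empty
// host set before doing any hashing work, then build and sort the ring.
struct RingEntry {
  uint64_t hash_;
  std::string host_;
};

std::vector<RingEntry> buildRing(const std::vector<std::string>& hosts,
                                 uint64_t hashes_per_host) {
  std::vector<RingEntry> ring;
  if (hosts.empty()) {
    return ring;  // nothing to map; callers see an empty ring
  }
  std::hash<std::string> hasher;
  for (const std::string& host : hosts) {
    for (uint64_t i = 0; i < hashes_per_host; ++i) {
      // Derive each virtual node's position from the host name plus an index.
      ring.push_back({hasher(host + "_" + std::to_string(i)), host});
    }
  }
  std::sort(ring.begin(), ring.end(),
            [](const RingEntry& a, const RingEntry& b) { return a.hash_ < b.hash_; });
  return ring;
}
```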
@lyft/network-team I added docs. This is done and ready for full review.
"header_name": "..."
}

header_name
I'd cover the case of what happens if a given header is not present in the request.
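The behavior worth documenting here is that the hash is optional: when the configured header is absent, no hash is produced and the load balancer falls back to a non-hash host pick. A minimal illustration, using `std::optional` and a plain map as stand-ins for Envoy's own `Optional<>` and header map types:

```cpp
#include <cstdint>
#include <functional>
#include <map>
#include <optional>
#include <string>

// Hypothetical sketch: produce a hash only when the configured header exists.
// std::optional and std::map stand in for Envoy's Optional<> and HeaderMap.
std::optional<uint64_t> hashKey(const std::map<std::string, std::string>& headers,
                                const std::string& header_name) {
  auto it = headers.find(header_name);
  if (it == headers.end()) {
    return std::nullopt;  // header absent: no hash, caller falls back
  }
  return std::hash<std::string>{}(it->second);
}
```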
The ring/modulo hash load balancer implements consistent hashing to upstream hosts. The algorithm is
based on mapping all hosts onto a circle such that any host set changes only affect 1/N requests.
This technique is also commonly known as "ketama" hashing. The consistent hashing load balancer is
Maybe link here for "ketama" hashing?
The ring/modulo hash load balancer implements consistent hashing to upstream hosts. The algorithm is
based on mapping all hosts onto a circle such that any host set changes only affect 1/N requests.
I think this is not 100% correct: specifically, only a single addition or removal of a host from the host set will affect 1/N requests, not any arbitrary host set change. Rephrase it a bit here?
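The property being discussed can be demonstrated with a toy ring (the 64-virtual-node count and all names here are illustrative, not Envoy's implementation): dropping one host only remaps the keys that host owned, because every other key's nearest clockwise entry on the ring is unchanged.

```cpp
#include <algorithm>
#include <cstdint>
#include <functional>
#include <string>
#include <utility>
#include <vector>

// Toy consistent-hash lookup: place 64 virtual nodes per host on a ring and
// route each key to the first node at or after the key's hash (wrapping).
std::string pickHost(const std::vector<std::string>& hosts, const std::string& key) {
  std::vector<std::pair<uint64_t, std::string>> ring;
  std::hash<std::string> h;
  for (const std::string& host : hosts) {
    for (int i = 0; i < 64; ++i) {
      ring.emplace_back(h(host + "#" + std::to_string(i)), host);
    }
  }
  std::sort(ring.begin(), ring.end());
  std::pair<uint64_t, std::string> probe{h(key), std::string()};
  auto it = std::lower_bound(ring.begin(), ring.end(), probe);
  return it == ring.end() ? ring.front().second : it->second;
}
```

Keys assigned to a surviving host keep their assignment when a single host is removed, which is exactly the ~1/N disruption the doc text should scope to single-host changes.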
namespace Upstream {
/**
nit: of all or any context information
source/common/router/router.h
// Upstream::LoadBalancerContext
const Optional<uint64_t>& hashKey() const override { return hash_; }

Optional<uint64_t> hash_;

// standpoint and duplicates the regeneration computation. In the future we might want
// to generate the rings centrally and then just RCU them out to each thread. This is
// sufficient for getting started.
uint64_t min_ring_size = runtime.snapshot().getInteger("upstream.ring_hash.min_ring_size", 1024);
uint64_t hashes_per_host = 1;
if (hosts.size() < min_ring_size) {
  hashes_per_host = min_ring_size / hosts.size();
A bit simpler: hashes_per_host = (min_ring_size + hosts.size() - 1) / hosts.size(); (and remove the if below).
I just spent a few minutes trying to reason about whether this is correct or not and I couldn't easily do it, so I'm going to leave the if statement. It's easy to understand.
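For comparison, here is a sketch of both formulations side by side (function names are mine, not from the patch). The practical difference: the reviewer's ceiling division guarantees at least min_ring_size ring entries, while the floor-division if-form can fall slightly short when the division is not exact.

```cpp
#include <cstdint>

// The patch's formulation: default 1, floor division when hosts < min size.
uint64_t hashesPerHostIf(uint64_t min_ring_size, uint64_t num_hosts) {
  uint64_t hashes_per_host = 1;
  if (num_hosts < min_ring_size) {
    hashes_per_host = min_ring_size / num_hosts;  // floor division
  }
  return hashes_per_host;
}

// The reviewer's one-liner: ceiling division, no branch needed.
uint64_t hashesPerHostCeil(uint64_t min_ring_size, uint64_t num_hosts) {
  return (min_ring_size + num_hosts - 1) / num_hosts;
}
```

They agree when the division is exact (1024 / 8 = 128) and when hosts outnumber the minimum (both give 1), but with e.g. 3 hosts the floor form yields 341 hashes per host, a 1023-entry ring just under the minimum, while the ceiling form yields 342.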
mechanism is to hash via :ref:`HTTP header values <config_http_conn_man_route_table_hash_policy>` in
the :ref:`HTTP router filter <arch_overview_http_routing>`. The default minimum ring size is
specified in :ref:`runtime <config_cluster_manager_cluster_runtime_ring_hash>`. The minimum ring
size governs the replication factor for each host in the ring. For example, if the minimum ring
uint64_t hashes_per_host = 1;
if (hosts.size() < min_ring_size) {
  hashes_per_host = min_ring_size / hosts.size();
This should be hashes_per_host = 160 (or hashes_per_host = 160 * weight) in order to keep it consistent (and interoperable) with other implementations of ketama.
Note: This will consume quite a bit of memory with huge clusters.
@PiotrSikora since this was already submitted, can you open a GH issue and we can discuss there? It's not clear to me that we have to explicitly support 160 points per server (I'm not a hashing expert but that seems arbitrary to me). Another option is to have a different type that we expose in the config called "ketama" which uses the same underlying code but does 160 points per server.
"lb_type" : {
  "type" : "string",
  "enum" : ["round_robin", "least_request", "random"]
  "enum" : ["round_robin", "least_request", "random", "ring_hash"]
This should be exposed as "ketama" (or at the very least "consistent_hash"), otherwise it's not obvious what the underlying mapping represents.
Also, we might want to add other consistent hashes in the future (like the less memory-hungry "jump" algorithm).
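For reference, a hedged sketch of what a v1 cluster entry using the new value might look like (the cluster name and host addresses are made up; the `lb_type` spelling follows the schema fragment above):

```json
{
  "name": "backend",
  "connect_timeout_ms": 250,
  "type": "strict_dns",
  "lb_type": "ring_hash",
  "hosts": [
    {"url": "tcp://backend1:8080"},
    {"url": "tcp://backend2:8080"}
  ]
}
```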
fixes #197