Description
Bug Template
Title: grpc-json transcoder always returns 200 OK with response body [] when rpc return type is stream
Description:
The grpc-json transcoder always returns 200 OK with response body '[]' when the rpc return type is a stream. If the gRPC service returns an error or throws an exception, it still returns 200 with body '[]'. The response content-type is always 'application/grpc' (for both successful and unsuccessful calls).
Expected behaviour: the response content type should be 'application/json' and errors should be reflected in the HTTP status code.
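Concretely, I would expect the transcoder to translate the trailing grpc-status into an HTTP status using the canonical mapping documented in google/rpc/code.proto. A minimal sketch in Python (the helper function is hypothetical, not Envoy code):

```python
# Expected grpc-status -> HTTP status mapping for the codes that appear in the
# repro below, per the canonical table in google/rpc/code.proto.
# This describes the behaviour I would expect, NOT what Envoy currently does.
GRPC_TO_HTTP = {
    0: 200,   # OK
    2: 500,   # UNKNOWN (e.g. an uncaught exception in the service)
    16: 401,  # UNAUTHENTICATED
}

def expected_http_status(grpc_status: int) -> int:
    """Hypothetical helper: the HTTP status a transcoded response should carry."""
    # Anything unmapped is treated as an internal error.
    return GRPC_TO_HTTP.get(grpc_status, 500)
```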
Repro steps:
I came across this issue while writing a blog post about Envoy and grpc transcoding. Please just focus on the ListReservations rpc for reproduction.
Complete working example: transcoding-grpc-to-http-json
reservation_service.proto file:
syntax = "proto3";
package reservations.v1;
// Creates separate .java files for message and service
// instead of creating them inside the class defined by
// java_outer_classname
option java_multiple_files = true;
// Class that will contain descriptor
option java_outer_classname = "ReservationServiceProto";
// The package where the generated classes will reside
option java_package = "nl.toefel.reservations.v1";
// required to add annotations to the rpc calls
import "google/api/annotations.proto";
import "google/protobuf/empty.proto";
service ReservationService {
  rpc CreateReservation(CreateReservationRequest) returns (Reservation) {
    option (google.api.http) = {
      post: "/v1/reservations"
      body: "reservation"
    };
  }
  rpc GetReservation(GetReservationRequest) returns (Reservation) {
    // {id} is mapped into the GetReservationRequest.id field!
    option (google.api.http) = {
      get: "/v1/reservations/{id}"
    };
  }
  // lists all the reservations, use query params on venue or timestamp to filter the result set.
  rpc ListReservations(ListReservationsRequest) returns (stream Reservation) {
    // use query parameters to specify filters, example: ?venue=UtrechtHomeoffice
    // these query parameters will be automatically mapped to the ListReservationsRequest object
    option (google.api.http) = {
      get: "/v1/reservations"
    };
  }
  rpc DeleteReservation(DeleteReservationRequest) returns (google.protobuf.Empty) {
    // {id} is mapped into the DeleteReservationRequest.id field!
    option (google.api.http) = {
      delete: "/v1/reservations/{id}"
    };
  }
}
message Reservation {
  string id = 1;
  string title = 2;
  string venue = 3;
  string room = 4;
  string timestamp = 5;
  repeated Person attendees = 6;
}
message Person {
  string ssn = 1;
  string firstName = 2;
  string lastName = 3;
}
message CreateReservationRequest {
  Reservation reservation = 2;
}
message CreateReservationResponse {
  Reservation reservation = 1;
}
message GetReservationRequest {
  string id = 1;
}
message ListReservationsRequest {
  string venue = 1;
  string timestamp = 2;
  string room = 3;
  Attendees attendees = 4;
  message Attendees {
    repeated string lastName = 1;
  }
}
message DeleteReservationRequest {
  string id = 1;
}
ListReservations Java implementation
@Override
public void listReservations(ListReservationsRequest request, StreamObserver<Reservation> responseObserver) {
  System.out.println("listReservations() called with " + request);
  if ("error".equals(request.getRoom())) {
    responseObserver.onError(Status.UNAUTHENTICATED.asRuntimeException());
  } else if ("throw".equals(request.getRoom())) {
    throw Status.UNAUTHENTICATED.asRuntimeException();
  } else {
    // nothing, an empty response should yield []
    responseObserver.onCompleted();
  }
}
Generating the service descriptor
protoc -I. -Ibuild/extracted-include-protos/main --include_imports \
--include_source_info \
--descriptor_set_out=reservation_service_definition.pb \
src/main/proto/reservation_service.proto
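As a side check, one can confirm the generated descriptor set actually contains the service name that the transcoder config references below. A hedged Python sketch (requires the `protobuf` pip package; `services_in` is a name I made up):

```python
# Sketch: list the fully-qualified services inside a compiled descriptor set,
# e.g. the reservation_service_definition.pb produced by the protoc command above.
from google.protobuf import descriptor_pb2

def services_in(descriptor_bytes: bytes) -> list:
    """Return 'package.Service' names found in a serialized FileDescriptorSet."""
    fds = descriptor_pb2.FileDescriptorSet.FromString(descriptor_bytes)
    return [
        "%s.%s" % (f.package, s.name)
        for f in fds.file
        for s in f.service
    ]

# Usage (path matches the protoc invocation above):
#   with open("reservation_service_definition.pb", "rb") as fh:
#       print(services_in(fh.read()))
# The output should include "reservations.v1.ReservationService".
```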
Running Envoy using Docker
docker run -it --rm --name envoy --network="host" \
-v "$(pwd)/reservation_service_definition.pb:/data/reservation_service_definition.pb:ro" \
-v "$(pwd)/envoy-config.yml:/etc/envoy/envoy.yaml:ro" \
envoyproxy/envoy
Successful call
curl http://localhost:51051/v1/reservations -v
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 51051 (#0)
> GET /v1/reservations HTTP/1.1
> Host: localhost:51051
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< content-type: application/grpc
< grpc-status: 0
< x-envoy-upstream-service-time: 173
< date: Sun, 11 Nov 2018 21:17:38 GMT
< server: envoy
< transfer-encoding: chunked
<
* Connection #0 to host localhost left intact
[]
Call in which the gRPC service throws an exception
curl http://localhost:51051/v1/reservations?room=throw -v
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 51051 (#0)
> GET /v1/reservations?room=throw HTTP/1.1
> Host: localhost:51051
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< content-type: application/grpc
< grpc-status: 2
< x-envoy-upstream-service-time: 10
< date: Sun, 11 Nov 2018 21:18:11 GMT
< server: envoy
< transfer-encoding: chunked
<
* Connection #0 to host localhost left intact
[]
Call in which the gRPC service calls responseObserver.onError(UNAUTHENTICATED)
curl http://localhost:51051/v1/reservations?room=error -v
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 51051 (#0)
> GET /v1/reservations?room=error HTTP/1.1
> Host: localhost:51051
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< content-type: application/grpc
< grpc-status: 16
< x-envoy-upstream-service-time: 10
< date: Sun, 11 Nov 2018 21:18:54 GMT
< server: envoy
< transfer-encoding: chunked
<
* Connection #0 to host localhost left intact
[]
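To summarize the three transcripts above, here is a short offline check (header values copied verbatim from the curl output; the `problems` helper is just my illustration of what a client would flag):

```python
# Observed (name, http_status, content_type, grpc_status) tuples, copied from
# the three curl transcripts above.
observed = [
    ("success", 200, "application/grpc", 0),
    ("throw",   200, "application/grpc", 2),
    ("error",   200, "application/grpc", 16),
]

def problems(http_status, content_type, grpc_status):
    """Hypothetical checker: list what is wrong with one transcoded response."""
    issues = []
    if content_type != "application/json":
        issues.append("content-type should be application/json")
    if grpc_status != 0 and http_status == 200:
        issues.append("gRPC error hidden behind HTTP 200")
    return issues

for name, http_status, ct, grpc in observed:
    print(name, problems(http_status, ct, grpc))
```

Even the successful call is flagged (wrong content-type), and both error calls additionally hide the failure behind HTTP 200.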
Admin and Stats Output:
/clusters
grpc-backend-services::default_priority::max_connections::1024
grpc-backend-services::default_priority::max_pending_requests::1024
grpc-backend-services::default_priority::max_requests::1024
grpc-backend-services::default_priority::max_retries::3
grpc-backend-services::high_priority::max_connections::1024
grpc-backend-services::high_priority::max_pending_requests::1024
grpc-backend-services::high_priority::max_requests::1024
grpc-backend-services::high_priority::max_retries::3
grpc-backend-services::added_via_api::false
grpc-backend-services::0.0.0.0:0::cx_active::3
grpc-backend-services::0.0.0.0:0::cx_connect_fail::0
grpc-backend-services::0.0.0.0:0::cx_total::3
grpc-backend-services::0.0.0.0:0::rq_active::0
grpc-backend-services::0.0.0.0:0::rq_error::1
grpc-backend-services::0.0.0.0:0::rq_success::2
grpc-backend-services::0.0.0.0:0::rq_timeout::0
grpc-backend-services::0.0.0.0:0::rq_total::3
grpc-backend-services::0.0.0.0:0::health_flags::healthy
grpc-backend-services::0.0.0.0:0::weight::1
grpc-backend-services::0.0.0.0:0::region::
grpc-backend-services::0.0.0.0:0::zone::
grpc-backend-services::0.0.0.0:0::sub_zone::
grpc-backend-services::0.0.0.0:0::canary::false
grpc-backend-services::0.0.0.0:0::success_rate::-1
/server_info
{
"version": "7f1bbfaceb44c51e0c7734c7c0abe0afa00f39f6/1.9.0-dev/Clean/RELEASE",
"state": "LIVE",
"epoch": 0,
"uptime_current_epoch": "216s",
"uptime_all_epochs": "216s"
}
/stats
cluster.grpc-backend-services.bind_errors: 0
cluster.grpc-backend-services.circuit_breakers.default.cx_open: 0
cluster.grpc-backend-services.circuit_breakers.default.rq_open: 0
cluster.grpc-backend-services.circuit_breakers.default.rq_pending_open: 0
cluster.grpc-backend-services.circuit_breakers.default.rq_retry_open: 0
cluster.grpc-backend-services.circuit_breakers.high.cx_open: 0
cluster.grpc-backend-services.circuit_breakers.high.rq_open: 0
cluster.grpc-backend-services.circuit_breakers.high.rq_pending_open: 0
cluster.grpc-backend-services.circuit_breakers.high.rq_retry_open: 0
cluster.grpc-backend-services.external.upstream_rq_200: 3
cluster.grpc-backend-services.external.upstream_rq_2xx: 3
cluster.grpc-backend-services.external.upstream_rq_completed: 3
cluster.grpc-backend-services.http2.header_overflow: 0
cluster.grpc-backend-services.http2.headers_cb_no_stream: 0
cluster.grpc-backend-services.http2.rx_messaging_error: 0
cluster.grpc-backend-services.http2.rx_reset: 0
cluster.grpc-backend-services.http2.too_many_header_frames: 0
cluster.grpc-backend-services.http2.trailers: 0
cluster.grpc-backend-services.http2.tx_reset: 0
cluster.grpc-backend-services.lb_healthy_panic: 0
cluster.grpc-backend-services.lb_local_cluster_not_ok: 0
cluster.grpc-backend-services.lb_recalculate_zone_structures: 0
cluster.grpc-backend-services.lb_subsets_active: 0
cluster.grpc-backend-services.lb_subsets_created: 0
cluster.grpc-backend-services.lb_subsets_fallback: 0
cluster.grpc-backend-services.lb_subsets_removed: 0
cluster.grpc-backend-services.lb_subsets_selected: 0
cluster.grpc-backend-services.lb_zone_cluster_too_small: 0
cluster.grpc-backend-services.lb_zone_no_capacity_left: 0
cluster.grpc-backend-services.lb_zone_number_differs: 0
cluster.grpc-backend-services.lb_zone_routing_all_directly: 0
cluster.grpc-backend-services.lb_zone_routing_cross_zone: 0
cluster.grpc-backend-services.lb_zone_routing_sampled: 0
cluster.grpc-backend-services.max_host_weight: 0
cluster.grpc-backend-services.membership_change: 1
cluster.grpc-backend-services.membership_healthy: 1
cluster.grpc-backend-services.membership_total: 1
cluster.grpc-backend-services.original_dst_host_invalid: 0
cluster.grpc-backend-services.retry_or_shadow_abandoned: 0
cluster.grpc-backend-services.update_attempt: 46
cluster.grpc-backend-services.update_empty: 0
cluster.grpc-backend-services.update_failure: 0
cluster.grpc-backend-services.update_no_rebuild: 0
cluster.grpc-backend-services.update_success: 46
cluster.grpc-backend-services.upstream_cx_active: 3
cluster.grpc-backend-services.upstream_cx_close_notify: 0
cluster.grpc-backend-services.upstream_cx_connect_attempts_exceeded: 0
cluster.grpc-backend-services.upstream_cx_connect_fail: 0
cluster.grpc-backend-services.upstream_cx_connect_timeout: 0
cluster.grpc-backend-services.upstream_cx_destroy: 0
cluster.grpc-backend-services.upstream_cx_destroy_local: 0
cluster.grpc-backend-services.upstream_cx_destroy_local_with_active_rq: 0
cluster.grpc-backend-services.upstream_cx_destroy_remote: 0
cluster.grpc-backend-services.upstream_cx_destroy_remote_with_active_rq: 0
cluster.grpc-backend-services.upstream_cx_destroy_with_active_rq: 0
cluster.grpc-backend-services.upstream_cx_http1_total: 0
cluster.grpc-backend-services.upstream_cx_http2_total: 3
cluster.grpc-backend-services.upstream_cx_idle_timeout: 0
cluster.grpc-backend-services.upstream_cx_max_requests: 0
cluster.grpc-backend-services.upstream_cx_none_healthy: 0
cluster.grpc-backend-services.upstream_cx_overflow: 0
cluster.grpc-backend-services.upstream_cx_protocol_error: 0
cluster.grpc-backend-services.upstream_cx_rx_bytes_buffered: 121
cluster.grpc-backend-services.upstream_cx_rx_bytes_total: 268
cluster.grpc-backend-services.upstream_cx_total: 3
cluster.grpc-backend-services.upstream_cx_tx_bytes_buffered: 0
cluster.grpc-backend-services.upstream_cx_tx_bytes_total: 929
cluster.grpc-backend-services.upstream_flow_control_backed_up_total: 0
cluster.grpc-backend-services.upstream_flow_control_drained_total: 0
cluster.grpc-backend-services.upstream_flow_control_paused_reading_total: 0
cluster.grpc-backend-services.upstream_flow_control_resumed_reading_total: 0
cluster.grpc-backend-services.upstream_rq_200: 3
cluster.grpc-backend-services.upstream_rq_2xx: 3
cluster.grpc-backend-services.upstream_rq_active: 0
cluster.grpc-backend-services.upstream_rq_cancelled: 0
cluster.grpc-backend-services.upstream_rq_completed: 3
cluster.grpc-backend-services.upstream_rq_maintenance_mode: 0
cluster.grpc-backend-services.upstream_rq_pending_active: 0
cluster.grpc-backend-services.upstream_rq_pending_failure_eject: 0
cluster.grpc-backend-services.upstream_rq_pending_overflow: 0
cluster.grpc-backend-services.upstream_rq_pending_total: 0
cluster.grpc-backend-services.upstream_rq_per_try_timeout: 0
cluster.grpc-backend-services.upstream_rq_retry: 0
cluster.grpc-backend-services.upstream_rq_retry_overflow: 0
cluster.grpc-backend-services.upstream_rq_retry_success: 0
cluster.grpc-backend-services.upstream_rq_rx_reset: 0
cluster.grpc-backend-services.upstream_rq_timeout: 0
cluster.grpc-backend-services.upstream_rq_total: 3
cluster.grpc-backend-services.upstream_rq_tx_reset: 0
cluster.grpc-backend-services.version: 0
cluster_manager.active_clusters: 1
cluster_manager.cluster_added: 1
cluster_manager.cluster_modified: 0
cluster_manager.cluster_removed: 0
cluster_manager.cluster_updated: 0
cluster_manager.cluster_updated_via_merge: 0
cluster_manager.update_merge_cancelled: 0
cluster_manager.update_out_of_merge_window: 0
cluster_manager.warming_clusters: 0
filesystem.flushed_by_timer: 2
filesystem.reopen_failed: 0
filesystem.write_buffered: 6
filesystem.write_completed: 3
filesystem.write_total_buffered: 192
http.admin.downstream_cx_active: 1
http.admin.downstream_cx_delayed_close_timeout: 0
http.admin.downstream_cx_destroy: 0
http.admin.downstream_cx_destroy_active_rq: 0
http.admin.downstream_cx_destroy_local: 0
http.admin.downstream_cx_destroy_local_active_rq: 0
http.admin.downstream_cx_destroy_remote: 0
http.admin.downstream_cx_destroy_remote_active_rq: 0
http.admin.downstream_cx_drain_close: 0
http.admin.downstream_cx_http1_active: 1
http.admin.downstream_cx_http1_total: 1
http.admin.downstream_cx_http2_active: 0
http.admin.downstream_cx_http2_total: 0
http.admin.downstream_cx_idle_timeout: 0
http.admin.downstream_cx_overload_disable_keepalive: 0
http.admin.downstream_cx_protocol_error: 0
http.admin.downstream_cx_rx_bytes_buffered: 360
http.admin.downstream_cx_rx_bytes_total: 2320
http.admin.downstream_cx_ssl_active: 0
http.admin.downstream_cx_ssl_total: 0
http.admin.downstream_cx_total: 1
http.admin.downstream_cx_tx_bytes_buffered: 0
http.admin.downstream_cx_tx_bytes_total: 9933
http.admin.downstream_cx_upgrades_active: 0
http.admin.downstream_cx_upgrades_total: 0
http.admin.downstream_flow_control_paused_reading_total: 0
http.admin.downstream_flow_control_resumed_reading_total: 0
http.admin.downstream_rq_1xx: 0
http.admin.downstream_rq_2xx: 3
http.admin.downstream_rq_3xx: 0
http.admin.downstream_rq_4xx: 3
http.admin.downstream_rq_5xx: 0
http.admin.downstream_rq_active: 1
http.admin.downstream_rq_completed: 6
http.admin.downstream_rq_http1_total: 7
http.admin.downstream_rq_http2_total: 0
http.admin.downstream_rq_idle_timeout: 0
http.admin.downstream_rq_non_relative_path: 0
http.admin.downstream_rq_overload_close: 0
http.admin.downstream_rq_response_before_rq_complete: 0
http.admin.downstream_rq_rx_reset: 0
http.admin.downstream_rq_too_large: 0
http.admin.downstream_rq_total: 7
http.admin.downstream_rq_tx_reset: 0
http.admin.downstream_rq_ws_on_non_ws_route: 0
http.admin.rs_too_large: 0
http.async-client.no_cluster: 0
http.async-client.no_route: 0
http.async-client.rq_direct_response: 0
http.async-client.rq_redirect: 0
http.async-client.rq_total: 0
http.grpc_json.downstream_cx_active: 0
http.grpc_json.downstream_cx_delayed_close_timeout: 0
http.grpc_json.downstream_cx_destroy: 3
http.grpc_json.downstream_cx_destroy_active_rq: 0
http.grpc_json.downstream_cx_destroy_local: 0
http.grpc_json.downstream_cx_destroy_local_active_rq: 0
http.grpc_json.downstream_cx_destroy_remote: 3
http.grpc_json.downstream_cx_destroy_remote_active_rq: 0
http.grpc_json.downstream_cx_drain_close: 0
http.grpc_json.downstream_cx_http1_active: 0
http.grpc_json.downstream_cx_http1_total: 3
http.grpc_json.downstream_cx_http2_active: 0
http.grpc_json.downstream_cx_http2_total: 0
http.grpc_json.downstream_cx_idle_timeout: 0
http.grpc_json.downstream_cx_overload_disable_keepalive: 0
http.grpc_json.downstream_cx_protocol_error: 0
http.grpc_json.downstream_cx_rx_bytes_buffered: 0
http.grpc_json.downstream_cx_rx_bytes_total: 304
http.grpc_json.downstream_cx_ssl_active: 0
http.grpc_json.downstream_cx_ssl_total: 0
http.grpc_json.downstream_cx_total: 3
http.grpc_json.downstream_cx_tx_bytes_buffered: 0
http.grpc_json.downstream_cx_tx_bytes_total: 584
http.grpc_json.downstream_cx_upgrades_active: 0
http.grpc_json.downstream_cx_upgrades_total: 0
http.grpc_json.downstream_flow_control_paused_reading_total: 0
http.grpc_json.downstream_flow_control_resumed_reading_total: 0
http.grpc_json.downstream_rq_1xx: 0
http.grpc_json.downstream_rq_2xx: 3
http.grpc_json.downstream_rq_3xx: 0
http.grpc_json.downstream_rq_4xx: 0
http.grpc_json.downstream_rq_5xx: 0
http.grpc_json.downstream_rq_active: 0
http.grpc_json.downstream_rq_completed: 3
http.grpc_json.downstream_rq_http1_total: 3
http.grpc_json.downstream_rq_http2_total: 0
http.grpc_json.downstream_rq_idle_timeout: 0
http.grpc_json.downstream_rq_non_relative_path: 0
http.grpc_json.downstream_rq_overload_close: 0
http.grpc_json.downstream_rq_response_before_rq_complete: 0
http.grpc_json.downstream_rq_rx_reset: 0
http.grpc_json.downstream_rq_too_large: 0
http.grpc_json.downstream_rq_total: 3
http.grpc_json.downstream_rq_tx_reset: 0
http.grpc_json.downstream_rq_ws_on_non_ws_route: 0
http.grpc_json.no_cluster: 0
http.grpc_json.no_route: 0
http.grpc_json.rq_direct_response: 0
http.grpc_json.rq_redirect: 0
http.grpc_json.rq_total: 3
http.grpc_json.rs_too_large: 0
http.grpc_json.tracing.client_enabled: 0
http.grpc_json.tracing.health_check: 0
http.grpc_json.tracing.not_traceable: 0
http.grpc_json.tracing.random_sampling: 0
http.grpc_json.tracing.service_forced: 0
listener.0.0.0.0_51051.downstream_cx_active: 0
listener.0.0.0.0_51051.downstream_cx_destroy: 3
listener.0.0.0.0_51051.downstream_cx_total: 3
listener.0.0.0.0_51051.http.grpc_json.downstream_rq_1xx: 0
listener.0.0.0.0_51051.http.grpc_json.downstream_rq_2xx: 3
listener.0.0.0.0_51051.http.grpc_json.downstream_rq_3xx: 0
listener.0.0.0.0_51051.http.grpc_json.downstream_rq_4xx: 0
listener.0.0.0.0_51051.http.grpc_json.downstream_rq_5xx: 0
listener.0.0.0.0_51051.http.grpc_json.downstream_rq_completed: 3
listener.0.0.0.0_51051.no_filter_chain_match: 0
listener.admin.downstream_cx_active: 1
listener.admin.downstream_cx_destroy: 0
listener.admin.downstream_cx_total: 1
listener.admin.http.admin.downstream_rq_1xx: 0
listener.admin.http.admin.downstream_rq_2xx: 3
listener.admin.http.admin.downstream_rq_3xx: 0
listener.admin.http.admin.downstream_rq_4xx: 3
listener.admin.http.admin.downstream_rq_5xx: 0
listener.admin.http.admin.downstream_rq_completed: 6
listener.admin.no_filter_chain_match: 0
listener_manager.listener_added: 1
listener_manager.listener_create_failure: 0
listener_manager.listener_create_success: 8
listener_manager.listener_modified: 0
listener_manager.listener_removed: 0
listener_manager.total_listeners_active: 1
listener_manager.total_listeners_draining: 0
listener_manager.total_listeners_warming: 0
runtime.admin_overrides_active: 0
runtime.load_error: 0
runtime.load_success: 0
runtime.num_keys: 0
runtime.override_dir_exists: 0
runtime.override_dir_not_exists: 0
server.concurrency: 8
server.days_until_first_cert_expiring: 2147483647
server.hot_restart_epoch: 0
server.live: 1
server.memory_allocated: 4016040
server.memory_heap_size: 5242880
server.parent_connections: 0
server.total_connections: 0
server.uptime: 225
server.version: 8330175
server.watchdog_mega_miss: 0
server.watchdog_miss: 0
stats.overflow: 0
cluster.grpc-backend-services.external.upstream_rq_time: P0(nan,10) P25(nan,10.375) P50(nan,10.75) P75(nan,172.5) P90(nan,177) P95(nan,178.5) P99(nan,179.7) P99.5(nan,179.85) P99.9(nan,179.97) P100(nan,180)
cluster.grpc-backend-services.upstream_cx_connect_ms: P0(nan,0) P25(nan,0) P50(nan,0) P75(nan,0) P90(nan,0) P95(nan,0) P99(nan,0) P99.5(nan,0) P99.9(nan,0) P100(nan,0)
cluster.grpc-backend-services.upstream_cx_length_ms: No recorded values
cluster.grpc-backend-services.upstream_rq_time: P0(nan,10) P25(nan,10.375) P50(nan,10.75) P75(nan,172.5) P90(nan,177) P95(nan,178.5) P99(nan,179.7) P99.5(nan,179.85) P99.9(nan,179.97) P100(nan,180)
http.admin.downstream_cx_length_ms: No recorded values
http.admin.downstream_rq_time: P0(0,0) P25(0,0) P50(0,0) P75(0,0) P90(0,0) P95(0,0) P99(0,0) P99.5(0,0) P99.9(0,0) P100(0,0)
http.grpc_json.downstream_cx_length_ms: P0(nan,10) P25(nan,10.75) P50(nan,11.5) P75(nan,172.5) P90(nan,177) P95(nan,178.5) P99(nan,179.7) P99.5(nan,179.85) P99.9(nan,179.97) P100(nan,180)
http.grpc_json.downstream_rq_time: P0(nan,10) P25(nan,10.75) P50(nan,11.5) P75(nan,172.5) P90(nan,177) P95(nan,178.5) P99(nan,179.7) P99.5(nan,179.85) P99.9(nan,179.97) P100(nan,180)
listener.0.0.0.0_51051.downstream_cx_length_ms: P0(nan,10) P25(nan,10.75) P50(nan,11.5) P75(nan,172.5) P90(nan,177) P95(nan,178.5) P99(nan,179.7) P99.5(nan,179.85) P99.9(nan,179.97) P100(nan,180)
listener.admin.downstream_cx_length_ms: No recorded values
Config:
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }
static_resources:
  listeners:
    - name: main-listener
      address:
        socket_address: { address: 0.0.0.0, port_value: 51051 }
      filter_chains:
        - filters:
            - name: envoy.http_connection_manager
              config:
                stat_prefix: grpc_json
                codec_type: AUTO
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: local_service
                      domains: ["*"]
                      routes:
                        - match: { prefix: "/", grpc: {} }
                          route: { cluster: grpc-backend-services, timeout: { seconds: 60 } }
                http_filters:
                  - name: envoy.grpc_json_transcoder
                    # configuration docs: https://github.com/envoyproxy/envoy/blob/master/api/envoy/config/filter/http/transcoder/v2/transcoder.proto
                    config:
                      proto_descriptor: "/data/reservation_service_definition.pb"
                      services: ["reservations.v1.ReservationService"]
                      print_options:
                        add_whitespace: true
                        always_print_primitive_fields: true
                        always_print_enums_as_ints: false
                        preserve_proto_field_names: false
                  - name: envoy.router
  clusters:
    - name: grpc-backend-services
      connect_timeout: 1.25s
      type: logical_dns
      lb_policy: round_robin
      dns_lookup_family: V4_ONLY
      http2_protocol_options: {}
      hosts:
        - socket_address:
            address: 127.0.0.1
            port_value: 53000
Logs:
./start-envoy.sh
Envoy will run at port 51051 (see envoy-config.yml)
[2018-11-11 21:17:22.166][000009][info][main] [source/server/server.cc:203] initializing epoch 0 (hot restart version=10.200.16384.127.options=capacity=16384, num_slots=8209 hash=228984379728933363 size=2654312)
[2018-11-11 21:17:22.166][000009][info][main] [source/server/server.cc:205] statically linked extensions:
[2018-11-11 21:17:22.166][000009][info][main] [source/server/server.cc:207] access_loggers: envoy.file_access_log,envoy.http_grpc_access_log
[2018-11-11 21:17:22.166][000009][info][main] [source/server/server.cc:210] filters.http: envoy.buffer,envoy.cors,envoy.ext_authz,envoy.fault,envoy.filters.http.header_to_metadata,envoy.filters.http.jwt_authn,envoy.filters.http.rbac,envoy.grpc_http1_bridge,envoy.grpc_json_transcoder,envoy.grpc_web,envoy.gzip,envoy.health_check,envoy.http_dynamo_filter,envoy.ip_tagging,envoy.lua,envoy.rate_limit,envoy.router,envoy.squash
[2018-11-11 21:17:22.166][000009][info][main] [source/server/server.cc:213] filters.listener: envoy.listener.original_dst,envoy.listener.proxy_protocol,envoy.listener.tls_inspector
[2018-11-11 21:17:22.166][000009][info][main] [source/server/server.cc:216] filters.network: envoy.client_ssl_auth,envoy.echo,envoy.ext_authz,envoy.filters.network.dubbo_proxy,envoy.filters.network.rbac,envoy.filters.network.sni_cluster,envoy.filters.network.thrift_proxy,envoy.http_connection_manager,envoy.mongo_proxy,envoy.ratelimit,envoy.redis_proxy,envoy.tcp_proxy
[2018-11-11 21:17:22.166][000009][info][main] [source/server/server.cc:218] stat_sinks: envoy.dog_statsd,envoy.metrics_service,envoy.stat_sinks.hystrix,envoy.statsd
[2018-11-11 21:17:22.166][000009][info][main] [source/server/server.cc:220] tracers: envoy.dynamic.ot,envoy.lightstep,envoy.zipkin
[2018-11-11 21:17:22.166][000009][info][main] [source/server/server.cc:223] transport_sockets.downstream: envoy.transport_sockets.alts,envoy.transport_sockets.capture,raw_buffer,tls
[2018-11-11 21:17:22.166][000009][info][main] [source/server/server.cc:226] transport_sockets.upstream: envoy.transport_sockets.alts,envoy.transport_sockets.capture,raw_buffer,tls
[2018-11-11 21:17:22.168][000009][info][main] [source/server/server.cc:268] admin address: 0.0.0.0:9901
[2018-11-11 21:17:22.169][000009][info][config] [source/server/configuration_impl.cc:50] loading 0 static secret(s)
[2018-11-11 21:17:22.169][000009][info][config] [source/server/configuration_impl.cc:56] loading 1 cluster(s)
[2018-11-11 21:17:22.170][000009][info][upstream] [source/common/upstream/cluster_manager_impl.cc:136] cm init: all clusters initialized
[2018-11-11 21:17:22.170][000009][info][config] [source/server/configuration_impl.cc:61] loading 1 listener(s)
[2018-11-11 21:17:22.172][000009][info][config] [source/server/configuration_impl.cc:94] loading tracing configuration
[2018-11-11 21:17:22.172][000009][info][config] [source/server/configuration_impl.cc:112] loading stats sink configuration
[2018-11-11 21:17:22.172][000009][info][main] [source/server/server.cc:426] all clusters initialized. initializing init manager
[2018-11-11 21:17:22.172][000009][info][config] [source/server/listener_manager_impl.cc:908] all dependencies initialized. starting workers
[2018-11-11 21:17:22.172][000009][info][main] [source/server/server.cc:454] starting main dispatch loop