Documentation for enabling the proxy protocol with AWS ELB #49682
Description
I'm trying to enable AWS ELB's proxy protocol support so that the IP addresses of external clients are passed through to the pods backing our services. Both our Kubernetes control plane and worker nodes are behind ELBs, so I turned on the proxy protocol for both. After doing so, I am unable to connect to either the Kubernetes API or the services running on the worker nodes.
Control plane:
$ kubectl get nodes
Unable to connect to the server: remote error: tls: record overflow
Workers:
$ curl -i https://my-app.example.com/
curl: (35) Unknown SSL protocol error in connection to my-app.example.com:-9822
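Both errors are consistent with the backends not stripping the proxy protocol header: once the policy is enabled, the ELB prepends a plain-text line to every backend connection before the TLS handshake begins, and a TLS server that isn't expecting it parses those bytes as a TLS record. A sketch of the version-1 header (the addresses and ports here are hypothetical):

```shell
# First bytes a backend receives on each connection once the ELB's
# ProxyProtocolPolicyType policy is active (proxy protocol v1):
#   PROXY <family> <client-ip> <proxy-ip> <client-port> <proxy-port>\r\n
printf 'PROXY TCP4 203.0.113.7 10.0.0.5 54321 443\r\n'
```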
Here is the configuration of the load balancers, as Terraform configuration: https://github.com/InQuicker/kaws/blob/b3282d3f331b06fefdd69a22ef2be6b9c49fd24a/terraform/balancers.tf
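For comparison, the same backend policy the Terraform creates can be expressed with the AWS CLI. This is a sketch, not the exact kaws tooling; the load balancer and policy names match the `describe-load-balancers` output in this report:

```shell
# Create a ProxyProtocolPolicyType policy on the worker ELB...
aws elb create-load-balancer-policy \
  --load-balancer-name kaws-k8s-nodes-example \
  --policy-name k8s-nodes-proxy \
  --policy-type-name ProxyProtocolPolicyType \
  --policy-attributes AttributeName=ProxyProtocol,AttributeValue=true

# ...and attach it to a backend instance port (repeat per port).
aws elb set-load-balancer-policies-for-backend-server \
  --load-balancer-name kaws-k8s-nodes-example \
  --instance-port 30001 \
  --policy-names k8s-nodes-proxy
```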
After applying this configuration, the output of aws elb describe-load-balancers contains this:
{
"LoadBalancerDescriptions": [
{
"Subnets": [
"redacted"
],
"CanonicalHostedZoneNameID": "redacted",
"CanonicalHostedZoneName": "redacted",
"ListenerDescriptions": [
{
"Listener": {
"InstancePort": 30000,
"LoadBalancerPort": 80,
"Protocol": "TCP",
"InstanceProtocol": "TCP"
},
"PolicyNames": []
},
{
"Listener": {
"InstancePort": 30001,
"LoadBalancerPort": 443,
"Protocol": "TCP",
"InstanceProtocol": "TCP"
},
"PolicyNames": []
}
],
"HealthCheck": {
"HealthyThreshold": 2,
"Interval": 30,
"Target": "http:10249/healthz",
"Timeout": 3,
"UnhealthyThreshold": 2
},
"VPCId": "redacted",
"BackendServerDescriptions": [
{
"InstancePort": 30000,
"PolicyNames": [
"k8s-nodes-proxy"
]
},
{
"InstancePort": 30001,
"PolicyNames": [
"k8s-nodes-proxy"
]
}
],
"Instances": [
{
"InstanceId": "redacted"
},
{
"InstanceId": "redacted"
}
],
"DNSName": "redacted",
"SecurityGroups": [
"redacted"
],
"Policies": {
"LBCookieStickinessPolicies": [],
"AppCookieStickinessPolicies": [],
"OtherPolicies": [
"k8s-nodes-proxy"
]
},
"LoadBalancerName": "kaws-k8s-nodes-example",
"CreatedTime": "2016-12-22T00:00:55.430Z",
"AvailabilityZones": [
"redacted"
],
"Scheme": "internet-facing",
"SourceSecurityGroup": {
"OwnerAlias": "redacted",
"GroupName": "kaws-balancers-example"
}
},
{
"Subnets": [
"redacted"
],
"CanonicalHostedZoneNameID": "redacted",
"CanonicalHostedZoneName": "redacted",
"ListenerDescriptions": [
{
"Listener": {
"InstancePort": 443,
"LoadBalancerPort": 443,
"Protocol": "TCP",
"InstanceProtocol": "TCP"
},
"PolicyNames": []
}
],
"HealthCheck": {
"HealthyThreshold": 2,
"Interval": 30,
"Target": "http:8080/healthz",
"Timeout": 3,
"UnhealthyThreshold": 2
},
"VPCId": "redacted",
"BackendServerDescriptions": [
{
"InstancePort": 443,
"PolicyNames": [
"k8s-masters-proxy"
]
}
],
"Instances": [
{
"InstanceId": "redacted"
},
{
"InstanceId": "redacted"
}
],
"DNSName": "redacted",
"SecurityGroups": [
"redacted"
],
"Policies": {
"LBCookieStickinessPolicies": [],
"AppCookieStickinessPolicies": [],
"OtherPolicies": [
"k8s-masters-proxy"
]
},
"LoadBalancerName": "kaws-k8s-masters-example",
"CreatedTime": "2016-12-22T00:00:55.550Z",
"AvailabilityZones": [
"redacted"
],
"Scheme": "internet-facing",
"SourceSecurityGroup": {
"OwnerAlias": "redacted32",
"GroupName": "kaws-balancers-example"
}
}
]
}

On the worker nodes, ports 30000 and 30001 are the node ports for our Traefik service, representing HTTP and HTTPS, respectively. Traefik serves as our ingress controller.
Is there something wrong with my proxy protocol configuration that causes this, or does Kubernetes not support the proxy protocol at all? (I realize that for the nodes this would depend on Traefik supporting it, but for the control plane, requests hit the Kubernetes API server directly from the ELB.)
There's currently no documentation that I can find on how to do this.
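For Services that Kubernetes provisions itself (type `LoadBalancer`), I did find that the AWS cloud provider exposes an annotation that creates and attaches the `ProxyProtocolPolicyType` policy automatically. A hedged sketch, assuming a Service named `traefik` (the annotation currently only accepts `"*"`, meaning all backend ports); this wouldn't cover our manually managed control-plane ELB:

```shell
# Ask the Kubernetes AWS cloud provider to enable the proxy protocol
# on every backend port of the Service's ELB ("*" = all ports).
kubectl annotate service traefik \
  'service.beta.kubernetes.io/aws-load-balancer-proxy-protocol=*'
```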