(aws-eks): kubectl layer is not compatible with k8s v1.22.0 #19843

@akefirad

Description

Describe the bug

Running an empty update on an empty EKS cluster fails while updating the resource EksClusterAwsAuthmanifest12345678 (Custom::AWSCDK-EKS-KubernetesResource).

Expected Behavior

The update should succeed.

Current Behavior

It fails with the following error:

Received response status [FAILED] from custom resource. Message returned: Error: b'configmap/aws-auth configured\nerror: error retrieving RESTMappings to prune: invalid resource extensions/v1beta1, Kind=Ingress, Namespaced=true: no matches for kind "Ingress" in version "extensions/v1beta1"\n' Logs: /aws/lambda/InfraMainCluster-awscdkawseksKubec-Handler886CB40B-rDGV9O3CyH7n at invokeUserFunction (/var/task/framework.js:2:6) at processTicksAndRejections (internal/process/task_queues.js:97:5) at async onEvent (/var/task/framework.js:1:302) at async Runtime.handler (/var/task/cfn-response.js:1:1474) (RequestId: acd049fc-771c-4410-8e09-8ec4bec67813)

Reproduction Steps

This is what I did:

  1. Deploy an empty cluster:
import * as cdk from "aws-cdk-lib";
import * as ec2 from "aws-cdk-lib/aws-ec2";
import * as eks from "aws-cdk-lib/aws-eks";
import * as iam from "aws-cdk-lib/aws-iam";
import { Construct } from "constructs";

export class EksClusterStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props: cdk.StackProps) {
    super(scope, id, props);

    const clusterAdminRole = new iam.Role(this, "ClusterAdminRole", {
      assumedBy: new iam.AccountRootPrincipal(),
    });

    const vpc = ec2.Vpc.fromLookup(this, "MainVpc", {
      vpcId: "vpc-1234567890123456789",
    });

    const cluster = new eks.Cluster(this, "EksCluster", {
      vpc: vpc,
      vpcSubnets: [{ subnetType: ec2.SubnetType.PRIVATE_WITH_NAT }],
      clusterName: `${id}`,
      mastersRole: clusterAdminRole,
      defaultCapacity: 0,
      version: eks.KubernetesVersion.V1_22,
    });

    cluster.addFargateProfile("DefaultProfile", {
      selectors: [{ namespace: "default" }],
    });
  }
}
  2. Add a new Fargate profile:
    cluster.addFargateProfile("IstioProfile", {
      selectors: [{ namespace: "istio-system" }],
    });
  3. Deploy the stack and wait for the failure.
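As a possible workaround (not verified against this issue), the `kubectlLayer` prop on `eks.Cluster` accepts a custom Lambda layer, so a kubectl binary matching the cluster version could be supplied by hand. The asset path and layer contents below are assumptions on my part; you would need to build a zip containing a matching kubectl (and helm) yourself:

```typescript
import * as cdk from "aws-cdk-lib";
import * as eks from "aws-cdk-lib/aws-eks";
import * as lambda from "aws-cdk-lib/aws-lambda";
import { Construct } from "constructs";

export class EksClusterStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Hypothetical asset: a zip you build containing a kubectl 1.22 binary,
    // laid out as the EKS resource handler expects.
    const kubectlLayer = new lambda.LayerVersion(this, "KubectlV22Layer", {
      code: lambda.Code.fromAsset("layers/kubectl-1.22.zip"),
      description: "kubectl 1.22 for the EKS custom-resource handler",
    });

    new eks.Cluster(this, "EksCluster", {
      version: eks.KubernetesVersion.V1_22,
      // Overrides the default layer (which ships kubectl 1.20).
      kubectlLayer,
      // ...remaining props as in the reproduction above
    });
  }
}
```

This only sketches the mechanism; whether a hand-built 1.22 layer resolves the prune error above is untested here.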

Possible Solution

No response

Additional Information/Context

I checked the version of kubectl in the Lambda handler and it's 1.20.0, which AFAIK is not compatible with cluster version 1.22.0. I'm not entirely sure how the Lambda is created; I thought it matched kubectl to whatever version the cluster has, but that is indeed not the case (#15736).
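For reference, the Kubernetes version-skew policy supports kubectl within one minor version of the cluster, which is why 1.20 against a 1.22 control plane is out of skew. A minimal sketch of that check (function names are mine, for illustration only):

```typescript
// Extract the minor version from a "major.minor.patch" string, e.g. "1.22.0" -> 22.
function minorVersion(version: string): number {
  return Number(version.split(".")[1]);
}

// Kubernetes' skew policy: kubectl is supported within +/-1 minor
// version of the cluster's control plane.
function withinSkew(kubectlVersion: string, clusterVersion: string): boolean {
  return Math.abs(minorVersion(kubectlVersion) - minorVersion(clusterVersion)) <= 1;
}

console.log(withinSkew("1.20.0", "1.22.0")); // false: two minors apart
console.log(withinSkew("1.21.0", "1.22.0")); // true: within one minor
```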

CDK CLI Version

2.20.0 (build 738ef49)

Framework Version

No response

Node.js Version

v16.13.0

OS

Darwin 21.3.0

Language

Typescript

Language Version

3.9.10

Other information

Similar to #15072?

Metadata

Assignees

No one assigned

    Labels

    @aws-cdk/aws-eks: Related to Amazon Elastic Kubernetes Service
    effort/large: Large work item – several weeks of effort
    feature-request: A feature should be added or improved
    p1
