‼️ NOTICE: aws-eks "error retrieving RESTMappings to prune: invalid resource networking.k8s.io/v1" #15072
Closed
Labels
@aws-cdk/aws-eks (Related to Amazon Elastic Kubernetes Service), bug (This issue is a bug.), effort/small (Small work item – less than a day of effort), management/tracking (Issues that track a subject or multiple issues), p0
Description
Please add your +1 👍 to let us know you have encountered this
Status: IN-PROGRESS
Overview:
Versions 1.106.0 and later of the aws-eks construct library throw an error when updating a KubernetesManifest object; this includes manifests applied via the cluster.addManifest method.
Complete Error Message:
11:22:46 AM | UPDATE_FAILED | Custom::AWSCDK-EKS-KubernetesResource | pdb/Resource/Default
Received response status [FAILED] from custom resource. Message returned: Error: b'poddisruptionbudget.policy/test-pdb configured\nerror: error retrieving RESTMappings to prune: invalid resource networking.k8s.io/v1, Kind=Ingress, Namespaced=true: no matches for kind "Ingress" in version "networking.k8s.io/v1"\n'
Workaround:
Downgrade to version 1.105.0 or earlier.
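A possible alternative to downgrading (not verified in this issue) is to create the cluster with pruning disabled, which should skip the prune step that performs the failing RESTMapping lookup. Note the trade-off: resources removed from a manifest will no longer be deleted on subsequent deploys. A minimal sketch, assuming the CDK v1 `@aws-cdk/aws-eks` package and a hypothetical stack name:

```typescript
import { Stack, App } from "@aws-cdk/core";
import { Cluster, KubernetesVersion } from "@aws-cdk/aws-eks";

const app = new App();
const stack = new Stack(app, "no-prune-workaround");

// prune: false disables labeling/pruning of applied resources, so the
// handler never runs the prune code path that triggers the
// "error retrieving RESTMappings to prune" failure. The cost is that
// resources dropped from the manifest are left behind on update.
const cluster = new Cluster(stack, "cluster", {
  version: KubernetesVersion.V1_20,
  prune: false,
});

app.synth();
```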
Original opening post
When updating a KubernetesManifest, the deploy fails with an error like:
11:22:46 AM | UPDATE_FAILED | Custom::AWSCDK-EKS-KubernetesResource | pdb/Resource/Default
Received response status [FAILED] from custom resource. Message returned: Error: b'poddisruptionbudget.policy/test-pdb configured\nerror: error retrieving RESTMappings to prune: invalid resource networking.k8s.io/v1, Kind=Ingress, Namespaced=true: no matches for kind "Ingress" in version "networking.k8s.io/v1"\n'
This issue occurs with Kubernetes versions 1.16, 1.17, and 1.20.
Reproduction Steps
- Deploy a simple EKS stack with a manifest
import { Stack, App } from "@aws-cdk/core";
import {
  Cluster,
  KubernetesManifest,
  KubernetesVersion,
} from "@aws-cdk/aws-eks";

const app = new App();

const stack = new Stack(app, "repro-prune-invalid-resource", {
  env: {
    region: process.env.CDK_DEFAULT_REGION,
    account: process.env.CDK_DEFAULT_ACCOUNT,
  },
});

const cluster = new Cluster(stack, "cluster", {
  clusterName: "repro-prune-invalid-resource-test",
  version: KubernetesVersion.V1_16,
  prune: true,
});

const manifest = new KubernetesManifest(stack, "pdb", {
  cluster,
  manifest: [
    {
      apiVersion: "policy/v1beta1",
      kind: "PodDisruptionBudget",
      metadata: {
        name: "test-pdb",
        namespace: "default",
      },
      spec: {
        maxUnavailable: 1,
        selector: {
          matchLabels: { app: "thing" },
        },
      },
    },
  ],
});

app.synth();

This deploys successfully.
- Make a small change to the manifest, such as changing maxUnavailable: 1 to maxUnavailable: 2, and deploy again.
This results in the error above.
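The repro steps above can be driven from the CLI roughly as follows (package versions taken from the Environment section below; the second deploy is the one that fails):

```shell
# Install the CDK v1 packages at an affected version (>= 1.106.0)
npm install @aws-cdk/core@1.108.0 @aws-cdk/aws-eks@1.108.0

# First deploy: creates the cluster and the PodDisruptionBudget
npx cdk deploy

# Edit the stack (change maxUnavailable from 1 to 2), then redeploy.
# This second deploy fails with the RESTMappings prune error.
npx cdk deploy
```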
What did you expect to happen?
I expected the deploy to succeed and to update the maxUnavailable field in the deployed manifest from 1 to 2.
What actually happened?
11:22:46 AM | UPDATE_FAILED | Custom::AWSCDK-EKS-KubernetesResource | pdb/Resource/Default
Received response status [FAILED] from custom resource. Message returned: Error: b'poddisruptionbudget.policy/test-pdb configured\nerror: error retrieving RESTMappings to prune: invalid resource networking.k8s.io/v1, Kind=Ingress, Namespaced=true: no matches for kind "Ingress" in version "networking.k8s.io/v1"\n'
Logs: /aws/lambda/repro-prune-invalid-resource-awscd-Handler886CB40B-hFxU42VXJuOz
at invokeUserFunction (/var/task/framework.js:95:19)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
at async onEvent (/var/task/framework.js:19:27)
at async Runtime.handler (/var/task/cfn-response.js:48:13) (RequestId: 1be7dfcb-288d-4309-8b8c-cadafb97fd09)
Environment
- CDK CLI Version: 1.108.0
- Framework Version: 1.108.0
- Node.js Version: v12.18.4
- OS: Linux
- Language (Version): TypeScript 4.3.2
Other
This is a 🐛 Bug Report.