
aws-eks: Imported cluster yields invalid long stack name #23628

@J11522

Description


Describe the bug

We are adding manifests to an imported cluster, which creates a new stack to host the kubectl provider.
This stack gets a name longer than 128 characters and fails validation.

As far as I can tell, the problem lies in the generateStackName method.
The fix for the original issue changed the makeStackName function and applied the length limit there.
This ensures that the stack id created by generateStackId is no longer than 128 characters.
However, the generated stack id is then prefixed with ${assembly.stageName}- when available, resulting in a stack name that is too long.
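The ordering problem above can be illustrated with a minimal sketch. This is not the actual CDK source; makeStackName, generateStackName, and the truncation logic here are simplified stand-ins that only demonstrate why applying the 128-character cap before adding the stage prefix can still produce an invalid name:

```typescript
// Hypothetical sketch (NOT the real aws-cdk implementation).
const MAX_STACK_NAME_LEN = 128;

// Assumed behavior: the length cap is applied here, on the un-prefixed id.
function makeStackName(components: string[]): string {
  return components.join('-').slice(0, MAX_STACK_NAME_LEN);
}

// Assumed behavior: the stage prefix is prepended AFTER the cap,
// so the final name can exceed 128 characters again.
function generateStackName(stageName: string | undefined, components: string[]): string {
  const id = makeStackName(components); // <= 128 chars
  return stageName ? `${stageName}-${id}` : id;
}

const longComponents = ['MyVeryLongApplicationStack'.repeat(10)];
const name = generateStackName('ProdStage', longComponents);
console.log(name.length); // > 128: would fail CloudFormation stack-name validation
```

Under these assumptions, the truncated id is exactly at the limit, and any non-empty stage prefix pushes the final name over it.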

Expected Behavior

The stack name for the kubectlProvider stack is valid

Current Behavior

The stack name for the kubectlProvider stack is longer than 128 characters and therefore invalid.

Reproduction Steps

// Import a cluster in a staged application with long names
this.cluster = Cluster.fromClusterAttributes(this, 'ClusterImport', {
  clusterName,
  openIdConnectProvider,
  kubectlSecurityGroupId,
  kubectlPrivateSubnetIds,
  kubectlRoleArn,
  vpc,
});
// Add a manifest, which triggers creation of the kubectlProvider stack
this.cluster.addManifest('SampleNamespace', {
  apiVersion: 'v1',
  kind: 'Namespace',
  metadata: { name: 'SampleNamespace' },
});

Possible Solution

No response

Additional Information/Context

No response

CDK CLI Version

2.59.0 (build b24095d)

Framework Version

No response

Node.js Version

18.13 LTS

OS

Ventura 13.1

Language

Typescript

Language Version

No response

Other information

No response

Metadata

Assignees

No one assigned

    Labels

    @aws-cdk/aws-eks (Related to Amazon Elastic Kubernetes Service), bug (This issue is a bug.), p2

    Type

    No type

    Projects

    No projects

    Milestone

    No milestone

    Relationships

    None yet

    Development

    No branches or pull requests