/kind bug
1. What kops version are you running? The command `kops version` will display
this information.

```
Client version: 1.32.0 (git-v1.32.0)
```
2. What Kubernetes version are you running? `kubectl version` will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.

```
Client Version: v1.34.1
Kustomize Version: v5.7.1
Server Version: v1.32.5
```
3. What cloud provider are you using?
AWS
4. What commands did you run? What is the simplest way to reproduce this issue?
Create the following manifests:
Create the required storage classes.
```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: kops-report-vsc
  annotations:
    snapshot.storage.kubernetes.io/is-default-class: "true"
driver: ebs.csi.aws.com
deletionPolicy: Delete
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: kops-report-sc
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  encrypted: "true"
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```
Create the initial pod to create a snapshot from
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kops-report-pvc
spec:
  storageClassName: kops-report-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: kops-report-initial-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["/bin/sh", "-c"]
      args:
        - >
          echo "Hello from PVC at $(date)" > /data/hello.txt;
          sleep 3600;
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: kops-report-pvc
```
Create the snapshot
```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: kops-report-snapshot
spec:
  volumeSnapshotClassName: kops-report-vsc
  source:
    persistentVolumeClaimName: kops-report-pvc
```
Restore the snapshot
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kops-report-restored-pvc
spec:
  storageClassName: kops-report-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  dataSource:
    name: kops-report-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
---
# Create a consumer.
apiVersion: v1
kind: Pod
metadata:
  name: kops-report-pod-from-snapshot
spec:
  containers:
    - name: app
      image: nginx:alpine
      ports:
        - containerPort: 80
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: kops-report-restored-pvc
```
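The manifests above were applied in order with `kubectl apply`. A sketch of the full reproduction, assuming the manifests are saved under the illustrative filenames below:

```shell
# Storage class and volume snapshot class
kubectl apply -f storage.yaml
# Initial PVC plus a pod that writes /data/hello.txt
kubectl apply -f initial-pod.yaml
kubectl wait --for=condition=Ready pod/kops-report-initial-pod --timeout=120s
# Take the snapshot and wait for it to become usable
kubectl apply -f snapshot.yaml
kubectl wait --for=jsonpath='{.status.readyToUse}'=true \
  volumesnapshot/kops-report-snapshot --timeout=120s
# Restore the snapshot into a new PVC and consume it
kubectl apply -f restore.yaml
kubectl describe pvc kops-report-restored-pvc
```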
5. What happened after the commands executed?

Provisioning of the restored volume fails:

```
Warning ProvisioningFailed 26s ebs.csi.aws.com_ebs-csi-controller-774c69dfd9-pr7bk_a4568a5b-c04b-4e90-97c2-72262c399643 failed to provision volume with StorageClass "kops-report-sc": rpc error: code = Internal desc = Could not create volume "pvc-e7f6f54e-86a5-4701-9825-c75a0c7d12e0": could not create volume in EC2: operation error EC2: CreateVolume, https response error StatusCode: 403, RequestID: 5d240942-27a8-4480-842e-9dcb42e80696, api error UnauthorizedOperation: You are not authorized to perform this operation. User: arn:aws:sts::<id>:assumed-role/ebs-csi-controller-sa.kube-system.sa.dev.k8s.local/1762936755668596251 is not authorized to perform: ec2:CreateVolume on resource: arn:aws:ec2:eu-west-1::snapshot/snap-01d9d1d7c9bd54ce4 because no identity-based policy allows the ec2:CreateVolume action. Encoded authorization failure message: d18kOjg_sRmbJuybXMiQ3tWtEltyK7bZVouyIVFYPF0LARkJlFGXrQxoqotDewQtTe4d9-cNyXH5-J7dT1EH9nCvGJ396zO9pfnZvyIF0PV2IPtnZqA8R_0ezMl1jqUyc30nUXPRNR6e-UHIh0X2zLchcXdLQHtnfmyV05mbSpzJ4ib_XpiwZGMku_UncudVmt2BMsTaSqYPvfpNv7V8yTx0J8behT0IDBiB7ePS1L42kybKUU867tfrLUuBNKKuJBjiKcSy5X3xTt9QtpoUEMNPnPaf_-5GbdEpQjkmvIjuRWdLI1naqF-WO4zVUdrx4O3j1RQnIk98GaYBzvRs6x2g_9Ge5xsALEGx6f_gFagNdpL1LeHijkL0CxFSI_DhdCc9cM2TzNj4aA-3ZnmmI2E3tlnjTOXb5Br3iCwOLKjFyUUdMjv1PLsINGVvJUoJ0VZY--N4IP8p0V7mVmvLvKkqBAIfjAO1gZHUWxmFCuUl2RF1eP_Velg12avw-PKUZF-MEg3wTKb4ADZCt4xa6SMgX08xM5MzAzV4ZuesBF5B7UEbUskbYRZseLCCPWT-RqvHCdgI8lkvWbPagBAWCg8bLEGmd5sgYSS_n3bhExH_YKhCl_nZtIsj9mr5W4wOwaKU_PB6uFoS-dv0S_z5bwizGgLpJj3WULJ2440eMdmX8MjWRbDFvN2wnGnHg__KP75mkRpZZQ4ZaPjk1Zxjb_ZPHg0w1M6JcU_pWMiYHD2Fy3dnrPt2dVhfLlyAebFQYpP9PEsg9TTl7jBp-8iL94OlhA06n2NJggQcA4eaOpSKSHPOMiXFIMjzWPk4w50P7yywTYO8Pbnk90APDe50u77Lp3EVuSH7mWuXAfPhXPAZn32-K8jIhHEXx9zqtfjNr8rh6Hm6rcZRIVV4z7u8G1C5vEOcEIvELyGCyOCE4X2z1wvZn5VnSSi4p4lD
```
From `$ kubectl describe persistentvolumeclaim/kops-report-restored-pvc`, full output:
```
Name:          kops-report-restored-pvc
Namespace:     default
StorageClass:  kops-report-sc
Status:        Pending
Volume:
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: ebs.csi.aws.com
               volume.kubernetes.io/selected-node: i-00fc42bc13460ba3e
               volume.kubernetes.io/storage-provisioner: ebs.csi.aws.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
DataSource:
  APIGroup:  snapshot.storage.k8s.io
  Kind:      VolumeSnapshot
  Name:      kops-report-snapshot
Used By:       kops-report-pod-from-snapshot
Events:
  Type     Reason                Age                From                          Message
  ----     ------                ----               ----                          -------
  Normal   WaitForFirstConsumer  57s                persistentvolume-controller   waiting for first consumer to be created before binding
  Warning  ProvisioningFailed    42s (x5 over 57s)  ebs.csi.aws.com_ebs-csi-controller-774c69dfd9-pr7bk_a4568a5b-c04b-4e90-97c2-72262c399643  failed to provision volume with StorageClass "kops-report-sc": error getting handle for DataSource Type VolumeSnapshot by Name kops-report-snapshot: snapshot kops-report-snapshot is not Ready
  Normal   Provisioning          26s (x6 over 57s)  ebs.csi.aws.com_ebs-csi-controller-774c69dfd9-pr7bk_a4568a5b-c04b-4e90-97c2-72262c399643  External provisioner is provisioning volume for claim "default/kops-report-restored-pvc"
  Warning  ProvisioningFailed    26s                ebs.csi.aws.com_ebs-csi-controller-774c69dfd9-pr7bk_a4568a5b-c04b-4e90-97c2-72262c399643  failed to provision volume with StorageClass "kops-report-sc": rpc error: code = Internal desc = Could not create volume "pvc-e7f6f54e-86a5-4701-9825-c75a0c7d12e0": could not create volume in EC2: operation error EC2: CreateVolume, https response error StatusCode: 403, RequestID: 5d240942-27a8-4480-842e-9dcb42e80696, api error UnauthorizedOperation: You are not authorized to perform this operation. User: arn:aws:sts::<id>:assumed-role/ebs-csi-controller-sa.kube-system.sa.dev.k8s.local/1762936755668596251 is not authorized to perform: ec2:CreateVolume on resource: arn:aws:ec2:eu-west-1::snapshot/snap-01d9d1d7c9bd54ce4 because no identity-based policy allows the ec2:CreateVolume action. Encoded authorization failure message: d18kOjg_sRmbJuybXMiQ3tWtEltyK7bZVouyIVFYPF0LARkJlFGXrQxoqotDewQtTe4d9-cNyXH5-J7dT1EH9nCvGJ396zO9pfnZvyIF0PV2IPtnZqA8R_0ezMl1jqUyc30nUXPRNR6e-UHIh0X2zLchcXdLQHtnfmyV05mbSpzJ4ib_XpiwZGMku_UncudVmt2BMsTaSqYPvfpNv7V8yTx0J8behT0IDBiB7ePS1L42kybKUU867tfrLUuBNKKuJBjiKcSy5X3xTt9QtpoUEMNPnPaf_-5GbdEpQjkmvIjuRWdLI1naqF-WO4zVUdrx4O3j1RQnIk98GaYBzvRs6x2g_9Ge5xsALEGx6f_gFagNdpL1LeHijkL0CxFSI_DhdCc9cM2TzNj4aA-3ZnmmI2E3tlnjTOXb5Br3iCwOLKjFyUUdMjv1PLsINGVvJUoJ0VZY--N4IP8p0V7mVmvLvKkqBAIfjAO1gZHUWxmFCuUl2RF1eP_Velg12avw-PKUZF-MEg3wTKb4ADZCt4xa6SMgX08xM5MzAzV4ZuesBF5B7UEbUskbYRZseLCCPWT-RqvHCdgI8lkvWbPagBAWCg8bLEGmd5sgYSS_n3bhExH_YKhCl_nZtIsj9mr5W4wOwaKU_PB6uFoS-dv0S_z5bwizGgLpJj3WULJ2440eMdmX8MjWRbDFvN2wnGnHg__KP75mkRpZZQ4ZaPjk1Zxjb_ZPHg0w1M6JcU_pWMiYHD2Fy3dnrPt2dVhfLlyAebFQYpP9PEsg9TTl7jBp-8iL94OlhA06n2NJggQcA4eaOpSKSHPOMiXFIMjzWPk4w50P7yywTYO8Pbnk90APDe50u77Lp3EVuSH7mWuXAfPhXPAZn32-K8jIhHEXx9zqtfjNr8rh6Hm6rcZRIVV4z7u8G1C5vEOcEIvELyGCyOCE4X2z1wvZn5VnSSi4p4lD
  Normal   ExternalProvisioning  15s (x4 over 57s)  persistentvolume-controller   Waiting for a volume to be created either by the external provisioner 'ebs.csi.aws.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
```
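The encoded authorization failure message above can be decoded (by a principal with `sts:DecodeAuthorizationMessage`) to see exactly which policy check failed; a sketch, with the encoded blob elided:

```shell
# Decode the UnauthorizedOperation details from the event above.
# Replace <encoded-message> with the full encoded blob from the error.
aws sts decode-authorization-message \
  --encoded-message '<encoded-message>' \
  --query DecodedMessage --output text
```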
6. What did you expect to happen?
The volume should have been created and `/data/hello.txt` should exist with the expected content.
7. Please provide your cluster manifest.
See 5.
8. Please run the commands with most verbose logging by adding the `-v 10` flag.
N/A
9. Anything else do we need to know?
We suspect that either the IAM policy condition is misspelled (should it perhaps be `ResourceTag`?), since that would match how our snapshots are tagged, or that request tagging was omitted somewhere in the driver specifically for volumes restored from snapshots. Creating "clean" volumes works fine, as evidenced by our reproduction steps.
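For illustration only, a policy statement along these lines on the ebs-csi-controller role would permit creating volumes from tagged snapshots. The condition key and the tag key/value here are our assumptions based on the error above and kops naming conventions, not a verified fix:

```json
{
  "Effect": "Allow",
  "Action": "ec2:CreateVolume",
  "Resource": "arn:aws:ec2:*::snapshot/*",
  "Condition": {
    "StringEquals": {
      "aws:ResourceTag/KubernetesCluster": "dev.k8s.local"
    }
  }
}
```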
Thank you for your help!