
I'm having trouble deleting a custom resource definition. I'm trying to upgrade kubeless from v1.0.0-alpha.7 to v1.0.0-alpha.8.

I tried to remove all the created custom resources by doing

$ kubectl delete -f kubeless-v1.0.0-alpha.7.yaml
deployment "kubeless-controller-manager" deleted
serviceaccount "controller-acct" deleted
clusterrole "kubeless-controller-deployer" deleted
clusterrolebinding "kubeless-controller-deployer" deleted
customresourcedefinition "functions.kubeless.io" deleted
customresourcedefinition "httptriggers.kubeless.io" deleted
customresourcedefinition "cronjobtriggers.kubeless.io" deleted
configmap "kubeless-config" deleted

But when I try,

$ kubectl get customresourcedefinition
NAME                    AGE
functions.kubeless.io   21d

Because of this, when I next try the upgrade, I see:

$ kubectl create -f kubeless-v1.0.0-alpha.8.yaml
Error from server (AlreadyExists): error when creating "kubeless-v1.0.0-alpha.8.yaml": object is being deleted: customresourcedefinitions.apiextensions.k8s.io "functions.kubeless.io" already exists

I think that because of this mismatch in the function definition, the hello world example is failing.

$ kubeless function deploy hellopy --runtime python2.7 --from-file test.py --handler test.hello
INFO[0000] Deploying function...
FATA[0000] Failed to deploy hellopy. Received:
the server does not allow this method on the requested resource (post functions.kubeless.io)

Finally, here is the output of,

$ kubectl describe customresourcedefinitions.apiextensions.k8s.io
Name:         functions.kubeless.io
Namespace:
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apiextensions.k8s.io/v1beta1","description":"Kubernetes Native Serverless Framework","kind":"CustomResourceDefinition","metadata":{"anno...
API Version:  apiextensions.k8s.io/v1beta1
Kind:         CustomResourceDefinition
Metadata:
  Creation Timestamp:             2018-08-02T17:22:07Z
  Deletion Grace Period Seconds:  0
  Deletion Timestamp:             2018-08-24T17:15:39Z
  Finalizers:
    customresourcecleanup.apiextensions.k8s.io
  Generation:        1
  Resource Version:  99792247
  Self Link:         /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions/functions.kubeless.io
  UID:               951713a6-9678-11e8-bd68-0a34b6111990
Spec:
  Group:  kubeless.io
  Names:
    Kind:       Function
    List Kind:  FunctionList
    Plural:     functions
    Singular:   function
  Scope:        Namespaced
  Version:      v1beta1
Status:
  Accepted Names:
    Kind:       Function
    List Kind:  FunctionList
    Plural:     functions
    Singular:   function
  Conditions:
    Last Transition Time:  2018-08-02T17:22:07Z
    Message:               no conflicts found
    Reason:                NoConflicts
    Status:                True
    Type:                  NamesAccepted
    Last Transition Time:  2018-08-02T17:22:07Z
    Message:               the initial names have been accepted
    Reason:                InitialNamesAccepted
    Status:                True
    Type:                  Established
    Last Transition Time:  2018-08-23T13:29:45Z
    Message:               CustomResource deletion is in progress
    Reason:                InstanceDeletionInProgress
    Status:                True
    Type:                  Terminating
Events:                    <none>
  • It says CustomResource deletion is in progress, but it's not getting deleted; no change in status. Commented Aug 24, 2018 at 17:33
  • Doesn't --force help with resources stuck in delete/terminating or some other sketchy states? Maybe add --grace-period=0 Commented Aug 24, 2018 at 18:28

7 Answers


So it turns out, the root cause was that custom resources with finalizers can "deadlock". The CustomResourceDefinition "functions.kubeless.io" had a

Finalizers:
    customresourcecleanup.apiextensions.k8s.io

and this can leave it in a bad state when deleting.

https://github.com/kubernetes/kubernetes/issues/60538

I followed the steps mentioned in this workaround, and the CRD now gets deleted.
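For reference, the workaround amounts to clearing the finalizer list on the stuck CRD so the API server can finish the deletion. A minimal sketch, using the CRD name from this question:

```shell
# Clear the finalizer that is blocking deletion; the API server
# then finishes garbage-collecting the CRD on its own.
kubectl patch crd functions.kubeless.io \
  -p '{"metadata":{"finalizers":[]}}' --type=merge

# Confirm it is gone (should now return NotFound).
kubectl get crd functions.kubeless.io
```

Keep in mind that clearing finalizers skips whatever cleanup the owning controller was supposed to perform, so this is only safe once that controller has already been removed.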


4 Comments

Worked like a charm! Thank you! kubectl patch crd/MY_CRD_NAME -p '{"metadata":{"finalizers":[]}}' --type=merge
Well, at some point I upvoted your comment so I must have run into this before. It worked for me again today though!
Oh my God you're a life saver!
Edit the CRD, empty the Finalizers field, and save; it worked like a charm!
$ kubectl get crd

NAME                                                            CREATED AT
accesscontrolpolicies.networking.zephyr.solo.io                 2020-04-22T12:58:39Z
istiooperators.install.istio.io                                 2020-04-22T13:49:20Z
kubernetesclusters.discovery.zephyr.solo.io                     2020-04-22T12:58:39Z
meshes.discovery.zephyr.solo.io                                 2020-04-22T12:58:39Z
meshservices.discovery.zephyr.solo.io                           2020-04-22T12:58:39Z
meshworkloads.discovery.zephyr.solo.io                          2020-04-22T12:58:39Z
trafficpolicies.networking.zephyr.solo.io                       2020-04-22T12:58:39Z
virtualmeshcertificatesigningrequests.security.zephyr.solo.io   2020-04-22T12:58:39Z
virtualmeshes.networking.zephyr.solo.io                         2020-04-22T12:58:39Z
$ kubectl delete crd istiooperators.install.istio.io

The delete errored out.

$ kubectl patch crd/istiooperators.install.istio.io -p '{"metadata":{"finalizers":[]}}' --type=merge
Success: crd istiooperators.install.istio.io is deleted.

Result:

NAME                                                            CREATED AT
accesscontrolpolicies.networking.zephyr.solo.io                 2020-04-22T12:58:39Z
kubernetesclusters.discovery.zephyr.solo.io                     2020-04-22T12:58:39Z
meshes.discovery.zephyr.solo.io                                 2020-04-22T12:58:39Z
meshservices.discovery.zephyr.solo.io                           2020-04-22T12:58:39Z
meshworkloads.discovery.zephyr.solo.io                          2020-04-22T12:58:39Z
trafficpolicies.networking.zephyr.solo.io                       2020-04-22T12:58:39Z
virtualmeshcertificatesigningrequests.security.zephyr.solo.io   2020-04-22T12:58:39Z
virtualmeshes.networking.zephyr.solo.io                         2020-04-22T12:58:39Z

3 Comments

kubectl patch crd/istiooperators.install.istio.io -p '{"metadata":{"finalizers":[]}}' --type=merge success delete crd istiooperators.install.istio.io
For people confused about why he's writing kc, he probably set an alias for kubectl, so just use kubectl instead of kc.
@basickarl It has been modified, thank you!

Try:

oc patch some.crd/crd_name -p '{"metadata":{"finalizers":[]}}' --type=merge

Solved my problem after a forced delete got stuck.

Comments


I had to get rid of a few other things:

kubectl get mutatingwebhookconfiguration | ack consul | awk '{print $1}' | xargs -I {} kubectl delete mutatingwebhookconfiguration {}

kubectl get clusterrolebinding | ack consul | awk '{print $1}' | xargs -I {} kubectl delete clusterrolebinding {}

kubectl get clusterrole | ack consul | awk '{print $1}' | xargs -I {} kubectl delete clusterrole {}
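
Note that ack is a third-party tool; if it isn't installed, the same cleanup can be sketched with plain grep and kubectl's -o name output (the consul name pattern is assumed, as above):

```shell
# -o name prints kind/name, which kubectl delete accepts directly.
kubectl get mutatingwebhookconfiguration -o name | grep consul | xargs -I {} kubectl delete {}
kubectl get clusterrolebinding -o name | grep consul | xargs -I {} kubectl delete {}
kubectl get clusterrole -o name | grep consul | xargs -I {} kubectl delete {}
```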

1 Comment

Thanks @Yoker! In my case, I ran kubectl get customresourcedefinition | awk '{print $1}' | grep "getambassador.io" | xargs -I {} kubectl delete customresourcedefinition {}, based on your tip, to delete many leftovers from the Ambassador CRDs.

In my case, the issue was that I had deleted a custom resource object, but not the custom resource definition (CRD).

I fixed it with kubectl delete -f resourcedefinition.yaml, the file in which I defined my CRDs.

So I think the best practice is not to delete custom objects manually, but to delete the file where you define both the object and the CRD. Reference.
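
In other words, delete the instances before the definition. A sketch using the Function CRD from this question (the && ensures the definition is only removed after its objects are gone):

```shell
# Remove every custom object of the kind first...
kubectl delete functions.kubeless.io --all --all-namespaces \
  && kubectl delete crd functions.kubeless.io  # ...then the definition itself.
```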

1 Comment

This only solves the issue when you have a file defined. If you have a CRD defined over the helm chart and you uninstalled it, but one CRD remains then you have to uninstall it manually.

Kubernetes has a CustomResourceDefinition kind, and we can act on it directly:

kubectl delete customresourcedefinitions.apiextensions.k8s.io <crd-name>
kubectl describe customresourcedefinitions.apiextensions.k8s.io <crd-name>

We can also use shortnames (crd,crds):

kubectl delete crd <crd-name>
kubectl describe crd <crd-name>

Comments


Solved my problem by listing all the CRDs and running a delete loop in bash.

Get all CRDs:

$ kubectl get crd
NAME                                  CREATED AT
awschaos.chaos-mesh.org               2025-07-07T17:50:58Z
azurechaos.chaos-mesh.org             2025-07-07T17:50:58Z
blockchaos.chaos-mesh.org             2025-07-07T17:50:58Z
dnschaos.chaos-mesh.org               2025-07-07T17:50:58Z
gcpchaos.chaos-mesh.org               2025-07-07T17:50:58Z
httpchaos.chaos-mesh.org              2025-07-07T17:50:58Z

In my case, I needed to delete the chaos-mesh CRDs, so I used the script below:

$ for i in $(kubectl get crd | grep chaos-mesh | awk '{print $1}'); do kubectl delete crd $i;  done

customresourcedefinition.apiextensions.k8s.io "awschaos.chaos-mesh.org" deleted
customresourcedefinition.apiextensions.k8s.io "azurechaos.chaos-mesh.org" deleted

With that, all the created CRDs were deleted.

Comments
