582

I tried to delete a ReplicationController with 12 pods and I could see that some of the pods are stuck in Terminating status.

My Kubernetes cluster consists of one control plane node and three worker nodes installed on Ubuntu virtual machines.

What could be the reason for this issue?

NAME        READY     STATUS        RESTARTS   AGE
pod-186o2   1/1       Terminating   0          2h
pod-4b6qc   1/1       Terminating   0          2h
pod-8xl86   1/1       Terminating   0          1h
pod-d6htc   1/1       Terminating   0          1h
pod-vlzov   1/1       Terminating   0          1h
1 Comment
    This question looks like it just reopened. A quick reminder: questions on how to use K8S are not on-topic here, as they are not considered to be sufficiently about programming. Commented Apr 13, 2025 at 9:11

27 Answers

1028

You can use the following command to delete the pod forcefully:

kubectl delete pod <PODNAME> --grace-period=0 --force --namespace <NAMESPACE>

11 Comments

I did this in my cluster and the pod seemed to be removed, but when I checked the node, its container was still running. I ended up restarting Docker on the node itself. github.com/kubernetes/kubernetes/issues/25456 Just be careful you're not hiding a systemic problem with this command.
If the pod is in a namespace other than the default namespace, then you must include -n <namespace-name>; otherwise the above command won't work.
@mqsoh : The force delete just removes it from the api-server store (etcd); the actual resource may end up running indefinitely.
"warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely" What resources ?
This happened during a deploy and healthy apps were stuck in a state of termination while unhealthy apps entered a restart loop (due to stringent ready/health check timeouts).. awesome! I thought Kube's strength was resilience?? The only time the site has been down during the last 2 years has been ON Kubernetes. I really hope all this is down to user error because I'm losing faith. On another note, I force terminated and resources were released.
148

The original question is "What could be the reason for this issue?", and the answer is:

It's caused by a Docker mount leaking into some other namespace.

You can log on to the pod's host to investigate:

minikube ssh
docker container ps | grep <id>
docker container stop <id> 

3 Comments

I can't believe this is the least upvoted answer and didn't have a single comment. While all the other answers address ways to work around or fix the problem, the OP clearly asked for the reason why the condition happens in the first place.
The answer already says this but I would like to stress it: Make sure you run these commands in the node where the pod is hosted!
In the case of EKS, you need to identify the node (kubectl get pods -n <NAMESPACE> -o wide), then SSH on to the node and use containerd to list running containers (sudo ctr -n k8s.io containers ls). However, in most cases (EKS or not) I tend to find that the container is not running on the identified node, and it's stuck in a terminating state for some other reason.
95

Force delete the pod:

kubectl delete pod --grace-period=0 --force --namespace <NAMESPACE> <PODNAME>

The --force flag is mandatory.

8 Comments

But the real question for me is "why do we have to resort to this in the first place?" What kinds of things cause pods to get in this stuck state under otherwise normal operating conditions?
Well, I can give you one example, we had a java container that had graceful shutdown, but was garbage-collecting itself to death, thus not reacting to signals.
It's good to provide the namespace; otherwise, in a multi-namespace environment your pod will not be found, since by default kubectl looks only in the current context's namespace.
To force delete all pods in a namespace at once: kubectl get pods -o custom-columns=:metadata.name | xargs kubectl delete pod --force --grace-period=0
@deepdive kubectl delete pod --all -n <namespace> .
54

I found this command more straightforward:

for p in $(kubectl get pods | grep Terminating | awk '{print $1}'); do kubectl delete pod $p --grace-period=0 --force;done

It will delete all pods in Terminating status in default namespace.
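Before running the loop for real, it is worth a dry run: the same pipeline minus the delete shows exactly which names would be picked up. This is a sketch against illustrative sample output; on a live cluster, replace the echo with kubectl get pods.

```shell
# Dry run of the selection step: which pods would the loop delete?
# Sample `kubectl get pods` output stands in for a live cluster.
sample_pods='NAME        READY   STATUS        RESTARTS   AGE
pod-186o2   1/1     Terminating   0          2h
pod-8xl86   1/1     Running       0          1h
pod-vlzov   1/1     Terminating   0          1h'

# Same grep/awk pipeline as the delete loop, without the kubectl delete.
echo "$sample_pods" | grep Terminating | awk '{print $1}'
# prints pod-186o2 and pod-vlzov, one per line
```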

3 Comments

If you want to run it on another namespaces like kube-system use: for p in $(kubectl get pods -n kube-system| grep Terminating | awk '{print $1}'); do kubectl delete pod $p --grace-period=0 --force -n kube-system;done
This answer combined all the other answers perfectly for me. The only caveat was that I had a bunch of orphaned PVCs and a PV. So, I adapted your nice little script with this one liner for p in $(kubectl get pvc | grep Bound | awk '{print $1}'); do kubectl delete pvc $p --grace-period=0 --force;done and then deleted the one PV.
kubectl delete pod --force $(kubectl get pods | grep Terminating | cut -d' ' -f1) is the short form of this answer.
48

In my case the --force option didn't quite work. I could still see the pod! It was stuck in Terminating/Unknown state. So after running

kubectl -n redis delete pods <pod> --grace-period=0 --force

I ran

kubectl -n redis patch pod <pod> -p '{"metadata":{"finalizers":null}}'
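Before nulling the finalizers wholesale, it can help to see which finalizer is actually blocking deletion. A minimal sketch; the JSON below is a made-up sample, and on a live cluster you would fetch it with kubectl -n redis get pod <pod> -o jsonpath='{.metadata.finalizers}' instead of the echo:

```shell
# Inspect which finalizers are present before removing them.
# Sample pod JSON (hypothetical finalizer names) stands in for live output.
pod_json='{"metadata":{"name":"mypod","finalizers":["example.com/protect","foregroundDeletion"]}}'

echo "$pod_json" | grep -o '"finalizers":\[[^]]*\]'
# prints: "finalizers":["example.com/protect","foregroundDeletion"]
```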

1 Comment

Before doing this, it's worth reading kubernetes.io/docs/concepts/workloads/controllers/… to understand what finalizers are. Also, looking at the specific finalizer that is stuck might give hints why it's stuck and whether it's safe to bypass...
40

I stumbled upon this recently when freeing up resources in my cluster. Here is the command to delete them all:

kubectl get pods --all-namespaces | grep Terminating | while read line; do
  pod_name=$(echo $line | awk '{print $2}')
  name_space=$(echo $line | awk '{print $1}')
  kubectl delete pods $pod_name -n $name_space --grace-period=0 --force
done

2 Comments

Here is a warning: warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
@d0zingcat any command to check which pods are running indefinitely that don't appear in kubectl get pods?
32

Delete the finalizers block from the resource (pod, deployment, ds, etc.) yaml:

"finalizers": [
  "foregroundDeletion"
]

3 Comments

Persistent volume got deleted after this. What does it really do?
This was the only thing that fixed the stuck pod for me when delete --grace-period=0 --force didn't. I'd also appreciate some elaboration on what exactly it does, though.
This page explains foregroundDeletion. It's a metadata value that indicates the object is in the process of deletion. kubernetes.io/docs/concepts/workloads/controllers/…
19

Practical answer -- you can always delete a terminating pod by running:

kubectl delete pod NAME --grace-period=0

Historical answer -- There was an issue in version 1.1 where pods sometimes got stranded in the Terminating state if their nodes were uncleanly removed from the cluster.

6 Comments

I guess that is the issue. I powered off one minion VM without removing it from the nodes. Is this acceptable behaviour? Or is there a fix to remove those pods from Kubernetes?
Yeah, the workaround until version 1.2 comes around is to delete the pods.
You can always force delete a terminating pod with kubectl delete pod NAME --grace-period=0
The doc says that when running kubectl delete ... a SIGTERM will be sent to the container. But what if, after the grace period, the container is still running? I got a bunch of pods stuck at Terminating, some written in Go, some in Node.js. The ReplicationController was removed, and the container is still running.
kubectl delete pod PODNAME --grace-period=0 worked for me as suggested by Clayton.
10

In my case, I don't like workarounds, so here are the steps:

  • k get pod -o wide -> this will show which node is running the pod
  • k get nodes -> check the status of that node... I got NotReady

I went and fixed that node. In my case, it was just a kubelet restart:

  • ssh that-node -> run swapoff -a && systemctl restart kubelet (or systemctl restart k3s in the case of k3s, or systemctl restart crio in other cases, like OCP 4.x (k8s <1.23))

Now deletion of the pod should work without forcing the poor pod.
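The node check in the steps above can be scripted: flag any node whose status is not exactly Ready. A sketch over illustrative sample output; on a live cluster, pipe kubectl get nodes into the same awk.

```shell
# Flag nodes that are not Ready -- a common cause of pods stuck Terminating.
# Sample `kubectl get nodes` output stands in for a live cluster.
sample_nodes='NAME      STATUS     ROLES    AGE   VERSION
node-1    Ready      <none>   90d   v1.27.3
node-2    NotReady   <none>   90d   v1.27.3'

# Skip the header row; print any node whose STATUS column is not "Ready".
echo "$sample_nodes" | awk 'NR>1 && $2!="Ready" {print $1}'
# prints: node-2
```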

3 Comments

I love the reason behind your steps
It is not 'the reason'; it is one of many reasons, and that is why this answer is by no means generally applicable.
Restarting a kubelet is a workaround. It does not fix the problem!
8

Please try the command below:

kubectl patch pod <pod> -p '{"metadata":{"finalizers":null}}'

3 Comments

From Review: Command/Code-only answers are discouraged on Stack Overflow because they don't explain how it solves the problem. Please edit your answer to explain what this code does and how it answers the question, so that it is useful to the OP as well as other users with similar issues. See: How do I write a good answer?. Thanks
Following up on @sɐunıɔןɐqɐp's comment, seriously, what does this command do?
What this command does is attempt to manually patch the pod in an effort to force Kubernetes into deleting it immediately, without any additional considerations. This is provided there isn't a different reason it's not deleting, like an internal service error on the cluster's management pods. In addition, the command should be kubectl patch pod <my-pod> --patch '{"metadata":{"finalizers":null}}'. Leaving the patch argument out will throw an error.
7

If --grace-period=0 alone is not working, then you can add --force:

kubectl delete pods <pod> --grace-period=0 --force

1 Comment

There are some situations where this appears to work but it does not actually delete. It may have to do with issues where kubelet loses state of the pod and can not get the state so leaves it .. (e.g github.com/kubernetes/kubernetes/issues/51835 ). I have not found a way to purge it as of yet.
7

I stumbled upon this recently when removing rook ceph namespace - it got stuck in Terminating state.

The only thing that helped was removing kubernetes finalizer by directly calling k8s api with curl as suggested here.

  • kubectl get namespace rook-ceph -o json > tmp.json
  • delete the kubernetes finalizer in tmp.json (leave an empty array: "finalizers": [])
  • run kubectl proxy in another terminal for auth purposes, and run the following curl request against the returned port
  • curl -k -H "Content-Type: application/json" -X PUT --data-binary @tmp.json 127.0.0.1:8001/k8s/clusters/c-mzplp/api/v1/namespaces/rook-ceph/finalize
  • the namespace is gone

Detailed rook ceph teardown here.

Comments

7

I had the same issue in a production Kubernetes cluster.

A pod was stuck in Terminating phase for a while:

pod-issuing   mypod-issuing-0   1/1     Terminating   0  27h

I tried checking the logs and events using the command:

kubectl describe pod mypod-issuing-0 --namespace pod-issuing
kubectl logs mypod-issuing-0 --namespace pod-issuing

but none were available to view.

How I fixed it:

I ran the command below to forcefully delete the pod:

kubectl delete pod <PODNAME> --grace-period=0 --force --namespace <NAMESPACE>

This deleted the pod immediately and started creating a new one. However, I ran into the error below when another pod was being created:

Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[data mypod-issuing-token-5swgg aws-iam-token]: timed out waiting for the condition

I had to wait 7 to 10 minutes for the volume to detach from the previous pod I had deleted, so that it could become available for the new pod I was creating.

Comments

7

To delete all pods in "Terminating" state in all namespaces:

kubectl get pods --all-namespaces | awk '/Terminating/{print $1 " " $2}' | while read -r namespace pod; do kubectl delete pod "$pod" -n "$namespace" --grace-period=0 --force;done
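A dry run of the same namespace-aware pipeline, with sample output standing in for a live cluster, shows how the NAMESPACE and NAME columns feed read -r:

```shell
# Dry run: print what the one-liner would delete, without deleting anything.
# Sample `kubectl get pods --all-namespaces` output is illustrative.
sample_all_ns='NAMESPACE     NAME        READY   STATUS        RESTARTS   AGE
default       pod-186o2   1/1     Terminating   0          2h
kube-system   pod-x1      1/1     Running       0          2h
redis         pod-4b6qc   1/1     Terminating   0          2h'

echo "$sample_all_ns" | awk '/Terminating/{print $1 " " $2}' | while read -r namespace pod; do
  echo "would delete: $pod in $namespace"
done
```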

Comments

6

Force delete ALL pods in namespace:

kubectl delete pods --all -n <namespace> --grace-period 0 --force

Comments

6

I used this command to delete the pods

kubectl delete pod --grace-period=0 --force --namespace <NAMESPACE> <PODNAME>

But when I tried to run another pod, it didn't work; it was stuck in the "Pending" state. It looked like the node itself was stuck.

For me, the solution was to recreate the node. I simply went to GKE console and deleted the node from the cluster and so GKE started another.

After that, everything started to work normally again.

Comments

5

Before doing a force deletion I would first do some checks.

1. Node state: get the name of the node where your pod is running. You can see this with the following command:

kubectl -n YOUR_NAMESPACE describe pod YOUR_PODNAME

Under the "Node" label you will see the node name. With that you can do:

kubectl describe node NODE_NAME

Check the "Conditions" field and see if anything looks strange. If this is fine, then you can move to the next step; redo:

kubectl -n YOUR_NAMESPACE describe pod YOUR_PODNAME

and check the reason why it is hanging, which you can find under the "Events" section. I say this because you might need to take preliminary actions before force deleting the pod; force deleting the pod only deletes the pod object itself, not the underlying resource (a stuck docker container, for example).
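Extracting the node name from the describe output can be done mechanically as well. A sketch over illustrative sample output (kubectl describe pod shows Node as name/IP):

```shell
# Pull the node name out of `kubectl describe pod` output.
# Sample output stands in for a live cluster.
describe_out='Name:         mypod-issuing-0
Namespace:    pod-issuing
Node:         worker-2/10.0.0.12
Status:       Terminating'

# Split on runs of spaces and "/"; take the field after "Node:".
echo "$describe_out" | awk -F'[ /]+' '/^Node:/{print $2}'
# prints: worker-2
```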

Comments

5

You can use awk:

kubectl get pods --all-namespaces | awk '{if ($4=="Terminating") print "oc delete pod " $2 " -n " $1 " --force --grace-period=0 ";}' | sh

2 Comments

I did a small variation: kubectl get pods --all-namespaces | awk '{if ($4=="Terminating") print "kubectl delete pods " $2 " -n " $1 " --force --grace-period=0 ";}' | sh
This is great, but it mixes oc (OCP) and kubectl, so I suggest replacing oc with kubectl.
4

I am going to attempt the most extensive answer, because none of the above are wrong, but they do not work in all scenarios.

The usual way to put an end to a terminating pod is:

kubectl delete pod -n ${namespace} ${pod} --grace-period=0 

But you may need to remove finalizers that could be preventing the pod from stopping, using:

kubectl -n ${namespace} patch pod ${pod} -p '{"metadata":{"finalizers":null}}'

If none of that works, you can remove the pod from etcd with etcdctl:

# Define variables (shell variable names cannot contain hyphens)
export ETCDCTL_API=3
certs_path=${HOME}/.certs/e
etcd_cert_path=${certs_path}/etcd.crt
etcd_key_path=${certs_path}/etcd.key
etcd_cacert_path=${certs_path}/etcd.ca
etcd_endpoints=https://127.0.0.1:2379
namespace=myns
pod=mypod

# Call etcdctl to remove the pod
etcdctl del \
  --endpoints=${etcd_endpoints} \
  --cert ${etcd_cert_path} \
  --key ${etcd_key_path} \
  --cacert ${etcd_cacert_path} \
  --prefix \
  /registry/pods/${namespace}/${pod}

This last case should be used as a last resort; in my case I ended up having to do it due to a deadlock that prevented calico from starting on the node because of pods stuck in Terminating status. Those pods would not be removed until calico was up, but they had reserved enough CPU to prevent calico, or any other pod, from initializing.

Comments

3

I'd not recommend force deleting pods unless the container has already exited.

  1. Verify the kubelet logs to see what is causing the issue: journalctl -u kubelet
  2. Verify the docker logs: journalctl -u docker.service
  3. Check whether the pod's volume mount points still exist and whether anything holds a lock on them.
  4. Verify whether the host is out of memory or disk.

Comments

3

One reason WHY this happens can be turning off a node without draining it first. The fix in this case is to turn the node on again; then termination should succeed.

2 Comments

A very oversimplistic answer. What if you can't turn it back on? If Kubernetes is truly declarative then it should notice this problem and fix itself.
Thanks for identifying the root cause and a fix!
2

My pods were stuck in 'Terminating', even after I tried restarting docker and restarting the server. It was resolved after editing the pod and deleting the items below 'finalizers':

$ kubectl -n mynamespace edit pod/my-pod-name

Comments

2

Only this works for me:

Patching the finalizers and then removing the pods that are in Terminating state.

force-delete-terminated-pods.sh

#!/usr/bin/env bash
kubectl get pods --all-namespaces | grep Terminating | awk '{print $2 " --namespace=" $1}' | xargs kubectl patch pod -p '{"metadata":{"finalizers":null}}'
kubectl get pods --all-namespaces | grep Terminating | awk '{print $2 " --namespace=" $1}' | xargs kubectl delete pod

Comments

1

The following command, using awk and xargs along with --grace-period=0 --force, can be used to delete all the pods in the Terminating state:

kubectl get pods|grep -i terminating | awk '{print $1}' | xargs kubectl delete --grace-period=0 --force pod

3 Comments

I have the same error message scenario. I recently installed an NFS server in my cluster, and some nodes of the same node pool have this problem. I provisionally scale the nodes, and the problem is solved, but it is not the final solution. I'm still investigating, as the node has free resources, nothing still justifies the problem
This forcing process is bad for Kubernetes; I don't recommend this type of action... only as a last resort when troubleshooting.
If one is 'required' to stop pods and a forceful stop doesn't create any issue, then why not? Kubernetes wouldn't have this option had it been bad. There may be many scenarios where pods have to be stopped at a certain instant, so this option may be used. Btw, "one size doesn't fit all".
1

Go templates will work without awk; for me it works without --grace-period=0 --force, but add it if you like.

This will output the command to delete the Terminated pods:

kubectl get pods --all-namespaces -otemplate='{{ range .items }}{{ if eq .status.reason  "Terminated" }}{{printf "kubectl delete pod -n %v %v\n" .metadata.namespace .metadata.name}}{{end}}{{end}}'

If you are happy with the output, you can add | sh - to execute it, as follows:

kubectl get pods --all-namespaces -otemplate='{{ range .items }}{{ if eq .status.reason  "Terminated" }}{{printf "kubectl delete pod -n %v %v\n" .metadata.namespace .metadata.name}}{{end}}{{end}}' |sh -

Comments

1

In my case I had some PersistentVolumes and Ingresses stuck due to finalizers.

The PersistentVolumes had deletion protection enabled. The Ingresses were using a shared ALB and could not be deleted, so they were stuck.

After cleaning those finalizers I could delete the pods.

Comments

0

For me, the command below resolved the issue:

oc patch pvc pvc_name -p '{"metadata":{"finalizers":null}}'

Comments
