Introduction
Kubernetes has revolutionized container orchestration and application deployment by managing containerized applications across clusters of hosts. The kubectl command-line tool lets developers and administrators control Kubernetes clusters and deployments.
One of kubectl's most common uses is deleting unused resources from Kubernetes clusters. Over time, deployments accumulate many pods, services, volumes, and other resources. Removing obsolete resources frees cluster capacity and aids debugging by eliminating clutter. However, haphazard resource deletion can lead to unexpected application outages.
That's why carefully understanding kubectl delete is critical. In this comprehensive guide, we will cover kubectl delete in depth, including syntax, grace periods, safety practices, and much more. Let's get started!
Deleting Different Resource Types
Kubectl can remove many different Kubernetes resource kinds. Here are some common examples:
Pods
Pods encapsulate one or more containers representing a single application instance. We can delete pods with:
kubectl delete pod my-pod
By default, if a deleted pod is managed by a ReplicaSet, the ReplicaSet controller will immediately recreate it. Later, we'll discuss deleting the managing resources, like ReplicaSets, themselves.
Deployments
Deployments provide versioned, auto-scaled pod templates using ReplicaSets. To delete:
kubectl delete deployment my-deployment
This deletes the Deployment and its child ReplicaSets, which in turn delete the pods they manage.
Services
Services route traffic to backend pods using stable networking abstraction. We remove them with:
kubectl delete service my-service
This removes the Service and stops routing traffic to its backend pods.
PersistentVolumes
PersistentVolumes provide long-term pod storage transcending container restarts. They persist until explicitly deleted:
kubectl delete persistentvolume my-volume
This removes the PersistentVolume API object; whether the underlying storage is also destroyed depends on the volume's reclaim policy. Later, we'll discuss volume deletion safety practices.
And many other resources like Namespaces, ConfigMaps, Secrets, and more can also be deleted. Now let's look at delete command syntax…
Delete Syntax and Options
The basic kubectl delete command structure is:
kubectl delete <resource type> <resource name>
For example:
kubectl delete deployment my-deployment
We can also batch delete resources using labels and selectors…
Using Labels and Selectors
Resources often carry metadata labels that allow logical grouping. We batch delete by label with -l:
kubectl delete pods -l app=myapp
More complex selectors filter on multiple labels at once, separated by commas (for example, -l app=myapp,env=dev). We can also narrow the scope to a particular namespace with -n.
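To make multi-label deletes less error-prone, it can help to compose the selector programmatically. This is a minimal sketch, not a kubectl feature: the function only prints the command it would run (the namespace and labels below are illustrative).

```shell
# Sketch: compose a label-selector delete command without executing it.
# "staging", "app=myapp", and "tier=frontend" are illustrative values.
build_delete_by_labels() {
  local ns="$1"; shift
  # Join the remaining key=value arguments with commas for the -l flag.
  local selector
  selector=$(IFS=,; printf '%s' "$*")
  echo "kubectl delete pods -n ${ns} -l ${selector}"
}

build_delete_by_labels staging app=myapp tier=frontend
# prints: kubectl delete pods -n staging -l app=myapp,tier=frontend
```

Printing the command first gives one last chance to eyeball the selector before pasting it into a real shell.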
Deleting All Resources of a Type
We can remove ALL resources of a particular type with:
kubectl delete pods --all
This allows clean slate deletion, but can be dangerous if run against broad resource types!
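One way to blunt that danger is a small guard wrapper that refuses blanket deletes in sensitive namespaces. This is a sketch, not a kubectl safeguard: the protected-namespace list is an assumption, and the function prints the command rather than executing it.

```shell
# Sketch: refuse "--all" deletion in protected namespaces.
# The protected list below is an illustrative convention, not a kubectl feature.
safe_delete_all_pods() {
  local ns="$1"
  case "$ns" in
    ""|default|kube-system|kube-public)
      echo "refusing blanket delete in protected namespace '${ns}'" >&2
      return 1 ;;
  esac
  # Print the command instead of running it, for review.
  echo "kubectl delete pods --all -n ${ns}"
}

safe_delete_all_pods scratch            # prints the delete command
safe_delete_all_pods kube-system || true  # refused with a nonzero status
```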
Now let's discuss some other helpful options…
Useful Delete Options
Here are some commonly used kubectl delete options:
--now                # Shorthand for --grace-period=1; delete with almost no grace period
--grace-period=10    # Customize the graceful termination window, in seconds
--ignore-not-found   # Don't error if the resource doesn't exist
--namespace=<NS>     # Namespace context of the resources to delete
For example, we could immediately purge a pod resource using:
kubectl delete pod my-pod --now
Next let's talk about managing graceful resource termination…
Grace Periods
By default, Kubernetes allows a brief grace period when deleting resources like pods and deployments. This enables:
- Ongoing requests to complete
- Application shutdown routines to run
- Dependent resources to cleanly unlink
This prevents sudden outages. Pods default to a 30-second grace period, configurable via terminationGracePeriodSeconds in the pod spec.
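The per-pod default comes from the spec's terminationGracePeriodSeconds field. A minimal illustrative manifest (the name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: graceful-pod                 # illustrative name
spec:
  terminationGracePeriodSeconds: 45  # override the 30-second default
  containers:
    - name: app
      image: nginx:1.25              # illustrative image
```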
Customizing Grace Periods
We customize grace periods with the --grace-period flag, which takes a value in seconds:
kubectl delete pod my-pod --grace-period=60
This grants the pod up to 60 seconds to shut down cleanly before its containers are killed.
Deleting Immediately with --now
The --now flag (equivalent to --grace-period=1) forces near-immediate deletion:
kubectl delete pod my-pod --now
This cuts the graceful period short, abruptly terminating containers. For pods stuck in the "Terminating" phase, the stronger combination --force --grace-period=0 removes the pod object without waiting for confirmation that its containers have stopped. Use these options judiciously, and only on resources causing issues.
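A typical escalation for a stuck pod is to try --now first and fall back to a forced delete. The sketch below only prints the commands for review (the pod and namespace names are illustrative); --force --grace-period=0 skips graceful shutdown entirely, so it belongs last.

```shell
# Sketch: print the escalation sequence for a pod stuck in Terminating.
# Nothing here executes kubectl; the names are illustrative.
stuck_pod_commands() {
  local pod="$1" ns="$2"
  echo "kubectl delete pod ${pod} -n ${ns} --now"
  echo "kubectl delete pod ${pod} -n ${ns} --grace-period=0 --force"
}

stuck_pod_commands my-pod default
```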
Next let's discuss cascading deletion…
Cascading Deletion
Kubernetes allows resources to depend on each other, like Deployments managing Pods. When deleting a resource, Kubernetes garbage collection will automatically cascade deletion to dependent kinds according to preset rules.
For example, deleting a Deployment cascades to:
Deployment > ReplicaSet > Pods
The garbage collector follows ownership references: the ReplicaSets are owned by the Deployment and the Pods by their ReplicaSet, so all three levels are removed.
Dangers of Cascading Deletion
Accidentally over-broad cascading deletion can take down critical resources. For instance, deleting a Namespace cascades to ALL resources within it:
Namespace > Deployments, Services, ConfigMaps, Secrets, Pods
And if a PersistentVolume's reclaim policy is Delete, removing its claim can destroy the backend disk and all its data.
Later we'll discuss practices for avoiding these outcomes. Next, an important cascading deletion concept…
Cascade Modes: Background, Foreground, and Orphan
By default, kubectl uses "background" cascading deletion: the owner object is deleted immediately, and the garbage collector then removes its dependents in the background. With --cascade=foreground, the order is reversed: dependents are deleted first, and the owner is removed only after they are gone.
We can change this behavior to "orphan" deletion with --cascade=orphan. Here, dependent resources persist even after the original resource is removed.
Orphaning allows decoupling "parent" resources from "children" when necessary. However, it can leave behind useless orphaned resources cluttering the cluster. Understand all three deletion modes.
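The three --cascade values are easiest to compare side by side. This sketch only prints the commands rather than executing them ("my-deployment" is an illustrative name):

```shell
# Sketch: print the delete command under each --cascade mode for comparison.
cascade_variants() {
  local name="$1"
  for mode in background foreground orphan; do
    echo "kubectl delete deployment ${name} --cascade=${mode}"
  done
}

cascade_variants my-deployment
```

Background is the default; foreground waits for dependents; orphan leaves them behind.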
Now let's turn to storage volume deletion…
Deleting PersistentVolumes and Reclaim Policies
PersistentVolumes often back onto durable remote storage disks, so we must distinguish deleting the Kubernetes object from destroying the underlying data:
kubectl delete pv my-volume
This removes the PersistentVolume API object. What happens to the backend disk is governed by the volume's persistentVolumeReclaimPolicy:
Retain - the disk and its data survive, unmapped from Kubernetes, until an administrator cleans them up manually.
Delete - the storage provisioner removes the backend disk along with the volume, destroying all data permanently.
Note also that a PersistentVolume still bound to a PersistentVolumeClaim is protected by a finalizer and will sit in a Terminating state until the claim itself is deleted. Let's shift gears to namespace contexts…
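Whether the backend disk survives deletion is set per volume via the persistentVolumeReclaimPolicy field. An illustrative manifest (the name, capacity, and hostPath are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-volume                         # illustrative name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # Retain keeps the disk; Delete removes it
  hostPath:
    path: /mnt/data                       # illustrative local backing store
```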
Deleting in Different Namespace Contexts
Namespaces partition Kubernetes clusters into virtual sub-clusters, helping isolate teams and applications. By default, kubectl targets the "default" namespace.
We can configure the namespace context several ways:
Set Namespace Manually
One method sets the namespace on the current kubeconfig context so later kubectl commands use it:
kubectl config set-context --current --namespace=myns
Leverage kubens Plugin
The kubens plugin switches interactively:
kubens myns
This cleanly sets the namespace for subsequent kubectl calls.
Use kubectx Plugin
kubectx manages both namespace AND overall cluster context:
kubectx minikube
kubens myns
With namespace set, kubectl deletes apply to that space only. For example, after kubens myns, this command:
kubectl delete pods --all
Would only remove pods in myns, NOT cluster-wide! Let's finish with some best practices…
Deletion Safety Practices
While kubectl delete frees up resources, it can lead to trouble when over-eager. Follow these guidelines to delete cluster resources safely:
- Scope commands narrowly: target the precise resource names and labels needed
- Double-check contexts: apply the correct namespace scope
- Dry run first: use --dry-run=client to preview what would be deleted
- Back up data: snapshot persistent volumes and save manifests
- Watch dependencies: mind resources that cascade extensively
- Use namespaces: they partition risk
- Go slow: delete in chunks and check the impact piecemeal
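The "dry run first" practice can be baked into a small wrapper. As before, this sketch only prints the two commands (preview, then the real delete); the resource kind, selector, and namespace are illustrative.

```shell
# Sketch: always emit a --dry-run=client preview before the real delete.
# Nothing here executes kubectl; the arguments are illustrative.
guarded_delete() {
  local kind="$1" selector="$2" ns="$3"
  echo "kubectl delete ${kind} -l ${selector} -n ${ns} --dry-run=client"
  echo "kubectl delete ${kind} -l ${selector} -n ${ns}"
}

guarded_delete pods app=legacy staging
```

Running the first printed command shows exactly which objects would go before anything is actually removed.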
If in doubt, delete incrementally rather than unleashing a swarm of cascaded deletions!
Conclusion
The kubectl delete command gives administrators immense power over cluster resources. Used properly, it prevents waste and keeps deployments tidy. But mistakes can inadvertently take down whole applications and their data stores!
Follow the syntax, grace period, namespace, and safety concepts we covered to avoid collateral damage. Take backups. And carefully inspect inter-resource dependencies before cascading deletes to prevent runaway deletion storms.
What Kubernetes deletion challenges have you faced? Any other tips? Share your thoughts below!


