Namespaces are key building blocks for organizing resources within Kubernetes clusters. As a full-stack developer working extensively with namespaces in production environments, I wanted to share an insider's guide based on real-world use.
Whether you are looking to segregate workloads in shared clusters or manage environments for microservices, understanding namespaces is essential.
In this comprehensive 2600+ word guide, we will dig into namespace architecture, operations, and best practices through a developer's lens.
Namespace Basics
Let's quickly recap what namespaces are in Kubernetes:
- Namespaces partition cluster resources across multiple teams and workloads
- Each namespace provides isolation for objects like Pods, Services, and ConfigMaps
- Users and service accounts are scoped to namespaces
- Resource quotas can restrict memory, CPU, and object counts per namespace
Some infrastructure namespaces created automatically include:
- default – the namespace for resources created without an assigned namespace
- kube-system – for Kubernetes system components like kube-dns
- kube-public – publicly readable resources accessible to all users
Developers typically create custom namespaces aligned to teams, environments and applications.
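As a quick sketch, creating and working with a custom namespace looks like this (the `team-red` name is just illustrative):

```shell
# Create a namespace for a team (name is illustrative)
kubectl create namespace team-red

# List all namespaces, including the built-in ones
kubectl get namespaces

# Deploy a workload into the namespace explicitly
kubectl run nginx --image=nginx --namespace=team-red
```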
Why Namespaces Matter
Namespaces allow us to organize Kubernetes much like directories in a filesystem. Here are my top reasons for using them:
Resource Sharing and Governance
In large enterprises, namespaces help divide limited cluster resources fairly between teams through quotas.
Production Engineering Team:
  CPU Requests: 32 cores
  Memory: 128Gi
  Max Pods: 100

Data Science Team:
  CPU Requests: 8 cores
  Memory: 64Gi
  Max Pods: 50
Without governance, one team can starve others. Namespaces combined with ResourceQuotas allow fair sharing.
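The split above can be sketched with `kubectl create quota` (the team namespace names and numbers here are the illustrative ones from the table, not a prescription):

```shell
# Quota for the production engineering team's namespace
kubectl create quota prod-eng-quota \
  --hard=requests.cpu=32,requests.memory=128Gi,pods=100 \
  --namespace=prod-engineering

# Quota for the data science team's namespace
kubectl create quota data-science-quota \
  --hard=requests.cpu=8,requests.memory=64Gi,pods=50 \
  --namespace=data-science
```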
Security and Access Control
RBAC Roles and RoleBindings scoped to namespaces control who can do what within them.
For example, backend developers can be granted permission to view production workloads without the ability to modify or delete resources.
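A minimal sketch of that kind of read-only access, assuming a `backend-devs` group and a `production` namespace (both names hypothetical):

```shell
# Role granting read-only access to workloads in the production namespace
kubectl create role workload-viewer \
  --verb=get,list,watch \
  --resource=pods,deployments,services \
  --namespace=production

# Bind the role to the backend developers group (group name is illustrative)
kubectl create rolebinding backend-devs-view \
  --role=workload-viewer \
  --group=backend-devs \
  --namespace=production
```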
Separate Environment Isolation
Teams share development and test clusters while using namespaces to protect production stability:
team-red (Development)
  Latest code
  Experimental features
  Rapidly changing

team-red (Production)
  Stable code releases
  Locked-down configuration
  Controlled changes
This prevents unstable changes from hitting production.
Application Domains
Microservices and related backends can reside within a common namespace for easy visibility:
catalog (Namespace)
  catalog-db (Service)
  catalog-api (Deployment)
  shared-lib (ConfigMap)
Developers just interact with the catalog namespace rather than individual pieces.
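For day-to-day work, developers can point kubectl at the namespace once instead of repeating `-n catalog` on every command:

```shell
# Make catalog the default namespace for the current kubectl context
kubectl config set-context --current --namespace=catalog

# Subsequent commands now target the catalog namespace
kubectl get deployments
kubectl get services
```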
Operational Agility
Namespace policies and permissions strengthen cluster sharing, but sometimes more agility is needed.
New clusters can be spun up quickly that provide dedicated control planes for specific teams. Teams operate services with full autonomy without worrying about negatively impacting others.
While namespaces provide partitioning, sometimes dedicated environments enable faster innovation.
Namespace Architecture Patterns
Over years building applications using Kubernetes, I have seen a few common architecture patterns around organizing namespaces emerge:
1. Per Environment
Create namespaces for different environments like development, staging, and production. Apply quotas and lock down production:
team-red (Development)
team-red (Staging)
team-red (Production)
This provides environment isolation and control.
2. Per Team
Partition namespace per developer team aligning their services, ingress rules, configs and secrets:
team-red (Namespace)
  redis cache
  todo-api
  ingress

team-blue (Namespace)
  postgres
  user-db
  monitoring
Ownership enables agility within the application domain.
3. Per Customer / Account
For multitenant applications like SaaS, client-specific namespaces allow customization:
acme (Namespace)
  acme-gateway
  acme-auth

globex (Namespace)
  globex-api
  globex-worker
Each tenant gets its own configuration, secrets, and policies, and consistently named tenant namespaces keep services easy to discover.
4. Hybrid
Practical systems use a mix of all models above:
prod
  team-red
  team-blue

staging
  team-red
  team-blue

loadtest-2023-01-15
Environments provide macro isolation while teams enable micro isolation.
Namespace Planning Guide
As with microservices, bounded contexts are central to namespace design.
Here is my checklist when scoping namespaces:
Strategize Resource Limits
- What capacity exists on clusters?
- How should namespaces divide vCPUs and memory?
- What namespace gets what percentages based on priority?
- How to manage capacity if teams scale up pods?
Resource limits avoid "noisy neighbors".
Group Related Services
- Which apps and services belong together?
- Does this change between environments?
- Can services communicate via DNS alone?
- Enable discoverability with readable namespaces.
Domain logic helps organization.
Model Access Policies
- Who requires what level of access per namespace?
- Production vs non-prod namespaces?
- Can namespaces map to teams 1-1?
- Create roles and bindings to enforce policies.
Security through least privilege access.
Set Up Monitoring
- How to track resource usage per namespace?
- Trending utilization over weeks and months?
- Dashboards highlighting namespaces exceeding limits?
- Alerts informing on quotas or anomaly detection?
Observability prevents surprise outages.
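As a starting point for that kind of observability (assuming metrics-server is installed in the cluster), per-namespace usage can be inspected directly:

```shell
# Current CPU/memory usage per pod in a namespace (requires metrics-server)
kubectl top pods --namespace=team-red

# Quota consumption across all namespaces
kubectl get resourcequotas --all-namespaces
```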
With scale, having an informed namespace architecture reduces outages and fire-fights.
Configuring Resource Quotas
ResourceQuotas limit the compute resources like CPU and memory consumed per namespace.
I use them extensively to restrict teams from overwhelming clusters and impacting others.
Here is a quota example:
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: team-red
spec:
  hard:
    requests.cpu: "2000m"
    requests.memory: 2Gi
    limits.cpu: "4000m"
    limits.memory: 4Gi
```
This limits the namespace to:
- 2 CPU cores of total requests
- 2 GiB of total memory requests
- 4 CPU cores as a hard limit (maximum)
- 4 GiB of memory as a hard limit
Burstable means containers can consume resources above their requests up to their limits; CPU beyond the limit is throttled, while exceeding the memory limit gets the container OOM-killed.
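To illustrate, a pod whose containers set requests below limits gets the Burstable QoS class (pod and image names here are illustrative):

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: burstable-demo
  namespace: team-red
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "250m"     # counted against requests.cpu quota
        memory: 256Mi
      limits:
        cpu: "500m"     # can burst up to here, then throttled
        memory: 512Mi
EOF
```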
How do engineers know how much quota is available and usage?
View Namespace Quota Usage
Check current quota usage stats using:
```
$ kubectl describe quota compute-quota --namespace=team-red
Name:            compute-quota
Namespace:       team-red
Resource         Used  Hard
--------         ----  ----
limits.cpu       800m  4
limits.memory    2Gi   4Gi
requests.cpu     400m  2
requests.memory  1Gi   2Gi
```
This shows quota capacity and current utilization.
Dashboards provide trends over weeks as well while alerts call out outliers.
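For scripting and dashboards, the same numbers are available in machine-readable form, for example via jsonpath:

```shell
# Print the hard limits and current usage recorded in the quota's status
kubectl get quota compute-quota -n team-red \
  -o jsonpath='{.status.hard}{"\n"}{.status.used}{"\n"}'
```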
Set Default Quotas
You can define a standard quota manifest to apply to each namespace:
quota.yaml:
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: default-quota
spec:
  hard:
    requests.cpu: "500m"
    requests.memory: 1Gi
    limits.cpu: "1000m"
    limits.memory: 2Gi
```
Make sure the ResourceQuota admission controller is enabled so quotas are enforced at all:
--enable-admission-plugins=...,ResourceQuota,...
Note that ResourceQuota objects are namespaced, so a "default" quota is not inherited automatically — it must be applied to each namespace, typically via automation or a policy controller.
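One simple way to approximate cluster-wide defaults is a loop that applies the manifest to every existing namespace (a sketch; policy controllers such as Kyverno can do this declaratively instead):

```shell
# Apply the default quota manifest to every namespace (sketch)
for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
  kubectl apply -n "$ns" -f quota.yaml
done
```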
Migrating Resources Across Namespaces
When evolving architectures, resources often shuffle across namespaces.
Here is one pattern I follow for migrating between namespaces:
```shell
# 1. Export definitions from the old namespace
kubectl get all -n old-ns -o yaml > export.yaml

# 2. Find/replace namespace references
sed -i 's/old-ns/new-ns/g' export.yaml

# 3. Apply resources in the new namespace
kubectl apply -n new-ns -f export.yaml
```
This moves workloads between namespaces via declarative yaml while preserving definitions and configuration.
For zero-downtime, deploy new versions along existing before decommissioning.
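One caveat with `kubectl get -o yaml` exports: they include cluster-generated metadata (resourceVersion, uid, status) that is best stripped before re-applying. The namespace rewrite step itself can be sanity-checked locally on a sample file:

```shell
# Demonstrate the find/replace step on a sample export (runs locally)
cat > export.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: shared-lib
  namespace: old-ns
EOF

sed -i 's/old-ns/new-ns/g' export.yaml
grep 'namespace:' export.yaml   # now reads: namespace: new-ns
```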
Automating Namespace Management
Like infrastructure as code, namespace setup can be automated and version controlled.
Here is a script to bootstrap a namespace:
setup.sh:
```shell
#!/bin/bash
NS=$1

kubectl create ns "$NS"

kubectl create quota compute-quota \
  --hard=cpu=2,memory=2Gi,pods=10 \
  --namespace="$NS"

kubectl create role pod-reader --verb=list,get --resource=pods -n "$NS"
kubectl create rolebinding default --role=pod-reader --serviceaccount="$NS":default -n "$NS"
```
Usage:
```shell
./setup.sh myteam
```
This allows declaring namespaces as code!
You can enhance with:
- Terraform / Pulumi / Cloud providers to auto-provision namespaces
- GitOps to sync desired namespace state
- Automated security scans like kube-bench per namespace
Infrastructure as code unlocks scalable namespace management.
Troubleshooting Namespaces
Here are some common namespace issues I have debugged and solutions:
App can't access a Kubernetes service
- Ensure the service and app reside in the same namespace, OR
- Reference the service by its fully qualified DNS name: <service>.<namespace>.svc.cluster.local
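Cross-namespace access works through the service's cluster DNS name. For example, assuming a `catalog-api` service in the `catalog` namespace listening on port 8080 (names and port are illustrative):

```shell
# From a pod in any namespace of the cluster
curl http://catalog-api.catalog.svc.cluster.local:8080/
```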
Forbidden errors accessing resources
- Create RBAC roles and bindings properly mapped to apps
- Avoid using default service account with broad permissions
Quota errors even though usage appears below the quota
- Check priority classes, fairness, and actual cluster-wide capacity
- Remember that a pod's requests and limits count against separate quota totals (requests.* and limits.*)
New namespace not working like old one
- Check for custom resource definitions, operators, service accounts
- Identify and export critical definitions before deleting namespaces
API delays accessing services
- Ensure pods distributed across multiple nodes
- Check pod count and memory pressure by node
- Tune HPA autoscaling to balance
Debugging involves methodically ruling out infrastructure and policies using tools like describe, metrics, logs and traces.
Namespace usage patterns over weeks reveal issues missed in snapshots.
Key Takeaways
Here are my main recommendations around namespaces from architecting large Kubernetes deployments:
- Plan namespaces around environments, teams and products for separation
- Put all related services and workloads in ONE space for ownership
- Manage capacity via quotas and tailor access through custom roles
- Automate namespace creation/config using Infrastructure as Code tools
- Follow readable naming and namespace patterns for consistency
- Use namespaces to secure and scale cluster density substantially
I hope this detailed guide from a fullstack developer angle helps you leverage namespaces effectively. Feel free to post any questions!


