Docker containers enable standardized, portable execution of applications across environments, using a lightweight isolation mechanism built on top of the host OS kernel. This guide explores checking container status from a developer's perspective: understanding states, key commands, integrating with registries and volumes, and best practices for container lifecycle monitoring.
Docker Architecture Refresher
Before diving into container status, let's do a quick recap of the Docker architecture constructs relevant to our discussion:

- Images: Read-only templates used to create container instances. Images get stored in registries and contain application code, configs, dependencies etc.
- Containers: Isolated user-space instances running off Docker images. Used to execute applications detached from host infrastructure.
- Registry: Central repository to store, distribute, and download Docker images. Defaults to Docker Hub, but can also be self-hosted using the open-source Registry service.
- Storage Volumes: External persistent data stores that can be mounted inside containers. Used to save state independent of container lifecycle.
Now that we understand the core building blocks, let's drill deeper into tracking container status.
Overview of Container States
As mentioned in the introduction, containers can transition between different execution states during their lifecycle:

Let's analyze them in detail:
Created: A container enters the Created state once docker create completes successfully (docker run passes through this state briefly before starting the container). Resources have been allocated, but the main application process has not started executing.
Running: Indicates that all processes are up and the container is operational. This is the steady state where containers function as expected.
Paused: All processes inside have been suspended but the state is retained in memory. The container can be unpaused to resume. Useful for temporarily disabling containers.
Restarting: Transient state while the Docker daemon restarts a container that was previously running, typically under a restart policy. It clears automatically once the restart succeeds.
Exited: The main container process has terminated, successfully or otherwise. Runtime resources are released, but data in attached volumes persists (more on this later).
Dead: Container has stopped due to an error or failure. Docker daemon will not attempt to restart it.
Now that we understand what each state signifies, let's look at how to check them.
Commands to View Container Status
Docker provides a set of CLI commands to view the current status of running and stopped containers. Let's go through them one by one.
View All Containers
To check status of all containers on a system, use the docker ps command with the -a flag:
docker ps -a
This displays a table with the state of every local container, both running and stopped.

Key fields here are:
- STATUS: Current state as covered above: Running, Exited, Dead, etc.
- IMAGE: Base image the container was created from, pulled from a registry.
- NAMES: The container name set with --name (auto-generated if unset).
- COMMAND: Process or command being run inside the container.
- CREATED: Date and time the container was created.
- PORTS: Any ports exposed or published by the container.
With docker ps -a, you get a snapshot of every container on the Docker Engine along with metadata to infer further details.
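As a sketch of how this output can feed simple tooling, the state column is easy to tally with standard shell utilities. The pipeline below runs on a captured sample; on a Docker host, the sample_states variable would instead come from `docker ps -a --format '{{.State}}'`:

```shell
# Count containers per state. The sample below stands in for the output of:
#   docker ps -a --format '{{.State}}'
# (hypothetical sample data -- replace with the real command on a Docker host)
sample_states='running
running
exited
paused
exited'

# Tally identical state names and show the most common first
echo "$sample_states" | sort | uniq -c | sort -rn
```

With Docker available, the same idea collapses to a one-liner: `docker ps -a --format '{{.State}}' | sort | uniq -c`.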
View Only Running Containers
To check status of exclusively running containers, leave out the -a flag:
docker ps
By default, Docker shows only active containers.

This gives you a production-style view focused solely on the container workloads currently operating, with stopped containers excluded.
Additionally, you can filter explicitly by running state as well:
docker ps -f status=running
Filter By Status Values
Speaking of filters, we can query containers based on a state directly:
# Paused containers
docker ps -f status=paused
# Created containers
docker ps -f status=created
Some other useful filters are:
# Restarting containers
docker ps -f status=restarting
# Exited containers
docker ps -f status=exited
# Dead containers
docker ps -f status=dead
When filtering, include -a as well so the query covers all containers regardless of execution state.
Filters give precision control to slice and dice containers on specific conditions.
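To sweep every state at once, a small loop can generate one filtered query per state. This sketch only prints the commands so it runs anywhere; pipe its output to `sh` on a Docker host to actually execute them:

```shell
# Emit one `docker ps` invocation per known container state.
# Printing (rather than executing) keeps the sketch runnable without Docker.
for state in created running paused restarting exited dead; do
  echo "docker ps -a -f status=$state"
done
```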
Customize Output Columns
While the default docker ps output covers the essentials, we may need additional metadata for debugging or auditing.
Docker allows customizing displayed columns with the --format flag. Some helpful ones to append are:
# Duration since container initialization
docker ps -a --format "table {{.ID}}\t{{.Image}}\t{{.RunningFor}}"
# Disk usage size
docker ps -a --format "table {{.ID}}\t{{.Names}}\t{{.Size}}"
# Network port mapping
docker ps -a --format "table {{.ID}}\t{{.Image}}\t{{.Ports}}"
# Mounted volumes
docker ps -a --format "table {{.ID}}\t{{.Names}}\t{{.Mounts}}"
This exposes further contextual and performance data for containers.

Refer to the official documentation for more on custom docker ps templating.
View Status History of Containers
In addition to the current state, we can dig into detailed status metadata using docker container inspect.

It returns a JSON document describing the container. Within it, a State section records the latest state transition and its timestamps:

The key fields here are:
- Status: The current state, such as "running".
- StartedAt: Exact timestamp of the last start.
- FinishedAt: Set once the container has exited.
- Error: Populated if a transition failed due to an error.
This gives you a precise audit trail of container execution events since initialization.
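As a sketch, the interesting State fields can be pulled out of `docker inspect --format '{{json .State}}' <container>` with a little text processing. The JSON below is a hypothetical sample of that output; plain sed keeps the example self-contained, though jq is the cleaner tool when available:

```shell
# Hypothetical sample of: docker inspect --format '{{json .State}}' <container>
state_json='{"Status":"exited","Running":false,"ExitCode":1,"StartedAt":"2023-09-07T10:00:00Z","FinishedAt":"2023-09-07T10:05:42Z","Error":""}'

# Extract the fields we care about (use `jq -r .Status` instead if jq is installed)
status=$(echo "$state_json" | sed -n 's/.*"Status":"\([^"]*\)".*/\1/p')
exit_code=$(echo "$state_json" | sed -n 's/.*"ExitCode":\([0-9]*\).*/\1/p')

echo "status=$status exit_code=$exit_code"
```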
Quickly Check Status with Docker Compose
When running multi-service apps with Docker Compose, we can check status at the stack level rather than per container:
docker-compose ps
This shows the state of all containers in the Compose application.

To include stopped containers as well:
docker-compose ps -a
Some other useful Compose subcommands are:
# Container IDs only (useful for scripting)
docker-compose ps -q <service>
# View logs
docker-compose logs -f <service>
# Inspect metadata
docker inspect <container>
This allows inspecting your entire application cluster in one unified interface without needing to check individual containers separately.
Debugging Exited & Dead Containers
Now that we have covered the ways to check real-time status, let's shift focus to containers that have unexpectedly stopped.
The two states associated with containers going down are Exited and Dead. How do we debug them?
Exit Codes
When a container transitions from Running to Exited, the application process inside has terminated with an exit code between 0 and 255:
- 0 – Successful termination
- 1-127 – Application error leading to exit
- 128+ – Process terminated by a signal (128 + signal number, e.g. 137 for SIGKILL)
We can check the exit code that was set:
# Works for both running and stopped containers
docker inspect --format='{{.State.ExitCode}}' <container-id>
Non-zero codes imply a crash or failure. Identifying the exact code gives insight into the cause, since each application assigns its own meanings to them; for example, an app might use code 2 for a configuration error.
Matching the shown code against your application conventions helps identify failures requiring app modification versus infrastructure issues.
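The same conventions apply to plain processes, so they can be demonstrated without Docker at all. This snippet uses sh to produce each class of exit code (the `||` form keeps the script safe to run under `set -e`):

```shell
# 0: clean termination
sh -c 'exit 0'; echo "clean exit: $?"

# 1-127: application-defined failure codes
sh -c 'exit 2' || echo "app error: $?"

# 128+N: terminated by signal N. 137 (128 + 9, SIGKILL) is the same
# code you see for OOM-killed containers.
sh -c 'kill -KILL $$' || echo "killed: $?"
```

Running this prints codes 0, 2, and 137 respectively, mirroring what docker inspect would report for equivalent container exits.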
Viewing Logs
In addition to exit codes, we can view the logs generated while the container was running:
docker logs <container>
This surfaces the STDOUT/STDERR output, which can reveal what the application was doing just before shutdown.

Some other useful options:
# Continuous stream of latest logs
docker logs -f <container>
# Get last 100 lines only
docker logs --tail 100 <container>
# Logs within a time window (Unix timestamps or RFC3339 dates)
docker logs --since 1631056613 --until 1631057727 <container>
Logs provide tremendous insights even for stopped containers to identify why they exited or crashed unexpectedly.
Automatic Restarts Configuration
For containers that exit frequently but need to run continuously, Docker supports auto restart policies with the --restart flag:
# Restart always irrespective of exit code
docker run --restart=always <image>
# Restart on failure, up to 10 times
docker run --restart=on-failure:10 <image>
This alleviates the need to manually restart containers when dealing with flaky applications; see the Docker documentation for the full list of restart policies.
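In Compose-managed stacks, the same policies can be declared in the service definition instead of on the command line. A minimal config sketch (service and image names are hypothetical):

```yaml
# docker-compose.yml fragment -- service and image names are placeholders
services:
  api:
    image: my-app:latest
    # Retry a crashing container up to 10 times, then give up
    restart: on-failure:10
  worker:
    image: my-worker:latest
    # Always restart, including after a daemon restart
    restart: always
```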
Handling Dead Containers
If Docker transitions a container to the Dead state, this implies an unrecoverable failure. The engine will not attempt any further restarts.
This usually happens when:
- The container process did not start successfully
- The underlying storage driver errored out
- System initialization failures
- Hardware virtualization issues
For Dead containers, first course of action is checking logs:
docker logs <container-id>
If this does not reveal anything:
- Destroy the container fully with docker rm -f
- Recreate the container from the image and try again
- If the issue recurs, the problem is likely at the infrastructure level
Also closely analyze the Docker daemon logs for clues (journalctl -u docker.service on systemd hosts, or /var/log/docker.log on older setups).
Dead is a fail-fast signal from Docker that something catastrophic halted execution completely.
Container Status Integrations
So far we explored native Docker tooling to monitor statuses. But in large fleets, manual inspection does not scale.
We need mature container observability pipelines by integrating with enterprise platforms.
Cluster Management: Kubernetes
For operating Docker clusters at scale, Kubernetes is the de facto open-source orchestration platform. With features like auto healing, scaling, load balancing and more, Kubernetes radically simplifies running large container deployments.
Kubernetes follows a control-plane pattern for cluster management: the control plane continually monitors containers for status changes.

Containers are deployed as Pods in Kubernetes. The control plane then actively handles supervision aspects like:
- Liveness probes: Heartbeats to check container health
- Readiness checks: Verify app preparedness to serve traffic
- Auto healing: Restart failed processes and re-provision nodes
- Scaling: Scale Pods out automatically on load spikes
With Kubernetes managing container lifecycles, we offload operational concerns around availability and scalability completely.
Monitoring: Prometheus
For analytics and data-driven insights into container statuses, Prometheus is a popular open-source monitoring and alerting toolkit.
It provides storage for massive volumes of time series metrics data with a multi-dimensional data model and powerful query language (PromQL).

Docker integrates well with Prometheus: the daemon can expose a Prometheus-compatible metrics endpoint (via the metrics-addr daemon option), and exporters such as cAdvisor collect per-container metrics automatically.

We can instantly start graphing container availability rates, uptime trends, and more. PromQL lets you query and correlate signals for advanced analysis; for example, assuming the daemon metrics endpoint and cAdvisor are scraped:
# Number of containers currently in the running state (Docker daemon metric)
engine_daemon_container_states_containers{state="running"}
# Average memory usage across the container fleet (cAdvisor metric)
avg(container_memory_usage_bytes) by (instance)
For setting up alerts around container failures, see the Prometheus Alertmanager documentation.
Prometheus takes container status monitoring to the next level enabling data-driven reliability.
Persisting Container Data with Volumes
An important consideration with container status is data retention across lifecycles.
By default, a container's writable filesystem layer is deleted when the container is removed, leading to loss of any state written there.
Docker allows attaching external volumes to persist data beyond the container's lifetime.
Common use cases are:
- Database data files
- Application config files
- Log data
- Anything that needs reuse across restarts
We define one-off volumes inline with docker run:
docker run -v mydata:/var/lib/data <image>
This creates and mounts a volume named mydata, which the container accesses at the internal path /var/lib/data.
For reuse, named volumes get defined upfront:
docker volume create logdata
docker run -v logdata:/var/log <image>
Now multiple containers can mount the shared logdata volume simultaneously.
Docker manages volume lifecycle independent of individual containers. So applications can start, stop and reinitialize without worrying about state loss.
Container Status Best Practices
We'll conclude by enumerating some best practices around configuring and monitoring container state:
1. Name containers uniquely: Instead of opaque default IDs, use custom names with --name for quick visual recognition of apps.
2. Validate health probes: Enforce both readiness and liveness checks so orchestrators can track container well-being accurately.
3. Persist critical data: Offload important datasets that need reuse like logs, configs and database files to external Docker volumes.
4. Follow least privilege: Containers should execute with minimal necessary resources and privileges as per app requirements. Restricting capabilities tightens security.
5. Monitor dashboards closely: Visualize metrics like container availability, traffic loads, resource usage levels etc. in time series graphs for increased situational awareness.
6. Configure restart policies: Use auto-restart flags to keep flaky containers running reliably instead of depending on manual intervention every time.
7. Replicate metrics for redundancy: Push monitoring stats to a secondary backend in addition to the primary one, so visibility survives a monitoring outage.
Adopting these patterns will give you strong observability, control, and confidence over container workloads at any scale or complexity.
Conclusion
I hope this guide served as a comprehensive reference for tackling the many nuances around checking Docker container status from development to production scale.
We took a multi-layered approach covering key CLI syntax, state transition workflows, integration touch points with volumes, networking and orchestrators as well as operational best practices.
Do let me know if you have any other questions as you instrument monitoring for your own Docker environments!


