As a full-stack developer with over 10 years of Linux and cloud native experience, I use Docker daily to develop, deploy and manage containerized applications efficiently.
In this extensive 2600+ word guide, I will provide an expert look at running Docker images on Linux – delving deeper into architecture, use cases, security practices and production deployments.
Understanding Docker Images and Containers
Before diving into running images, it's important to understand what Docker images are and how they relate to containers.
According to the latest Docker survey, over 65% of organizations are using containers in production. This growth is being driven by the flexibility, portability and resource efficiency of containers.
At the core of Docker is the image format that packages up application code with all its dependencies and configurations:
- OS files like binaries and libraries
- Application runtimes
- Application code and assets
- Environment variables and configurations
Images are immutable and read-only. This guarantees that the application within works exactly as intended every time.
When you run a Docker image, Docker creates a container from it, adding a thin writable layer on top where the running application persists state and data.
Understanding this architecture is key to effectively leveraging Docker images.
Image Layers for Efficiency
Docker images use a Union File System to combine changes as layers:
Starting from a base operating system image, small incremental changes are made – like adding a package or changing a config file. Each change set becomes a new layer stacked on top of the previous ones.
When distributed, only layer deltas are transferred making images fast and efficient. Layers also allow for image reuse and customization.
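To make the layering concrete, here is a small, hypothetical Dockerfile (the image and file names are illustrative): each instruction produces its own layer, so changing only the config file means only that one small layer is rebuilt and re-shipped.

```dockerfile
# Hypothetical Dockerfile illustrating layers (names are examples)
FROM ubuntu:22.04                  # base OS layers, pulled once and shared
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl && \
    rm -rf /var/lib/apt/lists/*    # one layer: package install
COPY app.conf /etc/app/app.conf    # one layer: just the config change
```

You can inspect the resulting layers of any local image with docker history.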
Now that we understand images, let's get them running as containers.
Running Docker Images
The docker run command starts a container from an image with a single command. It combines:
- Image pull from registry if required
- Image extraction as writable container layer
- Starting container process
- Capturing outputs
- Cleaning up after the container exits
This makes docker run a very convenient way to deploy applications.
Basic Docker Run
The simplest way to run an Ubuntu image:
docker run ubuntu sleep 30
This will:
- Pull the latest ubuntu image
- Start a container and run the sleep 30 command
- Print the output
- Stop the container after 30 seconds
To get an interactive shell, invoke a terminal like bash:
docker run -it ubuntu bash
The -it flags start the container in interactive mode with a terminal attached to your session.
You get a root shell to work directly inside the running container. Type exit when done to end the session.
Publishing Container Ports
Applications like web servers use network ports to communicate. By default containers are isolated from the host networking stack.
The -p option maps a container port to the host machine's interface:
docker run -p 8000:80 nginx
This maps the container's internal port 80 to port 8000 on the host so web traffic can route in.
You can publish multiple ports:
docker run -p 8000:80 -p 3000:3000 app
Some common port conventions are:
| Application | Port |
|---|---|
| Web Server | 80 |
| MySQL | 3306 |
| PostgreSQL | 5432 |
| Redis | 6379 |
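The mapping format is always HOST:CONTAINER, which is easy to get backwards. As a sketch, here is a small POSIX shell helper (hypothetical, not part of Docker) that sanity-checks a mapping string before it reaches docker run:

```shell
# Hypothetical helper: validate a HOST:CONTAINER mapping for `docker run -p`.
# Pure shell; does not require Docker to be installed.
valid_port_mapping() {
  case "$1" in
    *:*) ;;              # must contain a colon separator
    *) return 1 ;;
  esac
  host="${1%%:*}"; container="${1##*:}"
  for p in "$host" "$container"; do
    # each side must be numeric and within the valid port range
    case "$p" in (''|*[!0-9]*) return 1 ;; esac
    [ "$p" -ge 1 ] && [ "$p" -le 65535 ] || return 1
  done
}

valid_port_mapping "8000:80" && echo "port mapping looks valid"
```

A helper like this is handy inside deploy scripts that assemble docker run commands from user-supplied configuration.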
Volume Mounts in Containers
Storage inside a container's writable layer is ephemeral: when the container is removed, its writes are lost.
This behavior keeps environments consistent, but it is not ideal for databases, caches and other stateful workloads.
The -v argument mounts host system directories/files into the container. For example:
docker run -v /data:/var/lib/db app
This exposes the /data directory on the host as /var/lib/db inside the container. Writes to this location now persist.
Some common volume mount cases:
- Application config files
- Log files
- Database storage location
- Static assets like images, HTML etc.
Setting Environment Variables
Most applications rely on environment variables for configuration like database URLs, encryption keys etc.
The -e flag makes it easy to pass variables at run time:
docker run -e "DB_HOST=192.168.0.100" app
Now scripts in the container can access DB_HOST to connect to a database.
You can set multiple variables like:
docker run -e "DB_HOST=192.168..." -e "DB_PASS=xyz123" app
Some common variables are:
| Variable | Usage |
|---|---|
| NODE_ENV | Environment mode in Node.js (dev/test/prod) |
| RAILS_ENV | Environment for Ruby on Rails app |
| DB_HOST | Database server URL |
| DB_PASS | Database access password |
| AWS_ACCESS_KEY_ID | AWS access credentials |
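Docker can also read variables from a file with its --env-file flag. As an illustration of what that expands to, here is a hypothetical shell helper that turns a KEY=VALUE file into repeated -e flags:

```shell
# Hypothetical helper: turn a KEY=VALUE file into repeated -e flags for
# `docker run`. (Docker supports this natively via --env-file.)
env_flags() {
  while IFS= read -r line; do
    # skip blank lines and comments
    case "$line" in (''|'#'*) continue ;; esac
    printf ' -e "%s"' "$line"
  done < "$1"
}

# Example file and the command it would produce (paths are illustrative)
printf 'DB_HOST=db.internal\nDB_PASS=secret\n' > /tmp/app.env
echo "docker run$(env_flags /tmp/app.env) app"
```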
This covers the common scenarios and options used alongside docker run to start containers.
Now let's look at going beyond standalone images into multi-container and production-grade deployments.
Orchestrating Multi-Container Apps
Real world applications often consist of multiple components like:
- Frontend app servers
- Backend API services
- Caches
- Databases
- Job queues and workers
- Reverse proxy/load balancing
Manually running each image with precise parameters and connecting them together is tedious.
This is where Docker Compose comes into play, letting you define and run multi-container environments from a single file.
Docker Compose for Local Development
Compose uses a declarative YAML format for configuring dependencies between components.
For example, a media streaming application stack might have:
docker-compose.yml
version: "3.8"
services:
  frontend:
    image: myapp:latest
    ports:
      - "8080:80"
  database:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: dbpass123
    volumes:
      - dbdata:/var/lib/mysql
  backend:
    image: myapi:latest
    environment:
      - DB_HOST=database
volumes:
  dbdata:
This maps host port 8080 to the myapp container's port 80, persists the mysql data in the dbdata named volume, and lets the backend reach the database by its service name through the DB_HOST variable.
Starting everything with one command:
docker-compose up
As you can see, Compose really simplifies running multi-service environments.
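One common refinement, sketched here against the services from the example (the healthcheck command and timings are my assumptions, and the long depends_on form needs a recent Docker Compose): have the backend wait until MySQL actually accepts connections rather than merely having started.

```yaml
services:
  database:
    image: mysql
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 10s
      retries: 5
  backend:
    image: myapi:latest
    depends_on:
      database:
        condition: service_healthy
```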
Production Grade Deployments
For running containers reliably in production, advanced orchestration with Kubernetes adds:
- Autoscaling pods across hosts
- Load balancing between components
- Zero downtime rolling updates
- Automated failover and recovery
With over 50% adoption amongst enterprise teams, Kubernetes has become the de facto open standard for container orchestration.
Popular managed platforms like Amazon EKS, Azure Kubernetes Service (AKS) and Google Kubernetes Engine (GKE) make running cluster infrastructure easier.
These provide server provisioning, automatic scaling, load balancing plus patching and updates out of the box.
So you can focus on just the application deployment flows.
Let's go over some best practices when working with containers and images in production.
Production Docker Security Checklist
Running arbitrary container images from unknown sources poses a significant risk, much like executing untrusted binaries.
Some recommendations from my DevSecOps experience are:
Sign and Validate Images
All organization images should be signed with Docker Content Trust to verify publisher identity:
docker trust sign myimage:1.0
Require signature checks before allowing pulls:
pullImageConfig:
  requiredSignedBy:
    - testing/keys/my-key-1
    - testing/keys/my-key-2
Scan Images for Vulnerabilities
Scan images for security issues in dependencies/configurations on every update:
trivy image myimage:latest
Run Containers as Non-Root User
Containers running as root can access far more of the host if they break out. Instead, create a non-root user with the minimum privileges needed:
# Using a Dockerfile
FROM node:alpine
# Create an unprivileged group and user
RUN addgroup -S appuser && adduser -S -G appuser appuser
# Switch to the non-root user
USER appuser
Limit Resource Usage
Containers should run with compute resource restrictions to prevent Denial of Service.
Set CPU/RAM usage quotas:
docker run -it --cpus 2 --memory 512m ubuntu
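Docker accepts human-readable sizes like 512m for --memory. A hypothetical helper shows the arithmetic behind those suffixes, which is useful when comparing a quota against host memory:

```shell
# Hypothetical helper: convert the size suffixes Docker accepts (b, k, m, g)
# into bytes. Pure shell arithmetic; no Docker required.
to_bytes() {
  n="${1%[bkmg]}"          # strip a trailing unit character, if any
  unit="${1#"$n"}"
  case "$unit" in
    b|'') echo "$n" ;;
    k) echo $(( n * 1024 )) ;;
    m) echo $(( n * 1024 * 1024 )) ;;
    g) echo $(( n * 1024 * 1024 * 1024 )) ;;
  esac
}

to_bytes 512m   # the --memory 512m quota, in bytes
```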
Follow Least Privilege Principle
Only expose the bare minimum ports, volumes and host access needed for the application logic to work:
# Dockerfile example
EXPOSE 80 # Document only the single port the app needs
VOLUME ["/var/app/data"] # Limit writable mounts to one data directory
This keeps the attack surface as small as possible.
Now that we've covered running Docker images safely in development and production, let's tackle some common troubleshooting scenarios.
Debugging Docker Run Issues
Despite being simple to use, docker run can fail unexpectedly due to several runtime factors.
Based on my troubleshooting experience, here are some frequent issues and potential solutions.
1. Container Immediately Exits
Containers stop once their main process finishes, so they won't stay up without a long-running process like a server.
For debugging, check the exit code:
docker ps -a
Non-zero exit codes signify a crash.
Common Fixes:
- Use a process manager like supervisord
- Add an infinite loop or sleep timer
- Check application logs for errors
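Exit codes above 128 mean the process died from a signal (code minus 128). A small, hypothetical helper translates the codes you will see most often in docker ps -a:

```shell
# Hypothetical helper: translate common container exit codes into hints.
# Codes above 128 mean death by signal: code - 128 is the signal number.
explain_exit() {
  case "$1" in
    0)   echo "clean exit" ;;
    125) echo "docker run itself failed" ;;
    126) echo "command found but not executable" ;;
    127) echo "command not found" ;;
    137) echo "killed by SIGKILL (often out of memory or docker kill)" ;;
    139) echo "segmentation fault (SIGSEGV)" ;;
    143) echo "terminated by SIGTERM (docker stop)" ;;
    *)   echo "application error (code $1)" ;;
  esac
}

explain_exit 137
```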
2. Container Doesn't Start
If containers aren't starting at all, it generally indicates a configuration issue.
Troubleshoot by checking:
- Image name and tag correct?
- Sufficient storage and memory resources?
- Image download timed out? Network issue.
- Application bindings failing? Fix ports/volumes.
- Fatal error in underlying application runtime?
3. Connection Refused to Port
Attempting to access a container port but getting connection refused errors implies a networking misconfiguration.
Verify that:
- The intended port is published with the -p flag
- The app initialized correctly and is binding to the published port on 0.0.0.0, not just 127.0.0.1
- No clashes with existing host processes
4. Host File Access Permission Errors
Permission errors on bind mounts usually come from a mismatch between the user inside the container and the owner of the files on the host.
Solutions are:
- Use named volumes instead of bind mounts where possible
- Start container with non-root user permissions
- Check file/folder ownership and permissions
This covers the major pitfalls and solutions for docker run commands.
Now to conclude the guide, let's go over some best practices for further optimizing image delivery and consumption.
Optimizing Docker Images
Well-architected Docker images follow core design principles like:
Single Concern: Each container addresses one specific role, e.g. an app server or a cache.
Minimal Layers: Only essential packages needed to minimize size and dependencies.
Lean OS: Small base images like Alpine Linux over heavier ones like CentOS.
Static Linking: Statically compile apps rather than depending on dynamically loaded shared libraries.
Signal Aware: Design containers to handle kernel signals gracefully for fast stops and restarts.
Stateless: Persist all data to mounted volumes instead of ephemeral container file system for availability.
Immutable: Rebuild containers from updated read-only image versions rather than in-place changes.
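The signal-handling principle above can be sketched as a minimal entrypoint script: trapping SIGTERM lets docker stop shut the container down immediately instead of waiting out the default 10-second kill timeout.

```shell
# Minimal sketch of a signal-aware entrypoint (illustrative, not a full server)
cleanup() {
  echo "shutting down"
  exit 0
}

# React promptly to docker stop (SIGTERM) and Ctrl-C (SIGINT)
trap cleanup TERM INT

echo "started"
# A real entrypoint would now exec or wait on the long-running process, e.g.:
# while true; do sleep 1; done
```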
Now let's see some common areas for optimization in practice:
Multi-Stage Image Builds
The Dockerfile build process can be broken into multiple phases with some neat tricks:
# Install dependencies
FROM maven AS build
WORKDIR /app
COPY . .
RUN mvn package
# Copy only final JAR/binary
FROM openjdk
COPY --from=build /app/target/*.jar /app.jar
# Production runtime
CMD ["java", "-jar", "/app.jar"]
This can shrink the final image dramatically by excluding dev dependencies and build tools!
Distributing Images via Registry
Public and private Docker registries like Docker Hub, AWS ECR and GCR efficiently host, replicate and distribute images globally.
By caching image layers, peering the registry network with your cluster and using container-optimized OS configurations, container start times can drop from minutes to a few seconds.
Registries also integrate well with Continuous Integration and Deployment (CI/CD) pipelines in Kubernetes enabling fast and automated release cycles.
This completes my comprehensive deep dive into running Docker images securely and efficiently in Linux across local development, staging and production environments. Hope you enjoyed the detailed analysis!
Please share any other Docker workflow best practices in the comments.


