How to Create Your Own Optimized Docker Images Like a Pro

Have you ever struggled with an existing Docker image that isn't quite optimized for your application? Or wanted more control over the contents and security of your containers?

Creating your own custom Docker images gives you that power!

In this comprehensive guide, you’ll learn:

  • Key reasons for building custom images
  • Step-by-step instructions from Dockerfile to deployment
  • Expert tips for streamlining and locking down images

Soon you’ll be able to effortlessly build lean, secure and portable images tailored to any app.

Why Roll Your Own Docker Images?

With over 8 million apps deployed via Docker and industry giants like Netflix and Spotify containerizing their apps, Docker adoption continues to accelerate.

Plenty of pre-made images exist on Docker Hub to get started fast. But relying solely on others’ images has downsides:

❌ No control over dependencies included
❌ Images ballooning in size, filled with unnecessary tools
❌ Security risks from unpatched vulnerabilities

Crafting your own gives total freedom to:

✅ Only install exactly what your apps need
✅ Optimize size for faster deployment
✅ Add security measures like read-only volumes

Beyond those key benefits, custom images also enable seamless CI/CD pipelines, consistency across environments, and portability across any platform supporting containers.

Now that I’ve convinced you to give it a shot, let’s explore how to DIY images like a Docker pro!

Gather Prerequisites

I’ll assume you have some basic Docker familiarity since you’re keen to build images. But let’s validate you have what you need to follow along:

Docker Installed

Docker Engine enables containers, while Docker Compose assists in running multi-service apps. Having both installed provides maximum flexibility:

docker --version
docker-compose --version 

I recommend the latest Docker releases to leverage cutting-edge features.

App Code Ready

You may be containerizing an existing app or building one from scratch. Either way, have the source code, scripts and assets gathered. This gets copied into the image later.

A simple Node.js app like this works nicely:

const express = require('express');

const app = express();

app.get('/', (req, res) => {
  res.send('Hi there!');
});

const port = 3000;

app.listen(port, () => {   
  console.log(`App running on port ${port}`);
}); 

Save it to an app.js file we can Dockerize next.
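For the app to start, express must also be declared as a dependency. A minimal package.json along these lines would work (the name and version fields here are just placeholders):

```json
{
  "name": "my-node-app",
  "version": "1.0.0",
  "main": "app.js",
  "dependencies": {
    "express": "^4.18.0"
  }
}
```

Running npm install alongside app.js pulls the dependency in locally before we containerize.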

Basic Docker Literacy

It helps to understand key concepts like containers vs. images, the component parts of an image, and how docker build works under the hood.

If terms like intermediate containers or image layers sound foreign, I suggest brushing up with Docker's excellent interactive tutorials first.

No specialized Linux skills are necessary though! Once the above boxes are checked, you have all you need to start Dockerizing like a pro.

Crafting a Lean Dockerfile

The foundation of any Docker image is the Dockerfile – a simple text file containing the steps to assemble the image.

Think of it like bread dough: add in ingredients like OS packages or application code, season with commands like EXPOSE or CMD, then bake with docker build!

Let's break down the anatomy of a robust Dockerfile using our Node.js app as an example:

Dockerfile Contents

Choosing a Base

All Dockerfiles start FROM an existing image. This parent image provides the initial filesystem and OS fundamentals to build on top of.

FROM node:14-alpine

The official Node image's Alpine variant runs on tiny Alpine Linux. This keeps images compact and secure out of the gate.

Install Deps

If additional OS packages are required, use RUN to install them:

RUN apk add --no-cache libc6-compat

Our application needs this extra library available at runtime.

Set Working Directory

Define the root directory for other commands with WORKDIR:

WORKDIR /app

This path now becomes the default for all subsequent RUN, COPY, and other instructions in the Dockerfile.

Copy App Code

Populate working directory with the app source files:

COPY . .

This copies files from the current host directory (the build context) into the container at /app, the working directory defined earlier.

Open Ports

Document which ports the container listens on with EXPOSE (note this does not actually publish them to the host):

EXPOSE 3000   

Our Node.js app listens on port 3000 for incoming connections.

Define Runtime Command

When a container launches from our image, run this by default:

CMD ["node", "app.js"]

This will execute node app.js, running the server when containers start.

And that's a wrap! Our slim but powerful Dockerfile is complete.
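Assembled in one place, the Dockerfile we just walked through looks like this:

```dockerfile
# Start from the official Node image's Alpine variant
FROM node:14-alpine

# Extra OS library our app needs at runtime
RUN apk add --no-cache libc6-compat

# Default directory for subsequent instructions
WORKDIR /app

# Copy app source from the build context into /app
COPY . .

# Document the port the app listens on
EXPOSE 3000

# Launch the server when a container starts
CMD ["node", "app.js"]
```

Save it as a file named Dockerfile (no extension) next to app.js.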

On to building the image itself next.

Transforming Dockerfile to Image

We have our “recipe” for the perfect image for our app. Time to turn it into a tangible image by “baking” it with docker build.

Build off the Dockerfile we just created:

docker build -t my-node-app .

Breaking this down:

  • docker build = construct image from Dockerfile
  • -t my-node-app = tag/name image
  • . = use Dockerfile in current directory

When run, Docker steps through each instruction: installing packages, copying files, generating container layers.

The final output prints the new image ID for our handcrafted my-node-app image!

Successfully built 98e4970aa7d1
Successfully tagged my-node-app:latest
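To confirm the image landed in your local cache, list it (the image ID and size will vary on your machine):

```shell
# Show the freshly built image, its tag, and its size
docker images my-node-app
```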

Let’s now put it through its paces.

Testing Containers from Custom Images

With our DIY image in hand, spin up a container from it:

docker run -dp 3000:3000 my-node-app 

Breaking down the flags:

  • -d = run in detached (background) mode
  • -p 3000:3000 = publish container port 3000 to host port 3000

Head to localhost:3000 and voila – our app!
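You can also verify from the command line; with the container running, a quick request should return the greeting from app.js:

```shell
# Hit the published port and check the response
curl http://localhost:3000
# → Hi there!
```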

Tweak the Dockerfile as needed and rebuild until running smoothly:

docker build -t my-node-app .

This recreates the image, staying agile as we optimize further.

Once confident via testing, we’re ready to share and run the image anywhere Docker is installed!

Streamlining Your Image

Being diligent about keeping images tidy saves storage space and runtime memory. Here are some pro tips:

Leverage Multi-Stage Builds

Install dependencies and build tools in a separate stage so they don't bloat the production image:

FROM node:14 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci

FROM node:14-alpine
WORKDIR /app
COPY --from=build /app/node_modules ./node_modules
COPY . .

Only the output artifacts from the first stage are copied over.

.dockerignore Unneeded Files

Create a .dockerignore file to exclude files that are useful locally but irrelevant inside the container:

node_modules
.git

This speeds up builds by only processing essential assets.

Use Smaller Base Images

Trim OS overhead by not inheriting from full distros like Debian or Ubuntu. Targeted bases like Alpine require fewer resources.

Every optimization makes deployments snappier for you and end users!

Distributing Your Masterpiece

Once refined, share your containerized application via:

Container Registries

For team or production use, push to public or private registries like Docker Hub or AWS ECR. These central repositories manage access and provide scalable infrastructure.
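As a sketch of the Docker Hub flow (your-username is a placeholder for your actual Docker Hub account):

```shell
# Log in, retag the image under your namespace, and push it
docker login
docker tag my-node-app your-username/my-node-app:1.0.0
docker push your-username/my-node-app:1.0.0
```

Anyone with access can then pull your-username/my-node-app:1.0.0 on any Docker host.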

Exporting and Loading Files

For simple sharing, save images to tar archives with docker save. Transfer these file snapshots anywhere and load them to precisely recreate your containers.
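With docker save and docker load, that looks like:

```shell
# Write the image to a tar archive
docker save -o my-node-app.tar my-node-app:latest

# On the destination machine, load it back into the local cache
docker load -i my-node-app.tar
```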

No matter the distribution method, custom images guarantee reliable delivery of your app!

Keeping Images Maintained

With containers in production, be diligent about:

Versioning – Tag images with schema like MAJOR.MINOR.PATCH to denote changes. This eases tracking what’s running where.
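For instance, stamping a tested build with an explicit semantic version alongside latest:

```shell
# Give the current image an explicit MAJOR.MINOR.PATCH tag
docker tag my-node-app:latest my-node-app:1.0.0
```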

Rebuilding – If the base OS or app code changes, generate new images to stay up-to-date. Automate rebuilding for efficiency.

Securing – Follow principles like read-only volumes, locked down Linux capabilities and monitoring tools to harden container environments.

By design, containers encourage immutable infrastructure. Replacing rather than modifying gives confidence in a consistent system state.

Conclusion

We've covered a ton of ground on the intricacies of creating custom Docker images! Let's recap the key points:

➡️ Why custom – Granular control, smaller assets, hardened security
➡️ Prepping prerequisites – Docker, app code, basic concepts
➡️ Dockerfiles – Foundation for image contents and behavior
➡️ Building – Transforming Dockerfile instructions into images
➡️ Testing – Validating functionality before distribution
➡️ Optimizing – Streamlining size and enhancing lockdown
➡️ Sharing – Deploying apps from registry or file export
➡️ Maintaining – Keeping images secured and up-to-date

Phew, you made it! Now you can confidently build, optimize and deploy custom Docker images.

As next steps, consider integrating images into CI/CD pipelines for testing and preview environments. And explore orchestrators like Kubernetes to manage containers at scale.

But those are topics for another day! I invite you to grab the code examples from this post and start Dockerizing your own apps with custom images.

Happy containerizing!
