It is easy to create a custom Docker image from existing Docker images using a Dockerfile. Usually people use a minimal base image such as alpine or ubuntu/debian for that purpose. For example, you may want to create a custom Docker image for your favorite web app written in NodeJS that runs on port 8080.
By default, you won't be able to access the web app on port 8080 from your host machine unless you expose the port in the Dockerfile and publish it when running the container. In this comprehensive guide, I will demonstrate how to expose ports in a Dockerfile with a real-world NodeJS web app example.
Why Expose Ports in Dockerfile
When a container is created from a Docker image, the container runs in an isolated environment. This means any ports opened inside the container are not automatically available on the host machine.
The EXPOSE instruction in a Dockerfile informs Docker that the container listens on the specified ports at runtime. The EXPOSE instruction does not actually publish the ports – it functions as a type of documentation between the person who builds the image and the person who runs the container.
To actually publish the ports when running the container, the -p flag must be used on docker run. The -p flag maps ports exposed in the Dockerfile to ports on the host machine. This publishing of ports allows traffic to the container's exposed ports from outside the container's isolated environment.
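The way Docker interprets a -p value can be sketched as a small parser. This is purely illustrative (parsePortMapping is a hypothetical helper, not part of any Docker tooling), assuming the common spec forms hostPort:containerPort, ip:hostPort:containerPort, a bare container port, and an optional /udp suffix:

```javascript
// Sketch of how a "-p" specification is interpreted; not real Docker code.
function parsePortMapping(spec) {
  // Optional protocol suffix, e.g. "53:53/udp"; tcp is the default.
  const [ports, proto = 'tcp'] = spec.split('/');
  const parts = ports.split(':');

  // "-p 8080" publishes container port 8080 to a random host port.
  if (parts.length === 1) {
    return { hostIp: '0.0.0.0', hostPort: null, containerPort: Number(parts[0]), proto };
  }

  // "-p 9090:8080" maps host port 9090 to container port 8080;
  // "-p 127.0.0.1:8080:8080" additionally binds a specific host IP.
  const [hostPort, containerPort] = parts.slice(-2).map(Number);
  const hostIp = parts.length === 3 ? parts[0] : '0.0.0.0';
  return { hostIp, hostPort, containerPort, proto };
}

console.log(parsePortMapping('8080:8080'));
// { hostIp: '0.0.0.0', hostPort: 8080, containerPort: 8080, proto: 'tcp' }
```

The default host IP of 0.0.0.0 means the mapping listens on all host interfaces unless you bind a specific address.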
Basic Example
Let's look at a simple example to demonstrate how to expose and publish ports with Dockerfile and docker run.
First, create a directory for the project:
$ mkdir dockerfile-expose-demo
$ cd dockerfile-expose-demo
Next, create a file called app.js with a basic NodeJS web app:
const http = require('http');

const server = http.createServer((req, res) => {
  res.write('Hello from container!');
  res.end();
});

server.listen(8080);
This app simply listens on port 8080 and returns "Hello from container!".
Now create a Dockerfile to build a Docker image for running this web app:
FROM node:18-alpine
WORKDIR /app
COPY app.js .
EXPOSE 8080
CMD ["node", "app.js"]
This Dockerfile starts from the node:18-alpine base image, copies the app.js file into the container filesystem at /app, exposes port 8080, and sets the command to start the web app.
Build the image:
$ docker build -t my-web-app .
Finally, run a container from the image, publishing port 8080:
$ docker run -p 8080:8080 my-web-app
The -p 8080:8080 publishes the exposed port 8080 inside the container to port 8080 on the host machine.
Now you can make requests to the web app on localhost port 8080 from your host machine:
$ curl localhost:8080
Hello from container!
So in this example, EXPOSE 8080 documented that the container listens on port 8080, while -p 8080:8080 did the actual publishing that directs external traffic to that internal port. Note that -p works even for ports that are not EXPOSEd; the EXPOSE instruction is metadata, not a gate.
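You can also double-check which ports an image declares without reading its Dockerfile, since Docker stores EXPOSE entries in the image metadata under Config.ExposedPorts:

$ docker inspect --format '{{json .Config.ExposedPorts}}' my-web-app
{"8080/tcp":{}}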
Exposing Multiple Ports
You can expose multiple ports with repeated EXPOSE instructions (or list several ports on one line, e.g. EXPOSE 80 443). For example, if your containerized app listens on both port 80 and port 443:
EXPOSE 80
EXPOSE 443
When running the container, you would publish both ports to the host with flags like -p 80:80 -p 443:443.
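If you simply want every EXPOSEd port published to a random high port on the host, the -P (uppercase) flag does that automatically, and docker port shows the chosen mappings:

$ docker run -d -P my-web-app
$ docker port <container-id>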
TCP vs UDP
The examples above demonstrate exposing TCP ports. To specify UDP ports instead, include the protocol:
EXPOSE 53/udp
If you do not specify tcp or udp, tcp is assumed.
Container Networking Modes
There are different network modes available when running containers, which affect port publishing.
The examples above demonstrating basic port exposure use the default bridge mode. In this mode, containers connect to a private virtual bridge network inside the host, and port publishing maps ports from this private network onto the host's interfaces.
Other networking modes, such as host or none, have different implications for exposing and publishing ports, so the right choice depends on how much network isolation you need.
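For example, with host networking (on Linux) the container shares the host's network stack directly, so -p mappings are ignored and the app's port is reachable on the host as-is:

$ docker run --network host my-web-app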
Real-World Example
Let's look at a complete real-world example where we'll build and run a custom Docker image for a multi-service NodeJS application.
The application will provide:
- Web frontend on port 3000
- JSON API on port 3001
- WebSocket server on port 3002
First, create a directory:
$ mkdir complex-node-example
$ cd complex-node-example
Next, create the package.json file to describe the Node application and its dependencies:
{
"name": "complex-node-example",
"version": "1.0.0",
"description": "A complex Node app with multiple services",
"main": "index.js",
"scripts": {
"start": "node index.js"
},
"dependencies": {
"express": "^4.16.1",
"socket.io": "^4.5.1"
}
}
Then, create an index.js file that will start up all the application services:
const express = require('express');
const { Server } = require('socket.io');

const app = express();
const api = express();
// Standalone socket.io server on port 3002 (socket.io v4 API)
const websocket = new Server(3002);

app.get('/', (req, res) => {
  res.send('Web frontend');
});

api.get('/data', (req, res) => {
  res.json({ hello: 'world' });
});

websocket.on('connection', (socket) => {
  socket.emit('message', 'Hello WebSocket client!');
});

app.listen(3000);
api.listen(3001);
This app starts the three services:
- Web frontend on port 3000
- JSON API for /data path on port 3001
- WebSocket server on port 3002
Next we need a Dockerfile to containerize this application:
FROM node:18-alpine
WORKDIR /usr/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
EXPOSE 3001
EXPOSE 3002
CMD [ "node", "index.js" ]
This Dockerfile builds an image with all the application files included, exposes the three ports, and sets the start command. Copying the package files and running npm install before copying the rest of the source is a layer-caching best practice: the dependency layer is rebuilt only when package*.json changes, not on every code edit.
Now build the Docker image:
$ docker build -t my-complex-app .
Finally, run a container from the image, publishing all the ports:
$ docker run -p 3000:3000 -p 3001:3001 -p 3002:3002 my-complex-app
We should now be able to access the web frontend, JSON API, and WebSocket server from the host machine on ports 3000, 3001, and 3002 respectively.
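With the container running, the two HTTP services can be spot-checked from the host (the WebSocket server on port 3002 needs a socket.io client rather than curl):

$ curl localhost:3000
Web frontend
$ curl localhost:3001/data
{"hello":"world"}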
So in this real-world example, EXPOSE in the Dockerfile documented each port up front, and the matching -p flags at run time published them, making all three services reachable from outside the isolated container environment.
Summary
Key points about exposing ports in Dockerfiles:
- Use EXPOSE instructions to document which ports your containerized application uses.
- The EXPOSE instruction does not actually publish ports for host access by itself.
- Use the -p flag with docker run to actually publish the ports for external access.
- Multiple TCP and UDP ports can be exposed and published.
- Different container networking modes handle port exposure differently.
I hope this guide gave you a comprehensive understanding of how port exposure works in Dockerfiles and how to enable external access for your containerized apps. Let me know in the comments if you have any other Docker networking questions!


