#90DaysOfDevOps Challenge - Day 17 - Docker Project for DevOps Engineers (Dockerfile)

Welcome to Day 17 of the #90DaysOfDevOps challenge. Today, we will dive into Docker and explore a Docker project for DevOps Engineers. In this project, we will learn about Dockerfile, build images, run containers, and push images to repositories.


The most basic building block of a Docker image is the Dockerfile.

A Dockerfile is a simple text file with instructions and arguments. Docker can build images automatically by reading the instructions given in a Dockerfile.

In a Dockerfile, everything on the left is an INSTRUCTION and everything on the right is an ARGUMENT to that instruction. Remember that the file is named "Dockerfile", without any extension.

Dockerfile Instructions

The following list covers the most important Dockerfile instructions and what they do.

FROM: Specifies the base image, which can be pulled from a container registry (Docker Hub, GCR, Quay, ECR, etc.).

RUN: Executes commands during the image build process.

ENV: Sets environment variables inside the image. They are available at build time as well as in the running container. If you want build-time-only variables, use the ARG instruction.

COPY: Copies local files and directories into the image.

EXPOSE: Documents the port on which the Docker container listens.

ADD: A more feature-rich version of COPY. It also supports copying from a URL source and auto-extracting tar files into the image. However, COPY is recommended over ADD; if you need to download remote files, use curl or wget via RUN.

WORKDIR: Sets the current working directory. You can repeat this instruction in a Dockerfile to switch to a different directory. Once WORKDIR is set, instructions like RUN, CMD, ADD, COPY, and ENTRYPOINT are executed in that directory.

VOLUME: Creates or mounts a volume for the Docker container.

USER: Sets the username and UID used when running the container. You can use this instruction to run the container as a non-root user.

LABEL: Specifies metadata for the Docker image.

ARG: Sets build-time variables with a key and value. ARG variables are not available in the running container; if you need a variable to persist at run time, use ENV.

CMD: Provides the default command to execute in the running container. Only one CMD takes effect; if several are present, the last one wins. It can be overridden from the Docker CLI.

ENTRYPOINT: Specifies the command that runs when the container starts. If no ENTRYPOINT is set, the CMD runs on its own (shell-form commands are executed via /bin/sh -c). You can override ENTRYPOINT with the --entrypoint flag on the CLI.
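The interplay between ENTRYPOINT and CMD is a common point of confusion: ENTRYPOINT fixes the executable, while CMD supplies default arguments that can be overridden at run time. A minimal sketch (the image contents are purely illustrative):

```dockerfile
FROM ubuntu:20.04

# ENTRYPOINT fixes the executable that always runs
ENTRYPOINT ["echo"]

# CMD supplies default arguments, overridable from the CLI
CMD ["Hello from the container"]

# docker run <image>          -> prints "Hello from the container"
# docker run <image> Hi there -> prints "Hi there" (CMD overridden)
```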

Let's go through the essential components of a Dockerfile:

Base Image

The base image is the starting point for your Docker image. It provides the underlying operating system and environment for your application. You can choose from a wide range of pre-built base images available on Docker Hub, such as Alpine Linux, Ubuntu, or specific language runtimes like Node.js or Python.

FROM node:14

In the example above, we are using the official Node.js base image with version 14. This image includes Node.js and its runtime environment, which is suitable for running Node.js applications.
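If image size is a concern, the same application can often run on a slimmer variant of the base image, for example the Alpine-based tag (a sketch, assuming the app has no native dependencies that require a full Debian userland):

```dockerfile
# Alpine-based Node.js 14 image: significantly smaller than the default Debian-based one
FROM node:14-alpine
```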

Working Directory

The working directory is the location inside the container where your application code will be copied. It is good practice to set the working directory to a specific path.

WORKDIR /app

In this case, we set the working directory to /app. If the directory does not exist, WORKDIR creates it.

Copy Files

Next, you need to copy your application code and any necessary files to the working directory in the container. This ensures that the container has all the required files to run your application.

COPY package*.json ./

The above line copies the package.json and package-lock.json files from the host directory to the current working directory in the container.

Install Dependencies

After copying the necessary files, you can install your application's dependencies using package managers like npm or pip.

RUN npm install

This command installs the Node.js dependencies based on the package.json file.

Copy Application Code

Once the dependencies are installed, you can copy the rest of your application code to the working directory in the container.

COPY . .

This line copies all the files and folders from the host directory to the current working directory in the container.
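Because COPY . . copies everything in the build context, it also picks up files that shouldn't end up in the image, such as a node_modules folder installed on the host or the .git directory. A .dockerignore file placed next to the Dockerfile excludes them; a minimal sketch with example entries:

```
# .dockerignore (example entries)
node_modules
npm-debug.log
.git
```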

Expose Port

If your application listens on a specific port, you should declare that port in the Dockerfile.

EXPOSE 3000

In this example, we expose port 3000, a common port for Node.js web applications. Note that EXPOSE is documentation only; you still publish the port with the -p flag when running the container.

Define Command

Finally, you need to define the command that will be executed when the container starts.

CMD ["node", "app.js"]

This command specifies that the node command should be executed, and the app.js file should be the entry point for your application.
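CMD can be written in two forms. The exec form (a JSON array, as above) is generally preferred because the process runs directly as PID 1 and receives stop signals such as SIGTERM; the shell form wraps the command in /bin/sh -c. A quick comparison:

```dockerfile
# Exec form (preferred): node runs directly and receives stop signals
CMD ["node", "app.js"]

# Shell form: runs via /bin/sh -c, so the shell sits in front of node
CMD node app.js

# Note: only the last CMD in a Dockerfile takes effect
```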

By combining these instructions in a Dockerfile, you can build an image that encapsulates your application and its dependencies. This image can be run as a container on any system with Docker installed, providing a consistent and isolated environment for your application.

Dockerfile Example

# Use the official Node.js base image with version 14
FROM node:14

# Set the working directory to /app
WORKDIR /app

# Copy package.json and package-lock.json to the working directory
COPY package*.json ./

# Install the Node.js dependencies
RUN npm install

# Copy the application code to the working directory
COPY . .

# Expose port 3000
EXPOSE 3000

# Define the command to start the application
CMD ["node", "app.js"]

In the example above, we start with the official Node.js base image with version 14. We set the working directory to /app and copy the package.json and package-lock.json files to the working directory. Then, we install the Node.js dependencies using npm install. Next, we copy the rest of the application code to the working directory. We expose port 3000 to allow external access to the application running inside the container. Finally, we define the command to start the application, specifying node app.js as the entry point.

This example covers the essential elements of a Dockerfile, including the base image selection, working directory setup, copying files, installing dependencies, exposing ports, and defining the command. You can modify this example according to your specific application requirements, such as using a different base image, exposing different ports, or changing the entry point command.

Task 1 - Create a Dockerfile for a simple web application (e.g. a Node.js or Python app)

# app.py

# Importing the Flask class from the flask module
from flask import Flask

# Creating an instance of the Flask application
app = Flask(__name__)

# Defining a route for the root URL ("/") using the route decorator
@app.route('/')
def hello():
    # Returning the string "Hello, World!" as the response to the request
    return "Hello, World!"

# Checking if the script is being executed directly (not imported as a module)
if __name__ == '__main__':
    # Running the Flask application
    # Starts a development server on host 0.0.0.0 (all interfaces, so the
    # container is reachable from outside) and port 3000
    app.run(host='0.0.0.0', port=3000)

# Dockerfile

# Use the official Python 3 base image
FROM python:3

# Set the working directory inside the container
WORKDIR /app

# Install the Flask library using pip
RUN pip install flask

# Copy the app.py file from the local directory to the working directory inside the container
COPY app.py /app/

# Expose port 3000 to allow external access
EXPOSE 3000

# Set the command to run when the container starts
CMD [ "python", "./app.py" ]

This Dockerfile and Python code demonstrate containerizing a simple Flask web application. The Dockerfile sets up the environment and exposes port 3000. The Python code creates a Flask app that responds with "Hello, World!" for the root URL. By building and running the Docker image, the Flask app becomes accessible on port 3000.

Task 2 - Build the image using the Dockerfile and run the container

First, we build the Docker image from the Dockerfile:

docker build -t my-web-app .

Then, we run a container from the newly created image, publishing port 3000 on the host to port 3000 in the container:

docker run -d -p 3000:3000 my-web-app

We can check that the container is running with the docker ps command.

Task 3 - Verify that the application is working as expected by accessing it in a web browser

We can verify that the application is working by opening http://localhost:3000 (the host port published with -p) in our preferred browser.

Task 4 - Push the image to a public or private repository (e.g. Docker Hub)

Log in to Docker Hub using the docker login command:

docker login

Tag your local image with the repository name:

docker tag my-web-app estebanmorenoit/my-web-app

Push the image to Docker Hub:

docker push estebanmorenoit/my-web-app

After following the above steps, the 'my-web-app' repository will appear in your Docker Hub account.

Congratulations on completing Day 17 of the #90DaysOfDevOps challenge. Today, we explored Docker and its capabilities for containerization and management. We gained practical experience in building, running, and pushing Docker images.

As we progress to Day 18, we have an exciting Docker project lined up specifically for DevOps Engineers. This project will introduce us to Docker Compose, a powerful tool for orchestrating multi-container applications. We will learn how to define and manage complex application stacks using a simple YAML file. Stay tuned!
