Dockerize Anything

Dockerizing has become the de facto standard for shipping code. Learn how to containerize your application to prevent any future surprises while deploying or distributing for development.


Docker helps you package and ship your application in a way that anyone with Docker installed on their system can run it.

FAQ

  1. How is Docker different from a Virtual Machine?

The major difference is how each solution isolates your application. A virtual machine simulates an entire guest operating system, kernel included, whereas Docker virtualizes only the application layer and relies on the host to talk to the kernel.

  2. What is the difference between a Dockerfile and docker-compose?

A Dockerfile is a set of instructions for creating an image of the application you're building. docker-compose is used to run containers as services in their own isolated environment.

To further clarify, think of your production and development environments. In production, your database is most likely a managed service from one of the cloud providers, so your docker image only needs your actual application, plus perhaps Nginx to act as a web server. In that case a Dockerfile is most useful, as it generates an image containing only the components essential for production.

In your development environment, however, you'd be better off hosting a database instance locally to minimize the latency and obviously the cost. In this case, you should rely on docker-compose to spin up both your database and application containers.

Common Terminologies

  • What is an Image?
  • An image is a snapshot of a packaged application. Think of it as a .exe.
  • What is a Container?
  • A container is a running instance of an image. Think of this as running the aforementioned .exe.
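
To see the distinction in practice, you can list both; once Docker is installed (see the next section), try:

sudo docker images   # the snapshots available on your machine
sudo docker ps       # the images currently executing as containers
images vs containers on the command line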

Installing Docker and Docker Compose

To install Docker on Linux, a simple sudo apt install docker.io should work. For Windows you just have to download the binary and double-click on it. Hope you can manage that.
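
If you'd rather not prefix every docker command with sudo, add your user to the docker group; this matters again later when we run the containers. A quick sketch for a standard Linux install:

sudo usermod -aG docker $USER
# log out and back in for the group change to take effect
adding yourself to the docker group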

Also, install docker-compose while you're at it. For that, navigate to their official repository's releases page, select the latest version, and choose the correct binary for your OS. If you're on Linux, it should be docker-compose-linux-x86_64. For Windows, it's the one with the .exe extension.
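
For reference, here's a sketch of the Linux install; the version tag below is just an example, so substitute the latest release from that page:

sudo curl -L "https://github.com/docker/compose/releases/download/v2.24.6/docker-compose-linux-x86_64" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version
installing docker-compose on Linux (example version)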

Overview of the application to be packaged

I'll be using a Flask application written in Python to demonstrate how to set up Docker. Please clone it from my GitHub repository if you'd like to tinker with it yourself.

Note - Don't be alarmed if you don't know Python or Flask; they're just the tools used here to explain how Docker works.

Take this for example: the code below is app.py, which serves as the entry point for our application and creates the API endpoints.

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return {"hello": "world"}
entry point for the web application

This may look familiar to my friends using Express.js. The index function returns the JSON {"hello": "world"} when you hit the root of the application. Keep in mind the actual app.py is a bit more complex: it connects to a database and returns entries from it as JSON. All of this will be explained later.
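
Once the containers are up (we'll get there shortly), you can verify the endpoint from your terminal:

curl http://localhost:5000/
# expect the JSON body {"hello": "world"}
hitting the root endpoint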

Start Packaging

The 5 lines of code above are all that's required to start packaging the application. You'll need to create a few things before we start. Create a docker-compose.yml file at the root of your project, then create a directory called docker and place the following files in it.

  • Dockerfile.local
  • Instructions to create an image.
  • nginx.default
  • Configuration for the nginx server to act as a reverse proxy.
  • start_server.sh
  • The script that will actually start the application server, create database migrations, and do whatever else is required for your application to run smoothly.

After doing all this, here's how your project directory will look.

.
├── app.py
├── docker
│   ├── Dockerfile.local
│   ├── nginx.default
│   └── start_server.sh
├── docker-compose.yml
└── requirements.txt
project directory structure

docker-compose.yml

version: "3"
services:
  database:
    image: postgres:14
    container_name: mtmm-postgres
    volumes:
      - ./docker/data/postgres:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=mtmm
      - ALLOW_IP_RANGE=0.0.0.0/0

  web_app:
    build:
      context: ./
      dockerfile: docker/Dockerfile.local
    working_dir: /app
    volumes:
      - ./:/app
    ports:
      - "5000:5000"
    environment:
      - "DB_PORT=5432"
      - "DB_HOST=database"
docker-compose.yml configuration

Here's a step-by-step explanation of everything that's going on here.

Database Section

  1. We use the postgres version 14 image as our database instance.
  2. Name the container "mtmm-postgres"
  3. The volumes part is important: here we map the data/postgres directory inside our docker folder into the container, so the data saved in the DB doesn't get reset every time you start and stop the container.
  4. environment is where we specify the username, password, and database name to be set up when the database image boots for the first time.

Web App Section

  1. version specifies the Compose file format we want to use.
  2. services is the umbrella that holds all the containers together. Here you'll define anything your application might need. Currently that's just the instructions to build your web app and the database service defined above, but it could include a Redis server, an Elasticsearch instance, or whatever else you can think of.
  3. web_app is the service for our application; you can name it whatever you like.
  4. container_name is pretty self-explanatory: it's the name of the container once it runs. We set one for the database service; since web_app doesn't set one, docker-compose generates a name for it.
  5. context - the base directory and the point of reference from which Compose starts looking whenever a specific file is mentioned. This is the root of your project.
  6. dockerfile - specifies where the Dockerfile is located that Compose will use to create the image, and later run it.
  7. working_dir - sets the directory your application gets copied to. More on this later.
  8. volumes - mounts host directories into the container. Here the project root is bind-mounted to /app, so changes on your host are reflected inside the container; for the database, the volume acts as its persistent hard drive.
  9. ports - connects a host port to a container port, in the form HOST_PORT:CONTAINER_PORT.

Keep in mind that under the services section, we set database as the key for our database service, and that key also serves as its hostname. For example, when connecting to the DB from the application, you specify database as your host, and since your application runs as a container in the same Compose network, the name resolves. However, if you try to connect from your host machine to a host named database, it'll fail because your host doesn't have Docker's network context.
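
To make the hostname resolution concrete, here's a hypothetical sketch of how the application might use those DB_HOST and DB_PORT variables to connect; the actual app.py in the repository may differ, and psycopg2 is an assumed dependency:

import os

import psycopg2  # assumed to be in requirements.txt; any Postgres driver works the same way

# Inside the Compose network, "database" (the service key) resolves to the
# Postgres container. From your host machine, this name won't resolve.
conn = psycopg2.connect(
    host=os.environ.get("DB_HOST", "database"),
    port=os.environ.get("DB_PORT", "5432"),
    user="postgres",
    password="postgres",
    dbname="mtmm",
)
conn.close()
connecting to the database service by name (illustrative)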

Dockerfile.local

FROM python:3.10-buster

ARG DEVELOPMENT_ENVIRONMENT

ENV DEVELOPMENT_ENVIRONMENT=${DEVELOPMENT_ENVIRONMENT}

# Installing nginx and vim to the docker-image
RUN apt-get update && apt-get install nginx vim -y --no-install-recommends

# Copying the custom nginx config we have into the newly installed nginx.
COPY docker/nginx.default /etc/nginx/sites-available/default

# Creating soft links for access and error logs for nginx
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
    && ln -sf /dev/stderr /var/log/nginx/error.log

# To store all the code from our project, we create the `app` directory at the root of the image.
RUN mkdir -p /app

# This is python specific, to store all the cache for installed packages we create pip_cache directory inside app.
RUN mkdir -p /app/pip_cache


# Copying requirements.txt which carries a list of dependencies for the project. It's essentially package.json.
COPY requirements.txt /app/

# Copying all the code from the project directory into /app created in the docker image
COPY . /app/

# setting work directory as /app and any command from here on out issued will be run from this directory.
WORKDIR /app

# installing packages.
RUN pip install --default-timeout=100 -r requirements.txt --cache-dir /app/pip_cache

# changing ownership of the code to the www-data user.
RUN chown -R www-data:www-data /app

# opening up the port 5000 as this is where our nginx server will run.
EXPOSE 5000

# CMD is the command that runs when the container starts (not at build time). We have a script called start_server.sh which will perform any tasks required to get the application running, and then start it.
CMD ["/app/docker/start_server.sh"]
Dockerfile configuration
  • You use the python:3.10 image built on Debian as the base.
  • You can pass a number of arguments when building your image; to declare one, use the ARG keyword followed by the argument's name.
  • ENV sets an environment variable in the docker image you're building. Here we store the build argument DEVELOPMENT_ENVIRONMENT as an environment variable of the same name.
  • RUN is an image-building command; you can use it to install packages, create directories and symlinks, and much more. Each RUN alters the state of the image.
  • CMD is the command that is used to start the container. Have it at the end to run whatever script is required to start your project.
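
Since DEVELOPMENT_ENVIRONMENT ends up in the container's environment, your application can branch on it at runtime. Here's a hypothetical sketch of what that might look like; the actual app.py may handle configuration differently:

import os

# DEVELOPMENT_ENVIRONMENT was baked into the image via ARG/ENV in Dockerfile.local
ENVIRONMENT = os.environ.get("DEVELOPMENT_ENVIRONMENT", "LOCAL")

# enable debug features only outside of production
DEBUG = ENVIRONMENT != "PRODUCTION"
reading the build-time argument at runtime (illustrative)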

nginx.default

server {
    listen 5000;
    server_name methodtomymadness.dev;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location / {
        proxy_pass http://127.0.0.1:5010;
    }
}
nginx config that will serve as the proxy between the app server and the client

Here we have nginx set up to listen on port 5000; as you'll recall from above, it's the same port we expose in our docker image.

  • server_name - can be anything here; it doesn't really matter, since nginx only proxies traffic inside the container.
  • access_log and error_log - where the logs for incoming requests and errors will be stored.
  • location is a directive that specifies which configuration block handles a particular URL or URI. Here, every incoming request is passed to the local server running on port 5010, which is where our actual project runs.
  • proxy_pass is just passing incoming requests to another server.
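
One practical tip: if you ever tweak this config, nginx can validate it for you before you restart anything. Assuming you have a shell inside the web app container (covered in the debugging section below):

nginx -t   # checks the configuration files for syntax errors
validating the nginx configuration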

start_server.sh

#!/bin/bash

(gunicorn -w 4 'app:app' -b 0.0.0.0:5010) &
nginx -g "daemon off;";
script that will start the application inside the docker container

It's a simple bash script that runs a gunicorn server; gunicorn is a WSGI (Web Server Gateway Interface) server used in production to handle incoming traffic more efficiently than Flask's built-in server. The nginx -g "daemon off;" part tells nginx to stay in the foreground. This is the usual approach in docker images, since one container typically runs one service; on a bare-metal server running several services you'd send nginx to the background instead. For more details about this, please have a look at this link.

We're finally ready to run the server.

Running the Docker container

To start the Docker containers, enter the following command:

sudo docker-compose up
starting docker containers via docker-compose

That's all that's needed. If you added yourself to the docker group earlier, you won't need sudo.
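
A few variations you'll probably reach for day to day, all standard docker-compose subcommands:

sudo docker-compose up -d               # start in the background (detached)
sudo docker-compose logs -f web_app     # follow the logs of one service
sudo docker-compose down                # stop and remove the containers
common docker-compose variations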

Building Docker Image Manually

You'll be building an image when deploying to production. Ideally you should have a CI/CD pipeline set up for this, but that's beyond the scope of this blog post.

Here's how you can build an image for yourself.

sudo docker build --build-arg DEVELOPMENT_ENVIRONMENT=PRODUCTION \
  -t <tag the image by a specific name> \
  -f docker/Dockerfile.local .
building docker image

This will generate an image for your project.

Running an image as a container

Pretty simple to run any existing image.

sudo docker run -p HOST_PORT:CONTAINER_PORT image_name
running a docker image as container
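
For the image built above, mapping the port 5000 we exposed in the Dockerfile, that would look something like this (my_app stands in for whatever tag you chose):

sudo docker run -p 5000:5000 my_app
running the freshly built image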


Debugging Docker

There's a good chance you'll need a shell inside a running docker container for debugging purposes, so here's how to do that.

  1. List all the docker containers currently running.
sudo docker ps

# get names of docker containers
docker ps --format '{{.Names}}'

containerizing_anything_web_app_1
mtmm-postgres
getting a list of docker containers
  2. Getting a shell in one of the containers.
sudo docker exec -it <name_of_container> bash

root@fd8de0cbe540:/# 

# You can run commands directly instead of getting a shell as well

sudo docker exec -it <name_of_container> ls

bin   docker-entrypoint-initdb.d  lib    mnt   root  srv  usr
boot  etc                         lib64  opt   run   sys  var
dev   home                        media  proc  sbin  tmp
getting a shell on a docker container
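
One more tool worth knowing: you can tail a container's logs without opening a shell at all, which is often the fastest way to spot a crashing service.

sudo docker logs -f <name_of_container>
following a container's logs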


And that's all, folks! I know the first time you look at this setup it can seem a little daunting, but keep in mind it's laying a solid foundation for your project and preparing it for the future. It will help when you have to distribute the software to a bunch of developers or deploy it rapidly to production. Happy dockerizing!