Docker: It Just Works!

The term ‘Docker’ might be intimidating, with all the jargon and terminology that comes with it, but it is anything but!

This article was written as a submission for the article assignment in CS UI’s software development course.

In a Nutshell

Docker is a platform built around containers: processes (or groups of processes) that are isolated from the rest of the OS. Think of VMs, but lighter-weight. Containers run images, which are packaged versions of an application complete with its dependencies. Images are stored in a registry that you can pull images from and push images to.

The Infrastructure

We talked a bit about containers and images in the previous section, but containers and images alone aren’t all that makes Docker work. The full Docker architecture consists of the following components:

Docker Client

A terminal application that users use to input commands. Docker commands use the Docker API to communicate with the Docker daemon; hence, if the daemon isn’t running, Docker commands won’t work.

Docker Daemon

A background process that listens for Docker API requests and manages Docker objects such as images and containers.

Docker Images

A read-only file with instructions for creating a Docker container. Using an existing image is simple: just pull it from a registry and run it. Building a new image, however, requires a Dockerfile, which contains the instructions for assembling the image. You can also base an image on another image and customize it to your liking. Once built, an image is immutable.

Docker Containers

A runnable instance of an image. Containers can be created, deleted, moved, started, and stopped through the Docker daemon.

Container Registry

A registry stores images that users can pull from and push to. It can be used to ship your images to your production server and to drive CI/CD (more on that later).
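As a quick sketch of how that looks from the command line (the image names and the username myusername here are placeholders, not real accounts):

```shell
# Pull an official image from Docker Hub, the default registry
docker pull python:3.9-slim

# Tag a local image with a registry path, then push it
docker tag web:latest myusername/web:latest
docker push myusername/web:latest
```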


To put everything we’ve learned into action, let’s build an image from scratch and run a container based on that image.

Making an Image

For this example, I already have a Django web application that I want to package as an image. To do so, I need to create a Dockerfile that tells Docker how to build the image.
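Here is a minimal sketch of what such a Dockerfile might look like. This is not our actual file: the Python version, the project name myproject, and the gunicorn entrypoint are assumptions for illustration.

```dockerfile
# Hypothetical Dockerfile for a Django project
FROM python:3.9-slim

WORKDIR /app

# Copy and install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the project
COPY . .

# The last line reads a PORT variable supplied at runtime
CMD gunicorn myproject.wsgi --bind 0.0.0.0:$PORT
```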

# Build an image from the Dockerfile in the current directory, tagged web:latest
docker build -t web:latest .
# List all available images
docker image ls
# List all running containers
docker container ls
# List all containers (including stopped ones)
docker container ls -a

Running a container

Now that we have our image, we can run a container based on it. If you look at the Dockerfile, notice that its very last line needs an environment variable called PORT. In production, since we’re deploying to Heroku, Heroku allocates that variable for us; locally, we can set it to whatever we want using the -e flag on our command.

# Run the container detached, setting PORT and mapping host port 8007 to it
docker container run --name web -d -e "PORT=8765" -p 8007:8765 web:latest
# Stop the running container
docker stop web

Other commands

Some other handy commands are:

# View a container's logs
docker logs <name-of-container>
# Open an interactive shell inside a running container
docker exec -it <name-of-container> sh
# Remove unused data (stopped containers, dangling images, build cache)
docker system prune

Docker implementation in our project

For our project, we use Docker to deploy our application to Heroku. Conveniently, Heroku has an API that pulls our latest image and releases our application based on it, which makes CI/CD much easier (the use case promised back in the Container Registry section).
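As an illustration (this is not our actual pipeline; the job name, registry path, and variable names are placeholders), a GitLab CI job that builds our image and pushes it to Heroku’s container registry might be sketched as:

```yaml
# Hypothetical deploy job for .gitlab-ci.yml
deploy:
  image: docker:latest
  services:
    - docker:dind
  script:
    # Heroku's registry authenticates with an API key as the password
    - docker login --username=_ --password=$HEROKU_API_KEY registry.heroku.com
    - docker build -t registry.heroku.com/$HEROKU_APP_NAME/web .
    - docker push registry.heroku.com/$HEROKU_APP_NAME/web
  only:
    - main
```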

[Screenshots: our gitlab-ci.yml, the Heroku API script, and my pruning script in action, running every hour.]


All in all, Docker is an amazing tool that revolutionized the industry. Our implementation barely scratches the surface of what it is capable of: we’re not using a multi-container setup with Docker Compose (Heroku handles our database and web server), nor more complex orchestration like Docker Swarm or Kubernetes. With proper configuration, Docker offers an easy, consistent, and compact way to build and ship even the most complex of projects.


If you're reading an article from me, it's probably part of my college course.