Docker: It Just Works!

The term ‘Docker’ might be intimidating with all the jargon and terminology that comes with it, but it is anything but!

Kyo · May 3, 2021

This article was written as a submission for the article assignment in CS UI’s software development course.

There are many ways to deploy and distribute your applications. Among those, Docker is renowned for its consistency: it effectively eliminates the “but it works on my machine!” problem. If it works locally, you can be very confident that it’ll work on the production server. Why is that? And how does Docker even work?

In a Nutshell

Docker is a platform based on containers: processes (or groups of processes) that are isolated from the rest of the OS. Think of VMs, but not quite. Containers run images: packaged versions of an application, complete with its dependencies. Images are stored in a registry that you can pull images from and push images to.

Containers are isolated and independent from each other, but they can communicate with one another. If two containers run the same image, you can be sure that the application, dependencies, and environment of those two containers will be the same (unless you manually tweak them after creating them, of course). This means that a container running your image on your local computer will behave the same way and yield the same results as a container running the same image on the production server.

That last part is important; having your application behave the same way on the production server as it does on your local machine is a godsend. It also ensures that your application works on any computer that supports the container runtime, and since a container doesn’t need its own OS, it is far less bulky than a VM (megabytes instead of gigabytes) and can be spun up faster.
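To make this concrete, here is about the smallest possible round trip through those three concepts, using Docker’s own hello-world image:

# Pull the hello-world image from the default registry (Docker Hub)
docker pull hello-world

# Run a container based on that image; it prints a greeting and exits
docker run hello-world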

The Infrastructure

We talked a bit about containers and images in the previous section, but they aren’t the only essential components that make Docker work. Visualized, the entire Docker architecture looks like this:

[Figure: the Docker architecture. Source: https://docs.docker.com/get-started/overview/]

Docker Client

A terminal application that users use to input commands. Docker commands use the Docker API to communicate with the Docker daemon; hence, if the daemon isn’t running, Docker commands won’t work.
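You can see this client/daemon split for yourself; the following command prints separate version blocks for the client and the server (the daemon), and the server half will error out if the daemon isn’t running:

docker version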

Docker Daemon

A daemon that listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes.

Docker Images

A read-only template with instructions for creating a Docker container. Using an existing image is simple: you only need to pull it from a registry and use it. Building a new image, however, requires a Dockerfile, which contains the instructions for creating the image. You can also base an image on another image and customize it to your liking. Once built, an image is static.
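For example, pulling an existing image is a single command, and a Dockerfile bases a new image on another one with a FROM instruction (we’ll see this in action later):

# Download the official python:3.8-alpine image from Docker Hub
docker pull python:3.8-alpine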

Docker Containers

A runnable instance of an image. Containers can be created, deleted, moved, started, and stopped through the Docker daemon.
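The basic lifecycle maps directly onto subcommands. For instance, using the hello-world image from earlier:

# Create a container from an image without starting it
docker container create --name demo hello-world

# Start, stop, and finally remove it
docker container start demo
docker container stop demo
docker container rm demo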

Container Registry

A registry stores images that users can pull from and push to. It can be used to ship your images to your production server and for CI/CD (more on that later).
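Pushing an image is just a matter of tagging it with the registry’s address and pushing it; the registry host and repository below are made-up examples:

# Tag a local image with the registry's address
docker tag web:latest registry.example.com/myteam/web:latest

# Upload it so other machines (e.g. the production server) can pull it
docker push registry.example.com/myteam/web:latest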

Step-by-step

To put all that we’ve learned into action, let’s build an image from scratch and run a container based on that image.

Making an Image

For this example, I already have a Django web application ready that I want to package as an image. To do so, I need to create a Dockerfile that tells Docker how to build the image.

The first line of the Dockerfile specifies which image I want to base mine on. Since I’m using Django (a Python web framework), I chose python:3.8-alpine as my base image. Beyond that, the comments in the Dockerfile describe what each section does.
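The original file isn’t reproduced here, but a minimal Dockerfile for a setup like this could look something like the sketch below; the project name “myproject” and the use of gunicorn (which would have to be listed in requirements.txt) are assumptions:

# Base the image on the official python:3.8-alpine image
FROM python:3.8-alpine

# Set the working directory inside the image
WORKDIR /app

# Install dependencies first so this layer can be cached between builds
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy in the rest of the application code
COPY . .

# Start the app; note that $PORT must be supplied at runtime
CMD gunicorn myproject.wsgi --bind 0.0.0.0:$PORT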

Now that we have our Dockerfile ready, I can build an image based on it and give it a name (or a tag, if you want to be technical) by using the command:

docker build -t web:latest .

The "-t" flag specifies the tag I want the image to have, and the "." at the end of the command sets the build context to the current folder, which is where Docker will look for the Dockerfile. Docker then runs through the Dockerfile’s instructions step by step and produces the image.

To check which images and containers are available on the machine, we can use the following commands respectively:

# List all available images
docker image ls

# List all running containers
docker container ls

# List all containers (including stopped ones)
docker container ls -a

Running a container

Now that we have our image, we can run a container based on it. If you take a look at the Dockerfile, notice that the very last line needs an environment variable called "PORT" to work. In production, Heroku allocates that variable for us, but locally we can set it to whatever we want using the "-e" flag:

docker container run --name web -d -e "PORT=8765" -p 8007:8765 web:latest

The "--name" flag names the container, and "-d" stands for "detached", so the container runs in the background. The "-e" flag sets an environment variable, in this case the "PORT" variable that was mentioned, and "-p 8007:8765" maps port 8007 on the host to port 8765 inside the container. After running the container, we can use "docker container ls" to check that the newly spun-up container is running.

And voilà! Your container is now up and running: localhost:8007 now serves the web application running in the "web" container. When we want to stop the container, we can simply use the docker stop command:

docker stop web

As we can see, the container we just stopped no longer shows up in the output of "docker container ls", meaning that it is no longer running.

Other commands

Some other handy commands are:

docker logs <name-of-container>

Outputs the log of a container. Useful to see what went wrong if your container fails to start, or just to monitor it in general. For example, if you run your container and it doesn’t show up in "docker container ls", you can use this command to diagnose why. For general monitoring, I recommend adding the "--follow" flag, which turns the output into a continuous live feed of the log.
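For example, to watch the "web" container from the earlier example as a live feed:

docker logs --follow web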

docker exec -it <name-of-container> sh

Puts you inside the container, much like an SSH session into a cloud instance. Only works on running containers (those that show up in "docker container ls"). Useful for checking a container’s environment; for example, you can verify that the container’s directory tree is correct. Note that the "sh" at the end of the command can be replaced with any other command: "docker exec -it web ls -a" will output the result of running "ls -a" inside the "web" container.

docker image prune

Removes unused images from the machine. Useful if you have a lot of images that you don’t use and that take up space.

Docker implementation in our project

For our project, we use Docker to deploy our application to Heroku. Conveniently, Heroku has an API that pulls our latest image and redeploys our application from it, which makes CI/CD much easier (recall that I mentioned CI/CD earlier in the article).

For our Dockerfile, we use the exact same file from the example in the previous section. All that’s left is to edit our .gitlab-ci.yml file to accommodate Docker.


In our "staging" stage (which is exactly like deployment, just to a different URL for beta testing), we first log in to the Heroku container registry with our credentials, which are already set as CI environment variables. After that, we pull the latest version of our image (the one built prior to this). Then we build the image with our Dockerfile (using the pulled image as a cache to speed things up) and give it exactly the same name, so the new build overrides the version we just pulled.

After that, we push our newly built image to the same container registry we pulled from, and run a script that calls the Heroku API so it knows we’ve built a newer version of the image.
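The original file isn’t reproduced here, but a rough sketch of what that staging job could look like is below; the stage name, the release.sh helper, and the $HEROKU_APP_NAME and $HEROKU_API_KEY variables are assumptions standing in for our stored credentials:

staging:
  stage: staging
  image: docker:latest
  services:
    - docker:dind
  script:
    # Log in to the Heroku container registry (Heroku uses "_" as the username)
    - docker login --username=_ --password=$HEROKU_API_KEY registry.heroku.com
    # Pull the previous image so its layers can serve as a build cache
    - docker pull registry.heroku.com/$HEROKU_APP_NAME/web:latest || true
    # Rebuild under the same name, reusing cached layers where possible
    - docker build --cache-from registry.heroku.com/$HEROKU_APP_NAME/web:latest -t registry.heroku.com/$HEROKU_APP_NAME/web:latest .
    # Push the new build, overriding the version we pulled
    - docker push registry.heroku.com/$HEROKU_APP_NAME/web:latest
    # Tell Heroku that a newer image exists (see the script below)
    - ./release.sh
  only:
    - staging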

The Heroku API script
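Our script isn’t reproduced here either, but Heroku’s documented container-release call looks roughly like this; again, $HEROKU_APP_NAME and $HEROKU_API_KEY are placeholders:

# Look up the ID of the image we just pushed
IMAGE_ID=$(docker inspect registry.heroku.com/$HEROKU_APP_NAME/web --format={{.Id}})

# Ask Heroku to release that image as the new web process
curl -X PATCH https://api.heroku.com/apps/$HEROKU_APP_NAME/formation \
  -d '{"updates":[{"type":"web","docker_image":"'"$IMAGE_ID"'"}]}' \
  -H "Content-Type: application/json" \
  -H "Accept: application/vnd.heroku+json; version=3.docker-releases" \
  -H "Authorization: Bearer $HEROKU_API_KEY"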

Then, Heroku will update our application with the version we just pushed.

For maintenance, I wrote a script on our GitLab runners (which are AWS EC2 instances) that runs "docker image prune" every hour to delete images that aren’t being used, since disk space there is limited.
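A minimal version of that could be a single crontab entry on each runner; the schedule and flags below are just one way to do it:

# At minute 0 of every hour, remove dangling images without prompting
0 * * * * docker image prune -f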


Afterword

All in all, Docker is an amazing tool that has revolutionized the industry. With our implementation, we’ve barely scratched the surface of what it is capable of, since we’re not using a multi-container setup with Docker Compose (Heroku handles our database and web server) or more complex orchestration like Docker Swarm or Kubernetes. With proper configuration, Docker is easy, consistent, and compact, and it remains a viable option for even the most complex of projects.

Thanks for reading, I hope you’ve found this useful! Have a good day.

