Deployment: A Simple Example
Deploying applications effectively might not sound as exciting as building them, but it is just as important.
This article was written as a submission for the article task of CS UI’s software development course.
In the short time I’ve been a computer science student, I’ve noticed that most of my colleagues are infatuated with the prospect of making applications but neglect the importance of deploying them. They don’t find deployment as interesting as building the application itself, so they never dig that deep into it, seeing no reason to. I admit, I wasn’t really interested in deployment either, and I probably still wouldn’t be if I hadn’t landed an internship that required me to study it.
But as it turns out, after digging just a little deeper, I personally find deployment techniques and technologies just as interesting as developing apps, if not more so. In this article, I’m going to show you how our team deploys our apps, the result of our collective exploration into the world of app deployment.
Containers!
Believe it or not, I was only introduced to Docker very recently, in my sophomore year of college, despite having been building web applications since my freshman year. This article isn’t dedicated to Docker and its intricacies, so I’ll keep it short: in a nutshell, Docker gives our team two very important benefits:
- It effectively eliminates the “But it works on my computer!” problem; if it works locally, then you can bet it’ll work on the production server.
- It wraps our application, along with its dependencies, in a convenient package that’s easy to pass around (this one is particularly important for CI/CD).
I’m not saying these are the only benefits Docker gives us, but they are the two most relevant to our deployment method.
In our development cycle, each of us implements our respective features. Once implementation is done, we run local tests to catch any errors. Finally, we build a Docker image from the Dockerfile we wrote, and start a container that listens on a port on our local machine.
If the container runs without any hiccups, our work is greenlit and we can push the new feature to our respective branches. However, we do not push the image we just built to a container registry; the local image exists only to verify that the app will run the way it would in production.
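As a sketch, the local check described above amounts to a couple of Docker commands. The image name myapp and port 8000 here are hypothetical placeholders, not our team’s actual values:

```shell
# Build an image from the Dockerfile in the current directory.
# "myapp" and port 8000 are placeholder values for illustration.
docker build -t myapp:local .

# Run a disposable container, mapping container port 8000 to the host,
# so the app can be smoke-tested at http://localhost:8000.
docker run --rm -p 8000:8000 myapp:local
```

If the container serves requests correctly, the feature branch is considered safe to push.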
Staging and Production
Our staging and production environments are identical to each other, and they are the same environments we use in development, with two major differences:
1. Different databases. Since we wouldn’t want our development database to affect the production database in any way, we separated the two and designed the application to pick its database based on the circumstances.
This way, when no production database is found, our app falls back to the default Django database, a local SQLite file. If it finds a production database (the Postgres instance we use on Heroku), it uses that instead.
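The selection rule can be sketched as a simple environment check. On Heroku, the Postgres add-on exposes its connection string through the DATABASE_URL environment variable; when that variable is absent, the app falls back to SQLite (db.sqlite3 is Django’s default filename — our actual settings code is not shown here):

```shell
# Hypothetical sketch of the database-selection rule our Django settings follow.
# Heroku's Postgres add-on sets DATABASE_URL; local machines usually don't.
if [ -n "${DATABASE_URL:-}" ]; then
  echo "using production database from DATABASE_URL"
else
  echo "falling back to local SQLite file db.sqlite3"
fi
```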
2. Different deployment procedures. While every development environment builds its own Docker image, it never pushes that image anywhere; that right is reserved for our staging and production environments, which push their images to the Heroku container registry so that our servers can access them.
Notice how the procedures are almost exactly the same between staging and production. This is because our team treats staging as a “beta test”: it does exactly what real deployment does, just to a different address. As such, staging and production push two different Docker images (which differ only in name), with two different Docker accounts, to two different registries.
CI/CD
Now here’s the part where using Docker makes our lives much easier. Remember the second advantage Docker gave us in the previous section? Because of Docker, our app can be distributed to a container registry, and our server can simply pull from that registry to get the latest version of the app. Simple, right?
“But wouldn’t that mean you somehow have to tell the production server that you’ve pushed a new version of the application, so that it can pull the latest image?” If that’s what you’re thinking, you’ve hit the nail on the head! For the process to truly be continuous, the production server has to pull the newest version of the image automatically every time a new one is pushed.
Luckily, Heroku, the production platform we use, has a built-in API that does exactly that. If you look back at the images in the previous section showing the staging and production procedures, you’ll notice that staging executes a staging_release.sh script, while production executes release.sh.
They are, again, two nearly identical scripts that differ only in their names, the images they push, and the targets of their pushes. These scripts call the Heroku API, which then pulls the latest staging or production Docker image, respectively, and updates the image our app is currently running to the newest version.
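A minimal version of such a release script might look like the following, using the Heroku CLI’s container commands. The app name myapp is a placeholder, and our team’s actual scripts may differ:

```shell
#!/bin/sh
# Hypothetical sketch of release.sh; "myapp" is a placeholder app name.
set -e

# Authenticate against the Heroku container registry.
heroku container:login

# Build the image and push it to Heroku's registry for this app.
heroku container:push web --app myapp

# Tell Heroku to release the newly pushed image, replacing the running one.
heroku container:release web --app myapp
```

In this sketch, staging_release.sh would be identical except for the app name (and Heroku account) it targets.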
Conclusion
In summary, this brief exploration into deployment has been really interesting. However, we’ve only barely scratched the surface of deployment technologies. For example, what if we deployed to AWS instead? There wouldn’t be a Heroku API to help, so how would we implement CI/CD? Would we build our own release mechanism? And what if we didn’t use Docker at all? What other interesting options are there? These questions leave us with far more room to explore and study, and I’m excited to see which question we tackle next.
Thanks for reading!
Source:
https://testdriven.io/blog/deploying-django-to-heroku-with-docker/#gitlab-ci