Why, when and how do we use Docker?
Let’s assume you need to run 20 different instances of an application, each in its own separate environment. Previously, the obvious choice would have been to run 20 virtual machines.
However, this would mean running 20 operating systems and devoting a significant amount of resources simply to keeping them alive. With Docker, you can run 20 isolated containers, built from different images, on a single virtual machine with no extra hypervisor layer. One image can also be used to build multiple containers that differ only in code, environment variables, exposed ports and so on. How big is the gain? By some benchmarks Docker is up to 26 times more efficient than KVM, which is an astounding performance boost.
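As a minimal sketch of that last point, assuming a hypothetical image called myapp, one image can back several differently configured containers:

```bash
# Build one image from the current project (hypothetical name "myapp")
docker build -t myapp:1.0 .

# Run three isolated containers from the same image, each with its own
# port mapping and environment variables
docker run -d --name myapp-eu  -p 8081:8080 -e REGION=eu  myapp:1.0
docker run -d --name myapp-us  -p 8082:8080 -e REGION=us  myapp:1.0
docker run -d --name myapp-dev -p 8083:8080 -e REGION=dev -e DEBUG=1 myapp:1.0
```

Each container gets its own filesystem, network namespace and process tree, yet all three share the host kernel instead of booting a separate OS.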
Docker is the most popular software container management platform. A container is essentially a package of code together with everything needed to run that piece of software in isolation on a shared OS. Docker images are built in a way that ensures ANY Docker container can run on ANY machine with Docker installed. This removes the “it works well on my machine, I don’t know why it doesn’t work on yours” case, which used to be a HUGE headache for developers worldwide. There are a couple more benefits, and we will list them below.
Why use Docker?
First of all, the cost-efficiency ratio is exceptional. Returning to the example above: instead of running 2 VMs per app (for the sake of load balancing), a business might rent 3-4 VMs, cluster them and deploy all 20 apps across them. In addition, most containers are up and running literally a second after creation, way faster than booting a VM.
Portability is the second reason for using Docker. Developers rely on a lot of tools in their work, and many of these tools consist of microservices that demand separate environments. With Docker, instead of launching several VMs on a development laptop, a DevOps engineer can simply launch several containers on one machine and save a ton of resources and operational overhead.
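As an illustrative sketch (the services here are just common examples of project dependencies), the backing services of a project can be started as throwaway containers instead of dedicated VMs:

```bash
# Disposable development dependencies on a laptop, no extra VMs required
docker run -d --name dev-postgres -p 5432:5432 -e POSTGRES_PASSWORD=devpass postgres:16
docker run -d --name dev-redis    -p 6379:6379 redis:7

# Tear everything down just as quickly when you are done
docker rm -f dev-postgres dev-redis
```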
The third reason is consistency. Docker images do not depend on the environment they run in. The very same app runs on the developer’s laptop, during testing, on a staging server and after being shipped to production, regardless of the surroundings. It works and behaves the same way, with minimal effort, across the whole software delivery pipeline.
The fourth, but not the last, reason is the sheer number of ready-made Docker images. The Docker community is huge and growing fast: every day dozens or even hundreds of new images are added to Docker Hub. When a DevOps engineer needs to try out a tool or a complex application consisting of several services, there is no need to reinvent the wheel. There is a 99% chance somebody has already done this and pushed the image to the Hub.
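As a quick illustration (nginx here is just a stand-in for whatever tool you want to evaluate), trying out a published image takes a couple of commands:

```bash
# Pull a ready-made image from Docker Hub and run it in seconds
docker run -d --name trial-nginx -p 8080:80 nginx:alpine

# Evaluate it, then throw it away without leaving a trace on the host
curl -s http://localhost:8080 | head -n 5
docker rm -f trial-nginx
```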
When to use Docker?
Here is what a normal delivery pipeline looks like with Docker:
- A DevOps engineer specifies the requirements for any microservice in an easy-to-create Dockerfile (see the sketch after this list).
- The code is pushed to the repo and pulled down by a CI server, which builds EXACTLY the environment that is needed; you don’t even have to configure the CI server specially for that.
- The CI tool builds the container, deploys it to staging, and it is ready for tests.
- Once the tests are done, the testing environment is torn down in a couple of clicks, or automatically.
- Exactly the image you tested is promoted to production, since machine configuration is no longer a concern.
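A minimal sketch of the first two steps, with hypothetical names (a Node.js service called myorg/myservice and a commit-SHA tag) standing in for real project details:

```bash
# Describe the microservice's environment in a Dockerfile
cat > Dockerfile <<'EOF'
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]
EOF

# The CI server rebuilds exactly this environment for every commit,
# tags the image with the commit SHA and pushes it to the registry
docker build -t myorg/myservice:${CI_COMMIT_SHA:-local} .
docker push myorg/myservice:${CI_COMMIT_SHA:-local}
```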
Some form of containerized workflow had existed in the Java world for a decade, but having it work across all Linux-based stacks is a real game changer!
How to use Docker?
At IT Svit we use Docker servers on top of a Linux OS, omitting the virtualization layer to leverage the incredible productivity boost Docker offers.
The simpler way we use Docker is together with CircleCI, GitLab CI or Jenkins to create Continuous Integration pipelines, which allow us to easily build containers with the latest code, test them automatically and deploy them to production.
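Stripped of CI-tool specifics, the heart of such a pipeline boils down to a few commands; the registry address and test command below are hypothetical placeholders:

```bash
# Build an image for the current commit
docker build -t registry.example.com/myservice:${CI_COMMIT_SHA:-local} .

# Run the test suite inside the freshly built container
docker run --rm registry.example.com/myservice:${CI_COMMIT_SHA:-local} npm test

# If the tests pass, push the image so that exact tag can be deployed to staging
docker push registry.example.com/myservice:${CI_COMMIT_SHA:-local}
```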
The more complex way of using Docker (and the one we adore) is leveraging powerful tools for running larger projects and performing a much wider range of tasks, such as rolling updates, autoscaling and, when needed, easy and safe rollbacks, thus ensuring Continuous Delivery. The tools we are talking about are container orchestration systems like Kubernetes, Rancher, OpenShift and others. Using these instruments for solving complex tasks lets you feel the real power of Docker and microservices.
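As a hedged sketch with Kubernetes (the deployment and image names are hypothetical), each of those tasks is a single command:

```bash
# Roll out a new image version without downtime
kubectl set image deployment/myservice myservice=myorg/myservice:1.4.2
kubectl rollout status deployment/myservice

# Roll back safely if something goes wrong
kubectl rollout undo deployment/myservice

# Autoscale based on CPU utilisation
kubectl autoscale deployment myservice --min=2 --max=10 --cpu-percent=75
```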
Combining various tools is the key to success here (a rough sketch of how they chain together follows the list):
- Configuration orchestration tools like Terraform help us easily build and modify the Infrastructure as Code.
- Combinations of Ansible + Kubernetes/Docker Swarm help us provide IaaS.
- Provisioners like Ansible, Puppet or Salt enable us to populate the infrastructure and manage applications swiftly and efficiently.
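A rough sketch of how these tools can chain together (the playbook and manifest paths are hypothetical):

```bash
# 1. Build or modify the underlying infrastructure as code
terraform init && terraform apply

# 2. Configure the freshly created machines and install dependencies
ansible-playbook -i inventory/production site.yml

# 3. Deploy the containerised workloads onto the cluster
kubectl apply -f k8s/
```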
Such an approach to using Docker is the key to building and maintaining scalable and reliable infrastructures for our customers on AWS, Google Cloud or any other provider of cloud or bare-metal servers.