Docker

Let’s try to containerize your applications (and take advantage of it)!

Muhammad Zuhdi
May 3, 2021

What is Docker?

Docker is an open platform that provides the ability to package and run an application in a loosely isolated environment called a container. It can run many containers simultaneously on a given host. Containers are lightweight and contain everything needed to run the application, so you do not need to rely on what is currently installed on the host. Containers can also be easily shared, and anyone you share a container with gets the same container, working in the same way.
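For example, once Docker is installed, running your first container takes a single command:

    # Download the hello-world image from Docker Hub (if it is not cached locally) and run it in a new container
    docker run hello-world

    # List all containers on this host, including ones that have already exited
    docker ps -a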

What are the uses of Docker?

  • Fast, consistent delivery of your applications

Docker allows developers to work in standardized environments by using local containers that provide your applications and services. Because the environments are standardized, you can deliver your applications consistently. Docker also makes delivery faster, since containers are great for continuous integration and continuous delivery (CI/CD) workflows (a short sketch follows this list).

  • Responsive deployment and scaling

Docker’s container-based platform allows for highly portable workloads. Docker containers can run in many environments: on a developer’s local laptop, on virtual machines in a data center, on cloud providers, or in a mixture of these environments.

  • Running more workload on the same hardware

Docker is lightweight and fast, so it can be a cost-effective alternative to hypervisor-based virtual machines. It is a good fit for small to medium deployments where you need to do more with fewer resources.
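To make the first point concrete, a typical delivery flow looks roughly like this (the image name, the test script, and the Docker Hub namespace are placeholders, not a real project):

    # Build the application image from the Dockerfile in the current directory
    docker build -t myuser/myapp:1.0 .

    # Run the test suite inside the same standardized environment
    docker run --rm myuser/myapp:1.0 ./run_tests.sh

    # Push the image so CI/CD and teammates run exactly the same thing
    docker push myuser/myapp:1.0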

Docker Architecture

Docker uses a client-server architecture. The Docker client communicates with the Docker daemon through a REST API, over a UNIX socket or a network interface, to carry out the requested tasks.
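On Linux, for instance, you can talk to that API directly over the daemon’s UNIX socket; the docker CLI simply issues the equivalent calls for you:

    # Ask the daemon for its version information over the local UNIX socket
    curl --unix-socket /var/run/docker.sock http://localhost/version

    # The CLI makes a similar API call under the hood
    docker version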

The Docker Daemon

Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as images, containers, networks and volumes. It can also communicate with other daemons to manage Docker services.

The Docker Client

The Docker client (docker) is the primary way that many Docker users interact with Docker. When you run commands such as docker run, the client sends them to dockerd, which carries them out. The docker command uses the Docker API to communicate with the Docker daemon. The Docker client can communicate with more than one daemon.
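For example, the client can be pointed at a different daemon with the -H flag or the DOCKER_HOST environment variable (the remote host below is hypothetical):

    # Talk to the local daemon, which is the default
    docker ps

    # Talk to a remote daemon over SSH instead (hypothetical host)
    docker -H ssh://user@remote-host ps

    # Or set the target once for the whole shell session
    export DOCKER_HOST=ssh://user@remote-host
    docker ps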

Docker Registries

A Docker registry stores Docker images. There is a public registry that anyone can use, namely Docker Hub. By default, Docker looks for images on Docker Hub. You can also run your own private registry if needed.

When you use the docker pull or docker run command, the required images are pulled from the configured registry. When you use the docker push command, your image is pushed to the configured registry.
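For example (the repository and tag names below are placeholders):

    # Pull an image from Docker Hub, the default registry
    docker pull ubuntu:20.04

    # Start a private registry on this machine (the registry image itself comes from Docker Hub)
    docker run -d -p 5000:5000 --name registry registry:2

    # Re-tag the image so it points at the private registry, then push it there
    docker tag ubuntu:20.04 localhost:5000/ubuntu:20.04
    docker push localhost:5000/ubuntu:20.04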

Docker Objects

Docker objects include images, containers, networks, volumes, and more. This section only covers images and containers.

  • Images

An image is a read-only template with instructions for creating a Docker container. An image can be built on top of another image as its base, with additional customization.

You can create your own images or use those created by others and published in a registry. To build your own image, you create a Dockerfile with a simple syntax that defines the steps needed to build the image and run it. Each instruction in a Dockerfile creates a layer in the image. When you change the Dockerfile, you have to rebuild the image to apply the changes, but only the layers that have changed are rebuilt. This is part of what makes images so lightweight, small, and fast compared to other virtualization technologies. (A sketch of a Dockerfile and of the basic container commands follows this list.)

  • Containers

A container is a runnable instance of an image. You can create, start, stop, move, or delete a container using the Docker API or CLI. You can connect a container to one or more networks, attach storage to it, or even create a new image based on its current state.

By default, a container is relatively well isolated from other containers and from its host machine. You can control how isolated a container’s network, storage, or other underlying subsystems are from other containers or from the host machine.

A container is defined by its image as well as any configuration options you provide to it when you create or start it. When a container is removed, any changes to its state that are not stored in persistent storage disappear.
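As a sketch, a Dockerfile for a small Python web app might look like this (the base image, file names, and port are assumptions, not the values of any particular project):

    # Start from an existing image as the base layer
    FROM python:3.9-slim

    # Each instruction below adds a new layer on top of the base image
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install -r requirements.txt
    COPY . .

    # Command to run when a container is started from this image
    CMD ["python", "app.py"]

You can then build the image and manage containers created from it:

    # Build the image from the Dockerfile in the current directory
    docker build -t myapp:1.0 .

    # Create and start a container, publishing a port and mounting a named volume
    docker run -d --name myapp -p 8000:8000 -v myapp-data:/app/data myapp:1.0

    # Connect the running container to an additional network
    docker network create backend
    docker network connect backend myapp

    # Stop and remove the container; anything not stored in the volume is lost
    docker stop myapp
    docker rm myapp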

Orchestration

The portability and reproducibility of a containerized process mean we can move and scale our containerized applications across clouds and data centers. With containers, applications are guaranteed to run the same way on any host. Furthermore, as we scale our applications up, we will want (and probably need) some tooling to help automate the maintenance of those applications: something able to replace failed containers automatically and to manage the rollout of updates and reconfigurations of those containers during their lifecycle.

Tools to manage, scale, and maintain containerized applications are called orchestrators, and the most common examples of these are Kubernetes and Docker Swarm.
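As a rough illustration, this is what scaling and updating a service with Docker Swarm looks like (nginx is used here only as an example workload):

    # Turn this Docker engine into a single-node swarm
    docker swarm init

    # Run a service with three replicas; the swarm replaces failed containers automatically
    docker service create --name web --replicas 3 -p 8080:80 nginx

    # Roll out an image update across the replicas
    docker service update --image nginx:alpine web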

Development

According to Docker’s guide, orchestration in a development environment is done with the orchestrators provided by Docker Desktop. In my group project, however, we use Docker Compose for the orchestration. I’ll show the details below.

Example of compose file
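Roughly, such a compose file could look something like this (the image tags, build paths, ports, and credentials below are only placeholders, not my project’s actual values):

    version: "3.8"

    services:
      rabbitmq:
        image: rabbitmq:3-management    # message broker; the tag is an assumption
        ports:
          - "5672:5672"

      db:
        image: postgres:13              # the database engine is an assumption
        environment:
          POSTGRES_PASSWORD: example    # placeholder credential
        volumes:
          - db-data:/var/lib/postgresql/data

      ms:
        build: ./ms                     # a supporting microservice built from the repo
        depends_on:
          - db

      web:
        build: ./web                    # the main web application
        ports:
          - "8000:8000"
        depends_on:
          - db
          - rabbitmq

      worker:
        build: ./worker                 # background worker consuming from rabbitmq
        depends_on:
          - rabbitmq

    volumes:
      db-data:

Running docker-compose up then builds and starts all of the services together.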

From the file above, we can tell that my project consists of five services: rabbitmq, ms, db, web, and worker. (to be continued)
