Introduction To Docker

Container technology has gained momentum in the cloud world because of its rapid start times and the small footprint it needs to run applications. Here, I will be sharing my experience with a high-level view of container technology; i.e., what is it? How is it different from the monolithic architecture that we have been using throughout the years? How can it reduce costs in the cloud world?

Let us understand container technology with the help of an example. Suppose you go to a bakery and want to buy a three-pound chocolate cake. The shopkeeper will give you the cake in a small box that is just big enough to hold a three-pound cake, so that you can carry it easily. The shopkeeper will not give you this three-pound cake in a box meant for a five- or ten-pound cake; otherwise it would be difficult for you to carry.

Now, container technology bears a fair resemblance to this analogy. Traditionally, if you want your web application in production, you have to buy a whole new server infrastructure, install a server product, configure and install the binaries for your application, and then run your application. In the cloud world, you set up a whole new VM with an image of Windows Server or Linux and take full responsibility for administering that VM. At best, you set up a PaaS service, such as Azure App Service or Google App Engine, to host your application.

Maintaining these components individually, and keeping them up to date as a whole monolithic block, can hurt the application's agility and the cost you pay to the cloud vendor. This is where container technology comes in. With containers, we can reduce costs, improve the application's agility and achieve high availability. There are two popular container technologies out there: Docker (for Linux) and Windows Containers (introduced in Windows Server 2016). Here we will look at Docker and its components from a high level.

Introduction to Docker

Docker is a lightweight, reliable and fast containerization technology. It greatly reduces the overhead of spinning up a new Virtual Machine (VM), and it helps with building microservices architectures and with packaging, configuring and delivering application units across different environments.
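To get a first feel for how fast this is, once Docker is installed you can start a container with a single command (a minimal sketch; `hello-world` is a tiny test image published by Docker on Docker Hub):

```shell
# Pull the tiny hello-world image and run it in a new container.
docker run hello-world

# List containers, including ones that have already exited.
docker ps -a
```

The whole cycle of pulling the image, creating the container and running it typically takes seconds, with no guest OS to boot.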

Real-World Docker examples

Docker is a container-based application packaging and delivery technology. In Docker, applications run in containers. Now the question is: why containers? The answer is pretty simple. In the real world, if a country wants to export goods to another country, they pack them into shipping containers and then move those containers by road or by sea to the other country.

Inside a container, every item is packed so that it does not get damaged by small jolts, and everything stays safely in place regardless of where the container is moving.

The same analogy applies to Docker. Docker encapsulates an application into a small running container with all of the application's binaries, libraries and dependencies, and it fully isolates the application from the host machine. Docker containers are based on Docker images; we will see what a Docker image is, and the anatomy of a running Docker container, in a while.

Docker and Virtual Machines (VMs)

VMs are also portable across a number of hypervisors, but it is important to note that containers are not VMs. If we look at VMs and containers in the context of application delivery, we see that Docker saves us from the overhead of a guest OS and a hypervisor. We do not need any guest OS to test or run our application while using Docker.

As we can see, VMs require a guest OS, which adds a lot of overhead on top of the computing power. VMs usually have the following drawbacks:

  • You have to spin up a whole new virtual machine and install a licensed OS just to run a single application.
  • You have to configure and install all the binaries and libraries for your application manually each time.
  • They add a lot of overhead on top of your computing power.
  • Although they are portable across different hypervisors and isolated from their host machine, their images are large, which makes them hard to move around.

Docker containers save us from that heavy lifting, which otherwise becomes complicated and costly in the cloud world. Containers share the services (such as the file system) and the binaries and libraries of the host machine (where necessary) across all containers. Docker uses common Linux kernel features, such as namespaces, to isolate the application from the host machine and package it into an image. Containers offer fast startup times, are more lightweight, and make application deployment and delivery fast. Calling containers an alternative to virtualization technology, however, might not be appropriate.
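You can observe this namespace-based isolation yourself (a small sketch, assuming Docker is installed and using the public `alpine` image from Docker Hub): inside a container, the process table starts fresh, so your command runs as PID 1 and the host's processes are invisible.

```shell
# Run `ps` inside a throwaway Alpine Linux container.
# Thanks to PID namespaces, it only sees the container's own processes.
docker run --rm alpine ps aux

# The hostname (UTS namespace) and filesystem are likewise private to the container.
docker run --rm alpine hostname
```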

Docker Components

Now, let’s look at some of the core building blocks of Docker so that we can work with it properly.

Docker, in general, is composed of 5 core components,

  • Docker Host
  • Docker Engine
  • Docker Client
  • Docker Image and Dockerfile
  • Docker Registry and Docker Hub

The Docker Host contains all the Linux features that are used to isolate applications from the host machine. The Docker Engine sits inside the Docker Host and is responsible for creating, starting, removing and managing containers (Docker natively runs on Linux). It is also sometimes referred to as the Docker Daemon. We use the Docker Client on our host machine to talk to the Docker Engine to create, build and run containers.
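The client/daemon split is easy to see from the command line (assuming Docker is installed): `docker version` reports a Client section and a Server (Engine) section separately.

```shell
# The client half runs where you type; the server half is the Docker Engine (daemon).
docker version

# General information about the engine: containers, images, storage driver, etc.
docker info
```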

As said, applications run in containers. A Docker container is created from a Docker image, and a Docker image is built from a Dockerfile. A Docker image, in a nutshell, is a recipe or blueprint for a running Docker container. A Dockerfile is composed of instructions, often wrapping Linux commands, for creating a Docker image. Docker images are then processed by the Docker Engine (Docker Daemon) to create running instances of those images (if we run the container) in the form of Docker containers.
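As an illustration, a minimal Dockerfile might look like this (a hypothetical example; the base image tag and file name are placeholders for your own application):

```dockerfile
# Start from an official lightweight Linux base image.
FROM alpine:3.4

# Copy an application script into the image (hypothetical file name).
COPY app.sh /app/app.sh

# Command to run when a container starts from this image.
CMD ["/bin/sh", "/app/app.sh"]
```

Building an image from it and running a container is then a matter of `docker build -t myapp .` followed by `docker run myapp`.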
After writing Dockerfiles and creating Docker images, we can push those images to a Docker registry to distribute our applications either publicly or privately. Think of a registry as what GitHub is for Git repositories. With a Docker registry, we can share Docker images with other people or create running instances of these images in the cloud for production applications (such as on ACS or DigitalOcean). The default Docker registry is Docker Hub, where we can find a number of official Docker images as well as images from third-party vendors.
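Pushing an image to Docker Hub follows a log-in, tag, push pattern (a sketch; `myusername` and `myapp` are placeholder names):

```shell
# Authenticate against Docker Hub.
docker login

# Tag a local image with your Docker Hub namespace (placeholder names).
docker tag myapp myusername/myapp:1.0

# Upload it so others can `docker pull myusername/myapp:1.0`.
docker push myusername/myapp:1.0
```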

The Docker Engine (Docker Daemon) loads the Docker image (created from a Dockerfile) into a Docker container.
Containers are the running instances of Docker images.
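The image-to-container lifecycle maps onto a handful of commands (assuming Docker is installed; `nginx` is a public image on Docker Hub used here as an example):

```shell
# List the images available locally.
docker images

# Start a detached container named "web" from the nginx image.
docker run -d --name web nginx

# Show running containers, then stop and remove ours.
docker ps
docker stop web
docker rm web
```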

There are other Docker applications available such as,

  • Docker Compose, for defining and managing multi-container applications.
  • Docker Cloud, for building and deploying Docker containers in the cloud, such as on ACS or Kubernetes.
  • Docker Trusted Registry (DTR), a hosted registry service for managing and building images.

And others. We will see them in later posts.
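To give a flavor of Docker Compose: a multi-container application is described in a single `docker-compose.yml` file (a hypothetical sketch: a web service built from the local Dockerfile, plus a Redis container):

```yaml
version: '2'
services:
  web:
    build: .          # build the image from the Dockerfile in this directory
    ports:
      - "8080:80"     # map host port 8080 to container port 80
  redis:
    image: redis      # pull the official Redis image from Docker Hub
```

Running `docker-compose up` then starts both containers together.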

Installing Docker

Installing Docker for your particular system is very easy: just go to the official Docker website, click on Docker for your system, and you will find a step-by-step guide for installing Docker.

Docker for Windows uses Hyper-V and Docker for Mac uses HyperKit; make sure the relevant one is installed and enabled.
You can also use Docker Toolbox for Windows and Mac, which requires VirtualBox to be installed on your system. I'll be using Docker 1.12. With Docker Toolbox, the terminal has to be configured as a Docker client each time you start an instance of it.
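With Docker Toolbox, that per-terminal configuration is done through `docker-machine` (a sketch; `default` is the name Toolbox gives its VM):

```shell
# Start the Toolbox VM (named "default") if it is not already running.
docker-machine start default

# Print the environment variables that point the docker client at the VM...
docker-machine env default

# ...and apply them to the current shell session.
eval "$(docker-machine env default)"
```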

It should be noted that installing or enabling a hypervisor such as Hyper-V or VirtualBox does not mean we are back to the traditional way of virtualization. We are using just one VM, known as the MobyLinux instance, which has just enough capability to run the Docker Host. If you are using Docker for Windows or Docker for Mac, this VM is created for you automatically in Hyper-V (for Windows) or HyperKit (for macOS).