DockerOps - Getting Started With Docker

Introduction

Today, almost everyone has at least heard of Docker and the term containerization. They may not be experts in building container-oriented services, and they may not know how containers work under the hood, but they have stumbled upon an ebook, a magazine, an article, or a video that talks about containerized applications, orchestration tools, Docker, or Kubernetes. The point is that people are aware of the technology, and they see it as the next evolution of the virtual machine stack, which by now feels pretty old school. What I want to do in this Docker 101 class is establish the core fundamentals of Docker: how it is set up, how it works, how you can get Docker quickly, and how to try it out.

I will try my best to cover all aspects of Docker and the containerization of applications in this collection of tutorial articles. Besides Docker itself, we will explore:

  • Data persistence in a container
  • Services in Docker
  • Container orchestration
  • Kubernetes and Docker

These are a few of the topics that I can discuss without making beginners lose track, or making experts call me a geek.

What is Docker?

A short definition for those who still don't know what Docker is and why they should care: when people talk about Docker, some think of a containerization tool, and some even think of it as a process orchestrator. They are all correct. Docker is, and can be thought of, as a:

  1. Package manager
  2. Orchestrator
  3. Load Balancer

And I didn't even mention all the other definitions you can find on the internet, such as: Docker is a product developed by the company of the same name; or, Docker is an open source containerization management tool; or, Docker is the gateway to the cloud. They are all right from their own perspective, but none of them covers everything that Docker can do. That is why I am writing this article: to showcase the functionality and features of Docker instead of merely talking about it. Let's start with the requirements and prerequisites for Docker.

Platform and Requirements

Docker itself can be installed on Linux, Windows, or macOS, but I will be using the Linux platform here. The commands and Docker's behavior are similar on Windows and macOS as well. That said, it will be a smoother ride if you join in using a Linux distro, especially if you set up Ubuntu 16.04 or later. I have used Ubuntu 18.04 for this setup.

The second thing to note here is that I am using snaps, from Snapcraft. The reason behind this selection is that snaps are very easy to install, upgrade, and manage. They ship with the services, programs, and utilities that let you manage the package quite easily. Do not worry; in the installation and setup section I will also point out other installation methods and requirements as needed, but the primary method is this one.
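For reference, once Docker is installed as a snap (we do that in the next section), the day-to-day housekeeping looks roughly like this; a quick sketch using the standard snap commands:

    $ snap info docker          # channels available and the revision currently tracked
    $ snap list docker          # confirm the snap is installed, and its version
    $ sudo snap refresh docker  # upgrade to the latest revision in the tracked channel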

Installation and Configuration

As discussed, we will be installing the snaps for Docker and, for those who want to learn some advanced orchestration topics, Kubernetes. Snaps are available on recent Linux distributions, roughly those released in 2017 or later. The installation process is clearly documented on the Docker website, but I will still explain the method on a few platforms.

Windows

On Windows, the only things you need to make sure of before executing the Docker installer are:

  • Hardware virtualization
  • Hyper-V to be installed and enabled

This also means that you cannot try this out on the Windows Home edition; you need Windows Pro or a higher edition for this. But that does not prevent you from trying out the Docker installation in a virtualized environment, where you can easily set up a Linux distro and get started with the next section.

But if you already have a Pro edition of Windows, just head over to Docker downloads and install Docker for your machine.

Linux (Ubuntu 18.04)

Although any Linux distribution that supports snaps will work, I am going to use Ubuntu in the rest of this article, as well as in the articles that follow. So, first make sure snaps are available on your system. To do that, make sure you are running Ubuntu 16.04 or later; that is why I recommended Ubuntu 18.04. The snap daemon is installed by default on these releases, and if the following command doesn't give a response,

    $ snap version

then your operating system does not have the snap system installed. To install it, run the following command,

    $ sudo apt-get install snapd

This installs the daemon for the snap environment, and afterwards you can verify that snap is available. You can also follow the documentation on the Snapcraft website for Ubuntu, or for whichever operating system and distribution you selected.

Once you are done with that, just go ahead and install Docker from the Snap Store.

    $ sudo snap install docker

This will take some time and will leave the Docker engine installed and set up for you. You could now install and set up a few more things, such as a hello-world Docker image for trial purposes. But what I would rather do here is point out a few things about Docker before we step into the second episode of this Docker series. So, provided you have downloaded and installed the Docker engine, move ahead to the next step and let's start exploring Docker.
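Before moving on, it is worth a quick sanity check that the engine actually responds; a minimal sketch (with the snap package, Docker commands typically need sudo unless you add your user to the docker group):

    $ sudo docker version          # both the client and the engine versions should be reported
    $ sudo docker run hello-world  # pulls a tiny test image and runs it once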

Non-snappy Method

Docker also publishes artifacts for Linux environments; you can download and install those tarballs, or use repository managers such as apt, yum, etc.
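As a rough sketch of the non-snap route on Ubuntu (either of these is a common approach; check Docker's own documentation for the canonical steps):

    # Ubuntu's own archive ships a docker.io package
    $ sudo apt-get update && sudo apt-get install docker.io

    # Or use Docker's convenience script, which configures Docker's apt repository for you
    $ curl -fsSL https://get.docker.com -o get-docker.sh
    $ sudo sh get-docker.sh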

Since Docker is open source, you can always go ahead and compile the source code yourself. The Docker cookbook snippet from Packt walks through the steps you need to perform in order to build the Docker engine from source.

Exploring Docker

Before I conclude this chapter, let's take a quick look at the Docker engine from the standpoints of storage, networking, and process information. This will lay some of the foundations for concepts we are going to learn later on. For this, we will be using a Node.js application I built myself, for the following reasons:

  • It is my own development, so I can easily explain what I did, and where.
  • It is available as an open source project on GitHub, and it welcomes contributions and collaboration from the community.
  • It is lightweight and demonstrates some of the best practices that one can, and should, employ while developing Dockerized packages.
  • The Docker packaging is merely four lines of code, while the underlying project is a complete web application written in Node.js, exposing several design patterns and architectural designs, especially for the serverless approach.
  • The project also ships with other configurable, ready-to-execute scripts, such as a Docker Compose file.

You can explore the project on GitHub (link provided in the downloads section above) and see for yourself how it is structured. But now, without any further delay, let us create our first Docker container and see what the process looks like. Inside our application, we expose the web app via this code,

    // Inside the serverconfig.js
    let port = process.env.PORT || process.env.PORT_AZURE || process.env.PORT_AWS || 5000;

    // Inside the app.js
    app.listen(serverConfigurations.serverPort, () => {
        let serverStatus = `Server listening on localhost:${serverConfigurations.serverPort}.`;
        logger.logEvent("server start", serverStatus);
        console.log(serverStatus);
    });
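Since the port is read from environment variables before falling back to 5000, you could, just as an illustration, override it at launch time without touching the code:

    # Hypothetical override: the app picks up PORT from the environment
    $ PORT=8080 npm start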

This code is not important from Docker's standpoint, but I want you to see that this is what will be executed and what, in turn, gives us a web server running in a container. In a normal environment, this would just execute the function and start the event loop for the Node.js app, which means that your process starts and listens for network traffic on that hostname and the port we assigned. In this case, the output of the command,

    $ npm start

would be something like this,

    [email protected] start /home/afzaal/Projects/Git/Nodejs-Dockerized/nodejs-dockerized
    > node ./src/app.js

    Cannot start Application Insights; either pass the value to this app
    or use the App Insights default environment variable.
    [Event] server start: Server listening on localhost:5000..
    Server listening on localhost:5000.

From a naive standpoint this all looks fine, but now we stumble upon a problem. If you look at the output (ignoring the Application Insights lines), you will find that it states, "Server listening on localhost:5000.", which makes it clear that port 5000 is now bound to this process. This causes problems for the scalability of our application, which deserves to become a service, and will, in turn, require us to:

  • Either run the application on different and configurable ports,

    • This is quite possible in the application I wrote; you can pass the port for the application to listen on, but it is not the recommended approach, because it also requires you to manage traffic forwarding from outside this machine, which brings us to the second point.

  • Or run the application behind a load balancer, and have the load balancer forward the traffic to port 5000 of each process that you spawn for this service.

Another major, and the most important, problem with this approach is that if our process fails or terminates, there is no way for our environment to bring the process back. We could overcome that by writing our own management program, or by using another web server program, but even then we would need to manage and maintain how we determine that our application is up and running.
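Just to preview where this is heading (we build the image in a moment), here is a sketch of how Docker's port mapping and restart policy address both points at once; the image name is simply the tag we choose below:

    # Two instances of the same image, each mapped to a different host port,
    # and each restarted automatically by the engine if the process fails
    # docker run -d -p 5001:5000 --restart on-failure afzaalahmadzeeshan/dockerops:gettingstarted
    # docker run -d -p 5002:5000 --restart on-failure afzaalahmadzeeshan/dockerops:gettingstarted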

These are the areas where Docker shines, and not just Docker, but other orchestrators and process managers as well, such as Kubernetes, DC/OS Marathon, etc. Since we are talking about exploring Docker, we will cover those features in proper depth in later chapters; in this chapter, we want to explore how this style of deploying an application on a machine can be achieved. To understand how, we are going to hand both of the problems described above over to Docker. The way we do that is to create a new image in Docker and package our application inside it. In Docker, the packaging is described by a Dockerfile, a file that contains the sequence of commands required to be run in order to start the process. A Dockerfile needs the following information to start your process:

  1. Dependencies
  2. Source files
  3. Entry point or command

Following this pattern, we know that our application depends on the Node.js runtime, that the files are in the same folder, and that we can start the project using the npm start command. Our Dockerfile would be,

    FROM node:10-alpine
    COPY . .
    RUN [ "npm", "update" ]
    CMD [ "npm", "start" ]

That is it!

We do not need to write a single extra word here. I have already discussed these commands in detail in another article of mine, so I will not repeat that once more. Let's build the image, and run it to verify how things are taken care of for us.

    # docker build -t afzaalahmadzeeshan/dockerops:gettingstarted .

This command builds our first container image and tags it with that name.

    Sending build context to Docker daemon  1.527MB
    Step 1/4 : FROM node:10-alpine
     ---> 7ca2f9cb5536
    Step 2/4 : COPY . .
     ---> 9ac07aa147b7
    Step 3/4 : RUN [ "npm", "update" ]
     ---> Running in ba65eff2e6d9
    npm notice created a lockfile as package-lock.json. You should commit this file.
    npm WARN [email protected] No repository field.

    [email protected]
    [email protected]
    [email protected]
    [email protected]
    [email protected]
    added 120 packages from 179 contributors and audited 261 packages in 30.756s
    found 0 vulnerabilities

    Removing intermediate container ba65eff2e6d9
     ---> 558e5026cb19
    Step 4/4 : CMD [ "npm", "start" ]
     ---> Running in 807b13ed3a29
    Removing intermediate container 807b13ed3a29
     ---> 46dc26469806
    Successfully built 46dc26469806
    Successfully tagged afzaalahmadzeeshan/dockerops:gettingstarted

As you can see, during the process Docker injects everything that is needed into the image. All the required components are downloaded, patched, any executables are run, and finally an image is built. The build process is quite similar to any other build process; think of the .NET Core workflow,

    $ dotnet restore
    # Assuming the .NET Core project is in the same directory, we execute
    $ dotnet build
    # If everything goes fine, we do
    $ dotnet run

Similarly, what Docker did above was merely build the image. Now our project is ready to be run. We have named the image afzaalahmadzeeshan/dockerops:gettingstarted. That is easier to remember, and we will use this label during the container creation process; it also makes other operations, such as inspecting or removing containers from the system, easy to manage. Let's go ahead and create the container, and inspect it as well.
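If you want to double-check that the image landed in your local image store first, a quick listing works (purely optional):

    # List the locally stored images under the repository we just tagged
    # docker image ls afzaalahmadzeeshan/dockerops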

    # docker run -d --name gettingstarted afzaalahmadzeeshan/dockerops:gettingstarted

This takes hardly a few seconds and creates the container for you! Notice the -d flag: it runs the container in detached mode and leaves our terminal alone. We will explore the logs later in this article to see what happens inside the process. Also notice that we did not expose a port; the container still listens for network traffic on the IP address that the Docker engine maps to it. Remember that we assigned 5000 as the port for the process, so let us inspect the container and find out its IP address.
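First, though, a quick optional check that the container is indeed up and running:

    # List running containers, filtered down to ours by name
    # docker ps --filter name=gettingstarted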

    # Notice how gettingstarted helps us in passing the container reference.
    # docker inspect gettingstarted

This prints quite a lot of information, and for the time being we are only interested in the container details such as its status, networking, and hostname or IP address information, which are as follows,

    [
        {
            "Id": "c3f799ed9a74f1cb23c1046bceeee9144aad1122f70f4a39860722b625c5ef5b",
            "Created": "2018-10-17T00:44:09.340443736Z",
            "Path": "npm",
            "Args": [
                "start"
            ],
            "State": {
                "Status": "running",
                "Running": true,
                "Paused": false,
                "Restarting": false,
                "OOMKilled": false,
                "Dead": false,
                "Pid": 16146,
                "ExitCode": 0,
                "Error": "",
                "StartedAt": "2018-10-17T00:44:10.767427825Z",
                "FinishedAt": "0001-01-01T00:00:00Z"
            },
            "Image": "sha256:46dc2646980611d74bd95d24fbee78f6b09a56d090660adf0bd0bc452228f82d",
    ...
            "Name": "/gettingstarted",
            "RestartCount": 0,
            "Driver": "aufs",
            "Platform": "linux",
    ...
                "CpuShares": 0,
                "Memory": 0,
                "NanoCpus": 0,
                "CgroupParent": "",
                "BlkioWeight": 0,
                "BlkioWeightDevice": [],
                "BlkioDeviceReadBps": null,
                "BlkioDeviceWriteBps": null,
                "BlkioDeviceReadIOps": null,
                "BlkioDeviceWriteIOps": null,
                "CpuPeriod": 0,
                "CpuQuota": 0,
                "CpuRealtimePeriod": 0,
                "CpuRealtimeRuntime": 0,
                "CpusetCpus": "",
    ...
            "Config": {
                "Hostname": "c3f799ed9a74",
                "Domainname": "",
    ...
                "Networks": {
                    "bridge": {
                        "IPAMConfig": null,
                        "Links": null,
                        "Aliases": null,
                        "NetworkID": "2b6ace2bb890dcd59370798285d3c917f40e3f23330298f8a11dff1fbe27b627",
                        "EndpointID": "12c5e6c571af574e88c796e9804bd571523fc046aaaaa2d39a842580b391e745",
                        "Gateway": "172.17.0.1",
                        "IPAddress": "172.17.0.2",
    ...

All the lines showing "..." truncate information that is not needed at this stage. From the text above, you can extract some information about the container; other parts are meant for different components of the Docker service. Remember, the Docker engine can run in standalone container mode as well as in Swarm mode, and most of the settings in this output are for the Swarm mode, in which Docker manages machines, services, or stacks. However, we are interested in the part that gives us the IPAddress value. That is the IP address of the container hosting our Node.js app. Recall that our port was 5000; combining the two and browsing to http://172.17.0.2:5000, we get -

Figure 1: Homepage of the container app running in Firefox.
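If you prefer staying in the terminal, you can pull just the address out of the inspect output with a Go template and hit the app with curl instead; a small sketch (the address matches the output above):

    # Extract only the IP address of the container on the default bridge network
    # docker inspect --format '{{ .NetworkSettings.IPAddress }}' gettingstarted
    172.17.0.2

    # Request the home page directly from the host
    # curl http://172.17.0.2:5000/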

And that is how we can access a service running under Docker. It is the same page that you might be tired of looking at by now, but yes, this is the service being exposed from the Docker engine. And if we explore the logs for the container, the following is what we get,

    # docker logs gettingstarted

    [email protected] start /
    > node ./src/app.js

    Cannot start Application Insights; either pass the value to this app
    or use the App Insights default environment variable.
    [Event] server start: Server listening on localhost:5000..
    Server listening on localhost:5000.
    [Request] GET: /.
    [Request] GET: /about.
    [Request] GET: /.
    [Request] GET: /contact.
    [Request] GET: /.
    [Request] GET: /.

I just surfed the website for a while from the container, and as you can see above, these are the logs for the container. There is no difference in how the application executes. Similarly, there is no difference in the runtime, or in how the CPU, RAM, or network resources are utilized by the process. The difference comes when we look at the process from a different standpoint: not as a solo player, but as a worker in a swarm.
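You can even watch those resources live from the engine itself, which makes the point nicely; for example:

    # A one-shot snapshot of CPU, memory, network and block I/O usage for the container
    # docker stats --no-stream gettingstarted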

What is Different, You Say?

Recall the problems we mentioned earlier: 1. fault-tolerance, 2. scalability. Both of these problems are addressed here. Docker plays a vital role by acting as a health manager for the services, as well as a good, production-grade load balancer. This gives us the opportunity to use Docker not just as a package manager for our application, but also as a load balancer for it. Although I must admit that other load balancers and orchestrators, such as Kubernetes, are much better at handling scalability, Docker is still a good choice for quick deployment of apps and services.

Spoiler alert: although this is the main topic of the next episodes, the Docker Compose concept is useful when it comes to the scalability and fault-tolerance of containers. A very simple, minimal example of a Docker Compose file can be found in the same GitHub repository,

    version: '3'
    services:
      nodejsapp:
        build: .
        image: afzaalahmadzeeshan/express-nodejs:latest
        deploy:
          replicas: 3
          resources:
            limits:
              cpus: ".1"
              memory: 100M
          restart_policy:
            condition: on-failure
        ports:
          - "12345:80"
        networks:
          - appnetwork

    networks:
      appnetwork:

As is visible from the settings of this deployment script, we have the following configurations.

    restart_policy:
      condition: on-failure

And secondly, this one:

    replicas: 3

These configurations tell the Docker engine to create and, as needed, scale the service to 3 instances, and also to make sure that the engine recreates the container if it runs into an issue. I won't go into the depth of this script in this article; that is for the next one, which talks about services. There we will explore the benefits of using Docker Compose over the ordinary Docker CLI, and we will also take a look at Docker Stack and Docker Services. There is a very big scalability problem with Docker, but... the next article is for that. :-)
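As a tiny preview of that next episode, the deploy: section (replicas, resources, restart_policy) is honored when the file is deployed as a swarm-mode stack, so a typical run would look roughly like this (the stack name dockerops is just an example I picked):

    # docker swarm init
    # docker stack deploy -c docker-compose.yml dockerops
    # docker stack services dockerops   # should eventually report 3/3 replicas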

Now, we will dedicate future episodes to some specific areas of Docker development and management.

  • DockerOps: Services and their scalability
  • DockerOps: Storage and Networking
  • DockerOps: Orchestration and extensibility
  • DockerOps: Best Practices for Containerization

I hope you will join me on this tour as I explore the Docker engine and showcase some best practices to employ while developing your apps targeting the Docker engine for containerization.

Moving Ahead...

In this article, we merely explored what Docker allows us to do. We haven't even scratched the surface of Docker's power and features; we will start to dig beneath that surface in the next article. The main benefit of using Docker is that we can have our own apps and processes run in isolated environments, while keeping the ability to control how they grow and how they scale.

You can now remove the resources that you created, to free up some hardware resources on your machine. Or you can leave them as they are, although we will still follow the recreation process in later articles. To remove the resources, run:

    # docker stop gettingstarted && docker rm gettingstarted

This will stop the container and then remove it, so that you can create it once again if you wish to follow along with the next article. That is all you need before heading onwards to the next episode.
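If you also want to reclaim the disk space taken by the image we built, you can remove it as well; an optional extra step:

    # docker rmi afzaalahmadzeeshan/dockerops:gettingstarted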

