Docker Networking - An Overview

Docker has been in the limelight since its earliest days, and dockerizing almost everything has become the talk of the day. Riding this hype, it keeps getting more attention from both developers and enterprises. As the IT world has taken the virtualization turn, the networking paradigm has also shifted from configuring physical routers, switches, and LAN/WAN to the networking components found in virtualized platforms, i.e., VMs, the cloud, and others.

Now that we’re talking about an enchanted world composed of containers, one needs strong networking skills to configure a container architecture properly. If you want your deployed containers to scale according to the requirements and likes of your microservices, then you need to get the networking fit just right. This is where Docker networking lays its foot on the ground: balancing application requirements against the networking environment. And that’s what I’m here for; to give you an overview of Docker networking.

We’re going to start small, talk a little bit about the basics of Docker networking, and then step up to getting containers to talk to each other using the more recent and advanced options that Docker networking provides. Please note that this article is not meant for absolute beginners with Docker; rather, it assumes you have basic knowledge of Docker and containers. I’m not going to talk about what Docker or containers are; my focus is to make the Docker learning curve smooth by giving you an overview of networking. So, don’t mind me taking some sharp turns if I may.

Some Background

Containers bring a whole new ideology to how networking works across your hosts, as each container includes everything needed to run the application: code, runtime, system tools, system libraries, settings, and everything else.


This should make it clear that a container isolates the application from the rest of the system to ensure that its environment stays constant. That’s what the whole Docker craze is all about (not to forget here that Docker is the most popular containerization tool), right? Separating the application from your infrastructure to ensure quick development, shipping, and running is all well and good, but there can be hundreds or even thousands of containers per host. Moreover, our microservices can be distributed across multiple hosts.

So, how do we get the containers to talk to the external world? After all, we need some way to use the services they provide. Also, how do we get containers to talk to the host machine or to other containers? For this reason, we need some sort of connection between the containers, and that’s exactly where the bells ring for Docker networking.

To be brutally honest, Docker networking wasn’t something to brag about in the early days. It was complex and wasn’t easy to scale either. It wasn’t until 2015, when Docker laid its hands on SocketPlane, that Docker networking started to shine. Since then, many interesting contributions have been made by the developer community, including Pipework, Weave, Clocker, and Kubernetes. Docker Inc. also played its card by establishing a project to standardize networking, known as libnetwork, which implements the Container Network Model (CNM). That same Container Network Model is our focus for this article.

Container Network Model

The Container Network Model can be thought of as the backbone of Docker networking. It aims to keep networking as a library separated from the container runtime, with the actual networking implemented as pluggable drivers (bridge, overlay, weave, etc.).


That's the whole philosophy behind the CNM. You can think of it as an abstraction layer that hides the complexity of common networking while supporting multiple network drivers. The model has three main components -

  • Sandbox - contains the configuration of a container's network stack.
  • Endpoint - joins a Sandbox to a Network
  • Network - a group of Endpoints that are able to communicate with each other directly

Here’s a pictorial representation of the model.


The project is hosted on GitHub. If you’re interested in knowing more about the CNM, I would encourage you to go ahead and give it a thorough read.
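To make these three components a little more concrete, here is how they surface in everyday Docker commands. This is only a sketch; the network and container names (demo_net, demo_c1) are made-up placeholders.

```shell
# Create a Network (the CNM "Network" component)
docker network create demo_net

# Starting a container gives it a Sandbox (its private network stack)
# plus an Endpoint that joins that Sandbox to demo_net
docker run -dt --name demo_c1 --network demo_net alpine

# The Endpoint shows up under "Containers" in the network's inspect
# output, with an EndpointID, MAC address, and IP address
docker network inspect demo_net

# Clean up
docker rm -f demo_c1 && docker network rm demo_net
```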

I heard about CNI as well...?

Well, yes! The Container Network Interface (CNI) is another standard for container networking. It was started by CoreOS and is used by Cloud Foundry, Kubernetes, and others, but it is beyond the scope of this article; that’s why I didn’t touch that ground. I might talk about it in an upcoming article, but my apologies for now. Feel free to search and learn on your own.


libnetwork implements the same CNM we just talked about. Docker was introduced under the philosophy of providing a great user experience and seamless application portability across infrastructure, and networking follows the same philosophy: portability and extensibility were the two main factors behind the libnetwork project. The long-term goal of this project, as officially stated, is to follow the Docker and Linux philosophy of delivering small, highly modular, and composable tools that work well independently, with the aim of satisfying that composable need for networking in containers. libnetwork is not just a driver interface; there is much more to it. Some of its main features are,

  • built-in IP address management
  • multi-host networking
  • service discovery and load balancing
  • plugins to extend the ecosystems
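The built-in IP address management, for instance, shows up directly in the CLI: you can hand libnetwork a subnet and gateway of your choosing when creating a network. The subnet and the network name below (10.10.0.0/24, ipam_demo) are made-up example values.

```shell
# Ask the default IPAM driver for a specific subnet and gateway
docker network create \
  --driver bridge \
  --subnet 10.10.0.0/24 \
  --gateway 10.10.0.1 \
  ipam_demo

# Containers attached to ipam_demo will get addresses from 10.10.0.0/24
docker network inspect ipam_demo

# Clean up
docker network rm ipam_demo
```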

Things should make sense by now. That said, I feel like that’s enough talking; it’s time to get our hands dirty.

Some fun with networking modes

I’m assuming that you already have Docker installed on your machine. If not, please follow the documentation here to get Docker. I’m working on Linux (Ubuntu 18.04 LTS), so everything I’m going to show you will work fine there, and more interestingly, since it's Docker we are talking about, things shouldn’t be very different on Mac, Windows, or any other platform you might be working on. You can verify the installation by simply checking the version. Open your terminal window and execute the following command.

$ docker version

And you should get output like the following,

root@mehreen-Inspiron-3542:/home/mehreen# docker version
Client:
 Version:       18.06.1-ce
 API version:   1.38
 Go version:    go1.10.3
 Git commit:    e68fc7a
 Built:         Tue Aug 21 17:24:51 2018
 OS/Arch:       linux/amd64
 Experimental:  false

Server:
 Engine:
  Version:      18.06.1-ce
  API version:  1.38 (minimum version 1.12)
  Go version:   go1.10.3
  Git commit:   e68fc7a
  Built:        Tue Aug 21 17:23:15 2018
  OS/Arch:      linux/amd64
  Experimental: false

Now, we are good to rock and roll. You can check the available commands for networks by using the following command.

$ docker network

The above command results in the following output.

root@mehreen-Inspiron-3542:/home/mehreen# docker network

Usage:  docker network COMMAND

Manage networks

Commands:
  connect     Connect a container to a network
  create      Create a network
  disconnect  Disconnect a container from a network
  inspect     Display detailed information on one or more networks
  ls          List networks
  prune       Remove all unused networks
  rm          Remove one or more networks

Run 'docker network COMMAND --help' for more information on a command.

Let’s see what else is there for us.

Default Networking Modes

By default, some networking modes are already active. They can also be called single-host Docker networking modes. To view Docker networks, run,

$ docker network ls

The above command outputs three networks, as follows.


Pretty self-explanatory: the bridge, host, and none networks are created for us by default, using the bridge, host, and null drivers respectively. One thing to note here is that their scope is shown as local, meaning these networks only work on this host. But what is so different about them? Let’s see.

Bridge networking mode

The Docker daemon creates a virtual Ethernet bridge named docker0 that forwards packets between all interfaces attached to it. Let's inspect this network a little more by using the inspect command and specifying the name or ID of the network.

$ docker network inspect bridge

And it outputs all the specifications and configuration of the network.

root@mehreen-Inspiron-3542:/home/mehreen# docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "f47f4e8e34ebe75035115b88e301ac9548eb99e429cb8d9d9b387dec07a2db5f",
        "Created": "2018-10-13T14:34:39.071898384+05:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "",
                    "Gateway": ""
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

As you can see above, the docker0 bridge is represented by the network named bridge. A Subnet and Gateway were also created for it automatically. Containers are attached to this network automatically by a simple docker run, and if there were containers already attached to the network, the inspect command would have shown them under Containers. Last but not least, inter-container communication is enabled by default.
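If you want to see this auto-attachment for yourself, the sketch below starts a container without naming any network and then checks the default bridge. The container name web_test is a made-up example.

```shell
# No --network flag, so the container joins the default bridge network
docker run -dt --name web_test alpine

# The container should now appear under "Containers" in the output
docker network inspect bridge

# Clean up
docker rm -f web_test
```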

Host networking mode

Let’s inspect this networking mode and see what comes to the screen.

root@mehreen-Inspiron-3542:/home/mehreen# docker network inspect host
[
    {
        "Name": "host",
        "Id": "fbfb142290eeaf9f696467932b0f5d4e350dd3fd5fba22ad8dd495fde42bd9ea",
        "Created": "2018-10-13T14:11:27.536955704+05:00",
        "Scope": "local",
        "Driver": "host",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": []
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]

This mode enables a container to share the networking namespace of the host, which means it is directly exposed to the outside world. There is no automatic port assignment, and port mapping is not needed: a service listening on a port inside the container is reachable on that same port of the host, because the network configuration inside the container is identical to the one outside it.
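A quick way to see this in action is the sketch below; it assumes the standard nginx image is available, and the container name host_demo is a made-up example.

```shell
# No -p/--publish needed: nginx binds straight to the host's port 80
docker run -dt --name host_demo --network host nginx

# The service is reachable on the host itself, with no port mapping
curl http://localhost:80/

# Clean up
docker rm -f host_demo
```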

None networking mode

The following is the result of inspecting this network.

root@mehreen-Inspiron-3542:/home/mehreen# docker network inspect none
[
    {
        "Name": "none",
        "Id": "75201eb0dee7bdac624d20c4aab536b73f49c5a6b9230a97d3f5f5424622e4c4",
        "Created": "2018-10-13T14:11:27.390186279+05:00",
        "Scope": "local",
        "Driver": "null",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": []
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]

Wait, I don’t see much difference here from the last one we inspected. Does that mean both are the same? No, not at all. The none networking mode gives a container its own network stack that has no external network interface; it only has a local loopback interface.
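You can verify that only the loopback interface exists with a one-off container. This assumes the alpine image, whose busybox provides the ip tool.

```shell
# List the interfaces inside a container on the none network;
# expect to see only "lo" and no eth0
docker run --rm --network none alpine ip addr
```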

But where are the talking containers?

Well, yes! We saw the networks and all, but we haven’t added any containers to them yet, nor is there any apparent communication between them. Now it is time to add some sweetness to our lives. Let’s create our very own network and see if we can get our containers to talk to each other.

We will create a single-host bridge network using the bridge driver. Being simple to understand, easy to use, and easy to troubleshoot makes it a perfect choice for beginners.

Create the bridge network by executing the following command.

 $ docker network create -d bridge testbridge

This creates our network, named testbridge, using the bridge driver. If you get a long string in response to the above command, the execution was successful. Let’s quickly check the network list to see if our network appears there.

Yes, we did it. Now if you inspect this network, it looks just like the bridge network we inspected before. Let’s attach some containers to our network.

$ docker run -dt --name cont_test1 --network testbridge alpine

We created a container named cont_test1 on the testbridge network that we created earlier, using the alpine image. (There’s no specific reason for using the alpine image except that it’s lightweight. You can use any other image, like Ubuntu; the command will work fine.) The response should again be a long string. Let’s create another container just like this one.

 $ docker run -dt --name cont_test2 --network testbridge alpine

Now we have two containers on the same bridge network. Do you feel like inspecting the network? I did, and here’s the relevant part of what I found.

...
"Containers": {
    "88ae819d1549527e36b62f50f563c124aa3bc23ae141964201f59601203848f9": {
        "Name": "cont_test2",
        "EndpointID": "1bd9a68af20b3abf44f94dbd98a2c846bbeb02f016359209a373bf0c54501d69",
        "MacAddress": "02:42:ac:13:00:05",
        "IPv4Address": "",
        "IPv6Address": ""
    },
    "a1a4f170a616a8c1fa3cac5f746f26470c778f807c30afe4a8e7d44ee702d7ca": {
        "Name": "cont_test1",
        "EndpointID": "7bb7dbc24b89291b41b31e7d5ffbf0e5c46358122744ad16043f99593da1d41e",
        "MacAddress": "02:42:ac:13:00:04",
        "IPv4Address": "",
        "IPv6Address": ""
    }
},
...

Both of our containers are now attached to the network. But can they talk? Let’s enter one of our containers and try to ping the other to see if it works.

$ docker exec -it cont_test1 sh

We are now inside our first container. Let’s ping the other one by specifying its name or IP.

root@mehreen-Inspiron-3542:/home/mehreen# docker exec -it cont_test1 sh
/ # ping cont_test2


Boom! Our containers are talking to each other. But wait, how about we try pinging Google?

It worked.

What’s Next?

Our containers can talk to each other as well as to the outside world, but this is not all. As I mentioned several times, this has all been single-host networking. If we try talking to a container on another host, it’s not going to work. For that, multi-host networking is required, which also brings in the swarm concept. That seems like a lot of learning on the way; I might cover it in a future article. Till then, feel free to explore it as you like.
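If you’d like a head start on the multi-host side, the rough shape of it looks like this sketch. It assumes swarm mode can be initialized on your machine, and the names (ov_net, cont_multi) are made-up examples.

```shell
# Multi-host networking needs swarm mode initialized first
docker swarm init

# The overlay driver spans hosts; --attachable lets plain
# (non-service) containers join the network
docker network create -d overlay --attachable ov_net

# Containers on different swarm nodes attached to ov_net can reach
# each other by name, just like on our single-host testbridge
docker run -dt --name cont_multi --network ov_net alpine
```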
