[Docker Network Chapter 2] Explanation of Docker Networking


Overview

Docker can be used in a variety of scenarios: standalone, with Docker Compose, deployed on a single host, or with containers that connect Docker engines across multiple hosts. Containers can run on the default network, the host network, or more advanced network types such as overlays, depending on the use case and the technology involved.

In this article, you'll learn about the different types of container networks and how container networking works. We'll walk through each network type and, finally, look at how to use plugins to extend Docker networking. This article is the second chapter of the series; see the links to Chapter 1 and Chapter 3 below. There are also related resources on Kubernetes Networking and Monitoring Docker Containers with cAdvisor, so check those out as well if you are interested.

[Docker Network Chapter 1] [Docker Network Chapter 3]

Standalone Docker network

Default bridge network

On a fresh Docker installation, you will see that a default bridge network is already running.

Type docker network ls and you should see something like this:

NETWORK ID          NAME                DRIVER              SCOPE
5beee851de42        bridge              bridge              local

If you run the ifconfig command, you will also notice a corresponding network interface called "docker0".

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:90:68:1f:7f  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Every Docker installation has this network. When you run a container such as Nginx, it connects to the bridge network by default.

docker run -dit --name nginx nginx:latest

You can use the "inspect" command to see which containers are running in your network.

docker network inspect bridge


---
        "Containers": {
            "dfdbc18945190c832c3e0aaa7013915d77022851e69965c134045bb3a37168c4": {
                "Name": "nginx",
                "EndpointID": "33a598ffd6d4df792c58a6b6fdc34cd162c9dd3a3f1e58add29f30ad7f1dfdac",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },

Docker uses a software-based bridge network. It allows containers connected to the same bridge network to communicate with each other, while isolating them from containers on other bridge networks.

Let's see how containers running on the same bridge network can connect to each other. Let's create two containers for testing purposes.

docker run -dit --name busybox1 busybox
docker run -dit --name busybox2 busybox

Check each container's IP address:

docker inspect busybox1 | jq -r '.[0].NetworkSettings.IPAddress'
docker inspect busybox2 | jq -r '.[0].NetworkSettings.IPAddress'


---
172.17.0.3
172.17.0.4

Let's ping one container from the other using one of these IP addresses. For example, use IP 172.17.0.3 to ping the container named "busybox1" from "busybox2".

docker exec -it busybox2 ping 172.17.0.3


---
PING 172.17.0.3 (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.245 ms

So containers on the same bridge can reach each other by IP address. What if you want to use the container's name instead of its IP?

docker exec -it busybox2 ping busybox1
---
ping: bad address 'busybox1'

As you can see, containers running on the default bridge network can reach each other by IP address, but the default bridge network does not provide automatic service discovery (name resolution).

User-defined bridge network

You can use the Docker CLI to create other networks. You can use the following to create a second bridge network.

docker network create my_bridge --driver bridge

Now connect "busybox1" and "busybox2" to this new network.

docker network connect my_bridge busybox1
docker network connect my_bridge busybox2

Retry pinging "busybox1" using the name.

docker exec -it busybox2 ping busybox1
---
PING busybox1 (172.20.0.2): 56 data bytes
64 bytes from 172.20.0.2: seq=0 ttl=64 time=0.113 ms

We can conclude that only user-defined bridge networks support automatic service discovery. If you need service discovery between containers, create a user-defined bridge instead of using the default bridge.
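
You can also attach a container to a user-defined network at creation time with the --network flag, instead of connecting it afterwards. A minimal sketch (the network and container names here are just examples):

docker network create --driver bridge my_bridge2
docker run -dit --name busybox3 --network my_bridge2 busybox

Containers started this way can immediately resolve other containers on my_bridge2 by name.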

Host network

A container running on the host network shares the host's network configuration.

Taking the Nginx image as an example, you can see that port 80 is exposed in its Dockerfile.

EXPOSE 80

When running the container, you typically publish port 80 on a different host port (for example, 8080).

docker run -dit -p 8080:80 nginx:latest

The container can now be accessed on port 8080.

curl -I 0.0.0.0:8080


---
HTTP/1.1 200 OK
Server: nginx/1.17.5
Date: Wed, 20 Nov 2019 22:30:31 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 22 Oct 2019 14:30:00 GMT
Connection: keep-alive
ETag: "5daf1268-264"

If you run the same container on the host network, published ports are ignored, because the container uses the host's port 80 directly. When you run:

docker run -dit --name nginx_host --network host -p 8080:80 nginx:latest

Docker will display a warning:

WARNING: Published ports are discarded when using host network mode

Let's delete this container and rerun it without exposing the port.

docker rm -f nginx_host
docker run -dit --name nginx_host --network host nginx:latest

You can now run "curl" on port 80 of the host machine to see the web server respond.

curl -I 0.0.0.0:80



---
HTTP/1.1 200 OK
Server: nginx/1.17.5
Date: Wed, 20 Nov 2019 22:32:27 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 22 Oct 2019 14:30:00 GMT
Connection: keep-alive
ETag: "5daf1268-264"
Accept-Ranges: bytes

Because there is no Network Address Translation (NAT), running containers on the host network can improve performance. The trade-offs are that the container's network is not isolated from the host, and the container does not get its own IP address.
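
As a quick check, you can confirm that a host-network container has no IP address of its own (the Go template below is a standard docker inspect format string):

docker inspect nginx_host -f '{{.NetworkSettings.IPAddress}}'

This prints an empty string, because the container simply reuses the host's network stack.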

Macvlan network

If you are developing an application that is expected to be connected directly to the underlying physical network, for example one that monitors traffic, you can use the macvlan network driver. This driver assigns a MAC address to each container's virtual network interface.

Example:

docker network create -d macvlan --subnet=150.50.50.0/24 --gateway=150.50.50.1 -o parent=eth0 pub_net

You must specify a parent when creating a macvlan network. This is the host interface through which traffic is physically routed.
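
As a sketch, you could then start a container on this macvlan network and, because the subnet was specified, optionally pin a static IP from it (the container name and IP below are examples):

docker run -dit --name macvlan_test --network pub_net --ip 150.50.50.10 busybox

The container's virtual interface gets its own MAC address and appears as a separate device on the physical network.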

None

You may need to isolate a container from all incoming and outgoing traffic. In that case, you can use the none network type, which gives the container no external network interface.

The only interface that the container has is the localhost loopback interface (127.0.0.1).
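
For example, a container started with --network none has only the loopback interface (busybox is used here simply because it ships with ifconfig):

docker run -dit --name isolated --network none busybox
docker exec isolated ifconfig

Only the lo interface should appear in the output.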

Distributed network

Overlay network

A container platform spans multiple hosts, each of which may be running several containers, and these containers may need to communicate with each other. This is where overlay networks are useful.

Overlay networks are distributed networks created between multiple Docker daemons on different hosts. All containers connected to this network can communicate.
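
Standalone containers (outside of Swarm services) can also join an overlay network if it is created with the --attachable flag. A minimal sketch, assuming a Swarm has already been initialized (the network name is an example):

docker network create -d overlay --attachable my_overlay
docker run -dit --name web --network my_overlay nginx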

Ingress network

For example, Docker Swarm uses overlay networks to handle traffic between Swarm services.

To test this, let's create three Docker machines (one manager and two workers).

docker-machine create manager
docker-machine create machine1
docker-machine create machine2

After pointing a separate shell at each of these machines (using eval $(docker-machine env <machine_name>)), initialize the Swarm on the manager with the following command:

docker swarm init --advertise-addr <IP_address>

Don't forget to run the join command on both workers.

docker swarm join --token xxxx <IP_address>:2377

If you use docker network ls to list the networks on each host, you will notice an overlay network named "ingress".

o5dnttidp8yq   ingress      overlay      swarm

In the manager, create a new service with three replicas.

docker service create --name nginx --replicas 3 --publish published=8080,target=80 nginx

Use docker service ps nginx to see which Swarm nodes the service tasks are deployed on.

ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE        
fmvfo2nschcq        nginx.1             nginx:latest        machine1            Running             Running 3 minutes ago
pz1kob42tqoe        nginx.2             nginx:latest        manager             Running             Running 3 minutes ago
xhhnq68sm65g        nginx.3             nginx:latest        machine2            Running             Running 3 minutes ago

In this case, we are running a container on each node.
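
Because the published port goes through the ingress network's routing mesh, you can reach the service on port 8080 of any node, not only the nodes where a task is running. For example (docker-machine ip prints a machine's IP address):

curl -I $(docker-machine ip machine1):8080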

You can use docker network inspect ingress to inspect the ingress network and see the list of peers (Swarm nodes) connected to it.

 "Peers": [
            {
                "Name": "3a3c4007c923",
                "IP": "192.168.99.102"
            },
            {
                "Name": "9827ad03b358",
                "IP": "192.168.99.100"
            },
            {
                "Name": "60ae1df1c8b2",
                "IP": "192.168.99.101"
            }
        ]

The ingress network is a special kind of overlay network that Docker creates by default.

If you create a service without connecting it to a user-defined overlay network, it is connected to the ingress network by default. The Nginx service created above is an example: it was not attached to any user-defined overlay network, so its published port is handled by the ingress network, as you can see below.

docker service inspect nginx | jq -r '.[0].Endpoint.Ports[0]'


---
{
  "Protocol": "tcp",
  "TargetPort": 80,
  "PublishedPort": 8080,
  "PublishMode": "ingress"
}
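
If you want service-to-service traffic to go over a user-defined overlay network rather than only the ingress network, pass --network when creating the service. A sketch (the network and service names are examples):

docker network create -d overlay my_overlay
docker service create --name nginx_overlay --network my_overlay --replicas 3 nginx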

Docker network plugin

Other use cases may require a different type of network, or you may want to use another technology to manage overlay networks and VXLAN tunnels.

Docker networking is extensible, and you can use plugins to extend the functionality provided by default.

You can use existing network plugins or develop your own. Docker Hub also hosts several verified network plugins.


Each plugin has different use cases and installation instructions. Weave Net, for example, is a virtual Docker network that connects containers across a cluster of hosts and enables auto-discovery.

To install it, you can follow the official instructions on Docker Hub.

docker plugin install --grant-all-permissions store/weaveworks/net-plugin:2.5.2

You can create a Weave Net network and make it attachable using:

docker network create --driver=store/weaveworks/net-plugin:2.5.2 --attachable my_custom_network

Then create a container using the same network.

docker run -dit --rm --network=my_custom_network -p 8080:80 nginx

This series has three chapters. If you missed Chapter 1, check here (https://qiita.com/MetricFire/items/617ecce36f4aca64ddb9). If you're interested in monitoring Docker containers, learn how in the article Monitoring Docker with cAdvisor (https://qiita.com/TomoEndo/items/4aa2d9889c49148a3d7f). Also see the blog post on Monitoring Kubernetes with Prometheus (https://qiita.com/MetricFire/items/7eec4addca4a40a4d2d2).

Series

[Docker Network Chapter 1] [Docker Network Chapter 3]
