Try running MPLS-VPN with FRRouting on Docker

Introduction

Continuing from "Try running OSPF with FRRouting on Docker", I am trying Docker + FRRouting out of a desire to study and verify networks easily, with as few resources as possible, on a personal laptop. This time we will do MPLS-VPN.

Postscript (2021/1/13): A docker-compose version has also been made.

Referenced page

MPLS itself is covered in detail on this page, so the verification topology and config design here are based on it.

For building the MPLS environment with FRRouting, I also referred to this page.

Verification environment

Start from a state where Docker is already installed.

Verification configuration

(Figure: verification topology diagram)

The topology and addressing follow the referenced MPLS-VPN verification configuration (using static routes between PE and CE). The differences are that each network uses .2 instead of .1 (because the Docker network bridge takes .1), and that on PE1/PE2 the core side is on eth0 while the user sides are on eth1 and eth2 in order.

The blue labels are the network names used in the docker network (bridge) implementation. When attaching multiple networks to a container, if attributes such as gateway and internal are the same, Docker appears to assign them to eth0, eth1, eth2, ... inside the container in lexicographic order of the network name[^1][^2]. Since I wanted the PE routers to have the core side on eth0 and the user sides on eth1, eth2, ... in order, I adopted the naming convention above.
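
Once the containers are created in step 2 below, one way to double-check which Docker network ended up on which ethX is to compare Docker's view with the container's view and match up the MAC addresses. This is just a sanity check, using the container and network names from this article:

# docker inspect -f '{{json .NetworkSettings.Networks}}' PE1
# docker exec PE1 ip -o link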

Overall flow

The flow of the entire procedure and the devices to be set are as follows.

Order  Contents                               P router  PE router  CE router
1      Host OS settings                       -         -          -
2      Docker network and container creation  ✓         ✓          ✓
3      FRR daemon settings                    ✓         ✓          -
4      OSPF configuration                     ✓         ✓          -
5      LDP settings                           ✓         ✓          -
6      MP-BGP configuration                   -         ✓          -
7      VRF settings                           -         ✓          -
8      CE router configuration                -         -          ✓

1. Host OS settings

First, set the host OS (Ubuntu VM).

Load the MPLS kernel modules according to the official docs. Edit /etc/modules-load.d/modules.conf and add mpls_router and mpls_iptunnel.

/etc/modules-load.d/modules.conf


# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
mpls_router
mpls_iptunnel

Then try loading the modules:

# modprobe mpls-router mpls-iptunnel
modprobe: FATAL: Module mpls-router not found in directory /lib/modules/4.15.0-124-generic

The module was not found, so install the extra modules package according to the official instructions.

# apt install linux-modules-extra-`uname -r`

Then load the kernel modules again.

# modprobe mpls-router mpls-iptunnel
# lsmod | grep mpls
mpls_router            28672  0
ip_tunnel              24576  1 mpls_router

They loaded successfully. If you create a container as a test and run lsmod inside it, the two modules above are visible there as well.

# docker run -dit --name test alpine
# docker exec -it test lsmod | grep mpls
mpls_router            28672  0
ip_tunnel              24576  1 mpls_router

The kernel parameters that enable MPLS forwarding can apparently be changed from inside each container, so that will be done later.

2. Docker network and container creation

Create dedicated networks using docker network (bridge) and connect the containers to them.

However, there is one problem here. Creating a network with docker network is easy, but when a container is connected, an IP address is assigned automatically (or manually via the --ip option). An address assigned to the container's kernel this way cannot be changed or deleted from FRRouting. In addition, several routes, including the default route, are installed automatically and cannot be changed or deleted from FRRouting either (and kernel routes have the highest priority). For the addresses alone, you could assign them manually at container creation and simply never touch them from FRRouting, but I want to configure as much as possible from vtysh, and the automatically installed routes can get in the way, so both need to be removed with the ip addr del and ip route del commands.
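
For example, cleaning up a single container by hand would look something like the following (the address and interface here are illustrative):

/ # ip addr del 10.1.1.2/24 dev eth0
/ # ip route del default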

Deleting them one by one by hand, as in the example above, is fine, but since there are seven containers this time, I wrote a Dockerfile that builds a pre-configured FRR image with this cleanup built in, and the environment below is constructed from that custom image. I also installed tcpdump. (For tcpdump, you could instead capture on the host-side veth ...

Creating a pre-configured FRRouting image

Create a Dockerfile in an appropriate folder.

Dockerfile


FROM frrouting/frr:v7.5.0
RUN apk update && apk add tcpdump
COPY docker-start /usr/lib/frr/docker-start

I have installed tcpdump on the original FRRouting image and replaced the helper script with my own.

Create a helper script with the name docker-start in the same directory.

docker-start


#!/bin/sh

set -e

# Delete all IP addresses on ethX
devlist=`ip -o addr | cut -d\  -f 2 | sed '1d'`
for dev in $devlist
do
  IP=`ip -o addr show $dev | cut -d\  -f 7`
  ip addr del $IP dev $dev
done

# Delete all routes
routelist=`ip route | cut -d\  -f 1`
for route in $routelist
do
  ip route del $route
done


##
# For volume mounts...
##
chown -R frr:frr /etc/frr || true
/usr/lib/frr/frrinit.sh start

# Sleep forever
exec tail -f /dev/null

This is the script originally used in the FRRouting image, with processing added to delete all IP addresses and routes.

Give the script execute permission and run docker build.

# chmod +x docker-start
# docker build -t frr .
# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
frr                 latest              42eb8fb2ebb3        4 seconds ago       128MB

From here on, we use frr instead of frrouting/frr:v7.5.0 when creating containers.

Creating a network

First, create a network.

# docker network create net1 --subnet=10.1.1.0/24
# docker network create net2 --subnet=10.1.2.0/24
# docker network create net3 --subnet=172.16.1.0/24
# docker network create net4 --subnet=172.16.2.0/24
# docker network create net5 --subnet=172.16.3.0/24
# docker network create net6 --subnet=172.16.4.0/24
# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
4f627b43f43a   bridge    bridge    local
62dced673495   host      host      local
b6c924d827f5   net1      bridge    local
ac750081ae13   net2      bridge    local
f96230418508   net3      bridge    local
9553456d0ad7   net4      bridge    local
5c39ee2ea4d7   net5      bridge    local
f2e362f4eaad   net6      bridge    local
20f96b0038a3   none      null      local

Creating a P router

It did not seem possible to create a container with a one-letter name, so I used PR as the container name and hostname.

# docker create -it --name PR --hostname PR --privileged --net net1 frr
# docker network connect net2 PR
# docker start PR

Instead of using docker run right away, create the container with docker create, attach all the networks with docker network connect, and only then start it. (Presumably this matters because the docker-start cleanup runs once at startup, so every network should be attached before the container starts.)

Creating a PE router

# docker create -it --name PE1 --hostname PE1 --privileged --net net1 frr
# docker network connect net3 PE1
# docker network connect net5 PE1
# docker start PE1

# docker create -it --name PE2 --hostname PE2 --privileged --net net2 frr
# docker network connect net4 PE2
# docker network connect net6 PE2
# docker start PE2

Creating a CE router

# docker run -dit --name CE1 --hostname CE1 --privileged --net net3 frr
# docker run -dit --name CE2 --hostname CE2 --privileged --net net4 frr
# docker run -dit --name CE3 --hostname CE3 --privileged --net net5 frr
# docker run -dit --name CE4 --hostname CE4 --privileged --net net6 frr

Operation check

# docker ps
CONTAINER ID   IMAGE     COMMAND                  CREATED          STATUS          PORTS     NAMES
01c534d3bafa   frr       "/sbin/tini -- /usr/…"   5 seconds ago    Up 3 seconds              CE4
861ce203d498   frr       "/sbin/tini -- /usr/…"   20 seconds ago   Up 19 seconds             CE3
acd7834f981b   frr       "/sbin/tini -- /usr/…"   32 seconds ago   Up 31 seconds             CE2
5ce59162b01b   frr       "/sbin/tini -- /usr/…"   56 seconds ago   Up 54 seconds             CE1
99bb821a4cb8   frr       "/sbin/tini -- /usr/…"   4 minutes ago    Up 3 minutes              PE2
824e0a5a5337   frr       "/sbin/tini -- /usr/…"   5 minutes ago    Up 5 minutes              PE1
aeaab8e9b887   frr       "/sbin/tini -- /usr/…"   6 minutes ago    Up 6 minutes              PR

Make sure all 7 containers are running.
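
Since the custom image deletes all auto-assigned addresses and routes at startup, you can also spot-check that a container really came up bare; neither command should show an inet address or any route:

# docker exec PR ip addr show eth0
# docker exec PR ip route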

3. FRR Daemon settings

Specify which daemons to start in the FRR configuration file: edit /etc/frr/daemons and set the daemons you want to start to yes.

The daemons to be started (additionally) on each router are as follows.

Router type  Daemons to start
P router     ospfd, ldpd
PE router    bgpd, ospfd, ldpd
CE router    -

This time, static routing is used between the CE and PE routers, so no additional daemon needs to be started on the CE routers. (A non-interactive way to flip these flags is sketched below.)
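
If you prefer to edit the file non-interactively, a sed one-liner like the following should work inside each container (assuming the stock daemons file has these entries set to no; the bgpd line is for the PE routers only):

/ # sed -i -e 's/^ospfd=no/ospfd=yes/' -e 's/^ldpd=no/ldpd=yes/' /etc/frr/daemons
/ # sed -i 's/^bgpd=no/bgpd=yes/' /etc/frr/daemons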

P router

/etc/frr/daemons


(snip)
ospfd=yes
(snip)
ldpd=yes
(snip)

PE router

/etc/frr/daemons


(snip)
bgpd=yes
(snip)
ospfd=yes
(snip)
ldpd=yes
(snip)

After editing the configuration file, restart the FRR process to reflect the settings.

Example of P router:

PR


/ # /usr/lib/frr/frrinit.sh restart
Stopped watchfrr
Cannot stop ldpd: pid file not found
Cannot stop ospfd: pid file not found
Stopped staticd
Stopped zebra

Started watchfrr

"Cannot stop ldpd" and "Cannot stop ospfd" are displayed, but since these daemons were not running yet, this is not a problem.

Operation check

Make sure the specified processes are up in each container. Example of P router:

PR


/ # ps
PID   USER     TIME  COMMAND
    1 root      0:00 /sbin/tini -- /usr/lib/frr/docker-start
    7 root      0:00 tail -f /dev/null
   56 root      0:00 /bin/sh
   88 root      0:00 /usr/lib/frr/watchfrr -d -F traditional zebra ospfd ldpd staticd
  106 frr       0:00 /usr/lib/frr/zebra -d -F traditional -A 127.0.0.1 -s 90000000
  111 frr       0:00 /usr/lib/frr/ospfd -d -F traditional -A 127.0.0.1
  114 frr       0:00 /usr/lib/frr/ldpd -L -u frr -g frr
  115 frr       0:00 /usr/lib/frr/ldpd -E -u frr -g frr
  116 frr       0:00 /usr/lib/frr/ldpd -d -F traditional -A 127.0.0.1
  120 frr       0:00 /usr/lib/frr/staticd -d -F traditional -A 127.0.0.1
  122 root      0:00 ps

ospfd and ldpd are up.

4. OSPF settings

First, we build the underlay network that will be the foundation of the MPLS core. OSPF is used as the IGP. Some pages create a dummy interface to serve as each router's loopback, but this article simply uses the lo interface that each container has from the start.
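
For reference, the dummy-interface approach would look roughly like this from inside a container (not used in this article, and it assumes the dummy kernel module is available on the host):

/ # ip link add dummy0 type dummy
/ # ip link set dummy0 up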

P router

PR(vtysh)


PR# conf
PR(config)# interface lo
PR(config-if)# ip address 9.9.9.9/32
PR(config-if)# exit

PR(config)# interface eth0
PR(config-if)# ip address 10.1.1.254/24
PR(config-if)# exit

PR(config)# interface eth1
PR(config-if)# ip address 10.1.2.254/24
PR(config-if)# exit

PR(config)# router ospf
PR(config-router)# network 9.9.9.9/32 area 0
PR(config-router)# network 10.1.1.0/24 area 0
PR(config-router)# network 10.1.2.0/24 area 0
PR(config-router)# end

PE router

PE1(vtysh)


PE1# conf
PE1(config)# interface lo
PE1(config-if)# ip address 1.1.1.1/32
PE1(config-if)# exit

PE1(config)# interface eth0
PE1(config-if)# ip address 10.1.1.2/24
PE1(config-if)# exit

PE1(config)# interface eth1
PE1(config-if)# ip address 172.16.1.254/24
PE1(config-if)# exit

PE1(config)# interface eth2
PE1(config-if)# ip address 172.16.3.254/24
PE1(config-if)# exit

PE1(config)# router ospf
PE1(config-router)# network 1.1.1.1/32 area 0
PE1(config-router)# network 10.1.1.0/24 area 0
PE1(config-router)# end

PE2(vtysh)


PE2# conf
PE2(config)# interface lo
PE2(config-if)# ip address 2.2.2.2/32
PE2(config-if)# exit

PE2(config)# interface eth0
PE2(config-if)# ip address 10.1.2.2/24
PE2(config-if)# exit

PE2(config)# interface eth1
PE2(config-if)# ip address 172.16.2.254/24
PE2(config-if)# exit

PE2(config)# interface eth2
PE2(config-if)# ip address 172.16.4.254/24
PE2(config-if)# exit

PE2(config)# router ospf
PE2(config-router)# network 2.2.2.2/32 area 0
PE2(config-router)# network 10.1.2.0/24 area 0
PE2(config-router)# end

Operation check

Let's check with the P router.

PR(vtysh)


PR# show ip ospf neighbor

Neighbor ID     Pri State           Dead Time Address         Interface                        RXmtL RqstL DBsmL
1.1.1.1           1 Full/Backup       36.732s 10.1.1.2        eth0:10.1.1.254                      0     0     0
2.2.2.2           1 Full/Backup       34.071s 10.1.2.2        eth1:10.1.2.254                      0     0     0

Confirm that neighbors have been established with 1.1.1.1 and 2.2.2.2.

PR(vtysh)


PR# show ip route
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
       T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
       F - PBR, f - OpenFabric,
       > - selected route, * - FIB route, q - queued, r - rejected, b - backup

O>* 1.1.1.1/32 [110/10] via 10.1.1.2, eth0, weight 1, 00:04:48
O>* 2.2.2.2/32 [110/10] via 10.1.2.2, eth1, weight 1, 00:01:19
O   9.9.9.9/32 [110/0] is directly connected, lo, weight 1, 00:09:18
C>* 9.9.9.9/32 is directly connected, lo, 00:12:31
O   10.1.1.0/24 [110/10] is directly connected, eth0, weight 1, 00:09:08
C>* 10.1.1.0/24 is directly connected, eth0, 00:12:00
O   10.1.2.0/24 [110/10] is directly connected, eth1, weight 1, 00:08:45
C>* 10.1.2.0/24 is directly connected, eth1, 00:11:39

Make sure that the routes to 1.1.1.1/32 and 2.2.2.2/32 have been learned by OSPF and are listed in the FIB (marked with *).

Ping just in case. Exit vtysh and check from the shell.

PR


/ # ping -I 9.9.9.9 1.1.1.1
PING 1.1.1.1 (1.1.1.1) from 9.9.9.9: 56 data bytes
64 bytes from 1.1.1.1: seq=0 ttl=64 time=0.093 ms
64 bytes from 1.1.1.1: seq=1 ttl=64 time=0.488 ms
64 bytes from 1.1.1.1: seq=2 ttl=64 time=0.220 ms
^C
--- 1.1.1.1 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.093/0.267/0.488 ms

/ # ping -I 9.9.9.9 2.2.2.2
PING 2.2.2.2 (2.2.2.2) from 9.9.9.9: 56 data bytes
64 bytes from 2.2.2.2: seq=0 ttl=64 time=0.140 ms
64 bytes from 2.2.2.2: seq=1 ttl=64 time=0.340 ms
64 bytes from 2.2.2.2: seq=2 ttl=64 time=0.486 ms
^C
--- 2.2.2.2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.140/0.322/0.486 ms

The pings went through.

By the way, a ping from PE1 (1.1.1.1) to PE2 (2.2.2.2) at this point is naturally forwarded as a plain IP packet. Try pinging PE1 → PE2 while running tcpdump on the P router.

PE1


# ping -I 1.1.1.1 2.2.2.2
PING 2.2.2.2 (2.2.2.2) from 1.1.1.1: 56 data bytes
64 bytes from 2.2.2.2: seq=0 ttl=63 time=0.146 ms
64 bytes from 2.2.2.2: seq=1 ttl=63 time=0.374 ms
64 bytes from 2.2.2.2: seq=2 ttl=63 time=0.373 ms
^C
--- 2.2.2.2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.146/0.297/0.374 ms

PR


/ # tcpdump -n -i any icmp
tcpdump: data link type LINUX_SLL2
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
06:51:43.879037 eth0  In  IP 1.1.1.1 > 2.2.2.2: ICMP echo request, id 44032, seq 0, length 64
06:51:43.879052 eth1  Out IP 1.1.1.1 > 2.2.2.2: ICMP echo request, id 44032, seq 0, length 64
06:51:43.879080 eth1  In  IP 2.2.2.2 > 1.1.1.1: ICMP echo reply, id 44032, seq 0, length 64
06:51:43.879084 eth0  Out IP 2.2.2.2 > 1.1.1.1: ICMP echo reply, id 44032, seq 0, length 64

5. LDP settings

Now for the MPLS settings. As stated in the official docs, you first need to change kernel parameters to enable MPLS forwarding. On each router, proceed in the order: kernel parameters, then LDP.

P router

MPLS enablement

Change kernel parameters to enable MPLS. First, check the status.

PR


/ # sysctl -a | grep mpls
sysctl: error reading key 'net.ipv6.conf.all.stable_secret': I/O error
sysctl: error reading key 'net.ipv6.conf.default.stable_secret': I/O error
sysctl: error reading key 'net.ipv6.conf.eth0.stable_secret': I/O error
sysctl: error reading key 'net.ipv6.conf.eth1.stable_secret': I/O error
sysctl: error reading key 'net.ipv6.conf.lo.stable_secret': I/O error
net.mpls.conf.eth0.input = 0
net.mpls.conf.eth1.input = 0
net.mpls.conf.lo.input = 0
net.mpls.default_ttl = 255
net.mpls.ip_ttl_propagate = 1
net.mpls.platform_labels = 0

Some IPv6-related errors appear, but they can be ignored.

On the P router, enable MPLS forwarding on both eth0 and eth1. Add the following to /etc/sysctl.conf, setting net.mpls.conf.eth0.input and net.mpls.conf.eth1.input to 1 and net.mpls.platform_labels to 100000.

/etc/sysctl.conf


# content of this file will override /etc/sysctl.d/*
net.mpls.conf.eth0.input = 1
net.mpls.conf.eth1.input = 1
net.mpls.platform_labels = 100000

Apply the settings.

/ # sysctl -p
net.mpls.conf.eth0.input = 1
net.mpls.conf.eth1.input = 1
net.mpls.platform_labels = 100000
# sysctl -a | grep mpls
sysctl: error reading key 'net.ipv6.conf.all.stable_secret': I/O error
sysctl: error reading key 'net.ipv6.conf.default.stable_secret': I/O error
sysctl: error reading key 'net.ipv6.conf.eth0.stable_secret': I/O error
sysctl: error reading key 'net.ipv6.conf.eth1.stable_secret': I/O error
sysctl: error reading key 'net.ipv6.conf.lo.stable_secret': I/O error
net.mpls.conf.eth0.input = 1
net.mpls.conf.eth1.input = 1
net.mpls.conf.lo.input = 0
net.mpls.default_ttl = 255
net.mpls.ip_ttl_propagate = 1
net.mpls.platform_labels = 100000

The values have been applied.
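
As an aside, the same settings can be applied one-off with sysctl -w instead of editing the file, though they will not survive recreating the container:

/ # sysctl -w net.mpls.conf.eth0.input=1
/ # sysctl -w net.mpls.conf.eth1.input=1
/ # sysctl -w net.mpls.platform_labels=100000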

LDP settings

Run LDP for MPLS label exchange.

PR(vtysh)


PR# conf
PR(config)# mpls ldp
PR(config-ldp)# address-family ipv4
PR(config-ldp-af)# discovery transport-address 9.9.9.9
PR(config-ldp-af)# interface eth0
PR(config-ldp-af-if)# exit
PR(config-ldp-af)# interface eth1
PR(config-ldp-af-if)# exit
PR(config-ldp-af)# end

Register only the interfaces that participate in the MPLS network (no loopback required).
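
If a neighbor does not come up, it can help to first check that LDP hellos are being exchanged on each registered interface (output omitted here):

PR# show mpls ldp discovery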

PE router

For PE1 and PE2, set the kernel parameters and LDP in the same way.

PE1

/etc/sysctl.conf


# content of this file will override /etc/sysctl.d/*
net.mpls.conf.eth0.input = 1
net.mpls.platform_labels = 100000

vtysh


PE1# conf
PE1(config)# mpls ldp
PE1(config-ldp)# address-family ipv4
PE1(config-ldp-af)# discovery transport-address 1.1.1.1
PE1(config-ldp-af)# interface eth0
PE1(config-ldp-af-if)# exit
PE1(config-ldp-af)# end

PE2

/etc/sysctl.conf


# content of this file will override /etc/sysctl.d/*
net.mpls.conf.eth0.input = 1
net.mpls.platform_labels = 100000

vtysh


PE2# conf
PE2(config)# mpls ldp
PE2(config-ldp)# address-family ipv4
PE2(config-ldp-af)# discovery transport-address 2.2.2.2
PE2(config-ldp-af)# interface eth0
PE2(config-ldp-af-if)# exit
PE2(config-ldp-af)# end

Operation check

Check with the P router.

PR(vtysh)


PR# show mpls ldp neighbor
AF   ID              State       Remote Address    Uptime
ipv4 1.1.1.1         OPERATIONAL 1.1.1.1         00:04:41
ipv4 2.2.2.2         OPERATIONAL 2.2.2.2         00:00:50

The LDP neighbor has been established.

PR(vtysh)


PR# show mpls ldp binding
AF   Destination          Nexthop         Local Label Remote Label  In Use
ipv4 1.1.1.1/32           1.1.1.1         16          imp-null         yes
ipv4 1.1.1.1/32           2.2.2.2         16          16                no
ipv4 2.2.2.2/32           1.1.1.1         17          16                no
ipv4 2.2.2.2/32           2.2.2.2         17          imp-null         yes
ipv4 9.9.9.9/32           1.1.1.1         imp-null    17                no
ipv4 9.9.9.9/32           2.2.2.2         imp-null    17                no
ipv4 10.1.1.0/24          1.1.1.1         imp-null    imp-null          no
ipv4 10.1.1.0/24          2.2.2.2         imp-null    18                no
ipv4 10.1.2.0/24          1.1.1.1         imp-null    18                no
ipv4 10.1.2.0/24          2.2.2.2         imp-null    imp-null          no
ipv4 172.16.1.0/24        1.1.1.1         -           imp-null          no
ipv4 172.16.2.0/24        2.2.2.2         -           imp-null          no
ipv4 172.16.3.0/24        1.1.1.1         -           imp-null          no
ipv4 172.16.4.0/24        2.2.2.2         -           imp-null          no

This shows the LIB (Label Information Base).

PR(vtysh)


PR# show mpls table
 Inbound Label  Type  Nexthop   Outbound Label
 -----------------------------------------------
 16             LDP   10.1.2.2  implicit-null
 17             LDP   10.1.1.2  implicit-null

This is the "LFIB table".

As before, try pinging PE1 (1.1.1.1) → PE2 (2.2.2.2).

PE1


/ # ping -c 1 -I 1.1.1.1 2.2.2.2
PING 2.2.2.2 (2.2.2.2) from 1.1.1.1: 56 data bytes
64 bytes from 2.2.2.2: seq=0 ttl=63 time=0.121 ms

--- 2.2.2.2 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.121/0.121/0.121 ms

This time, capturing packets on the P router shows MPLS-labeled packets being forwarded instead of raw IP packets.

PR


/ # tcpdump -n -i any -l | grep ICMP
tcpdump: data link type LINUX_SLL2
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
07:17:28.103630 eth0  In  MPLS (label 17, exp 0, [S], ttl 64) IP 1.1.1.1 > 2.2.2.2: ICMP echo request, id 51200, seq 0, length 64
07:17:28.103636 eth1  Out IP 1.1.1.1 > 2.2.2.2: ICMP echo request, id 51200, seq 0, length 64
07:17:28.103668 eth1  In  MPLS (label 16, exp 0, [S], ttl 64) IP 2.2.2.2 > 1.1.1.1: ICMP echo reply, id 51200, seq 0, length 64
07:17:28.103669 eth0  Out IP 2.2.2.2 > 1.1.1.1: ICMP echo reply, id 51200, seq 0, length 64

On the outbound path, a packet carrying label 17 enters eth0 of the P router, the label is popped according to the P router's LFIB (implicit-null for label 17, i.e. penultimate hop popping), and the packet leaves eth1 as a raw IP packet. On the return path, the same happens with label 16.
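
Incidentally, the reason for piping tcpdump through grep ICMP here, rather than using the capture filter icmp as in section 4, is that a BPF icmp filter does not match MPLS-encapsulated packets. If you only want to see labeled traffic, libpcap also understands an mpls filter keyword:

/ # tcpdump -n -i any mpls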

6. MP-BGP settings

Set MP-BGP on the PE router.

First come the settings for establishing the iBGP peering between the PEs, and the address-family settings for carrying VPN-IPv4 routes.

PE1(vtysh)


PE1# conf
PE1(config)# router bgp 65000
PE1(config-router)# neighbor 2.2.2.2 remote-as 65000
PE1(config-router)# neighbor 2.2.2.2 update-source 1.1.1.1
PE1(config-router)# address-family ipv4 vpn
PE1(config-router-af)# neighbor 2.2.2.2 activate
PE1(config-router-af)# exit-address-family
PE1(config-router)# end

Set PE2 in the same way.

PE2(vtysh)


PE2# conf
PE2(config)# router bgp 65000
PE2(config-router)# neighbor 1.1.1.1 remote-as 65000
PE2(config-router)# neighbor 1.1.1.1 update-source 2.2.2.2
PE2(config-router)# address-family ipv4 vpn
PE2(config-router-af)# neighbor 1.1.1.1 activate
PE2(config-router-af)# exit-address-family
PE2(config-router)# end

Operation check

PE1(vtysh)


PE1# show ip bgp summary

IPv4 Unicast Summary:
BGP router identifier 1.1.1.1, local AS number 65000 vrf-id 0
BGP table version 0
RIB entries 0, using 0 bytes of memory
Peers 1, using 14 KiB of memory

Neighbor        V         AS   MsgRcvd   MsgSent   TblVer  InQ OutQ  Up/Down State/PfxRcd   PfxSnt
2.2.2.2         4      65000        13        16        0    0    0 00:01:16            0        0

Total number of neighbors 1

IPv4 VPN Summary:
BGP router identifier 1.1.1.1, local AS number 65000 vrf-id 0
BGP table version 0
RIB entries 0, using 0 bytes of memory
Peers 1, using 14 KiB of memory

Neighbor        V         AS   MsgRcvd   MsgSent   TblVer  InQ OutQ  Up/Down State/PfxRcd   PfxSnt
2.2.2.2         4      65000        13        16        0    0    0 00:01:16            0        0

Total number of neighbors 1

Make sure that the Up/Down column does not show "never".

Let's also look at the BGP table for VPN-IPv4.

PE1(vtysh)


PE1# show bgp ipv4 vpn
No BGP prefixes displayed, 0 exist

At this point no routes have been learned yet, so there are no entries.

7. VRF settings

Create a VRF for each customer on PE1/PE2, and configure the customer routes and route redistribution.

Creating a VRF

Create a VRF to accommodate each CE router, using the Linux VRF functionality. VRFs cannot be created from FRRouting, so create them with the ip command, and then enslave the corresponding interfaces to them. The VRF for company A is named CUSTA, and the VRF for company B is CUSTB.

PE1


/ # ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
41: eth2@if42: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether 02:42:ac:10:03:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
43: eth0@if44: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether 02:42:0a:01:01:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
45: eth1@if46: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether 02:42:ac:10:01:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0

/ # ip link add CUSTA type vrf table 10
/ # ip link add CUSTB type vrf table 20
/ # ip link set CUSTA up
/ # ip link set CUSTB up
/ # ip link set eth1 master CUSTA
/ # ip link set eth2 master CUSTB

/ # ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: CUSTA: <NOARP,MASTER,UP,LOWER_UP> mtu 65536 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 12:1b:1d:12:01:7f brd ff:ff:ff:ff:ff:ff
3: CUSTB: <NOARP,MASTER,UP,LOWER_UP> mtu 65536 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether de:06:68:aa:a5:ec brd ff:ff:ff:ff:ff:ff
41: eth2@if42: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master CUSTB state UP mode DEFAULT group default
    link/ether 02:42:ac:10:03:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
43: eth0@if44: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether 02:42:0a:01:01:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
45: eth1@if46: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master CUSTA state UP mode DEFAULT group default
    link/ether 02:42:ac:10:01:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0

You can check it with vtysh.

PE1(vtysh)


PE1# show int br
Interface       Status  VRF             Addresses
---------       ------  ---             ---------
CUSTA           up      CUSTA
eth1            up      CUSTA           172.16.1.254/24

Interface       Status  VRF             Addresses
---------       ------  ---             ---------
CUSTB           up      CUSTB
eth2            up      CUSTB           172.16.3.254/24

Interface       Status  VRF             Addresses
---------       ------  ---             ---------
eth0            up      default         10.1.1.2/24
lo              up      default         1.1.1.1/32

Create it in PE2 in the same way.

PE2


/ # ip link add CUSTA type vrf table 10
/ # ip link add CUSTB type vrf table 20
/ # ip link set CUSTA up
/ # ip link set CUSTB up
/ # ip link set eth1 master CUSTA
/ # ip link set eth2 master CUSTB

/ # ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: CUSTA: <NOARP,MASTER,UP,LOWER_UP> mtu 65536 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether b6:48:e9:21:1e:6b brd ff:ff:ff:ff:ff:ff
3: CUSTB: <NOARP,MASTER,UP,LOWER_UP> mtu 65536 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 16:8f:42:1f:9b:84 brd ff:ff:ff:ff:ff:ff
47: eth0@if48: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether 02:42:0a:01:02:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
49: eth1@if50: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master CUSTA state UP mode DEFAULT group default
    link/ether 02:42:ac:10:02:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
51: eth2@if52: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master CUSTB state UP mode DEFAULT group default
    link/ether 02:42:ac:10:04:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0

PE2(vtysh)


PE2# show int br
Interface       Status  VRF             Addresses
---------       ------  ---             ---------
CUSTA           up      CUSTA
eth1            up      CUSTA           172.16.2.254/24

Interface       Status  VRF             Addresses
---------       ------  ---             ---------
CUSTB           up      CUSTB
eth2            up      CUSTB           172.16.4.254/24

Interface       Status  VRF             Addresses
---------       ------  ---             ---------
eth0            up      default         10.1.2.2/24
lo              up      default         2.2.2.2/32

Setting routes for users

This time, static routing is used between CE and PE. Configure the routes in each of VRF CUSTA and CUSTB.

PE1(vtysh)


PE1# conf
PE1(config)# ip route 192.168.1.0/24 172.16.1.2 vrf CUSTA
PE1(config)# ip route 192.168.3.0/24 172.16.3.2 vrf CUSTB
PE1(config)# exit

PE1# show ip route vrf CUSTA
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
       T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
       F - PBR, f - OpenFabric,
       > - selected route, * - FIB route, q - queued, r - rejected, b - backup

VRF CUSTA:
C>* 172.16.1.0/24 is directly connected, eth1, 04:55:40
S>* 192.168.1.0/24 [1/0] via 172.16.1.2, eth1, weight 1, 04:12:14

PE1# show ip route vrf CUSTB
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
       T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
       F - PBR, f - OpenFabric,
       > - selected route, * - FIB route, q - queued, r - rejected, b - backup

VRF CUSTB:
C>* 172.16.3.0/24 is directly connected, eth2, 04:55:37
S>* 192.168.3.0/24 [1/0] via 172.16.3.2, eth2, weight 1, 04:09:52

Set PE2 in the same way.

PE2(vtysh)


PE2# conf
PE2(config)# ip route 192.168.2.0/24 172.16.2.2 vrf CUSTA
PE2(config)# ip route 192.168.4.0/24 172.16.4.2 vrf CUSTB
PE2(config)# exit

PE2# show ip route vrf CUSTA
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
       T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
       F - PBR, f - OpenFabric,
       > - selected route, * - FIB route, q - queued, r - rejected, b - backup

VRF CUSTA:
C>* 172.16.2.0/24 is directly connected, eth1, 05:00:25
S>* 192.168.2.0/24 [1/0] via 172.16.2.2, eth1, weight 1, 04:17:02

PE2# show ip route vrf CUSTB
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
       T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
       F - PBR, f - OpenFabric,
       > - selected route, * - FIB route, q - queued, r - rejected, b - backup

VRF CUSTB:
C>* 172.16.4.0/24 is directly connected, eth2, 05:00:26
S>* 192.168.4.0/24 [1/0] via 172.16.4.2, eth2, weight 1, 04:16:34

Route redistribution settings

Set up route redistribution between each VRF's routing table and the "global routing table".

PE1(vtysh)


PE1# conf
PE1(config)# router bgp 65000 vrf CUSTA
PE1(config-router)# address-family ipv4 unicast
PE1(config-router-af)# redistribute static
PE1(config-router-af)# label vpn export auto
PE1(config-router-af)# rd vpn export 1:100
PE1(config-router-af)# rt vpn both 10:100
PE1(config-router-af)# export vpn
PE1(config-router-af)# import vpn
PE1(config-router-af)# exit-address-family
PE1(config-router)# exit

PE1(config)# router bgp 65000 vrf CUSTB
PE1(config-router)# address-family ipv4 unicast
PE1(config-router-af)# redistribute static
PE1(config-router-af)# label vpn export auto
PE1(config-router-af)# rd vpn export 2:100
PE1(config-router-af)# rt vpn both 20:100
PE1(config-router-af)# export vpn
PE1(config-router-af)# import vpn
PE1(config-router-af)# exit-address-family
PE1(config-router)# end

The configuration style seems to differ slightly from Cisco here, but my understanding of what each command does is as follows:

- label vpn export auto: automatically allocate the MPLS label (VPN label) advertised with the exported routes
- rd vpn export: set the Route Distinguisher attached to the exported routes
- rt vpn both: use the specified Route Target for both import and export
- export vpn / import vpn: enable exporting routes from this VRF into the VPN-IPv4 table, and importing in the opposite direction

Also, since CE-PE routing is static this time, redistribute static is configured.

Set PE2 in the same way.

PE2(vtysh)


PE2# conf
PE2(config)# router bgp 65000 vrf CUSTA
PE2(config-router)# address-family ipv4 unicast
PE2(config-router-af)# redistribute static
PE2(config-router-af)# label vpn export auto
PE2(config-router-af)# rd vpn export 1:100
PE2(config-router-af)# rt vpn both 10:100
PE2(config-router-af)# export vpn
PE2(config-router-af)# import vpn
PE2(config-router-af)# exit-address-family
PE2(config-router)# exit

PE2(config)# router bgp 65000 vrf CUSTB
PE2(config-router)# address-family ipv4 unicast
PE2(config-router-af)# redistribute static
PE2(config-router-af)# label vpn export auto
PE2(config-router-af)# rd vpn export 2:100
PE2(config-router-af)# rt vpn both 20:100
PE2(config-router-af)# export vpn
PE2(config-router-af)# import vpn
PE2(config-router-af)# exit-address-family
PE2(config-router)# end

Once this is configured, BGP learns the customer routes.

Operation check

Check on PE1. First, look at the BGP table (VPN-IPv4 routes).

PE1


PE1# show ip bgp ipv4 vpn
BGP table version is 6, local router ID is 1.1.1.1, vrf id 0
Default local pref 100, local AS 65000
Status codes:  s suppressed, d damped, h history, * valid, > best, = multipath,
               i internal, r RIB-failure, S Stale, R Removed
Nexthop codes: @NNN nexthop's vrf id, < announce-nh-self
Origin codes:  i - IGP, e - EGP, ? - incomplete

   Network          Next Hop            Metric LocPrf Weight Path
Route Distinguisher: 1:100
*> 192.168.1.0/24   172.16.1.2@2<            0         32768 ?
    UN=172.16.1.2 EC{10:100} label=146 type=bgp, subtype=5
*>i192.168.2.0/24   2.2.2.2                  0    100      0 ?
    UN=2.2.2.2 EC{10:100} label=146 type=bgp, subtype=0
Route Distinguisher: 2:100
*> 192.168.3.0/24   172.16.3.2@3<            0         32768 ?
    UN=172.16.3.2 EC{20:100} label=147 type=bgp, subtype=5
*>i192.168.4.0/24   2.2.2.2                  0    100      0 ?
    UN=2.2.2.2 EC{20:100} label=147 type=bgp, subtype=0

Displayed  4 routes and 4 total paths

The customer routes are listed. 192.168.2.0/24 and 192.168.4.0/24, configured on PE2, have also been learned over iBGP, and the RD/RT information configured earlier appears as well.

Next, the BGP table for each VRF.

PE1# show ip bgp vrf CUSTA
BGP table version is 2, local router ID is 172.16.1.254, vrf id 2
Default local pref 100, local AS 65000
Status codes:  s suppressed, d damped, h history, * valid, > best, = multipath,
               i internal, r RIB-failure, S Stale, R Removed
Nexthop codes: @NNN nexthop's vrf id, < announce-nh-self
Origin codes:  i - IGP, e - EGP, ? - incomplete

   Network          Next Hop            Metric LocPrf Weight Path
*> 192.168.1.0/24   172.16.1.2               0         32768 ?
*> 192.168.2.0/24   2.2.2.2@0<               0    100      0 ?

Displayed  2 routes and 2 total paths
PE1# show ip bgp vrf CUSTB
BGP table version is 4, local router ID is 172.16.3.254, vrf id 3
Default local pref 100, local AS 65000
Status codes:  s suppressed, d damped, h history, * valid, > best, = multipath,
               i internal, r RIB-failure, S Stale, R Removed
Nexthop codes: @NNN nexthop's vrf id, < announce-nh-self
Origin codes:  i - IGP, e - EGP, ? - incomplete

   Network          Next Hop            Metric LocPrf Weight Path
*> 192.168.3.0/24   172.16.3.2               0         32768 ?
*> 192.168.4.0/24   2.2.2.2@0<               0    100      0 ?

Displayed  2 routes and 2 total paths

Each customer route is learned correctly. Next, look at the routing table of each VRF.

PE1# show ip route vrf CUSTA
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
       T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
       F - PBR, f - OpenFabric,
       > - selected route, * - FIB route, q - queued, r - rejected, b - backup

VRF CUSTA:
C>* 172.16.1.0/24 is directly connected, eth1, 05:29:08
S>* 192.168.1.0/24 [1/0] via 172.16.1.2, eth1, weight 1, 04:45:42
B>  192.168.2.0/24 [200/0] via 2.2.2.2 (vrf default) (recursive), label 146, weight 1, 00:04:28
  *                          via 10.1.1.254, eth0 (vrf default), label 17/146, weight 1, 00:04:28
PE1# show ip route vrf CUSTB
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
       T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
       F - PBR, f - OpenFabric,
       > - selected route, * - FIB route, q - queued, r - rejected, b - backup

VRF CUSTB:
C>* 172.16.3.0/24 is directly connected, eth2, 05:29:07
S>* 192.168.3.0/24 [1/0] via 172.16.3.2, eth2, weight 1, 04:43:22
B>  192.168.4.0/24 [200/0] via 2.2.2.2 (vrf default) (recursive), label 147, weight 1, 00:16:20
  *                          via 10.1.1.254, eth0 (vrf default), label 17/147, weight 1, 00:16:20

You can see that the customer routes received from PE2 have been installed via the BGP table.

8. CE router configuration

Finally, configure the CE routers: set the loopback address, the address on the link facing the PE, and a default route.

CE1(vtysh)


CE1# conf
CE1(config)# interface lo
CE1(config-if)# ip address 192.168.1.1/24
CE1(config-if)# exit
CE1(config)# interface eth0
CE1(config-if)# ip address 172.16.1.2/24
CE1(config-if)# exit
CE1(config)# ip route 0.0.0.0/0 172.16.1.254
CE1(config)# end

The same applies to CE2-4.

CE2(vtysh)


CE2# conf
CE2(config)# interface lo
CE2(config-if)# ip address 192.168.2.1/24
CE2(config-if)# exit
CE2(config)# interface eth0
CE2(config-if)# ip address 172.16.2.2/24
CE2(config-if)# exit
CE2(config)# ip route 0.0.0.0/0 172.16.2.254
CE2(config)# end

CE3(vtysh)


CE3# conf
CE3(config)# interface lo
CE3(config-if)# ip address 192.168.3.1/24
CE3(config-if)# exit
CE3(config)# interface eth0
CE3(config-if)# ip address 172.16.3.2/24
CE3(config-if)# exit
CE3(config)# ip route 0.0.0.0/0 172.16.3.254
CE3(config)# end

CE4(vtysh)


CE4# conf
CE4(config)# interface lo
CE4(config-if)# ip address 192.168.4.1/24
CE4(config-if)# exit
CE4(config)# interface eth0
CE4(config-if)# ip address 172.16.4.2/24
CE4(config-if)# exit
CE4(config)# ip route 0.0.0.0/0 172.16.4.254
CE4(config)# end

Operation check

Everything should now be in place. Confirm that a ping goes through from CE1 to CE2, specifying the loopback IF as the source with -I.

CE1


/ # ping -I 192.168.1.1 192.168.2.1
PING 192.168.2.1 (192.168.2.1) from 192.168.1.1: 56 data bytes
64 bytes from 192.168.2.1: seq=0 ttl=62 time=0.650 ms
64 bytes from 192.168.2.1: seq=1 ttl=62 time=0.751 ms
64 bytes from 192.168.2.1: seq=2 ttl=62 time=0.675 ms
^C
--- 192.168.2.1 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.650/0.692/0.751 ms

It went through!

Confirm that a ping also goes through from CE3 to CE4.

CE3


/ # ping -I 192.168.3.1 192.168.4.1
PING 192.168.4.1 (192.168.4.1) from 192.168.3.1: 56 data bytes
64 bytes from 192.168.4.1: seq=0 ttl=62 time=0.470 ms
64 bytes from 192.168.4.1: seq=1 ttl=62 time=0.157 ms
64 bytes from 192.168.4.1: seq=2 ttl=62 time=0.777 ms
^C
--- 192.168.4.1 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.157/0.468/0.777 ms

It went through!

As a negative test, make sure that CE1 → CE4, which belong to different VPNs, does not go through.

CE1


/ # ping -I 192.168.1.1 192.168.4.1
PING 192.168.4.1 (192.168.4.1) from 192.168.1.1: 56 data bytes
^C
--- 192.168.4.1 ping statistics ---
3 packets transmitted, 0 packets received, 100% packet loss

It does not go through, so the VPN separation is working as intended!
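
This is expected: on PE1 the CUSTA VRF imports only RT 10:100, so the CUSTB route 192.168.4.0/24 (RT 20:100) never enters its table. One way to confirm (output omitted here):

PE1# show ip route vrf CUSTA 192.168.4.0/24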

Try packet capture

When pinging CE1 → CE2 as above, capturing on eth1/eth0 of PE1, eth0/eth1 of the P router, and eth0/eth1 of PE2 shows how labels are pushed and popped as the packet is forwarded.

PE1/PR/PE2


/ # tcpdump -n -i any -l | grep ICMP

CE1


/ # ping -c 1 -I 192.168.1.1 192.168.2.1
PING 192.168.2.1 (192.168.2.1) from 192.168.1.1: 56 data bytes
64 bytes from 192.168.2.1: seq=0 ttl=62 time=0.551 ms

--- 192.168.2.1 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.551/0.551/0.551 ms

The results are arranged in chronological order as follows.

Outbound:

PE1


08:13:50.550878 eth1  In  IP 192.168.1.1 > 192.168.2.1: ICMP echo request, id 20992, seq 0, length 64
08:13:50.550900 eth0  Out MPLS (label 17, exp 0, ttl 63) (label 146, exp 0, [S], ttl 63) IP 192.168.1.1 > 192.168.2.1: ICMP echo request, id 20992, seq 0, length 64

PR


08:13:50.550905 eth0  In  MPLS (label 17, exp 0, ttl 63) (label 146, exp 0, [S], ttl 63) IP 192.168.1.1 > 192.168.2.1: ICMP echo request, id 20992, seq 0, length 64
08:13:50.550909 eth1  Out MPLS (label 146, exp 0, [S], ttl 63) IP 192.168.1.1 > 192.168.2.1: ICMP echo request, id 20992, seq 0, length 64

PE2


08:13:50.550913 eth0  In  MPLS (label 146, exp 0, [S], ttl 63) IP 192.168.1.1 > 192.168.2.1: ICMP echo request, id 20992, seq 0, length 64
08:13:50.550916 CUSTA Out IP 192.168.1.1 > 192.168.2.1: ICMP echo request, id 20992, seq 0, length 64
08:13:50.550926 eth1  Out IP 192.168.1.1 > 192.168.2.1: ICMP echo request, id 20992, seq 0, length 64

The raw IP packet arrives at eth1 of PE1, where an inner label 146 (for VPN identification) and an outer label 17 (for forwarding across the MPLS network) are pushed before it leaves from eth0. At the P router, the outer label is removed according to the LFIB (implicit-null for label 17) and the packet is forwarded to PE2. At PE2, the inner label is also stripped and the packet is forwarded to CE2 as raw IP according to the VRF CUSTA routing table. The last two Out lines are for the VRF device CUSTA and for eth1.[^3]

[^3]: With tcpdump -i any, the In on the VRF device CUSTA does not seem to be shown. When specifying the interfaces individually, both are displayed ... I am not sure why.

Return: the same thing happens on the way back; 146 is again used as the inner VPN identification label.

PE2


08:13:50.550976 eth1  In  IP 192.168.2.1 > 192.168.1.1: ICMP echo reply, id 20992, seq 0, length 64
08:13:50.550981 eth0  Out MPLS (label 16, exp 0, ttl 63) (label 146, exp 0, [S], ttl 63) IP 192.168.2.1 > 192.168.1.1: ICMP echo reply, id 20992, seq 0, length 64

PR


08:13:50.550984 eth1  In  MPLS (label 16, exp 0, ttl 63) (label 146, exp 0, [S], ttl 63) IP 192.168.2.1 > 192.168.1.1: ICMP echo reply, id 20992, seq 0, length 64
08:13:50.550986 eth0  Out MPLS (label 146, exp 0, [S], ttl 63) IP 192.168.2.1 > 192.168.1.1: ICMP echo reply, id 20992, seq 0, length 64

PE1


08:13:50.550988 eth0  In  MPLS (label 146, exp 0, [S], ttl 63) IP 192.168.2.1 > 192.168.1.1: ICMP echo reply, id 20992, seq 0, length 64
08:13:50.550990 CUSTA Out IP 192.168.2.1 > 192.168.1.1: ICMP echo reply, id 20992, seq 0, length 64
08:13:50.550994 eth1  Out IP 192.168.2.1 > 192.168.1.1: ICMP echo reply, id 20992, seq 0, length 64
