[DOCKER] Install Ubuntu 20.04 on Raspberry Pi 4 and build a Kubernetes cluster to run containers

Introduction

I wanted a test environment for Kubernetes, so I built one. Googling turns up plenty of similar write-ups; the article "Cooking for 3 days: Kubernetes wrapped in Raspberry Pi, CyberAgent style" in particular looked delicious to me.

Environment

This is the assembled Raspberry Pi 4 cluster. The switching hub and USB charger are held in place with Velcro (bought at the 100-yen shop).

These are the parts I prepared this time.

Part            Product name                                                                  Quantity
Raspberry Pi    Raspberry Pi 4 Model B / 4GB                                                  3 units
SD card         Samsung micro SDXC 128GB MB-MC128GA                                           3 cards
Switching hub   Elecom gigabit switching hub, 5 ports, AC powered, compact, EHC-G05PA-SB      1 unit
USB charger     Anker PowerPort I PD - 1 PD & 4 PowerIQ (1 Type-C port, 4 Type-A ports)       1 unit
Case            GeeekPi Raspberry Pi 4 Model B Pi Rack Case with Cooling Fan and Heat Sink    1 unit
USB cable       Mauknci Type-C to Type-C, 30 cm                                               1 cable
USB cable       SUNGUY Type-A to Type-C, 30 cm, 2-piece set                                   2 sets (4 cables; one is left over and used for something else)
LAN cable       Miyoshi MCO Category 6 slim LAN cable, 15 cm                                  3 cables

- The switching hub is connected to the router with a LAN cable. I do not use wireless LAN.
- Use an SDXC-compatible writer to write to the SD cards.
- The power supply specification of the Raspberry Pi 4 is 3A and the output of the USB charger is 2.4A, but it works.

This is the software to be installed.

I used the following for my work.

- Windows 10 PC (for writing the SD cards and for SSH connections)
- 15-inch mobile monitor (for initial Raspberry Pi setup)
- USB keyboard (for initial Raspberry Pi setup)

Network environment to build

The IP addresses are statically assigned.

Use             Hostname                IP address
Master Node     master01.example.jp     192.168.100.101/24
Worker Node 1   worker01.example.jp     192.168.100.102/24
Worker Node 2   worker02.example.jp     192.168.100.103/24

Other necessary IP address information.

Use                         IP address
Gateway                     192.168.100.254
DNS                         192.168.100.254
IP pool for LoadBalancer    192.168.100.211 - 192.168.100.215

Assembly

Since the USB charger is heavy, I placed it at the bottom of the stack. The single Type-C to Type-C cable goes to the Master Node, on the theory that it may matter most (maybe it makes no difference). There is nothing else special to mention; once assembled, it is ready to power on.

Install Ubuntu 20.04 on Raspberry Pi 4

Make an SD card for booting Raspberry Pi. The work PC is Windows 10.

Use "Raspberry Pi Imager" to write to the SD card. Formatting with "Raspberry Pi Imager" lets you use SD cards of 64GB or larger. Download it from the Raspberry Pi Imager page.

Insert the SD card into the writer and start "Raspberry Pi Imager".

Format to use the SD card.

Click "CHOOSE OS" and select "Erase".

Click "CHOOSE SD CARD" and select the inserted SD card.

Click "WRITE" to start formatting.

Formatting is complete. Click "CONTINUE" to dismiss the dialog.

Then write the OS. Click "CHOOSE OS" and select "Ubuntu".

Select "Ubuntu Server 20.04.1 LTS (RPi 3/4)". Be sure to pick the 64-bit image; the 32-bit entry above it is not the one you want.

Click "CHOOSE SD CARD" and select the inserted SD card.

Click "WRITE" to write the image.

OS writing is complete.

Write to the remaining two SD cards in the same way.

Ubuntu 20.04 settings

Set up the network first, then connect over SSH for the remaining work.

Insert the SD card with the OS written to it into the Raspberry Pi and power it on. The initial setup is done with the 15-inch mobile monitor and the USB keyboard.

Log in to Ubuntu

User name    Initial password
ubuntu       ubuntu

If you are logging in for the first time, you will be prompted to change your password.

Check the OS version.

$ cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.1 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.1 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal

Disable default ubuntu account and create working account (work common to all 3 units)

Instead of using the default ubuntu account, create a k8suser account for your work.

$ sudo useradd -m -s /usr/bin/bash k8suser
$ sudo passwd k8suser          #change the password
New password:
Retype new password:
passwd: password updated successfully

$ sudo adduser k8suser sudo     #Add k8suser to the sudo group
Adding user `k8suser' to group `sudo' ...
Adding user k8suser to group sudo
Done.
$ cat /etc/group | grep sudo   #Confirm
#With the English keyboard layout, Shift + } produces "|"
sudo:x:27:ubuntu,k8suser

Disable login for the default account (ubuntu).

$ sudo usermod -s /usr/sbin/nologin ubuntu
$ cat /etc/passwd | grep ubuntu   #Confirm
ubuntu:x:1000:1000:Ubuntu:/home/ubuntu:/usr/sbin/nologin

Master Node OS settings

Configure the network settings on the Master Node.

Rename host (run on Master Node)

$ sudo hostnamectl set-hostname master01.example.jp
$ hostname    #Confirm
master01.example.jp

Change IP address (run on Master Node)

#With the English keyboard layout, the keys map as follows:
# Shift + ; produces ":"
# @ produces "["
# [ produces "]"
$ sudo vi /etc/netplan/99-network.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: false
      dhcp6: false
      addresses:
        - 192.168.100.101/24
      gateway4: 192.168.100.254
      nameservers:
        addresses:
          - 192.168.100.254

$ sudo netplan apply
$ ip a    #Confirm that the new IP address appears in the "inet" line under "eth0:"

Log in to master01.example.jp with SSH.

Access point       User name    Password
192.168.100.101    k8suser      the password you set
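
For example, from the Windows 10 work PC you can connect with any SSH client. The command below is a minimal sketch assuming an OpenSSH client (built into Windows 10, Linux, and macOS) is available.

$ ssh k8suser@192.168.100.101    #Log in to the Master Node as k8suser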

Worker Node 1 OS settings

Configure the settings around the Worker Node 1 network.

Rename host (run on Worker Node 1)

$ sudo hostnamectl set-hostname worker01.example.jp
$ hostname    #Confirmation
worker01.example.jp

Change IP address (run on Worker Node 1)

#With the English keyboard layout, the keys map as follows:
# Shift + ; produces ":"
# @ produces "["
# [ produces "]"
$ sudo vi /etc/netplan/99-network.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: false
      dhcp6: false
      addresses:
        - 192.168.100.102/24
      gateway4: 192.168.100.254
      nameservers:
        addresses:
          - 192.168.100.254

$ sudo netplan apply
$ ip a    #Confirm that the new IP address appears in the "inet" line under "eth0:"

Log in to worker01.example.jp with SSH.

Access point       User name    Password
192.168.100.102    k8suser      the password you set

Worker Node 2 OS settings

Configure the settings around the Worker Node 2 network.

Rename host (run on Worker Node 2)

$ sudo hostnamectl set-hostname worker02.example.jp
$ hostname    #Confirm
worker02.example.jp

Change IP address (run on Worker Node 2)

#With the English keyboard layout, the keys map as follows:
# Shift + ; produces ":"
# @ produces "["
# [ produces "]"
$ sudo vi /etc/netplan/99-network.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: false
      dhcp6: false
      addresses:
        - 192.168.100.103/24
      gateway4: 192.168.100.254
      nameservers:
        addresses:
          - 192.168.100.254

$ sudo netplan apply
$ ip a    #Confirm that the new IP address appears in the "inet" line under "eth0:"

Log in to worker02.example.jp with SSH.

Access point       User name    Password
192.168.100.103    k8suser      the password you set

OS settings common to all three

For the rest of the work, connect over SSH and continue configuring the OS. Kubernetes requires swap to be disabled, but swap was not active to begin with. Perhaps that is because this image is built for the Raspberry Pi?

$ free
              total        used        free      shared  buff/cache   available
Mem:        3884360      961332      152584        5184     2770444     2973200
Swap:             0           0           0      #Swap is 0
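
For reference only: if swap had been enabled, it would need to be turned off before running Kubernetes. A minimal sketch (not needed on this image, since no swap is configured):

$ sudo swapoff -a      #Turn off any active swap immediately
$ free | grep Swap     #Confirm that Swap shows 0
#To keep swap off across reboots, also comment out or remove any swap entries in /etc/fstab.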

Update existing packages (common to all 3 units)

$ sudo apt update
$ sudo apt -y upgrade

Edit hosts file (common to all 3)

$ sudo vi /etc/hosts
#Append
192.168.100.101       master01 master01.example.jp
192.168.100.102       worker01 worker01.example.jp
192.168.100.103       worker02 worker02.example.jp
$ cat /etc/hosts    #Confirm

Change time zone (common to all 3 units)

$ sudo timedatectl set-timezone Asia/Tokyo
$ timedatectl | grep Time    #Confirm
                Time zone: Asia/Tokyo (JST, +0900)

Change keymap (common to 3 units)

$ sudo localectl set-keymap jp106
$ localectl    #Confirm
   System Locale: LANG=C.UTF-8
       VC Keymap: jp106
      X11 Layout: jp
       X11 Model: jp106
     X11 Options: terminate:ctrl_alt_bksp

Disable IPv6 (common to all 3 units)

$ sudo vi /etc/sysctl.conf
#Append
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.eth0.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
$ sudo sysctl -p
$ ip a    #Confirm. inet6 is not displayed.

Prevent iptables from using the nftables backend (common to all 3)

Configure this by referring to [Prevent iptables from using the nftables backend](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/) in the kubeadm installation guide.

$ sudo apt-get -y install iptables arptables ebtables
$ sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
$ sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
$ sudo update-alternatives --set arptables /usr/sbin/arptables-legacy
update-alternatives: using /usr/sbin/arptables-legacy to provide /usr/sbin/arptables (arptables) in manual mode
$ sudo update-alternatives --set ebtables /usr/sbin/ebtables-legacy
update-alternatives: using /usr/sbin/ebtables-legacy to provide /usr/sbin/ebtables (ebtables) in manual mode

Installation of Docker (common to all 3 units)

Install by referring to Install Docker Engine on Ubuntu.

$ sudo apt-get -y install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo add-apt-repository "deb [arch=arm64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
$ sudo apt-get update
$ sudo apt-get -y install docker-ce docker-ce-cli containerd.io
$ sudo apt-mark hold docker-ce docker-ce-cli containerd.io
docker-ce set on hold.
docker-ce-cli set on hold.
containerd.io set on hold.

Adding a user to the docker group is not great for security, but this is a test environment ...

$ sudo adduser k8suser docker
Adding user `k8suser' to group `docker' ...
Adding user k8suser to group docker
Done.
$ cat /etc/group | grep docker    #Confirmation
docker:x:998:k8suser

Log out and log back in so the group membership takes effect, then check the Docker version.

$ docker version
Client: Docker Engine - Community
 Version:           19.03.13
 API version:       1.40
 Go version:        go1.13.15
 Git commit:        4484c46
 Built:             Wed Sep 16 17:03:40 2020
 OS/Arch:           linux/arm64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.13
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       4484c46
  Built:            Wed Sep 16 17:02:11 2020
  OS/Arch:          linux/arm64
  Experimental:     false
 containerd:
  Version:          1.3.7
  GitCommit:        8fba4e9a7d01810a393d5d25a3621dc101981175
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

Check the operation of Docker. Start the hello-world container.

$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
256ab8fe8778: Pull complete
Digest: sha256:8c5aeeb6a5f3ba4883347d3747a7249f491766ca1caa47e5da5dfcf6b9b717c0
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (arm64v8)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

Installation of kubeadm, kubectl, kubelet (common to all 3 units)

Install by referring to [Installing kubeadm, kubelet and kubectl](https://kubernetes.io/ja/docs/setup/production-environment/tools/kubeadm/install-kubeadm/).

$ sudo apt-get update && sudo apt-get -y install apt-transport-https curl
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
$ cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
$ sudo apt-get update
$ sudo apt-get -y install kubelet kubeadm kubectl
$ sudo apt-mark hold kubelet kubeadm kubectl
kubelet set on hold.
kubeadm set on hold.
kubectl set on hold.

Check the version of each module.

$ kubeadm version -o json
{
  "clientVersion": {
    "major": "1",
    "minor": "19",
    "gitVersion": "v1.19.3",
    "gitCommit": "1e11e4a2108024935ecfcb2912226cedeafd99df",
    "gitTreeState": "clean",
    "buildDate": "2020-10-14T12:47:53Z",
    "goVersion": "go1.15.2",
    "compiler": "gc",
    "platform": "linux/arm64"
  }
}

$ kubectl version -o json
{
  "clientVersion": {
    "major": "1",
    "minor": "19",
    "gitVersion": "v1.19.3",
    "gitCommit": "1e11e4a2108024935ecfcb2912226cedeafd99df",
    "gitTreeState": "clean",
    "buildDate": "2020-10-14T12:50:19Z",
    "goVersion": "go1.15.2",
    "compiler": "gc",
    "platform": "linux/arm64"
  }
}
The connection to the server localhost:8080 was refused - did you specify the right host or port?

$ kubelet --version
Kubernetes v1.19.3

Running `kubectl version -o json` prints the "The connection to the server ..." message; we will deal with this later.

Enable the memory cgroup (common to all 3 units)

The memory cgroup is currently disabled; the last column of the output below is the "enabled" column, and it shows 0.

$ cat /proc/cgroups | grep memory
memory  0       105     0

Configure this by referring to "Install Kubernetes on a Raspberry Pi cluster (success)".

Add the parameters to /boot/firmware/cmdline.txt, appending them to the end of the existing line (the file is a single line).

$ sudo vi /boot/firmware/cmdline.txt
cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory    #Append
$ cat /boot/firmware/cmdline.txt    #Confirm
net.ifnames=0 dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 root=LABEL=writable rootfstype=ext4 elevator=deadline rootwait fixrtc cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory

Restart.

$ sudo reboot

Check again after the OS has booted.

$ cat /proc/cgroups | grep memory
memory  10      97      1

The enabled column now shows 1, which means the memory cgroup is enabled.

Creating a Kubernetes cluster

Finally, on to the Kubernetes setup. First, initialize the Master Node.

Set up by referring to Kubernetes documentation.

Master Node Initialization (Run on Master Node)

About the options: `--apiserver-advertise-address` is set to the Master Node's IP address (192.168.100.101), and `--pod-network-cidr=10.244.0.0/16` matches the default network of the flannel add-on installed later.

Perform the initialization.

$ sudo kubeadm init --apiserver-advertise-address=192.168.100.101 --pod-network-cidr=10.244.0.0/16
W1107 17:43:47.125493    2544 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.3
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: missing optional cgroups: hugetlb
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master01.example.jp] and IPs [10.96.0.1 192.168.100.101]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master01.example.jp] and IPs [192.168.100.101 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master01.example.jp] and IPs [192.168.100.101 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 35.510569 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master01.example.jp as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master01.example.jp as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: vpkasj.i7pe42jx57scb3bi
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.100.101:6443 --token vpkasj.i7pe42jx57scb3bi \
    --discovery-token-ca-cert-hash sha256:3646aa901c623280b56d8ec33873263a5e3452a979f594c0f628724ed9fe9cce

The final `kubeadm join ...` command is required when adding a Worker Node, so record it somewhere. Note that the token expires after 24 hours.

There is a way to check the token's remaining lifetime and to reissue it, but running those commands at this point results in an error, so they are described later.

Environment variables and input completion settings (run on Master Node)

Set the environment variable. With the settings below, the "The connection to the server ..." message from `kubectl version -o json` no longer appears.

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc
$ source $HOME/.bashrc
$ kubectl version -o json
#Confirm that "The connection to the server ..." is not displayed.

Set up command completion.

$ source <(kubectl completion bash)
$ echo "source <(kubectl completion bash)" >> $HOME/.bashrc

About token of kubeadm join (executed on Master Node)

Here are the token commands mentioned earlier. To check the remaining lifetime, run the `kubeadm token list` command. If nothing is displayed, there is no valid token.

$ kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
2jdf77.0ww7uv0w2hodm99i   23h         2020-10-31T22:09:03+09:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.

If you want to reissue the token, run the `kubeadm token create` command.

$ sudo kubeadm token create
W1101 14:37:16.210067  508855 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
2jdf77.0ww7uv0w2hodm99i  #This is token

If you want to check the hash of the CA certificate, run the `openssl` command below. This is described in "Creating a single control-plane cluster using kubeadm".

$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
   openssl dgst -sha256 -hex | sed 's/^.* //'
3646aa901c623280b56d8ec33873263a5e3452a979f594c0f628724ed9fe9cce  #This is the hash of the CA certificate
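
As a shortcut (not used in the steps above), kubeadm can also create a fresh token and print the complete join command in one go:

$ sudo kubeadm token create --print-join-command    #Prints a full "kubeadm join ..." line with a new token and the CA cert hash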

Install Pod Network Add-on (Run on Master Node)

Install flannel as the Pod network add-on so that pods can communicate with each other. I chose flannel because it has a solid track record. Install it by referring to the following.

$ curl https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml -O
$ kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

Confirm the startup. You can see kube-flannel-ds-XXXXX.

$ kubectl get pods -n kube-system
NAME                                          READY   STATUS    RESTARTS   AGE
coredns-f9fd979d6-trgnz                       1/1     Running   0          5m54s
coredns-f9fd979d6-w7zvv                       1/1     Running   0          5m54s
etcd-master01.example.jp                      1/1     Running   0          5m58s
kube-apiserver-master01.example.jp            1/1     Running   0          5m59s
kube-controller-manager-master01.example.jp   1/1     Running   0          5m59s
kube-flannel-ds-bmvz4                         1/1     Running   0          61s
kube-proxy-6rhgr                              1/1     Running   0          5m54s
kube-scheduler-master01.example.jp            1/1     Running   0          5m58s

Install LoadBalancer (Run on Master Node)

Use MetalLB. I chose it for the following features:

- A LoadBalancer-type Service can be used in an on-premises environment
- External IPs can be assigned

It consists of two types of pods, Controller and Speaker.

Install by referring to MetalLB, bare metal load-balancer for Kubernetes.

$ curl https://raw.githubusercontent.com/metallb/metallb/v0.9.4/manifests/namespace.yaml -o namespace.yaml
$ curl https://raw.githubusercontent.com/metallb/metallb/v0.9.4/manifests/metallb.yaml -o metallb.yaml
$ kubectl apply -f namespace.yaml
namespace/metallb-system created
$ kubectl apply -f metallb.yaml
podsecuritypolicy.policy/controller created
podsecuritypolicy.policy/speaker created
serviceaccount/controller created
serviceaccount/speaker created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
role.rbac.authorization.k8s.io/config-watcher created
role.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/config-watcher created
rolebinding.rbac.authorization.k8s.io/pod-lister created
daemonset.apps/speaker created
deployment.apps/controller created
$ kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
secret/memberlist created

Confirm the startup. You can see controller-XXXXXXXXX-XXXXX and speaker-XXXXX.

$ kubectl get pod -n metallb-system
NAME                         READY   STATUS    RESTARTS   AGE
controller-8687cdc65-jgf2g   0/1     Pending   0          54s
speaker-q9ksw                1/1     Running   0          54s

The controller is Pending, not Running. As described in MetalLB, bare metal load-balancer for Kubernetes, the controller is a Deployment, and it stays Pending because no Worker Node that could run the pod has been registered yet. The speaker is a DaemonSet, so one instance starts on every node (Master and Worker); right now it is running only on the Master Node. (After the Worker Nodes are added, three speakers run in total.) You can check which node each pod is running on as shown below.
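
A quick way to see where each MetalLB pod is scheduled (a supplementary check, not part of the installation steps) is the -o wide output, which adds a NODE column:

$ kubectl get pod -n metallb-system -o wide    #The NODE column shows which node each pod runs on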

Join the Worker Node to the cluster (Run on Worker Node1, Worker Node2)

Execute the command you recorded when the Master Node was initialized. If you have lost it, reissue the token as described above.

$ sudo kubeadm join 192.168.100.101:6443 --token vpkasj.i7pe42jx57scb3bi \
    --discovery-token-ca-cert-hash sha256:3646aa901c623280b56d8ec33873263a5e3452a979f594c0f628724ed9fe9cce
[sudo] password for user01:
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: missing optional cgroups: hugetlb
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Additional confirmation of Worker Node (executed on Master Node)

Confirmation is done on the Master Node. It took less than 5 minutes for the Worker Nodes to become Ready.

$ kubectl get nodes
NAME                  STATUS   ROLES    AGE     VERSION
master01.example.jp   Ready    master   14m     v1.19.3
worker01.example.jp   Ready    <none>   4m59s   v1.19.3
worker02.example.jp   Ready    <none>   4m33s   v1.19.3
$ kubectl get pods -A
NAMESPACE        NAME                                          READY   STATUS    RESTARTS   AGE
kube-system      coredns-f9fd979d6-trgnz                       1/1     Running   0          14m
kube-system      coredns-f9fd979d6-w7zvv                       1/1     Running   0          14m
kube-system      etcd-master01.example.jp                      1/1     Running   0          15m
kube-system      kube-apiserver-master01.example.jp            1/1     Running   0          15m
kube-system      kube-controller-manager-master01.example.jp   1/1     Running   0          15m
kube-system      kube-flannel-ds-bmvz4                         1/1     Running   0          10m
kube-system      kube-flannel-ds-mwmt6                         1/1     Running   0          6m9s
kube-system      kube-flannel-ds-zm2fk                         1/1     Running   0          5m43s
kube-system      kube-proxy-6rhgr                              1/1     Running   0          14m
kube-system      kube-proxy-b8fjn                              1/1     Running   0          6m9s
kube-system      kube-proxy-htndc                              1/1     Running   0          5m43s
kube-system      kube-scheduler-master01.example.jp            1/1     Running   0          15m
metallb-system   controller-8687cdc65-jgf2g                    1/1     Running   0          8m7s
metallb-system   speaker-q9ksw                                 1/1     Running   0          8m7s
metallb-system   speaker-vmt52                                 1/1     Running   0          79s
metallb-system   speaker-wkcz4                                 1/1     Running   0          2m16s

The controller that was Pending is now Running, and the number of kube-flannel-ds-XXXXX, kube-proxy-XXXXX, and speaker-XXXXX pods has increased by the number of Worker Nodes.

Label Worker Node (Run on Master Node)

$ kubectl label node worker01.example.jp node-role.kubernetes.io/worker=worker
node/worker01.example.jp labeled
$ kubectl label node worker02.example.jp node-role.kubernetes.io/worker=worker
node/worker02.example.jp labeled
$ kubectl get nodes  #Confirm
NAME                  STATUS   ROLES    AGE     VERSION
master01.example.jp   Ready    master   16m     v1.19.3
worker01.example.jp   Ready    worker   7m22s   v1.19.3
worker02.example.jp   Ready    worker   6m56s   v1.19.3

Run a container with Kubernetes

To check operation, use an Nginx container image that returns the hostname of the node it is running on.

display-hostname.yaml


apiVersion: v1
kind: Namespace
metadata:
  name: nginx-prod
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: pool-ips  #Metallb IP pool name
      protocol: layer2
      addresses:
      - 192.168.100.211-192.168.100.215
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-lb #Name of the Service (LoadBalancer)
  namespace: nginx-prod
  annotations:
    metallb.universe.tf/address-pool: pool-ips #Metallb IP pool name
spec:
  type: LoadBalancer
  ports:
    - name: nginx-service-lb
      protocol: TCP
      port: 8080 #Port the Service listens on (on the Service IP)
      nodePort: 30080 #Port to listen on the node IP (30000-32767)
      targetPort: 80 #Port the forwarding destination (container) listens on
  selector: #The Service selector is treated as matchLabels
    app: nginx-pod #Label of the destination pod
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment #Deployment name(This is also the name of the ReplicaSet)
  namespace: nginx-prod
spec:
  selector:
    matchLabels: #Creating a ReplicaSet for pods with matching labels
      app: nginx-pod
  replicas: 2
  template: #Pod template
    metadata:
      name: nginx-pod #Pod name
      namespace: nginx-prod
      labels: #Pod label
        app: nginx-pod
    spec:
      containers: #Container settings
        - name: nginx-container #The name of the container
          image: yasthon/nginx-display-hostname #Image name
          env:
            - name: nginx-container
          ports:
            - containerPort: 80 #Container port
          volumeMounts:
            - name: file-hostname
              mountPath: /usr/share/nginx/html/hostname
      volumes:
        - name: file-hostname
          hostPath:
            path: /etc/hostname

Create the resources.

$ kubectl apply -f display-hostname.yaml
namespace/nginx-prod created
configmap/config created
service/nginx-service-lb created
deployment.apps/nginx-deployment created
$ kubectl get all -n nginx-prod  #Confirm
NAME                                    READY   STATUS    RESTARTS   AGE
pod/nginx-deployment-7ff4cc65cd-5bkv5   1/1     Running   0          3m33s
pod/nginx-deployment-7ff4cc65cd-xsp76   1/1     Running   0          3m33s

NAME                       TYPE           CLUSTER-IP    EXTERNAL-IP       PORT(S)          AGE
service/nginx-service-lb   LoadBalancer   10.97.27.72   192.168.100.211   8080:30080/TCP   3m33s

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-deployment   2/2     2            2           3m33s

NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-deployment-7ff4cc65cd   2         2         2       3m33s

$ kubectl get configmap -n metallb-system  #Confirm
NAME     DATA   AGE
config   1      3m57s

The Pod, Service, Deployment, ReplicaSet, and ConfigMap have been created. The Service's EXTERNAL-IP was assigned from the IP pool.

Check the LoadBalancer Ingress and Port values from the Service details.

$ kubectl describe svc nginx-service-lb -n nginx-prod
Name:                     nginx-service-lb
Namespace:                nginx-prod
Labels:                   <none>
Annotations:              metallb.universe.tf/address-pool: pool-ips
Selector:                 app=nginx-pod
Type:                     LoadBalancer
IP:                       10.97.27.72
LoadBalancer Ingress:     192.168.100.211
Port:                     nginx-service-lb  8080/TCP
TargetPort:               80/TCP
NodePort:                 nginx-service-lb  30080/TCP
Endpoints:                10.244.1.4:80,10.244.2.3:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason        Age                    From                Message
  ----    ------        ----                   ----                -------
  Normal  IPAllocated   6m43s                  metallb-controller  Assigned IP "192.168.100.211"
  Normal  nodeAssigned  6m29s                  metallb-speaker     announcing from node "worker01.example.jp"
  Normal  nodeAssigned  6m15s (x2 over 6m36s)  metallb-speaker     announcing from node "worker02.example.jp"

The LoadBalancer Ingress IP address is the same as EXTERNAL-IP. I am not sure what the "(x2 over ...)" in the events means. Was the IP assigned twice? Did the first attempt take a while? It does not seem to interfere with operation, and it may not appear in your environment.

Connection confirmation

Connect to the IP address of LoadBalancer Ingress and the port number of Port.

$ curl 192.168.100.211:8080/index.sh
<html><head>
<title>worker01.example.jp</title>
<meta http-equiv="Content-type" content="text/html;charset=UTF-8">
</head><body>
HOSTNAME : worker01.example.jp
</body></html>

If you repeat the curl command several times, the connection destination sometimes changes to worker02.example.jp, so the load is being balanced. It does not appear to be simple round robin. To observe the distribution over several requests, see the loop below.
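
A simple shell loop like the one below can be used to watch the distribution (just a convenience sketch; it repeats the same curl call and extracts the HOSTNAME line from each response):

$ for i in $(seq 1 10); do curl -s 192.168.100.211:8080/index.sh | grep HOSTNAME; done
#Each output line shows which Worker Node served that request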

Try stopping Worker Node 2

I unplugged the Ethernet cable from Worker Node 2 and ran the curl command. The command was executed on the Master Node.

$ curl 192.168.100.211:8080/index.sh
<html><head>
<title>worker01.example.jp</title>
<meta http-equiv="Content-type" content="text/html;charset=UTF-8">
</head><body>
HOSTNAME : worker01.example.jp
</body></html>

$ curl 192.168.100.211:8080/index.sh
<html><head>
<title>worker01.example.jp</title>
<meta http-equiv="Content-type" content="text/html;charset=UTF-8">
</head><body>
HOSTNAME : worker01.example.jp
</body></html>

As expected, only worker01.example.jp is reached.

Try stopping Worker Node1

Reconnect the Ethernet cable to Worker Node 2, then unplug the Ethernet cable from Worker Node 1 and run the curl command.

$ curl 192.168.100.211:8080/index.sh
<html><head>
<title>worker02.example.jp</title>
<meta http-equiv="Content-type" content="text/html;charset=UTF-8">
</head><body>
HOSTNAME : worker02.example.jp
</body></html>

$ curl 192.168.100.211:8080/index.sh
<html><head>
<title>worker02.example.jp</title>
<meta http-equiv="Content-type" content="text/html;charset=UTF-8">
</head><body>
HOSTNAME : worker02.example.jp
</body></html>

Now only Worker Node 2 can be reached. After reconnecting Worker Node 1's Ethernet cable, requests reach both Worker Nodes again.

In other words, traffic keeps flowing, but only to the Worker Nodes that are still reachable.

You can also access it from a browser at

http://192.168.100.211:8080/index.sh

When accessed from a browser, the connection sticks firmly to one node: even pressing Ctrl + F5 repeatedly does not change the connection destination. When I unplug the cable, it connects to the other Worker Node, so the LoadBalancer does seem to be working.

Cluster cleanup

If for some reason you want to recreate the cluster, perform a cleanup. I referred to the following:

- [Creating a single control-plane cluster using kubeadm](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#tear-down)
- [Troubleshooting kubeadm: kubeadm hangs when removing managed containers](https://kubernetes.io/ja/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/)



Delete nodes (run on Master Node)

$ kubectl get nodes
NAME                  STATUS   ROLES    AGE    VERSION
master01.example.jp   Ready    master   8d     v1.19.3
worker01.example.jp   Ready    worker   2d8h   v1.19.3
worker02.example.jp   Ready    worker   2d8h   v1.19.3

$ kubectl drain worker01.example.jp --delete-local-data --force --ignore-daemonsets
node/worker01.example.jp cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-brt9l, kube-system/kube-proxy-5pch5, metallb-system/speaker-4n2xx
evicting pod metallb-system/controller-8687cdc65-cn48m
pod/controller-8687cdc65-cn48m evicted
node/worker01.example.jp evicted

$ kubectl drain worker02.example.jp --delete-local-data --force --ignore-daemonsets
node/worker02.example.jp cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-tdg4l, kube-system/kube-proxy-k56gw, metallb-system/speaker-qdv85
evicting pod metallb-system/controller-8687cdc65-nkvvn
pod/controller-8687cdc65-nkvvn evicted
node/worker02.example.jp evicted

$ kubectl delete node worker01.example.jp
node "worker01.example.jp" deleted

$ kubectl delete node worker02.example.jp
node "worker02.example.jp" deleted

$ kubectl get nodes    #Confirm
NAME                  STATUS   ROLES    AGE   VERSION
master01.example.jp   Ready    master   8d    v1.19.3

Reset Worker Node (Run on Worker Node1, Worker Node2)

$ sudo kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y    #Enter y
[preflight] Running pre-flight checks
W1104 20:37:31.440013 1890335 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.

Perform the processing described in the message.

Delete the CNI configuration.

$ sudo rm -rf /etc/cni/net.d

There are Kubernetes related rules left in iptables, so delete them.

$ sudo iptables -L -n
#You'll see a ton of Kubernetes-related rules.
$ sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
$ sudo iptables -L -n    #Confirm
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy DROP)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Reset Master Node (Run on Master Node)

$ sudo systemctl restart docker.service
$ sudo kubeadm reset
$ sudo rm -rf /etc/cni/net.d
$ sudo iptables -L -n
$ sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
$ sudo iptables -L -n    #Confirm

After cleanup, run the sudo kubeadm init command again to recreate the cluster.

Run Pod on Master Node (Run on Master Node)

You can also run pods on the Master Node. With the default settings, pods are not scheduled on the Master Node.

$ kubectl describe node master01 | grep Taints
Taints:             node-role.kubernetes.io/master:NoSchedule

Clear the NoSchedule taint so that pods can run on the Master Node as well. Add a "-" (hyphen) after master:NoSchedule.

$ kubectl taint nodes master01.example.jp node-role.kubernetes.io/master:NoSchedule-
node/master01.example.jp untainted
$ kubectl describe node master01 | grep Taints    #Confirm
Taints:             <none>

Change `replicas:` in the Deployment (kind: Deployment) from 2 to 3.

$ vi display-hostname.yaml
 replicas: 3    #Change 2 to 3

$ kubectl apply -f display-hostname.yaml
namespace/nginx-prod unchanged
configmap/config unchanged
service/nginx-service-lb unchanged
deployment.apps/nginx-deployment configured
$ kubectl get pod -n nginx-prod    #Confirm
NAME                                READY   STATUS              RESTARTS   AGE
nginx-deployment-7ff4cc65cd-5f85p   1/1     Running             0          39m
nginx-deployment-7ff4cc65cd-lm782   0/1     ContainerCreating   0          38s
nginx-deployment-7ff4cc65cd-nrhl7   1/1     Running             0          39m


The middle pod, still being created, is the one running on the Master Node. The detailed information also confirms that it was started on master01.example.jp.

$ kubectl describe pod/nginx-deployment-7ff4cc65cd-lm782 -n nginx-prod | grep ^Node:
Node:         master01.example.jp/192.168.128.193


If you want to stop pods from running on the Master Node, first change the number of pods back to two: set `replicas:` in display-hostname.yaml back to 2.

$ vi display-hostname.yaml
 replicas: 2    #Change 3 back to 2

$ kubectl apply -f display-hostname.yaml
namespace/nginx-prod unchanged
configmap/config unchanged
service/nginx-service-lb unchanged
deployment.apps/nginx-deployment configured
$ kubectl get pod -n nginx-prod    #Confirm
NAME                                READY   STATUS        RESTARTS   AGE
nginx-deployment-7ff4cc65cd-5f85p   1/1     Running       0          47m
nginx-deployment-7ff4cc65cd-lm782   1/1     Terminating   0          8m32s
nginx-deployment-7ff4cc65cd-nrhl7   1/1     Running       0          47m

The middle pod that was started earlier is now terminating.

Set NoSchedule.

$ kubectl taint nodes master01.example.jp node-role.kubernetes.io/master:NoSchedule
node/master01.example.jp tainted
$ kubectl describe node master01 | grep Taints    #Confirm
Taints:             node-role.kubernetes.io/master:NoSchedule

NoSchedule is set again, and pods no longer run on the Master Node.

Finally

I was able to build a Kubernetes cluster environment with Raspberry Pis. It was a fun project, and compared with buying more servers or PCs it is a great deal. Now all that is left is to play with it.

Reference URL

The articles I used as references are linked in the body of the text above.
