OpenShift 4.6 UPI install on bare metal

Overview

Hello to everyone in the home-lab Kubernetes / OpenShift camp! I installed OpenShift 4.6 on bare metal using UPI (user-provisioned infrastructure), so here are the steps.

Please also refer to the following slides, presented at the Slightly Early Year-End Party! OpenShift.Run Winter 2020 #11 (connpass): OpenShift from Easy Way to Hard(?) Way - Speaker Deck

The talk was also streamed on YouTube, so if you just want a rough overview of the procedure, watch the video below (the link jumps to the start of my talk): Slightly Early Year-End Party! OpenShift.Run Winter 2020 - YouTube

Reference documents

- English documentation on openshift.com

References

- Install OpenShift 4.1 on bare metal UPI - Red Hat Engineer Blog: I think most of the UPI installation write-ups on the net are based on this one; it is the original article in this field, and this procedure refers to it as well.

- Install OpenShift 4.1 on Intel's ultra-small PC "NUC" - Qiita: that environment is also built on ESXi, so I used it as a reference.

Diagram

I have ESXi 7.0 on bare metal in my home network. The bastion server (bastion) and OpenShift nodes are built as VMs.

env-00.png

Bare metal specs

Item     Value
CPU      Intel Core i7-8700K 3.7GHz 6core/12thread
RAM      64GB
STORAGE  HDD 2TB

1.1.1. Prerequisites

- OpenShift 4.6 can be installed on bare metal with user-provisioned infrastructure
- You need to configure your firewall to allow the sites that your cluster requires access to

1.1.2. OpenShift Container Platform Internet Access and Telemetry Access

The documentation states that you need internet access to install the cluster. Installation methods that do not require an active internet connection are described in another chapter (1.3. Installing a cluster on bare metal in a network-restricted environment 4.6 | Red Hat Customer Portal).

1.1.3. Cluster machine requirements when using user-provisioned infrastructure

1.1.3.1. Required machines

Check the machine types and counts. The required machines are:

- One temporary bootstrap machine
- Three control plane, or master, machines
- At least two compute machines (also known as worker machines)

The bootstrap machine can be removed after the cluster installation is complete.

Separate physical hosts are recommended for high availability, but this procedure uses VMs.

1.1.3.2. Network connection requirements

You need to provide the Ignition configuration files when the cluster machines boot.

In this procedure, the bastion server provides the following:

- Web server

1.1.3.3. Minimum resource requirements

Check the minimum resource requirements.

Node       OS     vCPU  RAM [GB]  Storage [GB]  Count
Bootstrap  RHCOS  4     16        120           1
Control    RHCOS  4     16        120           3
Compute    RHCOS  2     8         120           2
Total      -      20    80        720           6

To meet the above resource requirements, the following nodes are built in this verification environment.

One extra server (the bastion server) is added to satisfy the infrastructure requirements. DNS, DHCP, the load balancer, and the web server all run on it, and it also serves as the bastion host for accessing the cluster.

Node            OS     vCPU  RAM [GB]  Storage [GB]  Count
Bootstrap       RHCOS  4     16        120           1
Control         RHCOS  4     16        120           3
Compute         RHCOS  2     8         120           2
Bastion server  RHEL8  2     8         50            1
Total           -      22    88        770           7

The specifications of the bare metal prepared this time are as follows.

Node           OS    vCPU  RAM [GB]  Storage [GB]  Count
My bare metal  ESXi  12    64        2000          1

The physical CPU cores and installed memory fall short of these totals, so each node and the bastion server run as VMs in an overcommitted state. Still, as far as installing OpenShift and deploying the tutorial web app goes, everything appears to work fine.

Creating a VM

Create the following VMs in advance and make a note of their MAC addresses.

- A MAC address is assigned once you start the VM, and you can check it on the VM settings screen.
- Also set the following:
  - Enable "Expose hardware assisted virtualization to the guest OS" under Hardware virtualization
  - Enable "Enable virtualized CPU performance counters" under Performance counters

Wherever the MAC addresses below appear in this procedure, replace them with your own values.

VM name    MAC address
bootstrap  00:0c:29:64:c8:5f
master-0   00:0c:29:43:7b:f5
master-1   00:0c:29:5f:bf:43
master-2   00:0c:29:99:23:79
worker-0   00:0c:29:d7:84:ef
worker-1   00:0c:29:a4:a2:15

1.1.4. Creating a user-provisioned infrastructure

The documentation describes:

  1. Configure DHCP on each node or configure a static IP address.
  2. Provision the required load balancer.
  3. Set the machine port.
  4. Set up DNS.
  5. Check the network connection.

In this procedure, creating the infrastructure amounts to building the bastion server, which is set up in the following order:

- Subscription registration
- IPv6 disabled, IPv4 routing
- SELinux
- Stop unnecessary services
- NIC settings
- firewalld
- DNS and DHCP
- Load balancer
- Web server

Subscription registration

Check the subscription status first. If the status shows Subscribed, the system is already registered.
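
As a quick check, subscription-manager can list the attached subscriptions; look for "Status: Subscribed" in its output:

# Run as root; "Status: Subscribed" means the system is registered
subscription-manager list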

If it is not registered, register it with the subscription-manager register command.

[loft@bastion ~]$ subscription-manager register
Registering: subscription.rhsm.redhat.com:443/subscription
username: loftkun
password:
This system was registered with the following ID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Registered system name: bastion
[loft@bastion ~]$

IPv6 disabled, IPv4 routing

[root@bastion loft]# cat <<EOF > /etc/sysctl.d/99-custom.conf
> net.ipv6.conf.all.disable_ipv6 = 1
> net.ipv4.ip_forward = 1
> EOF
[root@bastion loft]# sysctl -p /etc/sysctl.d/99-custom.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv4.ip_forward = 1
[root@bastion loft]#

SELinux

[root@bastion loft]# setsebool -P httpd_read_user_content 1
[root@bastion loft]# setsebool -P haproxy_connect_any 1

Stop unnecessary services

[root@bastion loft]# systemctl disable avahi-daemon.service
Removed /etc/systemd/system/multi-user.target.wants/avahi-daemon.service.
Removed /etc/systemd/system/sockets.target.wants/avahi-daemon.socket.
Removed /etc/systemd/system/dbus-org.freedesktop.Avahi.service.
[root@bastion loft]# systemctl stop avahi-daemon*
[root@bastion loft]#
[root@bastion loft]# systemctl stop cups.service
[root@bastion loft]# systemctl disable cups.service
Removed /etc/systemd/system/multi-user.target.wants/cups.path.
Removed /etc/systemd/system/multi-user.target.wants/cups.service.
Removed /etc/systemd/system/sockets.target.wants/cups.socket.
Removed /etc/systemd/system/printer.target.wants/cups.service.
[root@bastion loft]#
[root@bastion loft]# systemctl stop rpcbind.service
Warning: Stopping rpcbind.service, but it can still be activated by:
  rpcbind.socket
[root@bastion loft]# systemctl stop rpcbind.socket
[root@bastion loft]# systemctl disable rpcbind.service
Removed /etc/systemd/system/multi-user.target.wants/rpcbind.service.
[root@bastion loft]# systemctl disable rpcbind.socket
Removed /etc/systemd/system/sockets.target.wants/rpcbind.socket.
[root@bastion loft]#
[root@bastion loft]# systemctl stop libvirtd.service
Warning: Stopping libvirtd.service, but it can still be activated by:
  libvirtd-admin.socket
  libvirtd.socket
  libvirtd-ro.socket
[root@bastion loft]# systemctl disable libvirtd.service
Removed /etc/systemd/system/multi-user.target.wants/libvirtd.service.
Removed /etc/systemd/system/sockets.target.wants/virtlogd.socket.
Removed /etc/systemd/system/sockets.target.wants/virtlockd.socket.
Removed /etc/systemd/system/sockets.target.wants/libvirtd.socket.
Removed /etc/systemd/system/sockets.target.wants/libvirtd-ro.socket.
[root@bastion loft]#

NIC settings

Set a fixed IP address for the NIC.

NIC     IP address        ESXi port group  ESXi virtual switch
ens192  192.168.3.101/24  VM Network       vSwitch0
ens224  172.16.0.1/24     OCP Network      vSwitch1

[root@bastion loft]# nmcli
ens192:Connected to ens192
        "VMware VMXNET3"
        ethernet (vmxnet3), 00:0C:29:66:82:2C, hw, mtu 1500
ip4 default
        inet4 192.168.3.101/24
        route4 192.168.3.0/24
        route4 0.0.0.0/0

ens224:Connected to ens224
        "VMware VMXNET3"
        ethernet (vmxnet3), 00:0C:29:66:82:36, hw, mtu 1500
        inet4 172.16.0.1/24
        route4 172.16.0.0/24

lo:No management
        "lo"
        loopback (unknown), 00:00:00:00:00:00, sw, mtu 65536

DNS configuration:
        servers: 192.168.3.1
        interface: ens192

Use "nmcli device show" to get complete information about known devices.
Use "nmcli connection show" to get an overview of active connection profiles.

Consult the nmcli(1) and nmcli-examples(7) manual pages for complete usage details.
[root@bastion loft]#

firewalld

[root@bastion loft]# firewall-cmd --get-active-zones
public
  interfaces: ens192 ens224
[root@bastion loft]# firewall-cmd --set-default-zone=trusted
success
[root@bastion loft]#
[root@bastion loft]#
[root@bastion loft]# firewall-cmd --get-active-zones
trusted
  interfaces: ens192 ens224
[root@bastion loft]# firewall-cmd --add-masquerade --zone=trusted --permanent
success
[root@bastion loft]# firewall-cmd --reload
success
[root@bastion loft]# firewall-cmd --get-active-zones
trusted
  interfaces: ens224 ens192
[root@bastion loft]# firewall-cmd --list-all --permanent --zone=trusted
trusted
  target: ACCEPT
  icmp-block-inversion: no
  interfaces:
  sources:
  services:
  ports:
  protocols:
  masquerade: yes
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

[root@bastion loft]#
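
To double-check the runtime settings, firewall-cmd can query the masquerade flag directly; the following should print yes:

firewall-cmd --query-masquerade --zone=trusted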

DNS and DHCP

Add the following to /etc/dnsmasq.conf.

port=53
domain-needed
bogus-priv
resolv-file=/etc/resolv.dnsmasq
no-poll
address=/apps.test.example.local/172.16.0.1
#user=dnsmasq
#group=dnsmasq
no-dhcp-interface=ens192
expand-hosts
domain=test.example.local
dhcp-range=172.16.0.100,172.16.0.200,255.255.255.0,12h
dhcp-host=00:0c:29:64:c8:5f,bootstrap,172.16.0.100
dhcp-host=00:0c:29:43:7b:f5,master-0,172.16.0.101
dhcp-host=00:0c:29:5f:bf:43,master-1,172.16.0.102
dhcp-host=00:0c:29:99:23:79,master-2,172.16.0.103
dhcp-host=00:0c:29:d7:84:ef,worker-0,172.16.0.104
dhcp-host=00:0c:29:a4:a2:15,worker-1,172.16.0.105
dhcp-option=option:dns-server,172.16.0.1
dhcp-option=option:netmask,255.255.255.0
dhcp-leasefile=/var/lib/dnsmasq/dnsmasq.leases
srv-host=_etcd-server-ssl._tcp.test.example.local,etcd-0.test.example.local,2380,0,10
srv-host=_etcd-server-ssl._tcp.test.example.local,etcd-1.test.example.local,2380,0,10
srv-host=_etcd-server-ssl._tcp.test.example.local,etcd-2.test.example.local,2380,0,10
log-dhcp
log-facility=/var/log/dnsmasq.log
#conf-dir=/etc/dnsmasq.d,.rpmnew,.rpmsave,.rpmorig

Edit /etc/resolv.conf and create a new /etc/resolv.dnsmasq.

[root@bastion loft]# cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 8.8.8.8
[root@bastion loft]# vim /etc/resolv.conf
[root@bastion loft]# cat /etc/resolv.conf
# Generated by NetworkManager
#nameserver 8.8.8.8
nameserver 127.0.0.1
[root@bastion loft]#

[root@bastion loft]# cat <<EOF > /etc/resolv.dnsmasq
> nameserver 8.8.8.8
> EOF
[root@bastion loft]# cat /etc/resolv.dnsmasq
nameserver 8.8.8.8
[root@bastion loft]#

Add the following to /etc/hosts.

192.168.3.101   api
172.16.0.1      api-int
172.16.0.101    etcd-0
172.16.0.102    etcd-1
172.16.0.103    etcd-2

172.16.0.100    bootstrap
172.16.0.101    master-0
172.16.0.102    master-1
172.16.0.103    master-2
172.16.0.104    worker-0
172.16.0.105    worker-1

Service start

[root@bastion loft]# systemctl enable dnsmasq.service
[root@bastion loft]# systemctl status dnsmasq.service
● dnsmasq.service - DNS caching server.
   Loaded: loaded (/usr/lib/systemd/system/dnsmasq.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Sun 2020-11-15 10:24:27 JST; 1min 27s ago
  Process: 16843 ExecStart=/usr/sbin/dnsmasq -k (code=exited, status=1/FAILURE)
 Main PID: 16843 (code=exited, status=1/FAILURE)

November 15 10:24:27 bastion systemd[1]: Started DNS caching server..
November 15 10:24:27 bastion dnsmasq[16843]: dnsmasq: illegal repeated keyword at line 678 of /etc/dnsmasq.conf
November 15 10:24:27 bastion dnsmasq[16843]: illegal repeated keyword at line 678 of /etc/dnsmasq.conf
November 15 10:24:27 bastion systemd[1]: dnsmasq.service: Main process exited, code=exited, status=1/FAILURE
November 15 10:24:27 bastion dnsmasq[16843]: FAILED to start up
November 15 10:24:27 bastion systemd[1]: dnsmasq.service: Failed with result 'exit-code'.
[root@bastion loft]# systemctl start dnsmasq.service
[root@bastion loft]# systemctl status dnsmasq.service
● dnsmasq.service - DNS caching server.
   Loaded: loaded (/usr/lib/systemd/system/dnsmasq.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2020-11-15 10:25:58 JST; 1s ago
 Main PID: 16863 (dnsmasq)
    Tasks: 1 (limit: 23860)
   Memory: 748.0K
   CGroup: /system.slice/dnsmasq.service
           └─16863 /usr/sbin/dnsmasq -k

November 15 10:25:58 bastion systemd[1]: Started DNS caching server..
[root@bastion loft]#
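
Before moving on, it is worth confirming that dnsmasq answers the records the cluster will need. A spot check with dig (assuming the bind-utils package is installed) could look like this:

# Node record (expand-hosts + domain= turns the /etc/hosts entries into FQDNs)
dig +short @172.16.0.1 master-0.test.example.local

# Wildcard Ingress apps domain (served by the address=/apps.../ line)
dig +short @172.16.0.1 foo.apps.test.example.local

# etcd SRV records (served by the srv-host lines)
dig +short @172.16.0.1 -t SRV _etcd-server-ssl._tcp.test.example.local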

Load balancer

[root@bastion loft]# yum install haproxy
Updating Subscription Management repositories.
Last metadata expiration check: 0:32:35 ago on Sun 15 Nov 2020 09:55:33.
Dependencies resolved.

(Abbreviation)
Installed:
  haproxy-1.8.23-5.el8.x86_64

Complete!
[root@bastion loft]#

Add the following to /etc/haproxy/haproxy.cfg. Comment out existing settings (frontend main, backend static, backend app).

frontend K8s-api
    bind *:6443
    option tcplog
    mode tcp
    default_backend     api-6443

frontend Machine-config
    bind *:22623
    option tcplog
    mode tcp
    default_backend     config-22623

frontend Ingress-http
    bind *:80
    option tcplog
    mode tcp
    default_backend http-80

frontend Ingress-https
    bind *:443
    option tcplog
    mode tcp
    default_backend     https-443


backend api-6443
    mode tcp
    balance     roundrobin
    option  ssl-hello-chk 
    server  bootstrap bootstrap.test.example.local:6443 check
    server  master-0 master-0.test.example.local:6443 check
    server  master-1 master-1.test.example.local:6443 check
    server  master-2 master-2.test.example.local:6443 check

backend config-22623
    mode tcp
    balance     roundrobin
    server  bootstrap bootstrap.test.example.local:22623 check
    server  master-0 master-0.test.example.local:22623 check
    server  master-1 master-1.test.example.local:22623 check
    server  master-2 master-2.test.example.local:22623 check

backend http-80
    mode tcp
    balance     roundrobin
    server  worker-0 worker-0.test.example.local:80 check
    server  worker-1 worker-1.test.example.local:80 check

backend https-443
    mode tcp
    balance     roundrobin
    option      ssl-hello-chk
    server  worker-0 worker-0.test.example.local:443 check
    server  worker-1 worker-1.test.example.local:443 check
[root@bastion loft]# systemctl enable haproxy.service
Created symlink /etc/systemd/system/multi-user.target.wants/haproxy.service → /usr/lib/systemd/system/haproxy.service.
[root@bastion loft]# systemctl status haproxy.service
● haproxy.service - HAProxy Load Balancer
   Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled; vendor preset: disabled)
   Active: inactive (dead)
[root@bastion loft]# systemctl start haproxy.service
[root@bastion loft]# systemctl status haproxy.service
● haproxy.service - HAProxy Load Balancer
   Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2020-11-15 10:44:33 JST; 2s ago
  Process: 12836 ExecStartPre=/usr/sbin/haproxy -f $CONFIG -c -q $OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 12838 (haproxy)
    Tasks: 2 (limit: 23860)
   Memory: 3.2M
   CGroup: /system.slice/haproxy.service
           ├─12838 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
           └─12839 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid

November 15 10:44:32 bastion systemd[1]: Starting HAProxy Load Balancer...
November 15 10:44:33 bastion haproxy[12838]: [WARNING] 319/104433 (12838) : config : 'option forwardfor' ignored for frontend 'K8s-api' as it requires HTTP mode.
November 15 10:44:33 bastion haproxy[12838]: [WARNING] 319/104433 (12838) : config : 'option forwardfor' ignored for frontend 'Machine-config' as it requires HTTP mode.
November 15 10:44:33 bastion haproxy[12838]: [WARNING] 319/104433 (12838) : config : 'option forwardfor' ignored for frontend 'Ingress-http' as it requires HTTP mode.
November 15 10:44:33 bastion haproxy[12838]: [WARNING] 319/104433 (12838) : config : 'option forwardfor' ignored for frontend 'Ingress-https' as it requires HTTP mode.
November 15 10:44:33 bastion haproxy[12838]: [WARNING] 319/104433 (12838) : config : 'option forwardfor' ignored for backend 'api-6443' as it requires HTTP mode.
November 15 10:44:33 bastion haproxy[12838]: [WARNING] 319/104433 (12838) : config : 'option forwardfor' ignored for backend 'config-22623' as it requires HTTP mode.
November 15 10:44:33 bastion haproxy[12838]: [WARNING] 319/104433 (12838) : config : 'option forwardfor' ignored for backend 'http-80' as it requires HTTP mode.
November 15 10:44:33 bastion haproxy[12838]: [WARNING] 319/104433 (12838) : config : 'option forwardfor' ignored for backend 'https-443' as it requires HTTP mode.
November 15 10:44:33 bastion systemd[1]: Started HAProxy Load Balancer.
[root@bastion loft]#
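
The backends are all still down at this point, so the health checks will fail, but you can at least confirm that haproxy is listening on its four frontends (80, 443, 6443 and 22623):

ss -tlnp | grep haproxy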

Web server

[root@bastion loft]# yum install nginx
Updating Subscription Management repositories.
Last metadata expiration check: 0:50:51 ago on Sun 15 Nov 2020 09:55:33.
Dependencies resolved.

(Abbreviation)

Installed:
  nginx-1:1.14.1-9.module+el8.0.0+4108+af250afe.x86_64                                  nginx-all-modules-1:1.14.1-9.module+el8.0.0+4108+af250afe.noarch
  nginx-mod-stream-1:1.14.1-9.module+el8.0.0+4108+af250afe.x86_64                       nginx-mod-http-image-filter-1:1.14.1-9.module+el8.0.0+4108+af250afe.x86_64
  nginx-mod-http-xslt-filter-1:1.14.1-9.module+el8.0.0+4108+af250afe.x86_64             nginx-mod-http-perl-1:1.14.1-9.module+el8.0.0+4108+af250afe.x86_64
  nginx-filesystem-1:1.14.1-9.module+el8.0.0+4108+af250afe.noarch                       nginx-mod-mail-1:1.14.1-9.module+el8.0.0+4108+af250afe.x86_64

Complete!
[root@bastion loft]#

Edit /etc/nginx/nginx.conf at the four places marked # edit for ocp.

    server {
        listen        8008 default_server;     # edit for ocp
        #listen       80 default_server;       # edit for ocp
        #listen       [::]:80 default_server;  # edit for ocp
        server_name  _;
        root         /usr/share/nginx/html;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
        }

        error_page 404 /404.html;
            location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
            location = /50x.html {
        }
        disable_symlinks off;   # edit for ocp
    }
[root@bastion loft]# systemctl enable nginx
Created symlink /etc/systemd/system/multi-user.target.wants/nginx.service → /usr/lib/systemd/system/nginx.service.
[root@bastion loft]# systemctl status nginx
● nginx.service - The nginx HTTP and reverse proxy server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
   Active: inactive (dead)

November 15 10:46:32 bastion systemd[1]: nginx.service: Unit cannot be reloaded because it is inactive.
[root@bastion loft]# systemctl start nginx
[root@bastion loft]# systemctl status nginx
● nginx.service - The nginx HTTP and reverse proxy server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2020-11-15 10:52:13 JST; 2s ago
  Process: 13143 ExecStart=/usr/sbin/nginx (code=exited, status=0/SUCCESS)
  Process: 13140 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=0/SUCCESS)
  Process: 13139 ExecStartPre=/usr/bin/rm -f /run/nginx.pid (code=exited, status=0/SUCCESS)
 Main PID: 13144 (nginx)
    Tasks: 3 (limit: 23860)
   Memory: 8.0M
   CGroup: /system.slice/nginx.service
           ├─13144 nginx: master process /usr/sbin/nginx
           ├─13145 nginx: worker process
           └─13146 nginx: worker process

November 15 10:52:13 bastion systemd[1]: Starting The nginx HTTP and reverse proxy server...
November 15 10:52:13 bastion nginx[13140]: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
November 15 10:52:13 bastion nginx[13140]: nginx: configuration file /etc/nginx/nginx.conf test is successful
November 15 10:52:13 bastion systemd[1]: Started The nginx HTTP and reverse proxy server.
[root@bastion loft]#
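
As a quick sanity check that nginx answers on the non-default port 8008 (the default test page should return a 200):

curl -sI http://172.16.0.1:8008/ | head -n 1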

1.1.5. Generate SSH private key and add to agent

[root@bastion loft]# ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/new_rsa
Generating public/private rsa key pair.
Created directory '/root/.ssh'.
Your identification has been saved in /root/.ssh/new_rsa.
Your public key has been saved in /root/.ssh/new_rsa.pub.
The key fingerprint is:
SHA256:KED2BrIIaokKYkQjRKohAbvFX55ZoPcWykjTCSxuTl0 root@bastion
The key's randomart image is:
+---[RSA 4096]----+
|@*+.. .          |
|OO+o.+Eo         |
|%=+o*.= o        |
|O++=.B B .       |
|++  + B S        |
|  .  . .         |
|                 |
|                 |
|                 |
+----[SHA256]-----+
[root@bastion loft]# ls -Fla ~/.ssh/new_rsa
-rw-------.1 root root 3381 November 15 10:58 /root/.ssh/new_rsa
[root@bastion loft]#
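
The "add to agent" half of this section is not shown above; following the usual procedure in the official docs, it would look like this:

# Start ssh-agent in the current shell and add the new key to it
eval "$(ssh-agent -s)"
ssh-add /root/.ssh/new_rsa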

1.1.6. Obtaining the installation program

Download them from the following page.

Install OpenShift 4 | Red Hat OpenShift Cluster Manager | Bare Metal User-Provisioned Infrastructure

Download the following files with a browser on the bastion server.

Type                    File name
OpenShift installer     openshift-install-linux.tar.gz
Pull secret             pull-secret.txt
Command line interface  openshift-client-linux.tar.gz

The files are stored in /root/ocp.

[root@bastion loft]# mkdir /root/ocp
[root@bastion loft]# mv /home/${USER}/Downloads/openshift-install-linux.tar.gz /root/ocp/
[root@bastion loft]# mv /home/${USER}/Downloads/pull-secret.txt /root/ocp/
[root@bastion loft]# mv /home/${USER}/Downloads/openshift-client-linux.tar.gz /root/ocp/
[root@bastion loft]# cd /root/ocp/
[root@bastion ocp]# tar xvf openshift-install-linux.tar.gz
[root@bastion ocp]# tar zxvf openshift-client-linux.tar.gz
[root@bastion ocp]# ls -Fla
total 598976
drwxr-xr-x.2 root root 171 November 15 11:48 ./
dr-xr-x---.7 root root 264 November 15 11:06 ../
-rw-r--r--.1 root root 954 November 1 01:23 README.md
-rwxr-xr-x.2 root root 74528264 November 1 01:23 kubectl*
-rwxr-xr-x.2 root root 74528264 November 1 01:23 oc*
-rw-rw-r--.1 loft loft 24316569 November 15 11:19 openshift-client-linux.tar.gz
-rwxr-xr-x.1 root root 353038336 November 1 01:23 openshift-install*
-rw-r--r--.1 loft loft 86925136 November 15 11:02 openshift-install-linux.tar.gz
-rw-r--r--.1 loft loft 2767 November 15 11:02 pull-secret.txt
[root@bastion ocp]#

Store the RHCOS ISO in the ESXi datastore. This procedure uses rhcos-installer.x86_64.iso. (It seems the version number is no longer included in the ISO file name.)

You can also download rhcos-metal.x86_64.raw.gz via the Download RHCOS RAW button, but it is not used in this procedure. (It seems the raw image is no longer needed.)

1.1.7. Installing the CLI by downloading the binary

This was already done in the previous section.

1.1.8. Manual creation of installation configuration file

Check the values of pullSecret and sshKey.

[root@bastion ocp]# cat pull-secret.txt
{"auths":{"cloud.openshift.com": (Abbreviation),"email":"xxxxxx"}}}
[root@bastion ocp]# cat ~/.ssh/new_rsa.pub
ssh-rsa (Abbreviation) == root@bastion
[root@bastion ocp]#

Create an installation directory (here /root/ocp/bare-metal) and create install-config.yaml in it.

Rewrite the values of pullSecret and sshKey with your own. (A templating sketch follows at the end of this section.)

[root@bastion ocp]# mkdir /root/ocp/bare-metal
[root@bastion ocp]# cd /root/ocp/bare-metal
[root@bastion bare-metal]# cat <<EOF > install-config.yaml
> apiVersion: v1
> baseDomain: example.local
> compute:
> - hyperthreading: Enabled
>   name: worker
>   replicas: 0
> controlPlane:
>   hyperthreading: Enabled
>   name: master
>   replicas: 3
> metadata:
>   name: test
> networking:
>   clusterNetworks:
>   - cidr: 10.128.0.0/14
>     hostPrefix: 23
>   networkType: OpenShiftSDN
>   serviceNetwork:
>   - 172.30.0.0/16
> platform:
>   none: {}
> fips: false
> pullSecret: '{"auths":{"cloud.openshift.com": (Abbreviation),"email":"xxxxxx"}}}'
> sshKey: 'ssh-rsa (Abbreviation) == root@bastion'
> EOF
[root@bastion bare-metal]# cat install-config.yaml
apiVersion: v1
baseDomain: example.local
compute:
- hyperthreading: Enabled
  name: worker
  replicas: 0
controlPlane:
  hyperthreading: Enabled
  name: master
  replicas: 3
metadata:
  name: test
networking:
  clusterNetworks:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
fips: false
pullSecret: '{"auths":{"cloud.openshift.com": (Abbreviation),"email":"xxxxxx"}}}'
sshKey: 'ssh-rsa (Abbreviation) == root@bastion'
[root@bastion bare-metal]#
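
Instead of pasting the pullSecret and sshKey values by hand, you can splice them in with command substitution inside the unquoted heredoc; a sketch (assuming pull-secret.txt is the single-line JSON downloaded earlier):

cat <<EOF > install-config.yaml
... (same content as above, but ending with these two lines) ...
pullSecret: '$(cat /root/ocp/pull-secret.txt)'
sshKey: '$(cat /root/.ssh/new_rsa.pub)'
EOF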

1.1.9. Three-node cluster configuration

No work is required here.

1.1.10. Creating Kubernetes manifest and Ignition configuration files

Create a manifest.

[root@bastion ocp]# pwd
/root/ocp
[root@bastion ocp]#  ./openshift-install create manifests --dir=bare-metal
INFO Consuming Install Config from target directory
WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings
INFO Manifests created in: bare-metal/manifests and bare-metal/openshift
[root@bastion ocp]#
[root@bastion ocp]# ls -Fla bare-metal/
total 184
drwxr-xr-x.4 root root 107 November 15 12:14 ./
drwxr-xr-x.3 root root 189 November 15 11:54 ../
-rw-r--r--.1 root root 24084 November 15 12:14 .openshift_install.log
-rw-r-----.1 root root 155141 November 15 12:14 .openshift_install_state.json
drwxr-x---.2 root root 4096 November 15 12:14 manifests/
drwxr-x---.2 root root 4096 November 15 12:14 openshift/
[root@bastion ocp]# ls -Fla bare-metal/manifests/
total 96
drwxr-x---.2 root root 4096 November 15 12:14 ./
drwxr-xr-x.4 root root 107 November 15 12:14 ../
-rw-r-----.1 root root 169 November 15 12:14 04-openshift-machine-config-operator.yaml
-rw-r-----.1 root root 1596 November 15 12:14 cluster-config.yaml
-rw-r-----.1 root root 147 November 15 12:14 cluster-dns-02-config.yml
-rw-r-----.1 root root 422 November 15 12:14 cluster-infrastructure-02-config.yml
-rw-r-----.1 root root 152 November 15 12:14 cluster-ingress-02-config.yml
-rw-r-----.1 root root 513 November 15 12:14 cluster-network-01-crd.yml
-rw-r-----.1 root root 272 November 15 12:14 cluster-network-02-config.yml
-rw-r-----.1 root root 142 November 15 12:14 cluster-proxy-01-config.yaml
-rw-r-----.1 root root 170 November 15 12:14 cluster-scheduler-02-config.yml
-rw-r-----.1 root root 264 November 15 12:14 cvo-overrides.yaml
-rw-r-----.1 root root 1335 November 15 12:14 etcd-ca-bundle-configmap.yaml
-rw-r-----.1 root root 3958 November 15 12:14 etcd-client-secret.yaml
-rw-r-----.1 root root 4009 November 15 12:14 etcd-metric-client-secret.yaml
-rw-r-----.1 root root 1359 November 15 12:14 etcd-metric-serving-ca-configmap.yaml
-rw-r-----.1 root root 3917 November 15 12:14 etcd-metric-signer-secret.yaml
-rw-r-----.1 root root 156 November 15 12:14 etcd-namespace.yaml
-rw-r-----.1 root root 334 November 15 12:14 etcd-service.yaml
-rw-r-----.1 root root 1336 November 15 12:14 etcd-serving-ca-configmap.yaml
-rw-r-----.1 root root 3890 November 15 12:14 etcd-signer-secret.yaml
-rw-r-----.1 root root 118 November 15 12:14 kube-cloud-config.yaml
-rw-r-----.1 root root 1304 November 15 12:14 kube-system-configmap-root-ca.yaml
-rw-r-----.1 root root 4042 November 15 12:14 machine-config-server-tls-secret.yaml
-rw-r-----.1 root root 3841 November 15 12:14 openshift-config-secret-pull-secret.yaml
[root@bastion ocp]# ls -Fla bare-metal/openshift/
total 28
drwxr-x---.2 root root 4096 November 15 12:14 ./
drwxr-xr-x.4 root root 107 November 15 12:14 ../
-rw-r-----.1 root root 181 November 15 12:14 99_kubeadmin-password-secret.yaml
-rw-r-----.1 root root 2458 November 15 12:14 99_openshift-cluster-api_master-user-data-secret.yaml
-rw-r-----.1 root root 2458 November 15 12:14 99_openshift-cluster-api_worker-user-data-secret.yaml
-rw-r-----.1 root root 1140 November 15 12:14 99_openshift-machineconfig_99-master-ssh.yaml
-rw-r-----.1 root root 1140 November 15 12:14 99_openshift-machineconfig_99-worker-ssh.yaml
-rw-r-----.1 root root 173 November 15 12:14 openshift-install-manifests.yaml
[root@bastion ocp]#

Set mastersSchedulable to false.

[root@bastion ocp]# vim bare-metal/manifests/cluster-scheduler-02-config.yml
[root@bastion ocp]# cat bare-metal/manifests/cluster-scheduler-02-config.yml
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  creationTimestamp: null
  name: cluster
spec:
  mastersSchedulable: false
  policy:
    name: ""
status: {}
[root@bastion ocp]#

Create the Ignition configuration files.

[root@bastion ocp]# ./openshift-install create ignition-configs --dir=bare-metal
INFO Consuming Common Manifests from target directory
INFO Consuming Master Machines from target directory
INFO Consuming Openshift Manifests from target directory
INFO Consuming OpenShift Install (Manifests) from target directory
INFO Consuming Worker Machines from target directory
INFO Ignition-Configs created in: bare-metal and bare-metal/auth
[root@bastion ocp]# ls -Fla bare-metal/
total 1632
drwxr-xr-x.3 root root 163 November 15 12:20 ./
drwxr-xr-x.3 root root 189 November 15 11:54 ../
-rw-r--r--.1 root root 73257 November 15 12:20 .openshift_install.log
-rw-r-----.1 root root 1232701 November 15 12:20 .openshift_install_state.json
drwxr-x---.2 root root 50 November 15 12:20 auth/
-rw-r-----.1 root root 291737 November 15 12:20 bootstrap.ign
-rw-r-----.1 root root 1720 November 15 12:20 master.ign
-rw-r-----.1 root root 96 November 15 12:20 metadata.json
-rw-r-----.1 root root 1720 November 15 12:20 worker.ign
[root@bastion ocp]# ls -Fla bare-metal/auth/
total 16
drwxr-x---.2 root root 50 November 15 12:20 ./
drwxr-xr-x.3 root root 163 November 15 12:20 ../
-rw-r-----.1 root root 23 November 15 12:20 kubeadmin-password
-rw-r-----.1 root root 8948 November 15 12:20 kubeconfig
[root@bastion ocp]#

Place them in the nginx public directory.

[root@bastion ocp]# mkdir /usr/share/nginx/html/ocp/rhcos/ignitions/ -p
[root@bastion ocp]# cp ./bare-metal/*.ign /usr/share/nginx/html/ocp/rhcos/ignitions/
[root@bastion ocp]# ls -Fla /usr/share/nginx/html/ocp/rhcos/ignitions/
total 296
drwxr-xr-x.2 root root 63 November 15 12:24 ./
drwxr-xr-x.4 root root 37 November 15 11:41 ../
-rw-r-----.1 root root 291737 November 15 12:24 bootstrap.ign
-rw-r-----.1 root root 1720 November 15 12:24 master.ign
-rw-r-----.1 root root 1720 November 15 12:24 worker.ign
[root@bastion ocp]#

The Ignition files contain a certificate that expires after 24 hours, so if your work stretches past 24 hours you will need to delete the bare-metal directory and create it again.
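
If you do run past the 24 hours, regeneration amounts to repeating the last two sections, roughly:

cd /root/ocp
rm -rf bare-metal && mkdir bare-metal
# Recreate install-config.yaml as in section 1.1.8, then:
./openshift-install create manifests --dir=bare-metal
# Set mastersSchedulable back to false in cluster-scheduler-02-config.yml, then:
./openshift-install create ignition-configs --dir=bare-metal
cp bare-metal/*.ign /usr/share/nginx/html/ocp/rhcos/ignitions/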

Make sure you can fetch the .ign files from nginx. If you cannot, fix the permissions on the nginx public directory.

[root@bastion ocp]# cd ~/
[root@bastion ~]# curl 172.16.0.1:8008/ocp/rhcos/ignitions/master.ign
<html>
<head><title>403 Forbidden</title></head>
<body bgcolor="white">
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.14.1</center>
</body>
</html>
[root@bastion ~]# chmod o+r /usr/share/nginx/html/ -R
[root@bastion ~]# curl 172.16.0.1:8008/ocp/rhcos/ignitions/master.ign
{"ignition":{"config": (Abbreviation) },"version":"3.1.0"}}[root@bastion ~]#
[root@bastion ~]#

1.1.11. Creating a Red Hat Enterprise Linux CoreOS (RHCOS) machine

1.1.11.1. Creating a Red Hat Enterprise Linux CoreOS (RHCOS) machine using an ISO image

Mount the ISO (rhcos-installer.x86_64.iso) on each VM.

Booting the bootstrap VM

Boot the bootstrap VM and confirm that CoreOS starts and reaches a command prompt. Pass the URL of bootstrap.ign to the coreos-installer install command.

# Run on the bootstrap VM
$ sudo coreos-installer install --ignition-url=http://172.16.0.1:8008/ocp/rhcos/ignitions/bootstrap.ign /dev/sda --insecure-ignition

# Reboot once "install complete" is displayed (note: it did not reboot automatically, so I rebooted manually)
$ sudo reboot

After the reboot, ssh into the bootstrap node from the bastion server and watch the progress of the bootstrap process. (You will see plenty of errors at first, but you don't need to worry much about errors in the early stages of bringup.)

[root@bastion loft]# ssh -i /root/.ssh/new_rsa core@172.16.0.100
Red Hat Enterprise Linux CoreOS 46.82.202010091720-0
  Part of OpenShift 4.6, RHCOS is a Kubernetes native operating system
  managed by the Machine Config Operator (`clusteroperator/machine-config`).

WARNING: Direct SSH access to machines is not recommended; instead,
make configuration changes via `machineconfig` objects:
  https://docs.openshift.com/container-platform/4.6/architecture/architecture-rhcos.html

---
This is the bootstrap node; it will be destroyed when the master is fully up.

The primary services are release-image.service followed by bootkube.service. To watch their status, run e.g.

  journalctl -b -f -u release-image.service -u bootkube.service
Last login: Sun Nov 15 04:12:21 2020 from 172.16.0.1
[core@bootstrap ~]$

[core@bootstrap ~]$ journalctl -b -f -u release-image.service -u bootkube.service
-- Logs begin at Sun 2020-11-15 04:04:49 UTC. --
Nov 15 04:08:37 bootstrap bootkube.sh[2455]: Created "configmap-admin-kubeconfig-client-ca.yaml" configmaps.v1./admin-kubeconfig-client-ca -n openshift-config
Nov 15 04:08:38 bootstrap bootkube.sh[2455]: Created "configmap-initial-etcd-serving-ca.yaml" configmaps.v1./initial-etcd-ca -n openshift-config
Nov 15 04:08:38 bootstrap bootkube.sh[2455]: [#102] failed to create some manifests:
Nov 15 04:08:38 bootstrap bootkube.sh[2455]: "99_openshift-machineconfig_99-master-ssh.yaml": unable to get REST mapping for "99_openshift-machineconfig_99-master-ssh.yaml": n          o matches for kind "MachineConfig" in version "machineconfiguration.openshift.io/v1"
Nov 15 04:08:38 bootstrap bootkube.sh[2455]: "99_openshift-machineconfig_99-worker-ssh.yaml": unable to get REST mapping for "99_openshift-machineconfig_99-worker-ssh.yaml": n          o matches for kind "MachineConfig" in version "machineconfiguration.openshift.io/v1"
Nov 15 04:08:38 bootstrap bootkube.sh[2455]: [#103] failed to create some manifests:
Nov 15 04:08:38 bootstrap bootkube.sh[2455]: "99_openshift-machineconfig_99-master-ssh.yaml": unable to get REST mapping for "99_openshift-machineconfig_99-master-ssh.yaml": n          o matches for kind "MachineConfig" in version "machineconfiguration.openshift.io/v1"
Nov 15 04:08:38 bootstrap bootkube.sh[2455]: "99_openshift-machineconfig_99-worker-ssh.yaml": unable to get REST mapping for "99_openshift-machineconfig_99-worker-ssh.yaml": n          o matches for kind "MachineConfig" in version "machineconfiguration.openshift.io/v1"
Nov 15 04:08:38 bootstrap bootkube.sh[2455]: Created "99_openshift-machineconfig_99-master-ssh.yaml" machineconfigs.v1.machineconfiguration.openshift.io/99-master-ssh -n
Nov 15 04:08:38 bootstrap bootkube.sh[2455]: Created "99_openshift-machineconfig_99-worker-ssh.yaml" machineconfigs.v1.machineconfiguration.openshift.io/99-worker-ssh -n

Booting the master VMs

In the same way, boot the master VMs one at a time.

Specify the URL of master.ign when installing.

# Run on each master VM
$ sudo coreos-installer install --ignition-url=http://172.16.0.1:8008/ocp/rhcos/ignitions/master.ign /dev/sda --insecure-ignition

# Reboot once "install complete" is displayed (note: it did not reboot automatically, so I rebooted manually)
$ sudo reboot

Booting the worker VMs

In the same way, boot the worker VMs one at a time.

Specify the URL of worker.ign when installing.

# Run on each worker VM
$ sudo coreos-installer install --ignition-url=http://172.16.0.1:8008/ocp/rhcos/ignitions/worker.ign /dev/sda --insecure-ignition

# Reboot once "install complete" is displayed (note: it did not reboot automatically, so I rebooted manually)
$ sudo reboot
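
To confirm that each VM picked up its reserved address, inspect the lease file configured in dnsmasq.conf on the bastion server:

# One line per lease: expiry epoch, MAC address, IP, hostname, client-id
cat /var/lib/dnsmasq/dnsmasq.leases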

1.1.12. Creating a cluster

You are done once "bootkube.service complete" appears in the bootstrap progress log. This took a little under an hour.

[core@bootstrap ~]$ journalctl -b -f -u release-image.service -u bootkube.service
-- Logs begin at Sun 2020-11-15 04:04:49 UTC. --
Nov 15 04:08:37 bootstrap bootkube.sh[2455]: Created "configmap-admin-kubeconfig-client-ca.yaml" configmaps.v1./admin-kubeconfig-client-ca -n openshift-config
Nov 15 04:08:38 bootstrap bootkube.sh[2455]: Created "configmap-initial-etcd-serving-ca.yaml" configmaps.v1./initial-etcd-ca -n openshift-config
Nov 15 04:08:38 bootstrap bootkube.sh[2455]: [#102] failed to create some manifests:
Nov 15 04:08:38 bootstrap bootkube.sh[2455]: "99_openshift-machineconfig_99-master-ssh.yaml": unable to get REST mapping for "99_openshift-machineconfig_99-master-ssh.yaml": n          o matches for kind "MachineConfig" in version "machineconfiguration.openshift.io/v1"
Nov 15 04:08:38 bootstrap bootkube.sh[2455]: "99_openshift-machineconfig_99-worker-ssh.yaml": unable to get REST mapping for "99_openshift-machineconfig_99-worker-ssh.yaml": n          o matches for kind "MachineConfig" in version "machineconfiguration.openshift.io/v1"
Nov 15 04:08:38 bootstrap bootkube.sh[2455]: [#103] failed to create some manifests:
Nov 15 04:08:38 bootstrap bootkube.sh[2455]: "99_openshift-machineconfig_99-master-ssh.yaml": unable to get REST mapping for "99_openshift-machineconfig_99-master-ssh.yaml": n          o matches for kind "MachineConfig" in version "machineconfiguration.openshift.io/v1"
Nov 15 04:08:38 bootstrap bootkube.sh[2455]: "99_openshift-machineconfig_99-worker-ssh.yaml": unable to get REST mapping for "99_openshift-machineconfig_99-worker-ssh.yaml": n          o matches for kind "MachineConfig" in version "machineconfiguration.openshift.io/v1"
Nov 15 04:08:38 bootstrap bootkube.sh[2455]: Created "99_openshift-machineconfig_99-master-ssh.yaml" machineconfigs.v1.machineconfiguration.openshift.io/99-master-ssh -n
Nov 15 04:08:38 bootstrap bootkube.sh[2455]: Created "99_openshift-machineconfig_99-worker-ssh.yaml" machineconfigs.v1.machineconfiguration.openshift.io/99-worker-ssh -n
(Abbreviation)
Nov 15 05:00:21 bootstrap bootkube.sh[17079]: Skipped "cvo-overrides.yaml" clusterversions.v1.config.openshift.io/version -n openshift-cluster-version as it already exists
Nov 15 05:00:21 bootstrap bootkube.sh[17079]: Skipped "etcd-ca-bundle-configmap.yaml" configmaps.v1./etcd-ca-bundle -n openshift-config as it already exists
Nov 15 05:00:22 bootstrap bootkube.sh[17079]: Skipped "etcd-client-secret.yaml" secrets.v1./etcd-client -n openshift-config as it already exists
Nov 15 05:00:22 bootstrap bootkube.sh[17079]: Skipped "etcd-metric-client-secret.yaml" secrets.v1./etcd-metric-client -n openshift-config as it already exists
Nov 15 05:00:22 bootstrap bootkube.sh[17079]: Skipped "etcd-metric-serving-ca-configmap.yaml" configmaps.v1./etcd-metric-serving-ca -n openshift-config as it already exists
Nov 15 05:00:23 bootstrap bootkube.sh[17079]: Sending bootstrap-finished event.Tearing down temporary bootstrap control plane...
Nov 15 05:00:24 bootstrap bootkube.sh[17079]: Waiting for CEO to finish...
Nov 15 05:00:29 bootstrap bootkube.sh[17079]: I1115 05:00:29.772499       1 waitforceo.go:67] waiting on condition EtcdRunningInCluster in etcd CR /cluster to be True.
Nov 15 05:00:54 bootstrap bootkube.sh[17079]: I1115 05:00:54.071938       1 waitforceo.go:67] waiting on condition EtcdRunningInCluster in etcd CR /cluster to be True.
Nov 15 05:01:01 bootstrap bootkube.sh[17079]: I1115 05:01:01.076028       1 waitforceo.go:67] waiting on condition EtcdRunningInCluster in etcd CR /cluster to be True.
Nov 15 05:02:31 bootstrap bootkube.sh[17079]: I1115 05:02:31.511993       1 waitforceo.go:67] waiting on condition EtcdRunningInCluster in etcd CR /cluster to be True.
Nov 15 05:02:51 bootstrap bootkube.sh[17079]: I1115 05:02:51.213273       1 waitforceo.go:67] waiting on condition EtcdRunningInCluster in etcd CR /cluster to be True.
Nov 15 05:03:19 bootstrap bootkube.sh[17079]: I1115 05:03:19.621074       1 waitforceo.go:67] waiting on condition EtcdRunningInCluster in etcd CR /cluster to be True.
Nov 15 05:03:31 bootstrap bootkube.sh[17079]: I1115 05:03:31.838247       1 waitforceo.go:67] waiting on condition EtcdRunningInCluster in etcd CR /cluster to be True.
Nov 15 05:03:31 bootstrap bootkube.sh[17079]: I1115 05:03:31.891774       1 waitforceo.go:67] waiting on condition EtcdRunningInCluster in etcd CR /cluster to be True.
Nov 15 05:03:40 bootstrap bootkube.sh[17079]: I1115 05:03:40.380992       1 waitforceo.go:64] Cluster etcd operator bootstrapped successfully
Nov 15 05:03:40 bootstrap bootkube.sh[17079]: I1115 05:03:40.381071       1 waitforceo.go:58] cluster-etcd-operator bootstrap etcd
Nov 15 05:03:40 bootstrap bootkube.sh[17079]: bootkube.service complete

Run the wait-for bootstrap-complete command on the bastion server. If you see "It is now safe to remove the bootstrap resources", you are good.

[root@bastion ocp]# ./openshift-install --dir=bare-metal wait-for bootstrap-complete --log-level=debug
DEBUG OpenShift Installer 4.6.3
DEBUG Built from commit a4f0869e0d2a5b2d645f0f28ef9e4b100fa8f779
INFO Waiting up to 20m0s for the Kubernetes API at https://api.test.example.local:6443...
INFO API v1.19.0+9f84db3 up
INFO Waiting up to 30m0s for bootstrapping to complete...
DEBUG Bootstrap status: complete
INFO It is now safe to remove the bootstrap resources
INFO Time elapsed: 0s
[root@bastion ocp]#
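
At this point the bootstrap VM can be shut down and taken out of the load balancer rotation. One way (a sketch) is to comment out its two backend lines and reload haproxy:

# Comment out the "server  bootstrap ..." lines in the api-6443 and
# config-22623 backends, then reload haproxy
sed -i '/server  bootstrap /s/^/#/' /etc/haproxy/haproxy.cfg
systemctl reload haproxy.service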

1.1.13. Login to the cluster

[root@bastion ocp]# pwd
/root/ocp
[root@bastion ocp]# export KUBECONFIG=bare-metal/auth/kubeconfig
[root@bastion ocp]# ./oc -h
OpenShift Client

This client helps you develop, build, deploy, and run your applications on any
OpenShift or Kubernetes cluster. It also includes the administrative
commands for managing a cluster under the 'adm' subcommand.

Usage:
  oc [flags]

Basic Commands:
  login           Log in to a server
  new-project     Request a new project
  new-app         Create a new application
  status          Show an overview of the current project
  project         Switch to another project
  projects        Display existing projects
  explain         Documentation of resources

Build and Deploy Commands:
  rollout         Manage a Kubernetes deployment or OpenShift deployment config
  rollback        Revert part of an application back to a previous deployment
  new-build       Create a new build configuration
  start-build     Start a new build
  cancel-build    Cancel running, pending, or new builds
  import-image    Imports images from a container image registry
  tag             Tag existing images into image streams

Application Management Commands:
  create          Create a resource from a file or from stdin.
  apply           Apply a configuration to a resource by filename or stdin
  get             Display one or many resources
  describe        Show details of a specific resource or group of resources
  edit            Edit a resource on the server
  set             Commands that help set specific features on objects
  label           Update the labels on a resource
  annotate        Update the annotations on a resource
  expose          Expose a replicated application as a service or route
  delete          Delete resources by filenames, stdin, resources and names, or by resources and label selector
  scale           Set a new size for a Deployment, ReplicaSet or Replication Controller
  autoscale       Autoscale a deployment config, deployment, replica set, stateful set, or replication controller
  secrets         Manage secrets
  serviceaccounts Manage service accounts in your project

Troubleshooting and Debugging Commands:
  logs            Print the logs for a container in a pod
  rsh             Start a shell session in a container.
  rsync           Copy files between local filesystem and a pod
  port-forward    Forward one or more local ports to a pod
  debug           Launch a new instance of a pod for debugging
  exec            Execute a command in a container
  proxy           Run a proxy to the Kubernetes API server
  attach          Attach to a running container
  run             Run a particular image on the cluster
  cp              Copy files and directories to and from containers.
  wait            Experimental: Wait for a specific condition on one or many resources.

Advanced Commands:
  adm             Tools for managing a cluster
  replace         Replace a resource by filename or stdin
  patch           Update field(s) of a resource using strategic merge patch
  process         Process a template into list of resources
  extract         Extract secrets or config maps to disk
  observe         Observe changes to resources and react to them (experimental)
  policy          Manage authorization policy
  auth            Inspect authorization
  convert         Convert config files between different API versions
  image           Useful commands for managing images
  registry        Commands for working with the registry
  idle            Idle scalable resources
  api-versions    Print the supported API versions on the server, in the form of "group/version"
  api-resources   Print the supported API resources on the server
  cluster-info    Display cluster info
  diff            Diff live version against would-be applied version
  kustomize       Build a kustomization target from a directory or a remote url.

Settings Commands:
  logout          End the current server session
  config          Modify kubeconfig files
  whoami          Return information about the current session
  completion      Output shell completion code for the specified shell (bash or zsh)

Other Commands:
  ex              Experimental commands under active development
  help            Help about any command
  plugin          Provides utilities for interacting with plugins.
  version         Print the client and server version information

Use "oc <command> --help" for more information about a given command.
Use "oc options" for a list of global command-line options (applies to all commands).
[root@bastion ocp]#
[root@bastion ocp]# ./oc whoami
system:admin
[root@bastion ocp]#

1.1.14. Approving the CSR of the machine

Immediately after startup I could see only the master nodes, not the workers. At first only a few CSRs are listed and none are Pending, so you may need to wait a bit (watch with watch ./oc get csr). After a while, Pending CSRs appear.

Run ./oc adm certificate approve <NAME> for each Pending CSR. New Pending CSRs keep appearing as you approve, so keep approving them as they come; a bulk-approve one-liner is shown below.
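
The official documentation also shows a one-liner that approves every pending CSR in one go:

./oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty ./oc adm certificate approve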

Once the approvals are done, the worker nodes become visible as well. Nodes showing NotReady became Ready after a while.

[root@bastion ocp]# ./oc get no
NAME       STATUS     ROLES    AGE   VERSION
master-0   Ready      master   63m   v1.19.0+9f84db3
master-1   Ready      master   36m   v1.19.0+9f84db3
master-2   Ready      master   31m   v1.19.0+9f84db3
worker-0   NotReady   worker   44s   v1.19.0+9f84db3
worker-1   NotReady   worker   62s   v1.19.0+9f84db3
[root@bastion ocp]#
[root@bastion ocp]# ./oc get csr
NAME        AGE     SIGNERNAME                                    REQUESTOR                                                                   CONDITION
csr-5wghd   34m     kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
csr-797h4   9m38s   kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
csr-8tlmj   30m     kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
csr-f2jdl   29m     kubernetes.io/kubelet-serving                 system:node:master-2                                                        Approved,Issued
csr-lbwcg   34m     kubernetes.io/kubelet-serving                 system:node:master-1                                                        Approved,Issued
csr-mfntj   9m13s   kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
csr-smzlr   62m     kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
csr-wj6xk   61m     kubernetes.io/kubelet-serving                 system:node:master-0                                                        Approved,Issued
[root@bastion ocp]#
[root@bastion ocp]#
[root@bastion ocp]# ./oc adm certificate approve csr-797h4
certificatesigningrequest.certificates.k8s.io/csr-797h4 approved
[root@bastion ocp]# ./oc adm certificate approve csr-mfntj
certificatesigningrequest.certificates.k8s.io/csr-mfntj approved
[root@bastion ocp]# ./oc adm certificate approve csr-dfj47
certificatesigningrequest.certificates.k8s.io/csr-dfj47 approved
certificatesigningrequest.certificates.k8s.io/csr-px8f9 approved
[root@bastion ocp]# ./oc get csr
NAME        AGE   SIGNERNAME                                    REQUESTOR                                                                   CONDITION
csr-5wghd   37m   kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
csr-797h4   12m   kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
csr-8tlmj   32m   kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
csr-dfj47   98s   kubernetes.io/kubelet-serving                 system:node:worker-1                                                        Approved,Issued
csr-f2jdl   32m   kubernetes.io/kubelet-serving                 system:node:master-2                                                        Approved,Issued
csr-lbwcg   37m   kubernetes.io/kubelet-serving                 system:node:master-1                                                        Approved,Issued
csr-mfntj   11m   kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
csr-px8f9   80s   kubernetes.io/kubelet-serving                 system:node:worker-0                                                        Approved,Issued
csr-smzlr   64m   kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
csr-wj6xk   64m   kubernetes.io/kubelet-serving                 system:node:master-0                                                        Approved,Issued
[root@bastion ocp]#

1.1.15. Operator initial settings

You are done once AVAILABLE is True and DEGRADED is False for every operator. This step also seems to take a few tens of minutes; see the polling tip below.
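
Rather than re-running the command by hand, it is easier to poll it:

watch -n 30 ./oc get clusteroperators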

[root@bastion ocp]# ./oc get clusteroperators
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.6.3     False       True          True       71m
cloud-credential                           4.6.3     True        False         False      88m
cluster-autoscaler                         4.6.3     True        False         False      68m
config-operator                            4.6.3     True        False         False      72m
console                                    4.6.3     False       True          True       102s
csi-snapshot-controller                    4.6.3     True        False         False      4m28s
dns                                        4.6.3     True        False         False      65m
etcd                                       4.6.3     True        False         False      41m
image-registry                             4.6.3     True        False         False      28m
ingress                                    4.6.3     True        False         False      7m41s
insights                                   4.6.3     True        False         False      71m
kube-apiserver                             4.6.3     True        True          True       30m
kube-controller-manager                    4.6.3     True        False         False      66m
kube-scheduler                             4.6.3     True        False         False      67m
kube-storage-version-migrator              4.6.3     True        False         False      7m4s
machine-api                                4.6.3     True        False         False      67m
machine-approver                           4.6.3     True        False         False      66m
machine-config                             4.6.3     True        False         False      66m
marketplace                                4.6.3     True        False         False      67m
monitoring                                           False       True          True       64m
network                                    4.6.3     True        False         False      73m
node-tuning                                4.6.3     True        False         False      70m
openshift-apiserver                        4.6.3     True        False         False      28m
openshift-controller-manager               4.6.3     True        False         False      66m
openshift-samples                          4.6.3     True        False         False      26m
operator-lifecycle-manager                 4.6.3     True        False         False      68m
operator-lifecycle-manager-catalog         4.6.3     True        False         False      66m
operator-lifecycle-manager-packageserver   4.6.3     True        False         False      2m51s
service-ca                                 4.6.3     True        False         False      71m
storage                                    4.6.3     True        False         False      72m
[root@bastion ocp]# ./oc get clusteroperators
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.6.3     False       False         True       74m
cloud-credential                           4.6.3     True        False         False      91m
cluster-autoscaler                         4.6.3     True        False         False      71m
config-operator                            4.6.3     True        False         False      74m
console                                    4.6.3     True        False         False      66s
csi-snapshot-controller                    4.6.3     True        False         False      7m11s
dns                                        4.6.3     True        False         False      68m
etcd                                       4.6.3     True        False         False      43m
image-registry                             4.6.3     True        False         False      31m
ingress                                    4.6.3     True        False         False      10m
insights                                   4.6.3     True        False         False      74m
kube-apiserver                             4.6.3     True        True          False      32m
kube-controller-manager                    4.6.3     True        False         False      68m
kube-scheduler                             4.6.3     True        False         False      70m
kube-storage-version-migrator              4.6.3     True        False         False      9m47s
machine-api                                4.6.3     True        False         False      70m
machine-approver                           4.6.3     True        False         False      69m
machine-config                             4.6.3     True        False         False      69m
marketplace                                4.6.3     True        False         False      70m
monitoring                                 4.6.3     True        False         False      31s
network                                    4.6.3     True        False         False      75m
node-tuning                                4.6.3     True        False         False      73m
openshift-apiserver                        4.6.3     True        False         False      31m
openshift-controller-manager               4.6.3     True        False         False      69m
openshift-samples                          4.6.3     True        False         False      29m
operator-lifecycle-manager                 4.6.3     True        False         False      70m
operator-lifecycle-manager-catalog         4.6.3     True        False         False      69m
operator-lifecycle-manager-packageserver   4.6.3     True        False         False      5m34s
service-ca                                 4.6.3     True        False         False      74m
storage                                    4.6.3     True        False         False      75m
[root@bastion ocp]#
[root@bastion ocp]# ./oc get clusteroperators
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.6.3     True        False         False      27s
cloud-credential                           4.6.3     True        False         False      95m
cluster-autoscaler                         4.6.3     True        False         False      75m
config-operator                            4.6.3     True        False         False      79m
console                                    4.6.3     True        False         False      5m22s
csi-snapshot-controller                    4.6.3     True        False         False      11m
dns                                        4.6.3     True        False         False      72m
etcd                                       4.6.3     True        False         False      48m
image-registry                             4.6.3     True        False         False      35m
ingress                                    4.6.3     True        False         False      14m
insights                                   4.6.3     True        False         False      78m
kube-apiserver                             4.6.3     True        False         False      37m
kube-controller-manager                    4.6.3     True        False         False      73m
kube-scheduler                             4.6.3     True        False         False      74m
kube-storage-version-migrator              4.6.3     True        False         False      14m
machine-api                                4.6.3     True        False         False      74m
machine-approver                           4.6.3     True        False         False      73m
machine-config                             4.6.3     True        False         False      73m
marketplace                                4.6.3     True        False         False      74m
monitoring                                 4.6.3     True        False         False      4m47s
network                                    4.6.3     True        False         False      80m
node-tuning                                4.6.3     True        False         False      77m
openshift-apiserver                        4.6.3     True        False         False      35m
openshift-controller-manager               4.6.3     True        False         False      73m
openshift-samples                          4.6.3     True        False         False      33m
operator-lifecycle-manager                 4.6.3     True        False         False      75m
operator-lifecycle-manager-catalog         4.6.3     True        False         False      73m
operator-lifecycle-manager-packageserver   4.6.3     True        False         False      9m50s
service-ca                                 4.6.3     True        False         False      78m
storage                                    4.6.3     True        False         False      79m
[root@bastion ocp]#

1.1.16. Completion of installation on user-provisioned infrastructure

If the message "Install complete!" is displayed along with access information such as the kubeadmin password, you are done.

[root@bastion ocp]# ./openshift-install --dir=bare-metal wait-for install-complete --log-level=info
INFO Waiting up to 40m0s for the cluster at https://api.test.example.local:6443 to initialize...
INFO Waiting up to 10m0s for the openshift-console route to be created...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/root/ocp/bare-metal/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.test.example.local
INFO Login to the console with user: "kubeadmin", and password: "hNfq3-eLfje-6Nw2N-eFMiN"
INFO Time elapsed: 2m26s
[root@bastion ocp]#

Checking the Web UI

Launch a browser on the bastion server and access https://console-openshift-console.apps.test.example.local. If the web console is displayed and you can log in, everything is working.

The user name and password are shown in the output of the wait-for install-complete command.

192.168.3.100_ui_.png

1.1.17. Next step

That completes the installation, thank you for your efforts. We've walked through the steps for a UPI install of OpenShift 4.6 on bare metal. I'm looking forward to putting Kubernetes / OpenShift to good use at home.
