As the title says; I just wanted to verify the actual behavior.
--The GCP environment is ready.
--gcloud is configured locally. (The gcloud command is ready to use.)
--The kubectl command is ready to use.
--A GKE cluster has been created.
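For reference, a cluster like the one assumed here can be created with something like the following (the cluster name, zone, and node count are my own placeholders, not values from the repository):
$ gcloud container clusters create my-cluster --zone asia-northeast1-a --num-nodes 3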
$ cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.5 LTS (Bionic Beaver)"
$ gcloud version
Google Cloud SDK 312.0.0
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.9", GitCommit:"4fb7ed12476d57b8437ada90b4f93b17ffaeed99", GitTreeState:"clean", BuildDate:"2020-07-15T16:18:16Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.12-gke.2502", GitCommit:"974eff7a63e05b7eb05c9aded92fae8a3ce14521", GitTreeState:"clean", BuildDate:"2020-10-19T17:01:32Z", GoVersion:"go1.13.15b4", Compiler:"gc", Platform:"linux/amd64"}
$ go version
go version go1.15.2 linux/amd64
https://github.com/sky0621/study-k8sOnGKE/tree/v0.1.0/try01
Golang
Set up a simple web server and, when it receives an OS signal (SIGTERM), have it write a log line (GOT_NOTIFY).
Also prepare a log with defer, and check whether the deferred log appears when the OS signal is received.
main.go
package main

import (
    "fmt"
    "net/http"
    "os"
    "os/signal"
    "syscall"
)

func main() {
    fmt.Println("APP_START")
    defer fmt.Println("DEFER")

    // Goroutine waiting for an OS signal (SIGTERM)
    go func() {
        fmt.Println("BEFORE_NOTIFY")
        q := make(chan os.Signal, 1)
        signal.Notify(q, syscall.SIGTERM)
        <-q
        fmt.Println("GOT_NOTIFY")
        os.Exit(-1)
    }()

    // Start up an ordinary HTTP server
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        if _, err := fmt.Fprint(w, "Hello"); err != nil {
            fmt.Printf("HANDLE_ERROR_OCCURRED: %+v", err)
        }
    })
    if err := http.ListenAndServe(":8080", nil); err != nil {
        fmt.Printf("SERVE_ERROR_OCCURRED: %+v", err)
    }

    fmt.Println("APP_END")
}
Dockerfile
An ordinary multi-stage build Dockerfile.
FROM golang:1.15 as builder
WORKDIR /app
COPY . .
RUN go mod download
RUN CGO_ENABLED=0 GOOS=linux go build -mod=readonly -v -o server
FROM gcr.io/distroless/base
COPY --from=builder /app/server /server
CMD ["/server"]
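To sanity-check the image locally before going through Cloud Build, something like the following should work (the local tag is an arbitrary choice of mine):
$ docker build -t golang-app-try01 .
$ docker run --rm -p 8080:8080 golang-app-try01
APP_START
BEFORE_NOTIFY
and curl localhost:8080 should return Hello.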
Use Container Registry for the Docker image.
cloudbuild.yaml
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: [ 'build', '-t', 'gcr.io/$PROJECT_ID/golang-app-try01', '.' ]
images:
  - 'gcr.io/$PROJECT_ID/golang-app-try01'
The shell script that runs this build is below.
build.sh
#!/usr/bin/env bash
set -euxo pipefail
SCRIPT_DIR=$(dirname "$0")
cd "${SCRIPT_DIR}"
gcloud builds submit --config cloudbuild.yaml .
The Deployment pulls the Docker image from Container Registry. There are three pods. The container port is 8080 (although I won't use it this time).
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: golang-app-try01
spec:
  replicas: 3
  selector:
    matchLabels:
      app: golang-app-try01
  template:
    metadata:
      labels:
        app: golang-app-try01
    spec:
      containers:
        - name: golang-app-try01
          image: gcr.io/MY_GCP_PROJECT_ID/golang-app-try01
          ports:
            - containerPort: 8080
The shell script that deploys the above is below.
It needs the ID of the GCP project I'm using, which can be picked up from the gcloud command in my local environment.
Finding a way to specify the GCP project ID without writing it directly into the Kubernetes YAML looked like a hassle (it may be possible via ConfigMap or Secret, but I wanted something simple), so the script rewrites a placeholder with sed.
deploy.sh
#!/usr/bin/env bash
set -euxo pipefail
SCRIPT_DIR=$(dirname "$0")
cd "${SCRIPT_DIR}"
project=$(gcloud config get-value project)
if [[ -z "${project}" ]]; then
echo -n "need project"
exit 1
fi
echo "${project}"
sed -i -e "s/MY_GCP_PROJECT_ID/${project}/" deployment.yaml
kubectl apply -f deployment.yaml
sed -i -e "s/${project}/MY_GCP_PROJECT_ID/" deployment.yaml
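Incidentally, the substitution could also be piped straight into kubectl without touching the file, which would make the second sed (the one that restores the placeholder) unnecessary. A minimal sketch, using the same variable as the script above:
sed -e "s/MY_GCP_PROJECT_ID/${project}/" deployment.yaml | kubectl apply -f -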
The script below changes the number of replicas.
replica_n.sh
#!/usr/bin/env bash
set -euxo pipefail
SCRIPT_DIR=$(dirname "$0")
cd "${SCRIPT_DIR}"
num=${1:-}
if [ -z "${num}" ]; then
echo -n "input replicas number: "
read -r num
fi
kubectl scale deployment golang-app-try01 --replicas="${num}"
$ ./build.sh
++ dirname ./build.sh
+ SCRIPT_DIR=.
+ echo .
.
+ cd .
+ gcloud builds submit --config cloudbuild.yaml .
Creating temporary tarball archive of 6 file(s) totalling 1.7 KiB before compression.
...
DONE
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
ID CREATE_TIME DURATION SOURCE IMAGES STATUS
6452c516-cfbf-4497-b536-378023cbc34d 2020-11-03T19:29:14+00:00 29S gs://XXXXXXXX_cloudbuild/source/1604431752.38075-ccb069fbb0d0413382dc79d42e5c618a.tgz gcr.io/XXXXXXXX/golang-app-try01 (+1 more) SUCCESS
$ ./deploy.sh
++ dirname ./deploy.sh
+ SCRIPT_DIR=.
+ echo .
.
+ cd .
...
+ kubectl apply -f deployment.yaml
deployment.apps/golang-app-try01 created
...
There are 3 pods.
$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
golang-app-try01 3/3 3 3 4m19s
If you look at the container logs at this point, you can see that each of the three pods has logged the application start (APP_START) and the start of OS-signal standby (BEFORE_NOTIFY).
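One way to check, using the app label from deployment.yaml (individual pods can also be inspected with kubectl logs <pod-name>):
$ kubectl logs -l app=golang-app-try01
APP_START
BEFORE_NOTIFY
...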
$ ./replica_n.sh 0
++ dirname ./replica_n.sh
+ SCRIPT_DIR=.
+ echo .
.
+ cd .
+ num=0
+ '[' -z 0 ']'
+ kubectl scale deployment golang-app-try01 --replicas=0
deployment.apps/golang-app-try01 scaled
The log written when the OS signal was received (GOT_NOTIFY) appeared in each pod's logs.
The log prepared with defer (DEFER) does not appear. This makes sense: the signal-handling goroutine calls os.Exit, which terminates the process without running deferred functions.
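Note that once the Deployment is scaled to 0 the pods are gone, so kubectl logs no longer works; on GKE, though, the logs of terminated containers remain viewable in Cloud Logging, for example with something along these lines (the filter is a sketch of mine, not from the repository):
$ gcloud logging read 'resource.type="k8s_container" AND resource.labels.container_name="golang-app-try01"' --limit=10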
In short, on GKE (where Kubernetes sends SIGTERM to the container's main process when a pod is terminated, and SIGKILL after the grace period), work that must reliably run when the application stops should not be placed in a defer; instead, set up a separate goroutine that receives the OS signal (SIGTERM) and do the work there.
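Applying that advice, here is a minimal sketch (my own variation, not the code from the repository above) in which the cleanup, here a graceful http.Server.Shutdown, runs inside the SIGTERM goroutine rather than in a defer:
package main

import (
    "context"
    "fmt"
    "net/http"
    "os"
    "os/signal"
    "syscall"
    "time"
)

func main() {
    fmt.Println("APP_START")

    srv := &http.Server{Addr: ":8080"}
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprint(w, "Hello")
    })

    // The cleanup lives in the signal-handling goroutine, not in a defer:
    // os.Exit (or an eventual SIGKILL) would skip deferred functions.
    go func() {
        q := make(chan os.Signal, 1)
        signal.Notify(q, syscall.SIGTERM)
        <-q
        fmt.Println("GOT_NOTIFY")
        // Stop accepting new connections and drain in-flight requests,
        // giving up after 10 seconds (an arbitrary choice for this sketch).
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        if err := srv.Shutdown(ctx); err != nil {
            fmt.Printf("SHUTDOWN_ERROR_OCCURRED: %+v", err)
        }
    }()

    // ListenAndServe returns http.ErrServerClosed after a clean Shutdown,
    // so in that case execution falls through to APP_END.
    if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
        fmt.Printf("SERVE_ERROR_OCCURRED: %+v", err)
    }
    fmt.Println("APP_END")
}
With this shape, APP_END (and anything after it in main) actually runs on SIGTERM, as long as the work finishes within the pod's termination grace period (30 seconds by default).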