Good evening, everyone! I'm Hirayama, a first-year employee in charge of big data in the DoCoMo Service Innovation Department. Let me ask out of nowhere: do you like Kubernetes? I love it. Because of that love, this article introduces something I may well have been the first to come up with (I was dying to try it). I'm rather proud of it, because I think the idea is genuinely interesting. That said, I built it the moment I thought of it, so if I look at it again tomorrow it may not seem so clever... Technically, the article is mainly about EKS. I hope you enjoy it.
This is an article by an EKS lover, for EKS lovers, about building a thrilling (?) web omikuji (fortune-telling) app on EKS. The procedure is as follows.

1. Build the network with CFn (CloudFormation)
2. Prepare AWS Cloud9 as the operator environment and set it up
3. Deploy ECR (Elastic Container Registry) as the destination for container images
4. Deploy an EKS (Elastic Kubernetes Service) cluster
5. Deploy the strongest omikuji app I could think of onto EKS

It's exactly the lineup of AWS services that gets you labeled a "YAML engineer", which, people say, is what becomes of young engineers who fall for containers, IaC, and CI/CD.
Here is what it looks like when accessed from Google Chrome at ${DomainName}:${Port}.
Since it is designed to respond to HTTP GET requests, the result comes back even with curl and the like (the HTML is returned unformatted, as-is...).
One of four results is displayed.
The draw rates are set to Daikichi (great blessing, 10%), Chukichi (middle blessing, 20%), Shokichi (small blessing, 60%), and Kyo (bad luck, 10%).
Now you can check today's fortune!
What did you think of the architecture diagram above? Doesn't something look a little odd? That's right: the arrow extending from the ELB (Elastic Load Balancer) is labeled **"draw a fortune"**. If you're an infrastructure engineer who loves containers, you can probably already guess what is going on. In short, I'm using EKS in a slightly tricky way; as a hobby, I implemented an idea that I personally thought might be interesting.
Build the network with CFn. The parameters to enter are the ones defined in the Parameters section of the template below. At this point, carve out a generously sized CIDR [^1] for the convenience of EKS; the reason will be explained later.
cfn-base-resources.yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: omikuji-base-environment
Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
      -
        Label:
          default: "Project Name Prefix"
        Parameters:
          - PJPrefix
      -
        Label:
          default: "Data Center"
        Parameters:
          - AvailabilityZone1
          - AvailabilityZone2
          - VPCCIDR
          - PublicSubnet1CIDR
          - PublicSubnet2CIDR
    ParameterLabels:
      PJPrefix:
        default: "Project Prefix"
      AvailabilityZone1:
        default: "Availability Zone 1"
      AvailabilityZone2:
        default: "Availability Zone 2"
      VPCCIDR:
        default: "VPC CIDR"
      PublicSubnet1CIDR:
        default: "Public Subnet 1 CIDR"
      PublicSubnet2CIDR:
        default: "Public Subnet 2 CIDR"
Parameters:
  PJPrefix:
    Type: String
    Default: omikuji
  AvailabilityZone1:
    Type: String
    Default: ap-northeast-1a
  AvailabilityZone2:
    Type: String
    Default: ap-northeast-1c
  VPCCIDR:
    Type: String
    Default: 10.10.0.0/16
    Description: X:0~255
  PublicSubnet1CIDR:
    Type: String
    Default: 10.10.1.0/24
    Description: "X:same as above, Y:0~255"
  PublicSubnet2CIDR:
    Type: String
    Default: 10.10.2.0/24
    Description: "X:same as above, Y:0~255"
Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: !Ref VPCCIDR
      EnableDnsHostnames: true
      EnableDnsSupport: true
      InstanceTenancy: default
      Tags:
        - Key: Name
          Value: !Sub ${PJPrefix}-vpc
  InternetGateway:
    Type: AWS::EC2::InternetGateway
    Properties:
      Tags:
        - Key: Name
          Value: !Sub ${PJPrefix}-igw
  AttachIGW:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      InternetGatewayId: !Ref InternetGateway
      VpcId: !Ref VPC
  PublicSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      CidrBlock: !Ref PublicSubnet1CIDR
      AvailabilityZone: !Ref AvailabilityZone1
      MapPublicIpOnLaunch: 'true'
      VpcId: !Ref VPC
      Tags:
        - Key: Name
          Value: !Sub ${PJPrefix}-public-subnet-1
  PublicSubnet2:
    Type: AWS::EC2::Subnet
    Properties:
      CidrBlock: !Ref PublicSubnet2CIDR
      AvailabilityZone: !Ref AvailabilityZone2
      MapPublicIpOnLaunch: 'true'
      VpcId: !Ref VPC
      Tags:
        - Key: Name
          Value: !Sub ${PJPrefix}-public-subnet-2
  RouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref VPC
      Tags:
        - Key: Name
          Value: !Sub ${PJPrefix}-rt
  AssociateIGWwithRT:
    Type: AWS::EC2::Route
    DependsOn: AttachIGW
    Properties:
      RouteTableId: !Ref RouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref InternetGateway
  AssociateSubnet1withRT:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PublicSubnet1
      RouteTableId: !Ref RouteTable
  AssociateSubnet2withRT:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PublicSubnet2
      RouteTableId: !Ref RouteTable
Outputs:
  WorkerSubnets:
    Value: !Join
      - ","
      - [!Ref PublicSubnet1, !Ref PublicSubnet2]
[^1]: Classless Inter-Domain Routing, a mechanism for IP addressing without address classes. The details don't matter here; think of it as deciding the range of IP addresses that can be used on that network.
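For reference, here is a minimal sketch of deploying this template from the CLI. The stack name and parameter values are examples I chose, so adjust them to your environment.

```bash
# Deploy the network stack (stack name and parameter values are examples)
aws cloudformation deploy \
  --stack-name omikuji-base \
  --template-file cfn-base-resources.yaml \
  --parameter-overrides PJPrefix=omikuji VPCCIDR=10.10.0.0/16

# Pull the subnet IDs out of the Outputs; they are needed later for eksctl
aws cloudformation describe-stacks \
  --stack-name omikuji-base \
  --query "Stacks[0].Outputs[?OutputKey=='WorkerSubnets'].OutputValue" \
  --output text
```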
Build an environment from which to operate EKS on the network created above. To keep the setup independent of each reader's local machine, we use AWS Cloud9, a cloud IDE service. Create a Cloud9 environment (the creation-time settings are omitted here), and then install the following tools in it.
Follow the AWS Official EKS User Guide (https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html).
Install eksctl
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
eksctl version
Install kubectl
curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.18.9/2020-11-02/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin
kubectl version --short --client
AWS Cloud9 issues temporary credentials called AMTC (Amazon Managed Temporary Credentials) with the same scope as the permissions of the user who launched it, so for everyday use nothing more is needed. However, building EKS takes a long-running command (around 20 minutes?), and the AMTC tokens, which expire and rotate after about 15 minutes, make it fail partway through with a "Token expired" error. So instead, issue an access key for an IAM user in the AWS Management Console and feed that to Cloud9.
aws_credential settings
PROFILE_NAME=cluster-admin
aws configure --profile ${PROFILE_NAME}
export AWS_DEFAULT_PROFILE=${PROFILE_NAME}  # switch the profile from default to the cluster-admin user
When you run aws configure, Cloud9 complains along the lines of "we already give you AMTC, why are you overwriting your credentials yourself?", but there is no other way to build EKS from the command line, so push through. Below is what that warning looks like.
This time, getting the authentication part exactly right is not the point (I just want to play with EKS!!), so I won't go further, but to be clear: operating with an IAM access key fed in like this is not good for security. For a commercial service, use it only while you actually need it, and switch to another credential once that phase is over. For this project, the proper thing would be to switch back to AMTC once construction is finished (I won't, since I don't plan to operate this beyond writing the article); it can be restored with a single button.
Authentication can also be handled with RBAC (Role-Based Access Control), where Kubernetes authorizes based on Role information in its own configuration (and this is what actually does the work inside EKS). In fact, the system I run on EKS at work is operated by configuring RBAC for the IAM Role of the resources that operate EKS once construction is finished, after which the access keys are deleted. At least on my team, I don't think an access key for a commercial system has ever lived longer than a day... probably...
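For the curious, here is a minimal sketch of what that mapping looks like once the cluster exists (it is created later in this article). EKS ties IAM entities to Kubernetes RBAC through the aws-auth ConfigMap in the kube-system namespace; the Role ARN, username, and group below are placeholders, so double-check the exact flags against the current eksctl documentation.

```bash
# Inspect the IAM-to-RBAC mapping that EKS keeps in kube-system
kubectl describe configmap aws-auth -n kube-system

# Map an IAM Role into the cluster (ARN, username, and group are placeholders)
eksctl create iamidentitymapping \
  --cluster eks-handson-cluster \
  --region ap-northeast-1 \
  --arn arn:aws:iam::123456789012:role/OperatorRole \
  --username operator \
  --group system:masters
```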
This is an important point, the kind of thing security incidents are made of, so I wrote about it a little carefully. If it doesn't click, feel free to skip it; it isn't the part I want to share in this article!
Build an ECR repository and store the containers used in this system there. From here on you can gradually see how the clickbait-sounding "something only I could think of" part of the title is implemented. Four container images are prepared this time, built from the following assets.
Dockerfile
FROM node:15.1.0-stretch-slim
WORKDIR /app
COPY index.js /app/index.js
COPY package.json /app/package.json
RUN npm install
RUN useradd -m -u 1009 app
USER app
EXPOSE 8081
CMD ["npm", "start"]
By the way, I'm sorry the Dockerfile is written so carelessly. I was in such a hurry that I wrote it off the top of my head without thinking about the layer structure at all [^2], so it's a pretty bad way to write one.
[^2]: Steps that rarely change, such as npm install, should come earlier so their layers stay cached, and COPY of frequently edited files should come later. As written, a small change to the copied index.js invalidates every layer after it, so npm install runs again every time index.js is modified... I knew I didn't like it, and yet I implemented it like this anyway... (avoids eye contact)
index.js
const express = require('express');
const os = require('os');
const app = express();

app.get('/', (req, res) => {
  const hostname = os.hostname();
  res.send(`<!DOCTYPE html>
<html lang="ja">
<head>
  <meta charset="UTF-8">
  <title>
    Omikuji
  </title>
  <style>
    p {font-size:20px;}
    .fortune {
      color:red;
      font-size: 30px;
      font-weight: bold;
    }
  </style>
</head>
<body>
  <h1></h1>
  <p>Your fortune</p>
  <p class="fortune">Daikichi</p>
  <p>is</p>
</body>
</html>`);
});

app.listen(8081);
console.log('Example app listening on port 8081.');
package.json
{
  "name": "app",
  "version": "1.0.0",
  "description": "Container App",
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "express": "^4.17.1"
  }
}
The behavior of the container is described mainly in index.js. The specification is simple: when it receives an HTTP GET request on port 8081, it returns an HTML-formatted message.
The only difference between the four containers is the message content: Daikichi (great blessing), Chukichi (middle blessing), Shokichi (small blessing), or Kyo (bad luck).
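The push itself is not shown in this article, so here is a rough sketch of how one image (the Daikichi one) could be built and pushed to ECR. The repository name follows the image URI that appears in the Deployment manifest later; the account ID and region are examples, so replace them with your own.

```bash
# Create the ECR repository (one repo with four tags is the layout implied by the manifests)
aws ecr create-repository --repository-name omikuji/fortune --region ap-northeast-1

# Log Docker in to ECR (replace the account ID with your own)
aws ecr get-login-password --region ap-northeast-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.ap-northeast-1.amazonaws.com

# Build, tag, and push the Daikichi image from the directory containing the Dockerfile;
# repeat for the other three message variants
docker build -t omikuji/fortune:great-fortune .
docker tag omikuji/fortune:great-fortune 123456789012.dkr.ecr.ap-northeast-1.amazonaws.com/omikuji/fortune:great-fortune
docker push 123456789012.dkr.ecr.ap-northeast-1.amazonaws.com/omikuji/fortune:great-fortune
```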
Dear container-loving infrastructure engineers, can you see by now how this is going to be implemented?
This is the really satisfying part. With a single command plus a few parameters, you get an entire system for managing your containers, as if you were raising a huge structure by magic. Reveal yourself, light of stable operation in a chaotic world [^3]!! Kubernetes!!
create_cluster.sh
subnets=subnet-00xxxxxxxxxxxxx,subnet-00xxxxxxxxxxxxx
eksctl create cluster \
--name eks-handson-cluster \
--version 1.18 \
--region ap-northeast-1 \
--vpc-public-subnets ${subnets} \
--nodegroup-name omikuji-nodegroup \
--node-type t2.small \
--nodes 1 \
--nodes-min 1 \
--nodes-max 1000 \
--asg-access
Just run this eksctl create cluster command, and two CloudFormation stacks run internally and create everything that is needed. It takes about 20 minutes here, which is exactly why AMTC could not be used on Cloud9.
You can see what it is doing from the logs in the Cloud9 terminal. If you follow along and take a close look in the AWS Management Console at what is actually being created, you can forgive it for taking 20 minutes. It really is building that much...
Now we can run and manage containers! Finally. The current state of the system is as follows.
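Before moving on, a quick sanity check never hurts; these are standard commands, and the cluster name is the one passed to eksctl above (eksctl also writes the kubeconfig for you).

```bash
# Confirm the cluster exists and the worker node has joined
eksctl get cluster --region ap-northeast-1
kubectl get nodes -o wide
kubectl get svc   # only the default kubernetes Service should exist at this point
```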
[^3]: Unpredictable trouble happens in this world. I often hear about mysterious traffic spikes appearing out of nowhere. I'm still young, so I haven't experienced one myself yet...
Up to this point, everything was built on the AWS platform. From here on, although the pieces are still made of AWS resources, we build on the platform called Kubernetes.
In Kubernetes, you declaratively call the API with manifest files written in YAML, and the system brings itself to the desired state. The part that manages this metadata is called the control plane node (also called the master node), while the part where the containers actually run and do the work is called the worker nodes. So the rest of the work is very easy: just write manifest files and kubectl apply them.
It really is that easy. This is what convinces me that it deserves the name container orchestration.
And with that, we have finally reached the **"strongest"** part that only an EKS lover like me could come up with (laughs). ~~Having come this far, I'm starting to suspect this implementation may not be so unusual after all. But at this point I have no choice but to go with it...~~
Here is the idea. As the implementation, I create multiple containers, one per fortune (that is, the ones that display Daikichi and so on), and in front of them I put something like a load balancer (to be exact, a Kubernetes Service of type LoadBalancer). The GET requests thrown at it are then load-balanced across the containers, which is what provides the randomness. The design treats the fortunes as objects, so to speak: the load balancer literally draws a fortune container.
The randomness itself comes from the Service's NodePort. Wait, weren't we just told it was a Service of type LoadBalancer? Sharp of you (even if that was only a few lines ago). In fact, when you create a Service of type LoadBalancer, a NodePort is allocated automatically. Traffic arriving there is routed by rules that, by default, are implemented with iptables, and those rules pick a backend at random. So the fortune is drawn at random, if in a slightly roundabout way. Well, it's random, and that's what counts.
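If you are curious (and have SSH access to a worker node, which this article does not set up), that random choice is visible as iptables rules that kube-proxy programs with the statistic module. The exact chain names vary per Service, so the command below is only a rough probe.

```bash
# On a worker node: look for the probability-based rules kube-proxy created (chain names vary)
sudo iptables -t nat -S | grep -e statistic -e KUBE-SVC | head -n 20
```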
The procedure on the Kubernetes side is as follows.

1. Configure autoscaling
2. Deploy each container (Deployment)
3. Deploy the load balancer (Service: LoadBalancer)
Step 1 is there because, if you knew from the start how many containers (strictly speaking, Pods) you were going to deploy, you could simply specify a matching number of nodes. But since we're using EKS, I want to leave the management of the node count to Kubernetes; that's a matter of personal conviction, so I implemented it that way. Even if autoscaling is all you set up, you'll still find yourself going "oh, that's nice!"
With the settings used in eksctl create cluster, the worker nodes are created in an ASG (Auto Scaling Group). In other words, on the AWS platform the resources are laid out so that nodes can scale. When using EKS, however, container-related resources are managed on the Kubernetes platform. So for Kubernetes to cooperate with the ASG, we need to deploy something called the Cluster Autoscaler.
AWS's official documentation points to a manifest file for exactly this, so set it up accordingly: download cluster-autoscaler-autodiscover.yaml, change the cluster name to match your environment, and kubectl apply it!
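As a sketch, the steps look roughly like this. The manifest URL is the one referenced by the EKS user guide at the time of writing, and the placeholder inside it must be replaced with your own cluster name, so re-check the details against the current documentation.

```bash
# Fetch the Cluster Autoscaler example manifest referenced by the EKS user guide
curl -O https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml

# Replace the <YOUR CLUSTER NAME> placeholder with the cluster name used above
sed -i 's/<YOUR CLUSTER NAME>/eks-handson-cluster/' cluster-autoscaler-autodiscover.yaml

kubectl apply -f cluster-autoscaler-autodiscover.yaml
kubectl -n kube-system logs -f deployment/cluster-autoscaler   # watch it discover the ASG
```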
deployment_great_fortune.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: great-fortune
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 2
      maxSurge: 4
  replicas: 1
  selector:
    matchLabels:
      sample-label: omikuji-app
  template:
    metadata:
      labels:
        sample-label: omikuji-app
    spec:
      containers:
        - name: omikuji-great-fortune
          image: 262403382529.dkr.ecr.ap-northeast-1.amazonaws.com/omikuji/fortune:great-fortune
          ports:
            - containerPort: 8081
Run kubectl apply on the manifest file above.
Since this is great_fortune, it is the Daikichi omikuji container, and because of replicas: 1, a single Pod is deployed.
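In concrete terms that is just the following (the rollout check is optional but handy):

```bash
kubectl apply -f deployment_great_fortune.yaml
kubectl rollout status deployment/great-fortune   # wait until the single replica is Ready
```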
This time, to get the draw rates of Daikichi (10%), Chukichi (20%), Shokichi (60%), and Kyo (10%), the four Deployments are expanded with

- Daikichi: replicas: 1
- Chukichi: replicas: 2
- Shokichi: replicas: 6
- Kyo: replicas: 1

and the omikuji (containers) go into the box (the worker nodes), as sketched below!
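Assuming the other three manifests follow the same naming pattern (their file names are my guess; only the Daikichi one is shown in this article), filling the box looks roughly like this:

```bash
# File names other than deployment_great_fortune.yaml are hypothetical
for f in deployment_great_fortune.yaml deployment_middle_fortune.yaml \
         deployment_small_fortune.yaml deployment_bad_fortune.yaml; do
  kubectl apply -f ${f}
done

# 1 + 2 + 6 + 1 = 10 omikuji Pods should end up Running
kubectl get pods -l sample-label=omikuji-app
kubectl get nodes   # the Cluster Autoscaler adds worker nodes if the Pods don't fit
```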
And here we pick up the flag planted earlier: "carve out a generous CIDR." The reason is that every Pod you start consumes one IP address. If you cut the CIDR too small, you may find yourself unable to start Pods because you have run out of IP addresses, so when sizing CIDRs for real operation you have to take this into account as well. This time I cut them generously.
As things stand, though, you cannot reach the deployed containers, because there is no endpoint. What provides one is the LoadBalancer type of the Service API (note that this is an object on the Kubernetes side). Since we are using EKS, creating a Kubernetes Service of type LoadBalancer actually builds an ELB (Classic Load Balancer), an AWS load balancer.
Setting the detailed explanation aside, let's kubectl apply the following manifest file for now.
service_loadbalancer.yaml
apiVersion: v1
kind: Service
metadata:
  name: loadbalancer
spec:
  type: LoadBalancer
  ports:
    - name: "http-port"
      protocol: "TCP"
      port: 8080
      nodePort: 30081
      targetPort: 8081
  selector:
    sample-label: omikuji-app
In this manifest file, sample-label is specified as the selector. The control plane then registers the Pods that carry the same sample-label with a matching value (here, omikuji-app) as load-balancing targets.
Together with the ELB story above, it can get confusing, so let's review the route of traffic here.
Now, to find the load balancer's endpoint, let's take a look at it in the AWS Management Console.
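If you prefer the CLI to the console, the same information is visible from kubectl (standard commands; the EXTERNAL-IP column holds the ELB's DNS name):

```bash
kubectl get service loadbalancer        # EXTERNAL-IP shows the ELB DNS name
kubectl get endpoints loadbalancer      # the ten omikuji Pods registered behind it
kubectl describe service loadbalancer   # NodePort 30081 and the selector, for the curious
```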
You can see that Kubernetes has done its job exactly as the manifest file describes. Let's access it using the DNS name and check that it works. You can send a GET request from Chrome's address bar by specifying the port number in the form ${DomainName}:${Port}. Since the load balancer is listening on port 8080 this time (as specified in the manifest above), enter XXXX....elb.amazonaws.com:8080.
The fortune was drawn successfully. If you try it a few times from the browser, something seems to get cached [^4] and the same result keeps coming back (hammering F5 doesn't help). If you want to see the randomness right away, use curl: the result changes with each request and you can experience the balancing for yourself.
Command to draw the fortune 100 times with curl
for itr in `seq 100`; do curl a9809XXXXXXXXXXXXXXXXXXXXXXXXXX.ap-northeast-1.elb.amazonaws.com:8080 --silent | grep -G "<p\sclass=\"fortune\">"; done
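To actually see the draw rates, a small variation on the above that tallies the 100 results can help (same placeholder DNS name as above):

```bash
for itr in `seq 100`; do
  curl --silent a9809XXXXXXXXXXXXXXXXXXXXXXXXXX.ap-northeast-1.elb.amazonaws.com:8080 \
    | grep -o '<p class="fortune">[^<]*</p>'
done | sort | uniq -c   # count how often each fortune was returned
```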
[^4]: Even after clearing the cache from Google Chrome's developer tools, the omikuji did not change immediately when hammering F5. The browser does return a different result after a while, but I'm not sure why... (One guess: the browser reuses a keep-alive connection, and the iptables decision is made per connection, so the same Pod keeps answering.) If any experts spot something suspicious, please let me know...
Finally, here are the Kubernetes resources.
So, how was it? Did anyone guess this implementation before opening the article? Setting its practicality and usefulness aside, I hope it wasn't a clickbait title and that you found it at least a little interesting. I came up with it, implemented it as-is, and wrote this in a hurry without much reflection, so there is surely room for improvement; I'd be grateful if you would share your thoughts.
Through this article you can also pick up the basics of EKS (or rather, of Kubernetes). I built it so that it does not depend on your local PC environment, so if you are interested, please give it a try.
Well then, the end... eh? "The logic of an app like this could be written instantly with random numbers in Python or C++, and on AWS you could build the same thing easily with API Gateway or Lambda"??
Quiet, you. If you ask EKS-chan nicely with apply, she does exactly what you ask; isn't that adorable? This was implemented with love. I'll show you something properly interesting one of these days!!
Thank you to everyone who read this far. See you!!