1. Creating a Cloud9 environment

1-1. Creating a Cloud9 environment

This entire hands-on is carried out in the Northern Virginia (us-east-1) region.

Open "Cloud9" and create an environment with the following settings.

Step 1 Name environment

Name: Specify any name.

Step 2 Configure settings

Leave everything at the defaults, which are:

Environment type: Create a new EC2 instance for environment (direct access)
Instance type: t2.micro (1 GiB RAM + 1 vCPU)
Platform: Amazon Linux
Network (VPC): Specify any VPC and a public subnet in it.

Step 3 Review

Click the "Create environment" button.

1-2. Creating an IAM role and assigning it to EC2

Create an IAM role with the required IAM policy and attach it to the Cloud9 EC2 instance.


1-3. Temporary credential invalidation

Cloud9 can automatically issue temporary credentials for your IAM user, but these credentials only allow a limited set of actions (IAM and a few others). Disable them so that the IAM role you assigned to the EC2 instance is used instead.

Open the gear-shaped icon in the upper right, open the AWS Settings menu, and disable "AWS managed temporary credentials".

1-4. AWS CLI Initial Settings

leomaro7:~/environment $ rm -vf ${HOME}/.aws/credentials
leomaro7:~/environment $ aws --version
aws-cli/1.18.162 Python/3.6.12 Linux/4.14.193-113.317.amzn1.x86_64 botocore/1.19.2
leomaro7:~/environment $ AWS_REGION="us-east-1"
leomaro7:~/environment $ aws configure set default.region ${AWS_REGION}
leomaro7:~/environment $ aws configure get default.region
leomaro7:~/environment $ aws sts get-caller-identity
{
    "UserId": "",
    "Account": "",
    "Arn": ""
}

2. Creating a cluster

2-1. Installation of eksctl

eksctl: Command used to create the Kubernetes cluster itself

leomaro7:~/environment $ curl -L "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
leomaro7:~/environment $ sudo mv /tmp/eksctl /usr/local/bin
leomaro7:~/environment $ eksctl version

eksctl - The official CLI for Amazon EKS

2-2. Installation of kubectl

kubectl: Command used to operate the created Kubernetes cluster

leomaro7:~/environment $ sudo curl -L -o /usr/local/bin/kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.17.7/2020-07-08/bin/linux/amd64/kubectl
leomaro7:~/environment $ sudo chmod +x /usr/local/bin/kubectl
leomaro7:~/environment $ kubectl version --short --client
Client Version: v1.17.7-eks-bffbac

2-3. Creating a cluster

AWS_REGION=$(aws configure get default.region)
eksctl create cluster \
  --name=ekshandson \
  --version 1.17 \
  --nodes=3 --managed \
  --region ${AWS_REGION} --zones ${AWS_REGION}a,${AWS_REGION}c

- In addition to passing options as command-line arguments, you can describe the settings in YAML and pass that file as an argument.
- You can also create a cluster in an existing VPC; this time a new VPC is created.
- eksctl uses CloudFormation to create AWS resources such as the control-plane VPC, the EKS cluster, and the worker-node Auto Scaling group. (It is worth opening the CloudFormation console and checking the created resources.)
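For reference, the same cluster could be sketched declaratively in a config file (the node-group name here is an illustrative assumption, not taken from this hands-on):

```yaml
# cluster.yaml - declarative equivalent of the eksctl flags above (a sketch)
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: ekshandson
  region: us-east-1
  version: "1.17"

availabilityZones: ["us-east-1a", "us-east-1c"]

managedNodeGroups:
  - name: managed-ng-1      # hypothetical node-group name
    desiredCapacity: 3
```

You would then run `eksctl create cluster -f cluster.yaml` instead of passing flags.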

If cluster creation fails, typical causes are:

- Insufficient capacity in an Availability Zone
- An outdated AWS CLI version
- The IAM role is not assigned

2-4. Introducing useful tools and setting command completion

jq and bash-completion

jq: a convenient command for processing JSON data
bash-completion: completes commands on the bash shell

sudo yum -y install jq bash-completion
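As a quick check that jq works, you can extract a single field from a JSON document (the JSON below is a hypothetical stand-in for real AWS CLI output; `-r` strips the surrounding quotes):

```shell
# Extract one field from a JSON document with jq
echo '{"cluster":{"name":"ekshandson","status":"ACTIVE"}}' | jq -r '.cluster.name'
```

This pattern (piping `aws ... --output json` into `jq`) is used constantly when scripting against AWS.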

docker, docker-compose and command completion

sudo curl -L -o /etc/bash_completion.d/docker https://raw.githubusercontent.com/docker/cli/master/contrib/completion/bash/docker
sudo curl -L -o /usr/local/bin/docker-compose "https://github.com/docker/compose/releases/download/1.26.2/docker-compose-$(uname -s)-$(uname -m)"
sudo chmod +x /usr/local/bin/docker-compose
sudo curl -L -o /etc/bash_completion.d/docker-compose https://raw.githubusercontent.com/docker/compose/1.26.2/contrib/completion/bash/docker-compose

kubectl command completion

kubectl completion bash > kubectl_completion
sudo mv kubectl_completion /etc/bash_completion.d/kubectl

eksctl command completion

eksctl completion bash > eksctl_completion
sudo mv eksctl_completion /etc/bash_completion.d/eksctl

k (alias)

cat <<"EOT" >> ${HOME}/.bash_profile

alias k="kubectl"
complete -o default -F __start_kubectl k
EOT


kube-ps1: shows the current kubectl context and Namespace in the shell prompt.

git clone https://github.com/jonmosco/kube-ps1.git ~/.kube-ps1
cat <<"EOT" >> ~/.bash_profile

source ~/.kube-ps1/kube-ps1.sh
function get_cluster_short() {
  echo "$1" | cut -d . -f1
}
KUBE_PS1_CLUSTER_FUNCTION=get_cluster_short
KUBE_PS1_SUFFIX=') '
PS1='[\u@\h \W $(kube_ps1)]\$ '
EOT

kubectx / kubens

kubectx and kubens: make it easy to switch the kubectl context and Namespace.

git clone https://github.com/ahmetb/kubectx.git ~/.kubectx
sudo ln -sf ~/.kubectx/completion/kubens.bash /etc/bash_completion.d/kubens
sudo ln -sf ~/.kubectx/completion/kubectx.bash /etc/bash_completion.d/kubectx
cat <<"EOT" >> ~/.bash_profile

export PATH=~/.kubectx:$PATH
EOT


stern: a tool for tailing and checking container logs.

sudo curl -L -o /usr/local/bin/stern https://github.com/wercker/stern/releases/download/1.11.0/stern_linux_amd64
sudo chmod +x /usr/local/bin/stern

Close the current terminal tab and open a new one so that the settings added to ~/.bash_profile take effect.

3. Check the cluster

3-1. Checking the cluster

Show the current cluster and basic information about it.

leomaro7:~/environment $ eksctl get cluster
NAME            REGION
ekshandson      us-east-1

leomaro7:~/environment $ kubectl cluster-info
Kubernetes master is running at https://25FF2316ECD9ED0E8D621ED7DCFD6263.gr7.us-east-1.eks.amazonaws.com
CoreDNS is running at https://25FF2316ECD9ED0E8D621ED7DCFD6263.gr7.us-east-1.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

3-2. Node confirmation

Check the nodes that belong to the cluster, the capacity of the nodes, and the running pods.

leomaro7:~/environment $ kubectl get node
NAME                            STATUS   ROLES    AGE   VERSION
ip-192-168-4-184.ec2.internal   Ready    <none>   13m   v1.17.11-eks-cfdc40
ip-192-168-56-48.ec2.internal   Ready    <none>   13m   v1.17.11-eks-cfdc40
ip-192-168-6-63.ec2.internal    Ready    <none>   13m   v1.17.11-eks-cfdc40

leomaro7:~/environment $ kubectl describe node ip-192-168-4-184.ec2.internal

3-3. Confirmation of NAMESPACE

Namespace: A grouping of Kubernetes resources such as Pods and Services.
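Although this hands-on creates Namespaces with `kubectl create`, a Namespace is itself a Kubernetes resource and could equally be declared in a manifest. A minimal sketch, with a hypothetical name:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: sample-ns   # hypothetical name, not used in this hands-on
```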

Check Namespace

leomaro7:~/environment $ kubectl get namespace
NAME              STATUS   AGE
default           Active   24m
kube-node-lease   Active   24m
kube-public       Active   24m
kube-system       Active   24m

Pod: The smallest unit of deployment in Kubernetes, with one or more containers running in a pod.
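A minimal Pod manifest illustrating the definition above (the name and image are hypothetical, not part of this hands-on):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-sample        # hypothetical Pod name
spec:
  containers:
  - name: nginx             # one container; a Pod may hold several
    image: nginx:1.19
```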

List the pods in the default Namespace (there are none yet, of course).

leomaro7:~/environment $ kubectl get pod -n default
No resources found in default namespace.

Change the default Namespace to kube-system using kubens.

leomaro7:~/environment $ kubens kube-system
Context "[email protected]" modified.
Active namespace is "kube-system".

In kube-system, system pods like the following are running.

leomaro7:~/environment $ kubectl get pod -n kube-system
NAME                       READY   STATUS    RESTARTS   AGE
aws-node-2p7s2             1/1     Running   0          26m
aws-node-cmmkc             1/1     Running   0          26m
aws-node-vbp6f             1/1     Running   0          26m
coredns-75b44cb5b4-cktx9   1/1     Running   0          31m
coredns-75b44cb5b4-lq58q   1/1     Running   0          31m
kube-proxy-c6td9           1/1     Running   0          26m
kube-proxy-jxwc6           1/1     Running   0          26m

Use the -A option to get information for all Namespaces.

leomaro7:~/environment $ kubectl get pod -A
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   aws-node-2p7s2             1/1     Running   0          27m
kube-system   aws-node-cmmkc             1/1     Running   0          27m
kube-system   aws-node-vbp6f             1/1     Running   0          27m
kube-system   coredns-75b44cb5b4-cktx9   1/1     Running   0          32m
kube-system   coredns-75b44cb5b4-lq58q   1/1     Running   0          32m
kube-system   kube-proxy-c6td9           1/1     Running   0          27m
kube-system   kube-proxy-jxwc6           1/1     Running   0          27m
kube-system   kube-proxy-z454c           1/1     Running   0          27m

4. Deploy sample application

4-1. Creating DynamoDB table

aws dynamodb create-table --table-name 'messages' \
  --attribute-definitions '[{"AttributeName":"uuid","AttributeType": "S"}]' \
  --key-schema '[{"AttributeName":"uuid","KeyType": "HASH"}]' \
  --provisioned-throughput '{"ReadCapacityUnits": 1,"WriteCapacityUnits": 1}'

4-2. Creating a Docker image

If you have changed directories, move back to the ~/environment/ directory.

cd ~/environment/

Download the sample application and unzip it.

wget https://eks-for-aws-summit-online.workshop.aws/sample-app.zip
unzip sample-app.zip

Build with docker-compose.

cd sample-app
docker-compose build

Confirm that the images were built.

docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
backend             latest              412ec271d5e7        11 seconds ago      107MB
frontend            latest              fa4eba7cd29c        20 seconds ago      57.7MB
python              3-alpine            dc68588b1801        6 days ago          44.3MB

4-3. Registering the image in ECR

Create an ECR repository.

aws ecr create-repository --repository-name frontend
aws ecr create-repository --repository-name backend

Get the URL of the repository and store it in a variable.

frontend_repo=$(aws ecr describe-repositories --repository-names frontend --query 'repositories[0].repositoryUri' --output text)
backend_repo=$(aws ecr describe-repositories --repository-names backend --query 'repositories[0].repositoryUri' --output text)

Tag the image you just built with the URI of the ECR repository.

docker tag frontend:latest ${frontend_repo}:latest
docker tag backend:latest ${backend_repo}:latest

Check the tagged images.

docker images
REPOSITORY                                              TAG                 IMAGE ID            CREATED             SIZE
4.dkr.ecr.us-east-1.amazonaws.com/backend    latest              412ec271d5e7        2 minutes ago       107MB
4.dkr.ecr.us-east-1.amazonaws.com/frontend   latest              fa4eba7cd29c        2 minutes ago       57.7MB

Log in to ECR.

ACCOUNT_ID=$(aws sts get-caller-identity --output text --query Account)
AWS_REGION=$(aws configure get default.region)
aws ecr get-login-password | docker login --username AWS --password-stdin https://${ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com
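The registry hostname that `docker login` authenticates against always follows the same pattern; a sketch with a hypothetical account ID:

```shell
# ECR registry hostnames follow <account-id>.dkr.ecr.<region>.amazonaws.com
ACCOUNT_ID="123456789012"   # hypothetical account ID
AWS_REGION="us-east-1"
echo "${ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com"
```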

Push the image to ECR.

docker push ${frontend_repo}:latest
docker push ${backend_repo}:latest

4-4. Deploying the app

Creating a working directory.

mkdir -p ~/environment/manifests/
cd ~/environment/manifests/

Create a Namespace for Application 1 and change the default Namespace to it.

kubectl create namespace frontend
kubens frontend

Creating a Deployment


frontend_repo=$(aws ecr describe-repositories --repository-names frontend --query 'repositories[0].repositoryUri' --output text)
cat <<EOF > frontend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  selector:
    matchLabels:
      app: frontend
  replicas: 2
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: ${frontend_repo}:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 5000
        env:
        - name: BACKEND_URL
          value: http://backend.backend:5000/messages
EOF
kubectl apply -f frontend-deployment.yaml -n frontend

Check the created deployment.

kubectl get deployment -n frontend
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
frontend   2/2     2            2           13s

Check the pod.

kubectl get pod -n frontend
NAME                        READY   STATUS    RESTARTS   AGE
frontend-84ccd456fb-l6kjl   1/1     Running   0          53s
frontend-84ccd456fb-wdhwr   1/1     Running   0          53s

Creating a Service.

Service: Provides name resolution and load balancing capabilities to access the pods launched by Deployment.
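This is why the frontend Deployment can reach the backend at `http://backend.backend:5000`: in-cluster DNS resolves `<service>.<namespace>` (short for `<service>.<namespace>.svc.cluster.local`) to the Service's ClusterIP. A sketch of how that URL is composed:

```shell
# In-cluster DNS name: <service>.<namespace>, so no pod IP is ever hard-coded
SERVICE="backend"
NAMESPACE="backend"
PORT=5000
BACKEND_URL="http://${SERVICE}.${NAMESPACE}:${PORT}/messages"
echo "${BACKEND_URL}"
```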


cat <<EOF > frontend-service-lb.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5000
EOF
kubectl apply -f frontend-service-lb.yaml -n frontend

Check the created Service.

kubectl get service -n frontend
NAME       TYPE           CLUSTER-IP       EXTERNAL-IP                                                               PORT(S)        AGE
frontend   LoadBalancer   dd7c90ab25e44a939b065e566aa5432-1872056256.us-east-1.elb.amazonaws.com   80:30374/TCP   10s

Access the EXTERNAL-IP in a browser and check that the page is displayed. (DNS name resolution can take a few minutes.)

Create a Namespace for Application 2 and change the default Namespace to it.

kubectl create namespace backend
kubens backend

Creating a Deployment.

AWS_REGION=$(aws configure get default.region)
backend_repo=$(aws ecr describe-repositories --repository-names backend --query 'repositories[0].repositoryUri' --output text)
cat <<EOF > backend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  selector:
    matchLabels:
      app: backend
  replicas: 2
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: backend
        image: ${backend_repo}:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 5000
        env:
        - name: AWS_DEFAULT_REGION
          value: ${AWS_REGION}
        - name: DYNAMODB_TABLE_NAME
          value: messages
EOF
kubectl apply -f backend-deployment.yaml -n backend

Check the pod.

kubectl get pod -n backend
NAME                       READY   STATUS    RESTARTS   AGE
backend-7544ddcd98-7lxcx   1/1     Running   0          12s
backend-7544ddcd98-bn5jq   1/1     Running   0          12s

Creating a Service.

cat <<EOF > backend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: ClusterIP
  selector:
    app: backend
  ports:
  - protocol: TCP
    port: 5000
    targetPort: 5000
EOF
kubectl apply -f backend-service.yaml -n backend

Check the created Service.

kubectl get service -n backend
NAME      TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
backend   ClusterIP   <none>        5000/TCP   13s

Try accessing the EXTERNAL-IP again. (Posting a message still fails, because the backend pods do not yet have permission to access DynamoDB.)


4-5. Granting access to DynamoDB

Use an EKS feature called IAM Roles for Service Accounts (IRSA) to grant an IAM role to the Application 2 pods and allow them to access DynamoDB.

Create an OIDC identity provider and associate it with your cluster.

eksctl utils associate-iam-oidc-provider \
    --cluster ekshandson \
    --approve

Create an IAM policy that allows full access to the DynamoDB messages table.

cat <<EOF > dynamodb-messages-fullaccess-policy.json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListAndDescribe",
            "Effect": "Allow",
            "Action": [
                "dynamodb:List*",
                "dynamodb:DescribeReservedCapacity*",
                "dynamodb:DescribeLimits",
                "dynamodb:DescribeTimeToLive"
            ],
            "Resource": "*"
        },
        {
            "Sid": "SpecificTable",
            "Effect": "Allow",
            "Action": [
                "dynamodb:BatchGet*",
                "dynamodb:DescribeStream",
                "dynamodb:DescribeTable",
                "dynamodb:Get*",
                "dynamodb:Query",
                "dynamodb:Scan",
                "dynamodb:BatchWrite*",
                "dynamodb:CreateTable",
                "dynamodb:Delete*",
                "dynamodb:Update*",
                "dynamodb:PutItem"
            ],
            "Resource": "arn:aws:dynamodb:*:*:table/messages"
        }
    ]
}
EOF
aws iam create-policy \
    --policy-name dynamodb-messages-fullaccess \
    --policy-document file://dynamodb-messages-fullaccess-policy.json

Create a ServiceAccount for running Application 2 and associate an IAM role with it.

ACCOUNT_ID=$(aws sts get-caller-identity --output text --query Account)
eksctl create iamserviceaccount \
    --name dynamodb-messages-fullaccess \
    --namespace backend \
    --cluster ekshandson \
    --attach-policy-arn arn:aws:iam::${ACCOUNT_ID}:policy/dynamodb-messages-fullaccess \
    --override-existing-serviceaccounts \
    --approve

Check the created ServiceAccount.

kubectl get serviceaccount -n backend
NAME                           SECRETS   AGE
default                        1         24m
dynamodb-messages-fullaccess   1         35s

Modify the Deployment definition of Application 2 and run the pod with the created ServiceAccount.

Double-click backend-deployment.yaml in Cloud9 to open it, add the serviceAccountName line as follows, and save.

+     serviceAccountName: dynamodb-messages-fullaccess
kubectl apply -f backend-deployment.yaml -n backend
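For reference, `serviceAccountName` belongs under the pod template's `spec`, as a sibling of `containers`. A sketch of just the relevant fragment of the manifest:

```yaml
  template:
    spec:
      serviceAccountName: dynamodb-messages-fullaccess
      containers:
      - name: backend
```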

New pods will be launched automatically, so check them.

kubectl get pod -n backend
NAME                       READY   STATUS    RESTARTS   AGE
backend-647595dd78-jjmml   1/1     Running   0          70s
backend-647595dd78-w7f6t   1/1     Running   0          72s

If you access the EXTERNAL-IP again, the application should now work correctly.
