I tried AWS CDK!

## Somewhat long introduction

## About this article

Recently I haven't written many "I tried it" articles. The official sites are now so thorough that such articles end up just retracing them.

**This time, though, I happened to come across the AWS CDK and was genuinely impressed, so I'm writing it up while the impression is still fresh.**

## What is the CDK?

To quote the official website: you can create AWS resources in a familiar programming language, without writing CloudFormation or Terraform by hand! **How modern is that!**

The AWS Cloud Development Kit (AWS CDK) is an open source software development framework for modeling and provisioning cloud application resources using familiar programming languages.

https://aws.amazon.com/jp/cdk/
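
To get a concrete feel for this before the main example, here is a minimal sketch of a CDK v1 app in Python that defines a single S3 bucket. This is my own illustration, not taken from the example repository; the stack name "hello-cdk" and the bucket construct ID are arbitrary.

#!/usr/bin/env python3
from aws_cdk import core
import aws_cdk.aws_s3 as s3


class HelloCdkStack(core.Stack):
    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)
        # One line of Python becomes an S3 bucket resource in the
        # synthesized CloudFormation template.
        s3.Bucket(self, "MyFirstBucket", versioned=True)


app = core.App()
HelloCdkStack(app, "hello-cdk")
app.synth()

Running cdk synth against an app like this prints the generated CloudFormation template, which is exactly the part you no longer have to write by hand.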

## How I found out about it

The AWS CDK came up briefly in some material shared by a team member as part of our in-house skill-development activities.
At first I didn't even know it existed and assumed it was a typo for the AWS SDK.

I won't go into it here since it strays from the main subject, but the material shared there is very useful! I highly recommend reading it. https://tomomano.gitlab.io/intro-aws/

## What this article covers

The CDK comes with a wealth of examples. https://github.com/aws-samples/aws-cdk-examples

This time I will walk through creating a new VPC with Python and provisioning an ALB and EC2, based on this example: https://github.com/aws-samples/aws-cdk-examples/tree/master/python/new-vpc-alb-asg-mysql It would be nice if you could deploy the example into your environment as is, but that usually isn't the case (the VPC CIDR is hard-coded and may collide with one you already have). **So, which parts of the example do you realistically need to tweak? That is the main angle I'd like to explain from.**

## Finally, the main subject

I will finally get into the main subject here.

## Prerequisites

Of course, you need an AWS account (I'll skip that part, as you'd expect). You also need to install a few things. Working through the official CDK Workshop up to "Hello, CDK!" is enough to get your environment set up, so I strongly recommend doing it. https://cdkworkshop.com/

## First, let's deploy the example with minimal changes! (1st lap)

As a rough procedure, clone the example repository mentioned above and use the Python new-vpc-alb-asg-mysql example. First, let's change the example slightly and deploy it. (This is the first lap; a second lap comes later.)

### Clone

I forgot to mention it, but I'm using Ubuntu 18.04. For study material I create a subdirectory per theme under a directory called practice, so I clone the repository under ~/practice/cdk/. (Adjust this to whatever suits you.)

$ git clone https://github.com/aws-samples/aws-cdk-examples.git

After cloning, the python/new-vpc-alb-asg-mysql directory looks like this:

$ tree
.
├── app.py
├── cdk.json
├── cdk_vpc_ec2
│   ├── cdk_ec2_stack.py
│   ├── cdk_rds_stack.py
│   └── cdk_vpc_stack.py
├── img_demo_cdk_vpc.png
├── README.md
├── requirements.txt
├── setup.py
└── user_data
    └── user_data.sh

2 directories, 10 files

### File modification and execution

First, make the minimum changes and actually deploy it. The files to modify are cdk_vpc_stack.py and cdk_ec2_stack.py. Apologies if it's hard to follow, but the comments in the source code below were added by me, so please refer to them.

Since they are long, the file contents before and after the change are shown below.

Before change

cdk_vpc_stack.py (before change)


from aws_cdk import core
import aws_cdk.aws_ec2 as ec2


class CdkVpcStack(core.Stack):

    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        # The code that defines your stack goes here

        self.vpc = ec2.Vpc(self, "VPC",
                           max_azs=2,
                           cidr="10.10.0.0/16", #CIDR is written in a fixed manner!
                           # configuration will create 3 groups in 2 AZs = 6 subnets.
                           subnet_configuration=[ec2.SubnetConfiguration(
                               subnet_type=ec2.SubnetType.PUBLIC,
                               name="Public",
                               cidr_mask=24
                           ), ec2.SubnetConfiguration(
                               subnet_type=ec2.SubnetType.PRIVATE,
                               name="Private",
                               cidr_mask=24
                           ), ec2.SubnetConfiguration(
                               subnet_type=ec2.SubnetType.ISOLATED,
                               name="DB",
                               cidr_mask=24
                           )
                           ],
                           # nat_gateway_provider=ec2.NatProvider.gateway(),
                           nat_gateways=2,
                           )
        core.CfnOutput(self, "Output",
                       value=self.vpc.vpc_id)

The VPC CIDR is 10.10.0.0/16. If that's fine for you, you can run it as is, but in my case I had to change it. Also, since a hard-coded inline value is awkward to work with, I decided to pull it out into a variable.

After change

cdk_vpc_stack.py (after change)


from aws_cdk import core
import aws_cdk.aws_ec2 as ec2


vpc_cidr = "x.x.x.x/16"  # Define the CIDR as a variable and rewrite it to whatever value you need.

class CdkVpcStack(core.Stack):

    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        # The code that defines your stack goes here

        self.vpc = ec2.Vpc(self, "VPC",
                           max_azs=2,
                           cidr=vpc_cidr,  #Changed to use variables
                           # configuration will create 3 groups in 2 AZs = 6 subnets.
                           subnet_configuration=[ec2.SubnetConfiguration(
                               subnet_type=ec2.SubnetType.PUBLIC,
                               name="Public",
                               cidr_mask=24
                           ), ec2.SubnetConfiguration(
                               subnet_type=ec2.SubnetType.PRIVATE,
                               name="Private",
                               cidr_mask=24
                           ), ec2.SubnetConfiguration(
                               subnet_type=ec2.SubnetType.ISOLATED,
                               name="DB",
                               cidr_mask=24
                           )
                           ],
                           # nat_gateway_provider=ec2.NatProvider.gateway(),
                           nat_gateways=2,
                           )
        core.CfnOutput(self, "Output",
                       value=self.vpc.vpc_id)
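
As an aside, if you'd rather not edit the source at all, the value could also come from CDK context. This is only a sketch of an alternative I did not actually use in this article; "vpc_cidr" here is a hypothetical context key that you would pass via cdk.json or with cdk synth -c vpc_cidr=10.20.0.0/16.

        # Hypothetical alternative (not part of the example): inside
        # CdkVpcStack.__init__, read the CIDR from CDK context and fall back
        # to the example's default when the key is not supplied.
        vpc_cidr = self.node.try_get_context("vpc_cidr") or "10.10.0.0/16"

The rest of the ec2.Vpc(...) call stays exactly the same; cidr=vpc_cidr simply picks up whatever value was supplied.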

Next is EC2. (In short, change key_name to your own key pair.)

Before change

cdk_ec2_stack.py (before change)


from aws_cdk import core
import aws_cdk.aws_ec2 as ec2
import aws_cdk.aws_elasticloadbalancingv2 as elb
import aws_cdk.aws_autoscaling as autoscaling

ec2_type = "t2.micro"
key_name = "id_rsa"  # Setup key_name for EC2 instance login 
linux_ami = ec2.AmazonLinuxImage(generation=ec2.AmazonLinuxGeneration.AMAZON_LINUX,
                                 edition=ec2.AmazonLinuxEdition.STANDARD,
                                 virtualization=ec2.AmazonLinuxVirt.HVM,
                                 storage=ec2.AmazonLinuxStorage.GENERAL_PURPOSE
                                 )  # Indicate your AMI, no need a specific id in the region
with open("./user_data/user_data.sh") as f:
    user_data = f.read()


class CdkEc2Stack(core.Stack):

    def __init__(self, scope: core.Construct, id: str, vpc, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        # Create Bastion
        bastion = ec2.BastionHostLinux(self, "myBastion",
                                       vpc=vpc,
                                       subnet_selection=ec2.SubnetSelection(
                                           subnet_type=ec2.SubnetType.PUBLIC),
                                       instance_name="myBastionHostLinux",
                                       instance_type=ec2.InstanceType(instance_type_identifier="t2.micro"))
        
        # Setup key_name for EC2 instance login if you don't use Session Manager
        # bastion.instance.instance.add_property_override("KeyName", key_name)

        bastion.connections.allow_from_any_ipv4(
            ec2.Port.tcp(22), "Internet access SSH")

        # Create ALB
        alb = elb.ApplicationLoadBalancer(self, "myALB",
                                          vpc=vpc,
                                          internet_facing=True,
                                          load_balancer_name="myALB"
                                          )
        alb.connections.allow_from_any_ipv4(
            ec2.Port.tcp(80), "Internet access ALB 80")
        listener = alb.add_listener("my80",
                                    port=80,
                                    open=True)

        # Create Autoscaling Group with fixed 2*EC2 hosts
        self.asg = autoscaling.AutoScalingGroup(self, "myASG",
                                                vpc=vpc,
                                                vpc_subnets=ec2.SubnetSelection(subnet_type=ec2.SubnetType.PRIVATE),
                                                instance_type=ec2.InstanceType(instance_type_identifier=ec2_type),
                                                machine_image=linux_ami,
                                                key_name=key_name,
                                                user_data=ec2.UserData.custom(user_data),
                                                desired_capacity=2,
                                                min_capacity=2,
                                                max_capacity=2,
                                                # block_devices=[
                                                #     autoscaling.BlockDevice(
                                                #         device_name="/dev/xvda",
                                                #         volume=autoscaling.BlockDeviceVolume.ebs(
                                                #             volume_type=autoscaling.EbsDeviceVolumeType.GP2,
                                                #             volume_size=12,
                                                #             delete_on_termination=True
                                                #         )),
                                                #     autoscaling.BlockDevice(
                                                #         device_name="/dev/sdb",
                                                #         volume=autoscaling.BlockDeviceVolume.ebs(
                                                #             volume_size=20)
                                                #         # 20GB, with default volume_type gp2
                                                #     )
                                                # ]
                                                )

        self.asg.connections.allow_from(alb, ec2.Port.tcp(80), "ALB access 80 port of EC2 in Autoscaling Group")
        listener.add_targets("addTargetGroup",
                             port=80,
                             targets=[self.asg])

        core.CfnOutput(self, "Output",
                       value=alb.load_balancer_dns_name)

After change

cdk_ec2_stack.py (after change)


from aws_cdk import core
import aws_cdk.aws_ec2 as ec2
import aws_cdk.aws_elasticloadbalancingv2 as elb
import aws_cdk.aws_autoscaling as autoscaling

ec2_type = "t2.micro"
key_name = "hogehoge" #Change to the key pair prepared in advance.
linux_ami = ec2.AmazonLinuxImage(generation=ec2.AmazonLinuxGeneration.AMAZON_LINUX,
                                 edition=ec2.AmazonLinuxEdition.STANDARD,
                                 virtualization=ec2.AmazonLinuxVirt.HVM,
                                 storage=ec2.AmazonLinuxStorage.GENERAL_PURPOSE
                                 )  # Indicate your AMI, no need a specific id in the region
with open("./user_data/user_data.sh") as f:
    user_data = f.read()

# The rest of the file is unchanged, so it is omitted here.

### Deploy!

Thanks for waiting. For now, let's create the resources in this state.

#Set venv.
$ cd ~/practice/cdk/aws-cdk-examples/python/new-vpc-alb-asg-mysql
$ python3 -m venv .env
$ source .env/bin/activate

#Installation of dependent libraries
$ pip install -r requirements.txt

#Enter the CDK command to generate a CloudFormation template.
$ cdk ls
cdk-vpc
cdk-ec2
cdk-rds

$ cdk synth
Successfully synthesized to /home/****/practice/cdk/aws-cdk-examples/python/new-vpc-alb-asg-mysql/cdk.out

A directory called cdk.out is created, and the CloudFormation templates are generated inside it. (By the way, the templates were JSON... I expected YAML. It seems the files written to cdk.out are JSON, while cdk synth prints the template as YAML to the terminal.)

$ cdk bootstrap

$ cdk deploy cdk-vpc # This takes a while. (The deploys below do too.)
$ cdk deploy cdk-ec2
$ cdk deploy cdk-rds

The CloudFormation stacks are created, and eventually the desired resources come up. I can't list everything, so I'll only attach the subnets and how they look in the management console. [TBD] Paste the image.

### Problems at this point

- You cannot SSH in, because no key pair is registered on the myBastion host. Using Session Manager seems to be the assumed prerequisite.
- The Security Group on that same host has **SSH fully open to the internet**.

These issues will be resolved in the second lap. So, now that I've built it, let's just as easily delete the resources!

### Delete the resources

# Delete in the reverse order of creation. Everything disappeared cleanly. (As expected, the CloudFormation stacks are deleted as well.)
$ cdk destroy cdk-rds
$ cdk destroy cdk-ec2
$ cdk destroy cdk-vpc


## Take a break (actually something unexpectedly important)

For example, a private subnet is created with a name like "cdk-vpc/VPC/PrivateSubnet1".

You can use / (slash) in a resource name!? That was new to me. The mix of kebab-case and CamelCase is also rather playful.

So how is the resource name determined? (The details need confirmation, but) the leading cdk-vpc part appears to come from the stack ID specified in app.py, shown below.

app.py (partial excerpt)


vpc_stack = CdkVpcStack(app, "cdk-vpc")
ec2_stack = CdkEc2Stack(app, "cdk-ec2",
                        vpc=vpc_stack.vpc)
rds_stack = CdkRdsStack(app, "cdk-rds",
                        vpc=vpc_stack.vpc,
                        asg_security_groups=ec2_stack.asg.connections.security_groups)
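
If you want to check this yourself, the construct path that ends up in the Name tag can be printed from the construct tree. This is a small sketch of my own, not part of the example; it assumes you temporarily add it at the end of CdkVpcStack.__init__ after self.vpc has been created.

        # Print the construct path of each private subnet during cdk synth,
        # e.g. "cdk-vpc/VPC/PrivateSubnet1": the stack ID from app.py is the
        # first segment, the "VPC" construct ID the second, and the subnet
        # group name plus an index the last.
        for subnet in self.vpc.private_subnets:
            print(subnet.node.path)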

Therefore, for the second lap, I'd like to change the cdk- part to a more meaningful value. This time I'll use akira-test-cdk- (not that it's much more meaningful).

Let's go to the second lap!

## 2nd lap

### Changes in the second lap

- Change the resource name prefix to akira-test-cdk-
- Assign a key pair to Bastion
- Narrow down the inbound IP address in Bastion's Security Group
- Route traffic to the app port (8080)
- Don't create RDS; run a DB-independent smoke-test app for the time being!

### Change the resource name prefix to akira-test-cdk-

app.py


#!/usr/bin/env python3

from aws_cdk import core

from cdk_vpc_ec2.cdk_vpc_stack import CdkVpcStack
from cdk_vpc_ec2.cdk_ec2_stack import CdkEc2Stack
from cdk_vpc_ec2.cdk_rds_stack import CdkRdsStack

app = core.App()

vpc_stack = CdkVpcStack(app, "akira-test-cdk-vpc")
ec2_stack = CdkEc2Stack(app, "akira-test-cdk-ec2",
                        vpc=vpc_stack.vpc)
rds_stack = CdkRdsStack(app, "akira-test-cdk-rds",
                        vpc=vpc_stack.vpc,
                        asg_security_groups=ec2_stack.asg.connections.security_groups)

app.synth()

### Assign a key pair to Bastion, narrow down the Security Group's inbound IP address, and use port 8080

The values below are placeholders, so substitute your own.

cdk_ec2_stack.py


from aws_cdk import core
import aws_cdk.aws_ec2 as ec2
import aws_cdk.aws_elasticloadbalancingv2 as elb
import aws_cdk.aws_autoscaling as autoscaling

ec2_type = "t2.micro"
key_name = "hogehoge" #Change to the key pair prepared in advance.
linux_ami = ec2.AmazonLinuxImage(generation=ec2.AmazonLinuxGeneration.AMAZON_LINUX_2, #Change to Amazon Linux 2
                                 edition=ec2.AmazonLinuxEdition.STANDARD,
                                 virtualization=ec2.AmazonLinuxVirt.HVM,
                                 storage=ec2.AmazonLinuxStorage.GENERAL_PURPOSE
                                 )  # Indicate your AMI, no need a specific id in the region
with open("./user_data/user_data.sh") as f:
    user_data = f.read()


class CdkEc2Stack(core.Stack):

    def __init__(self, scope: core.Construct, id: str, vpc, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        # Create Bastion
        bastion = ec2.BastionHostLinux(self, "myBastion",
                                       vpc=vpc,
                                       subnet_selection=ec2.SubnetSelection(
                                           subnet_type=ec2.SubnetType.PUBLIC),
                                       instance_name="myBastionHostLinux",
                                       instance_type=ec2.InstanceType(instance_type_identifier="t2.micro"))
        
        # Setup key_name for EC2 instance login if you don't use Session Manager
        # Uncommented so that the key pair is registered
        bastion.instance.instance.add_property_override("KeyName", key_name)

        #Narrow down the source IP
        # bastion.connections.allow_from_any_ipv4(
        bastion.connections.allow_from(ec2.Peer.ipv4("x.x.x.x/32"), #Source IP here!
            ec2.Port.tcp(22), "Internet access SSH")

        # Create ALB
        alb = elb.ApplicationLoadBalancer(self, "myALB",
                                          vpc=vpc,
                                          internet_facing=True,
                                          load_balancer_name="myALB"
                                          )
        alb.connections.allow_from_any_ipv4(
            ec2.Port.tcp(80), "Internet access ALB 80")
        listener = alb.add_listener("my80",
                                    port=80,
                                    open=True)

        # Create Autoscaling Group with fixed 2*EC2 hosts
        self.asg = autoscaling.AutoScalingGroup(self, "myASG",
                                                vpc=vpc,
                                                vpc_subnets=ec2.SubnetSelection(subnet_type=ec2.SubnetType.PRIVATE),
                                                instance_type=ec2.InstanceType(instance_type_identifier=ec2_type),
                                                machine_image=linux_ami,
                                                key_name=key_name,
                                                user_data=ec2.UserData.custom(user_data),
                                                desired_capacity=2,
                                                min_capacity=2,
                                                max_capacity=2,
                                                )

        self.asg.connections.allow_from(alb, ec2.Port.tcp(8080), "ALB access 8080 port of EC2 in Autoscaling Group")
        listener.add_targets("addTargetGroup",
                             port=8080, # 80 ->Change to 8080
                             targets=[self.asg])

        core.CfnOutput(self, "Output",
                       value=alb.load_balancer_dns_name)


### Change user_data

user_data.sh


#!/bin/bash
sudo yum update -y
sudo yum install -y java-11-amazon-corretto-headless
sudo yum install -y maven
# sudo yum -y install httpd php
# sudo chkconfig httpd on
# sudo service httpd start

### Instead of creating an RDS, let's run a DB-independent smoke test app for the time being!

For now, just run a Spring Boot app on EC2. (Sorry, I was running out of steam and the explanation gets rough around here... well, it's not the main subject.) Spring Boot listens on port 8080 by default, which is why the ALB target port was changed to 8080 earlier.

HelloController.java


package com.example.demo;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class HelloController {

    @GetMapping("/")
    public String hello() {
        return "Hello, Spring boot with AWS CDK";
    }
}

**Since the Security Group attached to the app's EC2 instances only accepts access from the ALB, Bastion cannot SSH into them at this point. So change the Security Group manually to also accept SSH from Bastion.**

[TODO] The explanation around here is too rough, so I'll tidy it up a bit more...
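
Incidentally, rather than editing the Security Group by hand in the console, the same rule could probably be expressed in cdk_ec2_stack.py itself. The following is only a sketch I did not wire into the walkthrough above; it assumes the bastion and self.asg objects from the second-lap code.

        # Allow SSH from the Bastion host to the EC2 instances in the
        # Auto Scaling group, instead of opening the rule manually.
        self.asg.connections.allow_from(
            bastion, ec2.Port.tcp(22),
            "SSH access from Bastion to EC2 in Autoscaling Group")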

# Copy the jar to EC2 with scp or similar

# Start the app
$ java -jar demo-0.0.1-SNAPSHOT.jar

# Hit the endpoint (the ALB DNS name output by the cdk-ec2 deploy)

## Impressions

My impression is that CloudFormation is a bit of a hurdle for application engineers who need to create an AWS environment. The CDK wraps it in a familiar programming language, and as a result the infrastructure code felt easier to read. I used Python this time, but I'd like to try other languages as well.

- Knowledge of AWS itself is of course still necessary
- It's tough if you don't know at least a little CloudFormation (enough to read the output of cdk synth)
- You don't need much Python knowledge (beginner level is fine)
- The CDK API specifications need to be checked each time (or at least should be)

The API specifications can be found below. https://docs.aws.amazon.com/cdk/api/latest/python/modules.html
