This is an article about creating a microservice-like application with Python, MySQL, and Docker and deploying it to Fargate.
I wanted to try Fargate, which is said to be one of the hotter services on AWS, and I was also interested in microservices architecture. However, since I have never worked with microservices properly, this is a speculative architecture in the spirit of "I suppose it looks something like this". If you find any mistakes, I would appreciate your feedback.
Also, AWS has RDS, but since I am a container beginner in the first place, I use a MySQL container as well, for study purposes.
There are many explanations on the net, but I will briefly describe my understanding.
- The application consists of **small, loosely coupled services**.
- Because of the loose coupling, a fix is contained within each service.
- Each service can be developed by a separate team, so each team can use the language and framework it is good at. (Conversely, these may also be unified across the entire application.)
- **Containers** are used as the technology for realizing small services.
- Kubernetes and AWS ECS exist as tools for managing containers.
- For data communication between services, **REST and gRPC** are mainly used. Messaging technology (such as Kafka) may also be used.
Fargate
- To be exact, Fargate is used together with ECS or EKS.
- With a container orchestration tool such as ECS you can manage containers, but you have to manage the hosts (servers) that the containers are deployed to separately.
- For example, if the containers scale and the load on the hosts increases, the hosts also need to be scaled; this kind of management is a burden on the user.
- Fargate is a managed service in which **AWS manages these hosts**. The scaling mentioned in the example is also performed automatically, so **users do not need to be aware of host management**.
That is my understanding. In other words, this time I will try deploying **microservices built with Docker to Fargate, an AWS managed service**.
- There are two microservices, Apl and BackEnd.
- Communication between the services is HTTP (REST).
- The client accesses the API of the Apl service; the Apl service accesses the API of the BackEnd service, which in turn accesses the DB.
- The only reason Python is split between 3.6 and 3.8 is that I wanted to experience that "Docker can do this"; there is no other reason. Both could be 3.6 without any problem.
First, I deploy the microservices with my local machine as the host, and then bring them to Fargate. After a successful deployment to Fargate, the goal is to be able to reach both microservices from the outside over HTTP.
Deploy the app on your local Mac. The sources etc. are as follows.
Create folders for each of the three containers (apl-service, db-service, mysql).
.
├── apl-service
│   ├── Dockerfile
│   └── src
│       ├── results.py
│       └── server.py
├── db-service
│   ├── Dockerfile
│   └── src
│       ├── server.py
│       └── students.py
├── docker-compose.yml
└── mysql
    ├── Dockerfile
    └── db
        ├── mysql_data
        └── mysql_init
            └── setup.sql
mysql/Dockerfile
FROM mysql/mysql-server:5.7
RUN chown -R mysql /var/lib/mysql && \
chgrp -R mysql /var/lib/mysql
setup.sql and the data actually registered
create table students (id varchar(4), name varchar(20), score int);
insert into students values ('1001', 'Alice', 60);
insert into students values ('1002', 'Bob', 80);
commit;
mysql> select * from DB01.students;
+------+-------+-------+
| id | name | score |
+------+-------+-------+
| 1001 | Alice | 60 |
| 1002 | Bob | 80 |
+------+-------+-------+
db-service/Dockerfile
FROM python:3.6
# Working directory in the container
WORKDIR /usr/src/
# Install libraries
RUN pip install flask mysql-connector-python
CMD python ./server.py
db-service/src/server.py
from flask import Flask
from students import get_students

app = Flask(__name__)

@app.route('/students', methods=['GET'])
def local_endpoint():
    return get_students()

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5001)
db-service/src/students.py
import json
import mysql.connector as mydb

def get_students():
    conn = mydb.connect(user="user", passwd="password",
                        host="mysql", port="3306")
    cur = conn.cursor()
    sql_qry = 'select id, name, score from DB01.students;'
    cur.execute(sql_qry)
    rows = cur.fetchall()
    results = [{"id": i[0], "name": i[1], "score": i[2]} for i in rows]
    return_json = json.dumps({"students": results})
    cur.close()
    conn.close()
    return return_json
Brief source explanation:
- server.py is executed when the Docker container starts.
- server.py sets up a server with Flask. It is an API that returns the result of get_students (imported from students.py) when the `/students` resource is accessed via GET.
- get_students connects to the MySQL container and stores the rows selected from the DB in `rows`, converts them to dicts with a comprehension, then converts them to JSON and returns the value.
- For the local deployment, the containers can reach each other without using `links`, since docker-compose places all services on a shared default network where they can resolve each other by service name. If anything, `links` seems to be a legacy technique that is rarely used these days...
apl-service/Dockerfile
FROM python:3.8
# Working directory in the container
WORKDIR /usr/src/
# Install libraries
RUN pip install requests flask
CMD python ./server.py
apl-service/src/server.py
from flask import Flask
from results import get_results

API_URI = 'http://db-service:5001/students'

app = Flask(__name__)

@app.route('/results', methods=['GET'])
def local_endpoint():
    return get_results(API_URI)

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5002)
apl-service/src/results.py
import json
import requests

def get_results(uri):
    r = requests.get(uri)
    students = r.json()["students"]
    return_students = []
    for student in students:
        if student["score"] > 70:
            student.setdefault("isPassed", True)
            return_students.append(student)
        else:
            student.setdefault("isPassed", False)
            return_students.append(student)
    return_json = json.dumps({"addedStudents": return_students})
    return return_json
Brief source explanation:
- server.py is almost the same as in db-service. It is an API that returns the result of get_results (imported from results.py) when the `/results` resource is accessed via GET. The difference is that a URI is set so that get_results calls the db-service API.
- get_results stores the values obtained from db-service in `students`, loops over them, sets `"isPassed"` to True if each score is greater than 70 and to False otherwise, and adds the element to the JSON.
docker-compose
docker-compose.yml
version: '3'
services:
  mysql:
    container_name: mysql
    build:
      context: .
      dockerfile: ./mysql/Dockerfile
    hostname: mysql
    ports:
      - "3306:3306"
    volumes:
      - ./mysql/db/mysql_init:/docker-entrypoint-initdb.d
      - ./mysql/db/mysql_data:/var/lib/mysql
    environment:
      MYSQL_USER: user
      MYSQL_PASSWORD: password
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: DB01
    command: mysqld --character-set-server=utf8 --collation-server=utf8_unicode_ci --skip-character-set-client-handshake
  db-service:
    build:
      context: .
      dockerfile: ./db-service/Dockerfile
    container_name: db-service
    ports:
      - "5001:5001"
    volumes:
      - ./db-service/src/:/usr/src/
  apl-service:
    build:
      context: .
      dockerfile: ./apl-service/Dockerfile
    container_name: apl-service
    ports:
      - "5002:5002"
    volumes:
      - ./apl-service/src/:/usr/src/
Brief source explanation:
- In mysql's volumes, docker-entrypoint-initdb.d is set up to hold the setup.sql shown earlier. MySQL executes the SQL under docker-entrypoint-initdb.d at the first startup; in short, it is the initial data.
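Note that the MySQL image runs the scripts under docker-entrypoint-initdb.d only on the very first startup, when the data directory is still empty. If you want setup.sql to run again, you can empty the persisted data directory first; a sketch assuming the folder layout above (this wipes the local data):
Example) Re-run the initial data setup
$ docker-compose down
$ rm -rf ./mysql/db/mysql_data/*
$ docker-compose up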
I will omit the result, but deploy it with the docker-compose command.
build&up
$ docker-compose build
$ docker-compose up
After a successful deployment, try accessing each microservice over HTTP as a communication check. As you can see from the source, db-service uses port 5001 and apl-service uses port 5002.
Access db-service
$ curl http://127.0.0.1:5001/students
{"students": [{"id": "1001", "name": "Alice", "score": 60}, {"id": "1002", "name": "Bob", "score": 80}]}
Access apl-service
$ curl http://127.0.0.1:5002/results
{"addedStudents": [{"id": "1001", "name": "Alice", "score": 60, "isPassed": false}, {"id": "1002", "name": "Bob", "score": 80, "isPassed": true}]}
You have now successfully deployed the basic (speculative) microservices locally. Since the apl-service API returns its result, you can also confirm that the apl-service and db-service microservices are communicating with each other.
I'll bring this to Fargate right away! But it was a long road from here...
Below is the configuration diagram of the final version deployed to Fargate.
The points are as follows.
- The microservices are deployed to a single Fargate cluster.
- There is inter-container communication within a task (db-service ⇔ mysql).
- There is inter-task communication (apl-service ⇔ db-service).
Based on these, we will deploy.
First, push the three Docker images you created (apl-service, db-service, mysql) to ECR. The ECR UI is easy to understand, so I think you can push by pressing "Create repository" in the AWS console and following the prompts.
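For reference, pushing goes roughly like this; a sketch assuming AWS CLI v2 (older CLIs use a different login command) and the repository name python-mysql/mysql that appears later:
Example) Push the mysql image to ECR
$ aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <aws_account_id>.dkr.ecr.<region>.amazonaws.com
$ docker tag <local-mysql-image>:latest <aws_account_id>.dkr.ecr.<region>.amazonaws.com/python-mysql/mysql:latest
$ docker push <aws_account_id>.dkr.ecr.<region>.amazonaws.com/python-mysql/mysql:latest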
For now, remove the local containers with `docker rm`. With ECS, you can keep using docker-compose.yml. After setting up ecs-cli, we will deploy to Fargate following the tutorial below, but docker-compose.yml and related files need to be modified. I will go through the various fix points, but the final version of the sources is posted at the end of the article, so if you are in a hurry, please refer to that.
Tutorial: Create a Cluster of Fargate Tasks Using the Amazon ECS CLI (https://docs.aws.amazon.com/ja_jp/AmazonECS/latest/developerguide/ecs-cli-tutorial-fargate.html#ECS_CLI_tutorial_fargate_configure)
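Note that the tutorial also has you create an ecs-params.yml next to docker-compose.yml (one per folder once we split the compose files later); Fargate tasks need awsvpc networking, a task execution role, and a task size. A minimal sketch following the tutorial — the subnet and security group IDs are placeholders to adjust to your VPC:
ecs-params.yml
version: 1
task_definition:
  task_execution_role: ecsTaskExecutionRole
  ecs_network_mode: awsvpc
  task_size:
    mem_limit: 0.5GB
    cpu_limit: 256
run_params:
  network_configuration:
    awsvpc_configuration:
      subnets:
        - "<subnet-id-1>"
        - "<subnet-id-2>"
      security_groups:
        - "<security-group-id>"
      assign_public_ip: ENABLED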
Also, since this app uses ports 5001 and 5002, they need to be allowed in the security group settings in step 3.
When allowing 5001
$ aws ec2 authorize-security-group-ingress --group-id <security-group-id> --protocol tcp --port 5001 --cidr 0.0.0.0/0 --region <region>
- This time, the two microservices are deployed as two tasks, so docker-compose.yml needs to be split in two, and the deploy command has to be executed twice.
- Split into backend (db-service, mysql) and apl (apl-service).
- That leaves two docker-compose.yml files with the same name, so keep them in separate folders.
Example) Folder "aws_Store in "work" and deploy with task name backend
$ pwd
/Users/<abridgement>/aws_work
$ ls  # the docker-compose.yml for db-service + mysql
docker-compose.yml
$ ecs-cli compose --project-name backend service up ~
Example) Folder "aws_Store in "work" and deploy with task name apl
$ pwd
/Users/<abridgement>/aws_work2
$ ls  # the docker-compose.yml for apl-service
docker-compose.yml
$ ecs-cli compose --project-name apl service up ~
- Specify the image you pushed earlier instead of building from a Dockerfile: `build`, `container_name`, and `hostname` are no longer needed, so specify `image` instead.
- In the example below, the ECR repository name for the mysql container is `python-mysql/mysql`.
- db-service and apl-service are switched to `image` in the same way.
Example) docker-compose.yml fix points (image settings)
mysql:
  image: <aws_account_id>.dkr.ecr.<region>.amazonaws.com/python-mysql/mysql
- Fargate does not support persistent storage volumes. That sounds difficult, but it simply means you cannot use volumes in docker-compose.yml, so sources such as server.py would not be reflected into the container that way.
- So, comment out (or just delete) volumes in docker-compose.yml and add COPY lines to each Dockerfile instead.
- db-service and apl-service get the same kind of modification.
- Since the Dockerfiles changed, the images need to be pushed to ECR again.
Example) docker-compose.yml fix points (volumes)
# volumes:
#   - ./db-service/src/:/usr/src/
Example) db-service/Dockerfile modification points
# Copy the sources into the container
COPY ./src/server.py /usr/src/server.py
COPY ./src/students.py /usr/src/students.py
- As mentioned in the tutorial above, set up container logs, which is a best practice for Fargate tasks. Here the log group name is python-docker.
- Set the same for db-service and apl-service.
Example) docker-compose.yml fix points (container logs)
mysql:
  logging:
    driver: awslogs
    options:
      awslogs-group: python-docker
      awslogs-region: <region>
      awslogs-stream-prefix: mysql
Also, although I did not have to fix it this time, note that on Fargate the `ports` must be the same number on the host side and the container side. For example, locally you can specify `80:5001`, but on Fargate you will get an error unless you specify `5001:5001`.
I've fixed these four points, but it still doesn't work.
From here come the modifications needed to realize the intra-task inter-container communication and the inter-task communication mentioned above. First, to get the communication between containers working, we deal with db-service and mysql.
After deploying the fixes so far, checking the status with the `ecs-cli compose ~ service ps ~` command shows `RUNNING`, at least.
Here is the actual result of ps after startup.
$ ecs-cli compose --project-name backend service ps --cluster-config sample-config --ecs-profile sample-profile
Name State Ports TaskDefinition Health
<task-id>/db-service RUNNING XXX.XXX.XXX.XXX:5001->5001/tcp backend:12 UNKNOWN
<task-id>/mysql RUNNING XXX.XXX.XXX.XXX:3306->3306/tcp backend:12 UNKNOWN
Since a global IP address is displayed, I try a GET against db-service from the Mac, but as mentioned above, it does not work yet.
$ curl http://XXX.XXXX.XXX.XXX:5001/students
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>500 Internal Server Error</title>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>
The service runs on ECS, so checking the log on the Logs tab of the console shows an error saying a name cannot be resolved — in this case, that the mysql host name cannot be found from db-service. In short, there is no communication between the containers. As I mentioned for the local deployment, I am not using links this time; besides, the Fargate launch type does not support links in the first place.
The solution: on Fargate, containers in the same task share a network namespace, so they can reach each other via localhost and the port number. So, change the source of students.py as follows. Also, since hard-coding the host name is questionable, I modify it so that the value is passed in as an environment variable from docker-compose.yml.
db-service/src/students.py fix points
import json
import mysql.connector as mydb
import os  # added

def get_students():
    # Changed to read the connection info from environment variables
    conn = mydb.connect(user=os.environ['DB_USER'], passwd=os.environ['DB_PASS'],
                        host=os.environ['DB_HOST'], port=os.environ['DB_PORT'])
docker-compose.yml fix points (db-service)
db-service:
  # (omitted)
  environment:
    DB_HOST: 127.0.0.1
    DB_USER: "user"
    DB_PASS: "password"
    DB_PORT: "3306"
Push to ECR again and deploy.
If you have already deployed to Fargate once, then to reflect the changes, take the service down with the `ecs-cli compose ~ service down ~` command from the tutorial and start it again with the `ecs-cli compose ~ service up ~` command.
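For the backend task, for example, the pair of commands looks roughly like this (options as in the tutorial; adjust the names to your setup):
Example) Redeploy the backend service
$ ecs-cli compose --project-name backend service down --cluster-config sample-config --ecs-profile sample-profile
$ ecs-cli compose --project-name backend service up --cluster-config sample-config --ecs-profile sample-profile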
This time, HTTP communication from the Mac to db-service succeeds. The following is a review of the communication path.
$ curl http://XXX.XXX.XXX.XXX:5001/students
{"students": [{"id": "1001", "name": "Alice", "score": 60}, {"id": "1002", "name": "Bob", "score": 80}]}
Next, let's get apl-service communicating. As mentioned earlier, apl-service calls the db-service API, so it needs to communicate with another task. To achieve this on Fargate, enable service discovery when launching the service. I will omit how service discovery works, since the Classmethod article in the references at the end explains it very clearly; in short, Route 53 A records are created automatically for the microservices, so they can resolve each other by name.
Since we are using ecs-cli this time, we deploy with commands, referring to the following official AWS tutorial.
Tutorial: Create an Amazon ECS Service That Uses Service Discovery Using the Amazon ECS CLI (https://docs.aws.amazon.com/en_jp/AmazonECS/latest/developerguide/ecs-cli-tutorial-servicediscovery.html)
The host name of a discovered service is `service_name.namespace`. This time the service is created with the name backend and the namespace sample, so to access db-service from apl-service, the host name must be `backend.sample`.
So, as with db-service, modify it to set the host name from the environment variable.
apl-service/src/server.py fix points
from flask import Flask
from results import get_results
import os  # added

# Changed to build the URI from environment variables
API_URI = 'http://' + os.environ['BACKEND_HOST'] + ':' + os.environ['BACKEND_PORT'] + '/students'
docker-compose.yml fix points (apl-service)
apl-service:
  # (omitted)
  environment:
    BACKEND_HOST: "backend.sample"
    BACKEND_PORT: "5001"
Now we're ready. Push the apl-service image to ECR again. If db-service and mysql are already running, they need to be redeployed with the same namespace settings, so take that service down once. Then, in the folder containing each docker-compose.yml, deploy with the namespace options added. The service names are backend and apl, respectively.
Deploy backend service
$ ecs-cli compose --project-name backend service up --private-dns-namespace sample --vpc <vpc-id> --enable-service-discovery --ecs-profile sample-profile
Deploy apl service
$ ecs-cli compose --project-name apl service up --private-dns-namespace sample --vpc <vpc-id> --enable-service-discovery --ecs-profile sample-profile
As a caveat, you should also add `--ecs-profile` to the deploy commands from the tutorial.
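To find the public IP address to access, you can check the status with service ps again, just as with the backend earlier (names follow the earlier examples):
Example) Check the apl service
$ ecs-cli compose --project-name apl service ps --cluster-config sample-config --ecs-profile sample-profile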
$ curl http://XXX.XXX.XXX.XXX:5002/results
{"addedStudents": [{"id": "1001", "name": "Alice", "score": 60, "isPassed": false}, {"id": "1002", "name": "Bob", "score": 80, "isPassed": true}]}
Finally, communication with apl-service is confirmed. The following is a review of the communication path.
I've fixed a lot, so here is a summary of the sources and deploy commands that changed. Note that I used the ap-northeast-1 region; if you use a different one, you will need to change it.
mysql/Dockerfile
FROM mysql/mysql-server:5.7
COPY ./db/mysql_init/setup.sql /docker-entrypoint-initdb.d/setup.sql
RUN chown -R mysql /var/lib/mysql && \
chgrp -R mysql /var/lib/mysql
Since setup.sql is unchanged, it is omitted.
db-service/Dockerfile
FROM python:3.6
# Working directory in the container
WORKDIR /usr/src/
# Install libraries
RUN pip install flask mysql-connector-python
# Copy the sources into the container
COPY ./src/server.py /usr/src/server.py
COPY ./src/students.py /usr/src/students.py
CMD python ./server.py
`db-service/src/server.py` has not changed, so it is omitted.
db-service/src/students.py
import json
import mysql.connector as mydb
import os

def get_students():
    conn = mydb.connect(user=os.environ['DB_USER'], passwd=os.environ['DB_PASS'],
                        host=os.environ['DB_HOST'], port=os.environ['DB_PORT'])
    cur = conn.cursor()
    sql_qry = 'select id, name, score from DB01.students;'
    cur.execute(sql_qry)
    rows = cur.fetchall()
    results = [{"id": i[0], "name": i[1], "score": i[2]} for i in rows]
    return_json = json.dumps({"students": results})
    cur.close()
    conn.close()
    return return_json
apl-service/Dockerfile
FROM python:3.8
# Working directory in the container
WORKDIR /usr/src/
# Install libraries
RUN pip install requests flask
# Copy the sources into the container
COPY ./src/server.py /usr/src/server.py
COPY ./src/results.py /usr/src/results.py
CMD python ./server.py
apl-service/src/server.py
from flask import Flask
from results import get_results
import os

API_URI = 'http://' + os.environ['BACKEND_HOST'] + ':' + os.environ['BACKEND_PORT'] + '/students'

app = Flask(__name__)

@app.route('/results', methods=['GET'])
def local_endpoint():
    return get_results(API_URI)

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5002)
`apl-service/src/results.py` has not changed, so it is omitted.
docker-compose (db-service, mysql)
docker-compose.yml
# aws_account_id and region need to be modified to match your environment
version: '3'
services:
  mysql:
    image: <aws_account_id>.dkr.ecr.ap-northeast-1.amazonaws.com/python-mysql/mysql
    ports:
      - "3306:3306"
    environment:
      MYSQL_USER: user
      MYSQL_PASSWORD: password
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: DB01
    command: mysqld --character-set-server=utf8 --collation-server=utf8_unicode_ci --skip-character-set-client-handshake
    logging:
      driver: awslogs
      options:
        awslogs-group: python-docker
        awslogs-region: ap-northeast-1
        awslogs-stream-prefix: mysql
  db-service:
    image: <aws_account_id>.dkr.ecr.ap-northeast-1.amazonaws.com/python-mysql/db-service
    ports:
      - "5001:5001"
    environment:
      DB_HOST: 127.0.0.1
      DB_USER: "user"
      DB_PASS: "password"
      DB_PORT: "3306"
    logging:
      driver: awslogs
      options:
        awslogs-group: python-docker
        awslogs-region: ap-northeast-1
        awslogs-stream-prefix: db-service
docker-compose (apl-service)
docker-compose.yml
# aws_account_id and region need to be modified to match your environment
version: '3'
services:
  apl-service:
    image: <aws_account_id>.dkr.ecr.ap-northeast-1.amazonaws.com/python-mysql/apl-service
    ports:
      - "5002:5002"
    environment:
      BACKEND_HOST: "backend.sample"
      BACKEND_PORT: "5001"
    logging:
      driver: awslogs
      options:
        awslogs-group: python-docker
        awslogs-region: ap-northeast-1
        awslogs-stream-prefix: apl-service
I'll cover the main commands from step 3 of the tutorial (creating a cluster of Fargate tasks using the Amazon ECS CLI); the lines with comments are the changes. Replace the angle-bracket placeholders and the region with your actual values.
Step 3
$ ecs-cli up --cluster-config sample-config --ecs-profile sample-profile
$ aws ec2 describe-security-groups --filters Name=vpc-id,Values=<vpc-id> --region ap-northeast-1
$ aws ec2 authorize-security-group-ingress --group-id <security-group-id> --protocol tcp --port 80 --cidr 0.0.0.0/0 --region ap-northeast-1
# Also allow ports 5001 and 5002
$ aws ec2 authorize-security-group-ingress --group-id <security-group-id> --protocol tcp --port 5001 --cidr 0.0.0.0/0 --region ap-northeast-1
$ aws ec2 authorize-security-group-ingress --group-id <security-group-id> --protocol tcp --port 5002 --cidr 0.0.0.0/0 --region ap-northeast-1
Step 5
# Deploy backend: namespace creation options added for service discovery
$ ecs-cli compose --project-name backend service up --create-log-groups --cluster-config sample-config --private-dns-namespace sample --vpc <vpc-id> --enable-service-discovery --ecs-profile sample-profile
# Deploy apl: same as backend
$ ecs-cli compose --project-name apl service up --create-log-groups --cluster-config sample-config --private-dns-namespace sample --vpc <vpc-id> --enable-service-discovery --ecs-profile sample-profile
Step 6
$ ecs-cli compose --project-name backend service up --private-dns-namespace sample --vpc <vpc-id> --enable-service-discovery --ecs-profile sample-profile
$ ecs-cli compose --project-name apl service up --private-dns-namespace sample --vpc <vpc-id> --enable-service-discovery --ecs-profile sample-profile
Step 10
$ ecs-cli compose --project-name backend service down --cluster-config sample-config --ecs-profile sample-profile
$ ecs-cli compose --project-name apl service down --cluster-config sample-config --ecs-profile sample-profile
$ ecs-cli down --force --cluster-config sample-config --ecs-profile sample-profile
- An image of running microservice apps on Fargate
- The mechanisms and methods of intra-task and inter-task communication in ECS
- Various restrictions, such as Fargate not supporting persistent storage volumes
Building the microservices from scratch, I feel I deepened my understanding not only of Fargate but of microservices in general. I wrote a lot about the points I got stuck on, so it may have been a hard article to follow. It turned out long, but thank you for reading. I would be very grateful for any corrections or questions.
Reference articles
- Build a python (jupyter) + MySQL environment using docker-compose
- When you want to communicate between containers with AWS Fargate
- Use mysql with docker