This article walks through building a production environment on AWS by turning a Ruby on Rails application I created as a portfolio into Docker containers. The portfolio itself is described here: [[Portfolio] Overview of the portfolio created during job change activities (Tec camp)](https://qiita.com/sho_U/items/058e590325ee6833abb0)
I struggled with this quite a bit, so I hope it helps someone.
|  | title |
|---|---|
| 1 | Docker containerization of a Rails application in the local environment |
| 2 | Create a VPC on AWS / Create a public subnet |
| 3 | Create a private subnet |
| 4 | Create an EC2 instance |
| 5 | Create an RDS |
| 6 | Upload the Docker container to AWS |
This part was quite a headache. There are many pieces that have to talk to each other, such as nginx, Puma, and RDS, and while building the containers on AWS I sometimes lost track of where I was. Things may not work perfectly even if you follow this article, but I hope it helps someone. If possible I never want to feel that way again, so I wrote this down before I forget.
Upload the local Docker containers to the AWS instance we created.
If you look at database.yml right now, it contains the following.
database.yml
username: root
password: password
host: db
Sensitive information is hard-coded like this. Since it could leak through the code on GitHub as-is, we will pass it to the production container through environment variables instead.
For background on environment variables, see [About environment variables](https://qiita.com/sho_U/items/cd5fc4d4d76b65b92d23).
Install a gem called dotenv-rails to take advantage of environment variables. Add the following to the Gemfile.
Gemfile.
gem 'dotenv-rails'
After adding it, run bundle install (rebuilding the image if gems are installed at build time). With dotenv-rails installed, environment variables defined in a ".env" file can be read inside the Docker container with code like the following.
ENV['DATABASE_PASSWORD']
Of course, pushing the .env file itself to GitHub would defeat the whole purpose, so add it to .gitignore.
.env
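As a supplementary check (my addition, not in the original steps), you can ask git directly whether these files are ignored:

```bash
# Confirm git treats the secret files as ignored (run from the app root).
git check-ignore -v .env               # prints the matching .gitignore rule if it is ignored
git check-ignore -v config/master.key  # master.key is usually ignored by default in newer Rails apps
```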
Write .env as below and place it directly under the app root.
.env
DB_USERNAME=root
DB_PASSWORD=gakjadfmoaeur
DB_HOST=fito2-db-instance.〇〇〇〇〇〇.ap-northeast-1.rds.amazonaws.com
DB_DATABASE=fitO2_db
DB_HOST is the RDS endpoint. You can find it on the AWS RDS dashboard by opening "Databases" and selecting the relevant RDS instance from the list.
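If you have the AWS CLI configured, the endpoint can also be looked up from the command line instead of the console. This is optional and assumes your credentials and region are already set up:

```bash
# Optional: list RDS instance identifiers and their endpoints with the AWS CLI.
aws rds describe-db-instances \
  --query 'DBInstances[].[DBInstanceIdentifier,Endpoint.Address]' \
  --output table
```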
Add the following to database.yml so that the production environment reads the value from the environment variable.
database.yml
production:
  <<: *default
  database: <%= ENV['DB_DATABASE'] %>
  adapter: mysql2
  encoding: utf8mb4
  charset: utf8mb4
  collation: utf8mb4_general_ci
  host: <%= ENV['DB_HOST'] %>
  username: <%= ENV['DB_USERNAME'] %>
  password: <%= ENV['DB_PASSWORD'] %>
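As a quick local sanity check (my addition, assuming the app image builds locally with the Gemfile above), you can confirm that dotenv-rails really exposes the values from .env to Rails before going further:

```bash
# Should print the RDS endpoint defined in .env (run from the app root).
docker-compose run --rm app bundle exec rails runner 'puts ENV["DB_HOST"]'
```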
Modify the following parts for production.
nginx.conf
server {
  listen 80;
  # =========Switch between local and production===========
  server_name fixed IP;
  # server_name localhost;
  # ======================================
docker-compose.yml
version: '3'
services:
  app:
    build:
      context: .
    # =========Switch between local and production===========
    command: bundle exec puma -C config/puma.rb -e production
    # command: bundle exec puma -C config/puma.rb
    # ======================================
    volumes:
      - .:/fitO2
      - public-data:/fitO2/public
      - tmp-data:/fitO2/tmp
      - log-data:/fitO2/log
    networks:
      - fitO2-network
    # =========Switch between local and production===========
    # depends_on:
    #   - db
  # db:
  #   image: mysql:5.7
  #   environment:
  #     MYSQL_ROOT_PASSWORD: password
  #     MYSQL_USER: user
  #     MYSQL_PASSWORD: password
  #     MYSQL_DATABASE: fitO2_development
  #   volumes:
  #     - db-data:/var/lib/mysql
  #   networks:
  #     - fitO2-network
  # ======================================
  web:
    build:
      context: ./nginx_docker
    volumes:
      - public-data:/fitO2/public
      - tmp-data:/fitO2/tmp
    ports:
      - 80:80
    depends_on:
      - app
    networks:
      - fitO2-network
volumes:
  public-data:
  tmp-data:
  log-data:
  db-data:
networks:
  fitO2-network:
    external: true
Change points:
- Commented out the db container, since the production environment uses RDS and the container is no longer needed. **(Comment out the app container's depends_on as well.)**
- Added -e production to the Puma command so that the server starts in the production environment.
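Before pushing, it can be worth checking that the edited compose file still parses. docker-compose can validate it and print the resolved configuration; this is an optional step run from the app root:

```bash
# Validate docker-compose.yml after the edits and show the resolved configuration.
docker-compose config
# Quiet variant that only reports errors:
docker-compose config -q
```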
Let's organize the flow of building the Docker containers on the AWS instance once more.
Up to now, the containers described above (app, web, and db) could be built locally from the fitO2 directory. We then modified fitO2 so that the db container is no longer created, since RDS takes its place.
Next, push fitO2 to GitHub and clone it on the AWS side (pull from the second time onward). From the cloned fitO2, run the build on AWS to create the containers and connect them to the RDS created earlier.
First, commit the modified files and push them to GitHub.
Log in to the EC2 instance over SSH and install Docker.
sudo yum install -y docker
//Installation
sudo service docker start
//start docker
sudo usermod -G docker ec2-user
//Grant the ec2-user permission to run docker (add it to the docker group)
exit
//Logout
//Login with ssh again
docker info
You are fine if output like the following is displayed.
Client:
 Debug Mode: false

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 19.03.13-ce
sudo chkconfig docker on
//Docker starts automatically when EC2 starts
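The author uses chkconfig here. If your EC2 instance runs Amazon Linux 2 (systemd), the equivalent command below also works, whichever you prefer:

```bash
# systemd equivalent: start Docker automatically when the instance boots.
sudo systemctl enable docker
```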
Install docker-compose.
sudo curl -L "https://github.com/docker/compose/releases/download/1.24.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
docker-compose -v
docker-compose version 1.24.0, build 0aa59064
//Success if the version is displayed as above
//If Permission denied
sudo chmod +x /usr/local/bin/docker-compose
//After granting execute permission to docker-compose, run docker-compose -v again to check the version.
Install git.
sudo yum install -y git
Create an SSH key pair. (Just press Enter for every prompt.)
ssh-keygen -t rsa -b 4096
Display the created public key and copy it.
cat ~/.ssh/id_rsa.pub
Go to https://github.com/settings/keys.
Click "New SSH key" in the upper right.
Enter any title you like and paste the string you copied earlier into the Key field. (Include everything from "ssh-rsa" onward.)
ssh -T git@github.com
Run the above, answer yes when prompted, and if the following is displayed, authentication between AWS and GitHub has been established.
Hi yourname! You've successfully authenticated, but GitHub does not provide shell access.
After that, you will be able to clone and pull your own GitHub repositories from the AWS instance.
Clone from GitHub. (The URL is the one shown on the repository page.)
cd /
//Move directly under the root directory. If you put the project in your home directory, you may get a puma.sock read error when Puma and nginx try to work together.
sudo git clone https://github.com/〇〇〇〇〇〇/〇〇〇〇〇〇
ls
//Type ls to see if the directory exists.
Right now, the files listed in .gitignore (.env and master.key) are not included in the clone. Therefore, transfer them directly from your local machine to AWS over SSH.
exit
//Log out and move to your local fitO2 directory
sudo scp -i ~/.ssh/fitO2_key.pem .env ec2-user@Fixed IP:/home/ec2-user/
sudo scp -i ~/.ssh/fitO2_key.pem config/master.key ec2-user@Fixed IP:/home/ec2-user
//Use the scp command to transfer .env (and master.key) to AWS. If the transferred file name is echoed back, it worked.
//Transfer to the home directory first because of permissions.
Log in again.
cd
ls -a
If you move to your home directory and .env and master.key exist, there is no problem. Move .env and master.key.
sudo mv .env /fitO2
sudo mv master.key /fitO2/config
Check that they were moved, just in case.
ls -a /fitO2/
ls -a /fitO2/config
cd /fitO2
docker-compose build
//Create a container.
(If you get Permission denied, run sudo chmod 777 /usr/local/bin/docker-compose and try again.)
docker network create fitO2-network
//Create a network.
docker-compose run app rails assets:precompile RAILS_ENV=production
//Precompile
docker-compose up
//Start the container.
If output like the following is displayed, the containers started without any problem.
Creating fito2_app_1 ... done
Creating fito2_web_1 ... done
Attaching to fito2_app_1, fito2_web_1
app_1 | Puma starting in single mode...
app_1 | * Version 3.12.6 (ruby 2.5.1-p57), codename: Llamas in Pajamas
app_1 | * Min threads: 5, max threads: 5
app_1 | * Environment: production
app_1 | * Listening on tcp://0.0.0.0:3000
app_1 | * Listening on unix:///fitO2/tmp/sockets/puma.sock
app_1 | Use Ctrl-C to stop
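As an aside, you do not have to keep this terminal attached. The stack can also be started in the background and the logs followed separately; the last command assumes the web image is nginx-based, as in this setup:

```bash
# Alternative to the foreground start above (run from /fitO2 on the EC2 instance).
docker-compose up -d              # start app and web in the background
docker-compose logs -f app        # follow the Puma log; Ctrl-C stops following only
docker-compose exec web nginx -t  # optional: syntax-check the nginx configuration
```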
Since docker-compose up is running in the foreground, open another terminal tab and log in to the EC2 instance again.
docker-compose exec app rails db:create db:migrate RAILS_ENV=production
docker-compose exec app rails db:seed RAILS_ENV=production
//(If necessary)
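To confirm the migrations actually ran against RDS, an optional status check like the following helps (run it in the same second tab):

```bash
# Every migration should be listed as "up" if db:migrate succeeded.
docker-compose exec app rails db:migrate:status RAILS_ENV=production
```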
Now, if you access http://(fixed IP), the application should be displayed correctly.
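You can also check from the command line before opening a browser. The address below is a placeholder; replace it with your own fixed IP:

```bash
# Expect an HTTP status line such as "HTTP/1.1 200 OK" coming back from nginx.
curl -I http://<fixed IP>/
```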
docker ps
//Find out the container ID.
Copy the CONTAINER ID with NAMES fito2_app_1.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c5ec64f0ea76 fito2_web "/bin/sh -c '/usr/sb…" 5 minutes ago Up 5 minutes 0.0.0.0:80->80/tcp fito2_web_1
442b9ddb3a20 fito2_app "bundle exec puma -C…" 5 minutes ago Up 5 minutes fito2_app_1
Log in to the container
docker exec -it 442b9ddb3a20 bash
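Alternatively, if you would rather not look up the container ID, the same shell can be reached by service name (run from /fitO2, where docker-compose.yml lives):

```bash
# Equivalent to the docker exec above, addressing the container by its compose service name.
docker-compose exec app bash
```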
Start mysql.
service mysql start
mysql -u root -h (RDS endpoint) -p
//Enter your password to log in
Because access is restricted by the security group, the database can only be reached from the EC2 instance we created.
Try connecting from your local machine to confirm this.
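One way to try it locally is a simple TCP check against the MySQL port from your own machine, assuming nc (netcat) is installed; it should fail or time out because the security group only allows connections from the EC2 instance. The endpoint below is a placeholder:

```bash
# Run on your LOCAL machine, not on EC2. Expect a timeout or connection refused.
nc -zv -w 5 <RDS endpoint> 3306
```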
That is all.