Many people publish applications using AWS, and I wanted to experience it for myself. As the title says, I built a development environment with Docker, developed an application with Rails, and actually deployed it to AWS. On the AWS side, the goal is to stay within the free tier.
AWS services are configured as an IAM user wherever possible. First, log in to AWS as an IAM user.
First, create a VPC. Open the VPC service page, select VPC from the menu, and choose **Create VPC**. Set the VPC name and CIDR block and create the VPC.
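For reference, the same VPC can also be created from the AWS CLI. This is only a sketch; the CIDR block, the Name tag value, and the VPC ID are placeholder examples.

```bash
# Create a VPC with an example CIDR block
aws ec2 create-vpc --cidr-block 10.0.0.0/16

# Give the new VPC a Name tag (replace vpc-xxxxxxxx with the returned VPC ID)
aws ec2 create-tags --resources vpc-xxxxxxxx --tags Key=Name,Value=myapp-vpc
```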
Create the subnets. Prepare one subnet for EC2 and two subnets for RDS. Select Subnets from the menu and choose **Create subnet**. Select the VPC you just created, set the subnet name, Availability Zone, and CIDR block, then select **Create subnet**. Place the two RDS subnets in different Availability Zones.
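As a hedged CLI equivalent, the three subnets might be created like this. The VPC ID, CIDR blocks, and Availability Zones are placeholders; adjust them to your own region and addressing plan.

```bash
# Subnet for EC2
aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.0.0.0/24 --availability-zone ap-northeast-1a

# Two subnets for RDS, in different Availability Zones
aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.0.1.0/24 --availability-zone ap-northeast-1a
aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.0.2.0/24 --availability-zone ap-northeast-1c
```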
Creating an internet gateway
Set up an internet gateway so the VPC can connect to the internet. Select **Internet Gateways** from the menu, then **Create internet gateway**. Set the name and select **Create internet gateway**. Then choose **Attach to VPC** from **Actions** to attach it to the VPC you created.
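For reference, the CLI equivalent looks roughly like this; the internet gateway and VPC IDs are placeholders.

```bash
# Create an internet gateway and attach it to the VPC
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-xxxxxxxx --vpc-id vpc-xxxxxxxx
```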
Open the EC2 dashboard and select **Launch Instance** to create an EC2 instance. Choose a machine image and instance type that are eligible for the free tier. Create a new security group; since we want to connect over SSH, allow SSH in addition to HTTP:
Type | Protocol | Port range | Source |
---|---|---|---|
HTTP | TCP | 80 | 0.0.0.0/0 |
SSH | TCP | 22 | My IP |
If there are no problems with the settings, select **Launch**. You create a key pair when launching the instance; a key pair is required to connect to EC2 over SSH. Select **Create a new key pair**, set the key pair name, and download it locally.
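For reference, the key pair and the inbound rules from the table above could also be set up from the CLI. This is only a sketch; the security group ID, the IP address, and the key pair name myapp are placeholder examples.

```bash
# Create a key pair and save the private key locally
aws ec2 create-key-pair --key-name myapp --query 'KeyMaterial' --output text > ~/.ssh/myapp.pem
chmod 600 ~/.ssh/myapp.pem

# Allow HTTP from anywhere and SSH only from my IP on the EC2 security group
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 22 --cidr <my-ip>/32
```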
When the status checks in the instance list have completed and the instance state shows **Running**, the instance has started successfully.
Next, set up an Elastic IP. Select **Elastic IPs** from the menu, choose **Allocate Elastic IP address**, select the new address allocation, and then select **Allocate**. Select the Elastic IP you created, open **Actions**, and choose **Associate Elastic IP address**. Specify the EC2 instance you created so that the Elastic IP address is associated with it.
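For reference, the CLI version is roughly as follows; the instance ID and allocation ID are placeholders returned by the first command.

```bash
# Allocate a new Elastic IP and associate it with the EC2 instance
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id i-xxxxxxxxxxxxxxxxx --allocation-id eipalloc-xxxxxxxx
```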
Before creating the RDS instance, create a subnet group for RDS. Select **Subnet groups** from the RDS menu, then select **Create DB subnet group**.
Set the subnet group name, select the VPC, add the two subnets created for RDS, and create the group. Then open the RDS service and select **Create database**. Choose MySQL as the engine. The default settings are mostly fine, but do not forget to select the **Free tier** template, and set the engine version to the one you actually use. Set the database name, master user name, and master password as appropriate; you will need them later when configuring Rails (you can also check them after the database is created). Select the VPC and subnet group you created, choose **Create new** for the security group to create a new one, and select **Create database** to create the database.
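For reference, a hedged CLI sketch of the same two steps follows. All names, IDs, and values (instance class, storage size, user name, database name) are examples; pick free-tier eligible values for your account.

```bash
# Create the DB subnet group from the two RDS subnets
aws rds create-db-subnet-group \
  --db-subnet-group-name myapp-db-subnet \
  --db-subnet-group-description "Subnet group for myapp RDS" \
  --subnet-ids subnet-aaaaaaaa subnet-bbbbbbbb

# Create the MySQL instance (values are examples)
aws rds create-db-instance \
  --db-instance-identifier myapp-db \
  --db-instance-class db.t2.micro \
  --engine mysql \
  --allocated-storage 20 \
  --db-name myproject_production \
  --master-username admin \
  --master-user-password '<master password>' \
  --db-subnet-group-name myapp-db-subnet \
  --vpc-security-group-ids sg-yyyyyyyy
```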
Modify the inbound rules of the database's security group so that only the EC2 instance can reach MySQL (a CLI sketch follows the table):
Type | Protocol | Port range | Source |
---|---|---|---|
MySQL/Aurora | TCP | 3306 | EC2 security group ID |
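This rule means only traffic coming from the EC2 security group can reach MySQL on port 3306. The CLI equivalent would be roughly the following; both security group IDs are placeholders.

```bash
# Allow MySQL (3306) on the RDS security group only from the EC2 security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-yyyyyyyy \
  --protocol tcp --port 3306 \
  --source-group sg-xxxxxxxx
```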
Write the RDS information to credentials.yml.enc.
bash
docker-compose run -e EDITOR="vim" app rails credentials:edit
credentials.yml.enc
rds:
  host: <RDS endpoint>
  database: <RDS database name>
  username: <RDS master user name>
  password: <RDS master password>
Insert the RDS information in the production section of config/database.yml.
database.yml
# (omitted)
production:
  <<: *default
  host: <%= Rails.application.credentials.rds[:host] %>
  database: <%= Rails.application.credentials.rds[:database] %>
  username: <%= Rails.application.credentials.rds[:username] %>
  password: <%= Rails.application.credentials.rds[:password] %>
Since the database now runs on RDS, comment out the entire db: service. Also comment out mysql-data: under volumes: and the depends_on: entries that reference db. Add -e production to the command: of the app service so the application starts in the production environment.
docker-compose.yml
version: '3'
services:
  # db:
  #   image: mysql:5.7
  #   command: mysqld --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
  #   env_file:
  #     - ./.env
  #   volumes:
  #     - mysql-data:/var/lib/mysql
  #   ports:
  #     - "4306:3306"
  app:
    build: .
    env_file:
      - ./.env
    command: bundle exec puma -C config/puma.rb -e production
    init: true
    volumes:
      - .:/myproject
      - public-data:/myproject/public
      - tmp-data:/myproject/tmp
      - log-data:/myproject/log
    # depends_on:
    #   - db
  web:
    build:
      context: containers/nginx
    init: true
    volumes:
      - public-data:/myproject/public
      - tmp-data:/myproject/tmp
    ports:
      - 80:80
    # depends_on:
    #   - app
volumes:
  # mysql-data:
  public-data:
  tmp-data:
  log-data:
Enter the Elastic IP address in place of xx.xx.xx.xx in server_name.
nginx.conf
upstream myproject {
  server unix:///myproject/tmp/sockets/puma.sock;
}

server {
  listen 80;
  server_name xx.xx.xx.xx;  # or localhost for local use
  # (omitted)
}
For other keys and environment variables that you do not want exposed on places like GitHub, create a .env file and put them there. The dotenv-rails gem makes them easier to manage when using Git.
Gemfile
gem 'dotenv-rails'
bash
$ bundle install
By adding .env to the .gitignore file, it will not be pushed to GitHub.
.gitignore
/.env
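As a rough illustration only, a .env file might look like the following. The variable names here are hypothetical; what actually belongs in this file depends entirely on what your Dockerfile, docker-compose.yml, and application read (the MYSQL_* variables are ones the official mysql image understands for local development).

```bash
# .env -- example only; variable names are hypothetical
MYSQL_ROOT_PASSWORD=your_db_password
MYSQL_DATABASE=myproject_development
AWS_ACCESS_KEY_ID=xxxxxxxxxxxxxxxx
AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxx
```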
Edit production.rb in config/environments. The following settings are convenient when starting the application in the production environment.
production.rb
# Allow ES6 syntax when compressing JavaScript
config.assets.js_compressor = Uglifier.new(harmony: true)
# Compile assets on the fly instead of requiring a separate precompile step
config.assets.compile = true
**When you're done, push your application to GitHub.**
Next, set up an SSH connection to EC2.
bash
$ mkdir ~/.ssh
$ mv ~/Downloads/myapp.pem ~/.ssh/
$ chmod 600 ~/.ssh/myapp.pem
$ ssh -i ~/.ssh/myapp.pem ec2-user@xxx.xxx.xxx.xxx
myapp.pem is the name of the downloaded key pair. After restricting its permissions with chmod, log in to EC2 using the key pair. ec2-user is the default user on Amazon Linux, and xxx.xxx.xxx.xxx is the Elastic IP of your EC2 instance.
First of all, update the packages with yum.
bash
[ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo yum update -y
Install Docker and docker-compose on EC2.
bash
# Install Docker
[ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo yum install -y docker
[ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo service docker start
[ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo usermod -a -G docker ec2-user
[ec2-user@ip-xxx-xxx-xxx-xxx ~]$ exit
$ ssh -i ~/.ssh/myapp.pem ec2-user@xxx.xxx.xxx.xxx
[ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo chkconfig docker on
# Install docker-compose
[ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo curl -L "https://github.com/docker/compose/releases/download/<version to install>/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
[ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo chmod +x /usr/local/bin/docker-compose
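To confirm that both tools were installed correctly, the following commands should print version numbers:

```bash
[ec2-user@ip-xxx-xxx-xxx-xxx ~]$ docker --version
[ec2-user@ip-xxx-xxx-xxx-xxx ~]$ docker-compose --version
```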
Install git to clone your application from GitHub.
bash
[ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo yum install -y git
[ec2-user@ip-xxx-xxx-xxx-xxx ~]$ git clone [GitHub repository URL]
Copy the local master.key and .env files to EC2 so that credentials.yml.enc can be decrypted there.
bash
[ec2-user@ip-xxx-xxx-xxx-xxx ~]$ exit
$ scp -i ~/.ssh/myapp.pem master.key ec2-user@xxx.xxx.xxx.xxx:myapp/config/
$ scp -i ~/.ssh/myapp.pem .env ec2-user@xxx.xxx.xxx.xxx:myapp/
bash
$ ssh -i ~/.ssh/myapp.pem ec2-user@xxx.xxx.xxx.xxx
[ec2-user@ip-xxx-xxx-xxx-xxx ~]$ cd myapp
[ec2-user@ip-xxx-xxx-xxx-xxx ~/myapp]$ docker-compose build
[ec2-user@ip-xxx-xxx-xxx-xxx ~/myapp]$ docker-compose up -d
[ec2-user@ip-xxx-xxx-xxx-xxx ~/myapp]$ docker-compose exec app bin/rails db:create db:migrate RAILS_ENV=production
If you access the Elastic IP address in a browser and the application is displayed correctly, you are done.
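If the page does not come up, it helps to check that all containers are running and to look at the application and web server logs; these are standard docker-compose commands using the service names from docker-compose.yml above.

```bash
[ec2-user@ip-xxx-xxx-xxx-xxx ~/myapp]$ docker-compose ps
[ec2-user@ip-xxx-xxx-xxx-xxx ~/myapp]$ docker-compose logs app
[ec2-user@ip-xxx-xxx-xxx-xxx ~/myapp]$ docker-compose logs web
```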
I thought deploying on AWS would be a high hurdle for a beginner, but I was able to get through it thanks to the wealth of excellent articles out there. I made fixes here and there and deployed through trial and error, so I am not sure it can be reproduced in exactly the same way, but I think it is the best method I could manage at this point.
**Points of concern** In some of the reference articles there were parts that made me wonder, "Isn't this running in the development environment?" and "In that case, isn't RDS going unused, since it is not the production environment?" If you point production: in database.yml at RDS, I believe RDS will not be used unless you launch the application in the production environment (sorry if I am mistaken). It may be possible depending on the docker-compose settings, but I decided to look into it a little more.
**Task** Since I used Docker, I think I should also have considered deploying with AWS ECS and Fargate. I would like to take on that challenge if I get the chance.
**Other** I also adopted S3. That is summarized in a separate article: Save image data with AWS_S3 + Ruby on Rails_Active Storage.
**References**
- Free! And the shortest? Publishing a Ruby on Rails on Docker on AWS app
- Introducing Docker to a Rails application on EC2 (Rails, Nginx, RDS)
- Let's do it while watching! Getting started with AWS -VPC construction-
- credentials.yml.enc added from Rails 5.2