**Finally!!!** I put Rails in a Docker container and deployed it. It took roughly three weeks, maybe more. As for job hunting, Wantedly is still quiet, but I feel the response on Green has improved. Maybe a sign of recovery from corona? Recently I was given the chance to interview at a company with great benefits, but it fell through because I hadn't researched the company enough. I was down for about two days, but I think the experience really improved my interview preparation, and I'll keep working to find a company that suits me.
Let me leave a memorandum of setting up Rails with Docker.
☆ Deployment reference ☆ Used for studying local environment construction. This article was especially helpful for creating containers. I built containers and broke them over and over, so I've organized a lot of what I learned here.
Ruby 2.5.1 / Rails 5.2.3 / macOS / Docker for Mac
・ Add the Dockerfile and other pieces to the portfolio created with Rails ← this section covers this part
・ Build it and check that it works in the local environment
・ Push it to GitHub
・ Pull it onto EC2
As for the deployment itself, I don't think I can write anything better than the ☆ reference articles above, so I'll omit it.
$ tree
├── app ...(abridged)
├── Dockerfile
├── containers (no problem even if this folder isn't separate)
│   └── nginx
│       ├── Dockerfile
│       └── nginx.conf
├── Gemfile
├── Gemfile.lock
├── docker-compose.yml
... (rest abridged)
For Docker beginners who have just finished tech camp, here it is in a nutshell: once the entire environment the application runs in has been set up inside a box (the blue frame in the image), you can build the production environment and the local environment just by swapping that box in, instead of setting everything up machine by machine. Another advantage is that you can share the environment prepared with Docker without worrying about each computer having a different version or OS.
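If you want to feel that "environment in a box" idea for yourself, this one-liner (just an illustration, not part of the setup below) runs Ruby in a throwaway container without installing anything on the host:

docker run --rm ruby:2.5.1 ruby -v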
In this app, a Rails container and an Nginx container are started: Nginx receives the request on localhost, talks to Puma over a socket, and that connects through to Rails.
Dockerfile This is the blueprint of a container. **You write down what to put in the container, which files to read, and what to do.** There is one Dockerfile per container. The app I deployed this time has two Dockerfiles, so you can think of it as two containers. A container is built based on its Dockerfile.
At first none of this clicked for me, because it's full of words you never hear at tech camp, such as images and builds.
Dockerfile (rails container)
FROM ruby:2.5.1
#Update repository and install dependent modules
RUN apt-get update -qq && \
    apt-get install -y build-essential \
                       nodejs \
                       vim
# ↑ vim is only needed if you want to use vim inside the container
* The latest versions of Rails require webpacker, so you need to install it separately. For that, the "I won docker" video on YouTube is a good reference.
#Create a working directory with the name webapp directly under the root (application directory in the container)
RUN mkdir /webapp
# ↑ it works even without this line (WORKDIR creates the directory); webapp can be any name
WORKDIR /webapp
#Copy the host Gemfile to the container
ADD Gemfile /webapp/Gemfile
ADD Gemfile.lock /webapp/Gemfile.lock
* You may get an error if you don't empty the contents of the application's Gemfile.lock (?)
#Run bundle install
RUN bundle install
#Copy everything in the host's application directory to the container
ADD . /webapp
Dockerfile (nginx container)
FROM nginx:1.15.8
#Delete the default config files under the include directory (probably to prevent conflicts)
RUN rm -f /etc/nginx/conf.d/*
#Copy Nginx config file to container
ADD nginx.conf /etc/nginx/conf.d/nginx.conf
#Start Nginx after build is complete
CMD /usr/sbin/nginx -g 'daemon off;' -c /etc/nginx/nginx.conf
FROM Gets a base image from Docker Hub. You can pull anything that's on Docker Hub. A fresh container has nothing installed, like a newly bought computer, so here we prepare it to run Ruby. If you don't specify a version, the latest one is installed.
RUN An execution command: a step that runs while the container image is being built. It is processed like a terminal command.
COPY Copies files from a directory on the host machine (my Mac in this case) into the container.
ADD Also copies files into the container. The differences from COPY: ADD can fetch remote files (COPY can't), and ADD automatically extracts compressed archives (COPY doesn't).
CMD The process that runs when the finished container is started. It is also processed like a terminal command. The difference from RUN is the timing of execution: RUN happens at build time (e.g. the apt-get update), CMD runs when the container starts.
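As a quick illustration of those differences, here is a hypothetical snippet (not part of this app's Dockerfiles; vendor.tar.gz is a made-up file name):

# base image pulled from Docker Hub
FROM ruby:2.5.1
# RUN happens once, at image build time
RUN apt-get update -qq
# COPY is a plain copy, no extraction
COPY Gemfile /webapp/Gemfile
# ADD automatically extracts a local tar archive
ADD vendor.tar.gz /webapp/vendor/
# CMD runs every time the container starts
CMD ["bundle", "exec", "puma"]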
This refers to the configuration file of nginx itself. Without it, nginx won't start. As long as the contents are correct, naming it nginx.conf or default.conf makes no difference to execution, but a name that everyone can read and understand is better.
nginx.conf
#Specify proxy destination
#Send the request received by Nginx to the backend puma
upstream webapp {
#We want socket communication, so specify puma.sock
server unix:///webapp/tmp/sockets/puma.sock;
}
server {
listen 80;
#Specify domain or IP
server_name localhost;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
#Specify root. Otherwise, the path will get lost when connecting to the application.
root /webapp/public;
client_max_body_size 100m;
error_page 404 /404.html;
error_page 500 502 503 504 /500.html;
try_files $uri/index.html $uri @webapp;
keepalive_timeout 5;
#Reverse proxy related settings
location @webapp {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_pass http://webapp;
}
}
The server {} block sets how requests are received; when everything passes, the request goes on to the upstream block and communication happens over the socket. This isn't something you have to fully understand just to get a container running, so don't try to memorize it, just copy it, but do check that the directory names match your app. I'm still studying the proxy-related parts.
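For that socket communication to work, Puma has to bind to the same socket path that the upstream block points at. Here is a minimal config/puma.rb sketch (the thread counts and log paths are my own assumptions; only the bind line has to match nginx.conf):

# config/puma.rb
threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
threads threads_count, threads_count
# Bind to the same Unix socket that nginx's upstream block references
bind "unix:///webapp/tmp/sockets/puma.sock"
stdout_redirect "/webapp/log/puma.stdout.log", "/webapp/log/puma.stderr.log", true
environment ENV.fetch("RAILS_ENV") { "development" }
plugin :tmp_restart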
docker-compose.yml A file that manages running multiple containers. When there was only one container, you started it with docker container run ....., but with docker-compose up you can start all the containers written in the file at once.
version: '3'
services:
  app:
    build:
      context: .
    env_file:
      - ./environments/db.env
    command: bundle exec puma -C config/puma.rb
    volumes:
      - .:/webapp
      - public-data:/webapp/public
      - tmp-data:/webapp/tmp
      - log-data:/webapp/log
    depends_on:
      - db
  db:
    image: mysql:5.7
    env_file:
      - ./environments/db.env
    volumes:
      - db-data:/var/lib/mysql
  web:
    build:
      context: containers/nginx
    volumes:
      - public-data:/webapp/public
      - tmp-data:/webapp/tmp
    ports:
      - 80:80
    depends_on:
      - app
volumes:
  public-data:
  tmp-data:
  log-data:
  db-data:
This describes the files, commands, and so on that are referenced at startup.
depends_on Declares dependencies between containers. In the case above, they start in the order db ⇨ app ⇨ web (nginx).
volumes Data storage space outside the container. Containers work even without volumes, but if you destroy a container, the data inside it is lost at that moment, and re-entering the same data takes time. Use volumes for data you want to keep even while tearing the containers down again and again. The image is like mounting an external hard disk: data that lives outside Docker is made available inside Docker.
env_file The place where passwords used as environment variables are read from. You can write them directly in docker-compose.yml, or put them in a separate file as above. The latter is better for security (probably).
db.env(Example)
MYSQL_ROOT_PASSWORD=aaaaa
MYSQL_USER=aaaaa
MYSQL_PASSWORD=aaaaa
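These variables only matter if the Rails app actually reads them. Here is a sketch of what config/database.yml might look like in this setup (the database name is my assumption; host: db matches the db service name in docker-compose.yml):

default: &default
  adapter: mysql2
  encoding: utf8
  pool: 5
  username: <%= ENV['MYSQL_USER'] %>
  password: <%= ENV['MYSQL_PASSWORD'] %>
  host: db

development:
  <<: *default
  database: webapp_development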
#Build
docker-compose build
#Start
docker-compose up
#Start in the background
docker-compose up -d
#Stop the container
docker-compose stop
#Stop the container and discard
docker-compose down
#Stop the containers, remove them, and delete the volumes
docker-compose down --volumes
#List containers and images
docker ps
docker images
#Delete unused (dangling) images
docker image prune
#Delete everything that isn't in use
docker system prune
#Rails commands inside docker
docker-compose run [service name] rails db:create
⇨ This is how you give commands to the application inside docker-compose, e.g. rails db:create.
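A few concrete examples, assuming the service name app from the docker-compose.yml above:

docker-compose run app rails db:create
docker-compose run app rails db:migrate
#Get a shell or a Rails console inside the running container
docker-compose exec app bash
docker-compose exec app rails console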
No, it really took a long time to get this working. Nginx in particular gave me a hard time, since proxies and servers were completely unfamiliar territory.