**This article was originally published on July 12, 2019: [The story of introducing the very Rails-like serverless framework "Ruby on Jets" into the production environment -- LiBz Tech Blog](https://tech.libinc.co.jp/entry/2019/07/12/113215). The content is the same.**
Hello! This is Watanabe. I turned 26 the other day, and my fear of turning 30 keeps growing.
This is my third blog post. Thank you for the many bookmarks on my previous post, "Getting Started with Kubernetes (GKE) Cheaply".
This time, I'll write about **"Ruby on Jets"**, a **Rails-like** serverless framework for Ruby that I've actually started using at work.
The product I'm in charge of developing has a feature that lets job seekers and our career advisors exchange messages via LINE.
This feature is implemented with LINE's Messaging API, but the only way to get a message sent by a job seeker is to receive the data via a webhook. That creates a big problem: **if something is wrong with our server, or it is under maintenance, when the webhook request arrives, the job seeker's message is lost**.
Therefore, we decided to build a serverless system that is unaffected by whether our **server running on ECS** is up or down, and to move the LINE message reception process over to it.
First, let me introduce the configuration; in the end, the current production environment looks like this.
Even though it's a serverless architecture, only three functions were required this time.
There was talk of using DynamoDB to store the data, but in the end we decided on an SQS FIFO queue.
** LINE server → API Gateway → Lambda → SQS (FIFO queue) ← Existing application **
It became a very simple structure.
Lambda is responsible for two things:

- **Verifying that the request (webhook) actually came from the LINE server**
- **Saving the requested data to the SQS queue**
As for "verifying that the request (webhook) came from the LINE server", this was already implemented in our Rails app, so I figured Ruby, which would let us reuse that code, would be the easiest choice. However, Lambda's Ruby support had only arrived in December 2018, which was relatively recent, so I had some concerns.
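The post doesn't show the verification code itself, so here is a minimal sketch in plain Ruby, based on LINE's documented scheme (the channel secret and body below are placeholder values): LINE signs the raw request body with the channel secret using HMAC-SHA256 and sends the Base64-encoded digest in the `X-Line-Signature` header.

```ruby
require 'openssl'
require 'base64'

# Minimal sketch of LINE webhook signature verification.
# LINE computes HMAC-SHA256 over the raw request body using the channel
# secret and sends the Base64-encoded digest in the X-Line-Signature header.
def valid_line_signature?(channel_secret, request_body, signature)
  digest = OpenSSL::HMAC.digest(OpenSSL::Digest.new('SHA256'), channel_secret, request_body)
  expected = Base64.strict_encode64(digest)
  # In production, prefer a constant-time comparison
  # (e.g. Rack::Utils.secure_compare) to mitigate timing attacks.
  expected == signature
end
```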
Configuration management is a bottleneck in serverless architectures. If you only use Lambda, version-controlling the code may be enough, but if you also want to manage settings such as API Gateway together with the code, it's better to use a tool.
**Serverless Framework** Probably the most widely used tool for serverless architectures, with plenty of documentation and Japanese articles. I relied on it often at my previous company before coming to LiB. Personally, if stability is the priority, I'd go with Serverless Framework + Node.

**AWS SAM** AWS's official framework for building serverless applications. Debugging was very easy with SAM Local (now called the SAM CLI), which lets you run a pseudo API Gateway server in your local environment, but when I used it, it was buggy and painful... (I believe it has been fixed by now!)

**Ruby on Jets** This time I adopted Ruby on Jets. I was surprised when I actually used it: far from being Rails-like, it was almost Rails itself. The Qiita article on it explains the details well, but in short, the routing written in routes.rb is reflected directly in API Gateway, and the processing written in a Controller is reflected in Lambda.

Although I settled on Jets, it has some unstable aspects, such as having only one main committer and being updated roughly once every three days, so I adopted it with the option of switching to Serverless Framework if anything went wrong. (I judged the migration cost wouldn't be too high because the implementation isn't that complex.)
You can create a project with the `jets new` command. Specify API mode with `--mode api`; we also passed the `--no-database` option because we're not using a DB this time.

```
$ jets new [project name] --mode api --no-database
```
Jets supports both RDBs such as MySQL and PostgreSQL and DynamoDB, but RDBs and Lambda are said to be a poor match because of connection handling, so if you use a database it will probably be DynamoDB.
Reference: A brief explanation of why AWS Lambda and RDBMS are incompatible --Sweet Escape
By the way, the author of the gem `dynomite`, which manages DynamoDB migrations in an ActiveRecord-like way and makes CRUD operations easy, is also the developer of Jets.
config/routes.rb

```ruby
Jets.application.routes.draw do
  get 'hoge', to: 'hoge#huga'
  post 'foo', to: 'foo#bar'
end
```
It's exactly the same as Rails.
With the settings above, `def huga` in `HogeController` is executed when a GET request is made to `/hoge`.
More precisely, when API Gateway receives `GET /hoge`, the Lambda function that runs the code of the `huga` method is invoked.
In Rails, the standard pattern is to create controllers that inherit from `ActionController::Base`; in Jets, you create controllers that inherit from `Jets::Controller::Base`.
app/controllers/hoge_controller.rb

```ruby
# Inherits ApplicationController, which in turn inherits Jets::Controller::Base
class HogeController < ApplicationController
  def huga
    response_body = {
      hello: 'world!!',
      request_params: {
        headers: event['headers'],
        body: event['body'],
        query_parameters: event['queryStringParameters'],
        path_parameters: event['pathParameters']
      }
    }
    render json: response_body
  end
end
```
What the `event` variable contains changes depending on what triggered the Lambda, but in the case of API Gateway you can easily get the request parameters as shown above.
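As a rough illustration (keys trimmed, values made up), the API Gateway proxy event is just a hash, so extracting the request parameters as the controller does looks like this:

```ruby
# A trimmed-down, illustrative API Gateway proxy event; real events
# contain many more keys (requestContext, httpMethod, and so on).
event = {
  'headers' => { 'X-Line-Signature' => 'abc123' },
  'body' => '{"events":[]}',
  'queryStringParameters' => { 'debug' => '1' },
  'pathParameters' => nil
}

# The same extraction performed in the controller action.
request_params = {
  headers: event['headers'],
  body: event['body'],
  query_parameters: event['queryStringParameters'],
  path_parameters: event['pathParameters']
}
```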
You will need the permissions listed [here](https://rubyonjets.com/docs/extras/minimal-deploy-iam/). We didn't use DynamoDB or Route53 in this production deployment, so those permissions weren't needed.
CloudFormation permissions are required because the configuration is ultimately transformed into, and executed as, a CloudFormation template. This isn't limited to Jets: most serverless frameworks manage configuration by converting resource settings such as API Gateway into CloudFormation templates.
This is the equivalent of `config/secrets.yml` in Rails.
Unfortunately Jets doesn't have a secrets.yml, but I achieved the same thing using env files.
```
# .env.development
SECRET_KEY_BASE=abcdefg
SECRET_ACCESS_KEY=12345
SECRET_ACCESS_TOKEN=7890
```
Written as above, these values are set as environment variables of the Lambda function and can be read with `ENV['key_name']`.
Jets also supports SSM Parameter Store, which you can reference as follows.
```
# .env.production
SECRET_KEY_BASE=ssm:/secret_key_base
SECRET_ACCESS_KEY=ssm:/secret_access_key
SECRET_ACCESS_TOKEN=ssm:/secret_access_token
```
Of course, SSM permissions are also required, but by setting the parameters in SSM in advance you avoid hard-coding secrets. With this approach, the file can even be pushed to GitHub.
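As a small sketch (the variable name and value mirror the examples above; `fetch_secret` is a hypothetical helper, not part of Jets), reading these values at runtime is just ordinary `ENV` access; using `ENV.fetch` makes missing configuration fail fast:

```ruby
# Hypothetical helper: read a secret that Jets has loaded into the Lambda
# environment from .env files or SSM. ENV.fetch raises KeyError when the
# variable is missing, surfacing misconfiguration early instead of mid-request.
def fetch_secret(name)
  ENV.fetch(name)
end

ENV['SECRET_KEY_BASE'] = 'abcdefg' # simulated here; Jets sets this for real
fetch_secret('SECRET_KEY_BASE') # => "abcdefg"
```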
Deploy with the `jets deploy` command.

```
# Deploy
$ AWS_PROFILE=[profile name] bundle exec jets deploy [environment name]

# Delete deployed resources
$ AWS_PROFILE=[profile name] bundle exec jets remove [environment name]
```
Jets also provides commands for Blue-Green deployment.

```
# Blue-Green deploy
$ AWS_PROFILE=[profile name] JETS_ENV_EXTRA=[number from 1 to 9] bundle exec jets deploy [environment name]
```

By deploying with a number specified in `JETS_ENV_EXTRA`, resources named like `hoge-resources-[environment name]-[number from 1 to 9]` are created. After sufficient verification, you can switch the resource endpoints over.

https://rubyonjets.com/docs/env-extra/
This time the requirements happened to be small, so I went with Jets, but my impression is that it isn't yet mature enough for large-scale services.
Still, I was surprised at how small the gap between conventional applications and serverless applications is becoming. Will application engineers eventually be able to develop without worrying about servers (infrastructure) at all? lol
These days I read the documentation thinking I haven't even mastered half of Jets' functionality yet. I hope it grows into one of the flagship products representing Ruby! (Anyone adopting Jets from now on should be prepared to keep up with updates that land once every three days. lol)
https://rubyonjets.com
Since this work involved SQS, I'd also like to introduce **ElasticMQ**, which lets you build a pseudo-SQS in your local environment and was very convenient.
I used `softwaremill/elasticmq` as the Docker image.
docker-compose.yml

```yaml
version: '3.2'
services:
  jets:
    # ..abridged..
  local_sqs:
    image: softwaremill/elasticmq
    container_name: local_sqs
    ports:
      - "9324:9324"
    volumes:
      - ./local_sqs.conf:/opt/elasticmq.conf
```
You can customize the queues by mounting `local_sqs.conf` onto ElasticMQ's `/opt/elasticmq.conf`.
I wanted to use the FIFO queue this time, so I set it as follows.
local_sqs.conf

```
include classpath("application.conf")

node-address {
  protocol = http
  host = local_sqs
  port = 9324
  context-path = ""
}

rest-sqs {
  enabled = true
  bind-port = 9324
  bind-hostname = "0.0.0.0"
  sqs-limits = strict
}

generate-node-address = false

queues {
  "[queue name].fifo" {
    fifo = true
  }
}
```
The reason `.fifo` is appended to the queue name is that SQS automatically adds the `.fifo` suffix to FIFO queues when they are created.
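To actually talk to the local ElasticMQ from Ruby, you can point the AWS SDK at the local endpoint. This is only a sketch under a few assumptions: the `aws-sdk-sqs` gem is installed, ElasticMQ is running on `localhost:9324`, and `queue-name.fifo` stands in for the real queue name.

```ruby
require 'aws-sdk-sqs'   # gem: aws-sdk-sqs
require 'securerandom'

# Point the SQS client at the local ElasticMQ endpoint instead of AWS.
# ElasticMQ accepts dummy credentials, so any values work locally.
sqs = Aws::SQS::Client.new(
  endpoint: 'http://localhost:9324',
  region: 'us-east-1',
  access_key_id: 'dummy',
  secret_access_key: 'dummy'
)

queue_url = sqs.get_queue_url(queue_name: 'queue-name.fifo').queue_url # placeholder name

# FIFO queues require a message_group_id; a message_deduplication_id is
# required unless content-based deduplication is enabled on the queue.
sqs.send_message(
  queue_url: queue_url,
  message_body: '{"events":[]}',
  message_group_id: 'line-webhook',
  message_deduplication_id: SecureRandom.uuid
)
```

The same client configuration minus `endpoint` (and with real credentials) talks to production SQS, so the enqueueing code can be shared between environments.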
Recently, tools for locally developing applications that run on AWS have been appearing, which developers really appreciate.