I wanted to put GitHub Flow into practice, that is, development built around simple pull request operations, with the sales side involved as well, so I created an environment for it. In the process I wrote and released a web application that supplements some missing functions, and in this article I will write various stories about it. (The point is not whether you can actually use this web app as-is, but rather how to improve development with a similar flow.)
https://github.com/uniaim-event-team/pullre-kun
The usage and functions of this web application are described in README.md, but in this article I will walk through them in order, with a little background story. Issue reports and other additional development are very welcome.
Until now, the development of additional functions for our existing web applications (services) has had the following issues.
- Some functions are released without being fully verified.
- Nobody is properly taught how to use an added function; only a few people know about it, and it becomes a **hidden function**.
- Everyone forgets the function exists over time, and it becomes a **hidden function**.
- The development side ends up doing data-setup work that could have been registered normally through the screen.
- ~~Data cannot be registered from a normal screen in the first place, so a **hidden function** is created and operated~~
In order to solve such problems, we considered the operation with the following flow.
Compared to a typical similar workflow, the slightly unusual point is that the ticket registrant also writes the test scenario. There were many doubts about whether this was feasible, but I judged that the following points do not actually require technical knowledge, so I decided to trust it.
- No matter how non-technical a sales person is, they should be able to give at least a minimal explanation of the management screen when a customer asks for it.
- It is more natural for someone without technical knowledge to confirm that end users and other users of the system can use it satisfactorily.
- As for listing the confirmation items, even if the sales side lacks a viewpoint such as coverage, they should be able to simulate, without technical knowledge, what they actually do in front of the customer.
However, as a matter of course, I think it is impossible for the person who wrote the requirements to produce a comprehensive test, so if more tests are needed, the development side adds them. At the requirements stage it is hard to identify what conditions the system depends on, and writing tests that cover them is difficult in a technical sense, so I decided that was too much to ask.
There were some technical issues in introducing this flow, but they generally boiled down to the following.
- In what environment does the ticket registrant verify?
For example, for changes such as adding table columns, the verifier needs to run against the schema defined in the pull request. One option is testing in the developer's environment, but since the developer's work continues, that environment is often unusable during the test. It may also misbehave due to the influence of other changes, which is bad for everyone. And in the first place, testing against a source tree that includes other modifications is not a test of the pure pull request content.
Then, if it comes to manually building a verification environment for each pull request, doing so every time is not realistic when a single person may open many pull requests a day.
For changes that only touch JS there are services like Netlify, but preparing the schema and dataset poses many problems, and since the target project's server side is Python, I could not apply it.
In principle this can be solved with Docker, and in fact there was a story of doing the same thing with Docker at least three years ago: a mechanism that automatically builds an environment deployed at the pull request's commit ID, making it easier to check at review time / before merging.
We originally had automated tests running on CircleCI, so the app mostly works under Docker. However, some processing does not work well in a plain Docker container: specifically, the step that converts HTML to PDF with wkhtmltopdf needs xvfb, which I could not immediately get working inside Docker (that step simply is not run on CircleCI). Building a dedicated Docker image just for that felt like too much trouble, and that is how I ended up here.
The policy is as follows.
- Create many verification instances in advance, configure the load balancer and DNS so they can be accessed normally once the server process starts, and then turn the instances off.
- Share one Aurora DB **including the development environment**, and create that Aurora.
- When there is a pull request, pick it up and turn on a verification instance.
- On the verification instance side, identify the commit that the instance should check out, and start the server process at that commit.
- When the pull request is closed, turn off the corresponding verification server.
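The "pick up a pull request and turn on an instance" step above can be sketched with boto3. Note this is a hedged illustration: the instance IDs, region, and function names are assumptions for the example, not pullre-kun's actual implementation.

```python
def pick_stopped(instances):
    """Return the ID of the first stopped instance, or None if all are busy."""
    for inst in instances:
        if inst["State"]["Name"] == "stopped":
            return inst["InstanceId"]
    return None


def start_one_for_pull_request(instance_ids, region="ap-northeast-1"):
    """Start one stopped verification instance for a new pull request.

    Requires an IAM policy allowing ec2:DescribeInstances and
    ec2:StartInstances (see the setup section).
    """
    import boto3  # third-party AWS SDK

    ec2 = boto3.client("ec2", region_name=region)
    resp = ec2.describe_instances(InstanceIds=instance_ids)
    instances = [i for r in resp["Reservations"] for i in r["Instances"]]
    target = pick_stopped(instances)
    if target is not None:
        ec2.start_instances(InstanceIds=[target])
    return target
```

The selection logic is kept separate from the AWS calls so it can be exercised without credentials.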
The part that processes pull requests, excluding the preparatory work, is what the web application handles.
The system configuration image is as follows. The controller instance runs the web application, and the staging instances periodically run a batch process that starts the server process.
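As a rough illustration of what the staging-side batch does each minute, here is a hedged sketch: it brings the working tree to the commit the controller assigned, then starts the server process at that commit. The command sequence and file names are assumptions based on the description above, not pullre-kun's actual client.py.

```python
import subprocess


def build_sync_commands(repo_dir, commit_id):
    """Return the git commands that bring the working tree to the commit."""
    return [
        ["git", "-C", repo_dir, "fetch", "origin"],
        ["git", "-C", repo_dir, "checkout", commit_id],
    ]


def sync_and_start(repo_dir, commit_id):
    """Check out the assigned commit and start the server process there."""
    for cmd in build_sync_commands(repo_dir, commit_id):
        subprocess.run(cmd, check=True)
    # start the app server at this commit (simplified; no daemonization)
    return subprocess.Popen(["python3", "app.py"], cwd=repo_dir)
```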
It has only been a few weeks since we started operating this, and things differ somewhat from normal because of the new coronavirus, but for now I feel that testing on the actual screens has become considerably easier. As someone who verifies during development, it is very convenient to be able to verify changes that I previously had to check out in my own environment, migrate the schema for, and so on, without touching my own environment at all. I also think making it easier to involve non-developers before release is a big step forward.
The following is mostly the same as the description in README.md (which is itself almost straight Google Translate).
Create EC2 instances, and allow access to port 5250 on one of them. That one is called the "controller instance"; the others are called "staging instances".
Create an IAM policy that allows the following actions:
- `ec2:DescribeInstances`
- `ec2:StartInstances`
- `ec2:StopInstances`

The policy's resources are the instances you created. (Note: DescribeInstances must be allowed for all resources.) Then attach the policy to a user/role and save the access key and secret key.
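For illustration, such a policy might look like the following JSON. The region, account ID, and instance ARN are placeholders you must replace with your own values.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ec2:StartInstances", "ec2:StopInstances"],
      "Resource": "arn:aws:ec2:ap-northeast-1:123456789012:instance/*"
    },
    {
      "Effect": "Allow",
      "Action": "ec2:DescribeInstances",
      "Resource": "*"
    }
  ]
}
```

Note that the second statement targets all resources, matching the DescribeInstances caveat above.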
Install git on all instances. Note: if you are using Amazon Linux 2:
$ sudo yum install git
Set up the application as a staging server.
Install python3 on all instances. Note: if you are using Amazon Linux 2:
$ sudo yum install python3
Install mysql-client on all instances. Note: if you are using Amazon Linux 2:
$ sudo yum install mysql
Install or run a database such as MySQL.
Clone pullre-kun.
install "requirements"
Install requirements (contents of requirements.txt).
The controller instance is protected by basic authentication. You need to create a password token (hash) and save it in app.ini.
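As one way to produce such a token, here is a hedged sketch using a salted SHA-256 hash. The exact format pullre-kun expects may differ, so check its README before relying on this.

```python
import hashlib
import secrets


def make_token(password, salt=None):
    """Return (salt, hex digest) for a password; paste the digest into app.ini."""
    if salt is None:
        salt = secrets.token_hex(16)  # random 32-char hex salt
    digest = hashlib.sha256((salt + password).encode("utf-8")).hexdigest()
    return salt, digest
```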
Create an app.ini file (the sample is app.ini.default), then deploy it to all instances.
Add the following line to the crontab on the **controller server**.
* * * * * cd /home/ec2-user/pullre-kun; python3 update_pull.py
Add the following line to the crontab on each **staging server**.
* * * * * cd /home/ec2-user/pullre-kun; python3 client.py
Run the following command on the ** controller server **.
$ cd ~/pullre-kun
$ python3 init.py
Run the following command on the ** controller server **.
$ cd ~/pullre-kun
$ nohup python3 app.py&
Go to https://<your-domain>/server/list to see all servers.
Then click the Register Staging Server button.
Then go to https://<your-domain>/master/server and update the db_schema for each record.
Go to https://<your-domain>/master/git_hub_users and register your users.
"Login" is the github user login and db_schema is the original schema of the clone.
With the above setup, a verification environment will start automatically whenever there is a pull request.
Overall it is Flask + SQLAlchemy, configured to use CherryPy as the server. If there is enough interest, I may write a commentary article before long (the way general-purpose WTForms are used against the SQLAlchemy models is a little unusual). Please comment on this article, or create an issue on GitHub and add a +1.
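For reference, the Flask-on-CherryPy combination mentioned above can be sketched as follows. The route and port are illustrative, not pullre-kun's actual code; port 5250 is borrowed from the setup steps.

```python
from flask import Flask

app = Flask(__name__)


@app.route("/ping")
def ping():
    return "ok"


def serve(flask_app, port=5250):
    """Graft the Flask WSGI app onto CherryPy's tree and run its engine."""
    import cherrypy  # third-party; imported lazily so the app is testable without it

    cherrypy.tree.graft(flask_app, "/")
    cherrypy.config.update({
        "server.socket_host": "0.0.0.0",
        "server.socket_port": port,
    })
    cherrypy.engine.start()
    cherrypy.engine.block()


if __name__ == "__main__":
    serve(app)
```

CherryPy's engine gives you a production-grade threaded WSGI server without Flask's development server.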