About testing

On the project I belong to, we develop by writing tests for each method of each class (it is a stretch to call them unit tests...) as well as tests for each API. The tests use pytest. Since multiple members develop in parallel, whenever a merge is pushed to the repository, Jenkins automatically runs the test suite before deploying, and we deploy only if it succeeds. Here is a summary of the good parts, the painful parts (the project is still in progress), and what I think should be improved. (Text only from here on!)

The good parts

--Peace of mind

I feel relieved when the tests pass; it keeps me mentally calm, which matters for sustainable development.
You do not have to work while constantly worrying, "Is the code I wrote really okay?"

--Bugs are found early and can be fixed calmly

When a serious bug appears after the service launches, it has to be handled quickly and accurately, and the service's reputation naturally suffers.
Worse, while responding to it, we also have to keep preparing the next release.
I want to avoid that situation as much as possible.
Because the tests run automatically every time code is merged, bugs are found quickly and can be fixed while there is still room to breathe.

--Misunderstandings about the specification surface early

When a test fails, you sometimes notice a gap in how the specification was understood while hunting for the cause.
If such a gap were discovered only after release, it would be a big problem, so being able to catch it early is very helpful.
The same applies to master data.
When data appears that the engineers did not expect, you can ask what the intent behind it was and act accordingly; a sketch of such a check follows.
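As a hedged illustration of how a test can catch such unexpected data, a minimal master data check might look like the following. The loader function and column names (`masterdata.loader`, `load_items`, `id`, `price`) are assumptions invented for this sketch, not our actual code:

```python
from masterdata.loader import load_items  # hypothetical loader for the master data


def test_master_data_has_no_unexpected_rows():
    items = load_items()
    ids = [item["id"] for item in items]
    # Duplicate IDs and non-positive prices are examples of data the
    # engineers did not expect; failing here lets us ask early what
    # the intent behind the row was.
    assert len(ids) == len(set(ids)), "duplicate item IDs in master data"
    assert all(item["price"] > 0 for item in items), "non-positive price in master data"
```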

--You end up reading the code

When fixing a failing test, you end up reading code written by other members.
We do code reviews separately, but this gives you another chance to read the code afterwards, which spreads knowledge and surfaces problems.

The painful parts

--Execution time keeps growing

The test run time keeps getting longer; the per-method tests in particular take a very long time.
Running everything currently takes about an hour (though the API tests finish in a few minutes).
In a sense this is the expected result of writing more tests, but it is also a problem.
For now, I run only the tests for the part I changed in my own environment (for example, just the affected test file), and once those pass I let Jenkins run the whole suite.

--Test failures occur frequently

There are two main causes of failures in our project.
One is failures caused by updates to the master data; the other is failures caused by specification changes.
The latter are occasional, but the former happen frequently and are honestly tedious to deal with.
This is because the tests we write are tightly coupled to the master data.
As a simple example, I write a test asserting that selling the item with ID 1 yields 100 yen, which is why I call these class-based tests rather than "unit tests".
Master data changes frequently during development:
the sale price may change, or the ID itself may disappear.
Each time, we check the master data and fix the tests (and sometimes the failure turns out to be a real code bug after all).
I do not think this coupling is a good thing, but comparing results against concrete numbers also gives real peace of mind.
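To make that coupling concrete, here is a minimal sketch of the kind of test described above. The module and function (`shop.sales`, `sell_item`) are hypothetical stand-ins for our real code:

```python
from shop.sales import sell_item  # hypothetical function backed by the master data


def test_selling_item_1_yields_100_yen():
    # The expected value 100 comes straight from the master data, so
    # this test breaks whenever the row for ID 1 changes price or
    # disappears -- exactly the frequent failure described above.
    assert sell_item(item_id=1) == 100
```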

--Sometimes failures are left unfixed for a while

When we are busy, it is hard to find time to start fixing them.
As a result, Jenkins keeps sending failure notifications.
That defeats the purpose of automating the tests, so the team decided to fix at least API test failures as soon as they occur.
For the class-based tests, we address failures as promptly as we can manage.

What I think should be improved

--Test granularity

Depending on who writes a test, it may only check that no error occurs, or it may compare the results in detail.
When starting a project, it is probably worth agreeing on this level of granularity up front.
My current thinking: while the data is still provisional, write tests that simply confirm the program runs without errors, and once the data is finalized, write separate, integration-test-grained tests that check the results in detail. A sketch of the two levels follows.
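As a rough sketch of the two granularities, using the same hypothetical `sell_item` as above:

```python
from shop.sales import sell_item  # hypothetical, as in the earlier sketch


def test_sell_item_runs():
    # Coarse granularity: while the data is provisional, only confirm
    # the call completes without raising an error.
    sell_item(item_id=1)


def test_sell_item_price_is_exact():
    # Fine granularity: once the data is finalized, pin down the exact
    # result at integration-test granularity.
    assert sell_item(item_id=1) == 100
```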

--Reducing execution time

The per-method tests take about an hour to run.
While we are still in development there is some slack, but once we are live we cannot afford to find a bug during a release, fix it, wait an hour for the tests, and only then push to production.
What should we do? One idea is sketched below.
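One common pytest approach, offered only as a suggestion: mark the slow per-method tests with a custom marker and deselect them for quick local runs, leaving the full suite to Jenkins. The marker name `slow` is my own choice here and has to be registered in `pytest.ini` to avoid warnings:

```python
import pytest


@pytest.mark.slow  # custom marker; register "slow" under [pytest] markers in pytest.ini
def test_expensive_per_method_sweep():
    ...  # stand-in for one of the slow per-method tests
```

Locally you would then run `pytest -m "not slow"` for a fast pass, while Jenkins runs plain `pytest` for everything. If the suite is safe to parallelize, the pytest-xdist plugin (`pytest -n auto`) is another way to cut wall-clock time, though tests that share master data or a database may need isolation work first.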

Summary

Looking back on the development so far (we are still mid-development), what I ultimately wanted to say is that testing is good. It can be painful, and maintaining the tests is a chore, but when they pass you feel genuinely happy. The post-release spiral where every fix exposes another bug... I believe it is thanks to these tests that we can avoid it. If you have not written tests yet, start writing them.

This time I wrote about ensuring the code stays sound; next time I will write about improving performance. I will also look into ways to write all of this a little better.
