As the number of tests in a project grows, CI runs become slower, and running the tests frequently on a local machine becomes stressful. With, say, 10,000 tests, you sometimes wonder, "Are there some slow test cases in here?" It is even more problematic if the product code itself is responsible for the slow tests. Since our test framework is pytest, this is a memo on how to hunt down slow processing with pytest.
I mainly did the following two things.
(1) Output the test cases with long processing times using pytest's --durations option.
(2) Profile with pytest-profiling to identify the slow processing points.
This is the test code used in this article. It builds a list of the integers from 1 up to (but not including) 10000000, in order, using four different patterns, and finally compares each result with the expected value using assert. The sleep calls in **setup()** are only there to make the test processing times easier to understand in the output; they are meaningless in terms of processing.
The source code can be found here.
test_sample.py
import time
import pytest

COUNT = 10000000

@pytest.fixture(scope='module')
def expected():
    # Create the expected value to compare against with assert
    return [i for i in range(1, COUNT)]

@pytest.fixture(scope='function', autouse=True)
def setup():
    # Sleeps that are meaningless in terms of processing
    time.sleep(0.1)
    yield
    time.sleep(0.2)

def test_1_1(expected):
    actual = [i for i in range(1, COUNT)]
    assert expected == actual

def test_1_2(expected):
    actual = list({i for i in range(1, COUNT)})
    assert expected == actual

def test_1_3(expected):
    actual = []
    for i in range(1, COUNT):
        actual.append(i)
    assert expected == actual

def test_1_4(expected):
    actual = []
    for i in range(1, COUNT):
        # actual = actual + [i] takes prohibitively long
        actual += [i]
    assert expected == actual
If you add the --durations=N option when executing pytest, the N slowest items among the test executions and their pre/post processing (setup / teardown) are displayed in the execution results. If N=0 is set, all results are output.
The result of executing pytest with --durations=0 is as follows. By default, results of 0.01s or less are hidden, but if you add the -vv option, all results are displayed.
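As a self-contained way to try this, the sketch below (an illustrative script, not from this article) writes a tiny test file to a temporary directory and runs pytest on it with --durations=0. The file name test_durations_demo.py and the sleep values are made up for the demo.

```python
# Sketch: run pytest with --durations=0 on a throwaway test file.
# The test names and sleep values are arbitrary, for illustration only.
import pathlib
import subprocess
import sys
import tempfile

test_src = '''\
import time

def test_fast():
    time.sleep(0.01)

def test_slow():
    time.sleep(0.2)
'''

with tempfile.TemporaryDirectory() as tmp:
    path = pathlib.Path(tmp) / 'test_durations_demo.py'
    path.write_text(test_src)
    # -vv also shows durations that pytest would otherwise hide as too fast
    result = subprocess.run(
        [sys.executable, '-m', 'pytest', str(path),
         '--durations=0', '-vv', '-p', 'no:cacheprovider'],
        capture_output=True, text=True,
    )
    print(result.stdout)
```

In the printed report, test_slow should appear above test_fast in the durations table.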
=========== slowest test durations ===========
2.13s call tests/test_sample.py::test_1_3
1.25s call tests/test_sample.py::test_1_4
1.08s call tests/test_sample.py::test_1_2
0.81s call tests/test_sample.py::test_1_1
0.66s setup tests/test_sample.py::test_1_1
0.20s teardown tests/test_sample.py::test_1_2
0.20s teardown tests/test_sample.py::test_1_4
0.20s teardown tests/test_sample.py::test_1_3
0.20s teardown tests/test_sample.py::test_1_1
0.10s setup tests/test_sample.py::test_1_4
0.10s setup tests/test_sample.py::test_1_2
0.10s setup tests/test_sample.py::test_1_3
============= 4 passed in 7.40s =============
The top four rows are the execution (call) results of the four test cases. The simple list comprehension is the fastest. Next, the preprocessing (setup) of **test_1_1()** takes about 0.66 seconds. This appears to be the processing time of **expected()** plus the 0.1-second sleep in **setup()**, since the module-scoped **expected()** fixture is evaluated during the first test's setup. After that come the four teardowns and the setups of the tests other than **test_1_1()**, whose times match the sleeps written in **setup()**.
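The speed ranking of the four patterns can also be checked outside pytest. The sketch below (not part of the original test code) uses timeit with a much smaller COUNT so it finishes quickly; exact timings vary by machine, but the comprehension consistently beats the explicit append loop.

```python
# Sketch: compare the four list-building patterns from the test code with timeit.
# COUNT is reduced from the article's 10,000,000 so this finishes quickly.
import timeit

COUNT = 100_000

snippets = {
    'comprehension': '[i for i in range(1, COUNT)]',
    'set_to_list':   'list({i for i in range(1, COUNT)})',
    'append':        'a = []\nfor i in range(1, COUNT):\n    a.append(i)',
    'iadd':          'a = []\nfor i in range(1, COUNT):\n    a += [i]',
}

results = {}
for name, code in snippets.items():
    results[name] = timeit.timeit(code, globals={'COUNT': COUNT}, number=10)

# print fastest first
for name, seconds in sorted(results.items(), key=lambda kv: kv[1]):
    print(f'{name:14s} {seconds:.3f}s')
```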
After identifying which tests are slow, you need to find which processing inside those tests is slow in order to speed them up. According to internet wisdom, you can run pytest under a profiler, but you then have to analyze the profiler output separately with **pstats** or similar. While looking into whether this could be done more easily, I arrived at pytest-profiling.
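For reference, the "run the profiler yourself and analyze with pstats" route mentioned above looks roughly like this (a minimal sketch; build_list is a made-up stand-in for the code under test):

```python
# Sketch: profile a function manually with cProfile and summarize with pstats.
import cProfile
import io
import pstats

def build_list(count):
    # stand-in for the slow code you want to inspect
    return [i for i in range(1, count)]

profiler = cProfile.Profile()
profiler.enable()
build_list(1_000_000)
profiler.disable()

# print the top 5 entries by cumulative time
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
stats.sort_stats('cumulative').print_stats(5)
print(stream.getvalue())
```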
pytest-profiling
You can install it with pip.
pip install pytest-profiling
You can also output the profiling result as an SVG image, but Graphviz is required for that, so install it separately.
It's easy to use: just add the --profile-svg option when running pytest.
pytest tests/test_sample.py --profile-svg
When executed with the --profile-svg option, pytest-profiling performs roughly the following processing.
(1) Profile each test case with cProfile and save the result as a .prof file.
(2) Combine the profiles of all the tests into combined.prof.
(3) Convert the combined profile into an SVG image via gprof2dot and Graphviz.
If you add --profile instead of the --profile-svg option, it seems that everything except the third step (SVG generation) is performed.
After this processing, a prof directory is created, and .prof files containing pstats analysis information plus an SVG image are placed under it. The following files are created in the prof directory.
prof
├ combined.prof
├ combined.svg
├ test_1_1.prof
├ test_1_2.prof
├ test_1_3.prof
└ test_1_4.prof
**test_1_1.prof** through **test_1_4.prof** contain the profiling information for each test case. Each file is named after its test, although non-ASCII characters such as Japanese are replaced with underscores. **combined.prof** contains the profiling information for all the tests combined, and **combined.svg** is that combined profile rendered as a graph and converted to an SVG image, which looks like the image below.
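The generated .prof files can also be inspected directly with pstats instead of (or in addition to) the SVG. The sketch below fabricates a small profile file first so it is self-contained; with the real output you would just pass prof/combined.prof to pstats.Stats. The workload function is a made-up example.

```python
# Sketch: dump a profile to a .prof file and read it back with pstats,
# the same way you would read pytest-profiling's prof/combined.prof.
import cProfile
import io
import pathlib
import pstats
import tempfile

def workload():
    # stand-in for whatever the profiled tests actually did
    return sorted(range(100_000), reverse=True)

with tempfile.TemporaryDirectory() as tmp:
    prof_path = pathlib.Path(tmp) / 'combined.prof'

    profiler = cProfile.Profile()
    profiler.enable()
    workload()
    profiler.disable()
    profiler.dump_stats(str(prof_path))  # stand-in for pytest-profiling output

    # load the .prof file from disk and show the top entries
    stream = io.StringIO()
    stats = pstats.Stats(str(prof_path), stream=stream)
    stats.sort_stats('cumtime').print_stats(10)
    report = stream.getvalue()

print(report)
```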
The README of gprof2dot explains how to read the graph. Each node of the graph contains the following information.
+------------------------------+
| function name |
| total time % ( self time % ) |
| total calls |
+------------------------------+
In addition, the edge connecting a calling function and a called function (expressed here as parent and child) is annotated with the following information.
total time %
calls
parent --------------------> children
If you follow the graph down from the top-level parent node in descending order of total time %, you can quickly identify the slow parts of the processing.
I first ran this on Windows; the .prof files were generated, but the SVG file was not created. Looking more closely, the list of supported OSes does not seem to include Windows.
However, since the .prof files are generated, they can be converted to SVG manually. gprof2dot should be installed automatically when you install pytest-profiling.
gprof2dot -f pstats prof/combined.prof > prof/tmp
dot -Tsvg -o prof/combined.svg prof/tmp
Incidentally, I ran this from the Visual Studio Code terminal and got an error when executing the dot command. It does not seem to work when the default shell is PowerShell; switching to cmd via "Select Default Shell" made it work fine.
If you write the following in the pytest configuration file and place it in the root directory of the project, pytest-profiling will run every time pytest is executed.
pytest.ini
[pytest]
testpaths = tests
python_files = test_*.py
addopts = -vv --durations=0 --profile-svg
However, if there are many test cases, the profiling information becomes hard to read. I think it is easier to first identify the slow tests with the --durations option and then run pytest-profiling only on those tests to pin down the slow processing. Also note that if you put the option in the configuration file, profiling runs on every pytest invocation.