Previously, I summarized how to unit test Flask using Python's pytest.
Unit tests are often evaluated from the perspective of whether the processing routes and branching conditions are covered (coverage testing). Here is a summary of how to use a library called `pytest-cov` that measures the coverage rate.
- Install pytest with `pip install pytest`.
- Install pytest-cov with `pip install pytest-cov`.
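As a quick check that the installation succeeded, pip's standard `show` command prints the installed package metadata (a minimal sketch; run it for `pytest` as well if you like):

```
# pip show pytest-cov
```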
Since pytest-cov works on top of pytest, you need an environment where pytest can already run (the source under test and the test source); pytest-cov itself needs nothing more. For how to use pytest, see "Unit test Flask with pytest".
We will look at how to use pytest-cov through a simple function, the same one used in the earlier unit test article. To check branching, we use the source with the if statement from that example.
testing_mod.py

```python
def add_calc(a, b):
    if a == 1:
        b = b + 1
    return a + b
```
The test code is the same as in the earlier article.
py_test_main.py

```python
import pytest
import testing_mod


def test_ok_sample():
    result = testing_mod.add_calc(1, 2)
    assert 4 == result
```
Now that you have the source under test and the test code, run it. The invocation differs slightly depending on whether you want to cover processing routes or conditional branches.
You can see the statement coverage rate (C0 coverage) of the processing routes by adding the `--cov` option when running pytest.
```
# pytest --cov -v py_test_main.py
~~~~ Abbreviation ~~~~~
py_test_main.py::test_ok_sample PASSED [100%]

----------- coverage: platform win32, python 3.6.5-final-0 -----------
Name              Stmts   Miss  Cover
-------------------------------------
py_test_main.py       5      0   100%
testing_mod.py        4      0   100%
-------------------------------------
TOTAL                 9      0   100%

====== 1 passed in 0.05s ======
```
Looking at the result, you can see that Cover for testing_mod.py under test is 100%, meaning every processing route was executed. Incidentally, Stmts is the number of executable statements, and Miss is the number of statements that were not executed during the test.
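If you want to see exactly which lines count as Miss, pytest-cov's standard `--cov-report=term-missing` option adds a Missing column listing the uncovered line numbers, and `--cov=testing_mod` restricts measurement to the module under test (command sketch only; output varies with your environment):

```
# pytest --cov=testing_mod --cov-report=term-missing -v py_test_main.py
```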
You can see the branch coverage rate (C1 coverage) of conditional branching by adding the `--cov --cov-branch` options when running pytest.
```
# pytest --cov --cov-branch -v py_test_main.py
~~~~ Abbreviation ~~~~~
py_test_main.py::test_ok_sample PASSED [100%]

----------- coverage: platform win32, python 3.6.5-final-0 -----------
Name              Stmts   Miss Branch BrPart  Cover
---------------------------------------------------
py_test_main.py       5      0      0      0   100%
testing_mod.py        4      0      2      1    83%
---------------------------------------------------
TOTAL                 9      0      2      1    91%

====== 1 passed in 0.08s ======
```
Looking at the results, unlike before, Branch and BrPart columns now appear. This time testing_mod.py is at 83% because we only test the condition that enters the if statement. Incidentally, Branch is the number of branch destinations, and BrPart is the number of branches that were only partially exercised. In this example there are two branch destinations, one where execution enters the if statement and one where it does not; since the latter has not been tested, BrPart shows 1.
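To make the two branch destinations concrete, here is the function under test annotated with which input takes which exit (a sketch; the inputs mirror the tests in this article):

```python
import testing_mod

# The if statement in add_calc has two branch destinations:
#   a == 1 is True  -> "b = b + 1" runs, then return
#   a == 1 is False -> the if body is skipped, straight to return
print(testing_mod.add_calc(1, 2))  # True exit:  returns 4 (covered by test_ok_sample)
print(testing_mod.add_calc(2, 2))  # False exit: returns 4 (not covered yet)
```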
Add a test to bring the branch coverage to 100%.
py_test_main.py

```python
import pytest
import testing_mod


def test_ok_sample():
    result = testing_mod.add_calc(1, 2)
    assert 4 == result


def test_ok_not_if_sample():
    result = testing_mod.add_calc(2, 2)
    assert 4 == result
```
Run it again to check whether the branch coverage is now 100%.
```
# pytest --cov --cov-branch -v py_test_main.py
~~~~ Abbreviation ~~~~~
py_test_main.py::test_ok_sample PASSED [ 50%]
py_test_main.py::test_ok_not_if_sample PASSED [100%]

----------- coverage: platform win32, python 3.6.5-final-0 -----------
Name              Stmts   Miss Branch BrPart  Cover
---------------------------------------------------
py_test_main.py       8      0      0      0   100%
testing_mod.py        4      0      2      0   100%
---------------------------------------------------
TOTAL                12      0      2      0   100%

====== 2 passed in 0.08s ======
```
Looking at the result, Cover is 100% and BrPart is 0, so both branch destinations are now tested.
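If the terminal table is hard to read, pytest-cov's standard `--cov-report=html` option writes an HTML report (to an `htmlcov` directory by default) where covered and uncovered lines and branches are highlighted:

```
# pytest --cov --cov-branch --cov-report=html py_test_main.py
```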
Coverage is a good way to reduce bugs because it is straightforward and can be checked mechanically. However, some people treat coverage as an absolute measure, which leads to missed tests and needlessly inflated test effort. For example: "the combination of condition 1 and condition 2 was wrong and I missed the bug, but coverage was fine," or "it took a week to exercise the exception-handling paths (for every SQL statement) triggered by a DB connection error." On the other hand, it is a very effective library because it finds conditions that are easily overlooked, such as the path that does not enter the if statement in this example. Personally, I think it is best used as an auxiliary tool: look at the coverage first, analyze the conditions, and then decide whether a test is actually needed.
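As a concrete illustration of the first pitfall, here is a hypothetical function (`discount` is invented for this example) where two tests achieve 100% branch coverage yet miss a bug in the combination of the two conditions:

```python
def discount(is_member, total):
    # Hypothetical pricing rule with a combination bug:
    # members with big orders should stack to a 15% discount,
    # but the second if overwrites the first instead of adding.
    rate = 0
    if is_member:
        rate = 5
    if total >= 100:
        rate = 10  # bug: should be rate += 10
    return total * (100 - rate) / 100


# Each if's True and False exits are all exercised, so branch (C1)
# coverage is 100%, yet the buggy combination goes unnoticed:
assert discount(True, 50) == 47.5     # member, small order   -> 5% off, correct
assert discount(False, 200) == 180.0  # non-member, big order -> 10% off, correct
# discount(True, 200) returns 180.0, but the intended stacked
# discount would give 170.0 -- a bug that coverage never flags.
```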