This is the article for day 11 of the Python Part 2 Advent Calendar 2015.
This article summarizes tips gathered while using pytest, in a reverse-lookup (cookbook) format. A sample project covering the contents of this article (verified with Python 3.5.0 / pytest 2.8.4) is available in the following repository, so please refer to it as well. https://github.com/FGtatsuro/pytest_sample
pytest is, as the name suggests, a testing library written in Python. Similar libraries include unittest and nose.
Since I am not very familiar with those two tools, I cannot evaluate pytest in comparison with them, but I personally found the following points characteristic:
- Assertions use Python's standard `assert` statement; there is no need to define library-specific assert methods (e.g. `assertEquals`).
- There are appropriate hook points in the test life cycle (collection -> execution -> reporting), and it is relatively easy to define your own processing.
- By default, the ability to filter which tests run is very rich. By utilizing the hook points mentioned above, you can even implement your own filtering logic.
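As a minimal sketch of the first point (the test names here are arbitrary), a test can be written with nothing but the built-in `assert`; on failure pytest rewrites the assertion to report the compared values:

```python
# Plain assert statements are enough; pytest's assertion rewriting
# reports the compared values when a test fails.
def test_add():
    assert 1 + 1 == 2

def test_concat():
    assert 'py' + 'test' == 'pytest'
```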
You can specify default values for the options given to pytest in `setup.cfg`.
setup.cfg

```ini
# For the options that can be specified, see py.test --help
[pytest]
addopts = -v
```
(Reference) http://pytest.org/latest/customize.html?highlight=setup%20cfg#adding-default-options
pytest itself does not have a timeout feature, but the pytest-timeout plugin provides one.
Installing the pytest-timeout plugin

```console
$ pip install pytest-timeout
```
There are several ways to set the timeout, and they can be combined as follows:

- Specify the default timeout in `setup.cfg`.
- For tests that need longer than that, specify the timeout individually with a decorator.
Default timeout

```ini
[pytest]
addopts = -v
timeout = 5
```
Specifying the timeout with a decorator

```python
import time

import pytest

@pytest.mark.timeout(10)
def test_timeout():
    time.sleep(8)
```
When HTTP access occurs in integration tests and the like, it is convenient to be able to see the HTTP requests and responses on standard output while the tests run (at least while implementing the tests, in many cases). Taking requests as an example, this can be achieved by defining a handler as follows.
Request handler implementation example

```python
import requests

# _logging is a self-defined module
from ._logging import get_logger

logger = get_logger(__name__)

class ResponseHandler(object):

    def __call__(self, resp, *args, **kwargs):
        logger.debug('### Request ###')
        logger.debug('Method:{0}'.format(resp.request.method))
        logger.debug('URL:{0}'.format(resp.request.url))
        logger.debug('Header:{0}'.format(resp.request.headers))
        logger.debug('Body:{0}'.format(resp.request.body))
        logger.debug('### Response ###')
        logger.debug('Status:{0}'.format(resp.status_code))
        logger.debug('Header:{0}'.format(resp.headers))
        logger.debug('Body:{0}'.format(resp.text))

class HttpBinClient(object):
    '''
    The client for https://httpbin.org/
    '''

    base = 'https://httpbin.org'

    def __init__(self):
        self.session = requests.Session()
        # Register the created handler
        # http://docs.python-requests.org/en/latest/user/advanced/?highlight=response#event-hooks
        self.session.hooks = {'response': ResponseHandler()}

    def ip(self):
        return self.session.get('{0}/ip'.format(self.base))
```
However, with pytest's default settings, the standard output produced during a test run is captured by pytest (and used for reporting test results), so the handler output cannot be seen directly. To see it, you need to set the `--capture` option to `no`.
Execution example with `--capture=no`

```console
$ py.test --capture=no
=========================================================================================== test session starts ============================================================================================
...
tests/test_calc.py::test_add
PASSED
tests/test_client.py::test_ip
DEBUG:sample.client:2015-12-10 11:26:17,265:### Request ###
DEBUG:sample.client:2015-12-10 11:26:17,265:Method:GET
DEBUG:sample.client:2015-12-10 11:26:17,265:URL:https://httpbin.org/ip
DEBUG:sample.client:2015-12-10 11:26:17,265:Header:{'Accept-Encoding': 'gzip, deflate', 'Connection': 'keep-alive', 'User-Agent': 'python-requests/2.8.1', 'Accept': '*/*'}
DEBUG:sample.client:2015-12-10 11:26:17,265:Body:None
DEBUG:sample.client:2015-12-10 11:26:17,265:### Response ###
DEBUG:sample.client:2015-12-10 11:26:17,265:Status:200
DEBUG:sample.client:2015-12-10 11:26:17,265:Header:{'Content-Type': 'application/json', 'Access-Control-Allow-Origin': '*', 'Server': 'nginx', 'Access-Control-Allow-Credentials': 'true', 'Content-Length': '33', 'Connection': 'keep-alive', 'Date': 'Thu, 10 Dec 2015 02:26:17 GMT'}
DEBUG:sample.client:2015-12-10 11:26:17,315:Body:{
"origin": xxxxx
}
...
```
If specifying the option every time is a hassle, it is a good idea to set it as a default value.
Default value for the capture option

```ini
[pytest]
addopts = -v --capture=no
timeout = 5
```
(Reference) http://pytest.org/latest/capture.html?highlight=capture
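As a minimal illustration of capturing (the test below is hypothetical, for illustration only): with pytest's default settings the printed line is captured and shown only for failing tests; with `--capture=no` it appears live in the console.

```python
# A hypothetical test that writes to standard output.
# Under the default capture mode, this output is swallowed by pytest
# and shown only on failure; with --capture=no it is printed live.
def test_prints():
    print('this line is captured by pytest by default')
    assert True
```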
With the `--junit-xml` option, you can write a test report in XUnit format to a specified file. This makes it easy to integrate with Jenkins.
XUnit-format report output

```ini
[pytest]
addopts = -v --capture=no --junit-xml=results/results.xml
timeout = 5
```
When checking test results in Jenkins, it is convenient to be able to see each test method along with its standard output. As mentioned above, if the default value of the `--capture` option is `no`, the test report will not include the standard output, so it is a good idea to override the value at run time.
Specifying `--capture=sys` at run time

```console
$ py.test --capture=sys
=========================================================================================== test session starts ============================================================================================
...
tests/test_calc.py::test_add PASSED
# No HTTP request/response log on the console
tests/test_client.py::test_ip PASSED
tests/test_timeout.py::test_timeout PASSED
...
# The test report includes the HTTP request/response log
$ cat results/results.xml
<?xml version="1.0" encoding="utf-8"?><testsuite errors="0" failures="0" name="pytest" skips="0" tests="3" time="9.110"><testcase classname="tests.test_calc" file="tests/test_calc.py" line="5" name="test_add" time="0.00024390220642089844"><system-out>
</system-out></testcase><testcase classname="tests.test_client" file="tests/test_client.py" line="7" name="test_ip" time="0.9390749931335449"><system-out>
DEBUG:sample.client:2015-12-10 12:29:33,753:### Request ###
DEBUG:sample.client:2015-12-10 12:29:33,754:Method:GET
DEBUG:sample.client:2015-12-10 12:29:33,754:URL:https://httpbin.org/ip
DEBUG:sample.client:2015-12-10 12:29:33,754:Header:{'Connection': 'keep-alive', 'User-Agent': 'python-requests/2.8.1', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*'}
DEBUG:sample.client:2015-12-10 12:29:33,754:Body:None
DEBUG:sample.client:2015-12-10 12:29:33,754:### Response ###
DEBUG:sample.client:2015-12-10 12:29:33,754:Status:200
DEBUG:sample.client:2015-12-10 12:29:33,754:Header:{'Content-Type': 'application/json', 'Date': 'Thu, 10 Dec 2015 03:29:34 GMT', 'Connection': 'keep-alive', 'Access-Control-Allow-Origin': '*', 'Content-Length': '33', 'Access-Control-Allow-Credentials': 'true', 'Server': 'nginx'}
DEBUG:sample.client:2015-12-10 12:29:33,811:Body:{
"origin": "124.33.163.178"
}
</system-out></testcase><testcase classname="tests.test_timeout" file="tests/test_timeout.py" line="8" name="test_timeout" time="8.001494884490967"><system-out>
</system-out></testcase></testsuite>
```
By integrating with setuptools, you can run the tests without polluting the execution environment (without changing the output of `pip freeze`). I think this is useful for avoiding needless trouble when someone other than yourself runs the tests, and that person does not use Python enough to isolate environments with virtualenv.
An example `setup.py` is shown below.
setup.py (integrating setuptools and pytest)

```python
import os
import sys

from setuptools import setup, find_packages
from setuptools.command.test import test as TestCommand

# Implementation of the command that runs pytest
class PyTest(TestCommand):

    # To pass pytest options, use --pytest-args='{options}'
    user_options = [
        ('pytest-args=', 'a', 'Arguments for pytest'),
    ]

    def initialize_options(self):
        TestCommand.initialize_options(self)
        self.pytest_target = []
        self.pytest_args = []

    def finalize_options(self):
        TestCommand.finalize_options(self)
        self.test_args = []
        self.test_suite = True

    def run_tests(self):
        import pytest
        errno = pytest.main(self.pytest_args)
        sys.exit(errno)

version = '0.1'

# pytest must be importable inside setup.py
setup_requires = [
    'pytest'
]
install_requires = [
    'requests',
]
tests_require = [
    'pytest-timeout',
    'pytest'
]

setup(name='pytest_sample',
      ...
      setup_requires=setup_requires,
      install_requires=install_requires,
      tests_require=tests_require,
      # Associate 'test' with the command that runs pytest
      cmdclass={'test': PyTest},
      )
```
After modifying `setup.py`, you can run the tests with `python setup.py test`.
Test run via setuptools

```console
$ python setup.py test
...
tests/test_calc.py::test_add
PASSED
tests/test_client.py::test_ip
DEBUG:sample.client:2015-12-10 12:54:20,426:### Request ###
DEBUG:sample.client:2015-12-10 12:54:20,426:Method:GET
DEBUG:sample.client:2015-12-10 12:54:20,426:URL:https://httpbin.org/ip
DEBUG:sample.client:2015-12-10 12:54:20,426:Header:{'Connection': 'keep-alive', 'User-Agent': 'python-requests/2.8.1', 'Accept': '*/*', 'Accept-Encoding': 'gzip, deflate'}
DEBUG:sample.client:2015-12-10 12:54:20,426:Body:None
DEBUG:sample.client:2015-12-10 12:54:20,426:### Response ###
DEBUG:sample.client:2015-12-10 12:54:20,426:Status:200
DEBUG:sample.client:2015-12-10 12:54:20,426:Header:{'Server': 'nginx', 'Access-Control-Allow-Origin': '*', 'Access-Control-Allow-Credentials': 'true', 'Connection': 'keep-alive', 'Content-Length': '33', 'Date': 'Thu, 10 Dec 2015 03:54:20 GMT', 'Content-Type': 'application/json'}
DEBUG:sample.client:2015-12-10 12:54:20,484:Body:{
"origin": "124.33.163.178"
}
PASSED
tests/test_timeout.py::test_timeout
PASSED
...
```
To pass options to pytest, use the `--pytest-args` option defined in the implementation above.
Test run via setuptools (with options)

```console
$ python setup.py test --pytest-args='--capture=sys'
...
tests/test_calc.py::test_add PASSED
tests/test_client.py::test_ip PASSED
tests/test_timeout.py::test_timeout PASSED
...
```
(Reference) http://pytest.org/latest/goodpractises.html
(Supplement) While re-reading the documentation for this article, I found that there is a library that does the same thing. It may be good to use it instead. https://pypi.python.org/pypi/pytest-runner
The pytest-xdist plugin allows you to run tests in parallel.
Parallel test execution

```console
# Parallel execution with 2 processes
$ py.test -n 2
```
Since the tests run in separate processes, library dependencies must be resolved in the execution environment beforehand. For that reason, this does not combine well with execution via setuptools, which resolves dependencies at run time.
The `--lf` option allows you to rerun only the tests that failed in the previous run.
Information on failed tests is recorded in `.cache/v/cache/lastfailed`, directly under the directory where the tests were run. If you want to integrate with other tools (e.g. rerun only when there are failed tests), it may be useful to read this file directly.
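As a sketch of such tool integration (assuming the cache path mentioned above; the helper name is my own), the file can be parsed as plain JSON:

```python
import json
import os

def has_failed_tests(cache_path='.cache/v/cache/lastfailed'):
    """Return True if the previous pytest run recorded any failed tests."""
    # The file is absent when no failure information has been recorded.
    if not os.path.exists(cache_path):
        return False
    with open(cache_path) as f:
        failed = json.load(f)
    # Keys are test IDs such as 'tests/test_timeout.py::test_timeout'.
    return bool(failed)
```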
```console
$ py.test --capture=sys
collected 3 items
...
tests/test_calc.py::test_add PASSED
tests/test_client.py::test_ip PASSED
tests/test_timeout.py::test_timeout FAILED
...
# Information about the failed tests
$ cat .cache/v/cache/lastfailed
{
  "tests/test_timeout.py::test_timeout": true
}
# Fix the test...
# Rerun only the failed tests
$ py.test --capture=sys --lf
```
pytest can filter the tests to be executed by various conditions.
First, the way to specify a module / method to run.
Module / method specification

```console
# Specify a module
$ py.test tests/test_calc.py
# Specify a method in a module
$ py.test tests/test_calc.py::test_add
```
You can also run only the modules / methods whose names match a given string.
Filtering by string matching

```console
# Run the modules/methods whose names contain the string 'calc'
$ py.test -k calc
```
You can also filter by decorator marks. Since `and` / `or` are supported, you can specify conditions such as having or lacking multiple marks.
Filtering by mark

```python
# Marking: any string can be used after pytest.mark.
@pytest.mark.slow
@pytest.mark.httpaccess
def test_ip():
    ...

@pytest.mark.slow
@pytest.mark.timeout(10)
def test_timeout():
    ...
```

```console
# Run tests marked with @pytest.mark.slow
$ py.test -m slow
# Run tests with both @pytest.mark.slow and @pytest.mark.httpaccess
$ py.test -m 'slow and httpaccess'
# Run tests with @pytest.mark.slow but without @pytest.mark.httpaccess
$ py.test -m 'slow and not httpaccess'
```
(Reference) https://pytest.org/latest/example/markers.html#mark-examples
There are various other features, so if you are interested, you should read the official documentation.
Tomorrow, day 12, is @shinyorke's turn.