Run flake8 and pytest on GitLab CI

What this article covers

- Running flake8 and pytest in a GitLab pipeline
- Changing the execution timing so the above runs when a merge request is submitted

Why I wrote this article

Static analysis, formatters, and test code are all useful means of ensuring software quality, but I found it hard to introduce them into team development (or, more simply, I wasn't sure how to go about it).

Since I use GitLab in my daily work, I figured I could put together a simple mechanism without getting into anything complicated, so I've summarized it briefly here.

If you have a GitLab account, everything here runs on the Free plan, so you can try it right away.

Execution environment

- GitLab.com (cloud service)

1. Environment setup and flake8 check

1-1. Create a repository in GitLab

Create a repository in GitLab for this series of steps. A private repository is fine.

1-2. Add .gitlab-ci.yml and push

The configuration is extremely simple, with no stage division. It pulls the Python 3 image and runs flake8 against the code in the src directory.

image: python:3
 
before_script:
  - pip install flake8
 
test:
  script:
    - flake8 src/

Checking CI/CD after the push, I confirmed that the pipeline ran and the flake8 analysis was executed.

2. Add tests

Now that flake8 is running, let's add a test with pytest.

2-1. Creating and running tests locally

Create the test code with a directory structure that keeps the test files separate, as shown below.

.
├── __init__.py
├── poetry.lock
├── pyproject.toml
├── src
│   ├── main.py
│   └── __init__.py
├── tests                 # added
│   ├── test_main.py      # added
│   └── __init__.py       # added
└── venv

The test code is as follows. The content itself doesn't matter much this time, so it's fairly arbitrary.

import os
import sys

# Make the project root importable so that "from src import main" works
current_path = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, current_path + '/../')

from src import main

def test_add():
    result = main.add(3, 5)
    expected = 8

    assert result == expected

def test_substract():
    result = main.substract(10, 5)
    expected = 5

    assert result == expected

def test_multiply():
    result = main.multiply(4, 5)
    expected = 20

    assert result == expected
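
The tests assume that src/main.py defines add, substract, and multiply. The original article doesn't show main.py, but a minimal sketch consistent with these tests might look like this:

# src/main.py -- hypothetical reconstruction based on the tests above
def add(x, y):
    return x + y


def substract(x, y):
    return x - y


def multiply(x, y):
    return x * y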

I ran the tests locally and confirmed that everything passed.

$ pytest tests   
======================== test session starts ========================
platform darwin -- Python 3.7.6, pytest-5.3.2, py-1.8.1, pluggy-0.13.1
collected 3 items                                                   

tests/test_main.py ...                                        [100%]

========================= 3 passed in 0.01s =========================

2-2. Coverage measurement

I will also try to measure coverage using pytest-cov.
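
(pytest-cov is not installed by the steps so far; locally I assume something like pip install pytest pytest-cov, or the poetry equivalent, has already been run.)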

$ pytest --cov=src tests/  
======================== test session starts ========================
...
collected 3 items                                                   

tests/test_main.py ...                                        [100%]

---------- coverage: platform darwin, python 3.7.6-final-0 -----------
Name              Stmts   Miss  Cover
-------------------------------------
src/__init__.py       0      0   100%
src/main.py           6      0   100%
-------------------------------------
TOTAL                 6      0   100%


========================= 3 passed in 0.03s =========================

100% coverage is boring, so I'll add new code to main.py to deliberately lower it.

- Additional code

...
def divide(x, y):
    return x / y

- pytest-cov output

% pytest --cov=src tests/   
============================================ test session starts ============================================
...
collected 3 items                                                                                           

tests/test_main.py ...                                                                                [100%]

---------- coverage: platform darwin, python 3.7.6-final-0 -----------
Name              Stmts   Miss  Cover
-------------------------------------
src/__init__.py       0      0   100%
src/main.py           8      1    88%
-------------------------------------
TOTAL                 8      1    88%


============================================= 3 passed in 0.03s =============================================

2-3. Running on GitLab CI/CD

Add the pytest run with pytest-cov to the YAML file and push it.

image: python:3
 
before_script:
  - pip install flake8 pytest pytest-cov
 
test:
  script:
    - flake8 src/
    - pytest --cov=src tests/

When I pushed the code and checked the pipeline result, the pipeline stopped because the newly added code was flagged by flake8.
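
The job log isn't reproduced here, but a common cause when a function is appended directly after existing code is pycodestyle's E302 (expected 2 blank lines). A version of the addition that passes flake8's default checks might look like this:

# end of src/main.py -- two blank lines before the new definition satisfy E302


def divide(x, y):
    return x / y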

Fix the code and push again.

This time the pipeline completed successfully.

2-4. Add coverage display to the pipeline

Even with coverage measured, you have to open each job log to see it, so let's display the coverage rate using GitLab's built-in feature.

Specifically, use the test coverage parsing setting on the GitLab settings page. It extracts the coverage rate from the job log with a regular expression; pytest-cov appears in the provided examples, so you can copy that regex and save it as is.
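
For reference, the example regex GitLab lists for pytest-cov is along these lines (copy the current value from the settings page rather than from here):

^TOTAL.+?(\d+\%)$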

When I pushed the code again, I confirmed that the coverage rate was displayed in the pipeline view.

3. Change the execution timing of the pipeline

Currently the flow is: push directly to master → the pipeline runs. In actual development, however, reviews usually happen through merge requests.

So I'd like to run the pipeline when a merge request is submitted, building a flow where code is reviewed and merged only if the flake8 analysis and the pytest tests pass.

3-1. Modify the YAML

Create a new branch and modify the YAML file as follows. The only change is to specify merge_requests under only.

For details, see the GitLab documentation: Pipelines for Merge Requests.

image: python:3
 
before_script:
  - pip install flake8 pytest pytest-cov
 
test:
  script:
    - flake8 src/
    - pytest --cov=src tests/
  only:
  - merge_requests

After pushing, first confirm that the pipeline no longer runs on a plain push, as it did before.

3-2. Submitting the merge request

Submit a merge request from the new branch to master.

After submitting the merge request (pipeline running)

I confirmed that the pipeline runs as expected. A "Merge when pipeline succeeds" button appears, which isn't normally displayed.

After submitting the merge request (after the pipeline finishes)

The display changes from the in-progress state, and the pipeline result (success) and the coverage are shown.

3-3. Merge request with a failing pipeline

Try pushing a new commit that adds a test case that intentionally fails (a sketch follows below). This confirmed that the pipeline runs not only for new merge requests but also for additional commits to an existing one.
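
The original article doesn't show the failing test itself; one possibility, consistent with the divide function added earlier, is a test with a deliberately wrong expectation:

def test_divide():
    # Deliberately wrong expectation so that this test (and the pipeline) fails
    result = main.divide(10, 5)
    expected = 999

    assert result == expected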

After the pipeline finished, the display changed again. With the current (default) settings, merging is still possible even if the pipeline fails.

GitLab can be configured to allow merging only when the pipeline succeeds, so check "Pipelines must succeed" in the merge request settings and save.

With that enabled, merging became impossible while the pipeline failed. How you operate this depends on the project, but a rule that only code that has passed static analysis and testing gets merged makes it easier to guarantee quality.
