I tried a visual regression test on GitHub Pages

Motivation

Hello, I'm @glassmonkey. When you think of GitHub Actions, you think of CI. I wanted to try a CI task that works with images, so I put together a simple visual regression test. I built it in a rush, so I would appreciate any opinions and impressions.

What is a visual regression test?

The explanation on the Astamuse blog was easy to understand, so I will quote it:

A visual regression test is, as the name says, a regression test of the visuals: concretely, a test that takes screenshots and extracts the differences between them.

In other words, when you run a visual regression test in a GitHub Actions environment, difference images like the ones below are generated in the virtual environment.

In this article

How I did it

Sample

What I actually built is the following pull request: https://github.com/glassmonkey/vue-sample/pull/3 I added it to an application I had made while studying Vue.js.

This time, the changed areas are surrounded with rectangles so that the differences are easy to see, as shown below.

The original image, the image after the change, and the difference image:
<img width="600px" alt="The original image" src="https://qiita-image-store.s3.ap-northeast-1.amazonaws.com/0/152938/136950cf-4133-b4c5-badb-6ed751650df1.png">

Requirements

The comparison targets are given as environment variables (https://github.com/glassmonkey/vue-sample/blob/master/.github/workflows/test.yml#L19-L20):

BASE_URL: https://glassmonkey.github.io/vue-sample

DIFF_URL: http://localhost:8080

Test content

Test flow

The yml for the test is as follows. https://github.com/glassmonkey/vue-sample/blob/master/.github/workflows/test.yml

name: test

on: pull_request
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1 # ①
      - name: develop run # ②
        run: |
          docker-compose up -d
      - name: run test # ③
        run: |
          cd tests && \
          docker-compose build && \
          docker-compose run app
        env:
          WINDOW_SIZE: 1024,768
          BASE_URL: https://glassmonkey.github.io/vue-sample/
          DIFF_URL: http://localhost:8080
      - uses: jakejarvis/s3-sync-action@master # ④
        env:
          AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_REGION: 'ap-northeast-1'
          SOURCE_DIR: './tests/dist'
          DEST_DIR:  ${{github.repository}}/${{github.sha}}
      - name: post message # ⑤
        run: |
          cd tests && bash post.sh
        env:
          S3_PATH: https://${{ secrets.AWS_S3_BUCKET }}.s3-ap-northeast-1.amazonaws.com/${{github.repository}}/${{github.sha}}
          GITHUB_TOKEN: ${{secrets.GITHUB_TOKEN}}

The credentials need to be set beforehand under Settings > Secrets on GitHub.

To briefly explain the steps:

  1. Initialization
  2. Build a virtual environment with the contents of PR
  3. Build and run a container for visual regression testing
  4. Upload the generated image to s3
  5. Post the image to PR

That is the overall flow. Below I explain step 3 (image generation) and step 5 (posting to the PR).

Image generation

Taking the screenshots and generating the difference images is done in the following script: https://github.com/glassmonkey/vue-sample/blob/master/tests/src/main.py

Since the test container runs separately from the locally built application, the name localhost cannot be resolved from inside it, so I forcibly rewrite it on the test side as follows.

    if "localhost" in url:
        # Resolve the Docker host's gateway address and substitute it for localhost
        host_addr = subprocess.run(
            ["ip route | awk 'NR==1 {print $3}'"],
            stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True
        ).stdout.decode("utf8").rstrip('\n')
        url = url.replace("localhost", host_addr)
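One caveat with this approach: a plain string replace would also rewrite "localhost" if it happened to appear in the path or query string. A slightly more robust variant, as a sketch (the helper name `rewrite_host` is my own), rewrites only the hostname part using the standard library's urllib.parse:

```python
from urllib.parse import urlsplit, urlunsplit


def rewrite_host(url, new_host):
    """Replace only the hostname part of a URL, keeping scheme, port, and path."""
    parts = urlsplit(url)
    # netloc is e.g. "localhost:8080"; swap only the first (hostname) occurrence
    netloc = parts.netloc.replace(parts.hostname, new_host, 1)
    return urlunsplit((parts.scheme, netloc, parts.path, parts.query, parts.fragment))


print(rewrite_host("http://localhost:8080/localhost/page", "172.17.0.1"))
# → http://172.17.0.1:8080/localhost/page (the path segment is untouched)
```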

About screenshots

I referred to the article Running Selenium, Headless Chrome and Python3 on Docker. Note that the following options must be set. In particular, when starting headless Chrome, I stumbled a bit over the fact that the window size has to be specified at launch time.

   options.add_argument('--headless')               # run without a display
   options.add_argument('--no-sandbox')             # needed when running as root in Docker
   options.add_argument('--disable-dev-shm-usage')  # avoid the small /dev/shm in containers
   options.add_argument('--hide-scrollbars')        # keep scrollbars out of the screenshots
   options.add_argument('--window-size={}'.format(os.environ['WINDOW_SIZE']))  # must be set at launch

This time I took screenshots of statically defined URLs as shown below, but since it seems you can also take a screenshot of a specific DOM element, it would probably be fine to make the targets customizable from the outside.


    driver.get(url)
    driver.save_screenshot(filename)
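As for making the targets customizable from the outside, one minimal sketch (the helper name `target_pages` is my own; it reads the same BASE_URL/DIFF_URL variables the workflow already sets) would be to derive the (label, url) pairs from environment variables instead of hard-coding them:

```python
import os


def target_pages():
    """Yield (label, url) pairs for the pages to capture, taken from env vars."""
    for label in ("BASE", "DIFF"):
        url = os.environ.get("{}_URL".format(label))
        if url:
            yield label.lower(), url


# Simulate the environment the workflow sets up
os.environ["BASE_URL"] = "https://glassmonkey.github.io/vue-sample/"
os.environ["DIFF_URL"] = "http://localhost:8080"
print(list(target_pages()))
```

Each pair could then be passed to driver.get / driver.save_screenshot in a loop.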

About difference image generation

I referred to the article here. The difference image is generated from the two screenshots with the following function.

import cv2
import imutils
from skimage.measure import compare_ssim

def diff_images(base_image_path, diff_image_path):
    """
    refer to https://www.pyimagesearch.com/2017/06/19/image-difference-with-opencv-and-python/
    :param base_image_path:
    :param diff_image_path:
    :return:
    """
    # load the two input images
    base_image = cv2.imread(base_image_path)
    diff_image = cv2.imread(diff_image_path)

    # convert the images to grayscale
    grayA = cv2.cvtColor(base_image, cv2.COLOR_BGR2GRAY)
    grayB = cv2.cvtColor(diff_image, cv2.COLOR_BGR2GRAY)

    (score, sub) = compare_ssim(grayA, grayB, full=True)
    sub = (sub * 255).astype("uint8")
    print("SSIM: {}".format(score))
    thresh = cv2.threshold(sub, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
    cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cnts = imutils.grab_contours(cnts)

    for c in cnts:
        # compute the bounding box of the contour and then draw the
        # bounding box on both input images to represent where the two
        # images differ
        (x, y, w, h) = cv2.boundingRect(c)
        cv2.rectangle(base_image, (x, y), (x + w, y + h), (0, 0, 255), 2)
        cv2.rectangle(diff_image, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.imwrite("/app/dist/base.png", base_image)
    cv2.imwrite("/app/dist/diff.png", diff_image)
    cv2.imwrite("/app/dist/sub.png", sub)

POST to PR

https://github.com/glassmonkey/vue-sample/blob/master/tests/post.sh

After the Action has uploaded the images to S3, I post them to the PR via the GitHub API. For the processing I referred to jessfraz/shaking-finger-action.
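As a sketch of what post.sh does (the actual script is shell; the function name `build_comment` and the markdown layout here are my own), the comment body can be assembled from the S3 path and then POSTed to the pull request's comment endpoint of the GitHub REST API:

```python
import json


def build_comment(s3_path):
    """Build a markdown comment body embedding the base, diff, and sub images."""
    lines = ["## Visual regression test result"]
    for name in ("base", "diff", "sub"):
        lines.append("### {0}\n![{0}]({1}/{0}.png)".format(name, s3_path))
    return "\n".join(lines)


body = build_comment("https://example-bucket.s3-ap-northeast-1.amazonaws.com/owner/repo/sha")
payload = json.dumps({"body": body})
# The payload would be POSTed to
#   https://api.github.com/repos/{owner}/{repo}/issues/{pr_number}/comments
# with an "Authorization: token $GITHUB_TOKEN" header.
print(payload[:12])
```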

Result

The result was successfully posted to this PR: https://github.com/glassmonkey/vue-sample/pull/3

Summary

I was able to run a simple visual regression test. Since I went to the trouble, it might be worth properly packaging it as an action in a separate repository. I hadn't really touched the GitHub API before, so I worried quite a bit about where to host the images to post; I chose S3 this time, but there may be other approaches. I gave up on basing the test container on Alpine because installing OpenCV there gets complicated, but the current image is slow to build, so I would still like to switch to Alpine eventually.
