Build a subpixel accuracy measurement system with Jetson Nano + USB camera + OpenCV + Scikit-image

I wrote a Python script that performs subpixel-accuracy measurement using a Jetson Nano, a USB camera, OpenCV, and scikit-image. It is not limited to the Jetson Nano: it will run anywhere Python does, including a Raspberry Pi.

Starting from the coordinates found by template matching, phase correlation is used to refine the position to subpixel accuracy. Reproducibility is quite good, but linearity depends on the lens.

You can download the script from: https://github.com/takurot/templatematch

Execution environment

(photo of the hardware setup)

Execution example

(screenshot of an execution example)

Required modules

OpenCV, scikit-image, NumPy, Matplotlib
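These can usually be installed with pip. The package names below are the common PyPI names, which is an assumption on my part; on the Jetson Nano, OpenCV is typically already provided by JetPack, in which case `opencv-python` is unnecessary:

```shell
# Common PyPI package names (assumption: on Jetson Nano, OpenCV often
# ships with JetPack, so opencv-python may not be needed)
pip install opencv-python scikit-image numpy matplotlib
```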

Execution procedure

  1. Place the target to be used as the template inside the □ cursor on screen, then press ESC.
  2. Measurement then runs continuously, and a □ cursor is drawn wherever the template image is found.
  3. The print statement outputs the execution time and the pixel coordinates, with the upper-left corner as (0, 0).

Code commentary

templatematch.py


import time

import cv2
import numpy as np
from skimage.feature import match_template, register_translation


def main():
    TEMPLATE_SIZE = 32
    capture = cv2.VideoCapture(0)

↑ The constant is named TEMPLATE_SIZE, but the actual template is twice that: 64×64. cv2.VideoCapture(0) acquires images from the USB camera, on the premise that the camera is on port 0.

templatematch.py


    while True:
        ret, org_img = capture.read()
        if not ret:
            print("Error1")
            return
        img = cv2.resize(org_img, dsize=None, fx=0.5, fy=0.5)

↑ capture.read()'s return value is checked immediately, because org_img is None when a read fails and cv2.resize would crash on it. fx and fy determine the output size: the original frame is 1280x960, resized here to 640x480 so that the processing time is reasonable. The size itself can be set arbitrarily; the smaller the image, the faster the processing.

templatematch.py


        height, width, channel = img.shape[:3]

        y1 = int(height/2 - TEMPLATE_SIZE)
        y2 = int(height/2 + TEMPLATE_SIZE)
        x1 = int(width/2 - TEMPLATE_SIZE)
        x2 = int(width/2 + TEMPLATE_SIZE)
        # print(width, height, x1, x2, y1, y2)
        disp = cv2.rectangle(img, (x1, y1), (x2, y2), (0, 0, 255), 3)
        cv2.imshow("Select Template(Size 64x64) Press ESC", disp)
        key = cv2.waitKey(10)
        if key == 27: # ESC
            break

↑ The coordinates for drawing the template's □ cursor are computed relative to the image centre and displayed. Press ESC to complete template registration.
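As a quick sanity check, the ROI arithmetic can be reproduced standalone; the 640x480 frame size is assumed from the resize step:

```python
# Standalone check of the template-ROI arithmetic (assumes a 640x480 frame)
TEMPLATE_SIZE = 32
width, height = 640, 480

y1 = height // 2 - TEMPLATE_SIZE
y2 = height // 2 + TEMPLATE_SIZE
x1 = width // 2 - TEMPLATE_SIZE
x2 = width // 2 + TEMPLATE_SIZE

# the rectangle is centred in the frame and spans 2*TEMPLATE_SIZE pixels per side
print((x1, y1), (x2, y2))   # → (288, 208) (352, 272)
print((x2 - x1, y2 - y1))   # → (64, 64)
```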

templatematch.py


    image = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
    template = image[y1:y2, x1:x2]

    cv2.imshow("Template-2", template)

↑ The captured frame is converted to grayscale, and the region registered as the template is cut out and displayed.

templatematch.py


    while True:
        time_start = time.time()
        ret, org_img2 = capture.read()
        if not ret:
            print("Error2")
            return
        img2 = cv2.resize(org_img2, dsize=None, fx=0.5, fy=0.5)
        offset_image = cv2.cvtColor(img2,cv2.COLOR_BGR2GRAY)
        time_cap = int((time.time() - time_start) * 1000)

↑ The measured frame is resized and converted to grayscale; the measurement itself is performed on the grayscale image.

templatematch.py


        time_start = time.time()
        result = match_template(offset_image, template)
        ij = np.unravel_index(np.argmax(result), result.shape)
        x, y = ij[::-1]
        meas_image = offset_image[y:(y+TEMPLATE_SIZE*2), x:(x+TEMPLATE_SIZE*2)]
        # print (template.shape[0], template.shape[1], meas_image.shape[0], meas_image.shape[1])
        shift, error, diffphase = register_translation(template, meas_image, 100)
        time_meas = int((time.time() - time_start) * 1000)

↑ **This is the core of the processing.** Template matching finds the best integer-pixel location, and an image of the template's size is cut out at those coordinates. Phase correlation (register_translation) is then run on the cut-out image and the template to obtain the coordinates with subpixel accuracy. The final argument, 100, means "1/100-pixel precision"; increasing it increases the resolution of the estimate. (In recent scikit-image versions, register_translation has been renamed skimage.registration.phase_cross_correlation.)
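To illustrate the principle, here is a minimal NumPy-only sketch of phase correlation. It recovers only integer-pixel shifts; register_translation additionally upsamples the correlation peak by the given factor (100 here) to reach subpixel precision:

```python
import numpy as np

def phase_correlation(img, ref):
    # normalised cross-power spectrum: its phase encodes the translation
    cps = np.fft.fft2(img) * np.conj(np.fft.fft2(ref))
    cps /= np.abs(cps) + 1e-12
    corr = np.fft.ifft2(cps).real
    # peak location = shift of img relative to ref (circular, row/col order)
    return np.unravel_index(np.argmax(corr), corr.shape)

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
img = np.roll(ref, shift=(3, 5), axis=(0, 1))  # shift down 3, right 5
dy, dx = phase_correlation(img, ref)
print(int(dy), int(dx))  # → 3 5
```

With np.roll the shift is circular, so the peak recovers it exactly; on real camera images the two crops must overlap sufficiently for the peak to be sharp.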

templatematch.py


        cv2.rectangle(img2, (x, y), (x+TEMPLATE_SIZE*2, y+TEMPLATE_SIZE*2), (0, 255, 0), 3)

        cv2.imshow("Real Time Measurement 640x480", img2)

        # register_translation returns shifts in (row, col) order: shift[1] is x, shift[0] is y
        print("Capture[ms]:", time_cap, "Meas[ms]:", time_meas, "X[pix]:", x+TEMPLATE_SIZE+shift[1], "Y[pix]:", y+TEMPLATE_SIZE+shift[0])

        key = cv2.waitKey(10)
        if key == 27: # ESC 
            break

↑ A □ cursor is drawn at the coordinates where the template was found, the execution time and coordinates are printed, and the loop ends when ESC is pressed.

Impressions

I was able to easily build a subpixel-accuracy measurement system at home. The number of cameras could be increased for multi-camera measurement. Since execution time depends on the measured image size and the template size, adjust them to meet your target time. With a microscope you can measure with nm-level accuracy, but vibration in the execution environment has a large effect, so a vibration-isolation table is required.

Please let me know if this helps! But don't copy it and hand it in as your own!
