[DOCKER] Inference with Custom Vision model on Jetson Nano

Introduction

In this article, continuing from the previous one, I'll run a Custom Vision model on a Jetson Nano.

Environment

Jetson Nano setup

Complete the setup by referring to this article. Inference works fine from the GUI, so you don't need to switch to the CUI.

Download Custom Vision model

Export the model by referring to this page.

First, train a model in Custom Vision using one of the exportable (compact) domains shown below.

(screenshot: Custom Vision domain selection)

After training, open the Performance tab in Custom Vision, press Export, and export the model as a **Docker** container with **Linux** as the platform, then download it.

(screenshot: Export dialog)

Copy the exported model zip file to the Nano with `scp` or similar, and unzip it.
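For example, something like the following (the user name, host name, and destination path are placeholders for your own setup):

scp CustomVision.zip user@jetson-nano:~/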

unzip CustomVision.zip -d customvision

You should find a Dockerfile in the extracted customvision folder. Edit it as follows (python3-opencv may not be needed).

Dockerfile


# Base image: NVIDIA L4T TensorFlow (TensorFlow 2.3, L4T r32.4.4)
FROM nvcr.io/nvidia/l4t-tensorflow:r32.4.4-tf2.3-py3
# Install OpenCV and the Python packages the exported app depends on
RUN apt-get update -y
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y python3-opencv
RUN pip3 install flask pillow
# Copy the exported model and server code into the image
COPY app /app
# Expose the port
EXPOSE 80
# Set the working directory
WORKDIR /app
# Run the flask server for the endpoints
CMD python3 -u app.py

This base image includes TensorFlow 2.3. According to this page, it seems a TensorFlow 1.15 image can be used as well. The build took less time than I expected.

docker build . -t mycustomvision

Once the build succeeds, start the container.

docker run -p 127.0.0.1:80:80 -d mycustomvision

Listing the containers shows that it is running.

$ docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                  NAMES
10548014b918        mycustomvision      "/bin/sh -c 'python3…"   2 hours ago         Up 2 hours          127.0.0.1:80->80/tcp   sleepy_elgamal
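Before moving on to Python, you can also do a quick sanity check by POSTing an image directly with curl. This uses the same /image endpoint and imageData form field as the script below; test.png is a placeholder for any test image.

curl -X POST -F "imageData=@test.png" http://127.0.0.1/image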

Inference

Let's run inference by POSTing an image with Python.

inference.py


import json
import requests

# Image to send to the inference endpoint
file = "./test.png"

# Read the image as binary data
with open(file, "rb") as f:
    data = f.read()

# POST the image to the container's /image endpoint as multipart form data
url = "http://127.0.0.1/image"
files = {"imageData": data}

response = requests.post(url, files=files)
results = json.loads(response.content)
print(results)

↓ Result

$ python3 inference.py
{'created': '2020-11-05T14:58:32.073106', 'id': '', 'iteration': '', 'predictions': [{'boundingBox': {'height': 0.16035638, 'left': 0.738249, 'top': 0.41299437, 'width': 0.05781723}, 'probability': 0.91550273, 'tagId': 0, 'tagName': '1'}], 'project': ''}

The result came back correctly.

In conclusion

Thank you for following along. The original Dockerfile specified TensorFlow 2.0.0, and I'm not sure whether 2.3.0 is really okay, but the inference results look correct, so it's fine for now.
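If you want to confirm which TensorFlow version actually ended up in the image, one option is to run Python inside the built image (a sketch; depending on your Docker setup you may need to add --runtime nvidia for the import to succeed):

docker run --rm mycustomvision python3 -c "import tensorflow as tf; print(tf.__version__)"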

If you find any mistakes, please point them out.
