Load a TensorFlow model file (.pb) with readNetFromTensorflow()

This article is about using OpenCV's readNetFromTensorflow() to run an **object detection** model trained on my own dataset at high speed.

I will summarize the workaround for the error I ran into along the way.

Operating environment

Python: 3.7.6
TensorFlow: 1.14.0
OpenCV: 4.2.0
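
If you want to check your own versions, a quick sketch:

import cv2
import tensorflow as tf

# Print the versions of the libraries used in this article
print(cv2.__version__)   # 4.2.0
print(tf.__version__)    # 1.14.0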

About readNetFromTensorflow()

Below is the code to run the trained model with OpenCV.

# How to load a Tensorflow model using OpenCV
# Jean Vitor de Paulo Blog - https://jeanvitor.com/tensorflow-object-detecion-opencv/
 
import cv2
 
# Load a model imported from Tensorflow
tensorflowNet = cv2.dnn.readNetFromTensorflow('frozen_inference_graph.pb', 'graph.pbtxt')
 
# Input image
img = cv2.imread('img.jpg')
rows, cols, channels = img.shape
 
# Use the given image as input, which needs to be blob(s).
tensorflowNet.setInput(cv2.dnn.blobFromImage(img, size=(300, 300), swapRB=True, crop=False))
 
# Runs a forward pass to compute the net output
networkOutput = tensorflowNet.forward()
 
# Loop on the outputs
for detection in networkOutput[0,0]:
    
    score = float(detection[2])
    if score > 0.2:
    	
        left = detection[3] * cols
        top = detection[4] * rows
        right = detection[5] * cols
        bottom = detection[6] * rows
 
        #draw a red rectangle around detected objects
        cv2.rectangle(img, (int(left), int(top)), (int(right), int(bottom)), (0, 0, 255), thickness=2)
 
# Show the image with a rectangle surrounding the detected objects
cv2.imshow('Image', img)
cv2.waitKey()
cv2.destroyAllWindows()

For details, refer to How to load Tensorflow models with OpenCV.
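
For reference, the detection loop above assumes the standard SSD-style output layout of OpenCV's dnn module. Here is a minimal sketch of how a single row is read, continuing from the variables defined in the code above (the (1, 1, N, 7) shape and field order are my reading of that output, not something stated in the original code):

# Each detection row is [batch_id, class_id, confidence, left, top, right, bottom],
# with the box coordinates normalized to the 0-1 range.
print(networkOutput.shape)            # e.g. (1, 1, 100, 7)
first = networkOutput[0, 0, 0]
class_id, score = int(first[1]), float(first[2])
left, top = first[3] * cols, first[4] * rows
right, bottom = first[5] * cols, first[6] * rows
print(class_id, score, left, top, right, bottom)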

Files required to call the model

tensorflowNet = cv2.dnn.readNetFromTensorflow('frozen_inference_graph.pb', 'graph.pbtxt')

- frozen_inference_graph.pb (frozen graph)
- graph.pbtxt (model structure)

The above two files are required to load the TensorFlow model with OpenCV.

File creation

Step 1: Run training

I used train.py for training.

$ python train.py \
      --logtostderr \
      --train_dir=log \
      --pipeline_config_path=ssd_mobilenet_v1_coco.config

Running this creates a folder called log, and inside it the checkpoint files (model.ckpt-XXXX) and pipeline.config are created. We will use these two in the next step.
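
If you are not sure which checkpoint step to pass to the next command, a small sketch like this (assuming the log directory above) lists the steps that were written:

import glob
import re

# Collect the step numbers of all checkpoints written to the log directory
steps = sorted(int(re.search(r'model\.ckpt-(\d+)', p).group(1))
               for p in glob.glob('log/model.ckpt-*.index'))
print(steps)  # e.g. [1000, 2000, 3000]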

Step 2: Create frozen_inference_graph.pb

Use export_inference_graph.py from the TensorFlow Object Detection API.

$ python export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path log/pipeline.config \
    --trained_checkpoint_prefix log/model.ckpt-3000 \
    --output_directory model_data

When executed (3000 above is the number of training steps of the checkpoint to export), frozen_inference_graph.pb and pipeline.config can be found in the model_data folder. We will use these next.

Step 3: Create graph.pbtxt (model structure)

Use tf_text_graph_ssd.py from OpenCV's samples/dnn.

$ python tf_text_graph_ssd.py \
    --input model_data/frozen_inference_graph.pb \
    --output graph_data/graph.pbtxt \
    --config model_data/pipeline.config

When executed, graph.pbtxt is created in the graph_data folder.

The frozen graph and the model structure file are now complete.

Model loading

# Model loading
model = cv2.dnn.readNetFromTensorflow('graph_data/frozen_inference_graph.pb', 
                                      'graph_data/graph.pbtxt')
print('Loading completed')

Then ...

---------------------------------------------------------------------------
error                                     Traceback (most recent call last)
<ipython-input-44-afa605a67bd6> in <module>
      1 #Model loading
      2 model = cv2.dnn.readNetFromTensorflow('graph_data/frozen_inference_graph.pb', 
----> 3                                       'graph_data/graph.pbtxt')
      4 print('Loading completed')

error: OpenCV(4.2.0) /Users/travis/build/skvark/opencv-python/opencv/modules/dnn/src/tensorflow/tf_importer.cpp:544: error: (-2:Unspecified error) Input layer not found: FeatureExtractor/MobilenetV1/Conv2d_0/weights/read/_166__cf__169 in function 'connect'

"Input layer not found: ... in function 'connect'", in other words, the importer cannot find the input layer. After struggling with this error, I settled on the following workaround.
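
Before editing anything, it helps to dump the first few nodes of graph.pbtxt together with their inputs. A minimal sketch, assuming TensorFlow 1.x and the paths used in the steps above:

import tensorflow as tf
from google.protobuf import text_format

# Parse the text-format graph and print the first few nodes and their inputs
graph_def = tf.GraphDef()
with open('graph_data/graph.pbtxt') as f:
    text_format.Merge(f.read(), graph_def)

for node in graph_def.node[:5]:
    print(node.name, node.op, list(node.input))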

Measures to take when "Input layer not found" appears

Here is the beginning of the generated graph.pbtxt:

node {
  name: "image_tensor"
  op: "Placeholder"
  attr {
    key: "dtype"
    value {
      type: DT_UINT8
    }
  }
  attr {
    key: "shape"
    value {
      shape {
        dim {
          size: -1
        }
        dim {
          size: 300
        }
        dim {
          size: 300
        }
        dim {
          size: 3
        }
      }
    }
  }
}
node {
  name: "FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/BatchNorm/batchnorm/mul_1"
  op: "Conv2D"
  input: "FeatureExtractor/MobilenetV1/Conv2d_0/weights/read/_166__cf__169"
  attr {
    key: "data_format"
    value {
      s: "NHWC"
    }
  }

Looking at the beginning of graph.pbtxt, you can see that the first Conv2D node takes only the weights as input and is never connected to the image_tensor input layer; the preprocessing nodes in between are missing. So you can create that connection yourself:

node {
  name: "image_tensor"
  op: "Placeholder"
  attr {
    key: "dtype"
    value {
      type: DT_UINT8
    }
  }
  attr {
    key: "shape"
    value {
      shape {
        dim {
          size: -1
        }
        dim {
          size: 300
        }
        dim {
          size: 300
        }
        dim {
          size: 3
        }
      }
    }
  }
}
node {                           #From here
  name: "Preprocessor/mul"
  op: "Mul"
  input: "image_tensor"
  input: "Preprocessor/mul/x"
}
node {
  name: "Preprocessor/sub"
  op: "Sub"
  input: "Preprocessor/mul"
  input: "Preprocessor/sub/y"
}                                #Add up to here
node {
  name: "FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/BatchNorm/batchnorm/mul_1"
  op: "Conv2D"
  input: "Preprocessor/sub"      #add to
  input: "FeatureExtractor/MobilenetV1/Conv2d_0/weights/read/_166__cf__169"
  attr {
    key: "data_format"
    value {
      s: "NHWC"
    }
  }

I made this change directly to graph.pbtxt, and after that the model loaded without the error.
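
To confirm that the patched graph.pbtxt really loads and runs end to end, a quick sanity check like the following can be used (a minimal sketch; paths are assumed to match the steps above):

import cv2
import numpy as np

# Load the patched graph and run a forward pass on a dummy 300x300 image
net = cv2.dnn.readNetFromTensorflow('graph_data/frozen_inference_graph.pb',
                                    'graph_data/graph.pbtxt')
dummy = np.zeros((300, 300, 3), dtype=np.uint8)
net.setInput(cv2.dnn.blobFromImage(dummy, size=(300, 300), swapRB=True, crop=False))
print(net.forward().shape)  # expected to be something like (1, 1, N, 7)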

I have not been able to find out why the input connection was missing from graph.pbtxt in the first place. If anyone knows the cause, I would appreciate a comment.
