This article is about using OpenCV's readNetFromTensorflow() to run **object detection** models trained on your own dataset at high speed. I will summarize the problems I ran into and their workarounds.
- Python: 3.7.6
- TensorFlow: 1.14.0
- OpenCV: 4.2.0
Below is the code to run a trained model with OpenCV.
```python
# How to load a TensorFlow model using OpenCV
# Jean Vitor de Paulo Blog - https://jeanvitor.com/tensorflow-object-detecion-opencv/
import cv2

# Load a model imported from TensorFlow
tensorflowNet = cv2.dnn.readNetFromTensorflow('frozen_inference_graph.pb', 'graph.pbtxt')

# Input image
img = cv2.imread('img.jpg')
rows, cols, channels = img.shape

# The network input must be a blob, so convert the image first
tensorflowNet.setInput(cv2.dnn.blobFromImage(img, size=(300, 300), swapRB=True, crop=False))

# Run a forward pass to compute the net output
networkOutput = tensorflowNet.forward()

# Loop over the detections
for detection in networkOutput[0, 0]:
    score = float(detection[2])
    if score > 0.2:
        # Box coordinates are normalized to [0, 1]; scale them to pixels
        left = detection[3] * cols
        top = detection[4] * rows
        right = detection[5] * cols
        bottom = detection[6] * rows
        # Draw a red rectangle around the detected object
        cv2.rectangle(img, (int(left), int(top)), (int(right), int(bottom)), (0, 0, 255), thickness=2)

# Show the image with a rectangle surrounding the detected objects
cv2.imshow('Image', img)
cv2.waitKey()
cv2.destroyAllWindows()
```
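For reference, the output of forward() for this kind of SSD detection graph is a 4-D array of shape (1, 1, N, 7): each of the N rows is [image_id, class_id, score, left, top, right, bottom], with box coordinates normalized to [0, 1], which is why the loop above scales them by cols and rows. A quick way to confirm this, run right after the code above:

```python
print(networkOutput.shape)     # e.g. (1, 1, 100, 7)
print(networkOutput[0, 0, 0])  # [image_id, class_id, score, left, top, right, bottom]
```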
For details, refer to How to load Tensorflow models with OpenCV.
```python
tensorflowNet = cv2.dnn.readNetFromTensorflow('frozen_inference_graph.pb', 'graph.pbtxt')
```

- frozen_inference_graph.pb (frozen graph with the trained weights)
- graph.pbtxt (model structure)

The above two files are required to load a TensorFlow model with OpenCV.
I used train.py (from the TensorFlow Object Detection API) for training.

```
$ python train.py \
    --logtostderr \
    --train_dir=log \
    --pipeline_config_path=ssd_mobilenet_v1_coco.config
```
When this runs, a folder called log is created, and the training checkpoints (model.ckpt-*) and pipeline.config will be created in it. We will use these two files in the next step.
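As a side note, if you are not sure which checkpoint step to pass to the next command, TensorFlow can look up the newest one for you. A minimal sketch, assuming the log directory created above:

```python
import tensorflow as tf

# Returns the newest checkpoint prefix in the directory,
# e.g. 'log/model.ckpt-3000', which is exactly what
# --trained_checkpoint_prefix expects in the next step.
print(tf.train.latest_checkpoint('log'))
```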
Next, use export_inference_graph.py (also from the TensorFlow Object Detection API) to export a frozen graph.
```
$ python export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path log/pipeline.config \
    --trained_checkpoint_prefix log/model.ckpt-3000 \
    --output_directory model_data
```

(The 3000 in model.ckpt-3000 is the number of training steps.)
When executed, frozen_inference_graph.pb and pipeline.config can be found in the model_data folder. We will use these next.
Next, use tf_text_graph_ssd.py (included in OpenCV's samples/dnn directory) to generate the model structure file.
```
$ python tf_text_graph_ssd.py \
    --input model_data/frozen_inference_graph.pb \
    --output graph_data/graph.pbtxt \
    --config model_data/pipeline.config
```
When executed, graph.pbtxt is created in the graph_data folder. The frozen graph and the model structure file are now both ready.
```python
# Model loading
model = cv2.dnn.readNetFromTensorflow('graph_data/frozen_inference_graph.pb',
                                      'graph_data/graph.pbtxt')
print('Loading completed')
```
Then ...
```
---------------------------------------------------------------------------
error                                     Traceback (most recent call last)
<ipython-input-44-afa605a67bd6> in <module>
      1 #Model loading
      2 model = cv2.dnn.readNetFromTensorflow('graph_data/frozen_inference_graph.pb',
----> 3                                       'graph_data/graph.pbtxt')
      4 print('Loading completed')

error: OpenCV(4.2.0) /Users/travis/build/skvark/opencv-python/opencv/modules/dnn/src/tensorflow/tf_importer.cpp:544: error: (-2:Unspecified error) Input layer not found: FeatureExtractor/MobilenetV1/Conv2d_0/weights/read/_166__cf__169 in function 'connect'
```
`Input layer not found: /// in function 'connect'`. It can't find the input layer! After struggling with this error, I settled on the following solution.
```
node {
  name: "image_tensor"
  op: "Placeholder"
  attr {
    key: "dtype"
    value {
      type: DT_UINT8
    }
  }
  attr {
    key: "shape"
    value {
      shape {
        dim {
          size: -1
        }
        dim {
          size: 300
        }
        dim {
          size: 300
        }
        dim {
          size: 3
        }
      }
    }
  }
}
node {
  name: "FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/BatchNorm/batchnorm/mul_1"
  op: "Conv2D"
  input: "FeatureExtractor/MobilenetV1/Conv2d_0/weights/read/_166__cf__169"
  attr {
    key: "data_format"
    value {
      s: "NHWC"
    }
  }
```
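As an aside, you can cross-check which nodes actually exist in the frozen graph itself using TensorFlow. This is a minimal sketch, assuming the model_data paths from the export step; it just parses the GraphDef and prints the first few node names:

```python
import tensorflow as tf

# Parse the frozen graph
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile('model_data/frozen_inference_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# Print the op and name of the first few nodes
for node in graph_def.node[:10]:
    print(node.op, node.name)
```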
If you look at the beginning of graph.pbtxt, you can see the problem: image_tensor itself exists, but the Conv2D node's only input is its weights, so nothing connects the input layer to the network. The fix is to create that connection yourself:
```
node {
  name: "image_tensor"
  op: "Placeholder"
  attr {
    key: "dtype"
    value {
      type: DT_UINT8
    }
  }
  attr {
    key: "shape"
    value {
      shape {
        dim {
          size: -1
        }
        dim {
          size: 300
        }
        dim {
          size: 300
        }
        dim {
          size: 3
        }
      }
    }
  }
}
node {  # Added from here
  name: "Preprocessor/mul"
  op: "Mul"
  input: "image_tensor"
  input: "Preprocessor/mul/x"
}
node {
  name: "Preprocessor/sub"
  op: "Sub"
  input: "Preprocessor/mul"
  input: "Preprocessor/sub/y"
}  # Added up to here
node {
  name: "FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/BatchNorm/batchnorm/mul_1"
  op: "Conv2D"
  input: "Preprocessor/sub"  # Added
  input: "FeatureExtractor/MobilenetV1/Conv2d_0/weights/read/_166__cf__169"
  attr {
    key: "data_format"
    value {
      s: "NHWC"
    }
  }
```
I patched graph.pbtxt directly like this, and after that the model loaded without any problems.
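To double-check, here is a minimal sketch that loads the patched files (same layout as above, plus a test image img.jpg) and times a single forward pass with OpenCV's tick counter:

```python
import cv2

# Load the patched model
model = cv2.dnn.readNetFromTensorflow('graph_data/frozen_inference_graph.pb',
                                      'graph_data/graph.pbtxt')

img = cv2.imread('img.jpg')
model.setInput(cv2.dnn.blobFromImage(img, size=(300, 300), swapRB=True, crop=False))

# Time one forward pass
start = cv2.getTickCount()
out = model.forward()
elapsed_ms = (cv2.getTickCount() - start) / cv2.getTickFrequency() * 1000
print('Inference time: %.1f ms, output shape: %s' % (elapsed_ms, out.shape))
```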
I have not been able to figure out why the input layer was not defined in the first place. If anyone knows the cause, I would appreciate a comment.