This post covers two preprocessing steps for image segmentation with deep learning: cutting small image patches out of a large image, and data augmentation with ImageDataGenerator. The environment is Python 3.7 and TensorFlow 2.1.1.
We use the cell images from the ISBI Challenge 2012 (Segmentation of neuronal structures in EM stacks), available at http://brainiac2.mit.edu/isbi_challenge/home. The data can be downloaded after registering on the site, and it contains the original cell images together with pre-painted ground-truth images. From this data we will prepare training data for supervised learning, so that the labels can be painted automatically.
Each original image is (512, 512) in size. We will divide it into (256, 256) patches. First, read the data.
from skimage import io
import tensorflow as tf
import numpy as np
import glob

dir_name = "./data/train"
# sort so the split below is deterministic (glob order is platform-dependent)
paths_train_img = sorted(glob.glob(dir_name + "/train_image*"))
paths_train_label = sorted(glob.glob(dir_name + "/train_label*"))

train_images = []
train_labels = []
for path in paths_train_img[:-5]:  # hold back the last 5 images
    train_images.append(io.imread(path)/255.0)
    # the matching label file has 'labels' in place of 'image' in its name
    path = path.replace('image', 'labels')
    train_labels.append(io.imread(path)/255.0)

# stack the lists into arrays and add a channel axis with tf.newaxis
train_images = np.array(train_images)[..., tf.newaxis]
train_labels = np.array(train_labels)[..., tf.newaxis]
# print(train_images.shape)
# (25, 512, 512, 1)
train_images holds the original cell images and train_labels holds the pre-painted label images; the i-th training pair is train_images[i] and train_labels[i]. A channel axis was added with tf.newaxis so the arrays have the (batch, height, width, channels) layout that TensorFlow expects later.
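If you want to eyeball one pair before patching, a minimal sketch (assuming matplotlib is available) is:

import matplotlib.pyplot as plt

plt.subplot(1, 2, 1)
plt.imshow(train_images[0, :, :, 0], cmap="gray")  # original cell image
plt.subplot(1, 2, 2)
plt.imshow(train_labels[0, :, :, 0], cmap="gray")  # its pre-painted label
plt.show()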
Now let's cut the images into patches and build batches, using tf.image.extract_patches. I personally found the official documentation for tf.image.extract_patches hard to follow, but this Stack Overflow answer was much clearer: https://stackoverflow.com/questions/40731433/understanding-tf-extract-image-patches-for-extracting-patches-from-an-image
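To get a feel for what tf.image.extract_patches returns, here is a toy example (illustrative only) that splits one 4x4 single-channel image into non-overlapping 2x2 patches:

# toy input: one 4x4 image with a single channel, values 0..15
toy = tf.reshape(tf.range(16, dtype=tf.float32), [1, 4, 4, 1])
toy_patches = tf.image.extract_patches(
    toy, sizes=[1, 2, 2, 1], strides=[1, 2, 2, 1],
    rates=[1, 1, 1, 1], padding='VALID')
print(toy_patches.shape)             # (1, 2, 2, 4): a 2x2 grid of patches,
                                     # each flattened to length 2*2=4
print(toy_patches[0, 0, 0].numpy())  # [0. 1. 4. 5.] -- the top-left patch

With non-overlapping 256-pixel windows, the same call splits our (512, 512) images into a 2x2 grid of patches: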
ksize_rows = 256
ksize_cols = 256
strides_rows = 256
strides_cols = 256
ksizes = [1, ksize_rows, ksize_cols, 1]
strides = [1, strides_rows, strides_cols, 1]
rates = [1, 1, 1, 1]
padding='VALID'
def make_patches(images):
    image_patches = tf.image.extract_patches(images, ksizes, strides, rates, padding)
    # image_patches has shape (25, 2, 2, 65536):
    # 65536 = 256*256, i.e. each (256, 256) patch is stored flattened.
    patches = []
    for patch in image_patches:
        for i in range(patch.shape[0]):
            for j in range(patch.shape[1]):
                patch2 = patch[i, j, :]
                # reshape the patch at grid position (i, j) back to (256, 256, 1)
                patch2 = np.reshape(patch2, [ksize_rows, ksize_cols, 1])
                patches.append(patch2)
    patches = np.array(patches)
    return patches
train_image_patches = make_patches(train_images)
train_label_patches = make_patches(train_labels)
Using make_patches above, the images are cut into (256, 256) patches. Since the original data was 25 images of size (512, 512), train_image_patches and train_label_patches each contain 100 images of size (256, 256).
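Incidentally, because each 65536-element vector is just a row-major flattening of a (256, 256) patch, the nested loops can be replaced by a single reshape. This is an equivalent alternative (not the code above), shown as a sketch:

def make_patches_fast(images):
    image_patches = tf.image.extract_patches(images, ksizes, strides, rates, padding)
    # (25, 2, 2, 65536) -> (100, 256, 256, 1) in one step
    return tf.reshape(image_patches, [-1, ksize_rows, ksize_cols, 1]).numpy()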
Next we perform data augmentation with ImageDataGenerator. ImageDataGenerator augments the data by applying transformations such as rotation, zoom, and flips; you pass the maximum ranges of these transformations as arguments. You can also supply a preprocessing function that is applied to each image before the other transformations. In the example below, Gaussian noise is added as preprocessing.
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import skimage

def add_noise(img):
    # add Gaussian noise (the default mode of random_noise) with variance 0.01
    output_img = skimage.util.random_noise(img, var=0.01, clip=False)
    return np.array(output_img)
SEED1 = 1
batch_size = 2
args={
"rotation_range":0.2,
"width_shift_range":0.2,
"height_shift_range":0.2,
"zoom_range":0.2,
"shear_range":0.2,
"vertical_flip":True,
"horizontal_flip":True,
"fill_mode":"reflect",
"preprocessing_function":add_noise
}
image_data_generator = ImageDataGenerator(**args).flow(
    train_image_patches, batch_size=batch_size, seed=SEED1)

# the label generator must not add noise, so drop the preprocessing function
args.pop("preprocessing_function")
label_data_generator = ImageDataGenerator(**args).flow(
    train_label_patches, batch_size=batch_size, seed=SEED1)
Since we don't want to add noise to the pre-painted ground-truth data, the preprocessing_function was removed from args before creating label_data_generator. Because image_data_generator and label_data_generator are created with the same seed, the original images and the pre-painted label images stay correctly paired.
Finally, put them together into one generator.
def my_image_mask_generator(image_data_generator, mask_data_generator):
    train_generator = zip(image_data_generator, mask_data_generator)
    for (img, mask) in train_generator:
        yield (img, mask)

my_generator = my_image_mask_generator(image_data_generator, label_data_generator)
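For reference, this combined generator is what you would later hand to Keras's fit; since the generator loops forever, steps_per_epoch must be given explicitly. A minimal sketch, where model is a hypothetical compiled segmentation network (to be built next time):

steps_per_epoch = len(train_image_patches) // batch_size  # 100 patches / 2 = 50 steps
# model.fit(my_generator, steps_per_epoch=steps_per_epoch, epochs=10)  # 'model' is hypothetical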
Let's take a look at the image data that is actually created.
import matplotlib.pyplot as plt

plt.figure(dpi=100)
for i in range(3):
    img, mask = next(my_generator)
    plt.subplot(2, 3, i + 1)
    plt.imshow(img[0, :, :, 0], cmap="gray")
    plt.axis('off')
    plt.subplot(2, 3, i + 4)
    plt.imshow(mask[0, :, :, 0], cmap="gray")
    plt.axis('off')
plt.show()
You can see that the batches pair the original cell images with their pre-painted labels correctly. You can also see parts of the image mirrored near the edges: because fill_mode is "reflect", the blank areas created when an image is shifted are filled by reflecting the image content.
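For intuition, Keras documents 'reflect' as the abcddcba|abcd|dcbaabcd pattern, i.e. mirroring that repeats the edge pixel; numpy's mode='symmetric' pads the same way. A small illustrative check:

row = np.array([1, 2, 3, 4])
# mirror-pad 4 elements on each side, repeating the edge value
print(np.pad(row, 4, mode='symmetric'))
# [4 3 2 1 1 2 3 4 4 3 2 1]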
We now have image patching and data augmentation in place for image segmentation. Next time, we will train a deep learning model on this data.