In the previous (3rd) post, "Drawing a 'weather-map-like front' by machine learning based on meteorological data (3)", I wrote about cutting frontal elements out of the "preliminary weather map" to create teacher images.
In this post, because the color version of the "preliminary weather map" used for cutting out frontal elements was available for only a short period (an entirely personal circumstance), I tried colorizing the black-and-white version that had been accumulated much further back.
Recently I hear about black-and-white photographs and films being colorized by machine learning. Images from the war years, which never felt quite real to me, gain an overwhelming sense of reality once colorized, as if that past had become part of my own era.
The colorization here has nothing to do with such grand stories; it is purely a matter of the algorithm for extracting frontal elements.
In the black-and-white weather map, the fronts, latitude/longitude lines, map outlines, and isobars are, naturally, all the same black. I could not think of a good way to cut out only the fronts from that, so colorization became necessary.
What actually happened, in chronological order:
1. As a first attempt, I built a neural network that learned the preliminary weather map itself as the teacher image, without cutting out frontal elements. The fronts were not learned well.
2. While thinking about how to extract frontal elements so as to narrow the learning target down to the fronts, I noticed that a color version of the "preliminary weather map" existed.
3. I downloaded the color maps that were available and was able to extract front-only images based on the colors (sketched just after this list).
4. Training with front-only teacher images worked to some extent, so I decided to increase the number of weather map examples. At that point, however, the color version was only available for about half a year.
5. Since the network I was building generates an image from an image, I wondered: couldn't it also convert a black-and-white weather map into a color one?
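To illustrate step 3, here is a minimal sketch of what color-based extraction could look like, assuming (hypothetically) that cold fronts are drawn in blue and warm fronts in red; the file names and RGB thresholds are my own illustration, not the original code, and would need tuning on real maps.

```python
import numpy as np
from PIL import Image

# Load a color preliminary weather map (file name is hypothetical)
img = np.asarray(Image.open('color_spas_sample.png').convert('RGB')).astype(int)
r, g, b = img[..., 0], img[..., 1], img[..., 2]

# Assumed color rules: cold fronts in blue, warm fronts in red
cold_mask = (b > 150) & (r < 100) & (g < 100)
warm_mask = (r > 150) & (b < 100) & (g < 100)
front_mask = cold_mask | warm_mask

# Keep only the front pixels, on a white background
out = np.full_like(img, 255)
out[front_mask] = img[front_mask]
Image.fromarray(out.astype(np.uint8)).save('front_only_sample.png')
```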
That idea (5) was the key. Since my image-to-image network was already working to some extent, I thought it could be applied directly to colorization.
As a result, this worked better than I expected.
I was able to increase the teacher images from only about half a year's worth to more than two years' worth in one go. Covering two full rounds of spring, summer, autumn, and winter increased the variation in the teacher images.
The environment I am using:

- Mac mini (2018), 3.2 GHz 6-core Intel Core i7, 32 GB 2667 MHz DDR4 memory
- macOS Catalina, Python 3.7, Keras 2.0
The network I was building has a fairly simple Sequential structure: Conv2D layers extract features from the 1-channel grayscale input, a 4-stage CNN processes them at the bottom layer, and Conv2DTranspose layers bring the result back up to a 3-channel color image.
The input image is 256x256. Four Conv2D layers with strides of 2 reduce it to 16x16, four more Conv2D layers are applied there, and Conv2DTranspose layers then restore the original size.
Since training uses mean_squared_error, the network learns to match the teacher image pixel value by pixel value.
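In other words, the loss is just Keras's mean_squared_error: the squared difference between predicted and teacher pixel values, averaged over all N output pixel-channel values:

```math
\mathrm{MSE} = \frac{1}{N} \sum_{i=1}^{N} \left( \hat{y}_i - y_i \right)^2
```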
network.py

```python
import os
from keras.models import Sequential
from keras.layers import Conv2D, Conv2DTranspose

# input dimensions: 1-channel 256x256 grayscale image (channels_first)
in_ch = 1
i_dmlat = 256
i_dmlon = 256

# parameter settings (number of filters per stage)
num_hidden1 = 32
num_hidden2 = 64
num_hidden3 = 128
num_hidden4 = 64
num_hidden5 = 32
num_hidden6 = 32

######### start of network definition
NN_1 = Sequential()
#--- encode start: four strided convolutions reduce 256x256 to 16x16
NN_1.add(Conv2D(num_hidden1, data_format='channels_first', kernel_size=(3,3), strides=(2,2), activation='relu', input_shape=(in_ch, i_dmlat, i_dmlon), padding='same'))  # 256 -> 128
NN_1.add(Conv2D(num_hidden1, data_format='channels_first', kernel_size=(3,3), activation='relu', padding='same'))
NN_1.add(Conv2D(num_hidden1, data_format='channels_first', kernel_size=(3,3), strides=(2,2), activation='relu', padding='same'))  # 128 -> 64
NN_1.add(Conv2D(num_hidden2, data_format='channels_first', kernel_size=(3,3), strides=(2,2), activation='relu', padding='same'))  # 64 -> 32
NN_1.add(Conv2D(num_hidden2, data_format='channels_first', kernel_size=(3,3), activation='relu', padding='same'))
NN_1.add(Conv2D(num_hidden2, data_format='channels_first', kernel_size=(3,3), strides=(2,2), activation='relu', padding='same'))  # 32 -> 16
#--- encode out: 4-stage CNN at the bottom (16x16) layer
NN_1.add(Conv2D(num_hidden3, data_format='channels_first', kernel_size=(3,3), activation='relu', padding='same'))
NN_1.add(Conv2D(num_hidden3, data_format='channels_first', kernel_size=(3,3), activation='relu', padding='same'))
NN_1.add(Conv2D(num_hidden3, data_format='channels_first', kernel_size=(3,3), activation='relu', padding='same'))
NN_1.add(Conv2D(num_hidden3, data_format='channels_first', kernel_size=(3,3), activation='relu', padding='same'))
#--- decode start: four transposed convolutions restore 16x16 to 256x256
NN_1.add(Conv2DTranspose(num_hidden4, data_format='channels_first', kernel_size=(3,3), strides=(2,2), activation='relu', padding='same'))  # 16 -> 32
NN_1.add(Conv2DTranspose(num_hidden4, data_format='channels_first', kernel_size=(3,3), strides=(2,2), activation='relu', padding='same'))  # 32 -> 64
NN_1.add(Conv2DTranspose(num_hidden5, data_format='channels_first', kernel_size=(3,3), strides=(2,2), activation='relu', padding='same'))  # 64 -> 128
NN_1.add(Conv2DTranspose(num_hidden5, data_format='channels_first', kernel_size=(3,3), strides=(2,2), activation='relu', padding='same'))  # 128 -> 256
NN_1.add(Conv2D(num_hidden5, data_format='channels_first', kernel_size=(3,3), activation='relu', padding='same'))
NN_1.add(Conv2D(num_hidden6, data_format='channels_first', kernel_size=(3,3), activation='relu', padding='same'))
#--- back to a 3-channel (RGB) color image
NN_1.add(Conv2D(3, data_format='channels_first', kernel_size=(3,3), activation='relu', padding='same'))
####### end of network definition

# compile network: pixel-wise mean squared error against the color teacher image
NN_1.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])

# do training (num_itter, cbks, and the training arrays are defined elsewhere)
NN_1.fit(np_i_data_train_tr, np_t_data_train_tr, epochs=num_itter, callbacks=cbks, batch_size=8, validation_split=0.2)

# Save model and weights
json_string = NN_1.to_json()
open(os.path.join(paramfiledir, 'cnnSPAStoColSPAS2_011_model.json'), 'w').write(json_string)
NN_1.save_weights(os.path.join(paramfiledir, 'cnnSPAStoColSPAS2_011_weight.hdf5'))
```
The input image looks like this, for example (preliminary weather map, 2018/9/30 21 UTC).
The corresponding color teacher image looks like this.
Of the roughly six months of data, I set aside about two months for evaluation and trained on the remaining four months or so of black-and-white / color pairs (more than 700 images, since there are six maps per day).
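For reference, the training arrays passed to fit() above (np_i_data_train_tr for the black-and-white inputs, np_t_data_train_tr for the color teachers) are not shown in the original snippet. A minimal sketch of how such channels_first arrays could be assembled follows; the directory layout, matching file names, and [0, 1] normalization are my own assumptions.

```python
import glob
import numpy as np
from PIL import Image

def load_pairs(bw_dir, color_dir, size=(256, 256)):
    """Load matching black-and-white / color map pairs as channels_first arrays."""
    bw_list, col_list = [], []
    for bw_path in sorted(glob.glob(bw_dir + '/*.png')):
        # assumes identical file names in both directories
        col_path = bw_path.replace(bw_dir, color_dir)
        bw = Image.open(bw_path).convert('L').resize(size)
        col = Image.open(col_path).convert('RGB').resize(size)
        # scale to [0, 1]; shapes become (1, H, W) and (3, H, W)
        bw_list.append(np.asarray(bw, dtype=np.float32)[np.newaxis, :, :] / 255.0)
        col_list.append(np.asarray(col, dtype=np.float32).transpose(2, 0, 1) / 255.0)
    return np.stack(bw_list), np.stack(col_list)

np_i_data_train_tr, np_t_data_train_tr = load_pairs('bw_spas', 'color_spas')
```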
On the training data, the output converges to an image like this relatively quickly; the map outlines, latitude/longitude lines, and dates are overfitted in no time.
Using the trained network, I colorized actual black-and-white weather maps. The original black-and-white map is on the right, the colorized result on the left. For the purpose of cutting out frontal elements by color, I think the colorization is good enough.
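Concretely, colorizing a single black-and-white map with the saved model might look like the following sketch; the file names are placeholders, and the preprocessing must match whatever was used during training.

```python
import numpy as np
from keras.models import model_from_json
from PIL import Image

# Restore the trained model and weights saved above
NN_1 = model_from_json(open('cnnSPAStoColSPAS2_011_model.json').read())
NN_1.load_weights('cnnSPAStoColSPAS2_011_weight.hdf5')

# Prepare one black-and-white map as a (1, 1, 256, 256) channels_first batch
bw = Image.open('bw_spas_sample.png').convert('L').resize((256, 256))
x = np.asarray(bw, dtype=np.float32)[np.newaxis, np.newaxis, :, :] / 255.0

# Predict and save the colorized result
y = NN_1.predict(x)[0]  # (3, 256, 256), channels_first
rgb = (np.clip(y, 0.0, 1.0) * 255).astype(np.uint8).transpose(1, 2, 0)
Image.fromarray(rgb).save('colorized_sample.png')
```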
A minor detail: the box with the date and time in the upper left is also learned and predicted as part of the image. The numbers on the map (pressure, movement speed) seem to be predicted fairly straightforwardly, but "July 2017" is stubbornly predicted as "January 2018". Since the training data is from 2018 onward and contains no July, the network seems to be telling me, "I can't predict what isn't in the input."
Likewise, the era-name string "Reiwa" was stubbornly converted to "Heisei". Apparently the training drove the relevant weights to zero and left only the bias, so whatever comes in, "Heisei" comes out.
I also tried a bit of mischief with a handwritten map (laughs). This one is my own creation.
I tried drawing it the way a child might.
It seems I still have a long way to go before I can hand-draw front symbols properly.
This time I covered colorizing black-and-white weather maps. As a result, the number of cropped front images increased significantly, and the drawing accuracy for the final goal, a "weather-map-like front", has improved.
Next time, as the final installment, I will post [Drawing a "weather map-like front" by machine learning based on meteorological data (5) Neural network for drawing fronts](https://qiita.com/m-taque/items/2788f623365418db4078). As of 2020.2.9, I have reached the monthly limit on image uploads on Qiita, so the final installment is carried over to March.