Try to draw a "weather map-like front" by machine learning based on weather data (5)


1. Learning a "weather map-like front"

1.1 What is a "weather map-like front"?

Fronts on Japanese weather maps seem to be analyzed mainly from the standpoint of disaster prevention, that is, for the parts that affect Japan. If you look at the weather maps of other countries, you may see fronts that are not drawn on Japanese weather maps. For example, the column below describes fronts that appear on European weather maps: [The story of a "hidden" front that cannot be seen by satellite](https://oceana.ne.jp/column/53284)

From this it can be said that frontal analysis is not uniquely determined; each institution seems to have its own style and intent. So what my machine learning is really drawing is a **Japanese weather map-like front** derived from the weather data. If the accuracy improves to the point where it can produce a **Japanese weather map-like frontal analysis** like the Japan Meteorological Agency's, it might even become an expert system for analyzing weather conditions.

[Weather Forecast and Disaster Prevention: A Forecaster's Road (Chuko Shinsho), by Nagazawa Yoshitsugu](https://www.amazon.co.jp/dp/4121025202) says, "Draw 3,000 weather maps to become a full-fledged forecaster." The data used for training this time is about 2,000 charts, so as a deep-learning forecaster I am still far from full-fledged.

By the way, although this machine learning produces a weather map, it does **not** forecast the weather. A weather forecast predicts the future from the present; drawing fronts is a conversion from present data to a present analysis.

2. Machine learning flow

2.1 What are you going to do?

A CNN generates an image from an image. For each pixel of the generated image, it computes the probability that the pixel should be red, blue, or white, and assigns the pixel the color with the highest probability.

This method is based on the following paper:

Exascale Deep Learning for Climate Analytics. Thorsten Kurth, Sean Treichler, Joshua Romero, Mayur Mudigonda, Nathan Luehr, Everett Phillips, Ankur Mahesh, Michael Matheson, Jack Deslippe, Massimiliano Fatica, Prabhat, Michael Houston. arXiv:1810.01993 [cs.DC]

This paper won the ACM Gordon Bell Prize at SC18, the international supercomputing conference held in Dallas, Texas in 2018. As luck would have it, I was at SC18 on a business trip and was able to attend the award lecture; mixing business with personal feelings, I remember being deeply moved.

2.2 Flow of machine learning

I will summarize the flow of machine learning with this neural network.

**(1) Create the input images** Download and visualize the weather data. We prepared 6 kinds of color images (see part 2).

**(2) Create the teacher images** Create images of only the front elements, extracted by color from the "preliminary weather map" (part 3). I also colorized black-and-white weather maps to increase the number of teacher images (part 4).

**(3) CNN odds and ends** **- Input data: shuffling** The CNN input is an 18-channel tensor concatenating the 6 kinds of input images (3 channels each); a sketch of this concatenation is shown below. If the data within a mini-batch are close to each other in time, training becomes biased, so the pairs of input data and teacher image are shuffled randomly in time.
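As a minimal sketch of building that 18-channel tensor (the six file names are hypothetical placeholders, and the 0-1 normalization is my own assumption, not necessarily the author's):

concat_input.py (sketch)

import numpy as np
from PIL import Image

# Hypothetical file names for the six visualized weather elements
files = ["t850.png", "ept850.png", "rh700.png",
         "wind850.png", "mslp.png", "z500.png"]

channels = []
for f in files:
    img = Image.open(f).convert("RGB").resize((256, 256))
    arr = np.asarray(img, dtype=np.float32) / 255.0  # H x W x 3
    channels.append(arr.transpose(2, 0, 1))          # -> 3 x H x W (channels_first)

x = np.concatenate(channels, axis=0)  # 18 x 256 x 256 input tensor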

**- Teacher data: one-hot vectors** Since the CNN is built so that each pixel outputs the **probability** of being red, blue, or white, the RGB teacher image is converted into an array in which the correct class is 1 and the others are 0. That is, white becomes (1,0,0), red (0,1,0), and blue (0,0,1).

**- Output data** The CNN outputs, for example, (0.1, 0.7, 0.2) as the value of a certain pixel. In that case the pixel is judged to be red, and the picture is assembled pixel by pixel.
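As a minimal illustration of picking the most probable class for one pixel (my own numpy sketch, not the author's exact code):

import numpy as np

probs = np.array([0.1, 0.7, 0.2])                     # (white, red, blue) probabilities
colors = [(255, 255, 255), (255, 0, 0), (0, 0, 255)]  # white, red, blue RGB
pixel = colors[int(np.argmax(probs))]                 # -> (255, 0, 0): red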

**- Neural network** The CNN has U-Net-style branch-and-merge (skip connection) routes.

**- Loss function** Use categorical_crossentropy, which scores the predicted red/blue/white probabilities against the one-hot teacher data.
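For a single pixel, categorical cross-entropy reduces to minus the log of the probability assigned to the correct class; Keras averages this over all pixels. A small worked sketch:

import numpy as np

y_true = np.array([0.0, 1.0, 0.0])        # one-hot teacher: red
y_pred = np.array([0.1, 0.7, 0.2])        # predicted probabilities
loss = -np.sum(y_true * np.log(y_pred))   # = -log(0.7), about 0.357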

**(4) Train!** Finally, let's train the CNN. This time we train on two charts per day (the 06:00 UTC and 18:00 UTC data) from January 2017 to October 2019. Since the teacher data are one-hot vectors in which one of red, blue, or white is 1, the colors of the front elements are reproduced as training progresses.

My Mac mini becomes **untouchable** for a few days. It must be one of the most CPU-abused Mac minis around; everything runs on the CPU.

**(5) Predict! (draw fronts on first-look data)** Once training has converged, the trained network is used to draw fronts on first-look (unseen) data. This time, fronts were drawn on the data from November 2019 to January 2020.
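A minimal sketch of the prediction step (the model file name is a placeholder; x is the 18x256x256 input tensor sketched above, and softMaxAxis is the custom activation defined in section 4.4):

import numpy as np
from keras.models import load_model

# Load the trained network; the custom softmax must be named (see section 4.4)
NN_1 = load_model('trained_front_model.h5',
                  custom_objects={'softMaxAxis': softMaxAxis})

w = NN_1.predict(x[np.newaxis, :, :, :])[0]  # -> (3, 256, 256) class probabilities
w_array = w.transpose(1, 2, 0)               # -> (256, 256, 3), as in section 4.2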

3. Anyway, the results

3.1 Learning data

This is an example from the training data. Training appears to have converged to the point where front elements are generated at positions similar to the fronts on the preliminary weather map.

The panels run from top left to right, then from bottom left to right.

tst_chart_3cmprd2019_041606.v8.256.png

3.2 Generated data (first-look data)

Using the network trained on the data from January 2017 to October 2019 (as in 3.1), fronts were generated for the first-look data.

Example of successful generation

There is a cyclone over the sea east of Japan, with a cold front extending to the southwest. **The neural network is able to generate this cold front.** A warm front is also generated, albeit with interruptions.

tst_chart_3cmprd2020_010818.v8.256.png

A pretty bad example

In the preliminary weather map, a cold front extends southwest from the low-pressure system near the Kuril Islands, and a stationary front is analyzed along the 30th parallel north. The neural network, however, **can hardly generate either front.**

tst_chart_3cmprd2020_012412.v8.256.png

Which data is contributing?

Glancing at the data, fronts seem to be drawn where the contour lines of the 850 hPa equivalent potential temperature are crowded in the input data. In the bad example the contours look somewhat less crowded; could that be the reason?

The filter size of this CNN is 3x3. It may be worth trying a slightly larger filter, or shrinking the image; a sketch of a larger kernel follows.
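For example, a 5x5 variant of the first layer might look like this (a sketch only, reusing the names from the network definition in section 4.3.2; whether it actually helps is untested):

from keras.layers import Conv2D

# First hidden layer as in section 4.3.2, but with a 5x5 kernel (assumption)
cnv2d1 = Conv2D(num_hidden1, data_format='channels_first', kernel_size=(5,5),
                dilation_rate=(3,3), activation='relu', padding='same')(cnn_input)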

By the way, here are the front-generation results for January 2020, as an animation of the generated data.

ww_qiita_chart202001DDHH.v8.256.gif

3.3 Learning progress

Transitions of Loss and Accuracy. The plot shows up to 800 epochs, retrained in order to collect this data.

I stopped and restarted training every 200 epochs. At each restart the input images are reshuffled (described later), so part of the training data is replaced and Loss and Accuracy temporarily get worse.
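A minimal sketch of this stop-and-restart cycle (the checkpoint file name and the array names I_data_train / T_data_train are my own placeholders; softMaxAxis is the custom activation from section 4.4):

from keras.models import load_model

# Resume from the checkpoint saved at the end of the previous run
NN_1 = load_model('ckpt.h5', custom_objects={'softMaxAxis': softMaxAxis})

# Re-shuffle the data (section 4.1.1), train another 200 epochs, save again
NN_1.fit(I_data_train, T_data_train, batch_size=4, epochs=200)
NN_1.save('ckpt.h5')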

lossAndAccu.png

Here is how the generated image evolves, for the successful example shown earlier. From the left: images generated with the parameters at 200, 400, 600, and 800 epochs, and finally the image generated after more than 1,500 epochs. Fronts are drawn roughly where they belong from an early stage, and the rendering of the front symbols (such as the chevron marks) seems to improve with training.

ofile_qiita_2020010818.v8.256.png

This is the bad example. Hmm? Maybe the less-trained parameters actually generated it better?

ofile_qiita_2020012412.v8.256.png

As for training time, one epoch took about 580 seconds on my Mac mini, and I kept it running for about 4 days. With a GPU I would like to run it even further.

3.4 By the way ...

It has not reached the point of making no obvious mistakes with fronts, but I feel that with more training examples and more computing resources it could get quite good. So, **is this useful for anything?** For frontal analysis itself you can simply use the Japan Meteorological Agency's results, so that is better left to the professionals.

If it has a use, it would be automatically drawing fronts on numerical weather prediction results and climate-model output.

ww.gif

For example, the above draws fronts on the GSM forecast results initialized at 12:00 UTC on February 18, 2020. The official forecast weather maps are issued only for 24 and 48 hours ahead, so here fronts are drawn on the GSM results at 6-hour intervals.

4. CNN details

The following describes the CNN itself, with tips for the parts where I got stuck.

4.1 Input data

4.1.1 Randomly sort the set of input image / teacher image

Immediately after loading, the input and teacher images are ordered chronologically. If trained as-is, the weather patterns within each mini-batch would be close in time, so the lists are shuffled randomly to make each batch a mixture of different weather patterns.

randomize.py



import random

# i_datalist : input weather-image data, n_datalist entries in date/time order
# t_datalist : teacher-image data, n_datalist entries in date/time order

i_datalist_train = []  # shuffled input image list
t_datalist_train = []  # shuffled teacher image list

# Draw the indices 0 .. n_datalist-1 in a random order and loop over them
for c_list in random.sample(range(n_datalist), n_datalist ):

  t_datalist_train.append(t_datalist[c_list])
  # append to the shuffled teacher image list

  i_datalist_train.append(i_datalist[c_list])
  # append to the shuffled input image list

4.1.2 Making front image one-hot

In the front-element images created for the teacher data in the previous posts, each pixel is red, blue, or white: the background occupying most of the area is white, cold fronts and the anticyclone symbol are blue, and warm fronts, occluded fronts, and the cyclone symbol are red. Since the CNN-generated image is scored against the teacher image with the categorical_crossentropy loss, the front-element image data are converted into one-hot vectors. To use the to_categorical utility (np_utils in Keras), the RGB array is first converted into an array of values 0, 1, or 2, and then to_categorical turns that into one-hot vectors. The code below does exactly that.

one-hot.py


import numpy as np
from keras.utils import np_utils

# t_img     loaded front image file (PIL RGB image)
# img_size  size of t_img as (width, height)

t_data = np.empty((img_size[1],img_size[0]))
# t_data: per-pixel array of ternary values (0, 1, 2) for one-hot conversion

# Per pixel: red -> 1, blue -> 2, otherwise (white) -> 0
# (the +20 margins tolerate slightly off-pure colors)
for x in range(img_size[1]):
  for y in range(img_size[0]):
    r,g,b = t_img.getpixel((y,x))
    # store the RGB values in r, g, b
    if(r>g+20):
      if(r>b+20):
        t_data[x,y]=1 # red pixel
      else:
        if(b>g+20):
          t_data[x,y]=2 # blue pixel
        else:
          t_data[x,y]=0 # white pixel
    else:
      if(b>r+20):
        if(b>g+20):
          t_data[x,y]=2 # blue pixel
        else:
          t_data[x,y]=0 # white pixel
      else:
        t_data[x,y]=0 # white pixel

# Convert t_data into T_data, an array of 3-element one-hot vectors
T_data = np_utils.to_categorical(t_data[:,:],3)

4.2 Output data

From the predicted output data, the class with the highest probability is adopted and converted into a three-color image. As the dump below shows, for each output pixel one of the three values is clearly the largest, and the corresponding color is placed on that pixel.

python


w_array
(256, 256, 3)
[[[9.99969959e-01 1.26371087e-05 1.73822737e-05]
  [1.00000000e+00 8.79307649e-09 8.33461922e-09]
  [1.00000000e+00 1.22459204e-12 8.95228910e-16]
  ...
  [9.99985695e-01 6.48013793e-06 7.86928376e-06]
  [9.99960303e-01 8.51386540e-06 3.12020056e-05]
  [9.99833941e-01 2.61777150e-05 1.39806682e-04]]

 [[9.99999881e-01 8.55169304e-08 1.83308195e-08]
  [1.00000000e+00 9.66997732e-11 1.11044485e-12]
  [1.00000000e+00 4.26908814e-16 1.04265986e-22]
  ...

Here is the source for placing the colors on the pixels. w_array holds, for each latitude x longitude pixel, a probability vector over (white, red, blue). From it, the image data mask1 is created.

rasterize.py



from PIL import Image
from keras.preprocessing.image import array_to_img

# w_array : ndarray of predicted output data (one screen of probabilities)

s_img = array_to_img(w_array[:,:,:].reshape(i_dmlat,i_dmlon,3))
# s_img : the raw probability data rendered as an image

new_img_size = [w_array.shape[1], w_array.shape[0]]
mask1 = Image.new('RGB', new_img_size)
# mask1 : output image

for x in range(new_img_size[1]):
  for y in range(new_img_size[0]):

    # store the three elements of w_array in w1 (white), r1 (red), b1 (blue)
    w1 = w_array[x,y,0]
    r1 = w_array[x,y,1]
    b1 = w_array[x,y,2]

    if(r1>w1):
      if(r1>b1):            # r1 is the maximum
        r,g,b=255,0,0       # set red as the RGB value
      else:                 # b1 >= r1 > w1
        if(b1>w1):          # b1 is the maximum
          r,g,b=0,0,255     # set blue as the RGB value
        else:
          r,g,b=255,255,255 # set white as the RGB value (not reached here)
    else:                   # w1 >= r1
      if(b1>r1):
        if(b1>w1):          # b1 is the maximum
          r,g,b=0,0,255     # set blue as the RGB value
        else:               # w1 >= b1 > r1
          r,g,b=255,255,255 # set white as the RGB value
      else:                 # w1 >= r1 and r1 >= b1
        r,g,b=255,255,255   # set white as the RGB value

    mask1.putpixel((y,x),(r,g,b))
    # write the RGB value into mask1
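Incidentally, the same per-pixel selection can be written more compactly with numpy's argmax; a sketch (my own rewrite, effectively equivalent in behavior):

import numpy as np
from PIL import Image

# Map class index -> RGB: 0 white, 1 red, 2 blue
palette = np.array([[255, 255, 255], [255, 0, 0], [0, 0, 255]], dtype=np.uint8)

rgb = palette[np.argmax(w_array, axis=-1)]  # (H, W, 3) uint8 image array
mask1 = Image.fromarray(rgb, mode='RGB')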

4.3 Neural Network (Convolutional Neural Network)

4.3.1 Network structure

The CNN structure has U-Net-like branches and is hand-made. The input is an 18x256x256 tensor and the output is a 3x256x256 tensor. The image size is halved at each Conv2D with strides=(2,2), shrinking 256 to 128, 64, 32, and finally 16, so the innermost tensor is 120x16x16. In a standard U-Net the number of channels doubles each time the image size is halved, but here it does not double cleanly because of memory constraints.

GSMtoWChart045.png

4.3.2 Network definition by Keras

The Python-Keras source for the network definition is below. The network is defined with the Keras **Functional API**, connecting the output of each layer to the next layer or to a merge point.

A line such as `cnv2dtra = concatenate([cnv2dtra , cnv2d14 ], axis=1)` merges the branch cnv2d14 into cnv2dtra along the first axis (the channel axis).

cnn.py


#- Neural Network Definition

from keras.layers import Input, Conv2D, Conv2DTranspose, concatenate
from keras.models import Model

num_hidden1 = 18
num_hidden2 = 32
num_hidden3 = 48
num_hidden3a = 72
num_hidden3b = 120
num_hidden4 = 128
num_hidden5 = 16
num_hidden6 = 16
num_itter = int(_num_itter) # number of training epochs (supplied externally)

#--- setting up the Neural Network using Keras

in_ch = 18 # input channels: RGB x 6 image types

cnn_input = Input(shape=(in_ch, i_dmlat, i_dmlon))
# i_dmlat, i_dmlon : number of pixels in latitude and longitude of the images

# 1st hidden layer

cnv2d1 = Conv2D(num_hidden1, data_format='channels_first', kernel_size=(3,3), dilation_rate=(3,3), activation='relu', padding='same')(cnn_input)
cnv2d2 = Conv2D(num_hidden1, data_format='channels_first', kernel_size=(3,3), dilation_rate=(3,3), activation='relu', padding='same')(cnv2d1)
cnv2d4 = Conv2D(num_hidden1, data_format='channels_first', strides=(2,2) , kernel_size=(3,3), activation='relu', padding='same')(cnv2d2)

# 2nd hidden layer

cnv2d5 = Conv2D(num_hidden2, data_format='channels_first', kernel_size=(3,3), dilation_rate=(3,3), activation='relu', padding='same')(cnv2d4)
cnv2d6 = Conv2D(num_hidden2, data_format='channels_first', kernel_size=(3,3), dilation_rate=(3,3), activation='relu', padding='same')(cnv2d5)
cnv2d8 = Conv2D(num_hidden2, data_format='channels_first', strides=(2,2) , kernel_size=(3,3), activation='relu', padding='same')(cnv2d6)

# 3rd hidden layer

cnv2d9 = Conv2D(num_hidden3, data_format='channels_first', kernel_size=(3,3), activation='relu', padding='same')(cnv2d8)
cnv2d10 = Conv2D(num_hidden3, data_format='channels_first', kernel_size=(3,3), activation='relu', padding='same')(cnv2d9)
cnv2d12 = Conv2D(num_hidden3, data_format='channels_first', strides=(2,2) , kernel_size=(3,3), activation='relu', padding='same')(cnv2d10)

# 4th hidden layer

cnv2d13 = Conv2D(num_hidden3a, data_format='channels_first', kernel_size=(3,3), activation='relu', padding='same')(cnv2d12)
cnv2d14 = Conv2D(num_hidden3a, data_format='channels_first', kernel_size=(3,3), activation='relu', padding='same')(cnv2d13)
cnv2d16 = Conv2D(num_hidden3a, data_format='channels_first', strides=(2,2) , kernel_size=(3,3), activation='relu', padding='same')(cnv2d14)

cnv2d17 = Conv2D(num_hidden3b, data_format='channels_first', kernel_size=(3,3), activation='relu', padding='same')(cnv2d16)
cnv2d19 = Conv2D(num_hidden3b, data_format='channels_first', kernel_size=(3,3), activation='relu', padding='same')(cnv2d17)

#--- decode start

cnv2dtra = Conv2DTranspose(num_hidden3a, data_format='channels_first', kernel_size=(3,3), strides=(2,2), activation='relu', padding='same')(cnv2d19)

cnv2dtra = concatenate([cnv2dtra , cnv2d14 ], axis=1)
#Branch join

cnv2dtrb = Conv2D(num_hidden3a, data_format='channels_first', kernel_size=(3,3), activation='relu', padding='same')(cnv2dtra)
cnv2dtr1 = Conv2DTranspose(num_hidden3, data_format='channels_first', kernel_size=(3,3), strides=(2,2), activation='relu', padding='same')(cnv2dtrb)

cnv2dtr1 = concatenate([cnv2dtr1 , cnv2d10 ], axis=1)
#Branch join

cnv2dtr2 = Conv2D(num_hidden3, data_format='channels_first', kernel_size=(3,3), activation='relu', padding='same')(cnv2dtr1)
cnv2dtr4 = Conv2DTranspose(num_hidden2, data_format='channels_first', kernel_size=(3,3), strides=(2,2), activation='relu', padding='same')(cnv2dtr2)

cnv2dtr4 = concatenate([cnv2dtr4, cnv2d6], axis=1)
#Branch join

cnv2dtr5 = Conv2D(num_hidden2, data_format='channels_first', kernel_size=(3,3), activation='relu', padding='same')(cnv2dtr4)
cnv2dtr7 = Conv2DTranspose(num_hidden1, data_format='channels_first', kernel_size=(3,3), strides=(2,2), activation='relu', padding='same')(cnv2dtr5)

cnv2dtr7 = concatenate([cnv2dtr7, cnv2d2], axis=1)
#Branch join

cnv2dtr8 = Conv2D(num_hidden1, data_format='channels_first', kernel_size=(3,3), activation='relu', padding='same')(cnv2dtr7)
cnv2dtr9 = Conv2D(num_hidden1, data_format='channels_first', kernel_size=(3,3), activation='relu', padding='same')(cnv2dtr8)

cnv2dtr10 = Conv2D(3, data_format='channels_first', kernel_size=(1,1), activation=softMaxAxis , padding='same')(cnv2dtr8)
# final 1x1 convolution producing the ternary (3-channel) front image
# (note: cnv2dtr9 above is unused; this layer takes cnv2dtr8 as input)


NN_1 = Model( input=cnn_input , output=cnv2dtr10 )
# build the model with cnn_input as input and cnv2dtr10 as output

NN_1.compile(optimizer='adam', loss='categorical_crossentropy' , metrics=['accuracy'])
# loss function: categorical_crossentropy
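A hedged sketch of the training call after compiling (the array names and batch size are my assumptions; the teacher tensors must be arranged channels_first to match the network output):

# I_data_train : (n_samples, 18, 256, 256) input tensors
# T_data_train : (n_samples, 3, 256, 256) one-hot teacher tensors
NN_1.summary()   # check the layer shapes against the structure diagram above
history = NN_1.fit(I_data_train, T_data_train, batch_size=4, epochs=200)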

4.4 Final layer softmax

In the final layer, a softmax computes the red, blue, and white probabilities for each pixel. Since the standard softmax activation cannot be told which axis of the array to use, we define a function called softMaxAxis, following the Stack Overflow question "How to specify the axis when using the softmax activation in a Keras layer?".

softMaxAxis.py


from keras.activations import softmax

def softMaxAxis(x):
    return softmax(x, axis=1) # softmax along axis 1, the channel axis

Note that the name of the custom object must be supplied when saving and reloading the model.

load_model


from keras.models import load_model

NN_1 = load_model( paramfile , custom_objects={'softMaxAxis': softMaxAxis })

4.5 Superimposition of generated image and isobar diagram

ImageMagick is used. The composite command with -compose multiply superimposes the two images, and -gravity center -geometry +0+0 specifies that their centers are aligned with no vertical or horizontal offset.

Image_Overlay.sh


# m_file Generated front image (PNG)
# s_file Isobar diagram (PNG)
composite -gravity center -geometry +0+0 -compose multiply  ${m_file} ${s_file} out.png

5. Summary

I built a neural network that reads visualized weather-data images and generates "front images like those the Japan Meteorological Agency analyzes", and had it draw fronts automatically.

I think this method is effective not only for drawing fronts but for any weather-related data plotted on a map. On the other hand, evaluating hits and misses feels quite difficult. Mechanically measuring per-pixel deviations does not feel right, and how should one score whether the shape of a front has been captured? As with the weather forecaster examination, having a person do the scoring may unexpectedly be the right answer.

Currently I am working on generating composite radar images from MSM weather data, and I am considering using those as ground truth for evaluation. I plan to publish that soon as well.

This was my first series of posts on Qiita, and it ran to five long articles. Thank you to everyone who read them.
