I am an SE who majored in earth science as a student and has long been in charge of scientific computing systems. Prompted by the machine learning boom of the past few years, I studied up and started playing with weather data on the Mac mini at home, wanting to try something myself.
This post summarizes my attempt to draw "fronts" with machine learning (it is also my first Qiita post).
Since the work involves several steps, I plan to split it across multiple posts. This article gives the overall flow and serves as a preview.
"Weather maps" are produced by the Japan Meteorological Agency (JMA) from meteorological observation data: the familiar charts on TV and in newspapers, with fronts, highs, and lows drawn over isobars. The figure below is the preliminary weather map (SPAS) for 18:00 UTC on November 25, 2019 (03:00 JST on the 26th).
A "stationary front" extends northeastward from the sea southeast of the Kanto region, and east of 150°E it continues as a "cold front".
A "front" is a boundary between air masses of different character, usually between cold air and warm air. However, when we visualize the surface meteorological data (wind, pressure, temperature) for this time, we get something like the figure below (I used GSM GPV data downloaded from the Research Institute for Sustainable Humanosphere, Kyoto University). Boundaries between cold and warm air seem to be everywhere, which makes you wonder: why is only one of them drawn as the front on the weather map? In this figure, wind is shown as barbs, the thick contours are pressure, and the thin contours and colors (red to blue, warm to cold) show temperature.
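A chart like this can be drawn with matplotlib. The sketch below uses synthetic fields instead of real GSM GPV data (reading GRIB files is the topic of part 2); the grid extent and the field formulas are made up purely for illustration:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend
import matplotlib.pyplot as plt

# Synthetic grid standing in for GSM GPV surface fields (toy values)
lon = np.linspace(120, 160, 81)
lat = np.linspace(20, 50, 61)
LON, LAT = np.meshgrid(lon, lat)

# Toy fields: a low centered near (140E, 35N), a north-south temperature gradient
pressure = 1013 - 20 * np.exp(-((LON - 140) ** 2 + (LAT - 35) ** 2) / 50)
temp = 30 - 0.8 * (LAT - 20)
u = -(LAT - 35)            # crude cyclonic flow around the low
v = (LON - 140)

fig, ax = plt.subplots(figsize=(8, 6))
# Temperature: colors (red = warm, blue = cold) plus thin contour lines
ax.contourf(LON, LAT, temp, levels=15, cmap="RdBu_r")
ax.contour(LON, LAT, temp, levels=15, colors="gray", linewidths=0.5)
# Pressure: thick black isobars with labels
pc = ax.contour(LON, LAT, pressure, levels=10, colors="black", linewidths=1.5)
ax.clabel(pc, fmt="%d")
# Wind: barbs, subsampled so they stay readable
ax.barbs(LON[::6, ::8], LAT[::6, ::8], u[::6, ::8], v[::6, ::8], length=5)
fig.savefig("surface_chart.png", dpi=100)
```

The real input images are produced the same way, just from GRIB-decoded model fields instead of formulas.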
If fronts were to be drawn automatically, one obvious approach would be to compute where the temperature differences are largest, but that does not seem to reproduce the fronts actually drawn on the weather map. So my theme became: draw "weather-map-like fronts" with machine learning.
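That naive gradient-based approach can be sketched with NumPy. This is a toy illustration of the idea being dismissed above, not the method used in this series; the function name and the synthetic grid are made up:

```python
import numpy as np

def thermal_gradient_magnitude(temp, dx=1.0, dy=1.0):
    """Magnitude of the horizontal temperature gradient: a naive
    'large difference' front proxy, not the CNN approach."""
    gy, gx = np.gradient(temp, dy, dx)
    return np.hypot(gx, gy)

# Synthetic field: a sharp temperature jump between rows 9 and 10 of a 20x20 grid
temp = np.where(np.arange(20)[:, None] < 10, 10.0, 25.0) * np.ones((20, 20))
grad = thermal_gradient_magnitude(temp)

# The gradient magnitude peaks right at the boundary rows
front_rows = np.argmax(grad, axis=0)
```

On real data this flags every air-mass boundary at once, which is exactly why it cannot single out the one front a human analyst draws.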
To jump to the conclusion, the fronts that the machine learning finally generated are shown in the figure below.
This figure overlays the machine-learned fronts on a plain visualization of the pressure field. The front is generated in a fairly reasonable position.
To be honest, this is one of the more successful cases; others do not work as well.
Preliminary weather map (SPAS), 06:00 UTC, September 19, 2019
Fronts drawn by machine learning
In this case, the stationary front in the south is generated, but the various fronts (occluded, cold, warm) extending from the occluded cyclone near Sakhalin, north of Hokkaido, are not.
Concretely, I take numerical weather data (GPV), visualize several meteorological elements as input images, use the front charts of the JMA's preliminary weather maps (SPAS) as teacher data, and train a U-Net-like CNN on them.
The inputs are images visualizing surface data (pressure, temperature, wind) and upper-air data (pressure, temperature, wind, equivalent potential temperature, dew-point depression) from the initial-time data of the JMA's global model (GSM).
Taking 18:00 UTC on November 25, 2019 as an example, the inputs look like this.
- Surface data: pressure, wind, temperature
- Upper-air data:
  - Pressure, wind, temperature at 850 hPa (around 1,500 m altitude) and 500 hPa (around 5,500 m altitude)
  - Equivalent potential temperature (850 hPa): a quantity combining temperature and water-vapor content; large values indicate warm, humid air.
  - Dew-point depression (700 hPa): the difference between temperature and dew-point temperature; the more humid the air, the smaller the value. Areas within 3 degrees are colored.
  - Vertical velocity (700 hPa): vertical air motion, a measure of convective (precipitation) activity.
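The two derived quantities above can be approximated as follows. These are standard textbook-style approximations (Magnus formula for dew point, a simplified Bolton-style equivalent potential temperature); they are my own sketch, not necessarily the exact formulas behind the GSM products:

```python
import math

CP = 1004.0   # specific heat of dry air at constant pressure, J/(kg K)
LV = 2.5e6    # latent heat of vaporization of water, J/kg

def dew_point(temp_c, rh):
    """Dew point (deg C) from temperature (deg C) and relative humidity (%),
    via the Magnus formula."""
    a, b = 17.27, 237.7
    gamma = math.log(rh / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

def dew_point_depression(temp_c, rh):
    """T - Td: small when the air is humid (the charts color areas within 3 deg)."""
    return temp_c - dew_point(temp_c, rh)

def equivalent_potential_temperature(temp_c, rh, p_hpa=850.0):
    """Approximate theta-e (K): potential temperature boosted by the latent
    heat of the water vapor present. Large values = warm AND humid."""
    t_k = temp_c + 273.15
    theta = t_k * (1000.0 / p_hpa) ** 0.2854      # potential temperature
    es = 6.112 * math.exp(17.67 * temp_c / (temp_c + 243.5))  # sat. vapor pressure, hPa
    e = es * rh / 100.0                            # actual vapor pressure
    r = 0.622 * e / (p_hpa - e)                    # mixing ratio, kg/kg
    return theta * math.exp(LV * r / (CP * t_k))
```

For example, warm humid air at 25 °C / 90 % RH gets a noticeably higher theta-e than cool dry air at 5 °C / 30 % RH, which is why the field highlights warm-sector air.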
From the images above, the CNN generates a figure like the following.
The final output is this figure superimposed on the pressure chart visualized from the meteorological data, with the date and time written on it.
In other words, the CNN itself learns where in these multiple images to look and where to draw a "front".
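The actual network is covered in part 5. Just as a rough illustration of what "a U-Net-like CNN" means here, a minimal sketch in PyTorch; the channel counts, depth, and image size are placeholders, not the real configuration:

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    """Two 3x3 convolutions with ReLU; padding keeps the spatial size."""
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Minimal U-Net-like CNN: stacked weather-chart images in,
    a front image out (e.g. one channel per front type)."""
    def __init__(self, in_ch=6, out_ch=4):
        super().__init__()
        self.enc1 = conv_block(in_ch, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)   # 16 upsampled + 16 skip channels
        self.head = nn.Conv2d(16, out_ch, 1)

    def forward(self, x):
        e1 = self.enc1(x)                # full resolution
        e2 = self.enc2(self.pool(e1))    # half resolution
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.head(d1)

model = TinyUNet()
x = torch.randn(1, 6, 64, 64)  # one sample: 6 input chart channels, 64x64 pixels
y = model(x)
```

The skip connection is the point: it lets the output keep sharp positional detail (where exactly the front line sits) while the downsampled path captures the larger synoptic pattern.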
I plan to cover the process in the following order.
[Part 2: Drawing "weather-map-like fronts" by machine learning from weather data (2)](https://qiita.com/m-taque/items/988b08185097dca5f5b5): visualizing the weather data used as input, i.e., turning numerical GPV data into images (matplotlib, the GRIB format).
[Part 3: Drawing "weather-map-like fronts" by machine learning from weather data (3)](https://qiita.com/m-taque/items/4d5bb45e6b5dc42dc833): extracting fronts from color weather maps for the training data, i.e., extracting only the fronts from the JMA's SPAS charts to create teacher data.
[Part 4: Drawing "weather-map-like fronts" by machine learning from weather data (4)](https://qiita.com/m-taque/items/80ba51b74167b2aa669e): a spin-off. Color weather maps were scarce, so I built another CNN that colorizes black-and-white weather maps to enlarge the training set.
[Part 5: Drawing "weather-map-like fronts" by machine learning from weather data (5)](https://qiita.com/m-taque/items/2788f623365418db4078): the front-drawing CNN itself, i.e., the CNN that does the actual learning.
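As a small preview of the front-extraction step in part 3: one simple way to pull colored front lines out of a chart image is RGB thresholding. The threshold values and the synthetic test image below are assumptions for illustration, not the actual values used in that post:

```python
import numpy as np

def extract_red_front(img, tol=60):
    """Boolean mask of 'red-ish' pixels (warm fronts on SPAS are drawn in red).
    img: H x W x 3 uint8 RGB array; tol: per-channel tolerance (assumed value)."""
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    return (r > 255 - tol) & (g < tol) & (b < tol)

# Synthetic chart: white background with a red diagonal line
img = np.full((50, 50, 3), 255, dtype=np.uint8)
for i in range(50):
    img[i, i] = (230, 20, 20)

mask = extract_red_front(img)
```

Real scanned charts are noisier than this, anti-aliased and overlaid with other linework, which is what makes part 3 a story of its own.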
Well then.