- I want to read GrADS files with Python and post-process them.
- But the environment on shared computational resources must be kept clean.
- However, there are many things I want to install, such as conversion software and libraries.

So let's use container virtualization.
Docker
What is good about Docker is that you can create a dedicated virtual machine for the necessary post-processing without polluting the computational resources. You launch a container when you need it, discard it after the processing is done, and launch it again the next time. With Docker the start-up cost is very low, so it is easy to spin up a container, run a single job, and throw it away again.
It would be nice to write a Dockerfile for automation, but for now I will just work interactively in a shell and commit the container to create the image. If you plan to use the image for a long time, you should write a Dockerfile so that it can be rebuilt and upgraded, but it is better to check the basic functionality in a shell first.
This time I will use Ubuntu as the base and install cdo, pip3, and netCDF4.
$ docker pull ubuntu
$ docker images
You can confirm that the image has been downloaded.
To start a container from the image:
$ docker run -it ubuntu
Install pip3 and cdo, and also install netCDF4 with pip3.
# apt update
# apt install python3 python3-pip cdo
# pip3 install netCDF4
Check that cdo works for conversion and that netCDF4 can be imported.
# python3
>>> import netCDF4 as nc
>>> ^D
# cdo
....
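If you want a slightly more end-to-end check, a sketch like the following converts a GrADS dataset described by a .ctl file to netCDF with cdo's import_binary operator and then opens the result with netCDF4. The file names data.ctl and data.nc are placeholders for this sketch, not files from the article.

import subprocess
import netCDF4 as nc

# Convert a GrADS dataset (described by a .ctl descriptor file) to netCDF with cdo.
# "data.ctl" and "data.nc" are placeholder names.
subprocess.run(["cdo", "-f", "nc", "import_binary", "data.ctl", "data.nc"], check=True)

# Open the converted file and list the variables it contains.
with nc.Dataset("data.nc") as ds:
    print(list(ds.variables))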
Exit with the exit command or Ctrl + D, then create an image from the container.
To find the container ID needed to create the image, use the -a option to list the finished (exited) containers.
# exit
(host) $ docker ps -a
The output will look something like this: CONTAINER ID is the container ID to specify when committing, IMAGE shows the image name, and STATUS shows "Exited ... ago", so you can roughly tell which container is which.
$ docker commit [container id] [author]/[image name]
Commit the container like this.
$ docker images
You can confirm that the image has been created.
There are two ways to actually run the computation: you can start the container and work interactively from a shell, or you can specify a command when you run the container. This time I will specify the command at run time.
By bind-mounting the directory that holds the programs and data into the container, you can access them easily from inside. X forwarding is also set up so that graphs can be checked.
$ sudo docker run -e DISPLAY=$DISPLAY --net host -v /tmp/.X11-unix:/tmp/.X11-unix -v $HOME/.Xauthority:/root/.Xauthority -v /home/xxx/:/home/ --shm-size 16g xxx/ubuntu-cdo2 /bin/bash -c "cd /home/speedy-epyc/speedy/python-script; python3 rmse.py"
Add --shm-size if necessary, for example when a program fails because of insufficient shared memory.
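For reference, here is a minimal sketch of the kind of post-processing a script like rmse.py might do. The actual rmse.py is not shown in this article, so the file names forecast.nc and analysis.nc and the variable name "t" below are purely hypothetical; the sketch just reads two netCDF files with netCDF4 and computes the RMSE with numpy (which is installed as a dependency of netCDF4).

import numpy as np
import netCDF4 as nc

# Hypothetical input files and variable name; adjust to the actual data set.
with nc.Dataset("forecast.nc") as f, nc.Dataset("analysis.nc") as a:
    forecast = f.variables["t"][:]  # read the whole variable as an array
    analysis = a.variables["t"][:]

# Root-mean-square error over all times and grid points.
rmse = np.sqrt(np.mean((forecast - analysis) ** 2))
print("RMSE:", rmse)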