The first Mac with an Apple SoC (Apple Silicon: M1) was released on November 17, 2020. Because the CPU changed from Intel to Arm, there were concerns about compatibility with existing software, but Apple released Rosetta 2, an emulation environment, at the same time as the machine, and software for Intel Macs is said to run without problems. However, regarding Docker, the official site mentions compatibility issues, and complete operation is said to be difficult at this time (November 2020) (link) *
As a 2012 MacBook Pro user, I had been planning to buy a new Mac later this year anyway, because my machine is not supported by the new macOS released in 2020 (Big Sur). Even with some compatibility issues, reviews of the M1 Mac are generally positive, and I did not feel like buying an Intel Mac at this point, so I decided to accept the temporary compatibility problems and introduce an M1 Mac. My current Python development environment is Docker installed on the Mac (hosted locally) with Anaconda inside the container, so until Docker for the M1 Mac is released, I tried moving (hosting) the Docker environment outside the Mac.
The relocation candidate is as follows:
・ AWS (EC2 Ubuntu server within the free tier)

Prerequisites:
・ The Mac is used as the local terminal
・ You have already created an AWS account
・ You already have a working Dockerfile or Docker image
・ You are within the AWS free tier period (one year from account creation)
・ You are logging in to AWS from within Japan
・ Sign in as the root user by entering your AWS account and, on the next page, your password
・ Service: Search and select EC2
・ Select "Launch Instance" from the "Launch Instance" pull-down.
・ Select a machine image (AMI). Just to be safe, check "Free tier only". Here we select Ubuntu Server.
・ Select the instance type: "t2.micro" is chosen because it is free tier eligible. After selecting it, go to "Next: Configure Instance Details".
・ There seems to be no need to change the instance details. Note that some of these settings incur charges if changed. Go to "Next: Add Storage".
・ Size the storage as needed. The default is 8 GB, but since that is not enough to build my Dockerfile, change the size (GiB) to 30 GB, the free tier limit. Go to "Next: Add Tags".
・ Add a tag so the instance (and the key pair created later) is easy to identify. Following the console's example, I entered "Name" as the key and the server name as the value. After entering it, go to "Next: Configure Security Group".
・ Since security requirements are case by case, I will not go into details here, but if you do not at least restrict the source IP address, the warning below is shown. In that case, selecting "My IP" from the "Source" pull-down automatically fills in the global IP address of the local machine you are currently connecting from. Unless you expect to access the server from multiple locations this should be fine, so I used this setting. Note that to open Jupyter Notebook from the Mac's browser later, an inbound rule for TCP port 8888 (also restricted to "My IP") is needed in addition to SSH. Go to the next step, "Review and Launch".
・ The review screen for the instance appears. If everything is as configured, select "Launch".
・ Create a key pair. When creating an instance for the first time there should be no existing key, so select "Create a new key pair", enter a key pair name (the pem file name), then download and save it.
・ A screen says the instance is being created, but nothing changes however long you wait here. Select "View Instances".
・ You are taken to the screen showing the status of your instances. Once the instance is "Running" as shown below, the count toward the 750 free hours per month begins. Select the row for your instance to see its details below. The "Public IPv4 DNS" shown in the center of the screen is the address used to connect to the server.
・ 750 hours per month will not be exceeded even if the instance runs continuously (750 ÷ 24 = 31.25 > 31 days), but billing could start without my noticing right before the free period ends, so I want to make a habit of stopping the instance whenever I am not using the server. To stop it, select "Stop Instance" from the "Instance State" pull-down.
・ A confirmation screen will appear, so select "Stop" again.
・ If the display below appears, the instance should no longer be in a billable state. Note that there is a slight time lag between the green "Successfully stopped" banner and the instance actually showing "Stopped".
・ First, log in to the AWS server with ssh from the terminal. It is a good idea to restrict the permissions of the pem file with chmod beforehand, since ssh rejects private keys that other users can read.
$ chmod 400 <keypair path/keypair name.pem>
$ ssh -i <keypair path/keypair name.pem> ubuntu@<Public IPv4 DNS>
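For reference, here is what the two commands above look like with hypothetical values filled in (a key saved on the desktop as my-key.pem and a made-up Tokyo-region DNS name); substitute your own key path and the "Public IPv4 DNS" shown in the instance details:
$ chmod 400 ~/Desktop/my-key.pem
$ ssh -i ~/Desktop/my-key.pem ubuntu@ec2-203-0-113-25.ap-northeast-1.compute.amazonaws.com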
・ Install Docker after updating Ubuntu
$ sudo apt update
$ sudo apt upgrade
$ sudo apt install docker.io
・ After installation, add the ubuntu user to the docker group so that Docker can be used without sudo. Then log out, log back in, run a docker command without sudo, and confirm that it is accepted.
$ sudo gpasswd -a ubuntu docker
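For example, after logging back in, docker ps is a quick check; if it prints an (empty) container list instead of a permission error, the group change has taken effect:
$ exit
$ ssh -i <keypair path/keypair name.pem> ubuntu@<Public IPv4 DNS>
$ docker ps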
・ Log out once, and transfer the Dockerfile and the working directory to the AWS server with sftp. Depending on how you obtained your Docker image there may be no Dockerfile; in that case you would transfer the Docker image itself, but an image that includes Anaconda is quite large, so check its size before transferring.
$ sftp -i <keypair path/keypair name.pem> ubuntu@<Public IPv4 DNS>
$ put <arbitrary directory>/Dockerfile
* If no local directory is specified, the local directory you were in when you started sftp (Desktop in this case) is used; if no remote directory is specified, the file is placed in the default directory on connection (/home/ubuntu).
$ put -r <Work directory used locally>
$ exit
・ When the file transfer is complete, exit sftp and log back in with ssh.
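To confirm the transfer, you can, for example, list the home directory after logging back in; the Dockerfile and the transferred working directory should appear (the placeholders are the same as above):
$ ssh -i <keypair path/keypair name.pem> ubuntu@<Public IPv4 DNS>
$ ls ~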
3.3 Docker build
・ Build the Dockerfile to create a Docker image. If you want a separate directory as the build context, prepare it beforehand. After the build finishes, use the docker images command to check that the image has been created.
$ mkdir anaconda_build  (create a build context directory; optional)
$ mv Dockerfile anaconda_build
$ cd anaconda_build
$ docker build .
(When build is finished)
$ docker images
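As a small optional variation, giving the image a tag at build time lets you refer to it by name instead of the image ID in the next step ("anaconda_env" here is just an arbitrary example name):
$ docker build -t anaconda_env .
$ docker images
(In the docker run command below, <image ID> can then be replaced with anaconda_env.)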
・ Run docker run with the following command to start Anaconda, then connect to the AWS server from the Mac's browser (http://<Public IPv4 DNS>:8888, which requires the port 8888 inbound rule mentioned in the security group step) and check that Jupyter Notebook starts. The specified working directory should be visible there, containing the files transferred by sftp.
$ docker run -v ~/workdir:/workdir -p 8888:8888 <image ID>
(In the screenshot below the directory name is "work", but that is a mistake for "workdir".)
At this point, the Anaconda environment should be rebuilt just as it was before. If you no longer need this server, you can "Terminate" the instance and it disappears cleanly, which is convenient. Before doing so, don't forget to retrieve the artifacts in the working directory with sftp get.
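Retrieving the artifacts looks like the transfer in reverse, reusing the same sftp connection style as above (workdir is the working directory mounted in the docker run command):
$ sftp -i <keypair path/keypair name.pem> ubuntu@<Public IPv4 DNS>
$ get -r workdir
$ exit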
Docker environment relocation (Raspberry Pi) → I found that there is no Anaconda (arm64) repository usable on the Raspberry Pi, so I will work on this another day.
Reference: Docker course taught from scratch by a US AI developer (Udemy)