Official tutorial for learning Docker systematically (Japanese translation)

About this article

**Translation source**: getting-started: https://github.com/docker/getting-started/tree/6190776cb618b1eb3cfb21e207eefde511d13449

License: Apache License 2.0

Docker Desktop

Docker Desktop is a tool for building and sharing containerized applications and microservices. It runs on macOS and Windows.

**Japanese translator's note**

To install Docker Desktop, download it from the official site, or run the following command to install it with Homebrew Cask.

$ brew cask install docker

Open Docker Desktop to start the tutorial. Each command will be explained in detail in the second half, so for now just get a feel for how a container is created.

Clone

First, clone the repository.

The Getting Started project is a simple GitHub repository containing everything you need to build an image and run it as a container.

$ git clone https://github.com/docker/getting-started.git

Build

Next, build an image.

A Docker image is a private filesystem for a container. It provides all the files and code the container needs.

$ cd getting-started
$ docker build -t docker101tutorial .

Run

Let's run the container.

Start a container based on the image you built in the previous step. Launching a container runs your application with resources securely isolated from the rest of your machine.

$ docker run -d -p 80:80 --name docker-tutorial docker101tutorial

Share

Save and share your image.

Saving and sharing your image on Docker Hub lets other users easily download and run your image on any machine they like.

To use Docker Hub, you need to create a Docker account.

$ docker tag docker101tutorial michinosuke/docker101tutorial
$ docker push michinosuke/docker101tutorial

docker_01.png

Docker Tutorial

Access the container you created in the Docker Desktop tutorial to begin the more detailed Docker tutorial.

Let's access http://localhost.

Getting started

About the command you just executed

We were able to start a container for this tutorial.

First, let's look at the command you just ran. In case you've forgotten, here it is again:

docker run -d -p 80:80 docker/getting-started

You may have noticed that a few flags are used. Each flag has the following meaning:

- -d : run the container in detached mode (in the background)
- -p 80:80 : map port 80 of the host to port 80 in the container
- docker/getting-started : the image to use

**Pro tip**

Single-character flags can be combined to shorten the full command. For example, the command above can also be written as:

docker run -dp 80:80 docker/getting-started

Docker dashboard

Before going further in the tutorial, let's introduce the Docker dashboard, which shows a list of the containers running on your machine. The dashboard gives you quick access to container logs, lets you get a shell inside a container, and makes it easy to manage the container lifecycle (stop, remove, and so on).

To access the dashboard, follow the instructions for Mac (https://docs.docker.com/docker-for-mac/dashboard/) or Windows (https://docs.docker.com/docker-for-windows/dashboard/). If you open it now, you'll see this tutorial running. The container name (jolly_bouman below) is randomly generated, so yours will most likely be different.

tutorial-in-dashboard.png

What is a container?

We started a container, but what exactly is a container? Simply put, a container is just another process on your machine that has been isolated from all other processes on the host. That isolation leverages kernel namespaces and cgroups, features that have existed in Linux for a long time. Docker's contribution has been to make these capabilities approachable and easy to use.

What is a container image?

When a container runs, it uses an isolated filesystem. That filesystem is provided by the container image. Since the image contains the container's filesystem, it must include everything needed to run the application: all dependencies, configuration, scripts, binaries, and so on. The image also contains other configuration for the container, such as environment variables, the default command to run, and other metadata.

We'll talk more about image layers, best practices, and more later.

**Info**

If you're familiar with chroot, think of a container as an extended version of chroot, where the filesystem simply comes from the image. A container, however, adds much more powerful isolation than chroot provides.
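To make the analogy concrete, here's a rough sketch (not part of the tutorial; it assumes the alpine image and a Linux host with root access) that exports a container's filesystem and chroots into it:

mkdir rootfs
# create (but don't start) a container, then export its filesystem as a tar stream
docker export "$(docker create alpine)" | tar -C rootfs -xf -
# chroot gives the same "filesystem from the image" view, but none of the
# namespace/cgroup isolation a real container gets
sudo chroot rootfs /bin/sh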

The app we'll be using

From here on, we'll work with a simple list manager that runs on Node.js. Don't worry if you're not familiar with Node.js; no JavaScript knowledge is required.

Let's assume the development team is quite small and is building a simple app to demonstrate an MVP (Minimum Viable Product). There's no need to think about how it would work at scale or with multiple developers; the goal is simply to show how the app works and what it can do.

todo-list-sample.png

Get the app

Before you can run the application, you need its source code on your machine. In a real project you would typically clone it from a repository, but for this tutorial we have prepared a ZIP file containing the application, so we'll use that.

  1. Download the ZIP file (http://localhost/assets/app.zip), then open it and extract the contents.

  2. Once extracted, open the project in any editor. If you don't have one installed, use Visual Studio Code (https://code.visualstudio.com/). You should see package.json and two subdirectories (src and spec).

ide-screenshot.png

Create a container image for your app

We'll use a Dockerfile to build the application. A Dockerfile is a text-based script of instructions used to create a container image. If you've created Dockerfiles before, you may spot a few flaws in the Dockerfile below. We'll get to those later.

  1. Create a Dockerfile with the following contents in the directory containing package.json.
FROM node:12-alpine
WORKDIR /app
COPY . .
RUN yarn install --production
CMD ["node", "src/index.js"]

Check that the file is named exactly Dockerfile, with no extension such as .txt. Some editors append an extension automatically, which will cause an error in the next step.

  2. If you haven't already, open a terminal and change to the app directory containing the Dockerfile. Build the container image with the docker build command.
docker build -t getting-started .

This command used the Dockerfile to build a new container image. You may have noticed that a lot of "layers" were downloaded. That's because we instructed the builder to start from the node:12-alpine image, and since that image wasn't on the machine yet, it had to be downloaded first.

Once the image was downloaded, Docker copied in the application and used yarn to install its dependencies. The CMD instruction specifies the default command to run when a container is started from this image.

Finally, the -t flag tags the image. Think of it as giving the image a human-readable name. We named the image getting-started so we can refer to it when starting a container.

The . at the end of the docker build command indicates that Docker will look for the Dockerfile in the current directory.

Launch the app container

Now that we have an image, let's run the application. To do that, we use the docker run command (remember using it earlier?).

  1. Start the container using the docker run command and specify the name of the image you just created.
docker run -dp 3000:3000 getting-started

Remember the -d and -p flags? We started the new container in detached mode (running in the background) and mapped host port 3000 to container port 3000. Without the port mapping, you wouldn't be able to reach the application.

  2. After a few seconds, open http://localhost:3000 in your web browser. You should see the app.

todo-list-empty.png

  3. Add an item or two and confirm everything works as expected. You can mark items complete and delete items; the front end is successfully storing items in the back end. It's very easy, isn't it?

At this point, you have a running todo list manager with a few items. Let's now make some changes and learn about managing containers.
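As a quick sanity check (not part of the original steps), docker port prints a container's port mappings:

# show the port mappings of the running container
docker port <the-container-id>
# should print something like: 3000/tcp -> 0.0.0.0:3000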

If you take a look at the Docker dashboard, you'll see two containers running (this tutorial itself and your freshly launched app container).

dashboard-two-containers.png

Wrap up

In this short chapter, you learned the basics of building a container image and created a Dockerfile to do so. After building the image, you started a container and tried out the running app.

Next, let's learn how to modify the app and update the running app with a new image. Along the way, you'll also learn some useful commands.

Update the app

As a small feature request, the product team has asked us to change the "empty text" shown when the to-do list has no items. They would like it to read as follows:

You have no todo items yet! Add one above!

Easy, right? Let's make the change.

Update the source code

  1. Update line 56 of src/static/js/app.js to use the new text.
- <p className="text-center">No items yet! Add one above!</p>
+ <p className="text-center">You have no todo items yet! Add one above!</p>
  2. Build the updated image using the same command as before.
docker build -t getting-started .
  3. Start a new container using the updated code.
docker run -dp 3000:3000 getting-started

Uh oh! You probably got an error like this (the IDs will differ):

docker: Error response from daemon: driver failed programming external connectivity on endpoint laughing_burnell 
(bb242b2ca4d67eba76e79474fb36bb5125708ebdabd7f45c8eaf16caaabde9dd): Bind for 0.0.0.0:3000 failed: port is already allocated.

So what happened? The old container was still running, which prevented the new one from starting. The issue is that a given port can only be listened on by one process (containers included) on the machine, and the old container was already using port 3000. We need to remove the old container to resolve this error.

Replace the old container

To remove a container, it first needs to be stopped. Once stopped, it can be removed. There are two ways to remove the old container; pick whichever you prefer.

Remove the container with the CLI

  1. Get the ID of the container with the docker ps command.
docker ps
  2. Stop the container with the docker stop command.
# Replace <the-container-id> with the ID from docker ps
docker stop <the-container-id>
  3. Once the container has stopped, remove it with the docker rm command.
docker rm <the-container-id>

**Pro tip**

You can stop and remove a container with a single command by adding the "force" flag to docker rm.

Example: docker rm -f <the-container-id>

Delete the container using the Docker dashboard

Open the Docker dashboard and you can remove a container with just two clicks. It's much easier than looking up the container ID and removing it by hand.

  1. Open the dashboard and hover over the app's container to see the list of actions on the right.

  2. Click the trash can button to delete the container.

  3. Confirm the deletion and you're done.

dashboard-removing-container.png

Launch the updated app

  1. Launch the updated app.
docker run -dp 3000:3000 getting-started
  2. Reload your browser at http://localhost:3000 and you should see the updated text.

todo-list-updated-empty-text.png

Wrap up

We were able to update the app, but you may have noticed two things:

- All the items in our to-do list are gone! That's not a great app, and we'll talk about why shortly.
- A lot of steps were involved for such a small change. In an upcoming chapter, we'll see how to pick up code changes without having to rebuild and restart a container every time.

Before we talk about persistence, let's learn how to share an image with others.

Share the app

Now that we've built an image, let's share it! To share Docker images, you use a Docker registry. The default registry is Docker Hub, which is where all the images we've used so far have come from.

Creating a repository

To push an image, you first need to create a repository on Docker Hub.

  1. Go to Docker Hub (https://hub.docker.com/) and log in if necessary.

  2. Click the ** Create Repository ** button.

  3. Enter getting-started as the repository name. Make sure the visibility is set to Public.

  4. Click the ** Create ** button.

If you look at the right side of the page, you'll see an example Docker command: the command you need to run to push to this repository.

push-command.png

Push the image

  1. Try running the push command from Docker Hub on the command line. Note that your command should use your own namespace, not "docker".
$ docker push docker/getting-started
The push refers to repository [docker.io/docker/getting-started]
An image does not exist locally with the tag: docker/getting-started

Why did it fail? The push command looked for an image named docker/getting-started but couldn't find one. Running docker image ls confirms that no such image exists. To fix this, we need to "tag" the image we've built, giving it another name.

**Japanese translator's note**

The docker images command behaves the same as docker image ls. This is the result of a reorganization of the Docker CLI; docker image ls is the newer, recommended form. We'll encounter other pairs of equivalent commands later in this tutorial.

Reference: https://qiita.com/zembutsu/items/6e1ad18f0d548ce6c266

  1. Log in to Docker Hub using the docker login -u YOUR-USER-NAME command.

  2. Use the docker tag command to give the getting-started image a new name. Replace YOUR-USER-NAME with your Docker ID.

docker tag getting-started YOUR-USER-NAME/getting-started
  3. Run the push command again. If you copied it from Docker Hub, remove the tagname part, since we didn't add a tag to the image name. If you don't specify a tag, Docker uses a tag called latest.
docker push YOUR-USER-NAME/getting-started

Run the image on a new instance

Now that the image has been built and pushed to a registry, let's run this container image on a brand-new instance. For that, we'll use Play with Docker.

  1. Open Play with Docker (http://play-with-docker.com/) in your browser.

  2. Log in with your Docker Hub account.

  3. After logging in, click the "+ ADD NEW INSTANCE" link in the left sidebar (if you don't see it, widen your browser window a bit). After a few seconds, a terminal window opens in your browser.

pwd-add-new-instance.png

  4. In the terminal, start the app you just pushed.
docker run -dp 3000:3000 YOUR-USER-NAME/getting-started

Once the image has been pulled, it will start.

You'll see a badge that says 3000; click it to open the app with your change applied. You did it! If you don't see the 3000 badge, click the Open Port button and enter 3000.

Wrap up

In this chapter you learned how to share images by pushing them to a registry. Then we went to a brand-new instance and ran the freshly pushed image. This is common in CI pipelines: the pipeline builds the image and pushes it to a registry, and the latest version of the image is then available in production.

With that understood, let's return to the issue from the end of the previous chapter: all the to-do items disappeared whenever we restarted the app. That's obviously not a great UX (user experience), so let's learn how to keep data across restarts.

Persist the database

As you may have noticed, the to-do list is wiped clean every time the container launches. Why? Let's dig a little deeper into how containers work.

The container's filesystem

When a container runs, it uses the image's layers as its filesystem. Each container also gets its own "scratch space" for creating, updating, and deleting files. Changes made in one container won't be seen by another, even when they use the same image.

Take a look

To see this in action, let's start two containers and create a file in each. You'll find that a file created in one container isn't available in the other.

  1. Start a ubuntu container that creates a file named /data.txt containing a random number between 1 and 10000.

docker run -d ubuntu bash -c "shuf -i 1-10000 -n 1 -o /data.txt && tail -f /dev/null"

If you're curious about the command: we're starting a Bash shell and invoking two commands (joined with &&). The first part writes a single random number to /data.txt. The second command simply watches a file so the container keeps running.

  2. Let's exec into the container to see the output. In the dashboard, click the first action of the running ubuntu container.

dashboard-open-cli-ubuntu.png

You'll get a shell running inside the ubuntu container. Run the following command to display the contents of /data.txt, then close this terminal again.

cat /data.txt

If you'd rather use the command line, use docker exec instead. After getting the container ID with docker ps, print the file contents with the following command.

docker exec <container-id> cat /data.txt

You should see a random number.

  3. Now start another ubuntu container and check whether the same file exists.

docker run -it ubuntu ls /

There's no data.txt! It was written to the scratch space of the first container only.

  4. Remove the first container with the docker rm -f command.

Container volumes

As we've seen, each container starts from the image definition every time it starts. A container can create, update, and delete files, but those changes are lost when the container is removed, and all changes stay local to that container. With volumes, we can change all of this.

Volumes (https://docs.docker.com/storage/volumes/) provide a way to connect specific filesystem paths of a container back to the host machine. If you mount a directory in the container, changes to that directory are also seen on the host machine. If you then mount that same directory across container restarts, you'll see the same files.

There are two main types of volumes. We'll eventually use both, but let's start with **named volumes**.

Persist the todo data

The ToDo app stores its data in a SQLite database at /etc/todos/todo.db. Don't worry if you're not familiar with SQLite: it's simply a relational database that keeps all of its data in a single file. That isn't the best choice for large-scale applications, but it works fine for small demos. We'll cover switching to a different database engine later.

Since the database is a single file, if we can persist that file on the host and make it available to the next container, the app should be able to pick up where the previous one left off. By creating a volume and attaching it (often called "mounting") to the directory the data is stored in, we can persist the data. When the container writes to the todo.db file, the data is persisted to the host inside the volume.

As mentioned, we're going to use a **named volume**. Think of a named volume as simply a bucket of data. Docker manages the physical location on disk, so you only need to remember the name of the volume. Every time you use the volume, Docker makes sure the right data is provided.

  1. Create a volume with the docker volume create command.
docker volume create todo-db
  2. The ToDo app container you started earlier is still running without a persistent volume, so remove it using the dashboard or with docker rm -f <id>.

  3. Start the ToDo app container, this time adding the -v flag to specify a volume mount. We'll use the named volume and mount it to /etc/todos, which captures every file created at that path.

docker run -dp 3000:3000 -v todo-db:/etc/todos getting-started
  4. Once the container is up, open the app and add a few items to your to-do list.

items-added.png

  5. Remove the ToDo app container. Use the dashboard, or get the ID with docker ps and remove it with docker rm -f <id>.

  6. Start a new container using the same command as above.

  7. After confirming that your list is still there, remove the container and move on.

Hooray! You now know how to persist data.

**Pro tip**

Named volumes and bind mounts (more on these later) are the two main volume types supported out of the box by the Docker engine, but many volume driver plugins exist to support NFS, SFTP, NetApp, and more. This becomes especially important once you run containers on multiple hosts in a cluster environment such as Swarm or Kubernetes.

Learn more about volumes

People often ask, "Where does Docker actually store my data when I use a named volume?" If you want to know, use the docker volume inspect command.

docker volume inspect todo-db
[
    {
        "CreatedAt": "2019-09-26T02:18:36Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/todo-db/_data",
        "Name": "todo-db",
        "Options": {},
        "Scope": "local"
    }
]

The Mountpoint is the actual location on disk where the data is stored. Note that on most machines, you'll need root access to reach this directory from the host.

**Accessing volume data directly on Docker Desktop**

While running on Docker Desktop, Docker commands actually run inside a small virtual machine on your machine. If you wanted to look at the actual contents of the Mountpoint directory, you would first need to get inside that VM.
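For the curious, one commonly used trick (not an official step; it relies on the third-party justincormack/nsenter1 image) is to start a privileged container that joins the VM's namespaces:

# join PID 1's namespaces inside the Docker Desktop VM
docker run -it --rm --privileged --pid=host justincormack/nsenter1
# once inside the VM, the volume data is visible at the Mountpoint path:
ls /var/lib/docker/volumes/todo-db/_data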

Wrap up

At this point, we have a functional application that survives restarts. We can hopefully show it off to our investors and help them understand our vision.

However, rebuilding the image for every change takes a bit too long. There's a better way to make changes: bind mounts (the thing we hinted at earlier). Let's take a look.

Use bind mounts

In the previous chapter, we used a named volume to persist the database. Named volumes are great if we simply want to store data, since we don't have to worry about where the data lives.

With **bind mounts**, you control the exact mount point on the host. They can be used for data persistence too, but they're more often used to provide additional data to a container. When developing an app, you can bind-mount the source code into the container so it sees code changes immediately, responds to them, and lets you see the results right away.

For Node-based apps, nodemon is a great tool to watch for file changes and restart the application. Equivalent tools exist in most other languages and frameworks.
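As a rough illustration (assuming nodemon is declared as a dev dependency, as it is in this app), the dev workflow boils down to something like:

# install dependencies, including devDependencies such as nodemon
yarn install
# "yarn run dev" invokes nodemon, which restarts node whenever a watched file changes
npx nodemon src/index.js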

Volume type comparison table

Bind mounts and named volumes are the two main volume types available in the Docker engine. Additional volume drivers cover other use cases as well (SFTP, Ceph, NetApp, S3 (https://github.com/elementar/docker-s3-volume), and so on).

| | Named volumes | Bind mounts |
| --- | --- | --- |
| Host location | Docker chooses | You choose |
| Mount example (using -v) | my-volume:/usr/local/data | /path/to/data:/usr/local/data |
| Populates a new volume with container contents | Yes | No |
| Supports volume drivers | Yes | No |

Start a container in development mode

Let's start a container that supports a development workflow. It will do the following:

- mount our source code into the container
- install all dependencies, including "dev" dependencies
- start nodemon to watch for filesystem changes

So let's get started!

  1. Make sure the getting-started container you've been using isn't still running.

  2. Run the following command. We'll explain what it does afterwards.

docker run -dp 3000:3000 \
    -w /app -v "$(pwd):/app" \
    node:12-alpine \
    sh -c "yarn install && yarn run dev"

If you are using PowerShell, use the following command.

docker run -dp 3000:3000 `
    -w /app -v "$(pwd):/app" `
    node:12-alpine `
    sh -c "yarn install && yarn run dev"
Here's what the command does:

- -dp 3000:3000 : as before, run in detached mode and map host port 3000 to container port 3000
- -w /app : set the "working directory" that the command runs from
- -v "$(pwd):/app" : bind-mount the current directory from the host into /app in the container
- node:12-alpine : the image to use; note that it's the base image of our app's Dockerfile
- sh -c "yarn install && yarn run dev" : install all dependencies, then run yarn run dev, which starts nodemon

  3. You can watch the logs with the docker logs -f <container-id> command. When you see the following, you're ready to go.
docker logs -f <container-id>
$ nodemon src/index.js
[nodemon] 1.19.2
[nodemon] to restart at any time, enter `rs`
[nodemon] watching dir(s): *.*
[nodemon] starting `node src/index.js`
Using sqlite database at /etc/todos/todo.db
Listening on port 3000

When you're done watching the logs, exit with Ctrl+C.

  4. Now let's make a change to the app. In src/static/js/app.js, change the "Add Item" button so it simply says "Add". The change is on line 109.
-                         {submitting ? 'Adding...' : 'Add Item'}
+                         {submitting ? 'Adding...' : 'Add'}
  5. Simply refresh (or open) the page, and you should see the change in your browser almost immediately. The Node server takes a few seconds to restart, so if you get an error, try refreshing again after a moment.

updated-add-button.png

  6. Feel free to make any other changes you like. When you're done, stop the container and use docker build -t getting-started . to build the new image.

Using bind mounts is very common in local development. The advantage is that the development machine doesn't need any build tools or environments installed: a single docker run command pulls in the development environment, ready to use. We'll cover Docker Compose in a later chapter; it helps simplify commands with lots of flags.

Wrap up

We've persisted the database and can respond rapidly to the demands and wishes of investors and founders. But hold on, great news just arrived:

**Your project has been selected for future development!**

To prepare for commercialization, the database needs to migrate to something more scalable than SQLite. Simply put: keep the relational model and switch to MySQL. But how should we get MySQL running? How do we allow the containers to talk to each other? We'll cover that in the next chapter.

App with multiple containers

So far, we've worked with single-container apps. Now we want to add MySQL to the application. A common question is, "Where should MySQL run? Run it in the same container, or run it separately?" In general, each container should do one thing and do it well, for several reasons: you'll likely need to scale APIs and front ends differently than databases, separate containers let you version and update in isolation, and while you might run the database in a container locally, you may want a managed database service in production.

For these reasons (and more), we'll update the app to work like this:

multi-app-architecture.png

Container networking

Remember that containers run in isolation by default and know nothing about the other processes or containers on the same machine. So how do we allow one container to talk to another? The answer is **networking**. You don't need to be a network engineer; just remember this one rule:

If two containers are on the same network, they can talk to each other. If they aren't, they can't.

Start MySQL

There are two ways to put a container on a network: (1) assign the network when starting the container, or (2) connect an already-running container. This time we'll create the network first, then attach the MySQL container when it starts.
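For reference, the second method looks like this sketch, where <container-id> is an already-running container reported by docker ps:

# attach an existing, running container to the todo-app network
docker network connect todo-app <container-id>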

  1. Create a network.
docker network create todo-app
  2. Start the MySQL container and attach it to the network. We also define a few environment variables the database uses to initialize itself. (See the "Environment Variables" section in the MySQL Docker Hub listing.)
docker run -d \
    --network todo-app --network-alias mysql \
    -v todo-mysql-data:/var/lib/mysql \
    -e MYSQL_ROOT_PASSWORD=secret \
    -e MYSQL_DATABASE=todos \
    mysql:5.7

If you are using PowerShell, use the following command.

docker run -d `
    --network todo-app --network-alias mysql `
    -v todo-mysql-data:/var/lib/mysql `
    -e MYSQL_ROOT_PASSWORD=secret `
    -e MYSQL_DATABASE=todos `
    mysql:5.7

Notice the --network-alias flag? We'll come back to that in a bit.

**Pro tip**

Notice we're using a volume named todo-mysql-data mounted at /var/lib/mysql, which is where MySQL stores its data. We never ran docker volume create, though. Docker recognized that we wanted to use a named volume and created one automatically.
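You can verify this with docker volume ls (output abbreviated):

docker volume ls
# DRIVER    VOLUME NAME
# local     todo-mysql-data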

  3. To confirm the database is up and running, connect to it.
docker exec -it <mysql-container-id> mysql -p

When prompted for a password, enter **secret**. In the MySQL shell, list the databases and verify that the todos database exists.

mysql> SHOW DATABASES;

It should look like this.

+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
| todos              |
+--------------------+
5 rows in set (0.00 sec)

And there's the todos database!

Connect to MySQL

Now that MySQL is up and running, let's use it! But the question is: if we run another container on the same network, how do we find it? (Remember that each container has its own IP address.)

To figure this out, we'll use the nicolaka/netshoot container, which ships with many tools useful for troubleshooting and debugging network issues.

  1. Start a new container from the nicolaka/netshoot image. Make sure to attach it to the same network.
docker run -it --network todo-app nicolaka/netshoot
  2. Inside the container, use the dig command, a handy DNS tool. Look up the IP address for the hostname mysql.
dig mysql

You should see something like this:

; <<>> DiG 9.14.1 <<>> mysql
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 32162
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;mysql.             IN  A

;; ANSWER SECTION:
mysql.          600 IN  A   172.23.0.2

;; Query time: 0 msec
;; SERVER: 127.0.0.11#53(127.0.0.11)
;; WHEN: Tue Oct 01 23:47:24 UTC 2019
;; MSG SIZE  rcvd: 44

Looking at the "ANSWER SECTION", you can see an A record for mysql that resolves to 172.23.0.2 (your IP address will most likely differ). While mysql isn't normally a valid hostname, Docker was able to resolve it to the IP address of the container that has the network alias mysql (remember the --network-alias flag we used earlier?).

This means our ToDo app only needs to connect to a host named mysql and it can talk to the database. It doesn't get much simpler than that!

Run the app with MySQL

The ToDo app supports a few environment variables for configuring the MySQL connection:

- MYSQL_HOST : hostname of the running MySQL server
- MYSQL_USER : username for the connection
- MYSQL_PASSWORD : password for the connection
- MYSQL_DB : database to use once connected

** Caution **

Using environment variables for connection settings is generally fine in development, but it is **strongly discouraged** for applications running in production. Diogo Monica, a former security lead at Docker, wrote a great article explaining why.

A safer approach is to use the secret support provided by your container orchestration framework. In most cases, secrets are mounted as files into the running container. Many apps (including the MySQL image and the ToDo app) also support environment variables with a _FILE suffix that point to a file containing the value.

For example, if you set the MYSQL_PASSWORD_FILE variable, the app uses the contents of the referenced file as the connection password. Note that Docker itself does nothing to support these variables; the app has to know to look for the variable and read the contents of the file.
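As a hedged sketch of what that looks like with the MySQL image (the ./secrets directory and file name here are hypothetical), the earlier MySQL command could become:

# the host file contains only the root password (hypothetical path)
docker run -d \
    --network todo-app --network-alias mysql \
    -v todo-mysql-data:/var/lib/mysql \
    -v "$(pwd)/secrets:/run/secrets:ro" \
    -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mysql-root-password \
    -e MYSQL_DATABASE=todos \
    mysql:5.7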

With that out of the way, let's start the container.

  1. Specify each of the environment variables above, and attach the container to the app network.
docker run -dp 3000:3000 \
  -w /app -v "$(pwd):/app" \
  --network todo-app \
  -e MYSQL_HOST=mysql \
  -e MYSQL_USER=root \
  -e MYSQL_PASSWORD=secret \
  -e MYSQL_DB=todos \
  node:12-alpine \
  sh -c "yarn install && yarn run dev"

If you are using PowerShell, use the following command.

docker run -dp 3000:3000 `
  -w /app -v "$(pwd):/app" `
  --network todo-app `
  -e MYSQL_HOST=mysql `
  -e MYSQL_USER=root `
  -e MYSQL_PASSWORD=secret `
  -e MYSQL_DB=todos `
  node:12-alpine `
  sh -c "yarn install && yarn run dev"
  2. If you look at the container logs (docker logs <container-id>), you should see a message indicating that the MySQL database is being used.
# Previous log messages omitted
$ nodemon src/index.js
[nodemon] 1.19.2
[nodemon] to restart at any time, enter `rs`
[nodemon] watching dir(s): *.*
[nodemon] starting `node src/index.js`
Connected to mysql db at host mysql
Listening on port 3000
  3. Open the app in your browser and add a few items to your to-do list.

  4. Connect to the MySQL database and verify that the items are being written to it. The password is **secret**.

docker exec -ti <mysql-container-id> mysql -p todos

Then, within the MySQL shell, do the following:

mysql> select * from todo_items;
+--------------------------------------+--------------------+-----------+
| id                                   | name               | completed |
+--------------------------------------+--------------------+-----------+
| c906ff08-60e6-44e6-8f49-ed56a0853e85 | Do amazing things! |         0 |
| 2912a79e-8486-4bc3-a4c5-460793a575ab | Be awesome!        |         0 |
+--------------------------------------+--------------------+-----------+

Your table contents will differ, of course, since it contains your items. But you can see that they're stored here!

If you look at the Docker dashboard, two containers are running, but there's no indication that they belong to a single app. Let's see how to improve on that.

dashboard-multi-container-app.png

Wrap up

We now have an application that stores its data in an external database running in a separate container. We learned a bit about container networking and saw how services can be discovered using DNS.

However, you may be feeling a bit overwhelmed by everything needed to launch this app: create a network, start containers, specify all the environment variables, expose ports, and more. That's a lot to remember, and it makes things harder to pass on to someone else.

In the next chapter, we'll talk about Docker Compose. With Docker Compose, you can more easily share your application stack and launch it with just one simple command.

Use Docker Compose

Docker Compose was developed to make it easier to define and share multi-container apps. With Compose, you can define a service by creating a YAML file and start or stop it with a single command.

The big advantage of using Compose is that you can define an application stack in a file and save it in the root of your (versioned) project repository so that anyone can easily contribute to your project. In fact, there are many such projects on GitHub and GitLab.

So, let's dive in!

Install Docker Compose

If you installed Docker Desktop/Toolbox on Windows or Mac, you already have Docker Compose. Play-with-Docker instances also come with Docker Compose. If you're on a Linux machine, install Docker Compose by following these instructions (https://docs.docker.com/compose/install/).

After the installation is complete, you should be able to check the version information by running the following command.

docker-compose version

Create a Compose file

  1. Create a file called docker-compose.yml in the root of your app's project.

  2. In the Compose file, start by defining the schema version. In most cases it's best to use the latest supported version. See the Compose file reference (https://docs.docker.com/compose/compose-file/) for compatibility between schema versions and Docker engine versions.

version: "3.7"
  3. Next, define the list of services (or containers) you want to run as part of the app.
version: "3.7"

services:

Now let's migrate the services into the Compose file, one at a time.

Define your app's services

As a reminder, this was the command we used to define our app container.

docker run -dp 3000:3000 \
  -w /app -v "$(pwd):/app" \
  --network todo-app \
  -e MYSQL_HOST=mysql \
  -e MYSQL_USER=root \
  -e MYSQL_PASSWORD=secret \
  -e MYSQL_DB=todos \
  node:12-alpine \
  sh -c "yarn install && yarn run dev"

And if you were using PowerShell, the command looked like this:

docker run -dp 3000:3000 `
  -w /app -v "$(pwd):/app" `
  --network todo-app `
  -e MYSQL_HOST=mysql `
  -e MYSQL_USER=root `
  -e MYSQL_PASSWORD=secret `
  -e MYSQL_DB=todos `
  node:12-alpine `
  sh -c "yarn install && yarn run dev"
  1. First, define the service entry and the image for the container. You can pick any name for the service; the name automatically becomes a network alias, which will come in handy when we define the MySQL service.
version: "3.7"

services:
  app:
    image: node:12-alpine
  2. Typically the command appears close to the image definition, although there's no ordering requirement. So let's add it to the file.
version: "3.7"

services:
  app:
    image: node:12-alpine
    command: sh -c "yarn install && yarn run dev"
  3. Next, migrate the -p 3000:3000 part of the command to ports. We'll use the short syntax (https://docs.docker.com/compose/compose-file/#short-syntax-1) here, but there's also a more verbose long syntax (https://docs.docker.com/compose/compose-file/#long-syntax-1) available.
version: "3.7"

services:
  app:
    image: node:12-alpine
    command: sh -c "yarn install && yarn run dev"
    ports:
      - 3000:3000
  4. Next, migrate the working directory (-w /app) and the volume mapping (-v "$(pwd):/app") to working_dir and volumes. Volumes also come in short and long (https://docs.docker.com/compose/compose-file/#long-syntax-3) syntax.

One advantage of volume definitions in Docker Compose is that you can use paths relative to the current directory.

version: "3.7"

services:
  app:
    image: node:12-alpine
    command: sh -c "yarn install && yarn run dev"
    ports:
      - 3000:3000
    working_dir: /app
    volumes:
      - ./:/app
  5. Finally, migrate the environment variable definitions using the environment key.
version: "3.7"

services:
  app:
    image: node:12-alpine
    command: sh -c "yarn install && yarn run dev"
    ports:
      - 3000:3000
    working_dir: /app
    volumes:
      - ./:/app
    environment:
      MYSQL_HOST: mysql
      MYSQL_USER: root
      MYSQL_PASSWORD: secret
      MYSQL_DB: todos

Define a MySQL service

Now let's define the MySQL service. The command we used for that container was:

docker run -d \
  --network todo-app --network-alias mysql \
  -v todo-mysql-data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=secret \
  -e MYSQL_DATABASE=todos \
  mysql:5.7

If you are using PowerShell, use the following command.

docker run -d `
  --network todo-app --network-alias mysql `
  -v todo-mysql-data:/var/lib/mysql `
  -e MYSQL_ROOT_PASSWORD=secret `
  -e MYSQL_DATABASE=todos `
  mysql:5.7
  1. First, define the new service and name it mysql so it automatically gets the network alias. Then specify the image to use.
version: "3.7"

services:
  app:
    # The app service definition
  mysql:
    image: mysql:5.7
  2. Next, define the volume mapping. When we ran the container with docker run, the named volume was created automatically. That doesn't happen when running with Compose, however. We need to define the volume in the top-level volumes: section and then specify the mount point in the service config. By providing only the volume name, the default options are used, but there are many more options available (https://docs.docker.com/compose/compose-file/#volume-configuration-reference).
version: "3.7"

services:
  app:
    # The app service definition
  mysql:
    image: mysql:5.7
    volumes:
      - todo-mysql-data:/var/lib/mysql

volumes:
  todo-mysql-data:
  3. Finally, specify the environment variables.
version: "3.7"

services:
  app:
    # The app service definition
  mysql:
    image: mysql:5.7
    volumes:
      - todo-mysql-data:/var/lib/mysql
    environment: 
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: todos

volumes:
  todo-mysql-data:

The completed docker-compose.yml looks like this:

version: "3.7"

services:
  app:
    image: node:12-alpine
    command: sh -c "yarn install && yarn run dev"
    ports:
      - 3000:3000
    working_dir: /app
    volumes:
      - ./:/app
    environment:
      MYSQL_HOST: mysql
      MYSQL_USER: root
      MYSQL_PASSWORD: secret
      MYSQL_DB: todos

  mysql:
    image: mysql:5.7
    volumes:
      - todo-mysql-data:/var/lib/mysql
    environment: 
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: todos

volumes:
  todo-mysql-data:

Launch the application stack

Now that we have our docker-compose.yml, all that's left is to start it up!

  1. First, make sure no other copies of the app/db are running (use docker ps and docker rm -f <ids>).

  2. Start the application stack using the docker-compose up command. Add the -d flag to make it run in the background.

docker-compose up -d

When you run it, you should see something like this:

Creating network "app_default" with the default driver
Creating volume "app_todo-mysql-data" with default driver
Creating app_app_1   ... done
Creating app_mysql_1 ... done

Notice that the volume was created, as well as a network! By default, Docker Compose automatically creates a network for the application stack (which is why we didn't define one in the Compose file).
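You can see the auto-created network with docker network ls (output abbreviated):

docker network ls
# NETWORK ID     NAME          DRIVER    SCOPE
# ...            app_default   bridge    local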

**Japanese translator's note** — when docker-compose up showed the following error:

[ERROR] [FATAL] InnoDB: Table flags are 0 in the data dictionary but the flags in file ./ibdata1 are 0x4800!

It was fixed by removing the volume with the command below. The error may have been caused by initially writing mysql:latest in docker-compose.yml by mistake.

docker volume rm app_todo-mysql-data
  3. Take a look at the logs with the docker-compose logs -f command. You'll see the logs from each service interleaved into a single stream, which is very useful when watching for timing-related issues. The -f flag "follows" the log, so the output is produced live as it's generated.

The actual output looks like this:

mysql_1  | 2019-10-03T03:07:16.083639Z 0 [Note] mysqld: ready for connections.
mysql_1  | Version: '5.7.27'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  MySQL Community Server (GPL)
app_1    | Connected to mysql db at host mysql
app_1    | Listening on port 3000

The service name at the start of each line (displayed in color) helps distinguish the messages. To view the logs of a specific service, add the service name to the end of the command (e.g. docker-compose logs -f app).

**Pro tip** — waiting for the database before starting the app

When the app starts up, it actually sits and waits for MySQL to be up and ready before connecting. Docker doesn't provide any built-in support for waiting until another container is fully up, running, and ready before starting a second container. For Node-based projects, you can use wait-port. Similar projects exist for other languages and frameworks.
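As a sketch of what that could look like for this app (using wait-port's host:port CLI syntax; untested here), the app service command could delay the dev server until MySQL accepts connections:

# wait until the mysql host accepts connections on port 3306, then start the app
sh -c "yarn install && npx wait-port mysql:3306 && yarn run dev"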

  4. Open the app and you should see it running. And hey, we're down to a single command!

Check the application stack on the Docker dashboard

If you look at the Docker dashboard, you'll see a group named **app**. This is the Docker Compose "project name", used to group the containers together. By default, the project name is the name of the directory containing docker-compose.yml.

dashboard-app-project-collapsed.png

If you expand the app group, you'll see the two containers defined in the Compose file. Their names follow the pattern <project-name>_<service-name>_<replica-number>, which makes it easy to see at a glance which container is the app and which is the mysql database.

dashboard-app-project-expanded.png

Stop all

All you have to do is run docker-compose down, or drag the whole app to the trash in the Docker dashboard. The containers will be stopped and the network removed.

**Removing volumes**

By default, docker-compose down does not remove the named volumes defined in the Compose file. If you want the volumes removed as well, add the --volumes flag.

In the Docker dashboard, deleting the app stack does not remove the volumes either.
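So, to tear down the stack and remove its named volumes in one go:

# stop the containers, remove the network, and also remove named volumes
docker-compose down --volumes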

Once you've torn everything down, you can switch to another project and simply run docker-compose up to start developing it. It doesn't get much simpler, does it?

Wrap up

In this chapter, you learned about Docker Compose and how it dramatically simplifies defining and sharing multi-service applications. We migrated the commands we'd been using into the Compose format and created a Compose file.

The tutorial is entering its final stage. However, the Dockerfile we've been using has a big problem, so let's cover some best practices for building images. Let's take a look.

Image building best practices

Image layers

Did you know you can inspect how an image was put together? Using the docker image history command, you can see the command that created each layer within an image.

  1. Use docker image history to see the layers of the getting-started image you created earlier in the tutorial.
docker image history getting-started

You should see something like this (probably a different ID):

IMAGE               CREATED             CREATED BY                                      SIZE                COMMENT
a78a40cbf866        18 seconds ago      /bin/sh -c #(nop)  CMD ["node" "src/index.j…    0B                  
f1d1808565d6        19 seconds ago      /bin/sh -c yarn install --production            85.4MB              
a2c054d14948        36 seconds ago      /bin/sh -c #(nop) COPY dir:5dc710ad87c789593…   198kB               
9577ae713121        37 seconds ago      /bin/sh -c #(nop) WORKDIR /app                  0B                  
b95baba1cfdb        13 days ago         /bin/sh -c #(nop)  CMD ["node"]                 0B                  
<missing>           13 days ago         /bin/sh -c #(nop)  ENTRYPOINT ["docker-entry…   0B                  
<missing>           13 days ago         /bin/sh -c #(nop) COPY file:238737301d473041…   116B                
<missing>           13 days ago         /bin/sh -c apk add --no-cache --virtual .bui…   5.35MB              
<missing>           13 days ago         /bin/sh -c #(nop)  ENV YARN_VERSION=1.21.1      0B                  
<missing>           13 days ago         /bin/sh -c addgroup -g 1000 node     && addu…   74.3MB              
<missing>           13 days ago         /bin/sh -c #(nop)  ENV NODE_VERSION=12.14.1     0B                  
<missing>           13 days ago         /bin/sh -c #(nop)  CMD ["/bin/sh"]              0B                  
<missing>           13 days ago         /bin/sh -c #(nop) ADD file:e69d441d729412d24…   5.59MB   

Each line represents a layer in the image; the base is shown at the bottom and the newest layer at the top. This view lets you quickly inspect each layer and spot the large ones.

  2. Did you notice that some lines are truncated? You can get the full output by adding the --no-trunc flag. (Funny how you use a truncated flag to get untruncated output, isn't it?)
docker image history --no-trunc getting-started

Layer caching

Now that you've seen layers in action, there's an important lesson to learn for reducing the build times of your container images:

Once a layer changes, all downstream layers have to be recreated as well.

Let's take a look at the Dockerfile we were using again.

FROM node:12-alpine
WORKDIR /app
COPY . .
RUN yarn install --production
CMD ["node", "src/index.js"]

Going back to the image history output, we can see that each command in the Dockerfile becomes a new layer in the image. You may remember that the yarn dependencies were reinstalled every time we made a change to the image. Is there a way to fix this? It doesn't make much sense to reinstall the same dependencies on every single build.

To fix this, we need to restructure the Dockerfile so the dependencies are cached. For Node-based applications, dependencies are defined in the package.json file. So what if we copy only that file first, install the dependencies, and then copy everything else? Then yarn only reinstalls the dependencies when package.json changes. Isn't that great?

  1. Update the Dockerfile so that package.json is copied first, the dependencies are installed, and then everything else is copied in.
FROM node:12-alpine
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --production
COPY . .
CMD ["node", "src/index.js"]
  2. Create a file called .dockerignore in the same folder as the Dockerfile, with the following contents.
node_modules

Using a .dockerignore file lets you selectively copy only the files your image needs. See here (https://docs.docker.com/engine/reference/builder/#dockerignore-file) for more details. In this case, the node_modules folder is excluded from the second COPY step; otherwise it would overwrite the files generated by the RUN step. For more details on why this is recommended for Node.js applications, and for other best practices, see Dockerizing a Node.js web app (https://nodejs.org/en/docs/guides/nodejs-docker-webapp/).

  3. Build the new image with docker build.
docker build -t getting-started .

You should see something like this:

Sending build context to Docker daemon  219.1kB
Step 1/6 : FROM node:12-alpine
---> b0dc3a5e5e9e
Step 2/6 : WORKDIR /app
---> Using cache
---> 9577ae713121
Step 3/6 : COPY package.json yarn.lock ./
---> bd5306f49fc8
Step 4/6 : RUN yarn install --production
---> Running in d53a06c9e4c2
yarn install v1.17.3
[1/4] Resolving packages...
[2/4] Fetching packages...
info [email protected]: The platform "linux" is incompatible with this module.
info "[email protected]" is an optional dependency and failed compatibility check. Excluding it from installation.
[3/4] Linking dependencies...
[4/4] Building fresh packages...
Done in 10.89s.
Removing intermediate container d53a06c9e4c2
---> 4e68fbc2d704
Step 5/6 : COPY . .
---> a239a11f68d8
Step 6/6 : CMD ["node", "src/index.js"]
---> Running in 49999f68df8f
Removing intermediate container 49999f68df8f
---> e709c03bc597
Successfully built e709c03bc597
Successfully tagged getting-started:latest

You'll see that all the layers were rebuilt. That's expected, since we changed the Dockerfile quite a bit.

  4. Edit the src/static/index.html file (change the <title> to "The Awesome Todo App").

  5. Build the Docker image again with docker build -t getting-started .. This time, the output should look a little different:

Sending build context to Docker daemon  219.1kB
Step 1/6 : FROM node:12-alpine
---> b0dc3a5e5e9e
Step 2/6 : WORKDIR /app
---> Using cache
---> 9577ae713121
Step 3/6 : COPY package.json yarn.lock ./
---> Using cache
---> bd5306f49fc8
Step 4/6 : RUN yarn install --production
---> Using cache
---> 4e68fbc2d704
Step 5/6 : COPY . .
---> cccde25a3d9a
Step 6/6 : CMD ["node", "src/index.js"]
---> Running in 2be75662c150
Removing intermediate container 2be75662c150
---> 458e5c6f080c
Successfully built 458e5c6f080c
Successfully tagged getting-started:latest

First off, you should notice that the build was much faster! You can see that steps 1-4 all say Using cache. The build cache is now in use, which also makes pushing, pulling, and updating images much faster.

Multi-stage build

We won't go into much detail in this tutorial, but multi-stage builds are an incredibly powerful tool for building images in multiple stages. They offer two main advantages: you can separate build-time dependencies from runtime dependencies, and you can reduce the overall image size by shipping only what the app needs to run.

Maven/Tomcat example

When building Java-based applications, you need a JDK to compile the source code to Java bytecode, but the JDK isn't needed in production. You might also use tools like Maven or Gradle to build the app; those aren't needed in the final image either. This is where multi-stage builds come in handy.

FROM maven AS build
WORKDIR /app
COPY . .
RUN mvn package

FROM tomcat
COPY --from=build /app/target/file.war /usr/local/tomcat/webapps 

In this example, the first stage, named build, performs the actual Java build with Maven. The second stage, starting at FROM tomcat, copies in the files from the build stage. The final image consists only of the last stage (which can be overridden with the --target flag).
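For example, to build only up to the build stage (the tag name here is just illustrative), you could run:

# stop at the stage named "build" instead of producing the final image
docker build --target build -t app-build .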

React example

When building React applications, you need a Node environment to compile the JS code (typically JSX), SASS stylesheets, and so on into static HTML, JS, and CSS. If you don't need server-side rendering, you don't even need a Node environment in production: the static resources can simply be shipped in a static nginx container.

FROM node:12 AS build
WORKDIR /app
COPY package* yarn.lock ./
RUN yarn install
COPY public ./public
COPY src ./src
RUN yarn run build

FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html

Here, we use the node:12 image to perform the build (maximizing layer caching) and then copy the output into an nginx container. Much better, right?

Wrap up

With a little understanding of how images are structured, your images will build faster and you'll ship smaller changes. Multi-stage builds also help reduce the overall image size and improve the security of the final container by decoupling build-time and runtime dependencies.

What to do next

The tutorial is over, but there's much more to learn about containers. We won't go deep here, but here are a few areas to look at next.

Container orchestration

Running containers in production can be hard. You don't want to log into a machine and simply run docker run or docker-compose up. Why not? Well, what happens if a container dies? How do you scale across several machines? Container orchestration solves this problem. Tools like Kubernetes, Swarm, Nomad, and ECS all tackle it, each in a slightly different way.

The general idea is that you have "managers" that receive an **expected state**, such as "I want two instances of my web application running with port 80 open." The managers watch all the machines in the cluster and delegate work to "workers". The managers also watch for changes (such as a container stopping) and then act so that the **actual state** matches the expected state.
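As a small taste (a sketch assuming a Swarm cluster has already been initialized with docker swarm init), the expected state above could be expressed like this:

# ask the managers to keep two replicas of the web app running, with port 80 published
docker service create --name web --replicas 2 -p 80:80 docker/getting-started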

Cloud Native Computing Foundation projects

The CNCF (Cloud Native Computing Foundation) is a vendor-neutral home for a variety of open source projects, including Kubernetes, Prometheus, Envoy, Linkerd, NATS, and more. You can browse the graduated and incubating projects, as well as the whole CNCF landscape, on the CNCF site. Many of these projects help solve problems around monitoring, logging, security, image registries, messaging, and more.

So if you're new to the container landscape and cloud-native application development, check these out. Join the communities, ask questions, and keep learning. We're excited to have you!
