I saw an article saying that Preferred Networks, the developer of Chainer, is switching to PyTorch. Many academic papers also seem to be written with PyTorch. PyTorch is a deep learning library developed and released by Facebook. Could PyTorch become the de facto standard? With that in mind, I'd like to start working through the official PyTorch tutorials (https://pytorch.org/tutorials/). This time it's the first one, "WHAT IS PYTORCH?" (https://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.html).
1. Google Colaboratory
2. Use Google Colaboratory with your Google account
3. What is PyTorch?
3.1. Creating a Tensor
3.2. Operation of Tensor
3.3. Tensor ⇔ NumPy
3.4. CUDA Tensor
4. Finally
History
1. Google Colaboratory
I think the PyTorch tutorials are easier to follow in the Google Colaboratory environment. Google Colaboratory is an environment provided by Google that lets you run Python in the browser. It requires a Google account, but by linking it with Google Drive you can open notebooks stored on the drive in Colaboratory, and read and write files there.
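As an aside, reading and writing files on Drive from a notebook is usually done by mounting the drive with the google.colab helper. A minimal sketch, not part of the tutorial itself:
from google.colab import drive
# Mount Google Drive under /content/drive; Colab prompts for authorization
drive.mount('/content/drive')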
Let's prepare an environment to use Colaboratory. When you log in with your Google account and access Colaboratory (https://colab.research.google.com/), a pop-up like the one below appears. Click "Cancel" here for now.
Then, the following "Welcome to Colaboratory" screen will be displayed. Click Copy to Drive.
If you open Google Drive, you will see that a "Colab Notebooks" folder has been created, as shown below.
You are now ready to use Google Colaboratory.
2. Use Google Colaboratory with your Google account
Go to the PyTorch tutorial page (https://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.html) and click "Run in Google Colab". You can proceed as it is, but click "Copy to Drive".
The file is then copied to the "Colab Notebooks" folder on Google Drive created earlier. After that, you can open the file in the Colaboratory environment by right-clicking it ⇒ "Open with" ⇒ "Google Colaboratory". By copying the tutorial to Google Drive and working on the copy, you can add your own notes as you proceed.
3. What is PyTorch?
The introduction has run long, but let's get on with the PyTorch tutorial. This time it's the very first one, "What is PyTorch?".
3.1. Creating a Tensor
PyTorch handles input data as Tensors. A tensor is an array of arbitrary dimension: a single number (scalar) is a 0th-order tensor, a one-dimensional array (vector) is a 1st-order tensor, a two-dimensional array (matrix) is a 2nd-order tensor, and a three-dimensional array is a 3rd-order tensor (see the small example after the output below). torch.empty creates an uninitialized Tensor.
from __future__ import print_function
import torch
x = torch.empty(5, 3)
print(x)
tensor([[2.8129e-35, 0.0000e+00, 0.0000e+00],
[0.0000e+00, 0.0000e+00, 0.0000e+00],
[0.0000e+00, 0.0000e+00, 2.8026e-45],
[0.0000e+00, 1.1210e-44, 0.0000e+00],
[1.4013e-45, 0.0000e+00, 0.0000e+00]])
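As a quick illustration of tensor orders (my own addition, using torch.tensor, which the tutorial introduces a little later), dim() returns the order of a tensor:
s = torch.tensor(1.0)                       # scalar: 0th-order tensor
v = torch.tensor([1.0, 2.0, 3.0])           # vector: 1st-order tensor
m = torch.tensor([[1.0, 2.0], [3.0, 4.0]])  # matrix: 2nd-order tensor
print(s.dim(), v.dim(), m.dim())            # 0 1 2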
torch.rand creates a tensor filled with random values drawn uniformly from [0, 1).
x = torch.rand(5, 3)
print(x)
tensor([[0.0129, 0.2380, 0.2860],
[0.0942, 0.6319, 0.9040],
[0.3457, 0.0503, 0.9295],
[0.2715, 0.8802, 0.6511],
[0.3274, 0.0322, 0.0097]])
torch.zeros creates a tensor whose elements are all zero. Here dtype=torch.long makes the elements 64-bit integers.
x = torch.zeros(5, 3, dtype=torch.long)
print(x)
tensor([[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0]])
You can create a tensor by passing a list to torch.tensor.
x = torch.tensor([5.5, 3])
print(x)
tensor([5.5000, 3.0000])
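Incidentally (my own note, not from the tutorial), the dtype is inferred from the data, and nested lists produce higher-order tensors:
m = torch.tensor([[1, 2], [3, 4]])
print(m.dtype)   # torch.int64, inferred from the integer literals
print(m.size())  # torch.Size([2, 2])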
new_ones creates a new Tensor of the specified size filled with ones, inheriting the dtype and device of the original unless you override them. randn_like creates a new Tensor with the same shape as its argument, filled with random values; since it is randn, the values follow the standard normal distribution (mean 0, standard deviation 1).
x = x.new_ones(5, 3, dtype=torch.double) # new_* methods take in sizes
print(x)
x = torch.randn_like(x, dtype=torch.float) # override dtype!
print(x)
tensor([[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.]], dtype=torch.float64)
tensor([[-1.7169, 0.0026, 0.0341],
[-0.8156, 0.0672, 0.6364],
[-0.3116, -0.1866, -1.3844],
[-0.2527, -0.9790, -1.6029],
[-0.9892, 0.4730, 0.4554]])
Get the size of a Tensor with size().
print(x.size())
torch.Size([5, 3])
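torch.Size is in fact a tuple, so it supports all tuple operations, such as unpacking:
rows, cols = x.size()  # unpack the size like an ordinary tuple
print(rows, cols)      # 5 3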
3.2. Operation of Tensor
Addition is performed element-wise. There are two notations: the + operator and torch.add.
y = torch.rand(5, 3)
print(x + y)
tensor([[-1.1761, 0.5185, 0.9026],
[-0.6358, 0.8661, 0.9583],
[ 0.4605, -0.0935, -0.7063],
[ 0.7133, -0.8798, -1.0570],
[-0.3332, 1.0319, 0.5329]])
print(torch.add(x, y))
tensor([[-1.1761, 0.5185, 0.9026],
[-0.6358, 0.8661, 0.9583],
[ 0.4605, -0.0935, -0.7063],
[ 0.7133, -0.8798, -1.0570],
[-0.3332, 1.0319, 0.5329]])
You can specify the output tensor with the out argument.
result = torch.empty(5, 3)
torch.add(x, y, out=result)
print(result)
tensor([[-1.1761, 0.5185, 0.9026],
[-0.6358, 0.8661, 0.9583],
[ 0.4605, -0.0935, -0.7063],
[ 0.7133, -0.8798, -1.0570],
[-0.3332, 1.0319, 0.5329]])
add_ adds in place, overwriting the tensor it is called on with the result. In general, any operation post-fixed with _ mutates its tensor in place (see the sketch after the output below).
# adds x to y
y.add_(x)
print(y)
tensor([[-1.1761, 0.5185, 0.9026],
[-0.6358, 0.8661, 0.9583],
[ 0.4605, -0.0935, -0.7063],
[ 0.7133, -0.8798, -1.0570],
[-0.3332, 1.0319, 0.5329]])
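A small sketch of other in-place operations (my own addition); t_ and copy_ follow the same trailing-underscore convention:
x2 = torch.ones(2, 3)
x2.t_()                      # in-place transpose: x2 is now 3x2
x2.copy_(torch.zeros(3, 2))  # in-place copy of another tensor's values
print(x2)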
NumPy-style indexing and slicing can be used as well.
print(x[:, 1])
tensor([ 0.0026, 0.0672, -0.1866, -0.9790, 0.4730])
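A couple more indexing examples (my own addition):
print(x[0, :])     # first row
print(x[1:3, :2])  # rows 1 and 2, first two columns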
view() reshapes a Tensor. If you specify -1 for one dimension, it is inferred from the other dimensions.
x = torch.randn(4, 4)
y = x.view(16)
z = x.view(-1, 8) # the size -1 is inferred from other dimensions
print(x.size(), y.size(), z.size())
torch.Size([4, 4]) torch.Size([16]) torch.Size([2, 8])
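Note (my own addition) that view() returns a new tensor that shares the same underlying data, so writing through the view also changes the original:
y[0] = 100.0
print(x[0, 0])  # also 100.0, because y is a view of x's storage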
For a Tensor with a single element, item() returns the value as a plain Python number.
x = torch.randn(1)
print(x)
print(x.item())
tensor([-1.5867])
-1.5867252349853516
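For a tensor with more than one element, item() raises an error; tolist() (my own note) converts the whole tensor to a nested Python list instead:
t = torch.tensor([1.0, 2.0])
print(t.tolist())  # [1.0, 2.0]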
3.3. Tensor ⇔ NumPy
Converting a Tensor to a NumPy array is done with .numpy(). Because the two share the same memory, changing one also changes the other.
a = torch.ones(5)
print(a)
tensor([1., 1., 1., 1., 1.])
b = a.numpy()
print(b)
[1. 1. 1. 1. 1.]
a.add_(1)
print(a)
print(b)
tensor([2., 2., 2., 2., 2.])
[2. 2. 2. 2. 2.]
Converting a NumPy array to a Tensor is done with torch.from_numpy().
import numpy as np
a = np.ones(5)
b = torch.from_numpy(a)
np.add(a, 1, out=a)
print(a)
print(b)
[2. 2. 2. 2. 2.]
tensor([2., 2., 2., 2., 2.], dtype=torch.float64)
3.4. CUDA Tensor
You can use the to() method to move a Tensor between devices. The code below moves tensors to a CUDA device. CUDA is NVIDIA's platform for GPU computing.
# let us run this cell only if CUDA is available
# We will use ``torch.device`` objects to move tensors in and out of GPU
if torch.cuda.is_available():
    device = torch.device("cuda")          # a CUDA device object
    y = torch.ones_like(x, device=device)  # directly create a tensor on GPU
    x = x.to(device)                       # or just use strings ``.to("cuda")``
    z = x + y
    print(z)
    print(z.to("cpu", torch.double))       # ``.to`` can also change dtype together!
tensor([0.9866], device='cuda:0')
tensor([0.9866], dtype=torch.float64)
For the above code to run on Colaboratory, you need to set "Hardware accelerator" to "GPU" via the "Runtime" menu ⇒ "Change runtime type".
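A quick way to confirm that the GPU runtime is active (my own addition):
print(torch.cuda.is_available())          # True once the GPU runtime is enabled
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # name of the GPU Colab assigned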
4. Finally
That's the content of PyTorch's first tutorial, "What is PyTorch?". Next time, I'd like to proceed to the second tutorial, "AUTOGRAD: AUTOMATIC DIFFERENTIATION".
History
2020/02/23 First edition released
2020/02/28 Added link to the next article