A Lightning-Fast Introduction to PyTorch

I tried to learn PyTorch on my own, stumbled over various things along the way, and summarized the result here. Specifically, I translated and slightly improved part of the official PyTorch tutorial over Golden Week. If you work through it in order, I think you can cover the basics in a short time. For those who got stuck, or who want to learn by writing the code themselves, each chapter ends with a link to summary code, so please make use of it.

Features of PyTorch

- Syntax close to Chainer
- Define-by-run: the backpropagation graph is built as the forward computation runs
- You can write relatively freely without the framework complaining (subjective)
- Easy to write networks whose structure changes dynamically
- Assumes batch processing (the DataLoader concept)
- Fast execution (cf. the PyTorch implementation of DeepPose, linked in the references)

Importing modules

import numpy as np                    # Used for Tensor <-> ndarray conversion below
import torch                          # Core module
from torch.autograd import Variable   # For automatic differentiation
import torch.nn as nn                 # For network construction
import torch.optim as optim           # Optimizers
import torch.nn.functional as F       # Assorted functions for networks
import torch.utils.data               # Dataset loading utilities
import torchvision                    # Image-related utilities
from torchvision import datasets, models, transforms  # Image datasets and transforms

A rough overview of PyTorch

Basics of PyTorch

- PyTorch performs its operations on a type called Tensor

x = torch.Tensor(5, 3)  # Define an uninitialized 5x3 Tensor
y = torch.rand(5, 3)    # Define a 5x3 Tensor filled with random numbers
z = x + y               # Ordinary arithmetic also works

- To use PyTorch, all variables must be converted to Tensor
- Tensor -> numpy: tensor.numpy()
- numpy -> Tensor: torch.from_numpy(ndarray)

x = np.random.rand(5, 3)  # numpy ndarray
y = torch.from_numpy(x)   # numpy -> Tensor
z = y.numpy()             # Tensor -> numpy
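
Note that torch.from_numpy shares memory with the original ndarray rather than copying it, so changing one side changes the other. A minimal sketch:

a = np.zeros(3)
t = torch.from_numpy(a)  # t shares memory with a
a[0] = 1.0
print(t)                 # the change made through a is visible in t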

- To make a Tensor differentiable, it must additionally be wrapped in a Variable.
- When using Variables, use torch's own functions so that automatic differentiation works.

x = torch.rand(5, 3)
y = Variable(x)
z = torch.pow(y,2) + 2 #y_i**2 + 2
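
To actually compute gradients, create the Variable with requires_grad=True, call backward() on a scalar result, and read the gradient from .grad. A minimal sketch of the full round trip:

x = Variable(torch.ones(2, 2), requires_grad=True)
y = (torch.pow(x, 2) + 2).sum()  # scalar result
y.backward()                     # run backpropagation
print(x.grad)                    # dy/dx = 2x, so every entry is 2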

Data acquisition

Data passed to PyTorch must take the form Tensor (input), Tensor (target). TensorDataset is provided to wrap the data and its labels together at the same time, and PyTorch's DataLoader then handles the batch processing.

train = torch.utils.data.TensorDataset(torch.from_numpy(X_train), torch.from_numpy(y_train))
train_loader = torch.utils.data.DataLoader(train, batch_size=100, shuffle=True)
test = torch.utils.data.TensorDataset(torch.from_numpy(X_test), torch.from_numpy(y_test))
test_loader = torch.utils.data.DataLoader(test, batch_size=100, shuffle=True)

You can also have the data shuffled every epoch by setting shuffle=True, as above.
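
The loaders are then iterated over one mini-batch at a time. A minimal sketch (assuming the train_loader defined above):

for batch_idx, (data, target) in enumerate(train_loader):
    # data and target each contain batch_size=100 samples
    print(data.size(), target.size())
    break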

For image-related processing you can use torchvision: a transform can uniformly apply operations such as conversion to Tensor, normalization, and cropping, and standard datasets such as CIFAR-10 can be downloaded and read directly.

# Image transformation pipeline (applied uniformly to every image)
transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

# Load the CIFAR-10 trainset before the Tensor transformation is applied
rawtrainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True)

# Load the CIFAR-10 trainset and testset
# transform= applies the transformation defined above
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
                                       download=True, transform=transform)

# Wrap the datasets in a DataLoader -> batches are assigned and shuffled in one step
# batch_size specifies the batch size
# num_workers specifies how many worker processes load the data (default: the main process only)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=2)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
                                         shuffle=False, num_workers=2)
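
As a quick sanity check, one batch pulled from trainloader has the expected CIFAR-10 shapes (a sketch; dataiter.next() follows the old iterator style used by this era of the tutorial):

dataiter = iter(trainloader)
images, labels = dataiter.next()
print(images.size())  # torch.Size([4, 3, 32, 32]) with batch_size=4
print(labels.size())  # torch.Size([4])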

Model definition

Define the model as a class, as in the example below; it is enough to implement __init__ and forward.

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(784,500)
        self.fc2 = nn.Linear(500, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)  # raw scores; CrossEntropyLoss used below applies log_softmax itself
        return x

# Instantiate the model
model = Net()
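
A quick forward-pass check with a dummy input (a sketch; this Net expects inputs flattened to 784 dimensions, e.g. 28x28 images):

dummy = Variable(torch.randn(1, 784))  # one flattened 28x28 image
out = model(dummy)
print(out.size())                      # torch.Size([1, 10])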

Training

Set the Loss function and optimizer as follows.

# Specify the loss function
criterion = nn.CrossEntropyLoss()

# Specify the optimizer
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)

Training runs as below. (Note that the Net defined above expects flattened 784-dimensional inputs, so the model must match the shape of the data you feed it.) The flow will look much the same for most models, so I include it for reference.

# Training
# Specify the number of epochs
for epoch in range(2):  # loop over the dataset multiple times

    # Running total of the loss
    running_loss = 0.0

    for i, data in enumerate(trainloader):

        # Split the batch into inputs and labels
        inputs, labels = data

        # Wrap them in Variable
        inputs, labels = Variable(inputs), Variable(labels)

        # Zero the parameter gradients
        optimizer.zero_grad()

        # Forward + backward + optimize
        outputs = model(inputs)

        # Cross-entropy between the outputs and the labels
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # Print statistics
        running_loss += loss.data[0]
        if i % 2000 == 1999:    # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training')
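
After training, test-set accuracy can be checked in the same style. A sketch, assuming the test inputs already match the shape the model expects:

correct = 0
total = 0
for data in testloader:
    images, labels = data
    outputs = model(Variable(images))
    _, predicted = torch.max(outputs.data, 1)  # index of the highest score per sample
    total += labels.size(0)
    correct += (predicted == labels).sum()
print('Accuracy: %d %%' % (100 * correct / total))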

The code up to this point is also available at the link below, for reference: Pytorch tutorial

Transferring and saving models

Model loading

Models such as ResNet are bundled by default, so they can simply be loaded.

# Load a pretrained ResNet-18
model_ft = models.resnet18(pretrained=True)

Fine tuning

You can also freeze the pretrained model and replace only its final layer.

# Freeze all parameters of the pretrained model
# (model_conv is a pretrained model loaded in the same way as above)
for param in model_conv.parameters():
    param.requires_grad = False

# Replace only the final layer of the model (the new layer is trainable by default)
num_ftrs = model_conv.fc.in_features
model_conv.fc = nn.Linear(num_ftrs, 2)
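
Since the frozen parameters have requires_grad=False, pass only the new final layer's parameters to the optimizer. A sketch following the same pattern as above:

# Only model_conv.fc's parameters are optimized; the rest stay frozen
optimizer_conv = optim.SGD(model_conv.fc.parameters(), lr=0.001, momentum=0.9)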

Save and load the model

Saving the model

torch.save(model.state_dict(), 'model.pth')

Model loading

param = torch.load('model.pth')
model = Net()  # the model class must be defined and instantiated before loading
model.load_state_dict(param)

Sample code for transfer learning can be found here: Pytorch transfer learning

Customization

Writing your own activation function

Just define a class as below and implement forward and backward.

class MyReLU(torch.autograd.Function):

    # Only the forward activation and the backward gradient need to be written
    def forward(self, input):

        # Save the input for use in backward
        self.save_for_backward(input)

        # The ReLU itself
        # x.clamp(min=0) <=> max(x, 0)
        return input.clamp(min=0)

    # Backpropagation: just return the gradient information
    def backward(self, grad_output):

        # Retrieve the saved tensor
        input, = self.saved_tensors

        # Clone so grad_output is not modified through a shared reference
        grad_input = grad_output.clone()

        # Gradient is 0 where input < 0, passed through unchanged elsewhere
        grad_input[input < 0] = 0
        return grad_input
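
With the old-style Function API used here, you apply the function by instantiating it. A minimal sketch:

x = Variable(torch.randn(5), requires_grad=True)
y = MyReLU()(x)       # forward pass through the custom function
y.sum().backward()    # backward uses the custom gradient
print(x.grad)         # zero wherever x < 0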

Writing your own loss function

Define a class for the loss function, implement __init__ and forward, and it will work.

class TripletMarginLoss(nn.Module):

    def __init__(self, margin):
        super(TripletMarginLoss, self).__init__()
        self.margin = margin

    def forward(self, anchor, positive, negative):
        dist = torch.sum(
            torch.pow((anchor - positive), 2) - torch.pow((anchor - negative), 2),
            dim=1) + self.margin
        dist_hinge = torch.clamp(dist, min=0.0)  # equivalent to max(dist, 0.0)
        loss = torch.mean(dist_hinge)
        return loss
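
Usage is then the same as for a built-in loss. A sketch with random 128-dimensional embeddings (the sizes are only for illustration):

triplet_loss = TripletMarginLoss(margin=1.0)
anchor = Variable(torch.randn(8, 128))
positive = Variable(torch.randn(8, 128))
negative = Variable(torch.randn(8, 128))
loss = triplet_loss(anchor, positive, negative)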

Dynamic network

Because PyTorch is define-by-run, you can rearrange layers with conditional branches and loops inside forward.

import random  # used to vary the number of hidden layers

class DynamicNet(torch.nn.Module):

    # Layer definitions
    def __init__(self, D_in, H, D_out):
        super(DynamicNet, self).__init__()
        self.input_linear = torch.nn.Linear(D_in, H)
        self.middle_linear = torch.nn.Linear(H, H)
        self.output_linear = torch.nn.Linear(H, D_out)

    # Apply the middle layer a random number of times (0 to 3)
    def forward(self, x):
        h_relu = self.input_linear(x).clamp(min=0)
        for _ in range(random.randint(0, 3)):
            h_relu = self.middle_linear(h_relu).clamp(min=0)
        y_pred = self.output_linear(h_relu)
        return y_pred
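
Instantiation and the forward call look exactly like a static network; only the depth varies from call to call. A minimal sketch (the sizes are arbitrary):

net = DynamicNet(D_in=64, H=100, D_out=10)
x = Variable(torch.randn(32, 64))
y_pred = net(x)  # 0 to 3 hidden layers are applied on this particular call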

Working code for this section is here: Customize pytorch


PyTorch is genuinely easy to use. In Japan it seems Chainer is everywhere, but I find PyTorch very easy to write, fast, and convenient. Everyone should give it a try.

Referenced sites

- Pytorch super introduction (http://qiita.com/miyamotok0105/items/1fd1d5c3532b174720cd): covers everything from the background behind PyTorch to the fundamentals

- Practice Pytorch (http://qiita.com/perrying/items/857df46bb6cdc3047bd8): another Japanese introduction to PyTorch, and quite well organized

- PyTorch: Tutorial Japanese translation (http://caffe.classcat.com/2017/04/14/pytorch-tutorial-tensor/): a Japanese translation of the PyTorch tutorial; possibly machine translated, and hard to read

- I tried implementing DeepPose with PyTorch (http://qiita.com/ynaka81/items/85659dff4d1c2c593f21): a PyTorch implementation of DeepPose

- triplet loss function (http://docs.chainer.org/en/stable/_modules/chainer/functions/loss/triplet.html): Chainer's triplet loss; almost the same notation works in PyTorch
