Language Processing 100 Knock 2020 [Answers 00-79]

This article is a continuation of Language Processing 100 Knock 2020 [Chapter 7: Word Vectors].

It covers the neural-network problems of Chapter 8 (70-79).

Link

I have included only the code in this article. Please refer to the link below for the problem statements and explanations of how to solve them.

Language Processing 100 Knock 2020 Chapter 8: Neural Networks

Chapter 8: Neural Networks

70. Features by summing word vectors

import pandas as pd
import gensim
import numpy as np
train = pd.read_csv('train.txt',sep='\t',header=None)
valid = pd.read_csv('valid.txt',sep='\t',header=None)
test = pd.read_csv('test.txt',sep='\t',header=None)
# Pretrained 300-dimensional Google News word2vec vectors
model = gensim.models.KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)

# Map the category labels (b/t/e/m) to integer ids
d = {'b':0, 't':1, 'e':2, 'm':3}
y_train = train.iloc[:,0].replace(d)
y_train.to_csv('y_train.txt',header=False, index=False)
y_valid = valid.iloc[:,0].replace(d)
y_valid.to_csv('y_valid.txt',header=False, index=False)
y_test = test.iloc[:,0].replace(d)
y_test.to_csv('y_test.txt',header=False, index=False)

def write_X(file_name, df):
    # Average the word2vec vectors of the words in each headline
    with open(file_name, 'w') as f:
        for text in df.iloc[:, 1]:
            vectors = []
            for word in text.split():
                if word in model.vocab:  # gensim < 4.0; use model.key_to_index in gensim 4.x
                    vectors.append(model[word])
            if len(vectors) == 0:
                vector = np.zeros(300)
            else:
                vectors = np.array(vectors)
                vector = vectors.mean(axis=0)
            vector = vector.astype(str).tolist()  # np.str is removed in recent NumPy
            output = ' '.join(vector) + '\n'
            f.write(output)
write_X('X_train.txt', train)
write_X('X_valid.txt', valid)
write_X('X_test.txt', test)
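
As a quick sanity check (my own addition, not part of the original answer), you can reload one of the files written above and confirm that each line is a 300-dimensional vector:

import numpy as np

# Sanity check: reload the features written by write_X above
X_check = np.loadtxt('X_train.txt', delimiter=' ')
print(X_check.shape)  # expected: (number of training examples, 300)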

71. Prediction with a single-layer neural network

import torch
import numpy as np

base = ''  # path prefix for the files from problem 70; adjust if they live elsewhere
X_train = np.loadtxt(base + 'X_train.txt', delimiter=' ')
X_train = torch.tensor(X_train, dtype=torch.float32)
W = torch.randn(300, 4)  # random weights: 300-d features -> 4 classes
softmax = torch.nn.Softmax(dim=1)
print(softmax(torch.matmul(X_train[:1], W)))
print(softmax(torch.matmul(X_train[:4], W)))
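
Each row of the softmax output is a probability distribution over the four categories. As a quick check (my own addition), the rows should sum to 1:

probs = softmax(torch.matmul(X_train[:4], W))
print(probs.sum(dim=1))  # each entry should be (numerically) 1.0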

72. Computing the loss and gradient

y_train = np.loadtxt(base + 'y_train.txt')
y_train = torch.tensor(y_train, dtype=torch.int64)
loss = torch.nn.CrossEntropyLoss()
print(loss(torch.matmul(X_train[:1], W), y_train[:1]))
print(loss(torch.matmul(X_train[:4], W), y_train[:4]))

ans = []  # manual check: mean of -log(probability assigned to the true class)
for s, i in zip(softmax(torch.matmul(X_train[:4], W)), y_train[:4]):
    ans.append(-np.log(s[i]))
print(np.mean(ans))
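
The problem also asks for the gradient. A minimal sketch, assuming the same shapes as above: recreate W with requires_grad=True and let autograd fill in W.grad:

# Fresh 300 x 4 weight matrix, this time tracked by autograd
W = torch.randn(300, 4, requires_grad=True)
l = loss(torch.matmul(X_train[:4], W), y_train[:4])
l.backward()    # backpropagate the mean cross-entropy
print(W.grad)   # gradient of the loss with respect to W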

73. Training with stochastic gradient descent

from torch.utils.data import TensorDataset, DataLoader
class LogisticRegression(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(300, 4),
        )
    def forward(self, X):
        return self.net(X)

model = LogisticRegression()
ds = TensorDataset(X_train, y_train)
# Create the DataLoader (batch_size=1 performs one update per example)
loader = DataLoader(ds, batch_size=1, shuffle=True)

loss_fn = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.net.parameters(), lr=1e-1)

for epoch in range(10):
    for xx, yy in loader:
        y_pred = model(xx)
        loss = loss_fn(y_pred, yy)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
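
With batch_size=1, each update uses a single example, which is stochastic gradient descent in the strict sense. As a quick check (my own addition), the post-training loss should sit well below the chance-level value of -log(1/4) ≈ 1.39 seen with random weights in problem 72:

with torch.no_grad():
    print(loss_fn(model(X_train[:4]), y_train[:4]))  # should be far below ~1.39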


74. Measuring the accuracy

def accuracy(pred, label):
  # Fraction of examples where the argmax class matches the gold label
  pred = np.argmax(pred.data.numpy(), axis=1)
  label = label.data.numpy()
  return (pred == label).mean()


X_valid = np.loadtxt(base+'X_valid.txt', delimiter=' ')
X_valid = torch.tensor(X_valid, dtype=torch.float32)
y_valid = np.loadtxt(base+'y_valid.txt')
y_valid = torch.tensor(y_valid, dtype=torch.int64)

pred = model(X_train)
print(accuracy(pred, y_train))
pred = model(X_valid)
print(accuracy(pred, y_valid))

75. Plotting the loss and accuracy

%load_ext tensorboard
!rm -rf ./runs
%tensorboard --logdir ./runs
from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter()
from torch.utils.data import TensorDataset, DataLoader
class LogisticRegression(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(300, 4),
        )
    def forward(self, X):
        return self.net(X)

model = LogisticRegression()
ds = TensorDataset(X_train, y_train)
# Create the DataLoader
loader = DataLoader(ds, batch_size=1, shuffle=True)

loss_fn = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.net.parameters(), lr=1e-1)

for epoch in range(10):
    for xx, yy in loader:
        y_pred = model(xx)
        loss = loss_fn(y_pred, yy)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    with torch.no_grad():
      y_pred = model(X_train)
      loss = loss_fn(y_pred, y_train) 
      writer.add_scalar('Loss/train', loss, epoch)
      writer.add_scalar('Accuracy/train', accuracy(y_pred,y_train), epoch)

      y_pred = model(X_valid)
      loss = loss_fn(y_pred, y_valid)
      writer.add_scalar('Loss/valid', loss, epoch)
      writer.add_scalar('Accuracy/valid', accuracy(y_pred,y_valid), epoch)
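
One small addition worth making once training finishes: flush the writer so the last epoch's events are guaranteed to reach TensorBoard (flush() is a standard SummaryWriter method):

writer.flush()  # push buffered scalars to disk so TensorBoard can display them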

76. Checkpoints

from torch.utils.data import TensorDataset, DataLoader
class LogisticRegression(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(300, 4),
        )
    def forward(self, X):
        return self.net(X)

model = LogisticRegression()
ds = TensorDataset(X_train, y_train)
# Create the DataLoader
loader = DataLoader(ds, batch_size=1, shuffle=True)

loss_fn = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.net.parameters(), lr=1e-1)

for epoch in range(10):
    for xx, yy in loader:
        y_pred = model(xx)
        loss = loss_fn(y_pred, yy)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    with torch.no_grad():
      y_pred = model(X_train)
      loss = loss_fn(y_pred, y_train) 
      writer.add_scalar('Loss/train', loss, epoch)
      writer.add_scalar('Accuracy/train', accuracy(y_pred,y_train), epoch)

      y_pred = model(X_valid)
      loss = loss_fn(y_pred, y_valid)
      writer.add_scalar('Loss/valid', loss, epoch)
      writer.add_scalar('Accuracy/valid', accuracy(y_pred,y_valid), epoch)

      # Save model weights and optimizer state each epoch (the output/ directory must already exist)
      torch.save(model.state_dict(), base+'output/'+str(epoch)+'.model')
      torch.save(optimizer.state_dict(), base+'output/'+str(epoch)+'.param')
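
To restart training from a checkpoint (a minimal sketch; epoch 9 is just an example), reload both state dicts:

# Hypothetical resume from the checkpoint saved at epoch 9
model = LogisticRegression()
model.load_state_dict(torch.load(base + 'output/9.model'))
optimizer = torch.optim.SGD(model.net.parameters(), lr=1e-1)
optimizer.load_state_dict(torch.load(base + 'output/9.param'))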

77. Mini-batches

import time
from torch.utils.data import TensorDataset, DataLoader
class LogisticRegression(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(300, 4),
        )
    def forward(self, X):
        return self.net(X)

model = LogisticRegression()
ds = TensorDataset(X_train, y_train)
loss_fn = torch.nn.CrossEntropyLoss()


ls_bs = [2**i for i in range(15)]  # batch sizes 1, 2, 4, ..., 16384
ls_time = []
for bs in ls_bs:
  loader = DataLoader(ds, batch_size=bs, shuffle=True)
  optimizer = torch.optim.SGD(model.net.parameters(), lr=1e-1)
  start = time.time()
  for epoch in range(1):
      for xx, yy in loader:
          y_pred = model(xx)
          loss = loss_fn(y_pred, yy)
          optimizer.zero_grad()
          loss.backward()
          optimizer.step()
  ls_time.append(time.time() - start)
print(ls_time)
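
To compare the timings visually (my own addition, using matplotlib), a log-scaled x-axis makes the trend easy to read:

import matplotlib.pyplot as plt

plt.plot(ls_bs, ls_time, marker='o')
plt.xscale('log', base=2)  # matplotlib >= 3.3; older versions spell this basex=2
plt.xlabel('batch size')
plt.ylabel('seconds per epoch')
plt.show()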

78. Training on a GPU

import time
from torch.utils.data import TensorDataset, DataLoader
class LogisticRegression(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(300, 4),
        )
    def forward(self, X):
        return self.net(X)

model = LogisticRegression()
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

ds = TensorDataset(X_train.to(device), y_train.to(device))
loss_fn = torch.nn.CrossEntropyLoss()

ls_bs = [2**i for i in range(15)]
ls_time = []
for bs in ls_bs:
  loader = DataLoader(ds, batch_size=bs, shuffle=True)
  optimizer = torch.optim.SGD(model.net.parameters(), lr=1e-1)
  start = time.time()
  for epoch in range(1):
      for xx, yy in loader:
          y_pred = model(xx)
          loss = loss_fn(y_pred, yy)
          optimizer.zero_grad()
          loss.backward()
          optimizer.step()
  if device.type == 'cuda':
      torch.cuda.synchronize()  # GPU kernels run asynchronously; wait before reading the clock
  ls_time.append(time.time() - start)
print(ls_time)
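
A quick check (my own addition) that the model actually moved to the GPU when one is available:

print(next(model.parameters()).device)  # cuda:0 when a GPU is available, cpu otherwise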

79. Multilayer neural network

from torch.utils.data import TensorDataset, DataLoader
class MLP(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(300, 32),
            torch.nn.ReLU(),
            torch.nn.Linear(32, 4),
        )
    def forward(self, X):
        return self.net(X)

model = MLP()
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

ds = TensorDataset(X_train.to(device), y_train.to(device))
loss_fn = torch.nn.CrossEntropyLoss()

loader = DataLoader(ds, batch_size=1024, shuffle=True)
optimizer = torch.optim.SGD(model.net.parameters(), lr=1e-1)
for epoch in range(100):
    for xx, yy in loader:
        y_pred = model(xx)
        loss = loss_fn(y_pred, yy)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    with torch.no_grad():
      y_pred = model(X_train.to(device))
      loss = loss_fn(y_pred, y_train.to(device))
      writer.add_scalar('Loss/train', loss, epoch)
      train_acc = accuracy(y_pred.cpu(), y_train.cpu())
      writer.add_scalar('Accuracy/train', train_acc, epoch)

      y_pred = model(X_valid.to(device))
      loss = loss_fn(y_pred, y_valid.to(device))
      writer.add_scalar('Loss/valid', loss, epoch)
      valid_acc = accuracy(y_pred.cpu(), y_valid.cpu())
      writer.add_scalar('Accuracy/valid', valid_acc, epoch)
print(train_acc, valid_acc)
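
Finally, the same accuracy helper can score the held-out test set (a minimal sketch; X_test.txt and y_test.txt were written in problem 70):

X_test = np.loadtxt(base + 'X_test.txt', delimiter=' ')
X_test = torch.tensor(X_test, dtype=torch.float32)
y_test = np.loadtxt(base + 'y_test.txt')
y_test = torch.tensor(y_test, dtype=torch.int64)

with torch.no_grad():
    y_pred = model(X_test.to(device))
print(accuracy(y_pred.cpu(), y_test))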
