Language Translation with TorchText
Original: https://pytorch.org/tutorials/beginner/torchtext_translation_tutorial.html
This tutorial shows how to use several convenience classes of torchtext to preprocess data from a well-known dataset containing sentences in both English and German, and use it to train a sequence-to-sequence model with attention that can translate German sentences into English.
It is based on this tutorial by PyTorch community member Ben Trevett and was created by Seth Weidman with Ben's permission.
By the end of this tutorial, you will be able to:
- Preprocess sentences into a commonly-used format for NLP modeling using the following torchtext convenience classes: Field and TranslationDataset

Field and TranslationDataset
torchtext has utilities for creating datasets that can be easily iterated through for the purpose of building a language translation model. One key class is the Field, which specifies how each sentence should be preprocessed; another is the TranslationDataset. torchtext has several such datasets; in this tutorial we will use the Multi30k dataset, which contains about 30,000 sentences (averaging about 13 words in length) in both English and German.
Note: the tokenization in this tutorial requires Spacy. We use Spacy because it provides strong support for tokenization in languages other than English. torchtext provides a basic_english tokenizer and supports other English tokenizers (e.g. Moses), but for language translation, where multiple languages are required, Spacy is your best bet.
To run this tutorial, first install spacy using pip or conda. Next, download the raw data for the English and German Spacy tokenizers:
python -m spacy download en
python -m spacy download de
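If you want to confirm that the tokenizers installed correctly, a quick sanity check like the following can help. This snippet is not part of the original tutorial; it assumes the 'de' and 'en' Spacy shortcut names downloaded above, and the sample sentences are arbitrary.

import spacy

# Load the tokenizers downloaded above (assumes the 'de'/'en' shortcut links exist).
spacy_de = spacy.load('de')
spacy_en = spacy.load('en')

# Tokenize one sample sentence per language to verify everything works.
print([tok.text for tok in spacy_de('Zwei Männer stehen am Herd.')])
print([tok.text for tok in spacy_en('Two men are standing at the stove.')])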
With Spacy installed, the following code will tokenize each of the sentences in the TranslationDataset based on the tokenizer defined in the Field:
from torchtext.datasets import Multi30k
from torchtext.data import Field, BucketIterator

SRC = Field(tokenize = "spacy",
            tokenizer_language="de",
            init_token = '<sos>',
            eos_token = '<eos>',
            lower = True)

TRG = Field(tokenize = "spacy",
            tokenizer_language="en",
            init_token = '<sos>',
            eos_token = '<eos>',
            lower = True)

train_data, valid_data, test_data = Multi30k.splits(exts = ('.de', '.en'),
                                                    fields = (SRC, TRG))
Out:
downloading training.tar.gz
downloading validation.tar.gz
downloading mmt_task1_test2016.tar.gz
Now that we have defined train_data, we can see an extremely useful feature of torchtext's Field: the build_vocab method allows us to create the vocabulary associated with each language:
SRC.build_vocab(train_data, min_freq = 2)
TRG.build_vocab(train_data, min_freq = 2)
Once these lines of code have been run, SRC.vocab.stoi will be a dictionary with the tokens in the vocabulary as keys and their corresponding indices as values; SRC.vocab.itos will be the same mapping with the keys and values swapped. We won't make extensive use of this fact in this tutorial, but it will likely be useful in other NLP tasks you encounter.
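For instance, here is a small sketch of the two mappings in action. It assumes the vocabularies built above and that a common token such as 'ein' appears in the source vocabulary:

print(len(SRC.vocab), len(TRG.vocab))   # vocabulary sizes for German and English

idx = SRC.vocab.stoi['ein']             # token -> index ('ein' is assumed to be in the vocab)
print(idx, SRC.vocab.itos[idx])         # index -> token, recovering 'ein'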
BucketIterator
The last torchtext-specific feature we will use is the BucketIterator, which is easy to use since it takes a TranslationDataset as its first argument. Specifically, as the docs say, it defines an iterator that batches examples of similar lengths together, minimizing the amount of padding needed while producing freshly shuffled batches for each new epoch. See pool for the bucketing procedure used.
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

BATCH_SIZE = 128

train_iterator, valid_iterator, test_iterator = BucketIterator.splits(
    (train_data, valid_data, test_data),
    batch_size = BATCH_SIZE,
    device = device)
These iterators can be called just like DataLoaders; below, in the train and evaluate functions, they are called simply with:
for i, batch in enumerate(iterator):
Each batch then has src and trg attributes:
src = batch.src
trg = batch.trg
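For example, a small shape-inspection sketch (not part of the original tutorial; it assumes the default Field settings, so tensors are sequence-first):

batch = next(iter(train_iterator))

# Both tensors hold vocabulary indices, padded to the longest sentence in the batch.
print(batch.src.shape)   # [src_len, batch_size], e.g. torch.Size([27, 128])
print(batch.trg.shape)   # [trg_len, batch_size], e.g. torch.Size([25, 128])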
Defining our nn.Module and Optimizer

That's mostly it from a torchtext perspective: with the dataset built and the iterators defined, the rest of this tutorial simply defines our model as an nn.Module, along with an Optimizer, and then trains it.
Specifically, our model follows the architecture described here (you can find a significantly more commented version here).
Note: this model is just an example model that can be used for language translation; we choose it because it is a standard model for the task, not because it is the recommended model to use for translation. As you are likely aware, state-of-the-art models are currently based on Transformers; you can see PyTorch's capabilities for implementing Transformer layers here. In particular, the "attention" used in the model below is different from the multi-headed self-attention present in a Transformer model.
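For comparison only, here is a minimal sketch (not part of this tutorial's model) of PyTorch's built-in multi-headed self-attention, the building block of Transformer layers; the sizes are hypothetical:

import torch
import torch.nn as nn

mha = nn.MultiheadAttention(embed_dim=32, num_heads=4)   # hypothetical sizes
x = torch.randn(10, 128, 32)                             # [seq_len, batch_size, embed_dim]

# Self-attention: the query, key, and value are all the same sequence.
attn_output, attn_weights = mha(x, x, x)
print(attn_output.shape)    # torch.Size([10, 128, 32])
print(attn_weights.shape)   # torch.Size([128, 10, 10]), averaged over the heads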
import random
from typing import Tuple

import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch import Tensor


class Encoder(nn.Module):
    def __init__(self,
                 input_dim: int,
                 emb_dim: int,
                 enc_hid_dim: int,
                 dec_hid_dim: int,
                 dropout: float):
        super().__init__()

        self.input_dim = input_dim
        self.emb_dim = emb_dim
        self.enc_hid_dim = enc_hid_dim
        self.dec_hid_dim = dec_hid_dim
        self.dropout = dropout

        self.embedding = nn.Embedding(input_dim, emb_dim)

        self.rnn = nn.GRU(emb_dim, enc_hid_dim, bidirectional = True)

        self.fc = nn.Linear(enc_hid_dim * 2, dec_hid_dim)

        self.dropout = nn.Dropout(dropout)

    def forward(self,
                src: Tensor) -> Tuple[Tensor]:

        embedded = self.dropout(self.embedding(src))

        outputs, hidden = self.rnn(embedded)

        hidden = torch.tanh(self.fc(torch.cat((hidden[-2,:,:], hidden[-1,:,:]), dim = 1)))

        return outputs, hidden
class Attention(nn.Module):
    def __init__(self,
                 enc_hid_dim: int,
                 dec_hid_dim: int,
                 attn_dim: int):
        super().__init__()

        self.enc_hid_dim = enc_hid_dim
        self.dec_hid_dim = dec_hid_dim

        self.attn_in = (enc_hid_dim * 2) + dec_hid_dim

        self.attn = nn.Linear(self.attn_in, attn_dim)

    def forward(self,
                decoder_hidden: Tensor,
                encoder_outputs: Tensor) -> Tensor:

        src_len = encoder_outputs.shape[0]

        repeated_decoder_hidden = decoder_hidden.unsqueeze(1).repeat(1, src_len, 1)

        encoder_outputs = encoder_outputs.permute(1, 0, 2)

        energy = torch.tanh(self.attn(torch.cat((
            repeated_decoder_hidden,
            encoder_outputs),
            dim = 2)))

        attention = torch.sum(energy, dim=2)

        return F.softmax(attention, dim=1)
class Decoder(nn.Module):
    def __init__(self,
                 output_dim: int,
                 emb_dim: int,
                 enc_hid_dim: int,
                 dec_hid_dim: int,
                 dropout: int,
                 attention: nn.Module):
        super().__init__()

        self.emb_dim = emb_dim
        self.enc_hid_dim = enc_hid_dim
        self.dec_hid_dim = dec_hid_dim
        self.output_dim = output_dim
        self.dropout = dropout
        self.attention = attention

        self.embedding = nn.Embedding(output_dim, emb_dim)

        self.rnn = nn.GRU((enc_hid_dim * 2) + emb_dim, dec_hid_dim)

        self.out = nn.Linear(self.attention.attn_in + emb_dim, output_dim)

        self.dropout = nn.Dropout(dropout)

    def _weighted_encoder_rep(self,
                              decoder_hidden: Tensor,
                              encoder_outputs: Tensor) -> Tensor:

        a = self.attention(decoder_hidden, encoder_outputs)

        a = a.unsqueeze(1)

        encoder_outputs = encoder_outputs.permute(1, 0, 2)

        weighted_encoder_rep = torch.bmm(a, encoder_outputs)

        weighted_encoder_rep = weighted_encoder_rep.permute(1, 0, 2)

        return weighted_encoder_rep

    def forward(self,
                input: Tensor,
                decoder_hidden: Tensor,
                encoder_outputs: Tensor) -> Tuple[Tensor]:

        input = input.unsqueeze(0)

        embedded = self.dropout(self.embedding(input))

        weighted_encoder_rep = self._weighted_encoder_rep(decoder_hidden,
                                                          encoder_outputs)

        rnn_input = torch.cat((embedded, weighted_encoder_rep), dim = 2)

        output, decoder_hidden = self.rnn(rnn_input, decoder_hidden.unsqueeze(0))

        embedded = embedded.squeeze(0)
        output = output.squeeze(0)
        weighted_encoder_rep = weighted_encoder_rep.squeeze(0)

        output = self.out(torch.cat((output,
                                     weighted_encoder_rep,
                                     embedded), dim = 1))

        return output, decoder_hidden.squeeze(0)
class Seq2Seq(nn.Module):
    def __init__(self,
                 encoder: nn.Module,
                 decoder: nn.Module,
                 device: torch.device):
        super().__init__()

        self.encoder = encoder
        self.decoder = decoder
        self.device = device

    def forward(self,
                src: Tensor,
                trg: Tensor,
                teacher_forcing_ratio: float = 0.5) -> Tensor:

        batch_size = src.shape[1]
        max_len = trg.shape[0]
        trg_vocab_size = self.decoder.output_dim

        outputs = torch.zeros(max_len, batch_size, trg_vocab_size).to(self.device)

        encoder_outputs, hidden = self.encoder(src)

        # first input to the decoder is the <sos> token
        output = trg[0,:]

        for t in range(1, max_len):
            output, hidden = self.decoder(output, hidden, encoder_outputs)
            outputs[t] = output
            teacher_force = random.random() < teacher_forcing_ratio
            top1 = output.max(1)[1]
            output = (trg[t] if teacher_force else top1)

        return outputs
INPUT_DIM = len(SRC.vocab)
OUTPUT_DIM = len(TRG.vocab)
# ENC_EMB_DIM = 256
# DEC_EMB_DIM = 256
# ENC_HID_DIM = 512
# DEC_HID_DIM = 512
# ATTN_DIM = 64
# ENC_DROPOUT = 0.5
# DEC_DROPOUT = 0.5

ENC_EMB_DIM = 32
DEC_EMB_DIM = 32
ENC_HID_DIM = 64
DEC_HID_DIM = 64
ATTN_DIM = 8
ENC_DROPOUT = 0.5
DEC_DROPOUT = 0.5

enc = Encoder(INPUT_DIM, ENC_EMB_DIM, ENC_HID_DIM, DEC_HID_DIM, ENC_DROPOUT)

attn = Attention(ENC_HID_DIM, DEC_HID_DIM, ATTN_DIM)

dec = Decoder(OUTPUT_DIM, DEC_EMB_DIM, ENC_HID_DIM, DEC_HID_DIM, DEC_DROPOUT, attn)

model = Seq2Seq(enc, dec, device).to(device)


def init_weights(m: nn.Module):
    for name, param in m.named_parameters():
        if 'weight' in name:
            nn.init.normal_(param.data, mean=0, std=0.01)
        else:
            nn.init.constant_(param.data, 0)


model.apply(init_weights)

optimizer = optim.Adam(model.parameters())


def count_parameters(model: nn.Module):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)


print(f'The model has {count_parameters(model):,} trainable parameters')
Out:
The model has 1,856,685 trainable parameters
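Before training, it can be reassuring to push a batch of random token indices through the untrained model and confirm the output shape. This is a quick check of our own, not part of the original tutorial:

# Random indices standing in for a tiny batch of 2 sentences.
src_dummy = torch.randint(0, INPUT_DIM, (7, 2)).to(device)    # [src_len=7, batch_size=2]
trg_dummy = torch.randint(0, OUTPUT_DIM, (9, 2)).to(device)   # [trg_len=9, batch_size=2]

with torch.no_grad():
    out = model(src_dummy, trg_dummy)

print(out.shape)   # torch.Size([9, 2, OUTPUT_DIM]); position 0 stays all zeros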
Note: when scoring the performance of a language translation model in particular, we have to tell the nn.CrossEntropyLoss function to ignore the indices where the target is simply padding.
PAD_IDX = TRG.vocab.stoi['<pad>']
criterion = nn.CrossEntropyLoss(ignore_index=PAD_IDX)
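To see what ignore_index does, you can compare the criterion above against a plain nn.CrossEntropyLoss on a few fabricated target positions, two of which are padding. This small illustration is our own; the non-padding target indices are arbitrary:

logits = torch.randn(4, OUTPUT_DIM)                # predictions for 4 target positions
targets = torch.tensor([5, 7, PAD_IDX, PAD_IDX])   # the last two positions are padding

plain = nn.CrossEntropyLoss()
print(plain(logits, targets))       # averages over all 4 positions, padding included
print(criterion(logits, targets))   # averages over the 2 non-padding positions only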
Finally, we can train and evaluate this model:
import math
import time


def train(model: nn.Module,
          iterator: BucketIterator,
          optimizer: optim.Optimizer,
          criterion: nn.Module,
          clip: float):

    model.train()

    epoch_loss = 0

    for _, batch in enumerate(iterator):

        src = batch.src
        trg = batch.trg

        optimizer.zero_grad()

        output = model(src, trg)

        output = output[1:].view(-1, output.shape[-1])
        trg = trg[1:].view(-1)

        loss = criterion(output, trg)

        loss.backward()

        torch.nn.utils.clip_grad_norm_(model.parameters(), clip)

        optimizer.step()

        epoch_loss += loss.item()

    return epoch_loss / len(iterator)
def evaluate(model: nn.Module,
             iterator: BucketIterator,
             criterion: nn.Module):

    model.eval()

    epoch_loss = 0

    with torch.no_grad():

        for _, batch in enumerate(iterator):

            src = batch.src
            trg = batch.trg

            output = model(src, trg, 0)  # turn off teacher forcing

            output = output[1:].view(-1, output.shape[-1])
            trg = trg[1:].view(-1)

            loss = criterion(output, trg)

            epoch_loss += loss.item()

    return epoch_loss / len(iterator)


def epoch_time(start_time: int,
               end_time: int):
    elapsed_time = end_time - start_time
    elapsed_mins = int(elapsed_time / 60)
    elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
    return elapsed_mins, elapsed_secs
N_EPOCHS = 10
CLIP = 1

best_valid_loss = float('inf')

for epoch in range(N_EPOCHS):

    start_time = time.time()

    train_loss = train(model, train_iterator, optimizer, criterion, CLIP)
    valid_loss = evaluate(model, valid_iterator, criterion)

    end_time = time.time()

    epoch_mins, epoch_secs = epoch_time(start_time, end_time)

    print(f'Epoch: {epoch+1:02} | Time: {epoch_mins}m {epoch_secs}s')
    print(f'\tTrain Loss: {train_loss:.3f} | Train PPL: {math.exp(train_loss):7.3f}')
    print(f'\t Val. Loss: {valid_loss:.3f} | Val. PPL: {math.exp(valid_loss):7.3f}')

test_loss = evaluate(model, test_iterator, criterion)

print(f'| Test Loss: {test_loss:.3f} | Test PPL: {math.exp(test_loss):7.3f} |')
Out:
Epoch: 01 | Time: 0m 35s
Train Loss: 5.667 | Train PPL: 289.080
Val. Loss: 5.201 | Val. PPL: 181.371
Epoch: 02 | Time: 0m 35s
Train Loss: 4.968 | Train PPL: 143.728
Val. Loss: 5.096 | Val. PPL: 163.375
Epoch: 03 | Time: 0m 35s
Train Loss: 4.720 | Train PPL: 112.221
Val. Loss: 4.989 | Val. PPL: 146.781
Epoch: 04 | Time: 0m 35s
Train Loss: 4.586 | Train PPL: 98.094
Val. Loss: 4.841 | Val. PPL: 126.612
Epoch: 05 | Time: 0m 35s
Train Loss: 4.430 | Train PPL: 83.897
Val. Loss: 4.809 | Val. PPL: 122.637
Epoch: 06 | Time: 0m 35s
Train Loss: 4.331 | Train PPL: 75.997
Val. Loss: 4.797 | Val. PPL: 121.168
Epoch: 07 | Time: 0m 35s
Train Loss: 4.240 | Train PPL: 69.434
Val. Loss: 4.694 | Val. PPL: 109.337
Epoch: 08 | Time: 0m 35s
Train Loss: 4.116 | Train PPL: 61.326
Val. Loss: 4.714 | Val. PPL: 111.452
Epoch: 09 | Time: 0m 35s
Train Loss: 4.004 | Train PPL: 54.815
Val. Loss: 4.563 | Val. PPL: 95.835
Epoch: 10 | Time: 0m 36s
Train Loss: 3.922 | Train PPL: 50.519
Val. Loss: 4.452 | Val. PPL: 85.761
| Test Loss: 4.456 | Test PPL: 86.155 |
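As a follow-up, here is a minimal greedy-decoding sketch showing how the trained model could be used to translate a single German sentence. It is not part of the original tutorial; it assumes the SRC, TRG, model, and device objects defined above plus the 'de' Spacy model downloaded earlier, and the sample sentence is arbitrary:

import spacy

spacy_de = spacy.load('de')   # same German tokenizer as used by the SRC Field

def translate_sentence(sentence: str, max_len: int = 50):
    model.eval()
    # Tokenize and numericalize the source sentence the same way the Field does.
    tokens = [SRC.init_token] + [tok.text.lower() for tok in spacy_de(sentence)] + [SRC.eos_token]
    src_indexes = [SRC.vocab.stoi[t] for t in tokens]
    src = torch.LongTensor(src_indexes).unsqueeze(1).to(device)   # [src_len, 1]

    with torch.no_grad():
        encoder_outputs, hidden = model.encoder(src)

    trg_indexes = [TRG.vocab.stoi[TRG.init_token]]
    for _ in range(max_len):
        trg_tensor = torch.LongTensor([trg_indexes[-1]]).to(device)   # last predicted token
        with torch.no_grad():
            output, hidden = model.decoder(trg_tensor, hidden, encoder_outputs)
        pred_token = output.argmax(1).item()   # greedy: pick the most likely next token
        trg_indexes.append(pred_token)
        if pred_token == TRG.vocab.stoi[TRG.eos_token]:
            break

    return [TRG.vocab.itos[i] for i in trg_indexes[1:]]

print(translate_sentence('ein mann geht die straße entlang .'))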
Next steps

- Check out the rest of Ben Trevett's tutorials using torchtext here.
- Stay tuned for a tutorial using other torchtext features along with nn.Transformer for language modeling via next-word prediction!
Total running time of the script: (6 minutes 10.266 seconds)