Neural Machine Translation with Code (Part 2)

2020-01-02

Editor | sunlei    Publisher | ATYUN

Code


In this post, we will use a dataset of German-to-English phrase pairs as the basis for language-learning flashcards.

The dataset is available from the ManyThings.org website, with examples drawn from the Tatoeba Project. It consists of German and English phrase pairs and is intended for use with the Anki flashcard software.

The dataset we will use in this tutorial can be downloaded here:

German-English: deu-eng.zip

Download the dataset into your current working directory and unzip it.
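The training script further below expects three pickle files of cleaned, already-split sentence pairs (english-german-both.pkl, english-german-train.pkl and english-german-test.pkl) rather than the raw download. As a minimal preparation sketch, assuming the unzipped file is named deu.txt, that each line holds an English phrase and its German translation separated by a tab, and that the cleaning choices (lowercasing, stripping punctuation and non-printable characters, keeping a 10,000-pair subset) mirror the earlier part of this series, the files could be built like this:

# prepare the pickle files expected by the training script (sketch; the
# deu.txt file name and column layout are assumptions, adjust to your download)
import re
import string
from pickle import dump
from unicodedata import normalize
from numpy import array
from numpy.random import shuffle

def load_pairs(filename):
  # each line: English phrase <tab> German phrase (any extra columns ignored)
  with open(filename, mode='rt', encoding='utf-8') as f:
    lines = f.read().strip().split('\n')
  return [line.split('\t')[:2] for line in lines]

def clean_pairs(pairs):
  # lowercase, strip punctuation and non-printable characters
  cleaned = list()
  re_print = re.compile('[^%s]' % re.escape(string.printable))
  table = str.maketrans('', '', string.punctuation)
  for pair in pairs:
    clean_pair = list()
    for sentence in pair:
      sentence = normalize('NFD', sentence).encode('ascii', 'ignore').decode('utf-8')
      tokens = [re_print.sub('', w.translate(table)) for w in sentence.lower().split()]
      clean_pair.append(' '.join(tokens))
    cleaned.append(clean_pair)
  return array(cleaned)

pairs = clean_pairs(load_pairs('deu.txt'))
# keep a modest subset, shuffle, then split 90/10 into train and test
dataset = pairs[:10000, :]
shuffle(dataset)
train, test = dataset[:9000], dataset[9000:]
dump(dataset, open('english-german-both.pkl', 'wb'))
dump(train, open('english-german-train.pkl', 'wb'))
dump(test, open('english-german-test.pkl', 'wb'))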

The full training script (Python) follows; it loads the prepared pickle files, fits the tokenizers, defines the encoder-decoder model and trains it:
from pickle import load  # needed by load_clean_sentences below
from numpy import array
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.utils import to_categorical
from keras.utils.vis_utils import plot_model
from keras.models import Sequential
from keras.layers import LSTM
from keras.layers import Dense
from keras.layers import Embedding
from keras.layers import RepeatVector
from keras.layers import TimeDistributed
from keras.callbacks import ModelCheckpoint
# load a clean dataset
def load_clean_sentences(filename):
  return load(open(filename, 'rb'))
# fit a tokenizer
def create_tokenizer(lines):
  tokenizer = Tokenizer()
  tokenizer.fit_on_texts(lines)
  return tokenizer
# max sentence length
def max_length(lines):
  return max(len(line.split()) for line in lines)
# encode and pad sequences
def encode_sequences(tokenizer, length, lines):
  # integer encode sequences
  X = tokenizer.texts_to_sequences(lines)
  # pad sequences with 0 values
  X = pad_sequences(X, maxlen=length, padding='post')
  return X
# one hot encode target sequence
def encode_output(sequences, vocab_size):
  ylist = list()
  for sequence in sequences:
    encoded = to_categorical(sequence, num_classes=vocab_size)
    ylist.append(encoded)
  y = array(ylist)
  y = y.reshape(sequences.shape[0], sequences.shape[1], vocab_size)
  return y
# define NMT encoder-decoder model
def define_model(src_vocab, tar_vocab, src_timesteps, tar_timesteps, n_units):
  model = Sequential()
  # encoder: embed the source sequence and summarize it with an LSTM
  model.add(Embedding(src_vocab, n_units, input_length=src_timesteps, mask_zero=True))
  model.add(LSTM(n_units))
  # bridge: repeat the fixed-length source encoding once per target timestep
  model.add(RepeatVector(tar_timesteps))
  # decoder: LSTM over target timesteps, softmax over the target vocabulary
  model.add(LSTM(n_units, return_sequences=True))
  model.add(TimeDistributed(Dense(tar_vocab, activation='softmax')))
  return model
# load datasets
dataset = load_clean_sentences('english-german-both.pkl')
train = load_clean_sentences('english-german-train.pkl')
test = load_clean_sentences('english-german-test.pkl')
# prepare english tokenizer
eng_tokenizer = create_tokenizer(dataset[:, 0])
eng_vocab_size = len(eng_tokenizer.word_index) + 1
eng_length = max_length(dataset[:, 0])
print('English Vocabulary Size: %d' % eng_vocab_size)
print('English Max Length: %d' % (eng_length))
# prepare german tokenizer
ger_tokenizer = create_tokenizer(dataset[:, 1])
ger_vocab_size = len(ger_tokenizer.word_index) + 1
ger_length = max_length(dataset[:, 1])
print('German Vocabulary Size: %d' % ger_vocab_size)
print('German Max Length: %d' % (ger_length))
# prepare training data
trainX = encode_sequences(ger_tokenizer, ger_length, train[:, 1])
trainY = encode_sequences(eng_tokenizer, eng_length, train[:, 0])
trainY = encode_output(trainY, eng_vocab_size)
# prepare validation data
testX = encode_sequences(ger_tokenizer, ger_length, test[:, 1])
testY = encode_sequences(eng_tokenizer, eng_length, test[:, 0])
testY = encode_output(testY, eng_vocab_size)
# define model
model = define_model(ger_vocab_size, eng_vocab_size, ger_length, eng_length, 256)
model.compile(optimizer='adam', loss='categorical_crossentropy')
# summarize defined model
print(model.summary())
plot_model(model, to_file='model.png', show_shapes=True)
# fit model
filename = 'model.h5'
checkpoint = ModelCheckpoint(filename, monitor='val_loss', verbose=1, save_best_only=True, mode='min')
model.fit(trainX, trainY, epochs=30, batch_size=64, validation_data=(testX, testY), callbacks=[checkpoint], verbose=2)
# see the complete code here:
# https://github.com/umer7/nmt
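The listing stops after saving the best checkpoint to model.h5. As a usage example, here is a minimal greedy-decoding sketch; the helper names word_for_id and predict_sequence are illustrative rather than taken from the article, and it reuses eng_tokenizer, testX and test from the script above:

# translate with the saved checkpoint (sketch)
from numpy import argmax
from keras.models import load_model

def word_for_id(integer, tokenizer):
  # map a predicted index back to its word; index 0 is the padding value
  for word, index in tokenizer.word_index.items():
    if index == integer:
      return word
  return None

def predict_sequence(model, tokenizer, source):
  # source: one encoded and padded German sequence, shape (1, ger_length)
  prediction = model.predict(source, verbose=0)[0]
  integers = [argmax(vector) for vector in prediction]
  target = list()
  for i in integers:
    word = word_for_id(i, tokenizer)
    if word is None:
      break
    target.append(word)
  return ' '.join(target)

model = load_model('model.h5')
source = testX[0].reshape((1, testX.shape[1]))
print('src=[%s], predicted=[%s]' % (test[0, 1], predict_sequence(model, eng_tokenizer, source)))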


Original article:

https://medium.com/@umerfarooq_26378/neural-machine-translation-with-code-68c425044bbd
