For the Chinese text-entailment task, this case study implements a classic recurrent-network-plus-attention model: the recurrent network is a BiGRU and the attention mechanism is BiDAF. The model consists of the following modules: an embedding layer, a BiGRU encoding layer, an attention interaction layer, a fusion layer, and a combination/output layer. The sections below explain in detail how to build these modules with NeuronBlocks in the form of a JSON file. The model structure is shown in the figure below, where p denotes the premise and h the hypothesis.
The embedding layer tokenizes the input text and looks each token up in the vocabulary to obtain its word vector, preparing the input for the rest of the model. The layer is declared as Embedding, and conf sets the word-vector parameters. In this case study we use word vectors pretrained on Sogou news, so dim is set to 300 to match the pretrained dimensionality; cols names the two input text columns, the premise (premise_text) and the hypothesis (hypothesis_text).
{
"layer": "Embedding",
"conf": {
"word": {
"cols": ["premise_text", "hypothesis_text"],
"dim": 300
}
}
}
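To make the lookup concrete, here is a minimal PyTorch sketch of what this layer does conceptually: each token id is mapped through a 300-dimensional embedding matrix initialized from the pretrained vectors. The variable names and the random stand-in matrix are illustrative only; NeuronBlocks builds the real table from the pre_trained_emb path given later in the config.

import torch
import torch.nn as nn

vocab_size, dim = 50000, 300                              # dim matches the pretrained Sogou vectors
pretrained_matrix = torch.randn(vocab_size, dim)          # stand-in for the loaded Sogou embedding matrix
embedding = nn.Embedding.from_pretrained(pretrained_matrix, freeze=False)

premise_ids = torch.tensor([[12, 305, 77, 0]])            # a tokenized, padded premise
premise_emb = embedding(premise_ids)                      # shape: (batch, seq_len, 300)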
A bidirectional GRU encodes the vectorized premise and hypothesis to obtain a higher-level semantic representation.
Set the model inputs; for example, premise_text becomes premise after vectorization:
"model_inputs": {
"premise": ["premise_text"],
"hypothesis": ["hypothesis_text"]
}
Apply dropout to premise and hypothesis; the dropout rate is configurable:
{
"layer_id": "premise_dropout",
"layer": "Dropout",
"conf": {
"dropout": 0
},
"inputs": ["premise"]
},
{
"layer_id": "hypothesis_dropout",
"layer": "Dropout",
"conf": {
"dropout": 0
},
"inputs": ["hypothesis"]
},
Encode the dropout-processed premise with a BiGRU; the hidden size, number of layers, dropout rate, and so on can be set here:
{
"layer_id": "premise_bigru",
"layer": "BiGRU",
"conf": {
"hidden_dim": 128,
"dropout": 0.3,
"num_layers": 2
},
"inputs": ["premise_dropout"]
},
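Conceptually this corresponds to a standard bidirectional GRU in PyTorch, sketched below under the assumption that hidden_dim is the size of each direction (how NeuronBlocks interprets the field internally may differ):

import torch
import torch.nn as nn

# 2-layer bidirectional GRU with dropout between layers, mirroring the config above.
bigru = nn.GRU(input_size=300, hidden_size=128, num_layers=2,
               dropout=0.3, batch_first=True, bidirectional=True)

premise_emb = torch.randn(4, 32, 300)        # (batch, seq_len, embedding_dim)
premise_enc, _ = bigru(premise_emb)          # (batch, seq_len, 256): forward and backward states concatenated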
Encode the hypothesis with the same BiGRU that encoded the premise, so the two share parameters (note that layer here points to the layer_id of the premise encoder rather than to a layer type).
{
"layer_id": "hypothesis_bigru",
"layer": "premise_bigru",
"inputs": ["hypothesis_dropout"]
},
The BiAttFlow attention lets the premise and hypothesis interact, producing context representations in which each side is aware of the other (a plain-PyTorch sketch of the idea follows the two blocks below).
{
"layer_id": "premise_attn",
"layer": "BiAttFlow",
"conf": {
},
"inputs": ["premise_bigru","hypothesis_bigru"]
},
{
"layer_id": "hypothesis_attn",
"layer": "BiAttFlow",
"conf": {
},
"inputs": ["hypothesis_bigru", "premise_bigru"]
}
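For intuition, here is a minimal sketch of BiDAF-style bidirectional attention (premise-to-hypothesis and hypothesis-to-premise) written directly in PyTorch. It is not the NeuronBlocks BiAttFlow implementation; the function name, the shared scoring weight w, and the way the two attention directions are fused are all illustrative assumptions.

import torch
import torch.nn.functional as F

def biattflow(p, h, w):
    """Hedged BiDAF-style attention sketch.
    p: (batch, lp, d) premise encodings; h: (batch, lh, d) hypothesis encodings;
    w: (3d,) learned weight scoring each (premise_i, hypothesis_j) pair."""
    lp, lh = p.size(1), h.size(1)
    p_exp = p.unsqueeze(2).expand(-1, -1, lh, -1)                 # (batch, lp, lh, d)
    h_exp = h.unsqueeze(1).expand(-1, lp, -1, -1)                 # (batch, lp, lh, d)
    sim = torch.cat([p_exp, h_exp, p_exp * h_exp], dim=-1) @ w    # similarity matrix (batch, lp, lh)
    # premise-to-hypothesis: each premise token attends over hypothesis tokens
    p2h = F.softmax(sim, dim=2) @ h                               # (batch, lp, d)
    # hypothesis-to-premise: weight premise tokens by their best match in the hypothesis
    h2p_weights = F.softmax(sim.max(dim=2).values, dim=1)         # (batch, lp)
    h2p = (h2p_weights.unsqueeze(-1) * p).sum(dim=1, keepdim=True).expand(-1, lp, -1)
    return torch.cat([p, p2h, p * p2h, p * h2p], dim=-1)          # (batch, lp, 4d)

# Usage (shapes only):
p = torch.randn(2, 20, 256); h = torch.randn(2, 24, 256)
w = torch.randn(256 * 3)
out = biattflow(p, h, w)                                          # (2, 20, 1024)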
A BiGRU re-encodes the premise and hypothesis after the interaction, so that information from the two sides is fused more thoroughly.
{
"layer_id": "premise_bigru_final",
"layer": "BiGRU",
"conf": {
"hidden_dim": 128,
"num_layers": 1
},
"inputs": ["premise_attn"]
},
{
"layer_id": "hypothesis_bigru_final",
"layer": "BiGRU",
"conf": {
"hidden_dim": 128,
"num_layers": 1
},
"inputs": ["hypothesis_attn"]
}
Max pooling over the premise and hypothesis yields the corresponding sentence vectors.
{
"layer_id": "premise_pooling",
"layer": "Pooling",
"conf": {
"pool_axis": 1,
"pool_type": "max"
},
"inputs": ["premise_bigru_final"]
},
{
"layer_id": "hypothesis_pooling",
"layer": "Pooling",
"conf": {
"pool_axis": 1,
"pool_type": "max"
},
"inputs": ["hypothesis_bigru_final"]
},
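In plain PyTorch, this pooling step amounts to taking the maximum over the sequence axis (pool_axis 1), collapsing each encoded sequence into a single sentence vector; the shapes below are illustrative:

import torch

premise_enc = torch.randn(4, 32, 256)              # (batch, seq_len, hidden)
premise_vec = premise_enc.max(dim=1).values        # (batch, hidden): max over the sequence axis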
The premise and hypothesis vectors are concatenated, subtracted, and element-wise multiplied to form a joint semantic representation, which a multilayer perceptron then classifies (see the sketch after the two blocks below).
{
"layer_id": "comb",
"layer": "Combination",
"conf": {
"operations": ["origin", "difference", "dot_multiply"]
},
"inputs": ["premise_pooling", "hypothesis_pooling"]
},
{
"output_layer_flag": true,
"layer_id": "output",
"layer": "Linear",
"conf": {
"hidden_dim": [128, 3],
"activation": "PReLU",
"batch_norm": true,
"last_hidden_activation": false
},
"inputs": ["comb"]
}
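The two blocks above correspond roughly to the following PyTorch sketch. It assumes that "difference" means the signed element-wise difference and "dot_multiply" the element-wise product, and it mirrors hidden_dim [128, 3] with PReLU and batch normalization on the hidden layer and no activation on the output layer; the exact operation order inside NeuronBlocks may differ.

import torch
import torch.nn as nn

p_vec = torch.randn(4, 256)                        # pooled premise vectors
h_vec = torch.randn(4, 256)                        # pooled hypothesis vectors

# "origin", "difference", "dot_multiply": the vectors themselves, their difference,
# and their element-wise product, concatenated into one feature vector.
comb = torch.cat([p_vec, h_vec, p_vec - h_vec, p_vec * h_vec], dim=-1)   # (4, 1024)

classifier = nn.Sequential(
    nn.Linear(comb.size(-1), 128), nn.BatchNorm1d(128), nn.PReLU(),
    nn.Linear(128, 3),                             # 3 classes: entailment / neutral / contradiction
)
logits = classifier(comb)                          # (4, 3)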
Model loss function:
"loss": {
"losses": [
{
"type": "CrossEntropyLoss",
"conf": {
"size_average": true
},
"inputs": ["output","label"]
}
]
},
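In PyTorch terms this is the standard cross-entropy loss between the output logits and the gold labels; size_average: true corresponds to averaging the loss over the batch (the "mean" reduction in current PyTorch). A minimal illustration:

import torch
import torch.nn as nn

loss_fn = nn.CrossEntropyLoss(reduction="mean")    # equivalent to the deprecated size_average=True
logits = torch.randn(4, 3)                         # the "output" layer for a batch of 4
labels = torch.tensor([0, 2, 1, 1])                # gold entailment labels
loss = loss_fn(logits, labels)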
Model evaluation metric:
"metrics": ["accuracy"]
With this, the model structure is fully specified as JSON parameters; next we set a few other important model settings.
NeuronBlocks supports both English and Chinese:
"language": "Chinese",
Paths to the training, validation, and test sets, and to the pretrained word vectors:
"inputs": {
"use_cache": false,
"dataset_type": "classification",
"data_paths": {
"train_data_path": "./dataset/chinese_nli/cnli_train.txt",
"valid_data_path": "./dataset/chinese_nli/cnli_dev.txt",
"test_data_path": "./dataset/chinese_nli/cnli_test.txt",
"predict_data_path": "./dataset/chinese_nli/cnli_test.txt",
"pre_trained_emb": "./dataset/sogou_embed/sgns.sogou.word"
}
Hyperparameters such as the optimizer, learning rate, batch size, and number of training epochs:
"optimizer": {
"name": "SGD",
"params": {
"lr": 0.2,
"momentum": 0.9,
"nesterov": true
}
},
"lr_decay": 0.95,
"minimum_lr": 0.005,
"epoch_start_lr_decay": 1,
"use_gpu": false,
"batch_size": 64,
"batch_num_to_show_results": 100,
"max_epoch": 6,
"steps_per_validation": 1000,
"max_lengths": {
"premise": 32,
"hypothesis": 32
}
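NeuronBlocks drives the optimizer and learning-rate schedule itself; the sketch below only illustrates, under my reading of these fields, what the values imply: SGD with Nesterov momentum at lr 0.2, the learning rate multiplied by 0.95 after each epoch starting from epoch 1, and never reduced below minimum_lr.

import torch

model = torch.nn.Linear(10, 3)                     # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.2, momentum=0.9, nesterov=True)

lr, lr_decay, minimum_lr = 0.2, 0.95, 0.005
for epoch in range(6):                             # max_epoch: 6
    # ... one epoch of training with batch_size 64 would run here ...
    if epoch >= 1:                                 # epoch_start_lr_decay: 1 (assumed meaning)
        lr = max(lr * lr_decay, minimum_lr)
        for group in optimizer.param_groups:
            group["lr"] = lr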
The format of the model configuration file is documented in the official NeuronBlocks tutorial.
A ready-made configuration can be found in conf_chinese_nli_bigru_biAttnflow.json.
If you have an NVIDIA GPU, expand this section and complete the following adjustments.
PyTorch 1.8 drops support for some legacy behavior, so a few changes to the source code are required (see pytorch/pytorch#43227):
# ./block_zoo/BiGRU.py, line 85
# append .cpu() to str_len (pack_padded_sequence now requires the lengths tensor on the CPU)
# before:
string_packed = nn.utils.rnn.pack_padded_sequence(string, str_len, batch_first=True)
# after:
string_packed = nn.utils.rnn.pack_padded_sequence(string, str_len.cpu(), batch_first=True)
# ./block_zoo/attentions/BiAttFlow.py, line 55
# uncomment this line, which defines the weight W used by the attention layer
# before:
# self.W = nn.Linear(layer_conf.input_dims[0][-1]*3, 1)
# after:
self.W = nn.Linear(layer_conf.input_dims[0][-1]*3, 1)
Note: in what follows, PROJECTROOT denotes the root directory of this project.
Data preparation: place the Chinese NLI dataset under
PROJECTROOT/dataset/chinese_nli/
and the pretrained Sogou word vectors under
PROJECTROOT/dataset/sogou_embed/
JSON file preparation: place the JSON model configuration file in PROJECTROOT/model_zoo/nlp_tasks/chinese_nli/
Train the Chinese text-entailment model. From the PROJECTROOT directory, run:
python train.py --conf_path=model_zoo/nlp_tasks/chinese_nli/conf_chinese_nli_bigru_biAttnflow.json

# If you have more than one GPU, you can choose which one to run on like this
CUDA_VISIBLE_DEVICES=1 python train.py --conf_path=model_zoo/nlp_tasks/chinese_nli/conf_chinese_nli_bigru_biAttnflow.json
Excerpt from the training log:
2021-02-07 22:35:03,630 INFO LearningMachine.py train 314: Epoch 1 batch idx: 1900; lr: 0.200000; since last log, loss=0.875404; accuracy: 0.589375
2021-02-07 22:35:18,388 INFO LearningMachine.py train 322: Valid & Test : Epoch 1
2021-02-07 22:35:18,391 INFO LearningMachine.py evaluate 408: Starting valid ...
2021-02-07 22:35:18,391 INFO corpus_utils.py get_batches 237: Start making batches
2021-02-07 22:35:20,321 INFO corpus_utils.py get_batches 398: Batches got!
2021-02-07 22:36:33,065 INFO LearningMachine.py evaluate 619: Epoch 1, valid accuracy: 0.591011 loss: 0.873733
Test the model. From the PROJECTROOT directory, run:
python test.py --conf_path=model_zoo/nlp_tasks/chinese_nli/conf_chinese_nli_bigru_biAttnflow.json
Run prediction in interactive mode. From the PROJECTROOT directory, run:
python predict.py --conf_path=model_zoo/nlp_tasks/chinese_nli/conf_chinese_nli_bigru_biAttnflow.json --predict_mode='interactive'
To recap: we first set up the environment and obtained and split the data we needed. We then built the text-entailment model quickly through a configuration file and went on to train and test it. Across the whole workflow, only acquiring and splitting the data required writing code of our own; NeuronBlocks accounts for most of the saved effort.