Named Entity Recognition (NER) with a Deep Neural Network

sunxinyu 2017-05-31

Outline:

  1. What is Named Entity Recognition (NER)?
  2. How do we recognize entities?

cs224d Day 7: Assignment 2 - Tackling NER with a DNN

Course assignment description

What is NER?

Named Entity Recognition (NER) refers to identifying entities with a specific meaning in text, mainly person names, place names, organization names, and other proper nouns. It is a fundamental tool for applications such as information extraction, question answering, syntactic parsing, and machine translation, and it serves as an important step in extracting structured information. (Adapted from BosonNLP)

How do we recognize entities?

Let me first lay out the logic of the solution and then explain the main pieces of code. If you are interested, the complete code is available here.

The code builds a DNN with a single hidden layer in TensorFlow to tackle the NER task.

1. Framing the problem

NER is a classification problem.

Given a word, we need to decide from its context which of the following four classes it belongs to. If it belongs to none of them, its label is O, i.e. not an entity, so this is a five-way classification problem:

• Person (PER)
• Organization (ORG)
• Location (LOC)
• Miscellaneous (MISC)

Our training data has two columns: the first is the word, the second is its label.

EU       ORG
rejects  O
German   MISC
Peter    PER
BRUSSELS LOC
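
To make the setup concrete, here is a minimal sketch (a hypothetical helper, not the assignment's actual data pipeline) of how such two-column data can be turned into fixed-size context windows paired with class indices:

tag_to_num = {'O': 0, 'PER': 1, 'ORG': 2, 'LOC': 3, 'MISC': 4}

def make_windows(words, labels, window_size=3):
    # pad the sentence so every word has a full context window
    pad = (window_size - 1) // 2
    padded = ['<s>'] * pad + words + ['</s>'] * pad
    examples = []
    for i, label in enumerate(labels):
        context = padded[i:i + window_size]        # e.g. ['<s>', 'EU', 'rejects']
        examples.append((context, tag_to_num[label]))
    return examples

print(make_windows(['EU', 'rejects', 'German'], ['ORG', 'O', 'MISC']))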

2. The model

Next we train a deep neural network on this data.

The model is as follows:

The input x^{(t)} is the context of x_t in a window of size 3. Each word x_t is a one-hot vector; multiplying x_t by the embedding matrix L yields the corresponding word vector, whose dimension is d = 50:

x^{(t)} = [ x_{t-1} L,  x_t L,  x_{t+1} L ] ∈ R^{3d}
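
As a quick illustration (a toy numpy sketch, not the assignment code): multiplying a one-hot vector by L simply selects a row of L, so a window of three words becomes the concatenation of three 50-dimensional rows, i.e. a 150-dimensional input:

import numpy as np

V, d = 10, 50                            # toy vocabulary size and embedding size
L = np.random.randn(V, d)                # embedding matrix L

x_t = np.zeros(V)
x_t[3] = 1.0                             # one-hot vector for word id 3
assert np.allclose(x_t.dot(L), L[3])     # one-hot times L == row lookup

window_ids = [2, 3, 4]                   # ids of x_{t-1}, x_t, x_{t+1}
x_window = np.concatenate([L[i] for i in window_ids])
print(x_window.shape)                    # (150,)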

We build a neural network with a single hidden layer of dimension 100; ŷ is the resulting prediction, of dimension 5:

h = tanh(x^{(t)} W + b_1),    h ∈ R^{100}
ŷ = softmax(h U + b_2),    ŷ ∈ R^{5}

The error is computed with cross entropy:

J(θ) = CE(y, ŷ) = − Σ_{i=1}^{5} y_i log(ŷ_i)
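
Continuing the toy numpy sketch (random values, just to show the shapes), the forward pass and the cross-entropy loss for one window look like this:

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

x = np.random.randn(150)                 # window input, 3 * 50
W = np.random.randn(150, 100) * 0.01
b1 = np.zeros(100)
U = np.random.randn(100, 5) * 0.01
b2 = np.zeros(5)

h = np.tanh(x.dot(W) + b1)               # hidden layer, dimension 100
y_hat = softmax(h.dot(U) + b2)           # predicted distribution over the 5 classes

y = np.zeros(5)
y[1] = 1.0                               # true label, e.g. PER
loss = -np.sum(y * np.log(y_hat))        # cross entropy J
print(loss)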

Differentiating J with respect to each parameter (writing δ = ŷ − y for the error at the output layer, and leaving out the L2 regularization term that the code adds separately), the chain rule gives

∂J/∂z_2 = ŷ − y = δ,    where z_2 = h U + b_2
∂J/∂z_1 = (δ U^T) ∘ (1 − h^2),    where z_1 = x^{(t)} W + b_1

which yields the following gradient formulas:

∂J/∂U = h^T δ
∂J/∂b_2 = δ
∂J/∂W = x^{(t)T} ((δ U^T) ∘ (1 − h^2))
∂J/∂b_1 = (δ U^T) ∘ (1 − h^2)
∂J/∂x^{(t)} = ((δ U^T) ∘ (1 − h^2)) W^T

Here ∘ denotes element-wise multiplication, and 1 − h^2 is the derivative of tanh.

In TensorFlow this differentiation is done automatically. Here the Adam optimization algorithm applies the gradient updates, iterating until the loss becomes smaller and smaller and converges.
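
For reference, the standard Adam update for a parameter θ with gradient g_t is (α is the learning rate; β_1, β_2, ε keep their usual defaults 0.9, 0.999, 1e-8):

m_t = β_1 m_{t-1} + (1 − β_1) g_t
v_t = β_2 v_{t-1} + (1 − β_2) g_t^2
m̂_t = m_t / (1 − β_1^t),    v̂_t = v_t / (1 − β_2^t)
θ_t = θ_{t-1} − α m̂_t / (√v̂_t + ε)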

3. Implementation

In def test_NER() we iterate for max_epochs epochs. In each epoch we train the model on the training data to obtain a pair train_loss, train_acc, then use this model to predict on the validation data to obtain val_loss and predictions. We keep track of the smallest val_loss and save the corresponding weights; at the end, those weights are used to predict the class labels of the test data:

def test_NER():
  config = Config()
  with tf.Graph().as_default():
    model = NERModel(config)  # the main model class

    init = tf.initialize_all_variables()
    saver = tf.train.Saver()

    with tf.Session() as session:
      best_val_loss = float('inf')  # track the best validation loss and the epoch it occurred at
      best_val_epoch = 0

      session.run(init)
      for epoch in xrange(config.max_epochs):
        print 'Epoch {}'.format(epoch)
        start = time.time()

        train_loss, train_acc = model.run_epoch(session, model.X_train,
                                                model.y_train)  # run one epoch over the training data; get loss and accuracy
        val_loss, predictions = model.predict(session, model.X_dev, model.y_dev)  # predict on the dev data with this model; get loss and predictions
        print 'Training loss: {}'.format(train_loss)
        print 'Training acc: {}'.format(train_acc)
        print 'Validation loss: {}'.format(val_loss)
        if val_loss < best_val_loss:  # use the dev loss to find the smallest loss
          best_val_loss = val_loss
          best_val_epoch = epoch
          if not os.path.exists("./weights"):
            os.makedirs("./weights")

          saver.save(session, './weights/ner.weights')  # save the weights corresponding to the smallest loss
        if epoch - best_val_epoch > config.early_stopping:
          break

        confusion = calculate_confusion(config, predictions, model.y_dev)  # compare predictions with the dev labels to build the confusion matrix
        print_confusion(confusion, model.num_to_tag)
        print 'Total time: {}'.format(time.time() - start)

      saver.restore(session, './weights/ner.weights')  # reload the saved weights and predict on the test data
      print 'Test'
      print '=-=-='
      print 'Writing predictions to q2_test.predicted'
      _, predictions = model.predict(session, model.X_test, model.y_test)
      save_predictions(predictions, "q2_test.predicted")  # write the predictions to disk

if __name__ == "__main__":
  test_NER()

4. How is the model trained?

First, load the training, validation, and test data:

# Load the training set
docs = du.load_dataset('data/ner/train')

# Load the dev set (for tuning hyperparameters)
docs = du.load_dataset('data/ner/dev')

# Load the test set (dummy labels only)
docs = du.load_dataset('data/ner/test.masked')

The words are converted to one-hot vectors and then to word vectors:

def add_embedding(self):
  # The embedding lookup is currently only implemented for the CPU
  with tf.device('/cpu:0'):

    embedding = tf.get_variable('Embedding', [len(self.wv), self.config.embed_size])  # the matrix L from the assignment
    window = tf.nn.embedding_lookup(embedding, self.input_placeholder)                # look up the word vectors for the whole context window in L at once
    window = tf.reshape(
      window, [-1, self.config.window_size * self.config.embed_size])

    return window

Next, build the network layers, including xavier initialization of the first layer, L2 regularization, and dropout to reduce overfitting:

def add_model(self, window):

  with tf.variable_scope('Layer1', initializer=xavier_weight_init()) as scope:  # initialize the first layer with xavier initialization
    W = tf.get_variable(                                                         # the first layer has W, b1, h
        'W', [self.config.window_size * self.config.embed_size,
              self.config.hidden_size])
    b1 = tf.get_variable('b1', [self.config.hidden_size])
    h = tf.nn.tanh(tf.matmul(window, W) + b1)
    if self.config.l2:                                                           # L2 regularization for W
        tf.add_to_collection('total_loss', 0.5 * self.config.l2 * tf.nn.l2_loss(W))

  with tf.variable_scope('Layer2', initializer=xavier_weight_init()) as scope:
    U = tf.get_variable('U', [self.config.hidden_size, self.config.label_size])
    b2 = tf.get_variable('b2', [self.config.label_size])
    y = tf.matmul(h, U) + b2
    if self.config.l2:
        tf.add_to_collection('total_loss', 0.5 * self.config.l2 * tf.nn.l2_loss(U))
  output = tf.nn.dropout(y, self.dropout_placeholder)                            # apply dropout and return the output

  return output

For what L2 regularization and dropout are and how they reduce overfitting, see this blog post for a clear and concise summary.
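
As a rough illustration only (a toy numpy sketch, not the model code above): the L2 term adds 0.5 · λ · ||W||² to the loss so that large weights are penalized, and dropout randomly zeroes activations during training while rescaling the survivors so their expected value is unchanged:

import numpy as np

def l2_penalty(W, lam):
    # the extra term added to the loss for weight matrix W
    return 0.5 * lam * np.sum(W ** 2)

def dropout(h, keep_prob):
    # inverted dropout: zero out some units, rescale the rest by 1/keep_prob
    mask = (np.random.rand(*h.shape) < keep_prob) / keep_prob
    return h * mask

W = np.random.randn(150, 100)
h = np.random.randn(100)
print(l2_penalty(W, lam=0.001))
print(dropout(h, keep_prob=0.9)[:5])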

Compute the loss with cross entropy:

def add_loss_op(self, y):

  cross_entropy = tf.reduce_mean(                                             # 1. key step: the loss is defined with cross entropy
      tf.nn.softmax_cross_entropy_with_logits(y, self.labels_placeholder))    # y holds the model's predictions (logits)
  tf.add_to_collection('total_loss', cross_entropy)    # stores the value in the collection with the given name;
                                                       # collections are not sets, so a value can be added several times
  loss = tf.add_n(tf.get_collection('total_loss'))     # adds all the collected tensors element-wise (they must share shape and type)

  return loss

Then minimize the loss with the Adam Optimizer:

def add_training_op(self, loss):

  optimizer = tf.train.AdamOptimizer(self.config.lr)
  global_step = tf.Variable(0, name='global_step', trainable=False)
  train_op = optimizer.minimize(loss, global_step=global_step)  # 2. key step: use AdamOptimizer to drive the loss to a minimum, so the loss itself is what matters most

  return train_op
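
To make the flow concrete, here is a rough sketch of how a single training step might wire these pieces together through a feed_dict. The attribute names model.loss and model.train_op, and the batch variables x_batch and y_batch, are assumptions for illustration; in the real code the batching happens inside model.run_epoch:

# Hypothetical single training step (illustrative only; names are assumed)
feed = {
    model.input_placeholder: x_batch,           # [batch, window_size] word ids
    model.labels_placeholder: y_batch,          # [batch, label_size] one-hot labels
    model.dropout_placeholder: config.dropout,  # keep probability used during training
}
loss_value, _ = session.run([model.loss, model.train_op], feed_dict=feed)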
