Connecting language and knowledge with heterogeneous representations for neural relation extraction

2019-04-18 17:18:09


This is a paper on relation extraction from NAACL 2019: Connecting language and knowledge with heterogeneous representations for neural relation extraction.

Problem

When building a knowledge base, we usually extract relations between entities from sentences. If the entities already exist in the knowledge base, we can use the knowledge stored there to improve the results of relation extraction.

The usual practice is to train two separate models: a relation extraction (RE) model and a knowledge base embedding (KBE) model. However, little research has systematically unified the two.

Contribution

This paper proposes HRERE (Heterogeneous REpresentations for neural Relation Extraction), a framework that unifies the RE model and the KBE model so that each can reinforce the other. By reducing the gap between the language representations and the knowledge representations as much as possible, the framework achieves significant improvements over the state of the art in RE.

Solution

The RE model uses a Bi-LSTM with a multi-level attention mechanism to predict the relation between entity pairs. The KBE model borrows from ComplEx, proposed by Trouillon et al. in 2016, which nudges the language model to agree with the facts in the KB.
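As a rough illustration of the KBE side (not the authors' code), the ComplEx scoring function can be written in a few lines; the embedding dimension and variable names below are assumptions for the sketch.

```python
import numpy as np

def complex_score(e_h, w_r, e_t):
    """ComplEx triple score: Re(<w_r, e_h, conj(e_t)>).

    e_h, w_r, e_t are complex-valued embeddings of the head entity,
    relation, and tail entity (all of shape (k,)).
    """
    return np.real(np.sum(w_r * e_h * np.conj(e_t)))

# Toy usage with random complex embeddings (k = 4 is an arbitrary choice).
k = 4
rng = np.random.default_rng(0)
e_h = rng.normal(size=k) + 1j * rng.normal(size=k)
w_r = rng.normal(size=k) + 1j * rng.normal(size=k)
e_t = rng.normal(size=k) + 1j * rng.normal(size=k)
print(complex_score(e_h, w_r, e_t))
```

A higher score indicates the triple (h, r, t) is more likely to hold in the knowledge base.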

The framework introduces three loss functions: the language representation loss of the RE model, the knowledge representation loss of the KBE model, and a cross-entropy loss that ties the two distributions together.

$$J_L = -\frac{1}{N}\sum_{i=1}^{N}\log p(r_i \mid S_i;\Theta^{(L)})$$

$$J_G = -\frac{1}{N}\sum_{i=1}^{N}\log p(r_i \mid (h_i,t_i);\Theta^{(G)})$$

$$J_D = -\frac{1}{N}\sum_{i=1}^{N}\log p(r_i^* \mid S_i;\Theta^{(L)})$$

where $r_i^* = \arg\max_{r \in R \cup \{NA\}} p(r \mid (h_i,t_i);\Theta^{(G)})$. The final objective combines the three losses with L2 regularization:

$$\min_{\Theta} J = J_L + J_G + J_D + \lambda\|\Theta\|_2^2$$
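A minimal sketch of how the three terms could be combined in practice, assuming PyTorch and placeholder names for the model outputs (this is illustrative, not the authors' implementation):

```python
import torch
import torch.nn.functional as F

def joint_loss(lang_logits, kb_logits, labels, params, lam=1e-4):
    """Sketch of the combined HRERE objective J = J_L + J_G + J_D + lambda * ||Theta||^2.

    lang_logits: RE-model scores for p(r | S_i; Theta^(L)), shape (N, num_relations)
    kb_logits:   KBE-model scores for p(r | (h_i, t_i); Theta^(G)), same shape
    labels:      gold relation indices r_i, shape (N,)
    params:      iterable of model parameters used for L2 regularization
    """
    j_l = F.cross_entropy(lang_logits, labels)   # language loss J_L
    j_g = F.cross_entropy(kb_logits, labels)     # knowledge loss J_G
    # J_D: push the language model toward the KBE model's top prediction r_i^*.
    r_star = kb_logits.argmax(dim=1).detach()
    j_d = F.cross_entropy(lang_logits, r_star)
    l2 = sum((p ** 2).sum() for p in params)
    return j_l + j_g + j_d + lam * l2
```

The third term is what "connects" the two representations: it penalizes the language model whenever its distribution disagrees with what the knowledge base considers most plausible.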

Understanding

The essence of this paper is to improve relation extraction (RE) with an existing knowledge base. The KBE model is trained on the existing knowledge base to form the knowledge representation, while the RE model predicts the relation between an entity pair from the sentence, and the two are trained jointly so that the predictions stay as close as possible to the existing knowledge.

This paper describes and evaluates a novel neural framework for jointly learning representations for the RE and KBE tasks; a cross-entropy loss ensures that both representations are learned together, resulting in significant improvements over the current state of the art on the RE task.

Limitation

In real-life scenarios, we often want to extract entity pairs and their relations from a sentence, rather than predicting the relation for a given entity pair.

Still, this paper suggests an idea for that scenario: construct a joint loss function covering both entity extraction and relation extraction, which would reduce the error propagation of a pipelined model; a rough sketch follows.
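As a hypothetical sketch of that idea (not from the paper), a joint objective could simply sum a token-level entity-tagging loss and a relation-classification loss, with a weighting factor; all names below are assumptions.

```python
import torch
import torch.nn.functional as F

def joint_extraction_loss(tag_logits, tag_labels, rel_logits, rel_labels, alpha=1.0):
    """Hypothetical joint loss for entity and relation extraction.

    tag_logits: token-level entity tag scores, shape (N, seq_len, num_tags)
    tag_labels: gold tag indices, shape (N, seq_len)
    rel_logits: relation scores for candidate entity pairs, shape (M, num_relations)
    rel_labels: gold relation indices, shape (M,)
    alpha:      weight balancing the two terms
    """
    j_entity = F.cross_entropy(tag_logits.flatten(0, 1), tag_labels.flatten())
    j_relation = F.cross_entropy(rel_logits, rel_labels)
    return j_entity + alpha * j_relation
```

Training both terms together lets errors in entity tagging be corrected by the relation signal instead of being frozen in by a pipeline.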

Reference

Connecting language and knowledge with heterogeneous representations for neural relation extraction. NAACL 2019.
