An End-to-end Event Coreference Resolution Method
Event coreference resolution (ECR) determines whether different event mentions refer to the same event. ECR not only alleviates the information redundancy produced by event extraction, but also provides an effective way to complete event content. Although many researchers have applied deep learning methods to ECR and achieved notable results, most ECR models still suffer from insufficient representation of explicit information, noise introduced by arguments, and the sparse distribution of coreferent events. To address these problems, an end-to-end ECR method exploiting explicit argument information and event chain reconstruction was proposed. First, the OneIE event extraction model was used to extract event triggers and arguments and obtain the structured information of each event. Then, a Transformer encoder was used to represent the context of each event mention, and the extraction confidence scores were incorporated into the argument encoding to mitigate error propagation. Meanwhile, a gating mechanism decomposed the argument information along the horizontal and vertical directions of the trigger, and the two directions were fused according to the correlation coefficient between the argument and the trigger to filter argument noise. Afterwards, a feed-forward network computed the coreference score for each pair of event mentions. Finally, event chains were reconstructed to verify the validity of event mentions and to correct the training bias caused by the sparsity of coreferent events. To verify the effectiveness of the method, experiments were conducted on the ACE2005 dataset. The results show that the proposed model is competitive on the end-to-end ECR task, exceeding the baseline models by an average of 5.67% in CoNLL and 6.24% in AVG.
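The abstract describes the gated argument-trigger fusion and the feed-forward pair scorer only at a high level. Below is a minimal, illustrative PyTorch sketch of how such components could be wired together; it is not the authors' implementation, and the class names (ArgumentGate, PairScorer), the hidden sizes, and the use of cosine similarity as the trigger-argument correlation coefficient are assumptions made purely for this example.

# Minimal sketch (not the paper's code) of gated argument fusion and a
# pairwise coreference scorer, following the steps named in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ArgumentGate(nn.Module):
    """Decompose argument information along two directions w.r.t. the trigger
    and fuse them with a trigger-argument correlation coefficient."""

    def __init__(self, d_model: int):
        super().__init__()
        self.horizontal = nn.Linear(2 * d_model, d_model)
        self.vertical = nn.Linear(2 * d_model, d_model)

    def forward(self, trigger: torch.Tensor, argument: torch.Tensor) -> torch.Tensor:
        pair = torch.cat([trigger, argument], dim=-1)
        # Gated projections of the argument in two "directions" of the trigger.
        h = torch.sigmoid(self.horizontal(pair)) * argument
        v = torch.sigmoid(self.vertical(pair)) * argument
        # Correlation coefficient between trigger and argument (cosine here,
        # purely as a stand-in) weights the two directions to filter noise.
        rho = F.cosine_similarity(trigger, argument, dim=-1).unsqueeze(-1)
        return rho * h + (1.0 - rho) * v


class PairScorer(nn.Module):
    """Feed-forward network scoring whether two event mentions corefer."""

    def __init__(self, d_model: int, d_hidden: int = 512):
        super().__init__()
        self.ffn = nn.Sequential(
            nn.Linear(2 * d_model, d_hidden),
            nn.ReLU(),
            nn.Linear(d_hidden, 1),
        )

    def forward(self, event_i: torch.Tensor, event_j: torch.Tensor) -> torch.Tensor:
        return self.ffn(torch.cat([event_i, event_j], dim=-1)).squeeze(-1)


if __name__ == "__main__":
    d = 768  # e.g., the Transformer encoder hidden size
    gate, scorer = ArgumentGate(d), PairScorer(d)
    trig, arg = torch.randn(2, d), torch.randn(2, d)
    # Trigger representation enriched with filtered argument information.
    event_repr = trig + gate(trig, arg)
    # Coreference score for one pair of event mentions.
    score = scorer(event_repr[:1], event_repr[1:])
    print(score.shape)

In this sketch the correlation coefficient weights the two gated "directions" of the argument, so an argument weakly related to its trigger contributes less to the event representation, mirroring the noise-filtering intent described in the abstract.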
LIU Liu; JIANG Guoquan; HUAN Zhigang; LIU Shanshan; LIU Ming; DING Kun
The Sixty-third Research Institute, National University of Defense Technology, Nanjing 210007, Jiangsu, China; School of Information Engineering, Suqian University, Suqian 223800, Jiangsu, China
Computer and Automation
event coreference resolution; natural language processing (NLP); pre-trained language model
《工程科学与技术》(Advanced Engineering Sciences), 2024(001)
pp. 82-88 (7 pages)
National Natural Science Foundation of China (71901215); Jiangsu Province "333 Project" Training Fund (BRA2020418); China Postdoctoral Science Foundation (2021MD703983); Scientific Research Program of the National University of Defense Technology (ZK20-46); Suqian Science and Technology Program (K202128)