华南理工大学学报(自然科学版) ›› 2022, Vol. 50 ›› Issue (4): 1-9.doi: 10.12141/j.issn.1000-565X.210427

所属专题: 2022年计算机科学与技术

• 计算机科学与技术 •

基于多模型集成的语义文本相似性判断

苏锦钿1,洪晓斌2,余珊珊3

  1. 华南理工大学 计算机科学与工程学院,广东 广州 510640; 2. 华南理工大学 机械与汽车工程学院,
    广东 广州 510640; 3. 广东药科大学 医药信息工程学院,广东 广州 510006

  • 收稿日期:2021-06-29 修回日期:2021-09-16 出版日期:2022-04-25 发布日期:2021-09-24
  • 通信作者: 洪晓斌 (1979-),男,博士,教授,主要从事网络化智能测控技术及应用等研究 E-mail: mexbhong@scut.edu.cn
  • 作者简介:苏锦钿 (1980-),男,博士,副教授,主要从事自然语言处理、深度学习和程序语言设计等研究
  • 基金资助:
    广东省重点领域科技计划项目;国家自然科学基金

Semantic Textual Similarity Judgment Based on Multi-Model Ensemble

SU Jindian1, HONG Xiaobin2, YU Shanshan3

  1. School of Computer Science & Engineering, South China University of Technology, Guangzhou 510640, Guangdong, China;
    2. School of Mechanical & Automotive Engineering,South China University of Technology,Guangzhou 510640,Guangdong,China;
    3. College of Medical Information Engineering,Guangdong Pharmaceutical University,Guangzhou 510006,Guangdong,China
  • Received:2021-06-29 Revised:2021-09-16 Online:2022-04-25 Published:2021-09-24
  • Contact: HONG Xiaobin (b. 1979), male, Ph.D., professor, mainly engaged in research on networked intelligent measurement and control technology and its applications. E-mail: mexbhong@scut.edu.cn
  • About author: SU Jindian (b. 1980), male, Ph.D., associate professor, mainly engaged in research on natural language processing, deep learning, and programming language design.

摘要: 作为目前自然语言处理及人工智能领域的主流方法,各种预训练语言模型由于在语言建模、特征表示、模型结构、训练目标及训练语料等方面存在差异,导致它们在下游任务中的表现各有优劣。为了更好地融合不同预训练语言模型中的知识及其在下游任务中的学习能力,结合语义文本相似性判断任务的特点,提出一种多模型集成方法MME-STS(Multi-Model Ensemble for Semantic Textual Similarity),给出模型的总体架构及相应的特征表示,并针对多模型的集成问题分别提出基于平均值、基于全连接层训练和基于Adaboost算法的三种不同的集成策略。实验结果表明,MME-STS在国际语义评测SemEval 2014任务1的SICK数据集和SemEval 2017的STS-B数据集上的Pearson相关系数值和Spearman相关系数值均超过单个预训练语言模型方法。

关键词: 深度学习, 语义文本相似度, 自然语言处理, 预训练语言模型, 多模型集成

Abstract: As the mainstream methods in current natural language processing and artificial intelligence, various pre-trained language models perform differently on downstream tasks due to their differences in language modeling, feature representation, model structure, training objectives and pre-training corpora. In order to better fuse the knowledge contained in different pre-trained language models and exploit their learning abilities on downstream tasks, we propose a multi-model ensemble method MME-STS (Multi-Model Ensemble for Semantic Textual Similarity) tailored to the semantic textual similarity judgment task. The overall model architecture and the corresponding feature representations are presented, and three different ensemble strategies, based on average values, fully-connected layer training and the AdaBoost algorithm respectively, are proposed for the model ensemble problem. Experimental results show that MME-STS significantly outperforms single pre-trained language model-based approaches on the SICK dataset of SemEval 2014 Task 1 and the SemEval 2017 STS-B dataset in terms of both Pearson and Spearman correlation coefficients.
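The three ensemble strategies named in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the per-model similarity scores are simulated constants, and all function names are hypothetical. Strategies 2 and 3 are shown together because both ultimately combine the per-model scores with a learned weight vector; they differ only in how the weights are obtained (gradient descent on a fully-connected layer vs. AdaBoost-style coefficients derived from each model's training error).

```python
from statistics import mean

def average_ensemble(model_scores):
    """Strategy 1: average the similarity score each model assigns to a pair."""
    return [mean(col) for col in zip(*model_scores)]

def weighted_ensemble(model_scores, weights):
    """Strategies 2 and 3: a weighted sum of per-model scores.
    A fully-connected layer would learn `weights` by gradient descent;
    AdaBoost would derive them from each model's training error."""
    total = sum(weights)
    norm = [w / total for w in weights]  # normalize so weights sum to 1
    return [sum(w * s for w, s in zip(norm, col)) for col in zip(*model_scores)]

# Simulated similarity scores from three pre-trained models on four sentence pairs
model_scores = [
    [0.80, 0.10, 0.55, 0.90],  # e.g. a BERT-based scorer
    [0.75, 0.20, 0.60, 0.85],  # e.g. a RoBERTa-based scorer
    [0.70, 0.15, 0.50, 0.95],  # e.g. an ALBERT-based scorer
]

print(average_ensemble(model_scores))
print(weighted_ensemble(model_scores, [0.5, 0.3, 0.2]))
```

The averaged prediction smooths out cases where one model mis-scores a pair, which is the intuition behind the reported gains over any single pre-trained model.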

Key words: deep learning, semantic textual similarity, natural language processing, pre-trained language model, model ensemble