Application Research of Computers (计算机应用研究), 2025, Vol. 42, Issue 6: 1641-1647,7. DOI: 10.19734/j.issn.1001-3695.2024.12.0466
Multimodal dialogue emotion perception algorithm based on feature diversion
Abstract
Multimodal emotion perception is crucial for monitoring personal health and providing medical care in the field of proactive health. Current multimodal dialogue emotion perception technologies face challenges in fusing information across different modalities, particularly in capturing local relationships between modalities. The proposed multimodal fusion algorithm based on feature diversion, MEPAD (multimodal emotion perception algorithm with feature diversion), addresses these challenges by capturing global information in dialogues using graph neural networks and by integrating homogeneous and modality-specific features through the hypercomplex number system and a pairwise feature fusion mechanism. Experiments on the IEMOCAP and MOSEI datasets demonstrate that MEPAD significantly outperforms existing methods on multimodal dialogue emotion perception tasks, highlighting its effectiveness and potential in handling complex emotional data. This research offers new insights for the application of multimodal emotion perception technology in proactive health.
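To make the pairwise fusion idea mentioned above concrete, the following is a minimal, illustrative sketch: features from each pair of modalities (text, audio, video) are fused and the pairwise results are averaged into one multimodal representation. The modality names, feature values, and the elementwise fusion rule are assumptions for illustration only, not the paper's actual MEPAD implementation.

```python
from itertools import combinations

def fuse_pair(a, b):
    # Illustrative elementwise fusion of two equal-length feature vectors
    # (product term captures interaction, sum terms keep unimodal signal).
    return [x * y + x + y for x, y in zip(a, b)]

def pairwise_fusion(modalities):
    # Fuse every pair of modality vectors, then average the pairwise
    # results into a single multimodal representation.
    fused = [fuse_pair(a, b) for a, b in combinations(modalities.values(), 2)]
    dim = len(fused[0])
    return [sum(f[i] for f in fused) / len(fused) for i in range(dim)]

# Hypothetical 3-dimensional features for three modalities.
features = {
    "text":  [0.2, 0.5, 0.1],
    "audio": [0.4, 0.1, 0.3],
    "video": [0.3, 0.2, 0.6],
}
print(pairwise_fusion(features))
```

The pairwise structure lets each modality pair contribute its own local interaction before global aggregation, which is the intuition the abstract attributes to the pairwise feature fusion mechanism.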
Keywords
multimodal emotion perception / graph neural networks / hypercomplex number system / pairwise feature fusion / dialogue emotion perception
Classification
Computer and Automation
Citation
任钦泽, 袁野, 傅柯婷, 付军秀, 徐康, 刘娜. Multimodal dialogue emotion perception algorithm based on feature diversion [J]. Application Research of Computers, 2025, 42(6): 1641-1647,7.
Funding
Regional Comprehensive Application Demonstration of Digital and Intelligent Technologies for Proactive Health Services (2023YFC3605800)