
Multi-model fusion temporal-spatial feature motor imagery electroencephalogram decoding method

凌六一 李卫校 冯彬

南京大学学报(自然科学版), 2024, Vol. 60, Issue 1: 65-75, 11. DOI: 10.13232/j.cnki.jnju.2024.01.007

凌六一 1, 李卫校 2, 冯彬 1

Author information

  • 1. School of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan 232001; School of Artificial Intelligence, Anhui University of Science and Technology, Huainan 232001
  • 2. School of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan 232001

Abstract

Motor imagery electroencephalogram (MI-EEG) has been applied in brain-computer interfaces (BCI) to assist patients with upper- and lower-limb dysfunction in rehabilitation training. However, the limited decoding performance of MI-EEG and the over-reliance on pre-processing restrict the broader adoption of BCI. We propose a multi-model fusion temporal-spatial feature motor imagery electroencephalogram decoding method (MMFTSF). MMFTSF uses a temporal-spatial convolutional network to extract shallow features, a multi-head probsparse self-attention mechanism to focus on the most informative features, a temporal convolutional network to extract high-dimensional temporal features, and a fully connected layer with a softmax classifier for classification; a convolution-based sliding window and a spatial information enhancement module further improve the decoding performance on MI-EEG. Experimental results show that the proposed method reaches 89.03% classification accuracy on the public BCI Competition IV-2a dataset, demonstrating that MMFTSF achieves strong classification performance on MI-EEG.
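The abstract describes the MMFTSF pipeline only at a high level. The PyTorch sketch below is a rough, hypothetical rendering of that flow (temporal-spatial convolution, self-attention, temporal convolution, fully connected softmax classifier), not the authors' implementation: the class name MMFTSFSketch, all channel counts and kernel sizes are illustrative assumptions, standard nn.MultiheadAttention stands in for the multi-head probsparse self-attention, and the sliding-window and spatial-enhancement modules are omitted. Input dimensions follow the BCI Competition IV-2a format (22 electrodes, 4 classes).

```python
# Hypothetical sketch of an MMFTSF-style pipeline; layer sizes and the use of
# standard multi-head attention (instead of the probsparse variant) are
# illustrative assumptions, not the method described in the paper.
import torch
import torch.nn as nn


class MMFTSFSketch(nn.Module):
    def __init__(self, n_channels=22, n_samples=1000, n_classes=4, d_model=32):
        super().__init__()
        # Temporal-spatial convolution: temporal filtering followed by a
        # spatial convolution across all EEG electrodes (shallow features).
        self.temporal_conv = nn.Conv2d(1, d_model, kernel_size=(1, 25), padding=(0, 12))
        self.spatial_conv = nn.Conv2d(d_model, d_model, kernel_size=(n_channels, 1))
        self.bn = nn.BatchNorm2d(d_model)
        self.act = nn.ELU()
        self.pool = nn.AvgPool2d(kernel_size=(1, 8), stride=(1, 8))

        # Stand-in for the multi-head probsparse self-attention stage.
        self.attention = nn.MultiheadAttention(embed_dim=d_model, num_heads=4, batch_first=True)

        # Temporal convolutional network over the attended sequence (dilated 1-D convs).
        self.tcn = nn.Sequential(
            nn.Conv1d(d_model, d_model, kernel_size=3, padding=2, dilation=2),
            nn.ELU(),
            nn.Conv1d(d_model, d_model, kernel_size=3, padding=4, dilation=4),
            nn.ELU(),
        )

        # Fully connected classifier; softmax gives class probabilities
        # (training would typically use CrossEntropyLoss on the logits).
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, x):
        # x: (batch, 1, n_channels, n_samples)
        x = self.pool(self.act(self.bn(self.spatial_conv(self.temporal_conv(x)))))
        x = x.squeeze(2).permute(0, 2, 1)   # (batch, time, d_model)
        x, _ = self.attention(x, x, x)      # self-attention over time steps
        x = self.tcn(x.permute(0, 2, 1))    # (batch, d_model, time)
        x = x.mean(dim=-1)                  # global average pooling over time
        return torch.softmax(self.classifier(x), dim=-1)


if __name__ == "__main__":
    model = MMFTSFSketch()
    dummy = torch.randn(2, 1, 22, 1000)     # two trials, 22 electrodes, 1000 samples
    print(model(dummy).shape)               # torch.Size([2, 4])
```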

Key words

probsparse self-attention / motor imagery / convolutional neural networks / temporal convolutional networks

Classification

Electronic Information Engineering

Cite this article

凌六一, 李卫校, 冯彬. 多模型融合的时空特征运动想象脑电解码方法[J]. 南京大学学报(自然科学版), 2024, 60(1): 65-75, 11.

Funding

Research and Development Special Project of the Institute of Environment-friendly Materials and Occupational Health (Wuhu), Anhui University of Science and Technology (ALW2022YF06); Collaborative Innovation Project of Anhui Universities (GXXT-2022-053)

南京大学学报(自然科学版)

OA | 北大核心 | CSTPCD

ISSN 0469-5097
