Journal of Frontiers of Computer Science and Technology (计算机科学与探索), 2026, Vol. 20, Issue 1: 291-300. DOI: 10.3778/j.issn.1673-9418.2503026
Multiple Attention Mechanisms for Spinal Lesions MRI Images Recognition Model
Abstract
Manual detection of spinal lesions is a time-consuming task that relies heavily on the expertise of specialists in the field. Consequently, automatic identification of spinal lesions has become essential, and developing accurate automated systems for the detection and classification of spinal lesions is crucial. However, this task presents significant challenges due to considerable variability in lesion size, location, and structure. Additionally, spinal tumors often exhibit high radiological similarity to the rare disease Brucellosis, which further complicates diagnosis. To address these challenges, this paper proposes a novel and enhanced spinal lesion MRI image recognition model. A bi-directional feature pyramid network backbone based on ResNet-101 is introduced, and deformable convolution is used in place of traditional convolution across various layers to extract richer semantic features. Multiple attention mechanisms, including self-attention and soft attention, are integrated into different modules to effectively fuse the most informative feature components. An improved balanced cross-entropy loss function is employed to alleviate the data imbalance arising from the rarity of both spinal tumors and Brucellosis. Validation on a clinical dataset provided by a hospital in Dalian shows that the recognition precision reaches 94.2% and the recall reaches 90.8%. Experimental results indicate that the proposed method outperforms other models in terms of recognition performance.
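The abstract does not give the exact form of the improved balanced cross-entropy loss. As a rough illustration only, the following PyTorch sketch shows a common inverse-frequency (class-balanced) weighted cross-entropy; the function name balanced_cross_entropy, the class_counts argument, and the toy three-class counts are hypothetical and not taken from the paper.

import torch
import torch.nn.functional as F

def balanced_cross_entropy(logits, targets, class_counts, eps=1e-6):
    # Hypothetical sketch: weight each class inversely to its frequency so that
    # rare classes (e.g. spinal tumors, Brucellosis) contribute more to the loss.
    freq = class_counts.float() / class_counts.sum()
    weights = 1.0 / (freq + eps)
    weights = weights * len(class_counts) / weights.sum()  # normalize around 1
    return F.cross_entropy(logits, targets, weight=weights)

# Toy usage with three imbalanced classes (counts are made up for illustration)
logits = torch.randn(8, 3)
targets = torch.randint(0, 3, (8,))
class_counts = torch.tensor([900, 60, 40])
loss = balanced_cross_entropy(logits, targets, class_counts)

Weighted cross-entropy of this kind is one standard way to counter class imbalance; the paper's actual balancing scheme may differ.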
Keywords
spinal lesion recognition / bi-directional feature pyramid network / multi-attention mechanisms / deformable convolution / multi-feature fusion
Classification
Information Technology and Security Science
Cite this article
ZHOU Hui, SONG Xinjing. Multiple Attention Mechanisms for Spinal Lesions MRI Images Recognition Model[J]. Journal of Frontiers of Computer Science and Technology, 2026, 20(1): 291-300.
Funding
"集思融智-医工交叉"联合基金(LH-JSRZ-202203).This work was supported by the Joint Fund of"Integrative Intelligence and Interdisciplinary Medicine-Engineering"(LH-JSRZ-202203). (LH-JSRZ-202203)