Bio-inspired multi-feature fusion learning method for indoor visible light position sensing
To improve the robustness and positioning accuracy of the Elman indoor visible light position sensing model, a bio-inspired multi-feature fusion learning method for indoor visible light position sensing is proposed. The method first preprocesses the acquired visible light images to ensure accurate feature extraction. It then fuses features from different levels of a pre-trained neural network model to construct a position-sensing feature library, enriching the feature representation and thereby improving the model's position sensing precision. Finally, the dung beetle optimization (DBO) algorithm is employed to optimize the topology and weight parameters of the Elman neural network, addressing the tendency of traditional Elman networks to fall into local optima in indoor position sensing while accelerating convergence and enhancing generalization. Experimental results show that, within a 4 m × 3.5 m × 3 m space, the proposed algorithm achieves an average positioning error of 0.21 m, the probability that the positioning error is less than 0.4 m reaches 91.3%, and positioning accuracy improves by 22.3% compared with the Elman algorithm.
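The DBO-optimized Elman pipeline described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the fused image features are replaced by synthetic inputs, and the full DBO update rules are replaced by a simple best-guided population search, which stands in for the metaheuristic that tunes the Elman network's weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def elman_forward(params, X):
    """Elman (simple recurrent) forward pass over a feature sequence.
    params: (W_xh input->hidden, W_hh context->hidden, W_hy hidden->output).
    X: (T, n_in) features; returns (T, n_out) position estimates."""
    W_xh, W_hh, W_hy = params
    h = np.zeros(W_hh.shape[0])
    out = []
    for x in X:
        # the context layer feeds the previous hidden state back in
        h = np.tanh(x @ W_xh + h @ W_hh)
        out.append(h @ W_hy)
    return np.array(out)

def mse(params, X, Y):
    return np.mean((elman_forward(params, X) - Y) ** 2)

def population_optimize(X, Y, n_in, n_hid, n_out, pop=20, iters=60, sigma=0.3):
    """Best-guided random-perturbation population search (a stand-in for DBO).
    Returns (best weights, initial best error, final best error)."""
    shapes = [(n_in, n_hid), (n_hid, n_hid), (n_hid, n_out)]
    population = [[rng.normal(0, 0.5, s) for s in shapes] for _ in range(pop)]
    best = min(population, key=lambda p: mse(p, X, Y))
    err0 = best_err = mse(best, X, Y)
    for _ in range(iters):
        for i, cand in enumerate(population):
            # perturb each candidate and pull it slightly toward the current best
            trial = [w + sigma * rng.normal(size=w.shape) + 0.1 * (b - w)
                     for w, b in zip(cand, best)]
            if mse(trial, X, Y) < mse(cand, X, Y):
                population[i] = trial
        cur = min(population, key=lambda p: mse(p, X, Y))
        if mse(cur, X, Y) < best_err:
            best, best_err = cur, mse(cur, X, Y)
    return best, err0, best_err

# toy stand-in for the fused feature library: 2-D features -> 1-D position
X = rng.normal(size=(30, 2))
Y = 0.5 * X[:, :1] - 0.2 * X[:, 1:]
params, err0, err = population_optimize(X, Y, n_in=2, n_hid=4, n_out=1)
```

Because the search only ever replaces the incumbent best when the error improves, the final error is guaranteed not to exceed the initial one; a real DBO run would add the rolling, dancing, breeding, and foraging update rules on top of this skeleton.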
韦吉月;张峰;孟祥艳;赵黎;李帅
School of Electronic Information Engineering, Xi'an Technological University, Xi'an 710021, China
Electronic Information Engineering
indoor visible light position sensing; visual imaging; dung beetle optimization algorithm; Elman neural network
《光通信技术》 (Optical Communication Technology), 2025(1)
pp. 25-30 (6 pages)
Supported by the National Natural Science Foundation of China (12004292), the Shaanxi Provincial Department of Science and Technology General Project in the Industrial Field (2022GY-072), and the Xi'an Science and Technology Bureau project for science and technology personnel of universities and institutes serving enterprises (24GXFW0034).