
基于残差U-Net和自注意力Transformer编码器的磁场预测方法


电工技术学报 2024, Vol. 39, Issue 10: 2937-2952, 16. DOI: 10.19595/j.cnki.1000-6753.tces.230265


Magnetic Field Prediction Method Based on Residual U-Net and Self-Attention Transformer Encoder

金亮¹ 尹振豪¹ 刘璐² 宋居恒² 刘元凯²

Author Information

  • 1. State Key Laboratory of Reliability and Intelligence of Electrical Equipment (Hebei University of Technology), Tianjin 300401; Hebei Key Laboratory of Electromagnetic Field and Reliability (Hebei University of Technology), Tianjin 300401
  • 2. State Key Laboratory of Reliability and Intelligence of Electrical Equipment (Hebei University of Technology), Tianjin 300401

Abstract

Accurate simulation of the electromagnetic characteristics of electrical equipment relies on the finite element method. However, the increasing complexity of large electrical machines and transformers poses challenges, leading to prolonged simulation times and significant computational resource consumption. At the same time, the finite element method cannot establish an a priori model: when design parameters, structures, or operating conditions change, the model must be rebuilt. Considering the powerful feature extraction ability of deep learning, this paper proposes a magnetic field prediction method based on a residual U-Net and a self-attention Transformer encoder. The finite element method is used to obtain the dataset for deep learning training. The deep learning model can be trained once and used for multiple predictions, addressing the limitations of the finite element method and reducing computational time and resource consumption.

Firstly, this paper leverages the inherent advantages of the convolutional neural network (CNN) in image processing, particularly the U-shaped CNN known as U-Net, which is built on an encoder-decoder structure. This architecture captures fine details and learns from limited samples better than the traditional CNN. To mitigate network degradation and address the limitations of convolutional operations, short residual connections and Transformer modules are introduced into the U-Net architecture, creating the ResUnet-Transformer model. The short residual connections accelerate network training, while the self-attention mechanism from the Transformer network facilitates the effective interaction of global features.

Secondly, this paper introduces the Targeted Dropout algorithm and an adaptive learning rate to suppress overfitting and enhance the accuracy of magnetic field predictions. The Targeted Dropout algorithm incorporates post-pruning strategies into the training process of neural networks, effectively mitigating overfitting and improving the model's generalization. Additionally, an adaptive learning rate is implemented using the cosine annealing algorithm on top of the Adam optimization algorithm, gradually reducing the learning rate as the objective function converges to the optimal value and avoiding oscillation or non-convergence.

Finally, the ResUnet-Transformer model is validated through engineering cases involving permanent magnet synchronous motors (PMSM) and amorphous metal transformers (AMT). On the PMSM dataset, the ResUnet-Transformer model is trained with 250 samples and tested with 100 samples, with the mean square error (MSE) and mean absolute percentage error (MAPE) used as performance evaluation metrics. Compared with CNN, U-Net, and LinkNet models, the ResUnet-Transformer model achieves the highest prediction accuracy, with an MSE of 0.07×10⁻³ and a MAPE of 1.4%. On the 100 test samples, the prediction efficiency of the ResUnet-Transformer model surpasses that of the finite element method by 66.1%. With the structural and parameter settings held constant, introducing the Targeted Dropout algorithm and the cosine annealing algorithm improves the prediction accuracy by 36.4% and 26.3%, respectively. To evaluate the model's generalization capability, the number of training samples for the PMSM and AMT datasets is varied, and the model is tested on 100 samples. Inadequate training samples result in poor magnetic field prediction performance. When the training dataset size increases to 300, the prediction error does not decrease but shows a slight rise. With further increases in the training dataset size, however, the error decreases significantly, and the MAPE for the PMSM and AMT datasets reaches 0.7% and 0.5%, respectively, with just 500 training samples.
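To make the role of the self-attention mechanism concrete, the scaled dot-product attention at the core of a Transformer encoder can be sketched in NumPy. The single-head form and the token count and feature width below are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    """Single-head scaled dot-product self-attention.

    x: (n, d) sequence of n feature tokens; wq/wk/wv: (d, d) projections.
    Every output token is a weighted mix of ALL input tokens, which is
    what lets global features interact within a single layer.
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(x.shape[1])   # (n, n) pairwise similarities
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ v

rng = np.random.default_rng(0)
n, d = 6, 8                                  # 6 tokens, 8 features each
x = rng.normal(size=(n, d))
out = self_attention(x, *(rng.normal(size=(d, d)) for _ in range(3)))
print(out.shape)                             # (6, 8)
```

In contrast to a convolution, whose receptive field is local, the (n, n) weight matrix couples every token pair at once, which is the "effective interaction of global features" the abstract refers to.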
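The idea behind Targeted Dropout can likewise be sketched. This minimal version drops only low-magnitude weights (the natural candidates for later pruning); the per-column targeting and the `drop_rate`/`targ_fraction` values are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def targeted_weight_dropout(w, drop_rate=0.5, targ_fraction=0.5, rng=None):
    """Targeted Dropout on a weight matrix (illustrative sketch).

    The lowest-magnitude `targ_fraction` of weights in each column are
    treated as pruning candidates; each candidate is independently
    dropped with probability `drop_rate` during training, so the network
    learns not to rely on weights that post-training pruning would remove.
    """
    rng = rng or np.random.default_rng()
    mag = np.abs(w)
    k = int(targ_fraction * w.shape[0])      # candidates per column
    if k == 0:
        return w
    # per-column magnitude threshold below which a weight is a candidate
    thresh = np.sort(mag, axis=0)[k - 1, :]
    candidates = mag <= thresh
    drop = candidates & (rng.random(w.shape) < drop_rate)
    return np.where(drop, 0.0, w)

rng = np.random.default_rng(1)
w = rng.normal(size=(8, 4))
w_out = targeted_weight_dropout(w, drop_rate=1.0, targ_fraction=0.5, rng=rng)
# with drop_rate=1.0, exactly the smallest half of each column is zeroed
print((w_out == 0).sum(axis=0))              # [4 4 4 4]
```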
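The cosine annealing schedule paired with Adam can be written down directly; the `lr_max`/`lr_min` endpoints below are placeholder values, not the paper's hyperparameters:

```python
import math

def cosine_annealing_lr(step, total_steps, lr_max=1e-3, lr_min=1e-6):
    """Cosine annealing: decay smoothly from lr_max to lr_min.

    lr(t) = lr_min + 0.5 * (lr_max - lr_min) * (1 + cos(pi * t / T))
    The shrinking step size damps oscillation as the objective
    function approaches its optimum.
    """
    cos_term = math.cos(math.pi * step / total_steps)
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + cos_term)

# lr_max at step 0, decaying monotonically to lr_min at the final step
schedule = [cosine_annealing_lr(t, 100) for t in range(101)]
```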


Key words

Finite element method / electromagnetic field / deep learning / U-Net / Transformer

Classification

Information Technology and Safety Science

Cite This Article

金亮, 尹振豪, 刘璐, 宋居恒, 刘元凯. 基于残差U-Net和自注意力Transformer编码器的磁场预测方法[J]. 电工技术学报, 2024, 39(10): 2937-2952, 16.

Funding

Supported by the General Program of the National Natural Science Foundation of China (51977148), the Major Research Plan of the National Natural Science Foundation of China (92066206), and the Central Government Guided Local Science and Technology Development Fund Project (226Z4503G).

电工技术学报 (Transactions of China Electrotechnical Society)

OA | 北大核心 | CSTPCD | ISSN 1000-6753
