计算机科学与探索, 2024, Vol. 18, Issue (5): 1160-1181, 22. DOI: 10.3778/j.issn.1673-9418.2306024
深度学习的自然场景文本识别方法综述
Survey on Natural Scene Text Recognition Methods of Deep Learning
Abstract
Natural scene text recognition holds significant value in both academic research and practical applications, making it one of the research hotspots in the field of computer vision. However, the recognition process faces challenges such as diverse text styles and complex background environments, leading to unsatisfactory efficiency and accuracy. Traditional text recognition methods based on manually designed features have limited representation capabilities, which are insufficient for effectively handling the complex tasks of natural scene text recognition. In recent years, significant progress has been made in natural scene text recognition by adopting deep learning methods. This paper systematically reviews the recent research work in this area. Firstly, natural scene text recognition methods are categorized into segmentation-based and non-segmentation-based approaches according to whether character segmentation is required. The non-segmentation-based methods are further subdivided according to their technical implementation characteristics, and the working principles of the most representative methods in each category are described. Next, commonly used datasets and evaluation metrics are introduced, and the performance of various methods is compared on these datasets. The advantages and limitations of different approaches are discussed from multiple perspectives. Finally, the remaining shortcomings and challenges are summarized, and future development trends are put forward.
Key words
text recognition / deep learning / natural scene
Classification
Information Technology and Security Science
Cite this article
曾凡智, 冯文婕, 周燕. 深度学习的自然场景文本识别方法综述[J]. 计算机科学与探索, 2024, 18(5): 1160-1181, 22.
Funding
National Natural Science Foundation of China (61972091)
Natural Science Foundation of Guangdong Province (2022A1515010101, 2021A1515012639)
Key Research Project of Universities of Guangdong Province (2019KZDXM007, 2020ZDZX3049)
Science and Technology Innovation Project of Foshan (2020001003285)
Educational Science Planning Project of Guangdong Province (2021GXJK445)
This work was supported by the National Natural Science Foundation of China (61972091), the Natural Science Foundation of Guangdong Province (2022A1515010101, 2021A1515012639), the Key Research Project of Universities of Guangdong Province (2019KZDXM007, 2020ZDZX3049), the Science and Technology Innovation Project of Foshan (2020001003285), and the Educational Science Planning Project of Guangdong Province (2021GXJK445).