
Recognizing pig behavior using feature point detection

倪彭飞 周素茵 叶俊华 徐爱俊

农业工程学报 2025, Vol. 41, Issue (6): 173-184, 12. DOI: 10.11975/j.issn.1002-6819.202408188


倪彭飞¹ 周素茵¹ 叶俊华² 徐爱俊³

Author information

  • 1. College of Mathematics and Computer Science, Zhejiang A&F University, Hangzhou 311300, China
  • 2. College of Environment and Resources, Zhejiang A&F University, Hangzhou 311300, China
  • 3. College of Mathematics and Computer Science, Zhejiang A&F University, Hangzhou 311300, China; Key Laboratory of Agricultural Intelligent Perception and Robotics of Zhejiang Province, Hangzhou 311300, China

Abstract

Pig farming has been shifting toward intensive, intelligent production in recent years, driven by advances in artificial intelligence, deep learning, and automation. Machine vision and deep learning can be combined to achieve non-invasive individual identification and behavior monitoring, for which it is crucial to capture the characteristic information that pigs generate during daily activity. However, because pigs change posture frequently, existing approaches to pig feature extraction leave behavior recognition complex and inefficient. This study proposes a feature point detection model (YOLO-ASF-P2) that extracts feature points from key areas of the pig's body, together with a behavior recognition model (CNN-BiGRU) that exploits the temporal information carried by those feature points.

First, video and image data of pigs were collected with multi-angle cameras deployed in a pig house, and two datasets were built: one for feature point detection and one for behavior recognition. Because traditional feature extraction suffers from complex computation, redundant feature information, and low model robustness, the original YOLOv8s-Pose model was improved into YOLO-ASF-P2. The feature information of the P2 detection layer was exploited for small targets, and the attention scale sequence fusion (ASF) architecture was incorporated to focus on the pigs' key feature points. The scale sequence feature fusion (SSFF) module applies a Gaussian kernel and nearest-neighbor interpolation to align the multi-scale feature maps produced at different downsampling rates (the P2, P3, P4, and P5 detection layers) to the resolution of the high-resolution feature map. The triple feature encoding (TFE) module captures the fine local details of small targets and then fuses local and global feature information. The channel and position attention mechanism module (CPAM) captures and refines the spatial positioning information of small targets and extracts the important features from the different channels of the feature map, improving the model's localization accuracy.

The CNN-BiGRU model was used to recognize pig behavior. Bidirectional gated recurrent unit (BiGRU) layers capture both the forward and backward information of the sequence data, and the output is then weighted by an attention module (AttentionBlock).

Both models performed well and stably on the self-built datasets. The average recognition accuracy reached 96% over the three behaviors of sitting, standing, and lying. Specifically, YOLO-ASF-P2 achieved a detection precision of 92.5%, a recall of 90%, and an average precision (AP50-95) of 68.2%, with only 18.4 M parameters and a computational cost of 39.6 G. Precision, recall, AP50-95, and computational cost were 1.1%, 2.3%, 1.5%, and 32.9% higher than those of the original model, respectively, while the parameter count was reduced by 17.5%. Compared with MMPose, YOLO-ASF-P2 improved AP50-95 by 17.4% and precision by 2.9% while maintaining nearly the same recall. Compared with RTMPose, it improved precision, recall, and AP50-95 with fewer parameters. It achieved precision similar to YOLOv5s-Pose, improved recall and AP50-95 over both YOLOv5s-Pose and YOLOv7s-Pose, and showed slightly lower precision than YOLOv7s-Pose. The lightweight model thus delivers better pig feature point recognition. The CNN-BiGRU behavior model likewise achieved high average recognition accuracy and stable performance, with 0.151 M parameters and a computational cost of 27.1 G.

In summary, the integrated YOLO-ASF-P2 and CNN-BiGRU models significantly improve the accuracy and robustness of pig feature point detection and behavior recognition, offering a valuable tool for the intensive, intelligent development of pig farming.
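The SSFF alignment step described in the abstract (Gaussian smoothing followed by nearest-neighbor interpolation to bring coarser detection layers up to a common resolution) can be sketched in a simplified single-channel form. This is an illustrative assumption, not the paper's implementation: the function names, the 3×3 kernel, and the zero-padded convolution are all stand-ins.

```python
import numpy as np

def gaussian_kernel(size=3, sigma=1.0):
    # normalized 2-D Gaussian kernel (size and sigma are illustrative choices)
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    k = np.outer(g, g)
    return k / k.sum()

def smooth(x, k):
    # naive 'same' convolution with zero padding
    p = k.shape[0] // 2
    xp = np.pad(x, p)
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = (xp[i:i + k.shape[0], j:j + k.shape[1]] * k).sum()
    return out

def nn_upsample(x, scale):
    # nearest-neighbor interpolation: repeat each row and column `scale` times
    return x.repeat(scale, axis=0).repeat(scale, axis=1)

def align_and_stack(maps):
    # maps: single-channel feature maps, finest (e.g. P2) first, coarser after
    base = maps[0].shape[0]
    k = gaussian_kernel()
    aligned = [nn_upsample(smooth(m, k), base // m.shape[0]) for m in maps]
    return np.stack(aligned)  # (n_scales, base, base), all at the P2 resolution
```

With, say, 8×8, 4×4, and 2×2 maps standing in for the P2/P3/P4 layers, `align_and_stack` smooths each map and upsamples it to 8×8 so the scales can be fused elementwise.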
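The attention weighting applied to the BiGRU output can likewise be illustrated with a minimal sketch of attention-weighted temporal pooling. In a real CNN-BiGRU the hidden states `h` would come from convolutional and recurrent layers and the attention vector `w_a` would be learned; both are hypothetical stand-ins here.

```python
import numpy as np

def attention_pool(h, w_a):
    """Collapse a sequence of hidden states into one context vector.

    h   : (T, D) array, e.g. BiGRU states over T video frames
    w_a : (D,) attention vector (learned in practice; a stand-in here)
    """
    scores = h @ w_a                                # one relevance score per time step
    scores = scores - scores.max()                  # stabilize the softmax
    alpha = np.exp(scores) / np.exp(scores).sum()   # attention weights, sum to 1
    return alpha @ h                                # weighted sum over time -> (D,)
```

The resulting context vector would then feed a small classification head over the sitting/standing/lying classes; when all scores are equal the pooling degenerates to a plain temporal average.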

Keywords

feature point detection / pig behavior / deep learning / time series / YOLOv8s-Pose / BiGRU

Classification

Information technology and security science

Cite this article

倪彭飞, 周素茵, 叶俊华, 徐爱俊. 基于特征点检测的生猪行为识别方法[J]. 农业工程学报, 2025, 41(6): 173-184, 12.

Funding

Zhejiang Province "Leading Goose" R&D Program (2022C02050)

农业工程学报 (OA, 北大核心), ISSN 1002-6819
