Lightweight Target Recognition and Precise Localization Method for Automated Lotus Seedpod Harvesting
Fund Project: National Natural Science Foundation of China (32171899)




Abstract:

During lotus seedpod harvesting, efficient and precise detection and localization are essential for improving harvesting efficiency and minimizing the risk of mispicking. However, existing lotus seedpod recognition methods predominantly rely on computationally intensive and structurally complex deep learning models, rendering them impractical for real-time field applications. To address this limitation, a lightweight object detection and localization method optimized for lotus seedpod harvesting scenarios was proposed. The method was based on the lightweight lotus segmentation network (LLSegNet), a semantic segmentation model that employed MobileNetV2 as the backbone within the DeepLabv3+ framework. To cope with challenges such as multi-scale variation, difficulty in capturing fine details, and background interference during harvesting, key enhancement strategies were introduced, including dense atrous spatial pyramid pooling, strip pooling, a convolutional block attention module, and an efficient channel attention network. These improvements enhanced multi-scale feature extraction and representation while maintaining the lightweight nature of the overall model. Experiments were carried out on an Ubuntu 20.04 platform using the PyTorch 2.3.1 framework for model training and evaluation. The results demonstrated that the LLSegNet model attained a mean intersection over union (mIoU) of 86.1% and a mean pixel accuracy (mPA) of 92.5%, with a memory footprint of 15.9 MB and a frame rate of 73.4 frames per second (FPS), all superior to mainstream semantic segmentation models. Furthermore, leveraging the high-quality semantic segmentation results produced by the LLSegNet model, a harvesting point localization method that integrated image processing and skeleton analysis was proposed. This method accomplished precise localization of harvesting points through a combination of image preprocessing, skeleton extraction, geometric analysis, and normal vector extension mapping, achieving a success rate of 88.5%. The findings demonstrated that the proposed method not only improved detection and localization accuracy but also remained computationally efficient, showing strong potential for deployment and further application in resource-constrained agricultural environments.
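The reported mIoU and mPA are both per-class means derived from the segmentation confusion matrix. As a minimal illustration of how such scores are computed (not the paper's evaluation code; `segmentation_metrics` and the toy labels are invented for this sketch), assuming NumPy:

```python
import numpy as np

def segmentation_metrics(pred, label, num_classes):
    """Compute mIoU and mPA from flat prediction/ground-truth label arrays."""
    mask = (label >= 0) & (label < num_classes)
    # Confusion matrix via bincount: rows = ground truth, cols = prediction.
    cm = np.bincount(
        num_classes * label[mask].astype(int) + pred[mask].astype(int),
        minlength=num_classes ** 2,
    ).reshape(num_classes, num_classes)
    tp = np.diag(cm)
    iou = tp / (cm.sum(axis=1) + cm.sum(axis=0) - tp)  # per-class IoU
    pa = tp / cm.sum(axis=1)                           # per-class pixel accuracy
    return iou.mean(), pa.mean()                       # mIoU, mPA

# Toy 2-class example (0 = background, 1 = seedpod).
label = np.array([0, 0, 1, 1, 1, 0])
pred = np.array([0, 1, 1, 1, 0, 0])
miou, mpa = segmentation_metrics(pred, label, num_classes=2)
```

Here each class contributes equally to the mean regardless of its pixel count, which is why mIoU is a stricter score than overall pixel accuracy when one class (e.g. background) dominates the image.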

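The normal vector extension mapping step of the localization pipeline described in the abstract admits a simple geometric reading: estimate the local direction of the extracted stalk skeleton, then offset the candidate point along the perpendicular. A hypothetical sketch under that interpretation (`picking_point` and `offset` are illustrative names, not the authors' implementation):

```python
import numpy as np

def picking_point(skeleton_pts, offset):
    """Hypothetical sketch: estimate the stalk direction from ordered skeleton
    points, then map the segment midpoint along the 2-D normal by `offset` pixels."""
    pts = np.asarray(skeleton_pts, dtype=float)
    d = pts[-1] - pts[0]          # chord from first to last skeleton point
    d /= np.linalg.norm(d)        # unit tangent along the stalk
    n = np.array([-d[1], d[0]])   # 2-D normal (90-degree rotation of the tangent)
    mid = pts.mean(axis=0)        # geometric centre of the skeleton segment
    return mid + offset * n       # candidate picking point

# A vertical stalk segment: the normal points sideways from it.
pt = picking_point([(0, 0), (0, 10)], offset=5.0)
```

A real pipeline would first binarize the segmentation mask and thin it to a one-pixel skeleton (e.g. with a morphological skeletonization routine) before this geometric step.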
Cite this article

TANG Tao, YE Bingliang, HU Miao, FENG Rui, YU Gaohong. Lightweight Target Recognition and Precise Localization Method for Automated Lotus Seedpod Harvesting[J]. Transactions of the Chinese Society for Agricultural Machinery, 2026, 57(8): 13-22.

History
  • Received: 2025-01-16
  • Published online: 2026-04-15