Abstract: During the intelligent harvesting of premium tea, existing object detection algorithms suffer from insufficient detection precision for tender tea shoots and slow inference speed, making deployment on edge computing devices difficult. To address these issues, YOLO-RET, a lightweight tea shoot detection model, was proposed based on an improved YOLO v8n architecture. By introducing a redesigned lightweight feature extraction module (RGCSPELAN) and an enhanced multi-scale feature fusion pyramid structure (EMBSFPN), the model significantly strengthened feature extraction and fusion capabilities while substantially reducing the parameter count. Additionally, the Focaler-IoU loss function was incorporated to address sample distribution imbalance, further improving detection accuracy and robustness. Experimental results showed that, compared with YOLO v5n, YOLO v8n, YOLO v8s, and YOLO v10n, YOLO-RET improved precision by 3.2, 3.0, 0.8, and 4.3 percentage points, respectively, with corresponding mAP@0.5 gains of 2.7, 3.2, 0.4, and 3.1 percentage points. Furthermore, YOLO-RET reduced the parameter count by 43% and the computational complexity by 2.2 ×10? compared with the original YOLO v8n. The algorithm was deployed on an ATK-DLRK3568 development board with quantization optimization, which lowered hardware resource requirements while maintaining high recognition accuracy, improving inference efficiency, and reducing the computational burden on edge devices. These results offer an efficient and accurate solution for real-time object detection on edge computing platforms.
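The Focaler-IoU idea mentioned above reweights the IoU-based regression loss by linearly remapping IoU over an interval [d, u], so training can focus on easy or hard samples depending on the dataset's sample distribution. The sketch below illustrates this remapping under that assumption; the interval bounds `d` and `u` are illustrative placeholders, not values reported in this work.

```python
def focaler_iou(iou: float, d: float = 0.0, u: float = 0.95) -> float:
    """Linearly remap IoU onto [0, 1] over the interval [d, u].

    IoU values below d map to 0 and values above u map to 1, so the
    resulting loss concentrates gradient signal on samples whose IoU
    falls inside [d, u]. The defaults here are illustrative only.
    """
    if iou < d:
        return 0.0
    if iou > u:
        return 1.0
    return (iou - d) / (u - d)


def focaler_iou_loss(iou: float, d: float = 0.0, u: float = 0.95) -> float:
    """Loss form: 1 minus the remapped IoU."""
    return 1.0 - focaler_iou(iou, d, u)
```

In practice this remapping would replace the plain IoU term inside the detector's box-regression loss, with d and u tuned to whether easy or hard samples dominate.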