Abstract: In complex unstructured farmland environments, accurate extraction of navigation lines is crucial for agricultural machinery and agricultural robots to operate autonomously. Challenging factors common in such environments, including variable lighting, undulating terrain, and weed interference, cause traditional image processing methods to perform poorly in adaptability, accuracy, and real-time performance, making it difficult to meet the visual navigation needs of smart agriculture. To address these issues, a deep learning model named ResAC-UNet was proposed based on an improved UNet. The model replaced the encoder of the traditional UNet with a ResNet-50 backbone to enhance feature extraction, and optimized the skip connections to improve segmentation speed and real-time response. An ASPP module was introduced at the network bottleneck to model multi-scale receptive fields, preserving high-resolution features while capturing richer contextual information. In addition, the model integrated the CBAM attention module to sharpen the perception of crop row boundaries, effectively preventing the loss of key feature information and further improving segmentation quality. Based on the segmentation results, the row anchor method and the RANSAC algorithm were used to extract and smooth high-precision navigation lines. The acquired front-view image was converted into a bird's-eye view to eliminate the perspective effect, and a top view of the crop rows within the ROI was generated and retained. Experimental results showed that ResAC-UNet achieved 99.23% precision, 95.44% MPA, 85.23% MIoU, and 94.71% recall, outperforming current mainstream segmentation networks such as Segformer, DDRNet, HRNet, and DeepLabV3+.
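The RANSAC-based line fitting step described above can be sketched as follows. This is a minimal illustrative re-implementation, not the paper's code: the function name, iteration count, inlier tolerance, and the synthetic crop-row points are all assumptions, and the least-squares refit stands in for the paper's unspecified smoothing step.

```python
import numpy as np

def ransac_line(points, n_iters=200, inlier_tol=2.0, seed=0):
    """Fit a line y = m*x + b to 2-D points with RANSAC.

    Hypothetical sketch of the navigation-line fitting stage:
    repeatedly sample two points, count inliers within inlier_tol,
    then refit the best inlier set by least squares.
    """
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:  # skip vertical candidate lines in this sketch
            continue
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        resid = np.abs(points[:, 1] - (m * points[:, 0] + b))
        inliers = resid < inlier_tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on the best inlier set to smooth the final line estimate.
    x, y = points[best_inliers, 0], points[best_inliers, 1]
    m, b = np.polyfit(x, y, 1)
    return m, b

# Synthetic row-anchor centres on y = 0.5x + 10, plus weed-like outliers.
xs = np.arange(0, 100, 5, dtype=float)
pts = np.column_stack([xs, 0.5 * xs + 10])
pts = np.vstack([pts, [[20.0, 80.0], [60.0, 5.0], [80.0, 90.0]]])
m, b = ransac_line(pts)
```

Because the outliers lie far from the true row line, the recovered slope and intercept match the inlier line closely; in the full pipeline, one such fit would be run per detected crop row after the bird's-eye-view transform.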
The average inference time of ResAC-UNet was 15.26 ms, meeting the real-time recognition requirements of visual navigation for intelligent agricultural machinery. Three navigation lines could be extracted within the camera's ROI; the maximum angle error of the middle navigation line was only 0.96°, and the maximum pixel deviation was 4.3 pixels, enabling reliable extraction of high-quality navigation lines. Compared with other navigation path extraction methods, the proposed method achieved higher accuracy and stability. These results provide an efficient and robust visual perception solution for the autonomous field navigation of intelligent agricultural machinery, with clear practical value.