Abstract: An improved MobileNet-V2 convolutional neural network model was proposed to balance computational resource consumption against recognition accuracy in apple leaf disease identification. First, data augmentation techniques were employed to increase the diversity of image samples, thereby enhancing the model's generalization ability. Next, a transfer learning strategy was introduced: pretraining and freezing some layer parameters reduced training time and computational resource consumption. Feature extraction was further optimized by incorporating group convolutions, a squeeze-and-excitation (SE) channel attention module, and weight pruning, which improved both computational efficiency and accuracy. Experimental results demonstrated that the optimized model achieved an accuracy of 99.1%, exceeding the original MobileNet-V2 by 5.9 percentage points and outperforming traditional convolutional neural network models such as ResNet50, VGG16, and Xception by 4.6, 9.4, and 4.0 percentage points, respectively. The model's average precision was 98.9%, average recall 98.75%, and F1 score 98.82%, with only 4.06×10? parameters. In practical deployment, the model achieved fast and accurate disease detection with an average inference time of 507 ms per image, demonstrating its practical value and potential for widespread application, and offering insights for the development of smart agriculture.
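The SE channel attention module mentioned above can be illustrated with a minimal NumPy sketch (not the authors' implementation): a global-average-pooling "squeeze" produces one descriptor per channel, a small bottleneck of two fully connected layers with a sigmoid produces per-channel gates, and the input feature map is rescaled by those gates. The weight shapes `w1` and `w2` and the reduction ratio are illustrative assumptions.

```python
import numpy as np

def se_block(x, w1, w2):
    """Squeeze-and-Excitation applied to a feature map x of shape (C, H, W).

    w1: (C//r, C) reduction weights, w2: (C, C//r) expansion weights.
    Both matrices are hypothetical stand-ins for learned parameters.
    """
    # Squeeze: global average pooling -> per-channel descriptor of shape (C,)
    z = x.mean(axis=(1, 2))
    # Excitation: FC -> ReLU -> FC -> sigmoid yields channel weights in (0, 1)
    s = 1.0 / (1.0 + np.exp(-(w2 @ np.maximum(w1 @ z, 0.0))))
    # Scale: reweight each channel of the original feature map
    return x * s[:, None, None]

# Toy usage: 8 channels, 5x5 spatial map, reduction ratio r = 4
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 5, 5))
w1 = rng.standard_normal((2, 8)) * 0.1
w2 = rng.standard_normal((8, 2)) * 0.1
y = se_block(x, w1, w2)
```

Because the gates depend only on channel statistics, each channel is multiplied by a single scalar, which is what lets the network emphasize informative channels at negligible computational cost.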