Abstract: High-throughput 3D crop phenotyping is one of the core methodologies in modern crop phenomics research, providing crucial data support for holistic morphological structure analysis, precise evaluation of plant architectural traits, and genotype-phenotype association analysis. To address the low efficiency and limited data accuracy inherent in traditional manual measurements, a high-throughput 3D crop phenotyping data acquisition platform was developed based on an unmanned ground vehicle (UGV). The performance of four mainstream sensors (FLIR visible light camera, Kinect DK, Velodyne VLP-16, and Livox Avia) and their corresponding 3D reconstruction algorithms for crop phenotyping was systematically investigated. Specifically, four approaches were compared: 3D reconstruction from visible light images based on structure-from-motion (SfM) and multi-view stereo (MVS); 3D reconstruction from RGB-depth images based on the iterative closest point (ICP) algorithm; point cloud reconstruction from solid-state LiDAR leveraging LiDAR-inertial odometry (LIO); and point cloud stitching from mechanical rotating LiDAR using uniform-velocity frame superposition. Experiments were conducted on potted lettuce plants in a greenhouse, where point cloud data acquired by the four methods underwent standardized processing. An automated processing pipeline was developed, enabling precise extraction and analysis of key phenotypic parameters such as plant height and maximum canopy width. This research thoroughly explored and analyzed the characteristics, advantages, and disadvantages of each method, and their applicability was comprehensively evaluated in terms of point cloud quality, reconstruction efficiency, phenotypic trait accuracy, and system cost.
The findings not only provide an experimental basis for sensor selection and algorithm development for 3D phenotyping UGVs but also offer valuable references for breeders and agronomists in selecting efficient and accurate phenotyping data acquisition approaches.