Abstract: With the rapid development of aquaculture, problems such as overfeeding, which wastes feed and pollutes the water, and underfeeding, which leaves fish populations malnourished, have become increasingly prominent. An intelligent, precise feeding system for cage culture was proposed based on vision and multiple sensors. By integrating multi-source data, including RGB images, depth images, and pressure and acceleration sensor readings, the system quantified the feeding intensity of adult fish in real time and achieved precise feeding. An improved YOLOv8n-seg model segmented the RGB images, classifying the water-surface fluctuation state into three categories: strong, weak, and none. Within the segmented water-surface region, an HSV color detection method measured the area of feed remaining on the water surface. For the depth images, a frame-difference method analyzed the change in depth values between consecutive frames, quantifying water-surface fluctuations into the same three levels: strong, weak, and none. Key features were extracted from the pressure and acceleration sensor data, and a random forest model classified the feeding state of the fish population, compensating for the limitations of single visual features. Finally, the outputs of the five decision modules were combined through a weighted fusion strategy to establish a real-time feeding decision model. In multiple field tests, the system assessed feeding intensity from feeding behavior with an accuracy of 95.45% and a feeding error rate of 1.72%. It accurately identifies the feeding intensity of fish populations, effectively reduces feed waste and water pollution, and shows good practicality and real-time performance in actual cage-culture environments.
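The weighted fusion of the five decision modules might be sketched as follows. This is a minimal illustration, not the paper's implementation: the module names, weights, and decision thresholds are assumptions chosen for the example, and each module is assumed to emit a normalized feeding-intensity score in [0, 1].

```python
# Hypothetical sketch of the weighted fusion strategy: five decision
# modules each contribute a feeding-intensity score in [0, 1]; a weighted
# sum is thresholded into a feeding command. Weights and thresholds below
# are illustrative assumptions, not values from the paper.

MODULE_WEIGHTS = {
    "rgb_segmentation": 0.30,   # YOLOv8n-seg water-surface state
    "feed_area": 0.20,          # HSV residual-feed area
    "depth_frame_diff": 0.20,   # depth-image frame difference
    "pressure": 0.15,           # pressure-sensor features
    "acceleration": 0.15,       # acceleration-sensor features
}

def fuse_feeding_intensity(scores: dict) -> str:
    """Fuse per-module scores into a feeding-intensity decision."""
    fused = sum(MODULE_WEIGHTS[name] * scores[name] for name in MODULE_WEIGHTS)
    if fused >= 0.6:
        return "strong"   # fish feeding actively: continue feeding
    if fused >= 0.3:
        return "weak"     # reduce the feeding rate
    return "none"         # stop feeding

scores = {
    "rgb_segmentation": 0.9,
    "feed_area": 0.1,           # little residual feed on the surface
    "depth_frame_diff": 0.8,
    "pressure": 0.7,
    "acceleration": 0.6,
}
print(fuse_feeding_intensity(scores))  # prints "strong" (fused score 0.645)
```

In practice the weights would be tuned on labeled feeding sessions, and the thresholded output would drive the feeder's duty cycle.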