Robust deep learning-based shrimp counting in an industrial farm setting
Shrimp production is one of the fastest-growing sectors in the aquaculture industry. Despite extensive research in recent years, stocking-density estimation in shrimp systems still depends on manual sampling, which is neither time- nor cost-efficient and additionally compromises shrimp welfare. This paper compares the performance of automatic shrimp-counting solutions for commercial Recirculating Aquaculture System (RAS) based farming systems, using eight Deep Learning based methods. The entire dataset includes 1379 images of shrimp in RAS farming tanks, taken at a distance using an iPhone 11 mini. These were manually annotated with a bounding box for every clearly visible shrimp. The dataset was partitioned into training (60 %, 828 samples), validation (20 %, 276 samples) and test (20 %, 275 samples) splits for training and evaluating the models. The present work demonstrates that state-of-the-art object detection models outperform manual counting and achieve high performance across the entire production range and under various conditions known to be challenging for object detection (dim light, overlapping and small animals, varying acquisition devices, image resolutions and camera-to-object distances). The highest counting performance was obtained with models based on YOLOv5m6 and Faster R-CNN (as opposed to a neural network autoencoder architecture estimating a density map). The best model generalizes well to an independent test set and even shows promising performance when tested on different taxa. The model performs best at densities below 200 shrimp per image, with an overall error of 5.97 %. It is assumed that this performance can be improved by increasing the dataset size, especially with images at high shrimp stocking density, and an error below the 5 % threshold, which would allow deployment of the model in an industrial setting, appears to be within reach.
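The evaluation protocol above can be sketched in a few lines. The split fractions (60/20/20) and total image count (1379) are taken from the abstract; the exact split procedure and error definition are not specified there, so the random shuffle and the mean absolute percentage count error below are assumptions for illustration (note that naive rounding yields split sizes of 827/276/276 rather than the paper's exact 828/276/275).

```python
# Sketch of a 60/20/20 dataset split and a per-image counting-error metric.
# ASSUMPTIONS: random shuffling before splitting, and mean absolute percentage
# error over per-image counts as the error definition; neither is confirmed
# by the abstract.
import random


def split_dataset(items, seed=0, fractions=(0.6, 0.2, 0.2)):
    """Shuffle items deterministically and partition into train/val/test."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = round(fractions[0] * n)
    n_val = round(fractions[1] * n)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])


def counting_error_pct(predicted, actual):
    """Mean absolute percentage error between per-image shrimp counts."""
    errors = [abs(p - a) / a for p, a in zip(predicted, actual) if a > 0]
    return 100.0 * sum(errors) / len(errors)


train, val, test = split_dataset(range(1379))
print(len(train), len(val), len(test))
```

A hypothetical usage: with predicted counts [95, 210] against ground-truth counts [100, 200], `counting_error_pct` returns 5.0, i.e. a 5 % mean counting error.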