Fisheries is one of the few disciplines in biology where data collection continues to rely on the removal and destruction of the objects being studied. This practice has come under increasing scrutiny in recent years, as research projects have been terminated due to denied permits. In some instances, research budgets have had to absorb the cost of purchasing quota for the fish captured, and results have been difficult to publish due to animal welfare concerns. In this paper, we propose a non-extractive sampling system to localize fish in underwater images obtained at aquaculture farms. These images suffer from several issues: 1) low luminance, which significantly hinders fish detection, 2) severe water turbidity caused by the mass of fish caged in a confined area, and 3) the protective enclosure designed for the camera, which causes fish to shy away from it. Images acquired in such highly turbid waters are difficult to restore because 1) the fish feeding process adds noise (feed particles) to the already turbid water, and 2) the aquaculture farms host a healthy level of biodiversity. In this work, we investigate the performance of Faster R-CNN in localizing fish in this highly turbid dataset under different base network architectures. Several base networks, namely MobileNet, MobileNetV2, DenseNet, and ResNet, are employed. Experimental results show that MobileNetV2, trained with a learning rate of 0.01 for 500 iterations over 15 epochs and achieving 87.52% classification accuracy, is the most feasible for deployment in a resource-constrained environment, with about 6.7M parameters requiring 27.2 MB of storage. These findings will be useful when Faster R-CNN is embedded in equipment placed underwater for monitoring purposes.
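The abstract does not state which framework was used for the experiments; the sketch below is only an illustration of one way to assemble a Faster R-CNN detector with a MobileNetV2 base network in PyTorch/torchvision, using the learning rate reported above (0.01). The two-class setup (background and fish), the anchor settings, the optimizer, and all other values are assumptions made for this example, not the authors' actual configuration.

```python
import torch
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator

# MobileNetV2 feature extractor as the base network (ImageNet-pretrained weights assumed).
backbone = torchvision.models.mobilenet_v2(weights="DEFAULT").features
backbone.out_channels = 1280  # MobileNetV2's final feature map has 1280 channels

# RPN anchor generator over the single backbone feature map (sizes/ratios are assumptions).
anchor_generator = AnchorGenerator(
    sizes=((32, 64, 128, 256, 512),),
    aspect_ratios=((0.5, 1.0, 2.0),),
)

# ROI pooling over the single feature map produced by the backbone.
roi_pooler = torchvision.ops.MultiScaleRoIAlign(
    featmap_names=["0"], output_size=7, sampling_ratio=2
)

# Two classes assumed here: background and fish.
model = FasterRCNN(
    backbone,
    num_classes=2,
    rpn_anchor_generator=anchor_generator,
    box_roi_pool=roi_pooler,
)

# SGD with the learning rate reported in the abstract; momentum/weight decay are assumptions.
params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=0.01, momentum=0.9, weight_decay=0.0005)
```

Swapping in a different base network (e.g. a ResNet or DenseNet feature extractor) follows the same pattern: replace `backbone` and set `backbone.out_channels` to that network's final feature depth.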