Rotated target classification is a key problem in the automatic target recognition of sonar images. We propose a novel rotated target classification method for sonar images based on classification after alignment. The proposed method first aligns the orientations of rotated targets to a canonical orientation, and then uses the aligned images to train and test a Convolutional Neural Network (CNN) that predicts target categories. Building on rotated target detection in Synthetic Aperture Sonar (SAS) images, the proposed method was applied to classify the detected rotated targets and compared with the widely used data augmentation method. The results demonstrate that the proposed method significantly improves classification accuracy, accelerates both the training and inference of the CNN, and decreases the number of CNN parameters.
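A minimal sketch of the alignment step, assuming the upstream detector provides an estimated orientation angle for each detected target chip; the helper name align_target and the use of torchvision are illustrative assumptions, not the paper's published implementation:

import torchvision.transforms.functional as TF

def align_target(chip, estimated_angle_deg):
    # Rotate the detected target chip back to the canonical orientation.
    # Undoing an estimated rotation of +theta means rotating by -theta;
    # TF.rotate treats positive angles as counter-clockwise.
    return TF.rotate(chip, -estimated_angle_deg)

The aligned chips then serve as both the training and test inputs, so the classifier only ever sees targets in one canonical orientation.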
Rotated target recognition is a challenge for Convolutional Neural Networks (CNNs), and the current solution is to make CNNs rotation invariant through data augmentation. However, data augmentation makes a CNN prone to overfitting on small-scale sonar image datasets, and increases its number of parameters and training time. This paper proposes to recognize rotated targets in sonar images using a novel CNN with Rotated Inputs (RICNN), which does not need data augmentation. During training, RICNN is trained with sonar images of targets at only one orientation, which prevents it from learning multiple rotated versions of the same targets and reduces both the number of parameters and the training time of the CNN. During testing, RICNN calculates classification scores for each test image and all of its possible rotated versions; the maximum of these scores is used to simultaneously estimate the category and orientation of each target. In addition, to improve the generalization of RICNN on imbalanced sonar datasets, this paper also designs an imbalanced data sampler. Experiments on a self-made small, imbalanced sonar image rotated target recognition dataset show that the improved RICNN achieves 4.25% higher classification accuracy than data augmentation, and reduces the number of parameters and the training time to 2.25% and 19.2%, respectively, of those of the data augmentation method. Moreover, RICNN achieves orientation estimation accuracy comparable to that of a CNN orientation regressor trained with data augmentation. Code and dataset are publicly available.
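A minimal sketch of an inverse-frequency sampler for the imbalanced-dataset issue, using PyTorch's WeightedRandomSampler; the paper's own sampler design may differ, and make_balanced_loader is a hypothetical helper name:

import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

def make_balanced_loader(dataset, labels, batch_size=32):
    labels = torch.as_tensor(labels)
    counts = torch.bincount(labels)
    # Inverse-frequency weights: samples from rare classes are drawn more often.
    sample_weights = (1.0 / counts.float())[labels]
    sampler = WeightedRandomSampler(sample_weights,
                                    num_samples=len(labels),
                                    replacement=True)
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)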
Weight sharing across different locations makes Convolutional Neural Networks (CNNs) shift invariant in space, i.e., the weights learned at one location can be applied to recognize objects at other locations. However, such a weight-sharing mechanism has been lacking in Rotated Pattern Recognition (RPR) tasks, so CNNs have to learn training samples in different orientations by rote. Because this rote-learning strategy greatly increases the difficulty of training, a new solution for RPR tasks, Pre-Rotation Only At Inference time (PROAI), is proposed to provide CNNs with rotation invariance. The core idea of PROAI is to share CNN weights across multiple rotated versions of the test sample. At training time, a CNN is trained with samples at only one angle; at inference time, test samples are pre-rotated at different angles and then fed into the CNN to calculate classification confidences; finally, both the category and the orientation are predicted from the position of the maximum of these confidences. By adopting PROAI, the recognition ability learned at one orientation can be generalized to patterns at any other orientation, and both the number of parameters and the training time of a CNN on RPR tasks can be greatly reduced. Experiments show that PROAI enables CNNs with fewer parameters and less training time to achieve state-of-the-art classification and orientation estimation performance on both the rotated MNIST and rotated Fashion-MNIST datasets.
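A minimal sketch of PROAI inference as described above, assuming a trained classifier model that maps a batch of image tensors to per-class scores; the angle step, sign convention, and helper name proai_predict are illustrative assumptions:

import torch
import torchvision.transforms.functional as TF

@torch.no_grad()
def proai_predict(model, image, angle_step=10):
    # Pre-rotate the test image at every candidate angle and score each version.
    angles = list(range(0, 360, angle_step))
    batch = torch.stack([TF.rotate(image, float(a)) for a in angles])
    scores = model(batch).softmax(dim=1)   # shape: (num_angles, num_classes)
    # The position of the global maximum gives both predictions at once.
    flat = scores.argmax().item()
    angle_idx, category = divmod(flat, scores.shape[1])
    # The pre-rotation that maximizes confidence is the one that aligns the
    # target, so the estimated orientation is its negative (mod 360).
    return category, -angles[angle_idx] % 360

Because a single max over the confidence grid yields both outputs, no separate orientation regressor is required.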