Paper
20 December 2021
A new self-supervised method for supervised learning
Yuhang Yang, Zilin Ding, Xuan Cheng, Xiaomin Wang, Ming Liu
Proceedings Volume 12155, International Conference on Computer Vision, Application, and Design (CVAD 2021); 121550E (2021) https://doi.org/10.1117/12.2626541
Event: International Conference on Computer Vision, Application, and Design (CVAD 2021), 2021, Sanya, China
Abstract
In traditional self-supervised visual feature learning, convolutional neural networks (ConvNets) are trained on a pretext task using only unlabeled data, so that they encode high-level semantic visual representations for downstream tasks of interest. Existing pretext tasks are mostly defined on raw images or videos. In this work, starting instead from the feature layers, we propose a completely new pretext task formulated within ConvNets themselves, and use it to enhance supervised learning on fully labeled datasets. We discard channels of the feature maps after particular convolutional layers to generate self-supervised labels, and combine these with the original labels for classification. Our objective is to mine richer feature information by requiring ConvNets to identify which channels are missing while performing classification. Experiments show that our method yields consistent improvements across multiple models and datasets.
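The channel-dropping idea described above can be sketched in a few lines. The snippet below is a hypothetical illustration, not the paper's implementation: the functions `drop_channel` and `combined_label`, the zeroing of the dropped channel, and the joint label encoding (class index folded together with the dropped-channel index) are all assumptions made for clarity, since the abstract does not specify these details.

```python
import numpy as np

def drop_channel(feature_map, channel_idx):
    """Zero out one channel of a (C, H, W) feature map.

    The dropped channel index serves as a self-supervised label:
    the network must predict which channel is missing in addition
    to classifying the image. (Illustrative sketch only; the
    paper's exact insertion points are not specified here.)
    """
    out = feature_map.copy()
    out[channel_idx] = 0.0
    return out

def combined_label(class_label, channel_idx, num_channels):
    """Fold the original class label and the self-supervised
    channel label into a single joint label (one assumed encoding)."""
    return class_label * num_channels + channel_idx

# Toy example: a 4-channel feature map with channel 2 discarded.
rng = np.random.default_rng(0)
fmap = rng.standard_normal((4, 8, 8))
dropped = drop_channel(fmap, 2)
assert dropped[2].sum() == 0.0            # dropped channel is zeroed
assert np.allclose(dropped[0], fmap[0])   # other channels untouched
print(combined_label(class_label=3, channel_idx=2, num_channels=4))  # 14
```

In this encoding, a classifier over the joint labels implicitly learns both the original class and which channel was removed, which is one way the combined objective could be realized.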
© (2021) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
KEYWORDS
Machine learning
Visualization
Mining
Data modeling
Gallium nitride
Image classification
Video
