Paper
3 January 2020 Perceptual spatial-temporal video compressive sensing network
Wan Liu, Xuemei Xie, Zhifu Zhao, Guangming Shi
Proceedings Volume 11373, Eleventh International Conference on Graphics and Image Processing (ICGIP 2019); 113731M (2020) https://doi.org/10.1117/12.2558039
Event: Eleventh International Conference on Graphics and Image Processing, 2019, Hangzhou, China
Abstract
Deep neural networks (DNNs) have recently been applied to the video compressive sensing (VCS) task. Existing DNN-based VCS methods compress and reconstruct video along either the spatial or the temporal dimension alone, ignoring the spatial-temporal correlation of the video. Moreover, they generally adopt a pixel-wise loss function, which over-smooths the results. In this paper, we propose a perceptual spatial-temporal VCS network. The spatial-temporal VCS network compresses and recovers the video in both the space and time dimensions, preserving its spatial-temporal correlation. Besides, we refine the perceptual loss by selecting specific feature-wise loss terms and adding a pixel-wise loss term; the refined perceptual loss guides the spatial-temporal network to retain more texture and structure (a sketch of such a loss follows below). Experimental results show that the proposed method achieves better visual quality with less recovery time than state-of-the-art methods.
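To make the loss design concrete, here is a minimal PyTorch sketch of what a refined perceptual loss of this kind could look like: selected feature-wise terms combined with a pixel-wise term. The frozen VGG-16 backbone, the layer indices (relu2_2, relu3_3), the weights, and the class name RefinedPerceptualLoss are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torchvision.models as models


class RefinedPerceptualLoss(nn.Module):
    """Selected feature-wise loss terms plus a pixel-wise term (illustrative sketch)."""

    def __init__(self, feature_layers=(8, 15), feature_weight=1.0, pixel_weight=1.0):
        super().__init__()
        # Frozen VGG-16 feature extractor (an assumption; the abstract does not
        # name the backbone or the selected feature layers).
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()
        for p in vgg.parameters():
            p.requires_grad = False
        self.vgg = vgg
        self.feature_layers = set(feature_layers)  # indices 8 / 15 = relu2_2 / relu3_3
        self.feature_weight = feature_weight
        self.pixel_weight = pixel_weight
        self.mse = nn.MSELoss()

    def _features(self, x):
        # Run the extractor and collect activations at the selected layers.
        feats = []
        for i, layer in enumerate(self.vgg):
            x = layer(x)
            if i in self.feature_layers:
                feats.append(x)
                if len(feats) == len(self.feature_layers):
                    break
        return feats

    def forward(self, recon, target):
        # Pixel-wise term keeps the reconstruction faithful to the ground truth.
        loss = self.pixel_weight * self.mse(recon, target)
        # Feature-wise terms encourage retaining textures and structures.
        for f_r, f_t in zip(self._features(recon), self._features(target)):
            loss = loss + self.feature_weight * self.mse(f_r, f_t)
        return loss


if __name__ == "__main__":
    # Video frames (B, T, C, H, W) are flattened into the batch dimension,
    # since the 2-D feature extractor operates frame by frame. In practice the
    # inputs would be ImageNet-normalized before the VGG pass.
    b, t = 2, 4
    recon = torch.rand(b * t, 3, 64, 64)
    target = torch.rand(b * t, 3, 64, 64)
    criterion = RefinedPerceptualLoss()
    print(criterion(recon, target).item())
```

Treating frames as a flat image batch is one simple way to apply a 2-D perceptual loss to video; how the paper itself handles the temporal dimension in the loss is not specified in the abstract.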
© (2020) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Wan Liu, Xuemei Xie, Zhifu Zhao, and Guangming Shi "Perceptual spatial-temporal video compressive sensing network", Proc. SPIE 11373, Eleventh International Conference on Graphics and Image Processing (ICGIP 2019), 113731M (3 January 2020); https://doi.org/10.1117/12.2558039
KEYWORDS: Video, Compressed sensing, Feature extraction, Convolution, Neural networks