Very deep convolutional neural networks (CNNs) have shown great power in image compressed sensing (CS) reconstruction and achieved significant improvements over traditional methods. Among these CNN-based methods, the number of convolutional feature maps is critical to network performance. However, existing algorithms apply only uniform weighting to feature maps and do not exploit differences among image features to adaptively assign feature weights. To address this issue, we propose an attention mechanism network for image compressed sensing reconstruction (AM-CSNet). AM-CSNet uses multiple attention modules (AM) to adaptively learn feature weights along the channel and spatial dimensions, making the model more lightweight and efficient. To maximize the performance of AM-CSNet, we use a Residual Feature Aggregation Group (RFAG) to fully retain the features on different residual branches. Extensive CS experiments demonstrate that the proposed AM-CSNet is superior to many other state-of-the-art methods, such as TIP-CSNet and SCSNet.
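The abstract does not give the internals of the attention modules, but the idea of adaptively weighting feature maps along the channel and spatial dimensions can be sketched as below. This is a minimal NumPy illustration, not the authors' architecture: the MLP weights `w1`, `w2`, the channel-reduction ratio `r`, and the scalar mixing coefficients `k` are all hypothetical stand-ins, and the weights are random rather than learned.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Squeeze-and-excite style channel weighting (illustrative).

    feat: (C, H, W). Global average pooling gives one statistic per
    channel; a small 2-layer MLP maps it to a per-channel gate in (0, 1).
    """
    pooled = feat.mean(axis=(1, 2))                       # (C,)
    gate = sigmoid(w2 @ np.maximum(w1 @ pooled, 0.0))     # (C,)
    return feat * gate[:, None, None]

def spatial_attention(feat, k):
    """Spatial weighting from channel-pooled maps (illustrative).

    Average- and max-pooling across channels give two (H, W) maps; a
    hypothetical scalar mix replaces the usual learned convolution.
    """
    avg = feat.mean(axis=0)                               # (H, W)
    mx = feat.max(axis=0)                                 # (H, W)
    mask = sigmoid(k[0] * avg + k[1] * mx)                # (H, W)
    return feat * mask[None, :, :]

# Demo with random features and random (untrained) parameters.
rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2
feat = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C))    # channel reduction C -> C/r
w2 = rng.standard_normal((C, C // r))    # expansion back to C
k = rng.standard_normal(2)

out = spatial_attention(channel_attention(feat, w1, w2), k)
print(out.shape)  # (8, 4, 4): attention rescales features, preserving shape
```

The point of the sketch is that both gates are data-dependent: channels and spatial positions with more informative statistics receive larger multiplicative weights, in contrast to the uniform weighting the abstract criticizes.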