This paper presents a new method for brain tumor segmentation in Magnetic Resonance Imaging (MRI), based on the U-Net architecture with modified residual blocks and the Convolutional Block Attention Module (CBAM). To deepen the network, we replace each convolutional layer with a residual block containing a CBAM module. We also insert the CBAM dual-attention module after the skip connection and the upsampling operation at each layer. This addresses the problem that low-level features carry substantial redundant information when skip connections pass the encoder's feature maps directly to the corresponding decoder layers. Performance is evaluated on the MRI dataset of the Medical Image Computing and Computer Assisted Intervention (MICCAI) 2018 Brain Tumor Segmentation Challenge. Numerical results are reported as Specificity, Sensitivity, 95% Hausdorff distance (HD95), and Dice coefficient (DICE) for the GD-enhancing tumor (ET), tumor core (TC), and whole tumor (WT), respectively. We compare the proposed method with expert manual segmentation and other state-of-the-art methods. Experiments show that RDAU-Net achieves state-of-the-art performance.
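The CBAM attention used above applies channel attention followed by spatial attention to a feature map. The following is a minimal NumPy sketch of that sequential gating idea only; the shared MLP and 7x7 convolution of the full CBAM module, and the paper's exact layer configuration, are omitted, so all sizes and names here are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(fmap):
    # fmap: (C, H, W). Pool over spatial dims, then gate each channel.
    avg = fmap.mean(axis=(1, 2))          # (C,)
    mx = fmap.max(axis=(1, 2))            # (C,)
    weights = sigmoid(avg + mx)           # shared-MLP step omitted for brevity
    return fmap * weights[:, None, None]

def spatial_attention(fmap):
    # Pool over the channel dim, then gate each spatial location.
    avg = fmap.mean(axis=0)               # (H, W)
    mx = fmap.max(axis=0)                 # (H, W)
    weights = sigmoid(avg + mx)           # 7x7 conv step omitted for brevity
    return fmap * weights[None, :, :]

def cbam(fmap):
    # CBAM applies channel attention, then spatial attention, sequentially.
    return spatial_attention(channel_attention(fmap))

x = np.random.rand(8, 16, 16)
y = cbam(x)
print(y.shape)  # (8, 16, 16)
```

Because both attention maps pass through a sigmoid, the module rescales (never enlarges) activations, emphasizing informative channels and locations while keeping the feature map's shape unchanged, which is what lets it slot into skip connections and upsampling paths.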
A superpixel is a small image region composed of adjacent pixels with similar features such as color, luminance, and texture. Most such regions preserve the original information of the image, which speeds up subsequent processing. Most existing superpixel segmentation methods do not use deep learning network architectures, and the few that do rely on very simple networks. In this work, a more complex convolutional neural network with an attention module is applied to superpixel segmentation. The attention module allows the image to be partitioned more accurately, yielding more comprehensive and detailed segmentation results. Experiments on the public BSDS500 dataset show that the method achieves higher superpixel segmentation accuracy, while its segmentation speed is similar to that of existing simple segmentation networks.
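The grouping criterion described above (adjacent pixels with similar color, luminance, and texture) is what classical, non-deep superpixel methods such as SLIC encode as a joint color-and-position distance. A minimal sketch of that distance follows; it illustrates the traditional baseline this paper's attention network improves upon, and the parameter values are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def slic_distance(pix, center, m=10.0, S=20.0):
    """SLIC-style distance: color difference combined with a spatially
    weighted position difference. pix/center = (l, a, b, x, y) in CIELAB
    plus image coordinates; m trades color similarity against spatial
    compactness, and S is the superpixel grid spacing."""
    pix, center = np.asarray(pix, float), np.asarray(center, float)
    d_color = np.linalg.norm(pix[:3] - center[:3])
    d_space = np.linalg.norm(pix[3:] - center[3:])
    return np.sqrt(d_color**2 + (d_space / S) ** 2 * m**2)

# A pixel close to the cluster center in both color and position
# scores lower than one that differs in color.
near = slic_distance([50, 0, 0, 10, 10], [50, 0, 0, 12, 12])
far = slic_distance([80, 20, -10, 10, 10], [50, 0, 0, 12, 12])
print(near < far)  # True
```

Each pixel is assigned to the nearest cluster center under this distance, and centers are iteratively updated, so every superpixel stays spatially compact and internally similar in color.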
Infrared images distinguish target from background through differences in thermal radiation and generally work under any lighting condition; visible images, by comparison, provide abundant detail consistent with the human visual system. In this study, a novel approach to infrared and visible image fusion is proposed, built on a dual-discriminator generative adversarial network with an attention module (DDGANA). Our approach establishes adversarial training between one generator and two discriminators. The generator aims to output a result with contrast information and fine details, while the two discriminators aim to increase the similarity between the generated image and the infrared and visible source images, respectively. After adversarial training, DDGANA outputs images that preserve both the high contrast and the abundant texture detail. Experiments on the TNO dataset show that our approach outperforms the other approaches.
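The one-generator, two-discriminator training described above can be sketched as a pair of adversarial objectives in which each discriminator pulls the fused output toward one modality. The binary cross-entropy losses below are a common GAN formulation used here for illustration, an assumption rather than the paper's exact loss functions.

```python
import numpy as np

def bce(pred, target):
    # Binary cross-entropy on discriminator probability outputs.
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def generator_loss(d_ir_on_fused, d_vis_on_fused):
    # The generator tries to make BOTH discriminators label the fused
    # image as real (target 1), pulling it toward infrared contrast
    # and visible texture simultaneously.
    return (bce(d_ir_on_fused, np.ones_like(d_ir_on_fused)) +
            bce(d_vis_on_fused, np.ones_like(d_vis_on_fused)))

def discriminator_loss(d_on_real, d_on_fused):
    # Each discriminator labels its own source modality real (1)
    # and the generator's fused output fake (0).
    return (bce(d_on_real, np.ones_like(d_on_real)) +
            bce(d_on_fused, np.zeros_like(d_on_fused)))

# When the discriminators are unconvinced (scores near 0), the
# generator's loss is higher than when it fools them (scores near 1).
unconvinced = np.array([0.3, 0.4])
fooled = np.array([0.9, 0.9])
print(generator_loss(unconvinced, unconvinced) >
      generator_loss(fooled, fooled))  # True
```

Training alternates between minimizing the discriminator losses and the generator loss; at equilibrium the fused image satisfies both critics, which is how the method retains infrared contrast and visible texture in a single output.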