Addressing the poor fusion quality of traditional algorithms, where infrared images lack texture information and visible images captured in bad weather cannot be made sufficiently bright, yielding poor signal-to-noise ratios and significant superimposed read noise, a deep learning method for infrared-visible image fusion based on an encoder-decoder architecture is proposed. The image fusion problem is recast as one of maintaining the structure and intensity ratio of the infrared-visible image pair, and a corresponding loss function is designed to widen the weight difference between thermal targets and the background. In addition, single-image super-resolution reconstruction based on a regression network is introduced to address the fact that the mapping functions learned by traditional networks are poorly suited to natural scenes. Forward-generation and reverse-regression models are jointly considered to shrink the irrelevant portion of the mapping space and approach the true scene data through dual mapping constraints. Compared with other state-of-the-art approaches, our method achieves appealing performance in both visual effects and objective assessments. Moreover, it stably produces high-resolution reconstruction results consistent with human visual observation across different scenes while avoiding the trade-off between spatial resolution and thermal radiation information typical of conventional fusion imaging.
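As a rough illustration of the structure-and-intensity-ratio idea, the following PyTorch sketch combines a gradient-based structure term with an intensity term weighted by an IR saliency map that upweights thermal targets over the background. The weighting scheme, function names, and coefficients are illustrative assumptions, not the paper's exact loss.

```python
# A minimal sketch (not the authors' exact loss) of a fusion loss that
# preserves structure (via image gradients) and intensity ratio, with a
# saliency weight map emphasizing hot targets over background.
import torch
import torch.nn.functional as F

def gradient(img):
    """Forward-difference gradients as a cheap structure descriptor."""
    dx = img[..., :, 1:] - img[..., :, :-1]
    dy = img[..., 1:, :] - img[..., :-1, :]
    return dx, dy

def fusion_loss(fused, ir, vis, alpha=0.5, beta=5.0):
    # Saliency weight (assumed form): bright (hot) IR pixels get a weight
    # near 1 + beta, the background stays near 1.
    w = 1.0 + beta * torch.sigmoid((ir - ir.mean()) / (ir.std() + 1e-6))
    # Intensity term: keep the fused intensity close to a weighted IR/VIS mix.
    target = (w * ir + vis) / (w + 1.0)
    l_int = F.l1_loss(fused, target)
    # Structure term: match the stronger of the two source gradients.
    fdx, fdy = gradient(fused)
    idx_, idy_ = gradient(ir)
    vdx, vdy = gradient(vis)
    gdx = torch.where(idx_.abs() > vdx.abs(), idx_, vdx)
    gdy = torch.where(idy_.abs() > vdy.abs(), idy_, vdy)
    l_struct = F.l1_loss(fdx, gdx) + F.l1_loss(fdy, gdy)
    return l_int + alpha * l_struct
```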
Wide field-of-view (FOV) and high-resolution (HR) imaging systems have become indispensable information acquisition equipment in many applications, such as video surveillance, target detection, and remote sensing. However, due to the constraints of spatial sampling and detector processing level, the ability of remote sensing to obtain high spatial resolution is limited, especially in wide-FOV imaging. To solve these problems, we propose a multi-scale feature extraction (MSFE) network to realize super-resolution imaging in a low-light-level (LLL) environment. To perform data fusion and information extraction on low-resolution (LR) images, the network extracts high-frequency detail information across different dimensions by combining a channel attention module with skip connections. In this way, redundant low-frequency signals can pass directly to the network's tail end, allowing computation to focus on the more important high-frequency components. Qualitative and quantitative analyses show that the proposed method outperforms other state-of-the-art methods, demonstrating the superiority of the designed framework and the effectiveness of the presented modules.
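One way to picture the described building blocks is a residual block with parallel multi-scale convolutions and squeeze-and-excitation style channel attention. The sketch below assumes this common formulation; kernel sizes and channel counts are placeholders rather than the authors' architecture.

```python
# A minimal PyTorch sketch of one multi-scale feature extraction block with
# channel attention and a skip connection, in the spirit of the described
# MSFE network; the exact wiring is an assumption.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x)  # reweight channels by learned importance

class MSFEBlock(nn.Module):
    """Parallel 3x3/5x5/7x7 branches capture features at several scales;
    the skip connection lets low-frequency content bypass the block so the
    learned branches can focus on high-frequency detail."""
    def __init__(self, channels=64):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, k, padding=k // 2)
            for k in (3, 5, 7)
        )
        self.fuse = nn.Conv2d(3 * channels, channels, 1)
        self.ca = ChannelAttention(channels)

    def forward(self, x):
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return x + self.ca(self.fuse(feats))  # residual skip connection

x = torch.randn(1, 64, 32, 32)
print(MSFEBlock(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```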
In recent decades, with the rapid development of image sensor technology, image acquisition has gradually evolved from a single-sensor mode to a multi-sensor mode. The information obtained by a single sensor is limited, and multi-source data fusion can provide a more accurate understanding of the observed scene. This paper proposes a deep learning network for infrared-visible color night-vision image fusion. The network adopts a fusion-encoding-decoding structure trained end to end to produce color night-vision fusion images that better match human visual perception. The fusion structure contains a multi-scale feature extraction block and a channel attention block, which extract features from low-resolution infrared images and visible images, respectively. The multi-scale feature block expands the receptive field and avoids losing too much feature information, while the channel attention block improves the network's sensitivity to channel-wise characteristics. A number of convolutional and deconvolutional layers encode and decode the feature maps to recover the color fusion image. Experimental verification shows that our method achieves a good fusion effect with rich colors that conform to human visual perception.
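A minimal encode-decode sketch of the described pipeline follows, assuming early fusion of the two inputs by channel concatenation, strided convolutions for encoding, and deconvolutions to recover a 3-channel color image; depths and channel counts are illustrative, not the paper's configuration.

```python
# A hedged sketch of the fusion-encoding-decoding idea: IR and visible
# images are concatenated, encoded by strided convolutions, and decoded
# by deconvolutions back to a 3-channel color fusion image.
import torch
import torch.nn as nn

class FusionEncoderDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(2, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, ir, vis):
        # Early fusion: stack the single-channel IR and visible images.
        z = self.encode(torch.cat([ir, vis], dim=1))
        return self.decode(z)  # RGB color fusion image in [0, 1]

ir = torch.randn(1, 1, 128, 128)
vis = torch.randn(1, 1, 128, 128)
print(FusionEncoderDecoder()(ir, vis).shape)  # torch.Size([1, 3, 128, 128])
```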
A terahertz imaging method based on aperture coding is proposed to solve the problems of large pixel size and low resolution in terahertz imaging detectors. The forward model of a terahertz coded incoherent imaging system is established, and the optimal coding strategy is discussed. By adding coded modulation at the aperture, the image recorded by the detector undergoes pixel-level light-intensity conversion. Multi-frame aperture-coding simulation experiments show that the pixel-aliasing problem caused by the detector pixel size is effectively resolved and the imaging resolution is improved to approximately the diffraction limit of the lens, providing guidance for future experiments.
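The forward model can be sketched numerically: each coded frame multiplies the high-resolution scene by a binary aperture pattern, and each large detector pixel integrates a block of the coded image; with enough frames, the per-pixel linear systems become invertible and sub-pixel content can be recovered. The block size, code patterns, and least-squares recovery below are assumptions for illustration, not the paper's experimental parameters.

```python
# A minimal numpy sketch of the multi-frame aperture-coded forward model:
# each frame applies a binary code to the scene, and the large detector
# pixels sum (downsample) B x B blocks of the coded high-resolution image.
import numpy as np

rng = np.random.default_rng(0)
N, B = 32, 4                 # HR grid N x N; one detector pixel covers B x B
K = B * B                    # number of coded frames (enough for recovery)
x = rng.random((N, N))       # unknown high-resolution scene

def downsample(img, b):
    """Each detector pixel integrates a b x b block of the coded image."""
    return img.reshape(N // b, b, N // b, b).sum(axis=(1, 3))

codes = rng.integers(0, 2, size=(K, N, N)).astype(float)  # binary apertures
frames = np.stack([downsample(c * x, B) for c in codes])  # K LR measurements

# Per detector pixel, solve the K x B^2 linear system for its B x B block.
x_hat = np.zeros_like(x)
for i in range(N // B):
    for j in range(N // B):
        A = codes[:, i*B:(i+1)*B, j*B:(j+1)*B].reshape(K, -1)
        y = frames[:, i, j]
        block, *_ = np.linalg.lstsq(A, y, rcond=None)
        x_hat[i*B:(i+1)*B, j*B:(j+1)*B] = block.reshape(B, B)

print("max reconstruction error:", np.abs(x_hat - x).max())
```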
KEYWORDS: Reconstruction algorithms, Integral imaging, Detection and tracking algorithms, Electron multiplying charge coupled devices, Charge-coupled devices, Image quality, Image processing, 3D acquisition, 3D image processing, 3D displays
The electron-multiplying charge-coupled device (EMCCD) exhibits single-photon response in low-light environments. A reconstruction algorithm for low-light integral imaging with an EMCCD is proposed to recover the details of a target under low-light conditions. First, the algorithm acquires a series of element images with an EMCCD integral imaging system. Second, since the grayscale values of different element images of the same target follow a Poisson distribution, the algorithm introduces a local self-adaptive factor and derives the posterior probability distribution of the target's grayscale value. Finally, it computes new element images from the posterior distribution and reconstructs the target image from the updated element images. Experimental results show that the peak signal-to-noise ratio of the image reconstructed by the proposed method is 4.3 dB higher than that of conventional Bayesian estimation. Considering both reconstructed image quality and computational complexity, the overall quality is best when a 7 × 7 neighborhood is used to compute the local self-adaptive factor. Experimental results show that the proposed algorithm greatly improves the quality of the reconstructed target image in low-light environments.
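The Poisson/Bayesian update can be sketched with a conjugate Gamma prior whose parameters are moment-matched to statistics from a 7 × 7 neighborhood, standing in for the local self-adaptive factor. This is an assumed formulation for illustration, not the authors' exact derivation.

```python
# A hedged numpy sketch: grayscale values of corresponding pixels across
# element images are treated as Poisson samples, a conjugate Gamma prior is
# fit from a local 7x7 neighborhood, and the posterior mean gives the
# updated element-image value.
import numpy as np
from scipy.ndimage import uniform_filter

def posterior_mean(stack, win=7):
    """stack: (n_views, H, W) element images of the same target."""
    n = stack.shape[0]
    s = stack.sum(axis=0)                                # sufficient statistic
    m = uniform_filter(stack.mean(axis=0), win)          # local mean
    v = uniform_filter(stack.var(axis=0), win) + 1e-6    # local variance
    # Moment-match a Gamma(alpha, beta) prior to the local statistics
    # (the assumed "local self-adaptive factor"):
    beta = m / v                                         # rate
    alpha = m * beta                                     # shape
    # Conjugate Gamma-Poisson posterior mean:
    return (alpha + s) / (beta + n)

rng = np.random.default_rng(1)
truth = rng.uniform(1, 10, size=(64, 64))
stack = rng.poisson(truth, size=(9, 64, 64)).astype(float)
est = posterior_mean(stack)
print("RMSE:", np.sqrt(((est - truth) ** 2).mean()))
```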