Image dehazing is an active topic in image processing and computer vision. It aims to recover the details and texture of the original scene from hazy images and thereby produce clear, haze-free images. Most existing methods are suited to light-fog scenarios; as fog density increases, their reconstruction quality drops significantly, with loss of detail and distortion. In addition, most existing algorithms require large hazy-image datasets for training, and training takes a long time, which limits their practicality. To address these issues, this paper proposes a small-sample image dehazing model based on a multi-attention mechanism and multi-frequency branch fusion (MFBF-Net). The model effectively extracts high-frequency and low-frequency detail in the image and reconstructs the real scene as faithfully as possible. Experimental results show that the proposed model achieves good dehazing performance on small-sample datasets and performs well across scenes with different fog densities.
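Dehazing methods of this kind are commonly grounded in the standard atmospheric scattering model, I = J·t + A·(1 − t), where I is the observed hazy image, J the scene radiance, t the transmission map, and A the global airlight. The following is a minimal illustrative sketch of inverting that model given estimated t and A (the estimation itself would come from a network such as the one described); it is not the paper's code.

```python
import numpy as np

def dehaze(hazy, transmission, airlight, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t)
    to recover the scene radiance J.

    hazy:         H x W x 3 observed image
    transmission: H x W estimated transmission map t
    airlight:     length-3 estimated global airlight A
    t_min:        floor on t to avoid amplifying noise in dense fog
    """
    t = np.clip(transmission, t_min, 1.0)[..., None]  # broadcast over channels
    return (hazy - airlight) / t + airlight
```

With a perfect t and A this inversion recovers J exactly; in practice the clipping floor `t_min` limits noise amplification where the fog is densest.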
Non-line-of-sight (NLOS) imaging through fog has been extensively researched in optics and computer vision. However, because the strong backscattering and diffuse reflection produced by dense fog disrupt the temporal-spatial correlations of photons returning from the target object, the reconstruction quality of most existing methods degrades significantly under dense-fog conditions. In this study, we model the optical imaging process in a foggy environment and propose a hybrid intelligent enhancement perception (HIEP) system, based on Time-of-Flight (ToF) methods and a physics-driven Swin transformer (ToFormer), to eliminate scattering effects and reconstruct targets under heterogeneous fog of varying optical thickness. Furthermore, we assembled a prototype of the HIEP system and established the Active Non-Line-of-Sight Imaging Through Dense Fog (NLOSTDF) dataset to train the reconstruction network. The experimental results demonstrate that even in short-distance dense-fog scenarios with an optical thickness of up to 2.5 and imaging distances of less than 6 meters, our approach achieves clear imaging of the target scene, surpassing existing optical and computer vision methods.
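To put the quoted optical thickness in perspective: by the standard Beer-Lambert law, the fraction of photons that traverse the fog unscattered (the ballistic fraction) decays exponentially with optical thickness, so at 2.5 only about 8% of the signal arrives unscattered. The sketch below illustrates this standard relation; it is background physics, not code from the paper.

```python
import math

def ballistic_transmittance(optical_thickness):
    """Beer-Lambert law: fraction of photons that pass through a
    scattering medium of the given optical thickness without being
    scattered (the 'ballistic' component used for direct imaging)."""
    return math.exp(-optical_thickness)
```

At the paper's reported optical thickness of 2.5, `ballistic_transmittance(2.5)` is roughly 0.082, which is why scattering-robust reconstruction is needed rather than relying on ballistic photons alone.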
Image restoration is a popular and challenging task, typically cast as a classical inverse problem. The Condat-Vũ primal-dual algorithm, built on proximal operators, is one of the successful optimization methods for it, and it can be reformulated as a primal-dual proximal network in which one iteration of the original algorithm corresponds to one layer of the network. The drawback of the primal-dual network is that the blur kernels must be given as prior information, yet they are usually very hard to know in real situations. In this work, we propose a deep encoder-decoder primal-dual proximal network, named ED-PDPNet. In each layer, the blur kernels and the projections between the primal and dual variables are designed as encoder-decoder modules, so the network can be trained end-to-end and all parameters of the primal-dual algorithm are learned. The proposed method is applied to the MNIST and BSD68 datasets for image restoration. Preliminary results show that, by combining simple encoder-decoder modules, the proposed method achieves very promising and competitive performance compared with state-of-the-art methods. In addition, the proposed network is lightweight, with fewer learnable parameters than a recent popular transformer-based method.
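The unrolling idea can be seen in a plain classical instance of the Condat-Vũ iteration, here applied to 1-D total-variation denoising, min_x ½‖x − b‖² + λ‖Dx‖₁ with D the forward-difference operator. Each loop pass below corresponds to one "layer" of the unrolled network; in ED-PDPNet the operator D and the primal/dual projections are replaced by learned encoder-decoder modules. This is an illustrative sketch of the underlying algorithm, not the paper's implementation.

```python
import numpy as np

def tv_denoise_condat_vu(b, lam=0.5, tau=0.25, sigma=0.25, n_iters=200):
    """Condat-Vu primal-dual iterations for 1-D TV denoising:
        min_x 0.5*||x - b||^2 + lam*||D x||_1.
    Step sizes satisfy the convergence condition
        1/tau - sigma*||D||^2 >= beta/2  (beta = 1, ||D||^2 <= 4)."""
    D  = lambda x: np.diff(x)                       # forward differences
    Dt = lambda y: np.concatenate(([-y[0]], -np.diff(y), [y[-1]]))  # adjoint D^T
    x = b.copy()
    y = np.zeros(len(b) - 1)                        # dual variable
    for _ in range(n_iters):
        # primal step: gradient of the data term plus D^T y
        x_new = x - tau * ((x - b) + Dt(y))
        # dual step: prox of the conjugate of lam*||.||_1 is a clip to [-lam, lam]
        y = np.clip(y + sigma * D(2 * x_new - x), -lam, lam)
        x = x_new
    return x
```

Unrolling fixes `n_iters` as the network depth and makes quantities like `D`, `tau`, and `sigma` learnable per layer, which is exactly the degree of freedom the encoder-decoder modules exploit.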