Most existing binocular stereo matching algorithms must trade accuracy against speed and cannot achieve both simultaneously. One reason lies in the complexity and variability of the scenes that stereo matching must handle, where disparities in richly textured, weakly textured, and occluded areas are often difficult to infer correctly. This paper therefore proposes the Learnable Upsampling Bilateral Grid Refinement for Stereo Matching Network (LUGNet). Through learnable bilateral-grid upsampling guided by the left image, LUGNet computes offsets for cost-volume upsampling while letting the network automatically learn interpolation weights that adapt to the characteristics of different datasets. LUGNet achieves error rates comparable to high-precision networks with only 2.6M parameters and an inference time of 58 ms.
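The abstract does not give implementation details, but the core idea of guide-image-driven upsampling can be illustrated with a minimal NumPy sketch. Here the interpolation weights are hand-crafted bilateral weights (spatial proximity times guide-intensity similarity); in LUGNet both the weights and the sampling offsets would instead be predicted by the network, so the function name, signature, and weighting scheme below are illustrative assumptions only.

```python
import numpy as np

def guided_upsample(low_res, guide, sigma=0.1):
    """Upsample a low-resolution disparity/cost map to the guide image's
    resolution, weighting each low-res neighbor by spatial proximity and
    by similarity to the guide pixel (a fixed bilateral weighting; in a
    learnable scheme these weights and offsets come from the network)."""
    H, W = guide.shape
    h, w = low_res.shape
    # guide sampled onto the low-res grid for range (similarity) terms
    guide_lr = guide[::H // h, ::W // w][:h, :w]
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            # continuous position of this high-res pixel in low-res coords
            fy, fx = y * (h - 1) / (H - 1), x * (w - 1) / (W - 1)
            y0, x0 = int(fy), int(fx)
            ys = (y0, min(y0 + 1, h - 1))
            xs = (x0, min(x0 + 1, w - 1))
            weights, vals = [], []
            for yy in ys:
                for xx in xs:
                    ws = np.exp(-((fy - yy) ** 2 + (fx - xx) ** 2))
                    wr = np.exp(-(guide[y, x] - guide_lr[yy, xx]) ** 2
                                / (2 * sigma ** 2))
                    weights.append(ws * wr)
                    vals.append(low_res[yy, xx])
            weights = np.array(weights)
            out[y, x] = np.dot(weights / weights.sum(), vals)
    return out
```

Because the range term suppresses neighbors whose guide intensity differs from the target pixel, upsampled disparities stay sharp at image edges instead of bleeding across them, which is the motivation for guiding the grid with the left image.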
Diffusion models have been widely applied across artificial intelligence thanks to their flexible and diverse generative performance, yet there is little research on applying them to depth map restoration. This is primarily because diverse generation alone does not meet the requirements of depth map completion, which demands completion that is plausible for the current scene. To adapt diffusion models to depth map completion, this paper proposes the Multi Condition Diffusion Model (MCDM), which injects conditional information to constrain the model toward plausible completion of ill-posed regions. Its MultiConditionLN module effectively incorporates multiple conditions into the depth map completion task, using the completion-region mask and the input image as conditions that constrain the generation process, so the model restores the missing regions consistently with the scene of the input image. The proposed model achieves promising results on depth map datasets.
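The abstract names MultiConditionLN but gives no equations, so the following is only a generic conditional-LayerNorm sketch in the AdaLN style: features are normalized, then scaled and shifted by parameters predicted from the conditioning signal (here, a single linear projection of a condition vector standing in for the encoded mask and image). All names and signatures are assumptions, not the paper's actual design.

```python
import numpy as np

def multi_condition_ln(x, cond, W, eps=1e-5):
    """Conditional layer normalization (AdaLN-style sketch).

    x:    (N, C) feature vectors to normalize
    cond: (N, D) condition vectors (e.g. encoded completion mask + image)
    W:    (D, 2C) learnable projection mapping conditions to (gamma, beta)
    """
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    x_hat = (x - mu) / np.sqrt(var + eps)
    # project conditions to a per-channel scale and shift
    gamma, beta = np.split(cond @ W, 2, axis=-1)
    # (1 + gamma) keeps the modulation near-identity when cond is zero
    return (1.0 + gamma) * x_hat + beta
```

Injecting the mask and image this way lets every normalization layer, rather than only the input, steer generation toward the conditioning scene, which is one common route to the "rational completion" constraint the abstract describes.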