Hybrid cameras, which surround a high-resolution central camera with multiple low-resolution cameras, avoid the trade-off between spatial and angular resolution inherent in light-field cameras. However, reconstructing a light field from hybrid camera data requires enhancing spatial and angular resolution simultaneously; further challenges include matching images from cameras with different resolutions and building a trainable real-capture dataset for hybrid cameras. This study proposes a parametric representation of the neural light field with decoupled light-field data, based on the spatial-angular consistency of the light field. The proposed method decomposes the light field into angular and color information, represents these two components parametrically with coordinate-based neural networks, and builds a neural disparity field and a neural central view field. Using the disparity propagation equation, the two modules are connected in series, forming a differentiable network architecture. To separate the angular and color information of the light field, discrete camera images are matched against the continuous images expressed by the neural central view field using a two-step training strategy: the neural central view field is trained first, and the neural disparity field is then trained by matching the discrete camera images to the continuous images that the central view field expresses. Experimental results demonstrate that the proposed method effectively exploits the spatial and angular information in hybrid camera data, yielding high-quality light-field reconstruction.
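The disparity propagation idea, synthesizing off-center views by shifting the central view along a per-pixel disparity field, can be sketched as follows. This is a minimal nearest-neighbour forward warp in NumPy; the paper's actual formulation is differentiable and operates on continuous neural fields, and all names here are illustrative assumptions:

```python
import numpy as np

def warp_central_view(central, disparity, u, v):
    """Forward-warp the central view to the sub-aperture view at angular
    offset (u, v): each pixel moves along its disparity vector, i.e. a
    simplified, discrete form of a disparity propagation rule
    L(u, v, x + u*d, y + v*d) = C(x, y)."""
    h, w = central.shape[:2]
    out = np.zeros_like(central)
    ys, xs = np.mgrid[0:h, 0:w]
    # Target coordinates, rounded and clipped to the image bounds.
    xt = np.clip(np.round(xs + u * disparity).astype(int), 0, w - 1)
    yt = np.clip(np.round(ys + v * disparity).astype(int), 0, h - 1)
    out[yt, xt] = central[ys, xs]  # on collisions, the last write wins
    return out
```

With a constant disparity of one pixel and angular offset (u, v) = (1, 0), the warp simply shifts the view one pixel along x, which is a quick sanity check for the geometry.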
Implicit Neural Representation (INR) maps coordinates to signal values through multilayer perceptrons (MLPs). However, coordinate-based MLPs struggle to represent high-frequency information because of spectral bias. A common remedy is to feed the MLPs Fourier features with fixed encoding frequencies instead of raw spatial coordinates. Natural scenes, however, exhibit distinct frequency spectra across local areas: smooth regions are dominated by low frequencies and edges by high frequencies, so fixed-frequency Fourier features introduce redundant input information and slow the MLPs down. This paper therefore proposes an Implicit Neural Representation with Adaptive Learnable Filters framework (NRALF), in which a learnable parameter controls a filter that realizes low-pass, band-pass, high-pass, all-pass, and no-pass responses. The filter attenuates Fourier features at selected frequencies, making the filtered encoding space sparser, which enhances the MLPs' ability to express natural scenes and significantly accelerates their convergence. Experimental results show that the proposed method achieves higher accuracy and faster convergence in scene representation.
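As a rough illustration of such a filter, one can gate fixed-frequency Fourier features with a parameterized mask over frequency. Below, a Gaussian window whose `center` and `width` stand in for the learnable parameters: a small center gives a roughly low-pass response, a larger center a band-pass or high-pass one, a very large width an all-pass, and width near zero suppresses everything. This is a hedged sketch under those assumptions, not the NRALF architecture itself:

```python
import numpy as np

def filtered_fourier_features(x, freqs, center, width):
    """Encode 1-D coordinates x as sin/cos Fourier features at the given
    frequencies, then attenuate each frequency with a Gaussian band-pass
    mask; `center` and `width` play the role of learnable filter
    parameters (names are illustrative, not from the paper)."""
    phases = 2.0 * np.pi * np.outer(x, freqs)                 # (N, F)
    feats = np.concatenate([np.sin(phases), np.cos(phases)], axis=-1)
    mask = np.exp(-((freqs - center) ** 2) / (2.0 * width ** 2))
    return feats * np.tile(mask, 2)  # same mask for sin and cos blocks
```

Features far from the pass band are driven toward zero, so the effective encoding that reaches the MLP is sparse in frequency, which is the property the abstract attributes to the filtered coding space.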
Accurate calibration of imaging-system parameters is the basis of focal-stack computational imaging. In this paper, we set the focal plane and the corresponding imaging parameters as the preset position of the imaging system, and define a focus measure from the Sum-Modified Laplacian (SML) and feature-point density to describe the degree of focus of a resolution test chart. The preset positions are obtained by maximizing this focus measure and are then used to reconstruct the scene depth map and the all-in-focus image. Experimental results show that the proposed preset-position calibration method achieves high-precision reconstruction of the depth map and the all-in-focus image of a 3D scene.
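The Sum-Modified Laplacian itself has a standard form: the sum over the image of absolute second differences along x and y, which grows with image sharpness. A minimal NumPy version is sketched below; the paper additionally combines SML with feature-point density, which is omitted here:

```python
import numpy as np

def sum_modified_laplacian(img, step=1):
    """Sum-Modified Laplacian focus measure: accumulate
    |2*I(x,y) - I(x-s,y) - I(x+s,y)| + |2*I(x,y) - I(x,y-s) - I(x,y+s)|
    over the image.  Sharper (better-focused) images yield larger values."""
    i = img.astype(float)
    ml = np.zeros_like(i)
    s = step
    # Second differences along y (rows) and x (columns), interior only.
    ml[s:-s, :] += np.abs(2 * i[s:-s, :] - i[:-2 * s, :] - i[2 * s:, :])
    ml[:, s:-s] += np.abs(2 * i[:, s:-s] - i[:, :-2 * s] - i[:, 2 * s:])
    return ml.sum()
```

A step edge scores high while a linear intensity ramp scores (near) zero, which is why maximizing this measure over candidate focal planes picks out the one where the test chart is in focus.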