KEYWORDS: Visualization, Spatial resolution, Information visualization, Medical image reconstruction, Photons, Reconstruction algorithms, Medical imaging, Monte Carlo methods
Image reconstruction in nuclear medicine produces valuable volumetric data of vital markers in living bodies. Visual scene reconstruction methods, which aim to recreate a scene from camera images, are also continuously improved by recent advancements in light fields and camera systems. The parallels between the two fields are increasingly noticeable, as we now have the computing power and methods to account for transparent materials and to ray trace scattering and other light effects in visual scene reconstruction. In this paper, we aim to highlight and analyze the similarities and potential synergies of the two approaches.
Multi-camera networks are becoming ubiquitous in a variety of applications related to medical imaging, education, entertainment, autonomous vehicles, civil security, defense, and more. The foremost task in deploying a multi-camera network is camera calibration, which usually involves introducing an object with known geometry into the scene. However, most of the aforementioned applications necessitate non-intrusive automatic camera calibration. To this end, a class of camera auto-calibration methods imposes constraints on the camera network rather than on the scene. In particular, the inclusion of stereo cameras in a multi-camera network is known to improve calibration accuracy and preserve scale. Yet most of the methods relying on stereo cameras use custom-made stereo pairs, and such stereo pairs are inevitably imperfect: while the baseline distance can be fixed, the optical axes of the two cameras cannot be guaranteed to be parallel. In this paper, we propose a characterization of the imperfections in such stereo pairs, under the assumption that they lie within a small, reasonable deviation range from the ideal values. Once the imperfections are quantified, we use an auto-calibration method to calibrate a set of stereo cameras. We compare these results with those obtained under the parallel-optical-axes assumption. The paper also reports results obtained using synthetic visual data.
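The deviation from parallel optical axes described above can be quantified as the angle between the second camera's actual optical axis and the ideal parallel axis. A minimal sketch, assuming the misalignment is expressed as small yaw and pitch angles (the specific deviation values and the function name are illustrative, not taken from the paper):

```python
import math

def axis_deviation_deg(yaw_deg, pitch_deg):
    """Angle (degrees) between a camera's actual optical axis and the
    ideal parallel axis, given small yaw/pitch misalignments.

    The ideal optical axis is taken as +z; we rotate it by the yaw
    (about y) and pitch (about x) misalignment angles and measure the
    resulting angle from +z.
    """
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    # z-component of the rotated optical axis (unit vector).
    z = math.cos(yaw) * math.cos(pitch)
    # Clamp for numerical safety before taking the arccosine.
    return math.degrees(math.acos(max(-1.0, min(1.0, z))))

# Hypothetical example: 1 degree of yaw and 0.5 degrees of pitch
# misalignment yield roughly 1.12 degrees of total axis deviation.
print(axis_deviation_deg(1.0, 0.5))
```

For small angles the combined deviation behaves approximately like the Euclidean norm of the individual misalignments, which is consistent with the paper's assumption that the imperfections stay within a small deviation range.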
The recent advances in light-field acquisition and display systems bring closer the day when they become commercially available and accessible to wide audiences for numerous use cases. Their usefulness and potential benefits have already been disseminated in the field, and they have started emerging in both industrial and entertainment applications. The long-term goal of the scientific community and future manufacturers is to research and develop fully immersive, yet seamless and efficient systems that can achieve the ultimate visual experience. However, certain paths leading to such goals are blocked by technological and physical limitations, as well as significant challenges that have to be coped with. Although some issues that arise regarding the development of capture and display systems may actually be nearly impossible to overcome, the potential of light-field applications is indeed immense, and thus worth the vast scientific effort. In this paper, we systematically analyze and present the current and future relevant limitations and challenges regarding the research and development of light-field systems. As current limitations are primarily application-specific, both challenges and potentials are approached from the angle of end-user applications. The paper separately highlights the use case scenarios for industry and entertainment, as well as for everyday commercial usage. Currently existing light-field systems are assessed and introduced from a technical perspective as well as with regard to usability, and potential future systems are described based on state-of-the-art technologies and research focuses. Aspects of practical usage, such as scalability and price, are thoroughly detailed for both light-field capture and visualization.
Real-time video transmission services are unquestionably dominating the flow of data over the Internet, and their percentage of the global IP packet traffic is still continuously increasing. As novel visualization technologies emerge, they tend to demand higher bandwidth: they offer more visual information, but to do so, more data must be transmitted. The research and development of the past decades in optical engineering enabled light-field displays to surface and appear in the industry and on the market, and light-field video services are already on the horizon. However, the data volumes of high-quality light-field contents can be immense, creating storage, coding, and transmission challenges. If we consider the representation of light-field content as a series of 2D views, then for a single video frame, angular resolution determines the number of views within the field of view, and spatial resolution defines the 2D size of those views. In this paper, we present the results of an experiment carried out to investigate the perceptual differences between different angular and spatial resolution parametrizations of a light-field video service. The study highlights how the two resolution values affect each other regarding perceived quality, and how the combined effects are detected, perceived and experienced by human observers. By achieving an understanding of the related visual phenomena, especially degradations that are unique to light-field visualization, the design and development of resource-efficient light-field video services and applications become more straightforward.
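The view-based representation described above makes the raw data volume of a single light-field frame easy to estimate: it scales with the product of the angular resolution (number of views) and the spatial resolution (pixels per view). A minimal sketch, where the specific resolution values and bytes-per-pixel figure are illustrative assumptions rather than parameters from the experiment:

```python
def lf_frame_bytes(num_views, view_width, view_height, bytes_per_pixel=3):
    """Raw (uncompressed) size in bytes of one light-field video frame,
    represented as num_views 2D views of view_width x view_height pixels.
    bytes_per_pixel=3 assumes 24-bit RGB."""
    return num_views * view_width * view_height * bytes_per_pixel

# Hypothetical example: 100 views within the field of view,
# each view at 1280x720 spatial resolution, 24-bit RGB.
size_bytes = lf_frame_bytes(100, 1280, 720)
print(size_bytes / 1e9, "GB per uncompressed frame")
```

Even this modest hypothetical configuration yields roughly a quarter of a gigabyte per uncompressed frame, which illustrates why the storage, coding, and transmission challenges mentioned above arise, and why the trade-off between angular and spatial resolution matters for resource-efficient services.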
Light-field visualization allows users to freely choose a preferred location for observation within the display's valid field of view. As such 3D visualization technology offers continuous motion parallax, the user's location determines the perceived orientation of the visualized content, if we consider static objects and scenes. In the case of interactive light-field visualization, the arbitrary rotation of content enables efficient orientation changes without the need for actual user movement. However, the preference of content orientation is a subjective matter, yet it can be objectively managed and assessed as well. In this paper, we present a series of subjective tests addressing static content orientation preference, carried out on a real light-field display. State-of-the-art objective methodologies were used to evaluate the experimental setup and the content. We used the subjective results to develop our own objective metric for canonical orientation selection.