Proceedings Article | 15 March 2019
KEYWORDS: Head, 3D displays, Facial recognition systems, Computer aided design, 3D modeling, 3D image processing, Zoom lenses, 3D acquisition, MATLAB, Calibration, CAD systems, Image processing, Education and training, Graphics processing units
Stereoscopic vision modules have seen limited success in both the engineering and consumer worlds, due to the additional hardware they require (image acquisition devices, virtual reality headsets, 3D glasses). In recent years, the gaming and education sectors in particular have benefited from such specialized headgear, which provides virtual or augmented reality. However, many other industrial and biomedical applications, such as computer-aided design (CAD) or the display of tomographic data, have so far not fully exploited the increased 3D rendering capabilities of present-day computer hardware. We present an approach that uses standard desktop PC hardware (monitor and webcam) to display user-position-aware projections of 3D data without additional headgear. The user's position is detected from webcam images, and the rendered 3D data (i.e. the view) is adjusted to match that position, resulting in a quasi virtual reality rendering, albeit without the stereoscopic depth effect of proper 3D headgear. The approach has many applications, from medical imaging to construction and CAD, architecture, exhibitions, arts, and performances. Depending on the user's location, i.e. the detected head position, the data is rendered differently to account for the user's view direction and view angle (zoom). As the user moves his or her head in front of the monitor, different features of the rendered object become visible. As the user moves closer to the screen, the view angle of the rendered data is decreased, resulting in a zoomed-in version of the rendered object.
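The abstract describes mapping a detected head position to a view direction and a view angle (zoom). A minimal Python sketch of one such mapping is given below; the function name, parameters, and clamping constants are illustrative assumptions, not details from the paper, and the head position itself would come from a face detector run on webcam frames (e.g. an OpenCV Haar cascade):

```python
def head_to_view(head_x, head_y, face_w, frame_w, frame_h,
                 base_fov_deg=45.0, ref_face_frac=0.15, max_angle_deg=30.0):
    """Map a detected head position to rendering parameters (hypothetical).

    head_x, head_y : pixel coordinates of the face centre in the webcam frame
    face_w         : detected face width in pixels (proxy for distance)
    frame_w/h      : webcam frame dimensions in pixels

    Returns (yaw_deg, pitch_deg, fov_deg):
      - yaw/pitch rotate the rendered view toward the user's head position,
        so moving the head reveals different features of the object;
      - fov shrinks as the face appears larger, i.e. moving closer to the
        screen yields a zoomed-in rendering, as described in the abstract.
    """
    # Normalize the head centre to [-1, 1] relative to the frame centre.
    nx = (head_x - frame_w / 2) / (frame_w / 2)
    ny = (head_y - frame_h / 2) / (frame_h / 2)

    # View direction follows the head (sign convention is arbitrary here).
    yaw_deg = -nx * max_angle_deg
    pitch_deg = -ny * max_angle_deg

    # Apparent face width as a fraction of the frame width stands in for
    # user distance; narrow the field of view (zoom in) when the user is
    # closer than the reference distance, capped at a factor of 2.
    face_frac = face_w / frame_w
    fov_deg = base_fov_deg * min(ref_face_frac / max(face_frac, 1e-6), 2.0)
    return yaw_deg, pitch_deg, fov_deg
```

The returned yaw/pitch and field of view would then feed directly into the renderer's camera setup (e.g. a perspective projection matrix), so the on-screen projection updates continuously as the user moves.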