Paper
10 February 2009
Crossmodal information for visual and haptic discrimination
Flip Phillips, Eric J. L. Egan
Proceedings Volume 7240, Human Vision and Electronic Imaging XIV; 72400H (2009) https://doi.org/10.1117/12.817167
Event: IS&T/SPIE Electronic Imaging, 2009, San Jose, California, United States
Abstract
Both our visual and haptic systems contribute to the perception of the three-dimensional world, especially the proximal perception of objects. The interaction of these systems has been the subject of some debate over the years, ranging from the philosophically posed Molyneux problem to more pragmatic examinations of their psychophysical relationship. To better understand the nature of this interaction, we performed a series of experiments characterizing the detection, discrimination, and production of 3D shape. A stimulus set of 25 complex, natural-appearing, noisy 3D target objects was statistically specified in the Fourier domain and manufactured using a 3D printer. A series of paired-comparison experiments examined subjects' unimodal (visual-visual and haptic-haptic) and crossmodal (visual-haptic) perceptual abilities. Additionally, subjects sculpted objects from unimodal or crossmodal source information. In all experiments, performance in the two unimodal conditions was similar, and unimodal presentation fared better than crossmodal. The spatial frequency of object features also affected performance differentially across the range used in these experiments. The sculpted objects were scanned in 3D and the resulting geometry was compared metrically and statistically to the original stimuli. Objects with higher spatial-frequency content were harder to sculpt from haptic input alone than from visual input alone; the opposite held for objects with low spatial-frequency content. The psychophysical discrimination and comparison experiments yielded similar findings: there is a marked performance difference between the visual and haptic systems, and these differences were systematically distributed along the range of feature detail. The existence of non-universal (i.e., modality-specific) representations would explain the poor crossmodal performance. Our current findings suggest that haptic and visual information is either integrated into a multimodal form, or that each is independent and a somewhat efficient translation between them is possible. Vision shows a distinct advantage with higher-frequency objects, but both modalities are effective when comparing objects that differ by a large amount.
© (2009) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Flip Phillips and Eric J. L. Egan "Crossmodal information for visual and haptic discrimination", Proc. SPIE 7240, Human Vision and Electronic Imaging XIV, 72400H (10 February 2009); https://doi.org/10.1117/12.817167
CITATIONS
Cited by 3 scholarly publications.
KEYWORDS
Visualization
Haptic technology
Information visualization
Spatial frequencies
Error analysis
Statistical analysis
3D acquisition