Paper
1 March 1990 Unifying Voice And Hand Indication Of Spatial Layout
Tomoichi Takahashi, Akira Hakata, Noriyuki Shima, Yukio Kobayashi
Proceedings Volume 1198, Sensor Fusion II: Human and Machine Strategies; (1990) https://doi.org/10.1117/12.969988
Event: 1989 Symposium on Visual Communications, Image Processing, and Intelligent Robotics Systems, 1989, Philadelphia, PA, United States
Abstract
A method of unifying voice and hand-pointing information to indicate an object on a map is proposed. Our approach is to represent the two different kinds of information in a unified form and merge them. Voice indications are transformed into a set of terms describing an object's attributes and the relationships between objects, with associated values expressing their ambiguity. A hand-pointing gesture is likewise transformed into a term giving priority to the objects pointed at. IMAGE can identify the object indicated by voice and hand pointing by selecting the object that best satisfies the combined terms and relationships.
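The fusion scheme the abstract describes can be sketched as follows. This is an illustrative assumption, not the authors' actual formulation: each candidate object gets a score from the voice-derived attribute terms (with ambiguity) and a priority score from the pointing gesture, and the object maximizing the combined score is selected. All names, scores, and the multiplicative combination rule are hypothetical.

```python
def fuse(voice_scores, pointing_priority):
    """Combine per-object voice-term scores with pointing priority.

    voice_scores: dict object -> score in [0, 1] from attribute/relationship
        terms (lower values reflect greater ambiguity of the voice indication)
    pointing_priority: dict object -> score in [0, 1] expressing how strongly
        the gesture singles out each object
    Returns the object that best satisfies both modalities combined.
    """
    candidates = set(voice_scores) | set(pointing_priority)
    combined = {
        obj: voice_scores.get(obj, 0.0) * pointing_priority.get(obj, 0.0)
        for obj in candidates
    }
    return max(combined, key=combined.get)

# Example: the voice indication ("the large building") matches A and B
# almost equally well, but the hand points closest to B.
voice = {"A": 0.9, "B": 0.8, "C": 0.2}   # attribute match (ambiguous)
point = {"A": 0.3, "B": 0.9, "C": 0.6}   # proximity to the pointing ray
print(fuse(voice, point))                # -> B
```

The example shows why fusion helps: the voice alone cannot distinguish A from B, but the pointing term resolves the ambiguity.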
© (1990) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Tomoichi Takahashi, Akira Hakata, Noriyuki Shima, and Yukio Kobayashi "Unifying Voice And Hand Indication Of Spatial Layout", Proc. SPIE 1198, Sensor Fusion II: Human and Machine Strategies, (1 March 1990); https://doi.org/10.1117/12.969988
CITATIONS
Cited by 2 scholarly publications.
KEYWORDS
Information visualization
Databases
Human-machine interfaces
Visualization
Sensor fusion
Image sensors
Speech recognition