This PDF file contains the front matter associated with SPIE Proceedings Volume 8289, including the Title Page, Copyright information, Table of Contents, and the Conference Committee listing.
Virtual reality has long been used for training simulations in fields from medicine to welding to vehicular operation, but
simulations involving more complex cognitive skills present new design challenges. Foreign language learning, for
example, is increasingly vital in the global economy, but computer-assisted education is still in its early stages.
Immersive virtual reality is a promising avenue for language learning as a way of dynamically creating believable scenes
for conversational training and role-play simulation. Visual immersion alone, however, only provides a starting point.
We suggest that the addition of social interactions and motivated engagement through narrative gameplay can lead to
truly effective language learning in virtual environments. In this paper, we describe the development of a novel
application for teaching Mandarin using CAVE-like VR, physical props, human actors and intelligent virtual agents, all
within a semester-long multiplayer mystery game. Students travel (virtually) to China on a class field trip, which soon becomes complicated by intrigue and mystery surrounding the lost manuscript of an early Chinese literary classic.
Virtual reality environments such as the Forbidden City and a Beijing teahouse provide the setting for learning language,
cultural traditions, and social customs, as well as the discovery of clues through conversation in Mandarin with
characters in the game.
This research is aimed at examining the added value of using Virtual Reality (VR) in a driving
simulator to prevent road accidents, specifically by improving drivers' skills when confronted
with extreme situations. In an experiment, subjects completed a driving scenario using two
platforms: a 3-D Virtual Reality display system using an HMD (Head-Mounted Display), and a conventional display system based on a standard computer monitor.
The results show that the average rate of errors (deviations from the driving path) in the VR environment is significantly lower than in the standard one. In addition, there was no speed-accuracy trade-off in completing the driving task; on the contrary, average speed was slightly higher in the VR simulation than in the standard environment. The lower deviation rate in the VR setting was therefore not achieved by driving more slowly. When asked about their experience of the training session, most subjects reported that, among other things, the VR session gave them a stronger sense of commitment to the task and to their performance. Some even stated that the VR session gave them a real sensation of driving.
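The abstract does not define the deviation metric in detail; purely as an illustration, a deviation rate of this kind could be computed from logged vehicle positions against the path centerline, along the lines of the sketch below (the threshold and log format are assumptions, not the study's protocol):

import numpy as np

def deviation_rate(positions, centerline, threshold=1.5):
    """Fraction of logged samples that lie farther than `threshold`
    metres from the nearest sampled point of the intended path.

    positions:  (N, 2) logged vehicle x/y positions
    centerline: (M, 2) points sampled along the intended driving path
    """
    # Distance from every logged position to every centerline sample;
    # keep only the nearest centerline point for each position.
    d = np.linalg.norm(positions[:, None, :] - centerline[None, :, :], axis=2)
    nearest = d.min(axis=1)
    return float((nearest > threshold).mean())

# Example: compare the two display conditions from separate drive logs.
# rate_vr  = deviation_rate(log_vr,  path)
# rate_std = deviation_rate(log_std, path)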
Motion parallax is a crucial visual cue, produced by translations of the observer, for the perception of depth and self-motion. Tracking the observer's viewpoint has therefore become essential in immersive virtual reality (VR) systems (cylindrical screens, CAVEs, head-mounted displays) used, for example, in the automotive industry (style reviews, architectural design, ergonomics studies) or in scientific studies of visual perception.
The perception of a stable and rigid world requires that this visual cue be coherent with other, extra-retinal (e.g. vestibular, kinesthetic) cues signaling ego-motion. Although world stability is never questioned in the real world, rendering a head-coupled viewpoint in VR can lead to the illusory perception of an unstable environment unless a non-unity scale factor is applied to the recorded head movements. Moreover, cylindrical screens are usually used with static observers because of the image distortions that arise when rendering images for viewpoints away from the sweet spot.
We developed a technique to compensate for these non-linear visual distortions in real time in an industrial VR setup based on a cylindrical screen projection system.
Additionally, to evaluate how much discrepancy between visual and extra-retinal cues is tolerated without perceptual distortion, a "motion parallax gain" between the velocity of the observer's head and that of the virtual camera was introduced into this system. The influence of this artificial gain was measured on the gait stability of free-standing participants. Results indicate that gains below unity significantly alter postural control. Conversely, the influence of higher gains remains limited, suggesting a certain tolerance of observers to these conditions. Parallax gain amplification is therefore proposed as a possible way to give users of immersive virtual reality systems a wider exploration of space.
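The abstract describes the motion parallax gain only as a scale factor between head velocity and virtual camera velocity; the following minimal sketch shows how such a gain might be applied per frame to tracked head displacements (the update scheme and variable names are assumptions, not the authors' implementation):

import numpy as np

def update_camera(cam_pos, head_pos, prev_head_pos, gain=1.0):
    """Move the virtual camera by the tracked head displacement scaled
    by a motion-parallax gain.

    gain == 1.0 reproduces natural parallax; gain < 1.0 attenuates it,
    while gain > 1.0 amplifies it, giving a wider virtual exploration
    of space for the same physical head movement.
    """
    head_delta = np.asarray(head_pos) - np.asarray(prev_head_pos)
    return np.asarray(cam_pos) + gain * head_delta

# Per-frame usage with a hypothetical tracker object:
# cam = update_camera(cam, tracker.head_position(), last_head, gain=1.5)
# last_head = tracker.head_position()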
Meta!Blast (http://www.metablast.org) is designed to address the challenges students often encounter in understanding
cell and metabolic biology. Developed by faculty and students in biology, biochemistry, computer science, game design,
pedagogy, art and story, Meta!Blast is being created using Maya (http://usa.autodesk.com/maya/) and the Unity 3D
(http://unity3d.com/) game engine, for Macs and PCs in classrooms; it has also been exhibited in an immersive
environment. Here, we describe the pipeline from protein structural data and holographic information to art to the three-dimensional (3D) environment to the game engine, by which we provide a publicly available interactive 3D cellular world that mimics a photosynthetic plant cell.
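The abstract names the pipeline stages only at a high level (protein structural data to art to the 3D environment to the game engine); as a rough, hedged illustration of the first step, atom coordinates could be pulled from a PDB structure file and written out as a point cloud for artists to use as a modeling reference, for example with Biopython (the file names and the OBJ hand-off are assumptions):

from Bio.PDB import PDBParser

# Parse a protein structure and dump atom positions as an OBJ point cloud
# that could be imported into Maya as a modeling reference.
parser = PDBParser(QUIET=True)
structure = parser.get_structure("protein", "protein.pdb")  # hypothetical input file

with open("protein_points.obj", "w") as out:
    for atom in structure.get_atoms():
        x, y, z = atom.coord
        out.write(f"v {x:.3f} {y:.3f} {z:.3f}\n")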
This paper describes the teaching of an immersive environments class in the spring of 2011. The class included both undergraduate and graduate students from art-related majors, whose digital backgrounds and interests were also diverse. These differences were channeled into different approaches throughout the semester. Class components included fundamentals of stereoscopic computer graphics to explore spatial depth, 3D modeling and skeleton animation to explore presence, exposure to formats such as a stereo projection wall and dome environments to compare field of view across devices, and finally, interaction and tracking to explore issues of embodiment. All these components were
supported by theoretical readings discussed in class. Guest artists presented their work in Virtual Reality, Dome
Environments and other immersive formats. Museum professionals also introduced students to space science
visualizations, which utilize immersive formats. Here I present the assignments and their outcomes, together with insights into how the creation of immersive environments can be learned through constraints that expose students to situations of embodied cognition.
Virtual Reality was a technological wonder in its early days, and it was widely held to be a domain where men were the
main practitioners. However, a survey done in 2007 of VR Artworks (Immersive Virtual Environments or VEs) showed
that women have actually created the majority of artistic immersive works. This argues against the popular idea that the
field has been totally dominated by men. While men have made great contributions in advancing the field, especially
technologically, it appears most artistic works emerge from a decidedly feminine approach. Such an approach seems well
suited to immersive environments as it incorporates aspects of inclusion, wholeness, and a blending of the body and the
spirit. Female attention to holistic concerns fits the gestalt approach needed to create in a fully functional yet open-ended
virtual world, which focuses not so much on producing a finished object (like a text or a sculpture) but rather on creating
a possibility for becoming, like bringing a child into the world. Immersive VEs are not objective works of art to be hung
on a wall and critiqued. They are vehicles for experience, vessels to live within for a piece of time.
We propose a novel markerless 3D facial motion capture system using only a single common camera. The system makes it simple and easy to transfer a user's facial expressions into a virtual world. It robustly tracks facial feature points under head movements and estimates 3D point locations with high accuracy. We designed novel approaches to the following: first, for precise 3D head motion tracking, we apply 3D constraints from a 3D face model to a conventional 2D feature-point tracking approach, the Active Appearance Model (AAM). Second, to handle a user's varied expressions, we designed generic 2D face models from around 5,000 images and from 3D shape data that include symmetric and asymmetric facial expressions. Lastly, for accurate facial expression cloning, we constructed a manifold space to transfer 2D low-dimensional feature points to 3D high-dimensional points. The manifold space is defined by eleven facial expression bases.
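The abstract does not spell out how the eleven-basis manifold space maps 2D feature points to 3D; one common way to realize such a mapping is to estimate blend weights over 2D expression bases by least squares and reuse them with the corresponding 3D bases. The sketch below illustrates that idea under this assumption; it is not the authors' exact formulation.

import numpy as np

def transfer_expression(points_2d, bases_2d, bases_3d):
    """Estimate expression weights from tracked 2D feature points and
    apply the same weights to corresponding 3D bases.

    points_2d: (2K,)    flattened AAM feature point coordinates
    bases_2d:  (2K, 11) 2D expression bases (one column per basis)
    bases_3d:  (3K, 11) corresponding 3D expression bases
    """
    # Least-squares weights that best reconstruct the observed 2D points.
    weights, *_ = np.linalg.lstsq(bases_2d, points_2d, rcond=None)
    # Reuse the weights in the 3D basis to obtain 3D point locations.
    return bases_3d @ weights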
We investigate the suitability of the Microsoft Kinect device for capturing real-world objects and places. Our new geometry
scanning system permits the user to obtain detailed triangle models of non-moving objects with a tracked Kinect. The
system generates a texture map for the triangle mesh using video frames from the Kinect's color camera and displays a
continually updated preview of the textured model in real time, allowing the user to re-scan the scene from any direction
to fill holes or increase the texture resolution. We also present filtering methods to maintain a high-quality model of
reasonable size by removing overlapping or low-precision range scans. Our approach works well in the presence of
degenerate geometry or when closing loops about the scanned subject. We demonstrate the ability of our system to acquire
3D models at human scale with a prototype implementation in the StarCAVE, a virtual reality environment at the University
of California, San Diego. We designed the capturing algorithm to support the scanning of large areas, provided that accurate
tracking is available.
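The filtering of overlapping or low-precision range scans is described only at a high level; a simple voxel-hash version of the idea, keeping at most one sample per voxel and preferring the closer (and therefore more precise) Kinect measurement, might look like the following (the voxel size and precision heuristic are assumptions):

import numpy as np

def filter_scan(points, depths, voxel_size=0.01, kept=None):
    """Keep at most one point per voxel, preferring samples measured at
    shorter range, since the Kinect's depth error grows with distance.

    points: (N, 3) world-space points from the new range scan
    depths: (N,)   sensor-to-point distances for those samples
    kept:   dict mapping voxel index -> (point, depth) from earlier scans
    """
    if kept is None:
        kept = {}
    for p, d in zip(points, depths):
        key = tuple(np.floor(p / voxel_size).astype(int))
        if key not in kept or d < kept[key][1]:
            kept[key] = (p, d)  # replace overlapping, lower-precision sample
    return kept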
In this paper, we propose a new face relighting algorithm powered by a large database of face images captured
under various known lighting conditions (the Multi-PIE database). The key insight of our algorithm is that a face can be represented by an assembly of patches from many other faces. The algorithm finds the most similar face patches in the database in terms of lighting and appearance. By assembling the matched patches, we can visualize the input face under various lighting conditions. Unlike existing face relighting algorithms, we use neither any kind of face model nor any physical assumptions. Instead, our algorithm is a data-driven approach, synthesizing the appearance of each image patch from the appearance of example patches. Using a data-driven approach, we can account for various intrinsic facial features, including non-Lambertian skin properties as well as hair. Our algorithm is also insensitive to face misalignment. We demonstrate the performance of our algorithm with face relighting and face recognition experiments. In particular, the synthesized results show that the proposed algorithm can successfully handle various intrinsic features of an input face. The face recognition experiment also shows that our method is comparable to the most recent face relighting work.
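As a hedged illustration of the data-driven matching step, each input patch can be compared against database patches observed under the input lighting and replaced by the matching donor patch under the target lighting; the sketch below uses simple sum-of-squared-difference matching (the distance measure and patch layout are assumptions, not necessarily those of the paper):

import numpy as np

def relight_patch(input_patch, db_input_light, db_target_light):
    """Replace an input-face patch with the best-matching database patch
    rendered under the target lighting.

    input_patch:     (P,)   flattened patch from the input face
    db_input_light:  (M, P) database patches under the input lighting
    db_target_light: (M, P) the same donor patches under the target lighting
    """
    # Find the database patch most similar in appearance under the input
    # lighting, then return its counterpart under the target lighting.
    errors = np.sum((db_input_light - input_patch) ** 2, axis=1)
    return db_target_light[np.argmin(errors)]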
This text will examine how avatars and the socially interactive, online virtual worlds in which they become embodied
may lead to an understanding of identity and of self-perception, how such shifts in awareness may relate to the notion of
the undividedly holistic 'self' and whether such perceptual shifts may be instrumental in bringing forth virtual states of
experiential creative activity which may also have their precursors in the literary pseudonym, particularly as evidenced
in Fernando Pessoa's conception of 'heteronyms.'
The output of my study is a self-observational social system of my own creation, whose agents are a coterie of avatars of both sexes, endowed with distinct physical attributes, both human and non-human, and with uniquely emergent personalities that have progressed towards further idiosyncrasy over a period of three years. I, their creator, am also the observer of their undertakings, their interactions, and their creative output, all of which manifest as disparate facets of my own persona and my artistic activity.
ManifestAR is an international artists' collective working with emergent forms of augmented reality as
interventionist public art. The group sees this medium as a way of transforming public space and institutions by
installing virtual objects, which respond to and overlay the configuration of located physical meaning. This paper
will describe the ManifestAR vision, which is outlined in the group's manifesto.
Augmented reality is a way of both altering the visible and revealing the invisible. It offers new opportunities for artistic
exploration through virtual interventions in real space. In this paper, the author describes the implementation of two art
installations using different AR technologies, one using optical marker tracking on mobile devices and one integrating
stereoscopic projections into the physical environment. The first artwork, De Ondas y Abejas (The Waves and the Bees),
is based on the widely publicized (but unproven) hypothesis of a link between cellphone radiation and the phenomenon
of bee colony collapse disorder. Using an Android tablet, viewers search out small fiducial markers in the shape of
electromagnetic waves hidden throughout the gallery, which reveal swarms of bees scattered on the floor. The piece also
creates a generative soundscape based on electromagnetic fields. The second artwork, Urban Fauna, is a series of
animations in which features of the urban landscape become plants and animals. Surveillance cameras become flocks of
birds while miniature cellphone towers, lampposts, and telephone poles grow like small seedlings in time-lapse
animation. The animations are presented as small stereoscopic projections, integrated into the physical space of the
gallery. These two pieces explore the relationship between nature and technology through the visualization of invisible
forces and hidden alternate realities.
Our contemporary imaginings of technological engagement with digital environments have transitioned from flying through Virtual Reality to mobile interactions with the physical world through personal media devices. Experiences technologically mediated through social interactivity within physical environments are now preferred over isolated environments such as CAVEs or HMDs. Examples of this trend can be seen in early tele-collaborative artworks, which strove to use advanced networking to join multiple participants in shared virtual environments. Recent
developments in mobile AR allow untethered access to such shared realities in places far removed from labs and home
entertainment environments, and without the bulky and expensive technologies attached to our bodies that accompany
most VR. This paper addresses the emerging trend favoring socially immersive artworks via mobile Augmented Reality
rather than sensorially immersive Virtual Reality installations.
With particular focus on AR as a mobile, locative technology, we will discuss how concepts of immersion and
interactivity are evolving with this new medium. Immersion in the context of mobile AR can be redefined to describe
socially interactive experiences. Having distinctly different sensory, spatial and situational properties, mobile AR offers
a new form for remixing elements from traditional virtual reality with physically based social experiences. This type of
immersion offers a wide array of potential for mobile AR art forms. We are beginning to see examples of how artists can use mobile AR to create socially immersive and interactive experiences.
As cities world-wide adopt and implement reforestation initiatives to plant millions of trees in urban areas, they are
engaging in what is essentially a massive ecological and social experiment. Existing air-borne, space-borne, and field-based imaging and inventory mechanisms fail to provide key information on urban tree ecology that is crucial to informing management and policy and to supporting citizen initiatives for the planting and stewardship of trees. The shortcomings of the current approaches include limited spatial and temporal resolution, poor vantage points, cost constraints, and limitations of the biological metrics captured. Collectively, this limits their effectiveness as real-time inventory and monitoring tools.
Novel methods for imaging and monitoring the status of these emerging urban forests and encouraging their ongoing
stewardship by the public are required to ensure their success. This art-science collaboration proposes to re-envision
citizens' relationship with urban spaces by foregrounding urban trees in relation to local architectural features and
simultaneously creating new methods for urban forest monitoring. We explore creating a shift from overhead imaging or
field-based tree survey data acquisition methods to continuous, ongoing monitoring by citizen scientists as part of a
mobile augmented reality experience. We consider the possibilities of this experience as a medium for interacting with
and visualizing urban forestry data and for creating cultural engagement with urban ecology.
The proliferation of technological devices and artistic strategies has brought about an urgent and justifiable need to
capture site-specific time-based virtual reality experiences. Interactive art experiences are specifically dependent on the
orchestration of multiple sources including hardware, software, site-specific location, visitor inputs and 3D stereo and
sensory interactions. Although a photograph or video may illustrate a particular component of the work, such as a view of the artwork or a sample of its sound, these represent only a fraction of the overall experience. This paper
seeks to discuss documentation strategies that combine multiple approaches and capture the interactions between art
projection, acting, stage design, sight movement, dialogue and audio design.
A study was conducted to examine the impact, in terms of cognitive demands, of a restricted field of view
(FOV) on semi-natural locomotion in virtual reality (VR). Participants were divided into two groups: high-FOV
and low-FOV. They were asked to perform basic movements using a locomotion interface while simultaneously
performing one of two memory tasks (spatial or verbal) or no memory task. The memory tasks were intended to
simulate the competing demands when a user has primary tasks to perform while using an unnatural interface to
move through the virtual world. Results show that participants remembered fewer spatial or verbal items when
performing locomotion movements with a low FOV than with a high FOV. This equivalent verbal and spatial
detriment may indicate that locomotion movements with a restricted FOV require additional general cognitive
resources as opposed to spatial or verbal resource pools. This also emphasizes the importance of this research, as
users of a system may allow primary task performance to suffer when performing locomotion. Movement start
and completion times were also measured to examine resource requirements of specific aspects of movements.
Understanding specific performance problems resulting from concurrent tasks can inform the design of systems.
The United States military is increasingly pursuing advanced live, virtual, and constructive (LVC) training systems for
reduced cost, greater training flexibility, and decreased training times. Combining the advantages of realistic training
environments and virtual worlds, mixed reality LVC training systems can enable live and virtual trainee interaction as if
co-located. However, LVC interaction in these systems often requires constructing immersive environments, developing
hardware for live-virtual interaction, tracking in occluded environments, and an architecture that supports real-time
transfer of entity information across many systems. This paper discusses a system that overcomes these challenges to
empower LVC interaction in a reconfigurable, mixed reality environment.
The system was developed and tested in the Veldt, an immersive, reconfigurable, mixed reality LVC training environment for the dismounted warfighter at ISU, both to overcome LVC interaction challenges and to serve as a test bed for cutting-edge technology to meet future U.S. Army battlefield requirements. Trainees interact physically in the Veldt and
virtually through commercial and developed game engines. Evaluation involving military trained personnel found this
system to be effective, immersive, and useful for developing the critical decision-making skills necessary for the
battlefield. Procedural terrain modeling, model-matching database techniques, and a central communication server
process all live and virtual entity data from system components to create a cohesive virtual world across all distributed
simulators and game engines in real time. The system thus achieves rare LVC interaction within multiple physical and virtual immersive environments for real-time training across many distributed systems.
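The central communication server is described only functionally, as relaying all live and virtual entity data across the distributed simulators; a minimal sketch of such a relay, rebroadcasting each received entity-state datagram to every other known endpoint, is shown below (the message format, transport, and port are assumptions, not the Veldt implementation):

import socket

HOST, PORT = "0.0.0.0", 9999  # hypothetical relay endpoint

# Minimal entity-state relay: every datagram received from one simulator
# is rebroadcast to all other simulators that have contacted the server.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind((HOST, PORT))
clients = set()

while True:
    data, addr = server.recvfrom(4096)  # e.g. a serialized entity pose/state
    clients.add(addr)
    for client in clients:
        if client != addr:
            server.sendto(data, client)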
In this paper, we present a novel method of visualizing flow in blood vessels. Our approach reads unstructured tetrahedral
data, resamples it, and uses slice-based 3D texture volume rendering. Due to the sparse structure of blood vessels, we
utilize an octree to efficiently store the resampled data by discarding empty regions of the volume. We use animation to
convey time-series data, a wireframe surface to give structure, and the StarCAVE, a 3D virtual reality environment, to add a fully immersive element to the visualization.
Our tool has great value in interdisciplinary work, helping scientists collaborate with clinicians, by improving the
understanding of blood flow simulations. Full immersion in the flow field allows for a more intuitive understanding of the
flow phenomena, and can be a great help to medical experts for treatment planning.
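The octree is used to avoid storing empty space in the resampled volume; a simplified recursive construction that drops empty bricks entirely and keeps dense leaves for occupied regions might look like this (the brick size and emptiness test are assumptions):

import numpy as np

def build_octree(volume, origin=(0, 0, 0), min_size=16, threshold=0.0):
    """Recursively subdivide a scalar volume, discarding empty regions so
    that memory stays proportional to the sparse vessel structure."""
    if volume.size == 0 or volume.max() <= threshold:
        return None                                  # empty region: store nothing
    if max(volume.shape) <= min_size:
        return {"origin": origin, "data": volume}    # dense leaf brick
    sx, sy, sz = (s // 2 for s in volume.shape)
    children = []
    for ox, xs in ((0, slice(0, sx)), (sx, slice(sx, None))):
        for oy, ys in ((0, slice(0, sy)), (sy, slice(sy, None))):
            for oz, zs in ((0, slice(0, sz)), (sz, slice(sz, None))):
                child = build_octree(volume[xs, ys, zs],
                                     (origin[0] + ox, origin[1] + oy, origin[2] + oz),
                                     min_size, threshold)
                if child is not None:
                    children.append(child)
    return {"origin": origin, "children": children}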
We present a human-computer interface that enables the operator to plan a surgical procedure on the musculoskeletal
(MS) model of the patient's lower limbs, send the modified model to the bio-mechanical analysis module, and export the
scenario parameters to the surgical navigation system. The interface provides the operator with tools for importing a customized MS model of the patient, cutting bones, manipulating and removing bony fragments, repositioning muscle insertion points, removing muscles, and placing implants. After planning, the operator exports the modified MS model for bio-mechanical analysis of the functional outcome. If the simulation result is satisfactory, the exported scenario data may be used directly during the actual surgery.
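The content of the exported scenario data is not specified in the abstract; purely as an illustration, it can be pictured as a structured record of the planned modifications, roughly along these lines (all field names and values are hypothetical):

import json

# Hypothetical scenario export: a record of the planned modifications that the
# biomechanical analysis module and navigation system could both consume.
scenario = {
    "patient_model": "patient_042_lower_limb.msm",  # hypothetical file
    "osteotomies": [
        {"bone": "femur_r", "plane_point": [0.12, -0.03, 0.45],
         "plane_normal": [0.0, 1.0, 0.0], "fragment_removed": False},
    ],
    "muscle_changes": [
        {"muscle": "vastus_lateralis_r", "action": "reattach",
         "new_insertion": [0.10, -0.02, 0.40]},
    ],
    "implants": [
        {"type": "total_knee", "position": [0.11, -0.01, 0.42],
         "rotation_deg": [0.0, 3.5, 0.0]},
    ],
}

with open("scenario.json", "w") as f:
    json.dump(scenario, f, indent=2)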
The advantages of the developed interface are that it can be installed in various hardware configurations and operates consistently regardless of the devices used. The hardware configurations proposed for use with the interface
are: (a) a standard computer keyboard and mouse, and a 2-D display, (b) a touch screen as a single device for both input
and output, or (c) a 3-D display and a haptic device for natural manipulation of 3-D objects.
The interface may be utilized in two main fields. Experienced surgeons may use it to simulate their intervention plans and prepare input data for a surgical navigation system, while student or novice surgeons can use it to simulate the results of hypothetical procedures.
The interface has been developed in the TLEMsafe project (www.tlemsafe.eu), funded by the European Commission FP7 program.
We examined the effect of reaching distance on the prediction of a visually perceived location from reaching actions. To enable direct interaction between an observer's body and a virtual object, a system presenting the virtual object must execute the interaction when the body is at the object's visually perceived location. Conventional techniques assume that the visually perceived location is the same as the location defined by binocular disparity; however, the two locations often differ. In our previous studies we proposed a technique to predict the visually perceived location from an observer's action, and demonstrated prediction using an action in which the observer reached out to a virtual object. This study examined the range of application of the proposed approach: observers in an experiment reached out to a virtual object, with reaching distance as the experimental variable. The results showed no effect of reaching distance on the prediction, demonstrating that our technique can be applied across a wide range of reaching distances.
As technology evolves, people are increasingly interested in communicating with machines and computers naturally. This can make devices more compact and portable by eliminating remotes, keyboards, and similar peripherals, and it can help users live in an environment with less exposure to electromagnetic waves. This has made the recognition of natural modalities in human-computer interaction an appealing and promising research field. At the same time, it has been observed that using a single mode of interaction limits the full utilization of commands as well as data flow. In this paper, a multimodal platform is proposed in which, out of many natural modalities such as eye gaze, speech, voice, and facial expression, human gestures are combined with human voice to minimize the mean square error. This relaxes the strict environment needed for accurate and robust interaction when a single mode is used. Gestures complement speech: gestures are ideal for direct object manipulation, while natural language is suited to descriptive tasks. Human-computer interaction basically requires two broad stages, recognition and interpretation. Recognizing and interpreting natural modalities as complex binary instructions is a difficult task, as it integrates the real world with a virtual environment. The main idea of this paper is to develop an efficient model for fusing data coming from heterogeneous sensors, namely a camera and a microphone. Our analysis shows that efficiency increases when the heterogeneous data (image and voice) are combined at the feature level using artificial intelligence techniques. The long-term goal of this work is to design a robust system for users with physical disabilities or limited technical knowledge.
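Feature-level fusion is described only conceptually; a common realization is to concatenate the gesture (image) and voice (audio) feature vectors into one joint vector before training a single classifier, as in the hedged sketch below (the feature extractors and classifier choice are assumptions):

import numpy as np
from sklearn.neural_network import MLPClassifier

def fuse_features(gesture_feat, voice_feat):
    """Feature-level fusion: concatenate per-sample gesture (image) and
    voice (audio) feature vectors into a single joint feature vector."""
    return np.concatenate([gesture_feat, voice_feat], axis=-1)

# Train one classifier on the fused representation; X_gesture, X_voice and
# the command labels y are assumed to be extracted elsewhere.
# X = fuse_features(X_gesture, X_voice)
# clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, y)
# command = clf.predict(fuse_features(g_new, v_new).reshape(1, -1))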
This article deals with virtual tools based on the principles of open-source philosophy, applied to composite lay-up technology. It describes the virtual software and hardware elements that are necessary for working in an augmented reality environment. It first focuses on general problems of applying virtual components to the composite lay-up process. It then explains the fundamental philosophy of the newly created application and the visual-scripting process used for program development, and provides a closer view of the particular logical sections where the necessary data are gathered and compared with values from virtual arrays. A new device is also described: an adjustable operating desk that enables detailed control of any realized manufacturing process. This positioning table can determine and set the position of the working plane using commands from the computer interface or manual changes. Information about the exact position of individual layers is obtained in real time thanks to built-in sensors: one tracks changes in the desk position (X, Y, Z), while the other monitors rotation around the main axis located at the center of the table. The new software consists of four main logical areas, with data packets coming from internal computer components as well as from external devices. Finally, the display section can show the movement of a virtual item (a composite layer) along its trajectory. The article presents a new approach to the composite lay-up process and concludes with possible future improvements and other application possibilities.
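The comparison between the built-in sensor read-out and the stored target values ("virtual arrays") is described only in outline; a simplified sketch of that check might look like the following (the sensor interface, target values, and tolerances are assumptions):

import numpy as np

# Hypothetical target position (X, Y, Z in metres) and rotation (degrees)
# of the working plane for each composite layer, standing in for the
# "virtual arrays" the application compares against.
layer_targets = np.array([
    [0.0, 0.0, 0.000,  0.0],
    [0.0, 0.0, 0.002, 45.0],
    [0.0, 0.0, 0.004, 90.0],
])

def layer_in_position(layer_index, read_sensors, pos_tol=0.001, rot_tol=1.0):
    """Compare the desk's measured position/rotation with the target for
    the given layer and report whether it is within tolerance."""
    x, y, z, rot = read_sensors()  # hypothetical sensor read-out
    tx, ty, tz, trot = layer_targets[layer_index]
    pos_ok = np.all(np.abs([x - tx, y - ty, z - tz]) <= pos_tol)
    rot_ok = abs(rot - trot) <= rot_tol
    return bool(pos_ok and rot_ok)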