This paper presents a seamlessly controlled human multi-robot system composed of semi-autonomous ground and aerial robots for source localization tasks. The system combines augmented reality interface capabilities with a human supervisor's ability to control multiple robots. The role of this human multi-robot interface is to allow an operator to control groups of heterogeneous robots collaboratively in real time. The interface uses path planning algorithms to ensure obstacles are avoided and to free operators for higher-level tasks. Each robot knows the environment and obstacles and can automatically generate a collision-free path to any user-selected target. Sensor information from each individual robot is displayed directly on that robot in the video view. In addition, a sensor-fused AR view helps users pinpoint source information and supports the goals of the mission. The paper presents a preliminary human factors evaluation of this system in which several interface conditions are tested for source detection tasks. Results show that the novel augmented reality multi-robot controls (Point-and-Go and Path Planning) reduced mission completion times compared with traditional joystick control for target detection missions. Usability tests and operator workload analysis are also reported.
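As an illustration of the collision-free path generation described above, the following is a minimal sketch (not the authors' implementation) of graph search over a known occupancy grid to a user-selected target; the grid encoding and unit step cost are assumptions for the example.

import heapq

def plan_path(grid, start, goal):
    """A*-style search over a 2D occupancy grid (0 = free, 1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    heuristic = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(heuristic(start), 0, start, [start])]
    visited = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path                      # collision-free waypoint list
        if node in visited:
            continue
        visited.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(frontier, (cost + 1 + heuristic((nr, nc)),
                                          cost + 1, (nr, nc), path + [(nr, nc)]))
    return None                              # no collision-free path exists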
Teams of heterogeneous robots with different dynamics or capabilities can perform a variety of tasks such as multipoint surveillance, cooperative transport, and exploration in hazardous environments. In this study, we work with heterogeneous teams of semi-autonomous ground and aerial robots for contaminant localization. We developed a human interface system that links every real robot to its virtual counterpart. A novel virtual interface, integrated with augmented reality, monitors the position and video-derived sensory information of ground and aerial robots in a 3D virtual environment and improves user situational awareness. An operator can efficiently control the real robots using the Drag-to-Move method on their virtual counterparts. This enables an operator to control groups of heterogeneous robots collaboratively, allowing more contaminant sources to be pursued simultaneously. An advanced feature of the virtual interface system is guarded teleoperation, which prevents operators from accidentally driving multiple robots into walls and other objects. Moreover, the image guidance and tracking feature reduces operator workload.
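The guarded-teleoperation behavior mentioned above can be pictured with a small sketch: operator velocity commands are clamped when a range reading in the direction of travel falls below a safety margin. The sensor arrangement, thresholds, and function names below are illustrative assumptions, not the authors' implementation.

STOP_DIST = 0.3    # metres: halt forward motion if an obstacle is closer than this
SLOW_DIST = 1.0    # metres: scale speed down inside this band

def guard_velocity(cmd_forward, cmd_turn, min_range_ahead):
    """Return a safe (forward, turn) command given the closest obstacle ahead."""
    if cmd_forward > 0.0 and min_range_ahead < STOP_DIST:
        return 0.0, cmd_turn           # block forward motion, still allow turning away
    if cmd_forward > 0.0 and min_range_ahead < SLOW_DIST:
        scale = (min_range_ahead - STOP_DIST) / (SLOW_DIST - STOP_DIST)
        return cmd_forward * scale, cmd_turn
    return cmd_forward, cmd_turn       # no obstacle inside the guard band

# Example: operator commands full speed ahead with a wall 0.5 m away
print(guard_velocity(1.0, 0.2, 0.5))   # forward speed reduced to roughly 0.29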
There is a strong demand for efficient explosive-detection devices and deployment methods in the field. In this study we present a prototype mast, mounted on an unmanned ground vehicle and controlled wirelessly, that uses a telescoping pulley system for optimal performance. The mast and payload reach up to eight feet above the platform, with a gripper that can pick up objects. The current mobile platform is operated with a remote-control device to move the arm and the robot itself from a safe distance. The pulley system can also be used to extend a camera or explosive-detection sensor under a vehicle, and the mast is outfitted with sensors. A simple master-slave strategy will not be sufficient as the navigation and sensory inputs become more complex. In this paper we provide a tested software/hardware framework that allows the mobile platform and the extended arm to offload operator tasks to autonomous behaviors while maintaining teleoperation, implementing semi-autonomous behaviors. The architecture involves a server that communicates commands to, and receives sensor inputs from, the mobile platform via a wireless modem. This server can take requests from multiple client processes, which have prioritized access to on-board sensor readings and can command the steering. The clients include the teleoperation soldier unit and any number of other autonomous behaviors linked to particular sensor information or triggered by the operator. For instance, low-latency clients with sensory information can control the behavior of certain tasks to prevent collisions, place sensor pods precisely, return to preplanned positions, home the unit's location, or even perform image enhancement or object recognition on streamed video.
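To make the prioritized client arbitration concrete, the following is a hedged sketch of how such a server might pick one steering command from several competing clients; the class, priority values, and client names are assumptions for illustration, not the framework described in the paper.

class CommandServer:
    def __init__(self):
        self.requests = {}                 # client name -> (priority, command)

    def submit(self, client, priority, command):
        self.requests[client] = (priority, command)

    def clear(self, client):
        self.requests.pop(client, None)

    def arbitrate(self):
        """Return the steering command from the highest-priority active client, if any."""
        if not self.requests:
            return None
        _, command = max(self.requests.values(), key=lambda pc: pc[0])
        return command

server = CommandServer()
server.submit("teleop_soldier_unit", priority=1, command={"steer": 0.4, "speed": 0.8})
server.submit("collision_avoidance", priority=10, command={"steer": 0.0, "speed": 0.0})
print(server.arbitrate())   # the collision-avoidance client overrides the operator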
KEYWORDS: Inspection, Data modeling, Cameras, Video, Sensors, Calibration, Motion models, Electromechanical design, Data communications, Control systems
Our research has focused on how to expand the capabilities of an Omni-Directional Inspection Robot (ODIS) to
assist in vehicle inspections at traffic control checkpoints with a standoff distance of 450m. We have implemented
a mast, extendible to eight feet, capable of carrying a sensor payload, with an RS-232 connection and a simple
set of commands to control its operation. We have integrated a communications chain that provides the desired
distance and sufficient speed to transmit a live digital feed to the operator control unit (OCU). We have also created
a physically-based simulation of ODIS and our mast inside of Webots and have taken data to calibrate a motion
response model.
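As a rough picture of driving the mast over its RS-232 link with a simple command set, the sketch below sends one ASCII command and reads a reply. The port name, baud rate, and command string are hypothetical, since the actual protocol is not given here; it assumes the pyserial package.

import serial

def send_mast_command(port, command, baud=9600, timeout=1.0):
    """Write one newline-terminated ASCII command and return any reply line."""
    with serial.Serial(port, baudrate=baud, timeout=timeout) as link:
        link.write((command + "\n").encode("ascii"))
        return link.readline().decode("ascii", errors="replace").strip()

# Example usage (hypothetical command set):
# reply = send_mast_command("/dev/ttyUSB0", "EXTEND 8.0")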