This paper presents the Mobile Intelligence Team's approach to the CANINE outdoor ground robot competition. The competition required developing a robot with dog-like retrieving capabilities that operated fully autonomously in unstructured environments. The vision team consisted of Mobile Intelligence, the Georgia Institute of Technology, and Wayne State University. The key computer vision aspects of the project were quickly learning the distinguishing characteristics of novel objects, searching images for the object as the robot drove a search pattern, identifying people near the robot for safe operation, correctly identifying the object among distractors, and localizing the object for retrieval. The classifier used to identify the objects is discussed, including an analysis of its performance, and an overview of the entire system architecture is presented. A discussion of the robot's performance in the competition demonstrates the system's successes in real-world testing.
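As a concrete illustration of the kind of pipeline the abstract describes, the sketch below learns a color-histogram model of a novel object from a few example crops and scans a frame for matching windows. This is a hypothetical sketch, not the team's actual classifier; the histogram granularity, window size, stride, and matching threshold are all assumed for illustration.

```python
# Hypothetical sketch of quick novel-object learning via color histograms.
# NOT the paper's classifier; all parameters below are illustrative.
import numpy as np

N_BINS = 8  # bins per RGB channel (assumed)

def color_histogram(pixels):
    """3-D RGB histogram of an H x W x 3 uint8 array, normalized to sum to 1."""
    hist, _ = np.histogramdd(pixels.reshape(-1, 3),
                             bins=(N_BINS,) * 3,
                             range=((0, 256),) * 3)
    return hist / max(hist.sum(), 1)

def learn_object(example_crops):
    """Average the histograms of a few example crops of the novel object."""
    return np.mean([color_histogram(c) for c in example_crops], axis=0)

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical color distributions."""
    return np.minimum(h1, h2).sum()

def search_image(image, model, win=32, stride=16, threshold=0.5):
    """Slide a window over the frame; return centers of windows matching the model."""
    hits = []
    h, w, _ = image.shape
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            score = histogram_intersection(
                color_histogram(image[y:y + win, x:x + win]), model)
            if score >= threshold:
                hits.append((x + win // 2, y + win // 2, score))
    return hits
```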
Deploying a worldwide force that is strategically responsive and dominant at every point on the spectrum of conflict involves the cooperative system development and use of advanced technologies that yield revolutionary capabilities to support the war-fighter's needs. This presentation describes an agent-based control architecture and prototype implementation developed by ARDEC that enables command and control of multiple unmanned platforms and associated mission packages for collaborative target hand-off/engagement. Current prototypes provide the ability to remotely locate, track, and predict the movement of enemy targets on the battlefield using a variety of sensor systems hosted on multiple, non-homogeneous SUAVs and UGVs.
This paper presents formalisms for describing societies of cooperating behavior-based mobile robots, including coordination among members of homogeneous teams and of heterogeneous castes, assemblages of behaviors on individual robots, and perceptual strategies within primitive sensorimotor behaviors. This formal language is intended to facilitate proving properties about systems described in it.
This paper describes ongoing research into methods that allow a mobile robot to function effectively in a manufacturing environment; specifically, generation of the ballistic motion phase of the docking behavior. The overall docking behavior causes the robot to move to a workstation and park in an appropriate position, and it consists of two distinct types of motion. Ballistic motion rapidly moves the robot to an area near the dock, where recognition of the dock triggers the slower, more accurate orienting motion for final positioning. The ballistic motion is supported by two simple low-level behaviors: a phototropic (light-seeking) behavior and a temporal (motion) detection behavior. This perceptual strategy maneuvers the vehicle toward the bright light or the abundant motion usually associated with workstations. These vision algorithms were selected because strict knowledge of the initial position of the dock is not needed and each requires limited computational resources. The system has been implemented and shown to successfully generate ballistic motion in support of docking in typical manufacturing environments.
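The abstract does not spell out how the two perceptual strategies are arbitrated, so the sketch below is an assumption-laden illustration: it steers toward the brightest image column (phototropic) unless frame-to-frame motion energy is abundant, in which case it steers toward the column with the most motion (temporal detection). The grayscale frame format, threshold, and steering mapping are all illustrative, not the paper's implementation.

```python
# Illustrative sketch of a ballistic-motion perceptual strategy, assuming
# grayscale frames as 2-D numpy arrays. Arbitration rule and threshold are
# assumptions; the paper does not specify them.
import numpy as np

def steering_from_column(col, width):
    """Map an image column index to a steering command in [-1, 1] (left..right)."""
    return 2.0 * col / (width - 1) - 1.0

def phototropic_heading(frame):
    """Steer toward the brightest image column (light-seeking behavior)."""
    column_brightness = frame.mean(axis=0)
    return steering_from_column(int(column_brightness.argmax()), frame.shape[1])

def temporal_heading(prev_frame, frame):
    """Steer toward the column with the most frame-to-frame motion."""
    motion = np.abs(frame.astype(int) - prev_frame.astype(int)).mean(axis=0)
    return steering_from_column(int(motion.argmax()), frame.shape[1])

def ballistic_step(prev_frame, frame, motion_threshold=10.0):
    """Prefer the motion cue when motion is abundant, otherwise seek light."""
    motion_energy = np.abs(frame.astype(int) - prev_frame.astype(int)).mean()
    if motion_energy > motion_threshold:
        return temporal_heading(prev_frame, frame)
    return phototropic_heading(frame)
```

Both cues reduce to a one-dimensional argmax over image columns, which is consistent with the abstract's point that these strategies require limited computational resources and no prior knowledge of the dock's position.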