This work describes a software/hardware framework in which cognitive architectures can be realized and applied to control different kinds of robotic platforms. The framework can be interfaced with a robot prototype, mediating the sensory-motor loop; moreover, 2D and 3D kinematic or dynamic simulation environments can be used to evaluate the cognitive capabilities of the robotic system. Here we address design choices and implementation issues related to the proposed robotic programming environment, paying particular attention to its modular structure, an important characteristic for a flexible and powerful framework. The main advantage of the proposed architecture is rapid application development: applications can easily be tested on different robotic platforms, either real or simulated, because the differences between platforms are properly masked by the architecture. To validate the functionality of the proposed system, an ad hoc simulator has also been implemented.
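The platform-masking idea described above can be sketched as a thin hardware-abstraction layer: the control code depends only on an abstract interface, so a real prototype and a simulator are interchangeable. This is only an illustrative sketch; the class and method names (`RobotPlatform`, `read_sensors`, `set_motors`) are hypothetical, not taken from the framework itself.

```python
from abc import ABC, abstractmethod

class RobotPlatform(ABC):
    """Hypothetical interface: the cognitive architecture talks only to
    this class, so real and simulated robots can be swapped freely."""

    @abstractmethod
    def read_sensors(self) -> dict:
        ...

    @abstractmethod
    def set_motors(self, left: float, right: float) -> None:
        ...

class SimulatedRobot(RobotPlatform):
    """Toy kinematic simulator standing in for the real prototype."""

    def __init__(self) -> None:
        self.x = 0.0

    def read_sensors(self) -> dict:
        return {"x": self.x}

    def set_motors(self, left: float, right: float) -> None:
        # Trivial kinematics: average wheel command moves the robot forward.
        self.x += 0.5 * (left + right)

def control_step(robot: RobotPlatform) -> None:
    """Controller code written once, against the abstract interface."""
    robot.read_sensors()
    robot.set_motors(1.0, 1.0)

bot = SimulatedRobot()
control_step(bot)
print(bot.read_sensors())  # {'x': 1.0}
```

A real robot would implement the same two methods over its drivers, leaving `control_step` untouched.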
In this paper a new general-purpose perceptual control architecture is presented and applied to robot navigation in cluttered environments. In nature, insects show the ability to react to certain stimuli with simple reflexes through direct sensory-motor pathways, which can be considered basic behaviors, while higher brain regions provide a secondary pathway that allows the emergence of cognitive behavior modulating the basic abilities. Taking inspiration from this evidence, our architecture uses reinforcement learning to modulate a set of competitive and concurrent basic behaviors in order to accomplish the task assigned through a reward function. The core of the architecture is the Representation layer, where the different stimuli triggering competing reflexes are fused to form a unique abstract picture of the environment. The representation is formalized by means of reaction-diffusion nonlinear partial differential equations, under the paradigm of Cellular Neural Networks, whose dynamics converge to steady-state Turing patterns. A suitable unsupervised learning rule, introduced at the afferent (input) stage, shapes the basins of attraction of the Turing patterns so as to incrementally build the association between sensory stimuli and patterns. In this way, at the end of the learning stage, each pattern characterizes a particular behavior modulation, while its trained basin of attraction contains the set of all environment conditions, as recorded through the sensors, that lead to the emergence of that particular behavior modulation. Robot simulations are reported to demonstrate the potential and effectiveness of the approach.
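The core mechanism above relies on a reaction-diffusion cellular network whose dynamics settle into a stable spatial pattern, with the initial (sensor-driven) state selecting which pattern emerges. The toy sketch below uses a one-species bistable reaction-diffusion network, a deliberately simplified stand-in for the two-species Turing mechanism of the paper: the parameters, the 1D grid, and the cubic nonlinearity are illustrative assumptions, not the authors' model.

```python
import numpy as np

def settle(u0, D=0.1, dt=0.1, steps=3000):
    """Evolve a 1D bistable reaction-diffusion cellular network
        du/dt = u - u**3 + D * laplacian(u)   (periodic boundary)
    until it reaches a steady spatial pattern of +1 / -1 patches."""
    u = u0.copy()
    for _ in range(steps):
        lap = np.roll(u, 1) + np.roll(u, -1) - 2.0 * u
        u += dt * (u - u**3 + D * lap)
    return u

rng = np.random.default_rng(0)
stimulus = 0.1 * rng.standard_normal(200)  # sensor-driven initial state
pattern = settle(stimulus)
# Most cells settle near the stable equilibria +1 or -1, forming patches;
# which patchy pattern emerges depends on the initial stimulus, i.e. on
# which basin of attraction the stimulus falls into.
```

In the architecture described above, an unsupervised learning rule at the input stage would reshape how stimuli are mapped into these basins, so that each steady pattern comes to stand for one behavior modulation.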