Visual learning is an important aspect of fly life. Flies are able to extract visual cues from objects, such as color, size, and vertical or horizontal distribution, that can be used to learn to associate a meaning (i.e. a reward or a punishment) with specific features. Biological experiments show that stationary flying flies can be trained to avoid flying towards specific visual objects appearing in the surrounding environment. Wild-type flies effectively learn to avoid those objects, but this is not the case for the learning mutant rutabaga, which is defective in the cyclic AMP-dependent pathway for plasticity. A bio-inspired architecture has been proposed to model the fly behavior, and experiments on roving robots were performed. Statistical comparisons with the biological data have been carried out, and a mutant-like impairment of the model has also been investigated.
Behavioral experiments on fruit flies have shown that they are attracted by near objects and prefer front-to-back motion. In this paper a visual orientation model is implemented on the Eye-RIS vision system and tested using a roving platform. Robotic experiments are used to collect statistical data on the system behaviour: followed trajectories, dwelling time, distribution of gaze direction, and other measures closely resembling those of the biological experiments on flies. The statistical analysis has been performed in different scenarios in which the robot faces different object distributions in the arena. The acquired data have been used to validate the proposed model through a comparison with the fruit fly experiments.
In this paper a new general-purpose perceptual control architecture is presented and applied to robot navigation
in cluttered environments. In nature, insects show the ability to react to certain stimuli with simple reflexes
using direct sensory-motor pathways, which can be considered as basic behaviors, while higher brain regions provide a secondary pathway allowing the emergence of a cognitive behavior that modulates the basic abilities. Taking inspiration from this evidence, our architecture modulates, through reinforcement learning, a set of competitive
and concurrent basic behaviors in order to accomplish the task assigned through a reward function. The core of
the architecture is constituted by the Representation layer, where different stimuli, triggering competitive reflexes,
are fused to form a unique abstract picture of the environment. The representation is formalized by means of
Reaction-Diffusion nonlinear partial differential equations, under the paradigm of the Cellular Neural Networks,
whose dynamics converge to steady-state Turing patterns. A suitable unsupervised learning mechanism, introduced at the afferent (input) stage, shapes the basins of attraction of the Turing patterns so as to incrementally drive the association between sensory stimuli and patterns. In this way, at the end of the learning stage, each pattern is characteristic of a particular behavior modulation, while its trained basin of attraction contains the set of all environmental conditions, as recorded through the sensors, that lead to the emergence of that particular behavior modulation. Robot simulations are reported to demonstrate the potential and effectiveness of the approach.
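As an illustration of the kind of dynamics involved, the sketch below iterates a generic two-species reaction-diffusion system (the Gray-Scott model with textbook parameters) on a grid until it settles into a steady-state pattern. The Representation layer in the paper uses a specific RD-CNN formulation, so this is only a conceptual analogue, not the model's actual templates.

```python
import numpy as np

# Generic reaction-diffusion sketch converging to steady-state patterns.
# All reaction terms and parameter values are textbook Gray-Scott choices,
# not the CNN templates used in the architecture described above.
N = 64
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.065
u = np.ones((N, N))
v = np.zeros((N, N))
u[28:36, 28:36], v[28:36, 28:36] = 0.5, 0.25   # seed a perturbation

def laplacian(z):
    return (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
            np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4 * z)

for _ in range(5000):                           # iterate towards a steady pattern
    uvv = u * v * v
    u += Du * laplacian(u) - uvv + F * (1 - u)
    v += Dv * laplacian(v) + uvv - (F + k) * v

# The steady-state field plays the role of the Turing pattern encoding the
# fused sensory picture: different initial conditions (stimuli) fall into
# different basins of attraction and thus yield different patterns.
```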
This paper aims to describe how AnaFocus' Eye-RIS family of vision systems has been successfully embedded
within the roving robots developed under the framework of SPARK and SPARK II European projects to solve the
action-oriented perception problem in real time. Indeed, the Eye-RIS family is a set of vision systems which are
conceived for single-chip integration using CMOS technologies. The Eye-RIS systems employ a bio-inspired
architecture where image acquisition and processing are truly intermingled and the processing itself is carried out in two
steps. In the first step, processing is fully parallel thanks to dedicated circuit structures integrated close to the sensors; these structures handle mostly analog information. In the second step, processing is carried out on digitally coded data by means of digital processors. SPARK and SPARK II, on the other hand, are European research projects whose goal is to develop completely new sensing-perceiving-moving artefacts inspired by the basic principles of living systems and based on the concept of "self-organization". As a result, the low power consumption of the Eye-RIS vision system, together with its large image-processing capabilities, makes it a suitable choice to be embedded within the roving robots developed under the framework of the SPARK projects and to implement in real time the resulting mathematical models for action-oriented perception.
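A purely software analogue of this two-step flow is sketched below: a vectorized, per-pixel "analog" stage that reduces the frame to a binary image, followed by a "digital" stage that extracts a compact measurement. The specific operations are illustrative choices and do not correspond to an actual Eye-RIS program.

```python
import numpy as np

# Step 1 (on-chip, analog, fully parallel): simple early vision, emulated here
# with whole-frame vectorized operations. Step 2 (digital processor): decisions
# on the reduced, digitally coded result. Both stages are only illustrative.
def analog_stage(frame):
    # Local smoothing (resistive-grid-like diffusion) followed by thresholding
    blurred = (frame +
               np.roll(frame, 1, 0) + np.roll(frame, -1, 0) +
               np.roll(frame, 1, 1) + np.roll(frame, -1, 1)) / 5.0
    return blurred > 0.5                       # binary image leaving the analog plane

def digital_stage(binary):
    # Extract a compact measurement: centroid of the bright region, if any
    ys, xs = np.nonzero(binary)
    return (xs.mean(), ys.mean()) if xs.size else None

frame = np.random.rand(128, 128)               # stand-in for an acquired image
print(digital_stage(analog_stage(frame)))      # e.g. a target position for the robot
```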
Segmentation is the process of partitioning a digital image into multiple meaningful regions. Since such processing demands considerable computational power in real-time applications, we have implemented a new segmentation algorithm that exploits the capabilities of the Eye-RIS Vision System to execute the algorithm in a very short time. The segmentation algorithm is implemented mainly in three steps. In the first, pre-processing step, the images are acquired and noise filtering with a Gaussian function is performed. In the second step, a Sobel-operator-based edge detection approach is implemented on the system. In the last step, morphological and logical operations are used to segment the images as post-processing. Experimental results on different images show the accuracy of the proposed segmentation algorithm. Visual inspection and timing analysis (7.83 ms per frame, 127 frames/s) show that the proposed segmentation algorithm can be executed in real-time video processing applications. These results also demonstrate the capability of the Eye-RIS Vision System for real-time image processing applications.
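For readers without access to the hardware, a desktop analogue of the three-step pipeline might look like the OpenCV sketch below. The kernel sizes, thresholds, input file name and connected-component post-processing are illustrative choices, not the operations programmed on the Eye-RIS chip.

```python
import cv2
import numpy as np

# Step 0: acquire an image ("input.png" is a hypothetical file name)
img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)

# Step 1: pre-processing - Gaussian noise filtering
smooth = cv2.GaussianBlur(img, (5, 5), sigmaX=1.0)

# Step 2: Sobel-based edge detection (gradient magnitude, then threshold)
gx = cv2.Sobel(smooth, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(smooth, cv2.CV_32F, 0, 1, ksize=3)
mag = cv2.magnitude(gx, gy)
_, edges = cv2.threshold(np.uint8(np.clip(mag, 0, 255)), 60, 255, cv2.THRESH_BINARY)

# Step 3: post-processing - morphological closing, then labelling of the
# resulting regions as the segmented objects
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
n_labels, labels = cv2.connectedComponents(closed)
print(f"{n_labels - 1} segmented regions")
```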
This paper describes a correlation-based navigation algorithm, based on an unsupervised learning paradigm for spiking neural networks called Spike Timing Dependent Plasticity (STDP). The algorithm was implemented on a new bio-inspired hybrid mini-robot called TriBot to learn and increase its behavioral capabilities. In fact, correlation-based algorithms have been found to explain many basic behaviors in simple animals. An interesting consequence of STDP is that the system is able to learn high-level sensor features on the basis of a set of basic reflexes triggered by low-level sensor inputs. TriBot is composed of three modules, the first two being identical and inspired by the Whegs hybrid robot. A distinctive characteristic of the robot is the innovative shape of its three-spoke appendages, which increases the stability of the structure. The last module is composed of two standard legs with 3 degrees of freedom each. Thanks to the cooperation among these modules, TriBot is able to cope with irregular terrain, overcoming potential deadlock situations, to climb obstacles that are high compared to its size, and to manipulate objects. Robot experiments are reported to demonstrate the potential and effectiveness of the approach.
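For reference, the pair-based STDP rule at the heart of such correlation-based learning can be sketched as follows; the time constants and learning rates are generic textbook values, not the ones used on TriBot.

```python
import numpy as np

# Pair-based STDP: a synapse is potentiated when the presynaptic spike
# precedes the postsynaptic one, and depressed otherwise. Parameter values
# below are generic, not those of the robot's controller.
A_PLUS, A_MINUS = 0.01, 0.012       # learning rates (potentiation / depression)
TAU_PLUS, TAU_MINUS = 20.0, 20.0    # ms, exponential STDP windows

def stdp_dw(t_pre, t_post):
    """Weight change produced by one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:   # pre before post: potentiation (causal pairing)
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    else:        # post before pre: depression (anti-causal pairing)
        return -A_MINUS * np.exp(dt / TAU_MINUS)

# Example: a distance-sensor spike 5 ms before a contact-reflex spike
w = 0.5
w += stdp_dw(t_pre=100.0, t_post=105.0)   # correlated pairing -> strengthened
w += stdp_dw(t_pre=200.0, t_post=190.0)   # anti-correlated pairing -> weakened
print(round(w, 4))
```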
In order to solve the navigation problem of a mobile robot in an unstructured environment, a versatile sensory
system and efficient locomotion control algorithms are necessary. In this paper an innovative sensory system for
action-oriented perception applied to a legged robot is presented. An important problem we address is how to
utilize a large variety and number of sensors, while having systems that can operate in real time. Our solution is
to use sensory systems that incorporate analog and parallel processing, inspired by biological systems, to reduce
the required data exchange with the motor control layer. In particular, as concerns the visual system, we use the
Eye-RIS v1.1 board made by Anafocus, which is based on a fully parallel mixed-signal array sensor-processor
chip. The hearing sensor is inspired by the cricket hearing system and allows efficient localization of a specific
sound source with a very simple analog circuit. Our robot utilizes additional sensors for touch, posture, load,
distance, and heading, and thus requires customized and parallel processing for concurrent acquisition. Therefore, Field Programmable Gate Array (FPGA)-based hardware was used to manage the multi-sensory acquisition and processing. This choice was made because FPGAs permit the implementation of customized digital logic blocks that can operate in parallel, allowing the sensors to be driven simultaneously. With this approach, the proposed multi-sensory architecture can achieve real-time performance.
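The sketch below only conveys the idea of concurrent, time-stamped acquisition from several independent sensor blocks feeding a common buffer; on the robot this is done by parallel digital logic in the FPGA, and the sensor names, sampling rates and thread-based emulation here are purely illustrative.

```python
import threading
import queue
import time
import random

# Each sensor has its own acquisition "block" running independently and
# pushing time-stamped samples to a shared buffer (software stand-in for
# the FPGA's parallel logic blocks).
samples = queue.Queue()

def sensor_block(name, period_s, stop):
    while not stop.is_set():
        value = random.random()                  # stand-in for a real reading
        samples.put((time.time(), name, value))  # time-stamped sample
        time.sleep(period_s)

stop = threading.Event()
threads = [threading.Thread(target=sensor_block, args=(n, p, stop))
           for n, p in [("touch", 0.01), ("hearing", 0.005), ("distance", 0.02)]]
for t in threads:
    t.start()
time.sleep(0.1)
stop.set()
for t in threads:
    t.join()
print(samples.qsize(), "samples acquired concurrently")
```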