When driving cross-country, detecting negative obstacles such as ditches and creeks and estimating their state is mandatory for safe operation. Very often, ditches can be detected both by differing photometric properties (soil vs. vegetation) and by range (disparity) discontinuities. Algorithms should therefore exploit both the photometric and the geometric properties to detect obstacles reliably. This has been achieved in UBM's EMS-Vision system (Expectation-based, Multifocal, Saccadic) for autonomous vehicles. The perception system uses Sarnoff's image processing hardware for real-time stereo vision. This sensor provides both gray-value and disparity information for each pixel at high resolution and frame rates.
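The cue combination described above can be illustrated with a minimal sketch: flag pixels where a photometric cue (darker soil against vegetation) and a geometric cue (a disparity discontinuity) fire together. Function name, thresholds, and the specific cues are illustrative assumptions, not the EMS-Vision implementation.

```python
import numpy as np

def ditch_candidates(gray, disparity, gray_thresh=40.0, disp_jump=4.0):
    """Flag pixels that look like a negative obstacle by combining a
    photometric cue (pixels darker than their row's mean, a stand-in for
    soil vs. vegetation) with a geometric cue (a row-wise disparity
    discontinuity). All thresholds are illustrative assumptions."""
    # Photometric cue: pixel clearly darker than its local row mean.
    photometric = gray < (gray.mean(axis=1, keepdims=True) - gray_thresh)
    # Geometric cue: large disparity jump between adjacent image rows,
    # i.e. a range discontinuity as at the near edge of a ditch.
    disp_step = np.abs(np.diff(disparity, axis=0, prepend=disparity[:1]))
    geometric = disp_step > disp_jump
    # Require both cues to fire, as the abstract argues for reliability.
    return photometric & geometric
```

Requiring both cues suppresses false alarms from shadows (photometric only) or stereo noise (geometric only), which is the rationale the abstract gives for fusing the two channels.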
To perform an autonomous jink, the boundaries of an obstacle have to be measured accurately in order to calculate a safe driving trajectory. Ditches in particular are often very extended, so, given the cameras' restricted field of view, active gaze control is necessary to explore the boundaries of an obstacle.
For successful measurement of image features, the system has to satisfy conditions defined by the perception expert. It has to deal with the time constraints of the active camera platform while performing saccades and to maintain the geometric conditions defined by the locomotion expert for performing a jink. The experts therefore have to cooperate. This cooperation is controlled by a central decision unit (CD), which has knowledge about the mission, the capabilities available in the system, and their limitations. The central decision unit reacts to the result of situation assessment by starting, parameterizing, or stopping actions (instances of capabilities). The approach has been tested with the 5-ton van VaMoRs; experimental results are shown for driving in a typical off-road scenario.
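The central decision unit's role of starting, parameterizing, or stopping actions can be sketched as a tiny dispatcher. Class names, situation labels, and parameters here are hypothetical illustrations of the described architecture, not the actual EMS-Vision interfaces.

```python
class Action:
    """An instance of a capability that the central decision unit can
    start, parameterize, or stop (names are illustrative assumptions)."""
    def __init__(self, capability):
        self.capability = capability
        self.state = "idle"
        self.params = {}

    def start(self, **params):
        self.params = params
        self.state = "running"

    def stop(self):
        self.state = "stopped"

def central_decision(situation, actions):
    """Toy rule set: react to the assessed situation by dispatching
    actions, e.g. start a jink with a lateral offset once a ditch is
    detected. Situation labels and parameters are assumptions."""
    if situation == "ditch_detected":
        actions["jink"].start(lateral_offset_m=3.0)
        actions["explore_boundary"].start()
    elif situation == "ditch_passed":
        actions["jink"].stop()
        actions["explore_boundary"].stop()
```

The point of the sketch is the control flow the abstract describes: situation assessment feeds a single decision point, which parameterizes cooperating perception and locomotion experts rather than letting them act independently.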
For robust and safe cross-country driving, an autonomous ground vehicle must be able to handle conflicts that may arise from limitations of perception performance, from the dynamics of the vehicle's active camera head, and from the feasibility of locomotion maneuvers. This paper describes the interaction and coordination of image processing, gaze control, and behavior decision. The behavior decision module specifies the perception tasks for the image processing experts according to the mission, the capabilities of the vehicle, and the knowledge about the external world accumulated up to the present time. Depending on the perception task received, an image processing expert specifies combinations of so-called regions of attention (RoA) for each object in 3D object coordinates. These RoA cover relevant object parts and should be visible at the resolution and in the manner required by the measurement techniques applied. The gaze control unit analyzes the combinations of RoA of all image processing experts in order to plan, optimize, and perform a sequence of smooth pursuits interrupted by saccades. This dynamic interaction has been demonstrated in different complex and scalable autonomous missions with the UBM test vehicle VaMoRs. The mission described in this paper confronts the vehicle with an unexpected ditch of unknown size and position, forcing reactive behavior in locomotion, gaze control, and image processing.
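The RoA-based gaze planning described above can be illustrated with a deliberately simplified sketch: reduce each RoA to an azimuth and angular extent, pick the pointing direction that keeps the most RoA inside the field of view, and classify the required gaze change as smooth pursuit or saccade. All names, the 1D geometry, and the thresholds are assumptions for illustration; the real system plans over 3D object coordinates and a two-axis platform.

```python
import math
from dataclasses import dataclass

@dataclass
class RoA:
    """Region of attention, simplified to an azimuth direction and an
    angular extent (the real system uses 3D object coordinates)."""
    azimuth: float   # rad, direction of the object part
    extent: float    # rad, angular size that must stay in view

def plan_gaze(roas, current_azimuth, fov=math.radians(45),
              saccade_thresh=math.radians(5)):
    """Toy gaze planner: among candidate pointing directions (centering
    each RoA in turn), choose the one covering the most RoA, and report
    whether reaching it needs a saccade or only smooth pursuit.
    Thresholds and strategy are illustrative assumptions."""
    best_az, best_count = current_azimuth, -1
    for r in roas:
        covered = sum(1 for q in roas
                      if abs(q.azimuth - r.azimuth) + q.extent / 2 <= fov / 2)
        if covered > best_count:
            best_az, best_count = r.azimuth, covered
    jump = abs(best_az - current_azimuth)
    mode = "saccade" if jump > saccade_thresh else "smooth_pursuit"
    return best_az, mode
```

Even this toy version shows the trade-off the gaze control unit manages: grouping compatible RoA into one fixation minimizes the number of saccades, during which no measurements can be taken.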