This paper presents a tracking method to detect and track independently moving targets, attempting to traverse the railway, in monocular camera sequences. The method is capable of tracking the maximum number of pixels belonging to an object. It starts by detecting and separating moving objects using background subtraction and an energy-vector-based clustering. Next, the method performs local tracking. Tracking starts by generating an initial optical flow for all object pixels by propagating the optical flow of Harris corner points (computed with the Lucas–Kanade technique) using a normal distribution. An iterative procedure, combining Kalman filtering with adaptive parameters, color-intensity-difference-based optimization, and validation constraints, is then applied to reach a precise and robust optical flow estimate for the majority of the pixels of the tracked objects. Different experimental results are presented, evaluated, and discussed to show the effectiveness of the method in tracking objects that may move along complex and overlapping trajectories.
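The flow-propagation step described above can be sketched as follows: sparse flow vectors known at corner points are spread to every object pixel with normal-distribution (Gaussian) weights on distance. This is only a minimal illustration of the idea, not the paper's implementation; the function name and the single `sigma` parameter are assumptions.

```python
import numpy as np

def propagate_flow(corners, flows, pixels, sigma=5.0):
    """Spread sparse optical-flow vectors (known at corner points) to
    arbitrary pixels using Gaussian (normal-distribution) weights.
    corners: (N, 2) corner coordinates; flows: (N, 2) flow at each corner;
    pixels: (M, 2) pixel coordinates to initialise.
    Returns (M, 2) initial flow estimates (illustrative sketch only)."""
    d2 = ((pixels[:, None, :] - corners[None, :, :]) ** 2).sum(-1)  # (M, N) squared distances
    w = np.exp(-d2 / (2.0 * sigma ** 2))                            # Gaussian weights
    w /= w.sum(axis=1, keepdims=True)                               # normalise per pixel
    return w @ flows                                                # weighted average of flows
```

A pixel sitting on a corner point inherits (essentially) that corner's flow, while pixels between corners receive a smooth blend.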
In recent years, a variety of nonlinear dimensionality reduction (NLDR) techniques have been proposed in the literature. They aim to address the limitations of traditional techniques such as PCA and classical scaling. Most of these techniques assume that the data of interest lie on an embedded nonlinear manifold within the higher-dimensional space. They provide a mapping from the high-dimensional space to the low-dimensional embedding and may be viewed, in the context of machine learning, as a preliminary feature extraction step, after which pattern recognition algorithms are applied. Laplacian Eigenmaps (LE) is a nonlinear graph-based dimensionality reduction method. It has been successfully applied to many practical problems such as face recognition. However, the construction of the LE graph suffers, like other graph-based DR techniques, from the following issues: (1) the neighborhood graph is artificially defined in advance and thus does not necessarily benefit the desired DR task; (2) the graph is built using the nearest-neighbor criterion, which tends to work poorly due to the high dimensionality of the original space; and (3) its computation depends on two parameters whose values are generally difficult to assign: the neighborhood size and the heat kernel parameter. To address the above-mentioned problems, for the particular case of the LPP method (a linear version of LE), L. Zhang et al.1 have developed a novel DR algorithm whose idea is to integrate graph construction with the specific DR process into a unified framework. This algorithm results in an optimized graph rather than a predefined one.
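The standard (predefined-graph) LE pipeline that the above critique targets can be sketched in a few lines: build a k-NN graph with heat-kernel weights (the two hard-to-tune parameters `k` and `t` from issue (3)), then take the smallest non-trivial eigenvectors of the normalised graph Laplacian. This is a minimal textbook sketch, not the optimized-graph algorithm of Zhang et al.

```python
import numpy as np

def laplacian_eigenmaps(X, k=2, dim=1, t=1.0):
    """Minimal Laplacian Eigenmaps sketch: k-NN graph with heat-kernel
    weights, then the smallest non-trivial eigenvectors of the
    normalised graph Laplacian give the low-dimensional embedding.
    X: (n, D) data; returns (n, dim) embedding."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(d2[i])[1:k + 1]          # k nearest neighbours (skip self)
        W[i, idx] = np.exp(-d2[i, idx] / t)       # heat-kernel weight
    W = np.maximum(W, W.T)                        # symmetrise the graph
    D = W.sum(1)
    L = np.diag(D) - W                            # unnormalised graph Laplacian
    Dinv = np.diag(1.0 / np.sqrt(D))
    vals, vecs = np.linalg.eigh(Dinv @ L @ Dinv)  # normalised Laplacian spectrum
    return Dinv @ vecs[:, 1:dim + 1]              # drop the trivial eigenvector
```

Both `k` and `t` are fixed before seeing the DR objective, which is exactly the coupling the unified framework removes.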
KEYWORDS: Image segmentation, Image processing algorithms and systems, Tolerancing, RGB color model, Phase transfer function, Process modeling, Databases, Image processing, Detection and tracking algorithms, Reliability
In this paper, we propose an orthophotoplan segmentation method based on the watershed algorithm combined with an efficient region-merging strategy for roof detection. The preliminary segmentation is obtained by the watershed algorithm with a couple of colorimetric invariant/color gradient optimized for the application. Using the appropriate invariant/gradient couple limits the illumination changes (shadows, brightness, etc.) affecting the images. Even though the watershed-based results are good, the images are over-segmented. A region-merging procedure is therefore proposed. This procedure uses a merging criterion based on 2D modeling of roof ridges and region features adapted to the particularities of the orthophotoplan. The proposed strategy is evaluated on 100 real roof images against ground-truth segmentations in order to demonstrate the effectiveness and reliability of the proposed approach.
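A region-merging pass of the kind described can be sketched with a union-find over a region adjacency graph. The paper's criterion relies on 2D roof-ridge modeling; the sketch below substitutes a simple mean-colour distance as a placeholder criterion, and all names and thresholds are assumptions.

```python
import numpy as np

def merge_regions(labels, image, adjacency, thresh=20.0):
    """Toy region-merging pass: merge each adjacent region pair whose
    mean-colour distance is below `thresh`. `labels` is an (H, W) label
    map, `image` an (H, W, 3) array, `adjacency` a set of label pairs.
    (The paper's criterion uses 2D roof-ridge modelling; mean colour
    stands in here as a placeholder criterion.)"""
    means = {l: image[labels == l].mean(axis=0) for l in np.unique(labels)}
    parent = {l: l for l in means}          # union-find forest over regions
    def find(l):
        while parent[l] != l:
            l = parent[l]
        return l
    for a, b in sorted(adjacency):
        ra, rb = find(a), find(b)
        if ra != rb and np.linalg.norm(means[ra] - means[rb]) < thresh:
            parent[rb] = ra                 # union: merge region b into a
    return np.vectorize(find)(labels)       # relabel the map with merged ids
```

Swapping the placeholder distance for a ridge-model compatibility test turns this skeleton into the kind of merging the abstract describes.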
The presented work is conducted in the framework of the ANR-VTT PANsafer project (Towards a safer level crossing). One of the objectives of the project is to develop a video surveillance system able to detect and recognize potentially dangerous situations around level crossings. This paper addresses the problem of camera positioning and orientation for optimal viewing of monitored scenes. In general, adjusting camera position and orientation is done experimentally and empirically by considering different geometrical configurations. This step requires a lot of time to adjust, even approximately, the total and common fields of view of the cameras, especially when constrained environments, like level crossings, are considered. In order to simplify this task and obtain more precise camera positioning and orientation, we propose in this paper a method that automatically optimizes the total and common camera fields of view with respect to the desired scene. Based on descriptive geometry, the method estimates the best camera position and orientation by optimizing the surfaces of 2D domains obtained by projecting/intersecting the field of view of each camera on/with horizontal and vertical planes. The proposed method is evaluated and tested to demonstrate its effectiveness.
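The projection step at the heart of the method can be illustrated by intersecting a camera's view frustum with the ground plane and measuring the resulting 2D area. The sketch below is a rough pinhole approximation under stated assumptions (flat ground at z = 0, whole frustum hitting the ground); it is not the paper's descriptive-geometry construction, and the function name and parameters are illustrative.

```python
import math

def ground_footprint_area(h, tilt, hfov, vfov):
    """Rough sketch: intersect the four frustum edge rays of a camera at
    height `h` (optical axis tilted `tilt` rad below horizontal, fields
    of view `hfov`/`vfov` in radians) with the ground plane z = 0, and
    return the area of the resulting quadrilateral (shoelace formula).
    Assumes the whole frustum actually reaches the ground."""
    ys = []
    for dv in (-vfov / 2, vfov / 2):
        ang = tilt + dv                      # ray angle below horizontal
        ys.append(h / math.tan(ang))         # ground distance hit by that ray
    near, far = min(ys), max(ys)
    def half_w(y):
        r = math.hypot(y, h)                 # slant range to that footprint line
        return r * math.tan(hfov / 2)        # half-width of footprint there
    pts = [(-half_w(near), near), (half_w(near), near),
           (half_w(far), far), (-half_w(far), far)]
    area = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        area += x1 * y2 - x2 * y1            # shoelace accumulation
    return abs(area) / 2.0
```

Maximising such surfaces (and the overlap of several of them) over camera pose is the optimization the abstract refers to.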
This paper focuses on characterizing the GNSS signal reception environment by estimating the percentage of visible sky in real time. In previous work, a new segmentation technique based on a color watershed using an adaptive combination of color and texture information was proposed. This information was represented by two morphological gradients: a classical color gradient and a morphological texture gradient based on mathematical morphology or co-occurrence matrices. The segmented images were then classified into two regions: sky and not-sky. However, this approach has a high computational cost and thus cannot be applied in real time. In this paper, we present this adaptive segmentation method with a texture gradient calculated by a Gabor filter and a region-tracking method based on block-matching estimation. This last step reduces the execution time of the application in order to meet the real-time constraints. Since the application works with fish-eye images, a calibration and rectification method is required before tracking and is also presented in this paper. The calibration method presented is based on the straight-line condition and thus does not use real-world coordinates, which prevents measurement errors. The tracking results are compared to the results of the classification method (which has already been evaluated in previous work). The evaluation shows that the proposed method has a very low error and reduces the execution time tenfold.
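Block-matching estimation of the kind used for the region tracking can be sketched as an exhaustive sum-of-absolute-differences (SAD) search in a small window. This is a generic textbook sketch under assumed parameter names, not the paper's implementation.

```python
import numpy as np

def block_match(prev, curr, top, left, size=8, search=4):
    """Minimal SAD block-matching sketch: find, in `curr`, the
    displacement of the `size`x`size` block at (top, left) of `prev`
    within a +/-`search` pixel window. Returns (dy, dx) of the best
    match (lowest sum of absolute differences)."""
    block = prev[top:top + size, left:left + size].astype(float)
    best, best_dp = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > curr.shape[0] or x + size > curr.shape[1]:
                continue                       # candidate falls outside the frame
            cand = curr[y:y + size, x:x + size].astype(float)
            sad = np.abs(block - cand).sum()   # sum of absolute differences
            if sad < best:
                best, best_dp = sad, (dy, dx)
    return best_dp
```

Tracking a segmented region between frames this way is far cheaper than re-running the full watershed segmentation on every frame, which is what buys the reported speed-up.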
Object detection and tracking is a key function for many applications such as video surveillance, robotics, and intelligent transportation systems. This problem is widely treated in the literature in terms of both sensors (video cameras, laser range finders, radar) and methodologies. This paper proposes a new approach for detecting and tracking objects using stereo vision with linear cameras. After the matching process applied to edge points extracted from the images, the reconstructed points in the scene are clustered using spectral analysis. The obtained clusters are then tracked through their centers of gravity using a Kalman filter and a nearest-neighbor (NN) data association algorithm. The approach is tested and evaluated on real data to demonstrate its effectiveness for obstacle detection and tracking in front of a vehicle. This work is part of a project that aims to develop advanced driving aid systems, supported by the CPER, STIC, and Volubilis programs.
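The NN data-association step can be sketched as a greedy global-nearest-neighbour pairing between Kalman-predicted track positions and new detections, with a gating distance. This is a generic sketch; the function name, the gating value, and the greedy strategy are assumptions, not the paper's exact algorithm.

```python
import numpy as np

def nn_associate(predictions, detections, gate=5.0):
    """Greedy nearest-neighbour data association sketch: pair each
    predicted track position with its closest detection within a
    gating distance `gate`; each detection is used at most once.
    predictions: (T, 2) array; detections: (D, 2) array.
    Returns {track_index: detection_index}."""
    pairs, used = {}, set()
    # enumerate all candidate pairs, then process closest first
    cand = [(np.linalg.norm(p - d), i, j)
            for i, p in enumerate(predictions)
            for j, d in enumerate(detections)]
    for dist, i, j in sorted(cand):
        if dist <= gate and i not in pairs and j not in used:
            pairs[i] = j                      # commit the association
            used.add(j)
    return pairs
```

Each associated detection would then feed the Kalman update of its track; unassociated detections can seed new tracks.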
This paper presents a new image segmentation method based on the combination of texture and color information. The method first computes the morphological color and texture gradients. The color gradient is analyzed taking into account different color spaces. The texture gradient is computed using the luminance component of the HSL color space; the texture gradient procedure relies on a morphological filter and a granulometric and local energy analysis. To overcome the limitations of a linear/barycentric combination, the two morphological gradients are then mixed using a gradient-component fusion strategy (to fuse the three components of the color gradient with the single component of the texture gradient) and an adaptive technique to choose the weighting coefficients. The segmentation process is finally performed by applying the watershed technique using different types of germ images. The segmentation method is evaluated in different object classification applications using the k-means algorithm. The obtained results are compared with other known segmentation methods. The evaluation shows that the proposed method gives better results, especially under difficult image acquisition conditions.
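The contrast with a fixed barycentric combination can be illustrated by a pixelwise adaptive weighting of the two gradient maps. The sketch below is only one plausible adaptive scheme, not the paper's: it assumes the colour gradient has already been reduced to a single channel, whereas the paper also fuses its three components.

```python
import numpy as np

def fuse_gradients(g_color, g_texture, eps=1e-8):
    """Sketch of an adaptive (non-barycentric) fusion of a colour
    gradient map and a texture gradient map: each gradient's weight at
    a pixel follows its local relative strength, so textured areas lean
    on the texture gradient and flat ones on colour. A fixed barycentric
    mix would instead use one global weight for the whole image."""
    wc = g_color / (g_color + g_texture + eps)   # pixelwise adaptive weight
    wt = 1.0 - wc
    return wc * g_color + wt * g_texture         # fused gradient map
```

The fused map would then drive the watershed from the chosen germ (seed) image.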