Correlation filter based tracking algorithms have recently shown favorable performance in terms of high frame rates. However, a significant problem is that context information is not fully used, which can result in model drift under challenging situations such as fast motion and occlusion. In this paper, we propose an adaptive context-aware correlation filter framework that improves discriminative power and detects the target within a large neighborhood. Firstly, we construct a context-aware correlation filter model and propose a peak extraction method to select context patches adaptively, which can be regarded as hard negative sample mining. Secondly, a simple yet effective multi-region detection strategy is proposed to improve anti-occlusion ability and prevent model drift. Thirdly, we adopt a high-confidence model update method to avoid model corruption. We integrate the proposed framework with an existing DCF tracker; experimental results show that the framework improves accuracy by 9.1% and the success rate by 7.1%.
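As a minimal sketch of the core idea (not the authors' exact formulation), a context-aware correlation filter can be trained in closed form in the Fourier domain, regressing the context patches toward a zero response so they act as hard negatives. The function names and the regularization weights `lam1` and `lam2` below are illustrative assumptions:

```python
import numpy as np

def train_context_aware_filter(target, contexts, y, lam1=1e-2, lam2=25.0):
    """Closed-form single-channel context-aware correlation filter in the
    Fourier domain (a sketch). `target` is the target patch, `contexts` a
    list of same-sized context patches, `y` the desired response map."""
    X = np.fft.fft2(target)
    denom = np.conj(X) * X + lam1
    for c in contexts:                    # suppress response on context patches
        C = np.fft.fft2(c)
        denom += lam2 * np.conj(C) * C
    return (np.conj(X) * np.fft.fft2(y)) / denom

def detect(W, patch):
    """Correlation response map for a search patch; the peak location
    gives the estimated translation of the target."""
    return np.real(np.fft.ifft2(W * np.fft.fft2(patch)))
```

With no context patches this reduces to a standard single-channel DCF; adding context terms only enlarges the per-frequency denominator, which damps responses that resemble the surrounding distractors.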
Recently, complex-valued convolutional neural networks (CV-CNNs) have been used for the classification of polarimetric synthetic aperture radar (PolSAR) images and have shown performance superior to most traditional algorithms. However, they usually yield unreliable results for pixels lying within heterogeneous regions or edge areas. To solve this problem, in this paper an edge reassigning scheme based on a Markov random field (MRF) is combined with the CV-CNN. The scheme employs both the polarimetric statistical property and label context information. Experiments performed on a benchmark PolSAR image of Flevoland demonstrate the superior performance of the proposed algorithm.
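To illustrate how label context information can reassign unreliable pixel labels (a generic MRF smoothing sketch, not the paper's specific scheme), one can run iterated conditional modes over per-pixel class costs, e.g. negative log-probabilities from a classifier, with a Potts prior over the 4-neighborhood. The `beta` weight is an illustrative assumption:

```python
import numpy as np

def icm_relabel(unary, labels, beta=1.0, iters=5):
    """Iterated conditional modes for a Potts MRF (sketch). `unary` has
    shape (H, W, K): per-pixel cost of each of K classes. `labels` is the
    initial (H, W) label map; neighbors with a different label add `beta`
    to a class's cost, so isolated noisy labels get reassigned."""
    H, W, K = unary.shape
    lab = labels.copy()
    for _ in range(iters):
        for i in range(H):
            for j in range(W):
                cost = unary[i, j].copy()
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        cost += beta * (np.arange(K) != lab[ni, nj])
                lab[i, j] = int(np.argmin(cost))
    return lab
```

A single isolated pixel whose classifier output weakly favors the wrong class is outvoted by its four neighbors and flipped, which is exactly the behavior wanted near class edges.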
Most visual tracking algorithms are very sensitive to the initial bounding-box of the tracked object, yet how to obtain a precise bounding-box in the first frame needs further research. In this paper, we propose an automatic algorithm that refines a roughly selected bounding-box of the tracked object in the first frame. Based on the rough location and scale information, the proposed algorithm exploits maximal-similarity region merging to classify superpixel regions as foreground or background. To improve the segmentation, a feature clustering strategy is exploited to obtain reliable foreground and background labels, and a color histogram in HSI space is used to describe each superpixel. The final refined bounding-box is the minimal enclosing rectangle of the foreground region. Extensive experiments indicate that the proposed algorithm can reliably refine the initial bounding-box using only first-frame information and distinctly improves the robustness of tracking algorithms.
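The maximal-similarity assignment step can be sketched as follows (a simplified illustration, assuming histogram similarity is measured with the Bhattacharyya coefficient; function names are hypothetical): each superpixel's normalized HSI histogram is compared against foreground and background models and assigned to the more similar one.

```python
import numpy as np

def bhattacharyya(h1, h2):
    """Similarity of two normalized histograms; 1.0 means identical."""
    return float(np.sum(np.sqrt(h1 * h2)))

def assign_regions(region_hists, fg_hist, bg_hist):
    """Maximal-similarity assignment (sketch): each superpixel joins
    whichever of the foreground/background models it resembles most."""
    return ['fg' if bhattacharyya(h, fg_hist) >= bhattacharyya(h, bg_hist)
            else 'bg'
            for h in region_hists]
```

The refined bounding-box would then be the minimal enclosing rectangle of the pixels belonging to superpixels labeled `'fg'`.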
Aiming at the problem of complicated dynamic scenes in visual target tracking, a multi-feature fusion tracking algorithm based on the covariance matrix is proposed to improve robustness. Within the framework of a quantum genetic algorithm, the region covariance descriptor is used to fuse color, edge, and texture features, and a fast covariance intersection algorithm is used to update the model. The low dimension of the region covariance descriptor, the fast convergence and strong global optimization ability of the quantum genetic algorithm, and the speed of the fast covariance intersection algorithm together improve the computational efficiency of the fusion, matching, and updating processes, so that the algorithm achieves fast and effective multi-feature fusion tracking. Experiments show that the proposed algorithm not only achieves fast and robust tracking but also effectively handles interference from occlusion, rotation, deformation, motion blur, and so on.
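The region covariance descriptor itself is compact and easy to sketch: stack the per-pixel feature vectors of a region (here assumed to be color, edge, and texture responses) and take their covariance matrix; candidate regions are then matched by a distance on covariance matrices, commonly the sum of squared log generalized eigenvalues. This is a generic sketch of the descriptor, not the paper's full fusion pipeline:

```python
import numpy as np

def region_covariance(features):
    """Region covariance descriptor (sketch). `features` is an (N, d)
    stack of per-pixel feature vectors (e.g. color, edge, texture
    responses); the descriptor is their d x d covariance matrix."""
    return np.cov(features, rowvar=False)

def covariance_distance(C1, C2):
    """Dissimilarity of two SPD covariance descriptors: square root of
    the sum of squared log generalized eigenvalues of (C1, C2)."""
    vals = np.real(np.linalg.eigvals(np.linalg.solve(C2, C1)))
    return float(np.sqrt(np.sum(np.log(vals) ** 2)))
```

Because the descriptor is only d x d for d features, matching many candidate regions per frame stays cheap, which is one reason covariance descriptors suit real-time fusion tracking.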