With the development of feature extraction techniques, an image can be represented by multiple heterogeneous, high-dimensional features extracted from different views. These features reflect different characteristics of the same object and carry both compatible and complementary information, so fusing them can improve performance in image processing applications. However, most dimensionality reduction methods fall short when applied to such multi-view features. How to construct a unified low-dimensional embedding subspace that exploits the useful information in multi-view features therefore remains an important open problem. We propose a fusion dimension reduction method named tensor dispersion-based multi-view feature embedding (TDMvFE). TDMvFE reconstructs a feature subspace for each object from its k nearest neighbors, which preserves the neighborhood structure of the original manifold in the low-dimensional mapping space. The method fully exploits the channel correlations and spatial complementarities of the multi-view features through a tensor dispersion analysis model. Furthermore, it formulates an optimization model and derives an iterative procedure that generates the unified low-dimensional embedding. Evaluations on image classification and retrieval demonstrate the effectiveness of the proposed method for multi-view feature fusion and dimension reduction.
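The neighborhood-reconstruction step described above is in the spirit of locally linear embedding: each point is expressed as a weighted combination of its k nearest neighbors, and those weights are preserved in the low-dimensional map. The following is a minimal single-view sketch of that idea only, not the authors' TDMvFE; the function name `knn_embedding` and all parameters are illustrative assumptions.

```python
import numpy as np

def knn_embedding(X, k=5, out_dim=2, reg=1e-3):
    """Neighborhood-preserving embedding sketch (LLE-style, single view).

    X: (n, d) data matrix; returns an (n, out_dim) embedding in which each
    point keeps the reconstruction weights of its k nearest neighbors.
    """
    n = X.shape[0]
    # Pairwise squared distances; exclude self-matches before the kNN search.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    nbrs = np.argsort(d2, axis=1)[:, :k]

    # Solve for reconstruction weights of each point from its neighbors.
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[nbrs[i]] - X[i]                # neighbors centered at x_i
        G = Z @ Z.T                          # local Gram matrix
        G += reg * np.trace(G) * np.eye(k)   # regularize for stability
        w = np.linalg.solve(G, np.ones(k))
        W[i, nbrs[i]] = w / w.sum()          # weights sum to one

    # Embedding cost sum_i ||y_i - sum_j W_ij y_j||^2 = tr(Y^T M Y),
    # minimized by the bottom eigenvectors of M.
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    _, vecs = np.linalg.eigh(M)
    # Drop the constant eigenvector associated with the ~0 eigenvalue.
    return vecs[:, 1:out_dim + 1]
```

TDMvFE extends this kind of single-view construction by coupling the views through a tensor dispersion model and solving for one shared embedding iteratively.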
Keywords: image retrieval, dimension reduction, feature extraction, image processing, image classification, optimization (mathematics), matrices