Paper
Multiresolution training of Kohonen neural networks
17 September 2007
Abstract
This paper analyses a trade-off between convergence rate and distortion obtained through multi-resolution training of a Kohonen Competitive Neural Network. Empirical results show that a multi-resolution approach can improve the training stage of several unsupervised pattern classification algorithms, including K-means clustering, LBG vector quantization, and competitive neural networks. While previous research concentrated on the convergence rate of on-line unsupervised training, new results reported in this paper show that the multi-resolution approach can be used to improve training quality (measured as a derivative of the rate-distortion function) at the expense of convergence speed. The probability of achieving a desired point in the quality/convergence-rate space of Kohonen Competitive Neural Networks (KCNN) is evaluated using a detailed set of Monte Carlo experiments. It is shown that multi-resolution training can reduce the distortion by a factor of 1.5 to 6 while maintaining the convergence rate of traditional KCNN; alternatively, the convergence rate can be improved without loss of quality. The experiments include a controlled set of synthetic data as well as image data. Experimental results are reported and evaluated.
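As a rough illustration of the kind of procedure the abstract describes, the sketch below trains a small Kohonen competitive network (winner-take-all codebook adaptation) first on a coarse subsample of the data and then refines it on the full-resolution data, reporting mean squared quantization error as a distortion measure. The subsampling factor, learning-rate schedule, codebook size, and distortion metric are illustrative assumptions, not the paper's exact method.

```python
# Minimal sketch of multi-resolution competitive learning (assumed details).
import numpy as np

def train_kcnn(data, codebook, epochs, lr0):
    """Plain winner-take-all competitive learning with a decaying rate."""
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)          # linearly decaying learning rate
        for x in np.random.permutation(data):      # shuffle training vectors
            winner = np.argmin(np.sum((codebook - x) ** 2, axis=1))
            codebook[winner] += lr * (x - codebook[winner])
    return codebook

def distortion(data, codebook):
    """Mean squared quantization error of the codebook on the data."""
    d = np.sum((data[:, None, :] - codebook[None, :, :]) ** 2, axis=2)
    return np.mean(np.min(d, axis=1))

rng = np.random.default_rng(0)
data = rng.normal(size=(4000, 3))                  # stand-in for RGB pixel vectors
codebook = rng.normal(size=(16, 3))                # 16 code vectors (assumed size)

# Multi-resolution pass: coarse subset first, then full-resolution refinement.
coarse = data[::8]                                  # assumed 1:8 subsampling
codebook = train_kcnn(coarse, codebook, epochs=20, lr0=0.5)
codebook = train_kcnn(data, codebook, epochs=5, lr0=0.05)

print("final distortion:", distortion(data, codebook))
```

The coarse pass lets the code vectors settle near the bulk of the data cheaply; the short full-resolution pass then fine-tunes them, which is the convergence-rate/distortion trade-off the paper quantifies.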
© (2007) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Dan E. Tamir "Multiresolution training of Kohonen neural networks", Proc. SPIE 6700, Mathematics of Data/Image Pattern Recognition, Compression, Coding, and Encryption X, with Applications, 67000B (17 September 2007); https://doi.org/10.1117/12.735394
KEYWORDS
Distortion
Quantization
Monte Carlo methods
Neural networks
Quality measurement
RGB color model
Image compression