Paper
6 April 1995
Optimal robustness in noniterative learning
Abstract
If the M given training patterns are not extremely similar, the analog N-vectors representing them are generally separable in N-space. A one-layer binary perceptron containing P neurons (P ≥ log2 M) is then generally sufficient to do the pattern recognition job. The connection matrix between the input (linear) layer and the neuron layer can be calculated in a noniterative manner. Real-time pattern recognition experiments implementing this theoretical result were reported at this and other national conferences last year. These experiments demonstrate that the noniterative training is very fast (it can be done in real time) and that the recognition of untrained patterns is very robust and accurate. The present paper concentrates on the theoretical foundation of this noniteratively trained perceptron. The theory starts from an N-dimensional Euclidean-geometry approach. An optimally robust learning scheme is then derived. The robustness and the speed of this optimal learning scheme are compared with those of conventional iterative learning schemes.
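The abstract's core idea, a one-layer perceptron whose connection matrix is computed in one step rather than by iteration, can be illustrated with a pseudoinverse-based sketch. This is an assumption-laden reconstruction, not the author's actual derivation: the pattern count M, dimension N, the ±1 binary target coding, and the least-squares solve are all illustrative choices.

```python
import numpy as np

# Hedged sketch (not the paper's exact scheme): noniterative training of a
# one-layer binary perceptron. M analog training patterns live in N-space;
# P = ceil(log2 M) binary output neurons suffice to give each pattern a
# distinct P-bit code. The connection matrix W is found in a single
# least-squares (pseudoinverse) step -- no iterative weight updates.

rng = np.random.default_rng(0)
M, N = 8, 16                      # 8 patterns, 16-dimensional analog vectors
P = int(np.ceil(np.log2(M)))      # 3 binary neurons are enough for 8 codes

X = rng.normal(size=(M, N))       # training patterns as rows; generically separable
# Assign pattern m the P-bit binary code of m, written in +/-1 form:
T = np.array([[1.0 if (m >> b) & 1 else -1.0 for b in range(P)]
              for m in range(M)])

W = np.linalg.pinv(X) @ T         # one-shot "training" of the connection matrix

def classify(x):
    """Threshold the P analog neuron outputs to recover a pattern's code."""
    bits = (x @ W > 0).astype(int)
    return int(sum(bit << b for b, bit in enumerate(bits)))

# All trained patterns are recovered exactly (X @ W reproduces T when the
# rows of X are linearly independent), and small perturbations of a trained
# pattern are tolerated -- the robustness the abstract reports.
assert all(classify(X[m]) == m for m in range(M))
noisy = X[3] + 0.05 * rng.normal(size=N)
print(classify(noisy))
```

With M < N and generic analog patterns, the rows of X are linearly independent, so the minimum-norm solution reproduces the targets exactly; the thresholding then gives a margin against noise, which is the robustness property the paper analyzes geometrically.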
© (1995) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Chia-Lun John Hu "Optimal robustness in noniterative learning", Proc. SPIE 2492, Applications and Science of Artificial Neural Networks, (6 April 1995); https://doi.org/10.1117/12.205178
KEYWORDS: Pattern recognition, Radon, Binary data, Neurons, Analog electronics, Lithium, Machine learning