An iterative learning algorithm for feedforward neural networks with random weights

Feilong Cao, Dianhui Wang, Houying Zhu, Yuguang Wang

Research output: Contribution to journal › Article › peer-review

42 Citations (Scopus)

Abstract

Feedforward neural networks with random weights (FNNRWs), as random basis function approximators, have received considerable attention due to their potential for dealing with large-scale datasets. The special characteristic of such a learner model lies in its weight specification: the input weights and biases are randomly assigned, and the output weights can be evaluated analytically through the Moore-Penrose generalized inverse of the hidden-layer output matrix. When the number of data samples becomes very large, however, this learning scheme is infeasible. This paper develops an iterative solution for training FNNRWs on large-scale datasets, where a regularization model is employed to produce a learner model with improved generalization capability. Theoretical results on the convergence and stability of the proposed learning algorithm are established. Experiments on several UCI benchmark datasets and a face recognition dataset are carried out, and the results and comparisons indicate the applicability and effectiveness of the proposed learning algorithm on large-scale datasets.
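
The scheme described in the abstract can be sketched in a few lines of NumPy. The batch solver below implements the analytic Moore-Penrose solution mentioned in the abstract; the iterative solver is only an illustrative regularized (ridge) mini-batch variant showing why iteration helps when the full hidden output matrix is too large to form and invert. The function names, the sigmoid activation, the penalty `lam`, and the learning rate `lr` are assumptions for illustration, not the paper's exact algorithm.

```python
# Minimal FNNRW sketch (single hidden layer, sigmoid activation assumed).
# fit_batch: analytic Moore-Penrose solution from the abstract.
# fit_iterative: a generic regularized mini-batch scheme, NOT necessarily
# the paper's algorithm; it avoids storing/inverting the full H matrix.
import numpy as np

rng = np.random.default_rng(0)

def hidden_output(X, W, b):
    """Hidden-layer output matrix H = sigmoid(X @ W + b)."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def fit_batch(X, y, n_hidden=50):
    """Analytic solution: beta = pinv(H) @ y."""
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    H = hidden_output(X, W, b)
    beta = np.linalg.pinv(H) @ y                     # output weights
    return W, b, beta

def fit_iterative(X, y, n_hidden=50, lam=1e-2, lr=1e-3,
                  epochs=100, batch=256):
    """Minimize ||H @ beta - y||^2 + lam * ||beta||^2 by mini-batch
    gradient descent, touching only one block of H at a time."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    beta = np.zeros(n_hidden)
    n = X.shape[0]
    for _ in range(epochs):
        idx = rng.permutation(n)
        for start in range(0, n, batch):
            sl = idx[start:start + batch]
            H = hidden_output(X[sl], W, b)
            grad = H.T @ (H @ beta - y[sl]) / len(sl) + lam * beta
            beta -= lr * grad
    return W, b, beta

# Toy usage: regression on a noisy sine wave.
X = rng.uniform(-3, 3, size=(2000, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(2000)
W, b, beta = fit_iterative(X, y)
pred = hidden_output(X, W, b) @ beta
print("train MSE:", np.mean((pred - y) ** 2))
```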

Original language: English
Pages (from-to): 546-557
Number of pages: 12
Journal: Information Sciences
Volume: 328
DOIs
Publication status: Published - 20 Jan 2016
Externally published: Yes

Keywords

  • Neural networks with random weights
  • Learning algorithm
  • Stability
  • Convergence
