An unsupervised parameter learning model for RVFL neural network

Yongshan Zhang, Jia Wu*, Zhihua Cai, Bo Du, Philip S. Yu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

71 Citations (Scopus)

Abstract

With its direct input–output connections, a random vector functional link (RVFL) network is a simple and effective learning algorithm for single-hidden layer feedforward neural networks (SLFNs). RVFL is a universal approximator for continuous functions on compact sets and has a fast learning property. Owing to its simplicity and effectiveness, RVFL has attracted significant interest in numerous real-world applications. In practice, however, the performance of RVFL is often limited by its randomly assigned network parameters. In this paper, we propose a novel unsupervised network parameter learning method for RVFL, named the sparse pre-trained random vector functional link (SP-RVFL for short) network. The proposed SP-RVFL uses a sparse autoencoder with ℓ1-norm regularization to adaptively learn superior network parameters for specific learning tasks. By doing so, the learned network parameters in SP-RVFL are embedded with valuable information from the input data, which alleviates the issue of randomly generated parameters and improves algorithmic performance. Experiments and comparisons on 16 diverse benchmarks from different domains confirm the effectiveness of the proposed SP-RVFL. The corresponding results also demonstrate that RVFL outperforms the extreme learning machine (ELM).
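To make the architecture described in the abstract concrete, the following is a minimal NumPy sketch of a plain RVFL network, not the authors' implementation. It assumes a sigmoid hidden activation and ridge-regularized least squares for the output weights; in SP-RVFL the randomly drawn hidden weights `W` and biases `b` would instead be replaced by parameters pre-trained with an ℓ1-regularized sparse autoencoder.

```python
# Hypothetical RVFL sketch (illustrative only): random hidden layer plus
# direct input-output links, with output weights solved in closed form.
import numpy as np

def rvfl_fit(X, Y, n_hidden=100, reg=1e-3, seed=0):
    """Fit an RVFL net. X: (n, d) inputs, Y: (n, c) one-hot targets."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.uniform(-1.0, 1.0, size=(d, n_hidden))   # random input-to-hidden weights
    b = rng.uniform(-1.0, 1.0, size=n_hidden)        # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # sigmoid hidden activations
    D = np.hstack([X, H])                            # direct links: [inputs | hidden]
    # Ridge-regularized least squares for the output weights
    beta = np.linalg.solve(D.T @ D + reg * np.eye(D.shape[1]), D.T @ Y)
    return W, b, beta

def rvfl_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return np.hstack([X, H]) @ beta                  # class scores; take argmax

# Example usage (random data, for illustration only):
# X, Y = np.random.rand(200, 10), np.eye(3)[np.random.randint(0, 3, 200)]
# W, b, beta = rvfl_fit(X, Y)
# labels = rvfl_predict(X, W, b, beta).argmax(axis=1)
```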

Original language: English
Pages (from-to): 85-97
Number of pages: 13
Journal: Neural Networks
Volume: 112
DOIs
Publication status: Published - 1 Apr 2019

Keywords

  • Autoencoder
  • Classification applications
  • Pre-trained parameters
  • Random vector functional link network
  • Randomized feedforward neural networks
  • ℓ1-norm regularization
