Generalized hidden-mapping minimax probability machine for the training and reliability learning of several classical intelligent models

Zhaohong Deng*, Junyong Chen, Te Zhang, Longbing Cao, Shitong Wang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

5 Citations (Scopus)

Abstract

Minimax Probability Machine (MPM) is a binary classifier that minimizes the upper bound of the misclassification probability. This upper bound can be used as an explicit indicator of the classification model's reliability and thus makes the model more transparent. However, existing work is restricted to linear models or to the corresponding nonlinear models obtained via the kernel trick. To relax these constraints, we propose the Generalized Hidden-Mapping Minimax Probability Machine (GHM-MPM). GHM-MPM is a generalized MPM: it can train many classical intelligent models, such as feedforward neural networks, fuzzy logic systems, and linear and kernelized linear models, for classification tasks while simultaneously learning their reliability. Since GHM-MPM, like the classical MPM, is originally formulated for binary classification, it is further extended to multi-class classification by using the reliability indices obtained from the binary classifiers of each pair of classes. Experimental results show that the models trained by GHM-MPM are more transparent and reliable than those trained by classical methods.
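For context, the reliability indicator mentioned in the abstract can be made concrete with the classical (linear) MPM formulation that GHM-MPM generalizes. The sketch below uses standard MPM notation from Lanckriet et al. (class means μ±, covariances Σ±, margin parameter κ) rather than symbols from this paper, so it should be read as background, not as the GHM-MPM model itself. Given the two class-conditional means and covariances, MPM seeks a hyperplane w^T x = b that maximizes κ:

\[
% Classical linear MPM (Lanckriet et al.); background sketch, not the GHM-MPM formulation.
\begin{aligned}
\max_{\kappa,\;\mathbf{w}\ne\mathbf{0},\;b}\quad & \kappa \\
\text{s.t.}\quad & \mathbf{w}^{\top}\boldsymbol{\mu}_{+} - b \;\ge\; \kappa\sqrt{\mathbf{w}^{\top}\Sigma_{+}\mathbf{w}}, \\
& b - \mathbf{w}^{\top}\boldsymbol{\mu}_{-} \;\ge\; \kappa\sqrt{\mathbf{w}^{\top}\Sigma_{-}\mathbf{w}}.
\end{aligned}
\]

By a multivariate Chebyshev-type bound, the worst-case misclassification probability of either class is then at most 1/(1 + κ²); equivalently, κ²/(1 + κ²) lower-bounds the worst-case accuracy. This explicit bound is the reliability index the abstract refers to, and GHM-MPM extends it from linear (or kernelized) decision functions to models expressed through a hidden mapping, such as feedforward neural networks and fuzzy logic systems.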

Original language: English
Pages (from-to): 302-319
Number of pages: 18
Journal: Information Sciences
Volume: 436-437
DOIs:
Publication status: Published - Apr 2018
Externally published: Yes

Keywords

  • Classification
  • Fuzzy logic systems
  • Kernel tricks
  • Minimax probability
  • Neural networks
  • Reliability learning
