TY - JOUR
T1 - Generalized hidden-mapping minimax probability machine for the training and reliability learning of several classical intelligent models
AU - Deng, Zhaohong
AU - Chen, Junyong
AU - Zhang, Te
AU - Cao, Longbing
AU - Wang, Shitong
PY - 2018/4
Y1 - 2018/4
N2 - The Minimax Probability Machine (MPM) is a binary classifier that minimizes the upper bound of the misclassification probability. This upper bound serves as an explicit indicator of the reliability of the classification model and thus makes the model more transparent. However, existing related work is restricted to linear models, or to their nonlinear counterparts obtained via the kernel trick. To relax these constraints, we propose the Generalized Hidden-Mapping Minimax Probability Machine (GHM-MPM), a generalization of the MPM. It can train many classical intelligent models for classification tasks, such as feedforward neural networks, fuzzy logic systems, and linear and kernelized linear models, while simultaneously realizing reliability learning for these models. Since the GHM-MPM, like the classical MPM, was originally developed for binary classification only, it is further extended to multi-class classification by using the reliability indices obtained from the binary classifiers of any two classes. The experimental results show that the models trained by the GHM-MPM are more transparent and reliable than those trained by classical methods.
KW - Classification
KW - Fuzzy logic systems
KW - Kernel tricks
KW - Minimax probability
KW - Neural networks
KW - Reliability learning
UR - http://www.scopus.com/inward/record.url?scp=85041467606&partnerID=8YFLogxK
DO - 10.1016/j.ins.2018.01.034
M3 - Article
SN - 0020-0255
VL - 436-437
SP - 302
EP - 319
JO - Information Sciences
JF - Information Sciences
ER -