Abstract
Tree-based models and deep neural networks are two effective schools of classification methods in machine learning. While tree-based models are robust regardless of the data domain, deep neural networks have advantages in handling high-dimensional data. Adding a differentiable neural decision forest to a neural network can help exploit the benefits of both models. Meanwhile, traditional decision trees have diverged into a bagging variant (i.e., random forest) and a boosting variant (i.e., gradient boosted decision tree). In this work, we aim to harness the advantages of both bagging and boosting by applying gradient boosting to a neural decision forest. We propose a gradient boosting module that learns the residual with a neural decision forest and treats the residual as part of the final prediction. In addition, we design a structure that learns the parameters of the neural decision forest and the gradient boosting module in contiguous steps, and that can be extended to incorporate multiple gradient boosting modules in an end-to-end manner. Extensive experiments on several public datasets demonstrate the competitive performance and efficiency of our model against a series of baseline methods across various machine learning tasks.
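To make the idea concrete, the sketch below is a rough PyTorch approximation of the architecture described in the abstract, not the authors' implementation: a small differentiable forest whose sigmoid decision nodes route each sample to leaves, plus additional boosting forests whose outputs are added to the base prediction as residual estimates. All names, sizes, the regression setting, and the joint end-to-end training loop are our own assumptions for illustration.

```python
import torch
import torch.nn as nn

class NeuralDecisionForest(nn.Module):
    """Tiny differentiable forest: sigmoid decision nodes route inputs to
    leaves; each tree's prediction is the expected leaf value (illustrative)."""
    def __init__(self, in_dim, n_trees=4, depth=3):
        super().__init__()
        self.n_trees, self.depth = n_trees, depth
        n_inner, n_leaf = 2 ** depth - 1, 2 ** depth
        # One linear decision function per inner node of every tree.
        self.decisions = nn.Linear(in_dim, n_trees * n_inner)
        self.leaves = nn.Parameter(torch.zeros(n_trees, n_leaf))

    def forward(self, x):
        B = x.size(0)
        d = torch.sigmoid(self.decisions(x)).view(B, self.n_trees, -1)  # P(go left)
        mu = x.new_ones(B, self.n_trees, 1)  # routing probability mass
        idx = 0
        for level in range(self.depth):
            n_nodes = 2 ** level
            decision = d[:, :, idx:idx + n_nodes]               # (B, T, n_nodes)
            # Split each node's mass into its left/right children.
            mu = torch.stack([mu * decision, mu * (1 - decision)], dim=-1)
            mu = mu.view(B, self.n_trees, -1)
            idx += n_nodes
        # Expected leaf value per tree, averaged over the forest.
        return (mu * self.leaves).sum(-1).mean(-1)              # (B,)

class GBNDF(nn.Module):
    """Base forest plus K boosting forests; each booster adds a residual estimate."""
    def __init__(self, in_dim, n_boost=2):
        super().__init__()
        self.base = NeuralDecisionForest(in_dim)
        self.boosters = nn.ModuleList(NeuralDecisionForest(in_dim) for _ in range(n_boost))

    def forward(self, x):
        pred = self.base(x)
        for booster in self.boosters:
            pred = pred + booster(x)   # residual correction from each boosting module
        return pred

# Usage: fit on toy regression data with a plain MSE loss, end to end.
if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.randn(256, 10)
    y = 2.0 * x[:, 0] - x[:, 1] + 0.1 * torch.randn(256)
    model = GBNDF(in_dim=10)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for step in range(200):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
```

The paper trains the neural decision forest and the boosting module in contiguous steps; the single joint optimizer above is only a simplification of that schedule.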
| Original language | English |
|---|---|
| Pages (from-to) | 330-342 |
| Number of pages | 13 |
| Journal | IEEE Transactions on Services Computing |
| Volume | 16 |
| Issue number | 1 |
| Early online date | 9 Dec 2021 |
| DOIs | |
| Publication status | Published - 2023 |