Learning averaged one-dependence estimators by attribute weighting

Jia Wu, Zhihua Cai

Research output: Contribution to journal › Article › peer-review

10 Citations (Scopus)


Averaged One-Dependence Estimators (AODE), one of the most effective improvements of Naive Bayes (NB), is a probabilistic classification learning technique. It relaxes the attribute independence assumption of naive Bayes by averaging over all one-dependence estimators. Researchers have proposed many effective methods to improve the performance of AODE, such as attribute weighting, backwards sequential elimination, and lazy elimination. In this paper, our research focuses on attribute weighting. We first present a new filter method for setting attribute weights for use with AODE, and we explore two attribute weighting methods, the gain ratio method and the correlation-based feature selection method, within our new attribute-weighted AODE model. We then present an improved algorithm based on the new weighted model, called Decision Tree-based Attribute Weighted Averaged One-Dependence Estimator (DTWAODE). In DTWAODE, the weight of an attribute is set according to its depth in a decision tree built on the training samples. We experimentally tested DTWAODE in the Weka system on all 36 standard UCI data sets. The experimental results show that our new attribute-weighted AODE model performs effectively, and that our new algorithm based on this model outperforms the other attribute-weighted AODE algorithms used for comparison.
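To make the averaging concrete, the following is a minimal sketch of attribute-weighted AODE over discrete attributes. It assumes weights enter as exponents on each child attribute's conditional probability, which is one common weighting scheme; the paper's exact formulation (and DTWAODE's depth-derived weights, e.g. something like w_j = 1/sqrt(depth_j)) may differ. All function names here are illustrative, not from the paper.

```python
from collections import defaultdict
import math

def train_counts(X, y):
    """Collect the joint counts AODE needs: N(y), N(y, x_i) and
    N(y, x_i, x_j) over discrete attribute values."""
    n_attr = len(X[0])
    c_y = defaultdict(int)    # N(y)
    c_yi = defaultdict(int)   # N(y, x_i = v)
    c_yij = defaultdict(int)  # N(y, x_i = v, x_j = u)
    for xs, cls in zip(X, y):
        c_y[cls] += 1
        for i, v in enumerate(xs):
            c_yi[(cls, i, v)] += 1
            for j, u in enumerate(xs):
                if j != i:
                    c_yij[(cls, i, v, j, u)] += 1
    return n_attr, c_y, c_yi, c_yij

def waode_predict(x, model, classes, weights, n, m=1.0):
    """Score each class as the average over super-parents i of
    P(y, x_i) * prod_{j != i} P(x_j | y, x_i)^w_j, with a simple
    m-estimate for smoothing, and return the highest-scoring class."""
    n_attr, c_y, c_yi, c_yij = model
    scores = {}
    for cls in classes:
        total = 0.0
        for i, v in enumerate(x):
            # smoothed joint probability P(y, x_i)
            logp = math.log((c_yi[(cls, i, v)] + m) / (n + 2 * m))
            for j, u in enumerate(x):
                if j == i:
                    continue
                num = c_yij[(cls, i, v, j, u)] + m
                den = c_yi[(cls, i, v)] + 2 * m
                # weighted child term P(x_j | y, x_i)^w_j in log space
                logp += weights[j] * math.log(num / den)
            total += math.exp(logp)
        scores[cls] = total / n_attr
    return max(scores, key=scores.get)
```

With uniform weights this reduces to plain AODE; a filter method such as gain ratio, or the decision-tree depths of DTWAODE, would simply supply a different `weights` vector.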
Original language: English
Pages (from-to): 1063-1073
Number of pages: 11
Journal: Journal of Information and Computational Science
Issue number: 7
Publication status: Published - 2011
Externally published: Yes


Keywords:
  • Naive Bayes
  • AODE
  • decision tree
  • attribute weighting
  • classification accuracy


