AODE

Averaged one-dependence estimators (AODE) is a probabilistic classification learning technique. It was developed to address the attribute-independence problem of the popular naive Bayes classifier. It frequently develops substantially more accurate classifiers than naive Bayes at the cost of a modest increase in the amount of computation (Webb, G. I., J. Boughton, and Z. Wang (2005), "Not So Naive Bayes: Aggregating One-Dependence Estimators", ''Machine Learning'', 58(1), 5–24).


The AODE classifier

AODE seeks to estimate the probability of each class ''y'' given a specified set of features ''x''1, ... ''x''n, P(''y'' | ''x''1, ... ''x''n). To do so it uses the formula :\hat{P}(y\mid x_1, \ldots x_n)=\frac{\sum_{i:1\leq i\leq n \wedge F(x_i)\geq m}\hat{P}(y, x_i)\prod_{j=1}^{n}\hat{P}(x_j\mid y, x_i)}{\sum_{y'\in Y}\sum_{i:1\leq i\leq n \wedge F(x_i)\geq m}\hat{P}(y', x_i)\prod_{j=1}^{n}\hat{P}(x_j\mid y', x_i)} where \hat{P}(\cdot) denotes an estimate of P(\cdot), F(\cdot) is the frequency with which the argument appears in the sample data and ''m'' is a user-specified minimum frequency with which an attribute value must appear in order to be used as a parent in the outer summation. In recent practice ''m'' is usually set at 1.
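The estimator above can be sketched directly from frequency counts. The following is a minimal illustration for discrete features, not a reference implementation: the class and method names are invented for this example, and it omits the probability smoothing (e.g. Laplace or m-estimate correction) that practical implementations apply to the frequency ratios.

```python
from collections import defaultdict

class AODE:
    """Minimal sketch of an averaged one-dependence estimator
    for discrete features (names and layout are illustrative)."""

    def __init__(self, m=1):
        self.m = m                      # minimum F(x_i) for x_i to act as a parent
        self.n_examples = 0
        self.classes = set()
        self.attr = defaultdict(int)    # F(x_i), keyed by (i, value)
        self.pair = defaultdict(int)    # F(y, x_i), keyed by (y, i, value)
        self.triple = defaultdict(int)  # F(y, x_i, x_j), keyed by (y, i, vi, j, vj)

    def fit(self, X, ys):
        # Training is a single counting pass over the data: O(l * n^2).
        for x, y in zip(X, ys):
            self.n_examples += 1
            self.classes.add(y)
            for i, vi in enumerate(x):
                self.attr[(i, vi)] += 1
                self.pair[(y, i, vi)] += 1
                for j, vj in enumerate(x):
                    self.triple[(y, i, vi, j, vj)] += 1
        return self

    def _score(self, y, x):
        # Sum over qualifying parents i of P(y, x_i) * prod_j P(x_j | y, x_i).
        total = 0.0
        for i, vi in enumerate(x):
            if self.attr[(i, vi)] < self.m:
                continue  # parent value too rare: excluded from the outer sum
            denom = self.pair[(y, i, vi)]
            if denom == 0:
                continue
            est = denom / self.n_examples
            for j, vj in enumerate(x):
                est *= self.triple[(y, i, vi, j, vj)] / denom
            total += est
        return total

    def predict_proba(self, x):
        # Normalizing the scores over the classes gives the posterior estimate.
        scores = {y: self._score(y, x) for y in self.classes}
        z = sum(scores.values())
        return {y: s / z for y, s in scores.items()} if z else scores
```

Because training only accumulates counts, `fit` never needs to revisit earlier examples, which is the basis of the incremental-learning property discussed below.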


Derivation of the AODE classifier

We seek to estimate P(''y'' | ''x''1, ... ''x''n). By the definition of conditional probability :P(y\mid x_1, \ldots x_n)=\frac{P(y, x_1, \ldots x_n)}{P(x_1, \ldots x_n)}. For any 1\leq i\leq n, :P(y, x_1, \ldots x_n)=P(y, x_i)P(x_1, \ldots x_n\mid y, x_i). Under an assumption that ''x''1, ... ''x''n are independent given ''y'' and ''x''i, it follows that :P(y, x_1, \ldots x_n)=P(y, x_i)\prod_{j=1}^{n} P(x_j\mid y, x_i). This formula defines a special form of One Dependence Estimator (ODE), a variant of the naive Bayes classifier that makes an independence assumption that is weaker (and hence potentially less harmful) than the naive Bayes independence assumption. In consequence, each ODE should create a less biased estimator than naive Bayes. However, because the base probability estimates are each conditioned by two variables rather than one, they are formed from less data (the training examples that satisfy both variables) and hence are likely to have more variance. AODE reduces this variance by averaging the estimates of all such ODEs: averaging the product above over every parent ''x''i with F(''x''i) ≥ ''m'' and normalizing over the classes yields the AODE formula given earlier.


Features of the AODE classifier

Like naive Bayes, AODE does not perform model selection and does not use tuneable parameters. As a result, it has low variance. It supports incremental learning, whereby the classifier can be updated efficiently with information from new examples as they become available. It predicts class probabilities rather than simply predicting a single class, allowing the user to determine the confidence with which each classification can be made. Its probabilistic model can directly handle situations where some data are missing. AODE has computational complexity O(ln^2) at training time and O(kn^2) at classification time, where ''n'' is the number of features, ''l'' is the number of training examples and ''k'' is the number of classes. This makes it infeasible for application to high-dimensional data. However, within that limitation, it is linear with respect to the number of training examples and hence can efficiently process large numbers of training examples.
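The incremental-learning property follows from the fact that AODE's only sufficient statistics are joint frequency counts. A hypothetical sketch (the function name and the flat key layout are illustrative choices, not part of any published implementation):

```python
from collections import defaultdict

def absorb_example(counts, x, y):
    """Fold one new labelled example into AODE's frequency counts.

    Updating touches O(n^2) counters for n features and never
    revisits previously seen data, which is why the classifier
    can be extended example by example.
    """
    for i, vi in enumerate(x):
        for j, vj in enumerate(x):
            counts[(y, i, vi, j, vj)] += 1

counts = defaultdict(int)
absorb_example(counts, ("a", "p"), 0)
absorb_example(counts, ("a", "q"), 0)  # a later example; no retraining pass needed
```

Each update costs the same O(n^2) work as one step of the original training loop, so the model after incremental updates is identical to one trained from scratch on the combined data.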


Implementations

The free Weka machine learning suite includes an implementation of AODE.


See also

* Cluster-weighted modeling

