Decision tree learning
Decision tree learning is a supervised learning approach used in statistics, data mining and machine learning. In this formalism, a classification or regression decision tree is used as a predictive model to draw conclusions about a set of observations.
Tree models where the target variable can take a discrete set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. More generally, the concept of regression tree can be extended to any kind of object equipped with pairwise dissimilarities such as categorical sequences.[1]
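The distinction can be made concrete with a small example. The following sketch uses the scikit-learn library on invented toy data; the parameter choices are illustrative only.

```python
# A minimal sketch using scikit-learn; data and settings are invented.
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

X = [[0, 0], [0, 1], [1, 0], [1, 1]]

# Classification tree: the target is a discrete class label.
y_class = [0, 1, 1, 0]
clf = DecisionTreeClassifier(max_depth=2).fit(X, y_class)
print(clf.predict([[1, 1]]))   # predicts a class label, e.g. [0]

# Regression tree: the target is a real number.
y_real = [0.1, 0.9, 0.8, 0.2]
reg = DecisionTreeRegressor(max_depth=2).fit(X, y_real)
print(reg.predict([[1, 1]]))   # predicts a real value, e.g. [0.2]
```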
Decision trees are among the most popular machine learning algorithms given their intelligibility and simplicity.[2]
In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data (but the resulting classification tree can be an input for decision making).
Decision trees used in data mining are of two main types:
- Classification tree analysis, where the predicted outcome is the (discrete) class to which the data belongs.
- Regression tree analysis, where the predicted outcome can be considered a real number (for example, the price of a house or a patient's length of stay in a hospital).
Classification and regression tree (CART) analysis is an umbrella term for either of the above procedures, first introduced by Breiman et al. in 1984.[7] Trees used for regression and trees used for classification have some similarities, but also some differences, such as the procedure used to determine where to split.[7]
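One such difference is the split criterion: CART-style classification trees commonly score candidate splits by Gini impurity, while regression trees commonly use the variance of the target values. A minimal sketch of both criteria (the function names are invented for this illustration):

```python
# Illustrative split criteria; function names are invented for this sketch.
from collections import Counter

def gini_impurity(labels):
    """Gini impurity 1 - sum_k p_k^2 of a multiset of class labels."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def variance(values):
    """Mean squared deviation from the mean, used for regression splits."""
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

def split_cost(left, right, impurity):
    """Size-weighted impurity of a candidate split; lower is better."""
    n = len(left) + len(right)
    return (len(left) * impurity(left) + len(right) * impurity(right)) / n

print(split_cost([0, 0, 1], [1, 1], gini_impurity))       # classification split
print(split_cost([0.1, 0.2], [0.9, 1.0, 1.1], variance))  # regression split
```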
Some techniques, often called ensemble methods, construct more than one decision tree:
- Boosted trees, which incrementally build an ensemble by training each new tree to emphasize the training instances that previous trees modeled poorly.
- Bootstrap aggregated (bagged) trees, which build multiple trees by repeatedly resampling the training data with replacement and combine their predictions by voting or averaging. The random forest, which additionally selects a random subset of features at each candidate split, is a particularly successful example.
A special case of a decision tree is a decision list,[14] which is a one-sided decision tree in which every internal node has exactly one leaf node and exactly one internal node as a child (except for the bottommost node, whose only child is a single leaf node). While less expressive, decision lists are arguably easier to understand than general decision trees due to their added sparsity; they also permit non-greedy learning methods[15] and allow monotonic constraints to be imposed.[16]
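Functionally, a decision list behaves like an ordered chain of if/elif rules: each internal node either emits the label at its single leaf child or falls through to the next node. The rules, data, and default label in this sketch are invented for illustration.

```python
# Hypothetical decision list; every rule is an internal node whose children
# are one leaf (the label) and one further internal node (the next rule).
rules = [
    (lambda x: x["age"] < 18, "minor"),
    (lambda x: x["income"] > 100_000, "high"),
    (lambda x: x["income"] > 30_000, "medium"),
]
DEFAULT = "low"   # the single leaf child of the bottommost node

def decision_list(x):
    for condition, label in rules:
        if condition(x):
            return label
    return DEFAULT

print(decision_list({"age": 40, "income": 50_000}))   # -> 'medium'
```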
Notable decision tree algorithms include:
- ID3 (Iterative Dichotomiser 3)
- C4.5 (the successor of ID3)
- CART (Classification And Regression Tree)
- CHAID (Chi-square Automatic Interaction Detection), which performs multi-level splits when computing classification trees
- Conditional inference trees, a statistics-based approach that uses non-parametric tests as splitting criteria
ID3 and CART were invented independently at around the same time (between 1970 and 1980), yet follow a similar approach for learning a decision tree from training tuples.
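That shared approach is a greedy, top-down recursion: choose the split that best purifies the current set of tuples, partition, and recurse. The following is a simplified sketch of this skeleton using numeric threshold splits and Gini impurity; it is not a verbatim rendering of either ID3 or CART.

```python
# Simplified greedy top-down induction (a sketch, not ID3 or CART verbatim).
def gini(labels):
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def build_tree(rows, labels, depth=0, max_depth=3):
    # Stop when the node is pure or the depth limit is reached.
    if len(set(labels)) == 1 or depth == max_depth:
        return max(set(labels), key=labels.count)   # majority-label leaf
    best = None   # (cost, feature, threshold) of the best split so far
    for f in range(len(rows[0])):
        for t in sorted({r[f] for r in rows}):
            left = [y for r, y in zip(rows, labels) if r[f] <= t]
            right = [y for r, y in zip(rows, labels) if r[f] > t]
            if left and right:
                cost = (len(left) * gini(left) + len(right) * gini(right)) / len(rows)
                if best is None or cost < best[0]:
                    best = (cost, f, t)
    if best is None:
        return max(set(labels), key=labels.count)
    _, f, t = best
    L = [(r, y) for r, y in zip(rows, labels) if r[f] <= t]
    R = [(r, y) for r, y in zip(rows, labels) if r[f] > t]
    return (f, t,
            build_tree([r for r, _ in L], [y for _, y in L], depth + 1, max_depth),
            build_tree([r for r, _ in R], [y for _, y in R], depth + 1, max_depth))

print(build_tree([[2.0], [3.0], [10.0], [11.0]], ["a", "a", "b", "b"]))
# -> (0, 3.0, 'a', 'b'): split feature 0 at 3.0, then two pure leaves
```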
Concepts from fuzzy set theory have also been proposed for defining a special version of decision tree, known as a Fuzzy Decision Tree (FDT).[23]
In this type of fuzzy classification, an input vector is generally associated with multiple classes, each with a different confidence value.
Boosted ensembles of FDTs have recently been investigated as well, and they have shown performance comparable to that of other highly efficient fuzzy classifiers.[24]
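The core idea can be illustrated with a toy sketch: a fuzzy split assigns an input a degree of membership in each branch, and the class confidences stored at the leaves are blended by those degrees. The membership function, thresholds, and leaf distributions below are all invented for illustration.

```python
# Toy fuzzy-decision-tree prediction; all numbers here are invented.
def left_membership(x, threshold, width):
    """Degree in [0, 1] to which x belongs to the left branch."""
    return max(0.0, min(1.0, 0.5 + (threshold - x) / (2 * width)))

def fuzzy_predict(x):
    mu = left_membership(x, threshold=5.0, width=2.0)
    leaf_left = {"cat": 0.9, "dog": 0.1}    # class confidences at left leaf
    leaf_right = {"cat": 0.2, "dog": 0.8}   # class confidences at right leaf
    return {c: mu * leaf_left[c] + (1 - mu) * leaf_right[c] for c in leaf_left}

# The input ends up associated with both classes, each with its own confidence.
print(fuzzy_predict(4.5))   # e.g. {'cat': 0.6375, 'dog': 0.3625}
```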
Extensions
Decision graphs
In a decision tree, all paths from the root node to a leaf node proceed by way of conjunction, or AND. In a decision graph, it is possible to use disjunctions (ORs) to join two or more paths together using minimum message length (MML).[43] Decision graphs have been further extended to allow previously unstated new attributes to be learnt dynamically and used at different places within the graph.[44] This more general coding scheme results in better predictive accuracy and log-loss probabilistic scoring. In general, decision graphs infer models with fewer leaves than decision trees.
Alternative search methods
Evolutionary algorithms have been used to avoid local optimal decisions and search the decision tree space with little a priori bias.[45][46]
It is also possible for a tree to be sampled using MCMC.[47]
The tree can also be searched for in a bottom-up fashion,[48] or several trees can be constructed in parallel to reduce the expected number of tests until classification.[38]
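As one concrete flavor of such alternative search, the following is a minimal evolutionary sketch. For brevity it searches over single-split stumps rather than full trees; the data, population size, and mutation scheme are all invented.

```python
# Minimal evolutionary search over decision stumps (single-split trees),
# standing in for search over the full decision tree space; settings invented.
import random

X = [[1.0], [2.0], [6.0], [7.0]]
y = [0, 0, 1, 1]

def accuracy(stump):
    f, t = stump
    return sum((x[f] > t) == bool(c) for x, c in zip(X, y)) / len(y)

def mutate(stump):
    f, t = stump
    return (f, t + random.gauss(0, 1.0))    # perturb the split threshold

random.seed(0)
population = [(0, random.uniform(0.0, 8.0)) for _ in range(10)]
for _ in range(30):                          # generations
    population.sort(key=accuracy, reverse=True)
    parents = population[:5]                 # keep the fittest half
    population = parents + [mutate(random.choice(parents)) for _ in range(5)]

print(max(population, key=accuracy))         # a well-scoring (feature, threshold)
```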