Decision tree model

In computational complexity theory, the decision tree model is the model of computation in which an algorithm is modeled as a decision tree, i.e., a sequence of ''queries'' or ''tests'' that are done adaptively, so the outcomes of previous tests can influence the tests performed next. Typically, these tests have a small number of outcomes (such as a yes–no question) and can be performed quickly (say, with unit computational cost), so the worst-case time complexity of an algorithm in the decision tree model corresponds to the depth of the corresponding tree. This notion of the computational complexity of a problem or an algorithm in the decision tree model is called its decision tree complexity or query complexity.

Decision tree models are instrumental in establishing lower bounds in complexity theory for certain classes of computational problems and algorithms. Several variants of decision tree models have been introduced, depending on the computational model and the type of query the algorithms are allowed to perform.

For example, a decision tree argument is used to show that a comparison sort of n items must take \Omega(n\log n) comparisons. For comparison sorts, a query is a comparison of two items a,\,b, with two outcomes (assuming no items are equal): either a < b or a > b. Comparison sorts can be expressed as decision trees in this model, since such sorting algorithms only perform these types of queries.
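
To make the model concrete, here is a minimal Python sketch (an illustration, not part of the original article) of a comparison decision tree for sorting three distinct numbers. Each ''if'' is one query, and its answer selects the next query or a leaf; the longest root-to-leaf path makes 3 comparisons, matching \lceil\log_2 3!\rceil = 3.

```python
def sort3(a, b, c):
    """A depth-3 comparison decision tree that sorts three distinct
    numbers: each comparison is one query, and its answer determines
    which query (or output leaf) comes next."""
    if a < b:
        if b < c:
            return (a, b, c)      # leaf reached after 2 queries
        elif a < c:
            return (a, c, b)
        else:
            return (c, a, b)
    else:
        if a < c:
            return (b, a, c)      # leaf reached after 2 queries
        elif b < c:
            return (b, c, a)
        else:
            return (c, b, a)
```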


Comparison trees and lower bounds for sorting

Decision trees are often employed to understand algorithms for sorting and other similar problems; this was first done by Ford and Johnson. For example, many sorting algorithms are comparison sorts, which means that they only gain information about an input sequence x_1,x_2,\ldots,x_n via local comparisons: testing whether x_i < x_j, x_i = x_j, or x_i > x_j. Assuming that the items to be sorted are all distinct and comparable, this can be rephrased as a yes-or-no question: is x_i > x_j?

These algorithms can be modeled as binary decision trees, where the queries are comparisons: an internal node corresponds to a query, and the node's children correspond to the next query when the answer to the question is yes or no. For leaf nodes, the output corresponds to a permutation \pi that describes how the input sequence was scrambled from the fully ordered list of items. (The inverse of this permutation, \pi^{-1}, re-orders the input sequence.)

One can show that comparison sorts must use \Omega(n\log n) comparisons through a simple argument: for an algorithm to be correct, it must be able to output every possible permutation of n elements; otherwise, the algorithm would fail for some particular permutation as input. So, its corresponding decision tree must have at least as many leaves as permutations: n! leaves. Any binary tree with at least n! leaves has depth at least \log_2(n!) = \Omega(n\log_2 n), so this is a lower bound on the run time of a comparison sorting algorithm. In this case, the existence of numerous comparison-sorting algorithms having this time complexity, such as mergesort and heapsort, demonstrates that the bound is tight.

This argument does not use anything about the type of query, so it in fact proves a lower bound for any sorting algorithm that can be modeled as a binary decision tree. In essence, this is a rephrasing of the information-theoretic argument that a correct sorting algorithm must learn at least \log_2(n!) bits of information about the input sequence. As a result, it works for randomized decision trees as well.

Other decision tree lower bounds do use the fact that the query is a comparison. For example, consider the task of using only comparisons to find the smallest number among n numbers. Before the smallest number can be determined, every number except the smallest must "lose" (compare greater) in at least one comparison. So, it takes at least n-1 comparisons to find the minimum. (The information-theoretic argument here only gives a lower bound of \log_2 n.) A similar argument works for general lower bounds for computing order statistics.
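
The counting bound is easy to evaluate numerically. A short Python sketch (illustrative; the function name is ours) computes \lceil\log_2(n!)\rceil, the minimum worst-case number of comparisons:

```python
import math

def comparison_lower_bound(n: int) -> int:
    """Depth needed by any binary tree with n! leaves, i.e. the
    information-theoretic lower bound for comparison sorting."""
    return math.ceil(math.log2(math.factorial(n)))

for n in (3, 5, 10, 100):
    print(n, comparison_lower_bound(n))   # 3 -> 3, 5 -> 7, 10 -> 22, ...
```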


Linear and algebraic decision trees

Linear decision trees generalize the above comparison decision trees to computing functions that take real vectors x \in \mathbb{R}^n as input. The tests in linear decision trees are linear functions: for a particular choice of real numbers a_0, \dots, a_n, output the sign of a_0 + \sum_{i=1}^n a_i x_i. (Algorithms in this model can only depend on the sign of the output.) Comparison trees are linear decision trees, because the comparison between x_i and x_j corresponds to the linear function x_i - x_j. By definition, linear decision trees can only specify functions f whose fibers can be constructed by taking unions and intersections of half-spaces. ''Algebraic decision trees'' are a generalization of linear decision trees that allow the test functions to be polynomials of degree d. Geometrically, the space is divided into semialgebraic sets (a generalization of hyperplanes). These decision tree models, defined by Rabin and Reingold, are often used for proving lower bounds in computational geometry. For example, Ben-Or showed that element uniqueness (the task of computing f: \mathbb{R}^n \to \{0,1\}, where f(x) is 0 if and only if there exist distinct coordinates i, j such that x_i = x_j) requires an algebraic decision tree of depth \Omega(n\log n). This was first shown for linear decision models by Dobkin and Lipton. They also showed an n^2 lower bound for linear decision trees on the knapsack problem, generalized to algebraic decision trees by Steele and Yao.
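
As a small illustration (names and values here are ours, not from the text), a single query of a linear decision tree just evaluates the sign of an affine function, and the comparison x_i < x_j is the special case a_0 = 0, a = e_i - e_j:

```python
import numpy as np

def linear_query(a0: float, a: np.ndarray, x: np.ndarray) -> int:
    """One test of a linear decision tree: the sign (-1, 0, or +1)
    of a0 + <a, x>; an algorithm may branch only on this sign."""
    return int(np.sign(a0 + a @ x))

x = np.array([3.0, 7.0, 1.0])
a = np.array([1.0, -1.0, 0.0])     # e_0 - e_1 encodes "x_0 vs x_1"
print(linear_query(0.0, a, x))     # -1, since x_0 < x_1
```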


Boolean decision tree complexities

For Boolean decision trees, the task is to compute the value of an n-bit Boolean function f: \{0,1\}^n \to \{0,1\} for an input x \in \{0,1\}^n. The queries correspond to reading a bit of the input, x_i, and the output is f(x). Each query may depend on the answers to previous queries. There are many types of computational models using decision trees that could be considered, admitting multiple complexity notions, called ''complexity measures''.
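
For example, the 3-bit selector function f(x) = x_1 if x_0 = 0 and f(x) = x_2 otherwise admits a tree of depth 2, even though f depends on all three bits (a sketch using our own choice of example function):

```python
def selector(x):
    """A Boolean decision tree of depth 2: the first query (bit 0)
    decides which single further bit needs to be read."""
    if x[0] == 0:
        return x[1]    # second and final query
    return x[2]        # second and final query
```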


Deterministic decision tree

If the output of a decision tree is f(x) for all x \in \{0,1\}^n, the decision tree is said to "compute" f. The depth of a tree is the maximum number of queries that can happen before a leaf is reached and a result obtained. D(f), the ''deterministic decision tree'' complexity of f, is the smallest depth among all deterministic decision trees that compute f.
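
The definition suggests a minimax recursion: a tree can stop when f is constant on the subcube consistent with the answers so far, and otherwise the best tree queries the bit whose worst-case answer minimizes the remaining depth. A brute-force Python sketch for very small n (exponential time; function names are ours):

```python
from itertools import product

def deterministic_depth(f, n):
    """Brute-force D(f) for f : {0,1}^n -> {0,1}, where f takes a
    tuple of n bits.  Feasible only for very small n."""
    def consistent(fixed):
        for bits in product((0, 1), repeat=n):
            if all(bits[i] == b for i, b in fixed.items()):
                yield bits

    def solve(fixed):
        if len({f(x) for x in consistent(fixed)}) == 1:
            return 0  # f is constant here: a leaf suffices
        # Query some bit i; an adversary picks the worse answer b.
        return min(
            1 + max(solve({**fixed, i: b}) for b in (0, 1))
            for i in range(n) if i not in fixed
        )

    return solve({})

# D(AND_3) = 3: the adversary can answer 1 to the first two queries.
print(deterministic_depth(lambda x: x[0] & x[1] & x[2], 3))
```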


Randomized decision tree

One way to define a ''randomized decision tree'' is to add additional nodes to the tree, each controlled by a probability p_i. Another, equivalent, definition is a distribution over deterministic decision trees. Based on this second definition, the complexity of a randomized tree is defined as the largest depth among all the trees in the support of the underlying distribution. R_2(f) is defined as the complexity of the lowest-depth randomized decision tree whose result is f(x) with probability at least 2/3 for all x \in \{0,1\}^n (i.e., with bounded two-sided error). R_2(f) is known as the Monte Carlo randomized decision-tree complexity, because the result is allowed to be incorrect with bounded two-sided error. The Las Vegas decision-tree complexity R_0(f) measures the ''expected'' depth of a decision tree that must be correct (i.e., has zero error). There is also a one-sided bounded-error version, denoted R_1(f).
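
As a toy illustration of bounded error (the promise problem here is our own example, not from the article): suppose x is promised either to be all zeros or to have at least two-thirds of its bits equal to 1. Querying one uniformly random bit, i.e. drawing from a distribution over n deterministic depth-1 trees, answers correctly with probability at least 2/3:

```python
import random

def one_random_query(x):
    """A randomized decision tree of depth 1: choosing the index i
    selects one deterministic tree from the distribution.  Under the
    stated promise, the output is correct with probability >= 2/3."""
    i = random.randrange(len(x))
    return x[i]   # output 1 iff the sampled bit is 1
```

By contrast, any deterministic tree for this promise problem must make more than n/3 queries, since an adversary can answer 0 to the first \lfloor n/3 \rfloor queries while keeping both cases possible.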


Nondeterministic decision tree

The nondeterministic decision tree complexity of a function is known more commonly as the certificate complexity of that function. It measures the number of input bits that a nondeterministic algorithm would need to look at in order to evaluate the function with certainty. Formally, the certificate complexity of f at x is the size of the smallest subset of indices S \subseteq [n] such that, for all y \in \{0,1\}^n, if y_i = x_i for all i \in S, then f(y) = f(x). The certificate complexity of f is the maximum certificate complexity over all x. The analogous notion where one only requires the verifier to be correct with probability 2/3 is denoted RC(f).
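
A brute-force sketch (ours; exponential time) that computes the certificate complexity directly from this definition for very small n:

```python
from itertools import combinations, product

def certificate_complexity(f, n):
    """Max over x of the size of the smallest index set S such that
    fixing x on S forces the value f(x) on every consistent y."""
    def cert_size(x):
        for k in range(n + 1):
            for S in combinations(range(n), k):
                if all(f(y) == f(x)
                       for y in product((0, 1), repeat=n)
                       if all(y[i] == x[i] for i in S)):
                    return k
    return max(cert_size(x) for x in product((0, 1), repeat=n))

# C(OR_3) = 3: any 1-input has a certificate of size 1 (a set bit),
# but certifying the all-zeros input requires exhibiting all n bits.
print(certificate_complexity(lambda x: int(any(x)), 3))
```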


Quantum decision tree

The quantum decision tree complexity Q_2(f) is the depth of the lowest-depth quantum decision tree that gives the result f(x) with probability at least 2/3 for all x \in \{0,1\}^n. Another quantity, Q_E(f), is defined as the depth of the lowest-depth quantum decision tree that gives the result f(x) with probability 1 in all cases (i.e., computes f exactly). Q_2(f) and Q_E(f) are more commonly known as ''quantum query complexities'', because the direct definition of a quantum decision tree is more complicated than in the classical case. Similar to the randomized case, we define Q_0(f) and Q_1(f). These notions are typically bounded by the notions of degree and approximate degree. The ''degree'' of f, denoted \deg(f), is the smallest degree of any polynomial p satisfying p(x) = f(x) for all x \in \{0,1\}^n. The ''approximate degree'' of f, denoted \widetilde{\deg}(f), is the smallest degree of any polynomial p satisfying p(x) \in [0, 1/3] whenever f(x) = 0 and p(x) \in [2/3, 1] whenever f(x) = 1. Beals et al. established that Q_0(f) \geq \deg(f)/2 and Q_2(f) \geq \widetilde{\deg}(f)/2.
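
Since every f agrees with a unique multilinear polynomial on \{0,1\}^n, \deg(f) can be computed by Mobius inversion of that polynomial's coefficients. A small-n Python sketch (ours):

```python
from itertools import product

def degree(f, n):
    """deg(f): largest |S| whose monomial prod_{i in S} x_i has a
    nonzero coefficient c_S = sum_{T subset S} (-1)^{|S|-|T|} f(1_T)."""
    def as_bits(mask):
        return tuple((mask >> i) & 1 for i in range(n))
    best = 0
    for S in range(1 << n):          # each bitmask S is one monomial
        c, T = 0, S
        while True:                  # enumerate all subsets T of S
            c += (-1) ** bin(S ^ T).count("1") * f(as_bits(T))
            if T == 0:
                break
            T = (T - 1) & S
        if c != 0:
            best = max(best, bin(S).count("1"))
    return best

# Parity has full degree: for n = 2 it is x1 + x2 - 2*x1*x2.
print(degree(lambda x: x[0] ^ x[1], 2))   # 2
```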


Relationships between Boolean function complexity measures

It follows immediately from the definitions that for all n-bit Boolean functions f, Q_2(f) \leq R_2(f) \leq R_1(f) \leq R_0(f) \leq D(f) \leq n, and Q_2(f) \leq Q_0(f) \leq D(f) \leq n. Finding the best upper bounds in the converse direction is a major goal in the field of query complexity.

All of these types of query complexity are polynomially related. Blum and Impagliazzo, Hartmanis and Hemachandra, and Tardos independently discovered that D(f) \leq R_0(f)^2. Noam Nisan found that the Monte Carlo randomized decision tree complexity is also polynomially related to deterministic decision tree complexity: D(f) = O(R_2(f)^3). (Nisan also showed that D(f) = O(R_1(f)^2).) A tighter relationship is known between the Monte Carlo and Las Vegas models: R_0(f) = O(R_2(f)^2 \log R_2(f)) (Kulkarni, R. and Tal, A., "On Fractional Block Sensitivity", Electronic Colloquium on Computational Complexity (ECCC), Vol. 20, 2013). This relationship is optimal up to polylogarithmic factors. As for quantum decision tree complexities, D(f) = O(Q_2(f)^4), and this bound is tight. Midrijanis showed that D(f) = O(Q_0(f)^3), improving a quartic bound due to Beals et al.

It is important to note that these polynomial relationships are valid only for ''total'' Boolean functions. For partial Boolean functions, whose domain is a subset of \{0,1\}^n, an exponential separation between Q_0(f) and D(f) is possible; the first example of such a problem was discovered by Deutsch and Jozsa.


Sensitivity conjecture

For a Boolean function f: \{0,1\}^n \to \{0,1\}, the ''sensitivity'' of f is defined to be the maximum sensitivity of f over all x, where the sensitivity of f at x is the number of single-bit changes in x that change the value of f(x). Sensitivity is related to the notion of total influence from the analysis of Boolean functions, which is equal to ''average'' sensitivity over all x.

The sensitivity conjecture is the conjecture that sensitivity is polynomially related to query complexity; that is, there exist exponents c, c' such that, for all f, D(f) = O(s(f)^c) and s(f) = O(D(f)^{c'}). One can show through a simple argument that s(f) \leq D(f), so the conjecture is specifically concerned with finding a lower bound for sensitivity. Since all of the previously discussed complexity measures are polynomially related, the precise type of complexity measure is not relevant. However, this is typically phrased as the question of relating sensitivity with block sensitivity.

The ''block sensitivity'' of f, denoted bs(f), is defined to be the maximum block sensitivity of f over all x. The block sensitivity of f at x is the maximum number t of disjoint subsets S_1, \ldots, S_t \subseteq [n] such that, for each of the subsets S_i, flipping the bits of x corresponding to S_i changes the value of f(x). Since block sensitivity takes a maximum over more choices of subsets, s(f) \leq bs(f). Further, block sensitivity is polynomially related to the previously discussed complexity measures; for example, Nisan's paper introducing block sensitivity showed that bs(f) \leq D(f) = O(bs(f)^4). So, one could rephrase the sensitivity conjecture as showing that, for some c, bs(f) = O(s(f)^c).

In 1992, Nisan and Szegedy conjectured that c = 2 suffices. This would be tight, as Rubinstein in 1995 showed a quadratic separation between sensitivity and block sensitivity. In July 2019, 27 years after the conjecture was initially posed, Hao Huang from Emory University proved the sensitivity conjecture, showing that bs(f) = O(s(f)^4). This proof is notably succinct, establishing the statement in two pages when prior progress towards the sensitivity conjecture had been limited.
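
Sensitivity itself is straightforward to compute by brute force for small n; a Python sketch (ours):

```python
from itertools import product

def sensitivity(f, n):
    """s(f): max over inputs x of how many single-bit flips of x
    change the value f(x)."""
    def flip(x, i):
        return tuple(b ^ (j == i) for j, b in enumerate(x))
    return max(
        sum(f(flip(x, i)) != f(x) for i in range(n))
        for x in product((0, 1), repeat=n)
    )

# s(OR_3) = 3: at the all-zeros input every bit flip changes the value.
print(sensitivity(lambda x: int(any(x)), 3))
```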


Summary of known results

This table summarizes results on separations between Boolean function complexity measures. The complexity measures are, in order, deterministic, zero-error randomized, two-sided-error randomized, certificate, randomized certificate, block sensitivity, sensitivity, exact quantum, degree, quantum, and approximate degree complexities. The number in the A-th row and B-th column denotes bounds on the exponent c, which is the infimum of all k satisfying A(f) = O(B(f)^k) for all Boolean functions f. For example, the entry in the D-th row and s-th column is "3, 6", so D(f) = O(\operatorname{s}(f)^{6}) for all f, and there exists a function g such that D(g) = \Omega(\operatorname{s}(g)^{3}).


See also

* Comparison sort
* Decision tree
* Aanderaa–Karp–Rosenberg conjecture
* Minimum spanning tree#Decision trees

