
In computational complexity theory, ''arithmetic circuits'' are the standard model for computing polynomials. Informally, an arithmetic circuit takes as inputs either variables or numbers, and is allowed to either add or multiply two expressions it has already computed. Arithmetic circuits provide a formal way to understand the complexity of computing polynomials. The basic type of question in this line of research is "what is the most efficient way to compute a given polynomial f?"


Definitions

An ''arithmetic circuit'' C over the field F and the set of variables x_1, \ldots, x_n is a directed acyclic graph as follows. Every node in it with indegree zero is called an ''input gate'' and is labeled by either a variable x_i or a field element in F. Every other gate is labeled by either + or \times; in the first case it is a ''sum'' gate and in the second a ''product'' gate. An ''arithmetic formula'' is a circuit in which every gate has outdegree one (and so the underlying graph is a directed tree). A circuit has two complexity measures associated with it: size and depth. The ''size'' of a circuit is the number of gates in it, and the ''depth'' of a circuit is the length of the longest directed path in it. An arithmetic circuit computes a polynomial in the following natural way. An input gate computes the polynomial it is labeled by. A sum gate v computes the sum of the polynomials computed by its children (a gate u is a ''child'' of v if the directed edge (v,u) is in the graph). A product gate computes the product of the polynomials computed by its children. Consider, for example, a circuit with three input gates computing x_1, x_2 and 1, two sum gates computing x_1 + x_2 and x_2 + 1, and one product gate computing (x_1 + x_2) x_2 (x_2 + 1); this circuit has size six and depth two.
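
To make the definition concrete, here is a minimal Python sketch (not part of the original article; the gate names, the dictionary encoding and the use of sympy are illustrative assumptions) that represents the example circuit above as a labeled DAG and recovers the formal polynomial each gate computes.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

# Each gate maps to (operation, arguments); input gates carry their label directly.
# This encodes the example circuit from the text: size six, depth two.
circuit = {
    'in1': ('input', x1),
    'in2': ('input', x2),
    'in3': ('input', sp.Integer(1)),
    's1':  ('+', ['in1', 'in2']),        # sum gate: x1 + x2
    's2':  ('+', ['in2', 'in3']),        # sum gate: x2 + 1
    'p1':  ('*', ['s1', 'in2', 's2']),   # product gate: (x1 + x2) * x2 * (x2 + 1)
}

def gate_polynomial(gate, circuit, cache=None):
    """Formal polynomial computed at `gate`, memoized so shared sub-DAGs are reused."""
    if cache is None:
        cache = {}
    if gate in cache:
        return cache[gate]
    op, args = circuit[gate]
    if op == 'input':
        poly = args
    else:
        children = [gate_polynomial(c, circuit, cache) for c in args]
        poly = sp.Add(*children) if op == '+' else sp.Mul(*children)
    cache[gate] = sp.expand(poly)
    return cache[gate]

print(gate_polynomial('p1', circuit))  # x1*x2**2 + x1*x2 + x2**3 + x2**2
```

The memoization over gate names is what distinguishes a circuit from a formula: a gate that is computed once may feed several parents, whereas in a formula every gate has a single parent.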


Overview

Given a polynomial f, we may ask ourselves what is the best way to compute it — for example, what is the smallest size of a circuit computing f. The answer to this question consists of two parts. The first part is finding some circuit that computes f; this part is usually called ''upper bounding'' the complexity of f. The second part is showing that no other circuit can do better; this part is called ''lower bounding'' the complexity of f. Although these two tasks are strongly related, proving lower bounds is usually harder, since in order to prove a lower bound one needs to argue about all circuits at the same time. Note that we are interested in the formal computation of polynomials, rather than the functions that the polynomials define. For example, consider the polynomial x^2 + x; over the field of two elements this polynomial represents the zero function, but it is not the zero polynomial. This is one of the differences between the study of arithmetic circuits and the study of Boolean circuits. In Boolean complexity, one is mostly interested in computing a function, rather than some representation of it (in our case, a representation by a polynomial). This is one of the reasons that make Boolean complexity harder than arithmetic complexity. The study of arithmetic circuits may also be considered as one of the intermediate steps towards the study of the Boolean case, which we hardly understand.
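
The distinction between a polynomial and the function it defines can be checked directly; the short Python sketch below (an illustration added here, assuming sympy, and not part of the original text) evaluates x^2 + x at both elements of the two-element field and then asks whether it is the zero polynomial.

```python
import sympy as sp

x = sp.symbols('x')
f = x**2 + x

# As a function on the two-element field, f vanishes at every point ...
print([f.subs(x, a) % 2 for a in (0, 1)])   # [0, 0]: the zero function on GF(2)

# ... yet as a formal polynomial over GF(2) it is not the zero polynomial.
print(sp.Poly(f, x, modulus=2).is_zero)     # False
```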


Upper bounds

As part of the study of the complexity of computing polynomials, some clever circuits (alternatively, algorithms) were found. A well-known example is Strassen's algorithm for matrix product. The straightforward way of computing the product of two n \times n matrices requires a circuit of size on the order of n^3. Strassen showed that we can, in fact, multiply two matrices using a circuit of size roughly n^{2.807} (more precisely, n^{\log_2 7}). Strassen's basic idea is a clever way of multiplying two-by-two matrices. This idea is the starting point of the best theoretical methods known for multiplying two matrices, which take time roughly n^{2.37}. Another interesting story lies behind the computation of the determinant of an n \times n matrix. The naive way of computing the determinant requires circuits of size roughly n!. Nevertheless, we know that there are circuits of size polynomial in n for computing the determinant. These circuits, however, have depth that is linear in n. Berkowitz came up with an improvement: a circuit of size polynomial in n, but of depth O(\log^2(n)). We would also like to mention the best circuit known for the permanent of an n \times n matrix. As for the determinant, the naive circuit for the permanent has size roughly n!. However, for the permanent the best circuit known has size roughly 2^n, which is given by Ryser's formula: for an n \times n matrix X = (x_{ij}),
: \operatorname{perm}(X) = (-1)^n \sum_{S \subseteq \{1, \ldots, n\}} (-1)^{|S|} \prod_{i=1}^n \sum_{j \in S} x_{ij}
(this is a depth three circuit).
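
As a concrete illustration of Ryser's formula, the sketch below (added here in plain Python; the helper name permanent_ryser is hypothetical) computes the permanent as a signed sum, over subsets of columns, of products of row sums. This matches the sum-of-products-of-sums shape of the depth three circuit, and it uses about 2^n terms instead of the n! terms of the naive expansion.

```python
from itertools import combinations

def permanent_ryser(X):
    """Permanent of a square matrix X via Ryser's formula:
    perm(X) = (-1)^n * sum over column subsets S of (-1)^|S| * prod_i (sum_{j in S} x_ij).
    A sum over 2^n subsets of products of sums: a depth three computation."""
    n = len(X)
    total = 0
    for k in range(n + 1):
        for S in combinations(range(n), k):
            row_sums_product = 1
            for i in range(n):
                row_sums_product *= sum(X[i][j] for j in S)
            total += (-1) ** k * row_sums_product
    return (-1) ** n * total

# Example: perm([[1, 2], [3, 4]]) = 1*4 + 2*3 = 10 (unlike the determinant, no minus sign).
print(permanent_ryser([[1, 2], [3, 4]]))  # 10
```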


Lower bounds

In terms of proving lower bounds, our knowledge is very limited. Since we study the computation of formal polynomials, we know that polynomials of very large degree require large circuits; for example, a polynomial of degree 2^{2^n} requires a circuit of size roughly 2^n. So, the main goal is to prove lower bounds for polynomials of small degree, say, polynomial in n. In fact, as in many areas of mathematics, counting arguments tell us that there are polynomials of polynomial degree that require circuits of superpolynomial size. However, these counting arguments usually do not improve our understanding of computation. The following problem is the main open problem in this area of research: ''find an explicit polynomial of polynomial degree that requires circuits of superpolynomial size''. The state of the art is a \Omega(n \log d) lower bound for the size of a circuit computing, e.g., the polynomial x_1^d + \cdots + x_n^d, given by Strassen and by Baur and Strassen. More precisely, Strassen used Bézout's lemma to show that any circuit that simultaneously computes the n polynomials x_1^d, \ldots, x_n^d is of size \Omega(n \log d), and later Baur and Strassen showed the following: given an arithmetic circuit of size s computing a polynomial f, one can construct a new circuit of size at most O(s) that computes f and all of the n partial derivatives of f. Since the partial derivatives of x_1^d + \cdots + x_n^d are d x_1^{d-1}, \ldots, d x_n^{d-1}, the lower bound of Strassen applies to x_1^d + \cdots + x_n^d as well. This is one example where an upper bound helps in proving lower bounds; the construction of a circuit given by Baur and Strassen implies a lower bound for more general polynomials. The lack of ability to prove lower bounds brings us to consider simpler models of computation. Some examples are: monotone circuits (in which all the field elements are nonnegative real numbers), constant depth circuits, and multilinear circuits (in which every gate computes a multilinear polynomial). These restricted models have been studied extensively, and some understanding and results have been obtained.
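
The algebraic step in this argument is just differentiation of the power sum; the following sketch (illustrative only, assuming sympy) verifies that the n partial derivatives of x_1^d + \cdots + x_n^d are d x_1^{d-1}, \ldots, d x_n^{d-1}, which is exactly what lets Strassen's bound for the simultaneous computation transfer through the Baur–Strassen construction.

```python
import sympy as sp

n, d = 4, 5
xs = sp.symbols(f'x1:{n + 1}')        # the variables x1, ..., x4
f = sum(xi**d for xi in xs)           # the power-sum polynomial x_1^d + ... + x_n^d

# Its n partial derivatives are d * x_i^(d-1); by the Baur--Strassen construction,
# any size-s circuit for f yields an O(s)-size circuit computing all of them at once,
# so Strassen's Omega(n log d) bound for that simultaneous computation applies to f.
grads = [sp.diff(f, xi) for xi in xs]
print(grads)                          # [5*x1**4, 5*x2**4, 5*x3**4, 5*x4**4]
```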


Algebraic P and NP

The most interesting open problem in computational complexity theory is the P vs. NP problem. Roughly, this problem asks whether every problem whose solution can be verified quickly can also be solved quickly. In his seminal work, Valiant suggested an algebraic analog of this problem, the ''VP vs. VNP'' problem. The class VP is the algebraic analog of P; it is the class of polynomials f of polynomial degree that have polynomial size circuits over a fixed field K. The class VNP is the analog of NP. VNP can be thought of as the class of polynomials f of polynomial degree such that, given a monomial, we can determine its coefficient in f efficiently, with a polynomial size circuit. One of the basic notions in complexity theory is the notion of ''completeness''. Given a class of polynomials (such as VP or VNP), a complete polynomial f for this class is a polynomial with two properties: (1) it is part of the class, and (2) any other polynomial g in the class is easier than f, in the sense that if f has a small circuit then so does g. Valiant showed that the permanent is complete for the class VNP. So, in order to show that VP is not equal to VNP, one needs to show that the permanent does not have polynomial size circuits. This remains an outstanding open problem.
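
To illustrate the informal description of VNP, here is a small Python sketch (an added illustration, not Valiant's definition; the helper name is hypothetical, and sympy is used only to write the permanent as a formal polynomial). It shows that deciding the coefficient of a given monomial in the n \times n permanent is easy: the coefficient is 1 exactly when the chosen column indices form a permutation, and 0 otherwise.

```python
from itertools import permutations
import sympy as sp

n = 3
X = sp.Matrix(n, n, lambda i, j: sp.Symbol(f'x_{i}_{j}'))

# The permanent as a formal polynomial: one monomial per permutation of the columns.
perm = sp.Add(*[sp.Mul(*[X[i, sigma[i]] for i in range(n)])
                for sigma in permutations(range(n))])
print(len(perm.args))   # 6 = 3! monomials

def coefficient_in_permanent(columns):
    """Coefficient of x_{0,columns[0]} * ... * x_{n-1,columns[n-1]} in perm(X):
    1 if the column indices form a permutation, 0 otherwise -- easy to decide,
    even though perm(X) itself is believed to be hard to compute."""
    return 1 if sorted(columns) == list(range(len(columns))) else 0

print(coefficient_in_permanent([2, 0, 1]))  # 1: the monomial x_{0,2} x_{1,0} x_{2,1} appears
print(coefficient_in_permanent([0, 0, 1]))  # 0: column 0 repeats, so the monomial is absent
```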


Depth reduction

One benchmark in our understanding of the computation of polynomials is the work of Valiant, Skyum, Berkowitz and Rackoff. They showed that if a polynomial f of degree r has a circuit of size s, then f also has a circuit of size polynomial in r and s of depth O(\log(r) \log(s)). For example, any polynomial of degree n that has a polynomial size circuit also has a polynomial size circuit of depth roughly \log^2(n). This result generalizes the circuit of Berkowitz to any polynomial of polynomial degree that has a polynomial size circuit (such as the determinant). The analog of this result in the Boolean setting is believed to be false. One corollary of this result is a simulation of circuits by relatively small formulas, formulas of quasipolynomial size: if a polynomial f of degree r has a circuit of size s, then it has a formula of size s^{O(\log r)}. This simulation is easier than the depth reduction of Valiant et al. and was shown earlier by Hyafil (L. Hyafil, ''On the parallel evaluation of multivariate polynomials'', SIAM J. Comput. 8(2), pp. 120–123, 1979).


See also

* Polynomial evaluation, for a more general and less formal discussion of the complexity of polynomial evaluation.



