Grothendieck Constant

In mathematics, the Grothendieck inequality states that there is a universal constant K_G with the following property. If M_{ij} is an n \times n (real or complex) matrix with

: \Big| \sum_{i,j} M_{ij} s_i t_j \Big| \le 1

for all (real or complex) numbers s_i, t_j of absolute value at most 1, then

: \Big| \sum_{i,j} M_{ij} \langle S_i, T_j \rangle \Big| \le K_G

for all vectors S_i, T_j in the unit ball B(H) of a (real or complex) Hilbert space H, the constant K_G being independent of n. For a fixed Hilbert space of dimension d, the smallest constant that satisfies this property for all n \times n matrices is called a Grothendieck constant and denoted K_G(d). In fact, there are two Grothendieck constants, K_G^{\mathbb{R}}(d) and K_G^{\mathbb{C}}(d), depending on whether one works with real or complex numbers, respectively. The Grothendieck inequality and Grothendieck constants are named after Alexander Grothendieck, who proved the existence of the constants in a paper published in 1953.


Motivation and the operator formulation

Let A = (a_{ij}) be an m \times n real matrix. Then A defines a linear operator from the normed space (\mathbb{R}^n, \|\cdot\|_p) to (\mathbb{R}^m, \|\cdot\|_q) for 1 \leq p, q \leq \infty. The (p \to q)-norm of A is the quantity

: \|A\|_{p \to q} = \max_{x \in \mathbb{R}^n : \|x\|_p \leq 1} \|Ax\|_q.

If p = q, we denote the norm by \|A\|_p.

One can consider the following question: for what values of p and q is \|A\|_{p \to q} maximized? Since A is linear, it suffices to consider p such that \{x \in \mathbb{R}^n : \|x\|_p \leq 1\} contains as many points as possible, and also q such that \|Ax\|_q is as large as possible. By comparing \|x\|_p for p = 1, 2, \ldots, \infty, one sees that \|A\|_{\infty \to 1} \geq \|A\|_{p \to q} for all 1 \leq p, q \leq \infty.

One way to compute \|A\|_{\infty \to 1} is by solving the following quadratic integer program:

: \begin{aligned} \max & \qquad \sum_{i,j} A_{ij} x_i y_j \\ \text{s.t.} & \qquad (x, y) \in \{-1, 1\}^{m+n}. \end{aligned}

To see this, note that \sum_{i,j} A_{ij} x_i y_j = \sum_i (Ay)_i x_i, and taking the maximum over x \in \{-1, 1\}^m gives \|Ay\|_1. Then taking the maximum over y \in \{-1, 1\}^n gives \|A\|_{\infty \to 1} by the convexity of \{x \in \mathbb{R}^n : \|x\|_\infty \leq 1\} and by the triangle inequality. This quadratic integer program can be relaxed to the following semidefinite program:

: \begin{aligned} \max & \qquad \sum_{i,j} A_{ij} \langle x^{(i)}, y^{(j)} \rangle \\ \text{s.t.} & \qquad x^{(1)}, \ldots, x^{(m)}, y^{(1)}, \ldots, y^{(n)} \text{ are unit vectors in } (\mathbb{R}^d, \|\cdot\|_2). \end{aligned}

It is known that exactly computing \|A\|_{p \to q} for 1 \leq q < p \leq \infty is NP-hard, while exactly computing \|A\|_p is NP-hard for p \notin \{1, 2, \infty\}.

One can then ask the following natural question: how well does an optimal solution to the semidefinite program approximate \|A\|_{\infty \to 1}? The Grothendieck inequality provides an answer to this question: there exists a fixed constant C > 0 such that, for any m, n \geq 1, for any m \times n matrix A, and for any Hilbert space H,

: \max_{x^{(i)}, y^{(j)} \in B(H)} \sum_{i,j} A_{ij} \left\langle x^{(i)}, y^{(j)} \right\rangle_H \leq C \|A\|_{\infty \to 1}.
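The equality between the quadratic integer program and \|A\|_{\infty \to 1} can be checked by brute force on a tiny example. The sketch below (a hypothetical 3 × 3 matrix, exponential-time enumeration, for illustration only) compares the two quantities directly:

```python
# Brute-force check of the identity
#   max over x, y in {-1,1} of sum_ij A_ij x_i y_j  =  max over sign vectors y of ||Ay||_1,
# i.e. both sides compute ||A||_{inf->1}. Hypothetical example matrix.
from itertools import product

A = [[1, -2, 0],
     [3, 1, -1],
     [0, 2, 2]]
m, n = len(A), len(A[0])

# Quadratic integer program: maximize the bilinear form over sign vectors.
qip = max(
    sum(A[i][j] * x[i] * y[j] for i in range(m) for j in range(n))
    for x in product((-1, 1), repeat=m)
    for y in product((-1, 1), repeat=n)
)

# Operator norm ||A||_{inf->1}: the maximum of ||Ay||_1 over sign vectors y
# (the max of a convex function over the cube is attained at a vertex).
op = max(
    sum(abs(sum(A[i][j] * y[j] for j in range(n))) for i in range(m))
    for y in product((-1, 1), repeat=n)
)

assert qip == op
print(qip)  # the common value of both maximizations
```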


Bounds on the constants

The sequences K_G^{\mathbb{R}}(d) and K_G^{\mathbb{C}}(d) are easily seen to be increasing, and Grothendieck's result states that they are bounded, so they have limits. Grothendieck proved that

: 1.57 \approx \frac{\pi}{2} \leq K_G^{\mathbb{R}} \leq \sinh\frac{\pi}{2} \approx 2.3,

where K_G^{\mathbb{R}} is defined to be \sup_d K_G^{\mathbb{R}}(d). Krivine (1979) improved the result by proving that

: K_G^{\mathbb{R}} \leq \frac{\pi}{2 \ln(1 + \sqrt{2})} \approx 1.7822,

conjecturing that this upper bound is tight. However, this conjecture was disproved by Braverman, Makarychev, Makarychev and Naor (2011).


Grothendieck constant of order ''d''

Boris Tsirelson showed that the Grothendieck constants K_G^{\mathbb{R}}(d) play an essential role in the problem of quantum nonlocality: the Tsirelson bound of any full correlation bipartite Bell inequality for a quantum system of dimension d is upper-bounded by K_G^{\mathbb{R}}(2d^2).


Lower bounds

Some historical data on the best known lower bounds of K_G^{\mathbb{R}}(d) is summarized in the following table.


Upper bounds

Some historical data on the best known upper bounds of K_G^{\mathbb{R}}(d):


Applications


Cut norm estimation

Given an m \times n real matrix A = (a_{ij}), the cut norm of A is defined by

: \|A\|_\square = \max_{S \subset [m], \, T \subset [n]} \left| \sum_{i \in S, j \in T} a_{ij} \right|.

The notion of cut norm is essential in designing efficient approximation algorithms for dense graphs and matrices. More generally, the definition of cut norm can be generalized to symmetric measurable functions W : [0, 1]^2 \to \mathbb{R}, so that the cut norm of W is defined by

: \|W\|_\square = \sup_{S, T \subset [0, 1]} \left| \int_{S \times T} W \right|.

This generalized definition of cut norm is crucial in the study of the space of graphons, and the two definitions of cut norm can be linked via the adjacency matrix of a graph.

An application of the Grothendieck inequality is to give an efficient algorithm for approximating the cut norm of a given real matrix A; specifically, given an m \times n real matrix, one can find a number \alpha such that

: \|A\|_\square \leq \alpha \leq C \|A\|_\square,

where C is an absolute constant. This approximation algorithm uses semidefinite programming.

We give a sketch of this approximation algorithm. Let B = (b_{ij}) be the (m + 1) \times (n + 1) matrix defined by

: B = \begin{pmatrix} a_{11} & a_{12} & \ldots & a_{1n} & -\sum_{j=1}^n a_{1j} \\ a_{21} & a_{22} & \ldots & a_{2n} & -\sum_{j=1}^n a_{2j} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ a_{m1} & a_{m2} & \ldots & a_{mn} & -\sum_{j=1}^n a_{mj} \\ -\sum_{i=1}^m a_{i1} & -\sum_{i=1}^m a_{i2} & \ldots & -\sum_{i=1}^m a_{in} & \sum_{j=1}^n \sum_{i=1}^m a_{ij} \end{pmatrix}.

One can verify that \|A\|_\square = \|B\|_\square by observing that, if S \subset [m + 1], T \subset [n + 1] form a maximizer for the cut norm of B, then

: S^* = \begin{cases} S, & \text{if } m + 1 \not\in S, \\ [m] \setminus S, & \text{otherwise}, \end{cases} \qquad T^* = \begin{cases} T, & \text{if } n + 1 \not\in T, \\ [n] \setminus T, & \text{otherwise}, \end{cases}

form a maximizer for the cut norm of A. Next, one can verify that \|B\|_\square = \|B\|_{\infty \to 1}/4, where

: \|B\|_{\infty \to 1} = \max \left\{ \sum_{i,j} b_{ij} \varepsilon_i \delta_j : \varepsilon_i, \delta_j \in \{-1, 1\} \right\}.

Although not important in this proof, \|B\|_{\infty \to 1} can be interpreted as the norm of B when viewed as a linear operator from \ell_\infty^{n+1} to \ell_1^{m+1}.

Now it suffices to design an efficient algorithm for approximating \|B\|_{\infty \to 1}. We consider the following semidefinite program:

: \text{SDP}(B) = \max \left\{ \sum_{i,j} b_{ij} \langle x^{(i)}, y^{(j)} \rangle : x^{(i)}, y^{(j)} \text{ are unit vectors} \right\}.

Then \text{SDP}(B) \geq \|B\|_{\infty \to 1}. The Grothendieck inequality implies that \text{SDP}(B) \leq K_G^{\mathbb{R}} \|B\|_{\infty \to 1}. Many algorithms (such as interior-point methods, first-order methods, the bundle method, the augmented Lagrangian method) are known to output the value of a semidefinite program up to an additive error \varepsilon in time that is polynomial in the program description size and \log(1/\varepsilon). Therefore, one can output \alpha = \text{SDP}(B)/4, which satisfies

: \|A\|_\square \leq \alpha \leq C \|A\|_\square \qquad \text{with} \qquad C = K_G^{\mathbb{R}}.
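The two combinatorial identities used in this sketch, \|A\|_\square = \|B\|_\square and \|B\|_\square = \|B\|_{\infty \to 1}/4, can be checked by exhaustive enumeration on a small instance. The following sketch (a hypothetical 2 × 2 matrix, exponential-time brute force, not the polynomial-time algorithm itself) verifies both:

```python
# Brute-force verification of:  ||A||_cut = ||B||_cut  and  ||B||_cut = ||B||_{inf->1} / 4,
# where B augments A with a row and column of negated sums. Hypothetical tiny example.
from itertools import product

A = [[2, -1],
     [-3, 4]]
m, n = len(A), len(A[0])

def cut_norm(M):
    """Cut norm via enumeration of all row/column subsets (0/1 indicators)."""
    p, q = len(M), len(M[0])
    return max(
        abs(sum(M[i][j] * s[i] * t[j] for i in range(p) for j in range(q)))
        for s in product((0, 1), repeat=p)
        for t in product((0, 1), repeat=q)
    )

def inf_to_one(M):
    """||M||_{inf->1} = max over sign vectors y of ||My||_1."""
    p, q = len(M), len(M[0])
    return max(
        sum(abs(sum(M[i][j] * y[j] for j in range(q))) for i in range(p))
        for y in product((-1, 1), repeat=q)
    )

# Build B: append a column of negated row sums and a row of negated column sums.
row_sums = [sum(A[i]) for i in range(m)]
col_sums = [sum(A[i][j] for i in range(m)) for j in range(n)]
B = [A[i] + [-row_sums[i]] for i in range(m)]
B.append([-c for c in col_sums] + [sum(row_sums)])

assert cut_norm(A) == cut_norm(B)
assert 4 * cut_norm(B) == inf_to_one(B)
print(cut_norm(A))
```

The second identity relies on the fact that every row and column of B sums to zero, which is exactly what the appended row and column guarantee.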


Szemerédi's regularity lemma

Szemerédi's regularity lemma is a useful tool in graph theory, asserting (informally) that any graph can be partitioned into a controlled number of pieces that interact with each other in a pseudorandom way. Another application of the Grothendieck inequality is to produce a partition of the vertex set that satisfies the conclusion of Szemerédi's regularity lemma, via the cut norm estimation algorithm, in time that is polynomial in the upper bound of Szemerédi's regular partition size but independent of the number of vertices in the graph.

It turns out that the main "bottleneck" of constructing a Szemerédi regular partition in polynomial time is to determine in polynomial time whether or not a given pair (X, Y) is close to being \varepsilon-regular, meaning that for all S \subset X, T \subset Y with |S| \geq \varepsilon |X| and |T| \geq \varepsilon |Y|, we have

: \left| \frac{e(S, T)}{|S||T|} - \frac{e(X, Y)}{|X||Y|} \right| \leq \varepsilon,

where e(X', Y') = |\{(u, v) \in X' \times Y' : uv \in E\}| for all X', Y' \subset V, and V, E are the vertex and edge sets of the graph, respectively. To that end, we construct an n \times n matrix A = (a_{xy})_{x, y \in V}, where n = |V|, defined by

: a_{xy} = \begin{cases} 1 - \dfrac{e(X, Y)}{|X||Y|}, & \text{if } xy \in E, \\[4pt] -\dfrac{e(X, Y)}{|X||Y|}, & \text{otherwise}. \end{cases}

Then for all S \subset X, T \subset Y,

: \left| \sum_{x \in S, y \in T} a_{xy} \right| = |S||T| \left| \frac{e(S, T)}{|S||T|} - \frac{e(X, Y)}{|X||Y|} \right|.

Hence, if (X, Y) is not \varepsilon-regular, then \|A\|_\square \geq \varepsilon^3 n^2. It follows that, using the cut norm approximation algorithm together with the rounding technique, one can find in polynomial time S \subset X, T \subset Y such that

: \min\{|S| n, |T| n\} \geq \left| \sum_{x \in S, y \in T} a_{xy} \right| \geq \frac{1}{K_G^{\mathbb{R}}} \varepsilon^3 n^2 \geq \frac{1}{2} \varepsilon^3 n^2,

where the first inequality holds since |a_{xy}| \leq 1. Then the algorithm for producing a Szemerédi regular partition follows from the constructive argument of Alon et al.
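The identity relating the matrix A to density deviations can be checked directly. The sketch below uses a hypothetical five-vertex bipartite pair and exact rational arithmetic (the vertex labels, edge set, and sub-pair are all illustrative):

```python
# Check, on a hypothetical bipartite pair (X, Y), of the identity
#   |sum_{x in S, y in T} a_xy| = |S||T| * |e(S,T)/(|S||T|) - e(X,Y)/(|X||Y|)|,
# where a_xy = 1 - d on edges and -d otherwise, with d the density of (X, Y).
from fractions import Fraction

X, Y = [0, 1, 2], [3, 4]
E = {(0, 3), (0, 4), (1, 3), (2, 4)}      # hypothetical edge set between X and Y

d = Fraction(len(E), len(X) * len(Y))     # edge density e(X, Y) / (|X||Y|)
a = {(x, y): (1 - d if (x, y) in E else -d) for x in X for y in Y}

S, T = [0, 1], [3]                        # an arbitrary sub-pair to test
lhs = abs(sum(a[x, y] for x in S for y in T))
e_ST = sum((x, y) in E for x in S for y in T)
rhs = len(S) * len(T) * abs(Fraction(e_ST, len(S) * len(T)) - d)
assert lhs == rhs
print(lhs)
```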


Variants of the Grothendieck inequality


Grothendieck inequality of a graph

The Grothendieck inequality of a graph states that for each n \in \mathbb{N} and for each graph G = (\{1, \ldots, n\}, E) without self-loops, there exists a universal constant K > 0 such that every n \times n matrix A = (a_{ij}) satisfies

: \max_{x_1, \ldots, x_n \in S^{n-1}} \sum_{ij \in E} a_{ij} \left\langle x_i, x_j \right\rangle \leq K \max_{\varepsilon_1, \ldots, \varepsilon_n \in \{-1, 1\}} \sum_{ij \in E} a_{ij} \varepsilon_i \varepsilon_j.

The Grothendieck constant of a graph G, denoted K(G), is defined to be the smallest constant K that satisfies the above property.

The Grothendieck inequality of a graph is an extension of the Grothendieck inequality because the former inequality is the special case of the latter inequality when G is a bipartite graph with two copies of \{1, \ldots, n\} as its bipartition classes. Thus,

: K_G = \sup_n \{ K(G) : G \text{ is an } n\text{-vertex bipartite graph} \}.

For G = K_n, the n-vertex complete graph, the Grothendieck inequality of G becomes

: \max_{x_1, \ldots, x_n \in S^{n-1}} \sum_{i \neq j} a_{ij} \left\langle x_i, x_j \right\rangle \leq K(K_n) \max_{\varepsilon_1, \ldots, \varepsilon_n \in \{-1, 1\}} \sum_{i \neq j} a_{ij} \varepsilon_i \varepsilon_j.

It turns out that K(K_n) \asymp \log n. On one hand, we have K(K_n) \lesssim \log n. Indeed, the following inequality is true for any n \times n matrix A = (a_{ij}), which implies that K(K_n) \lesssim \log n by the Cauchy–Schwarz inequality:

: \max_{x_1, \ldots, x_n \in S^{n-1}} \sum_{i \neq j} a_{ij} \left\langle x_i, x_j \right\rangle \leq \log\left(\frac{\sum_{i \neq j} |a_{ij}|}{\sqrt{\sum_{i \neq j} a_{ij}^2}}\right) \max_{\varepsilon_1, \ldots, \varepsilon_n \in \{-1, 1\}} \sum_{i \neq j} a_{ij} \varepsilon_i \varepsilon_j.

On the other hand, the matching lower bound K(K_n) \gtrsim \log n is due to Alon, Makarychev, Makarychev and Naor in 2006.

The Grothendieck constant K(G) of a graph G depends upon the structure of G. It is known that

: \log \omega \lesssim K(G) \lesssim \log \vartheta, \qquad \text{and} \qquad K(G) \leq \frac{\pi}{2 \log\left(1 + \sqrt{\vartheta - 1}\right)},

where \omega is the clique number of G, i.e., the largest k \in \{2, \ldots, n\} such that there exists S \subset \{1, \ldots, n\} with |S| = k such that ij \in E for all distinct i, j \in S, and

: \vartheta = \min \left\{ \max_{1 \leq i \leq n} \frac{1}{\langle x_i, y \rangle^2} \;:\; x_1, \ldots, x_n, y \text{ are unit vectors, and } \langle x_i, x_j \rangle = 0 \text{ for all } ij \in E \right\}.

The parameter \vartheta is known as the Lovász theta function of the complement of G.


L^p Grothendieck inequality

In the application of the Grothendieck inequality for approximating the cut norm, we have seen that the Grothendieck inequality answers the following question: how well does an optimal solution to the semidefinite program \text{SDP}(A) approximate \|A\|_{\infty \to 1}, which can be viewed as an optimization problem over the unit cube? More generally, we can ask similar questions over convex bodies other than the unit cube.

For instance, the following inequality is due to Naor and Schechtman and independently due to Guruswami et al.: for every n \times n matrix A = (a_{ij}) and every p \geq 2,

: \max_{\sum_{i=1}^n \|x_i\|_2^p \leq 1} \sum_{i=1}^n \sum_{j=1}^n a_{ij} \left\langle x_i, x_j \right\rangle \leq \gamma_p^2 \max_{\sum_{i=1}^n |t_i|^p \leq 1} \sum_{i=1}^n \sum_{j=1}^n a_{ij} t_i t_j,

where

: \gamma_p = \sqrt{2} \left(\frac{\Gamma\left(\frac{p+1}{2}\right)}{\sqrt{\pi}}\right)^{1/p}.

The constant \gamma_p^2 is sharp in the inequality. Stirling's formula implies that \gamma_p^2 = p/e + O(1) as p \to \infty.
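The constant \gamma_p is the L^p norm of a standard Gaussian, so \gamma_p^2 is easy to evaluate numerically. A minimal sketch checking that \gamma_2^2 = 1 and that \gamma_p^2 - p/e stays bounded as p grows:

```python
# gamma_p^2 = 2 * (Gamma((p+1)/2) / sqrt(pi))^(2/p); check gamma_2^2 = 1
# and the asymptotic gamma_p^2 = p/e + O(1).
import math

def gamma_p_squared(p):
    return 2.0 * (math.gamma((p + 1) / 2) / math.sqrt(math.pi)) ** (2 / p)

assert abs(gamma_p_squared(2) - 1.0) < 1e-12
# The difference gamma_p^2 - p/e remains bounded for large p:
for p in (10, 50, 200):
    assert abs(gamma_p_squared(p) - p / math.e) < 1.0
print(round(gamma_p_squared(4), 4))
```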


See also

* Pisier–Ringrose inequality


References

