Brascamp–Lieb Inequality
In mathematics, the Brascamp–Lieb inequality is either of two inequalities. The first is a result in geometry concerning integrable functions on ''n''-dimensional Euclidean space \mathbb{R}^n. It generalizes the Loomis–Whitney inequality and Hölder's inequality. The second is a result of probability theory which gives a concentration inequality for log-concave probability distributions. Both are named after Herm Jan Brascamp and Elliott H. Lieb.

The geometric inequality

Fix natural numbers ''m'' and ''n''. For 1 ≤ ''i'' ≤ ''m'', let ''n''''i'' ∈ N and let ''c''''i'' > 0 so that
:\sum_{i=1}^m c_i n_i = n.
Choose non-negative, integrable functions
:f_i \in L^1 \left( \mathbb{R}^{n_i} ; [0, + \infty] \right)
and surjective linear maps
:B_i : \mathbb{R}^n \to \mathbb{R}^{n_i}.
Then the following inequality holds:
:\int_{\mathbb{R}^n} \prod_{i=1}^m f_i \left( B_i x \right)^{c_i} \, \mathrm{d} x \leq D^{-1/2} \prod_{i=1}^m \left( \int_{\mathbb{R}^{n_i}} f_i (y) \, \mathrm{d} y \right)^{c_i},
where ''D'' is given by ...
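As a quick sanity check rather than a proof, the following sketch tests the simplest special case numerically: in one dimension, with both maps equal to the identity and c_1 + c_2 = 1, the inequality reduces to Hölder's inequality. The grid, functions and exponents are arbitrary illustrative choices, not part of the statement above.

```python
# A minimal numerical sanity check (not the general theorem): with n = 1,
# m = 2, B_1 = B_2 = identity and c_1 + c_2 = 1, the Brascamp-Lieb inequality
# reduces to Hoelder's inequality
#     int f_1(x)^c_1 f_2(x)^c_2 dx <= (int f_1)^c_1 (int f_2)^c_2.
import numpy as np

x = np.linspace(-10.0, 10.0, 20001)   # quadrature grid; tails are negligible here
dx = x[1] - x[0]

f1 = np.exp(-x**2)                    # a Gaussian bump
f2 = 1.0 / (1.0 + x**2)               # a Cauchy-type profile
c1, c2 = 0.3, 0.7                     # exponents with c1 + c2 = 1

lhs = np.sum(f1**c1 * f2**c2) * dx
rhs = (np.sum(f1) * dx)**c1 * (np.sum(f2) * dx)**c2

print(f"lhs = {lhs:.6f}  <=  rhs = {rhs:.6f}  ->  {lhs <= rhs}")
```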


Mathematics
Mathematics is an area of knowledge that includes the topics of numbers, formulas and related structures, shapes and the spaces in which they are contained, and quantities and their changes. These topics are represented in modern mathematics with the major subdisciplines of number theory, algebra, geometry, and analysis, respectively. There is no general consensus among mathematicians about a common definition for their academic discipline. Most mathematical activity involves the discovery of properties of abstract objects and the use of pure reason to prove them. These objects consist of either abstractions from nature or, in modern mathematics, entities that are stipulated to have certain properties, called axioms. A ''proof'' consists of a succession of applications of deductive rules to already established results. These results include previously proved theorems, axioms, and, in case of abstraction from nature, some basic properties that are considered true starting points of ...


Keith Martin Ball
Keith Martin Ball FRS FRSE (born 26 December 1960) is a mathematician and professor at the University of Warwick. He was scientific director of the International Centre for Mathematical Sciences (ICMS) from 2010 to 2014.

Education

Ball was educated at Berkhamsted School and Trinity College, Cambridge, where he studied the Cambridge Mathematical Tripos and was awarded a Bachelor of Arts degree in mathematics in 1982 and a PhD in 1987 for research supervised by Béla Bollobás.

Research

Keith Ball's research is in the fields of functional analysis, high-dimensional and discrete geometry, and information theory. He is the author of ''Strange Curves, Counting Rabbits, & Other Mathematical Explorations''.

Awards and honours

Ball was elected a Fellow of the American Mathematical Society ...


Cramér–Rao Bound
In estimation theory and statistics, the Cramér–Rao bound (CRB) expresses a lower bound on the variance of unbiased estimators of a deterministic (fixed, though unknown) parameter: the variance of any such estimator is at least as high as the inverse of the Fisher information. Equivalently, it expresses an upper bound on the precision (the inverse of variance) of unbiased estimators: the precision of any such estimator is at most the Fisher information. The result is named in honor of Harald Cramér and C. R. Rao, but it was also derived independently by Maurice Fréchet, Georges Darmois, as well as Alexander Aitken and Harold Silverstone. An unbiased estimator that achieves this lower bound is said to be (fully) ''efficient''. Such a solution achieves the lowest possible mean squared error among all unbiased methods, and is therefore the minimum variance unbiased (MVU) estimator. However, in some cases, no unbiased technique exists which achieves the bound. This may occur ...
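As a concrete illustration (my own example, with arbitrary parameter values): for ''n'' i.i.d. samples from a normal distribution with known standard deviation σ, the Fisher information about the mean is ''n''/σ², so the Cramér–Rao bound is σ²/''n''. The sample mean is unbiased and attains this bound, which the Monte Carlo sketch below checks numerically.

```python
# Monte Carlo check (illustrative sketch): the sample mean of n draws from
# N(mu, sigma^2), sigma known, is unbiased for mu and its variance should sit
# essentially at the Cramer-Rao bound sigma^2 / n.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, trials = 2.0, 3.0, 50, 20000

samples = rng.normal(mu, sigma, size=(trials, n))
estimates = samples.mean(axis=1)          # sample mean, computed per trial

empirical_var = estimates.var()           # observed variance of the estimator
crb = sigma**2 / n                        # Cramer-Rao lower bound

print(f"empirical variance = {empirical_var:.5f}, CRB = {crb:.5f}")
```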


Poincaré Inequality
In mathematics, the Poincaré inequality is a result in the theory of Sobolev spaces, named after the French mathematician Henri Poincaré. The inequality allows one to obtain bounds on a function using bounds on its derivatives and the geometry of its domain of definition. Such bounds are of great importance in the modern, direct methods of the calculus of variations. A very closely related result is Friedrichs' inequality.

Statement of the inequality

The classical Poincaré inequality. Let ''p'' be such that 1 ≤ ''p'' < ∞ and let Ω be a subset bounded in at least one direction. Then there exists a constant ''C'', depending only on Ω and ''p'', such that, for every function ''u'' in the Sobolev space W_0^{1,p}(\Omega) of zero-trace (a.k.a. zero on the boundary) functions,
:\| u \|_{L^p(\Omega)} \leq C \| \nabla u \|_{L^p(\Omega)}. ...
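For intuition, here is a small numerical check (my own example, not part of the statement): on Ω = (0, 1) with ''p'' = 2 and the zero-trace function ''u''(''x'') = sin(π''x''), the two norms can be compared directly; for this particular ''u'' the ratio is exactly 1/π, which matches the known sharp constant in this one-dimensional setting.

```python
# Illustrative 1-D check of ||u||_{L^2} <= C ||u'||_{L^2} on Omega = (0, 1)
# for the zero-trace test function u(x) = sin(pi * x), with C = 1 / pi.
import numpy as np

x = np.linspace(0.0, 1.0, 100001)
dx = x[1] - x[0]

u = np.sin(np.pi * x)                 # u(0) = u(1) = 0 (zero trace)
du = np.pi * np.cos(np.pi * x)        # exact derivative of u

norm_u = np.sqrt(np.sum(u**2) * dx)
norm_du = np.sqrt(np.sum(du**2) * dx)

print(f"||u||_2 = {norm_u:.6f}, (1/pi) * ||u'||_2 = {norm_du / np.pi:.6f}")
```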




Journal Of Functional Analysis
The ''Journal of Functional Analysis'' is a mathematics journal published by Elsevier. Founded by Paul Malliavin, Ralph S. Phillips, and Irving Segal, its editors-in-chief are Daniel W. Stroock, Stefaan Vaes, and Cédric Villani. It is covered in databases including Scopus, the Science Citation Index, and the SCImago Journal Rank service. ...


Nabla Symbol
The nabla is a triangular symbol resembling an inverted Greek delta: \nabla or ∇ (indeed, it is called ανάδελτα in Modern Greek). The name comes, by reason of the symbol's shape, from the Hellenistic Greek word for a Phoenician harp, and was suggested by the encyclopedist William Robertson Smith to Peter Guthrie Tait in correspondence. Letter from Smith to Tait, 10 November 1870: "My dear Sir, The name I propose for ∇ is, as you will remember, Nabla... In Greek the leading form is ναβλᾰ... As to the thing it is a sort of harp and is said by Hieronymus and other authorities to have had the figure of ∇ (an inverted Δ)." Quoted in the Oxford English Dictionary entry "nabla". Notably, it is sometimes claimed to be from the Hebrew nevel (נֶבֶל), as in the Book of Isaiah, chapter 5, verse 12: "וְהָיָה כִנּוֹר וָנֶבֶל תֹּף וְחָלִיל וָיַיִן מִשְׁתֵּיהֶם וְאֵת פֹּעַל יְה ...


Hessian Matrix
In mathematics, the Hessian matrix or Hessian is a square matrix of second-order partial derivatives of a scalar-valued function, or scalar field. It describes the local curvature of a function of many variables. The Hessian matrix was developed in the 19th century by the German mathematician Ludwig Otto Hesse and later named after him. Hesse originally used the term "functional determinants".

Definitions and properties

Suppose f : \R^n \to \R is a function taking as input a vector \mathbf{x} \in \R^n and outputting a scalar f(\mathbf{x}) \in \R. If all second-order partial derivatives of f exist, then the Hessian matrix \mathbf{H} of f is a square n \times n matrix, usually defined and arranged as follows:
:\mathbf{H}_f = \begin{bmatrix} \dfrac{\partial^2 f}{\partial x_1^2} & \dfrac{\partial^2 f}{\partial x_1 \, \partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_1 \, \partial x_n} \\ \dfrac{\partial^2 f}{\partial x_2 \, \partial x_1} & \dfrac{\partial^2 f}{\partial x_2^2} & \cdots & \dfrac{\partial^2 f}{\partial x_2 \, \partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \dfrac{\partial^2 f}{\partial x_n \, \partial x_1} & \dfrac{\partial^2 f}{\partial x_n \, \partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_n^2} \end{bmatrix},
or, by stating an equation for the coefficients using indices i and j,
:(\mathbf{H}_f)_{i,j} = \frac{\partial^2 f}{\partial x_i \, \partial x_j} ...
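To make the definition concrete, here is a short sketch (my own example function) that approximates the Hessian of f(x, y) = x²y + y³ by central finite differences and compares it with the matrix of second partial derivatives computed by hand.

```python
# Build the Hessian of f(x, y) = x**2 * y + y**3 numerically and compare it
# with the exact matrix of second partials at a sample point (illustrative).
import numpy as np

def f(p):
    x, y = p
    return x**2 * y + y**3

def numerical_hessian(func, p, h=1e-4):
    """Central-difference approximation of the Hessian of func at point p."""
    p = np.asarray(p, dtype=float)
    n = p.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i = np.zeros(n); e_i[i] = h
            e_j = np.zeros(n); e_j[j] = h
            H[i, j] = (func(p + e_i + e_j) - func(p + e_i - e_j)
                       - func(p - e_i + e_j) + func(p - e_i - e_j)) / (4 * h**2)
    return H

p0 = np.array([1.0, 2.0])
exact = np.array([[2 * p0[1], 2 * p0[0]],   # d2f/dx2 = 2y,   d2f/dxdy = 2x
                  [2 * p0[0], 6 * p0[1]]])  # d2f/dydx = 2x,  d2f/dy2  = 6y

print(numerical_hessian(f, p0).round(4))
print(exact)
```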


Log-concave Measure
In mathematics, a Borel measure ''μ'' on ''n''-dimensional Euclidean space \mathbb{R}^n is called logarithmically concave (or log-concave for short) if, for any compact subsets ''A'' and ''B'' of \mathbb{R}^n and 0 < ''λ'' < 1, one has
:\mu(\lambda A + (1-\lambda) B) \geq \mu(A)^\lambda \mu(B)^{1-\lambda},
where ''λ'' ''A'' + (1 − ''λ'') ''B'' denotes the Minkowski sum of ''λ'' ''A'' and (1 − ''λ'') ''B''.

Examples

The Brunn–Minkowski inequality asserts that the Lebesgue measure is log-concave. The restriction of the Lebesgue measure to any convex set is also log-concave. By a theorem of Borell, a probability measure on \R^d is log-concave if and only if it has a density with respect to the Lebesgue measure on some affine hyperplane, and this density is a logarithmically concave function. Thus, any Gaussian measure is log-concave. The Prékopa–Leindler inequality shows that a convolution of log-concave ...
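As a spot check (my own example, using intervals so that the Minkowski combination is easy to compute), the sketch below verifies the defining inequality for the standard Gaussian measure on the real line at one choice of ''A'', ''B'' and ''λ''.

```python
# The standard Gaussian measure on R is log-concave, so for intervals A and B
# and 0 < lam < 1 it should satisfy
#     mu(lam*A + (1 - lam)*B) >= mu(A)**lam * mu(B)**(1 - lam),
# where the Minkowski combination of two intervals is again an interval.
from math import erf, sqrt

def gaussian_measure(a, b):
    """Standard Gaussian measure of the interval [a, b]."""
    Phi = lambda t: 0.5 * (1.0 + erf(t / sqrt(2.0)))
    return Phi(b) - Phi(a)

A = (-0.5, 1.0)
B = (2.0, 3.5)
lam = 0.3

combo = (lam * A[0] + (1 - lam) * B[0], lam * A[1] + (1 - lam) * B[1])

lhs = gaussian_measure(*combo)
rhs = gaussian_measure(*A) ** lam * gaussian_measure(*B) ** (1 - lam)
print(f"{lhs:.6f} >= {rhs:.6f} : {lhs >= rhs}")
```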




Positive Definite Matrix
In mathematics, a symmetric matrix M with real entries is positive-definite if the real number z^\textsf{T} M z is positive for every nonzero real column vector z, where z^\textsf{T} is the transpose of z. More generally, a Hermitian matrix (that is, a complex matrix equal to its conjugate transpose) is positive-definite if the real number z^* M z is positive for every nonzero complex column vector z, where z^* denotes the conjugate transpose of z. Positive semi-definite matrices are defined similarly, except that the scalars z^\textsf{T} M z and z^* M z are required to be positive ''or zero'' (that is, nonnegative). Negative-definite and negative semi-definite matrices are defined analogously. A matrix that is not positive semi-definite and not negative semi-definite is sometimes called indefinite. A matrix is thus positive-definite if and only if it is the matrix of a positive-definite quadratic form or Hermitian form. In other words, a matrix is positive-definite if and only if it defines a ...
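As a practical illustration (my own example matrices), a real symmetric matrix can be tested for positive-definiteness either by checking that all eigenvalues are strictly positive or by attempting a Cholesky factorization, which succeeds exactly when the matrix is positive-definite; the sketch below tries both on a definite and an indefinite 2 × 2 matrix.

```python
# Two standard tests for positive-definiteness of a real symmetric matrix:
# strictly positive eigenvalues, or a successful Cholesky factorization.
import numpy as np

def is_positive_definite(M):
    """True if the symmetric matrix M is positive-definite."""
    try:
        np.linalg.cholesky(M)          # fails unless M is positive-definite
        return True
    except np.linalg.LinAlgError:
        return False

A = np.array([[2.0, -1.0], [-1.0, 2.0]])   # eigenvalues 1 and 3  -> definite
B = np.array([[1.0, 2.0], [2.0, 1.0]])     # eigenvalues 3 and -1 -> indefinite

for name, M in (("A", A), ("B", B)):
    print(name, np.linalg.eigvalsh(M), is_positive_definite(M))
```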


Determinant
In mathematics, the determinant is a scalar value that is a function of the entries of a square matrix. It characterizes some properties of the matrix and the linear map represented by the matrix. In particular, the determinant is nonzero if and only if the matrix is invertible and the linear map represented by the matrix is an isomorphism. The determinant of a product of matrices is the product of their determinants (the preceding property is a corollary of this one). The determinant of a matrix ''A'' is denoted det(''A''), det ''A'', or |''A''|. The determinant of a 2 × 2 matrix is
:\begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc,
and the determinant of a 3 × 3 matrix is
:\begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix} = aei + bfg + cdh - ceg - bdi - afh.
The determinant of an n \times n matrix can be defined in several equivalent ways. The Leibniz formula expresses the determinant as a sum of signed products of matrix entries such that each summand is the product of n different entries, and the number of these summands is n!, the factorial of n ...
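To tie the formulas above to something executable, here is a brief sketch (my own example matrices) that checks the 2 × 2 and 3 × 3 expansion rules against numpy.linalg.det and confirms the product property det(AB) = det(A) det(B).

```python
# Check the 2x2 and 3x3 cofactor formulas quoted above against
# numpy.linalg.det, and verify multiplicativity det(AB) = det(A) * det(B).
import numpy as np

A2 = np.array([[1.0, 2.0], [3.0, 4.0]])
a, b, c, d = A2.ravel()
print(np.isclose(np.linalg.det(A2), a * d - b * c))           # 2x2 rule

A3 = np.array([[2.0, 0.0, 1.0], [1.0, 3.0, 2.0], [1.0, 1.0, 4.0]])
a, b, c, d, e, f, g, h, i = A3.ravel()
rule_3x3 = a*e*i + b*f*g + c*d*h - c*e*g - b*d*i - a*f*h
print(np.isclose(np.linalg.det(A3), rule_3x3))                # 3x3 rule

B3 = np.array([[1.0, 1.0, 0.0], [0.0, 2.0, 1.0], [1.0, 0.0, 3.0]])
print(np.isclose(np.linalg.det(A3 @ B3),
                 np.linalg.det(A3) * np.linalg.det(B3)))      # multiplicativity
```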


Logarithmically Concave Function
In convex analysis, a non-negative function f : \R^n \to \R_{\geq 0} is logarithmically concave (or log-concave for short) if its domain is a convex set, and if it satisfies the inequality
:f(\theta x + (1 - \theta) y) \geq f(x)^{\theta} f(y)^{1 - \theta}
for all x, y \in \operatorname{dom} f and 0 < \theta < 1. If f is strictly positive, this is equivalent to saying that the logarithm of the function, \log \circ f, is concave; that is,
:\log f(\theta x + (1 - \theta) y) \geq \theta \log f(x) + (1-\theta) \log f(y)
for all x, y \in \operatorname{dom} f and 0 < \theta < 1. Examples of log-concave functions are the 0-1 indicator functions of convex sets (which requires the more flexible definition), and the Gaussian function. Similarly, a function f is ''log-convex'' if it satisfies the reverse inequality
:f(\theta x + (1 - \theta) y) \leq f(x)^{\theta} f(y)^{1 - \theta}
for all x, y \in \operatorname{dom} f and 0 < \theta < 1.

Properties

* A log-concave function is also quasi-concave. This follows from the fact that the logarithm is monotone, implying that the superlevel sets of this function are convex.
* Every concave function that is nonnegative on its d ...
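As a quick numerical illustration of the defining inequality (my own example), the sketch below samples random points and weights and confirms that the Gaussian function f(x) = exp(−x²), mentioned above as log-concave, satisfies f(θx + (1 − θ)y) ≥ f(x)^θ f(y)^(1−θ) at all of them.

```python
# The Gaussian function f(x) = exp(-x**2) is log-concave, so the defining
# inequality should hold for randomly chosen x, y and theta in (0, 1).
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.exp(-x**2)

x = rng.uniform(-3, 3, 1000)
y = rng.uniform(-3, 3, 1000)
theta = rng.uniform(0, 1, 1000)

lhs = f(theta * x + (1 - theta) * y)
rhs = f(x)**theta * f(y)**(1 - theta)
print("holds at all sampled points:", bool(np.all(lhs >= rhs - 1e-12)))
```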


Identity Function
Graph of the identity function on the real numbers.

In mathematics, an identity function, also called an identity relation, identity map or identity transformation, is a function that always returns the value that was used as its argument, unchanged. That is, when ''f'' is the identity function, the equality ''f''(''x'') = ''x'' is true for all values of ''x'' to which ''f'' can be applied.

Definition

Formally, if ''X'' is a set, the identity function ''f'' on ''X'' is defined to be a function with ''X'' as its domain and codomain, satisfying ''f''(''x'') = ''x'' for all elements ''x'' in ''X''. In other words, the function value ''f''(''x'') in the codomain ''X'' is always the same as the input element ''x'' in the domain ''X''. The identity function on ''X'' is clearly an injective function as well as a surjective function, so it is bijective. The identity function on ''X'' is often denoted by id''X''. In set theory, where a function is defined as a particular kind of binary relation, the identity function is given by the identity relation, or ''diagonal'', of ''X''.

Algebraic properties

If ''f'' : ''X'' → ''Y'' is any function, then we have ...
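A minimal sketch (my own example) of the behaviour described above: the identity function hands back its argument unchanged, and composing another function with it on either side changes nothing.

```python
# The identity function returns its argument unchanged; composing any other
# function with it on either side leaves that function unaffected.
def identity(x):
    return x

square = lambda n: n * n

print(identity(42), identity("abc"))                              # 42 abc
print(square(identity(5)) == identity(square(5)) == square(5))    # True
```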