Hardy–Littlewood Inequality

In mathematical analysis, the Hardy–Littlewood inequality, named after G. H. Hardy and John Edensor Littlewood, states that if f and g are nonnegative measurable real functions vanishing at infinity that are defined on n-dimensional Euclidean space \mathbb{R}^n, then

:\int_{\mathbb{R}^n} f(x)g(x) \, dx \leq \int_{\mathbb{R}^n} f^*(x)g^*(x) \, dx,

where f^* and g^* are the symmetric decreasing rearrangements of f and g, respectively.

The decreasing rearrangement f^* of f is defined via the property that for all r>0 the two super-level sets

:E_f(r)=\left\{x : f(x)>r\right\} \quad \text{and} \quad E_{f^*}(r)=\left\{x : f^*(x)>r\right\}

have the same volume (n-dimensional Lebesgue measure) and E_{f^*}(r) is a ball in \mathbb{R}^n centered at x=0, i.e. it has maximal symmetry.
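The inequality can be illustrated numerically in a simple discrete setting: on a 1-D grid symmetric about the origin, the symmetric decreasing rearrangement assigns the sorted values outward from 0 (largest value closest to the origin), and the sum of pointwise products never decreases. A minimal sketch, in which the grid, the random data, and the helper name are illustrative assumptions rather than anything from the text:

```python
import random

def symmetric_decreasing_rearrangement(values, xs):
    """Assign the values, sorted in decreasing order, to the grid
    points sorted by distance from the origin (largest value at 0)."""
    order = sorted(range(len(xs)), key=lambda i: abs(xs[i]))
    sorted_vals = sorted(values, reverse=True)
    rearranged = [0.0] * len(values)
    for rank, idx in enumerate(order):
        rearranged[idx] = sorted_vals[rank]
    return rearranged

random.seed(42)
n = 101
xs = [(i - n // 2) * 0.1 for i in range(n)]  # grid symmetric about 0
f = [random.random() for _ in range(n)]      # nonnegative sample "functions"
g = [random.random() for _ in range(n)]
f_star = symmetric_decreasing_rearrangement(f, xs)
g_star = symmetric_decreasing_rearrangement(g, xs)

lhs = sum(a * b for a, b in zip(f, g))            # discrete analogue of ∫ f·g
rhs = sum(a * b for a, b in zip(f_star, g_star))  # discrete analogue of ∫ f*·g*
print(lhs <= rhs)  # True
```

In this discrete form the statement reduces to the classical rearrangement inequality, since both rearranged sequences are sorted along the same ordering of grid points.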


Proof

The layer cake representation allows us to write the general functions f and g in the form

:f(x)= \int_0^\infty \chi_{f(x)>r} \, dr \quad \text{and} \quad g(x)= \int_0^\infty \chi_{g(x)>s} \, ds,

where r \mapsto \chi_{f(x)>r} equals 1 for r< f(x) and 0 otherwise. Analogously, s \mapsto \chi_{g(x)>s} equals 1 for s< g(x) and 0 otherwise.

Now the proof can be obtained by first using Fubini's theorem to interchange the order of integration. When integrating with respect to x \in \mathbb{R}^n, the conditions f(x)>r and g(x)>s mean that the indicator functions x \mapsto \chi_{E_f(r)}(x) and x \mapsto \chi_{E_g(s)}(x) appear, with the superlevel sets E_f(r) and E_g(s) as introduced above:

:\int_{\mathbb{R}^n} f(x)g(x) \, dx = \int_{\mathbb{R}^n}\int_0^\infty \chi_{f(x)>r} \, dr \; \int_0^\infty \chi_{g(x)>s} \, ds \, dx = \int_{\mathbb{R}^n}\int_0^\infty \int_0^\infty \chi_{f(x)>r}\; \chi_{g(x)>s} \, dr \, ds \, dx
:::= \int_0^\infty \int_0^\infty \int_{\mathbb{R}^n}\chi_{E_f(r)}(x) \; \chi_{E_g(s)}(x) \, dx \, dr \, ds = \int_0^\infty \int_0^\infty \int_{E_f(r) \cap E_g(s)} \, dx \, dr \, ds.

Denoting by \mu the n-dimensional Lebesgue measure, we continue by estimating the volume of the intersection by the minimum of the volumes of the two sets. Then, we can use the equality of the volumes of the superlevel sets for the rearrangements:

:::= \int_0^\infty \int_0^\infty \mu\left(E_f(r)\cap E_g(s)\right) \, dr \, ds
:::\leq \int_0^\infty \int_0^\infty \min\left\{\mu\left(E_f(r)\right), \mu\left(E_g(s)\right)\right\} \, dr \, ds
:::= \int_0^\infty \int_0^\infty \min\left\{\mu\left(E_{f^*}(r)\right), \mu\left(E_{g^*}(s)\right)\right\} \, dr \, ds.

Now, we use that the superlevel sets E_{f^*}(r) and E_{g^*}(s) are balls in \mathbb{R}^n centered at x=0, which implies that E_{f^*}(r) \cap E_{g^*}(s) is exactly the smaller of the two balls:

:::= \int_0^\infty \int_0^\infty \mu\left( E_{f^*}(r) \cap E_{g^*}(s) \right) \, dr \, ds
:::= \int_{\mathbb{R}^n} f^*(x)g^*(x) \, dx.

The last identity follows by reversing the initial five steps, which work even for general functions. This finishes the proof.
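The opening steps of the proof (layer cake representation plus Fubini) can be sanity-checked numerically in one dimension: computing \int f g \, dx directly and via the double integral of \mu\left(E_f(r)\cap E_g(s)\right) should agree up to discretization error. A rough sketch, where the sample functions, grid, and tolerance are illustrative assumptions:

```python
import math

# Discretize two nonnegative sample functions on [-3, 3].
dx = 0.01
xs = [i * dx for i in range(-300, 301)]
f = [math.exp(-x * x) for x in xs]        # max value 1, attained at x = 0
g = [1.0 / (1.0 + x * x) for x in xs]     # max value 1, attained at x = 0

# Direct computation of the inner product ∫ f(x) g(x) dx.
direct = sum(fv * gv for fv, gv in zip(f, g)) * dx

# Layer cake: ∫∫ μ(E_f(r) ∩ E_g(s)) dr ds over the level range (0, 1].
dr = 0.01
levels = [(k + 0.5) * dr for k in range(100)]  # midpoint rule in r and s
layered = 0.0
for r in levels:
    for s in levels:
        # μ of the intersection of the two superlevel sets at (r, s)
        inter = sum(dx for fv, gv in zip(f, g) if fv > r and gv > s)
        layered += inter * dr * dr

print(abs(direct - layered) < 0.05)  # True
```

The agreement is only approximate because both the x-grid and the (r, s)-levels are discretized, but it mirrors exactly the exchange of integration order used in the proof.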


An application

Let X be a normally distributed random variable with mean \mu and finite non-zero variance \sigma^2. Using the Hardy–Littlewood inequality, it can be proved that for 0<\delta <1 the \delta^{\text{th}} reciprocal moment of the absolute value of X is bounded above as

: \operatorname{E}\left[\frac{1}{|X|^{\delta}}\right] \leq \frac{2^{\frac{1-\delta}{2}}\,\Gamma\left(\frac{1-\delta}{2}\right)}{\sigma^{\delta}\sqrt{2\pi}} \quad \text{for all } \mu\in \mathbb{R}.

The idea is that x \mapsto |x|^{-\delta} is already symmetric decreasing, so rearranging the normal density, which replaces the density centered at \mu by the one centered at 0, can only increase the integral; the bound is the explicit value of the resulting \mu=0 integral. The technique used to obtain the above property of the normal distribution can be applied to other unimodal distributions.
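Assuming, for illustration, the closed-form bound \operatorname{E}[1/|X|^{\delta}] \leq 2^{(1-\delta)/2}\,\Gamma((1-\delta)/2)/(\sigma^{\delta}\sqrt{2\pi}) for X \sim N(\mu, \sigma^2) (the value of the \mu=0 integral, which reduces to 1 at \delta=0 as it must), a Monte Carlo sketch can check it; the function name and sample count are illustrative choices, not from the text:

```python
import math
import random

def reciprocal_moment_bound(delta, sigma):
    # Assumed closed form of the Hardy-Littlewood-based bound:
    # the mu = 0 value of E[|X|^(-delta)], which dominates every mu.
    return (2 ** ((1 - delta) / 2) * math.gamma((1 - delta) / 2)
            / (sigma ** delta * math.sqrt(2 * math.pi)))

random.seed(7)
delta, mu, sigma = 0.5, 1.0, 1.0
samples = 200_000
# Monte Carlo estimate of E[1/|X|^delta] for X ~ N(mu, sigma^2).
mc = sum(1.0 / abs(random.gauss(mu, sigma)) ** delta
         for _ in range(samples)) / samples

bound = reciprocal_moment_bound(delta, sigma)
print(mc <= bound)  # True
```

Because \mu \neq 0 here, the estimate should sit strictly below the bound; equality would be approached only as \mu \to 0.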


See also

* Rearrangement inequality
* Chebyshev's sum inequality
* Lorentz space

