Interpolation Inequality

Interpolation Inequality
In the field of mathematical analysis, an interpolation inequality is an inequality of the form : \|u_0\|_0 \leq C\, \|u_1\|_1^{\alpha_1}\, \|u_2\|_2^{\alpha_2} \dots \|u_n\|_n^{\alpha_n}, \quad n \geq 2, where for 0\leq k \leq n, u_k is an element of some particular vector space X_k equipped with norm \|\cdot\|_k, \alpha_k is some real exponent, and C is some constant independent of u_0,\dots,u_n. The vector spaces concerned are usually function spaces, and many interpolation inequalities assume u_0 = u_1 = \cdots = u_n and so bound the norm of an element in one space by a combination of norms in other spaces, such as Ladyzhenskaya's inequality and the Gagliardo–Nirenberg interpolation inequality, both given below. Nonetheless, some important interpolation inequalities involve distinct elements u_0,\dots,u_n, including Hölder's inequality and Young's inequality for convolutions, which are also presented below. Applications The main applications of interpolation inequalities lie in fields of study, s ...
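
For example, Ladyzhenskaya's inequality on \mathbb{R}^2 fits this general form with u_0 = u_1 = u_2 = u and \alpha_1 = \alpha_2 = 1/2: for smooth, compactly supported u, : \|u\|_{L^4(\mathbb{R}^2)} \leq C\, \|u\|_{L^2(\mathbb{R}^2)}^{1/2}\, \|\nabla u\|_{L^2(\mathbb{R}^2)}^{1/2}, where X_0 = L^4(\mathbb{R}^2) and X_1 = X_2 = L^2(\mathbb{R}^2) (the last norm being applied to the gradient \nabla u).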



Mathematical Analysis
Analysis is the branch of mathematics dealing with continuous functions, limits, and related theories, such as differentiation, integration, measure, infinite sequences, series, and analytic functions. These theories are usually studied in the context of real and complex numbers and functions. Analysis evolved from calculus, which involves the elementary concepts and techniques of analysis. Analysis may be distinguished from geometry; however, it can be applied to any space of mathematical objects that has a definition of nearness (a topological space) or specific distances between objects (a metric space). History Ancient Mathematical analysis formally developed in the 17th century during the Scientific Revolution, but many of its ideas can be traced back to earlier mathematicians. Early results in analysis were i ...


Hölder Condition
In mathematics, a real or complex-valued function ''f'' on ''d''-dimensional Euclidean space satisfies a Hölder condition, or is Hölder continuous, when there are nonnegative real constants ''C'', α > 0, such that : |f(x) - f(y)| \leq C \|x - y\|^{\alpha} for all ''x'' and ''y'' in the domain of ''f''. More generally, the condition can be formulated for functions between any two metric spaces. The number α is called the ''exponent'' of the Hölder condition. A function on an interval satisfying the condition with α > 1 is constant. If α = 1, then the function satisfies a Lipschitz condition. For any α > 0, the condition implies the function is uniformly continuous. The condition is named after Otto Hölder. We have the following chain of strict inclusions for functions over a closed and bounded non-trivial interval of the real line: : Continuously differentiable ⊂ Lipschitz continuous ⊂ α-Hölder continuous ⊂ uniformly continuous ⊂ continuous, where ...
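
A standard example is f(x) = \sqrt{|x|} on the real line, which is Hölder continuous with exponent \alpha = 1/2 but not Lipschitz continuous: : \left|\sqrt{|x|} - \sqrt{|y|}\right| \leq |x - y|^{1/2} for all real x and y, while the difference quotient \sqrt{h}/h = h^{-1/2} is unbounded as h \to 0^+.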


Riesz–Thorin Theorem
In mathematics, the Riesz–Thorin theorem, often referred to as the Riesz–Thorin interpolation theorem or the Riesz–Thorin convexity theorem, is a result about ''interpolation of operators''. It is named after Marcel Riesz and his student G. Olof Thorin. This theorem bounds the norms of linear maps acting between L^p spaces. Its usefulness stems from the fact that some of these spaces have rather simpler structure than others. Usually that refers to L^2, which is a Hilbert space, or to L^1 and L^\infty. Therefore one may prove theorems about the more complicated cases by proving them in two simple cases and then using the Riesz–Thorin theorem to pass from the simple cases to the complicated cases. The Marcinkiewicz theorem is similar but applies also to a class of non-linear maps. Motivation First we need the following definition: :Definition. Let p_0, p_1 be two numbers such that 0 < p_0 < p_1 \leq \infty. Then for 0 < \theta < 1 define p_\theta by: \frac{1}{p_\theta} = \frac{1-\theta}{p_0} + \frac{\theta}{p_1}. By splitting up the function f in L^{p_\theta} as the product |f| = |f|^{1-\theta}\,|f|^{\theta} and applying Hölder's inequality to its ...
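
As a small worked instance of this definition, interpolating between p_0 = 1 and p_1 = 2 with \theta = 1/2 gives : \frac{1}{p_\theta} = \frac{1-\theta}{p_0} + \frac{\theta}{p_1} = \frac{1/2}{1} + \frac{1/2}{2} = \frac{3}{4}, \qquad p_\theta = \frac{4}{3}; this is, for instance, the exponent obtained when the Riesz–Thorin theorem is applied to the Fourier transform, interpolating its L^1 \to L^\infty and L^2 \to L^2 bounds (the Hausdorff–Young inequality).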



Sobolev Inequality
In mathematics, there is in mathematical analysis a class of Sobolev inequalities, relating norms including those of Sobolev spaces. These are used to prove the Sobolev embedding theorem, giving inclusions between certain Sobolev spaces, and the Rellich–Kondrachov theorem showing that under slightly stronger conditions some Sobolev spaces are compactly embedded in others. They are named after Sergei Lvovich Sobolev. Sobolev embedding theorem Let W^{k,p}(\mathbf{R}^n) denote the Sobolev space consisting of all real-valued functions on \mathbf{R}^n whose first k weak derivatives are functions in L^p. Here k is a non-negative integer and 1 \leq p < \infty. The first part of the Sobolev embedding theorem states that if k > \ell and 1 \leq p < q < \infty are two real numbers such that :\frac{1}{q}-\frac{\ell}{n} = \frac{1}{p} -\frac{k}{n}, then :W^{k,p}(\mathbf{R}^n)\subseteq W^{\ell,q}(\mathbf{R}^n) and the embedding is continuous. In the special case of k = 1 and \ell = 0, Sobolev embedding gives :W^{1,p}(\mathbf{R}^n) \subseteq L^{p^*}(\mathbf{R}^n) where p^* is the Sobolev conjugate of p, given by \frac{1}{p^*} = \frac{1}{p} - \frac{1}{n}. (Note that \frac{1}{p^*} < \frac{1}{p}, so p^* > p.) Thus, a ...
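
For instance, with n = 3 and p = 2 the Sobolev conjugate is : \frac{1}{p^*} = \frac{1}{2} - \frac{1}{3} = \frac{1}{6}, \qquad p^* = 6, so the embedding reads W^{1,2}(\mathbf{R}^3) \subseteq L^6(\mathbf{R}^3): a function on \mathbf{R}^3 that is square-integrable together with its weak first derivatives is automatically in L^6.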


Marcinkiewicz Interpolation Theorem
In mathematics, the Marcinkiewicz interpolation theorem, discovered by Józef Marcinkiewicz, is a result bounding the norms of non-linear operators acting on ''L''p spaces. Marcinkiewicz' theorem is similar to the Riesz–Thorin theorem about linear operators, but also applies to non-linear operators. Preliminaries Let ''f'' be a measurable function with real or complex values, defined on a measure space (''X'', ''F'', ω). The distribution function of ''f'' is defined by :\lambda_f(t) = \omega\left\{x \in X : |f(x)| > t\right\}. Then ''f'' is called weak L^1 if there exists a constant ''C'' such that the distribution function of ''f'' satisfies the following inequality for all ''t'' > 0: :\lambda_f(t)\leq \frac{C}{t}. The smallest constant ''C'' in the inequality above is called the weak L^1 norm and is usually denoted by \|f\|_{1,w} or \|f\|_{1,\infty}. Similarly the space is usually denoted by ''L''1,''w'' or ''L''1,∞. (Note: This terminology is a bit misleading since the weak norm does not satisfy the triangl ...
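
A standard example separating weak L^1 from L^1 is f(x) = 1/x on (0,\infty) with Lebesgue measure: : \lambda_f(t) = \left|\left\{x \in (0,\infty) : \tfrac{1}{x} > t\right\}\right| = \frac{1}{t} \quad \text{for all } t > 0, so \|f\|_{1,w} = 1, even though f is not integrable, since \int_0^\infty \frac{dx}{x} = \infty.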




Landau–Kolmogorov Inequality
In mathematics, the Landau–Kolmogorov inequality, named after Edmund Landau and Andrey Kolmogorov, is the following family of interpolation inequalities between different derivatives of a function ''f'' defined on a subset ''T'' of the real numbers: : \|f^{(k)}\|_{L^\infty(T)} \le C(n, k, T)\, \|f\|_{L^\infty(T)}^{1-k/n}\, \|f^{(n)}\|_{L^\infty(T)}^{k/n} \quad \text{for } 1\le k < n.


On the real line

For ''k'' = 1, ''n'' = 2 and ''T'' = [''c'',∞) or ''T'' = R, the inequality was first proved by Edmund Landau with the sharp constants ''C''(2, 1, [''c'',∞)) = 2 and ''C''(2, 1, R) = √2. Following contributions by Jacques Hadamard and Georgiy Shilov, Andrey Kolmogorov found the sharp constants for arbitrary ''n'', ''k'': : C(n, k, \mathbb{R}) = a_{n-k}\, a_n^{-1+k/n}, where ''a''''n'' are the Favard constants.
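
Written out for the case k = 1, n = 2 on the whole line, the inequality with the sharp constant reads : \|f'\|_{L^\infty(\mathbb{R})} \le \sqrt{2}\, \|f\|_{L^\infty(\mathbb{R})}^{1/2}\, \|f''\|_{L^\infty(\mathbb{R})}^{1/2}, so a bounded, twice-differentiable function with bounded second derivative automatically has a bounded first derivative.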


On the half-line

Following work by Matorin and others, the extremising functions were found by ...

Agmon's Inequality
In mathematical analysis, Agmon's inequalities, named after Shmuel Agmon (Lemma 13.2 in: Agmon, Shmuel, ''Lectures on Elliptic Boundary Value Problems'', AMS Chelsea Publishing, Providence, RI, 2010), consist of two closely related interpolation inequalities between the Lebesgue space L^\infty and the Sobolev spaces H^s. They are useful in the study of partial differential equations. Let u\in H^2(\Omega)\cap H^1_0(\Omega) where \Omega\subset\mathbb{R}^3. Then Agmon's inequalities in 3D state that there exists a constant C such that : \|u\|_{L^\infty(\Omega)} \leq C \|u\|_{H^1(\Omega)}^{1/2} \|u\|_{H^2(\Omega)}^{1/2}, and : \|u\|_{L^\infty(\Omega)} \leq C \|u\|_{L^2(\Omega)}^{1/4} \|u\|_{H^2(\Omega)}^{3/4}. In 2D, the first inequality still holds, but not the second: let u\in H^2(\Omega)\cap H^1_0 ...
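
One way to check the exponents is through the Sobolev indices involved: since H^s(\Omega) \subset L^\infty(\Omega) requires s > n/2 = 3/2 in three dimensions, the weighted averages of the indices in the two inequalities are exactly \tfrac{1}{2}\cdot 1 + \tfrac{1}{2}\cdot 2 = \tfrac{3}{2} and \tfrac{1}{4}\cdot 0 + \tfrac{3}{4}\cdot 2 = \tfrac{3}{2}, the borderline case.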



Convolution
In mathematics (in particular, functional analysis), convolution is a mathematical operation on two functions (f and g) that produces a third function (f*g) that expresses how the shape of one is modified by the other. The term ''convolution'' refers to both the result function and to the process of computing it. It is defined as the integral of the product of the two functions after one is reflected about the y-axis and shifted. The choice of which function is reflected and shifted before the integral does not change the integral result (see commutativity). The integral is evaluated for all values of shift, producing the convolution function. Some features of convolution are similar to cross-correlation: for real-valued functions, of a continuous or discrete variable, convolution (f*g) differs from cross-correlation (f \star g) only in that either f or g is reflected about the y-axis in convolution; thus it is a cross-c ...
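
Concretely, for integrable functions on the real line the convolution is : (f*g)(x) = \int_{-\infty}^{\infty} f(y)\, g(x-y)\, dy; for example, taking f = g to be the indicator function of [0,1] gives the triangular "tent" function: (f*f)(x) = x for 0 \le x \le 1, (f*f)(x) = 2-x for 1 \le x \le 2, and 0 otherwise.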



L^p Space
In mathematics, the L^p spaces are function spaces defined using a natural generalization of the p-norm for finite-dimensional vector spaces. They are sometimes called Lebesgue spaces, named after Henri Lebesgue, although according to the Bourbaki group they were first introduced by Frigyes Riesz. L^p spaces form an important class of Banach spaces in functional analysis, and of topological vector spaces. Because of their key role in the mathematical analysis of measure and probability spaces, Lebesgue spaces are used also in the theoretical discussion of problems in physics, statistics, economics, finance, engineering, and other disciplines. Applications Statistics In statistics, measures of central tendency and statistical dispersion, such as the mean, median, and standard deviation, are defined in terms of L^p metrics, and measures of central tendency can be characterized as solutions to variational problems. In penalized regression, "L1 penalty" and "L2 penalty" refer to p ...
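
As a concrete instance of the finite-dimensional p-norm being generalized, take x = (3, 4) in \mathbb{R}^2: : \|x\|_1 = |3| + |4| = 7, \qquad \|x\|_2 = \sqrt{3^2 + 4^2} = 5, \qquad \|x\|_\infty = \max(|3|,|4|) = 4, and in general \|x\|_p = \left(\sum_i |x_i|^p\right)^{1/p}, which is non-increasing in p.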



Gradient
In vector calculus, the gradient of a scalar-valued differentiable function f of several variables is the vector field (or vector-valued function) \nabla f whose value at a point p is the "direction and rate of fastest increase". If the gradient of a function is non-zero at a point p, the direction of the gradient is the direction in which the function increases most quickly from p, and the magnitude of the gradient is the rate of increase in that direction, the greatest absolute directional derivative. Further, a point where the gradient is the zero vector is known as a stationary point. The gradient thus plays a fundamental role in optimization theory, where it is used to maximize a function by gradient ascent. In coordinate-free terms, the gradient of a function f(\mathbf{r}) may be defined by: :df=\nabla f \cdot d\mathbf{r} where ''df'' is the total infinitesimal change in ''f'' for an infinitesimal displacement d\mathbf{r}, and is seen to be maximal when d\mathbf{r} is in the direction of the gradi ...
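
As a small worked example, for f(x, y) = x^2 + 3y the gradient is : \nabla f = \left(\frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}\right) = (2x, 3), so at the point p = (1, 2) the direction of fastest increase is (2, 3) and the rate of increase in that direction is |\nabla f(p)| = \sqrt{13}.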


Compactly Supported
In mathematics, the support of a real-valued function f is the subset of the function domain containing the elements which are not mapped to zero. If the domain of f is a topological space, then the support of f is instead defined as the smallest closed set containing all points not mapped to zero. This concept is used very widely in mathematical analysis. Formulation Suppose that f : X \to \R is a real-valued function whose domain is an arbitrary set X. The support of f, written \operatorname{supp}(f), is the set of points in X where f is non-zero: \operatorname{supp}(f) = \{x \in X : f(x) \neq 0\}. The support of f is the smallest subset of X with the property that f is zero on the subset's complement. If f(x) = 0 for all but a finite number of points x \in X, then f is said to have finite support. If the set X has an additional structure (for example, a topology), then the support of f is defined in an analogous way as the smallest subset of X of an appropriate type such that f vanishes in an appropriate sense on its complement. T ...
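
For example, if f : \R \to \R is defined by f(x) = x(1-x) for x \in (0,1) and f(x) = 0 otherwise, then f is non-zero exactly on the open interval (0,1), its (topological) support is the closure [0,1], and since [0,1] is compact, f is compactly supported.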


Besov Space
In mathematics, the Besov space (named after Oleg Vladimirovich Besov) B^s_{p,q}(\mathbf{R}) is a complete quasinormed space which is a Banach space when 1 \le p, q \le \infty. These spaces, as well as the similarly defined Triebel–Lizorkin spaces, serve to generalize more elementary function spaces such as Sobolev spaces and are effective at measuring regularity properties of functions. Definition Several equivalent definitions exist. One of them is given below. Let : \Delta_h f(x) = f(x-h) - f(x) and define the modulus of continuity by : \omega^2_p(f,t) = \sup_{|h| \le t} \left\| \Delta^2_h f \right\|_p. Let n be a non-negative integer and define s = n + \alpha with 0 < \alpha \le 1. The Besov space B^s_{p,q}(\mathbf{R}) contains all functions f such that : f \in W^{n,p}(\mathbf{R}), \qquad \int_0^\infty \left| \frac{\omega^2_p\left(f^{(n)},t\right)}{t^{\alpha}} \right|^q \frac{dt}{t} < \infty.


Norm

The Besov space B^s_{p,q}(\mathbf{R}) is equipped with the norm : \left\| f \right\|_{B^s_{p,q}(\mathbf{R})} = \left( \|f\|_{W^{n,p}(\mathbf{R})}^q + \int_0^\infty \left| \frac{\omega^2_p\left(f^{(n)},t\right)}{t^{\alpha}} \right|^q \frac{dt}{t} \right)^{\frac{1}{q}}.
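
Two standard identifications, up to equivalent norms, help situate these spaces: : B^s_{2,2}(\mathbf{R}) = H^s(\mathbf{R}) \quad (s > 0), \qquad B^s_{\infty,\infty}(\mathbf{R}) = C^s(\mathbf{R}) \quad (s > 0 \text{ non-integer}), where H^s is the classical Sobolev space and C^s the Hölder space.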