Bernstein's Theorem (Approximation Theory)
In approximation theory, Bernstein's theorem is a converse to Jackson's theorem. The first results of this type were proved by Sergei Bernstein in 1912. For approximation by trigonometric polynomials, the result is as follows: Let ''f'': [0, 2''π''] → C be a 2''π''-periodic function, and assume ''r'' is a natural number, and 0 < ''α'' < 1. If there exists a number ''C''(''f'') > 0 and a sequence of trigonometric polynomials ''P''''n'', ''n'' ≥ ''n''0, such that
: \deg P_n = n, \qquad \sup_{0 \le x \le 2\pi} |f(x) - P_n(x)| \le \frac{C(f)}{n^{r+\alpha}},
then ''f'' = ''P''''n''0 + ''φ'', where ''φ'' has a bounded ''r''-th derivative which is ''α''-Hölder continuous.
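The correspondence between approximation rate and smoothness can be illustrated numerically. In the sketch below (the lacunary test function, the grid, and the degrees are arbitrary illustrative choices), the Weierstrass-type series \sum_j 2^{-j\alpha}\cos(2^j x) is ''α''-Hölder but no smoother, and its uniform distance from trigonometric polynomials of degree at most ''n'' decays roughly like ''n''^(−''α''), the ''r'' = 0 case of the rate above.
<syntaxhighlight lang="python">
# Illustrative sketch: the lacunary (Weierstrass-type) series below is
# alpha-Hoelder, and its sup-norm error of approximation by trigonometric
# polynomials of degree <= n behaves roughly like n**(-alpha)  (r = 0 case).
import numpy as np

alpha = 0.5
J = 12                                      # terms kept in the (truncated) series
x = np.linspace(0.0, 2 * np.pi, 4096, endpoint=False)

def f(x):
    return sum(2.0 ** (-j * alpha) * np.cos(2.0 ** j * x) for j in range(J))

fx = f(x)
for n in (2, 8, 32, 128, 512):
    # Keeping only the terms with frequency 2**j <= n gives a trigonometric
    # polynomial of degree at most n.
    p = sum(2.0 ** (-j * alpha) * np.cos(2.0 ** j * x)
            for j in range(J) if 2 ** j <= n)
    err = np.max(np.abs(fx - p))
    print(f"n = {n:4d}   sup error = {err:.4f}   n**(-alpha) = {n ** (-alpha):.4f}")
</syntaxhighlight>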
Approximation Theory
In mathematics, approximation theory is concerned with how functions can best be approximated with simpler functions, and with quantitatively characterizing the errors introduced thereby. What is meant by ''best'' and ''simpler'' will depend on the application. A closely related topic is the approximation of functions by generalized Fourier series, that is, approximations based upon summation of a series of terms based upon orthogonal polynomials. One problem of particular interest is that of approximating a function in a computer mathematical library, using operations that can be performed on the computer or calculator (e.g. addition and multiplication), such that the result is as close to the actual function as possible. This is typically done with polynomial or rational (ratio of polynomials) approximations.
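An illustrative sketch of the library-function problem (the choice of exp, the interval [−1, 1], and the degrees are arbitrary, and a Chebyshev least-squares fit from NumPy is used as a simple stand-in for a true minimax approximation):
<syntaxhighlight lang="python">
# Sketch: polynomial approximation of a library function (here exp) on [-1, 1].
# A Chebyshev-basis least-squares fit is used as a simple stand-in for the
# best uniform (minimax) approximation.
import numpy as np
from numpy.polynomial import chebyshev as C

xs = np.linspace(-1.0, 1.0, 2001)      # dense grid for estimating the sup-norm error
ys = np.exp(xs)

for deg in range(1, 9):
    coeffs = C.chebfit(xs, ys, deg)    # fit in the Chebyshev basis
    err = np.max(np.abs(ys - C.chebval(xs, coeffs)))
    print(f"degree {deg}: sup error ~ {err:.2e}")
</syntaxhighlight>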
Jackson's Inequality
In approximation theory, Jackson's inequality is an inequality bounding the value of a function's best approximation by algebraic or trigonometric polynomials in terms of the modulus of continuity or modulus of smoothness of the function or of its derivatives. Informally speaking, the smoother the function is, the better it can be approximated by polynomials.

Statement: trigonometric polynomials. For trigonometric polynomials, the following was proved by Dunham Jackson:
:Theorem 1: If f\colon [0,2\pi] \to \mathbb{C} is an r times differentiable periodic function such that
:: \left| f^{(r)}(x) \right| \leq 1, \qquad x \in [0,2\pi],
:then, for every positive integer n, there exists a trigonometric polynomial T_{n-1} of degree at most n-1 such that
:: \left| f(x) - T_{n-1}(x) \right| \leq \frac{C(r)}{n^r}, \qquad x \in [0,2\pi],
:where C(r) depends only on r.

The Akhiezer–Krein–Favard theorem gives the sharp value of C(r) (called the Akhiezer–Krein–Favard constant):
: C(r) = \frac{4}{\pi} \sum_{k=0}^{\infty} \frac{(-1)^{k(r+1)}}{(2k+1)^{r+1}}.
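The series above can be checked numerically against the classical closed forms of the first Favard constants, C(0) = 1, C(1) = π/2, C(2) = π²/8 (a short illustrative sketch; the truncation length is arbitrary):
<syntaxhighlight lang="python">
# Numerical check of  C(r) = (4/pi) * sum_{k>=0} (-1)**(k*(r+1)) / (2k+1)**(r+1)
# against the classical closed forms C(0) = 1, C(1) = pi/2, C(2) = pi**2/8.
import math

def favard(r, terms=200_000):
    return 4.0 / math.pi * sum((-1) ** (k * (r + 1)) / (2 * k + 1) ** (r + 1)
                               for k in range(terms))

print(favard(0), 1.0)              # converges slowly (alternating series) for r = 0
print(favard(1), math.pi / 2)
print(favard(2), math.pi ** 2 / 8)
</syntaxhighlight>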
Sergei Bernstein
Sergei Natanovich Bernstein (Russian: Серге́й Ната́нович Бернште́йн; 5 March 1880 – 26 October 1968) was a Ukrainian and Russian mathematician of Jewish origin known for contributions to partial differential equations, differential geometry, probability theory, and approximation theory.

Work. Partial differential equations: In his doctoral dissertation, submitted in 1904 to the Sorbonne, Bernstein solved Hilbert's nineteenth problem on the analytic solution of elliptic differential equations. His later work was devoted to Dirichlet's boundary problem for non-linear equations of elliptic type, where, in particular, he introduced a priori estimates.

Probability theory: In 1917, Bernstein suggested the first axiomatic foundation of probability theory, based on the underlying algebraic structure. It was later superseded by the measure-theoretic approach of Kolmogorov. In the 1920s, he introduced a method for proving limit theorems for sums of dependent random variables.
Trigonometric Polynomials
In the mathematical subfields of numerical analysis and mathematical analysis, a trigonometric polynomial is a finite linear combination of functions sin(''nx'') and cos(''nx'') with ''n'' taking on the values of one or more natural numbers. The coefficients may be taken as real numbers, for real-valued functions. For complex coefficients, there is no difference between such a function and a finite Fourier series. Trigonometric polynomials are widely used, for example in trigonometric interpolation applied to the interpolation of periodic functions. They are used also in the discrete Fourier transform. The term ''trigonometric polynomial'' for the real-valued case can be seen as using the analogy: the functions sin(''nx'') and cos(''nx'') are similar to the monomial basis for polynomials. In the complex case the trigonometric polynomials are spanned by the positive and negative powers of ''e''''ix'', i.e., they are Laurent polynomials in ''z'' under the change of variables ''z'' = ''e''''ix''.
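A short illustrative sketch of trigonometric interpolation through the discrete Fourier transform (the test function and the number of nodes are arbitrary choices):
<syntaxhighlight lang="python">
# Sketch: the trigonometric polynomial interpolating a periodic function at N
# equally spaced nodes, obtained from the DFT of its samples and written as a
# Laurent polynomial in e^{ix} with frequencies -N/2 .. N/2 - 1.
import numpy as np

N = 16
nodes = 2 * np.pi * np.arange(N) / N

def f(x):
    return np.exp(np.sin(x))                # a smooth 2*pi-periodic test function

c = np.fft.fft(f(nodes)) / N                # DFT coefficients of the samples
freqs = np.fft.fftfreq(N, d=1.0 / N)        # integer frequencies 0, 1, ..., -2, -1

def trig_interp(x):
    return np.real(sum(ck * np.exp(1j * k * x) for ck, k in zip(c, freqs)))

xs = np.linspace(0.0, 2 * np.pi, 1000)
print("max interpolation error:", np.max(np.abs(f(xs) - trig_interp(xs))))
</syntaxhighlight>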
Periodic Function
A periodic function is a function that repeats its values at regular intervals. For example, the trigonometric functions, which repeat at intervals of 2\pi radians, are periodic functions. Periodic functions are used throughout science to describe oscillations, waves, and other phenomena that exhibit periodicity. Any function that is not periodic is called aperiodic.

Definition. A function ''f'' is said to be periodic if, for some nonzero constant ''P'', it is the case that
:f(x+P) = f(x)
for all values of ''x'' in the domain. A nonzero constant ''P'' for which this is the case is called a period of the function. If there exists a least positive constant ''P'' with this property, it is called the fundamental period (also primitive period, basic period, or prime period). Often, "the" period of a function is used to mean its fundamental period. A function with period ''P'' will repeat on intervals of length ''P'', and these intervals are sometimes also referred to as periods of the function. Geometrically, a periodic function can be defined as a function whose graph exhibits translational symmetry.
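A tiny illustrative sketch (the helper name ''periodize'' and the sawtooth example are ad hoc): a function given on [0, ''P'') extends to a ''P''-periodic function of a real variable by reducing the argument modulo ''P''.
<syntaxhighlight lang="python">
# Sketch: periodic extension of a function defined on [0, P) via x -> g(x mod P).
def periodize(g, P):
    return lambda x: g(x % P)

saw = periodize(lambda x: x, 1.0)     # sawtooth wave with period P = 1
for x in (0.25, 1.25, -0.75, 7.25):
    print(x, saw(x))                  # every value printed is 0.25: values repeat each period
</syntaxhighlight>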
Natural Number
In mathematics, the natural numbers are those numbers used for counting (as in "there are ''six'' coins on the table") and ordering (as in "this is the ''third'' largest city in the country"). Numbers used for counting are called ''cardinal numbers'', and numbers used for ordering are called ''ordinal numbers''. Natural numbers are sometimes used as labels, known as ''nominal numbers'', having none of the properties of numbers in a mathematical sense (e.g. sports jersey numbers). Some definitions, including the standard ISO 80000-2, begin the natural numbers with 0, corresponding to the non-negative integers 0, 1, 2, 3, ..., whereas others start with 1, corresponding to the positive integers 1, 2, 3, ... . Texts that exclude zero from the natural numbers sometimes refer to the natural numbers together with zero as the whole numbers, while in other writings, that term is used instead for the integers (including negative integers).
Hölder Condition
In mathematics, a real or complex-valued function ''f'' on ''d''-dimensional Euclidean space satisfies a Hölder condition, or is Hölder continuous, when there are real constants ''C'' ≥ 0 and α > 0 such that
: |f(x) - f(y)| \leq C \|x - y\|^{\alpha}
for all ''x'' and ''y'' in the domain of ''f''. More generally, the condition can be formulated for functions between any two metric spaces. The number α is called the ''exponent'' of the Hölder condition. A function on an interval satisfying the condition with α > 1 is constant. If α = 1, then the function satisfies a Lipschitz condition. For any α > 0, the condition implies the function is uniformly continuous. The condition is named after Otto Hölder.

We have the following chain of strict inclusions for functions over a closed and bounded non-trivial interval of the real line:
: Continuously differentiable ⊂ Lipschitz continuous ⊂ α-Hölder continuous ⊂ uniformly continuous ⊂ continuous,
where 0 < α ≤ 1.
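The definition can be illustrated numerically (the example function and the random sampling below are ad hoc choices): f(x) = √|x| on [−1, 1] is Hölder continuous with exponent 1/2 but not Lipschitz, so the quotient |f(x) − f(y)|/|x − y|^α stays bounded for α = 1/2 and is unbounded for α = 1.
<syntaxhighlight lang="python">
# Sketch: f(x) = sqrt(|x|) satisfies the Hoelder condition with exponent 1/2
# (quotients bounded by 1) but is not Lipschitz (quotients blow up near 0).
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 10_000)
y = rng.uniform(-1.0, 1.0, 10_000)
f = lambda t: np.sqrt(np.abs(t))

for alpha in (0.5, 1.0):
    q = np.abs(f(x) - f(y)) / np.abs(x - y) ** alpha
    print(f"alpha = {alpha}: largest observed quotient = {q.max():.2f}")
</syntaxhighlight>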
Bernstein's Lethargy Theorem
In mathematics, a lethargy theorem is a statement about the distance of points in a metric space from members of a sequence of subspaces; one application in numerical analysis is to approximation theory, where such theorems quantify the difficulty of approximating general functions by functions of special form, such as polynomials. In more recent work, the convergence of a sequence of operators is studied: these operators generalise the projections of the earlier work.

Bernstein's lethargy theorem. Let V_1 \subset V_2 \subset \ldots be a strictly ascending sequence of finite-dimensional linear subspaces of a Banach space ''X'', and let \epsilon_1 \ge \epsilon_2 \ge \ldots be a decreasing sequence of real numbers tending to zero. Then there exists a point ''x'' in ''X'' such that the distance of ''x'' to ''V''''i'' is exactly \epsilon_i.

See also: Bernstein's theorem (approximation theory).
Constructive Function Theory
In mathematical analysis, constructive function theory is a field which studies the connection between the smoothness of a function and its degree of approximation. It is closely related to approximation theory. The term was coined by Sergei Bernstein.

Example. Let ''f'' be a 2''π''-periodic function. Then ''f'' is ''α''-Hölder for some 0 < ''α'' < 1 if and only if for every natural ''n'' there exists a trigonometric polynomial ''P''''n'' of degree ''n'' such that
: \max_{0 \le x \le 2\pi} |f(x) - P_n(x)| \le \frac{C(f)}{n^{\alpha}},
where ''C''(''f'') is a positive number depending on ''f''. The "only if" direction is due to Dunham Jackson (see Jackson's inequality); the "if" direction is Bernstein's theorem (approximation theory).