Hamburger Moment Problem
In mathematics, the Hamburger moment problem, named after Hans Ludwig Hamburger, is formulated as follows: given a sequence (''m''0, ''m''1, ''m''2, ...), does there exist a positive Borel measure ''μ'' (for instance, the measure determined by the cumulative distribution function of a random variable) on the real line such that
:m_n = \int_{-\infty}^\infty x^n \, d\mu(x) \quad \text{for all } n = 0, 1, 2, \ldots?
In other words, an affirmative answer to the problem means that (''m''0, ''m''1, ''m''2, ...) is the sequence of moments of some positive Borel measure ''μ''. The Stieltjes moment problem, the Vorobyev moment problem, and the Hausdorff moment problem are similar but replace the real line by [0, +\infty) (Stieltjes and Vorobyev, though Vorobyev formulates the problem in terms of matrix theory) or by a bounded interval (Hausdorff).

Characterization
The Hamburger moment problem is solvable (that is, (''m''''n'') is a sequence of moments) if and only if the corresponding Hankel kernel on the nonnegative integers : ...
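The Hankel-kernel criterion can be checked numerically for any finite stretch of a candidate sequence. Below is a minimal sketch (Python with NumPy; the moments of the standard Gaussian, m_{2k} = (2k - 1)!! with vanishing odd moments, serve purely as an illustrative input): it assembles the truncated Hankel matrix H = (m_{i+j}) and tests positive semidefiniteness via its smallest eigenvalue.

    import numpy as np

    def gaussian_moments(count):
        """Moments of the standard normal: m_0 = 1, m_1 = 0, m_n = (n - 1) m_{n-2}."""
        m = [1.0, 0.0]
        while len(m) < count:
            n = len(m)
            m.append((n - 1) * m[n - 2])
        return m[:count]

    def hankel_psd(moments, tol=1e-9):
        """True if the truncated Hankel matrix (m_{i+j}) is positive semidefinite."""
        k = (len(moments) + 1) // 2
        H = np.array([[moments[i + j] for j in range(k)] for i in range(k)])
        return np.linalg.eigvalsh(H).min() >= -tol

    print(hankel_psd(gaussian_moments(13)))   # True: a genuine moment sequence
    print(hankel_psd([1, 0, -1]))             # False: m_2 < 0 is impossible

Positive semidefiniteness is necessary because \sum_{i,j} c_i c_j m_{i+j} = \int (\sum_i c_i x^i)^2 \, d\mu(x) \ge 0 for every real vector (c_i); the content of the characterization is that this condition is also sufficient.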



Mathematics
Mathematics is an area of knowledge that includes the topics of numbers, formulas and related structures, shapes and the spaces in which they are contained, and quantities and their changes. These topics are represented in modern mathematics with the major subdisciplines of number theory, algebra, geometry, and analysis, respectively. There is no general consensus among mathematicians about a common definition for their academic discipline. Most mathematical activity involves the discovery of properties of abstract objects and the use of pure reason to prove them. These objects consist of either abstractions from nature or, in modern mathematics, entities that are stipulated to have certain properties, called axioms. A ''proof'' consists of a succession of applications of deductive rules to already established results. These results include previously proved theorems, axioms, and, in the case of abstraction from nature, some basic properties that are considered true starting points of ...



Hilbert Space
In mathematics, Hilbert spaces (named after David Hilbert) allow generalizing the methods of linear algebra and calculus from (finite-dimensional) Euclidean vector spaces to spaces that may be infinite-dimensional. Hilbert spaces arise naturally and frequently in mathematics and physics, typically as function spaces. Formally, a Hilbert space is a vector space equipped with an inner product that defines a distance function for which the space is a complete metric space. The earliest Hilbert spaces were studied from this point of view in the first decade of the 20th century by David Hilbert, Erhard Schmidt, and Frigyes Riesz. They are indispensable tools in the theories of partial differential equations, quantum mechanics, Fourier analysis (which includes applications to signal processing and heat transfer), and ergodic theory (which forms the mathematical underpinning of thermodynamics). John von Neumann coined the term ''Hilbert space'' for the abstract concept that under ...
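A finite-dimensional stand-in can make the definition concrete. The sketch below (Python with NumPy; C^5 playing the role of a Hilbert space) computes the inner product, the induced norm and distance, and checks the parallelogram law, the algebraic signature of norms that come from inner products.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=5) + 1j * rng.normal(size=5)   # vectors in C^5
    y = rng.normal(size=5) + 1j * rng.normal(size=5)

    inner = np.vdot(x, y)                    # <x, y>; vdot conjugates its first argument
    norm = lambda v: np.sqrt(np.vdot(v, v).real)

    # Cauchy-Schwarz and the parallelogram law, both consequences of the inner product.
    print(abs(inner) <= norm(x) * norm(y))                    # True
    lhs = norm(x + y)**2 + norm(x - y)**2
    print(np.isclose(lhs, 2 * (norm(x)**2 + norm(y)**2)))     # True

Completeness, the remaining requirement in the definition, is automatic in finite dimensions; it becomes the substantive condition in infinite-dimensional function spaces such as ''L''2.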


Orthogonal Polynomials
In mathematics, an orthogonal polynomial sequence is a family of polynomials such that any two different polynomials in the sequence are orthogonal to each other under some inner product. The most widely used orthogonal polynomials are the classical orthogonal polynomials, consisting of the Hermite polynomials, the Laguerre polynomials and the Jacobi polynomials. The Gegenbauer polynomials form the most important class of Jacobi polynomials; they include the Chebyshev polynomials and the Legendre polynomials as special cases. The field of orthogonal polynomials developed in the late 19th century from a study of continued fractions by P. L. Chebyshev and was pursued by A. A. Markov and T. J. Stieltjes. They appear in a wide variety of fields: numerical analysis (quadrature rules), probability theory, representation theory (of Lie groups, quantum groups, and re ...
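Orthogonality under the relevant inner product can be verified numerically. A small sketch (Python with NumPy; the Legendre family on [-1, 1] with weight 1 is used as the example, and Gauss-Legendre quadrature makes the integrals exact for these degrees):

    import numpy as np
    from numpy.polynomial.legendre import Legendre, leggauss

    nodes, weights = leggauss(10)   # exact for polynomial integrands of degree <= 19

    def inner(p, q):
        """<p, q> = integral over [-1, 1] of p(x) q(x) dx (weight w(x) = 1)."""
        return np.sum(weights * p(nodes) * q(nodes))

    P2, P3 = Legendre.basis(2), Legendre.basis(3)
    print(np.isclose(inner(P2, P3), 0.0))            # distinct degrees: orthogonal
    print(np.isclose(inner(P3, P3), 2 / (2*3 + 1)))  # known norm^2 = 2/(2n+1)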


Carleman's Condition
In mathematics, particularly in analysis, Carleman's condition gives a sufficient condition for the determinacy of the moment problem. That is, if a measure \mu satisfies Carleman's condition, there is no other measure \nu having the same moments as \mu. The condition was discovered by Torsten Carleman in 1922.

Hamburger moment problem
For the Hamburger moment problem (the moment problem on the whole real line), the theorem states the following: Let \mu be a measure on \R such that all the moments
:m_n = \int_{-\infty}^{+\infty} x^n \, d\mu(x)~, \quad n = 0, 1, 2, \cdots
are finite. If
:\sum_{n=1}^\infty m_{2n}^{-\frac{1}{2n}} = + \infty,
then the moment problem for (m_n) is ''determinate''; that is, \mu is the only measure on \R with (m_n) as its sequence of moments.

Stieltjes moment problem
For the Stieltjes moment problem (in mathematics, the Stieltjes moment problem, named after Thomas Joannes Stieltjes, seeks necessary and sufficient conditions for a sequence (''m''0, ''m''1, ''m''2, ...) to be of the form :m_n = \in ...
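For a concrete determinate case, the standard Gaussian has even moments m_{2n} = (2n - 1)!!, and the Carleman terms m_{2n}^{-1/(2n)} decay only like a constant times n^{-1/2}, so the series diverges. A quick numerical sketch (Python, standard library only; the Gaussian moments are an illustrative choice, and logarithms avoid float overflow at large n):

    import math

    def log_m2n(n):
        """log of the even Gaussian moment m_{2n} = (2n - 1)!!."""
        return sum(math.log(k) for k in range(1, 2 * n, 2))

    # Carleman terms m_{2n}^{-1/(2n)}; their partial sums keep growing (divergence),
    # so Carleman's condition certifies that the Gaussian moment problem is determinate.
    terms = [math.exp(-log_m2n(n) / (2 * n)) for n in range(1, 501)]
    for cut in (10, 100, 500):
        print(cut, round(sum(terms[:cut]), 2))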




Hankel Matrix
In linear algebra, a Hankel matrix (or catalecticant matrix), named after Hermann Hankel, is a square matrix in which each ascending skew-diagonal from left to right is constant, e.g.:
:\begin{bmatrix} a & b & c & d & e \\ b & c & d & e & f \\ c & d & e & f & g \\ d & e & f & g & h \\ e & f & g & h & i \end{bmatrix}.
More generally, a Hankel matrix is any n \times n matrix A of the form
:A = \begin{bmatrix} a_0 & a_1 & a_2 & \ldots & \ldots & a_{n-1} \\ a_1 & a_2 & & & & \vdots \\ a_2 & & & & & \vdots \\ \vdots & & & & & a_{2n-4} \\ \vdots & & & & a_{2n-4} & a_{2n-3} \\ a_{n-1} & \ldots & \ldots & a_{2n-4} & a_{2n-3} & a_{2n-2} \end{bmatrix}.
In terms of the components, if the i,j element of A is denoted with A_{ij}, and assuming i \le j, then we have A_{i,j} = A_{i+k,j-k} for all k = 0, \ldots, j-i.

Properties
* The Hankel matrix is a symmetric matrix.
* Let J_n be the n \times n exchange matrix. If H is an m \times n Hankel matrix, then H = T J_n where T is an m \times n Toeplitz matrix.
** If T is real symmetric, then H = T J_n will have the same eigenvalues as T ...
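SciPy ships constructors for both structured families, which makes the anti-diagonal pattern and the Toeplitz relation easy to see. A short sketch (Python; scipy.linalg.hankel and scipy.linalg.toeplitz are the actual SciPy helpers, while the 4x4 data is arbitrary):

    import numpy as np
    from scipy.linalg import hankel, toeplitz

    # Hankel matrix from its first column and last row; H[i, j] depends only on i + j.
    H = hankel([1, 2, 3, 4], [4, 5, 6, 7])
    assert all(H[i, j] == H[i + 1, j - 1] for i in range(3) for j in range(1, 4))

    # Flipping a Toeplitz matrix's columns with the exchange matrix J_n
    # (ones on the anti-diagonal) produces a Hankel matrix: H = T J_n.
    T = toeplitz([1, 2, 3, 4])
    J = np.fliplr(np.eye(4))
    print(np.allclose(T @ J, hankel([4, 3, 2, 1], [1, 2, 3, 4])))   # True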



Polynomial
In mathematics, a polynomial is an expression consisting of indeterminates (also called variables) and coefficients, that involves only the operations of addition, subtraction, multiplication, and positive-integer powers of variables. An example of a polynomial of a single indeterminate is x^2 - 4x + 7. An example with three indeterminates is x^3 + 2xyz^2 - yz + 1. Polynomials appear in many areas of mathematics and science. For example, they are used to form polynomial equations, which encode a wide range of problems, from elementary word problems to complicated scientific problems; they are used to define polynomial functions, which appear in settings ranging from basic chemistry and physics to economics and social science; they are used in calculus and numerical analysis to approximate other functions. In advanced mathematics, polynomials are used to construct polynomial rings and algebraic varieties, which are central concepts in algebra and algebraic geometry.

Etymology
The word ''polynomial'' join ...
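Because a polynomial uses only addition, subtraction, multiplication, and nonnegative integer powers, it can be evaluated with additions and multiplications alone. A tiny sketch (Python; Horner's rule, with x^2 - 4x + 7 as the sample polynomial from above):

    def horner(coeffs, x):
        """Evaluate a polynomial, coefficients from highest degree down (Horner's rule)."""
        value = 0
        for c in coeffs:
            value = value * x + c
        return value

    # x^2 - 4x + 7 at x = 3:  9 - 12 + 7 = 4
    print(horner([1, -4, 7], 3))   # 4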


Extensions Of Symmetric Operators
In functional analysis, one is interested in extensions of symmetric operators acting on a Hilbert space. Of particular importance is the existence, and sometimes explicit constructions, of self-adjoint extensions. This problem arises, for example, when one needs to specify domains of self-adjointness for formal expressions of observables in quantum mechanics. Other applications of solutions to this problem can be seen in various moment problems. This article discusses a few related problems of this type. The unifying theme is that each problem has an operator-theoretic characterization which gives a corresponding parametrization of solutions. More specifically, finding self-adjoint extensions, with various requirements, of symmetric operators is equivalent to finding unitary extensions of suitable partial isometries.

Symmetric operators
Let ''H'' be a Hilbert space. A linear operator ''A'' acting on ''H'' with dense domain Dom(''A'') is symmetric if :\langle ...
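In finite dimensions the defining identity \langle Ax, y\rangle = \langle x, Ay\rangle already forces self-adjointness, which is why the extension problem is genuinely infinite-dimensional; the identity itself, though, is easy to check. A minimal sketch (Python with NumPy; a random Hermitian matrix stands in for a symmetric operator):

    import numpy as np

    rng = np.random.default_rng(1)
    B = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    A = (B + B.conj().T) / 2                    # Hermitian, so symmetric: A = A*

    x = rng.normal(size=4) + 1j * rng.normal(size=4)
    y = rng.normal(size=4) + 1j * rng.normal(size=4)

    # The symmetry identity <Ax, y> = <x, Ay> (np.vdot conjugates its first slot).
    print(np.isclose(np.vdot(A @ x, y), np.vdot(x, A @ y)))   # True

On an infinite-dimensional space the subtlety is the dense domain Dom(''A''): a symmetric operator can fail to be self-adjoint, and the parametrization described in the excerpt measures how far that gap can be closed.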


Multiplication Operator
In operator theory, a multiplication operator is an operator T_f defined on some vector space of functions and whose value at a function \varphi is given by multiplication by a fixed function f. That is,
:T_f\varphi(x) = f(x) \varphi(x)
for all \varphi in the domain of T_f, and all x in the domain of \varphi (which is the same as the domain of f). This type of operator is often contrasted with composition operators. Multiplication operators generalize the notion of operator given by a diagonal matrix. More precisely, one of the results of operator theory is a spectral theorem that states that every self-adjoint operator on a Hilbert space is unitarily equivalent to a multiplication operator on an ''L''2 space.

Example
Consider the Hilbert space X = L^2[-1, 3] of complex-valued square integrable functions on the interval [-1, 3]. With f(x) = x^2, define the operator T_f\varphi(x) = x^2 \varphi(x) for any function \varphi in X. This will be a self-adjoint bounded linear operator, with domain all of X and with norm 9. Its spectrum wil ...
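The example can be mimicked numerically: sampling functions on a grid turns multiplication by f into a diagonal matrix. A rough sketch (Python with NumPy; the 401-point grid is an arbitrary discretization of [-1, 3]):

    import numpy as np

    # Discrete stand-in for T_f on L^2[-1, 3] with f(x) = x^2.
    x = np.linspace(-1, 3, 401)
    T = np.diag(x**2)

    phi = np.exp(-x**2)                         # a sample function in the space
    print(np.allclose(T @ phi, x**2 * phi))     # (T_f phi)(x) = f(x) phi(x)

    # The diagonal entries are the sampled values of f, mirroring the facts that
    # the norm of T_f is sup|f| = 9 and the spectrum is the range of f, [0, 9].
    print(np.diag(T).min(), np.diag(T).max())   # 0.0 9.0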


Self-adjoint Operator
In mathematics, a self-adjoint operator on an infinite-dimensional complex vector space ''V'' with inner product \langle\cdot,\cdot\rangle (equivalently, a Hermitian operator in the finite-dimensional case) is a linear map ''A'' (from ''V'' to itself) that is its own adjoint. If ''V'' is finite-dimensional with a given orthonormal basis, this is equivalent to the condition that the matrix of ''A'' is a Hermitian matrix, i.e., equal to its conjugate transpose ''A''*. By the finite-dimensional spectral theorem, ''V'' has an orthonormal basis such that the matrix of ''A'' relative to this basis is a diagonal matrix with entries in the real numbers. In this article, we consider generalizations of this concept to operators on Hilbert spaces of arbitrary dimension. Self-adjoint operators are used in functional analysis and quantum mechanics. In quantum mechanics their importance lies in the Dirac–von Neumann formulation of quantum mechanics, in which physical observables such as positi ...
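The finite-dimensional spectral theorem quoted above is directly observable in code. A brief sketch (Python with NumPy; a random 3x3 Hermitian matrix as the example):

    import numpy as np

    rng = np.random.default_rng(2)
    B = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    A = (B + B.conj().T) / 2                 # A equals its conjugate transpose A*

    w, U = np.linalg.eigh(A)                 # eigendecomposition of a Hermitian matrix
    print(w)                                             # eigenvalues are real
    print(np.allclose(U.conj().T @ U, np.eye(3)))        # eigenbasis is orthonormal
    print(np.allclose(U.conj().T @ A @ U, np.diag(w)))   # A is diagonal in that basis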




Spectral Measure
In mathematics, the spectral theory of ordinary differential equations is the part of spectral theory concerned with the determination of the spectrum and eigenfunction expansion associated with a linear ordinary differential equation. In his dissertation Hermann Weyl generalized the classical Sturm–Liouville theory on a finite closed interval to second order differential operators with singularities at the endpoints of the interval, possibly semi-infinite or infinite. Unlike the classical case, the spectrum may no longer consist of just a countable set of eigenvalues, but may also contain a continuous part. In this case the eigenfunction expansion involves an integral over the continuous part with respect to a spectral measure, given by the Titchmarsh–Kodaira formula. The theory was put in its final simplified form for singular differential equations of even degree by Kodaira and others, using von Neumann's spectral theorem. It has had important applications in quantum mechani ...
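The regular case that Weyl generalized can be reproduced numerically: discretizing -u'' on [0, \pi] with Dirichlet boundary conditions gives the purely discrete spectrum n^2. A rough sketch (Python with NumPy; the grid size is an arbitrary choice, and the singular, continuous-spectrum situations the excerpt describes are beyond this toy):

    import numpy as np

    # Finite-difference matrix for -u'' on [0, pi] with u(0) = u(pi) = 0.
    N = 500
    h = np.pi / (N + 1)
    L = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
         - np.diag(np.ones(N - 1), -1)) / h**2

    print(np.round(np.linalg.eigvalsh(L)[:5], 2))   # ~ [1, 4, 9, 16, 25] = n^2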

