Ultraspherical Polynomials
Ultraspherical Polynomials
In mathematics, Gegenbauer polynomials or ultraspherical polynomials C_n^{(\alpha)}(x) are orthogonal polynomials on the interval [−1, 1] with respect to the weight function (1 − ''x''^2)^{\alpha - 1/2}. They generalize Legendre polynomials and Chebyshev polynomials, and are special cases of Jacobi polynomials. They are named after Leopold Gegenbauer. Characterizations [Plots of the Gegenbauer polynomials C_n^{(\alpha)}(x) for ''α'' = 1, 2, 3; a plot of C_{10}^{(1)}(x) in the complex plane from −2−2i to 2+2i; and an animation showing ...]
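The polynomials above can be generated from the standard three-term recurrence n C_n^{(\alpha)}(x) = 2x(n + \alpha - 1) C_{n-1}^{(\alpha)}(x) - (n + 2\alpha - 2) C_{n-2}^{(\alpha)}(x). A minimal pure-Python sketch (the function name is illustrative, not a library API):

```python
# Gegenbauer polynomial C_n^{(alpha)}(x) via the three-term recurrence;
# a sketch, not a library implementation.
def gegenbauer(n, alpha, x):
    if n == 0:
        return 1.0
    c_prev, c_curr = 1.0, 2.0 * alpha * x        # C_0 and C_1
    for k in range(2, n + 1):
        # k*C_k = 2x(k+alpha-1)*C_{k-1} - (k+2*alpha-2)*C_{k-2}
        c_prev, c_curr = c_curr, (2.0 * x * (k + alpha - 1) * c_curr
                                  - (k + 2.0 * alpha - 2) * c_prev) / k
    return c_curr
```

Setting α = 1/2 recovers the Legendre polynomials, and α = 1 the Chebyshev polynomials of the second kind, illustrating the generalization mentioned above.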



Mathematics
Mathematics is an area of knowledge that includes the topics of numbers, formulas and related structures, shapes and the spaces in which they are contained, and quantities and their changes. These topics are represented in modern mathematics with the major subdisciplines of number theory, algebra, geometry, and analysis, respectively. There is no general consensus among mathematicians about a common definition for their academic discipline. Most mathematical activity involves the discovery of properties of abstract objects and the use of pure reason to prove them. These objects consist of either abstractions from nature or, in modern mathematics, entities that are stipulated to have certain properties, called axioms. A ''proof'' consists of a succession of applications of deductive rules to already established results. These results include previously proved theorems, axioms, and, in case of abstraction from nature, some basic properties that are considered true starting points of ...


Newtonian Potential
In mathematics, the Newtonian potential or Newton potential is an operator in vector calculus that acts as the inverse to the negative Laplacian, on functions that are smooth and decay rapidly enough at infinity. As such, it is a fundamental object of study in potential theory. In its general nature, it is a singular integral operator, defined by convolution with a function having a mathematical singularity at the origin, the Newtonian kernel Γ, which is the fundamental solution of the Laplace equation. It is named for Isaac Newton, who first discovered it and proved that it was a harmonic function in the special case of three variables, where it served as the fundamental gravitational potential in Newton's law of universal gravitation. In modern potential theory, the Newtonian potential is instead thought of as an electrostatic potential. The Newtonian potential of a compactly supported integrable function ''f'' is defined as the convolution u(x) = \Gamma * f(x) = \int_{\mathbb{R}^d} \Gamma(x - y)\, f(y)\, dy ...
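The convolution above can be approximated numerically. The sketch below assumes the sign convention Γ(x) = −1/(4π|x|) in three dimensions (conventions vary) and takes ''f'' to be the indicator of the unit ball, whose exterior potential is known exactly from Newton's shell theorem to be −vol(B)/(4π|x|) = −1/(3|x|); all names here are illustrative:

```python
import math

# Riemann-sum approximation of u = Gamma * f in R^3, with
# Gamma(x) = -1/(4*pi*|x|) and f the indicator of the unit ball.
def newtonian_potential(x, h=0.05):
    total = 0.0
    n = int(round(2.0 / h))                      # grid over the cube [-1, 1]^3
    for i in range(n):
        for j in range(n):
            for k in range(n):
                y = (-1 + (i + 0.5) * h,
                     -1 + (j + 0.5) * h,
                     -1 + (k + 0.5) * h)
                if y[0]**2 + y[1]**2 + y[2]**2 <= 1.0:   # f(y) = 1 inside the ball
                    r = math.dist(x, y)
                    total += -1.0 / (4.0 * math.pi * r) * h**3
    return total

u = newtonian_potential((2.0, 0.0, 0.0))         # exact value: -1/6
```

At the exterior point (2, 0, 0) the sum is close to −1/6, matching the shell-theorem value for a unit point mass seen from distance 2.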


Romanovski Polynomials
In mathematics, the Romanovski polynomials are one of three finite subsets of real orthogonal polynomials discovered by Vsevolod Romanovsky (Romanovski in French transcription) within the context of probability distribution functions in statistics. They form an orthogonal subset of a more general family of little-known Routh polynomials introduced by Edward John Routh in 1884. The term Romanovski polynomials was put forward by Raposo, with reference to the so-called 'pseudo-Jacobi polynomials' in Lesky's classification scheme. It seems more consistent to refer to them as Romanovski–Routh polynomials, by analogy with the terms Romanovski–Bessel and Romanovski–Jacobi used by Lesky for two other sets of orthogonal polynomials. In contrast to the standard classical orthogonal polynomials, the polynomials under consideration differ in so far as, for arbitrary parameters, only ''a finite number of them are orthogonal'', as discussed in more detail below. The differential equatio ...


Rogers Polynomials
In mathematics, the Rogers polynomials, also called Rogers–Askey–Ismail polynomials and continuous q-ultraspherical polynomials, are a family of orthogonal polynomials introduced by Leonard James Rogers in the course of his work on the Rogers–Ramanujan identities. They are ''q''-analogs of ultraspherical polynomials, and are the Macdonald polynomials for the special case of the ''A''1 affine root system. Askey and Ismail discuss the properties of Rogers polynomials in detail. Definition The Rogers polynomials can be defined in terms of the ''q''-Pochhammer symbol and the basic hypergeometric series by : C_n(x;\beta|q) = \frac{(\beta;q)_n}{(q;q)_n} e^{in\theta}\, {}_2\phi_1(q^{-n},\beta;\beta^{-1}q^{1-n};q,q\beta^{-1}e^{-2i\theta}) where ''x'' = cos(''θ'').
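These polynomials also admit an explicit finite sum, C_n(\cos\theta;\beta|q) = \sum_{k=0}^n \frac{(\beta;q)_k(\beta;q)_{n-k}}{(q;q)_k(q;q)_{n-k}} e^{i(n-2k)\theta}, which is easy to evaluate directly. A sketch assuming that standard expansion (function names are illustrative):

```python
import cmath

def qpoch(a, q, n):
    """q-Pochhammer symbol (a;q)_n = prod_{k=0}^{n-1} (1 - a*q^k)."""
    p = 1.0
    for k in range(n):
        p *= (1 - a * q**k)
    return p

# Continuous q-ultraspherical (Rogers) polynomial from the explicit sum
# C_n(cos t; b | q) = sum_k (b;q)_k (b;q)_{n-k} / ((q;q)_k (q;q)_{n-k}) e^{i(n-2k)t}
def rogers(n, x, beta, q):
    t = cmath.acos(x)
    total = 0.0 + 0.0j
    for k in range(n + 1):
        coeff = (qpoch(beta, q, k) * qpoch(beta, q, n - k)
                 / (qpoch(q, q, k) * qpoch(q, q, n - k)))
        total += coeff * cmath.exp(1j * (n - 2 * k) * t)
    return total.real        # imaginary parts cancel for real x in [-1, 1]
```

As a sanity check, the n = 1 case collapses to C_1(x;β|q) = 2x(1 − β)/(1 − q).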


Diagonal Matrix
In linear algebra, a diagonal matrix is a matrix in which the entries outside the main diagonal are all zero; the term usually refers to square matrices. Elements of the main diagonal can either be zero or nonzero. An example of a 2×2 diagonal matrix is \begin{bmatrix} 3 & 0 \\ 0 & 2 \end{bmatrix}, while an example of a 3×3 diagonal matrix is \begin{bmatrix} 6 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}. An identity matrix of any size, or any multiple of it (a scalar matrix), is a diagonal matrix. A diagonal matrix is sometimes called a scaling matrix, since matrix multiplication with it results in changing scale (size). Its determinant is the product of its diagonal values. Definition As stated above, a diagonal matrix is a matrix in which all off-diagonal entries are zero. That is, the matrix D = (d_{i,j}) with ''n'' columns and ''n'' rows is diagonal if \forall i,j \in \{1, 2, \ldots, n\},\ i \ne j \implies d_{i,j} = 0. However, the main diagonal entries are unrestricted. The term ''diagonal matrix'' may s ...
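The scaling and determinant properties mentioned above can be illustrated in a few lines of pure Python (helper names are illustrative):

```python
# Diagonal matrices as scaling maps; a small pure-Python illustration.
def diag(values):
    n = len(values)
    return [[values[i] if i == j else 0 for j in range(n)] for i in range(n)]

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

def det_diag(m):
    # determinant of a diagonal matrix = product of its diagonal entries
    p = 1
    for i in range(len(m)):
        p *= m[i][i]
    return p

d = diag([3, 2])
print(matvec(d, [1, 1]))   # each coordinate is scaled independently: [3, 2]
print(det_diag(d))         # 3 * 2 = 6
```

Multiplying by diag([3, 2]) stretches the first coordinate by 3 and the second by 2, which is exactly the "scaling matrix" behavior described in the snippet.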


Spectral Methods
Spectral methods are a class of techniques used in applied mathematics and scientific computing to numerically solve certain differential equations. The idea is to write the solution of the differential equation as a sum of certain "basis functions" (for example, as a Fourier series which is a sum of sinusoids) and then to choose the coefficients in the sum in order to satisfy the differential equation as well as possible. Spectral methods and finite element methods are closely related and built on the same ideas; the main difference between them is that spectral methods use basis functions that are generally nonzero over the whole domain, while finite element methods use basis functions that are nonzero only on small subdomains (compact support). Consequently, spectral methods connect variables ''globally'' while finite elements do so ''locally''. Partially for this reason, spectral methods have excellent error properties, with the so-called "exponential convergence" being the ...
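The "expand in basis functions, then solve for the coefficients" idea can be sketched on the periodic Poisson problem u'' = f: in a sine basis, each mode decouples and −k² u_k = f_k. A minimal pure-Python sketch under the assumption that f is a trigonometric polynomial (names are illustrative):

```python
import math

# Fourier-Galerkin sketch: solve u'' = f on [0, 2*pi] with periodic
# boundary conditions by inverting the operator mode by mode.
def solve_poisson(f, kmax=8, npts=256):
    h = 2 * math.pi / npts
    xs = [i * h for i in range(npts)]
    coeffs = {}
    for k in range(1, kmax + 1):
        # sine coefficient f_k by the rectangle rule
        # (exact for trigonometric polynomials of low enough degree)
        fk = sum(f(x) * math.sin(k * x) for x in xs) * h / math.pi
        coeffs[k] = -fk / k**2            # -k^2 u_k = f_k
    return lambda x: sum(uk * math.sin(k * x) for k, uk in coeffs.items())

# f = -sin(x) - 2*sin(2x) has exact solution u = sin(x) + 0.5*sin(2x)
u = solve_poisson(lambda x: -math.sin(x) - 2.0 * math.sin(2 * x))
```

Because the basis functions are global sinusoids, a band-limited right-hand side is resolved to machine precision with only a handful of modes, the behavior behind the "exponential convergence" mentioned above.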


Askey–Gasper Inequality
In mathematics, the Askey–Gasper inequality is an inequality for Jacobi polynomials proved by Richard Askey and George Gasper and used in the proof of the Bieberbach conjecture. Statement It states that if \beta\geq 0, \alpha+\beta\geq -2, and -1\leq x\leq 1 then :\sum_{k=0}^n \frac{P_k^{(\alpha,\beta)}(x)}{P_k^{(\beta,\alpha)}(1)} \ge 0 where :P_k^{(\alpha,\beta)}(x) is a Jacobi polynomial. The case when \beta=0 can also be written as :{}_3F_2 \left (-n,n+\alpha+2,\tfrac{1}{2}(\alpha+1);\tfrac{1}{2}(\alpha+3),\alpha+1;t \right)>0, \qquad 0\leq t<1. In this form, with ''n'' a non-negative integer, the inequality was used by Louis de Branges in his proof of the Bieberbach conjecture. Proof A short proof of this inequality combines an identity expressing the left-hand side as a combination of series of the form :{}_3F_2\left (-n+2j,n-2j+\alpha+1,\tfrac{1}{2}(\alpha+1);\tfrac{1}{2}(\alpha+2),\alpha+1;t \right) with the Clausen inequality. Generalizations Generalizations of the Askey–Gasper inequality to basic hypergeometric series are also known ...
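The β = 0 form is easy to check numerically: the parameter −n makes the ₃F₂ series terminate after n + 1 terms, so it is a finite sum of Pochhammer ratios. A verification sketch (function names are illustrative):

```python
import math

# Numerical check of the beta = 0 form of the Askey-Gasper inequality:
# 3F2(-n, n+alpha+2, (alpha+1)/2; (alpha+3)/2, alpha+1; t) > 0 for 0 <= t < 1.
def poch(a, k):
    """Rising factorial (Pochhammer symbol) (a)_k."""
    p = 1.0
    for i in range(k):
        p *= a + i
    return p

def f32(a1, a2, a3, b1, b2, t, nterms):
    return sum(poch(a1, k) * poch(a2, k) * poch(a3, k)
               / (poch(b1, k) * poch(b2, k)) * t**k / math.factorial(k)
               for k in range(nterms))

def askey_gasper_lhs(n, alpha, t):
    # series terminates at k = n because of the -n parameter
    return f32(-n, n + alpha + 2, (alpha + 1) / 2,
               (alpha + 3) / 2, alpha + 1, t, n + 1)
```

Individual terms alternate in sign and are large, so the positivity of the total is genuinely nontrivial, which is what the Clausen-inequality argument establishes.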




Positive-definite Function
In mathematics, a positive-definite function is, depending on the context, either of two types of function. Most common usage A ''positive-definite function'' of a real variable ''x'' is a complex-valued function f: \mathbb{R} \to \mathbb{C} such that for any real numbers ''x''1, …, ''x''''n'' the ''n'' × ''n'' matrix : A = \left(a_{ij}\right)_{i,j=1}^n~, \quad a_{ij} = f(x_i - x_j) is positive ''semi-''definite (which requires ''A'' to be Hermitian; therefore ''f''(−''x'') is the complex conjugate of ''f''(''x'')). In particular, it is necessary (but not sufficient) that : f(0) \geq 0~, \quad |f(x)| \leq f(0) (these inequalities follow from the condition for ''n'' = 1, 2). A function is ''negative semi-definite'' if the inequality is reversed. A function is ''definite'' if the weak inequality is replaced with a strict one (> 0). Examples If (X, \langle \cdot, \cdot \rangle) is a real inner product space, then g_y \colon X \to \mathbb{C}, x \mapsto \exp(i \langle y, x \rangle ...
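The matrix condition above can be demonstrated concretely for f(x) = cos(x), which is positive definite because a_{ij} = cos(x_i − x_j) = cos(x_i)cos(x_j) + sin(x_i)sin(x_j), so A factors as cc^T + ss^T. A sketch (names are illustrative):

```python
import math

# f(x) = cos(x) is a positive-definite function: for any points x_i,
# the matrix a_ij = cos(x_i - x_j) is positive semi-definite.
def gram(f, pts):
    return [[f(xi - xj) for xj in pts] for xi in pts]

def quadratic_form(a, v):
    return sum(v[i] * a[i][j] * v[j]
               for i in range(len(v)) for j in range(len(v)))

pts = [0.0, 0.7, 1.9, 3.1]
A = gram(math.cos, pts)
# every quadratic form value v^T A v should be >= 0
vals = [quadratic_form(A, v) for v in ([1, -1, 2, 0.5], [0.3, 0.3, -1, 1])]
```

The diagonal entry A[0][0] = f(0) = 1 also illustrates the necessary condition f(0) ≥ 0 stated above.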


Zonal Spherical Harmonic
In the mathematical study of rotational symmetry, the zonal spherical harmonics are special spherical harmonics that are invariant under rotation about a particular fixed axis. The zonal spherical functions are a broad extension of the notion of zonal spherical harmonics to allow for a more general symmetry group. On the two-dimensional sphere, the unique zonal spherical harmonic of degree ℓ invariant under rotations fixing the north pole is represented in spherical coordinates by Z^{(\ell)}(\theta,\phi) = P_\ell(\cos\theta) where P_\ell is a Legendre polynomial of degree ℓ. The general zonal spherical harmonic of degree ℓ is denoted by Z^{(\ell)}_{\mathbf{x}}(\mathbf{y}), where x is a point on the sphere representing the fixed axis, and y is the variable of the function. This can be obtained by rotation of the basic zonal harmonic Z^{(\ell)}(\theta,\phi). In ''n''-dimensional Euclidean space, zonal spherical harmonics are defined as follows. Let x be a point on the (''n''−1)-sphere. Define Z^{(\ell)}_{\mathbf{x}} to be the dual re ...
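The defining formula Z^{(\ell)}(\theta,\phi) = P_\ell(\cos\theta) is straightforward to evaluate with Bonnet's recurrence for the Legendre polynomials; note that φ never enters, which is exactly the axial symmetry described above. A sketch (names are illustrative):

```python
import math

# Legendre polynomial P_l via Bonnet's recurrence:
# k*P_k(t) = (2k-1)*t*P_{k-1}(t) - (k-1)*P_{k-2}(t)
def legendre(l, t):
    if l == 0:
        return 1.0
    p_prev, p_curr = 1.0, t
    for k in range(2, l + 1):
        p_prev, p_curr = p_curr, ((2 * k - 1) * t * p_curr
                                  - (k - 1) * p_prev) / k
    return p_curr

def zonal(l, theta, phi):
    # Z^{(l)}(theta, phi) = P_l(cos theta); phi does not enter (axial symmetry)
    return legendre(l, math.cos(theta))
```

At the north pole (θ = 0) the value is P_ℓ(1) = 1 for every degree ℓ.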



Spherical Harmonics
In mathematics and physical science, spherical harmonics are special functions defined on the surface of a sphere. They are often employed in solving partial differential equations in many scientific fields. Since the spherical harmonics form a complete set of orthogonal functions and thus an orthonormal basis, each function defined on the surface of a sphere can be written as a sum of these spherical harmonics. This is similar to periodic functions defined on a circle that can be expressed as a sum of circular functions (sines and cosines) via Fourier series. Like the sines and cosines in Fourier series, the spherical harmonics may be organized by (spatial) angular frequency, as seen in the rows of functions in the illustration on the right. Further, spherical harmonics are basis functions for irreducible representations of SO(3), the group of rotations in three dimensions, and thus play a central role in the group theoretic discussion of SO(3). Spherical harmonics originate ...


Poisson Kernel
In mathematics, and specifically in potential theory, the Poisson kernel is an integral kernel, used for solving the two-dimensional Laplace equation, given Dirichlet boundary conditions on the unit disk. The kernel can be understood as the derivative of the Green's function for the Laplace equation. It is named for Siméon Poisson. Poisson kernels commonly find applications in control theory and two-dimensional problems in electrostatics. In practice, the definition of Poisson kernels is often extended to ''n''-dimensional problems. Two-dimensional Poisson kernels On the unit disc In the complex plane, the Poisson kernel for the unit disc is given by P_r(\theta) = \sum_{n=-\infty}^\infty r^{|n|} e^{in\theta} = \frac{1 - r^2}{1 - 2r\cos\theta + r^2} = \operatorname{Re}\left(\frac{1 + re^{i\theta}}{1 - re^{i\theta}}\right), \ \ \ 0 \le r < 1. This can be thought of in two ways: either as a function of ''r'' and ''θ'', or as a family of functions of ''θ'' indexed by ''r''. If D = \{z : |z| < 1\} is the open ...
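The equality between the Fourier series and the closed form of the kernel is easy to confirm numerically, since the series converges geometrically for r < 1. A sketch (names are illustrative):

```python
import math, cmath

# Poisson kernel for the unit disc: closed form vs. truncated Fourier series.
def poisson_closed(r, theta):
    return (1 - r**2) / (1 - 2 * r * math.cos(theta) + r**2)

def poisson_series(r, theta, nmax=200):
    # sum_{n=-nmax}^{nmax} r^{|n|} e^{i n theta}; real by n <-> -n symmetry
    s = sum(r**abs(n) * cmath.exp(1j * n * theta)
            for n in range(-nmax, nmax + 1))
    return s.real
```

The truncation error is bounded by 2 r^{nmax+1}/(1 − r), so for r = 0.5 and nmax = 200 the two expressions agree to machine precision; the Re((1+z)/(1−z)) form with z = re^{iθ} matches as well.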