Hilbert Matrix
In linear algebra, a Hilbert matrix, introduced by Hilbert (1894), is a square matrix with entries being the unit fractions : H_{ij} = \frac{1}{i+j-1}. For example, this is the 5 × 5 Hilbert matrix: : H = \begin{pmatrix} 1 & \frac{1}{2} & \frac{1}{3} & \frac{1}{4} & \frac{1}{5} \\ \frac{1}{2} & \frac{1}{3} & \frac{1}{4} & \frac{1}{5} & \frac{1}{6} \\ \frac{1}{3} & \frac{1}{4} & \frac{1}{5} & \frac{1}{6} & \frac{1}{7} \\ \frac{1}{4} & \frac{1}{5} & \frac{1}{6} & \frac{1}{7} & \frac{1}{8} \\ \frac{1}{5} & \frac{1}{6} & \frac{1}{7} & \frac{1}{8} & \frac{1}{9} \end{pmatrix}. The Hilbert matrix can be regarded as derived from the integral : H_{ij} = \int_0^1 x^{i+j-2} \, dx, that is, as a Gramian matrix for powers of ''x''. It arises in the least squares approximation of arbitrary functions by polynomials. The Hilbert matrices are canonical examples of ill-conditioned matrices, being notoriously difficult to use in numerical computation. For example, the 2-norm condition number of the matrix above is about 4.8 \times 10^5. Historical note: Hilbert (1894) introduced the Hilbert matrix to study the following question in approximation theory: "Assume that I = [a, b] is a real interval. Is it then po ...
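The ill-conditioning is easy to observe numerically. The sketch below (assuming NumPy and SciPy are available; scipy.linalg.hilbert is SciPy's built-in constructor) builds the 5 × 5 Hilbert matrix and computes its 2-norm condition number:

import numpy as np
from scipy.linalg import hilbert

# Build the 5 x 5 Hilbert matrix; with 0-based indices, H[i, j] = 1 / (i + j + 1)
H = hilbert(5)

# 2-norm condition number: ratio of the largest to the smallest singular value
print(np.linalg.cond(H, 2))  # roughly 4.8e5, the value quoted above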


Linear Algebra
Linear algebra is the branch of mathematics concerning linear equations such as: :a_1x_1+\cdots +a_nx_n=b, linear maps such as: :(x_1, \ldots, x_n) \mapsto a_1x_1+\cdots +a_nx_n, and their representations in vector spaces and through matrices. Linear algebra is central to almost all areas of mathematics. For instance, linear algebra is fundamental in modern presentations of geometry, including for defining basic objects such as lines, planes and rotations. Also, functional analysis, a branch of mathematical analysis, may be viewed as the application of linear algebra to spaces of functions. Linear algebra is also used in most sciences and fields of engineering, because it allows modeling many natural phenomena, and computing efficiently with such models. For nonlinear systems, which cannot be modeled directly this way, linear algebra is often used for dealing with first-order approximations, using the fact that the differential of a multivariate function at a point is the linear ma ...
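As a concrete illustration (a minimal sketch assuming NumPy; the coefficients are arbitrary), a system of two linear equations can be written in matrix form and solved directly:

import numpy as np

# Solve the system 2*x1 + x2 = 3, x1 + 3*x2 = 5, i.e. A x = b
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

x = np.linalg.solve(A, b)
print(x)  # [0.8 1.4]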


Cauchy Matrix
In mathematics, a Cauchy matrix, named after Augustin-Louis Cauchy, is an ''m''×''n'' matrix with elements ''a''''ij'' in the form : a_{ij}=\frac{1}{x_i-y_j};\quad x_i-y_j\neq 0,\quad 1 \le i \le m,\quad 1 \le j \le n where x_i and y_j are elements of a field \mathcal{F}, and (x_i) and (y_j) are injective sequences (they contain ''distinct'' elements). The Hilbert matrix is a special case of the Cauchy matrix, where :x_i-y_j = i+j-1. \; Every submatrix of a Cauchy matrix is itself a Cauchy matrix. Cauchy determinants: The determinant of a Cauchy matrix is clearly a rational fraction in the parameters (x_i) and (y_j). If the sequences were not injective, the determinant would vanish; it tends to infinity if some x_i tends to y_j. A subset of its zeros and poles are thus known. The fact is that there are no more zeros and poles: the determinant of a square ''n''×''n'' Cauchy matrix A is known as a Cauchy determinant and can be given explicitly as : \det \mathbf{A}=\frac{\prod_{i<j} (x_j-x_i)(y_i-y_j)}{\prod_{i,j} (x_i-y_j)}     (Schechter 1959, eqn 4; Cau ...
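The closed-form determinant can be checked against a direct numerical computation (a sketch assuming NumPy; the node values x and y below are arbitrary choices satisfying x_i ≠ y_j):

import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([-0.5, -1.5, -2.5])

# Cauchy matrix with entries a_ij = 1 / (x_i - y_j)
A = 1.0 / (x[:, None] - y[None, :])

# Closed form: prod_{i<j} (x_j - x_i)(y_i - y_j) / prod_{i,j} (x_i - y_j)
num = 1.0
n = len(x)
for j in range(n):
    for i in range(j):
        num *= (x[j] - x[i]) * (y[i] - y[j])
den = np.prod(x[:, None] - y[None, :])

print(np.linalg.det(A), num / den)  # the two values agree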


Approximation Theory
In mathematics, approximation theory is concerned with how functions can best be approximated with simpler functions, and with quantitatively characterizing the errors introduced thereby. Note that what is meant by ''best'' and ''simpler'' will depend on the application. A closely related topic is the approximation of functions by generalized Fourier series, that is, approximations based upon summation of a series of terms based upon orthogonal polynomials. One problem of particular interest is that of approximating a function in a computer mathematical library, using operations that can be performed on the computer or calculator (e.g. addition and multiplication), such that the result is as close to the actual function as possible. This is typically done with polynomial or rational (ratio of polynomials) approximations. The objective is to make t ...
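As a small worked example of least-squares polynomial approximation (a sketch assuming NumPy; the target function e^x and the degree 3 are arbitrary choices):

import numpy as np

# Least-squares cubic fit to exp(x) sampled on [0, 1]
xs = np.linspace(0.0, 1.0, 200)
coeffs = np.polynomial.polynomial.polyfit(xs, np.exp(xs), deg=3)

# Maximum error of the fitted polynomial on the sample points
fit = np.polynomial.polynomial.polyval(xs, coeffs)
print(np.max(np.abs(fit - np.exp(xs))))  # small, on the order of 1e-3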


Numerical Linear Algebra
Numerical linear algebra, sometimes called applied linear algebra, is the study of how matrix operations can be used to create computer algorithms which efficiently and accurately provide approximate answers to questions in continuous mathematics. It is a subfield of numerical analysis, and a type of linear algebra. Computers use floating-point arithmetic and cannot exactly represent irrational data, so when a computer algorithm is applied to a matrix of data, it can sometimes increase the difference between a number stored in the computer and the true number that it is an approximation of. Numerical linear algebra uses properties of vectors and matrices to develop computer algorithms that minimize the error introduced by the computer, and is also concerned with ensuring that the algorithm is as efficient as possible. Numerical linear algebra aims to solve problems of continuous mathematics using finite precision computers, so its applications to the natural and social sciences ar ...
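The representation error mentioned here is visible even without matrices (a minimal example in plain Python):

# Decimal 0.1 and 0.2 have no exact binary floating-point representation,
# so their sum differs slightly from 0.3.
print(0.1 + 0.2 == 0.3)        # False
print(abs((0.1 + 0.2) - 0.3))  # about 5.6e-17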




Acta Mathematica
''Acta Mathematica'' is a peer-reviewed open-access scientific journal covering research in all fields of mathematics. According to Cédric Villani, this journal is "considered by many to be the most prestigious of all mathematical research journals". According to the ''Journal Citation Reports'', the journal has a 2020 impact factor of 4.273, ranking it 5th out of 330 journals in the category "Mathematics". Publication history: The journal was established by Gösta Mittag-Leffler in 1882 and is published by Institut Mittag-Leffler, a research institute for mathematics belonging to the Royal Swedish Academy of Sciences. The journal was printed and distributed by Springer from 2006 to 2016. Since 2017, ''Acta Mathematica'' has been published electronically and in print by International Press. Its electronic version is open access without publishing fees. Poincaré episode: The journal's "most famous episode" (according to Villani) concerns Henri Poincaré, who won a prize offered ...


Method Of Moments (statistics)
In statistics, the method of moments is a method of estimation of population parameters. The same principle is used to derive higher moments like skewness and kurtosis. It starts by expressing the population moments (i.e., the expected values of powers of the random variable under consideration) as functions of the parameters of interest. Those expressions are then set equal to the sample moments. The number of such equations is the same as the number of parameters to be estimated. Those equations are then solved for the parameters of interest. The solutions are estimates of those parameters. The method of moments was introduced by Pafnuty Chebyshev in 1887 in the proof of the central limit theorem. The idea of matching empirical moments of a distribution to the population moments dates back at least to Pearson. Method: Suppose that the problem is to estimate k unknown parameters \theta_1, \theta_2, \dots, \theta_k characterizing the distribution f_W(w; \theta) of the random va ...
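For example, a normal distribution has E[W] = \mu and E[W^2] = \mu^2 + \sigma^2, so matching these two population moments to sample moments yields estimates of \mu and \sigma. A minimal sketch (assuming NumPy; the data here are simulated for illustration):

import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(loc=2.0, scale=1.5, size=10_000)  # simulated sample

# Sample moments
m1 = np.mean(w)     # estimates E[W] = mu
m2 = np.mean(w**2)  # estimates E[W^2] = mu^2 + sigma^2

# Solve the two moment equations for the two parameters
mu_hat = m1
sigma_hat = np.sqrt(m2 - m1**2)
print(mu_hat, sigma_hat)  # close to the true values 2.0 and 1.5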


Principal Diagonal
In linear algebra, the main diagonal (sometimes principal diagonal, primary diagonal, leading diagonal, major diagonal, or good diagonal) of a matrix A is the list of entries a_{i,j} where i = j. All off-diagonal elements are zero in a diagonal matrix. The following four matrices have their main diagonals indicated by red ones: :\begin{pmatrix} \color{red}{1} & 0 & 0\\ 0 & \color{red}{1} & 0\\ 0 & 0 & \color{red}{1}\end{pmatrix} \qquad \begin{pmatrix} \color{red}{1} & 0 & 0 & 0 \\ 0 & \color{red}{1} & 0 & 0 \\ 0 & 0 & \color{red}{1} & 0 \end{pmatrix} \qquad \begin{pmatrix} \color{red}{1} & 0 & 0 \\ 0 & \color{red}{1} & 0 \\ 0 & 0 & \color{red}{1} \\ 0 & 0 & 0 \end{pmatrix} \qquad \begin{pmatrix} \color{red}{1} & 0 & 0 & 0 \\ 0 & \color{red}{1} & 0 & 0 \\ 0 & 0 &\color{red}{1} & 0 \\ 0 & 0 & 0 & \color{red}{1} \end{pmatrix} \qquad Antidiagonal: The antidiagonal (sometimes counter diagonal, secondary diagonal, trailing diagonal, minor diagonal, off diagonal, or bad diagonal) of an order N square matrix B is the collection of entries b_{i,j} such that i + j = N+1 for all 1 \leq i, j \leq N. That is, it runs from the top right corner to the bottom left corner. ...
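Both diagonals are straightforward to extract programmatically (a minimal sketch assuming NumPy):

import numpy as np

B = np.arange(1, 10).reshape(3, 3)  # [[1 2 3], [4 5 6], [7 8 9]]

main = np.diag(B)             # main diagonal: [1 5 9]
anti = np.diag(np.fliplr(B))  # antidiagonal:  [3 5 7]
print(main, anti)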


Binomial Coefficient
In mathematics, the binomial coefficients are the positive integers that occur as coefficients in the binomial theorem. Commonly, a binomial coefficient is indexed by a pair of integers n \ge k \ge 0 and is written \tbinom{n}{k}. It is the coefficient of the x^k term in the polynomial expansion of the binomial power (1+x)^n; this coefficient can be computed by the multiplicative formula :\binom{n}{k} = \frac{n \times (n-1) \times \cdots \times (n-k+1)}{k \times (k-1) \times \cdots \times 1}, which using factorial notation can be compactly expressed as :\binom{n}{k} = \frac{n!}{k!(n-k)!}. For example, the fourth power of 1 + x is :\begin{align} (1 + x)^4 &= \tbinom{4}{0} x^0 + \tbinom{4}{1} x^1 + \tbinom{4}{2} x^2 + \tbinom{4}{3} x^3 + \tbinom{4}{4} x^4 \\ &= 1 + 4x + 6 x^2 + 4x^3 + x^4, \end{align} and the binomial coefficient \tbinom{4}{2} = \tfrac{4 \times 3}{2 \times 1} = \tfrac{4!}{2!2!} = 6 is the coefficient of the x^2 term. Arranging the numbers \tbinom{n}{0}, \tbinom{n}{1}, \ldots, \tbinom{n}{n} in successive rows for n=0,1,2,\ldots gives a triangular array called Pascal's triangle, satisfying the recurrence relation :\binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k}. The binomial coefficients occur in many areas of mathematics, a ...
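In code, binomial coefficients are available directly, and the Pascal recurrence is easy to verify (a minimal sketch using only the Python standard library):

from math import comb

print(comb(4, 2))  # 6, the coefficient of x^2 in (1 + x)^4

# Pascal's rule: C(n, k) = C(n-1, k-1) + C(n-1, k)
n, k = 10, 4
assert comb(n, k) == comb(n - 1, k - 1) + comb(n - 1, k)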




Matrix Inverse
In linear algebra, an ''n''-by-''n'' square matrix A is called invertible (also nonsingular or nondegenerate), if there exists an ''n''-by-''n'' square matrix B such that :\mathbf{AB} = \mathbf{BA} = \mathbf{I}_n \ where \mathbf{I}_n denotes the ''n''-by-''n'' identity matrix and the multiplication used is ordinary matrix multiplication. If this is the case, then the matrix B is uniquely determined by A, and is called the (multiplicative) ''inverse'' of A, denoted by A^{-1}. Matrix inversion is the process of finding the matrix B that satisfies the prior equation for a given invertible matrix A. A square matrix that is ''not'' invertible is called singular or degenerate. A square matrix is singular if and only if its determinant is zero. Singular matrices are rare in the sense that if a square matrix's entries are randomly selected from any finite region on the number line or complex plane, the probability that the matrix is singular is 0, that is, it will "almost never" be singular. Non-square matrices (''m''-by-''n'' matrices for which m \neq n) do not hav ...
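Numerically, an inverse can be computed and checked against the defining identity (a minimal sketch assuming NumPy; the matrix is an arbitrary invertible example):

import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])  # det = 10, so A is invertible

A_inv = np.linalg.inv(A)

# A @ A_inv should reproduce the identity matrix up to rounding error
print(np.allclose(A @ A_inv, np.eye(2)))  # True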


Glaisher–Kinkelin Constant
In mathematics, the Glaisher–Kinkelin constant or Glaisher's constant, typically denoted A, is a mathematical constant, related to the K-function and the Barnes G-function. The constant appears in a number of sums and integrals, especially those involving gamma functions and zeta functions. It is named after mathematicians James Whitbread Lee Glaisher and Hermann Kinkelin. Its approximate value is: : A = 1.28242712\ldots The Glaisher–Kinkelin constant can be given by the limit: :A=\lim_{n\to\infty} \frac{H(n)}{n^{n^2/2+n/2+1/12}\, e^{-n^2/4}} where H(n) = 1^1\, 2^2 \cdots n^n is the hyperfactorial. This formula displays a similarity between A and \sqrt{2\pi} which is perhaps best illustrated by noting Stirling's formula: :\sqrt{2\pi}=\lim_{n\to\infty} \frac{n!}{n^{n+1/2}\, e^{-n}} which shows that just as \sqrt{2\pi} is obtained from approximation of the factorials, A can also be obtained from a similar approximation to the hyperfactorials. An equivalent definition for A involving the Barnes G-function, given by G(z+1)=\Gamma(z)\, G(z) where \Gamma is the gamma function, is: :A=\lim_{n\to\infty} \frac{(2\pi)^{n/2}\, n^{n^2/2-1/12}\, e^{-3n^2/4+1/12}}{G(n+1)}. The Glaisher–Kinkelin constant also appears in evaluations of ...
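The hyperfactorial limit can be checked numerically; working in log space avoids overflow (a minimal sketch using only the Python standard library):

import math

def glaisher_approx(n):
    # log H(n) = sum of k*log(k); compare with the asymptotic denominator
    log_H = sum(k * math.log(k) for k in range(1, n + 1))
    log_denom = (n**2 / 2 + n / 2 + 1 / 12) * math.log(n) - n**2 / 4
    return math.exp(log_H - log_denom)

print(glaisher_approx(10))    # already close to 1.28...
print(glaisher_approx(2000))  # approaches A = 1.28242712...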


Factorial
In mathematics, the factorial of a non-negative integer n, denoted n!, is the product of all positive integers less than or equal to n. The factorial of n also equals the product of n with the next smaller factorial: :\begin{align} n! &= n \times (n-1) \times (n-2) \times (n-3) \times \cdots \times 3 \times 2 \times 1 \\ &= n\times(n-1)! \end{align} For example, 5! = 5\times 4! = 5 \times 4 \times 3 \times 2 \times 1 = 120. The value of 0! is 1, according to the convention for an empty product. Factorials have been discovered in several ancient cultures, notably in Indian mathematics in the canonical works of Jain literature, and by Jewish mystics in the Talmudic book ''Sefer Yetzirah''. The factorial operation is encountered in many areas of mathematics, notably in combinatorics, where its most basic use counts the possible distinct sequences – the permutations – of n distinct objects: there are n! of them. In mathematical analysis, factorials are used in power series for the exponential function an ...
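The recursive identity translates directly into code (a minimal sketch in plain Python; math.factorial is the standard-library routine):

import math

def factorial(n: int) -> int:
    # n! = n * (n - 1)!, with 0! = 1 as the empty product
    return 1 if n == 0 else n * factorial(n - 1)

print(factorial(5), math.factorial(5))  # 120 120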


Stirling's Approximation
In mathematics, Stirling's approximation (or Stirling's formula) is an approximation for factorials. It is a good approximation, leading to accurate results even for small values of n. It is named after James Stirling, though a related but less precise result was first stated by Abraham de Moivre. One way of stating the approximation involves the logarithm of the factorial: \ln(n!) = n\ln n - n +O(\ln n), where the big O notation means that, for all sufficiently large values of n, the difference between \ln(n!) and n\ln n-n will be at most proportional to the logarithm. In computer science applications such as the worst-case lower bound for comparison sorting, it is convenient to use instead the binary logarithm, giving the equivalent form \log_2 (n!) = n\log_2 n - n\log_2 e +O(\log_2 n). The error term in either base can be expressed more precisely as \tfrac12\log(2\pi n)+O(\tfrac1n), corresponding to an approximate formula for the factorial itself, n! \sim \sqrt{2\pi n}\left(\frac{n}{e}\right)^n ...
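The quality of the approximation is easy to check numerically (a minimal sketch using only the Python standard library; lgamma(n + 1) gives ln(n!)):

import math

n = 50
exact = math.lgamma(n + 1)  # ln(n!)
approx = n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n)
print(exact - approx)  # about 1/(12n), i.e. roughly 0.00167 for n = 50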