Approximation Theory
In mathematics, approximation theory is concerned with how functions can best be approximated with simpler functions, and with quantitatively characterizing the errors introduced thereby. Note that what is meant by ''best'' and ''simpler'' will depend on the application. A closely related topic is the approximation of functions by generalized Fourier series, that is, approximations based upon summation of a series of terms based upon orthogonal polynomials. One problem of particular interest is that of approximating a function in a computer mathematical library, using operations that can be performed on the computer or calculator (e.g. addition and multiplication), such that the result is as close to the actual function as possible. This is typically done with polynomial or rational (ratio of polynomials) approximations. The objective is to make t ...
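As a rough illustration of that library-function problem, the following sketch (Python with NumPy; the choice of exp(x) and of degree 5 are mine, not taken from any particular library) fits a low-degree polynomial to exp(x) at Chebyshev nodes on [-1, 1] and measures the worst-case error. A production library would typically use a true minimax (Remez-style) fit instead.

```python
import numpy as np

def chebyshev_nodes(n, a=-1.0, b=1.0):
    """Chebyshev nodes of the first kind mapped to the interval [a, b]."""
    k = np.arange(n)
    x = np.cos((2 * k + 1) * np.pi / (2 * n))
    return 0.5 * (a + b) + 0.5 * (b - a) * x

# Degree-5 polynomial interpolant of exp(x) on [-1, 1] at 6 Chebyshev nodes.
nodes = chebyshev_nodes(6)
coeffs = np.polyfit(nodes, np.exp(nodes), deg=5)

# Estimate the maximum (uniform-norm) error on a fine grid.
grid = np.linspace(-1.0, 1.0, 10_001)
err = np.max(np.abs(np.polyval(coeffs, grid) - np.exp(grid)))
print(f"max |p(x) - exp(x)| on [-1, 1]: {err:.2e}")
```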



Mathematics
Mathematics is an area of knowledge that includes the topics of numbers, formulas and related structures, shapes and the spaces in which they are contained, and quantities and their changes. These topics are represented in modern mathematics with the major subdisciplines of number theory, algebra, geometry, and analysis, respectively. There is no general consensus among mathematicians about a common definition for their academic discipline. Most mathematical activity involves the discovery of properties of abstract objects and the use of pure reason to prove them. These objects consist of either abstractions from nature or, in modern mathematics, entities that are stipulated to have certain properties, called axioms. A ''proof'' consists of a succession of applications of deductive rules to already established results. These results include previously proved theorems, axioms, and, in case of abstraction from nature, some basic properties that are considered true starting points of ...



Chebyshev Polynomials
The Chebyshev polynomials are two sequences of polynomials related to the cosine and sine functions, notated as T_n(x) and U_n(x). They can be defined in several equivalent ways, one of which starts with trigonometric functions: The Chebyshev polynomials of the first kind T_n are defined by : T_n(\cos \theta) = \cos(n\theta). Similarly, the Chebyshev polynomials of the second kind U_n are defined by : U_n(\cos \theta) \sin \theta = \sin\big((n + 1)\theta\big). That these expressions define polynomials in \cos\theta may not be obvious at first sight, but follows by rewriting \cos(n\theta) and \sin\big((n+1)\theta\big) using de Moivre's formula or by using the angle sum formulas for \cos and \sin repeatedly. For example, the double angle formulas, which follow directly from the angle sum formulas, may be used to obtain T_2(\cos\theta)=\cos(2\theta)=2\cos^2\theta-1 and U_1(\cos\theta)\sin\theta=\sin(2\theta)=2\cos\theta\sin\theta, which are respectively a polynomial in \cos\th ...
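A small sketch (Python with NumPy; the helper names chebyshev_T and chebyshev_U are mine) that builds both kinds of Chebyshev polynomials from the standard three-term recurrence and checks the trigonometric identities quoted above at a few angles.

```python
import numpy as np

def chebyshev_T(n, x):
    """First-kind Chebyshev polynomial T_n(x) via T_{k+1} = 2x T_k - T_{k-1}."""
    t_prev, t_curr = np.ones_like(x), x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t_curr = t_curr, 2 * x * t_curr - t_prev
    return t_curr

def chebyshev_U(n, x):
    """Second-kind Chebyshev polynomial U_n(x) via U_{k+1} = 2x U_k - U_{k-1}."""
    u_prev, u_curr = np.ones_like(x), 2 * x
    if n == 0:
        return u_prev
    for _ in range(n - 1):
        u_prev, u_curr = u_curr, 2 * x * u_curr - u_prev
    return u_curr

# Check T_n(cos t) = cos(n t) and U_n(cos t) sin t = sin((n + 1) t) at random angles.
theta = np.random.default_rng(0).uniform(0, np.pi, size=5)
n = 4
assert np.allclose(chebyshev_T(n, np.cos(theta)), np.cos(n * theta))
assert np.allclose(chebyshev_U(n, np.cos(theta)) * np.sin(theta), np.sin((n + 1) * theta))
```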


Orthonormal Basis
In mathematics, particularly linear algebra, an orthonormal basis for an inner product space ''V'' with finite dimension is a basis for V whose vectors are orthonormal, that is, they are all unit vectors and orthogonal to each other. For example, the standard basis for a Euclidean space \R^n is an orthonormal basis, where the relevant inner product is the dot product of vectors. The image of the standard basis under a rotation or reflection (or any orthogonal transformation) is also orthonormal, and every orthonormal basis for \R^n arises in this fashion. For a general inner product space V, an orthonormal basis can be used to define normalized orthogonal coordinates on V. Under these coordinates, the inner product becomes a dot product of vectors. Thus the presence of an orthonormal basis reduces the study of a finite-dimensional inner product space to the study of \R^n under dot product. Every finite-dimensional inner product space has an orthonormal basis, which may be ob ...
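A brief sketch (Python with NumPy; the gram_schmidt helper is my own illustration) of the point made above: once an orthonormal basis is in hand, coordinates are obtained by inner products with the basis vectors, and the inner product of two vectors reduces to the dot product of their coordinate vectors.

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors (classical Gram-Schmidt)."""
    basis = []
    for v in vectors:
        w = v - sum(np.dot(v, e) * e for e in basis)  # remove components along earlier basis vectors
        basis.append(w / np.linalg.norm(w))           # normalize to unit length
    return basis

rng = np.random.default_rng(1)
basis = gram_schmidt([rng.standard_normal(3) for _ in range(3)])

# Coordinates are inner products with the basis; the inner product of two vectors
# equals the dot product of their coordinate vectors.
u, v = rng.standard_normal(3), rng.standard_normal(3)
u_coords = np.array([np.dot(u, e) for e in basis])
v_coords = np.array([np.dot(v, e) for e in basis])
assert np.allclose(np.dot(u, v), np.dot(u_coords, v_coords))
```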



Function Approximation
In general, a function approximation problem asks us to select a function among a well-defined class that closely matches ("approximates") a target function in a task-specific way. The need for function approximations arises in many branches of applied mathematics, and computer science in particular, such as predicting the growth of microbes in microbiology. Function approximations are used where theoretical models are unavailable or hard to compute. One can distinguish two major classes of function approximation problems: First, for known target functions, approximation theory is the branch of numerical analysis that investigates how certain known functions (for example, special functions) can be approximated by a specific class of functions (for example, polynomials or rational functions) that often have desirable properties (inexpensive computation, continuity, integral and limit values, etc.). Second, the target function, call it ''g'', may be unknown; instead of an explicit formula, only a set of poi ...
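For the second class of problems, here is a minimal sketch (Python with NumPy; the target function and noise level are invented for illustration) in which an unknown target ''g'' is available only through sampled points and is approximated by a least-squares polynomial fit.

```python
import numpy as np

# Hypothetical setting: the target g is known only through noisy samples (x_i, y_i).
rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)  # stand-in for the unknown g

# Approximate g by a cubic polynomial chosen by least squares.
coeffs = np.polyfit(x, y, deg=3)
residual_rms = np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))
print("fitted coefficients:", coeffs)
print("residual RMS:", residual_rms)
```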



Fourier Series
A Fourier series is a summation of harmonically related sinusoidal functions, also known as components or harmonics. The result of the summation is a periodic function whose functional form is determined by the choices of cycle length (or ''period''), the number of components, and their amplitudes and phase parameters. With appropriate choices, one cycle (or ''period'') of the summation can be made to approximate an arbitrary function in that interval (or the entire function if it too is periodic). The number of components is theoretically infinite, in which case the other parameters can be chosen to cause the series to converge to almost any ''well behaved'' periodic function (see Pathological and Dirichlet–Jordan test). The components of a particular function are determined by ''analysis'' techniques described in this article. Sometimes the components are known first, and the unknown function is ''synthesized'' by a Fourier series. Such is the case of a discrete-ti ...
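A small sketch (Python with NumPy; the fourier_partial_sum helper and the square-wave example are my own) of the analysis/synthesis idea: coefficients of each harmonic are computed by numerical integration over one period, and the resulting partial sum approximates the original periodic function.

```python
import numpy as np

def fourier_partial_sum(f, n_terms, period=2 * np.pi, n_grid=4096):
    """Approximate f by its first n_terms harmonics, with coefficients computed
    by simple numerical integration (rectangle rule) over one period."""
    t = np.linspace(0.0, period, n_grid, endpoint=False)
    ft = f(t)
    a0 = ft.mean()  # constant (mean) term

    def partial(x):
        s = np.full_like(x, a0, dtype=float)
        for k in range(1, n_terms + 1):
            w = 2 * np.pi * k / period
            a_k = 2 * np.mean(ft * np.cos(w * t))  # cosine coefficient of harmonic k
            b_k = 2 * np.mean(ft * np.sin(w * t))  # sine coefficient of harmonic k
            s += a_k * np.cos(w * x) + b_k * np.sin(w * x)
        return s

    return partial

# Example: partial sums of a square wave converge to it away from the jumps.
square = lambda t: np.sign(np.sin(t))
approx = fourier_partial_sum(square, n_terms=15)
x = np.linspace(0.0, 2 * np.pi, 9)
print(np.round(approx(x), 3))
```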


Estimation Theory
Estimation theory is a branch of statistics that deals with estimating the values of parameters based on measured empirical data that has a random component. The parameters describe an underlying physical setting in such a way that their value affects the distribution of the measured data. An ''estimator'' attempts to approximate the unknown parameters using the measurements. In estimation theory, two approaches are generally considered:
* The probabilistic approach (described in this article) assumes that the measured data is random with probability distribution dependent on the parameters of interest.
* The set-membership approach assumes that the measured data vector belongs to a set which depends on the parameter vector.

Examples
For example, it is desired to estimate the proportion of a population of voters who will vote for a particular candidate. That proportion is the parameter sought; the estimate is based on a small random sample of voters. Alternatively, it ...
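A toy version of the voter example (Python with NumPy; the population proportion and sample size are invented for illustration), using the sample proportion as the estimator and a normal-approximation interval to quantify its uncertainty.

```python
import numpy as np

# The true proportion p is the unknown parameter; the sample proportion estimates it.
rng = np.random.default_rng(3)
true_p = 0.42                          # hypothetical population proportion
sample = rng.random(1_000) < true_p    # each sampled voter supports the candidate with prob. true_p

p_hat = sample.mean()                                  # estimator: sample proportion
se = np.sqrt(p_hat * (1 - p_hat) / sample.size)        # estimated standard error
print(f"estimate {p_hat:.3f} +/- {1.96 * se:.3f} (approx. 95% interval)")
```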


East Journal On Approximations
The East Journal on Approximations is a journal about approximation theory published in Sofia, Bulgaria, a country in Southeast Europe on the eastern flank of the Balkans. It was established in 1995. External links: East Journal on Approximations web site.




Constructive Approximation
''Constructive Approximation'' is "an international mathematics journal dedicated to Approximations and Expansions and related research in computation, function theory, functional analysis, interpolation spaces and interpolation of operators, numerical analysis, space of functions, special functions, and applications." It is an English-language journal established in 1985, published bimonthly by Springer Science+Business Media. External links: Constructive Approximation web site.


Journal Of Approximation Theory
The ''Journal of Approximation Theory'' is "devoted to advances in pure and applied approximation theory and related areas." It is an English-language journal established in 1968, published monthly by Elsevier. External links: the ''Journal of Approximation Theory'' web site and the journal's home page at Elsevier.


Newton's Method
In numerical analysis, Newton's method, also known as the Newton–Raphson method, named after Isaac Newton and Joseph Raphson, is a root-finding algorithm which produces successively better approximations to the roots (or zeroes) of a real-valued function. The most basic version starts with a single-variable function f defined for a real variable x, the function's derivative f', and an initial guess x_0 for a root of f. If the function satisfies sufficient assumptions and the initial guess is close, then : x_1 = x_0 - \frac{f(x_0)}{f'(x_0)} is a better approximation of the root than x_0. Geometrically, (x_1, 0) is the intersection of the x-axis and the tangent of the graph of f at (x_0, f(x_0)): that is, the improved guess is the unique root of the linear approximation at the initial point. The process is repeated as : x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} until a sufficiently precise value is reached. This algorithm is first in the class of Householder's methods, succeeded by Halley's method. The method can also be extended to complex functions an ...
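A minimal sketch of the iteration just described (Python; the newton helper, tolerance, and stopping rule are my own choices), applied to finding sqrt(2) as a root of x^2 - 2.

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson iteration x_{n+1} = x_n - f(x_n) / f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:           # stop once the update is negligibly small
            return x
    raise RuntimeError("Newton's method did not converge")

# Example: the positive root of x^2 - 2 (i.e. sqrt(2)), starting from x0 = 1.
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
print(root)  # 1.4142135623730951
```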




Remez Algorithm
The Remez algorithm or Remez exchange algorithm, published by Evgeny Yakovlevich Remez in 1934, is an iterative algorithm used to find simple approximations to functions, specifically, approximations by functions in a Chebyshev space that are the best in the uniform norm ''L''∞ sense. It is sometimes referred to as Remes algorithm or Reme algorithm. A typical example of a Chebyshev space is the subspace of Chebyshev polynomials of order ''n'' in the space of real continuous functions on an interval, ''C''[''a'', ''b'']. The polynomial of best approximation within a given subspace is defined to be the one that minimizes the maximum absolute difference between the polynomial and the function. In this case, the form of the solution is made precise by the equioscillation theorem.

Procedure
The Remez algorithm starts with the function f to be approximated and a set X of n + 2 sample points x_1, x_2, \ldots, x_{n+2} in the approximation interval, usually the extrema of Chebyshev polynomial linear ...
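A simplified sketch of the exchange idea (Python with NumPy; the remez helper, the crude extrema-picking step, and the fixed iteration count are my own simplifications, not the canonical algorithm): on each pass, solve for the polynomial coefficients together with a levelled error E on the current n + 2 reference points, then move the reference to the extrema of the resulting error curve.

```python
import numpy as np

def remez(f, degree, a=-1.0, b=1.0, n_iter=10):
    """Simplified Remez exchange for the degree-n minimax polynomial of f on [a, b]."""
    m = degree + 2
    # Initial reference: Chebyshev extrema mapped to [a, b].
    x = np.sort(0.5 * (a + b) + 0.5 * (b - a) * np.cos(np.pi * np.arange(m) / (m - 1)))
    grid = np.linspace(a, b, 20001)
    for _ in range(n_iter):
        # Solve  sum_j c_j * x_i**j + (-1)**i * E = f(x_i)  at the m reference points.
        A = np.hstack([np.vander(x, degree + 1, increasing=True),
                       ((-1.0) ** np.arange(m))[:, None]])
        *coeffs, E = np.linalg.solve(A, f(x))
        coeffs = np.array(coeffs)
        # Error on a fine grid; take the m largest local extrema as the new reference
        # (a crude stand-in for the careful exchange step of the real algorithm).
        err = np.polyval(coeffs[::-1], grid) - f(grid)
        mag = np.abs(err)
        candidates = [0, len(grid) - 1] + [i for i in range(1, len(grid) - 1)
                                           if mag[i] >= mag[i - 1] and mag[i] >= mag[i + 1]]
        assert len(candidates) >= m, "error curve has too few extrema for an exchange step"
        x = grid[np.sort(sorted(candidates, key=lambda i: -mag[i])[:m])]
    return coeffs, abs(E)

coeffs, err = remez(np.exp, degree=5)
print("degree-5 levelled error for exp on [-1, 1]:", err)
```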