Analytic Combinatorics
Analytic combinatorics uses techniques from complex analysis to solve problems in enumerative combinatorics, specifically to find asymptotic estimates for the coefficients of generating functions.

History

One of the earliest uses of analytic techniques for an enumeration problem came from Srinivasa Ramanujan and G. H. Hardy's work on integer partitions, starting in 1918, first using a Tauberian theorem and later the circle method. Walter Hayman's 1956 paper "A Generalisation of Stirling's Formula" is considered one of the earliest examples of the saddle-point method. In 1990, Philippe Flajolet and Andrew Odlyzko developed the theory of singularity analysis. In 2009, Philippe Flajolet and Robert Sedgewick wrote the book ''Analytic Combinatorics'', which presents analytic combinatorics with their viewpoint and notation. Some of the earliest work on multivariate generating functions started in the 1970s using probabilistic methods. Development of further multivariate …
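As a concrete illustration of the kind of estimate analytic combinatorics produces (a sketch of our own, not taken from the article): the Catalan generating function C(z) = (1 - √(1-4z))/(2z) has a square-root singularity at z = 1/4, and singularity analysis turns that into the asymptotic formula C_n ~ 4^n / (√π · n^(3/2)). The snippet below, with function names of our choosing, checks the formula against the exact values:

```python
import math

def catalan(n):
    # Exact Catalan numbers: C_n = binomial(2n, n) / (n + 1)
    return math.comb(2 * n, n) // (n + 1)

def catalan_asymptotic(n):
    # Asymptotic estimate from the square-root singularity of C(z) at z = 1/4:
    # C_n ~ 4^n / (sqrt(pi) * n^(3/2))
    return 4.0 ** n / (math.sqrt(math.pi) * n ** 1.5)

for n in (10, 100, 500):
    # The ratio estimate/exact tends to 1 as n grows
    print(n, catalan_asymptotic(n) / catalan(n))
```

The ratio approaches 1 slowly (the relative error decays like 1/n), which is typical of first-order singularity-analysis estimates.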
Complex Analysis
Complex analysis, traditionally known as the theory of functions of a complex variable, is the branch of mathematical analysis that investigates functions of complex numbers. It is helpful in many branches of mathematics, including algebraic geometry, number theory, analytic combinatorics, and applied mathematics, as well as in physics, including the branches of hydrodynamics, thermodynamics, quantum mechanics, and twistor theory. By extension, complex analysis also has applications in engineering fields such as nuclear, aerospace, mechanical and electrical engineering. As a differentiable function of a complex variable is equal to the sum of its Taylor series (that is, it is analytic), complex analysis is particularly concerned with analytic functions of a complex variable, that is, ''holomorphic functions''. The concept can be extended to functions of several complex variables. Complex analysis is contrasted with real analysis, which deals with …
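The claim that a holomorphic function equals the sum of its Taylor series can be checked numerically. A small sketch (assuming the standard series exp(z) = Σ z^k/k!, with a helper name of our own) compares a partial Taylor sum of the exponential against the library value at a complex point:

```python
import cmath

def taylor_exp(z, terms=40):
    # Partial sum of the Taylor series of exp about 0: sum_{k<terms} z^k / k!
    total, term = 0 + 0j, 1 + 0j
    for k in range(terms):
        total += term
        term *= z / (k + 1)  # next term: z^(k+1) / (k+1)!
    return total

z = 1.5 + 2.0j
print(taylor_exp(z), cmath.exp(z))  # the two values agree to many digits
```

Because exp is entire, the series converges for every z; 40 terms already put the truncation error far below double precision for moderate |z|.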
Symbolic Method (combinatorics)
In combinatorics, the symbolic method is a technique for counting combinatorial objects. It uses the internal structure of the objects to derive formulas for their generating functions. The method is mostly associated with Philippe Flajolet and is detailed in Part A of his book with Robert Sedgewick, ''Analytic Combinatorics'', while the rest of the book explains how to use complex analysis to obtain asymptotic and probabilistic results on the corresponding generating functions. For two centuries, generating functions arose via the corresponding recurrences on their coefficients (as can be seen in the seminal works of Bernoulli, Euler, Arthur Cayley, Schröder, Ramanujan, Riordan, Knuth, etc.). It was then slowly realized that generating functions capture many other facets of the initial discrete combinatorial objects, and that this could be done in a more direct formal way: the recursive nature of some combinatorial structures translates …
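To make the "structure translates to equations" idea concrete (our own sketch, not from the article): the symbolic specification "a binary tree is either a leaf, or a root node with two subtrees" translates directly into the functional equation T(z) = 1 + z·T(z)², whose coefficients are the Catalan numbers. The code below, using a home-grown truncated-power-series multiply, solves the equation by fixed-point iteration:

```python
N = 12  # number of coefficients to compute

def mul(a, b):
    # Product of two power series truncated at z^(N-1)
    c = [0] * N
    for i, ai in enumerate(a):
        for j in range(N - i):
            c[i + j] += ai * b[j]
    return c

# Fixed-point iteration for the symbolic-method equation T = 1 + z * T^2;
# each pass stabilizes one more coefficient.
T = [0] * N
for _ in range(N):
    T2 = mul(T, T)
    T = [1] + [T2[i - 1] for i in range(1, N)]

print(T)  # Catalan numbers: 1, 1, 2, 5, 14, 42, ...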
Method Of Steepest Descent
In mathematics, the method of steepest descent or saddle-point method is an extension of Laplace's method for approximating an integral, where one deforms a contour integral in the complex plane to pass near a stationary point (saddle point), in roughly the direction of steepest descent or stationary phase. The saddle-point approximation is used with integrals in the complex plane, whereas Laplace's method is used with real integrals. The integral to be estimated is often of the form
:\int_C f(z)e^{\lambda g(z)}\,dz,
where ''C'' is a contour and ''λ'' is large. One version of the method of steepest descent deforms the contour of integration ''C'' into a new path of integration ''C′'' so that the following conditions hold:
# ''C′'' passes through one or more zeros of the derivative ''g''′(''z''),
# the imaginary part of ''g''(''z'') is constant on ''C′''.
The method of steepest descent was first published by Peter Debye in 1909, who used it to estimate Bessel functions and pointed out that it occurred in the …
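The real-axis special case (Laplace's method) is easy to demonstrate numerically; the sketch below, with our own function names and step counts, is an illustration rather than a definitive implementation. Writing n! = ∫₀^∞ e^{n ln x − x} dx, the exponent n ln x − x has its saddle point at x = n, and a second-order expansion there gives Stirling's formula:

```python
import math

def gamma_integral(n, steps=200_000):
    # Gamma(n+1) = n! = integral_0^inf x^n e^{-x} dx, via a plain Riemann sum.
    # The integrand e^(n ln x - x) is sharply peaked at the saddle point x = n.
    upper = 60.0 * n
    h = upper / steps
    total = 0.0
    for k in range(1, steps):
        x = k * h
        total += math.exp(n * math.log(x) - x)
    return total * h

def stirling(n):
    # Expanding n*ln(x) - x to second order about x = n yields
    # n! ~ sqrt(2 pi n) * (n/e)^n
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

n = 20
print(gamma_integral(n), stirling(n), float(math.factorial(n)))
```

The numeric integral matches 20! closely, while the saddle-point estimate is off by roughly 1/(12n), the size of the next correction term in Stirling's series.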
Saddle Point
In mathematics, a saddle point or minimax point is a point on the surface of the graph of a function where the slopes (derivatives) in orthogonal directions are all zero (a critical point), but which is not a local extremum of the function. An example of a saddle point is a critical point with a relative minimum along one axial direction (between peaks) and a relative maximum along the crossing axis. However, a saddle point need not be in this form. For example, the function f(x,y) = x^2 + y^3 has a critical point at (0, 0) that is a saddle point, since it is neither a relative maximum nor a relative minimum, but it does not have a relative maximum or relative minimum in the ''y''-direction. The name derives from the fact that the prototypical example in two dimensions is a surface that ''curves up'' in one direction and ''curves down'' in a different direction …
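The distinction between a prototypical saddle and the x² + y³ example can be seen through the second-derivative test: a negative Hessian determinant certifies a saddle, while a zero determinant (as for x² + y³) leaves the test inconclusive even though the point is still a saddle. A small sketch with finite-difference derivatives (names and thresholds are our own choices):

```python
def hessian(f, x, y, h=1e-4):
    # Central-difference second derivatives of f at (x, y)
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)
    return fxx, fxy, fyy

def classify(f, x, y):
    fxx, fxy, fyy = hessian(f, x, y)
    det = fxx * fyy - fxy**2  # second-derivative test
    if det < -1e-6:
        return "saddle point"
    if det > 1e-6:
        return "local min" if fxx > 0 else "local max"
    return "degenerate (test inconclusive)"

print(classify(lambda x, y: x * x - y * y, 0, 0))   # prototypical saddle
print(classify(lambda x, y: x * x + y ** 3, 0, 0))  # saddle, but test fails
```

For x² + y³ the determinant vanishes at the origin, which is exactly why the article stresses that a saddle point "need not be in this form."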
Contour Integral
In the mathematical field of complex analysis, contour integration is a method of evaluating certain integrals along paths in the complex plane. Contour integration is closely related to the calculus of residues, a method of complex analysis. One use for contour integrals is the evaluation of integrals along the real line that are not readily found by using only real-variable methods. It also has various applications in physics. Contour integration methods include:
* direct integration of a complex-valued function along a curve in the complex plane
* application of the Cauchy integral formula
* application of the residue theorem
These methods can be used alone or in combination, together with various limiting processes, to find these integrals or sums.

Curves in the complex plane

In complex analysis, a contour is a type of curve in the complex plane. In contour integration, contours provide a precise definition of the curves on which an integral may be suitably defined …
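Direct integration along a curve can be demonstrated with the textbook example ∮ dz/z = 2πi around the unit circle (this numeric sketch, with our own parametrization and names, illustrates the idea; it is not a general-purpose integrator). Parametrizing z = e^{it} gives dz = i e^{it} dt:

```python
import cmath

def contour_integral(f, n=20_000):
    # Riemann-sum integral of f along the unit circle z = e^(i t),
    # t in [0, 2 pi), using dz = i e^(i t) dt
    total = 0 + 0j
    dt = 2 * cmath.pi / n
    for k in range(n):
        z = cmath.exp(1j * k * dt)
        total += f(z) * 1j * z * dt
    return total

print(contour_integral(lambda z: 1 / z))   # close to 2 pi i (residue 1 at z = 0)
print(contour_integral(lambda z: z ** 2))  # close to 0 (no enclosed singularity)
```

The two outputs match the residue theorem: only the pole of 1/z inside the contour contributes, while the entire function z² integrates to zero.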
Entire Function
In complex analysis, an entire function, also called an integral function, is a complex-valued function that is holomorphic on the whole complex plane. Typical examples of entire functions are polynomials and the exponential function, and any finite sums, products and compositions of these, such as the trigonometric functions sine and cosine and their hyperbolic counterparts sinh and cosh, as well as derivatives and integrals of entire functions such as the error function. If an entire function f(z) has a root at w, then f(z)/(z-w), taking the limit value at w, is an entire function. On the other hand, the natural logarithm, the reciprocal function, and the square root are all not entire functions, nor can they be continued analytically to an entire function. A transcendental entire function is an entire function that is not a polynomial. Just as meromorphic functions can be viewed as a generalization of rational fractions, entire functions can be viewed as a generalization of polynomials.
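The f(z)/(z−w) construction can be illustrated with sin(z)/z: since sin has a root at 0, dividing by z and taking the limit value 1 there yields an entire function. A sketch of our own (the series recurrence is standard, the function name is ours) evaluates it through its everywhere-convergent Taylor series, which sidesteps the 0/0 at the origin entirely:

```python
import cmath

def sinc(z, terms=25):
    # sin(z)/z via its Taylor series: sum_k (-1)^k z^(2k) / (2k+1)!,
    # which is defined (and entire) even at z = 0
    total, term = 0 + 0j, 1 + 0j
    for k in range(terms):
        total += term
        term *= -z * z / ((2 * k + 2) * (2 * k + 3))  # ratio of consecutive terms
    return total

print(sinc(0))                                   # the limit value, 1
print(sinc(2 + 1j), cmath.sin(2 + 1j) / (2 + 1j))  # matches direct division
```

Away from 0 the series value agrees with sin(z)/z computed directly, confirming that the extension is the same function.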
Taylor Series
In mathematics, the Taylor series or Taylor expansion of a function is an infinite sum of terms that are expressed in terms of the function's derivatives at a single point. For most common functions, the function and the sum of its Taylor series are equal near this point. Taylor series are named after Brook Taylor, who introduced them in 1715. A Taylor series is also called a Maclaurin series when 0 is the point where the derivatives are considered, after Colin Maclaurin, who made extensive use of this special case of Taylor series in the 18th century. The partial sum formed by the first ''n'' + 1 terms of a Taylor series is a polynomial of degree ''n'' that is called the ''n''th Taylor polynomial of the function. Taylor polynomials are approximations of a function, which generally become more accurate as ''n'' increases. Taylor's theorem gives quantitative estimates on the error introduced by the use of such approximations. If the Taylor series of a function is convergent, its sum is the limit …
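The error estimate from Taylor's theorem can be checked on the exponential function (a sketch under the standard Lagrange remainder bound; names are our own). For exp about 0, the remainder after the nth Taylor polynomial satisfies |Rₙ(x)| ≤ e^x · |x|^(n+1) / (n+1)! for x > 0:

```python
import math

def taylor_poly(x, n):
    # n-th Taylor polynomial of exp about 0: sum_{k=0}^{n} x^k / k!
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

x = 1.0
for n in (2, 5, 10):
    error = abs(math.exp(x) - taylor_poly(x, n))
    # Lagrange remainder bound: |R_n| <= e^x * |x|^(n+1) / (n+1)!
    bound = math.exp(x) * abs(x) ** (n + 1) / math.factorial(n + 1)
    print(n, error, bound)
```

Each observed error sits below its bound, and both shrink rapidly as n grows, matching the claim that Taylor polynomials generally become more accurate with increasing degree.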
Radius Of Convergence
In mathematics, the radius of convergence of a power series is the radius of the largest disk at the center of the series in which the series converges. It is either a non-negative real number or \infty. When it is positive, the power series converges absolutely and uniformly on compact sets inside the open disk of radius equal to the radius of convergence, and it is the Taylor series of the analytic function to which it converges. In the case of multiple singularities of a function (singularities are those values of the argument for which the function is not defined), the radius of convergence is the minimum of the distances (which are all non-negative numbers) from the center of the disk of convergence to the respective singularities of the function.

Definition

For a power series ''f'' defined as:
:f(z) = \sum_{n=0}^\infty c_n (z-a)^n,
where
* ''a'' is …
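The link between the radius of convergence and the nearest singularity can be verified with the Cauchy–Hadamard root test, 1/R = limsup |cₙ|^(1/n). For f(z) = 1/(2 − z) expanded about 0, the coefficients are cₙ = 1/2^(n+1), and the nearest (only) singularity is the pole at z = 2 (a small sketch of our own):

```python
def coeff(n):
    # Taylor coefficients of 1/(2 - z) about 0: c_n = 1 / 2^(n + 1)
    return 1 / 2 ** (n + 1)

# Cauchy-Hadamard root test: R = lim |c_n|^(-1/n)
n = 200
print(abs(coeff(n)) ** (-1 / n))  # close to 2, the distance from 0 to the pole
```

The estimate converges to 2 as n grows, exactly the distance from the center of expansion to the singularity at z = 2.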
Singularity (mathematics)
In mathematics, a singularity is a point at which a given mathematical object is not defined, or a point where the mathematical object ceases to be well-behaved in some particular way, such as by lacking differentiability or analyticity. For example, the reciprocal function f(x) = 1/x has a singularity at x = 0, where the value of the function is not defined, as it involves a division by zero. The absolute value function g(x) = |x| also has a singularity at x = 0, since it is not differentiable there. The algebraic curve defined by \{(x, y) : y^2 = x^3\} in the (x, y) coordinate system has a singularity (called a cusp) at (0, 0). For singularities in algebraic geometry, see singular point of an algebraic variety. For singularities in differential geometry, see singularity theory.

Real analysis

In real analysis, singularities are either discontinuities, or discontinuities of the derivative (sometimes also discontinuities of higher-order derivatives). There are four kinds of discontinuities: …
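The non-differentiability of |x| at 0 can be made visible with one-sided difference quotients (a tiny sketch of our own): the limits from the right and from the left exist but disagree, so no derivative exists at the singularity.

```python
def diff_quotient(f, x, h):
    # One-sided difference quotient (f(x + h) - f(x)) / h
    return (f(x + h) - f(x)) / h

f = abs
right = diff_quotient(f, 0.0, 1e-8)   # slope approaching from the right
left = diff_quotient(f, 0.0, -1e-8)   # slope approaching from the left
print(left, right)  # the two one-sided slopes differ: -1 vs 1
```

By contrast, at any x ≠ 0 both one-sided quotients agree (with value ±1), which is why the singularity of |x| is isolated at the origin.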
Nth Root
In mathematics, an ''n''th root of a number ''x'' is a number ''r'' which, when raised to the power of ''n'', yields ''x'':
:r^n = \underbrace{r \times r \times \cdots \times r}_{n\text{ factors}} = x.
The positive integer ''n'' is called the ''index'' or ''degree'', and the number ''x'' of which the root is taken is the ''radicand''. A root of degree 2 is called a ''square root'' and a root of degree 3, a ''cube root''. Roots of higher degree are referred to by using ordinal numbers, as in ''fourth root'', ''twentieth root'', etc. The computation of an ''n''th root is a root extraction. For example, 3 is a square root of 9, since 3^2 = 9, and -3 is also a square root of 9, since (-3)^2 = 9. The ''n''th root of ''x'' is written as \sqrt[n]{x} using the radical symbol \sqrt{\phantom{x}}. The square root is usually written as \sqrt{x}, with the degree omitted. Taking the ''n''th root of a number, for fixed ''n'', is the inverse of raising a number to the ''n''th power, and can be written as a fractional exponent:
:\sqrt[n]{x} = x^{1/n}.
For a positive real number ''x'', \sqrt{x} denotes the positive square root of ''x'' and \sqrt[n]{x} denotes the pos…
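Root extraction for a positive real radicand can be sketched with Newton's method applied to rⁿ − x = 0 (an illustrative implementation of our own, not the article's algorithm; tolerances and the starting guess are arbitrary choices):

```python
def nth_root(x, n, tol=1e-12):
    # Newton's method on r^n - x = 0:
    #   r <- r - (r^n - x) / (n r^(n-1)) = ((n - 1) r + x / r^(n-1)) / n
    # For x > 0 any positive start converges to the positive real n-th root.
    r = x if x > 1 else 1.0
    while True:
        r_next = ((n - 1) * r + x / r ** (n - 1)) / n
        if abs(r_next - r) < tol * r_next:
            return r_next
        r = r_next

print(nth_root(9, 2))   # about 3, the positive square root of 9
print(nth_root(2, 10))  # about 2 ** 0.1, the tenth root of 2
```

The update rule generalizes the classical Babylonian method for square roots (the n = 2 case, r ← (r + x/r)/2), and converges quadratically once near the root.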