In mathematics, an alternating series is an infinite series of the form \sum_{n=0}^\infty (-1)^n a_n or \sum_{n=0}^\infty (-1)^{n+1} a_n with a_n > 0 for all n. The signs of the general terms alternate between positive and negative. Like any series, an alternating series converges if and only if the associated sequence of partial sums converges.
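The convergence of the partial sums can be watched numerically. The following Python sketch (the helper `partial_sums` is our own illustration, not standard library code) computes the partial sums of an alternating series:

```python
def partial_sums(a, n_terms):
    """Partial sums S_m of the alternating series sum_{n>=0} (-1)^n * a(n)."""
    sums, s = [], 0.0
    for n in range(n_terms):
        s += (-1) ** n * a(n)
        sums.append(s)
    return sums

# Example: a_n = (1/2)^n gives the geometric series 1 - 1/2 + 1/4 - ...,
# which converges to 1/(1 + 1/2) = 2/3.
sums = partial_sums(lambda n: 0.5 ** n, 50)
print(sums[-1])  # very close to 2/3
```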


Examples

The geometric series 1/2 − 1/4 + 1/8 − 1/16 + ⋯ sums to 1/3. The alternating harmonic series has a finite sum but the harmonic series does not. The Mercator series provides an analytic expression of the natural logarithm: \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} x^n = \ln(1+x). The functions sine and cosine used in trigonometry can be defined as alternating series in calculus even though they are introduced in elementary algebra as the ratio of sides of a right triangle. In fact, \sin x = \sum_{n=0}^\infty (-1)^n \frac{x^{2n+1}}{(2n+1)!}, and \cos x = \sum_{n=0}^\infty (-1)^n \frac{x^{2n}}{(2n)!}. When the alternating factor (-1)^n is removed from these series one obtains the hyperbolic functions sinh and cosh used in calculus. For integer or positive index \alpha the Bessel function of the first kind may be defined with the alternating series J_\alpha(x) = \sum_{m=0}^\infty \frac{(-1)^m}{m!\,\Gamma(m+\alpha+1)} \left(\frac{x}{2}\right)^{2m+\alpha}, where \Gamma(z) is the gamma function. If s is a complex number, the Dirichlet eta function is formed as an alternating series \eta(s) = \sum_{n=1}^\infty \frac{(-1)^{n-1}}{n^s} = \frac{1}{1^s} - \frac{1}{2^s} + \frac{1}{3^s} - \frac{1}{4^s} + \cdots that is used in analytic number theory.
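Two of these sums are easy to confirm numerically. The following Python sketch (our own check, using only the standard library) sums the geometric series 1/2 − 1/4 + 1/8 − ⋯ and evaluates the alternating series for sine:

```python
import math

# Geometric series 1/2 - 1/4 + 1/8 - 1/16 + ... = sum_{n>=1} (-1)^(n+1) / 2^n.
geo = sum((-1) ** (n + 1) / 2 ** n for n in range(1, 60))

# sin x as an alternating series: sum_{n>=0} (-1)^n x^(2n+1) / (2n+1)!.
def sin_series(x, terms=20):
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(terms))

print(geo)                             # close to 1/3
print(sin_series(1.0), math.sin(1.0))  # the two values agree
```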


Alternating series test

The theorem known as the "Leibniz test" or the alternating series test tells us that an alternating series will converge if the terms a_n converge to 0 monotonically.

Proof: Suppose the sequence a_n converges to zero and is monotone decreasing. If m is odd and m < n, we obtain the estimate S_n - S_m \le a_m via the following calculation:
\begin{align}
S_n - S_m & = \sum_{k=0}^n (-1)^k a_k - \sum_{k=0}^m (-1)^k a_k = \sum_{k=m+1}^n (-1)^k a_k \\
& = a_{m+1} - a_{m+2} + a_{m+3} - a_{m+4} + \cdots + a_n \\
& = a_{m+1} - (a_{m+2} - a_{m+3}) - (a_{m+4} - a_{m+5}) - \cdots - a_n \le a_{m+1} \le a_m.
\end{align}
Since a_n is monotonically decreasing, each bracketed term (a_k - a_{k+1}) is nonnegative, so each term -(a_k - a_{k+1}) is negative. Thus, we have the final inequality S_n - S_m \le a_m. Similarly, it can be shown that -a_m \le S_n - S_m. Since a_m converges to 0, the partial sums S_m form a Cauchy sequence (i.e., the series satisfies the Cauchy criterion) and therefore converge. The argument for m even is similar.
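The conclusion of the test can be checked numerically. This Python sketch verifies, for the alternating harmonic series, that every early partial sum lies within the first omitted term of the limit ln 2:

```python
import math

limit = math.log(2)  # the sum of 1 - 1/2 + 1/3 - 1/4 + ...

s = 0.0
for n in range(1, 21):
    s += (-1) ** (n + 1) / n
    # Consequence of the Leibniz test: |limit - S_n| <= a_{n+1} = 1/(n+1).
    assert abs(limit - s) <= 1 / (n + 1)
print("Leibniz bound holds for the first 20 partial sums")
```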


Approximating sums

The estimate above does not depend on n. So, if a_n approaches 0 monotonically, the estimate provides an error bound for approximating infinite sums by partial sums:
\left| \sum_{k=0}^\infty (-1)^k a_k - \sum_{k=0}^m (-1)^k a_k \right| \le |a_{m+1}|.
That does not mean that this estimate always finds the very first partial sum whose error is less than the modulus of the next term in the series. Indeed, if one takes 1 - 1/2 + 1/3 - 1/4 + \cdots = \ln 2 and tries to find a partial sum whose error is at most 0.00005, the inequality above shows that the partial sum through a_{20000} is enough, but in fact this is twice as many terms as needed: the error after summing the first 9999 terms is 0.0000500025, so the partial sum through a_{10000} is already sufficient. This series happens to have the property that constructing a new series with terms a_n - a_{n+1} also gives an alternating series to which the Leibniz test applies, and this makes the simple error bound above non-optimal. The Calabrese bound, discovered in 1962, improves on it: when this property holds, the error is at most half the Leibniz bound. That in turn is not optimal for series where the property applies two or more times, a case described by the Johnsonbaugh error bound. If one can apply the property an infinite number of times, Euler's transform applies.
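The numbers in this example can be reproduced in a few lines of Python (an illustrative sketch; the helper `partial` is our own):

```python
import math

def partial(m):
    """Partial sum of the alternating harmonic series through its m-th term."""
    return sum((-1) ** (k + 1) / k for k in range(1, m + 1))

target = math.log(2)

# The Leibniz bound 1/(m+1) <= 0.00005 forces m on the order of 20000...
print(abs(target - partial(20000)))  # comfortably below 0.00005
# ...but the actual error after 9999 terms is already about 0.0000500025,
# so the partial sum through the 10000th term suffices.
print(abs(target - partial(9999)))
print(abs(target - partial(10000)))
```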


Absolute convergence

A series \sum a_n converges absolutely if the series \sum |a_n| converges.

Theorem: Absolutely convergent series are convergent.

Proof: Suppose \sum a_n is absolutely convergent. Then \sum |a_n| is convergent and it follows that \sum 2|a_n| converges as well. Since 0 \le a_n + |a_n| \le 2|a_n|, the series \sum (a_n + |a_n|) converges by the comparison test. Therefore, the series \sum a_n converges as the difference of two convergent series: \sum a_n = \sum (a_n + |a_n|) - \sum |a_n|.
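As a numerical illustration (a sketch, not part of the proof), the series \sum_{n \ge 1} (-1)^{n+1}/n^2 converges absolutely: its series of absolute values is \sum 1/n^2, which converges to \pi^2/6, while the alternating series itself converges to \pi^2/12:

```python
import math

N = 100000
abs_sum = sum(1 / n ** 2 for n in range(1, N + 1))               # sum of |a_n|
alt_sum = sum((-1) ** (n + 1) / n ** 2 for n in range(1, N + 1))  # sum of a_n

print(abs_sum)  # near pi^2 / 6
print(alt_sum)  # near pi^2 / 12
```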


Conditional convergence

A series is conditionally convergent if it converges but does not converge absolutely. For example, the harmonic series \sum_{n=1}^\infty \frac{1}{n} diverges, while the alternating version \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} converges by the alternating series test.


Rearrangements

For any series, we can create a new series by rearranging the order of summation. A series is unconditionally convergent if any rearrangement creates a series with the same convergence as the original series. Absolutely convergent series are unconditionally convergent. But the Riemann series theorem states that conditionally convergent series can be rearranged to create arbitrary convergence. The general principle is that addition of infinite sums is only commutative for absolutely convergent series. For example, one false proof that 1 = 0 exploits the failure of associativity for infinite sums. As another example, by the Mercator series
\ln(2) = \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots.
But, since the series does not converge absolutely, we can rearrange the terms to obtain a series for \tfrac{1}{2}\ln(2):
\begin{align}
& \left(1 - \frac{1}{2}\right) - \frac{1}{4} + \left(\frac{1}{3} - \frac{1}{6}\right) - \frac{1}{8} + \left(\frac{1}{5} - \frac{1}{10}\right) - \frac{1}{12} + \cdots \\
& = \frac{1}{2} - \frac{1}{4} + \frac{1}{6} - \frac{1}{8} + \frac{1}{10} - \frac{1}{12} + \cdots \\
& = \frac{1}{2}\left(1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \frac{1}{6} + \cdots\right) = \frac{1}{2}\ln(2).
\end{align}
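This rearrangement can be checked numerically. In the Python sketch below (our own illustration), the k-th "block" contributes one positive term 1/(2k-1) followed by the two negative terms 1/(4k-2) and 1/(4k), exactly the pattern (1 - 1/2) - 1/4 + (1/3 - 1/6) - 1/8 + ⋯:

```python
import math

def rearranged(blocks):
    """Sum the rearranged alternating harmonic series:
    one positive term, then two negative terms, per block."""
    total = 0.0
    for k in range(1, blocks + 1):
        total += 1 / (2 * k - 1) - 1 / (4 * k - 2) - 1 / (4 * k)
    return total

print(rearranged(100000))   # near ln(2)/2, not ln(2)
print(0.5 * math.log(2))
```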


Series acceleration

In practice, the numerical summation of an alternating series may be sped up using any one of a variety of series acceleration techniques. One of the oldest techniques is that of Euler summation, and there are many modern techniques that can offer even more rapid convergence.
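As a hedged sketch of the idea, one simple way to realize Euler's transformation is to repeatedly average consecutive partial sums (our own minimal implementation, not library code):

```python
import math

def euler_accelerate(terms):
    """Accelerate sum_{n>=0} (-1)^n a_n by repeatedly averaging
    consecutive partial sums, a simple form of Euler's transformation."""
    s, sums = 0.0, []
    for n, a in enumerate(terms):
        s += (-1) ** n * a
        sums.append(s)
    while len(sums) > 1:            # each averaging pass shrinks the error
        sums = [(x + y) / 2 for x, y in zip(sums, sums[1:])]
    return sums[0]

# Twelve terms of 1 - 1/2 + 1/3 - ...: the raw partial sum is off by
# roughly 0.04, while the accelerated value is far closer to ln 2.
approx = euler_accelerate([1 / (n + 1) for n in range(12)])
print(approx, math.log(2))
```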


See also

* Grandi's series
* Nörlund–Rice integral

