Divergent series

In mathematics, a divergent series is an infinite series that is not convergent, meaning that the infinite sequence of the partial sums of the series does not have a finite limit.

If a series converges, the individual terms of the series must approach zero. Thus any series in which the individual terms do not approach zero diverges. However, convergence is a stronger condition: not all series whose terms approach zero converge. A counterexample is the harmonic series

:1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \cdots = \sum_{n=1}^\infty\frac{1}{n}.

The divergence of the harmonic series was proven by the medieval mathematician Nicole Oresme.

In specialized mathematical contexts, values can be objectively assigned to certain series whose sequences of partial sums diverge, in order to make meaning of the divergence of the series. A ''summability method'' or ''summation method'' is a partial function from the set of series to values. For example, Cesàro summation assigns Grandi's divergent series

:1 - 1 + 1 - 1 + \cdots

the value 1/2. Cesàro summation is an ''averaging'' method, in that it relies on the arithmetic mean of the sequence of partial sums. Other methods involve analytic continuations of related series. In physics, there are a wide variety of summability methods; these are discussed in greater detail in the article on regularization.
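As a small illustration of the averaging idea (a sketch in Python; the truncation length of 1000 terms is an arbitrary choice), the partial sums of Grandi's series oscillate between 1 and 0, but their running averages settle at 1/2:

```python
# Partial sums of Grandi's series 1 - 1 + 1 - 1 + ... oscillate between 1 and 0,
# but the Cesaro means (running averages of the partial sums) tend to 1/2.
from itertools import accumulate

terms = [(-1) ** n for n in range(1000)]          # 1, -1, 1, -1, ...
partial_sums = list(accumulate(terms))            # 1, 0, 1, 0, ...
cesaro_means = [sum(partial_sums[: n + 1]) / (n + 1) for n in range(len(partial_sums))]

print(partial_sums[-4:])   # [1, 0, 1, 0]  (no limit)
print(cesaro_means[-1])    # 0.5
```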


History

Before the 19th century, divergent series were widely used by Leonhard Euler and others, but often led to confusing and contradictory results. A major problem was Euler's idea that any divergent series should have a natural sum, without first defining what is meant by the sum of a divergent series. Augustin-Louis Cauchy eventually gave a rigorous definition of the sum of a (convergent) series, and for some time after this, divergent series were mostly excluded from mathematics. They reappeared in 1886 with Henri Poincaré's work on asymptotic series. In 1890, Ernesto Cesàro realized that one could give a rigorous definition of the sum of some divergent series, and defined Cesàro summation. (This was not the first use of Cesàro summation, which was used implicitly by Ferdinand Georg Frobenius in 1880; Cesàro's key contribution was not the discovery of this method, but his idea that one should give an explicit definition of the sum of a divergent series.) In the years after Cesàro's paper, several other mathematicians gave other definitions of the sum of a divergent series, although these are not always compatible: different definitions can give different answers for the sum of the same divergent series; so, when talking about the sum of a divergent series, it is necessary to specify which summation method one is using.


Theorems on methods for summing divergent series

A summability method ''M'' is ''regular'' if it agrees with the actual limit on all convergent series. Such a result is called an ''Abelian theorem'' for ''M'', from the prototypical Abel's theorem. More subtle are partial converse results, called ''Tauberian theorems'', from a prototype proved by Alfred Tauber. Here ''partial converse'' means that if ''M'' sums the series ''Σ'', and some side-condition holds, then ''Σ'' was convergent in the first place; without any side-condition such a result would say that ''M'' only summed convergent series (making it useless as a summation method for divergent series).

The function giving the sum of a convergent series is ''linear'', and it follows from the Hahn–Banach theorem that it may be extended to a summation method summing any series with bounded partial sums. This is called the ''Banach limit''. This fact is not very useful in practice, since there are many such extensions, inconsistent with each other, and also since proving such operators exist requires invoking the axiom of choice or its equivalents, such as Zorn's lemma. They are therefore nonconstructive.

The subject of divergent series, as a domain of mathematical analysis, is primarily concerned with explicit and natural techniques such as Abel summation, Cesàro summation and Borel summation, and their relationships. The advent of Wiener's tauberian theorem marked an epoch in the subject, introducing unexpected connections to Banach algebra methods in Fourier analysis.

Summation of divergent series is also related to extrapolation methods and sequence transformations as numerical techniques. Examples of such techniques are Padé approximants, Levin-type sequence transformations, and order-dependent mappings related to renormalization techniques for large-order perturbation theory in quantum mechanics.


Properties of summation methods

Summation methods usually concentrate on the sequence of partial sums of the series. While this sequence does not converge, we may often find that when we take an average of larger and larger numbers of initial terms of the sequence, the average converges, and we can use this average instead of a limit to evaluate the sum of the series. A ''summation method'' can be seen as a function from a set of sequences of partial sums to values. If A is any summation method assigning values to a set of sequences, we may mechanically translate this to a ''series-summation method'' A''Σ'' that assigns the same values to the corresponding series. There are certain properties it is desirable for these methods to possess if they are to arrive at values corresponding to limits and sums, respectively.

* Regularity. A summation method is ''regular'' if, whenever the sequence ''s'' converges to ''x'', A(''s'') = ''x''. Equivalently, the corresponding series-summation method evaluates A''Σ''(''a'') = ''x''.
* Linearity. A is ''linear'' if it is a linear functional on the sequences where it is defined, so that A(''k'' ''r'' + ''s'') = ''k'' A(''r'') + A(''s'') for sequences ''r'', ''s'' and a real or complex scalar ''k''. Since the terms a_{n+1} = s_{n+1} - s_n of the series ''a'' are linear functionals on the sequence ''s'' and vice versa, this is equivalent to A''Σ'' being a linear functional on the terms of the series.
* Stability (also called ''translativity''). If ''s'' is a sequence starting from ''s''0 and ''s''′ is the sequence obtained by omitting the first value and subtracting it from the rest, so that s'_n = s_{n+1} - s_0, then A(''s'') is defined if and only if A(''s''′) is defined, and A(''s'') = ''s''0 + A(''s''′). Equivalently, whenever a'_n = a_{n+1} for all ''n'', then A''Σ''(''a'') = ''a''0 + A''Σ''(''a''′). Another way of stating this is that the shift rule must be valid for the series that are summable by this method.

The third condition is less important, and some significant methods, such as Borel summation, do not possess it.

One can also give a weaker alternative to the last condition.

* Finite re-indexability. If ''a'' and ''a''′ are two series such that there exists a bijection f: \mathbb{N} \rightarrow \mathbb{N} such that a_i = a'_{f(i)} for all ''i'', and if there exists some N \in \mathbb{N} such that a_i = a'_i for all ''i'' > ''N'', then A''Σ''(''a'') = A''Σ''(''a''′). (In other words, ''a''′ is the same series as ''a'', with only finitely many terms re-indexed.) This is a weaker condition than ''stability'', because any summation method that exhibits ''stability'' also exhibits ''finite re-indexability'', but the converse is not true.

A desirable property for two distinct summation methods A and B to share is ''consistency'': A and B are consistent if for every sequence ''s'' to which both assign a value, A(''s'') = B(''s''). (Using this language, a summation method A is regular iff it is consistent with the standard sum Σ.) If two methods are consistent, and one sums more series than the other, the one summing more series is ''stronger''.

There are powerful numerical summation methods that are neither regular nor linear, for instance nonlinear sequence transformations like Levin-type sequence transformations and Padé approximants, as well as the order-dependent mappings of perturbative series based on renormalization techniques.

Taking regularity, linearity and stability as axioms, it is possible to sum many divergent series by elementary algebraic manipulations. This partly explains why many different summation methods give the same answer for certain series. For instance, whenever ''r'' ≠ 1, the geometric series

:\begin{align}
G(r,c) & = \sum_{k=0}^\infty cr^k & & \\
& = c + \sum_{k=0}^\infty cr^{k+1} & & \text{(stability)} \\
& = c + r \sum_{k=0}^\infty cr^{k} & & \text{(linearity)} \\
& = c + r \, G(r,c), & & \text{whence} \\
G(r,c) & = \frac{c}{1-r}, \text{ unless it is infinite} & & \\
\end{align}

can be evaluated regardless of convergence. More rigorously, any summation method that possesses these properties and which assigns a finite value to the geometric series must assign this value. However, when ''r'' is a real number larger than 1, the partial sums increase without bound, and averaging methods assign a limit of infinity.
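To make the kind of manipulation meant here concrete, the following short derivation (a sketch in the same spirit, not tied to any one named method) applies stability and linearity to Grandi's series. If a stable, linear summation method assigns a value ''s'' to 1 − 1 + 1 − 1 + ⋯, then

:\begin{align}
s & = 1 - 1 + 1 - 1 + \cdots & & \\
  & = 1 - (1 - 1 + 1 - \cdots) & & \text{(stability)} \\
  & = 1 - s, & & \text{(linearity)}
\end{align}

so 2''s'' = 1 and ''s'' = 1/2, in agreement with the Cesàro and Abel values mentioned elsewhere in this article.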


Classical summation methods

The two classical summation methods for series, ordinary convergence and absolute convergence, define the sum as a limit of certain partial sums. These are included only for completeness; strictly speaking they are not true summation methods for divergent series since, by definition, a series is divergent only if these methods do not work. Most but not all summation methods for divergent series extend these methods to a larger class of sequences.


Absolute convergence

Absolute convergence defines the sum of a sequence (or set) of numbers to be the limit of the net of all finite partial sums a_{k_1} + \cdots + a_{k_n}, if it exists. It does not depend on the order of the elements of the sequence, and a classical theorem says that a sequence is absolutely convergent if and only if the sequence of absolute values is convergent in the standard sense.


Sum of a series

Cauchy's classical definition of the sum of a series ''a''0 + ''a''1 + ... defines the sum to be the limit of the sequence of partial sums s_n = a_0 + a_1 + \cdots + a_n. This is the default definition of convergence of a sequence.


Nørlund means

Suppose ''p''''n'' is a sequence of positive terms, starting from ''p''0. Suppose also that

:\frac{p_n}{p_0+p_1+\cdots+p_n} \rightarrow 0.

If now we transform a sequence ''s'' by using ''p'' to give weighted means, setting

:t_m = \frac{p_m s_0 + p_{m-1}s_1 + \cdots + p_0 s_m}{p_0+p_1+\cdots+p_m}

then the limit of ''t''''n'' as ''n'' goes to infinity is an average called the ''Nørlund mean'' N''p''(''s''). The Nørlund mean is regular, linear, and stable. Moreover, any two Nørlund means are consistent.
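A direct transcription of this definition (a sketch; the constant weight sequence and the test series are our choices) might look as follows; with constant weights it reduces to the ordinary Cesàro average of the partial sums:

```python
# Norlund mean of a sequence of partial sums s with weights p:
# t_m = (p[m]*s[0] + p[m-1]*s[1] + ... + p[0]*s[m]) / (p[0] + ... + p[m]).
def norlund_mean(s, p):
    m = len(s) - 1
    num = sum(p[m - j] * s[j] for j in range(m + 1))
    return num / sum(p[: m + 1])

partial_sums = [1, 0] * 500            # partial sums of Grandi's series 1 - 1 + 1 - ...
weights = [1] * len(partial_sums)      # constant weights give the Cesaro (C,1) mean
print(norlund_mean(partial_sums, weights))   # 0.5
```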


Cesàro summation

The most significant of the Nørlund means are the Cesàro sums. Here, if we define the sequence ''p''''k'' by

:p_n^k = \binom{n+k-1}{k-1}

then the Cesàro sum ''C''''k'' is defined by C_k(s) = N_{(p^k)}(s). Cesàro sums are Nørlund means if ''k'' ≥ 0, and hence are regular, linear, stable, and consistent. ''C''0 is ordinary summation, and ''C''1 is ordinary Cesàro summation. Cesàro sums have the property that if ''h'' > ''k'', then ''C''''h'' is stronger than ''C''''k''.
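One convenient way to compute (C,''k'') sums numerically (a sketch; the equivalence of this iterated-average formulation with the Nørlund form above is a standard fact that we assume here rather than prove) is to iterate cumulative sums of the partial sums and divide by a binomial coefficient. The series 1 − 2 + 3 − 4 + ⋯ is not (C,1) summable but is (C,2) summable to 1/4:

```python
# (C,k) sum as the limit of A_n^k / C(n+k, k), where A^0 is the sequence of
# partial sums and A^j is the cumulative sum of A^(j-1).
# Demo: 1 - 2 + 3 - 4 + ... is (C,2) summable to 1/4, though (C,1) fails.
from itertools import accumulate
from math import comb

def cesaro(terms, k):
    seq = list(accumulate(terms))                # A^0: partial sums
    for _ in range(k):
        seq = list(accumulate(seq))              # A^1, ..., A^k
    n = len(seq) - 1
    return seq[-1] / comb(n + k, k)

terms = [(-1) ** n * (n + 1) for n in range(100000)]   # 1, -2, 3, -4, ...
print(cesaro(terms, 2))                                # ~0.25
```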


Abelian means

Suppose ''λ'' = {''λ''0, ''λ''1, ''λ''2, ...} is a strictly increasing sequence tending towards infinity, and that ''λ''0 ≥ 0. Suppose

:f(x) = \sum_{n=0}^\infty a_n e^{-\lambda_n x}

converges for all real numbers ''x'' > 0. Then the ''Abelian mean'' ''A''''λ'' is defined as

:A_\lambda(s) = \lim_{x \rightarrow 0^{+}} f(x).

More generally, if the series for ''f'' only converges for large ''x'' but can be analytically continued to all positive real ''x'', then one can still define the sum of the divergent series by the limit above. A series of this type is known as a generalized Dirichlet series; in applications to physics, this is known as the method of ''heat-kernel regularization''. Abelian means are regular and linear, but not stable and not always consistent between different choices of ''λ''. However, some special cases are very important summation methods.


Abel summation

If ''λ''''n'' = ''n'', then we obtain the method of ''Abel summation''. Here

:f(x) = \sum_{n=0}^\infty a_n e^{-nx} = \sum_{n=0}^\infty a_n z^n,

where ''z'' = exp(−''x''). Then the limit of ''f''(''x'') as ''x'' approaches 0 through positive reals is the limit of the power series for ''f''(''z'') as ''z'' approaches 1 from below through positive reals, and the Abel sum ''A''(''s'') is defined as

:A(s) = \lim_{z \rightarrow 1^{-}} \sum_{n=0}^\infty a_n z^n.

Abel summation is interesting in part because it is consistent with but more powerful than Cesàro summation: ''A''(''s'') = ''C''''k''(''s'') whenever the latter is defined. The Abel sum is therefore regular, linear, stable, and consistent with Cesàro summation.
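Numerically, the Abel sum can be approximated by evaluating the power series at ''z'' slightly below 1 (a sketch; the truncation length and the sample values of ''z'' are arbitrary choices of ours):

```python
# Abel summation: evaluate sum a_n z^n for z approaching 1 from below.
# For 1 - 2 + 3 - 4 + ... the power series is 1/(1+z)^2, so the Abel sum is 1/4.
def abel_partial(coeffs, z):
    return sum(a * z ** n for n, a in enumerate(coeffs))

coeffs = [(-1) ** n * (n + 1) for n in range(20000)]   # 1, -2, 3, -4, ...
for z in (0.9, 0.99, 0.999):
    print(z, abel_partial(coeffs, z))   # -> 0.2770..., 0.2525..., 0.25025...
```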


Lindelöf summation

If ''λ''''n'' = ''n'' log(''n''), then (indexing from one) we have

:f(x) = a_1 + a_2 2^{-2x} + a_3 3^{-3x} + \cdots .

Then ''L''(''s''), the ''Lindelöf sum'', is the limit of ''f''(''x'') as ''x'' goes to positive zero. The Lindelöf sum is a powerful method when applied to power series among other applications, summing power series in the Mittag-Leffler star. If ''g''(''z'') is analytic in a disk around zero, and hence has a Maclaurin series ''G''(''z'') with a positive radius of convergence, then ''L''(''G''(''z'')) = ''g''(''z'') in the Mittag-Leffler star. Moreover, convergence to ''g''(''z'') is uniform on compact subsets of the star.


Analytic continuation

Several summation methods involve taking the value of an analytic continuation of a function.


Analytic continuation of power series

If Σ''a''''n''''x''''n'' converges for small complex ''x'' and can be analytically continued along some path from ''x'' = 0 to the point ''x'' = 1, then the sum of the series can be defined to be the value at ''x'' = 1. This value may depend on the choice of path. One of the first examples of potentially different sums for a divergent series, using analytic continuation, was given by Callet, who observed that if 1 \leq m < n then

:\frac{1+x+\cdots+x^{m-1}}{1+x+\cdots+x^{n-1}} = \frac{1-x^m}{1-x^n} = 1 - x^m + x^n - x^{n+m} + x^{2n} - \cdots

Evaluating at x=1, one gets

:1 - 1 + 1 - 1 + \cdots = \frac{m}{n}.

However, the gaps in the series are key. For m=1, n=3 for example, we actually would get

:1 - 1 + 0 + 1 - 1 + 0 + 1 - 1 + \cdots = \frac{1}{3},

so different sums correspond to different placements of the 0's.
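As a quick sanity check of Callet's example (a sketch using sympy; the exponents ''m'' = 1, ''n'' = 3 are the ones already used above), one can expand the rational function, observe the 1, −1, 0 coefficient pattern, and evaluate the closed form at ''x'' = 1:

```python
# Expand (1 - x**m)/(1 - x**n) for m = 1, n = 3: the Maclaurin coefficients are
# 1, -1, 0, 1, -1, 0, ..., and the value of the function at x = 1 is m/n = 1/3.
import sympy as sp

x = sp.symbols('x')
m, n = 1, 3
f = (1 - x**m) / (1 - x**n)

coeffs = sp.Poly(sp.series(f, x, 0, 12).removeO(), x).all_coeffs()[::-1]
print(coeffs)                # [1, -1, 0, 1, -1, 0, ...]
print(sp.limit(f, x, 1))     # 1/3
```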


Euler summation

Euler summation is essentially an explicit form of analytic continuation. If a power series converges for small complex ''z'' and can be analytically continued to the open disk with diameter from −1/(''q'' + 1) to 1 and is continuous at 1, then its value at 1 is called the Euler or (E,''q'') sum of the series Σ''a''''n''. Euler used it before analytic continuation was defined in general, and gave explicit formulas for the power series of the analytic continuation. The operation of Euler summation can be repeated several times, and this is essentially equivalent to taking an analytic continuation of a power series to the point ''z'' = 1.


Analytic continuation of Dirichlet series

This method defines the sum of a series to be the value of the analytic continuation of the Dirichlet series

:f(s) = \frac{a_1}{1^s} + \frac{a_2}{2^s} + \frac{a_3}{3^s} + \cdots

at ''s'' = 0, if this exists and is unique. This method is sometimes confused with zeta function regularization. If ''s'' = 0 is an isolated singularity, the sum is defined by the constant term of the Laurent series expansion.
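For instance (a sketch; taking Grandi's series as our example), choosing ''a''''n'' = (−1)''n''−1 makes the Dirichlet series above the Dirichlet eta function, whose continuation at ''s'' = 0 is 1/2, so this method also assigns 1 − 1 + 1 − 1 + ⋯ the value 1/2:

```python
# The Dirichlet series attached to Grandi's series 1 - 1 + 1 - ... is the
# Dirichlet eta function eta(s) = 1/1^s - 1/2^s + 1/3^s - ...;
# mpmath's altzeta is this eta function, and its value at s = 0 is 1/2.
import mpmath

print(mpmath.altzeta(0))   # 0.5
```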


Zeta function regularization

If the series

:f(s) = \frac{1}{a_1^s} + \frac{1}{a_2^s} + \frac{1}{a_3^s} + \cdots

(for positive values of the ''a''''n'') converges for large real ''s'' and can be analytically continued along the real line to ''s'' = −1, then its value at ''s'' = −1 is called the zeta regularized sum of the series ''a''1 + ''a''2 + ... Zeta function regularization is nonlinear. In applications, the numbers ''a''''i'' are sometimes the eigenvalues of a self-adjoint operator ''A'' with compact resolvent, and ''f''(''s'') is then the trace of ''A''−''s''. For example, if ''A'' has eigenvalues 1, 2, 3, ... then ''f''(''s'') is the Riemann zeta function, ''ζ''(''s''), whose value at ''s'' = −1 is −1/12, assigning a value to the divergent series 1 + 2 + 3 + 4 + ⋯. Other values of ''s'' can also be used to assign values for the divergent sums ''ζ''(0) = 1 + 1 + 1 + ⋯ = −1/2, ''ζ''(−2) = 1 + 4 + 9 + ⋯ = 0 and in general

:\zeta(-s)=\sum_{n=1}^\infty n^s=1^s + 2^s + 3^s + \cdots = -\frac{B_{s+1}}{s+1}\, ,

where ''B''''k'' is a Bernoulli number.
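A numerical spot-check of these values (a sketch using mpmath; only the standard zeta and bernoulli functions are used) looks like this:

```python
# Zeta-regularized values quoted above, checked numerically.
import mpmath

print(mpmath.zeta(-1))    # -0.0833333... = -1/12, assigned to "1 + 2 + 3 + ..."
print(mpmath.zeta(0))     # -0.5, assigned to "1 + 1 + 1 + ..."
s = 3
print(mpmath.zeta(-s), -mpmath.bernoulli(s + 1) / (s + 1))   # both 1/120
```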


Integral function means

If ''J''(''x'') = Σ''p''''n''''x''''n'' is an integral function, then the ''J'' sum of the series ''a''0 + ... is defined to be

:\lim_{x\rightarrow\infty}\frac{\sum_n p_n s_n x^n}{\sum_n p_n x^n},

where ''s''''n'' = ''a''0 + ⋯ + ''a''''n'' denotes the partial sums, if this limit exists. There is a variation of this method where the series for ''J'' has a finite radius of convergence ''r'' and diverges at ''x'' = ''r''. In this case one defines the sum as above, except taking the limit as ''x'' tends to ''r'' rather than infinity.


Borel summation

In the special case when ''J''(''x'') = ''e''''x'' this gives one (weak) form of Borel summation.


Valiron's method

Valiron's method is a generalization of Borel summation to certain more general integral functions ''J''. Valiron showed that under certain conditions it is equivalent to defining the sum of a series as

: \lim_{n\rightarrow\infty}\sqrt{\frac{H(n)}{2\pi}}\sum_h e^{-\frac{1}{2}H(n)(n-h)^2}(a_0+\cdots+a_h)

where ''H'' is the second derivative of ''G'' and ''c''(''n'') = ''e''−''G''(''n''), and ''a''0 + ... + ''a''''h'' is to be interpreted as 0 when ''h'' < 0.


Moment methods

Suppose that ''dμ'' is a measure on the real line such that all the moments

: \mu_n=\int x^n \, d\mu

are finite. If ''a''0 + ''a''1 + ... is a series such that

:a(x)=\frac{a_0 x^0}{\mu_0}+\frac{a_1 x^1}{\mu_1}+\cdots

converges for all ''x'' in the support of ''μ'', then the (''dμ'') sum of the series is defined to be the value of the integral

: \int a(x) \, d\mu

if it is defined. (If the numbers ''μ''''n'' increase too rapidly then they do not uniquely determine the measure ''μ''.)


Borel summation

For example, if ''dμ'' = ''e''−''x'' ''dx'' for positive ''x'' and 0 for negative ''x'' then ''μ''''n'' = ''n''!, and this gives one version of Borel summation, where the value of a sum is given by

:\int_0^\infty e^{-t}\sum\frac{a_n t^n}{n!} \, dt.

There is a generalization of this depending on a variable ''α'', called the (B′,''α'') sum, where the sum of a series ''a''0 + ... is defined to be

: \int_0^\infty e^{-t}\sum\frac{a_n t^{n\alpha}}{\Gamma(n\alpha+1)} \, dt

if this integral exists. A further generalization is to replace the sum under the integral by its analytic continuation from small ''t''.
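As an illustration (a sketch; the example series is our choice), the classical divergent series 1 − 1! + 2! − 3! + ⋯ = Σ (−1)''n'' ''n''! has Borel transform Σ (−''t'')''n'', whose analytic continuation is 1/(1 + ''t''), so the Borel integral above becomes ∫ ''e''−''t''/(1 + ''t'') ''dt'':

```python
# Borel sum of sum (-1)^n n!: the sum under the integral continues to 1/(1+t),
# so the Borel sum is the integral of exp(-t)/(1+t) over [0, inf), about 0.5963.
import mpmath

borel_sum = mpmath.quad(lambda t: mpmath.exp(-t) / (1 + t), [0, mpmath.inf])
print(borel_sum)                            # ~0.596347
print(mpmath.exp(1) * mpmath.e1(1))         # same value in closed form, e * E_1(1)
```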


Miscellaneous methods


BGN hyperreal summation

This summation method works by using an extension to the real numbers known as the hyperreal numbers. Since the hyperreal numbers include distinct infinite values, these numbers can be used to represent the values of divergent series. The key method is to designate a particular infinite value that is being summed, usually \omega, which is used as a unit of infinity. Instead of summing to an arbitrary infinity (as is typically done with \infty), the BGN method sums to the specific hyperreal infinite value labeled \omega. Therefore, the summations are of the form

:\sum_{x=1}^\omega f(x)

This allows the usage of standard formulas for finite series such as arithmetic progressions in an infinite context. For instance, using this method, the sum of the progression 1 + 2 + 3 + \ldots is \frac{\omega^2}{2} + \frac{\omega}{2}, or, using just the most significant infinite hyperreal part, \frac{\omega^2}{2}.


Hausdorff transformations



Hölder summation


Hutton's method

In 1812 Hutton introduced a method of summing divergent series by starting with the sequence of partial sums, and repeatedly applying the operation of replacing a sequence ''s''0, ''s''1, ... by the sequence of averages \frac{s_0+s_1}{2}, \frac{s_1+s_2}{2}, ..., and then taking the limit.
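A minimal sketch of one averaging step (the example series is our choice) shows how quickly this can settle for Grandi's series:

```python
# One step of Hutton's averaging applied to the partial sums of 1 - 1 + 1 - ...
def hutton_step(s):
    # replace s0, s1, ... by the pairwise averages (s0+s1)/2, (s1+s2)/2, ...
    return [(a + b) / 2 for a, b in zip(s, s[1:])]

partial_sums = [1, 0] * 8            # 1, 0, 1, 0, ... for Grandi's series
print(hutton_step(partial_sums))     # [0.5, 0.5, ...]: one step already settles at 1/2
```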


Ingham summability

The series ''a''1 + ... is called Ingham summable to ''s'' if

: \lim_{x\to\infty} \sum_{n\le x} a_n\frac{n}{x}\left[\frac{x}{n}\right] = s,

where [·] denotes the integer part. Albert Ingham showed that if ''δ'' is any positive number then (C,−''δ'') (Cesàro) summability implies Ingham summability, and Ingham summability implies (C,''δ'') summability.


Lambert summability

The series ''a''1 + ... is called Lambert summable to ''s'' if

:\lim_{y\to 0^{+}} \sum_{n\ge 1} a_n\frac{ny e^{-ny}}{1-e^{-ny}} = s.

If a series is (C,''k'') (Cesàro) summable for any ''k'' then it is Lambert summable to the same value, and if a series is Lambert summable then it is Abel summable to the same value.


Le Roy summation

The series ''a''0 + ... is called Le Roy summable to ''s'' if

: \lim_{\zeta\to 1^{-}} \sum_{n} \frac{\Gamma(1+\zeta n)}{\Gamma(1+n)} a_n = s.


Mittag-Leffler summation

The series ''a''0 + ... is called Mittag-Leffler (M) summable to ''s'' if

:\lim_{\delta\to 0} \sum_{n} \frac{a_n}{\Gamma(1+\delta n)} = s.


Ramanujan summation

Ramanujan summation is a method of assigning a value to divergent series used by Ramanujan and based on the Euler–Maclaurin summation formula. The Ramanujan sum of a series ''f''(0) + ''f''(1) + ... depends not only on the values of ''f'' at integers, but also on values of the function ''f'' at non-integral points, so it is not really a summation method in the sense of this article.


Riemann summability

The series ''a''1 + ... is called (R,''k'') (or Riemann) summable to ''s'' if

: \lim_{h\to 0} \sum_{n} a_n\left(\frac{\sin nh}{nh}\right)^k = s.

The series ''a''1 + ... is called R2 summable to ''s'' if

: \lim_{h\to 0} \frac{2}{\pi}\sum_n \frac{\sin^2 nh}{n^2 h}(a_1+\cdots + a_n) = s.


Riesz means

If ''λ''''n'' form an increasing sequence of real numbers and

: A_\lambda(x)=a_0+\cdots+a_n \text{ when } \lambda_n < x \le \lambda_{n+1}

then the Riesz (R,''λ'',''κ'') sum of the series ''a''0 + ... is defined to be

: \lim_{\omega\to\infty} \frac{\kappa}{\omega^\kappa} \int_0^\omega A_\lambda(x)(\omega-x)^{\kappa-1} \, dx.


Vallée-Poussin summability

The series ''a''1 + ... is called VP (or Vallée-Poussin) summable to ''s'' if

: \lim_{m\to\infty} \sum_{k=0}^{m} a_k\frac{[\Gamma(m+1)]^2}{\Gamma(m+1+k)\Gamma(m+1-k)} = \lim_{m\to\infty} \left[a_0+a_1\frac{m}{m+1}+a_2\frac{m(m-1)}{(m+1)(m+2)}+\cdots\right] = s,

where \Gamma(x) is the gamma function.


See also

* Silverman–Toeplitz theorem


Notes


References

* Werner Balser: ''From Divergent Power Series to Analytic Functions'', Springer-Verlag, LNM 1582, ISBN 0-387-58268-1 (1994).
* William O. Bray and Časlav V. Stanojević (Eds.): ''Analysis of Divergence'', Springer, ISBN 978-1-4612-7467-4 (1999).
* Alexander I. Saichev and Wojbor Woyczynski: ''Distributions in the Physical and Engineering Sciences, Volume 1'', Chap. 8 "Summation of divergent series and integrals", Springer (2018).