In mathematics, summation is the addition of a sequence of numbers, called ''addends'' or ''summands''; the result is their ''sum'' or ''total''. Beside numbers, other types of values can be summed as well: functions, vectors, matrices, polynomials and, in general, elements of any type of mathematical objects on which an operation denoted "+" is defined. Summations of infinite sequences are called series. They involve the concept of limit, and are not considered in this article.

The summation of an explicit sequence is denoted as a succession of additions. For example, summation of [1, 2, 4, 2] is denoted 1 + 2 + 4 + 2, and results in 9, that is, 1 + 2 + 4 + 2 = 9. Because addition is associative and commutative, there is no need for parentheses, and the result is the same irrespective of the order of the summands. Summation of a sequence of only one summand results in the summand itself. Summation of an empty sequence (a sequence with no elements), by convention, results in 0.

Very often, the elements of a sequence are defined, through a regular pattern, as a function of their place in the sequence. For simple patterns, summation of long sequences may be represented with most summands replaced by ellipses. For example, summation of the first 100 natural numbers may be written as 1 + 2 + 3 + 4 + \cdots + 99 + 100. Otherwise, summation is denoted by using Σ notation, where \sum is an enlarged capital Greek letter sigma. For example, the sum of the first ''n'' natural numbers can be denoted as
:\sum_{i=1}^n i.
For long summations, and summations of variable length (defined with ellipses or Σ notation), it is a common problem to find closed-form expressions for the result. For example,
:\sum_{i=1}^n i = \frac{n(n+1)}{2}.
Although such formulas do not always exist, many summation formulas have been discovered, with some of the most common and elementary ones being listed in the remainder of this article.
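
As a quick illustration of the closed form above, the following Python sketch (a minimal example; the variable names are arbitrary) compares a direct summation of the first 100 natural numbers with the formula \frac{n(n+1)}{2}.

<syntaxhighlight lang="python">
# Compare a direct summation of the first n natural numbers
# with the closed-form expression n(n + 1) / 2.
n = 100
direct = sum(range(1, n + 1))      # 1 + 2 + 3 + ... + 100
closed_form = n * (n + 1) // 2     # the formula given above

print(direct, closed_form)         # both print 5050
assert direct == closed_form
</syntaxhighlight>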


Notation


Capital-sigma notation

Mathematical notation uses a symbol that compactly represents summation of many similar terms: the ''summation symbol'', \sum, an enlarged form of the upright capital Greek letter sigma. This is defined as
:\sum_{i=m}^n a_i = a_m + a_{m+1} + a_{m+2} + \cdots + a_{n-1} + a_n
where ''i'' is the "index of summation" or "dummy variable", a_i is an indexed variable representing each term of the sum, ''m'' is the "lower bound of summation", and ''n'' is the "upper bound of summation". The "i = m" under the summation symbol means that the index ''i'' starts out equal to ''m''. The index, ''i'', is incremented by one for each successive term, stopping when i = n. This is read as "sum of a_i, from i = m to n". However, some notations may include the index at the upper bound of summation, or omit the index at the lower bound, as in \sum_{i=m}^{i=n} a_i or \sum_m^n a_i, respectively. In some cases the bounds are omitted altogether and only the dummy variable is written, as in \sum_i a_i.

Here is an example showing the summation of squares:
:\sum_{i=3}^6 i^2 = 3^2+4^2+5^2+6^2 = 86.
In general, while any variable can be used as the index of summation (provided that no ambiguity is incurred), some of the most common ones include letters such as i, j, k, and n; the latter is also often used for the upper bound of a summation. Alternatively, the index and bounds of summation are sometimes omitted from the definition of summation if the context is sufficiently clear. This applies particularly when the index runs from 1 to ''n''. For example, one might write that
:\sum a_i = \sum_{i=1}^n a_i.

Generalizations of this notation are often used, in which an arbitrary logical condition is supplied, and the sum is intended to be taken over all values satisfying the condition. For example, \sum_{0 \le k < 100} f(k) is an alternative notation for \sum_{k=0}^{99} f(k), the sum of f(k) over all integers k in the specified range. Similarly, \sum_{x \in S} f(x) is the sum of f(x) over all elements x in the set S, and \sum_{d \mid n}\;\mu(d) is the sum of \mu(d) over all positive integers d dividing n.

Sigma notation can also be nested. For example, a double summation is written with two sigma signs and distinct dummy variables, as in \sum_{i=1}^n \sum_{j=1}^k a_{i,j}. When both indices run over the same range, the two sums are sometimes merged into a single sign, so that
:\sum_{i=1}^n \sum_{j=1}^n a_{i,j} = \sum_{i,j=1}^n a_{i,j}.

The term ''finite sum'' is sometimes used for the summations presented above. In an infinite series, by contrast, the upper bound tends to infinity, as in \sum_{i=1}^\infty a_i; such a series converges if the sum approaches a finite limit and diverges otherwise. The bounds of an infinite series may alternatively be written as \sum_{i \ge 1} a_i. Relatedly, a similar notation is used for the product of a sequence, where \prod, an enlarged form of the Greek capital letter pi, is used instead of \sum.
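
Read computationally, the sigma notation \sum_{i=m}^{n} a_i is simply a loop over the index, and a double summation is a nested loop. A minimal Python sketch (the function name sigma is purely illustrative):

<syntaxhighlight lang="python">
def sigma(a, m, n):
    """Evaluate sum_{i=m}^{n} a(i): start at i = m, stop after i = n."""
    total = 0
    for i in range(m, n + 1):   # the index i is incremented by one each step
        total += a(i)
    return total

# The summation of squares from the text: i = 3, ..., 6.
print(sigma(lambda i: i ** 2, 3, 6))   # 86

# A double summation is a nested loop; here sum_{i=1}^{5} sum_{j=1}^{3} i*j = 90.
print(sum(sum(i * j for j in range(1, 4)) for i in range(1, 6)))
</syntaxhighlight>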


Special cases

It is possible to sum fewer than 2 numbers:
* If the summation has one summand x, then the evaluated sum is x.
* If the summation has no summands, then the evaluated sum is zero, because zero is the identity for addition. This is known as the ''empty sum''.
These degenerate cases are usually only used when the summation notation gives a degenerate result in a special case. For example, if n=m in the definition above, then there is only one term in the sum; if n=m-1, then there is none.
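
These conventions agree with the behaviour of most programming environments; for instance, Python's built-in sum returns the single element of a one-element sequence and 0 for an empty sequence:

<syntaxhighlight lang="python">
print(sum([7]))   # a single summand evaluates to itself: 7
print(sum([]))    # the empty sum evaluates to the additive identity: 0
</syntaxhighlight>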


Algebraic sum

The phrase 'algebraic sum' refers to a sum of terms which may have positive or negative signs. Terms with positive signs are added, while terms with negative signs are subtracted; for example, the algebraic sum of +1 and −1 is 1 − 1 = 0.


History

The origin of the summation notation dates back to 1675, when Gottfried Wilhelm Leibniz, in a letter to Henry Oldenburg, suggested the symbol \int to mark the sum of differentials (Latin: ''calculus summatorius''), hence the S-shape. The renaming of this symbol to ''integral'' arose later in exchanges with Johann Bernoulli. In 1755, the summation symbol Σ is attested in Leonhard Euler's ''Institutiones calculi differentialis'', where Euler uses it in expressions such as \sum (2wx + w^2) = x^2. The sigma notation was later adopted by other mathematicians, such as Lagrange, who wrote \sum and \sum^n in 1772. Fourier and C. G. J. Jacobi also used the sigma notation in 1829, with Fourier including lower and upper bounds, as in \sum_{i=1}^{\infty}\ldots. Besides the sigma notation, the capital letter ''S'' is attested as a summation symbol for series in 1823, a usage that was apparently widespread.


Formal definition

Summation may be defined recursively as follows:
:\sum_{i=a}^b g(i)=0, for b < a;
:\sum_{i=a}^b g(i)=g(b)+\sum_{i=a}^{b-1} g(i), for b \geqslant a.
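
The recursive definition translates directly into code. A minimal Python sketch (the name sum_recursive is illustrative, not standard):

<syntaxhighlight lang="python">
def sum_recursive(g, a, b):
    """Evaluate sum_{i=a}^{b} g(i) by the recursive definition above."""
    if b < a:
        return 0                                  # base case: the empty sum
    return g(b) + sum_recursive(g, a, b - 1)      # peel off the last term

print(sum_recursive(lambda i: i, 1, 100))   # 5050
</syntaxhighlight>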


Measure theory notation

In the notation of measure and integration theory, a sum can be expressed as a definite integral,
:\sum_{k=a}^b f(k) = \int_{[a,b]} f\,d\mu
where [a, b] is the subset of the integers from a to b, and where \mu is the counting measure over the integers.


Calculus of finite differences

Given a function ''f'' that is defined over the integers in the interval [''m'', ''n''], the following equation holds:
:f(n)-f(m)= \sum_{i=m}^{n-1} (f(i+1)-f(i)).
This is known as a telescoping series and is the analogue of the fundamental theorem of calculus in calculus of finite differences, which states that:
:f(n)-f(m)=\int_m^n f'(x)\,dx,
where
:f'(x)=\lim_{h\to 0} \frac{f(x+h)-f(x)}{h}
is the derivative of ''f''.

An example of application of the above equation is the following:
:n^k=\sum_{i=0}^{n-1} \left((i+1)^k-i^k\right).
Using the binomial theorem, this may be rewritten as:
:n^k=\sum_{i=0}^{n-1} \biggl(\sum_{j=0}^{k-1} \binom{k}{j} i^j\biggr).

The above formula is more commonly used for inverting the difference operator \Delta, defined by:
:\Delta(f)(n)=f(n+1)-f(n),
where ''f'' is a function defined on the nonnegative integers. Thus, given such a function ''f'', the problem is to compute the antidifference of ''f'', a function F=\Delta^{-1}f such that \Delta F=f. That is, F(n+1)-F(n)=f(n). This function is defined up to the addition of a constant, and may be chosen as
:F(n)=\sum_{i=0}^{n-1} f(i)
(''Handbook of Discrete and Combinatorial Mathematics'', Kenneth H. Rosen, John G. Michaels, CRC Press, 1999).
There is not always a closed-form expression for such a summation, but Faulhaber's formula provides a closed form in the case where f(n)=n^k and, by linearity, for every polynomial function of ''n''.
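
As a numerical check of the antidifference construction, the sketch below takes the illustrative choice f(n) = n^2 (any function on the nonnegative integers would do), builds F(n) = \sum_{i=0}^{n-1} f(i), and verifies that \Delta F = f together with the telescoping identity.

<syntaxhighlight lang="python">
def f(n):
    return n ** 2                       # an illustrative choice of f

def F(n):
    """Antidifference chosen as F(n) = sum_{i=0}^{n-1} f(i)."""
    return sum(f(i) for i in range(n))

# Delta F = f, i.e. F(n + 1) - F(n) = f(n), for a few values of n.
for n in range(10):
    assert F(n + 1) - F(n) == f(n)

# Telescoping: f(n) - f(m) = sum_{i=m}^{n-1} (f(i + 1) - f(i)).
m, n = 3, 12
assert f(n) - f(m) == sum(f(i + 1) - f(i) for i in range(m, n))
print("checks passed")
</syntaxhighlight>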


Approximation by definite integrals

Many such approximations can be obtained by the following connection between sums and integrals, which holds for any increasing function ''f'':
:\int_{s=a-1}^{b} f(s)\ ds \le \sum_{i=a}^{b} f(i) \le \int_{s=a}^{b+1} f(s)\ ds,
and for any decreasing function ''f'':
:\int_{s=a}^{b+1} f(s)\ ds \le \sum_{i=a}^{b} f(i) \le \int_{s=a-1}^{b} f(s)\ ds.
For more general approximations, see the Euler–Maclaurin formula.

For summations in which the summand is given (or can be interpolated) by an integrable function of the index, the summation can be interpreted as a Riemann sum occurring in the definition of the corresponding definite integral. One can therefore expect that for instance
:\frac{b-a}{n}\sum_{i=0}^{n-1} f\left(a+i\frac{b-a}{n}\right) \approx \int_a^b f(x)\ dx,
since the right-hand side is by definition the limit for n\to\infty of the left-hand side. However, for a given summation ''n'' is fixed, and little can be said about the error in the above approximation without additional assumptions about ''f'': it is clear that for wildly oscillating functions the Riemann sum can be arbitrarily far from the Riemann integral.
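
The bounds for monotonic summands are easy to check numerically. The sketch below uses the increasing function f(s) = s^2 as an illustrative example, with the integral evaluated from its antiderivative s^3/3.

<syntaxhighlight lang="python">
def f(s):
    return s ** 2                      # an increasing function

def integral_f(lo, hi):
    """Exact integral of s^2 from lo to hi, via the antiderivative s^3 / 3."""
    return (hi ** 3 - lo ** 3) / 3

a, b = 2, 10
total = sum(f(i) for i in range(a, b + 1))      # sum_{i=a}^{b} f(i) = 384

lower = integral_f(a - 1, b)                    # int_{a-1}^{b}  f(s) ds = 333.0
upper = integral_f(a, b + 1)                    # int_{a}^{b+1}  f(s) ds = 441.0
assert lower <= total <= upper
print(lower, total, upper)
</syntaxhighlight>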


Identities

The formulae below involve finite sums; for infinite summations or finite summations of expressions involving trigonometric functions or other transcendental functions, see list of mathematical series.


General identities

:\sum_{n=s}^t C\cdot f(n) = C\cdot \sum_{n=s}^t f(n) \quad (distributivity)
:\sum_{n=s}^t f(n) \pm \sum_{n=s}^{t} g(n) = \sum_{n=s}^t \left(f(n) \pm g(n)\right)\quad (commutativity and associativity)
:\sum_{n=s}^t f(n) = \sum_{n=s+p}^{t+p} f(n-p)\quad (index shift)
:\sum_{n\in B} f(n) = \sum_{m\in A} f(\sigma(m)), \quad for a bijection \sigma from a finite set A onto a set B (index change); this generalizes the preceding formula.
:\sum_{n=s}^t f(n) =\sum_{n=s}^j f(n) + \sum_{n=j+1}^t f(n)\quad (splitting a sum, using associativity)
:\sum_{n=a}^{b}f(n)=\sum_{n=0}^{b}f(n)-\sum_{n=0}^{a-1}f(n)\quad (a variant of the preceding formula)
:\sum_{n=s}^t f(n) = \sum_{n=0}^{t-s} f(t-n)\quad (the sum from the first term up to the last is equal to the sum from the last down to the first)
:\sum_{n=0}^t f(n) = \sum_{n=0}^{t} f(t-n)\quad (a particular case of the formula above)
:\sum_{i=k_0}^{k_1}\sum_{j=l_0}^{l_1} a_{i,j} = \sum_{j=l_0}^{l_1}\sum_{i=k_0}^{k_1} a_{i,j}\quad (commutativity and associativity, again)
:\sum_{k\le j\le i\le n} a_{i,j} = \sum_{i=k}^n\sum_{j=k}^i a_{i,j} = \sum_{j=k}^n\sum_{i=j}^n a_{i,j} = \sum_{j=0}^{n-k}\sum_{i=k}^{n-j} a_{i+j,i}\quad (another application of commutativity and associativity)
:\sum_{n=0}^{2t+1} f(n) = \sum_{n=0}^t f(2n) + \sum_{n=0}^t f(2n+1)\quad (splitting a sum into its odd and even parts, for even indexes)
:\sum_{n=1}^{2t} f(n) = \sum_{n=1}^t f(2n) + \sum_{n=1}^t f(2n-1)\quad (splitting a sum into its odd and even parts, for odd indexes)
:\biggl(\sum_{i=0}^n a_i\biggr) \biggl(\sum_{j=0}^n b_j\biggr)=\sum_{i=0}^n \sum_{j=0}^n a_ib_j \quad (distributivity)
:\sum_{i=1}^m\sum_{j=1}^n a_i c_j = \biggl(\sum_{i=1}^m a_i\biggr) \biggl( \sum_{j=1}^n c_j \biggr)\quad (distributivity allows factorization)
:\sum_{n=s}^t \log_b f(n) = \log_b \prod_{n=s}^t f(n)\quad (the logarithm of a product is the sum of the logarithms of the factors)
:C^{\left(\sum_{n=s}^t f(n)\right)} = \prod_{n=s}^t C^{f(n)}\quad (the exponential of a sum is the product of the exponentials of the summands)
:\sum_{m=0}^k\sum_{n=0}^m f(m,n)=\sum_{m=0}^k\sum_{n=m}^k f(n,m),\quad for any function f from \mathbb{Z}\times\mathbb{Z}.
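
Most of these identities are easy to spot-check numerically. A brief Python sketch verifying the index-shift, splitting, and reversal rules for an arbitrary illustrative function of the index:

<syntaxhighlight lang="python">
def f(n):
    return 3 * n + 1                    # any function of the index will do

s, t, p, j = 2, 9, 4, 5
total = sum(f(n) for n in range(s, t + 1))           # sum_{n=s}^{t} f(n)

# Index shift: sum_{n=s}^{t} f(n) = sum_{n=s+p}^{t+p} f(n - p)
assert total == sum(f(n - p) for n in range(s + p, t + p + 1))

# Splitting: sum_{n=s}^{t} f(n) = sum_{n=s}^{j} f(n) + sum_{n=j+1}^{t} f(n)
assert total == sum(f(n) for n in range(s, j + 1)) + sum(f(n) for n in range(j + 1, t + 1))

# Reversal: sum_{n=s}^{t} f(n) = sum_{n=0}^{t-s} f(t - n)
assert total == sum(f(t - n) for n in range(t - s + 1))
print("identities verified")
</syntaxhighlight>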


Powers and logarithm of arithmetic progressions

:\sum_{i=1}^n c = nc\quad for every ''c'' that does not depend on ''i''
:\sum_{i=0}^n i = \sum_{i=1}^n i = \frac{n(n+1)}{2}\qquad (Sum of the simplest arithmetic progression, consisting of the first ''n'' natural numbers.)
:\sum_{i=1}^n (2i-1) = n^2\qquad (Sum of first odd natural numbers)
:\sum_{i=0}^{n} 2i = n(n+1)\qquad (Sum of first even natural numbers)
:\sum_{i=1}^{n} \log i = \log (n!)\qquad (A sum of logarithms is the logarithm of the product)
:\sum_{i=0}^n i^2 = \sum_{i=1}^n i^2 = \frac{n(n+1)(2n+1)}{6} = \frac{n^3}{3} + \frac{n^2}{2} + \frac{n}{6}\qquad (Sum of the first squares, see square pyramidal number.)
:\sum_{i=0}^n i^3 = \biggl(\sum_{i=0}^n i \biggr)^2 = \left(\frac{n(n+1)}{2}\right)^2 = \frac{n^4}{4} + \frac{n^3}{2} + \frac{n^2}{4}\qquad (Nicomachus's theorem)
More generally, one has Faulhaber's formula for p>1:
:\sum_{k=1}^n k^{p} = \frac{n^{p+1}}{p+1} + \frac{1}{2}n^p + \sum_{k=2}^p \binom{p}{k} \frac{B_k}{p-k+1}\,n^{p-k+1},
where B_k denotes a Bernoulli number, and \binom{p}{k} is a binomial coefficient.
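
The power-sum formulas above, including Faulhaber's formula, can be verified numerically. The sketch below hard-codes the Bernoulli numbers B_2, ..., B_6 (sufficient for p up to 6; B_1 is effectively absorbed into the \tfrac{1}{2}n^p term) and uses exact fractions.

<syntaxhighlight lang="python">
from fractions import Fraction
from math import comb

# Bernoulli numbers B_2 ... B_6 (the odd ones beyond B_1 are zero).
B = {2: Fraction(1, 6), 3: Fraction(0), 4: Fraction(-1, 30),
     5: Fraction(0), 6: Fraction(1, 42)}

def faulhaber(n, p):
    """Closed form for sum_{k=1}^{n} k^p via Faulhaber's formula (2 <= p <= 6)."""
    total = Fraction(n ** (p + 1), p + 1) + Fraction(n ** p, 2)
    for k in range(2, p + 1):
        total += comb(p, k) * B[k] / (p - k + 1) * n ** (p - k + 1)
    return total

n = 10
for p in range(2, 7):
    assert faulhaber(n, p) == sum(k ** p for k in range(1, n + 1))
print("Faulhaber's formula verified for p = 2, ..., 6")
</syntaxhighlight>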


Summation index in exponents

In the following summations, ''a'' is assumed to be different from 1.
:\sum_{i=0}^{n-1} a^i = \frac{1-a^n}{1-a} (sum of a geometric progression)
:\sum_{i=0}^{n-1} \frac{1}{2^i} = 2-\frac{1}{2^{n-1}} (special case for ''a'' = 1/2)
:\sum_{i=0}^{n-1} i a^i =\frac{a-na^n+(n-1)a^{n+1}}{(1-a)^2} (''a'' times the derivative with respect to ''a'' of the geometric progression)
:\begin{align} \sum_{i=0}^{n-1} \left(b + i d\right) a^i &= b \sum_{i=0}^{n-1} a^i + d \sum_{i=0}^{n-1} i a^i\\ & = b \left(\frac{1-a^n}{1-a}\right) + d \left(\frac{a-na^n+(n-1)a^{n+1}}{(1-a)^2}\right)\\ & = \frac{b(1-a^n)}{1-a}+\frac{d(a-na^n+(n-1)a^{n+1})}{(1-a)^2} \end{align}
:::(sum of an arithmetico–geometric sequence)
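
A quick numerical check of the geometric and arithmetico-geometric closed forms, with a = 3, b = 5, d = 2 and n = 8 as illustrative values (the divisions are exact here, so integer arithmetic suffices):

<syntaxhighlight lang="python">
a, b, d, n = 3, 5, 2, 8

geometric = sum(a ** i for i in range(n))                   # sum_{i=0}^{n-1} a^i
assert geometric == (1 - a ** n) // (1 - a)

weighted = sum(i * a ** i for i in range(n))                # sum_{i=0}^{n-1} i a^i
assert weighted == (a - n * a ** n + (n - 1) * a ** (n + 1)) // (1 - a) ** 2

arith_geom = sum((b + i * d) * a ** i for i in range(n))    # sum_{i=0}^{n-1} (b + i d) a^i
assert arith_geom == b * geometric + d * weighted
print(geometric, weighted, arith_geom)
</syntaxhighlight>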


Binomial coefficients and factorials

There exist very many summation identities involving binomial coefficients (a whole chapter of '' Concrete Mathematics'' is devoted to just the basic techniques). Some of the most basic ones are the following.


Involving the binomial theorem

:\sum_{i=0}^n \binom{n}{i} a^{n-i} b^i=(a + b)^n, the binomial theorem
:\sum_{i=0}^n \binom{n}{i} = 2^n, the special case where ''a'' = ''b'' = 1
:\sum_{i=0}^n \binom{n}{i} p^i (1-p)^{n-i}=1, the special case where ''p'' = ''a'' = 1 − ''b'', which, for 0 \le p \le 1, expresses the sum of the binomial distribution
:\sum_{i=0}^n i \binom{n}{i} = n(2^{n-1}), the value at ''a'' = ''b'' = 1 of the derivative with respect to ''a'' of the binomial theorem
:\sum_{i=0}^n \frac{\binom{n}{i}}{i+1} = \frac{2^{n+1}-1}{n+1}, the value at ''a'' = ''b'' = 1 of the antiderivative with respect to ''a'' of the binomial theorem
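
These binomial sums can be confirmed directly with math.comb; the values of n, a, b and p below are arbitrary illustrative choices.

<syntaxhighlight lang="python">
from fractions import Fraction
from math import comb

n, a, b, p = 7, 2, 5, 0.3

assert sum(comb(n, i) * a ** (n - i) * b ** i for i in range(n + 1)) == (a + b) ** n
assert sum(comb(n, i) for i in range(n + 1)) == 2 ** n
assert abs(sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(n + 1)) - 1) < 1e-12
assert sum(i * comb(n, i) for i in range(n + 1)) == n * 2 ** (n - 1)
# The last identity is checked with exact fractions to avoid rounding error.
assert sum(Fraction(comb(n, i), i + 1) for i in range(n + 1)) == Fraction(2 ** (n + 1) - 1, n + 1)
print("binomial identities verified")
</syntaxhighlight>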


Involving permutation numbers

In the following summations, {}_{n}P_{k} is the number of ''k''-permutations of ''n''.
:\sum_{i=0}^{n} {}_{i}P_{k}\binom{n}{i} = {}_{n}P_{k}\left(2^{n-k}\right)
:\sum_{i=1}^{n} {}_{i+k}P_{k+1} = \sum_{i=1}^{n} \prod_{j=0}^{k} (i+j) = \frac{(n+k+1)!}{(n-1)!\,(k+2)}
:\sum_{i=0}^{n} i!\cdot\binom{n}{i} = \sum_{i=0}^{n} {}_{n}P_{i} = \lfloor n! \cdot e \rfloor, \quad n \in \mathbb{Z}^+,
where ''e'' is Euler's number and \lfloor x\rfloor denotes the floor function.
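
A brief check of the last identity, using math.perm for the permutation numbers; the range of ''n'' is kept small so that the floating-point value of n! \cdot e is still accurate enough for the floor.

<syntaxhighlight lang="python">
from math import comb, e, factorial, floor, perm

for n in range(1, 10):
    lhs = sum(factorial(i) * comb(n, i) for i in range(n + 1))
    mid = sum(perm(n, i) for i in range(n + 1))     # nPi = n! / (n - i)!
    assert lhs == mid == floor(factorial(n) * e)
print("floor(n! * e) identity verified for n = 1, ..., 9")
</syntaxhighlight>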


Others

:\sum_{k=0}^{m} \binom{n+k}{k} = \binom{n+m+1}{m}
:\sum_{i=k}^{n} \binom{i}{k} = \binom{n+1}{k+1}
:\sum_{i=0}^{n} i\cdot i! = (n+1)! - 1
:\sum_{i=0}^{n} \binom{m+i-1}{i} = \binom{m+n}{n}
:\sum_{i=0}^{n} \binom{n}{i}^2 = \binom{2n}{n}
:\sum_{i=0}^{n} \frac{1}{i!} = \frac{\lfloor n!\,e \rfloor}{n!}
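
Another quick numerical spot-check of a few of the identities above, with illustrative bounds n = 8, m = 5, k = 3:

<syntaxhighlight lang="python">
from math import comb, factorial

n, m, k = 8, 5, 3

assert sum(comb(n + j, j) for j in range(m + 1)) == comb(n + m + 1, m)
assert sum(comb(i, k) for i in range(k, n + 1)) == comb(n + 1, k + 1)
assert sum(i * factorial(i) for i in range(n + 1)) == factorial(n + 1) - 1
assert sum(comb(n, i) ** 2 for i in range(n + 1)) == comb(2 * n, n)
print("identities verified")
</syntaxhighlight>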


Harmonic numbers

:\sum_{i=1}^n \frac{1}{i} = H_n\quad (the ''n''th harmonic number)
:\sum_{i=1}^n \frac{1}{i^k} = H^k_n\quad (a generalized harmonic number)
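
The harmonic numbers can be evaluated exactly with rational arithmetic; the helper below is a minimal illustrative sketch.

<syntaxhighlight lang="python">
from fractions import Fraction

def harmonic(n, k=1):
    """Generalized harmonic number H_n^{(k)} = sum_{i=1}^{n} 1 / i^k."""
    return sum(Fraction(1, i ** k) for i in range(1, n + 1))

print(harmonic(4))       # H_4 = 25/12
print(harmonic(3, 2))    # H_3^{(2)} = 49/36
</syntaxhighlight>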


Growth rates

The following are useful approximations (using theta notation):
:\sum_{i=1}^n i^c \in \Theta(n^{c+1}) for real ''c'' greater than −1
:\sum_{i=1}^n \frac{1}{i} \in \Theta(\log_e n) (See Harmonic number)
:\sum_{i=1}^n c^i \in \Theta(c^n) for real ''c'' greater than 1
:\sum_{i=1}^n \log(i)^c \in \Theta(n \cdot \log(n)^{c}) for non-negative real ''c''
:\sum_{i=1}^n \log(i)^c \cdot i^d \in \Theta(n^{d+1} \cdot \log(n)^{c}) for non-negative real ''c'', ''d''
:\sum_{i=1}^n \log(i)^c \cdot i^d \cdot b^i \in \Theta (n^d \cdot \log(n)^c \cdot b^n) for non-negative real ''c'', ''d'' and real ''b'' greater than 1
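
The first of these estimates can be illustrated numerically: the ratio \sum_{i=1}^n i^c / n^{c+1} settles toward the constant 1/(c+1) as ''n'' grows (the exponent c = 2 below is an illustrative choice).

<syntaxhighlight lang="python">
c = 2
for n in (10, 100, 1000, 10000):
    ratio = sum(i ** c for i in range(1, n + 1)) / n ** (c + 1)
    print(n, ratio)      # approaches 1 / (c + 1) = 0.333...
</syntaxhighlight>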


See also

* Capital-pi notation
* Einstein notation
* Iverson bracket
* Iterated binary operation
* Kahan summation algorithm
* Product (mathematics)
* Summation by parts

