In calculus, and more generally in mathematical analysis, integration by parts or partial integration is a process that finds the integral of a product of functions in terms of the integral of the product of their derivative and antiderivative. It is frequently used to transform the antiderivative of a product of functions into an antiderivative for which a solution can be more easily found. The rule can be thought of as an integral version of the product rule of differentiation; it is indeed derived using the product rule.

The integration by parts formula states:
\begin{align}
\int_a^b u(x) v'(x) \, dx &= \Big[u(x) v(x)\Big]_a^b - \int_a^b u'(x) v(x) \, dx \\
&= u(b) v(b) - u(a) v(a) - \int_a^b u'(x) v(x) \, dx.
\end{align}
Or, letting u = u(x) and du = u'(x)\,dx while v = v(x) and dv = v'(x)\,dx, the formula can be written more compactly:
\int u \, dv \ =\ uv - \int v \, du.
The former expression is written as a definite integral and the latter as an indefinite integral. Applying the appropriate limits to the latter expression should yield the former, but the latter is not necessarily equivalent to the former.

Mathematician Brook Taylor discovered integration by parts, first publishing the idea in 1715. More general formulations of integration by parts exist for the Riemann–Stieltjes and Lebesgue–Stieltjes integrals. The discrete analogue for sequences is called summation by parts.


Theorem


Product of two functions

The theorem can be derived as follows. For two continuously differentiable functions u(x) and v(x), the product rule states:
\Big(u(x)v(x)\Big)' = u'(x) v(x) + u(x) v'(x).
Integrating both sides with respect to x,
\int \Big(u(x)v(x)\Big)'\,dx = \int u'(x)v(x)\,dx + \int u(x)v'(x)\,dx,
and noting that an indefinite integral is an antiderivative gives
u(x)v(x) = \int u'(x)v(x)\,dx + \int u(x)v'(x)\,dx,
where we omit writing the constant of integration. This yields the formula for integration by parts:
\int u(x)v'(x)\,dx = u(x)v(x) - \int u'(x)v(x)\,dx,
or, in terms of the differentials du = u'(x)\,dx and dv = v'(x)\,dx,
\int u(x)\,dv = u(x)v(x) - \int v(x)\,du.
This is to be understood as an equality of functions with an unspecified constant added to each side. Taking the difference of each side between two values x = a and x = b and applying the fundamental theorem of calculus gives the definite integral version:
\int_a^b u(x) v'(x) \, dx = u(b) v(b) - u(a) v(a) - \int_a^b u'(x) v(x) \, dx.
The original integral \int uv'\,dx contains the derivative v'; to apply the theorem, one must find v, the antiderivative of v', then evaluate the resulting integral \int vu'\,dx.
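The definite-integral version can be checked symbolically. The following is a minimal SymPy sketch (not part of the original derivation); the choices u(x) = x^2, v(x) = sin(x) and the interval [0, pi] are arbitrary illustrations.

```python
# Sanity check of  \int_a^b u v' dx = u(b)v(b) - u(a)v(a) - \int_a^b u' v dx
import sympy as sp

x = sp.symbols('x')
a, b = 0, sp.pi
u = x**2          # arbitrary smooth choice
v = sp.sin(x)     # arbitrary smooth choice

lhs = sp.integrate(u * sp.diff(v, x), (x, a, b))
rhs = (u * v).subs(x, b) - (u * v).subs(x, a) \
      - sp.integrate(sp.diff(u, x) * v, (x, a, b))

assert sp.simplify(lhs - rhs) == 0
print(lhs)   # should print -2*pi for these particular choices
```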


Validity for less smooth functions

It is not necessary for u and v to be continuously differentiable. Integration by parts works if u is absolutely continuous and the function designated v' is Lebesgue integrable (but not necessarily continuous). (If v' has a point of discontinuity then its antiderivative v may not have a derivative at that point.)

If the interval of integration is not compact, then it is not necessary for u to be absolutely continuous in the whole interval or for v' to be Lebesgue integrable in the interval, as a couple of examples (in which u and v are continuous and continuously differentiable) will show. For instance, if
u(x) = e^x/x^2, \quad v'(x) = e^{-x},
then u is not absolutely continuous on the interval [1, \infty), but nevertheless
\int_1^\infty u(x)v'(x)\,dx = \Big[u(x)v(x)\Big]_1^\infty - \int_1^\infty u'(x)v(x)\,dx
so long as \Big[u(x)v(x)\Big]_1^\infty is taken to mean the limit of u(L)v(L) - u(1)v(1) as L \to \infty and so long as the two terms on the right-hand side are finite. This is only true if we choose v(x) = -e^{-x}. Similarly, if
u(x) = e^{-x}, \quad v'(x) = x^{-1}\sin(x),
then v' is not Lebesgue integrable on the interval [1, \infty), but nevertheless
\int_1^\infty u(x)v'(x)\,dx = \Big[u(x)v(x)\Big]_1^\infty - \int_1^\infty u'(x)v(x)\,dx
with the same interpretation. One can also easily come up with similar examples in which u and v are ''not'' continuously differentiable.

Further, if f(x) is a function of bounded variation on the segment [a, b], and \varphi(x) is differentiable on [a, b], then
\int_a^b f(x)\varphi'(x)\,dx = -\int_{-\infty}^{\infty} \widetilde\varphi(x)\,d\big(\widetilde\chi_{[a,b]}(x)\widetilde f(x)\big),
where d\big(\widetilde\chi_{[a,b]}(x)\widetilde f(x)\big) denotes the signed measure corresponding to the function of bounded variation \chi_{[a,b]}(x)f(x), and the functions \widetilde f, \widetilde\varphi are extensions of f, \varphi to \R, which are respectively of bounded variation and differentiable.


Product of many functions

Integrating the product rule for three multiplied functions, u(x), v(x), w(x), gives a similar result:
\int_a^b u v \, dw \ =\ \Big[u v w\Big]_a^b - \int_a^b u w \, dv - \int_a^b v w \, du.
In general, for n factors
\left(\prod_{i=1}^n u_i(x) \right)' \ =\ \sum_{j=1}^n u_j'(x)\prod_{i\neq j} u_i(x),
which leads to
\left[\prod_{i=1}^n u_i(x) \right]_a^b \ =\ \sum_{j=1}^n \int_a^b u_j'(x) \prod_{i\neq j} u_i(x)\,dx.
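The three-factor formula can likewise be verified symbolically; this short SymPy sketch uses the arbitrary illustrative choices u = x, v = sin(x), w = e^x on [0, 1].

```python
# Check of  \int_a^b u v dw = [u v w]_a^b - \int_a^b u w dv - \int_a^b v w du
import sympy as sp

x = sp.symbols('x')
a, b = 0, 1
u, v, w = x, sp.sin(x), sp.exp(x)   # arbitrary smooth choices

lhs = sp.integrate(u * v * sp.diff(w, x), (x, a, b))
boundary = (u * v * w).subs(x, b) - (u * v * w).subs(x, a)
rhs = boundary - sp.integrate(u * w * sp.diff(v, x), (x, a, b)) \
               - sp.integrate(v * w * sp.diff(u, x), (x, a, b))

assert sp.simplify(lhs - rhs) == 0
```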


Visualization

Consider a parametric curve (x, y) = (f(t), g(t)). Assuming that the curve is locally one-to-one and integrable, we can define
\begin{align}
x(y) &= f(g^{-1}(y)) \\
y(x) &= g(f^{-1}(x)).
\end{align}
The area of the blue region is
A_1 = \int_{y_1}^{y_2} x(y) \, dy.
Similarly, the area of the red region is
A_2 = \int_{x_1}^{x_2} y(x)\,dx.
The total area ''A''1 + ''A''2 is equal to the area of the bigger rectangle, ''x''2''y''2, minus the area of the smaller one, ''x''1''y''1:
\overbrace{\int_{y_1}^{y_2} x(y)\,dy}^{A_1} + \overbrace{\int_{x_1}^{x_2} y(x)\,dx}^{A_2} \ =\ \biggl. x \cdot y(x)\biggr|_{x_1}^{x_2} \ =\ \biggl. y \cdot x(y)\biggr|_{y_1}^{y_2}.
Or, in terms of ''t'',
\int_{t_1}^{t_2} x(t) \, dy(t) + \int_{t_1}^{t_2} y(t) \, dx(t) \ =\ \biggl. x(t)y(t) \biggr|_{t_1}^{t_2}.
Or, in terms of indefinite integrals, this can be written as
\int x\,dy + \int y \,dx \ =\ xy.
Rearranging:
\int x\,dy \ =\ xy - \int y \,dx.
Thus integration by parts may be thought of as deriving the area of the blue region from the area of rectangles and that of the red region.

This visualization also explains why integration by parts may help find the integral of an inverse function ''f''−1(''x'') when the integral of the function ''f''(''x'') is known. Indeed, the functions ''x''(''y'') and ''y''(''x'') are inverses, and the integral ∫ ''x'' ''dy'' may be calculated as above from knowing the integral ∫ ''y'' ''dx''. In particular, this explains use of integration by parts to integrate logarithm and inverse trigonometric functions. In fact, if f is a differentiable one-to-one function on an interval, then integration by parts can be used to derive a formula for the integral of f^{-1} in terms of the integral of f. This is demonstrated in the article, Integral of inverse functions.
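As a small illustration of this inverse-function use, the identity ∫ x dy = xy − ∫ y dx reproduces the antiderivative of the logarithm from that of the exponential. The SymPy sketch below uses the arbitrary choice f = exp (so f^{-1} = log).

```python
# \int log(y) dy obtained from x*y - \int e^x dx with x = log(y)
import sympy as sp

x, y = sp.symbols('x y', positive=True)
f = sp.exp(x)        # y = f(x), an arbitrary one-to-one example
finv = sp.log(y)     # x = f^{-1}(y)

direct = sp.integrate(finv, y)                              # y*log(y) - y
via_parts = finv * y - sp.integrate(f, x).subs(x, finv)     # x*y - \int y dx

assert sp.simplify(direct - via_parts) == 0
```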


Applications


Finding antiderivatives

Integration by parts is a heuristic rather than a purely mechanical process for solving integrals; given a single function to integrate, the typical strategy is to carefully separate this single function into a product of two functions ''u''(''x'')''v''(''x'') such that the residual integral from the integration by parts formula is easier to evaluate than the single function. The following form is useful in illustrating the best strategy to take:
\int uv\,dx = u \int v\,dx - \int\left(u' \int v\,dx \right)\,dx.
On the right-hand side, ''u'' is differentiated and ''v'' is integrated; consequently it is useful to choose ''u'' as a function that simplifies when differentiated, or to choose ''v'' as a function that simplifies when integrated. As a simple example, consider:
\int\frac{\ln(x)}{x^2}\,dx.
Since the derivative of \ln(x) is \frac{1}{x}, one makes \ln(x) part ''u''; since the antiderivative of \frac{1}{x^2} is -\frac{1}{x}, one makes \frac{1}{x^2} part ''v''. The formula now yields:
\int\frac{\ln(x)}{x^2}\,dx = -\frac{\ln(x)}{x} - \int \biggl(\frac{1}{x}\biggr) \biggl(-\frac{1}{x}\biggr)\,dx.
The antiderivative of -\frac{1}{x^2} can be found with the power rule and is \frac{1}{x}, so the result is -\frac{\ln(x)}{x} - \frac{1}{x} + C.

Alternatively, one may choose ''u'' and ''v'' such that the product ''u''′ (∫''v'' ''dx'') simplifies due to cancellation. For example, suppose one wishes to integrate:
\int\sec^2(x)\cdot\ln\Big(\bigl|\sin(x)\bigr|\Big)\,dx.
If we choose ''u''(''x'') = ln(|sin(''x'')|) and ''v''(''x'') = sec2''x'', then ''u'' differentiates to \frac{1}{\tan x} using the chain rule and ''v'' integrates to tan ''x''; so the formula gives:
\int\sec^2(x)\cdot\ln\Big(\bigl|\sin(x)\bigr|\Big)\,dx = \tan(x)\cdot\ln\Big(\bigl|\sin(x)\bigr|\Big)-\int\tan(x)\cdot\frac{1}{\tan(x)} \, dx.
The integrand simplifies to 1, so the antiderivative is ''x''. Finding a simplifying combination frequently involves experimentation.

In some applications, it may not be necessary to ensure that the integral produced by integration by parts has a simple form; for example, in numerical analysis, it may suffice that it has small magnitude and so contributes only a small error term. Some other special techniques are demonstrated in the examples below.
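The two worked antiderivatives above can be confirmed by differentiation; a brief SymPy sketch (the restriction to sin(x) > 0 is an assumption made so that |sin(x)| = sin(x)):

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# Example 1: \int ln(x)/x^2 dx = -ln(x)/x - 1/x + C
F1 = -sp.log(x)/x - 1/x
assert sp.simplify(sp.diff(F1, x) - sp.log(x)/x**2) == 0

# Example 2: \int sec(x)^2 ln|sin(x)| dx = tan(x) ln|sin(x)| - x + C
# (checked on an interval where sin(x) > 0)
F2 = sp.tan(x)*sp.log(sp.sin(x)) - x
assert sp.simplify(sp.diff(F2, x) - sp.sec(x)**2*sp.log(sp.sin(x))) == 0
```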


Polynomials and trigonometric functions

In order to calculate
I = \int x\cos(x)\,dx,
let:
\begin{align}
u &= x &\Rightarrow&& du &= dx, \\
dv &= \cos(x)\,dx &\Rightarrow&& v &= \int\cos(x)\,dx = \sin(x),
\end{align}
then:
\begin{align}
\int x\cos(x)\,dx &= \int u\,dv \\
&= u\cdot v - \int v \, du \\
&= x\sin(x) - \int \sin(x)\,dx \\
&= x\sin(x) + \cos(x) + C,
\end{align}
where ''C'' is a constant of integration.

For higher powers of x in the form
\int x^n e^x\,dx,\ \int x^n\sin(x)\,dx,\ \int x^n\cos(x)\,dx,
repeatedly using integration by parts can evaluate integrals such as these; each application of the theorem lowers the power of x by one.
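The n = 1 case and one reduction step for a higher power can be checked with SymPy; the cubic example below is an arbitrary illustration of how each application lowers the power of x by one.

```python
import sympy as sp

x = sp.symbols('x')

# \int x cos(x) dx = x sin(x) + cos(x) + C
assert sp.simplify(sp.diff(x*sp.sin(x) + sp.cos(x), x) - x*sp.cos(x)) == 0

# One reduction step: \int x^3 cos(x) dx = x^3 sin(x) - 3 \int x^2 sin(x) dx
lhs = sp.integrate(x**3 * sp.cos(x), x)
rhs = x**3 * sp.sin(x) - 3*sp.integrate(x**2 * sp.sin(x), x)
assert sp.simplify(lhs - rhs) == 0
```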


Exponentials and trigonometric functions

An example commonly used to examine the workings of integration by parts is
I = \int e^x\cos(x)\,dx.
Here, integration by parts is performed twice. First let
\begin{align}
u &= \cos(x) &\Rightarrow&& du &= -\sin(x)\,dx, \\
dv &= e^x\,dx &\Rightarrow&& v &= \int e^x\,dx = e^x,
\end{align}
then:
\int e^x\cos(x)\,dx = e^x\cos(x) + \int e^x\sin(x)\,dx.
Now, to evaluate the remaining integral, we use integration by parts again, with:
\begin{align}
u &= \sin(x) &\Rightarrow&& du &= \cos(x)\,dx, \\
dv &= e^x\,dx &\Rightarrow&& v &= \int e^x\,dx = e^x.
\end{align}
Then:
\int e^x\sin(x)\,dx = e^x\sin(x) - \int e^x\cos(x)\,dx.
Putting these together,
\int e^x\cos(x)\,dx = e^x\cos(x) + e^x\sin(x) - \int e^x\cos(x)\,dx.
The same integral shows up on both sides of this equation. The integral can simply be added to both sides to get
2\int e^x\cos(x)\,dx = e^x\bigl[\sin(x)+\cos(x)\bigr] + C,
which rearranges to
\int e^x\cos(x)\,dx = \frac{1}{2}e^x\bigl[\sin(x)+\cos(x)\bigr] + C',
where again C (and C' = C/2) is a constant of integration.

A similar method is used to find the integral of secant cubed.
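A quick SymPy check of the final result by differentiation:

```python
import sympy as sp

x = sp.symbols('x')
F = sp.exp(x)*(sp.sin(x) + sp.cos(x))/2   # candidate antiderivative
assert sp.simplify(sp.diff(F, x) - sp.exp(x)*sp.cos(x)) == 0
```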


Functions multiplied by unity

Two other well-known examples are when integration by parts is applied to a function expressed as a product of 1 and itself. This works if the derivative of the function is known, and the integral of this derivative times x is also known.

The first example is \int \ln(x)\,dx. We write this as:
I = \int\ln(x)\cdot 1\,dx.
Let:
u = \ln(x)\ \Rightarrow\ du = \frac{dx}{x},
dv = dx\ \Rightarrow\ v = x,
then:
\begin{align}
\int \ln(x)\,dx &= x\ln(x) - \int\frac{x}{x}\,dx \\
&= x\ln(x) - \int 1\,dx \\
&= x\ln(x) - x + C,
\end{align}
where C is the constant of integration.

The second example is the inverse tangent function \arctan(x):
I = \int\arctan(x)\,dx.
Rewrite this as
\int\arctan(x)\cdot 1\,dx.
Now let:
u = \arctan(x)\ \Rightarrow\ du = \frac{dx}{1+x^2},
dv = dx\ \Rightarrow\ v = x,
then
\begin{align}
\int\arctan(x)\,dx &= x\arctan(x) - \int\frac{x}{1+x^2}\,dx \\
&= x\arctan(x) - \frac{\ln(1+x^2)}{2} + C
\end{align}
using a combination of the inverse chain rule method and the natural logarithm integral condition.
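Both "multiply by 1" results can be confirmed by differentiation; a short SymPy sketch:

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# \int ln(x) dx = x ln(x) - x + C
assert sp.simplify(sp.diff(x*sp.log(x) - x, x) - sp.log(x)) == 0

# \int arctan(x) dx = x arctan(x) - ln(1 + x^2)/2 + C
F = x*sp.atan(x) - sp.log(1 + x**2)/2
assert sp.simplify(sp.diff(F, x) - sp.atan(x)) == 0
```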


LIATE rule

The LIATE rule is a rule of thumb for integration by parts. It involves choosing as ''u'' the function that comes first in the following list:
* L – logarithmic functions: \ln(x),\ \log_b(x), etc.
* I – inverse trigonometric functions (including hyperbolic analogues): \arctan(x),\ \arcsec(x),\ \operatorname{arsinh}(x), etc.
* A – algebraic functions (such as polynomials): x^2,\ 3x^{50}, etc.
* T – trigonometric functions (including hyperbolic analogues): \sin(x),\ \tan(x),\ \operatorname{sech}(x), etc.
* E – exponential functions: e^x,\ 19^x, etc.
The function which is to be ''dv'' is whichever comes last in the list. The reason is that functions lower on the list generally have simpler antiderivatives than the functions above them. The rule is sometimes written as "DETAIL", where ''D'' stands for ''dv'' and the top of the list is the function chosen to be ''dv''. An alternative to this rule is the ILATE rule, where inverse trigonometric functions come before logarithmic functions.

To demonstrate the LIATE rule, consider the integral
\int x \cdot \cos(x) \,dx.
Following the LIATE rule, ''u'' = ''x'' and ''dv'' = cos(''x'') ''dx'', hence ''du'' = ''dx'' and ''v'' = sin(''x''), which makes the integral become
x \cdot \sin(x) - \int 1 \cdot \sin(x) \,dx,
which equals
x \cdot \sin(x) + \cos(x) + C.
In general, one tries to choose ''u'' and ''dv'' such that ''du'' is simpler than ''u'' and ''dv'' is easy to integrate. If instead cos(''x'') were chosen as ''u'', and ''x dx'' as ''dv'', we would have the integral
\frac{x^2}{2} \cos(x) + \int \frac{x^2}{2} \sin(x) \,dx,
which, after recursive application of the integration by parts formula, would clearly result in an infinite recursion and lead nowhere.

Although a useful rule of thumb, there are exceptions to the LIATE rule. A common alternative is to consider the rules in the "ILATE" order instead. Also, in some cases, polynomial terms need to be split in non-trivial ways. For example, to integrate
\int x^3 e^{x^2} \,dx,
one would set
u = x^2, \quad dv = x \cdot e^{x^2} \,dx,
so that
du = 2x \,dx, \quad v = \frac{e^{x^2}}{2}.
Then
\int x^3 e^{x^2} \,dx = \int \left(x^2\right) \left(xe^{x^2}\right) \,dx = \int u \,dv = uv - \int v \,du = \frac{x^2 e^{x^2}}{2} - \int x e^{x^2} \,dx.
Finally, this results in
\int x^3 e^{x^2} \,dx = \frac{e^{x^2}\left(x^2-1\right)}{2} + C.
Integration by parts is often used as a tool to prove theorems in mathematical analysis.
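The non-trivial split in the x^3 e^{x^2} example can be verified by differentiating the stated antiderivative; a minimal SymPy sketch:

```python
import sympy as sp

x = sp.symbols('x')
F = sp.exp(x**2)*(x**2 - 1)/2   # candidate antiderivative of x^3 * e^{x^2}
assert sp.simplify(sp.diff(F, x) - x**3*sp.exp(x**2)) == 0
```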


Wallis product

The Wallis infinite product for \pi,
\begin{align}
\frac{\pi}{2} &= \prod_{n=1}^\infty \frac{4n^2}{4n^2-1} = \prod_{n=1}^\infty \left(\frac{2n}{2n-1} \cdot \frac{2n}{2n+1}\right) \\
&= \Big(\frac{2}{1} \cdot \frac{2}{3}\Big) \cdot \Big(\frac{4}{3} \cdot \frac{4}{5}\Big) \cdot \Big(\frac{6}{5} \cdot \frac{6}{7}\Big) \cdot \Big(\frac{8}{7} \cdot \frac{8}{9}\Big) \cdot \; \cdots,
\end{align}
may be derived using integration by parts.
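The slow convergence of the partial products toward \pi/2 can be seen numerically (this is an illustration, not a proof):

```python
import math

product = 1.0
for n in range(1, 100001):
    product *= (2*n) / (2*n - 1) * (2*n) / (2*n + 1)

print(product, math.pi / 2)   # both approximately 1.5707...
```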


Gamma function identity

The gamma function is an example of a special function, defined as an improper integral for z > 0. Integration by parts illustrates it to be an extension of the factorial function:
\begin{align}
\Gamma(z) &= \int_0^\infty e^{-x} x^{z-1}\,dx \\
&= -\int_0^\infty x^{z-1}\,d\left(e^{-x}\right) \\
&= -\Big[e^{-x} x^{z-1}\Big]_0^\infty + \int_0^\infty e^{-x}\,d\left(x^{z-1}\right) \\
&= 0 + \int_0^\infty (z-1) x^{z-2} e^{-x}\,dx \\
&= (z-1)\Gamma(z-1).
\end{align}
Since
\Gamma(1) = \int_0^\infty e^{-x}\,dx = 1,
when z is a natural number, that is, z = n \in \mathbb{N}, applying this formula repeatedly gives the factorial:
\Gamma(n+1) = n!
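A quick numerical check of \Gamma(n+1) = n! for small n, using the standard library:

```python
import math

for n in range(1, 8):
    assert math.isclose(math.gamma(n + 1), math.factorial(n))
```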


Use in harmonic analysis

Integration by parts is often used in harmonic analysis, particularly Fourier analysis, to show that quickly oscillating integrals with sufficiently smooth integrands decay quickly. The most common example of this is its use in showing that the decay of a function's Fourier transform depends on the smoothness of that function, as described below.


Fourier transform of derivative

If f is a k-times continuously differentiable function and all derivatives up to the kth one decay to zero at infinity, then its Fourier transform satisfies
(\mathcal{F}f^{(k)})(\xi) = (2\pi i\xi)^k (\mathcal{F}f)(\xi),
where f^{(k)} is the kth derivative of f. (The exact constant on the right depends on the convention of the Fourier transform used.) This is proved by noting that
\frac{d}{dy} e^{-2\pi iy\xi} = -2\pi i\xi e^{-2\pi iy\xi},
so using integration by parts on the Fourier transform of the derivative we get
\begin{align}
(\mathcal{F}f')(\xi) &= \int_{-\infty}^\infty e^{-2\pi iy\xi} f'(y)\,dy \\
&= \Big[e^{-2\pi iy\xi} f(y)\Big]_{-\infty}^\infty - \int_{-\infty}^\infty (-2\pi i\xi e^{-2\pi iy\xi}) f(y)\,dy \\
&= 2\pi i\xi \int_{-\infty}^\infty e^{-2\pi iy\xi} f(y)\,dy \\
&= 2\pi i\xi\, \mathcal{F}f(\xi).
\end{align}
Applying this inductively gives the result for general k. A similar method can be used to find the Laplace transform of a derivative of a function.
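Under the convention used above, \mathcal{F}f(\xi) = \int e^{-2\pi iy\xi} f(y)\,dy, the k = 1 identity can be checked numerically. The sketch below approximates both transforms by a Riemann sum for the arbitrary choice f(y) = e^{-y^2} at a single fixed frequency.

```python
import numpy as np

y = np.linspace(-20.0, 20.0, 40001)
dy = y[1] - y[0]
f = np.exp(-y**2)                 # smooth, rapidly decaying test function
fprime = -2*y*np.exp(-y**2)       # its derivative

xi = 0.7                          # any fixed frequency
kernel = np.exp(-2j*np.pi*xi*y)
ft_fprime = np.sum(kernel*fprime)*dy    # approx. (F f')(xi)
ft_f = np.sum(kernel*f)*dy              # approx. (F f)(xi)

assert abs(ft_fprime - 2j*np.pi*xi*ft_f) < 1e-6
```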


Decay of Fourier transform

The above result tells us about the decay of the Fourier transform, since it follows that if f and f^{(k)} are integrable then
\vert\mathcal{F}f(\xi)\vert \leq \frac{I(f)}{1 + \vert 2\pi\xi\vert^k}, \quad\text{where } I(f) = \int_{-\infty}^\infty \Bigl(\vert f(y)\vert + \vert f^{(k)}(y)\vert\Bigr) \, dy.
In other words, if f satisfies these conditions then its Fourier transform decays at infinity at least as quickly as 1/\vert\xi\vert^k. In particular, if k \geq 2 then the Fourier transform is integrable.

The proof uses the fact, which is immediate from the definition of the Fourier transform, that
\vert\mathcal{F}f(\xi)\vert \leq \int_{-\infty}^\infty \vert f(y) \vert \,dy.
Using the same idea on the equality stated at the start of this subsection gives
\vert(2\pi i\xi)^k \mathcal{F}f(\xi)\vert \leq \int_{-\infty}^\infty \vert f^{(k)}(y) \vert \,dy.
Summing these two inequalities and then dividing by 1 + \vert 2\pi\xi\vert^k gives the stated inequality.


Use in operator theory

One use of integration by parts in operator theory is that it shows that -\Delta (where \Delta is the Laplace operator) is a positive operator on L^2 (see ''L''''p'' space). If f is smooth and compactly supported then, using integration by parts, we have
\begin{align}
\langle -\Delta f, f \rangle_{L^2} &= -\int_{-\infty}^\infty f''(x)\overline{f(x)}\,dx \\
&= -\left[f'(x)\overline{f(x)}\right]_{-\infty}^\infty + \int_{-\infty}^\infty f'(x)\overline{f'(x)}\,dx \\
&= \int_{-\infty}^\infty \vert f'(x)\vert^2\,dx \geq 0.
\end{align}


Other applications

* Determining boundary conditions in Sturm–Liouville theory
* Deriving the Euler–Lagrange equation in the calculus of variations


Repeated integration by parts

Considering a second derivative of v in the integral on the LHS of the formula for partial integration suggests a repeated application to the integral on the RHS:
\int u v''\,dx = uv' - \int u'v'\,dx = uv' - \left( u'v - \int u''v\,dx \right).
Extending this concept of repeated partial integration to derivatives of degree n leads to
\begin{align}
\int u^{(0)} v^{(n)}\,dx &= u^{(0)} v^{(n-1)} - u^{(1)}v^{(n-2)} + u^{(2)}v^{(n-3)} - \cdots + (-1)^{n-1}u^{(n-1)} v^{(0)} + (-1)^n \int u^{(n)} v^{(0)} \,dx \\
&= \sum_{k=0}^{n-1}(-1)^k u^{(k)}v^{(n-1-k)} + (-1)^n \int u^{(n)} v^{(0)} \,dx.
\end{align}
This concept may be useful when the successive integrals of v^{(n)} are readily available (e.g., plain exponentials or sine and cosine, as in Laplace or Fourier transforms), and when the nth derivative of u vanishes (e.g., as a polynomial function with degree n-1). The latter condition stops the repeating of partial integration, because the RHS-integral vanishes.

In the course of the above repetition of partial integrations the integrals
\int u^{(0)} v^{(n)}\,dx \quad\text{and}\quad \int u^{(\ell)} v^{(n-\ell)}\,dx \quad\text{and}\quad \int u^{(m)} v^{(n-m)}\,dx \quad\text{for } 1 \le m, \ell \le n
get related. This may be interpreted as arbitrarily "shifting" derivatives between v and u within the integrand, and proves useful, too (see Rodrigues' formula).


Tabular integration by parts

The essential process of the above formula can be summarized in a table; the resulting method is called "tabular integration" and was featured in the film ''Stand and Deliver'' (1988). For example, consider the integral
\int x^3 \cos x \,dx
and take
u^{(0)} = x^3, \quad v^{(n)} = \cos x.
Begin to list in column A the function u^{(0)} = x^3 and its subsequent derivatives u^{(i)} until zero is reached. Then list in column B the function v^{(n)} = \cos x and its subsequent integrals v^{(n-i)} until the size of column B is the same as that of column A. The result is as follows:

: i    Sign    A: derivatives u^{(i)}    B: integrals v^{(n-i)}
: 0     +      x^3                       \cos x
: 1     -      3x^2                      \sin x
: 2     +      6x                        -\cos x
: 3     -      6                         -\sin x
: 4     +      0                         \cos x

The product of the entries in row i of columns A and B together with the respective sign give the relevant integrals in step i in the course of repeated integration by parts. Step i = 0 yields the original integral. For the complete result in step i > 0 the ith integral must be added to all the previous products (0 ≤ j < i) of the jth entry of column A and the (j+1)st entry of column B (i.e., multiply the 1st entry of column A with the 2nd entry of column B, the 2nd entry of column A with the 3rd entry of column B, etc. ...) with the given jth sign. This process comes to a natural halt when the product which yields the integral is zero (i = 4 in the example). The complete result is the following (with the alternating signs in each term):
(+1)(x^3)(\sin x) + (-1)(3x^2)(-\cos x) + (+1)(6x)(-\sin x) + (-1)(6)(\cos x) + (+1)\int (0)(\cos x)\,dx.
This yields
\int x^3 \cos x \,dx = x^3\sin x + 3x^2\cos x - 6x\sin x - 6\cos x + C.

The repeated partial integration also turns out useful, when in the course of respectively differentiating and integrating the functions u^{(i)} and v^{(n-i)} their product results in a multiple of the original integrand. In this case the repetition may also be terminated with this index i. This can happen, expectably, with exponentials and trigonometric functions. As an example consider
\int e^x \cos x \,dx.

: i    Sign    A: derivatives u^{(i)}    B: integrals v^{(n-i)}
: 0     +      e^x                       \cos x
: 1     -      e^x                       \sin x
: 2     +      e^x                       -\cos x

In this case the product of the terms in columns A and B with the appropriate sign for index i = 2 yields the negative of the original integrand (compare rows i = 0 and i = 2):
\int e^x \cos x \,dx = (+1)(e^x)(\sin x) + (-1)(e^x)(-\cos x) + (+1)\int (e^x)(-\cos x) \,dx.
Observing that the integral on the RHS can have its own constant of integration C', and bringing the abstract integral to the other side, gives:
2 \int e^x \cos x \,dx = e^x\sin x + e^x\cos x + C',
and finally:
\int e^x \cos x \,dx = \frac 12 \left(e^x ( \sin x + \cos x ) \right) + C,
where C = \frac{C'}{2}.
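The terminating (polynomial) case of the tabular method is easy to sketch in code. The helper below, tabular_parts, is a hypothetical illustration assuming the "column A" function eventually differentiates to zero; it does not cover the exponential-times-trigonometric variant described above.

```python
# Tabular integration by parts with SymPy, for u that differentiates to zero.
import sympy as sp

def tabular_parts(u, dv, x):
    result = sp.Integer(0)
    sign = 1
    v = dv
    while u != 0:                  # stop once column A reaches zero
        v = sp.integrate(v, x)     # next entry of column B (antiderivative)
        result += sign * u * v     # diagonal product with alternating sign
        u = sp.diff(u, x)          # next entry of column A (derivative)
        sign = -sign
    return result

x = sp.symbols('x')
F = tabular_parts(x**3, sp.cos(x), x)
print(sp.expand(F))   # x**3*sin(x) + 3*x**2*cos(x) - 6*x*sin(x) - 6*cos(x)
assert sp.simplify(sp.diff(F, x) - x**3*sp.cos(x)) == 0
```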


Higher dimensions

Integration by parts can be extended to functions of several variables by applying a version of the fundamental theorem of calculus to an appropriate product rule. There are several such pairings possible in multivariate calculus, involving a scalar-valued function ''u'' and a vector-valued function (vector field) \mathbf{V}.

The product rule for divergence states:
\nabla \cdot ( u \mathbf{V} ) \ =\ u\, \nabla \cdot \mathbf{V} \ +\ \nabla u\cdot \mathbf{V}.
Suppose \Omega is an open bounded subset of \R^n with a piecewise smooth boundary \Gamma=\partial\Omega. Integrating over \Omega with respect to the standard volume form d\Omega, and applying the divergence theorem, gives:
\int_\Gamma u \mathbf{V} \cdot \hat{\mathbf{n}} \,d\Gamma \ =\ \int_\Omega\nabla\cdot ( u \mathbf{V} )\,d\Omega \ =\ \int_\Omega u\, \nabla \cdot \mathbf{V}\,d\Omega \ +\ \int_\Omega\nabla u\cdot \mathbf{V}\,d\Omega,
where \hat{\mathbf{n}} is the outward unit normal vector to the boundary, integrated with respect to its standard Riemannian volume form d\Gamma. Rearranging gives:
\int_\Omega u \,\nabla \cdot \mathbf{V}\,d\Omega \ =\ \int_\Gamma u \mathbf{V} \cdot \hat{\mathbf{n}}\,d\Gamma - \int_\Omega \nabla u \cdot \mathbf{V} \, d\Omega,
or in other words
\int_\Omega u\,\operatorname{div}(\mathbf{V})\,d\Omega \ =\ \int_\Gamma u \mathbf{V} \cdot \hat{\mathbf{n}}\,d\Gamma - \int_\Omega \operatorname{grad}(u)\cdot\mathbf{V}\,d\Omega .
The regularity requirements of the theorem can be relaxed. For instance, the boundary \Gamma=\partial\Omega need only be Lipschitz continuous, and the functions ''u'', ''v'' need only lie in the Sobolev space H^1(\Omega).
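The underlying product rule for divergence can be checked symbolically in two variables; the particular u and \mathbf{V} in the SymPy sketch below are arbitrary smooth choices used only for illustration.

```python
# Check of  div(u V) = u div(V) + grad(u) . V  in two variables
import sympy as sp

x, y = sp.symbols('x y')
u = sp.sin(x*y)                     # arbitrary scalar field
P, Q = x**2 + y, sp.exp(x)*y        # arbitrary vector field V = (P, Q)

div_uV = sp.diff(u*P, x) + sp.diff(u*Q, y)
rhs = u*(sp.diff(P, x) + sp.diff(Q, y)) + sp.diff(u, x)*P + sp.diff(u, y)*Q

assert sp.simplify(div_uV - rhs) == 0
```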


Green's first identity

Consider the continuously differentiable vector fields \mathbf{U} = u_1\mathbf{e}_1+\cdots+u_n\mathbf{e}_n and v \mathbf{e}_1,\ldots, v\mathbf{e}_n, where \mathbf{e}_i is the ''i''-th standard basis vector for i=1,\ldots,n. Now apply the above integration by parts to each u_i times the vector field v\mathbf{e}_i:
\int_\Omega u_i\frac{\partial v}{\partial x_i}\,d\Omega \ =\ \int_\Gamma u_i v \,\mathbf{e}_i\cdot\hat{\mathbf{n}}\,d\Gamma - \int_\Omega \frac{\partial u_i}{\partial x_i} v\,d\Omega.
Summing over ''i'' gives a new integration by parts formula:
\int_\Omega \mathbf{U} \cdot \nabla v\,d\Omega \ =\ \int_\Gamma v \mathbf{U}\cdot \hat{\mathbf{n}}\,d\Gamma - \int_\Omega v\, \nabla \cdot \mathbf{U}\,d\Omega.
The case \mathbf{U}=\nabla u, where u\in C^2(\bar\Omega), is known as the first of Green's identities:
\int_\Omega \nabla u \cdot \nabla v\,d\Omega\ =\ \int_\Gamma v\, \nabla u\cdot\hat{\mathbf{n}}\,d\Gamma - \int_\Omega v\, \nabla^2 u \, d\Omega.


See also

* Integration by parts for the Lebesgue–Stieltjes integral
* Integration by parts for semimartingales, involving their quadratic covariation
* Integration by substitution
* Legendre transformation



External links

* Integration by parts—from MathWorld