Bisection Algorithm

In mathematics, the bisection method is a root-finding method that applies to any continuous function for which one knows two values with opposite signs. The method consists of repeatedly bisecting the interval defined by these values and then selecting the subinterval in which the function changes sign, and therefore must contain a root. It is a very simple and robust method, but it is also relatively slow. Because of this, it is often used to obtain a rough approximation to a solution which is then used as a starting point for more rapidly converging methods. The method is also called the interval halving method, the binary search method, or the dichotomy method.

For polynomials, more elaborate methods exist for testing the existence of a root in an interval (Descartes' rule of signs, Sturm's theorem, Budan's theorem). They allow extending the bisection method into efficient algorithms for finding all real roots of a polynomial; see real-root isolation.


The method

The method is applicable for numerically solving the equation ''f''(''x'') = 0 for the real variable ''x'', where ''f'' is a continuous function defined on an interval [''a'', ''b''] and where ''f''(''a'') and ''f''(''b'') have opposite signs. In this case ''a'' and ''b'' are said to bracket a root since, by the intermediate value theorem, the continuous function ''f'' must have at least one root in the interval (''a'', ''b'').

At each step the method divides the interval in two halves by computing the midpoint ''c'' = (''a'' + ''b'')/2 of the interval and the value of the function ''f''(''c'') at that point. If ''c'' itself is a root then the process has succeeded and stops. Otherwise, there are now only two possibilities: either ''f''(''a'') and ''f''(''c'') have opposite signs and bracket a root, or ''f''(''c'') and ''f''(''b'') have opposite signs and bracket a root. The method selects the subinterval that is guaranteed to be a bracket as the new interval to be used in the next step. In this way an interval that contains a zero of ''f'' is reduced in width by 50% at each step. The process is continued until the interval is sufficiently small.

Explicitly, if ''f''(''c'') = 0 then ''c'' may be taken as the solution and the process stops. Otherwise, if ''f''(''a'') and ''f''(''c'') have opposite signs, then the method sets ''c'' as the new value for ''b'', and if ''f''(''b'') and ''f''(''c'') have opposite signs then the method sets ''c'' as the new ''a''. In both cases, the new ''f''(''a'') and ''f''(''b'') have opposite signs, so the method is applicable to this smaller interval.


Iteration tasks

The input for the method is a continuous function ''f'', an interval [''a'', ''b''], and the function values ''f''(''a'') and ''f''(''b''). The function values are of opposite sign (there is at least one zero crossing within the interval). Each iteration performs these steps:

# Calculate ''c'', the midpoint of the interval, ''c'' = (''a'' + ''b'')/2.
# Calculate the function value at the midpoint, ''f''(''c'').
# If convergence is satisfactory (that is, ''c'' − ''a'' is sufficiently small, or |''f''(''c'')| is sufficiently small), return ''c'' and stop iterating.
# Examine the sign of ''f''(''c'') and replace either (''a'', ''f''(''a'')) or (''b'', ''f''(''b'')) with (''c'', ''f''(''c'')) so that there is a zero crossing within the new interval.

When implementing the method on a computer, there can be problems with finite precision, so there are often additional convergence tests or limits to the number of iterations. Although ''f'' is continuous, finite precision may preclude a function value ever being zero. For example, for ''f''(''x'') = cos ''x'' there is no floating-point value approximating π/2 that gives exactly zero. Additionally, the difference between ''a'' and ''b'' is limited by the floating-point precision; i.e., as the difference between ''a'' and ''b'' decreases, at some point the midpoint of [''a'', ''b''] will be numerically identical to (within floating-point precision of) either ''a'' or ''b''.
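These finite-precision effects can be observed directly in double-precision arithmetic. The following Python sketch (the variable names are illustrative) shows a function value that never reaches exactly zero, and a bracket so narrow that the midpoint coincides with an endpoint:

```python
import math

# No double makes cos(x) exactly zero: the closest double to pi/2
# is not pi/2 itself, so cos of it is tiny but nonzero.
x = math.pi / 2
print(math.cos(x))            # about 6.1e-17, not 0.0

# Once a and b are adjacent doubles, the midpoint cannot lie
# strictly between them: it rounds to one of the endpoints.
a = 1.0
b = math.nextafter(a, 2.0)    # next representable double above 1.0
c = (a + b) / 2
print(c == a or c == b)       # True: the interval cannot shrink further
```

This is why practical implementations bound the iteration count or stop once the bracket width reaches the spacing of the floating-point grid.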


Algorithm

The method may be written in pseudocode as follows:

 INPUT: Function ''f'', endpoint values ''a'', ''b'', tolerance ''TOL'', maximum iterations ''NMAX''
 CONDITIONS: ''a'' < ''b'', either ''f''(''a'') < 0 and ''f''(''b'') > 0 or ''f''(''a'') > 0 and ''f''(''b'') < 0
 OUTPUT: value which differs from a root of ''f''(''x'') = 0 by less than ''TOL''
 
 ''N'' ← 1
 while ''N'' ≤ ''NMAX'' do ''// limit iterations to prevent infinite loop''
     ''c'' ← (''a'' + ''b'')/2 ''// new midpoint''
     if ''f''(''c'') = 0 or (''b'' − ''a'')/2 < ''TOL'' then ''// solution found''
         Output(''c'')
         Stop
     end if
     ''N'' ← ''N'' + 1 ''// increment step counter''
     if sign(''f''(''c'')) = sign(''f''(''a'')) then ''a'' ← ''c'' else ''b'' ← ''c'' ''// new interval''
 end while
 Output("Method failed.") ''// max number of steps exceeded''


Example: Finding the root of a polynomial

Suppose that the bisection method is used to find a root of the polynomial

: f(x) = x^3 - x - 2 \,.

First, two numbers a and b have to be found such that f(a) and f(b) have opposite signs. For the above function, a = 1 and b = 2 satisfy this criterion, as

: f(1) = (1)^3 - (1) - 2 = -2

and

: f(2) = (2)^3 - (2) - 2 = +4 \,.

Because the function is continuous, there must be a root within the interval [1, 2].
In the first iteration, the end points of the interval which brackets the root are a_1 = 1 and b_1 = 2, so the midpoint is

: c_1 = \frac{1 + 2}{2} = 1.5 \,.

The function value at the midpoint is f(c_1) = (1.5)^3 - (1.5) - 2 = -0.125. Because f(c_1) is negative, a = 1 is replaced with a = 1.5 for the next iteration to ensure that f(a) and f(b) have opposite signs. As this continues, the interval between a and b will become increasingly smaller, converging on the root of the function. See this happen in the table below. After 13 iterations, it becomes apparent that there is a convergence to about 1.521: a root for the polynomial.
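The iterations of this example can be reproduced in a few lines. This Python sketch hard-codes the sign test for this particular function (f is negative on the left of the root), which is why the update rule is simpler than in the general method:

```python
def f(x):
    return x**3 - x - 2

a, b = 1.0, 2.0          # f(1) = -2 < 0, f(2) = +4 > 0
for n in range(1, 14):   # 13 iterations, as in the example
    c = (a + b) / 2
    if f(c) < 0:
        a = c            # root lies in [c, b]
    else:
        b = c            # root lies in [a, c]
    print(n, c)
# the midpoint approaches the root near 1.521
```

After 13 bisections the bracket has width 1/2¹³ ≈ 0.00012, which is why the midpoints settle near 1.521.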


Analysis

The method is guaranteed to converge to a root of ''f'' if ''f'' is a continuous function on the interval [''a'', ''b''] and ''f''(''a'') and ''f''(''b'') have opposite signs. The absolute error is halved at each step, so the method converges linearly. Specifically, if ''c''1 = (''a'' + ''b'')/2 is the midpoint of the initial interval, and ''c''''n'' is the midpoint of the interval in the ''n''th step, then the difference between ''c''''n'' and a solution ''c'' is bounded by

:|c_n - c| \le \frac{|b - a|}{2^n}.

This formula can be used to determine, in advance, an upper bound on the number of iterations that the bisection method needs to converge to a root to within a certain tolerance. The number ''n'' of iterations needed to achieve a required tolerance ε (that is, an error guaranteed to be at most ε) is bounded by

:n \le n_{1/2} \equiv \left\lceil\log_2\left(\frac{\epsilon_0}{\epsilon}\right)\right\rceil,

where the initial bracket size \epsilon_0 = |b - a| and the required bracket size \epsilon \le \epsilon_0.

The main motivation to use the bisection method is that, over the set of continuous functions, no other method can guarantee to produce an estimate ''c''''n'' to the solution ''c'' that in the ''worst case'' has an \epsilon absolute error with fewer than n_{1/2} iterations. This is also true under several common assumptions on the function ''f'' and its behaviour in the neighbourhood of the root. However, despite being optimal with respect to worst-case performance under absolute error criteria, the bisection method is sub-optimal with respect to ''average performance'' under standard assumptions as well as ''asymptotic performance''. Popular alternatives to the bisection method, such as the
secant method, Ridders' method, or Brent's method (amongst others), typically perform better since they trade off worst-case performance to achieve higher orders of convergence to the root. A strict improvement to the bisection method can, however, be achieved with a higher order of convergence without trading off worst-case performance with the ITP method.
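The iteration bound n_{1/2} = ⌈log₂(ε₀/ε)⌉ is easy to evaluate in practice. This Python sketch (the helper name is an illustrative choice) computes it for a sample bracket and tolerance:

```python
import math

def iterations_needed(eps0, eps):
    """Upper bound on bisection iterations to shrink an initial
    bracket of width eps0 down to width at most eps:
    ceil(log2(eps0 / eps))."""
    return math.ceil(math.log2(eps0 / eps))

# Example: initial bracket of width 1, required tolerance 1e-6.
print(iterations_needed(1.0, 1e-6))   # 20, since 2**20 > 10**6
```

For instance, halving an interval of width 1 twenty times leaves a bracket of width 2⁻²⁰ ≈ 9.5 × 10⁻⁷, just below the 10⁻⁶ tolerance, which matches the bound.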


See also

* Binary search algorithm
* Lehmer–Schur algorithm, generalization of the bisection method in the complex plane
* Nested intervals





External links

* Bisection Method — notes, PPT, Mathcad, Maple, Matlab, Mathematica from the Holistic Numerical Methods Institute