Methods of computing square roots are numerical analysis algorithms for approximating the principal, or non-negative, square root (usually denoted \sqrt{S}, \sqrt[2]{S}, or S^{1/2}) of a real number. Arithmetically, it means given S, a procedure for finding a number which when multiplied by itself yields S; algebraically, it means a procedure for finding the non-negative root of the equation x^2 - S = 0; geometrically, it means given two line segments, a procedure for constructing their geometric mean.

Every real number except zero has two square roots. In addition to the principal square root, there is a negative square root equal in magnitude but opposite in sign to the principal square root; zero has a double square root of zero. The principal square root of most numbers is an irrational number with an infinite decimal expansion. As a result, the decimal expansion of any such square root can only be computed to some finite-precision approximation. However, even if we are taking the square root of a perfect square integer, so that the result does have an exact finite representation, the procedure used to compute it may only return a series of increasingly accurate approximations.

The continued fraction representation of a real number can be used instead of its decimal or binary expansion, and this representation has the property that the square root of any rational number which is not already a perfect square has a periodic, repeating expansion, similar to how rational numbers have repeating expansions in the decimal notation system.

The most common analytical methods are iterative and consist of two steps: finding a suitable starting value, followed by iterative refinement until some termination criterion is met. The starting value can be any number, but fewer iterations will be required the closer it is to the final result. The most familiar such method, most suited to programmatic calculation, is Newton's method, which is based on a property of the derivative in calculus. A few methods, such as paper-and-pencil synthetic division and series expansion, do not require a starting value. In some applications an integer square root is required, which is the square root rounded or truncated to the nearest integer (a modified procedure may be employed in this case).

The method employed depends on what the result is to be used for (i.e. how accurate it has to be), how much effort one is willing to put into the procedure, and what tools are at hand. The methods may be roughly classified as those suitable for mental calculation, those usually requiring at least paper and pencil, and those which are implemented as programs to be executed on a digital electronic computer or other computing device. Algorithms may take into account convergence (how many iterations are required to achieve a specified precision), the computational complexity of individual operations (e.g. division) or iterations, and error propagation (the accuracy of the final result).

Procedures for finding square roots (particularly the square root of 2) have been known since at least the period of ancient Babylon in the 17th century BCE. Heron's method, from first-century Egypt, was the first ascertainable algorithm for computing a square root. Modern analytic methods began to be developed after the introduction of the Arabic numeral system to western Europe in the early Renaissance. Today, nearly all computing devices have a fast and accurate square root function, either as a programming language construct, a compiler intrinsic or library function, or as a hardware operator, based on one of the described procedures.


Initial estimate

Many iterative square root algorithms require an initial seed value. The seed must be a non-zero positive number; it should be between 1 and S, the number whose square root is desired, because the square root must be in that range. If the seed is far away from the root, the algorithm will require more iterations. If one initializes with x_0 = 1 (or S), then approximately \tfrac{1}{2}\vert\log_2 S\vert iterations will be wasted just getting the order of magnitude of the root. It is therefore useful to have a rough estimate, which may have limited accuracy but is easy to calculate. In general, the better the initial estimate, the faster the convergence. For Newton's method (also called the Babylonian or Heron's method), a seed somewhat larger than the root will converge slightly faster than a seed somewhat smaller than the root.

In general, an estimate is made over an arbitrary interval known to contain the root (such as [x_0, S/x_0]). The estimate is a specific value of a functional approximation to f(x) = \sqrt{x} over the interval. Obtaining a better estimate involves either obtaining tighter bounds on the interval, or finding a better functional approximation to f(x). The latter usually means using a higher-order polynomial in the approximation, though not all approximations are polynomial. Common methods of estimating include scalar, linear, hyperbolic and logarithmic. A decimal base is usually used for mental or paper-and-pencil estimating. A binary base is more suitable for computer estimates. In estimating, the exponent and mantissa are usually treated separately, as the number would be expressed in scientific notation.


Decimal estimates

Typically the number S is expressed in scientific notation as a \times 10^{2n} where 1 \leq a < 100 and n is an integer, and the range of possible square roots is \sqrt{a} \times 10^n where 1 \leq \sqrt{a} < 10.


Scalar estimates

Scalar methods divide the range into intervals, and the estimate in each interval is represented by a single scalar number. If the range is considered as a single interval, the arithmetic mean (5.5) or geometric mean (\sqrt{10} \approx 3.16) times 10^n are plausible estimates. The absolute and relative error for these will differ. In general, a single scalar will be very inaccurate. Better estimates divide the range into two or more intervals, but scalar estimates have inherently low accuracy.

For two intervals, divided geometrically, the square root \sqrt{S} = \sqrt{a} \times 10^n can be estimated as

: \sqrt{S} \approx \begin{cases} 2 \cdot 10^n & \text{if } a < 10, \\ 6 \cdot 10^n & \text{if } a \geq 10. \end{cases}

The factors two and six are used because they approximate the geometric means of the lowest and highest possible values with the given number of digits: \sqrt{1 \cdot \sqrt{10}} = \sqrt[4]{10} \approx 1.78 and \sqrt{\sqrt{10} \cdot 10} = \sqrt[4]{1000} \approx 5.62.

This estimate has maximum absolute error of 4 \cdot 10^n at a = 100, and maximum relative error of 100% at a = 1. For example, for S = 125348 factored as 12.5348 \times 10^4, the estimate is \sqrt{S} \approx 6 \cdot 10^2 = 600. \sqrt{125348} = 354.0, an absolute error of 246 and relative error of almost 70%.
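A minimal C sketch of this two-interval estimate (the function name and the explicit range-reduction loops are illustrative assumptions, not part of the method as stated):

#include <math.h>

/* Scalar estimate: factor S = a * 10^(2n) with 1 <= a < 100,
   then pick the geometric-mean scalar for a's interval. */
double sqrt_scalar_estimate(double S) {
    int n = 0;
    double a = S;
    while (a >= 100.0) { a /= 100.0; n++; }   /* factor out even powers of ten */
    while (a < 1.0)    { a *= 100.0; n--; }
    double k = (a < 10.0) ? 2.0 : 6.0;        /* 2*10^n or 6*10^n */
    return k * pow(10.0, n);
}

For S = 125348 this returns 600, matching the worked example above.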


Linear estimates

A better estimate, and the standard method used, is a linear approximation to the function y = x^2 over a small arc. If, as above, powers of the base are factored out of the number S and the interval reduced to [1,100], a secant line spanning the arc, or a tangent line somewhere along the arc, may be used as the approximation, but a least-squares regression line intersecting the arc will be more accurate.

A least-squares regression line minimizes the average difference between the estimate and the value of the function. Its equation is y = 8.7x - 10. Reordering, x = 0.115y + 1.15. Rounding the coefficients for ease of computation,

:\sqrt{S} \approx (a/10 + 1.2) \cdot 10^n

That is the best estimate ''on average'' that can be achieved with a single-piece linear approximation of the function y = x^2 in the interval [1,100]. It has a maximum absolute error of 1.2 at a = 100, and maximum relative error of 30% at S = 1 and 10. (The unrounded estimate has maximum absolute error of 2.65 at 100 and maximum relative error of 26.5% at y = 1, 10 and 100.) To divide by 10, subtract one from the exponent of a, or figuratively move the decimal point one digit to the left. For this formulation, any additive constant 1 plus a small increment will make a satisfactory estimate, so remembering the exact number is not a burden. The approximation (rounded or not) using a single line spanning the range [1,100] gives less than one significant digit of precision; the relative error is greater than 1/22, so less than 2 bits of information are provided. The accuracy is severely limited because the range is two orders of magnitude, quite large for this kind of estimation.

A much better estimate can be obtained by a piece-wise linear approximation: multiple line segments, each approximating some subarc of the original. The more line segments used, the better the approximation. The most common way is to use tangent lines; the critical choices are how to divide the arc and where to place the tangent points. An efficacious way to divide the arc from y = 1 to y = 100 is geometrically: for two intervals, the bounds of the intervals are the square root of the bounds of the original interval, 1 \cdot 100, i.e. [1,\sqrt{100}] and [\sqrt{100},100]. For three intervals, the bounds are the cube roots of 100: [1,\sqrt[3]{100}], [\sqrt[3]{100},(\sqrt[3]{100})^2] and [(\sqrt[3]{100})^2,100], etc. For two intervals, \sqrt{100} = 10, a very convenient number. Tangent lines are easy to derive, and are located at x = \sqrt{\sqrt{10}} and x = \sqrt{10\sqrt{10}}. Their equations are: y = 3.56x - 3.16 and y = 11.2x - 31.6. Inverting, the square roots are: x = 0.28y + 0.89 and x = 0.089y + 2.8. Thus for S = a \cdot 10^{2n}:

: \sqrt{S} \approx \begin{cases} (0.28a + 0.89) \cdot 10^n & \text{if } a < 10, \\ (0.089a + 2.8) \cdot 10^n & \text{if } a \geq 10. \end{cases}

The maximum absolute errors occur at the high points of the intervals, at a = 10 and 100, and are 0.54 and 1.7 respectively. The maximum relative errors are at the endpoints of the intervals, at a = 1, 10 and 100, and are 17% in both cases. 17% or 0.17 is larger than 1/10, so the method yields less than a decimal digit of accuracy.
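A sketch of the two-piece tangent-line estimate in C (using the same hypothetical range reduction as the scalar sketch above):

#include <math.h>

/* Two-piece linear estimate: S = a * 10^(2n), 1 <= a < 100. */
double sqrt_linear_estimate(double S) {
    int n = 0;
    double a = S;
    while (a >= 100.0) { a /= 100.0; n++; }
    while (a < 1.0)    { a *= 100.0; n--; }
    double est = (a < 10.0) ? 0.28 * a + 0.89    /* tangent near x = 10^(1/4) */
                            : 0.089 * a + 2.8;   /* tangent near x = 10^(3/4) */
    return est * pow(10.0, n);
}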


Hyperbolic estimates

In some cases, hyperbolic estimates may be efficacious, because a hyperbola is also a convex curve and may lie along an arc of y = x^2 better than a line. Hyperbolic estimates are more computationally complex, because they necessarily require a floating division. A near-optimal hyperbolic approximation to x^2 on the interval [1,100] is y = 190/(10-x) - 20. Transposing, the square root is x = -190/(y+20) + 10. Thus for S = a \cdot 10^{2n}:

:\sqrt{S} \approx \left(10 - \frac{190}{a+20}\right) \cdot 10^n

The floating division need be accurate to only one decimal digit, because the estimate overall is only that accurate, and can be done mentally. A hyperbolic estimate is better on average than scalar or linear estimates. It has maximum absolute error of 1.58 at 100 and maximum relative error of 16.0% at 10. For the worst case at a = 10, the estimate is 3.67. If one starts with 10 and applies Newton–Raphson iterations straight away, two iterations will be required, yielding 3.66, before the accuracy of the hyperbolic estimate is exceeded. For a more typical case like 75, the hyperbolic estimate is 8.00, and 5 Newton–Raphson iterations starting at 75 would be required to obtain a more accurate result.
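The hyperbolic estimate is a one-liner once the range reduction is done; a hedged C sketch:

#include <math.h>

/* Hyperbolic estimate x = 10 - 190/(a + 20) on 1 <= a < 100. */
double sqrt_hyperbolic_estimate(double S) {
    int n = 0;
    double a = S;
    while (a >= 100.0) { a /= 100.0; n++; }
    while (a < 1.0)    { a *= 100.0; n--; }
    return (10.0 - 190.0 / (a + 20.0)) * pow(10.0, n);
}

For S = 75 this returns 8.00, as in the discussion above.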


Arithmetic estimates

A method analogous to piece-wise linear approximation, but using only arithmetic instead of algebraic equations, uses the multiplication tables in reverse: the square root of a number between 1 and 100 is between 1 and 10, so if we know 25 is a perfect square (5 × 5), and 36 is a perfect square (6 × 6), then the square root of a number greater than or equal to 25 but less than 36 begins with a 5. Similarly for numbers between other squares. This method will yield a correct first digit, but it is not accurate to one digit: the first digit of the square root of 35, for example, is 5, but the square root of 35 is almost 6.

A better way is to divide the range into intervals half way between the squares. So any number between 25 and half way to 36, which is 30.5, estimate 5; any number greater than 30.5 up to 36, estimate 6. (If the number is exactly half way between two squares, like 30.5, guess the higher number, which is 6 in this case.) The procedure only requires a little arithmetic to find a boundary number in the middle of two products from the multiplication table. Here is a reference table of those boundaries (the midpoints between consecutive squares):

a is in range      estimate k
1    to 2.5        1
2.5  to 6.5        2
6.5  to 12.5       3
12.5 to 20.5       4
20.5 to 30.5       5
30.5 to 42.5       6
42.5 to 56.5       7
56.5 to 72.5       8
72.5 to 90.5       9
90.5 to 100        10

The final operation is to multiply the estimate k by the power of ten divided by 2, so for S = a \cdot 10^{2n},

:\sqrt{S} \approx k \cdot 10^n

The method implicitly yields one significant digit of accuracy, since it rounds to the best first digit.

The method can be extended to 3 significant digits in most cases, by interpolating between the nearest squares bounding the operand. If k^2 \le a < (k+1)^2, then \sqrt{a} is approximately k plus a fraction, the difference between a and k^2 divided by the difference between the two squares:

: \sqrt{a} \approx k + R \quad \text{where } R = \frac{a - k^2}{(k+1)^2 - k^2}

The final operation, as above, is to multiply the result by the power of ten divided by 2:

:\sqrt{S} = \sqrt{a} \cdot 10^n \approx (k + R) \cdot 10^n

k is a decimal digit and R is a fraction that must be converted to decimal. It usually has only a single digit in the numerator, and one or two digits in the denominator, so the conversion to decimal can be done mentally.

Example: find the square root of 75. 75 = 75 \times 10^0, so a is 75 and n is 0. From the multiplication tables, the square root of the mantissa must be 8 point ''something'' because 8 × 8 is 64, but 9 × 9 is 81, too big, so k is 8; ''something'' is the decimal representation of R. The fraction R is 75 − k² = 11, the numerator, and 81 − k² = 17, the denominator. 11/17 is a little less than 12/18, which is 2/3 or .67, so guess .66 (it's OK to guess here, the error is very small). So the estimate is 8 + .66 = 8.66. \sqrt{75} to three significant digits is 8.66, so the estimate is good to 3 significant digits. Not all such estimates using this method will be so accurate, but they will be close.
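The table-plus-interpolation procedure is mechanical enough to express directly in C; this sketch finds k by search rather than a stored table (an implementation choice, not part of the original description):

#include <math.h>

/* Arithmetic estimate: S = a * 10^(2n), 1 <= a < 100; k from the
   multiplication table, R interpolating between k^2 and (k+1)^2. */
double sqrt_arith_estimate(double S) {
    int n = 0;
    double a = S;
    while (a >= 100.0) { a /= 100.0; n++; }
    while (a < 1.0)    { a *= 100.0; n--; }
    int k = 1;
    while ((double)(k + 1) * (k + 1) <= a) k++;   /* largest k with k^2 <= a */
    double R = (a - (double)k * k) / (2 * k + 1); /* (k+1)^2 - k^2 = 2k + 1 */
    return (k + R) * pow(10.0, n);
}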


Binary estimates

When working in the binary numeral system (as computers do internally), by expressing S as a \times 2^{2n} where 0.1_2 \leq a < 10_2, the square root \sqrt{S} = \sqrt{a} \times 2^n can be estimated as

: \sqrt{S} \approx (0.485 + 0.485 \cdot a) \cdot 2^n

which is the least-squares regression line with coefficients to 3 significant digits. It has maximum absolute error of 0.0408 at a = 2, and maximum relative error of 3.0% at a = 1. A computationally convenient rounded estimate (because the coefficients are powers of 2) is:

: \sqrt{S} \approx (0.5 + 0.5 \cdot a) \cdot 2^n

(This is incidentally the equation of the tangent line to y = x^2 at y = 1.) It has maximum absolute error of 0.086 at 2 and maximum relative error of 6.1% at a = 0.5 and a = 2.0.

For S = 125348 = 1\;1110\;1001\;1010\;0100_2 = 1.1110\;1001\;1010\;0100_2 \times 2^{16}, the binary approximation gives \sqrt{S} \approx (0.5 + 0.5 \cdot a) \cdot 2^8 = 1.0111\;0100\;1101\;0010_2 \cdot 1\;0000\;0000_2 = 1.456 \cdot 256 = 372.8. \sqrt{125348} = 354.0, so the estimate has an absolute error of 19 and relative error of 5.3%. The relative error is a little less than 1/2^4, so the estimate is good to 4+ bits.

An estimate for a good to 8 bits can be obtained by table lookup on the high 8 bits of a, remembering that the high bit is implicit in most floating point representations, and the bottom bit of the 8 should be rounded. The table is 256 bytes of precomputed 8-bit square root values. For example, for the index 11101101_2 representing 1.8515625_{10}, the entry is 10101110_2 representing 1.359375_{10}, the square root of 1.8515625_{10} to 8-bit precision (2+ decimal digits).
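On a computer the range reduction falls out of the float representation; a sketch using the standard frexp/ldexp library calls (the function name is an assumption of this sketch):

#include <math.h>

/* Binary estimate: S = a * 2^(2n) with 0.5 <= a < 2, sqrt(S) ~ (0.5 + 0.5a) * 2^n. */
double sqrt_binary_estimate(double S) {
    int e;
    double a = frexp(S, &e);   /* S = a * 2^e with 0.5 <= a < 1 */
    if (e & 1) {               /* make the exponent even */
        a *= 2.0;
        e--;
    }
    return ldexp(0.5 + 0.5 * a, e / 2);
}

For S = 125348 this yields 372.8, the value computed by hand above.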


Heron's method

An unknown Babylonian mathematician somehow correctly calculated the square root of 2 to three sexagesimal "digits" after the 1, but it is not known exactly how. The Babylonians knew how to approximate a hypotenuse using

:\sqrt{a^2 + b^2} \approx a + \frac{b^2}{2a}

(giving, for example, such an approximation for the diagonal of a gate from its height and width), and they may have used a similar approach for finding the approximation of \sqrt{2}.

The first explicit algorithm for approximating \sqrt{S} is known as Heron's method, after the first-century Greek mathematician Hero of Alexandria, who described the method in his AD 60 work ''Metrica''. The basic idea is that if x is an overestimate to the square root of a non-negative real number S then S/x will be an underestimate, or vice versa, and so the average of these two numbers may reasonably be expected to provide a better approximation (though the formal proof of that assertion depends on the inequality of arithmetic and geometric means, which shows this average is always an overestimate of the square root, as noted in the article on square roots, thus assuring convergence). This is equivalent to using Newton's method to solve x^2 - S = 0.

More precisely, if x is our initial guess of \sqrt{S} and \varepsilon is the error in our estimate such that S = (x + \varepsilon)^2, then we can expand the binomial

:(x + \varepsilon)^2 = x^2 + 2x\varepsilon + \varepsilon^2

and solve for the error term

:\varepsilon = \frac{S - x^2}{2x + \varepsilon} \approx \frac{S - x^2}{2x}, \quad \text{since } \varepsilon \ll x.

Therefore, we can compensate for the error and update our old estimate as

:x + \varepsilon \approx x + \frac{S - x^2}{2x} = \frac{S + x^2}{2x} = \frac{1}{2}\left(x + \frac{S}{x}\right) \equiv x_\text{revised}

Since the computed error was not exact, this becomes our next best guess. The process of updating is iterated until the desired accuracy is obtained. This is a quadratically convergent algorithm, which means that the number of correct digits of the approximation roughly doubles with each iteration. It proceeds as follows:

# Begin with an arbitrary positive starting value x_0 (the closer to the actual square root of S, the better).
# Let x_{n+1} be the average of x_n and S/x_n (using the arithmetic mean to approximate the geometric mean).
# Repeat step 2 until the desired accuracy is achieved.

It can also be represented as:

:x_0 \approx \sqrt{S},
:x_{n+1} = \frac{1}{2}\left(x_n + \frac{S}{x_n}\right),
:\sqrt{S} = \lim_{n \to \infty} x_n.

This algorithm works equally well in the p-adic numbers, but cannot be used to identify real square roots with p-adic square roots; one can, for example, construct a sequence of rational numbers by this method that converges to +3 in the reals, but to −3 in the 2-adics.
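A direct C rendering of the three-step procedure (the iteration cap and tolerance are arbitrary choices):

#include <math.h>

/* Heron's (Babylonian) method: repeatedly average x and S/x. */
double sqrt_heron(double S, double x0) {
    double x = x0;                        /* arbitrary positive starting value */
    for (int i = 0; i < 100; i++) {
        double next = 0.5 * (x + S / x);  /* average of over- and underestimate */
        if (fabs(next - x) <= 1e-12 * x)  /* termination criterion */
            return next;
        x = next;
    }
    return x;
}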


Example

To calculate \sqrt{S}, where S = 125348, to six significant figures, use the rough estimation method above to get

:x_0 = 6 \cdot 10^2 = 600.000
:x_1 = \frac{1}{2}\left(x_0 + \frac{S}{x_0}\right) = \frac{1}{2}\left(600.000 + \frac{125348}{600.000}\right) = 404.457
:x_2 = \frac{1}{2}\left(x_1 + \frac{S}{x_1}\right) = \frac{1}{2}\left(404.457 + \frac{125348}{404.457}\right) = 357.187
:x_3 = \frac{1}{2}\left(x_2 + \frac{S}{x_2}\right) = \frac{1}{2}\left(357.187 + \frac{125348}{357.187}\right) = 354.059
:x_4 = \frac{1}{2}\left(x_3 + \frac{S}{x_3}\right) = \frac{1}{2}\left(354.059 + \frac{125348}{354.059}\right) = 354.045
:x_5 = \frac{1}{2}\left(x_4 + \frac{S}{x_4}\right) = \frac{1}{2}\left(354.045 + \frac{125348}{354.045}\right) = 354.045

Therefore, \sqrt{125348} \approx 354.045.


Convergence

Suppose that x_0 > 0 and S > 0. Then for any natural number n, x_n > 0. Let the relative error in x_n be defined by

:\varepsilon_n = \frac{x_n}{\sqrt{S}} - 1 > -1

and thus

:x_n = \sqrt{S} \cdot (1 + \varepsilon_n).

Then it can be shown that

:\varepsilon_{n+1} = \frac{\varepsilon_n^2}{2(1 + \varepsilon_n)} \geq 0.

And thus that

:\varepsilon_{n+1} \leq \min\left\{\frac{\varepsilon_n^2}{2}, \frac{\varepsilon_n}{2}\right\}

and consequently that convergence is assured, and quadratic.


Worst case for convergence

If using the rough estimate above with the Babylonian method, then the least accurate cases in ascending order are as follows:

:S = 1;\quad x_0 = 2;\quad x_1 = 1.250;\quad \varepsilon_1 = 0.250.
:S = 10;\quad x_0 = 2;\quad x_1 = 3.500;\quad \varepsilon_1 < 0.107.
:S = 10;\quad x_0 = 6;\quad x_1 = 3.833;\quad \varepsilon_1 < 0.213.
:S = 100;\quad x_0 = 6;\quad x_1 = 11.333;\quad \varepsilon_1 < 0.134.

Thus in any case,

:\varepsilon_1 \leq 2^{-2}.
:\varepsilon_2 < 2^{-5} < 10^{-1}.
:\varepsilon_3 < 2^{-11} < 10^{-3}.
:\varepsilon_4 < 2^{-23} < 10^{-6}.
:\varepsilon_5 < 2^{-47} < 10^{-14}.
:\varepsilon_6 < 2^{-95} < 10^{-28}.
:\varepsilon_7 < 2^{-191} < 10^{-57}.
:\varepsilon_8 < 2^{-383} < 10^{-115}.

Rounding errors will slow the convergence. It is recommended to keep at least one extra digit beyond the desired accuracy of the x_n being calculated to minimize round-off error.


Bakhshali method

This method for finding an approximation to a square root was described in an ancient South Asian manuscript from Pakistan, called the Bakhshali manuscript. It is equivalent to two iterations of the Babylonian method beginning with x_0. Thus, the algorithm is quartically convergent, which means that the number of correct digits of the approximation roughly quadruples with each iteration. The original presentation, using modern notation, is as follows: to calculate \sqrt{S}, let x_0^2 be the initial approximation to S. Then, successively iterate as:

:\begin{align} a_n &= \frac{S - x_n^2}{2x_n},\\ b_n &= x_n + a_n,\\ x_{n+1} &= b_n - \frac{a_n^2}{2b_n} = (x_n + a_n) - \frac{a_n^2}{2(x_n + a_n)}. \end{align}

This can be used to construct a rational approximation to the square root by beginning with an integer. If x_0 = N is an integer chosen so N^2 is close to S, and d = S - N^2 is the difference whose absolute value is minimized, then the first iteration can be written as:

:\sqrt{S} \approx N + \frac{d}{2N} - \frac{d^2}{8N^3 + 4Nd} = \frac{8N^4 + 8N^2 d + d^2}{8N^3 + 4Nd} = \frac{N^4 + 6N^2 S + S^2}{4N^3 + 4NS} = \frac{N^2(N^2 + 6S) + S^2}{4N(N^2 + S)}.

The Bakhshali method can be generalized to the computation of an arbitrary root, including fractional roots.
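A C sketch of one Bakhshali step per loop iteration (the loop bound and tolerance are arbitrary choices):

#include <math.h>

/* Bakhshali iteration: each step equals two Babylonian steps (quartic convergence). */
double sqrt_bakhshali(double S, double x0) {
    double x = x0;
    for (int i = 0; i < 20; i++) {
        double a = (S - x * x) / (2.0 * x);
        double b = x + a;
        double next = b - (a * a) / (2.0 * b);
        if (fabs(next - x) <= 1e-12 * x)
            return next;
        x = next;
    }
    return x;
}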


Example

Using the same example as given with the Babylonian method, let S = 125348. Then, the first iteration gives

:\begin{align} x_0 &= 600\\ a_1 &= \frac{125348 - 600^2}{2 \times 600} &&= -195.543\\ b_1 &= 600 + (-195.543) &&= 404.456\\ x_1 &= 404.456 - \frac{(-195.543)^2}{2 \times 404.456} &&= 357.186 \end{align}

Likewise the second iteration gives

:\begin{align} a_2 &= \frac{125348 - 357.186^2}{2 \times 357.186} &&= -3.126\\ b_2 &= 357.186 + (-3.126) &&= 354.060\\ x_2 &= 354.06 - \frac{(-3.126)^2}{2 \times 354.06} &&= 354.046 \end{align}


Digit-by-digit calculation

This is a method to find each digit of the square root in a sequence. This method is based on the binomial theorem and is basically an inverse algorithm to solving (x + y)^2 = x^2 + 2xy + y^2. It is slower than the Babylonian method, but it has several advantages:

* It can be easier for manual calculations.
* Every digit of the root found is known to be correct, i.e., it does not have to be changed later.
* If the square root has an expansion that terminates, the algorithm terminates after the last digit is found. Thus, it can be used to check whether a given integer is a square number.
* The algorithm works for any base, and naturally, the way it proceeds depends on the base chosen.

Its disadvantages are that the algorithm becomes quite unwieldy for higher roots, and that it does not tolerate inaccurate guesses or inaccurate sub-calculations: unlike self-correcting approximations such as Newton's method, a single error makes every following digit of the result wrong. Furthermore, even though this algorithm is efficient enough on paper, it is far too expensive for software implementations: the intermediate values grow larger and larger and load the memory, while still only allowing digit-by-digit progress, so the algorithm becomes slower and slower with every following digit.

Napier's bones include an aid for the execution of this algorithm. The shifting ''n''th root algorithm is a generalization of this method.


Basic principle

First, consider the case of finding the square root of a number Z, that is the square of a two-digit number XY, where X is the tens digit and Y is the units digit. Specifically:

:Z = (10X + Y)^2 = 100X^2 + 20XY + Y^2

Now using the digit-by-digit algorithm, we first determine the value of X. X is the largest digit such that X^2 is less than or equal to Z from which we removed the two rightmost digits. In the next iteration, we pair the digits, multiply X by 2, and place it in the tens place while we try to figure out what the value of Y is. Since this is a simple case where the answer is a perfect square root XY, the algorithm stops here.

The same idea can be extended to any arbitrary square root computation next. Suppose we are able to find the square root of N by expressing it as a sum of n positive numbers such that

:N = (a_1 + a_2 + a_3 + \dotsb + a_n)^2.

By repeatedly applying the basic identity

:(x + y)^2 = x^2 + 2xy + y^2,

the right-hand-side term can be expanded as

:\begin{align} &(a_1 + a_2 + a_3 + \dotsb + a_n)^2 \\ =&\, a_1^2 + 2a_1a_2 + a_2^2 + 2(a_1 + a_2)a_3 + a_3^2 + \dotsb + a_{n-1}^2 + 2\left(\sum_{i=1}^{n-1} a_i\right)a_n + a_n^2 \\ =&\, a_1^2 + [2a_1 + a_2]a_2 + [2(a_1 + a_2) + a_3]a_3 + \dotsb + \left[2\left(\sum_{i=1}^{n-1} a_i\right) + a_n\right]a_n. \end{align}

This expression allows us to find the square root by sequentially guessing the values of the a_i. Suppose that the numbers a_1, \ldots, a_{m-1} have already been guessed, then the m-th term of the right-hand side of the above summation is given by Y_m = [2P_{m-1} + a_m]a_m, where P_{m-1} = \sum_{i=1}^{m-1} a_i is the approximate square root found so far. Now each new guess a_m should satisfy the recursion

:X_m = X_{m-1} - Y_m,

such that X_m \geq 0 for all 1 \leq m \leq n, with initialization X_0 = N. When X_n = 0, the exact square root has been found; if not, then the sum of the a_i gives a suitable approximation of the square root, with X_n being the approximation error.

For example, in the decimal number system we have

:N = (a_1 \cdot 10^{n-1} + a_2 \cdot 10^{n-2} + \cdots + a_{n-1} \cdot 10 + a_n)^2,

where the 10^{n-i} are place holders and the coefficients a_i \in \{0, 1, 2, \ldots, 9\}. At any m-th stage of the square root calculation, the approximate root found so far, P_{m-1}, and the summation term Y_m are given by

:P_{m-1} = \sum_{i=1}^{m-1} a_i \cdot 10^{n-i} = 10^{n-m+1} \sum_{i=1}^{m-1} a_i \cdot 10^{m-i-1},
:Y_m = \left[2P_{m-1} + a_m \cdot 10^{n-m}\right]a_m \cdot 10^{n-m} = \left[20 \sum_{i=1}^{m-1} a_i \cdot 10^{m-i-1} + a_m\right]a_m \cdot 10^{2(n-m)}.

Here since the place value of Y_m is an even power of 10, we only need to work with the pair of most significant digits of the remaining term X_{m-1} at any m-th stage. The section below codifies this procedure.

It is obvious that a similar method can be used to compute the square root in number systems other than the decimal number system. For instance, finding the digit-by-digit square root in the binary number system is quite efficient since the value of a_i is searched from a smaller set of binary digits \{0, 1\}. This makes the computation faster since at each stage the value of Y_m is either Y_m = 0 for a_m = 0 or Y_m = 2P_{m-1} + 1 for a_m = 1. The fact that we have only two possible options for a_m also makes the process of deciding the value of a_m at the m-th stage of calculation easier. This is because we only need to check if Y_m \leq X_{m-1} for a_m = 1. If this condition is satisfied, then we take a_m = 1; if not, then a_m = 0. Also, the fact that multiplication by 2 is done by left bit-shifts helps in the computation.


Decimal (base 10)

Write the original number in decimal form. The numbers are written similar to the long division algorithm, and, as in long division, the root will be written on the line above. Now separate the digits into pairs, starting from the decimal point and going both left and right. The decimal point of the root will be above the decimal point of the square. One digit of the root will appear above each pair of digits of the square. Beginning with the left-most pair of digits, do the following procedure for each pair:

# Starting on the left, bring down the most significant (leftmost) pair of digits not yet used (if all the digits have been used, write "00") and write them to the right of the remainder from the previous step (on the first step, there will be no remainder). In other words, multiply the remainder by 100 and add the two digits. This will be the current value ''c''.
# Find ''p'', ''y'' and ''x'', as follows:
#* Let ''p'' be the part of the root found so far, ignoring any decimal point. (For the first step, ''p'' = 0.)
#* Determine the greatest digit ''x'' such that x(20p + x) \le c. We will use a new variable ''y'' = ''x''(20''p'' + ''x'').
#** Note: 20''p'' + ''x'' is simply twice ''p'', with the digit ''x'' appended to the right.
#** Note: ''x'' can be found by guessing what ''c''/(20·''p'') is and doing a trial calculation of ''y'', then adjusting ''x'' upward or downward as necessary.
#* Place the digit ''x'' as the next digit of the root, i.e., above the two digits of the square you just brought down. Thus the next ''p'' will be the old ''p'' times 10 plus ''x''.
# Subtract ''y'' from ''c'' to form a new remainder.
# If the remainder is zero and there are no more digits to bring down, then the algorithm has terminated. Otherwise go back to step 1 for another iteration.


Examples

Find the square root of 152.2756.

            1  2. 3  4
          /
        \/  01 52.27 56

            01                     1*1 <= 1 < 2*2                  x = 1
            01                     y = x*x = 1*1 = 1
            00 52                  22*2 <= 52 < 23*3               x = 2
            00 44                  y = (20+x)*x = 22*2 = 44
               08 27               243*3 <= 827 < 244*4            x = 3
               07 29               y = (240+x)*x = 243*3 = 729
                  98 56            2464*4 <= 9856 < 2465*5         x = 4
                  98 56            y = (2460+x)*x = 2464*4 = 9856
                  00 00            Algorithm terminates: Answer is 12.34
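The same pairing-and-subtracting procedure, restricted to integer operands for simplicity, might be coded like this (the function name and the truncating integer result are assumptions of this sketch):

#include <stdint.h>

/* Decimal digit-by-digit integer square root (truncating). */
uint32_t isqrt_decimal(uint64_t n) {
    uint64_t place = 1;                        /* highest power of 100 <= n */
    while (place <= n / 100)
        place *= 100;
    uint64_t c = 0, p = 0;                     /* current value; root so far */
    while (place > 0) {
        c = c * 100 + (n / place) % 100;       /* bring down the next digit pair */
        uint64_t x = 9;
        while (x * (20 * p + x) > c)           /* greatest x with x(20p + x) <= c */
            x--;
        c -= x * (20 * p + x);                 /* subtract y for the new remainder */
        p = p * 10 + x;                        /* append the digit to the root */
        place /= 100;
    }
    return (uint32_t)p;
}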


Binary numeral system (base 2)

This section uses the formalism from the digit-by-digit calculation section above, with the slight variation that we let N^2 = (a_n + \dotsb + a_0)^2, with each a_m = 2^m or a_m = 0.

We iterate over all the 2^m, from 2^n down to 2^0, and build up an approximate solution P_m = a_n + a_{n-1} + \ldots + a_m, the sum of all a_i for which we have determined the value.

To determine if a_m equals 2^m or 0, we let P_m = P_{m+1} + 2^m. If P_m^2 \leq N^2 (i.e. the square of our approximate solution including 2^m does not exceed the target square) then a_m = 2^m, otherwise a_m = 0 and P_m = P_{m+1}.

To avoid squaring P_m in each step, we store the difference X_m = N^2 - P_m^2 and incrementally update it by setting X_m = X_{m+1} - Y_m with Y_m = P_m^2 - P_{m+1}^2 = 2P_{m+1}a_m + a_m^2.

Initially, we set a_n = P_n = 2^n for the largest n with (2^n)^2 = 4^n \leq N^2. As an extra optimization, we store P_{m+1} \cdot 2^{m+1} and (2^m)^2, the two terms of Y_m in case that a_m is nonzero, in separate variables c_m, d_m:

:c_m = P_{m+1} \cdot 2^{m+1}
:d_m = (2^m)^2
:Y_m = \begin{cases} c_m + d_m & \text{if } a_m = 2^m \\ 0 & \text{if } a_m = 0 \end{cases}

c_m and d_m can be efficiently updated in each step:

:c_{m-1} = P_m \cdot 2^m = (P_{m+1} + a_m) \cdot 2^m = P_{m+1} \cdot 2^m + a_m \cdot 2^m = \begin{cases} c_m/2 + d_m & \text{if } a_m = 2^m \\ c_m/2 & \text{if } a_m = 0 \end{cases}
:d_{m-1} = \frac{d_m}{4}

Note that:

:c_{-1} = P_0 \cdot 2^0 = P_0 = N,

which is the final result returned in the function below. An implementation of this algorithm in C (the body below is a reconstruction of the elided listing, following the update rules just derived):

#include <assert.h>
#include <stdint.h>

int32_t isqrt(int32_t n) {
    assert(n >= 0 && "sqrt input should be non-negative");
    int32_t x = n;        /* X_(m+1): remainder N^2 - P^2, updated incrementally */
    int32_t c = 0;        /* c_m = P_(m+1) * 2^(m+1) */
    int32_t d = 1 << 30;  /* d_m = (2^m)^2, starting at the highest power of four <= n */
    while (d > n)
        d >>= 2;
    while (d != 0) {              /* for each bit, from 2^n down to 2^0 */
        if (x >= c + d) {         /* if Y_m <= X_(m+1), then a_m = 2^m */
            x -= c + d;           /* X_m = X_(m+1) - Y_m */
            c = (c >> 1) + d;     /* c_(m-1) = c_m/2 + d_m */
        } else {
            c >>= 1;              /* c_(m-1) = c_m/2 */
        }
        d >>= 2;                  /* d_(m-1) = d_m/4 */
    }
    return c;                     /* c_(-1) = P_0, the integer square root */
}

Faster algorithms, in binary and decimal or any other base, can be realized by using lookup tables—in effect trading more storage space for reduced run time.


Exponential identity

Pocket calculators typically implement good routines to compute the exponential function and the natural logarithm, and then compute the square root of S using the identity found using the properties of logarithms (\ln x^n = n \ln x) and exponentials (e^{\ln x} = x):

:\sqrt{S} = e^{\frac{1}{2}\ln S}.

The denominator in the fraction corresponds to the ''n''th root. In the case above the denominator is 2, hence the equation specifies that the square root is to be found. The same identity is used when computing square roots with logarithm tables or slide rules.
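On a system with a math library this identity is a one-liner; a trivial sketch:

#include <math.h>

/* sqrt(S) = e^(ln(S)/2), valid for S > 0. */
double sqrt_exp_log(double S) {
    return exp(0.5 * log(S));
}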


A two-variable iterative method

This method is applicable for finding the square root of 0 < S < 3 and converges best for S \approx 1. This, however, is no real limitation for a computer-based calculation, as in base 2 floating point and fixed point representations it is trivial to multiply S by an integer power of 4, and therefore \sqrt{S} by the corresponding power of 2, by changing the exponent or by shifting, respectively. Therefore, S can be moved to the range \frac{1}{2} \le S < 2. Moreover, the following method does not employ general divisions, but only additions, subtractions, multiplications, and divisions by powers of two, which are again trivial to implement. A disadvantage of the method is that numerical errors accumulate, in contrast to single-variable iterative methods such as the Babylonian one.

The initialization step of this method is

:a_0 = S
:c_0 = S - 1

while the iterative steps read

:a_{n+1} = a_n - a_n c_n / 2
:c_{n+1} = c_n^2 (c_n - 3) / 4

Then, a_n \rightarrow \sqrt{S} (while c_n \rightarrow 0). Note that the convergence of c_n, and therefore also of a_n, is quadratic.

The proof of the method is rather easy. First, rewrite the iterative definition of c_n as

:1 + c_{n+1} = (1 + c_n)(1 - c_n/2)^2.

Then it is straightforward to prove by induction that

:S(1 + c_n) = a_n^2,

and therefore the convergence of a_n to the desired result \sqrt{S} is ensured by the convergence of c_n to 0, which in turn follows from -1 < c_0 < 2.

This method was developed around 1950 by M. V. Wilkes, D. J. Wheeler and S. Gill for use on EDSAC, one of the first electronic computers. The method was later generalized, allowing the computation of non-square roots.
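A C sketch of the two-variable iteration (the iteration cap and stopping tolerance are arbitrary choices; S is assumed already reduced to the stated range):

/* Two-variable iteration for 0 < S < 3: a_n -> sqrt(S) while c_n -> 0. */
double sqrt_two_var(double S) {
    double a = S;
    double c = S - 1.0;
    for (int i = 0; i < 60 && (c > 1e-15 || c < -1e-15); i++) {
        a = a - a * c / 2.0;           /* a_(n+1) = a_n - a_n c_n / 2 */
        c = c * c * (c - 3.0) / 4.0;   /* c_(n+1) = c_n^2 (c_n - 3) / 4 */
    }
    return a;
}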


Iterative methods for reciprocal square roots

The following are iterative methods for finding the reciprocal square root of S, which is 1/\sqrt{S}. Once it has been found, find \sqrt{S} by simple multiplication: \sqrt{S} = S \cdot (1/\sqrt{S}). These iterations involve only multiplication, and not division. They are therefore faster than the Babylonian method. However, they are not stable. If the initial value is not close to the reciprocal square root, the iterations will diverge away from it rather than converge to it. It can therefore be advantageous to perform an iteration of the Babylonian method on a rough estimate before starting to apply these methods.

* Applying Newton's method to the equation (1/x^2) - S = 0 produces a method that converges quadratically using three multiplications per step (see the sketch after this list):
*:x_{n+1} = \frac{x_n}{2} \cdot (3 - S \cdot x_n^2) = x_n \cdot \left(\frac{3}{2} - \frac{S}{2} \cdot x_n^2\right).
* Another iteration is obtained by Halley's method, which is Householder's method of order two. This converges cubically, but involves five multiplications per iteration:
*:y_n = S \cdot x_n^2, and
*:x_{n+1} = \frac{x_n}{8} \cdot (15 - y_n \cdot (10 - 3 \cdot y_n)) = x_n \cdot \left(\frac{15}{8} - y_n \cdot \left(\frac{10}{8} - \frac{3}{8} \cdot y_n\right)\right).
* If doing fixed-point arithmetic, the multiplication by 3 and division by 8 can be implemented using shifts and adds. If using floating-point, Halley's method can be reduced to four multiplications per iteration by precomputing a scaled constant times S and adjusting the other constants to compensate; with y_n = \sqrt{3/8} \cdot S \cdot x_n^2 (the constants here are rederived from the five-multiplication form above),
*:x_{n+1} = x_n \cdot \left(\frac{15}{8} - y_n \cdot \left(\sqrt{\frac{25}{6}} - y_n\right)\right).
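A sketch of the quadratic (Newton) variant in C; note the seed must already be close to 1/\sqrt{S}, since these iterations are not globally stable:

/* Newton's method on 1/x^2 - S = 0; three multiplications per step. */
double rsqrt_newton(double S, double x0) {
    double x = x0;                         /* must be near 1/sqrt(S) */
    for (int i = 0; i < 10; i++)
        x = x * (1.5 - 0.5 * S * x * x);
    return x;                              /* then sqrt(S) = S * x */
}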


Goldschmidt’s algorithm

Some computers use Goldschmidt's algorithm to simultaneously calculate \sqrt{S} and 1/\sqrt{S}. Goldschmidt's algorithm finds \sqrt{S} faster than Newton–Raphson iteration on a computer with a fused multiply–add instruction and either a pipelined floating point unit or two independent floating-point units.

The first way of writing Goldschmidt's algorithm begins

:b_0 = S
:Y_0 \approx 1/\sqrt{S} (typically using a table lookup)
:y_0 = Y_0
:x_0 = S y_0

and iterates

:b_{n+1} = b_n Y_n^2
:Y_{n+1} = (3 - b_{n+1})/2
:x_{n+1} = x_n Y_{n+1}
:y_{n+1} = y_n Y_{n+1}

until b_i is sufficiently close to 1, or a fixed number of iterations have been performed. The iterations converge to

:\lim_{n \to \infty} x_n = \sqrt{S}, and
:\lim_{n \to \infty} y_n = 1/\sqrt{S}.

Note that it is possible to omit either x_n or y_n from the computation, and if both are desired then x_n = S y_n may be used at the end rather than computing it through in each iteration.

A second form, using fused multiply–add operations, begins

:y_0 \approx 1/\sqrt{S} (typically using a table lookup)
:x_0 = S y_0
:h_0 = y_0/2

and iterates

:r_n = 0.5 - x_n h_n
:x_{n+1} = x_n + x_n r_n
:h_{n+1} = h_n + h_n r_n

until r_i is sufficiently close to 0, or a fixed number of iterations have been performed. This converges to

:\lim_{n \to \infty} x_n = \sqrt{S}, and
:\lim_{n \to \infty} 2h_n = 1/\sqrt{S}.
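The second form maps directly onto fused multiply–add instructions; a C99 sketch using the standard fma call (the fixed iteration count is an assumption):

#include <math.h>

/* Goldschmidt iteration, FMA form. y0 is a rough estimate of 1/sqrt(S). */
void goldschmidt_sqrt(double S, double y0, double *sqrtS, double *rsqrtS) {
    double x = S * y0;
    double h = 0.5 * y0;
    for (int i = 0; i < 5; i++) {
        double r = fma(-x, h, 0.5);   /* r_n = 0.5 - x_n h_n */
        x = fma(x, r, x);             /* x_(n+1) = x_n + x_n r_n -> sqrt(S) */
        h = fma(h, r, h);             /* h_(n+1) = h_n + h_n r_n -> 1/(2 sqrt(S)) */
    }
    *sqrtS = x;
    *rsqrtS = 2.0 * h;
}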


Taylor series

If ''N'' is an approximation to \sqrt{S}, so that S = N^2 + d, a better approximation can be found by using the Taylor series of the square root function:

:\sqrt{N^2 + d} = N\sum_{n=0}^\infty \frac{(-1)^n (2n)!}{(1 - 2n)(n!)^2 4^n}\frac{d^n}{N^{2n}} = N\left(1 + \frac{d}{2N^2} - \frac{d^2}{8N^4} + \frac{d^3}{16N^6} - \frac{5d^4}{128N^8} + \cdots\right)

As an iterative method, the order of convergence is equal to the number of terms used. With two terms, it is identical to the Babylonian method. With three terms, each iteration takes almost as many operations as the Bakhshali approximation, but converges more slowly. Therefore, this is not a particularly efficient way of calculation. To maximize the rate of convergence, choose ''N'' so that \frac{|d|}{N^2} is as small as possible.
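For illustration, a truncated four-term evaluation in Horner form (the term count is an arbitrary choice):

/* Four-term Taylor evaluation of sqrt(N^2 + d), accurate when |d|/N^2 is small. */
double sqrt_taylor4(double S, double N) {
    double t = (S - N * N) / (N * N);   /* the expansion variable d/N^2 */
    return N * (1.0 + t * (0.5 + t * (-0.125 + t * 0.0625)));
}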


Continued fraction expansion

Quadratic irrationals (numbers of the form \frac{a + \sqrt{b}}{c}, where ''a'', ''b'' and ''c'' are integers), and in particular, square roots of integers, have periodic continued fractions. Sometimes what is desired is finding not the numerical value of a square root, but rather its continued fraction expansion, and hence its rational approximation. Let ''S'' be the positive number for which we are required to find the square root. Then assuming ''a'' to be a number that serves as an initial guess and ''r'' to be the remainder term, we can write S = a^2 + r. Since we have S - a^2 = (\sqrt{S} + a)(\sqrt{S} - a) = r, we can express the square root of ''S'' as

:\sqrt{S} = a + \frac{r}{a + \sqrt{S}}.

By applying this expression for \sqrt{S} to the denominator term of the fraction, we have

:\sqrt{S} = a + \cfrac{r}{2a + \cfrac{r}{2a + \cdots}}.

The numerator/denominator expansion for continued fractions is cumbersome to write as well as to embed in text formatting systems. So mathematicians have devised several alternative notations, like

:\sqrt{S} = a + \frac{r|}{|2a} + \frac{r|}{|2a} + \frac{r|}{|2a} \cdots

When r = 1 throughout, an even more compact notation is:

:[a; 2a, 2a, 2a, \cdots]

For repeating continued fractions (which all square roots of non-perfect squares have), the repetend is represented only once, with an overline to signify a non-terminating repetition of the overlined part (see periodic continued fraction):

:[a; \overline{2a}]

For \sqrt{2}, the value of ''a'' is 1, so its representation is:

:[1; \overline{2}]

Proceeding this way, we get a generalized continued fraction for the square root as

:\sqrt{S} = a + \cfrac{r}{2a + \cfrac{r}{2a + \cfrac{r}{2a + \ddots}}}

The first step to evaluating such a fraction to obtain a root is to do numerical substitutions for the root of the number desired, and the number of denominators selected. For example, in canonical form, r is 1 and for \sqrt{2}, a is 1, so the numerical continued fraction for 3 denominators is:

:\sqrt{2} \approx 1 + \cfrac{1}{2 + \cfrac{1}{2 + \cfrac{1}{2}}}

Step 2 is to reduce the continued fraction from the bottom up, one denominator at a time, to yield a rational fraction whose numerator and denominator are integers. The reduction proceeds thus (taking the first three denominators):

:1 + \cfrac{1}{2 + \cfrac{1}{2 + \cfrac{1}{2}}} = 1 + \cfrac{1}{2 + \cfrac{1}{\frac{5}{2}}} = 1 + \cfrac{1}{2 + \cfrac{2}{5}} = 1 + \cfrac{1}{\frac{12}{5}} = 1 + \cfrac{5}{12} = \frac{17}{12}

Finally (step 3), divide the numerator by the denominator of the rational fraction to obtain the approximate value of the root:

:17 \div 12 = 1.42

rounded to three digits of precision. The actual value of \sqrt{2} is 1.41 to three significant digits. The relative error is 0.17%, so the rational fraction is good to almost three digits of precision. Taking more denominators gives successively better approximations: four denominators yields the fraction \frac{41}{29} = 1.4137, good to almost 4 digits of precision, etc.

As examples of square roots, their simple continued fractions, and their convergents with denominator up to 99: \sqrt{2} = [1; \overline{2}] has convergents 3/2, 7/5, 17/12, 41/29 and 99/70, while \sqrt{3} = [1; \overline{1, 2}] has convergents 2/1, 5/3, 7/4, 19/11, 26/15, 71/41 and 97/56. In general, the larger the denominator of a rational fraction, the better the approximation. It can also be shown that truncating a continued fraction yields a rational fraction that is the best approximation to the root of any fraction with denominator less than or equal to the denominator of that fraction; e.g., no fraction with a denominator less than or equal to 70 is as good an approximation to \sqrt{2} as 99/70.
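The bottom-up reduction of step 2 is easy to mechanize; a C sketch (k, the number of denominators, is a caller choice):

/* Evaluate sqrt(S) ~ a + r/(2a + r/(2a + ...)) with k denominators, S = a^2 + r. */
double sqrt_cont_frac(double S, double a, int k) {
    double r = S - a * a;
    double tail = 2.0 * a;           /* innermost denominator */
    for (int i = 1; i < k; i++)
        tail = 2.0 * a + r / tail;   /* fold in one more level, bottom up */
    return a + r / tail;
}

With S = 2, a = 1 and k = 3 this reproduces 17/12 ≈ 1.4167 from the example above.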


Lucas sequence method

The Lucas sequence of the first kind U_n(P,Q) is defined by the recurrence relations

:U_n(P, Q) = \begin{cases} 0 & \text{if } n = 0 \\ 1 & \text{if } n = 1 \\ P \cdot U_{n-1}(P, Q) - Q \cdot U_{n-2}(P, Q) & \text{otherwise} \end{cases}

and its characteristic equation is

:x^2 - P \cdot x + Q = 0,

which has the discriminant D = P^2 - 4Q and the roots

:x_1 = \frac{P + \sqrt{D}}{2}, \quad x_2 = \frac{P - \sqrt{D}}{2}.

These yield the following positive limit:

:\lim_{n \to \infty} \frac{U_{n+1}}{U_n} = x_1.

So when we want \sqrt{a}, we can choose P = 2 and Q = 1 - a, and then calculate x_1 = 1 + \sqrt{a} using U_{n+1} and U_n for large values of n. The most effective way to calculate U_{n+1} and U_n is by matrix powers:

:\begin{pmatrix} U_n \\ U_{n+1} \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -Q & P \end{pmatrix} \cdot \begin{pmatrix} U_{n-1} \\ U_n \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -Q & P \end{pmatrix}^n \cdot \begin{pmatrix} U_0 \\ U_1 \end{pmatrix}

In summary,

:\begin{pmatrix} 0 & 1 \\ a - 1 & 2 \end{pmatrix}^n \cdot \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} U_n \\ U_{n+1} \end{pmatrix}

and then, as n \to \infty,

:\sqrt{a} = \frac{U_{n+1}}{U_n} - 1.
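A C sketch that runs the recurrence directly rather than by matrix powers (the rescaling guard against overflow is an implementation assumption; it leaves the ratio unchanged):

/* sqrt(a) ~ U_(n+1)/U_n - 1 for the Lucas sequence U_n(2, 1 - a). */
double sqrt_lucas(double a, int n) {
    double u0 = 0.0, u1 = 1.0;                   /* U_0 and U_1 */
    for (int i = 0; i < n; i++) {
        double u2 = 2.0 * u1 + (a - 1.0) * u0;   /* U_(k+1) = 2 U_k - (1 - a) U_(k-1) */
        u0 = u1;
        u1 = u2;
        if (u1 > 1e100) { u0 *= 1e-100; u1 *= 1e-100; }  /* rescale to avoid overflow */
    }
    return u1 / u0 - 1.0;
}

For example, sqrt_lucas(2.0, 30) approximates \sqrt{2} \approx 1.41421.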


Approximations that depend on the floating point representation

A number is represented in a floating point format as m \times b^p, which is also called scientific notation. Its square root is \sqrt{m} \times b^{p/2}, and similar formulae would apply for cube roots and logarithms. On the face of it, this is no improvement in simplicity, but suppose that only an approximation is required: then just b^{p/2} is good to an order of magnitude. Next, recognise that some powers, p, will be odd; thus for 3141.59 = 3.14159 \times 10^3, rather than deal with fractional powers of the base, multiply the mantissa by the base and subtract one from the power to make it even. The adjusted representation will become the equivalent of 31.4159 \times 10^2, so that the square root will be \sqrt{31.4159} \times 10.

If the integer part of the adjusted mantissa is taken, there can only be the values 1 to 99, and that could be used as an index into a table of 99 pre-computed square roots to complete the estimate. A computer using base sixteen would require a larger table, but one using base two would require only three entries: the possible bits of the integer part of the adjusted mantissa are 01 (the power being even so there was no shift, remembering that a normalised floating point number always has a non-zero high-order digit) or, if the power was odd, 10 or 11, these being the first ''two'' bits of the original mantissa. Thus, 6.25 = 110.01 in binary, normalised to 1.1001 × 2^2, an even power, so the paired bits of the mantissa are 01, while .625 = 0.101 in binary normalises to 1.01 × 2^{−1}, an odd power, so the adjustment is to 10.1 × 2^{−2} and the paired bits are 10. Notice that the low order bit of the power is echoed in the high order bit of the pairwise mantissa. An even power has its low-order bit zero and the adjusted mantissa will start with 0, whereas for an odd power that bit is one and the adjusted mantissa will start with 1. Thus, when the power is halved, it is as if its low order bit is shifted out to become the first bit of the pairwise mantissa.

A table with only three entries could be enlarged by incorporating additional bits of the mantissa. However, with computers, rather than calculate an interpolation into a table, it is often better to find some simpler calculation giving equivalent results. Everything now depends on the exact details of the format of the representation, plus what operations are available to access and manipulate the parts of the number. For example, Fortran offers an EXPONENT(x) function to obtain the power. Effort expended in devising a good initial approximation is to be recouped by thereby avoiding the additional iterations of the refinement process that would have been needed for a poor approximation. Since these are few (one iteration requires a divide, an add, and a halving) the constraint is severe.

Many computers follow the IEEE (or sufficiently similar) representation, and a very rapid approximation to the square root can be obtained for starting Newton's method. The technique that follows is based on the fact that the floating point format (in base two) approximates the base-2 logarithm. That is,

:\log_2(m \times 2^p) = p + \log_2(m).

So for a 32-bit single precision floating point number in IEEE format (where notably, the power has a bias of 127 added for the represented form) you can get the approximate logarithm by interpreting its binary representation as a 32-bit integer, scaling it by 2^{-23}, and removing a bias of 127, i.e.

:x_\text{int} \cdot 2^{-23} - 127 \approx \log_2(x).

For example, 1.0 is represented by the hexadecimal number 0x3F800000, which would represent 1065353216 = 127 \cdot 2^{23} if taken as an integer. Using the formula above you get 1065353216 \cdot 2^{-23} - 127 = 0, as expected from \log_2(1.0). In a similar fashion you get 0.5 from 1.5 (0x3FC00000).

To get the square root, divide the logarithm by 2 and convert the value back. The following program demonstrates the idea. Note that the exponent's lowest bit is intentionally allowed to propagate into the mantissa. One way to justify the steps in this program is to assume ''b'' is the exponent bias and ''n'' is the number of explicitly stored bits in the mantissa, and then show that

:\left(\left(\frac{x_\text{int}}{2^n} - b\right)\Big/2 + b\right) \cdot 2^n = \frac{x_\text{int} - 2^n}{2} + \frac{b + 1}{2} \cdot 2^n.

(The body below is a reconstruction of the elided listing, following the identity just stated.)

/* Assumes that float is in the IEEE 754 single precision floating point format */
#include <stdint.h>

float sqrt_approx(float z) {
    union { float f; uint32_t i; } val = {z};  /* reinterpret the bit pattern */
    val.i -= 1 << 23;   /* Subtract 2^n, where n = 23. */
    val.i >>= 1;        /* Divide by 2: halves the exponent, bit spills into mantissa. */
    val.i += 1 << 29;   /* Add ((b + 1) / 2) * 2^n, where b = 127. */
    return val.f;       /* Interpret again as a float. */
}

The three mathematical operations forming the core of the above function can be expressed in a single line. An additional adjustment can be added to reduce the maximum relative error. So, the three operations, not including the cast, can be rewritten as

:val.i = (1 << 29) + (val.i >> 1) - (1 << 22) + a;

where ''a'' is a bias for adjusting the approximation errors. For example, with ''a'' = 0 the results are accurate for even powers of 2 (e.g. 1.0), but for other numbers the results will be slightly too big (e.g. 1.5 for 2.0 instead of 1.414... with 6% error). With ''a'' = −0x4B0D2, the maximum relative error is minimized to ±3.5%.

If the approximation is to be used for an initial guess for Newton's method to the equation (1/x^2) - S = 0, then the reciprocal form shown in the following section is preferred.


Reciprocal of the square root

A variant of the above routine, which computes the reciprocal of the square root, i.e., x^{-1/2} instead, was written by Greg Walsh and is included below (the body here is a reconstruction of the elided listing). The integer-shift approximation produces a relative error of less than 4%, and the error drops further to 0.15% with one iteration of Newton's method on the following line. In computer graphics it is a very efficient way to normalize a vector.

#include <stdint.h>

float invSqrt(float x) {
    float xhalf = 0.5f * x;
    union { float f; int32_t i; } u = {x};
    u.i = 0x5f375a86 - (u.i >> 1);            /* initial integer-shift estimate */
    /* One iteration of Newton's method; may be repeated for more accuracy. */
    u.f = u.f * (1.5f - xhalf * u.f * u.f);
    return u.f;
}

Some VLSI hardware implements inverse square root using a second degree polynomial estimation followed by a Goldschmidt iteration.


Negative or complex square roots

If ''S'' < 0, then its principal square root is

:\sqrt{S} = \sqrt{\vert S \vert} \, i.

If ''S'' = ''a''+''bi'' where ''a'' and ''b'' are real and ''b'' ≠ 0, then its principal square root is

:\sqrt{S} = \sqrt{\frac{\vert S \vert + a}{2}} + \sgn(b)\sqrt{\frac{\vert S \vert - a}{2}} \, i.

This can be verified by squaring the root. Here

:\vert S \vert = \sqrt{a^2 + b^2}

is the modulus of ''S''. The principal square root of a complex number is defined to be the root with the non-negative real part.
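A sketch applying the formula directly (the function name is illustrative; C99's csqrt exists for production use):

#include <math.h>

/* Principal square root of S = a + bi with b != 0. */
void sqrt_complex(double a, double b, double *re, double *im) {
    double mod = sqrt(a * a + b * b);          /* |S| */
    *re = sqrt((mod + a) / 2.0);
    *im = copysign(sqrt((mod - a) / 2.0), b);  /* sign follows sgn(b) */
}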


See also

* Alpha max plus beta min algorithm
* ''n''th root algorithm
* Square root of 2
* Integer square root



External links

* Square roots by subtraction
* Personal Calculator Algorithms I: Square Roots (William E. Egbert), Hewlett-Packard Journal (May 1977): page 22
* Calculator to learn the square root