The shifting ''n''th root algorithm is an algorithm for extracting the ''n''th root of a positive real number which proceeds iteratively by shifting in ''n'' digits of the radicand, starting with the most significant, and produces one digit of the root on each iteration, in a manner similar to long division.


Algorithm


Notation

Let ''B'' be the base of the number system you are using, and ''n'' be the degree of the root to be extracted. Let x be the radicand processed thus far, y be the root extracted thus far, and r be the remainder. Let \alpha be the next n digits of the radicand, and \beta be the next digit of the root. Let x' be the new value of x for the next iteration, y' be the new value of y for the next iteration, and r' be the new value of r for the next iteration. These are all integers.


Invariants

At each iteration, the invariants y^n + r = x and (y+1)^n > x will hold. Thus y is the largest integer less than or equal to the ''n''th root of x, and r is the remainder.


Initialization

The initial values of x, y, and r should be 0. The value of \alpha for the first iteration should be the most significant aligned block of n digits of the radicand. An aligned block of n digits means a block of digits aligned so that the decimal point falls between blocks. For example, in 123.4 the most significant aligned block of two digits is 01, the next most significant is 23, and the third most significant is 40.
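The block alignment can be made concrete with a small Python sketch; the helper name and string interface here are illustrative, not part of the original notation:

```python
def aligned_blocks(int_part: str, frac_part: str, n: int):
    """Split a base-10 radicand into aligned blocks of n digits.

    Blocks are aligned so that the decimal point falls between
    blocks: the integer part is zero-padded on the left, the
    fractional part on the right.
    """
    int_part = int_part.rjust(-(-len(int_part) // n) * n, "0")
    frac_part = frac_part.ljust(-(-len(frac_part) // n) * n, "0")
    digits = int_part + frac_part
    return [int(digits[i:i + n]) for i in range(0, len(digits), n)]
```

For 123.4 with n = 2 this yields [1, 23, 40], matching the aligned blocks 01, 23, 40 described above.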


Main loop

On each iteration we shift in n digits of the radicand, so we have x' = B^n x + \alpha, and we produce one digit of the root, so we have y' = B y + \beta. The first invariant implies that r' = x' - y'^n. We want to choose \beta so that the invariants described above hold; it turns out that there is always exactly one such choice. To summarize, on each iteration:

# Let \alpha be the next aligned block of digits from the radicand
# Let x' = B^n x + \alpha
# Let \beta be the largest \beta such that (B y + \beta)^n \le B^n x + \alpha
# Let y' = B y + \beta
# Let r' = x' - y'^n

Now, note that x = y^n + r, so the condition

:(B y + \beta)^n \le B^n x + \alpha

is equivalent to

:(B y + \beta)^n - B^n y^n \le B^n r + \alpha

and

:r' = x' - y'^n = B^n x + \alpha - (B y + \beta)^n

is equivalent to

:r' = B^n r + \alpha - ((B y + \beta)^n - B^n y^n)

Thus we do not actually need x: since r = x - y^n and x < (y+1)^n, we have r < (y+1)^n - y^n \le n(y+1)^{n-1}, so r has only about (n-1)/n as many digits as x, and by using r instead of x we save time and space by a factor of 1/n. Also, the B^n y^n we subtract in the new test cancels the one in (B y + \beta)^n, so now the highest power of y we have to evaluate is y^{n-1} rather than y^n.
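A single iteration, using only r and y as just derived, might look like this in Python (a sketch; the linear scan over the B candidate digits is one simple way to pick \beta):

```python
def next_digit(r, y, alpha, n, B=10):
    """One iteration of the shifting n-th root algorithm.

    Picks the largest digit beta with
        (B*y + beta)**n - (B*y)**n <= B**n * r + alpha
    and returns the updated root y' and remainder r'.
    """
    rhs = B ** n * r + alpha
    beta = 0
    # Linear scan over candidate digits; a binary search also works.
    while beta + 1 < B and (B * y + beta + 1) ** n - (B * y) ** n <= rhs:
        beta += 1
    y_new = B * y + beta
    return y_new, rhs - (y_new ** n - (B * y) ** n)
```

Feeding the aligned blocks of 2.00 00 through this step with n = 2 produces the root digits 1, 4, 1 (y = 141), the start of the decimal expansion of the square root of 2, with the invariant y^2 + r = 2×10^4 maintained throughout.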


Summary

# Initialize r and y to 0.
# Repeat until the desired precision is obtained:
## Let \alpha be the next aligned block of digits from the radicand.
## Let \beta be the largest \beta such that (B y + \beta)^n - B^n y^n \le B^n r + \alpha.
## Let y' = B y + \beta.
## Let r' = B^n r + \alpha - ((B y + \beta)^n - B^n y^n).
## Assign y \leftarrow y' and r \leftarrow r'.
# Then y is the largest integer such that y^n \le x B^k, and y^n + r = x B^k, where k is the number of digits of the radicand after the decimal point that have been consumed (a negative number if the algorithm has not reached the decimal point yet).
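The summary translates directly into an arbitrary-precision routine. The following Python sketch (the function name is ours) computes the largest y with y^n \le radicand × B^(n·digits), i.e. the root truncated to a given number of base-B digits after the point:

```python
def shifting_nth_root(radicand, n, digits, B=10):
    """Digit-by-digit n-th root of an integer radicand.

    Returns (y, r) with y**n + r == radicand * B**(n*digits) and
    (y+1)**n > radicand * B**(n*digits): y is the truncated root
    with `digits` base-B digits after the point.
    """
    # Appending n*digits zero digits to the radicand appends
    # `digits` digits to the root.
    x = radicand * B ** (n * digits)
    blocks = []                      # aligned blocks, least significant first
    while x > 0:
        blocks.append(x % B ** n)
        x //= B ** n
    y, r = 0, 0
    for alpha in reversed(blocks):   # most significant block first
        rhs = B ** n * r + alpha
        beta = 0
        while beta + 1 < B and (B * y + beta + 1) ** n - (B * y) ** n <= rhs:
            beta += 1
        y_new = B * y + beta
        r = rhs - (y_new ** n - (B * y) ** n)
        y = y_new
    return y, r
```

For example, shifting_nth_root(5, 3, 5) returns y = 170997, the truncated cube root of 5 (1.70997...), with y^3 + r = 5×10^15 as the final invariant.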


Paper-and-pencil ''n''th roots

As noted above, this algorithm is similar to long division, and it lends itself to the same notation, as in this computation of the cube root of 3:

        1.  4  4  2  2  4
      ------------------------
    _
  3/  3.000 000 000 000 000
  \/   1                       = 300×(0^2)×1 + 30×0×(1^2) + 1^3
       -
       2 000
       1 744                   = 300×(1^2)×4 + 30×1×(4^2) + 4^3
       -----
         256 000
         241 984               = 300×(14^2)×4 + 30×14×(4^2) + 4^3
         -------
          14 016 000
          12 458 888           = 300×(144^2)×2 + 30×144×(2^2) + 2^3
          ----------
           1 557 112 000
           1 247 791 448       = 300×(1442^2)×2 + 30×1442×(2^2) + 2^3
           -------------
             309 320 552 000
             249 599 823 424   = 300×(14422^2)×4 + 30×14422×(4^2) + 4^3
             ---------------
              59 720 728 576

Note that after the first iteration or two, the leading term of (B y + \beta)^n - B^n y^n dominates, so we can get an often-correct first guess at \beta by dividing B^n r + \alpha by n B^{n-1} y^{n-1}.


Performance

On each iteration, the most time-consuming task is to select \beta. We know that there are B possible values, so we can find \beta using O(\log(B)) comparisons. Each comparison will require evaluating (B y + \beta)^n - B^n y^n. In the ''k''th iteration, y has k digits, and the polynomial can be evaluated with 2n - 4 multiplications of up to k(n-1) digits and n - 2 additions of up to k(n-1) digits, once we know the powers of y and \beta up through n-1 for y and n for \beta. \beta has a restricted range, so we can get the powers of \beta in constant time. We can get the powers of y with n-2 multiplications of up to k(n-1) digits. Assuming that multiplying two d-digit numbers takes time O(d^2) and that addition takes time O(d), each comparison takes time O(k^2 n^2), so selecting \beta takes time O(k^2 n^2 \log(B)). The remainder of the algorithm is addition and subtraction that takes time O(k), so each iteration takes O(k^2 n^2 \log(B)), and all k digits take time O(k^3 n^2 \log(B)).

The only internal storage needed is r, which is O(k) digits on the ''k''th iteration. The fact that this algorithm does not have bounded memory usage puts an upper bound on the number of digits that can be computed mentally, unlike the more elementary algorithms of arithmetic. Unfortunately, any bounded-memory state machine with periodic inputs can only produce periodic outputs, so no such machine can compute irrational numbers from rational ones, and thus there are no bounded-memory root-extraction algorithms.

Note that increasing the base increases the time needed to select \beta by a factor of O(\log(B)), but decreases the number of digits needed to achieve a given precision by the same factor; since the algorithm is cubic time in the number of digits, increasing the base gives an overall speedup of O(\log^2(B)). When the base is larger than the radicand, the algorithm degenerates to binary search, so it follows that this algorithm is not useful for computing roots with a computer, as it is always outperformed by much simpler binary search and has the same memory complexity.


Examples


Square root of 2 in binary

              1. 0  1  1  0  1
            ---------------------
        _
      \/ 10.00 00 00 00 00          1
          1                       + 1
         -----                   -----
          1 00                    100
              0                  +  0
         --------                -----
          1 00 00                 1001
            10 01                +   1
         -----------             ------
            1 11 00               10101
            1 01 01              +    1
         ----------              -------
              1 11 00             101100
                    0            +     0
         ----------              --------
           1 11 00 00             1011001
           1 01 10 01            +      1
         ----------
              1 01 11   remainder
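The binary computation above can be checked mechanically; this sketch runs the same digit recurrence with B = 2 and n = 2 (the function name is ours):

```python
def binary_sqrt(radicand, frac_bits):
    """Bit-by-bit square root in base 2 (B = 2, n = 2).

    Returns (y, r) with y*y + r == radicand << (2*frac_bits).
    """
    x = radicand << (2 * frac_bits)   # shift in frac_bits aligned 2-bit blocks
    blocks = []
    while x:
        blocks.append(x & 0b11)
        x >>= 2
    y, r = 0, 0
    for alpha in reversed(blocks):    # most significant block first
        rhs = (r << 2) + alpha
        # The only candidate digits are 0 and 1.
        trial = (2 * y + 1) ** 2 - (2 * y) ** 2   # == 4*y + 1
        beta = 1 if trial <= rhs else 0
        y = 2 * y + beta
        r = rhs - beta * trial
    return y, r
```

binary_sqrt(2, 5) gives y = 0b101101 and r = 0b10111, matching the root digits 1.01101 and the remainder 1 01 11 shown above.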


Square root of 3

        1.  7  3  2  0  5
      ----------------------
     _
   \/  3.00 00 00 00 00
       1                    = 20×0×1 + 1^2
       -
       2 00
       1 89                 = 20×1×7 + 7^2      (27 × 7)
       ----
         11 00
         10 29              = 20×17×3 + 3^2     (343 × 3)
         -----
            71 00
            69 24           = 20×173×2 + 2^2    (3462 × 2)
            -----
             1 76 00
                   0        = 20×1732×0 + 0^2   (34640 × 0)
             -------
             1 76 00 00
             1 73 20 25     = 20×17320×5 + 5^2  (346405 × 5)
             ----------
                2 79 75


Cube root of 5

        1.  7   0   9   9   7
      ----------------------------
     _
  3/  5. 000 000 000 000 000
  \/  1                          = 300×(0^2)×1 + 30×0×(1^2) + 1^3
      -
      4 000
      3 913                      = 300×(1^2)×7 + 30×1×(7^2) + 7^3
      -----
         87 000
              0                  = 300×(17^2)×0 + 30×17×(0^2) + 0^3
         ------
         87 000 000
         78 443 829              = 300×(170^2)×9 + 30×170×(9^2) + 9^3
         ----------
          8 556 171 000
          7 889 992 299          = 300×(1709^2)×9 + 30×1709×(9^2) + 9^3
          -------------
            666 178 701 000
            614 014 317 973      = 300×(17099^2)×7 + 30×17099×(7^2) + 7^3
            ---------------
             52 164 383 027


Fourth root of 7

        1.   6    2    6    5    7
      -------------------------------
     _
  4/  7.0000 0000 0000 0000 0000
  \/  1                            = 4000×(0^3)×1 + 600×(0^2)×(1^2) + 40×0×(1^3) + 1^4
      -
      6 0000
      5 5536                       = 4000×(1^3)×6 + 600×(1^2)×(6^2) + 40×1×(6^3) + 6^4
      ------
        4464 0000
        3338 7536                  = 4000×(16^3)×2 + 600×(16^2)×(2^2) + 40×16×(2^3) + 2^4
        ---------
        1125 2464 0000
        1026 0494 3376             = 4000×(162^3)×6 + 600×(162^2)×(6^2) + 40×162×(6^3) + 6^4
        --------------
          99 1969 6624 0000
          86 0185 1379 0625        = 4000×(1626^3)×5 + 600×(1626^2)×(5^2) + 40×1626×(5^3) + 5^4
          -----------------
          13 1784 5244 9375 0000
          12 0489 2414 6927 3201   = 4000×(16265^3)×7 + 600×(16265^2)×(7^2) + 40×16265×(7^3) + 7^4
          ----------------------
           1 1295 2830 2447 6799
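The three decimal examples above (square root of 3, cube root of 5, fourth root of 7) can be cross-checked against the defining property from the Summary: y is the largest integer with y^n \le x B^k. A binary-search sketch in Python (cf. the remark in the Performance section):

```python
def floor_nth_root(x, n):
    """Largest integer y with y**n <= x, found by binary search."""
    lo, hi = 0, 1
    while hi ** n <= x:          # grow an upper bound
        hi *= 2
    while hi - lo > 1:           # invariant: lo**n <= x < hi**n
        mid = (lo + hi) // 2
        if mid ** n <= x:
            lo = mid
        else:
            hi = mid
    return lo
```

With five digits after the point, the worked roots satisfy floor_nth_root(3 * 10**10, 2) == 173205, floor_nth_root(5 * 10**15, 3) == 170997, and floor_nth_root(7 * 10**20, 4) == 162657, matching the digit sequences 1.73205, 1.70997, and 1.62657 above.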


See also

* Methods of computing square roots
* ''n''th root algorithm


External links


* Why the square root algorithm works, at "Home School Math". Related pages give examples of the long-division-like pencil-and-paper method for square roots.
* Reflections on The Square Root of Two, at "Medium". Includes an example of a C++ implementation.