Multiplication algorithm
A multiplication algorithm is an algorithm (or method) to multiply two numbers. Depending on the size of the numbers, different algorithms are more efficient than others. Efficient multiplication algorithms have existed since the advent of the decimal system.


Long multiplication

If a positional numeral system is used, a natural way of multiplying numbers is taught in schools as long multiplication, sometimes called grade-school multiplication, sometimes called the standard algorithm: multiply the multiplicand by each digit of the multiplier and then add up all the properly shifted results. It requires memorization of the multiplication table for single digits. This is the usual algorithm for multiplying larger numbers by hand in base 10. A person doing long multiplication on paper will write down all the products and then add them together; an abacus user will sum the products as soon as each one is computed.


Example

This example uses long multiplication to multiply 23,958,233 (multiplicand) by 5,830 (multiplier) and arrives at 139,676,498,390 for the result (product).

        23958233
  ×         5830
  ——————————————
        00000000  ( = 23,958,233 ×     0)
       71874699   ( = 23,958,233 ×    30)
      191665864   ( = 23,958,233 ×   800)
  + 119791165     ( = 23,958,233 × 5,000)
  ——————————————
    139676498390  ( = 139,676,498,390)


Other notations

In some countries such as Germany, the above multiplication is depicted similarly but with the original product kept horizontal and computation starting with the first digit of the multiplier:

  23958233 · 5830
  ———————————————
      119791165
       191665864
         71874699
          00000000
  ———————————————
      139676498390

The pseudocode below describes the process of the above multiplication. It keeps only one row to maintain the sum, which finally becomes the result. Note that the += operator is used to denote sum to existing value and store operation (akin to languages such as Java and C) for compactness.

  multiply(a[1..p], b[1..q], base)                // Operands containing rightmost digits at index 1
    product = [1..p+q]                            // Allocate space for result
    for b_i = 1 to q                              // for all digits in b
      carry = 0
      for a_i = 1 to p                            // for all digits in a
        product[a_i + b_i - 1] += carry + a[a_i] * b[b_i]
        carry = product[a_i + b_i - 1] / base
        product[a_i + b_i - 1] = product[a_i + b_i - 1] mod base
      product[b_i + p] = carry                    // last digit comes from final carry
    return product
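A direct Python transcription of the pseudocode above, as a runnable sketch (the function name and the digit-list convention are ours, with index 0 rather than 1 holding the rightmost digit):

  def multiply(a, b, base=10):
      """Long multiplication of digit lists, least-significant digit first."""
      p, q = len(a), len(b)
      product = [0] * (p + q)              # allocate space for the result
      for b_i in range(q):                 # for all digits in b
          carry = 0
          for a_i in range(p):             # for all digits in a
              product[a_i + b_i] += carry + a[a_i] * b[b_i]
              carry = product[a_i + b_i] // base
              product[a_i + b_i] %= base
          product[b_i + p] = carry         # last digit comes from the final carry
      return product

  # 23,958,233 × 5,830, digits written least-significant first:
  digits = multiply([3, 3, 2, 8, 5, 9, 3, 2], [0, 3, 8, 5])
  assert int("".join(map(str, reversed(digits)))) == 139676498390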


Usage in computers

Some chips implement long multiplication, in hardware or in microcode, for various integer and floating-point word sizes. In arbitrary-precision arithmetic, it is common to use long multiplication with the base set to 2^w, where w is the number of bits in a word, for multiplying relatively small numbers. To multiply two numbers with n digits using this method, one needs about n^2 operations. More formally, multiplying two n-digit numbers using long multiplication requires Θ(n^2) single-digit operations (additions and multiplications).

When implemented in software, long multiplication algorithms must deal with overflow during additions, which can be expensive. A typical solution is to represent the number in a small base b, such that, for example, 8b is a representable machine integer. Several additions can then be performed before an overflow occurs. When the number becomes too large, we add part of it to the result, or we carry and map the remaining part back to a number that is less than b. This process is called normalization. Richard Brent used this approach in his Fortran package, MP.

Computers initially used a very similar algorithm to long multiplication in base 2, but modern processors have optimized circuitry for fast multiplications using more efficient algorithms, at the price of a more complex hardware realization. In base two, long multiplication is sometimes called "shift and add", because the algorithm simplifies and just consists of shifting left (multiplying by powers of two) and adding. Most currently available microprocessors implement this or other similar algorithms (such as Booth encoding) for various integer and floating-point sizes in hardware multipliers or in microcode.

On currently available processors, a bit-wise shift instruction is faster than a multiply instruction and can be used to multiply (shift left) and divide (shift right) by powers of two. Multiplication by a constant and division by a constant can be implemented using a sequence of shifts and adds or subtracts. For example, there are several ways to multiply by 10 using only bit-shift and addition:

  ((x << 2) + x) << 1   # Here 10*x is computed as (x*2^2 + x)*2
  (x << 3) + (x << 1)   # Here 10*x is computed as x*2^3 + x*2

In some cases such sequences of shifts and adds or subtracts will outperform hardware multipliers and especially dividers. A division by a number of the form 2^n or 2^n ± 1 often can be converted to such a short sequence.


Algorithms for multiplying by hand

In addition to the standard long multiplication, there are several other methods used to perform multiplication by hand. Such algorithms may be devised for speed, ease of calculation, or educational value, particularly when computers or multiplication tables are unavailable.


Grid method

The grid method (or box method) is an introductory method for multiple-digit multiplication that is often taught to pupils at primary school or elementary school. It has been a standard part of the national primary school mathematics curriculum in England and Wales since the late 1990s. Both factors are broken up ("partitioned") into their hundreds, tens and units parts, and the products of the parts are then calculated explicitly in a relatively simple multiplication-only stage, before these contributions are then totalled to give the final answer in a separate addition stage. The calculation 34 × 13, for example, could be computed using the grid:

     ×     30      4
    10    300     40
     3     90     12

with the single-sum column
  300
   40
   90
 + 12
 ————
  442
followed by addition to obtain 442, either in a single sum (as in the column above) or through forming the row-by-row totals (300 + 40) + (90 + 12) = 340 + 102 = 442. This calculation approach (though not necessarily with the explicit grid arrangement) is also known as the partial products algorithm. Its essence is the calculation of the simple multiplications separately, with all addition being left to the final gathering-up stage. The grid method can in principle be applied to factors of any size, although the number of sub-products becomes cumbersome as the number of digits increases. Nevertheless, it is seen as a usefully explicit method to introduce the idea of multiple-digit multiplications; and, in an age when most multiplication calculations are done using a calculator or a spreadsheet, it may in practice be the only multiplication algorithm that some students will ever need.
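The partition-and-total structure of the grid method fits in a few lines of Python; this is an illustrative sketch (the function name and return convention are our own, not from the source):

  def grid_method(x, y):
      """Break each factor into place-value parts, form every partial
      product, and total them in a separate addition stage."""
      parts_x = [int(d) * 10**i for i, d in enumerate(reversed(str(x)))]
      parts_y = [int(d) * 10**i for i, d in enumerate(reversed(str(y)))]
      cells = [px * py for px in parts_x for py in parts_y]
      return sum(cells), cells

  total, cells = grid_method(34, 13)
  assert total == 442
  assert sorted(c for c in cells if c) == [12, 40, 90, 300]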


Lattice multiplication

Lattice, or sieve, multiplication is algorithmically equivalent to long multiplication. It requires the preparation of a lattice (a grid drawn on paper) which guides the calculation and separates all the multiplications from the additions. It was introduced to Europe in 1202 in Fibonacci's Liber Abaci. Fibonacci described the operation as mental, using his right and left hands to carry the intermediate calculations. Matrakçı Nasuh presented 6 different variants of this method in his 16th-century book, Umdet-ul Hisab. It was widely used in Enderun schools across the Ottoman Empire. Napier's bones, or Napier's rods, also used this method, as published by Napier in 1617, the year of his death. As shown in the example, the multiplicand and multiplier are written above and to the right of a lattice, or a sieve. It is found in Muhammad ibn Musa al-Khwarizmi's "Arithmetic", one of Leonardo's sources mentioned by Sigler, author of "Fibonacci's Liber Abaci", 2002.

* During the multiplication phase, the lattice is filled in with two-digit products of the corresponding digits labelling each row and column: the tens digit goes in the top-left corner.
* During the addition phase, the lattice is summed on the diagonals.
* Finally, if a carry phase is necessary, the answer as shown along the left and bottom sides of the lattice is converted to normal form by carrying tens digits as in long addition or multiplication.


Example

As an example, consider the lattice calculation of 345 × 12: the multiplicand 345 labels the top of the lattice and the multiplier 12 its right side, the six cells are filled with the digit products, and summing the diagonals gives 4,140. As a more complicated example, consider multiplying 23,958,233 by 5,830; the result is 139,676,498,390. Here 23,958,233 labels the top of the lattice and 5,830 the right side. The products fill the lattice, the sums of those products (along the diagonals) appear on the left and bottom sides, and those sums are then totaled to give the answer.
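A short Python sketch (representation and names are our own) makes the three phases of the lattice method explicit: filling the cells, summing along the diagonals, and carrying:

  def lattice_multiply(multiplicand, multiplier):
      """Lattice multiplication in base 10."""
      a = [int(d) for d in str(multiplicand)]     # digits along the top
      b = [int(d) for d in str(multiplier)]       # digits down the right side
      # diagonal k collects cell digits whose place value is 10**k
      diagonals = [0] * (len(a) + len(b))
      for i, da in enumerate(a):
          for j, db in enumerate(b):
              cell = da * db
              k = (len(a) - 1 - i) + (len(b) - 1 - j)
              diagonals[k] += cell % 10           # units digit of the cell
              diagonals[k + 1] += cell // 10      # tens digit, one diagonal up
      result, carry = [], 0
      for s in diagonals:                         # carry phase, as in long addition
          carry, digit = divmod(s + carry, 10)
          result.append(digit)
      return int("".join(map(str, reversed(result))))

  assert lattice_multiply(345, 12) == 4140
  assert lattice_multiply(23958233, 5830) == 139676498390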


Russian peasant multiplication

The binary method is also known as peasant multiplication, because it has been widely used by people who are classified as peasants and thus have not memorized the multiplication tables required for long multiplication. The algorithm was in use in ancient Egypt. Its main advantages are that it can be taught quickly, requires no memorization, and can be performed using tokens, such as poker chips, if paper and pencil aren't available. The disadvantage is that it takes more steps than long multiplication, so it can be unwieldy for large numbers.


Description

On paper, write down in one column the numbers you get when you repeatedly halve the multiplier, ignoring the remainder; in a column beside it repeatedly double the multiplicand. Cross out each row in which the last digit of the first number is even, and add the remaining numbers in the second column to obtain the product.


Examples

This example uses peasant multiplication to multiply 11 by 3 to arrive at a result of 33.

  Decimal:      Binary:
  11     3      1011     11
   5     6       101     110
   2    12        10     1100
   1    24         1     11000
      ——               ——————
      33               100001

Describing the steps explicitly:
* 11 and 3 are written at the top.
* 11 is halved (5.5) and 3 is doubled (6). The fractional portion is discarded (5.5 becomes 5).
* 5 is halved (2.5) and 6 is doubled (12). The fractional portion is discarded (2.5 becomes 2). The figure in the left column (2) is even, so the figure in the right column (12) is discarded.
* 2 is halved (1) and 12 is doubled (24).
* All not-scratched-out values are summed: 3 + 6 + 24 = 33.

The method works because multiplication is distributive, so:

: \begin{aligned} 3 \times 11 & = 3 \times (1\times 2^0 + 1\times 2^1 + 0\times 2^2 + 1\times 2^3) \\ & = 3 \times (1 + 2 + 8) \\ & = 3 + 6 + 24 \\ & = 33. \end{aligned}

A more complicated example, using the figures from the earlier examples (23,958,233 and 5,830):

  Decimal:              Binary:
  5830  23958233        1011011000110  1011011011001001011011001
  2915  47916466         101101100011  10110110110010010110110010
  1457  95832932          10110110001  101101101100100101101100100
   728  191665864          1011011000  1011011011001001011011001000
   364  383331728           101101100  10110110110010010110110010000
   182  766663456            10110110  101101101100100101101100100000
    91  1533326912            1011011  1011011011001001011011001000000
    45  3066653824             101101  10110110110010010110110010000000
    22  6133307648              10110  101101101100100101101100100000000
    11  12266615296              1011  1011011011001001011011001000000000
     5  24533230592               101  10110110110010010110110010000000000
     2  49066461184                10  101101101100100101101100100000000000
     1  98132922368                 1  1011011011001001011011001000000000000
        ————————————                   ——————————————————————————————————————
                                       1022143253354344244353353243222210110  (before carry)
        139676498390                   10000010000101010111100011100111010110
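The halve-double-and-cross-out procedure translates directly into code; this Python sketch (name and interface are ours) keeps a running total rather than writing out the columns:

  def peasant_multiply(multiplier, multiplicand):
      """Russian peasant multiplication: halve one number, double the other,
      and add the doubled values on rows where the halved value is odd."""
      total = 0
      while multiplier >= 1:
          if multiplier % 2 == 1:           # odd row: keep it
              total += multiplicand
          multiplier //= 2                  # halve, discarding the remainder
          multiplicand *= 2                 # double
      return total

  assert peasant_multiply(11, 3) == 33
  assert peasant_multiply(5830, 23958233) == 139676498390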


Quarter square multiplication

Two quantities can be multiplied using quarter squares by employing the following identity involving the floor function that some sources attribute to Babylonian mathematics (2000–1600 BC).

: \left\lfloor \frac{\left(x+y\right)^2}{4} \right\rfloor - \left\lfloor \frac{\left(x-y\right)^2}{4} \right\rfloor = \frac{1}{4}\left(\left(x^2+2xy+y^2\right) - \left(x^2-2xy+y^2\right)\right) = \frac{1}{4}\left(4xy\right) = xy.

If one of x + y and x − y is odd, the other is odd too; thus their squares are 1 mod 4, and taking the floor reduces both by a quarter. The subtraction then cancels the quarters out, and discarding the remainders does not introduce any difference compared with the same expression without the floor functions.

Below is a lookup table of quarter squares with the remainder discarded for the digits 0 through 18; this allows for the multiplication of numbers up to 9×9.

  n        0  1  2  3  4  5  6  7   8   9   10  11  12  13  14  15  16  17  18
  ⌊n²/4⌋   0  0  1  2  4  6  9  12  16  20  25  30  36  42  49  56  64  72  81

If, for example, you wanted to multiply 9 by 3, you observe that the sum and difference are 12 and 6 respectively. Looking both those values up in the table yields 36 and 9, the difference of which is 27, which is the product of 9 and 3.

Antoine Voisin published a table of quarter squares from 1 to 1000 in 1817 as an aid in multiplication. A larger table of quarter squares from 1 to 100000 was published by Samuel Laundy in 1856, and a table from 1 to 200000 by Joseph Blater in 1888.

Quarter square multipliers were used in analog computers to form an analog signal that was the product of two analog input signals. In this application, the sum and difference of two input voltages are formed using operational amplifiers. The square of each of these is approximated using piecewise linear circuits. Finally the difference of the two squares is formed and scaled by a factor of one fourth using yet another operational amplifier.

In 1980, Everett L. Johnson proposed using the quarter square method in a digital multiplier. To form the product of two 8-bit integers, for example, the digital device forms the sum and difference, looks both quantities up in a table of squares, takes the difference of the results, and divides by four by shifting two bits to the right. For 8-bit integers the table of quarter squares will have 2^9 − 1 = 511 entries (one entry for the full range 0..510 of possible sums, the differences using only the first 256 entries in range 0..255), or 2^9 − 1 = 511 entries (using for negative differences the technique of two's complement and 9-bit masking, which avoids testing the sign of differences), each entry being 16 bits wide (the entry values range from ⌊0²/4⌋ = 0 to ⌊510²/4⌋ = 65025). The quarter square multiplier technique has benefited 8-bit systems that do not have any support for a hardware multiplier. Charles Putney implemented this for the 6502.
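A minimal Python sketch of the lookup technique (names are ours; the table covers digit sums 0 through 18, as above):

  QUARTER_SQUARES = [n * n // 4 for n in range(19)]   # remainders discarded

  def quarter_square_multiply(x, y):
      """Multiply two digits via floor((x+y)^2/4) - floor((x-y)^2/4) = x*y."""
      return QUARTER_SQUARES[x + y] - QUARTER_SQUARES[abs(x - y)]

  assert quarter_square_multiply(9, 3) == 27
  assert all(quarter_square_multiply(x, y) == x * y
             for x in range(10) for y in range(10))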


Computational complexity of multiplication

A line of research in theoretical computer science is about the number of single-bit arithmetic operations necessary to multiply two n-bit integers. This is known as the computational complexity of multiplication. Usual algorithms done by hand have asymptotic complexity of O(n^2), but in 1960 Anatoly Karatsuba discovered that better complexity was possible (with the Karatsuba algorithm). Currently, the algorithm with the best computational complexity is a 2019 algorithm of David Harvey and Joris van der Hoeven, which uses the strategy of number-theoretic transforms introduced with the Schönhage–Strassen algorithm to multiply integers using only O(n\log n) operations. This is conjectured to be the best possible algorithm, but lower bounds of \Omega(n\log n) are not known.


Karatsuba multiplication

For systems that need to multiply numbers in the range of several thousand digits, such as computer algebra systems and bignum libraries, long multiplication is too slow. These systems may employ Karatsuba multiplication, which was discovered in 1960 (published in 1962). The heart of Karatsuba's method lies in the observation that two-digit multiplication can be done with only three rather than the four multiplications classically required. This is an example of what is now called a divide-and-conquer algorithm. Suppose we want to multiply two 2-digit base-m numbers, x_1 m + x_2 and y_1 m + y_2:

# compute x_1 · y_1, call the result F
# compute x_2 · y_2, call the result G
# compute (x_1 + x_2) · (y_1 + y_2), call the result H
# compute H − F − G, call the result K; this number is equal to x_1 · y_2 + x_2 · y_1
# compute F · m^2 + K · m + G.

To compute these three products of base-m numbers, we can employ the same trick again, effectively using recursion. Once the numbers are computed, we need to add them together (steps 4 and 5), which takes about n operations.

Karatsuba multiplication has a time complexity of O(n^{\log_2 3}) ≈ O(n^{1.585}), making this method significantly faster than long multiplication. Because of the overhead of recursion, Karatsuba's multiplication is slower than long multiplication for small values of n; typical implementations therefore switch to long multiplication for small values of n. Karatsuba's algorithm was the first known algorithm for multiplication that is asymptotically faster than long multiplication, and can thus be viewed as the starting point for the theory of fast multiplications.
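A Python sketch of the recursion, splitting at half the bit length (the base-case threshold here is arbitrary; real implementations switch to long multiplication much later):

  def karatsuba(x, y):
      """Karatsuba multiplication of non-negative integers."""
      if x < 10 or y < 10:                     # base case: single digits
          return x * y
      m = max(x.bit_length(), y.bit_length()) // 2
      x1, x2 = x >> m, x & ((1 << m) - 1)      # x = x1 * 2**m + x2
      y1, y2 = y >> m, y & ((1 << m) - 1)      # y = y1 * 2**m + y2
      f = karatsuba(x1, y1)
      g = karatsuba(x2, y2)
      h = karatsuba(x1 + x2, y1 + y2)
      k = h - f - g                            # equals x1*y2 + x2*y1
      return (f << (2 * m)) + (k << m) + g

  assert karatsuba(23958233, 5830) == 23958233 * 5830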


Toom–Cook

Another method of multiplication is called Toom–Cook or Toom-3. The Toom–Cook method splits each number to be multiplied into multiple parts. The Toom–Cook method is one of the generalizations of the Karatsuba method. A three-way Toom–Cook can do a size-''3N'' multiplication for the cost of five size-''N'' multiplications. This accelerates the operation by a factor of 9/5, while the Karatsuba method accelerates it by 4/3. Although using more and more parts can reduce the time spent on recursive multiplications further, the overhead from additions and digit management also grows. For this reason, the method of Fourier transforms is typically faster for numbers with several thousand digits, and asymptotically faster for even larger numbers.
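The following Python sketch of Toom-3 (the split base, the recursion threshold, and all names are our own choices, not prescribed by the source) shows the five evaluations and the exact interpolation; production libraries tune these details heavily:

  def toom3(x, y):
      """Toom-3 for non-negative integers: split each operand into three
      parts, evaluate at 0, 1, -1, 2 and infinity, multiply pointwise
      (five size-N multiplications), then interpolate."""
      if x < 10**6 or y < 10**6:
          return x * y                          # base case
      k = max(len(str(x)), len(str(y))) // 3 + 1
      B = 10**k                                 # x = x2*B^2 + x1*B + x0
      x0, x1, x2 = x % B, (x // B) % B, x // B**2
      y0, y1, y2 = y % B, (y // B) % B, y // B**2
      # evaluate p(t) = x2*t^2 + x1*t + x0 and q(t) likewise
      p1, pm1, p2 = x0 + x1 + x2, x0 - x1 + x2, x0 + 2*x1 + 4*x2
      q1, qm1, q2 = y0 + y1 + y2, y0 - y1 + y2, y0 + 2*y1 + 4*y2
      v0, vinf = toom3(x0, y0), toom3(x2, y2)
      v1, v2 = toom3(p1, q1), toom3(p2, q2)
      sign = 1 if (pm1 < 0) == (qm1 < 0) else -1
      vm1 = sign * toom3(abs(pm1), abs(qm1))
      # interpolate r(t) = r4*t^4 + ... + r0; every division below is exact
      r0, r4 = v0, vinf
      r2 = (v1 + vm1) // 2 - v0 - vinf
      a = (v1 - vm1) // 2                       # r1 + r3
      b = (v2 - v0 - 4*r2 - 16*vinf) // 2       # r1 + 4*r3
      r3 = (b - a) // 3
      r1 = a - r3
      return r0 + r1*B + r2*B**2 + r3*B**3 + r4*B**4

  assert toom3(23958233**3, 5830**4) == 23958233**3 * 5830**4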


Schönhage–Strassen

The basic idea due to Strassen (1968) is to use fast polynomial multiplication to perform fast integer multiplication. The algorithm was made practical and theoretical guarantees were provided in 1971 by Schönhage and Strassen, resulting in the Schönhage–Strassen algorithm. The details are the following: we choose the largest integer w that will not cause overflow during the process outlined below. Then we split the two numbers into m groups of w bits as follows:

: a=\sum_{i=0}^{m-1} a_i 2^{wi} \text{ and } b=\sum_{j=0}^{m-1} b_j 2^{wj}.

We look at these numbers as polynomials in x, where x = 2^w, to get:

: a=\sum_{i=0}^{m-1} a_i x^i \text{ and } b=\sum_{j=0}^{m-1} b_j x^j.

Then we can say that:

: ab=\sum_{i=0}^{m-1} \sum_{j=0}^{m-1} a_i b_j x^{i+j} = \sum_{k=0}^{2m-2} c_k x^k.

Clearly the above setting is realized by polynomial multiplication of two polynomials a and b. The crucial step now is to use fast Fourier multiplication of polynomials to realize the multiplications above faster than in naive O(m^2) time. To remain in the modular setting of Fourier transforms, we look for a ring with a (2m)th root of unity. Hence we do multiplication modulo N (and thus in the ring Z/NZ). Further, N must be chosen so that there is no 'wrap around'; essentially, no reductions modulo N occur. Thus, the choice of N is crucial. For example, it could be done as:

: N = 2^{3m} + 1.

The ring Z/NZ would thus have a (2m)th root of unity, namely 8. Also, it can be checked that c_k < N, and thus no wrap around will occur. The algorithm has a time complexity of Θ(n log(n) log(log(n))) and is used in practice for numbers with more than 10,000 to 40,000 decimal digits.
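The transform idea can be illustrated in Python with a number-theoretic transform over Z/pZ, using the NTT-friendly prime p = 998244353 = 119·2^23 + 1 rather than the ring Z/(2^{3m}+1)Z of the actual algorithm; the 8-bit limb size, the prime, and all names here are our assumptions, and the naive recursion favours clarity over speed:

  MOD, ROOT = 998244353, 3    # prime with 2^23-th roots of unity; 3 is a primitive root

  def ntt(a, invert=False):
      """Recursive radix-2 number-theoretic transform (len(a) a power of 2)."""
      n = len(a)
      if n == 1:
          return list(a)
      even, odd = ntt(a[0::2], invert), ntt(a[1::2], invert)
      w_n = pow(ROOT, (MOD - 1) // n, MOD)      # principal n-th root of unity
      if invert:
          w_n = pow(w_n, MOD - 2, MOD)
      res, w = [0] * n, 1
      for k in range(n // 2):
          t = w * odd[k] % MOD
          res[k] = (even[k] + t) % MOD
          res[k + n // 2] = (even[k] - t) % MOD
          w = w * w_n % MOD
      return res

  def fft_multiply(x, y):
      """Multiply non-negative integers via NTT polynomial multiplication.
      With 8-bit limbs every convolution coefficient stays below MOD for
      operands up to a few thousand limbs, so no 'wrap around' occurs."""
      ax = [(x >> (8 * i)) & 0xFF for i in range(max(1, (x.bit_length() + 7) // 8))]
      ay = [(y >> (8 * i)) & 0xFF for i in range(max(1, (y.bit_length() + 7) // 8))]
      n = 1
      while n < len(ax) + len(ay):
          n *= 2
      fa = ntt(ax + [0] * (n - len(ax)))
      fb = ntt(ay + [0] * (n - len(ay)))
      fc = ntt([u * v % MOD for u, v in zip(fa, fb)], invert=True)
      inv_n = pow(n, MOD - 2, MOD)
      # evaluate the product polynomial at 2^8; Python's big ints absorb carries
      return sum((c * inv_n % MOD) << (8 * i) for i, c in enumerate(fc))

  assert fft_multiply(23958233, 5830) == 139676498390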


Further improvements

In 2007 the asymptotic complexity of integer multiplication was improved by the Swiss mathematician Martin Fürer of Pennsylvania State University to n log(n) 2^{Θ(log* n)} using Fourier transforms over complex numbers. Anindya De, Chandan Saha, Piyush Kurur and Ramprasad Saptharishi gave a similar algorithm using modular arithmetic in 2008, achieving the same running time. In context of the above material, what these latter authors have achieved is to find N much less than 2^{3m} + 1, so that Z/NZ has a (2m)th root of unity. This speeds up computation and reduces the time complexity. However, these latter algorithms are only faster than Schönhage–Strassen for impractically large inputs.

In 2015, Harvey, Joris van der Hoeven and Lecerf gave a new algorithm that achieves a running time of O(n\log n \cdot 2^{3\log^* n}), making explicit the implied constant in the O(\log^* n) exponent. They also proposed a variant of their algorithm which achieves O(n\log n \cdot 2^{2\log^* n}) but whose validity relies on standard conjectures about the distribution of Mersenne primes. In 2016, Covanov and Thomé proposed an integer multiplication algorithm based on a generalization of Fermat primes that conjecturally achieves a complexity bound of O(n\log n \cdot 2^{2\log^* n}). This matches the 2015 conditional result of Harvey, van der Hoeven, and Lecerf but uses a different algorithm and relies on a different conjecture. In 2018, Harvey and van der Hoeven used an approach based on the existence of short lattice vectors guaranteed by Minkowski's theorem to prove an unconditional complexity bound of O(n\log n \cdot 2^{2\log^* n}).

In March 2019, David Harvey and Joris van der Hoeven announced their discovery of an O(n\log n) multiplication algorithm. It was published in the ''Annals of Mathematics'' in 2021. Because Schönhage and Strassen predicted that n log(n) is the "best possible" result, Harvey said: "...our work is expected to be the end of the road for this problem, although we don't know yet how to prove this rigorously."

Using number-theoretic transforms instead of discrete Fourier transforms avoids rounding error problems by using modular arithmetic instead of floating-point arithmetic. In order to apply the factoring which enables the FFT to work, the length of the transform must be factorable to small primes and must be a factor of N − 1, where N is the field size. In particular, calculation using a Galois field GF(k^2), where k is a Mersenne prime, allows the use of a transform sized to a power of 2; e.g. k = 2^{31} − 1 supports transform sizes up to 2^{32}.


Lower bounds

There is a trivial lower bound of Ω(n) for multiplying two n-bit numbers on a single processor; no matching algorithm (on conventional machines, that is on Turing equivalent machines) nor any sharper lower bound is known. Multiplication lies outside of AC^0[p] for any prime p, meaning there is no family of constant-depth, polynomial (or even subexponential) size circuits using AND, OR, NOT, and MOD_p gates that can compute a product. This follows from a constant-depth reduction of MOD_q to multiplication. Lower bounds for multiplication are also known for some classes of branching programs.


Complex number multiplication

Complex multiplication normally involves four multiplications and two additions.

: (a+bi)(c+di) = (ac-bd) + (bc+ad)i.

Or

: \begin{array}{c|c|c} \times & a & bi \\ \hline c & ac & bci \\ \hline di & adi & -bd \end{array}

As observed by Peter Ungar in 1963, one can reduce the number of multiplications to three, using essentially the same computation as Karatsuba's algorithm. The product (a + bi) · (c + di) can be calculated in the following way.

: k_1 = c · (a + b)
: k_2 = a · (d − c)
: k_3 = b · (c + d)
: Real part = k_1 − k_3
: Imaginary part = k_1 + k_2.

This algorithm uses only three multiplications, rather than four, and five additions or subtractions rather than two. If a multiply is more expensive than three adds or subtracts, as when calculating by hand, then there is a gain in speed. On modern computers a multiply and an add can take about the same time, so there may be no speed gain. There is a trade-off in that there may be some loss of precision when using floating point.

For fast Fourier transforms (FFTs) (or any linear transformation) the complex multiplies are by constant coefficients c + di (called twiddle factors in FFTs), in which case two of the additions (d − c and c + d) can be precomputed. Hence, only three multiplies and three adds are required. However, trading off a multiplication for an addition in this way may no longer be beneficial with modern floating-point units.
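In code, Ungar's three-multiplication scheme looks as follows (a Python sketch; the function name is ours):

  def complex_multiply_3(a, b, c, d):
      """(a + bi)(c + di) using three real multiplications."""
      k1 = c * (a + b)
      k2 = a * (d - c)
      k3 = b * (c + d)
      return k1 - k3, k1 + k2     # (ac - bd, bc + ad)

  assert complex_multiply_3(2, 3, 4, 5) == (2*4 - 3*5, 3*4 + 2*5)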


Polynomial multiplication

All the above multiplication algorithms can also be expanded to multiply polynomials. For instance, the Strassen algorithm may be used for polynomial multiplication. Alternatively the Kronecker substitution technique may be used to convert the problem of multiplying polynomials into a single binary multiplication.

Long multiplication methods can be generalised to allow the multiplication of algebraic formulae; for example, 14ac − 3ab + 2 multiplied by ac − ab + 1:

            14ac    -3ab      2
              ac     -ab      1
  ——————————————————————————————
          14a²c²  -3a²bc    2ac
         -14a²bc   3a²b²   -2ab
           14ac    -3ab       2
  ——————————————————————————————————————
  14a²c² - 17a²bc + 16ac + 3a²b² - 5ab + 2
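A small Python sketch of this term-by-term multiplication (the representation is our own: each monomial is a sorted tuple of variable symbols, so exponents appear as repeated variables):

  from collections import defaultdict

  def multiply_formulae(p, q):
      """Multiply two algebraic formulae given as {monomial: coefficient}."""
      result = defaultdict(int)
      for mono_p, coef_p in p.items():
          for mono_q, coef_q in q.items():
              result[tuple(sorted(mono_p + mono_q))] += coef_p * coef_q
      return {m: c for m, c in result.items() if c != 0}

  # (14ac - 3ab + 2) * (ac - ab + 1)
  p = {('a', 'c'): 14, ('a', 'b'): -3, (): 2}
  q = {('a', 'c'): 1, ('a', 'b'): -1, (): 1}
  assert multiply_formulae(p, q) == {
      ('a', 'a', 'c', 'c'): 14, ('a', 'a', 'b', 'c'): -17, ('a', 'c'): 16,
      ('a', 'a', 'b', 'b'): 3, ('a', 'b'): -5, (): 2}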

As a further example of column-based multiplication, consider multiplying 23 long tons (t), 12 hundredweight (cwt) and 2 quarters (qtr) by 47. This example uses avoirdupois measures: 1 t = 20 cwt, 1 cwt = 4 qtr.

      t    cwt   qtr
     23     12     2
                  47 ×
  ——————————————————
    141     94    94
    940    470
     29     23
  ——————————————————
   1110    587    94
  ——————————————————
   1110      7     2

Answer: 1110 ton 7 cwt 2 qtr.

First multiply the quarters by 47; the result, 94, is written into the first workspace. Next, multiply cwt: 12 × 47 = (2 + 10) × 47, but don't add up the partial results (94, 470) yet. Likewise multiply 23 by 47, yielding (141, 940). The quarters column is totaled and the result placed in the second workspace (a trivial move in this case). 94 quarters is 23 cwt and 2 qtr, so place the 2 in the answer and put the 23 in the next column left. Now add up the three entries in the cwt column, giving 587. This is 29 t 7 cwt, so write the 7 into the answer and the 29 in the column to the left. Now add up the tons column. There is no adjustment to make, so the result is just copied down. The same layout and methods can be used for any traditional measurements and non-decimal currencies such as the old British £sd system.
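The same carrying scheme works for any mixed-radix units; a Python sketch of the example above (interface and names are ours):

  def multiply_mixed_units(quantities, factor, unit_sizes):
      """Multiply a mixed-radix quantity by an integer, column by column.
      'quantities' lists amounts from smallest unit to largest;
      unit_sizes[i] says how many of unit i make one of unit i + 1."""
      result, carry = [], 0
      for i, q in enumerate(quantities):
          total = q * factor + carry
          if i < len(unit_sizes):
              carry, amount = divmod(total, unit_sizes[i])
          else:
              carry, amount = 0, total      # the largest unit absorbs the rest
          result.append(amount)
      return result                         # smallest unit first

  # 23 t 12 cwt 2 qtr × 47, with 4 qtr = 1 cwt and 20 cwt = 1 t
  assert multiply_mixed_units([2, 12, 23], 47, [4, 20]) == [2, 7, 1110]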


See also

* Binary multiplier
* Dadda multiplier
* Division algorithm
* Horner scheme for evaluating a polynomial
* Logarithm
* Mental calculation
* Number-theoretic transform
* Prosthaphaeresis
* Slide rule
* Trachtenberg system
* Wallace tree

