Miller–Rabin Primality Test
The Miller–Rabin primality test or Rabin–Miller primality test is a probabilistic primality test: an algorithm which determines whether a given number is likely to be prime, similar to the Fermat primality test and the Solovay–Strassen primality test. It is of historical significance in the search for a polynomial-time deterministic primality test. Its probabilistic variant remains widely used in practice, as one of the simplest and fastest tests known. Gary L. Miller discovered the test in 1976; Miller's version of the test is deterministic, but its correctness relies on the unproven extended Riemann hypothesis. Michael O. Rabin modified it to obtain an unconditional probabilistic algorithm in 1980.


Mathematical concepts

Similarly to the Fermat and Solovay–Strassen tests, the Miller–Rabin primality test checks whether a specific property, which is known to hold for prime values, holds for the number under testing.


Strong probable primes

The property is the following. For a given odd integer ''n'' > 2, let’s write ''n'' − 1 as 2^''s''·''d'', where ''s'' is a positive integer and ''d'' is an odd positive integer. Let’s consider an integer ''a'', called a ''base'', which is coprime to ''n''. Then, ''n'' is said to be a strong probable prime to base ''a'' if one of these congruence relations holds:
* a^d \equiv 1 \pmod n;
* a^{2^r d} \equiv -1 \pmod n for some 0 ≤ ''r'' < ''s''.

The idea beneath this test is that when ''n'' is an odd prime, it passes the test because of two facts:
* by Fermat's little theorem, a^{n-1} \equiv 1 \pmod n (this property alone defines the weaker notion of ''probable prime to base a'', on which the Fermat test is based);
* the only square roots of 1 modulo ''n'' are 1 and −1.

Hence, by contraposition, if ''n'' is not a strong probable prime to base ''a'', then ''n'' is definitely composite, and ''a'' is called a witness for the compositeness of ''n'' (sometimes misleadingly called a ''strong witness''). However, this property is not an exact characterization of prime numbers. If ''n'' is composite, it may nonetheless be a strong probable prime to base ''a'', in which case it is called a strong pseudoprime, and ''a'' is a strong liar.
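
For illustration, here is a minimal Python sketch of this property check (the function name is an invention for this example; the article's own pseudocode appears in a later section):

 def is_strong_probable_prime(n: int, a: int) -> bool:
     """Check whether the odd integer n > 2 is a strong probable prime to base a."""
     # Write n - 1 as 2**s * d with d odd.
     d, s = n - 1, 0
     while d % 2 == 0:
         d //= 2
         s += 1
     x = pow(a, d, n)              # a^d mod n
     if x == 1 or x == n - 1:      # first condition, or r = 0 of the second
         return True
     for _ in range(s - 1):        # check a^(2^r * d) for r = 1, ..., s - 1
         x = pow(x, 2, n)
         if x == n - 1:
             return True
     return False                  # a is a witness: n is definitely composite

For example, is_strong_probable_prime(221, 174) returns True even though 221 = 13 × 17 is composite, so 174 is a strong liar; compare the worked example below.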


Choices of bases

Thankfully, no composite number is a strong pseudoprime to all bases at the same time (contrary to the Fermat primality test, for which Fermat pseudoprimes to all bases exist: the Carmichael numbers). However, no simple way of finding a witness is known. A naïve solution is to try all possible bases, which yields an inefficient deterministic algorithm. The Miller test is a more efficient variant of this (see section ''Miller test'' below). Another solution is to pick a base at random. This yields a fast probabilistic test. When ''n'' is composite, most bases are witnesses, so the test will detect ''n'' as composite with a reasonably high probability (see section ''Accuracy'' below). We can quickly reduce the probability of a false positive to an arbitrarily small rate, by combining the outcome of as many independently chosen bases as necessary to achieve the said rate. This is the Miller–Rabin test. The number of bases to try does not depend on ''n''. There seem to be diminishing returns in trying many bases, because if ''n'' is a pseudoprime to some base, then it seems more likely to be a pseudoprime to another base.

Note that the strong probable prime condition holds trivially for a \equiv 1 \pmod n, because the congruence relation is compatible with exponentiation. It also holds trivially for a \equiv -1 \pmod n since ''d'' is odd, for the same reason. That is why random bases ''a'' are usually chosen in the interval 1 < ''a'' < ''n'' − 1.

For testing arbitrarily large ''n'', choosing bases at random is essential, as we don't know the distribution of witnesses and strong liars among the numbers 2, 3, …, ''n'' − 2. However, a pre-selected set of a few small bases guarantees the identification of all composites up to a pre-computed maximum. This maximum is generally quite large compared to the bases. This gives very fast deterministic tests for small enough ''n'' (see section ''Testing against small sets of bases'' below).


Proofs

Here is a proof that, if ''n'' is a prime, then the only square roots of 1 modulo ''n'' are 1 and −1. Suppose x^2 \equiv 1 \pmod n. Then ''n'' divides x^2 - 1 = (x - 1)(x + 1). Since ''n'' is prime, it must divide one of the two factors, which means x \equiv 1 \pmod n or x \equiv -1 \pmod n.

Here is a proof that, if ''n'' is an odd prime, then it is a strong probable prime to base ''a'' (for any ''a'' coprime to ''n''). Consider the sequence a^{2^s d}, a^{2^{s-1} d}, \ldots, a^{2d}, a^d modulo ''n'', in which each term is a square root of the preceding one. By Fermat's little theorem, the first term a^{n-1} is congruent to 1. By the previous result, the first term of this sequence (if any) that is not congruent to 1 must be congruent to −1. Hence either a^d \equiv 1 \pmod n, or a^{2^r d} \equiv -1 \pmod n for some 0 ≤ ''r'' < ''s'', which is exactly the definition above.


Example

Suppose we wish to determine if ''n'' = 221 is prime. We write ''n'' − 1 as 2^2 × 55, so that we have ''s'' = 2 and ''d'' = 55. We randomly select a number ''a'' such that 2 ≤ ''a'' ≤ ''n'' − 2, say ''a'' = 174. We proceed to compute:
* ''a''^(2^0·''d'') mod ''n'' = 174^55 mod 221 = 47 ≠ 1, ''n'' − 1
* ''a''^(2^1·''d'') mod ''n'' = 174^110 mod 221 = 220 = ''n'' − 1.

Since 220 ≡ −1 mod ''n'', either 221 is prime, or 174 is a strong liar for 221. We try another random ''a'', this time choosing ''a'' = 137:
* ''a''^(2^0·''d'') mod ''n'' = 137^55 mod 221 = 188 ≠ 1, ''n'' − 1
* ''a''^(2^1·''d'') mod ''n'' = 137^110 mod 221 = 205 ≠ ''n'' − 1.

Hence 137 is a witness for the compositeness of 221, and 174 was in fact a strong liar. Note that this tells us nothing about the factors of 221 (which are 13 and 17). However, the example with 341 in a later section shows how these calculations can sometimes produce a factor of ''n''.
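
These values are easy to reproduce with modular exponentiation; for instance, in Python (an illustration of ours, using the built-in three-argument pow):

 n = 221                     # 221 = 13 * 17, so n is composite
 print(pow(174, 55, n))      # 47
 print(pow(174, 110, n))     # 220 = n - 1, so 174 is a strong liar
 print(pow(137, 55, n))      # 188
 print(pow(137, 110, n))     # 205, neither 1 nor n - 1: 137 is a witness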


Miller–Rabin test

The algorithm can be written in pseudocode as follows. The parameter ''k'' determines the accuracy of the test. The greater the number of rounds, the more accurate the result.

 Input #1: ''n'' > 2, an odd integer to be tested for primality
 Input #2: ''k'', the number of rounds of testing to perform
 Output: “''composite''” if ''n'' is found to be composite, “''probably prime''” otherwise
 
 let ''s'' > 0 and ''d'' odd > 0 such that ''n'' − 1 = 2^''s''·''d''  # by factoring out powers of 2 from ''n'' − 1
 repeat ''k'' times:
     ''a'' ← random(2, ''n'' − 2)  # ''n'' is always a probable prime to base 1 and ''n'' − 1
     ''x'' ← ''a''^''d'' mod ''n''
     repeat ''s'' times:
         ''y'' ← ''x''^2 mod ''n''
         if ''y'' = 1 and ''x'' ≠ 1 and ''x'' ≠ ''n'' − 1 then  # nontrivial square root of 1 modulo ''n''
             return “''composite''”
         ''x'' ← ''y''
     if ''y'' ≠ 1 then
         return “''composite''”
 return “''probably prime''”
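
A direct Python translation of this pseudocode might look as follows (a sketch of our own, not reference code; pow performs the modular exponentiation):

 import random
 
 def miller_rabin(n: int, k: int) -> str:
     """Probabilistic Miller-Rabin test on an odd integer n (assumes n >= 5
     so that the base range [2, n - 2] is nonempty), with k rounds."""
     # Factor n - 1 as 2**s * d with d odd.
     s, d = 0, n - 1
     while d % 2 == 0:
         s += 1
         d //= 2
     for _ in range(k):
         a = random.randrange(2, n - 1)   # random base in [2, n - 2]
         x = pow(a, d, n)                 # x = a^d mod n
         for _ in range(s):
             y = pow(x, 2, n)
             if y == 1 and x != 1 and x != n - 1:
                 return "composite"       # nontrivial square root of 1 modulo n
             x = y
         if y != 1:
             return "composite"           # a^(n-1) mod n != 1: Fermat test fails
     return "probably prime"

Note that pow(a, d, n) uses modular exponentiation by repeated squaring, which is what makes each round polynomial-time (see the next section).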


Complexity

Using modular exponentiation by repeated squaring, the running time of this algorithm is O(''k'' log^3 ''n''), where ''n'' is the number tested for primality, and ''k'' is the number of rounds performed; thus this is an efficient, polynomial-time algorithm. FFT-based multiplication (for instance, the Harvey–Hoeven algorithm) can decrease the running time to O(''k'' log^2 ''n'' log log ''n'') = Õ(''k'' log^2 ''n'').


Accuracy

The error made by the primality test is measured by the probability that a composite number is declared probably prime. The more bases ''a'' are tried, the better the accuracy of the test. It can be shown that if ''n'' is composite, then at most 1/4 of the bases ''a'' are strong liars for ''n''. As a consequence, if ''n'' is composite then running ''k'' iterations of the Miller–Rabin test will declare ''n'' probably prime with a probability at most 4^−''k''. This is an improvement over the Solovay–Strassen test, whose worst-case error bound is 2^−''k''. Moreover, the Miller–Rabin test is strictly stronger than the Solovay–Strassen test in the sense that for every composite ''n'', the set of strong liars for ''n'' is a subset of the set of Euler liars for ''n'', and for many ''n'', the subset is proper.

In addition, for large values of ''n'', the probability for a composite number to be declared probably prime is often significantly smaller than 4^−''k''. For instance, for most numbers ''n'', this probability is bounded by 8^−''k''; the proportion of numbers ''n'' which invalidate this upper bound vanishes as we consider larger values of ''n''. Hence the ''average'' case has a much better accuracy than 4^−''k'', a fact which can be exploited for ''generating'' probable primes (see below). However, such improved error bounds should not be relied upon to ''verify'' primes whose probability distribution is not controlled, since a cryptographic adversary might send a carefully chosen pseudoprime in order to defeat the primality test. In such contexts, only the ''worst-case'' error bound of 4^−''k'' can be relied upon.

The above error measure is the probability for a composite number to be declared as a strong probable prime after ''k'' rounds of testing; in mathematical words, it is the conditional probability \Pr(M\!R_k \mid \lnot P), where ''P'' is the event that the number being tested is prime, and ''MR''<sub>''k''</sub> is the event that it passes the Miller–Rabin test with ''k'' rounds. We are often interested instead in the inverse conditional probability \Pr(\lnot P \mid M\!R_k): the probability that a number which has been declared as a strong probable prime is in fact composite. These two probabilities are related by Bayes' law:

: \Pr(\lnot P \mid M\!R_k) = \frac{\Pr(\lnot P \land M\!R_k)}{\Pr(M\!R_k)} = \frac{\Pr(M\!R_k \mid \lnot P) \Pr(\lnot P)}{\Pr(M\!R_k \mid \lnot P) \Pr(\lnot P) + \Pr(M\!R_k \mid P) \Pr(P)} = \frac{\Pr(M\!R_k \mid \lnot P) \Pr(\lnot P)}{\Pr(M\!R_k \mid \lnot P) \Pr(\lnot P) + \Pr(P)}

In the last equation, we simplified the expression using the fact that all prime numbers are correctly reported as strong probable primes (the test has no false negatives). By dropping the left part of the denominator, we derive a simple upper bound:

: \Pr(\lnot P \mid M\!R_k) < \Pr(M\!R_k \mid \lnot P) \left(\tfrac{1}{\Pr(P)} - 1\right)

Hence this conditional probability is related not only to the error measure discussed above (which is bounded by 4^−''k''), but also to the probability distribution of the input number. In the general case, as said earlier, this distribution is controlled by a cryptographic adversary, thus unknown, so we cannot deduce much about \Pr(\lnot P \mid M\!R_k). However, in the case when we use the Miller–Rabin test to ''generate'' primes (see below), the distribution is chosen by the generator itself, so we can exploit this result.
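
For instance, an implementor can pick ''k'' from the worst-case bound alone; a small Python helper (our own illustration, using only the 4^−''k'' bound discussed above) might read:

 def rounds_for_error(target: float) -> int:
     """Smallest k such that the worst-case bound 4**-k is at most target."""
     k = 1
     while 4.0 ** -k > target:
         k += 1
     return k
 
 print(rounds_for_error(2 ** -80))  # 40 rounds suffice for a 2^-80 worst-case bound

For adversarially chosen inputs, this worst-case bound is the only one that should be used; the sharper average-case bounds apply only to randomly generated candidates, as discussed above.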


Deterministic variants


Miller test

The Miller–Rabin algorithm can be made deterministic by trying all possible ''a'' below a certain limit. Taking ''n'' as the limit would imply O(''n'') trials, hence the running time would be exponential with respect to the size of the input. To improve the running time, the challenge is then to lower the limit as much as possible while keeping the test reliable.

If the tested number ''n'' is composite, the strong liars ''a'' coprime to ''n'' are contained in a proper subgroup of the group (Z/''n''Z)*, which means that if we test all ''a'' from a set which generates (Z/''n''Z)*, one of them must lie outside the said subgroup, hence must be a witness for the compositeness of ''n''. Assuming the truth of the generalized Riemann hypothesis (GRH), it is known that the group is generated by its elements smaller than O((ln ''n'')^2), which was already noted by Miller. The constant involved in the big O notation was reduced to 2 by Eric Bach. This leads to the following deterministic primality testing algorithm, known as the Miller test:

 Input: ''n'' > 2, an odd integer to be tested for primality
 Output: “''composite''” if ''n'' is composite, “''prime''” otherwise
 
 let ''s'' > 0 and ''d'' odd > 0 such that ''n'' − 1 = 2^''s''·''d''  # by factoring out powers of 2 from ''n'' − 1
 for all ''a'' in the range [2, min(''n'' − 2, ⌊2(ln ''n'')^2⌋)]:
     ''x'' ← ''a''^''d'' mod ''n''
     repeat ''s'' times:
         ''y'' ← ''x''^2 mod ''n''
         if ''y'' = 1 and ''x'' ≠ 1 and ''x'' ≠ ''n'' − 1 then  # nontrivial square root of 1 modulo ''n''
             return “''composite''”
         ''x'' ← ''y''
     if ''y'' ≠ 1 then
         return “''composite''”
 return “''prime''”

The full power of the generalized Riemann hypothesis is not needed to ensure the correctness of the test: as we deal with subgroups of even index, it suffices to assume the validity of GRH for quadratic Dirichlet characters. The running time of the algorithm is, in the soft-O notation, Õ((log ''n'')^4) (using FFT-based multiplication).

The Miller test is not used in practice. For most purposes, proper use of the probabilistic Miller–Rabin test or the Baillie–PSW primality test gives sufficient confidence while running much faster. It is also slower in practice than commonly used proof methods such as APR-CL and ECPP, which give results that do not rely on unproven assumptions. For theoretical purposes requiring a deterministic polynomial time algorithm, it was superseded by the AKS primality test, which also does not rely on unproven assumptions.
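
A Python sketch of the Miller test (our own translation of the pseudocode; its correctness is conditional on GRH, as explained above):

 import math
 
 def miller_test(n: int) -> str:
     """Deterministic Miller test for an odd integer n > 2; assumes GRH."""
     s, d = 0, n - 1
     while d % 2 == 0:
         s += 1
         d //= 2
     limit = min(n - 2, math.floor(2 * math.log(n) ** 2))  # Bach's bound
     for a in range(2, limit + 1):
         x = pow(a, d, n)
         for _ in range(s):
             y = pow(x, 2, n)
             if y == 1 and x != 1 and x != n - 1:
                 return "composite"   # nontrivial square root of 1 modulo n
             x = y
         if y != 1:
             return "composite"
     return "prime"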


Testing against small sets of bases

When the number ''n'' to be tested is small, trying all ''a'' < 2(ln ''n'')^2 is not necessary, as much smaller sets of potential witnesses are known to suffice. For example, Pomerance, Selfridge, Wagstaff and Jaeschke have verified that
* if ''n'' < 2,047, it is enough to test ''a'' = 2;
* if ''n'' < 1,373,653, it is enough to test ''a'' = 2 and 3;
* if ''n'' < 9,080,191, it is enough to test ''a'' = 31 and 73;
* if ''n'' < 25,326,001, it is enough to test ''a'' = 2, 3, and 5;
* if ''n'' < 3,215,031,751, it is enough to test ''a'' = 2, 3, 5, and 7;
* if ''n'' < 4,759,123,141, it is enough to test ''a'' = 2, 7, and 61;
* if ''n'' < 1,122,004,669,633, it is enough to test ''a'' = 2, 13, 23, and 1662803;
* if ''n'' < 2,152,302,898,747, it is enough to test ''a'' = 2, 3, 5, 7, and 11;
* if ''n'' < 3,474,749,660,383, it is enough to test ''a'' = 2, 3, 5, 7, 11, and 13;
* if ''n'' < 341,550,071,728,321, it is enough to test ''a'' = 2, 3, 5, 7, 11, 13, and 17.

Using the work of Feitsma and Galway enumerating all base 2 pseudoprimes in 2010, this was extended, with the first result later shown using different methods in Jiang and Deng:
* if ''n'' < 3,825,123,056,546,413,051, it is enough to test ''a'' = 2, 3, 5, 7, 11, 13, 17, 19, and 23;
* if ''n'' < 18,446,744,073,709,551,616 = 2^64, it is enough to test ''a'' = 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, and 37.

Sorenson and Webster verify the above and calculate precise results for inputs larger than 64 bits:
* if ''n'' < 318,665,857,834,031,151,167,461, it is enough to test ''a'' = 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, and 37;
* if ''n'' < 3,317,044,064,679,887,385,961,981, it is enough to test ''a'' = 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, and 41.

Other criteria of this sort, often more efficient (fewer bases required) than those shown above, exist. They give very fast deterministic primality tests for numbers in the appropriate range, without any assumptions; a sketch for 64-bit integers follows this list. There is a small list of potential witnesses for every possible input size (at most ''b'' values for ''b''-bit numbers). However, no finite set of bases is sufficient for all composite numbers. Alford, Granville, and Pomerance have shown that there exist infinitely many composite numbers ''n'' whose smallest compositeness witness is at least (\ln n)^{1/(3 \ln \ln \ln n)}. They also argue heuristically that the smallest number ''w'' such that every composite number below ''n'' has a compositeness witness less than ''w'' should be of order \Theta(\log n \, \log \log n).
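
For instance, here is a minimal Python sketch (our own) of the resulting deterministic test for all ''n'' < 2^64, using the twelve-base set above:

 def is_prime_u64(n: int) -> bool:
     """Deterministic Miller-Rabin for 0 <= n < 2**64 using fixed bases."""
     if n < 2:
         return False
     bases = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
     if n in bases:
         return True
     if any(n % p == 0 for p in bases):  # also handles all even n
         return False
     s, d = 0, n - 1
     while d % 2 == 0:
         s += 1
         d //= 2
     for a in bases:                     # sufficient for all n < 2**64
         x = pow(a, d, n)
         if x == 1 or x == n - 1:
             continue
         for _ in range(s - 1):
             x = pow(x, 2, n)
             if x == n - 1:
                 break
         else:
             return False                # a is a witness: n is composite
     return True

The initial trial division by the bases is only a shortcut; the strong-probable-prime loop alone would classify those composites correctly as well.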


Variants for finding factors

By inserting greatest common divisor calculations into the above algorithm, we can sometimes obtain a factor of ''n'' instead of merely determining that ''n'' is composite. This occurs for example when ''n'' is a probable prime to base ''a'' but not a strong probable prime to base ''a''. If ''x'' is a nontrivial square root of 1 modulo ''n'',
* since x^2 \equiv 1 \pmod n, we know that ''n'' divides x^2 - 1 = (x - 1)(x + 1);
* since x \not\equiv \pm 1 \pmod n, we know that ''n'' does not divide ''x'' − 1 nor ''x'' + 1.

From this we deduce that ''A'' = GCD(''x'' − 1, ''n'') and ''B'' = GCD(''x'' + 1, ''n'') are nontrivial (not necessarily prime) factors of ''n'' (in fact, since ''n'' is odd, these factors are coprime and ''n'' = ''AB''). Hence, if factoring is a goal, these gcd calculations can be inserted into the algorithm at little additional computational cost. This leads to the following pseudocode, where the added or changed code is the gcd computation in the innermost test:

 Input #1: ''n'' > 2, an odd integer to be tested for primality
 Input #2: ''k'', the number of rounds of testing to perform
 Output: (“''multiple of''”, ''m'') if a nontrivial factor ''m'' of ''n'' is found, “''composite''” if ''n'' is otherwise found to be composite, “''probably prime''” otherwise
 
 let ''s'' > 0 and ''d'' odd > 0 such that ''n'' − 1 = 2^''s''·''d''  # by factoring out powers of 2 from ''n'' − 1
 repeat ''k'' times:
     ''a'' ← random(2, ''n'' − 2)  # ''n'' is always a probable prime to base 1 and ''n'' − 1
     ''x'' ← ''a''^''d'' mod ''n''
     repeat ''s'' times:
         ''y'' ← ''x''^2 mod ''n''
         if ''y'' = 1 and ''x'' ≠ 1 and ''x'' ≠ ''n'' − 1 then  # nontrivial square root of 1 modulo ''n''
             return (“''multiple of''”, GCD(''x'' − 1, ''n''))
         ''x'' ← ''y''
     if ''y'' ≠ 1 then
         return “''composite''”
 return “''probably prime''”

This is ''not'' a probabilistic factorization algorithm, because it is only able to find factors for numbers ''n'' which are pseudoprime to base ''a'' (in other words, for numbers ''n'' such that a^{n-1} \equiv 1 \pmod n). For other numbers, the algorithm only returns “''composite''” with no further information.

For example, consider ''n'' = 341 and ''a'' = 2. We have ''n'' − 1 = 2^2 × 85. Then 2^85 mod 341 = 32 and 32^2 mod 341 = 1. This tells us that ''n'' is a pseudoprime base 2, but not a strong pseudoprime base 2. By computing a gcd at this stage, we find a factor of 341: GCD(32 − 1, 341) = 31. Indeed, 341 = 11 × 31.

In order to find factors more often, the same ideas can also be applied to the square roots of −1 (or any other number). This strategy can be implemented by exploiting knowledge from previous rounds of the Miller–Rabin test. In those rounds we may have identified a square root modulo ''n'' of −1, say ''R''. Then, when ''x''^2 mod ''n'' = ''n'' − 1, we can compare the value of ''x'' against ''R'': if ''x'' is neither ''R'' nor ''n'' − ''R'', then GCD(''x'' − ''R'', ''n'') and GCD(''x'' + ''R'', ''n'') are nontrivial factors of ''n''.
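
Illustrating this on the 341 example, here is a Python sketch (ours; the function name is an invention for this example):

 import math
 
 def try_factor_via_square_root(n: int, a: int):
     """Return a nontrivial factor of n if base a exposes a nontrivial
     square root of 1 modulo n, otherwise None."""
     s, d = 0, n - 1
     while d % 2 == 0:
         s += 1
         d //= 2
     x = pow(a, d, n)
     for _ in range(s):
         y = pow(x, 2, n)
         if y == 1 and x != 1 and x != n - 1:
             return math.gcd(x - 1, n)   # nontrivial square root of 1 found
         x = y
     return None
 
 print(try_factor_via_square_root(341, 2))  # 31, since 32**2 % 341 == 1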


Generation of probable primes

The Miller–Rabin test can be used to generate strong probable primes, simply by drawing integers at random until one passes the test. This algorithm terminates almost surely (since at each iteration there is a chance to draw a prime number). The pseudocode for generating ''b''-bit strong probable primes (with the most significant bit set) is as follows:

 Input #1: ''b'', the number of bits of the result
 Input #2: ''k'', the number of rounds of testing to perform
 Output: a strong probable prime ''n''
 
 while True:
     pick a random odd integer ''n'' in the range [2^(''b''−1), 2^''b'' − 1]
     if the Miller–Rabin test with inputs ''n'' and ''k'' returns “''probably prime''” then
         return ''n''
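
Reusing the miller_rabin sketch from the earlier section (again an illustration of ours, not reference code), the generator can be written as:

 import random
 
 def generate_probable_prime(b: int, k: int) -> int:
     """Draw random odd b-bit integers (MSB set, assumes b >= 3)
     until one passes k rounds of Miller-Rabin."""
     while True:
         n = random.randrange(2 ** (b - 1) + 1, 2 ** b, 2)  # odd, MSB set
         if miller_rabin(n, k) == "probably prime":
             return n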


Complexity

Of course, the worst-case running time is infinite, since the outer loop may never terminate, but that happens with probability zero. As per the geometric distribution, the expected number of draws is \tfrac{1}{\Pr(M\!R_k)} (reusing notations from earlier). As any prime number passes the test, the probability of being prime gives a coarse lower bound to the probability of passing the test. If we draw odd integers uniformly in the range [2^{b-1}, 2^b - 1], then we get:

: \Pr(M\!R_k) > \Pr(P) = \frac{\pi\left(2^b\right) - \pi\left(2^{b-1}\right)}{2^{b-2}}

where π is the prime-counting function. Using an asymptotic expansion of π (an extension of the prime number theorem), we can approximate this probability when ''b'' grows towards infinity. We find:

: \Pr(P) = \frac{2}{\ln 2}\, b^{-1} + \mathcal{O}\left(b^{-2}\right)
: \frac{1}{\Pr(P)} = \frac{\ln 2}{2}\, b + \mathcal{O}\left(1\right)

Hence we can expect the generator to run no more Miller–Rabin tests than a number proportional to ''b''. Taking into account the worst-case complexity of each Miller–Rabin test (see earlier), the expected running time of the generator with inputs ''b'' and ''k'' is then bounded by O(''k'' ''b''^4) (or Õ(''k'' ''b''^3) using FFT-based multiplication).
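
As a concrete check (our own arithmetic, not from the original text), for ''b'' = 1024 this estimate predicts roughly (ln 2 / 2) × 1024 ≈ 355 random draws on average before a probable prime is found:

 import math
 print(math.log(2) / 2 * 1024)  # ~354.9 expected draws for a 1024-bit prime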


Accuracy

The error measure of this generator is the probability that it outputs a composite number. Using the relation between conditional probabilities (shown in an earlier section) and the asymptotic behavior of \Pr(P) (shown just before), this error measure can be given a coarse upper bound:

: \Pr(\lnot P \mid M\!R_k) < \Pr(M\!R_k \mid \lnot P) \left( \tfrac{1}{\Pr(P)} - 1 \right) \leq 4^{-k} \left( \tfrac{\ln 2}{2}\, b - 1 + \mathcal{O}\left(b^{-1}\right) \right).

Hence, for large enough ''b'', this error measure is less than \tfrac{\ln 2}{2}\, 4^{-k}\, b. However, much better bounds exist. Using the fact that the Miller–Rabin test itself often has an error bound much smaller than 4^−''k'' (see earlier), Damgård, Landrock and Pomerance derived several error bounds for the generator, with various classes of parameters ''b'' and ''k''. These error bounds allow an implementor to choose a reasonable ''k'' for a desired accuracy. One of these error bounds is 4^−''k'', which holds for all ''b'' ≥ 2 (the authors only showed it for ''b'' ≥ 51, while Ronald Burthe Jr. completed the proof with the remaining values 2 ≤ ''b'' ≤ 50). Again, this simple bound can be improved for large values of ''b''. For instance, another bound derived by the same authors is:

: \left(\tfrac{1}{7}\, b^{15/4}\, 2^{-b/2}\right) 4^{-k}

which holds for all ''b'' ≥ 21 and ''k'' ≥ ''b''/4. This bound is smaller than 4^−''k'' as soon as ''b'' ≥ 32.
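
As a rough illustration (our own; the helper simply evaluates the displayed bound), the improved bound can be compared against 4^−''k'':

 def dlp_bound(b: int, k: int) -> float:
     """Damgard-Landrock-Pomerance bound (1/7) * b**(15/4) * 2**(-b/2) * 4**(-k),
     valid for b >= 21 and k >= b/4."""
     assert b >= 21 and k >= b / 4
     return (b ** 3.75 / 7) * 2.0 ** (-b / 2) * 4.0 ** (-k)
 
 print(dlp_bound(64, 16))  # about 4.6e-14, versus the generic 4**-16 ~ 2.3e-10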


External links

* Interactive Online Implementation of the Deterministic Variant (stepping through the algorithm step-by-step)
* Miller–Rabin primality test in C#
