A randomized algorithm is an algorithm that employs a degree of randomness as part of its logic or procedure. The algorithm typically uses uniformly random bits as an auxiliary input to guide its behavior, in the hope of achieving good performance in the "average case" over all possible choices of randomness determined by the random bits; thus either the running time, or the output (or both) are random variables.
One has to distinguish between algorithms that use the random input so that they always terminate with the correct answer, but where the expected running time is finite (Las Vegas algorithms, for example Quicksort), and algorithms which have a chance of producing an incorrect result (Monte Carlo algorithms, for example the Monte Carlo algorithm for the MFAS problem) or fail to produce a result either by signaling a failure or failing to terminate. In some cases, probabilistic algorithms are the only practical means of solving a problem.
In common practice, randomized algorithms are approximated using a pseudorandom number generator in place of a true source of random bits; such an implementation may deviate from the expected theoretical behavior and mathematical guarantees, which may depend on the existence of an ideal true random number generator.
Motivation
As a motivating example, consider the problem of finding an ‘''a''’ in an array of ''n'' elements.
Input: An array of ''n''≥2 elements, in which half are ‘''a''’s and the other half are ‘''b''’s.
Output: Find an ‘''a''’ in the array.
We give two versions of the algorithm, one Las Vegas algorithm and one Monte Carlo algorithm.
Las Vegas algorithm:
findingA_LV(array A, n)
begin
    repeat
        Randomly select one element out of n elements.
    until 'a' is found
end
This algorithm succeeds with probability 1. The number of iterations varies and can be arbitrarily large, but the expected number of iterations is

:$\sum_{i=1}^{\infty} \frac{i}{2^i} = 2,$

since each draw finds an ‘''a''’ with probability 1/2. Since this is constant, the expected run time over many calls is $\Theta(1)$. (See Big Theta notation.)
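The Las Vegas version translates directly into Python. This is an illustrative sketch (the function name and the list-based array are choices made here, not part of the article):

    import random

    def findingA_LV(A):
        """Las Vegas: always returns an index of 'a'; the running time is random."""
        n = len(A)
        while True:
            i = random.randrange(n)   # sample with replacement, as in the pseudocode
            if A[i] == 'a':
                return i

On an array with half ‘''a''’s, each draw succeeds with probability 1/2, so the loop runs twice in expectation.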
Monte Carlo algorithm:
findingA_MC(array A, n, k)
begin
    i := 0
    repeat
        Randomly select one element out of n elements.
        i := i + 1
    until i = k or 'a' is found
end
If an ‘''a''’ is found, the algorithm succeeds, else the algorithm fails. After ''k'' iterations, the probability of finding an ‘''a''’ is:

:$\Pr[\text{find an } a] = 1 - \left(\tfrac{1}{2}\right)^k.$

This algorithm does not guarantee success, but the run time is bounded. The number of iterations is always less than or equal to ''k''. Taking ''k'' to be constant, the run time (expected and absolute) is $\Theta(1)$.
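The Monte Carlo version admits a similarly direct sketch (same assumptions as above; the None return value for failure is a choice made here):

    import random

    def findingA_MC(A, k):
        """Monte Carlo: at most k draws; fails with probability (1/2)**k."""
        n = len(A)
        for _ in range(k):
            i = random.randrange(n)
            if A[i] == 'a':
                return i
        return None  # bounded run time, but no guarantee of success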
Randomized algorithms are particularly useful when faced with a malicious "adversary" or attacker who deliberately tries to feed a bad input to the algorithm (see worst-case complexity and competitive analysis (online algorithm)), such as in the Prisoner's dilemma. It is for this reason that randomness is ubiquitous in cryptography. In cryptographic applications, pseudo-random numbers cannot be used, since the adversary can predict them, making the algorithm effectively deterministic. Therefore, either a source of truly random numbers or a cryptographically secure pseudo-random number generator is required. Another area in which randomness is inherent is quantum computing.
In the example above, the Las Vegas algorithm always outputs the correct answer, but its running time is a random variable. The Monte Carlo algorithm (related to the Monte Carlo method for simulation) is guaranteed to complete in an amount of time that can be bounded by a function of the input size and its parameter ''k'', but allows a ''small probability of error''. Observe that any Las Vegas algorithm can be converted into a Monte Carlo algorithm (via Markov's inequality), by having it output an arbitrary, possibly incorrect answer if it fails to complete within a specified time. Conversely, if an efficient verification procedure exists to check whether an answer is correct, then a Monte Carlo algorithm can be converted into a Las Vegas algorithm by running the Monte Carlo algorithm repeatedly until a correct answer is obtained.
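Both conversions can be phrased generically. In this sketch the helper names are assumptions made here: lv_attempt is one randomized attempt that returns an answer or None, mc_run is a complete Monte Carlo run, and verify is the efficient checker:

    def monte_carlo_from_las_vegas(lv_attempt, budget):
        """Truncate a Las Vegas algorithm after `budget` attempts.
        By Markov's inequality, a budget of twice the expected number
        of attempts fails with probability at most 1/2."""
        for _ in range(budget):
            ans = lv_attempt()
            if ans is not None:
                return ans
        return None  # possibly-failed output, but within a fixed time bound

    def las_vegas_from_monte_carlo(mc_run, verify):
        """Repeat a Monte Carlo algorithm until the verifier accepts:
        the output is always correct, but the running time is random."""
        while True:
            ans = mc_run()
            if verify(ans):
                return ans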
Computational complexity
Computational complexity theory models randomized algorithms as ''probabilistic Turing machines''. Both Las Vegas and Monte Carlo algorithms are considered, and several complexity classes are studied. The most basic randomized complexity class is RP, which is the class of decision problems for which there is an efficient (polynomial time) randomized algorithm (or probabilistic Turing machine) which recognizes NO-instances with absolute certainty and recognizes YES-instances with a probability of at least 1/2. The complement class for RP is co-RP. Problem classes having (possibly nonterminating) algorithms with polynomial time average case running time whose output is always correct are said to be in ZPP.
The class of problems for which both YES and NO-instances are allowed to be identified with some error is called BPP. This class acts as the randomized equivalent of P, i.e. BPP represents the class of efficient randomized algorithms.
History
An important line of research in randomized algorithms in number theory can be traced back to Pocklington's algorithm, from 1917, which finds square roots modulo prime numbers.
The study of randomized algorithms in number theory was spurred by the 1977 discovery of a randomized primality test (i.e., determining the primality of a number) by Robert M. Solovay and Volker Strassen. Soon afterwards Michael O. Rabin demonstrated that the 1976 Miller's primality test can be turned into a randomized algorithm. At that time, no practical deterministic algorithm for primality was known.
The Miller–Rabin primality test relies on a binary relation between two positive integers ''k'' and ''n'' that can be expressed by saying that ''k'' "is a witness to the compositeness of" ''n''. It can be shown that
* If there is a witness to the compositeness of ''n'', then ''n'' is composite (i.e., ''n'' is not prime), and
* If ''n'' is composite then at least three-fourths of the natural numbers less than ''n'' are witnesses to its compositeness, and
* There is a fast algorithm that, given ''k'' and ''n'', ascertains whether ''k'' is a witness to the compositeness of ''n''.
Observe that this implies that the primality problem is in co-RP.

If one randomly chooses 100 numbers less than a composite number ''n'', then the probability of failing to find such a "witness" is $(1/4)^{100}$, so that for most practical purposes this is a good primality test. If ''n'' is big, there may be no other test that is practical. The probability of error can be reduced to an arbitrary degree by performing enough independent tests.
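A standard implementation of this witness test is the textbook Miller–Rabin procedure; the sketch below is illustrative, and the function name and default round count are choices made here:

    import random

    def is_probable_prime(n, rounds=100):
        """Miller-Rabin: 'False' is always correct; 'True' is wrong with
        probability at most (1/4)**rounds for composite n."""
        if n < 4:
            return n in (2, 3)
        if n % 2 == 0:
            return False
        # Write n - 1 = d * 2**s with d odd.
        d, s = n - 1, 0
        while d % 2 == 0:
            d //= 2
            s += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue          # a is not a witness this round
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False      # a is a witness to the compositeness of n
        return True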
Therefore, in practice, there is no penalty associated with accepting a small probability of error, since with a little care the probability of error can be made astronomically small. Indeed, even though a deterministic polynomial-time primality test has since been found (see AKS primality test), it has not replaced the older probabilistic tests in cryptographic software, nor is it expected to do so for the foreseeable future.
Examples
Quicksort
Quicksort is a familiar, commonly used algorithm in which randomness can be useful. Many deterministic versions of this algorithm require $O(n^2)$ time to sort ''n'' numbers for some well-defined class of degenerate inputs (such as an already sorted array), with the specific class of inputs that generate this behavior defined by the protocol for pivot selection. However, if the algorithm selects pivot elements uniformly at random, it has a provably high probability of finishing in $O(n \log n)$ time regardless of the characteristics of the input.
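A minimal randomized quicksort in Python (a sketch; this out-of-place variant trades memory for clarity and is not the in-place partitioning scheme Hoare described):

    import random

    def quicksort(a):
        """Randomized quicksort: expected O(n log n) comparisons on any
        input, because the pivot is chosen uniformly at random."""
        if len(a) <= 1:
            return list(a)
        pivot = random.choice(a)      # random pivot defeats adversarial inputs
        less  = [x for x in a if x < pivot]
        equal = [x for x in a if x == pivot]
        more  = [x for x in a if x > pivot]
        return quicksort(less) + equal + quicksort(more)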
Randomized incremental constructions in geometry
In computational geometry, a standard technique to build a structure like a convex hull or Delaunay triangulation is to randomly permute the input points and then insert them one by one into the existing structure. The randomization ensures that the expected number of changes to the structure caused by an insertion is small, and so the expected running time of the algorithm can be bounded from above. This technique is known as randomized incremental construction.
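Welzl's smallest-enclosing-circle algorithm is a compact instance of this pattern, offered here as an illustration (the article does not discuss it): shuffle the points, insert them one at a time, and rebuild only when a new point falls outside the current circle. The initial random permutation is what makes the expected running time linear. A sketch, assuming the handled collinear case is the only degeneracy:

    import random

    def _circle_two(a, b):
        """Circle with segment ab as diameter, as (cx, cy, r)."""
        cx, cy = (a[0] + b[0]) / 2, (a[1] + b[1]) / 2
        r = ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 / 2
        return (cx, cy, r)

    def _circle_three(a, b, c):
        """Circumcircle of a, b, c; falls back to the widest pair if collinear."""
        d = 2 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1]))
        if abs(d) < 1e-12:
            return max((_circle_two(a, b), _circle_two(a, c), _circle_two(b, c)),
                       key=lambda circ: circ[2])
        sa, sb, sc = a[0]**2 + a[1]**2, b[0]**2 + b[1]**2, c[0]**2 + c[1]**2
        ux = (sa * (b[1] - c[1]) + sb * (c[1] - a[1]) + sc * (a[1] - b[1])) / d
        uy = (sa * (c[0] - b[0]) + sb * (a[0] - c[0]) + sc * (b[0] - a[0])) / d
        return (ux, uy, ((ux - a[0]) ** 2 + (uy - a[1]) ** 2) ** 0.5)

    def _contains(circ, p, eps=1e-9):
        return (p[0] - circ[0]) ** 2 + (p[1] - circ[1]) ** 2 <= (circ[2] + eps) ** 2

    def smallest_enclosing_circle(points):
        pts = list(points)
        random.shuffle(pts)              # the randomized-incremental step
        circ = None
        for i, p in enumerate(pts):
            if circ is None or not _contains(circ, p):
                # p lies outside the current circle: rebuild with p on the boundary
                circ = (p[0], p[1], 0.0)
                for j, q in enumerate(pts[:i]):
                    if not _contains(circ, q):
                        circ = _circle_two(p, q)
                        for r in pts[:j]:
                            if not _contains(circ, r):
                                circ = _circle_three(p, q, r)
        return circ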
Min cut
Input: A graph ''G''(''V'',''E'')

Output: A cut partitioning the vertices into ''L'' and ''R'', with the minimum number of edges between ''L'' and ''R''.
Recall that the contraction of two nodes, ''u'' and ''v'', in a (multi-)graph yields a new node ''u''′ with edges that are the union of the edges incident on either ''u'' or ''v'', except for any edge(s) connecting ''u'' and ''v''. Figure 1 gives an example of contraction of vertices ''A'' and ''B''. After contraction, the resulting graph may have parallel edges, but contains no self loops.
Karger's basic algorithm:
begin
    i = 1
    repeat
        repeat
            Take a random edge (u,v) ∈ E in G
            replace u and v with the contraction u'
        until only 2 nodes remain
        obtain the corresponding cut result C_i
        i = i + 1
    until i = m
    output the minimum cut among C_1, C_2, ..., C_m
end
In each execution of the outer loop, the algorithm repeats the inner loop until only 2 nodes remain, and the corresponding cut is obtained. The run time of one execution is $O(n)$, where ''n'' denotes the number of vertices, since the inner loop performs $n - 2$ contractions. After ''m'' executions of the outer loop, we output the minimum cut among all the results. Figure 2 gives an example of one execution of the algorithm. After execution, we get a cut of size 3.
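A compact Python version of the repeated-contraction algorithm (a sketch; contracting edges in a uniformly random order via a union-find structure is equivalent to picking a uniformly random remaining edge at each step, and the trial count follows the analysis below):

    import math
    import random

    def karger_min_cut(n, edges):
        """Return the size of a minimum cut of the multigraph with vertices
        0..n-1 and edge list `edges`, with success probability >= 1 - 1/n."""
        m = int(n * (n - 1) / 2 * math.log(n)) + 1   # outer-loop trials

        def find(parent, x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]        # path halving
                x = parent[x]
            return x

        best = None
        for _ in range(m):
            parent = list(range(n))
            remaining = n
            order = edges[:]
            random.shuffle(order)                    # random contraction order
            for u, v in order:
                if remaining == 2:
                    break
                ru, rv = find(parent, u), find(parent, v)
                if ru != rv:                         # skip self-loops
                    parent[ru] = rv
                    remaining -= 1
            cut = sum(1 for u, v in edges if find(parent, u) != find(parent, v))
            best = cut if best is None else min(best, cut)
        return best

    # Example: a square with one diagonal; the minimum cut has size 2.
    print(karger_min_cut(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]))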
Analysis of algorithm
The probability that the algorithm succeeds is 1 minus the probability that all attempts fail. By independence, the probability that all attempts fail is

:$\prod_{i=1}^m \Pr(C_i \neq C) = \prod_{i=1}^m (1 - \Pr(C_i = C)),$

where ''C'' is a fixed minimum cut of size ''k''. Lemma 1 states that if no edge of ''C'' is contracted during iteration ''i'', then $C_i = C$. Lemma 2 states that a (multi-)graph on ''p'' vertices whose minimum cut has size ''k'' has at least $pk/2$ edges, because every vertex must have degree at least ''k''.

By Lemma 1, the probability that $C_i = C$ is the probability that no edge of ''C'' is selected during iteration ''i''. Consider the inner loop and let $G_j$ denote the graph after ''j'' edge contractions, where $j \in \{0, 1, \dots, n-3\}$. $G_j$ has $n - j$ vertices. We use the chain rule of conditional probabilities. The probability that the edge chosen at iteration ''j'' is not in ''C'', given that no edge of ''C'' has been chosen before, is $1 - \frac{k}{|E(G_j)|}$. Note that $G_j$ still has min cut of size ''k'', so by Lemma 2, it still has at least $\frac{(n-j)k}{2}$ edges.

Thus, $1 - \frac{k}{|E(G_j)|} \geq 1 - \frac{2}{n-j}$.

So by the chain rule, the probability of finding the min cut ''C'' is

:$\Pr[C_i = C] \geq \left(1 - \frac{2}{n}\right)\left(1 - \frac{2}{n-1}\right)\cdots\left(1 - \frac{2}{3}\right).$

Cancellation gives $\Pr[C_i = C] \geq \frac{2}{n(n-1)}$. Thus the probability that the algorithm succeeds is at least $1 - \left(1 - \frac{2}{n(n-1)}\right)^m$. For $m = \frac{n(n-1)}{2} \ln n$, this is at least $1 - \frac{1}{n}$. The algorithm finds the min cut with probability $1 - \frac{1}{n}$, in time $O(mn) = O(n^3 \log n)$.
Derandomization
Randomness can be viewed as a resource, like space and time. Derandomization is then the process of ''removing'' randomness (or using as little of it as possible). It is not currently known if all algorithms can be derandomized without significantly increasing their running time. For instance, in computational complexity, it is unknown whether P = BPP, i.e., we do not know whether we can take an arbitrary randomized algorithm that runs in polynomial time with a small error probability and derandomize it to run in polynomial time without using randomness.
There are specific methods that can be employed to derandomize particular randomized algorithms:
* the method of conditional probabilities, and its generalization, pessimistic estimators
* discrepancy theory (which is used to derandomize geometric algorithms)
* the exploitation of limited independence in the random variables used by the algorithm, such as the pairwise independence used in universal hashing (a sketch follows this list)
* the use of expander graphs (or dispersers in general) to ''amplify'' a limited amount of initial randomness (this last approach is also referred to as generating pseudorandom bits from a random source, and leads to the related topic of pseudorandomness)
* changing the randomized algorithm to use a hash function as a source of randomness for the algorithm's tasks, and then derandomizing the algorithm by brute-forcing all possible parameters (seeds) of the hash function. This technique is usually used to exhaustively search a sample space and make the algorithm deterministic (e.g. randomized graph algorithms)
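As an illustration of the limited-independence item above, here is a sketch of the classic Carter–Wegman family $h(x) = ((ax + b) \bmod p) \bmod m$, which is (nearly) pairwise independent for integer keys below ''p''; the particular prime and the function names are choices made here:

    import random

    P = (1 << 61) - 1  # a Mersenne prime; keys must be smaller than this

    def make_pairwise_hash(m, rng=random):
        """Sample h from the family h(x) = ((a*x + b) mod P) mod m.
        For any two distinct keys, the pair (h(x), h(y)) is close to
        uniformly distributed over the random choice of a and b."""
        a = rng.randrange(1, P)
        b = rng.randrange(0, P)
        return lambda x: ((a * x + b) % P) % m

    # Derandomization by exhaustive search would enumerate all (a, b)
    # seeds instead of sampling one pair at random.
    h = make_pairwise_hash(16)
    print(h(42), h(43))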
Where randomness helps
When the model of computation is restricted to Turing machines, it is currently an open question whether the ability to make random choices allows some problems to be solved in polynomial time that cannot be solved in polynomial time without this ability; this is the question of whether P = BPP. However, in other contexts, there are specific examples of problems where randomization yields strict improvements.
* Based on the initial motivating example: given an exponentially long string of $2^k$ characters, half a's and half b's, a random-access machine requires $2^{k-1}$ lookups in the worst case to find the index of an ''a''; if it is permitted to make random choices, it can solve this problem in an expected polynomial number of lookups.
* The natural way of carrying out a numerical computation in embedded systems or cyber-physical systems is to provide a result that approximates the correct one with high probability (or Probably Approximately Correct Computation (PACC)). The hard problem associated with the evaluation of the discrepancy loss between the approximated and the correct computation can be effectively addressed by resorting to randomization.
* In communication complexity, the equality of two strings can be verified to some reliability using $\log n$ bits of communication with a randomized protocol. Any deterministic protocol requires $\Theta(n)$ bits if defending against a strong opponent. (A fingerprinting sketch of such a protocol appears after this list.)
* The volume of a convex body can be estimated by a randomized algorithm to arbitrary precision in polynomial time. Bárány and Füredi showed that no deterministic algorithm can do the same. This is true unconditionally, i.e. without relying on any complexity-theoretic assumptions, assuming the convex body can be queried only as a black box.
* A more complexity-theoretic example of a place where randomness appears to help is the class IP. IP consists of all languages that can be accepted (with high probability) by a polynomially long interaction between an all-powerful prover and a verifier that implements a BPP algorithm. IP = PSPACE. However, if it is required that the verifier be deterministic, then IP = NP.
* In a chemical reaction network (a finite set of reactions like A+B → 2C + D operating on a finite number of molecules), the ability to ever reach a given target state from an initial state is decidable, while even approximating the probability of ever reaching a given target state (using the standard concentration-based probability for which reaction will occur next) is undecidable. More specifically, a limited Turing machine can be simulated with arbitrarily high probability of running correctly for all time, only if a random chemical reaction network is used. With a simple nondeterministic chemical reaction network (any possible reaction can happen next), the computational power is limited to primitive recursive functions.
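The randomized equality protocol mentioned above can be sketched with polynomial fingerprints: interpret each string as the coefficient vector of a polynomial over a fixed prime field and compare evaluations at a shared random point. The names and the choice of prime below are illustrative, not from the article; the strings are assumed to have equal length:

    import random

    P = (1 << 61) - 1  # fixed prime field; error probability is below n / P

    def fingerprint(bits, r):
        """Evaluate the polynomial with coefficients `bits` at point r (mod P)."""
        acc = 0
        for b in bits:
            acc = (acc * r + b) % P
        return acc

    def probably_equal(x_bits, y_bits):
        """One party sends (r, fingerprint(x, r)): O(log n) bits in total.
        Distinct polynomials of degree < n agree at a random point with
        probability below n / P, so a mismatch is almost always caught."""
        r = random.randrange(P)
        return fingerprint(x_bits, r) == fingerprint(y_bits, r)

    print(probably_equal([1, 0, 1, 1], [1, 0, 1, 1]))  # True
    print(probably_equal([1, 0, 1, 1], [1, 0, 0, 1]))  # almost surely False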
See also
* Probabilistic analysis of algorithms
* Atlantic City algorithm
* Monte Carlo algorithm
* Las Vegas algorithm
* Bogosort
* Principle of deferred decision
* Randomized algorithms as zero-sum games
* Probabilistic roadmap
* HyperLogLog
* Count–min sketch
* Approximate counting algorithm
* Karger's algorithm
References
* Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. ''Introduction to Algorithms'', Second Edition. MIT Press and McGraw–Hill, 1990. Chapter 5: Probabilistic Analysis and Randomized Algorithms, pp. 91–122.
* Dirk Draheim. ''Semantics of the Probabilistic Typed Lambda Calculus (Markov Chain Semantics, Termination Behavior, and Denotational Semantics)''. Springer, 2017.
* Jon Kleinberg and Éva Tardos. ''Algorithm Design''. Chapter 13: "Randomized Algorithms".
* M. Mitzenmacher and E. Upfal. ''Probability and Computing: Randomized Algorithms and Probabilistic Analysis''. Cambridge University Press, New York (NY), 2005.
* Rajeev Motwani and P. Raghavan. ''Randomized Algorithms''. Cambridge University Press, New York (NY), 1995.
* Rajeev Motwani and P. Raghavan. Randomized Algorithms. A survey on randomized algorithms.
* Chapter 11: Randomized computation, pp. 241–278.
* A. A. Tsay, W. S. Lovejoy, David R. Karger, ''Random Sampling in Cut, Flow, and Network Design Problems'', Mathematics of Operations Research, 24(2):383–413, 1999.