In mathematics, a low-discrepancy sequence is a sequence with the property that for all values of ''N'', its subsequence ''x''1, ..., ''x''''N'' has a low discrepancy. Roughly speaking, the discrepancy of a sequence is low if the proportion of points in the sequence falling into an arbitrary set ''B'' is close to proportional to the measure of ''B'', as would happen on average (but not for particular samples) in the case of an equidistributed sequence. Specific definitions of discrepancy differ regarding the choice of ''B'' (hyperspheres, hypercubes, etc.) and how the discrepancy for every ''B'' is computed (usually normalized) and combined (usually by taking the worst value).

Low-discrepancy sequences are also called quasirandom sequences, due to their common use as a replacement of uniformly distributed random numbers. The "quasi" modifier is used to denote more clearly that the values of a low-discrepancy sequence are neither random nor pseudorandom, but such sequences share some properties of random variables, and in certain applications, such as the quasi-Monte Carlo method, their lower discrepancy is an important advantage.


Applications

Quasirandom numbers have an advantage over pure random numbers in that they cover the domain of interest quickly and evenly.

Two useful applications are in finding the characteristic function of a probability density function, and in finding the derivative function of a deterministic function with a small amount of noise. Quasirandom numbers allow higher-order moments to be calculated to high accuracy very quickly.

Applications that don't involve sorting would be in finding the mean, standard deviation, skewness and kurtosis of a statistical distribution, and in finding the integral and global maxima and minima of difficult deterministic functions. Quasirandom numbers can also be used for providing starting points for deterministic algorithms that only work locally, such as Newton–Raphson iteration.

Quasirandom numbers can also be combined with search algorithms. A binary tree Quicksort-style algorithm ought to work exceptionally well because quasirandom numbers flatten the tree far better than random numbers, and the flatter the tree, the faster the sorting. With a search algorithm, quasirandom numbers can be used to find the mode, median, confidence intervals and cumulative distribution of a statistical distribution, and all local minima and all solutions of deterministic functions.


Low-discrepancy sequences in numerical integration

At least three methods of numerical integration can be phrased as follows. Given a set {''x''1, ..., ''x''''N''} in the interval [0, 1], approximate the integral of a function ''f'' as the average of the function evaluated at those points:

: \int_0^1 f(u)\,du \approx \frac{1}{N}\,\sum_{i=1}^N f(x_i).

If the points are chosen as ''x''''i'' = ''i''/''N'', this is the ''rectangle rule''. If the points are chosen to be randomly (or pseudorandomly) distributed, this is the ''Monte Carlo method''. If the points are chosen as elements of a low-discrepancy sequence, this is the ''quasi-Monte Carlo method''. A remarkable result, the Koksma–Hlawka inequality (stated below), shows that the error of such a method can be bounded by the product of two terms, one of which depends only on ''f'' and the other of which is the discrepancy of the set {''x''1, ..., ''x''''N''}.

It is convenient to construct the set {''x''1, ..., ''x''''N''} in such a way that if a set with ''N'' + 1 elements is constructed, the previous ''N'' elements need not be recomputed. The rectangle rule uses point sets which have low discrepancy, but in general the elements must be recomputed if ''N'' is increased. Elements need not be recomputed in the random Monte Carlo method if ''N'' is increased, but the point sets do not have minimal discrepancy. By using low-discrepancy sequences we aim for low discrepancy with no need for recomputation, although low-discrepancy sequences can only be incrementally good on discrepancy if no recomputation is allowed.
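As a minimal, self-contained sketch (not tied to any particular library), the three rules can be compared on a one-dimensional test integral; the quasirandom points here come from the additive golden-ratio recurrence described later in this article.

```python
import math
import random

def f(u):
    # Test integrand with known integral: the integral of u^2 over [0,1] is 1/3
    return u * u

N = 1000
exact = 1.0 / 3.0

# Rectangle rule: x_i = i/N
rect = sum(f(i / N) for i in range(1, N + 1)) / N

# Monte Carlo: pseudorandom points
random.seed(0)
mc = sum(f(random.random()) for _ in range(N)) / N

# Quasi-Monte Carlo: additive recurrence with the golden-ratio fraction
phi_frac = (math.sqrt(5) - 1) / 2
qmc = sum(f((i * phi_frac) % 1.0) for i in range(1, N + 1)) / N

for name, value in [("rectangle", rect), ("Monte Carlo", mc), ("quasi-Monte Carlo", qmc)]:
    print(f"{name:>18}: {value:.6f}  error = {abs(value - exact):.2e}")
```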


Definition of discrepancy

The ''discrepancy'' of a set ''P'' = {''x''1, ..., ''x''''N''} is defined, using Niederreiter's notation, as

: D_N(P) = \sup_{B\in J} \left| \frac{A(B;P)}{N} - \lambda_s(B) \right|

where λ''s'' is the ''s''-dimensional Lebesgue measure, ''A''(''B'';''P'') is the number of points in ''P'' that fall into ''B'', and ''J'' is the set of ''s''-dimensional intervals or boxes of the form

: \prod_{i=1}^s [a_i, b_i) = \{ \mathbf{x} \in \mathbf{R}^s : a_i \le x_i < b_i \} \,

where 0 \le a_i < b_i \le 1.

The ''star-discrepancy'' ''D''*''N''(''P'') is defined similarly, except that the supremum is taken over the set ''J''* of rectangular boxes of the form

: \prod_{i=1}^s [0, u_i)

where ''u''''i'' is in the half-open interval [0, 1). The two are related by

: D^*_N \le D_N \le 2^s D^*_N . \,

Note: With these definitions, discrepancy represents the worst-case or maximum point density deviation of a uniform set. However, other error measures are also meaningful, leading to other definitions and variation measures. For instance, the ''L''2 discrepancy or the modified centered ''L''2 discrepancy are also used intensively to compare the quality of uniform point sets. Both are much easier to calculate for large ''N'' and ''s''.
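For ''s'' = 1 the star discrepancy can be computed exactly from the sorted points via the standard closed form ''D''*''N'' = max''i'' max(''i''/''N'' − ''x''(''i''), ''x''(''i'') − (''i''−1)/''N''). The sketch below is a one-dimensional illustration only, not a general-dimension algorithm.

```python
def star_discrepancy_1d(points):
    """Exact star discrepancy D*_N for a 1-D point set in [0, 1).

    Uses the classical closed form for sorted points:
        D*_N = max_i max(i/N - x_(i), x_(i) - (i-1)/N).
    """
    xs = sorted(points)
    n = len(xs)
    return max(
        max(
            (i + 1) / n - x,   # deficit just left of the (i+1)-th order statistic
            x - i / n          # excess just right of it
        )
        for i, x in enumerate(xs)
    )

# Example: equally spaced midpoints attain the minimum possible value 1/(2N)
print(star_discrepancy_1d([(2 * i + 1) / 20 for i in range(10)]))  # 0.05 = 1/(2*10)
```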


The Koksma–Hlawka inequality

Let Ī''s'' be the ''s''-dimensional unit cube, Ī''s'' = [0, 1] × ... × [0, 1]. Let ''f'' have bounded variation ''V''(''f'') on Ī''s'' in the sense of Hardy and Krause. Then for any ''x''1, ..., ''x''''N'' in ''I''''s'' = [0, 1)''s'',

: \left| \frac{1}{N} \sum_{i=1}^N f(x_i) - \int_{\bar I^s} f(u)\,du \right| \le V(f)\, D_N^* (x_1,\ldots,x_N).

The Koksma–Hlawka inequality is sharp in the following sense: For any point set {''x''1, ..., ''x''''N''} in ''I''''s'' and any \varepsilon>0, there is a function ''f'' with bounded variation and ''V''(''f'') = 1 such that

: \left| \frac{1}{N} \sum_{i=1}^N f(x_i) - \int_{\bar I^s} f(u)\,du \right| > D_{N}^{*}(x_1,\ldots,x_N)-\varepsilon.

Therefore, the quality of a numerical integration rule depends only on the discrepancy ''D''*''N''(''x''1, ..., ''x''''N'').


The formula of Hlawka–Zaremba

Let D=\{1,2,\ldots,d\}. For \emptyset\neq u\subseteq D we write

: dx_u:=\prod_{j\in u} dx_j

and denote by (x_u,1) the point obtained from ''x'' by replacing the coordinates not in ''u'' by 1. Then

: \frac{1}{N} \sum_{i=1}^N f(x_i) - \int_{[0,1]^d} f(u)\,du = \sum_{\emptyset\neq u\subseteq D}(-1)^{|u|} \int_{[0,1]^{|u|}} \operatorname{disc}(x_u,1)\,\frac{\partial^{|u|}}{\partial x_u}f(x_u,1) \, dx_u,

where

: \operatorname{disc}(z)= \frac{1}{N}\sum_{i=1}^N \prod_{j=1}^d 1_{[0,z_j)}(x_{i,j}) - \prod_{j=1}^d z_j

is the discrepancy function.


The ''L²'' version of the Koksma–Hlawka inequality

Applying the Cauchy–Schwarz inequality for integrals and sums to the Hlawka–Zaremba identity, we obtain an L^2 version of the Koksma–Hlawka inequality:

: \left| \frac{1}{N} \sum_{i=1}^N f(x_i) - \int_{[0,1]^d} f(u)\,du \right| \le \|f\|_{d}\, \operatorname{disc}_{d}(\{t_i\}),

where

: \operatorname{disc}_{d}(\{t_i\})=\left(\sum_{\emptyset\neq u\subseteq D} \int_{[0,1]^{|u|}} \operatorname{disc}(x_u,1)^2 \, dx_u\right)^{1/2}

and

: \|f\|_{d} = \left(\sum_{u\subseteq D} \int_{[0,1]^{|u|}} \left| \frac{\partial^{|u|}}{\partial x_u} f(x_u,1)\right|^2 dx_u\right)^{1/2}.

''L''2 discrepancy has high practical importance because fast explicit calculations are possible for a given point set. This makes it easy to create point set optimizers that use ''L''2 discrepancy as a criterion.
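As an illustration of such an explicit calculation, the squared ''L''2 star discrepancy of a point set admits a closed form (a Warnock-type formula) that can be evaluated directly in O(''N''²·''d'') operations. The sketch below is an illustration under that assumed formula and normalization (anchored boxes [0, ''t'')), not a reference implementation.

```python
import math
import random

def l2_star_discrepancy(points):
    """L2 star discrepancy of a point set in [0,1]^d via the direct O(N^2 d)
    Warnock-type closed form: the root-mean-square deviation of the empirical
    measure from the uniform measure over all anchored boxes [0, t)."""
    n = len(points)
    d = len(points[0])

    term1 = 3.0 ** (-d)

    term2 = 0.0
    for x in points:
        term2 += math.prod((1.0 - xk * xk) / 2.0 for xk in x)
    term2 *= 2.0 / n

    term3 = 0.0
    for x in points:
        for y in points:
            term3 += math.prod(1.0 - max(xk, yk) for xk, yk in zip(x, y))
    term3 /= n * n

    return math.sqrt(term1 - term2 + term3)

# Example: 64 pseudorandom 2-D points vs. a regular 8x8 grid of the same size
random.seed(1)
rand_pts = [(random.random(), random.random()) for _ in range(64)]
grid_pts = [((i + 0.5) / 8, (j + 0.5) / 8) for i in range(8) for j in range(8)]
print(l2_star_discrepancy(rand_pts), l2_star_discrepancy(grid_pts))
```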


The Erdős–Turán–Koksma inequality

It is computationally hard to find the exact value of the discrepancy of large point sets. The Erdős–Turán–Koksma inequality provides an upper bound.

Let ''x''1, ..., ''x''''N'' be points in ''I''''s'' and ''H'' be an arbitrary positive integer. Then

: D_{N}^{*}(x_1,\ldots,x_N)\leq \left(\frac{3}{2}\right)^s \left( \frac{2}{H+1} + \sum_{0<\|h\|_{\infty}\leq H}\frac{1}{r(h)} \left| \frac{1}{N} \sum_{n=1}^{N} e^{2\pi i\langle h,x_n\rangle} \right| \right)

where

: r(h)=\prod_{i=1}^s\max\{1,|h_i|\}\quad\mbox{for}\quad h=(h_1,\ldots,h_s)\in\Z^s.
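The bound is straightforward to evaluate by brute force for small ''s'' and ''H''; the sketch below (an illustration only, with cost (2''H''+1)^''s'') enumerates all nonzero integer vectors ''h'' with ‖''h''‖∞ ≤ ''H'' and plugs them into the formula above.

```python
import itertools
import cmath

def etk_upper_bound(points, H):
    """Erdős–Turán–Koksma upper bound on the star discrepancy D*_N.

    points: list of s-dimensional tuples in [0,1)^s; H: positive integer cutoff.
    Brute-force enumeration of all h with 0 < ||h||_inf <= H -- exponential in s,
    intended only as an illustration of the formula.
    """
    N = len(points)
    s = len(points[0])
    total = 2.0 / (H + 1)
    for h in itertools.product(range(-H, H + 1), repeat=s):
        if all(hi == 0 for hi in h):
            continue
        r_h = 1
        for hi in h:
            r_h *= max(1, abs(hi))
        # exponential sum (1/N) * sum_n exp(2*pi*i*<h, x_n>)
        exp_sum = sum(cmath.exp(2j * cmath.pi * sum(hi * xi for hi, xi in zip(h, x)))
                      for x in points) / N
        total += abs(exp_sum) / r_h
    return (1.5 ** s) * total

# Example: bound for 32 points of a 2-D golden-ratio-style lattice, cutoff H = 5
pts = [((n * 0.618034) % 1.0, (n * 0.414214) % 1.0) for n in range(1, 33)]
print(etk_upper_bound(pts, H=5))
```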


The main conjectures

Conjecture 1. There is a constant ''c''''s'' depending only on the dimension ''s'', such that

: D_{N}^{*}(x_1,\ldots,x_N)\geq c_s\frac{(\ln N)^{s-1}}{N}

for any finite point set {''x''1, ..., ''x''''N''}.

Conjecture 2. There is a constant ''c''′''s'' depending only on ''s'', such that

: D_{N}^{*}(x_1,\ldots,x_N)\geq c'_s\frac{(\ln N)^{s}}{N}

for infinitely many ''N'' for any infinite sequence ''x''1, ''x''2, ''x''3, ....

These conjectures are equivalent. They have been proved for ''s'' ≤ 2 by W. M. Schmidt. In higher dimensions, the corresponding problem is still open. The best-known lower bounds are due to Michael Lacey and collaborators.


Lower bounds

Let ''s'' = 1. Then

: D_N^*(x_1,\ldots,x_N)\geq\frac{1}{2N}

for any finite point set {''x''1, ..., ''x''''N''}.

Let ''s'' = 2. W. M. Schmidt proved that for any finite point set {''x''1, ..., ''x''''N''},

: D_N^*(x_1,\ldots,x_N)\geq C\frac{\log N}{N}

where

: C=\max_{a\geq3}\frac{1}{16}\,\frac{a-2}{a\log a}=0.023335\dots.

For arbitrary dimensions ''s'' > 1, K. F. Roth proved that

: D_N^*(x_1,\ldots,x_N)\geq\frac{1}{2^{4s}}\,\frac{1}{((s-1)\log 2)^{\frac{s-1}{2}}}\,\frac{(\log N)^{\frac{s-1}{2}}}{N}

for any finite point set {''x''1, ..., ''x''''N''}. Jozef Beck established a double log improvement of this result in three dimensions. This was improved by D. Bilyk and M. T. Lacey to a power of a single logarithm. The best known bound for ''s'' > 2 is due to D. Bilyk, M. T. Lacey and A. Vagharshakyan. For ''s'' > 2 there is a ''t'' > 0 so that

: D_N^*(x_1,\ldots,x_N)\geq t\,\frac{(\log N)^{\frac{s-1}{2}+t}}{N}

for any finite point set {''x''1, ..., ''x''''N''}.


Construction of low-discrepancy sequences

Because any distribution of random numbers can be mapped onto a uniform distribution, and quasirandom numbers are mapped in the same way, this article only concerns generation of quasirandom numbers on a multidimensional uniform distribution.

There are constructions of sequences known such that

: D_{N}^{*}(x_1,\ldots,x_N)\leq C\frac{(\ln N)^{s}}{N},

where ''C'' is a certain constant depending on the sequence. After Conjecture 2, these sequences are believed to have the best possible order of convergence. Examples below are the van der Corput sequence, the Halton sequences, and the Sobol sequences. One general limitation is that construction methods can usually only guarantee the order of convergence. In practice, low discrepancy can be achieved only if ''N'' is large enough, and for large given ''s'' this minimum ''N'' can be very large. This means running a Monte Carlo analysis with e.g. ''s'' = 20 variables and ''N'' = 1000 points from a low-discrepancy sequence generator may offer only a very minor accuracy improvement.


Random numbers

Sequences of quasirandom numbers can be generated from random numbers by imposing a negative correlation on those random numbers. One way to do this is to start with a set of random numbers ''r''''i'' and construct the quasirandom numbers from them, working modulo 1. For more than one dimension, Latin squares of the appropriate dimension can be used to provide offsets that ensure the whole domain is covered evenly; a sketch of one construction in this spirit is given below.
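The paragraph above leaves the exact construction open; one widely used member of this family is Latin hypercube sampling, in which each dimension is split into ''N'' equal strata and a random permutation decides which stratum each point uses. The sketch below illustrates that idea as an assumption about the kind of stratified offset meant here, not as the precise Latin-square construction of the text.

```python
import random

def latin_hypercube(n, dim, rng=random.Random(0)):
    """n points in [0,1)^dim: each dimension is split into n equal strata and
    every stratum is hit exactly once, which imposes a negative correlation
    between the points and spreads them evenly over the domain."""
    # one independent random permutation of the strata per dimension
    perms = [rng.sample(range(n), n) for _ in range(dim)]
    samples = []
    for i in range(n):
        point = tuple((perms[k][i] + rng.random()) / n for k in range(dim))
        samples.append(point)
    return samples

# Five 2-D points, one per row and column of a 5x5 stratification
print(latin_hypercube(5, 2))
```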


Additive recurrence

For any irrational \alpha, the sequence

: s_n = \{s_0 + n\alpha\}

has discrepancy tending to zero. Note that the sequence can be defined recursively by

: s_{n+1} = (s_n + \alpha)\bmod 1 \;.

A good value of \alpha gives lower discrepancy than a sequence of independent uniform random numbers.

The discrepancy can be bounded by the approximation exponent of \alpha. If the approximation exponent is \mu, then for any \varepsilon>0, the following bound holds:

: D_N((s_n)) = O_\varepsilon (N^{-1/(\mu-1)+\varepsilon}).

By the Thue–Siegel–Roth theorem, the approximation exponent of any irrational algebraic number is 2, giving a bound of N^{-1+\varepsilon} above.

The recurrence relation above is similar to the recurrence relation used by a linear congruential generator, a poor-quality pseudorandom number generator:

: r_i = (a r_{i-1} + c) \bmod m.

For the low-discrepancy additive recurrence above, ''a'' and ''m'' are chosen to be 1. Note, however, that this will not generate independent random numbers, so it should not be used for purposes requiring independence.

The value of ''c'' with the lowest discrepancy is the fractional part of the golden ratio:

: c = \frac{\sqrt{5}-1}{2} = \varphi - 1 \approx 0.618034.

Another value that is nearly as good is the fractional part of the silver ratio, which is the fractional part of the square root of 2:

: c = \sqrt{2}-1 \approx 0.414214. \,

In more than one dimension, separate quasirandom numbers are needed for each dimension. A convenient set of values that is often used is the square roots of primes from two up, all taken modulo 1:

: c = \sqrt{2}, \sqrt{3}, \sqrt{5}, \sqrt{7}, \sqrt{11}, \ldots \,

However, a set of values based on the generalised golden ratio has been shown to produce more evenly distributed points.

The list of pseudorandom number generators lists methods for generating independent pseudorandom numbers. Note: In few dimensions, the additive recurrence leads to uniform point sets of good quality, but for larger ''s'' (such as ''s'' > 8) other point set generators can offer much lower discrepancies.
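A minimal generator for this additive recurrence, using the golden-ratio fraction in one dimension and square roots of primes modulo 1 in several dimensions, as described above:

```python
import math

def additive_recurrence(n, alpha, seed=0.0):
    """First n terms of s_k = (seed + k*alpha) mod 1, generated recursively."""
    out, s = [], seed % 1.0
    for _ in range(n):
        s = (s + alpha) % 1.0
        out.append(s)
    return out

# One dimension: fractional part of the golden ratio
golden = (math.sqrt(5) - 1) / 2
print(additive_recurrence(5, golden))

# Several dimensions: square roots of primes modulo 1, one irrational per coordinate
alphas = [math.sqrt(p) % 1.0 for p in (2, 3, 5)]
points = list(zip(*(additive_recurrence(5, a) for a in alphas)))
print(points)
```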


van der Corput sequence

Let

: n=\sum_{k=0}^{L-1}d_k(n)b^k

be the ''b''-ary representation of the positive integer ''n'' ≥ 1, i.e. 0 ≤ ''d''''k''(''n'') < ''b''. Set

: g_b(n)=\sum_{k=0}^{L-1}d_k(n)b^{-k-1}.

Then there is a constant ''C'' depending only on ''b'' such that (''g''''b''(''n''))''n'' ≥ 1 satisfies

: D^*_N(g_b(1),\dots,g_b(N))\leq C\frac{\log N}{N},

where ''D''*''N'' is the star discrepancy.
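The function ''g''''b'' is the radical inverse: it mirrors the base-''b'' digits of ''n'' about the radix point. A direct sketch of this digit-reversal:

```python
def van_der_corput(n, b=2):
    """Radical inverse g_b(n): reverse the base-b digits of n behind the radix point."""
    g, inv_base = 0.0, 1.0 / b
    while n > 0:
        n, digit = divmod(n, b)
        g += digit * inv_base
        inv_base /= b
    return g

# First elements of the base-2 van der Corput sequence: 1/2, 1/4, 3/4, 1/8, 5/8, 3/8
print([van_der_corput(n) for n in range(1, 7)])
```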


Halton sequence

The Halton sequence is a natural generalization of the van der Corput sequence to higher dimensions. Let ''s'' be an arbitrary dimension and ''b''1, ..., ''b''''s'' be arbitrary coprime integers greater than 1. Define

: x(n)=(g_{b_1}(n),\dots,g_{b_s}(n)).

Then there is a constant ''C'' depending only on ''b''1, ..., ''b''''s'', such that the sequence (''x''(''n''))''n'' ≥ 1 is an ''s''-dimensional sequence with

: D^*_N(x(1),\dots,x(N))\leq C\frac{(\log N)^s}{N}.
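A sketch of the construction: coordinate ''k'' of point ''n'' is the radical inverse of ''n'' in base ''b''''k''. Using the first ''s'' primes as bases is a common (though not mandated) choice of pairwise coprime bases.

```python
def radical_inverse(n, b):
    """g_b(n): base-b digits of n mirrored behind the radix point."""
    g, inv_base = 0.0, 1.0 / b
    while n > 0:
        n, digit = divmod(n, b)
        g += digit * inv_base
        inv_base /= b
    return g

def halton(n, bases=(2, 3)):
    """n-th point of the Halton sequence for the given pairwise coprime bases."""
    return tuple(radical_inverse(n, b) for b in bases)

# First five 2-D Halton points in bases 2 and 3
print([halton(n) for n in range(1, 6)])
```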


Hammersley set

Let ''b''1, ..., ''b''''s''−1 be coprime positive integers greater than 1. For given ''s'' and ''N'', the ''s''-dimensional Hammersley set of size ''N'' is defined by

: x(n)=\left(g_{b_1}(n),\dots,g_{b_{s-1}}(n),\frac{n}{N}\right)

for ''n'' = 1, ..., ''N''. Then

: D^*_N(x(1),\dots,x(N))\leq C\frac{(\log N)^{s-1}}{N},

where ''C'' is a constant depending only on ''b''1, ..., ''b''''s''−1.

Note: The formulas show that the Hammersley set is actually the Halton sequence, but we get one more dimension for free by adding a linear sweep. This is only possible if ''N'' is known upfront. A linear set is also the set with the lowest possible one-dimensional discrepancy in general. Unfortunately, for higher dimensions, no such "discrepancy record sets" are known. For ''s'' = 2, most low-discrepancy point set generators deliver at least near-optimum discrepancies.
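Because ''N'' is fixed in advance, the last coordinate can be the linear sweep ''n''/''N'' while the other coordinates are Halton coordinates. A self-contained sketch:

```python
def radical_inverse(n, b):
    """g_b(n): base-b digits of n mirrored behind the radix point."""
    g, inv_base = 0.0, 1.0 / b
    while n > 0:
        n, digit = divmod(n, b)
        g += digit * inv_base
        inv_base /= b
    return g

def hammersley(N, bases=(2,)):
    """s-dimensional Hammersley set of size N: s-1 radical-inverse coordinates
    (bases must be pairwise coprime) plus one linear coordinate n/N."""
    return [tuple(radical_inverse(n, b) for b in bases) + (n / N,)
            for n in range(1, N + 1)]

# A 2-D Hammersley set of 8 points: base-2 van der Corput in x, linear sweep in y
for p in hammersley(8):
    print(p)
```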


Sobol sequence

The Antonov–Saleev variant of the Sobol sequence generates numbers between zero and one directly as binary fractions of length ''w'', from a set of ''w'' special binary fractions, V_i, i = 1, 2, \dots, w, called direction numbers. The bits of the Gray code of ''i'', ''G''(''i''), are used to select direction numbers. To get the Sobol sequence value s_i, take the exclusive or of the binary value of the Gray code of ''i'' with the appropriate direction number. The number of dimensions required affects the choice of V_i.
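For the first dimension the direction numbers are simply V_j = 1/2^j, so the Gray-code construction reduces to a reordered base-2 van der Corput sequence. The sketch below implements the XOR-of-direction-numbers rule described above for that one-dimensional case only; higher dimensions need direction numbers derived from primitive polynomials, which are omitted here.

```python
W = 32  # number of bits in the binary fractions

# Direction numbers for the first Sobol dimension: V_j = 1/2^j, stored as
# W-bit integers V_j * 2^W = 2^(W - j) for j = 1..W.
DIRECTION = [1 << (W - j) for j in range(1, W + 1)]

def sobol_1d(i):
    """i-th value of the one-dimensional Sobol sequence (Gray-code construction):
    XOR together the direction numbers selected by the set bits of the Gray code of i."""
    gray = i ^ (i >> 1)
    acc, j = 0, 0
    while gray:
        if gray & 1:
            acc ^= DIRECTION[j]
        gray >>= 1
        j += 1
    return acc / (1 << W)

# 1/2, 3/4, 1/4, 3/8, 7/8, 5/8, 1/8, ...
print([sobol_1d(i) for i in range(1, 8)])
```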


Poisson disk sampling

Poisson disk sampling is popular in video games to rapidly place objects in a way that appears random-looking but guarantees that every two points are separated by at least the specified minimum distance. This does not guarantee low discrepancy (as e.g. Sobol does), but at least a significantly lower discrepancy than pure random sampling. The goal of these sampling patterns is based on frequency analysis rather than discrepancy; they belong to the class of so-called "blue noise" patterns.
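A naive dart-throwing sketch of the idea: candidates are drawn uniformly and rejected if they fall closer than the minimum distance ''r'' to an already accepted point. Production code usually uses a grid-accelerated method such as Bridson's algorithm instead; this version only illustrates the acceptance rule.

```python
import math
import random

def poisson_disk_dart_throwing(r, max_attempts=10_000, rng=random.Random(0)):
    """Points in the unit square with pairwise distance >= r, via naive rejection
    (dart throwing). O(n) distance checks per candidate; illustration only."""
    accepted = []
    for _ in range(max_attempts):
        candidate = (rng.random(), rng.random())
        if all(math.dist(candidate, p) >= r for p in accepted):
            accepted.append(candidate)
    return accepted

points = poisson_disk_dart_throwing(0.1)
print(len(points), "points accepted with minimum spacing 0.1")
```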


Graphical examples

The points plotted below are the first 100, 1000, and 10000 elements in a sequence of the Sobol' type. For comparison, 10000 elements of a sequence of pseudorandom points are also shown. The low-discrepancy sequence was generated by TOMS algorithm 659. An implementation of the algorithm in Fortran is available from Netlib.


See also

* Discrepancy theory
* Markov chain Monte Carlo
* Quasi-Monte Carlo method
* Sparse grid
* Systematic sampling


Notes


References



External links


* Collected Algorithms of the ACM ''(See algorithms 647, 659, and 738.)''
* Quasi-random sequences from the GNU Scientific Library
* Quasi-random sampling subject to constraints at FinancialMathematics.Com
* C++ generator of Sobol sequence