Extractor (mathematics)
An (N,M,D,K,\epsilon)-extractor is a bipartite graph with N nodes on the left and M nodes on the right, such that each node on the left has D neighbors (on the right), with the added property that for any subset A of the left vertices of size at least K, the distribution on right vertices obtained by choosing a random node in A and then following a random edge to get a node x on the right side is \epsilon-close to the uniform distribution in terms of total variation distance. A disperser is a related graph.

An equivalent way to view an extractor is as a bivariate function
:E : [N] \times [D] \rightarrow [M]
in the natural way. With this view it turns out that the extractor property is equivalent to: for any source of randomness X that gives n bits with min-entropy \log K, the distribution E(X,U_D) is \epsilon-close to U_M, where U_T denotes the uniform distribution on [T]. Extractors are interesting when they can be co ...
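The definition can be checked mechanically on small graphs. The following is a minimal brute-force sketch, not taken from the article; the representation (a map from left vertices to neighbour lists, right vertices labelled 0..M-1) and all names are assumptions made for illustration.

from collections import Counter
from itertools import combinations

def tv_from_uniform(neighbours, A, M):
    # Distribution on right vertices: pick a uniform vertex in A,
    # then follow a uniformly random edge out of it.
    dist = Counter()
    for a in A:
        for b in neighbours[a]:
            dist[b] += 1.0 / (len(A) * len(neighbours[a]))
    # Total variation distance to the uniform distribution on the M right vertices.
    return 0.5 * sum(abs(dist.get(b, 0.0) - 1.0 / M) for b in range(M))

def is_extractor(neighbours, M, K, eps):
    # (N, M, D, K, eps)-extractor property: every left subset of size
    # at least K induces a right distribution that is eps-close to uniform.
    left = list(neighbours)
    return all(tv_from_uniform(neighbours, A, M) <= eps
               for k in range(K, len(left) + 1)
               for A in combinations(left, k))

This check is exponential-time and only useful for tiny examples; its purpose is to spell out the quantifiers in the definition above.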


Bipartite Graph
In the mathematical field of graph theory, a bipartite graph (or bigraph) is a graph whose vertices can be divided into two disjoint and independent sets U and V, that is, every edge connects a vertex in U to one in V. Vertex sets U and V are usually called the ''parts'' of the graph. Equivalently, a bipartite graph is a graph that does not contain any odd-length cycles. The two sets U and V may be thought of as a coloring of the graph with two colors: if one colors all nodes in U blue and all nodes in V red, each edge has endpoints of differing colors, as is required in the graph coloring problem. In contrast, such a coloring is impossible in the case of a non-bipartite graph, such as a triangle: after one node is colored blue and another red, the third vertex of the triangle is connected to vertices of both colors, preventing it from being assigned either color. One often writes G=(U,V,E) to denote a bipartite graph whose partition has the parts U and V, with E denoting ...
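The two-coloring characterization lends itself to a direct test. A hedged sketch (function and variable names are chosen here, not taken from the text): breadth-first search assigns alternating colors and reports a conflict exactly when an odd cycle is present.

from collections import deque

def is_bipartite(adj):
    # adj: dict mapping each vertex to an iterable of its neighbours
    # (undirected graph). Returns True iff a proper 2-coloring exists.
    color = {}
    for start in adj:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]   # give v the opposite color
                    queue.append(v)
                elif color[v] == color[u]:    # both endpoints same color: odd cycle
                    return False
    return True

# A triangle is not bipartite; a single edge is.
assert not is_bipartite({0: [1, 2], 1: [0, 2], 2: [0, 1]})
assert is_bipartite({0: [1], 1: [0]})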


Graph Theory
In mathematics, graph theory is the study of ''graphs'', which are mathematical structures used to model pairwise relations between objects. A graph in this context is made up of ''vertices'' (also called ''nodes'' or ''points'') which are connected by ''edges'' (also called ''links'' or ''lines''). A distinction is made between undirected graphs, where edges link two vertices symmetrically, and directed graphs, where edges link two vertices asymmetrically. Graphs are one of the principal objects of study in discrete mathematics.

Definitions
Definitions in graph theory vary. The following are some of the more basic ways of defining graphs and related mathematical structures.

Graph
In one restricted but very common sense of the term, a graph is an ordered pair G=(V,E) comprising:
* V, a set of vertices (also called nodes or points);
* E \subseteq \{ \{x,y\} \mid x,y \in V,\ x \neq y \}, a set of edges (also called links or lines), which are unordered pairs of vertices (that is, an edge is associated with t ...
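A hedged rendering of the ordered-pair definition as data (the particular vertices and edges below are made up purely for illustration):

# Vertices as a set; edges as a set of unordered pairs (frozensets).
V = {1, 2, 3, 4}
E = {frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 4})}
G = (V, E)

# Every edge is an unordered pair of two distinct vertices drawn from V.
assert all(len(e) == 2 and e <= V for e in E)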


Uniform Distribution (continuous)
In probability theory and statistics, the continuous uniform distribution or rectangular distribution is a family of symmetric probability distributions. The distribution describes an experiment where there is an arbitrary outcome that lies between certain bounds. The bounds are defined by the parameters, ''a'' and ''b'', which are the minimum and maximum values. The interval can either be closed (e.g. [a, b]) or open (e.g. (a, b)). Therefore, the distribution is often abbreviated ''U''(''a'', ''b''), where U stands for uniform distribution. The difference between the bounds defines the interval length; all intervals of the same length on the distribution's support are equally probable. It is the maximum entropy probability distribution for a random variable ''X'' under no constraint other than that it is contained in the distribution's support.

Definitions
Probability density function
The probability density function of the continuous uniform distribution is:
:f(x)=\begin{cases} \dfrac{1}{b-a} & \text{for } a \le x \le b, \\ 0 & \text{for } x < a \ \text{or} \ x > b \end{cases} ...
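A quick worked instance of the density above (the numbers are chosen only for illustration): if X \sim U(0, 10), then
:\Pr(2 \le X \le 5) = \int_2^5 \frac{1}{10-0}\,dx = \frac{3}{10} = 0.3.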


Total Variation Distance
In probability theory, the total variation distance is a distance measure for probability distributions. It is an example of a statistical distance metric, and is sometimes called the statistical distance, statistical difference or variational distance.

Definition
Consider a measurable space (\Omega, \mathcal{F}) and probability measures P and Q defined on (\Omega, \mathcal{F}). The total variation distance between P and Q is defined as:
:\delta(P,Q)=\sup_{A\in\mathcal{F}}\left|P(A)-Q(A)\right|.
Informally, this is the largest possible difference between the probabilities that the two probability distributions can assign to the same event.

Properties
Relation to other distances
The total variation distance is related to the Kullback–Leibler divergence by Pinsker’s inequality:
:\delta(P,Q) \le \sqrt{\tfrac{1}{2} D_{\mathrm{KL}}(P\parallel Q)}.
One also has the following inequality, due to Bretagnolle and Huber (see also Tsybakov), which has the advantage of providing a non-vacuous bound even when D_{\mathrm{KL}}(P\parallel Q)>2:
:\delta(P,Q) \le \sqrt{1 - e^{-D_{\mathrm{KL}}(P\parallel Q)}} ...
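For distributions on a finite set, the supremum over events reduces to half the L1 distance between the probability mass functions. A hedged Python sketch of that finite case (names chosen here for illustration):

def total_variation(p, q):
    # p, q: dicts mapping outcomes to probabilities, each summing to 1.
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

# Example: a fair coin and a 60/40 coin are at total variation distance 0.1.
assert abs(total_variation({"H": 0.5, "T": 0.5}, {"H": 0.6, "T": 0.4}) - 0.1) < 1e-12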


Disperser
A disperser is a one-sided extractor. Where an extractor requires that every event gets approximately the same probability under the uniform distribution and the extracted distribution, a disperser only gives a one-sided guarantee: for an event A \subseteq \{0,1\}^{m} we have
:\Pr_{U_m}[A] > 1 - \epsilon \;\Rightarrow\; \Pr_{Dis(X,U_d)}[A] > 0.

Definition (Disperser): ''A'' (k, \epsilon)''-disperser is a function''
:Dis: \{0,1\}^{n}\times \{0,1\}^{d}\rightarrow \{0,1\}^{m}
''such that for every distribution'' X ''on'' \{0,1\}^{n} ''with'' H_{\infty}(X) \geq k ''the support of the distribution'' Dis(X,U_{d}) ''is of size at least'' (1-\epsilon)2^{m}.

Graph theory
An (''N'', ''M'', ''D'', ''K'', ''e'')-disperser is a bipartite graph with ''N'' vertices on the left side, each with degree ''D'', and ''M'' vertices on the right side, such that every subset of ''K'' vertices on the left side is connected to more than (1 − ''e'')''M'' vertices on the right. An extractor is a related type of graph that guarantees an even stronger property; every (''N'', ''M'', ''D'', ''K'', ...
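A hedged brute-force check of the graph-theoretic property just stated (the graph representation and names are illustrative assumptions; it suffices to test subsets of size exactly ''K'', since larger subsets reach at least as many right vertices):

from itertools import combinations

def is_disperser(neighbours, M, K, e):
    # neighbours: dict mapping each left vertex to its list of right neighbours.
    # (N, M, D, K, e)-disperser: every K-subset of left vertices is connected
    # to more than (1 - e) * M distinct right vertices.
    left = list(neighbours)
    for A in combinations(left, K):
        reached = set()
        for a in A:
            reached.update(neighbours[a])
        if len(reached) <= (1 - e) * M:
            return False
    return True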




Min-entropy
The min-entropy, in information theory, is the smallest of the Rényi family of entropies, corresponding to the most conservative way of measuring the unpredictability of a set of outcomes, as the negative logarithm of the probability of the ''most likely'' outcome. The various Rényi entropies are all equal for a uniform distribution, but measure the unpredictability of a nonuniform distribution in different ways. The min-entropy is never greater than the ordinary or Shannon entropy (which measures the average unpredictability of the outcomes) and that in turn is never greater than the Hartley or max-entropy, defined as the logarithm of the ''number'' of outcomes with nonzero probability. As with the classical Shannon entropy and its quantum generalization, the von Neumann entropy, one can define a conditional version of min-entropy. The conditional quantum min-entropy is a one-shot, or conservative, analog of conditional quantum entropy. To interpret a conditional information ...
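In symbols, the description above corresponds to the standard definition (stated here for reference, not quoted from the excerpt): for a discrete random variable X,
:H_{\infty}(X) = -\log_2 \max_{x} \Pr[X = x].
For a uniform distribution on 2^{k} outcomes this equals k bits, which is exactly the condition H_{\infty}(X) \ge k used in the extractor and disperser definitions above.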


Randomness
In common usage, randomness is the apparent or actual lack of pattern or predictability in events. A random sequence of events, symbols or steps often has no order and does not follow an intelligible pattern or combination. Individual random events are, by definition, unpredictable, but if the probability distribution is known, the frequency of different outcomes over repeated events (or "trials") is predictable. (Strictly speaking, the frequency of an outcome will converge almost surely to a predictable value as the number of trials becomes arbitrarily large; non-convergence or convergence to a different value is possible, but has probability zero.) For example, when throwing two dice, the outcome of any particular roll is unpredictable, but a sum of 7 will tend to occur twice as often as 4. In this view, randomness is not haphazardness; it is a measure of uncertainty of an outcome. Randomness applies to concepts of chance, probability, and information entropy. The fields of ...
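The dice figure can be verified by counting:
:\Pr[\text{sum}=7]=\tfrac{6}{36}, \qquad \Pr[\text{sum}=4]=\tfrac{3}{36},
since a sum of 7 arises from six ordered pairs (1,6), (2,5), (3,4), (4,3), (5,2), (6,1), while a sum of 4 arises from only three, so 7 is exactly twice as likely as 4.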


Randomness Extractor
A randomness extractor, often simply called an "extractor", is a function which, when applied to output from a weakly random entropy source together with a short, uniformly random seed, generates a highly random output that appears independent from the source and uniformly distributed. Examples of weakly random sources include radioactive decay or thermal noise; the only restriction on possible sources is that there is no way they can be fully controlled, calculated or predicted, and that a lower bound on their entropy rate can be established. For a given source, a randomness extractor can even be considered to be a true random number generator (TRNG); but there is no single extractor that has been proven to produce truly random output from any type of weakly random source. Sometimes the term "bias" is used to denote a weakly random sou ...
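As a hedged, concrete illustration (a classic construction, not one described in this excerpt, and seedless rather than seeded): the von Neumann extractor turns independent flips of a coin with unknown fixed bias into unbiased bits by keeping only the pairs whose two flips differ.

import random

def von_neumann_extractor(bits):
    # bits: iterable of 0/1 values, assumed independent with a fixed unknown bias.
    # For each consecutive pair: 01 -> output 0, 10 -> output 1, 00/11 -> discard.
    it = iter(bits)
    for a in it:
        b = next(it, None)
        if b is None:
            return
        if a != b:
            yield a

# Example: a heavily biased but independent source still yields fair bits,
# just at a reduced rate.
biased = (1 if random.random() < 0.9 else 0 for _ in range(10000))
fair_bits = list(von_neumann_extractor(biased))

The seeded extractors discussed in this article are more general: they handle sources that are only guaranteed a min-entropy bound, at the cost of consuming a short truly random seed.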


Probabilistic Method
The probabilistic method is a nonconstructive method, primarily used in combinatorics and pioneered by Paul Erdős, for proving the existence of a prescribed kind of mathematical object. It works by showing that if one randomly chooses objects from a specified class, the probability that the result is of the prescribed kind is strictly greater than zero. Although the proof uses probability, the final conclusion is determined for ''certain'', without any possible error. This method has now been applied to other areas of mathematics such as number theory, linear algebra, and real analysis, as well as in computer science (e.g. randomized rounding) and information theory.

Introduction
If every object in a collection of objects fails to have a certain property, then the probability that a random object chosen from the collection has that property is zero. Similarly, showing that the probability is (strictly) less than 1 can be used to prove the existence of an object that does ''not ...
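Compressed into one line (a restatement of the introduction above, not a quotation): for an object \omega drawn at random from a collection \mathcal{C} and a property P,
:\Pr_{\omega \sim \mathcal{C}}[\,\omega \text{ has } P\,] > 0 \;\Longrightarrow\; \text{some } \omega \in \mathcal{C} \text{ has } P, \qquad \Pr_{\omega \sim \mathcal{C}}[\,\omega \text{ has } P\,] < 1 \;\Longrightarrow\; \text{some } \omega \in \mathcal{C} \text{ lacks } P.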


Polynomial Time
In computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to be related by a constant factor. Since an algorithm's running time may vary among different inputs of the same size, one commonly considers the worst-case time complexity, which is the maximum amount of time required for inputs of a given size. Less common, and usually specified explicitly, is the average-case complexity, which is the average of the time taken on inputs of a given size (this makes sense because there are only a finite number of possible inputs of a given size). In both cases, the time complexity is generally expresse ...
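In symbols (a standard formulation, stated here for reference rather than quoted from the excerpt): if t(x) denotes the number of elementary operations performed on input x, then on inputs of size n
:T_{\mathrm{worst}}(n) = \max_{|x| = n} t(x), \qquad T_{\mathrm{avg}}(n) = \frac{1}{|\{x : |x| = n\}|} \sum_{|x| = n} t(x),
and an algorithm runs in polynomial time when T_{\mathrm{worst}}(n) = O(n^{c}) for some constant c.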


Computer Science
Computer science is the study of computation, automation, and information. Computer science spans theoretical disciplines (such as algorithms, theory of computation, information theory, and automation) to practical disciplines (including the design and implementation of hardware and software). Computer science is generally considered an area of academic research, distinct from computer programming. Algorithms and data structures are central to computer science. The theory of computation concerns abstract models of computation and general classes of problems that can be solved using them. The fields of cryptography and computer security involve studying the means for secure communication and for preventing security vulnerabilities. Computer graphics and computational geometry address the generation of images. Progr ...


Graph Families
Graph may refer to:

Mathematics
*Graph (discrete mathematics), a structure made of vertices and edges
**Graph theory, the study of such graphs and their properties
*Graph (topology), a topological space resembling a graph in the sense of discrete mathematics
*Graph of a function
*Graph of a relation
*Graph paper
*Chart, a means of representing data (also called a graph)

Computing
*Graph (abstract data type), an abstract data type representing relations or connections
*graph (Unix), Unix command-line utility
*Conceptual graph, a model for knowledge representation and reasoning

Other uses
*HMS ''Graph'', a submarine of the UK Royal Navy

See also
*Complex network
*Graf
*Graff (other)
*Graph database
*Grapheme, in linguistics
*Graphemics
*Graphic (other)
*-graphy (suffix from the Greek for "describe," "write" or "draw")
*List of information graphics software
*Statistical graphics

Statistical graphics, also known as statistical graphical techniques, are graphi ...