Balls Into Bins
The balls into bins (or balanced allocations) problem is a classic problem in probability theory that has many applications in computer science. The problem involves ''m'' balls and ''n'' boxes (or "bins"). Each time, a single ball is placed into one of the bins. After all balls are in the bins, we look at the number of balls in each bin; we call this number the ''load'' on the bin. The problem can be modelled using a multinomial distribution, and may involve asking a question such as: What is the expected number of bins with a ball in them? Obviously, it is possible to make the load as small as ''m''/''n'' by putting each ball into the least loaded bin. The interesting case is when the bin is selected at random, or at least partially at random. A powerful balls-into-bins paradigm is the "power of two random choices" where each ball chooses two (or more) random bins and is placed in the least loaded of them. This paradigm has found wide practical applications in shared-memory emulatio ...
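
As an illustration of the two allocation rules contrasted above, here is a small simulation sketch in Python (the function names and parameters are chosen for this example, not taken from any particular source): each ball either picks one bin uniformly at random, or picks two bins and joins the less loaded one.

```python
import random

def max_load_single_choice(m, n, rng=random):
    """Throw m balls into n bins, each ball choosing one bin uniformly at random."""
    loads = [0] * n
    for _ in range(m):
        loads[rng.randrange(n)] += 1
    return max(loads)

def max_load_two_choices(m, n, rng=random):
    """Each ball samples two bins uniformly at random and joins the less loaded one."""
    loads = [0] * n
    for _ in range(m):
        i, j = rng.randrange(n), rng.randrange(n)
        loads[i if loads[i] <= loads[j] else j] += 1
    return max(loads)

if __name__ == "__main__":
    m = n = 10_000
    print("one choice :", max_load_single_choice(m, n))   # roughly of order log n / log log n
    print("two choices:", max_load_two_choices(m, n))     # roughly of order log log n
```

With ''m'' = ''n'', the purely random rule typically produces a noticeably larger maximum load than the two-choice rule, which is exactly the improvement the "power of two random choices" paradigm exploits.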

Probability Theory
Probability theory is the branch of mathematics concerned with probability. Although there are several different probability interpretations, probability theory treats the concept in a rigorous mathematical manner by expressing it through a set of axioms. Typically these axioms formalise probability in terms of a probability space, which assigns a measure taking values between 0 and 1, termed the probability measure, to a set of outcomes called the sample space. Any specified subset of the sample space is called an event. Central subjects in probability theory include discrete and continuous random variables, probability distributions, and stochastic processes (which provide mathematical abstractions of non-deterministic or uncertain processes or measured quantities that may either be single occurrences or evolve over time in a random fashion). Although it is not possible to perfectly predict random events, much can be said about their behavior. Two major results in probability ...
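
To make the axiomatic description above concrete, the standard Kolmogorov formulation of a probability space (Ω, ℱ, P) can be sketched as follows; this formalization is standard, though not spelled out in the excerpt:

```latex
% A probability space (\Omega, \mathcal{F}, P): sample space, collection of events,
% and a probability measure satisfying three axioms.
\[
  P(A) \ge 0 \quad \text{for every event } A \in \mathcal{F}, \qquad P(\Omega) = 1,
\]
\[
  P\Bigl(\bigcup_{i=1}^{\infty} A_i\Bigr) = \sum_{i=1}^{\infty} P(A_i)
  \quad \text{for pairwise disjoint events } A_1, A_2, \ldots \in \mathcal{F}.
\]
```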

Computer Science
Computer science is the study of computation, automation, and information. Computer science spans theoretical disciplines (such as algorithms, theory of computation, information theory, and automation) to practical disciplines (including the design and implementation of hardware and software). Computer science is generally considered an area of academic research and distinct from computer programming. Algorithms and data structures are central to computer science. The theory of computation concerns abstract models of computation and general classes of problems that can be solved using them. The fields of cryptography and computer security involve studying the means for secure communication and for preventing security vulnerabilities. Computer graphics and computational geometry address the generation of images. Progr ...

Multinomial Distribution
In probability theory, the multinomial distribution is a generalization of the binomial distribution. For example, it models the probability of counts for each side of a ''k''-sided die rolled ''n'' times. For ''n'' independent trials each of which leads to a success for exactly one of ''k'' categories, with each category having a given fixed success probability, the multinomial distribution gives the probability of any particular combination of numbers of successes for the various categories. When ''k'' is 2 and ''n'' is 1, the multinomial distribution is the Bernoulli distribution. When ''k'' is 2 and ''n'' is bigger than 1, it is the binomial distribution. When ''k'' is bigger than 2 and ''n'' is 1, it is the categorical distribution. The term "multinoulli" is sometimes used for the categorical distribution to emphasize this four-way relationship (so ''n'' determines the suffix, and ''k'' the prefix). The Bernoulli distribution models the outcome of a single Bernoulli trial ...
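
The "probability of any particular combination of numbers of successes" mentioned above is given by the multinomial probability mass function; written out in standard notation (not quoted from the excerpt):

```latex
% n trials, k categories with probabilities p_1, ..., p_k (summing to 1),
% and observed counts x_1, ..., x_k with x_1 + ... + x_k = n.
\[
  \Pr(X_1 = x_1, \ldots, X_k = x_k)
  = \frac{n!}{x_1! \, x_2! \cdots x_k!} \; p_1^{x_1} p_2^{x_2} \cdots p_k^{x_k}.
\]
```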

With High Probability
In mathematics, an event that occurs with high probability (often shortened to w.h.p. or WHP) is one whose probability depends on a certain number ''n'' and goes to 1 as ''n'' goes to infinity, i.e. the probability of the event occurring can be made as close to 1 as desired by making ''n'' big enough.
Applications
The term WHP is especially used in computer science, in the analysis of probabilistic algorithms. For example, consider a certain probabilistic algorithm on a graph with ''n'' nodes. If the probability that the algorithm returns the correct answer is 1 - 1/''n'', then when the number of nodes is very large, the algorithm is correct with a probability that is very near 1. This is expressed concisely by saying that the algorithm is correct WHP. Some examples where this term is used are:
* Miller–Rabin primality test: a probabilistic algorithm for testing whether a given number ''n'' is prime or composite. If ''n'' is composite, the test will detect ''n'' as composite WHP. Th ...
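
In symbols, the definition above says that a sequence of events ''E''<sub>''n''</sub> occurs with high probability when its probability tends to 1; the graph-algorithm example fits this pattern directly:

```latex
% W.h.p. in symbols, plus the graph-algorithm example from the text.
\[
  \Pr(E_n) \longrightarrow 1 \quad \text{as } n \to \infty,
  \qquad\text{for example}\qquad
  \Pr(\text{algorithm correct}) = 1 - \tfrac{1}{n} \;\to\; 1 .
\]
```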

Gamma Function
In mathematics, the gamma function (represented by Γ, the capital letter gamma from the Greek alphabet) is one commonly used extension of the factorial function to complex numbers. The gamma function is defined for all complex numbers except the non-positive integers. For every positive integer ''n'', \Gamma(n) = (n-1)!\,. Derived by Daniel Bernoulli, for complex numbers with a positive real part, the gamma function is defined via a convergent improper integral: \Gamma(z) = \int_0^\infty t^{z-1} e^{-t}\,dt, \qquad \Re(z) > 0\,. The gamma function then is defined as the analytic continuation of this integral function to a meromorphic function that is holomorphic in the whole complex plane except zero and the negative integers, where the function has simple poles. The gamma function has no zeroes, so the reciprocal gamma function is an entire function. In fact, the gamma function corresponds to the Mellin transform of the negative exponential function: \Gamma(z) = \mathcal{M}\{e^{-x}\}(z) ...
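
As a quick numerical check of the facts above, a short Python sketch using the standard library's math.gamma; the value Γ(1/2) = √π is a well-known identity added here only for illustration:

```python
import math

# Gamma extends the factorial: Gamma(n) = (n-1)! for every positive integer n.
for n in range(1, 8):
    assert math.isclose(math.gamma(n), math.factorial(n - 1))

# Gamma is also defined at non-integer arguments, e.g. Gamma(1/2) = sqrt(pi).
print(math.gamma(0.5), math.sqrt(math.pi))   # both ≈ 1.7724538509055159
```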

Load Balancing (computing)
In computing, load balancing is the process of distributing a set of tasks over a set of resources (computing units), with the aim of making their overall processing more efficient. Load balancing can optimize the response time and avoid unevenly overloading some compute nodes while other compute nodes are left idle. Load balancing is the subject of research in the field of parallel computers. Two main approaches exist: static algorithms, which do not take into account the state of the different machines, and dynamic algorithms, which are usually more general and more efficient but require exchanges of information between the different computing units, at the risk of a loss of efficiency.
Problem overview
A load-balancing algorithm always tries to answer a specific problem. Among other things, the nature of the tasks, the algorithmic complexity, the hardware architecture on which the algorithms will run, as well as the required error tolerance, must be taken into account. Therefore c ...
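
The static/dynamic distinction above can be illustrated with a minimal sketch (the names and task costs are illustrative, not from the article): a round-robin policy that ignores machine state versus a greedy policy that always sends the next task to the currently least loaded worker.

```python
import heapq
from typing import Sequence

def round_robin(task_costs: Sequence[float], n_workers: int) -> list[float]:
    """Static policy: worker i receives tasks i, i+n, i+2n, ... regardless of load."""
    loads = [0.0] * n_workers
    for i, cost in enumerate(task_costs):
        loads[i % n_workers] += cost
    return loads

def greedy_least_loaded(task_costs: Sequence[float], n_workers: int) -> list[float]:
    """State-aware policy: each task goes to the currently least loaded worker."""
    heap = [(0.0, w) for w in range(n_workers)]   # (current load, worker id)
    heapq.heapify(heap)
    loads = [0.0] * n_workers
    for cost in task_costs:
        load, w = heapq.heappop(heap)
        loads[w] = load + cost
        heapq.heappush(heap, (loads[w], w))
    return loads

tasks = [5, 3, 8, 1, 9, 2, 7, 4]
print(round_robin(tasks, 3), greedy_least_loaded(tasks, 3))
```

The greedy policy needs up-to-date load information from every worker, which is exactly the information exchange (and potential overhead) the article attributes to dynamic algorithms.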

Hash Function
A hash function is any function that can be used to map data of arbitrary size to fixed-size values. The values returned by a hash function are called ''hash values'', ''hash codes'', ''digests'', or simply ''hashes''. The values are usually used to index a fixed-size table called a ''hash table''. Use of a hash function to index a hash table is called ''hashing'' or ''scatter storage addressing''. Hash functions and their associated hash tables are used in data storage and retrieval applications to access data in a small and nearly constant time per retrieval. They require an amount of storage space only fractionally greater than the total space required for the data or records themselves. Hashing is a computationally and storage space-efficient form of data access that avoids the non-constant access time of ordered and unordered lists and structured trees, and the often exponential storage requirements of direct access of state spaces of large or variable-length keys. Use of ...
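
A minimal sketch of the idea described above, mapping arbitrary-size keys to fixed-size values and then folding them into a table index (the FNV-1a constants used here are the standard 64-bit parameters; everything else is illustrative):

```python
N_BUCKETS = 16          # illustrative fixed table size

def bucket_index(key: str, n_buckets: int = N_BUCKETS) -> int:
    """FNV-1a-style hash of the key, reduced modulo the table size to give a bucket index."""
    h = 0xcbf29ce484222325                      # FNV-1a 64-bit offset basis
    for byte in key.encode("utf-8"):
        h ^= byte
        h = (h * 0x100000001b3) & 0xFFFFFFFFFFFFFFFF   # FNV prime, kept to 64 bits
    return h % n_buckets

for k in ["alpha", "beta", "gamma"]:
    print(k, "->", bucket_index(k))
```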

Hash Table
In computing, a hash table, also known as hash map, is a data structure that implements an associative array or dictionary. It is an abstract data type that maps keys to values. A hash table uses a hash function to compute an ''index'', also called a ''hash code'', into an array of ''buckets'' or ''slots'', from which the desired value can be found. During lookup, the key is hashed and the resulting hash indicates where the corresponding value is stored. Ideally, the hash function will assign each key to a unique bucket, but most hash table designs employ an imperfect hash function, which might cause hash ''collisions'' where the hash function generates the same index for more than one key. Such collisions are typically accommodated in some way. In a well-dimensioned hash table, the average time complexity for each lookup is independent of the number of elements stored in the table. Many hash table designs also allow arbitrary insertions and deletions of key–value pairs, ...
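
For illustration, a toy hash table that resolves collisions by separate chaining, one typical way collisions are "accommodated" as described above (a sketch only; real implementations such as Python's built-in dict are far more refined):

```python
class ChainedHashTable:
    """Minimal hash map: buckets are lists of (key, value) pairs (separate chaining)."""

    def __init__(self, n_buckets: int = 8):
        self.buckets = [[] for _ in range(n_buckets)]

    def _index(self, key) -> int:
        return hash(key) % len(self.buckets)    # hash code folded into a bucket index

    def put(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:                        # key already present: overwrite its value
                bucket[i] = (key, value)
                return
        bucket.append((key, value))             # new key (possibly a collision): extend the chain

    def get(self, key, default=None):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        return default

t = ChainedHashTable()
t.put("apple", 3)
t.put("pear", 5)
print(t.get("apple"), t.get("missing"))         # 3 None
```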

Fair Cake-cutting
Fair cake-cutting is a kind of fair division problem. The problem involves a ''heterogeneous'' resource, such as a cake with different toppings, that is assumed to be ''divisible'' – it is possible to cut arbitrarily small pieces of it without destroying their value. The resource has to be divided among several partners who have different preferences over different parts of the cake, i.e., some people prefer the chocolate toppings, some prefer the cherries, some just want as large a piece as possible. The division should be ''unanimously'' fair: each person should receive a piece that he or she believes to be a fair share. The "cake" is only a metaphor; procedures for fair cake-cutting can be used to divide various kinds of resources, such as land estates, advertisement space or broadcast time. The prototypical procedure for fair cake-cutting is divide and choose, which is already mentioned in the Book of Genesis. It solves the fair division problem for two people. The modern ...
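
A rough sketch of divide and choose for two partners, with each partner's preferences modelled as a value density on the unit interval (the representation and helper names are assumptions made for this example, not from the article):

```python
def cumulative(density, x, steps=10_000):
    """Value an agent assigns to the piece [0, x], by simple numerical integration."""
    dx = x / steps
    return sum(density(i * dx) for i in range(steps)) * dx

def divide_and_choose(cutter, chooser):
    # The cutter finds a point splitting the cake into two pieces of equal value to them.
    lo, hi = 0.0, 1.0
    total = cumulative(cutter, 1.0)
    for _ in range(50):                          # bisection on the cutter's valuation
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if cumulative(cutter, mid) < total / 2 else (lo, mid)
    cut = (lo + hi) / 2
    # The chooser takes the piece they value more; the cutter keeps the other one.
    left_value = cumulative(chooser, cut)
    right_value = cumulative(chooser, 1.0) - left_value
    chooser_piece = (0.0, cut) if left_value >= right_value else (cut, 1.0)
    return cut, chooser_piece

# Example: the cutter only values the left half, the chooser values the cake uniformly.
cut, piece = divide_and_choose(lambda x: 2.0 if x < 0.5 else 0.0, lambda x: 1.0)
print(cut, piece)   # cut ≈ 0.25; the chooser takes the larger right piece (0.25, 1.0)
```

Both partners end up with a piece they value as at least half of the cake by their own measure, which is the sense in which the division is unanimously fair for two people.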

Edmonds–Pruhs Protocol
Edmonds–Pruhs protocol is a protocol for fair cake-cutting. Its goal is to create a partially proportional division of a heterogeneous resource among ''n'' people, such that each person receives a subset of the cake which that person values as at least 1/(''an'') of the total, where ''a'' ≥ 10 is some sufficiently large constant. It is a randomized algorithm whose running time is O(''n'') with probability close to 1. The protocol was developed by Jeff Edmonds and Kirk Pruhs, who later improved it in joint work with Jaisingh Solanki.
Motivation
A proportional division of a cake can be achieved using the recursive halving algorithm in time O(''n'' log ''n''). Several hardness results show that this run-time is optimal under a wide variety of assumptions. In particular, recursive halving is the fastest possible algorithm for achieving full proportionality when the pieces must be contiguous, and it is the fastest possible deterministic algorithm for achieving even partia ...
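
The recursive-halving algorithm mentioned above can be sketched along the same lines as the previous example; valuations are again modelled as densities on [0, 1], and the helper names and numerical details are illustrative rather than taken from the paper:

```python
def value(density, a, b, steps=2_000):
    """Approximate value an agent assigns to the piece [a, b]."""
    dx = (b - a) / steps
    return sum(density(a + i * dx) for i in range(steps)) * dx

def mark(density, a, b, fraction):
    """Point x in [a, b] such that [a, x] is worth `fraction` of [a, b] to the agent."""
    target = fraction * value(density, a, b)
    lo, hi = a, b
    for _ in range(50):                          # bisection on the agent's valuation
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if value(density, a, mid) < target else (lo, mid)
    return (lo + hi) / 2

def recursive_halving(agents, a=0.0, b=1.0):
    """agents: list of (name, density). Each agent ends up with an interval worth at
    least 1/n of the whole cake to that agent (up to numerical error)."""
    n = len(agents)
    if n == 1:
        return {agents[0][0]: (a, b)}
    k = n // 2
    # Every agent marks the point splitting [a, b] into a k/n : (n-k)/n split of their value.
    marked = sorted((mark(d, a, b, k / n), name, d) for name, d in agents)
    cut = marked[k - 1][0]                       # cut at the k-th smallest mark
    left = [(name, d) for _, name, d in marked[:k]]
    right = [(name, d) for _, name, d in marked[k:]]
    result = recursive_halving(left, a, cut)
    result.update(recursive_halving(right, cut, b))
    return result

print(recursive_halving([("A", lambda x: 1.0), ("B", lambda x: 2 * x), ("C", lambda x: 2 - 2 * x)]))
```

Each level of recursion asks every remaining agent for one mark, and the number of agents halves at each level, which gives the O(''n'' log ''n'') bound cited above.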

Randomized Algorithm
A randomized algorithm is an algorithm that employs a degree of randomness as part of its logic or procedure. The algorithm typically uses uniformly random bits as an auxiliary input to guide its behavior, in the hope of achieving good performance in the "average case" over all possible choices of randomness determined by the random bits; thus either the running time, or the output (or both) are random variables. One has to distinguish between algorithms that use the random input so that they always terminate with the correct answer, but where the expected running time is finite (Las Vegas algorithms, for example Quicksort), and algorithms which have a chance of producing an incorrect result (Monte Carlo algorithms, for example the Monte Carlo algorithm for the MFAS problem) or fail to produce a result either by signaling a failure or failing to terminate. In some cases, probabilistic algorithms are the only practical means of solving a problem. In common practice, randomized algor ...
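
Quicksort, cited above as the standard Las Vegas example, can be sketched with an explicitly random pivot: the output is always correct, while the running time depends on the random choices (a minimal sketch, not a production sort):

```python
import random

def randomized_quicksort(items):
    """Las Vegas-style quicksort: the pivot is chosen uniformly at random, so the output
    is always correctly sorted, but the running time is a random variable
    (expected O(n log n))."""
    if len(items) <= 1:
        return list(items)
    pivot = random.choice(items)
    less    = [x for x in items if x < pivot]
    equal   = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)

print(randomized_quicksort([5, 3, 9, 1, 3, 7]))   # [1, 3, 3, 5, 7, 9]
```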