Bach's Algorithm
Bach's algorithm is a probabilistic polynomial-time algorithm for generating random numbers along with their factorizations, named after its discoverer, Eric Bach. It is of interest because no efficient factoring algorithm is known, so the straightforward method, namely generating a random number and then factoring it, is impractical. The algorithm performs, in expectation, O(\log n) primality tests. A simpler but less efficient algorithm, performing O(\log^2 n) primality tests in expectation, is due to Adam Kalai.

Overview

Bach's algorithm produces a number x uniformly at random in the range N/2 < x \le N (for a given input N), along with its factorization. It does this by picking a prime p and an exponent a such that p^a \le N, according to a certain distribution. The algo ...
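The simpler Kalai algorithm mentioned above is easy to state: repeatedly draw a random non-increasing sequence N \ge s_1 \ge s_2 \ge \dots \ge 1, keep the prime entries, multiply them into r, and accept r with probability r/N; the accepted r is uniform in \{1, \dots, N\} and comes with its prime factorization. The Python sketch below illustrates that variant (not Bach's own, more involved procedure); the trial-division primality check is only a placeholder, since the efficiency analysis counts calls to a fast primality test such as Miller–Rabin.

    import random

    def is_prime(n):
        # Placeholder primality check by trial division; a real implementation
        # would use a fast probabilistic test, since the O(log^2 N) bound for
        # Kalai's algorithm counts primality tests as the dominant operation.
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    def kalai_random_factored(N):
        # Return (r, primes) with r uniform in {1, ..., N} and primes a list
        # of prime factors (with multiplicity) whose product is r.
        while True:
            # Random non-increasing sequence N >= s_1 >= s_2 >= ... >= 1.
            seq, s = [], N
            while s > 1:
                s = random.randint(1, s)
                seq.append(s)
            primes = [s for s in seq if is_prime(s)]
            r = 1
            for p in primes:
                r *= p
            # Accept r with probability r / N; otherwise start over.
            if r <= N and random.randint(1, N) <= r:
                return r, primes

For example, kalai_random_factored(10**6) returns a uniformly random integer up to one million together with its list of prime factors.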


SIAM Journal On Computing
The ''SIAM Journal on Computing'' is a scientific journal focusing on the mathematical and formal aspects of computer science. It is published by the Society for Industrial and Applied Mathematics (SIAM). Although its official ISO abbreviation is ''SIAM J. Comput.'', its publisher and contributors frequently use the shorter abbreviation ''SICOMP''. SICOMP typically hosts the special issues of the IEEE Annual Symposium on Foundations of Computer Science (FOCS) and the Annual ACM Symposium on Theory of Computing (STOC), where about 15% of the papers published in FOCS and STOC each year are invited to these special issues. For example, Volume 48 contains 11 out of the 85 papers published in FOCS 2016.


Polynomial Time
In computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to be related by a constant factor. Since an algorithm's running time may vary among different inputs of the same size, one commonly considers the worst-case time complexity, which is the maximum amount of time required for inputs of a given size. Less common, and usually specified explicitly, is the average-case complexity, which is the average of the time taken on inputs of a given size (this makes sense because there are only a finite number of possible inputs of a given size). In both cases, the time complexity is generally expresse ...
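To make the operation-counting concrete, the illustrative Python sketch below (not part of the article) treats each comparison in a linear search as one elementary operation: the worst case performs n comparisons, and the average case, when the target is equally likely to be at any position, performs about n/2, so both are O(n).

    def linear_search(items, target):
        # Each loop iteration performs one comparison, the elementary
        # operation being counted here.
        for index, value in enumerate(items):
            if value == target:
                return index
        return -1

    # Worst case: target absent, so all n comparisons run.
    # Average case: about n/2 comparisons when the target is equally likely to
    # be anywhere; constant factors are ignored, so both cases are O(n).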



Algorithm
In mathematics and computer science, an algorithm is a finite sequence of rigorous instructions, typically used to solve a class of specific computational problems or to perform a computation. Algorithms are used as specifications for performing calculations and data processing. More advanced algorithms can perform automated deductions (referred to as automated reasoning) and use mathematical and logical tests to divert the code execution through various routes (referred to as automated decision-making). Using human characteristics as descriptors of machines in metaphorical ways was already practiced by Alan Turing with terms such as "memory", "search" and "stimulus". In contrast, a heuristic is an approach to problem solving that may not be fully specified or may not guarantee correct or optimal results, especially in problem domains where there is no well-defined correct or optimal result. As an effective method, an algorithm ca ...



Random Number Generation
Random number generation is a process by which, often by means of a random number generator (RNG), a sequence of numbers or symbols that cannot be reasonably predicted better than by random chance is generated. This means that the particular outcome sequence will contain some patterns detectable in hindsight but unpredictable to foresight. True random number generators can be ''hardware random-number generators'' (HRNGs) that generate random numbers, wherein each generation is a function of the current value of a physical environment's attribute that is constantly changing in a manner that is practically impossible to model. This would be in contrast to so-called "random number generations" done by ''pseudorandom number generators'' (PRNGs) that generate numbers that only look random but are in fact pre-determined—these generations can be reproduced simply by knowing the state of the PRNG. Various applications of randomness have led to the development of several different metho ...
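The determinism of pseudorandom generators described above can be demonstrated directly: seeding two PRNGs with the same value reproduces the identical "random" sequence, whereas an operating-system entropy source cannot be replayed from a seed. A small illustrative sketch using Python's standard random and secrets modules:

    import random
    import secrets

    # Two PRNGs seeded identically produce the same sequence: the output is a
    # deterministic function of the internal state set by the seed.
    a = random.Random(12345)
    b = random.Random(12345)
    print([a.randint(0, 99) for _ in range(5)])
    print([b.randint(0, 99) for _ in range(5)])   # identical to the line above

    # An OS-provided entropy source is not reproducible this way.
    print(secrets.randbelow(100))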



Factorization
In mathematics, factorization (or factorisation; see American and British English spelling differences) or factoring consists of writing a number or another mathematical object as a product of several ''factors'', usually smaller or simpler objects of the same kind. For example, 3 × 5 is a factorization of the integer 15, and (x − 2)(x + 2) is a factorization of the polynomial x^2 − 4. Factorization is not usually considered meaningful within number systems possessing division, such as the real or complex numbers, since any x can be trivially written as (xy)\times(1/y) whenever y is not zero. However, a meaningful factorization for a rational number or a rational function can be obtained by writing it in lowest terms and separately factoring its numerator and denominator. Factorization was first considered by ancient Greek mathematicians in the case of integers. They proved the fundamental theorem o ...
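For integers, the most elementary way to produce such a factorization is repeated trial division, as in the illustrative Python sketch below (not drawn from the article), which returns the prime factorization as (prime, exponent) pairs.

    def factorize(n):
        # Prime factorization of a positive integer n by repeated trial division.
        factors = []
        d = 2
        while d * d <= n:
            if n % d == 0:
                exponent = 0
                while n % d == 0:
                    n //= d
                    exponent += 1
                factors.append((d, exponent))
            d += 1
        if n > 1:
            factors.append((n, 1))   # whatever remains is itself prime
        return factors

    # factorize(15) == [(3, 1), (5, 1)], i.e. 15 = 3 * 5
    # factorize(360) == [(2, 3), (3, 2), (5, 1)], i.e. 360 = 2^3 * 3^2 * 5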


Eric Bach
Eric Bach is an American computer scientist who has made contributions to computational number theory. Bach completed his undergraduate studies at the University of Michigan, Ann Arbor, and got his Ph.D. in computer science from the University of California, Berkeley, in 1984 under the supervision of Manuel Blum. He is currently a professor at the Computer Science Department, University of Wisconsin–Madison. Among other work, he gave explicit bounds for the Chebotarev density theorem, which imply that if one assumes the generalized Riemann hypothesis then (\mathbb{Z}/n\mathbb{Z})^* is generated by its elements smaller than 2(\log n)^2. This result shows that the generalized Riemann hypothesis implies tight bounds for the necessary run-time of the deterministic version of the Miller–Rabin primality test. Bach also did some of the first work on pinning down the actual expected run-time of the Pollard rho method, where previous work relied on heuristic estimates a ...


Primality Tests
A primality test is an algorithm for determining whether an input number is prime. Among other fields of mathematics, it is used for cryptography. Unlike integer factorization, primality tests do not generally give prime factors, only stating whether the input number is prime or not. Factorization is thought to be a computationally difficult problem, whereas primality testing is comparatively easy (its running time is polynomial in the size of the input). Some primality tests prove that a number is prime, while others like Miller–Rabin prove that a number is composite. Therefore, the latter might more accurately be called ''compositeness tests'' instead of primality tests.

Simple methods

The simplest primality test is ''trial division'': given an input number, ''n'', check whether it is evenly divisible by any prime number between 2 and \sqrt{n} (i.e. that the division leaves no remainder). If so, then ''n'' is composite. Otherwise, it is prime (Riesel 1994, pp. 2–3). For example, co ...
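A direct rendering of the trial-division test just described, as an illustrative Python sketch (it divides by every integer up to \sqrt{n}, which is sufficient even though only the primes among them matter):

    import math

    def is_prime_trial_division(n):
        # n > 1 is prime iff no integer d with 2 <= d <= sqrt(n) divides it evenly.
        if n < 2:
            return False
        for d in range(2, math.isqrt(n) + 1):
            if n % d == 0:
                return False   # a divisor was found, so n is composite
        return True

    # is_prime_trial_division(17) -> True
    # is_prime_trial_division(21) -> False  (divisible by 3)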




Adam Tauman Kalai
Adam Tauman Kalai is an American computer scientist who specializes in machine learning and works as a Senior Principal Researcher at Microsoft Research New England.

Education and career

Kalai graduated from Harvard University in 1996 and received a PhD from Carnegie Mellon University in 2001, where he worked under doctoral advisor Avrim Blum. He did his postdoctoral study at the Massachusetts Institute of Technology before becoming a faculty member at the Toyota Technological Institute at Chicago and then the Georgia Institute of Technology. He joined Microsoft Research in 2008.

Contributions

Kalai is known for his algorithm for generating random factored numbers (see Bach's algorithm), for efficiently learning mixtures of Gaussians, for the Blum–Kalai–Wasserman algorithm for learning parity with noise, and for the intractability of the folk theorem in game theory. More recently, Kalai is known for identifying and reducing gender bias in word embeddings ...


Journal Of Cryptology
The ''Journal of Cryptology'' is a scientific journal in the field of cryptology and cryptography. The journal is published quarterly by the International Association for Cryptologic Research. Its editor-in-chief is Vincent Rijmen (born 16 October 1970), a Belgian cryptographer and one of the two designers of Rijndael, the Advanced Encryption Standard; Rijmen is also the co-designer of the WHIRLPOOL cryptographic hash function, and the block cipher ... (Journal of Cryptology Editorial Board, Springer, retrieved 2022-05-09.)





Prime Number
A prime number (or a prime) is a natural number greater than 1 that is not a product of two smaller natural numbers. A natural number greater than 1 that is not prime is called a composite number. For example, 5 is prime because the only ways of writing it as a product, 1 × 5 or 5 × 1, involve 5 itself. However, 4 is composite because it is a product (2 × 2) in which both numbers are smaller than 4. Primes are central in number theory because of the fundamental theorem of arithmetic: every natural number greater than 1 is either a prime itself or can be factorized as a product of primes that is unique up to their order. The property of being prime is called primality. A simple but slow method of checking the primality of a given number n, called trial division, tests whether n is a multiple of any integer between 2 and \sqrt{n}. Faster algorithms include the Miller–Rabin primality test, which is fast but has a small chance of error, and the AKS primality test, which always pr ...
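The Miller–Rabin test mentioned above can be sketched compactly. The Python code below is an illustrative implementation of the standard strong-pseudoprime formulation (the function name and round count are arbitrary choices); each round has error probability at most 1/4 on composite inputs, so the overall error is at most 4^{-rounds}.

    import random

    def miller_rabin(n, rounds=20):
        # Returns False if n is certainly composite, True if n is probably prime.
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):
            if n % p == 0:
                return n == p
        # Write n - 1 = 2^r * d with d odd.
        d, r = n - 1, 0
        while d % 2 == 0:
            d //= 2
            r += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(r - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False   # a witnesses that n is composite
        return True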


Rejection Sampling
In numerical analysis and computational statistics, rejection sampling is a basic technique used to generate observations from a distribution. It is also commonly called the acceptance-rejection method or "accept-reject algorithm" and is a type of exact simulation method. The method works for any distribution in \mathbb{R}^m with a density. Rejection sampling is based on the observation that to sample a random variable in one dimension, one can perform a uniformly random sampling of the two-dimensional Cartesian graph, and keep the samples in the region under the graph of its density function. Note that this property can be extended to ''N''-dimension functions. Description To visualize the motivation behind rejection sampling, imagine graphing the density function of a random variable onto a large rectangular board and throwing darts at it. Assume that the darts are uniformly distributed around the board. Now remove all of the darts that are outside the area under the curve. The rem ...
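An illustrative Python sketch of the accept-reject loop follows; the half-normal target, exponential proposal, and envelope constant M = \sqrt{2e/\pi} are choices made for this demonstration, not details taken from the article.

    import math
    import random

    def rejection_sample(target_pdf, proposal_sample, proposal_pdf, M):
        # Draw one sample from target_pdf using proposals from proposal_pdf,
        # assuming target_pdf(x) <= M * proposal_pdf(x) for all x.
        while True:
            x = proposal_sample()
            u = random.random()
            # Accept x with probability target_pdf(x) / (M * proposal_pdf(x)).
            if u * M * proposal_pdf(x) <= target_pdf(x):
                return x

    # Example: sample a standard half-normal on [0, inf) from an Exponential(1)
    # proposal; the density ratio is maximized at x = 1, giving M = sqrt(2e/pi).
    half_normal = lambda x: math.sqrt(2.0 / math.pi) * math.exp(-x * x / 2.0)
    expo_pdf = lambda x: math.exp(-x)
    expo_draw = lambda: random.expovariate(1.0)
    M = math.sqrt(2.0 * math.e / math.pi)
    draws = [rejection_sample(half_normal, expo_draw, expo_pdf, M) for _ in range(1000)]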



Cryptographic Algorithms
Cryptography, or cryptology (from Ancient Greek ''kryptós'', "hidden, secret"; and ''graphein'', "to write", or ''-logia'', "study", respectively), is the practice and study of techniques for secure communication in the presence of adversarial behavior. More generally, cryptography is about constructing and analyzing protocols that prevent third parties or the public from reading private messages. Modern cryptography exists at the intersection of the disciplines of mathematics, computer science, information security, electrical engineering, digital signal processing, physics, and others. Core concepts related to information security (data confidentiality, data integrity, authentication, and non-repudiation) are also central to cryptography. Practical applications of cryptography include electronic commerce, chip-based payment cards, digital currencies, computer passwords, and military communications. Cryptography prior to the modern age was effectively synonymous wit ...