L-notation
''L''-notation is an asymptotic notation analogous to big-O notation, denoted as L_n[\alpha, c] for a bound variable ''n'' tending to infinity. Like big-O notation, it is usually used to roughly convey the rate of growth of a function, such as the computational complexity of a particular algorithm.

Definition
It is defined as
:L_n[\alpha, c] = e^{(c + o(1))(\ln n)^{\alpha}(\ln \ln n)^{1 - \alpha}}
where ''c'' is a positive constant and \alpha is a constant with 0 \leq \alpha \leq 1.

L-notation is used mostly in computational number theory, to express the complexity of algorithms for difficult number-theoretic problems, e.g. sieves for integer factorization and methods for solving discrete logarithms. The benefit of this notation is that it simplifies the analysis of these algorithms: the e^{c(\ln n)^{\alpha}(\ln \ln n)^{1 - \alpha}} expresses the dominant term, and the e^{o(1)(\ln n)^{\alpha}(\ln \ln n)^{1 - \alpha}} takes care of everything smaller.

When \alpha is 0, then
:L_n[0, c] = e^{(c + o(1)) \ln \ln n} = (\ln n)^{c + o(1)}
is a polynomial function of ln ''n''; when \alpha is 1, then
:L_n[1, c] = e^{(c + o(1)) \ln n} = n^{c + o(1)}
is a fully exponential function of ln ''n'' (and thereby polynomial in ''n'').
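
As a rough illustration, the following Python sketch (hypothetical, not from any source) evaluates the leading term of L_n[\alpha, c] with the o(1) term dropped, showing how the three regimes compare for a 100-digit ''n'':

 import math
 
 def L(n, alpha, c):
     # Leading term of L_n[alpha, c]; the o(1) term is dropped, so this
     # is only a rough order-of-magnitude sketch, not the full definition.
     ln_n = math.log(n)
     return math.exp(c * ln_n ** alpha * math.log(ln_n) ** (1 - alpha))
 
 n = 10 ** 100  # a 100-digit number
 print(f"{L(n, 0.0, 2.0):.3e}")              # alpha = 0: (ln n)^2, polynomial in ln n
 print(f"{L(n, 1/3, (64/9) ** (1/3)):.3e}")  # alpha = 1/3: sub-exponential (GNFS regime)
 print(f"{L(n, 1.0, 0.5):.3e}")              # alpha = 1: n^0.5, polynomial in n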


Big-O Notation
Big ''O'' notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. Big O is a member of a family of notations invented by Paul Bachmann, Edmund Landau, and others, collectively called Bachmann–Landau notation or asymptotic notation. The letter O was chosen by Bachmann to stand for ''Ordnung'', meaning the order of approximation. In computer science, big O notation is used to classify algorithms according to how their run time or space requirements grow as the input size grows. In analytic number theory, big O notation is often used to express a bound on the difference between an arithmetical function and a better understood approximation; a famous example of such a difference is the remainder term in the prime number theorem. Big O notation is also used in many other fields to provide similar estimates. Big O notation characterizes functions according to their growth rates: ...
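
To make the growth-rate idea concrete, here is a small Python sketch (illustrative only: a numerical table can suggest, but never prove, an asymptotic bound) tabulating the ratio f(n)/g(n) for f(n) = 3n^2 + 10n against g(n) = n^2; the ratio settling toward 3 is consistent with f(n) = O(n^2):

 def ratio_table(f, g, ns):
     # Print f(n)/g(n) for increasing n; a bounded ratio is consistent
     # with (but does not prove) f(n) = O(g(n)).
     for n in ns:
         print(f"n = {n:>8}: f(n)/g(n) = {f(n) / g(n):.4f}")
 
 f = lambda n: 3 * n**2 + 10 * n
 g = lambda n: n**2
 ratio_table(f, g, [10, 100, 1000, 10**6])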


Sub-exponential Time
In computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to be related by a constant factor. Since an algorithm's running time may vary among different inputs of the same size, one commonly considers the worst-case time complexity, which is the maximum amount of time required for inputs of a given size. Less common, and usually specified explicitly, is the average-case complexity, which is the average of the time taken on inputs of a given size (this makes sense because there are only a finite number of possible inputs of a given size). In both cases, the time complexity is generally expresse ...
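
As a concrete sketch of operation counting (a hypothetical example, taking equality comparisons as the elementary operation), linear search makes n comparisons in the worst case but about (n + 1)/2 on average over all positions of the target:

 def linear_search(seq, target):
     # Count equality comparisons as the 'elementary operations'.
     comparisons = 0
     for i, x in enumerate(seq):
         comparisons += 1
         if x == target:
             return i, comparisons
     return -1, comparisons
 
 seq = list(range(100))
 _, worst = linear_search(seq, 99)  # worst case: target is last -> n comparisons
 avg = sum(linear_search(seq, t)[1] for t in seq) / len(seq)
 print(worst, avg)  # 100 50.5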


Time Complexity
In computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to be related by a constant factor. Since an algorithm's running time may vary among different inputs of the same size, one commonly considers the worst-case time complexity, which is the maximum amount of time required for inputs of a given size. Less common, and usually specified explicitly, is the average-case complexity, which is the average of the time taken on inputs of a given size (this makes sense because there are only a finite number of possible inputs of a given size). In both cases, the time complexity is generally expresse ...


Arjen Lenstra
Arjen Klaas Lenstra (born 2 March 1956, in Groningen) is a Dutch mathematician, cryptographer and computational number theorist. He is currently a professor at the École Polytechnique Fédérale de Lausanne (EPFL), where he heads the Laboratory for Cryptologic Algorithms.

Career
He studied mathematics at the University of Amsterdam. He is currently a professor at the EPFL (Lausanne), in the Laboratory for Cryptologic Algorithms, and previously worked for Citibank and Bell Labs.

Research
Lenstra is active in cryptography and computational number theory, especially in areas such as integer factorization. With Mark Manasse, he was the first to seek volunteers over the internet for a large-scale volunteer computing project. Such projects became more common after the factorization of RSA-129, which was a high-publicity distributed factoring success led by Lenstra along with Derek Atkins, Michael Graff and Paul Leyland. He was also a leader in the successful factorizations ...


Polynomial Time
In computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to be related by a constant factor. Since an algorithm's running time may vary among different inputs of the same size, one commonly considers the worst-case time complexity, which is the maximum amount of time required for inputs of a given size. Less common, and usually specified explicitly, is the average-case complexity, which is the average of the time taken on inputs of a given size (this makes sense because there are only a finite number of possible inputs of a given size). In both cases, the time complexity is generally express ...


Integer Factorization
In number theory, integer factorization is the decomposition of a composite number into a product of smaller integers. If these factors are further restricted to prime numbers, the process is called prime factorization. When the numbers are sufficiently large, no efficient non-quantum integer factorization algorithm is known. However, it has not been proven that such an algorithm does not exist. The presumed difficulty of this problem is important for the algorithms used in cryptography such as RSA public-key encryption and the RSA digital signature. Many areas of mathematics and computer science have been brought to bear on the problem, including elliptic curves, algebraic number theory, and quantum computing. In 2019, Fabrice Boudot, Pierrick Gaudry, Aurore Guillevic, Nadia Heninger, Emmanuel Thomé and Paul Zimmermann factored a 240-digit (795-bit) number (RSA-240) utilizing approximately 900 core-years of computing power. The researchers estimated that a 1024-bit R ...
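
For intuition, here is a minimal Python sketch of prime factorization by trial division; it needs roughly sqrt(n) divisions, which is exactly the kind of cost that becomes hopeless for the large n used in cryptography and motivates the sieve algorithms below:

 def prime_factors(n):
     # Trial division: about sqrt(n) candidate divisors in the worst case,
     # so practical only for small n.
     factors = []
     d = 2
     while d * d <= n:
         while n % d == 0:
             factors.append(d)
             n //= d
         d += 1
     if n > 1:
         factors.append(n)
     return factors
 
 print(prime_factors(5959))  # [59, 101]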


Quadratic Sieve
The quadratic sieve algorithm (QS) is an integer factorization algorithm and, in practice, the second-fastest method known (after the general number field sieve). It is still the fastest for integers under 100 decimal digits or so, and is considerably simpler than the number field sieve. It is a general-purpose factorization algorithm, meaning that its running time depends solely on the size of the integer to be factored, and not on special structure or properties. It was invented by Carl Pomerance in 1981 as an improvement to Schroeppel's linear sieve.

Basic aim
The algorithm attempts to set up a congruence of squares modulo ''n'' (the integer to be factorized), which often leads to a factorization of ''n''. The algorithm works in two phases: the ''data collection'' phase, where it collects information that may lead to a congruence of squares; and the ''data processing'' phase, where it puts all the data it has collected into a matrix and solves it to obtain a congruence of sq ...
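
The congruence-of-squares idea can be seen in a toy Python example (the relations here are supplied by hand; the real algorithm finds them by sieving): for n = 1649, the relations 41^2 ≡ 32 and 43^2 ≡ 200 (mod n) multiply to give (41·43)^2 ≡ 80^2 (mod n), and a gcd then splits n:

 from math import gcd
 
 n = 1649  # = 17 * 97, a classic small example
 # 41^2 ≡ 32 and 43^2 ≡ 200 (mod n); 32 * 200 = 6400 = 80^2,
 # so (41 * 43)^2 ≡ 80^2 (mod n) -- a congruence of squares.
 a = (41 * 43) % n
 b = 80
 assert pow(a, 2, n) == pow(b, 2, n)
 print(gcd(a - b, n), gcd(a + b, n))  # 17 97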


General Number Field Sieve
In number theory, the general number field sieve (GNFS) is the most efficient classical algorithm known for factoring integers larger than 10^{100}. Heuristically, its complexity for factoring an integer ''n'' (consisting of \lfloor \log_2 n \rfloor + 1 bits) is of the form
:\exp\left( \left(\sqrt[3]{64/9} + o(1)\right)(\ln n)^{1/3}(\ln \ln n)^{2/3}\right) = L_n\left[\frac{1}{3}, \sqrt[3]{64/9}\right]
(in L-notation), where ln is the natural logarithm. It is a generalization of the special number field sieve: while the latter can only factor numbers of a certain special form, the general number field sieve can factor any number apart from prime powers (which are trivial to factor by taking roots). The p ...
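
To get a feel for this complexity, the Python sketch below (the o(1) term is dropped, so absolute numbers are meaningless and only the relative growth between sizes is suggestive) evaluates L_n[1/3, (64/9)^{1/3}] for several common modulus sizes:

 import math
 
 def gnfs_cost_estimate(bits):
     # Heuristic L_n[1/3, (64/9)^(1/3)] with the o(1) term dropped.
     ln_n = bits * math.log(2)
     c = (64 / 9) ** (1 / 3)
     return math.exp(c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))
 
 for bits in (512, 795, 1024, 2048):
     print(f"{bits:4d}-bit: ~{gnfs_cost_estimate(bits):.2e}")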


Superpolynomial
In computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to be related by a constant factor. Since an algorithm's running time may vary among different inputs of the same size, one commonly considers the worst-case time complexity, which is the maximum amount of time required for inputs of a given size. Less common, and usually specified explicitly, is the average-case complexity, which is the average of the time taken on inputs of a given size (this makes sense because there are only a finite number of possible inputs of a given size). In both cases, the time complexity i ...


Don Coppersmith
Don Coppersmith (born 1950) is a cryptographer and mathematician. He was involved in the design of the Data Encryption Standard block cipher at IBM, particularly the design of the S-boxes, strengthening them against differential cryptanalysis. He also improved the quantum Fourier transform discovered by Peter Shor in 1994. He has also worked on algorithms for computing discrete logarithms, the cryptanalysis of RSA, methods for rapid matrix multiplication (see Coppersmith–Winograd algorithm) and IBM's MARS cipher. He is also a co-designer of the SEAL and Scream ciphers. In 1972, Coppersmith obtained a bachelor's degree in mathematics at the Massachusetts Institute of Technology, and a master's degree and Ph.D. in mathematics from Harvard University in 1975 and 1977 respectively. He was a Putnam Fellow each year from 1968 to 1971, becoming the first four-time Putnam Fellow in history. In 1998, he started ''Ponder This'', an online monthly column on mathematical ...


Hendrik Lenstra
Hendrik Willem Lenstra Jr. (born 16 April 1949, Zaandam) is a Dutch mathematician.

Biography
Lenstra received his doctorate from the University of Amsterdam in 1977 and became a professor there in 1978. In 1987 he was appointed to the faculty of the University of California, Berkeley; starting in 1998, he divided his time between Berkeley and the University of Leiden, until 2003, when he retired from Berkeley to take a full-time position at Leiden. Three of his brothers, Arjen Lenstra, Andries Lenstra, and Jan Karel Lenstra, are also mathematicians. Jan Karel Lenstra is the former director of the Netherlands Centrum Wiskunde & Informatica (CWI). Hendrik Lenstra was the Chairman of the Program Committee of the International Congress of Mathematicians in 2010.

Scientific contributions
Lenstra has worked principally in computational number theory. He is well known for:
* Co-discovering the Lenstra–Lenstra–Lovász lattice basis reduction algorithm (in 1982);
* Developi ...


Carl Pomerance
Carl Bernard Pomerance (born 1944 in Joplin, Missouri) is an American number theorist. He attended college at Brown University and later received his Ph.D. from Harvard University in 1972 with a dissertation proving that any odd perfect number has at least seven distinct prime factors. He joined the faculty at the University of Georgia, becoming full professor in 1982. He subsequently worked at Lucent Technologies for a number of years, and then became a Distinguished Professor at Dartmouth College.

Contributions
He has over 120 publications, including co-authorship with Richard Crandall of ''Prime Numbers: A Computational Perspective'' (Springer-Verlag, first edition 2001, second edition 2005), and with Paul Erdős. He is the inventor of one of the integer factorization methods, the quadratic sieve algorithm, which was used in 1994 for the factorization of RSA-129. He is also one of the discoverers of the Adleman–Pomerance–Rumely primality test.

Awards and honors
He has w ...