Index Calculus
In computational number theory, the index calculus algorithm is a probabilistic algorithm for computing discrete logarithms. Dedicated to the discrete logarithm in (\mathbb{Z}/q\mathbb{Z})^* where q is a prime, index calculus leads to a family of algorithms adapted to finite fields and to some families of elliptic curves. The algorithm collects relations among the discrete logarithms of small primes, computes them by a linear algebra procedure and finally expresses the desired discrete logarithm with respect to the discrete logarithms of small primes.

Description
Roughly speaking, the discrete log problem asks us to find an ''x'' such that g^x \equiv h \pmod{n}, where ''g'', ''h'', and the modulus ''n'' are given. The algorithm (described in detail below) applies to the group (\mathbb{Z}/q\mathbb{Z})^* where ''q'' is prime. It requires a ''factor base'' as input. This ''factor base'' is usually chosen to be the number −1 and the first ''r'' primes starting with 2. From the point of view of e ...
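As a rough illustration of the steps just described, here is a minimal Python sketch for a tiny prime q. It is not taken from the article: the function names are invented, the relation-collection and linear-algebra stages are compressed into a brute-force computation of the factor-base logarithms, and only the final step that expresses the target logarithm over the factor base follows the usual form.

```python
# Illustrative index calculus in (Z/qZ)^* for a tiny prime q -- a sketch only.
# In the real algorithm the factor-base logs come from solving collected
# relations g^k = prod(p^e_p) mod q by linear algebra modulo q - 1; here they
# are found by brute force to keep the example short.

def factor_over_base(n, base):
    """Exponent vector of n over the factor base, or None if n is not smooth."""
    exps = {p: 0 for p in base}
    for p in base:
        while n % p == 0:
            n //= p
            exps[p] += 1
    return exps if n == 1 else None

def index_calculus_log(g, h, q, base=(2, 3, 5, 7, 11)):
    """Return x with g**x == h (mod q), assuming g generates (Z/qZ)^*."""
    # Stage 1+2 (simplified): discrete logs of the factor-base primes.
    logs = {}
    for p in base:
        logs[p] = next(x for x in range(q - 1) if pow(g, x, q) == p)
    # Stage 3: find k such that h * g^k mod q is smooth over the base; then
    #   log_g(h) = sum(e_p * log_g(p)) - k   (mod q - 1).
    for k in range(q - 1):
        exps = factor_over_base(h * pow(g, k, q) % q, base)
        if exps is not None:
            return (sum(e * logs[p] for p, e in exps.items()) - k) % (q - 1)
    return None

q, g, h = 1019, 2, 77          # 2 is assumed to generate (Z/1019Z)^* for this demo
x = index_calculus_log(g, h, q)
print(x, pow(g, x, q) == h)    # the recovered exponent, and a check that it works
```

The brute-force search for the factor-base logarithms stands in for the relation and linear-algebra stages; everything else mirrors the structure described above.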


Computational Number Theory
In mathematics and computer science, computational number theory, also known as algorithmic number theory, is the study of computational methods for investigating and solving problems in number theory and arithmetic geometry, including algorithms for primality testing and integer factorization, finding solutions to Diophantine equations, and explicit methods in arithmetic geometry. Computational number theory has applications to cryptography, including RSA, elliptic curve cryptography and post-quantum cryptography, and is used to investigate conjectures and open problems in number theory, including the Riemann hypothesis, the Birch and Swinnerton-Dyer conjecture, the ABC conjecture, the modularity conjecture, the Sato-Tate conjecture, and explicit aspects of the Langlands program.

Software packages
* Magma computer algebra system
* SageMath
* Number Theory Library
* PARI/GP
* Fast Library for Number Theory


Supersingular Elliptic Curve
In algebraic geometry, supersingular elliptic curves form a certain class of elliptic curves over a field of characteristic ''p'' > 0 with unusually large endomorphism rings. Elliptic curves over such fields which are not supersingular are called ''ordinary'', and these two classes of elliptic curves behave fundamentally differently in many respects. Helmut Hasse discovered supersingular elliptic curves during his work on the Riemann hypothesis for elliptic curves, by observing that elliptic curves in positive characteristic could have endomorphism rings of unusually large rank 4, and developed their basic theory. The term "supersingular" has nothing to do with singular points of curves, and all supersingular elliptic curves are non-singular. It comes from the phrase "singular values of the j-invariant", used for values of the j-invariant for which a complex elliptic curve has complex multiplication. The complex elliptic curves with complex multiplication are those for which the endomorp ...
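The excerpt above is conceptual; as a hedged, self-contained illustration (not drawn from the text), the following Python sketch uses the standard point-counting criterion that, for a prime p ≥ 5, a curve E over F_p is supersingular exactly when #E(F_p) = p + 1, i.e. the trace of Frobenius is zero.

```python
# Hedged illustration: test supersingularity of y^2 = x^3 + a*x + b over F_p
# (p >= 5) via brute-force point counting, using #E(F_p) = p + 1 as criterion.

def count_points(a, b, p):
    """Number of points on y^2 = x^3 + a*x + b over F_p, including infinity."""
    sqrt_count = {}                      # how many y give each value y^2 mod p
    for y in range(p):
        sq = y * y % p
        sqrt_count[sq] = sqrt_count.get(sq, 0) + 1
    total = 1                            # the point at infinity
    for x in range(p):
        total += sqrt_count.get((x * x * x + a * x + b) % p, 0)
    return total

def is_supersingular(a, b, p):
    return count_points(a, b, p) == p + 1    # criterion valid for p >= 5

# Classical example: y^2 = x^3 + x is supersingular over F_p exactly when p % 4 == 3.
print(is_supersingular(1, 0, 7))     # True  (7 % 4 == 3)
print(is_supersingular(1, 0, 13))    # False (13 % 4 == 1)
```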


Andrew Odlyzko
Andrew Michael Odlyzko (Andrzej Odłyżko) (born 23 July 1949) is a Polish-American mathematician and a former head of the University of Minnesota's Digital Technology Center and of the Minnesota Supercomputing Institute. He began his career in 1975 at Bell Telephone Laboratories, where he stayed for 26 years before joining the University of Minnesota in 2001.

Work in mathematics
Odlyzko received his B.S. and M.S. in mathematics from the California Institute of Technology and his Ph.D. from the Massachusetts Institute of Technology in 1975. In the field of mathematics he has published extensively on analytic number theory, computational number theory, cryptography, algorithms and computational complexity, combinatorics, probability, and error-correcting codes. In the early 1970s, he was a co-author (with D. Kahaner and Gian-Carlo Rota) of one of the founding papers of the modern umbral calculus. In 1985 he and Herman te Riele disproved the Mertens conjecture. In mathematic ...




Function Field Sieve
In mathematics the Function Field Sieve is one of the most efficient algorithms to solve the Discrete Logarithm Problem (DLP) in a finite field. It has heuristic subexponential complexity. Leonard Adleman developed it in 1994 and then elaborated it together with M. D. Huang in 1999 (L. Adleman, M. D. Huang, "Function Field Sieve Method for Discrete Logarithms over Finite Fields", Inf. Comput. 151, May 1999, pp. 5-16, DOI: 10.1006/inco.1998.2761). Previous work includes that of D. Coppersmith on the DLP in fields of characteristic two. The discrete logarithm problem in a finite field consists of solving the equation a^x = b for a, b \in \mathbb{F}_{p^n}, p a prime number and n an integer. The function f: \mathbb{Z} \to \mathbb{F}_{p^n}, x \mapsto a^x for a fixed a \in \mathbb{F}_{p^n} is a one-way function used in cryptography. Several cryptographic methods are based on the DLP, such as the Diffie-Hellman key exchange, the ElGamal cryptosystem and the Digital Signature Algorithm. Num ...
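The sieve itself is beyond a short example, but the one-way function mentioned above is easy to illustrate. The toy Diffie-Hellman exchange below (not from the excerpt; the parameters are illustrative only and far too small for real use) relies on exponentiation being easy while the inverse DLP is presumed hard.

```python
# Toy Diffie-Hellman key exchange in a small prime field, illustrating the
# one-way nature of x -> g^x mod p.  Parameters are for demonstration only.
import secrets

p = 2**61 - 1                       # a Mersenne prime modulus (demo-sized)
g = 3                               # base chosen for the demo (assumed to have large order)

a = secrets.randbelow(p - 2) + 1    # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1    # Bob's secret exponent

A = pow(g, a, p)                    # Alice publishes A; recovering a from A is a DLP instance
B = pow(g, b, p)                    # Bob publishes B

shared_alice = pow(B, a, p)         # both sides derive g^(a*b) mod p
shared_bob = pow(A, b, p)
print(shared_alice == shared_bob)   # True
```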


Leonard Adleman
Leonard Adleman (born December 31, 1945) is an American computer scientist. He is one of the creators of the RSA encryption algorithm, for which he received the 2002 Turing Award, often called the Nobel Prize of computer science. He is also known for the creation of the field of DNA computing.

Biography
Leonard M. Adleman was born to a Jewish family in San Francisco, California (Leonard (Len) Max Adleman, 2002 Recipient of the ACM Turing Award, interviewed by Hugh Williams, August 18, 2016, amturing.acm.org). His family had originally immigrated to the United States from modern-day ...

L-notation
''L''-notation is an asymptotic notation analogous to big-O notation, denoted as L_n[\alpha, c] for a bound variable n tending to infinity. Like big-O notation, it is usually used to roughly convey the rate of growth of a function, such as the computational complexity of a particular algorithm.

Definition
It is defined as
:L_n[\alpha, c] = e^{(c + o(1))(\ln n)^{\alpha}(\ln \ln n)^{1 - \alpha}}
where ''c'' is a positive constant, and \alpha is a constant 0 \leq \alpha \leq 1.

L-notation is used mostly in computational number theory, to express the complexity of algorithms for difficult number theory problems, e.g. sieves for integer factorization and methods for solving discrete logarithms. The benefit of this notation is that it simplifies the analysis of these algorithms. The e^{c(\ln n)^{\alpha}(\ln \ln n)^{1 - \alpha}} expresses the dominant term, and the e^{o(1)(\ln n)^{\alpha}(\ln \ln n)^{1 - \alpha}} takes care of everything smaller.

When \alpha is 0, then
:L_n[\alpha, c] = L_n[0, c] = e^{(c + o(1)) \ln \ln n} = (\ln n)^{c + o(1)}
is a polynomial function of ln ''n''; when \alpha is 1, then
:L_n[\alpha, c] = L_n[1, c] = e^{(c + o(1)) \ln n} = n^{c + o(1)}
is a full ...
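As a hedged numeric illustration of the definition (the o(1) term is dropped, so the values are only order-of-magnitude indications), the snippet below evaluates the dominant factor of L_n[\alpha, c] at \alpha = 0, 1/2, and 1.

```python
# Evaluate the dominant factor e^{c (ln n)^alpha (ln ln n)^(1-alpha)} of
# L_n[alpha, c]; the o(1) term is ignored, so this is only indicative.
from math import exp, log

def L(n, alpha, c):
    return exp(c * log(n) ** alpha * log(log(n)) ** (1 - alpha))

n = 2 ** 512
print(f"{L(n, 0, 1):.3e}")     # alpha = 0: polynomial in ln n
print(f"{L(n, 0.5, 1):.3e}")   # alpha = 1/2: subexponential, between the extremes
print(f"{L(n, 1, 1):.3e}")     # alpha = 1: fully exponential (equals n when c = 1)
```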




Reduced Echelon Form
In linear algebra, a matrix is in echelon form if it has the shape resulting from a Gaussian elimination. A matrix being in row echelon form means that Gaussian elimination has operated on the rows, and column echelon form means that Gaussian elimination has operated on the columns. In other words, a matrix is in column echelon form if its transpose is in row echelon form. Therefore, only row echelon forms are considered in the remainder of this article. The analogous properties of column echelon form are easily deduced by transposing all the matrices. Specifically, a matrix is in row echelon form if
* All rows consisting of only zeroes are at the bottom.
* The leading entry (that is, the left-most nonzero entry) of every nonzero row is to the right of the leading entry of every row above.
Some texts add the condition that the leading coefficient must be 1, while others require this only for ''reduced'' row echelon form. These two conditions imply that all entries in a column below a leadin ...
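As a small sketch of the definition (not from the excerpt; the helper name is invented), the following Python function performs Gaussian elimination on the rows and returns a row echelon form with leading entries scaled to 1, using exact fractions to avoid rounding.

```python
# Reduce a matrix to row echelon form by Gaussian elimination on the rows.
from fractions import Fraction

def row_echelon_form(matrix):
    """Return a row echelon form of `matrix` with each leading entry scaled to 1."""
    m = [[Fraction(x) for x in row] for row in matrix]
    rows, cols = len(m), len(m[0])
    pivot_row = 0
    for col in range(cols):
        # find a row at or below pivot_row with a nonzero entry in this column
        pivot = next((r for r in range(pivot_row, rows) if m[r][col] != 0), None)
        if pivot is None:
            continue                                      # column already cleared
        m[pivot_row], m[pivot] = m[pivot], m[pivot_row]   # swap the pivot row up
        lead = m[pivot_row][col]
        m[pivot_row] = [x / lead for x in m[pivot_row]]   # scale leading entry to 1
        for r in range(pivot_row + 1, rows):              # eliminate entries below
            factor = m[r][col]
            m[r] = [a - factor * b for a, b in zip(m[r], m[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return m

print(row_echelon_form([[2, 4, -2], [4, 9, -3], [-2, -3, 7]]))
```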


Smooth Numbers
In number theory, an ''n''-smooth (or ''n''-friable) number is an integer whose prime factors are all less than or equal to ''n''. For example, a 7-smooth number is a number whose every prime factor is at most 7, so 49 = 7² and 15750 = 2 × 3² × 5³ × 7 are both 7-smooth, while 11 and 702 = 2 × 3³ × 13 are not 7-smooth. The term seems to have been coined by Leonard Adleman. Smooth numbers are especially important in cryptography, which relies on factorization of integers. The 2-smooth numbers are just the powers of 2, while 5-smooth numbers are known as regular numbers.

Definition
A positive integer is called B-smooth if none of its prime factors are greater than B. For example, 1,620 has prime factorization 2² × 3⁴ × 5; therefore 1,620 is 5-smooth because none of its prime factors are greater than 5. This definition includes numbers that lack some of the smaller prime factors; for example, both 10 and 12 are 5-smooth, even though they miss out the prime factors 3 and 5, respec ...
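A direct way to read the definition is as a trial-division test; the short sketch below (not from the excerpt) checks B-smoothness and reproduces the examples in the text.

```python
# Check whether n is B-smooth by dividing out every candidate factor up to B.
def is_smooth(n, B):
    """True if every prime factor of the positive integer n is <= B."""
    d = 2
    while d <= B and n > 1:
        while n % d == 0:
            n //= d
        d += 1
    return n == 1

print(is_smooth(15750, 7), is_smooth(702, 7))        # True False, as in the text
print([m for m in range(1, 31) if is_smooth(m, 5)])  # 5-smooth (regular) numbers up to 30
```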


Integer Factorization
In number theory, integer factorization is the decomposition of a composite number into a product of smaller integers. If these factors are further restricted to prime numbers, the process is called prime factorization. When the numbers are sufficiently large, no efficient non-quantum integer factorization algorithm is known. However, it has not been proven that such an algorithm does not exist. The presumed difficulty of this problem is important for the algorithms used in cryptography such as RSA public-key encryption and the RSA digital signature. Many areas of mathematics and computer science have been brought to bear on the problem, including elliptic curves, algebraic number theory, and quantum computing. In 2019, Fabrice Boudot, Pierrick Gaudry, Aurore Guillevic, Nadia Heninger, Emmanuel Thomé and Paul Zimmermann factored a 240-digit (795-bit) number (RSA-240) utilizing approximately 900 core-years of computing power. The researchers estimated that a 1024-bit RSA ...
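For small inputs the problem is easy; the difficulty the text describes only appears at cryptographic sizes. A minimal trial-division sketch (not an efficient method, and not from the excerpt):

```python
# Prime factorization by trial division -- fine for small n, hopeless at RSA sizes.
def prime_factorization(n):
    """Return the prime factorization of n > 1 as (prime, exponent) pairs."""
    factors = []
    d = 2
    while d * d <= n:
        if n % d == 0:
            e = 0
            while n % d == 0:
                n //= d
                e += 1
            factors.append((d, e))
        d += 1
    if n > 1:
        factors.append((n, 1))        # whatever remains is prime
    return factors

print(prime_factorization(1620))      # [(2, 2), (3, 4), (5, 1)]
```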



Elliptic Curves
In mathematics, an elliptic curve is a smooth, projective, algebraic curve of genus one, on which there is a specified point ''O''. An elliptic curve is defined over a field ''K'' and describes points in ''K''², the Cartesian product of ''K'' with itself. If the field's characteristic is different from 2 and 3, then the curve can be described as a plane algebraic curve which consists of solutions (''x'', ''y'') for:
:y^2 = x^3 + ax + b
for some coefficients ''a'' and ''b'' in ''K''. The curve is required to be non-singular, which means that the curve has no cusps or self-intersections. (This is equivalent to the condition 4a^3 + 27b^2 \neq 0, that is, to x^3 + ax + b being square-free in ''x''.) It is always understood that the curve is really sitting in the projective plane, with the point ''O'' being the unique point at infinity. Many sources define an elliptic curve to be simply a curve given by an equation of this form. (When the coefficient field has characteristic 2 or 3, the above equation is not quite general enough to include all non-singular cubic curve ...
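As a hedged sketch of the conditions above (assuming the short Weierstrass form and characteristic different from 2 and 3; the helper names are invented), the snippet below checks non-singularity via the discriminant condition and tests whether a point satisfies the curve equation.

```python
# Check 4a^3 + 27b^2 != 0 (non-singularity) and point membership for
# y^2 = x^3 + a*x + b, either over the rationals or modulo a prime p.

def is_nonsingular(a, b, p=None):
    d = 4 * a**3 + 27 * b**2
    return d % p != 0 if p else d != 0

def on_curve(x, y, a, b, p=None):
    diff = y * y - (x**3 + a * x + b)
    return diff % p == 0 if p else diff == 0

print(is_nonsingular(-1, 0))       # True:  y^2 = x^3 - x is non-singular
print(is_nonsingular(0, 0))        # False: y^2 = x^3 has a cusp at the origin
print(on_curve(0, 0, -1, 0))       # True:  (0, 0) lies on y^2 = x^3 - x
```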



Probabilistic
Probability is the branch of mathematics concerning numerical descriptions of how likely an event is to occur, or how likely it is that a proposition is true. The probability of an event is a number between 0 and 1, where, roughly speaking, 0 indicates impossibility of the event and 1 indicates certainty ("Kendall's Advanced Theory of Statistics, Volume 1: Distribution Theory", Alan Stuart and Keith Ord, 6th ed., 2009; William Feller, ''An Introduction to Probability Theory and Its Applications'', Vol. 1, 3rd ed., 1968, Wiley). The higher the probability of an event, the more likely it is that the event will occur. A simple example is the tossing of a fair (unbiased) coin. Since the coin is fair, the two outcomes ("heads" and "tails") are both equally probable; the probability of "heads" equals the probability of "tails"; and since no other outcomes are possible, the probability of either "heads" or "tails" is 1/2 (which could also be written ...
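The fair-coin computation above is easy to check empirically; a tiny simulation (not from the excerpt):

```python
# Estimate P(heads) for a fair coin by simulation and compare with the exact 1/2.
import random

trials = 100_000
heads = sum(random.random() < 0.5 for _ in range(trials))
print(heads / trials)   # close to 0.5
```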