Leonid Levin
Leonid Anatolievich Levin (Russian: Леони́д Анато́льевич Ле́вин; Ukrainian: Леоні́д Анато́лійович Ле́він; born November 2, 1948) is a Soviet-American mathematician and computer scientist. He is known for his work on randomness in computing, algorithmic complexity and intractability, average-case complexity, foundations of mathematics and computer science, algorithmic probability, the theory of computation, and information theory. He obtained his master's degree at Moscow University in 1970, where he studied under Andrey Kolmogorov, and completed the Candidate of Sciences academic requirements in 1972. He and Stephen Cook independently discovered the existence of NP-complete problems. This NP-completeness theorem, often called the Cook–Levin theorem, was a basis for one of the seven Millennium Prize Problems declared by the Clay Mathematics Institute, with a $1,000,000 prize offered. The Cook–Levin theorem was a breakthrough in comput ...

Dnipropetrovsk
Dnipro, previously called Dnipropetrovsk from 1926 until May 2016, is Ukraine's fourth-largest city, with about one million inhabitants. It is located in the eastern part of Ukraine, southeast of the Ukrainian capital Kyiv, on the Dnieper River, from which its Ukrainian-language name (Dnipro) is derived. Dnipro is the administrative centre of Dnipropetrovsk Oblast and hosts the administration of the Dnipro urban hromada. Archeological evidence suggests the site of the present city was settled by Cossack communities from at least 1524. The town, named Yekaterinoslav (''the glory of Catherine''), was established by decree of the Russian Empress Catherine the Great in 1787 as the administrative center of Novorossiya. From the end of the nineteenth century, the town attracted foreign capital and an international, multi-ethnic workforce exploiting Kryvbas iron ore and Donbas coal. Renamed ''Dnipropetrovsk'' in 1926 after the Ukrainian Communist ...

Analysis Of Algorithms
In computer science, the analysis of algorithms is the process of finding the computational complexity of algorithms: the amount of time, storage, or other resources needed to execute them. Usually, this involves determining a function that relates the size of an algorithm's input to the number of steps it takes (its time complexity) or the number of storage locations it uses (its space complexity). An algorithm is said to be efficient when this function's values are small, or grow slowly compared to the growth in the size of the input. Different inputs of the same size may cause the algorithm to behave differently, so best-, worst- and average-case descriptions might all be of practical interest. When not otherwise specified, the function describing the performance of an algorithm is usually an upper bound, determined from the worst-case inputs to the algorithm. The term "analysis of algorithms" was coined by Donald Knuth. Algorithm analysis is an important part of a br ...
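For illustration, a minimal Python sketch (the helper name linear_search_count is hypothetical) counts the comparisons a simple linear search performs on worst-case inputs of growing size, showing the step count as a function of the input size n:

    def linear_search_count(items, target):
        """Return (found, comparisons) for a straightforward left-to-right scan."""
        comparisons = 0
        for value in items:
            comparisons += 1
            if value == target:
                return True, comparisons
        return False, comparisons

    # Worst case: the target is absent, so every element is inspected.
    for n in (10, 100, 1000, 10000):
        _, steps = linear_search_count(list(range(n)), -1)
        print(f"n = {n:6d}  comparisons = {steps}")   # step count grows linearly with n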

National Academy Of Sciences
The National Academy of Sciences (NAS) is a United States nonprofit, non-governmental organization. NAS is part of the National Academies of Sciences, Engineering, and Medicine, along with the National Academy of Engineering (NAE) and the National Academy of Medicine (NAM). As a national academy, new members of the organization are elected annually by current members, based on their distinguished and continuing achievements in original research. Election to the National Academy is one of the highest honors in the scientific field. Members of the National Academy of Sciences serve ''pro bono'' as "advisers to the nation" on science, engineering, and medicine. The group holds a congressional charter under Title 36 of the United States Code. Founded in 1863 as a result of an Act of Congress that was approved by Abraham Lincoln, the NAS is charged with "providing independent, objective advice to the nation on matters related to science and technology. ... to provide scien ...

Computational Complexity Theory
In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and on relating these classes to each other. A computational problem is a task solved by a computer; it is solvable by the mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and by quantifying their computational complexity, i.e., the amount of resources needed to solve them, such as time and storage. Other measures of complexity are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). One of the roles of compu ...
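For illustration, a minimal Python sketch (function names are hypothetical) contrasts two algorithms for the same problem, computing Fibonacci numbers, whose time and storage requirements differ sharply; the problem itself is easy, but the resources consumed depend on the algorithm chosen:

    import time

    def fib_recursive(n):
        """Exponential-time recursion: recomputes the same subproblems many times."""
        return n if n < 2 else fib_recursive(n - 1) + fib_recursive(n - 2)

    def fib_iterative(n):
        """Linear-time iteration using only constant extra storage."""
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    for func in (fib_recursive, fib_iterative):
        start = time.perf_counter()
        result = func(30)
        elapsed = time.perf_counter() - start
        print(f"{func.__name__}: fib(30) = {result}, {elapsed:.4f} s")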

Clay Mathematics Institute
The Clay Mathematics Institute (CMI) is a private, non-profit foundation dedicated to increasing and disseminating mathematical knowledge. Formerly based in Peterborough, New Hampshire, the corporate address is now in Denver, Colorado. CMI's scientific activities are managed from the President's office in Oxford, United Kingdom. It gives out various awards and sponsorships to promising mathematicians. The institute was founded in 1998 through the sponsorship of Boston businessman Landon T. Clay. Harvard mathematician Arthur Jaffe was the first president of CMI. While the institute is best known for its Millennium Prize Problems, it carries out a wide range of activities, including a postdoctoral program (ten Clay Research Fellows are supported currently), conferences, workshops, and summer schools. Governance The institute is run according to a standard structure comprising a scientific advisory committee that decides on grant-awarding and research proposals, and a board of ...

Millennium Prize Problems
The Millennium Prize Problems are seven well-known complex mathematical problems selected by the Clay Mathematics Institute in 2000. The Clay Institute has pledged a US$1 million prize for the first correct solution to each problem. According to the official website of the Clay Mathematics Institute, the Millennium Prize Problems are officially also called the Millennium Problems. To date, the only Millennium Prize problem to have been solved is the Poincaré conjecture. The Clay Institute awarded the monetary prize to Russian mathematician Grigori Perelman in 2010. However, he declined the award as it was not also offered to Richard S. Hamilton, upon whose work Perelman built. The remaining six unsolved problems are the Birch and Swinnerton-Dyer conjecture, Hodge conjecture, Navier–Stokes existence and smoothness, P versus NP problem, Riemann hypothesis, and Yang–Mills existence and mass gap. Overview The Clay Institute was inspired by a set of twenty-three problems ...

NP-completeness
In computational complexity theory, a problem is NP-complete when: (1) it is a problem for which the correctness of each solution can be verified quickly (namely, in polynomial time) and a brute-force search algorithm can find a solution by trying all possible solutions; and (2) the problem can be used to simulate every other problem for which we can verify quickly that a solution is correct. In this sense, NP-complete problems are the hardest of the problems for which solutions can be verified quickly. If we could find solutions of some NP-complete problem quickly, we could quickly find the solutions of every other problem for which a given solution can be easily verified. The name "NP-complete" is short for "nondeterministic polynomial-time complete". In this name, "nondeterministic" refers to nondeterministic Turing machines, a way of mathematically formalizing the idea of a brute-force search algorithm. Polynomial time refers to an amount of time that is considered "quick" for a dete ...
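For illustration, a minimal Python sketch (the clause encoding and function names are hypothetical) contrasts the quick, polynomial-time verification of a proposed solution to a small Boolean satisfiability instance with a brute-force search over all possible assignments:

    from itertools import product

    # A tiny CNF formula over variables x0, x1, x2; each literal is (variable index, is_positive).
    clauses = [[(0, True), (1, False)],       # (x0 OR NOT x1)
               [(1, True), (2, True)],        # (x1 OR x2)
               [(0, False), (2, False)]]      # (NOT x0 OR NOT x2)

    def verify(assignment, clauses):
        """Polynomial-time check that the assignment satisfies every clause."""
        return all(any(assignment[var] == positive for var, positive in clause)
                   for clause in clauses)

    def brute_force(clauses, num_vars):
        """Exponential-time search: try all 2**num_vars assignments."""
        for bits in product([False, True], repeat=num_vars):
            if verify(bits, clauses):
                return bits
        return None

    solution = brute_force(clauses, 3)
    print("satisfying assignment:", solution)
    print("verified quickly:", verify(solution, clauses))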

Stephen Cook
Stephen Arthur Cook (born December 14, 1939) is an American-Canadian computer scientist and mathematician who has made significant contributions to the fields of complexity theory and proof complexity. He is a university professor at the University of Toronto, Department of Computer Science and Department of Mathematics. Biography Cook received his bachelor's degree in 1961 from the University of Michigan, and his master's degree and PhD from Harvard University, respectively in 1962 and 1966, from the Mathematics Department. He joined the University of California, Berkeley, mathematics department in 1966 as an assistant professor, and stayed there until 1970 when he was denied reappointment. In a speech celebrating the 30th anniversary of the Berkeley electrical engineering and computer sciences department, fellow Turing Award winner and Berkeley professor Richard Karp said that, "It is to our everlasting shame that we were unable to persuade the math department to give him t ...

Candidate Of Sciences
Candidate of Sciences (Russian: кандидат наук, kandidat nauk) is the first of two doctoral-level scientific degrees in Russia and the Commonwealth of Independent States. It is formally classified as UNESCO's ISCED level 8, "doctoral or equivalent". It may be recognized as a Doctor of Philosophy, usually in the natural sciences, by scientific institutions in other countries. Former Soviet countries also have a more advanced degree, the Doctor of Sciences. Overview The degree was first introduced in the USSR on 13 January 1934 by a decision of the Council of People's Commissars of the USSR, all previous degrees, ranks and titles having been abolished immediately after the October Revolution in 1917. Academic distinctions and ranks were viewed as survivals of capitalist inequality and hence were to be permanently eliminated. The original decree also recognized some degrees earned prior to 1917 in Tsarist Russia and elsewhere. To attain the Candidate of Sciences ...

Information Theory
Information theory is the scientific study of the quantification, storage, and communication of information. The field was originally established by the works of Harry Nyquist and Ralph Hartley, in the 1920s, and Claude Shannon in the 1940s. The field is at the intersection of probability theory, statistics, computer science, statistical mechanics, information engineering, and electrical engineering. A key measure in information theory is entropy. Entropy quantifies the amount of uncertainty involved in the value of a random variable or the outcome of a random process. For example, identifying the outcome of a fair coin flip (with two equally likely outcomes) provides less information (lower entropy) than specifying the outcome from a roll of a die (with six equally likely outcomes). Some other important measures in information theory are mutual information, channel capacity, error exponents, and relative entropy. Important sub-fields of information theory include sourc ...
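For illustration, a minimal Python sketch (the function name entropy is hypothetical) computes the Shannon entropy of a fair coin flip and of a fair die roll, matching the comparison above:

    from math import log2

    def entropy(probabilities):
        """Shannon entropy in bits: H = -sum(p * log2(p)) over outcomes with p > 0."""
        return -sum(p * log2(p) for p in probabilities if p > 0)

    fair_coin = [0.5, 0.5]
    fair_die = [1 / 6] * 6

    print(f"fair coin: {entropy(fair_coin):.3f} bits")   # 1.000 bit
    print(f"fair die:  {entropy(fair_die):.3f} bits")    # about 2.585 bits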


Theory Of Computation
In theoretical computer science and mathematics, the theory of computation is the branch that deals with what problems can be solved on a model of computation using an algorithm, how efficiently they can be solved, and to what degree (e.g., approximate solutions versus precise ones). The field is divided into three major branches: automata theory and formal languages, computability theory, and computational complexity theory, which are linked by the question: "What are the fundamental capabilities and limitations of computers?" In order to perform a rigorous study of computation, computer scientists work with a mathematical abstraction of computers called a model of computation. There are several models in use, but the most commonly examined is the Turing machine. Computer scientists study the Turing machine because it is simple to formulate, can be analyzed and used to prove results, and because it represents what many con ...
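For illustration, a minimal Python sketch (a simplified simulator with hypothetical names, not any standard formulation) runs a tiny deterministic Turing machine that appends a single '1' to a unary input:

    def run_turing_machine(transitions, tape, start, accept, blank="_", max_steps=1000):
        """Simulate a deterministic Turing machine on a one-way-infinite tape."""
        tape = dict(enumerate(tape))            # sparse tape: position -> symbol
        state, head, steps = start, 0, 0
        while state != accept and steps < max_steps:
            symbol = tape.get(head, blank)
            write, move, state = transitions[(state, symbol)]
            tape[head] = write
            head += 1 if move == "R" else -1
            steps += 1
        cells = "".join(tape[i] for i in sorted(tape))
        return cells.strip(blank), steps

    # Transition table: (state, read symbol) -> (write symbol, move, next state)
    unary_increment = {
        ("scan", "1"): ("1", "R", "scan"),   # skip over the existing 1s
        ("scan", "_"): ("1", "R", "done"),   # append one more 1, then accept
    }

    output, steps = run_turing_machine(unary_increment, "111", "scan", "done")
    print(output, "in", steps, "steps")   # prints: 1111 in 4 steps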