List of important publications in theoretical computer science
This is a list of important publications in theoretical computer science (TCS), organized by field. Some reasons why a particular publication might be regarded as important:
*Topic creator – A publication that created a new topic
*Breakthrough – A publication that changed scientific knowledge significantly
*Influence – A publication that has significantly influenced the world or has had a massive impact on the teaching of theoretical computer science.


Computability


''On computable numbers, with an application to the Entscheidungsproblem''

* Alan Turing (1937)
* ''Proceedings of the London Mathematical Society, Series 2'', vol. 42, pp. 230–265, 1937. Errata appeared in vol. 43, pp. 544–546, 1938.
Description: This article set the limits of computer science. It defined the Turing machine, a model for all computations. It also proved the undecidability of the halting problem and of the Entscheidungsproblem, and by doing so found the limits of possible computation.
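
To illustrate the diagonal argument at the heart of the undecidability proof, here is a minimal Python sketch (the helper `halts` is hypothetical; Turing's result is precisely that it cannot exist):

```python
# Hypothetical decider: returns True iff program(data) halts.
# Turing's theorem says no total, always-correct implementation exists.
def halts(program, data):
    raise NotImplementedError("no such decider can exist")

def diagonal(program):
    """Do the opposite of whatever `halts` predicts for self-application."""
    if halts(program, program):
        while True:      # loop forever when the decider answers "halts"
            pass
    return None          # halt when the decider answers "loops"

# Contradiction: if halts(diagonal, diagonal) were True, diagonal(diagonal)
# would loop forever; if it were False, it would halt. Either way the
# supposed decider is wrong on some input, so it cannot exist.
```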


''Rekursive Funktionen''

* The first textbook on the theory of recursive functions. The book went through many editions and earned Péter the Kossuth Prize from the Hungarian government. Reviews by Raphael M. Robinson and Stephen Kleene praised the book for providing an effective elementary introduction for students.


''Representation of Events in Nerve Nets and Finite Automata''

* S. C. Kleene (1951)
* U.S. Air Force Project Rand Research Memorandum ''RM-704'', 1951
Description: This paper introduced finite automata, regular expressions, and regular languages, and established their connection.
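
As an illustration of the objects the paper connects (not taken from the paper itself), here is a deterministic finite automaton in Python accepting the regular language denoted by the regular expression (ab)*:

```python
def accepts(word: str) -> bool:
    """DFA for the regular language (ab)*: alternating a's and b's."""
    transitions = {("q0", "a"): "q1", ("q1", "b"): "q0"}
    state = "q0"                          # start state
    for symbol in word:
        state = transitions.get((state, symbol))
        if state is None:                 # undefined transition: reject
            return False
    return state == "q0"                  # q0 is also the accepting state

assert accepts("") and accepts("abab")
assert not accepts("aba") and not accepts("ba")
```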


''Finite automata and their decision problems''

* Michael O. Rabin and Dana S. Scott (1959)
* ''IBM Journal of Research and Development'', vol. 3, pp. 114–125, 1959
Description: Mathematical treatment of automata, proof of core properties, and definition of the non-deterministic finite automaton.
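
The key idea, simulating nondeterminism by tracking sets of states (the subset construction), can be sketched in a few lines of Python; the example automaton below is illustrative, not from the paper:

```python
def nfa_accepts(word, delta, start, accepting):
    """delta maps (state, symbol) to a *set* of successor states."""
    current = {start}                     # the set of reachable states
    for symbol in word:
        current = set().union(*(delta.get((q, symbol), set())
                                for q in current))
    return bool(current & accepting)

# NFA for words over {a, b} ending in "ab": guess when the suffix starts.
delta = {("s", "a"): {"s", "x"}, ("s", "b"): {"s"}, ("x", "b"): {"f"}}
assert nfa_accepts("baab", delta, "s", {"f"})
assert not nfa_accepts("ba", delta, "s", {"f"})
```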


''On certain formal properties of grammars''

* Description: This article introduced what is now known as the Chomsky hierarchy, a containment hierarchy of classes of formal grammars that generate formal languages.
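
In modern notation the hierarchy is the strict chain of language classes (Type 3 through Type 0):

```latex
\mathrm{Regular} \;\subsetneq\; \mathrm{ContextFree} \;\subsetneq\;
\mathrm{ContextSensitive} \;\subsetneq\; \mathrm{RecursivelyEnumerable}
```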


''Decidability of second order theories and automata on infinite trees''

* Michael O. Rabin (1969)
* ''Transactions of the American Mathematical Society'', vol. 141, pp. 1–35, 1969
Description: The paper presented the tree automaton, an extension of finite automata. Tree automata have had numerous applications to proofs of correctness of programs.


''Cutland's ''Computability: An Introduction to Recursive Function Theory'' (Cambridge)''

* The review of this early text by Carl Smith of Purdue University (in the ''Society for Industrial and Applied Mathematics Reviews'') reports that it is a text with an "appropriate blend of intuition and rigor… in the exposition of proofs" that presents "the fundamental results of classical recursion theory [RT]... in a style... accessible to undergraduates with minimal mathematical background". While he states that it "would make an excellent introductory text for an introductory course in [RT] for mathematics students", he suggests that an "instructor must be prepared to substantially augment the material… " when it is used with computer science students (given a dearth of material on RT applications to this area).


''Introduction to Automata Theory, Languages, and Computation''

* John E. Hopcroft, Jeffrey D. Ullman, and Rajeev Motwani (2001)
* Addison-Wesley, 2001
Description: A popular textbook.


Computational complexity theory


''A letter from Gödel to von Neumann''

* Kurt Gödel (1956)
* A letter from Gödel to John von Neumann, March 20, 1956
Description: Gödel discusses the idea of an efficient universal theorem prover.


''Degree of difficulty of computing a function and a partial ordering of recursive sets''

* Description: This technical report was the first publication to discuss what was later renamed computational complexity. Shasha, Dennis, "An Interview with Michael O. Rabin", ''Communications of the ACM'', vol. 53, no. 2, pp. 37–42, February 2010.


''The Intrinsic Computational Difficulty of Functions''

* Description: First definition of the complexity class P. One of the founding papers of complexity theory.
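
In modern notation, the class singled out here is usually written as the union of all polynomial time bounds:

```latex
\mathsf{P} \;=\; \bigcup_{k \ge 1} \mathsf{TIME}\!\left(n^{k}\right)
```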


''On the computational complexity of algorithms''

* Description: This paper gave computational complexity its name and seed.


''A machine-independent theory of the complexity of recursive functions''

* Description: The Blum axioms.
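
For reference, the two axioms require, for a complexity measure \Phi_i attached to an enumeration \varphi_i of the partial computable functions:

```latex
\text{(1)}\;\; \mathrm{dom}(\Phi_i) = \mathrm{dom}(\varphi_i)
\qquad
\text{(2)}\;\; \{(i, x, m) : \Phi_i(x) = m\} \ \text{is decidable}
```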


''How good is the simplex method?''

* Victor Klee and George J. Minty (1969)
* Description: Constructed the "Klee–Minty cube" in dimension ''D'', whose 2^''D'' corners are each visited by Dantzig's simplex algorithm for linear optimization.
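
One common statement of the construction (conventions vary across presentations) is the linear program

```latex
\begin{aligned}
\max\ & \textstyle\sum_{j=1}^{D} 2^{\,D-j}\, x_j\\[2pt]
\text{s.t.}\ & \textstyle\sum_{j=1}^{i-1} 2^{\,i-j+1}\, x_j + x_i \;\le\; 5^{\,i},
  \qquad i = 1,\dots,D,\\[2pt]
& x_j \ge 0, \qquad j = 1,\dots,D,
\end{aligned}
```

whose feasible region is a deformed ''D''-cube on which the classic pivot rule traverses all 2^''D'' vertices.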


''The complexity of theorem proving procedures''

* Description: This paper introduced the concept of NP-completeness and proved that the Boolean satisfiability problem (SAT) is NP-complete. Note that similar ideas were developed independently slightly later by Leonid Levin in "Levin, Universal Search Problems, Problemy Peredachi Informatsii 9(3):265–266, 1973".


''Reducibility among combinatorial problems''

* R. M. Karp (1972)
* In R. E. Miller and J. W. Thatcher, editors, ''Complexity of Computer Computations'', Plenum Press, New York, NY, 1972, pp. 85–103
Description: This paper showed that 21 different problems are NP-complete and showed the importance of the concept.


''Computers and Intractability: A Guide to the Theory of NP-Completeness''

* Description: The main importance of this book is its extensive list of more than 300 NP-complete problems. This list became a common reference and definition. Even though the book was published only a few years after the concept was defined, such an extensive list could already be assembled.


''How to construct random functions''

* Description: This paper showed that the existence of one-way functions leads to computational randomness.


''The Knowledge Complexity of Interactive Proof Systems''

* Description: This paper introduced the concept of zero knowledge.


''Algebraic methods for interactive proof systems''

* Description: This paper showed that PH is contained in IP.


''IP = PSPACE''

* Description: IP is a complexity class whose characterization (based on interactive proof systems) is quite different from the usual time- and space-bounded computational classes. In this paper, Shamir extended the technique of the previous paper by Lund et al. to show that PSPACE is contained in IP, and hence IP = PSPACE, so that each problem in one complexity class is solvable in the other.


''Arora & Barak's ''Computational Complexity'' and Goldreich's ''Computational Complexity'' (both Cambridge)''

* Sanjeev Arora and Boaz Barak (2009), ''Computational Complexity: A Modern Approach'', Cambridge University Press, 579 pages, hardcover
* Oded Goldreich (2008), ''Computational Complexity: A Conceptual Perspective'', Cambridge University Press, 606 pages, hardcover
Besides the estimable press bringing these recent texts forward, they are very positively reviewed in ''ACM SIGACT News'' by Daniel Apon of the University of Arkansas, who identifies them as "textbooks for a course in complexity theory, aimed at early graduate… or... advanced undergraduate students… [w]ith numerous, unique strengths and very few weaknesses". The reviewer notes that there is "a definite attempt in [Arora and Barak] to include very up-to-date material, while Goldreich focuses more on developing a contextual and historical foundation for each concept presented," and that he "applaud[s] all… authors for their outstanding contributions." Daniel Apon, 2010, "Joint Review of ''Computational Complexity: A Conceptual Perspective'' by Oded Goldreich… and ''Computational Complexity: A Modern Approach'' by Sanjeev Arora and Boaz Barak…," ''ACM SIGACT News'', Vol. 41(4), December 2010, pp. 12–15, accessed 1 February 2015.


''Paths, trees, and flowers''

* Description: There is a polynomial-time algorithm to find a maximum matching in a graph that is not bipartite; another step toward the idea of computational complexity.

''Theory and applications of trapdoor functions''

* Description: This paper creates a theoretical framework for trapdoor functions and describes some of their applications, such as in cryptography. Note that the concept of trapdoor functions was introduced in "New directions in cryptography" six years earlier (see section V, "Problem Interrelationships and Trap Doors").


''Computational Complexity''

* C. H. Papadimitriou
* Addison-Wesley, 1994
Description: An introduction to computational complexity theory; the book explains its author's characterization of PSPACE and other results.


''Interactive proofs and the hardness of approximating cliques''

*


''Probabilistic checking of proofs: a new characterization of NP''

*


''Proof verification and the hardness of approximation problems''

* Description: These three papers established the surprising fact that certain problems in NP remain hard even when only an approximate solution is required. See the PCP theorem.


Algorithms


"A machine program for theorem proving"

* Description: The DPLL algorithm. The basic algorithm for SAT and other NP-complete problems.
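
A compact Python sketch of the procedure (unit propagation plus splitting; pure-literal elimination omitted for brevity):

```python
def simplify(clauses, lit):
    """Assume literal `lit` is true: drop satisfied clauses and
    remove the falsified literal -lit from the remaining ones."""
    return [c - {-lit} for c in clauses if lit not in c]

def dpll(clauses, assignment=None):
    assignment = dict(assignment or {})
    while True:                                   # unit propagation
        units = [next(iter(c)) for c in clauses if len(c) == 1]
        if not units:
            break
        lit = units[0]
        assignment[abs(lit)] = lit > 0
        clauses = simplify(clauses, lit)
    if not clauses:
        return assignment                         # every clause satisfied
    if frozenset() in clauses:
        return None                               # empty clause: conflict
    lit = next(iter(next(iter(clauses))))         # split on some literal
    return (dpll(simplify(clauses, lit), {**assignment, abs(lit): lit > 0}) or
            dpll(simplify(clauses, -lit), {**assignment, abs(lit): lit < 0}))

formula = [frozenset(c) for c in ([1, -2], [2, 3], [-1])]
print(dpll(formula))                              # {1: False, 2: False, 3: True}
```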


"A machine-oriented logic based on the resolution principle"

* Description: First description of resolution and unification as used in automated theorem proving; used in Prolog and logic programming.


"The traveling-salesman problem and minimum spanning trees"

* Description: The use of an algorithm for the minimum spanning tree as an approximation algorithm for the NP-complete travelling salesman problem. Approximation algorithms became a common method for coping with NP-complete problems.
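
A Python sketch of the idea (Prim's algorithm for the MST, then a shortcut preorder walk; the triangle inequality is assumed, as in metric TSP):

```python
import heapq

def tsp_approx(dist):
    """2-approximation for metric TSP: preorder walk of an MST."""
    n = len(dist)
    parent, seen = {}, {0}
    heap = [(dist[0][v], 0, v) for v in range(1, n)]
    heapq.heapify(heap)
    while len(seen) < n:                  # Prim's algorithm
        _, u, v = heapq.heappop(heap)
        if v in seen:
            continue
        seen.add(v)
        parent[v] = u
        for w in range(n):
            if w not in seen:
                heapq.heappush(heap, (dist[v][w], v, w))
    children = {}
    for v, u in parent.items():
        children.setdefault(u, []).append(v)
    tour, stack = [], [0]                 # preorder walk, shortcutting
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(children.get(u, []))
    return tour + [0]

dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
print(tsp_approx(dist))                   # cost at most twice optimal
```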


"A polynomial algorithm in linear programming"

* L. G. Khachiyan (1979)
* ''Soviet Mathematics - Doklady'', vol. 20, pp. 191–194, 1979
Description: For a long time, there was no provably polynomial-time algorithm for the linear programming problem. Khachiyan was the first to provide an algorithm that was polynomial (and not just fast enough most of the time, as previous algorithms were). Later, Narendra Karmarkar presented a faster algorithm in: Narendra Karmarkar, "A new polynomial time algorithm for linear programming", ''Combinatorica'', vol. 4, no. 4, pp. 373–395, 1984.


"Probabilistic algorithm for testing primality"

* Description: The paper presented the Miller–Rabin primality test and outlined the program of randomized algorithms.
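
A standard modern rendering of the test in Python (not the paper's original presentation; each random base is a potential witness of compositeness, and k rounds leave an error probability of at most 4^(-k)):

```python
import random

def is_probably_prime(n, rounds=20):
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    d, s = n - 1, 0
    while d % 2 == 0:                  # write n - 1 = d * 2^s with d odd
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1) # pick a candidate witness
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False               # a witnesses that n is composite
    return True                        # probably prime

print([p for p in range(2, 50) if is_probably_prime(p)])
```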


"Optimization by simulated annealing"

* Description: This article described simulated annealing, which is now a very common heuristic for NP-complete problems.
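
A generic version of the heuristic in Python (illustrative; the paper applied it to physical design and optimization problems):

```python
import math
import random

def anneal(state, energy, neighbour, t0=1.0, cooling=0.995, steps=10_000):
    """Minimize `energy` by accepting uphill moves with probability
    exp(-delta / T) while the temperature T is cooled geometrically."""
    best = current = state
    t = t0
    for _ in range(steps):
        candidate = neighbour(current)
        delta = energy(candidate) - energy(current)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = candidate
            if energy(current) < energy(best):
                best = current
        t *= cooling
    return best

# Toy usage: minimize (x - 3)^2 over the reals.
print(anneal(0.0, lambda x: (x - 3) ** 2,
             lambda x: x + random.uniform(-1, 1)))
```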


''The Art of Computer Programming''

* Donald Knuth
Description: This monograph has four volumes covering popular algorithms. The algorithms are written in both English and MIX assembly language (or MMIX assembly language in more recent fascicles). This makes algorithms both understandable and precise. However, the use of a low-level programming language frustrates some programmers more familiar with modern structured programming languages.


''Algorithms + Data Structures = Programs''

* Niklaus Wirth
* Prentice Hall, 1976
Description: An early, influential book on algorithms and data structures, with implementations in Pascal.


''The Design and Analysis of Computer Algorithms''

* Alfred V. Aho, John E. Hopcroft, and Jeffrey D. Ullman
* Addison-Wesley, 1974
Description: One of the standard texts on algorithms for the period of approximately 1975–1985.


''How to Solve It By Computer''

* Description: Explains the ''Why''s of algorithms and data structures: the ''Creative Process'', the ''Line of Reasoning'', and the ''Design Factors'' behind innovative solutions.


''Algorithms''

* Robert Sedgewick
* Addison-Wesley, 1983
Description: A very popular text on algorithms in the late 1980s. It was more accessible and readable (but more elementary) than Aho, Hopcroft, and Ullman. There are more recent editions.


''Introduction to Algorithms''

* Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein
* 3rd edition, MIT Press, 2009
Description: This textbook has become so popular that it is almost the de facto standard for teaching basic algorithms. The 1st edition (with the first three authors) was published in 1990, the 2nd edition in 2001, and the 3rd in 2009.


Algorithmic information theory


"On Tables of Random Numbers"

* Description: Proposed a computational and combinatorial approach to probability.


"A formal theory of inductive inference"

* Ray Solomonoff
* ''Information and Control'', vol. 7, pp. 1–22 and 224–254, 1964
Description: This was the beginning of algorithmic information theory and Kolmogorov complexity. Note that though Kolmogorov complexity is named after Andrey Kolmogorov, he said that the seeds of the idea are due to Ray Solomonoff. Andrey Kolmogorov contributed a great deal to this area, but in later articles.
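
The central definition of the field, for a fixed universal machine U, is the length of a shortest program that outputs x:

```latex
K_U(x) \;=\; \min \{\, |p| \;:\; U(p) = x \,\}
```

Changing the universal machine shifts K by at most an additive constant (the invariance theorem).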


"Algorithmic information theory"

* Description: An introduction to algorithmic information theory by one of the important people in the area.


Information theory


"A mathematical theory of communication"

* Description: This paper created the field of information theory.
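
Its central quantity is the entropy of a source that emits symbol i with probability p_i:

```latex
H \;=\; -\sum_{i} p_i \log_2 p_i \quad\text{bits per symbol}
```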


"Error detecting and error correcting codes"

* Description: In this paper, Hamming introduced the idea of error-correcting codes. He created the Hamming code and the Hamming distance and developed methods for code optimality proofs.
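
A Python sketch of the classic Hamming(7,4) instance (illustrative helper names; parity bits sit at positions 1, 2, and 4, and the syndrome names the position of any single flipped bit):

```python
def encode(d):                      # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]         # covers positions 1, 3, 5, 7
    p2 = d[0] ^ d[2] ^ d[3]         # covers positions 2, 3, 6, 7
    p3 = d[1] ^ d[2] ^ d[3]         # covers positions 4, 5, 6, 7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]   # positions 1..7

def correct(c):
    s = (c[0] ^ c[2] ^ c[4] ^ c[6]) \
        | (c[1] ^ c[2] ^ c[5] ^ c[6]) << 1 \
        | (c[3] ^ c[4] ^ c[5] ^ c[6]) << 2
    if s:                           # syndrome = 1-based error position
        c[s - 1] ^= 1
    return c

word = encode([1, 0, 1, 1])
word[4] ^= 1                        # flip one bit in transit
assert correct(word) == encode([1, 0, 1, 1])
```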


"A method for the construction of minimum redundancy codes"

* Description: Huffman coding.
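
The construction itself is short; a Python sketch using a min-heap (the tuple counter breaks ties between equal weights, and at least two symbols are assumed):

```python
import heapq

def huffman_codes(freqs):
    """Map each symbol to its prefix-free code, merging lightest trees."""
    heap = [(w, i, {sym: ""}) for i, (sym, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)   # two lightest subtrees
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (w1 + w2, counter, merged))
        counter += 1
    return heap[0][2]

print(huffman_codes({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5}))
```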


"A universal algorithm for sequential data compression"

* Description: The LZ77 compression algorithm.
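
A toy Python encoder showing the core idea: each output triple (offset, length, next char) copies a match from a sliding window of already-emitted text (matches may overlap the current position, which is how runs compress):

```python
def lz77_encode(data, window=255):
    """Emit (offset, length, next_char) triples, longest-match greedy."""
    i, out = 0, []
    while i < len(data):
        best_off = best_len = 0
        for j in range(max(0, i - window), i):      # scan the window
            k = 0
            while i + k < len(data) - 1 and data[j + k] == data[i + k]:
                k += 1                              # match may overlap i
            if k > best_len:
                best_off, best_len = i - j, k
        out.append((best_off, best_len, data[i + best_len]))
        i += best_len + 1
    return out

print(lz77_encode("aacaacabcabaaac"))
```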


''Elements of Information Theory''

* Description: A popular introduction to information theory.


Formal verification


Assigning meaning to programs

* Description: Floyd's landmark paper introduces the method of inductive assertions and describes how a program (presented as a flowchart) annotated with first-order assertions may be shown to satisfy a pre- and post-condition specification. The paper also introduces the concepts of loop invariant and verification condition.
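
The method can be illustrated (in Python rather than on a flowchart) by attaching an assertion that holds on every pass through a loop:

```python
def sum_upto(n):
    total, i = 0, 0
    while i <= n:
        # Inductive assertion (loop invariant): total == 0 + 1 + ... + (i-1)
        assert total == i * (i - 1) // 2
        total += i
        i += 1
    # On exit i == n + 1, so the invariant yields the post-condition:
    assert total == n * (n + 1) // 2
    return total

print(sum_upto(10))   # 55
```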


An axiomatic basis for computer programming

* Description: Hoare's paper gives a set of logical rules for reasoning rigorously about the correctness of computer programs. It defines an Algol-like programming language using what are now called Hoare triples. It has been used as the basis for most later work in the field, rather than Floyd's paper "Assigning meaning to programs", which did the same in terms of flowcharts.
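
For example, the paper's assignment axiom, in the triple notation it introduced, substitutes the assigned expression into the post-condition:

```latex
\{P[e/x]\}\; x := e \;\{P\}
\qquad\text{e.g.}\qquad
\{y + 1 > 0\}\; x := y + 1 \;\{x > 0\}
```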


Guarded Commands, Nondeterminacy and Formal Derivation of Programs

* Description: Dijkstra's paper proposes (for the first time) that, instead of formally verifying a program after it has been written (i.e. post facto), programs and their formal proofs should be developed hand-in-hand (using predicate transformers to progressively refine weakest pre-conditions), a method known as program (or formal) refinement (or derivation), or sometimes "correctness-by-construction".
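
The predicate transformers at the heart of the method map a statement and a desired post-condition to the weakest pre-condition that guarantees it; for assignment (in the notation that became standard):

```latex
wp(x := e,\; P) \;=\; P[e/x]
\qquad\text{e.g.}\qquad
wp(x := x + 1,\; x > 0) \;=\; (x + 1 > 0)
```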


''Proving Assertions about Parallel Programs''

* Ashcroft, Edward A. (1975), ''J. Comput. Syst. Sci.'' 10(1): 110–135
Description: The paper introduced invariance proofs of concurrent programs. It did not receive much attention because it discussed programs in terms of flowcharts and was not written clearly enough.


''An Axiomatic Proof Technique for Parallel Programs I''

* Description: This paper, along with the same authors' companion paper, extends the axiomatic approach to parallel-program verification with the notion of interference freedom.


''A Discipline of Programming''

* Dijkstra, Edsger W. (1976)
Description: Dijkstra's classic postgraduate-level textbook extends his earlier paper ''Guarded commands, nondeterminacy and formal derivation of programs'' and firmly establishes the principle of formally deriving programs and their proofs hand-in-hand from their specifications.


The Temporal Logic of Programs

* Description: Pnueli suggests temporal logic for formal verification, in which the basic concept is the time dependence of events.
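
A typical property expressible in this style, using the now-standard "always" and "eventually" operators:

```latex
\square\,(\mathit{request} \rightarrow \lozenge\,\mathit{grant})
\qquad\text{(every request is eventually followed by a grant)}
```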


''Communicating Sequential Processes''

* Description: Hoare's paper introduces the idea of concurrent processes (i.e. programs) that do not share variables but instead cooperate solely by exchanging synchronous messages. This spurred a great deal of research and advances, which are detailed in the article on communicating sequential processes.

''Characterizing correctness properties of parallel programs using fixpoints''

Description: Introduces model checking as a procedure to check correctness of concurrent programs.


''A Calculus of Communicating Systems''

Description: Milner's book describes a process algebra, called CCS, that permits systems of concurrent processes to be reasoned about formally, something that has not been possible for earlier models of concurrency (semaphores, critical sections, original CSP).


''Software Development: A Rigorous Approach''

Description: Jones' book is the first full-length exposition of the Vienna Development Method (VDM), which had evolved (principally) at IBM's Vienna research lab over the previous decade and which combines the idea of program refinement as per Dijkstra with that of data refinement (or reification) whereby algebraically defined abstract data types are formally transformed into progressively more "concrete" representations.


''The Science of Programming''

* Description: This textbook describes Dijkstra's weakest-precondition method of formal program derivation, in a much more accessible manner than Dijkstra's earlier ''A Discipline of Programming''. It has jokingly been called "Dijkstra for the masses". The emphasis is on developing program and proof hand-in-hand, with the proof ideas leading the way. The examples in the book are small-scale, emphasizing basic algorithms such as sorting, merging, and string manipulation. Subroutines (functions) are included, but not object-oriented or functional programming environments.


''Denotational Semantics''

* Stoy, Joe (1981)
Description: This book is the first postgraduate-level book-length exposition of the mathematical (or functional) approach to the formal semantics of programming languages (in contrast to the operational and algebraic approaches).


''Communicating Sequential Processes''

Hoare, Tony (1985). ''Communicating Sequential Processes''. Prentice Hall International Series in Computer Science (hardback or paperback). PDF available at http://www.usingcsp.com/
Description: Hoare's book (currently the third most cited computer science reference of all time) presents an updated CSP model in which cooperating processes do not even have program variables and which, like CCS, permits systems of processes to be reasoned about formally.


''Linear logic''

* Description: Girard's linear logic was a breakthrough in designing typing systems for sequential and concurrent computation, especially for resource-conscious typing systems.


''A Calculus of Mobile Processes''

Description: This paper introduces the pi-calculus, a generalisation of CCS that allows process mobility. This extremely simple calculus has become the dominant paradigm in the theoretical study of programming languages, typing systems, and program logics. A second paper, which completes the work, is available online.


''Communication and Concurrency''

Description: Milner's text is a more accessible, although still technically advanced, exposition of his earlier CCS work.


''The Z Notation: A Reference Manual''

* Description: Spivey's classic text summarises the formal specification language Z notation, which, although originated by Jean-Raymond Abrial, had evolved (principally) at Oxford University over the previous decade.


''a Practical Theory of Programming''

Description: The up-to-date version of predicative programming, the basis for C. A. R. Hoare's UTP, and among the simplest and most comprehensive formal methods. The latest edition is available online.

