Counter-machine Model
Although some authors use the name "register machine" synonymously with "counter machine", this article will give details and examples of only the most primitive species, the "counter machine", of the genus "register machine". Within the species "counter machine" there are a number of varieties: the models of Hermes (1954), Kaphengst (1957), Ershov (1958), Péter (1958), Minsky (1961) and Minsky (1967), Melzak (1961), Lambek (1961), Shepherdson and Sturgis (1963), and Schönhage (1980). These models are described in more detail in the following.

The models in more detail

1954: Hermes' model

Shepherdson and Sturgis observe that "the proof of this universality of digital computers to Turing machines ... seems to have been first written down by Hermes, who showed in [their reference number] how an idealized computer could be programmed to duplicate the behavior of any Turing machine" (Shepherdson and Sturgis, p. 219). Shepherdson and Sturgis observe that: "Kaphengst' ...


Register Machine
In mathematical logic and theoretical computer science, a register machine is a generic class of abstract machines used in a manner similar to a Turing machine. All of the models are Turing equivalent.

Overview

The register machine gets its name from its use of one or more "registers". In contrast to the tape and head used by a Turing machine, the model uses multiple, uniquely addressed registers, each of which holds a single positive integer. There are at least four sub-classes found in the literature, here listed from the most primitive to the most like a computer:
* Counter machine – the most primitive and reduced theoretical model of computer hardware (sketched below). Lacks indirect addressing. Instructions are in the finite state machine in the manner of the Harvard architecture.
* Pointer machine – a blend of counter machine and RAM models. Less common and more abstract than either model. Instructions are in the finite state machine in the manner of the Harvard architecture.
* Random-access mach ...
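As a concrete illustration of the counter-machine sub-class, here is a minimal interpreter sketch in Python. It assumes a Minsky-style instruction set (increment, and decrement-or-jump-if-zero); the instruction names, the tuple encoding, and the run helper are illustrative choices, not taken from any particular model listed above.

```python
# Minimal counter-machine interpreter (illustrative sketch).
# A program is a list of tuples; registers hold non-negative integers.
#   ("INC", r, k)      increment register r, then go to instruction k
#   ("JZDEC", r, k, j) if register r is 0 go to k, else decrement r and go to j
#   ("HALT",)          stop

def run(program, registers):
    """Execute a counter-machine program on a dict of registers."""
    pc = 0  # index of the current instruction (the finite-state control)
    while True:
        instr = program[pc]
        if instr[0] == "INC":
            _, r, k = instr
            registers[r] = registers.get(r, 0) + 1
            pc = k
        elif instr[0] == "JZDEC":
            _, r, k, j = instr
            if registers.get(r, 0) == 0:
                pc = k
            else:
                registers[r] -= 1
                pc = j
        elif instr[0] == "HALT":
            return registers

# Example: add register 1 into register 0 (computes r0 := r0 + r1, r1 := 0).
add = [
    ("JZDEC", 1, 2, 1),  # 0: if r1 == 0 go to HALT, else r1 -= 1 and go to 1
    ("INC", 0, 0),       # 1: r0 += 1, loop back to 0
    ("HALT",),           # 2: done
]
print(run(add, {0: 2, 1: 3}))  # {0: 5, 1: 0}
```

Note that the program lives in the interpreter's instruction list, not in the registers, matching the Harvard-architecture remark in the list above.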


William Lowell Putnam Mathematical Competition
The William Lowell Putnam Mathematical Competition, often abbreviated to Putnam Competition, is an annual mathematics competition for undergraduate college students enrolled at institutions of higher learning in the United States and Canada (regardless of the students' nationalities). It awards a scholarship and cash prizes ranging from $250 to $2,500 for the top students and $5,000 to $25,000 for the top schools; in addition, one of the top five individual scorers (designated as ''Putnam Fellows'') is awarded a scholarship of up to $12,000 plus tuition at Harvard University (Putnam Fellow Prize Fellowship), the top 100 individual scorers have their names mentioned in the American Mathematical Monthly (alphabetically ordered within rank), and the names and addresses of the top 500 contestants are mailed to all participating institutions. It is widely considered to be the most prestigious university-level mathematics competition in the world, ...


Space Hierarchy Theorem
In computational complexity theory, the space hierarchy theorems are separation results that show that both deterministic and nondeterministic machines can solve more problems in (asymptotically) more space, subject to certain conditions. For example, a deterministic Turing machine can solve more decision problems in space ''n'' log ''n'' than in space ''n''. The somewhat weaker analogous theorems for time are the time hierarchy theorems. The foundation for the hierarchy theorems lies in the intuition that with either more time or more space comes the ability to compute more functions (or decide more languages). The hierarchy theorems are used to demonstrate that the time and space complexity classes form a hierarchy where classes with tighter bounds contain fewer languages than those with more relaxed bounds. Here we define and prove the space hierarchy theorem. The space hierarchy theorems rely on the concept of space-constructible functions. The deterministic and nondeterminis ...
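For reference, the deterministic form of the theorem is usually stated as follows (the notation \mathsf{SPACE} for the deterministic space-bounded class is assumed here): for every space-constructible function ''f''(''n''),
:\mathsf{SPACE}\left(o\left(f(n)\right)\right) \subsetneq \mathsf{SPACE}\left(f(n)\right).
Taking ''f''(''n'') = ''n'' log ''n'' gives the separation quoted above: since ''n'' = ''o''(''n'' log ''n''), it follows that \mathsf{SPACE}(n) \subsetneq \mathsf{SPACE}(n \log n).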




Time Hierarchy Theorem
In computational complexity theory, the time hierarchy theorems are important statements about time-bounded computation on Turing machines. Informally, these theorems say that given more time, a Turing machine can solve more problems. For example, there are problems that can be solved with ''n''² time but not ''n'' time. The time hierarchy theorem for deterministic multi-tape Turing machines was first proven by Richard E. Stearns and Juris Hartmanis in 1965. It was strengthened a year later when F. C. Hennie and Richard E. Stearns improved the efficiency of the universal Turing machine. Consequent to the theorem, for every deterministic time-bounded complexity class, there is a strictly larger time-bounded complexity class, and so the time-bounded hierarchy of complexity classes does not completely collapse. More precisely, the time hierarchy theorem for deterministic Turing machines states that for all time-constructible functions ''f''(''n''),
:\mathsf{DTIME}\left(o\left(\frac{f(n)}{\log f(n)}\right)\right) \subsetneq \mathsf{DTIME}\left(f(n)\right) ...
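As a concrete instance of this bound: since ''n'' = ''o''(''n''²/log ''n''²), the theorem yields
:\mathsf{DTIME}\left(n\right) \subsetneq \mathsf{DTIME}\left(n^2\right),
which is exactly the ''n'' versus ''n''² separation mentioned in the opening sentences.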


Richard Jeffrey
Richard Carl Jeffrey (August 5, 1926 – November 9, 2002) was an American philosopher, logician, and probability theorist. He is best known for developing and championing the philosophy of radical probabilism and the associated heuristic of probability kinematics, also known as Jeffrey conditioning. Life and career Born in Boston, Massachusetts, Jeffrey served in the U.S. Navy during World War II. As a graduate student he studied under Rudolf Carnap and Carl Hempel. He received his M.A. from the University of Chicago in 1952 and his Ph.D. from Princeton in 1957. After holding academic positions at MIT, City College of New York, Stanford University, and the University of Pennsylvania, he joined the faculty of Princeton in 1974 and became a professor emeritus there in 1999. He was also a visiting professor at the University of California, Irvine. Jeffrey, who died of lung cancer at the age of 76, was known for his sense of humor, which often came through in his breezy writin ...


John P
John is a common English name and surname: * John (given name) * John (surname) John may also refer to: New Testament Works * Gospel of John, a title often shortened to John * First Epistle of John, often shortened to 1 John * Second Epistle of John, often shortened to 2 John * Third Epistle of John, often shortened to 3 John People * John the Baptist (died c. AD 30), regarded as a prophet and the forerunner of Jesus Christ * John the Apostle (lived c. AD 30), one of the twelve apostles of Jesus * John the Evangelist, assigned author of the Fourth Gospel, once identified with the Apostle * John of Patmos, also known as John the Divine or John the Revelator, the author of the Book of Revelation, once identified with the Apostle * John the Presbyter, a figure either identified with or distinguished from the Apostle, the Evangelist and John of Patmos Other people with the given name Religious figures * John, father of Andrew the Apostle and Saint Peter * Pope Joh ...


George Boolos
George Stephen Boolos (4 September 1940 – 27 May 1996) was an American philosopher and a mathematical logician who taught at the Massachusetts Institute of Technology.

Life

Boolos was of Greek-Jewish descent. He graduated with an A.B. in mathematics from Princeton University after completing a senior thesis, titled "A simple proof of Gödel's first incompleteness theorem", under the supervision of Raymond Smullyan. Oxford University awarded him the B.Phil. in 1963. In 1966, he obtained the first PhD in philosophy ever awarded by the Massachusetts Institute of Technology, under the direction of Hilary Putnam. After teaching three years at Columbia University, he returned to MIT in 1969, where he spent the rest of his career. A charismatic speaker well known for his clarity and wit, he once delivered a lecture (1994b) giving an account of Gödel's second incompleteness theorem, employing only words of one syllable. At the end of his viva, Hilary Putnam asked him, "And ...




Pointer Machine
In theoretical computer science, a pointer machine is an "atomistic" ''abstract computational machine'' model akin to the random-access machine. A pointer algorithm is an algorithm restricted to the pointer machine model. Depending on the type, a pointer machine may be called a linking automaton, a KU-machine, an SMM, an atomistic LISP machine, a tree-pointer machine, etc. (cf. Ben-Amram 1995). At least three major varieties exist in the literature—the Kolmogorov–Uspenskii model (KUM, KU-machine), the Knuth linking automaton, and the Schönhage Storage Modification Machine model (SMM). The SMM seems to be the most common. From its "read-only tape" (or equivalent) a pointer machine receives ''input''—bounded symbol-sequences ("words") made of at least two symbols, e.g. {0, 1}—and it writes ''output'' symbol-sequences on an output "write-only" tape (or equivalent). To transform a symbol-sequence (input word) to an output symbol-sequence the machine is equipped with a "program"—a fi ...
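A minimal sketch may help fix intuition for the SMM variety: its storage is a finite directed graph whose edges are labeled by the alphabet symbols and which is navigated from a distinguished center node by following words. The Python sketch below is illustrative only; the class and operation names (Storage, new, set, follow) are not Schönhage's notation, and several details of his model are simplified away.

```python
# Illustrative sketch of an SMM-style storage structure (not Schönhage's notation).
# Nodes are integers; each node maps every alphabet symbol to a successor node.

ALPHABET = ("0", "1")  # assumed two-symbol alphabet

class Storage:
    def __init__(self):
        self.next_id = 0
        self.edges = {}            # node -> {symbol -> successor node}
        self.center = self._new_node()

    def _new_node(self):
        n = self.next_id
        self.next_id += 1
        # A fresh node initially points back to itself on every symbol.
        self.edges[n] = {s: n for s in ALPHABET}
        return n

    def follow(self, word):
        """Return the node reached from the center by reading `word`."""
        node = self.center
        for symbol in word:
            node = self.edges[node][symbol]
        return node

    def new(self, word):
        """Create a fresh node and make the path `word` end at it."""
        target = self._new_node()
        self.set(word, target)
        return target

    def set(self, word, node):
        """Redirect the last edge of the path `word` to `node`."""
        if not word:
            self.center = node
        else:
            src = self.follow(word[:-1])
            self.edges[src][word[-1]] = node

# Example: make the paths "0" and "1" from the center lead to distinct nodes.
s = Storage()
s.new("0")
s.new("1")
print(s.follow("0") != s.follow("1"))  # True
```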


Mu Recursive Function
In mathematical logic and computer science, a general recursive function, partial recursive function, or μ-recursive function is a partial function from natural numbers to natural numbers that is "computable" in an intuitive sense – as well as in a formal one. If the function is total, it is also called a total recursive function (sometimes shortened to recursive function). In computability theory, it is shown that the μ-recursive functions are precisely the functions that can be computed by Turing machines (this is one of the theorems that supports the Church–Turing thesis). The μ-recursive functions are closely related to primitive recursive functions, and their inductive definition (below) builds upon that of the primitive recursive functions. However, not every total recursive function is a primitive recursive function—the most famous example is the Ackermann function. Other equivalent classes of functions ar ...
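The distinguishing ingredient beyond primitive recursion is the μ (minimization) operator, which searches for the least argument at which a given function returns zero; this unbounded search is what makes the resulting functions partial in general. A short Python sketch (the name mu and the example below are illustrative choices):

```python
def mu(f):
    """Unbounded minimization: return a function args -> least x with f(x, *args) == 0.

    If no such x exists (or f diverges first), the loop never terminates,
    which is how partiality shows up in this sketch.
    """
    def minimized(*args):
        x = 0
        while f(x, *args) != 0:
            x += 1
        return x
    return minimized

# Example: integer square root as the least x with (x + 1)^2 > n.
isqrt = mu(lambda x, n: 0 if (x + 1) ** 2 > n else 1)
print(isqrt(10))  # 3
```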


Primitive Recursive Function
In computability theory, a primitive recursive function is, roughly speaking, a function that can be computed by a computer program whose loops are all "for" loops (that is, an upper bound on the number of iterations of every loop can be determined before entering the loop). Primitive recursive functions form a strict subset of those general recursive functions that are also total functions. The importance of primitive recursive functions lies in the fact that most computable functions that are studied in number theory (and more generally in mathematics) are primitive recursive. For example, addition and division, the factorial and exponential function, and the function which returns the ''n''th prime are all primitive recursive. In fact, for showing that a computable function is primitive recursive, it suffices to show that its time complexity is bounded above by a primitive recursive function of the input size. It is hence not that easy to devise a computable function that is ''n ...
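The "only for loops" characterization can be made concrete: each primitive recursive definition unrolls into a loop whose iteration count is fixed before the loop starts. A brief sketch in Python (the function names are illustrative):

```python
# Each function uses only loops whose bounds are known on entry, mirroring the
# primitive recursion scheme h(0, ...) = base, h(n + 1, ...) = step(n, h(n, ...)).

def add(m, n):
    result = m
    for _ in range(n):          # exactly n iterations, fixed in advance
        result += 1             # repeated successor
    return result

def mul(m, n):
    result = 0
    for _ in range(n):
        result = add(result, m)
    return result

def factorial(n):
    result = 1
    for i in range(n):
        result = mul(result, i + 1)
    return result

print(add(2, 3), mul(4, 5), factorial(5))  # 5 20 120
```

By contrast, the Ackermann function mentioned elsewhere in this list cannot be written this way: any program computing it needs a loop whose bound is not fixed before entry (a while loop or unbounded recursion).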




Post–Turing Machine
A Post–Turing machine (Rajendra Kumar, ''Theory of Automata'', Tata McGraw-Hill Education, 2010, p. 343) is a "program formulation" of a type of Turing machine, comprising a variant of Emil Post's Turing-equivalent model of computation. Post's model and Turing's model, though very similar to one another, were developed independently. Turing's paper was received for publication in May 1936, followed by Post's in October. A Post–Turing machine uses a binary alphabet, an infinite sequence of binary storage locations, and a primitive programming language with instructions for bi-directional movement among the storage locations and alteration of their contents one at a time. The names "Post–Turing program" and "Post–Turing machine" were used by Martin Davis in 1973–1974 (Davis 1973, p. 69ff). Later, in 1980, Davis used the name "Turing–Post program" (Davis, in Steen p. 241).

1936: Post model

In his 1936 paper "Finite Combinatory Processes—Formulation 1", ...
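The flavor of such a primitive programming language can be seen in a tiny interpreter. The sketch below assumes a five-instruction variant (move right, move left, mark, erase, and a conditional jump on the scanned square); the instruction names and encoding are illustrative, since the exact formulation varies between Post's and Davis's versions.

```python
from collections import defaultdict

# Instructions (illustrative names):
#   ("RIGHT",) / ("LEFT",)  move the scanned square one position
#   ("MARK",) / ("ERASE",)  write 1 / 0 at the scanned square
#   ("JMP1", k)             go to instruction k if the scanned square holds 1
#   ("HALT",)
# Unlisted squares of the two-way-infinite tape hold 0.

def run(program, tape=None):
    tape = defaultdict(int, tape or {})
    pos, pc = 0, 0
    while program[pc][0] != "HALT":
        op = program[pc]
        if op[0] == "RIGHT":
            pos += 1
        elif op[0] == "LEFT":
            pos -= 1
        elif op[0] == "MARK":
            tape[pos] = 1
        elif op[0] == "ERASE":
            tape[pos] = 0
        elif op[0] == "JMP1" and tape[pos] == 1:
            pc = op[1]
            continue
        pc += 1
    return dict(tape)

# Example: walk right over a block of 1s and mark the first empty square after it.
program = [
    ("JMP1", 3),   # 0: scanned square is 1 -> go walk right
    ("MARK",),     # 1: scanned square is 0 -> mark it
    ("HALT",),     # 2
    ("RIGHT",),    # 3: step right, then retest
    ("JMP1", 3),   # 4
    ("MARK",),     # 5: first 0 after the block: mark it
    ("HALT",),     # 6
]
print(run(program, {0: 1, 1: 1}))  # {0: 1, 1: 1, 2: 1}
```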