Blum–Shub–Smale Machine
In computation theory, the Blum–Shub–Smale machine, or BSS machine, is a model of computation introduced by Lenore Blum, Michael Shub and Stephen Smale, intended to describe computations over the real numbers. Essentially, a BSS machine is a Random Access Machine with registers that can store arbitrary real numbers and that can compute rational functions over the reals in a single time step. It is closely related to the Real RAM model. BSS machines are more powerful than Turing machines, because the latter are by definition restricted to a finite set of symbols. A Turing machine can represent a countable set (such as the rational numbers) by strings of symbols, but this does not extend to the uncountable real numbers.

Definition

A BSS machine M is given by a list I of N+1 instructions (to be described below), indexed 0, 1, ..., N. A configuration of M is a tuple (k, r, w, x), where k is the index of the instruction to be executed next, r and w are registers holding non-negative integers ...
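To make the definition concrete, the following is a minimal sketch of a BSS-style interpreter, under assumptions of my own: a small illustrative opcode set (compute, branch, halt), a finite register list in place of the machine's register sequence, and Python's exact Fractions standing in for the ideal real-valued registers, which no digital number type can fully represent.

```python
from fractions import Fraction

def run_bss(instructions, state, max_steps=10_000):
    """Interpret a list of (opcode, payload) pairs over 'real'-valued registers.

    A sketch only: real BSS machines have infinitely many registers and
    genuinely exact real arithmetic; Fractions merely approximate that.
    """
    k = 0  # index of the instruction to execute next
    for _ in range(max_steps):
        op, payload = instructions[k]
        if op == "compute":
            state = payload(state)          # apply a rational map in one step
            k += 1
        elif op == "branch":
            k = payload if state[0] >= 0 else k + 1   # the BSS sign test
        elif op == "halt":
            return state
    raise RuntimeError("no halt within step budget")

# Example program: test whether register 0 holds a root of p(t) = t^2 - 2.
program = [
    ("compute", lambda x: [x[0] * x[0] - 2] + x[1:]),  # x0 <- p(x0)
    ("compute", lambda x: [-(x[0] * x[0])] + x[1:]),   # x0 <- -p(x0)^2; >= 0 iff root
    ("branch", 4),                                     # root: jump to the accept flag
    ("halt", None),                                    # not a root: halt with x0 < 0
    ("compute", lambda x: [Fraction(1)] + x[1:]),      # accept: set flag 1
    ("halt", None),
]
print(run_bss(program, [Fraction(3, 2)]))  # [Fraction(-1, 16)]: 3/2 is not a root
```

On an ideal BSS machine the input sqrt(2) would reach the accept branch in the same handful of steps; that no Python number type can even hold sqrt(2) exactly is precisely the gap between this sketch and the model.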


Computation Theory
In theoretical computer science and mathematics, the theory of computation is the branch that deals with what problems can be solved on a model of computation, using an algorithm, how efficiently they can be solved or to what degree (e.g., approximate solutions versus precise ones). The field is divided into three major branches: automata theory and formal languages, computability theory, and computational complexity theory, which are linked by the question: ''"What are the fundamental capabilities and limitations of computers?"''

In order to perform a rigorous study of computation, computer scientists work with a mathematical abstraction of computers called a model of computation. There are several models in use, but the most commonly examined is the Turing machine. Computer scientists study the Turing machine because it is simple to formulate, can be analyzed and used to prove results, and because it represents what many consider the most powerful possible "reasonable" mode ...


Uncountable Set
In mathematics, an uncountable set, informally, is an infinite set that contains too many elements to be countable. The uncountability of a set is closely related to its cardinal number: a set is uncountable if its cardinal number is larger than aleph-null, the cardinality of the natural numbers. Examples of uncountable sets include the set of all real numbers and the set of all subsets of the natural numbers.

Characterizations

There are many equivalent characterizations of uncountability. A set ''X'' is uncountable if and only if any of the following conditions hold:
* There is no injective function (hence no bijection) from ''X'' to the set of natural numbers.
* ''X'' is nonempty and for every ω-sequence of elements of ''X'', there exists at least one element of ''X'' not included in it. That is, ''X'' is nonempty and there is no surjective function from the natural numbers to ''X''.
* The cardinality of ''X'' is neither finite nor equal to ℵ₀ (aleph-null).
* The ...
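As a worked instance of the first characterization, here is Cantor's diagonal argument, in a brief LaTeX sketch, that no surjection exists from the natural numbers onto the interval (0,1); the digit choices 4 and 5 are one conventional way to avoid ambiguous decimal expansions.

```latex
% Suppose for contradiction that f : \mathbb{N} \to (0,1) is surjective,
% and write each value as a non-terminating decimal expansion:
\[
  f(n) = 0.\,d_{n,1}\, d_{n,2}\, d_{n,3} \ldots
\]
% Build a number that disagrees with every f(n) on the diagonal:
\[
  x = 0.\,e_1\, e_2\, e_3 \ldots,
  \qquad
  e_n =
  \begin{cases}
    5 & \text{if } d_{n,n} \neq 5, \\
    4 & \text{if } d_{n,n} = 5.
  \end{cases}
\]
% Then x differs from f(n) in the n-th digit for every n, so x is not in
% the range of f. Hence (0,1), and with it the set of all real numbers,
% is uncountable.
```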


Quantum Finite Automaton
In physics, a quantum (plural: quanta) is the minimum amount of any physical entity (physical property) involved in an interaction. The fundamental notion that a property can be "quantized" is referred to as "the hypothesis of quantization". This means that the magnitude of the physical property can take on only discrete values consisting of integer multiples of one quantum. For example, a photon is a single quantum of light of a specific frequency (or of any other form of electromagnetic radiation). Similarly, the energy of an electron bound within an atom is quantized and can exist only in certain discrete values. Atoms and matter in general are stable because electrons can exist only at discrete energy levels within an atom. Quantization is one of the foundations of the much broader physics of quantum mechanics. Quantization of energy and its influence on how energy and matter interact (quantum electrodynamics) is part of the fundamental framework for understanding and describing ...
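The "integer multiples of one quantum" claim can be written out with the standard Planck–Einstein relation (standard physics notation, not drawn from this excerpt): radiation of frequency ν exchanges energy only in whole-number multiples of hν.

```latex
\[
  E_n = n\, h \nu, \qquad n = 0, 1, 2, \ldots
\]
% h is the Planck constant; a single photon carries the minimal portion
% E_1 = h\nu, the "quantum" of light at that frequency.
```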



Real Computer
In computability theory, the theory of real computation deals with hypothetical computing machines using infinite-precision real numbers. They are given this name because they operate on the set of real numbers. Within this theory, it is possible to prove interesting statements such as "The complement of the Mandelbrot set is only partially decidable."

These hypothetical computing machines can be viewed as idealised analog computers which operate on real numbers, whereas digital computers are limited to computable numbers. They may be further subdivided into differential and algebraic models (digital computers, in this context, should be thought of as topological, at least insofar as their operation on computable reals is concerned). Depending on the model chosen, this may enable real computers to solve problems that are unsolvable on digital computers (for example, Hava Siegelmann's neural nets can have noncomputable real weights, making them able to compute nonrecursive languages ...
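The Mandelbrot claim quoted above has a short computational reading: iterating z ← z² + c, an escape of the orbit past radius 2 is a finite witness that c lies outside the set, while membership has no such halting witness. A sketch, with the caveat that Python floats only approximate the ideal real arithmetic the theorem is about:

```python
def outside_mandelbrot(c, max_iter=None):
    """Semi-decision procedure for the complement of the Mandelbrot set.

    Returns True once the orbit of 0 under z -> z*z + c escapes radius 2,
    which certifies that c is outside the set. With max_iter=None this may
    genuinely never return (for c inside the set); a bound turns it into
    the familiar escape-time approximation.
    """
    z = 0j
    n = 0
    while max_iter is None or n < max_iter:
        z = z * z + c
        if abs(z) > 2:   # finite witness: c is provably outside
            return True
        n += 1
    return False         # inconclusive after max_iter steps

print(outside_mandelbrot(1 + 0j, max_iter=100))   # True: the orbit 0, 1, 2, 5, ... escapes
print(outside_mandelbrot(-1 + 0j, max_iter=100))  # False: -1 sits on a period-2 orbit
```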




Hypercomputation
Hypercomputation or super-Turing computation is a set of hypothetical models of computation that can provide outputs that are not Turing-computable. For example, a machine that could solve the halting problem would be a hypercomputer; so too would one that could correctly evaluate every statement in Peano arithmetic. The Church–Turing thesis states that any "computable" function that can be computed by a mathematician with a pen and paper using a finite set of simple algorithms can be computed by a Turing machine. Hypercomputers compute functions that a Turing machine cannot and which are, hence, not computable in the Church–Turing sense. Technically, the output of a random Turing machine is uncomputable; however, most hypercomputing literature focuses instead on the computation of deterministic, rather than random, uncomputable functions.

History

A computational model going beyond Turing machines was introduced by Alan Turing in his 1938 PhD dissertation ''Systems of Logic Based on Ordinals'' ...
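The halting-problem example can be made precise with the classical diagonal argument: any halting decider, fed a program that inverts its own verdict, contradicts itself. The oracle below is hypothetical by construction; the point of the sketch is that no Python (or Turing-machine) body could ever fill it in.

```python
def halts(program_source, argument):
    """Hypothetical halting oracle: True iff the program halts on the argument.

    Uncomputable; only a hypercomputer could implement this.
    """
    raise NotImplementedError("no Turing machine can implement this oracle")

def paradox(program_source):
    # Ask the oracle about the program run on its own source code,
    # then do the opposite of whatever it predicts.
    if halts(program_source, program_source):
        while True:   # predicted to halt: loop forever instead
            pass
    # predicted to loop forever: halt immediately

# Feeding paradox its own source is contradictory either way: if it halts,
# the oracle said it halts, so it loops; if it loops, the oracle said so,
# so it halts. Hence halts() cannot be Turing-computable, and a machine
# that provides it is a hypercomputer.
```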


General Purpose Analog Computer
The general purpose analog computer (GPAC) is a mathematical model of analog computers first introduced in 1941 by Claude Shannon. This model consists of circuits in which several basic units are interconnected in order to compute some function. The GPAC can be implemented in practice through the use of mechanical devices or analog electronics.

Although analog computers have fallen almost into oblivion due to the emergence of the digital computer, the GPAC has recently been studied as a way to provide evidence for the physical Church–Turing thesis. This is because the GPAC is also known to model a large class of dynamical systems defined with ordinary differential equations, which appear frequently in the context of physics. In particular, it was shown in 2007 that (a deterministic variant of) the GPAC is equivalent, in computability terms, to Turing machines, thereby proving the physical Church–Turing thesis for the class of systems modelled by the GPAC. This was recently strengthened ...
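To illustrate the kind of dynamical system the GPAC models, consider the textbook two-integrator circuit for the sine function: the polynomial ODE system y1' = y2, y2' = -y1 with y1(0) = 0, y2(0) = 1 has y1(t) = sin t. A digital machine can only approximate the continuous-time circuit; the crude Euler step below is my own illustration, not part of the GPAC formalism.

```python
import math

def gpac_sine(t_end, dt=1e-5):
    """Euler-integrate the GPAC circuit y1' = y2, y2' = -y1, y1(0)=0, y2(0)=1."""
    y1, y2 = 0.0, 1.0            # initial conditions fixed by constant units
    t = 0.0
    while t < t_end:
        # Simultaneous update of the two integrator outputs.
        y1, y2 = y1 + dt * y2, y2 - dt * y1
        t += dt
    return y1

print(gpac_sine(math.pi / 2))    # ~1.0
print(math.sin(math.pi / 2))     # 1.0: the circuit generates sin(t)
```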


Complexity And Real Computation
''Complexity and Real Computation'' is a book on the computational complexity theory of real computation. It studies algorithms whose inputs and outputs are real numbers, using the Blum–Shub–Smale machine as its model of computation. For instance, this theory is capable of addressing a question posed in 1991 by Roger Penrose in ''The Emperor's New Mind'': "is the Mandelbrot set computable?" The book was written by Lenore Blum, Felipe Cucker, Michael Shub and Stephen Smale, with a foreword by Richard M. Karp, and published by Springer-Verlag in 1998 (doi:10.1007/978-1-4612-0701-6).

Purpose

Stephen Vavasis observes that this book fills a significant gap in the literature: although theoretical computer scientists working on discrete algorithms had been studying models of computation and their implications for the complexity of algorithms since the 1970s, researchers in numerical algorithms had for the most part failed to define their model of computation, leaving their resul ...




Time Complexity
In theoretical computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to be related by a constant factor.

Since an algorithm's running time may vary among different inputs of the same size, one commonly considers the worst-case time complexity, which is the maximum amount of time required for inputs of a given size. Less common, and usually specified explicitly, is the average-case complexity, which is the average of the time taken on inputs of a given size (this makes sense because there are only a finite number of possible inputs of a given size). In both cases, the time complexity is gene ...
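The worst-case versus average-case distinction can be seen by counting the elementary operation directly. For linear search over n items, taking one equality comparison as the elementary operation, the worst case costs n comparisons while the average over all possible targets, assuming each position is equally likely, costs (n + 1)/2: the same O(n) growth with a smaller constant.

```python
def linear_search(items, target):
    """Return (index, number_of_comparisons); (-1, n) if absent."""
    comparisons = 0
    for i, x in enumerate(items):
        comparisons += 1            # the elementary operation being counted
        if x == target:
            return i, comparisons
    return -1, comparisons

n = 1000
items = list(range(n))

_, worst = linear_search(items, n - 1)                       # target in the last slot
avg = sum(linear_search(items, t)[1] for t in items) / n     # uniform over targets

print(worst)   # 1000 comparisons: the worst case
print(avg)     # 500.5 = (n + 1) / 2: the average case
```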



Complexity Class
In computational complexity theory, a complexity class is a set of computational problems "of related complexity". The two most commonly analyzed resources are time and memory. In general, a complexity class is defined in terms of a type of computational problem, a model of computation, and a bounded resource like time or memory. In particular, most complexity classes consist of decision problems that are solvable with a Turing machine, and are differentiated by their time or space (memory) requirements. For instance, the class P is the set of decision problems solvable by a deterministic Turing machine in polynomial time. There are, however, many complexity classes defined in terms of other types of problems (e.g. counting problems and function problems) and using other models of computation (e.g. probabil ...
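For instance, the class P named in the excerpt has the standard formal definition (standard notation, not taken from this text):

```latex
\[
  \mathsf{P} \;=\; \bigcup_{k \in \mathbb{N}} \mathsf{DTIME}\bigl(n^{k}\bigr),
\]
% the set of decision problems a deterministic Turing machine solves in
% time bounded by some polynomial n^k of the input length n.
```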



Countable Set
In mathematics, a set is countable if either it is finite or it can be put in one-to-one correspondence with the set of natural numbers. Equivalently, a set is ''countable'' if there exists an injective function from it into the natural numbers; this means that each element in the set may be associated to a unique natural number, or that the elements of the set can be counted one at a time, although the counting may never finish due to an infinite number of elements. In more technical terms, assuming the axiom of countable choice, a set is ''countable'' if its cardinality (the number of elements of the set) is not greater than that of the natural numbers. A countable set that is not finite is said to be countably infinite. The concept is attributed to Georg Cantor, who proved the existence of uncountable sets, that is, sets that are not countable; for example, the set of the real numbers.

A note on terminology

Although the terms "countable" and "countably infinite" as def ...
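As a worked example of the injective-function characterization, the integers are countable: the interleaving map below pairs each integer with a unique natural number, and is in fact a bijection.

```python
def nat_to_int(n):
    """Enumerate the integers: 0, 1, 2, 3, 4, ... -> 0, -1, 1, -2, 2, ..."""
    return n // 2 if n % 2 == 0 else -(n + 1) // 2

def int_to_nat(z):
    """Inverse map, injecting the integers into the natural numbers."""
    return 2 * z if z >= 0 else -2 * z - 1

print([nat_to_int(n) for n in range(7)])                          # [0, -1, 1, -2, 2, -3, 3]
print(all(int_to_nat(nat_to_int(n)) == n for n in range(1000)))   # True: a bijection
```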


Model Of Computation
In computer science, and more specifically in computability theory and computational complexity theory, a model of computation is a model which describes how an output of a mathematical function is computed given an input. A model describes how units of computations, memories, and communications are organized. The computational complexity of an algorithm can be measured given a model of computation. Using a model allows studying the performance of algorithms independently of the variations that are specific to particular implementations and specific technology.

Categories

Models of computation can be classified into three categories: sequential models, functional models, and concurrent models.

Sequential models

Sequential models include:
* Finite-state machines
* Post machines (Post–Turing machines and tag machines)
* Pushdown automata
* Register machines
** Random-access machines
* Turing machines
* Decision tree model
* External memory model

Functional models

Functio ...