P-complete Problems
In computational complexity theory, a decision problem is P-complete (complete for the complexity class P) if it is in P and every problem in P can be reduced to it by an appropriate reduction. The notion of P-complete decision problems is useful in the analysis of which problems are difficult to parallelize effectively and which problems are difficult to solve in limited space, specifically when stronger notions of reducibility than polynomial-time reducibility are considered. The specific type of reduction used varies and may affect the exact set of problems. Generically, reductions stronger than polynomial-time reductions are used, since all languages in P (except the empty language and the language of all strings) are P-complete under polynomial-time reductions. If we use NC reductions, that is, reductions which can operate in polylogarithmic time on a parallel computer with a polynomial number of pro ...
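As an illustration, the canonical P-complete problem is the Circuit Value Problem: given a Boolean circuit and its inputs, decide the circuit's output. Below is a minimal Python sketch of a circuit evaluator; the gate encoding is an assumption made for this example, not a standard format.

```python
# Evaluate a Boolean circuit given as a topologically ordered list of gates.
# Assumed gate encoding: ("IN", value), ("NOT", i), ("AND", i, j), ("OR", i, j),
# where i and j index earlier gates in the list.
def circuit_value(gates):
    vals = []
    for gate in gates:
        op = gate[0]
        if op == "IN":
            vals.append(gate[1])
        elif op == "NOT":
            vals.append(not vals[gate[1]])
        elif op == "AND":
            vals.append(vals[gate[1]] and vals[gate[2]])
        elif op == "OR":
            vals.append(vals[gate[1]] or vals[gate[2]])
    return vals[-1]  # the output of the last gate

# (x AND NOT y) with x = True, y = False evaluates to True.
print(circuit_value([("IN", True), ("IN", False), ("NOT", 1), ("AND", 0, 2)]))
```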



Computational Complexity Theory
In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and relating these classes to each other. A computational problem is a task solved by a computer; it is solvable by the mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition, by introducing mathematical models of computation to study these problems and quantifying their computational complexity, i.e., the amount of resources needed to solve them, such as time and storage. Other measures of complexity are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). One of the roles of computationa ...



Graph Theory
In mathematics, graph theory is the study of ''graphs'', which are mathematical structures used to model pairwise relations between objects. A graph in this context is made up of ''vertices'' (also called ''nodes'' or ''points'') which are connected by ''edges'' (also called ''links'' or ''lines''). A distinction is made between undirected graphs, where edges link two vertices symmetrically, and directed graphs, where edges link two vertices asymmetrically. Graphs are one of the principal objects of study in discrete mathematics.

Definitions
Definitions in graph theory vary. The following are some of the more basic ways of defining graphs and related mathematical structures.

Graph
In one restricted but very common sense of the term, a graph is an ordered pair G=(V,E) comprising:
* V, a set of vertices (also called nodes or points);
* E \subseteq \{\{x, y\} \mid x, y \in V,\ x \neq y\}, a set of edges (also called links or lines), which are unordered pairs of vertices (that is, an edge is associated with t ...
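A minimal sketch of this ordered-pair definition in Python, using frozensets so that each edge is an unordered pair and links its two vertices symmetrically; the names and encoding are illustrative assumptions.

```python
# An undirected graph G = (V, E): a vertex set and a set of unordered pairs.
V = {1, 2, 3, 4}
E = {frozenset(p) for p in [(1, 2), (2, 3), (3, 1), (3, 4)]}

def neighbours(v):
    """Vertices joined to v by an edge."""
    return {w for e in E if v in e for w in e if w != v}

print(neighbours(3))  # {1, 2, 4}

# A directed graph would instead use ordered pairs: E = {(1, 2), (2, 3), ...}
```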



Time Complexity
In computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to be related by a constant factor. Since an algorithm's running time may vary among different inputs of the same size, one commonly considers the worst-case time complexity, which is the maximum amount of time required for inputs of a given size. Less common, and usually specified explicitly, is the average-case complexity, which is the average of the time taken on inputs of a given size (this makes sense because there are only a finite number of possible inputs of a given size). In both cases, the time complexity is generally expresse ...
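A small sketch of the operation-counting view in Python: linear search instrumented with a comparison counter, whose worst case (target absent) performs exactly n elementary comparisons for an input of size n. The counting scheme is an illustrative assumption.

```python
def linear_search(xs, target):
    comparisons = 0
    for x in xs:
        comparisons += 1          # one elementary operation per element examined
        if x == target:
            return True, comparisons
    return False, comparisons

# Worst case: the target is absent, so all n elements are compared.
n = 1000
found, steps = linear_search(list(range(n)), -1)
print(found, steps)  # False 1000 -> worst-case running time grows linearly in n
```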


Sparse Language
In computational complexity theory, a sparse language is a formal language (a set of strings) such that the complexity function, counting the number of strings of length ''n'' in the language, is bounded by a polynomial function of ''n''. They are used primarily in the study of the relationship of the complexity class NP with other classes. The complexity class of all sparse languages is called SPARSE. Sparse languages are called ''sparse'' because there are a total of 2^n strings of length ''n'', and if a language only contains polynomially many of these, then the proportion of strings of length ''n'' that it contains rapidly goes to zero as ''n'' grows. All unary languages are sparse. An example of a nontrivial sparse language is the set of binary strings containing exactly ''k'' 1 bits for some fixed ''k''; for each ''n'', there are only \binom{n}{k} strings in the language, which is bounded by n^k.

Relationships to other complexity classes
SPARSE contains TALLY, the cl ...
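A quick numeric check of the counting claim, using Python's math.comb: for fixed k, the number of length-n strings with exactly k ones is \binom{n}{k} \le n^k, a vanishing fraction of all 2^n strings.

```python
import math

k = 3
for n in [8, 16, 32, 64]:
    in_lang = math.comb(n, k)        # strings of length n with exactly k ones
    total = 2 ** n                   # all binary strings of length n
    assert in_lang <= n ** k         # the polynomial bound from the text
    print(n, in_lang, f"{in_lang / total:.2e}")  # fraction tends to zero
```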


Lambda Calculus
Lambda calculus (also written as ''λ''-calculus) is a formal system in mathematical logic for expressing computation based on function abstraction and application using variable binding and substitution. It is a universal model of computation that can be used to simulate any Turing machine. It was introduced by the mathematician Alonzo Church in the 1930s as part of his research into the foundations of mathematics. Lambda calculus consists of constructing lambda terms and performing reduction operations on them. In the simplest form of lambda calculus, terms are built using only the following rules:
* x – variable, a character or string representing a parameter or mathematical/logical value.
* (\lambda x.M) – abstraction, function definition (M is a lambda term). The variable x becomes bound in the expression.
* (M\ N) – application, applying a function M to an argument N. M and N are lambda terms.
The reduction operations include:
* (\lambda x.M \rightarrow (\l ...
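A minimal sketch of the three term formers and a single β-reduction step in Python; the tuple encoding is an illustrative assumption, and substitution is deliberately naive (it assumes bound variable names are all distinct, so no capture can occur).

```python
# Terms: ("var", x), ("lam", x, body), ("app", f, arg)
def subst(term, x, value):
    """Replace free occurrences of variable x in term with value (capture-naive)."""
    tag = term[0]
    if tag == "var":
        return value if term[1] == x else term
    if tag == "lam":
        # x is shadowed inside the abstraction; otherwise recurse into the body.
        return term if term[1] == x else ("lam", term[1], subst(term[2], x, value))
    return ("app", subst(term[1], x, value), subst(term[2], x, value))

def beta_step(term):
    """One β-reduction: (λx.M) N -> M[x := N], reducing the outermost redex."""
    if term[0] == "app" and term[1][0] == "lam":
        _, (_, x, body), arg = term
        return subst(body, x, arg)
    return term

identity = ("lam", "x", ("var", "x"))
print(beta_step(("app", identity, ("var", "y"))))  # ('var', 'y')
```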


Type Theory
In mathematics, logic, and computer science, a type theory is the formal presentation of a specific type system, and in general type theory is the academic study of type systems. Some type theories serve as alternatives to set theory as a foundation of mathematics. Two influential type theories that were proposed as foundations are Alonzo Church's typed λ-calculus and Per Martin-Löf's intuitionistic type theory. Most computerized proof-writing systems use a type theory for their foundation. A common one is Thierry Coquand's Calculus of Inductive Constructions.

History
Type theory was created to avoid a paradox in a mathematical foundation based on naive set theory and formal logic. Russell's paradox, which was discovered by Bertrand Russell, existed because a set could be defined using "all possible sets", which included itself. Between 1902 and 1908, Russell proposed various "theories of type" to fix the problem. By 1908 Russell arrived at a "ramified" theory ...


Type Inference
Type inference refers to the automatic detection of the type of an expression in a formal language. These include programming languages and mathematical type systems, but also natural languages in some branches of computer science and linguistics.

Nontechnical explanation
In the most general view, a type can be associated with a designated use, which suggests and restricts the activities possible for an object of that type. Many nouns in language specify such uses. For instance, the word leash indicates a different use than the word line. Calling something a table indicates another designation than calling it firewood, though it might be materially the same thing. While their material properties make things usable for some purposes, they are also subject to particular designations. This is especially the case in abstract fields, namely mathematics and computer science, where the material is ultimately only bits or formulas. To exclude unwanted, but materially possible uses, the concept of ...
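A minimal sketch of type inference for a toy expression language in Python: literals fix base types, and operators constrain the types of their operands. The AST encoding and type names are illustrative assumptions, far simpler than the inference algorithms used in real type systems.

```python
# Expressions: ("lit", value), ("add", a, b), ("less", a, b)
def infer(expr):
    tag = expr[0]
    if tag == "lit":
        # Check bool before int: in Python, bool is a subclass of int.
        return "bool" if isinstance(expr[1], bool) else "int"
    if tag in ("add", "less"):
        left, right = infer(expr[1]), infer(expr[2])
        if left != "int" or right != "int":
            raise TypeError(f"{tag} expects int operands, got {left}, {right}")
        return "int" if tag == "add" else "bool"

# (1 + 2) < 4 is automatically detected to have type bool.
print(infer(("less", ("add", ("lit", 1), ("lit", 2)), ("lit", 4))))  # bool
```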



Gzip
gzip is a file format and a software application used for file compression and decompression. The program was created by Jean-loup Gailly and Mark Adler as a free software replacement for the compress program used in early Unix systems, and intended for use by GNU (from where the "g" of gzip is derived). Version 0.1 was first publicly released on 31 October 1992, and version 1.0 followed in February 1993. The decompression of the ''gzip'' format can be implemented as a streaming algorithm, an important feature for Web protocols, data interchange and ETL (in standard pipes) applications.

File format
gzip is based on the DEFLATE algorithm, which is a combination of LZ77 and Huffman coding. DEFLATE was intended as a replacement for LZW and other patent-encumbered data compression algorithms which, at the time, limited the usability of compress and other popular archivers. "gzip" is often also used to refer to the gzip file format, which is:
* a 10-byte header, contai ...
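A short demonstration with Python's standard-library gzip module: the compressed output begins with the header's magic bytes 1f 8b, followed by the method byte 08 for DEFLATE.

```python
import gzip

data = b"to be or not to be, " * 100
packed = gzip.compress(data)

# The 10-byte header starts with the magic bytes 1f 8b; the third byte is the
# compression method, 08 = DEFLATE.
print(packed[:3].hex())              # 1f8b08
print(len(data), "->", len(packed))  # repetitive input compresses well
assert gzip.decompress(packed) == data
```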


LZ77
LZ77 and LZ78 are the two lossless data compression algorithms published in papers by Abraham Lempel and Jacob Ziv in 1977 and 1978. They are also known as LZ1 and LZ2 respectively. These two algorithms form the basis for many variations including LZW, LZSS, LZMA and others. Besides their academic influence, these algorithms formed the basis of several ubiquitous compression schemes, including GIF and the DEFLATE algorithm used in PNG and ZIP. They are both theoretically dictionary coders. LZ77 maintains a sliding window during compression. This was later shown to be equivalent to the ''explicit dictionary'' constructed by LZ78—however, they are only equivalent when the entire data is intended to be decompressed. Since LZ77 encodes and decodes from a sliding window over previously seen characters, decompression must always start at the beginning of the input. Conceptually, LZ78 decompression could allow random access to the input if the entire dictionary were known in ad ...
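A minimal, unoptimized LZ77 sketch in Python that emits (offset, length, next-byte) triples from a sliding window; the greedy longest-match search and window handling are simplified for illustration.

```python
def lz77_compress(data, window=255):
    i, out = 0, []
    while i < len(data):
        best_len, best_off = 0, 0
        # Greedy search: longest match starting inside the window before i.
        for j in range(max(0, i - window), i):
            length = 0
            while i + length < len(data) - 1 and data[j + length] == data[i + length]:
                length += 1
            if length > best_len:
                best_len, best_off = length, i - j
        out.append((best_off, best_len, data[i + best_len]))
        i += best_len + 1
    return out

def lz77_decompress(triples):
    data = bytearray()
    for off, length, ch in triples:
        for _ in range(length):
            data.append(data[-off])   # copy back-references from the window
        data.append(ch)
    return bytes(data)

msg = b"abracadabra abracadabra"
assert lz77_decompress(lz77_compress(msg)) == msg
```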



Conway's Game Of Life
The Game of Life, also known simply as Life, is a cellular automaton devised by the British mathematician John Horton Conway in 1970. It is a zero-player game, meaning that its evolution is determined by its initial state, requiring no further input. One interacts with the Game of Life by creating an initial configuration and observing how it evolves. It is Turing complete and can simulate a universal constructor or any other Turing machine.

Rules
The universe of the Game of Life is an infinite, two-dimensional orthogonal grid of square ''cells'', each of which is in one of two possible states, ''live'' or ''dead'' (or ''populated'' and ''unpopulated'', respectively). Every cell interacts with its eight ''neighbours'', which are the cells that are horizontally, vertically, or diagonally adjacent. At each step in time, the following transitions occur:
# Any live cell with fewer than two live neighbours dies, as if by underpopulation.
# Any live cell with two or three live neig ...
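A compact sketch of one generation step in Python, representing the conceptually infinite grid as a set of live-cell coordinates; the rules encoded are the standard ones summarized above (a live cell survives with two or three live neighbours, and a dead cell with exactly three becomes live).

```python
from collections import Counter

def step(live):
    """One generation: live is a set of (x, y) coordinates of live cells."""
    # Count how many live neighbours each cell has.
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is live next step iff it has 3 live neighbours,
    # or it is currently live and has exactly 2.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
print(step(glider))  # the glider's next configuration
```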



Boolean Satisfiability Problem
In logic and computer science, the Boolean satisfiability problem (sometimes called propositional satisfiability problem and abbreviated SATISFIABILITY, SAT or B-SAT) is the problem of determining if there exists an interpretation that satisfies a given Boolean formula. In other words, it asks whether the variables of a given Boolean formula can be consistently replaced by the values TRUE or FALSE in such a way that the formula evaluates to TRUE. If this is the case, the formula is called ''satisfiable''. On the other hand, if no such assignment exists, the function expressed by the formula is FALSE for all possible variable assignments and the formula is ''unsatisfiable''. For example, the formula "''a'' AND NOT ''b''" is satisfiable because one can find the values ''a'' = TRUE and ''b'' = FALSE, which make (''a'' AND NOT ''b'') = TRUE. In contrast, "''a'' AND NOT ''a''" is unsatisfiable. SAT is the first problem that was proved to be NP-complete ...
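A brute-force satisfiability check in Python for the example formulas above, enumerating every TRUE/FALSE assignment with itertools.product; this naive approach is exponential in the number of variables.

```python
from itertools import product

def satisfiable(formula, variables):
    """Try every TRUE/FALSE assignment of the variables."""
    return any(formula(*values)
               for values in product([True, False], repeat=len(variables)))

# "a AND NOT b" is satisfiable (a = TRUE, b = FALSE); "a AND NOT a" is not.
print(satisfiable(lambda a, b: a and not b, "ab"))  # True
print(satisfiable(lambda a: a and not a, "a"))      # False
```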