Polynomial-time
In computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to be related by a constant factor. Since an algorithm's running time may vary among different inputs of the same size, one commonly considers the worst-case time complexity, which is the maximum amount of time required for inputs of a given size. Less common, and usually specified explicitly, is the average-case complexity, which is the average of the time taken on inputs of a given size (this makes sense because there are only a finite number of possible inputs of a given size). In both cases, the time complexity is generally expressed as a function of the size of the input.
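
The worst-case/average-case distinction can be made concrete by counting elementary operations directly. The following sketch (Python, chosen here purely for illustration; the helper name is hypothetical) instruments a sequential membership test and compares the maximum and the mean number of comparisons over all inputs of a fixed size:

    # Count comparisons made by a simple membership test, then compare
    # the worst-case (maximum) and average-case (mean) counts over all
    # positions the target can occupy in a list of size n.

    def count_comparisons(items, target):
        """Return the number of element comparisons a sequential
        membership test performs before finding `target`."""
        comparisons = 0
        for x in items:
            comparisons += 1
            if x == target:
                break
        return comparisons

    n = 8
    items = list(range(n))
    counts = [count_comparisons(items, t) for t in items]

    print("worst case:", max(counts))        # n comparisons
    print("average case:", sum(counts) / n)  # (n + 1) / 2 comparisons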

Complexity Class
In computational complexity theory, a complexity class is a set of computational problems of related resource-based complexity. The two most commonly analyzed resources are time and memory. In general, a complexity class is defined in terms of a type of computational problem, a model of computation, and a bounded resource like time or memory. In particular, most complexity classes consist of decision problems that are solvable with a Turing machine, and are differentiated by their time or space (memory) requirements. For instance, the class P is the set of decision problems solvable by a deterministic Turing machine in polynomial time. There are, however, many complexity classes defined in terms of other types of problems (e.g. counting problems and function problems) and using other models of computation (e.g. probabilistic Turing machines, interactive proof systems, Boolean circuits, and quantum computers). The study of the relationships between complexity classes is a major area of research in theoretical computer science.
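
A decision problem in P is simply a yes/no question answerable in time polynomial in the input size. As a minimal sketch (assuming the input graph is connected; the function name is hypothetical), deciding whether a graph has an Eulerian circuit is such a problem, since by Euler's theorem it reduces to checking that every vertex has even degree:

    # A decision problem in P: does a connected undirected graph have an
    # Eulerian circuit?  This holds iff every vertex has even degree,
    # which is checkable in time polynomial in the input size.

    def has_eulerian_circuit(adjacency):
        """adjacency: dict mapping each vertex of a connected graph to
        the list of its neighbors."""
        return all(len(nbrs) % 2 == 0 for nbrs in adjacency.values())

    square = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}  # 4-cycle
    print(has_eulerian_circuit(square))  # True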

Average-case Complexity
In computational complexity theory, the average-case complexity of an algorithm is the amount of some computational resource (typically time) used by the algorithm, averaged over all possible inputs. It is frequently contrasted with worst-case complexity, which considers the maximal complexity of the algorithm over all possible inputs. There are three primary motivations for studying average-case complexity (O. Goldreich and S. Vadhan, Special issue on worst-case versus average-case complexity, Comput. Complex. 16, 325–330, 2007). First, although some problems may be intractable in the worst case, the inputs which elicit this behavior may rarely occur in practice, so the average-case complexity may be a more accurate measure of an algorithm's performance. Second, average-case complexity analysis provides tools and techniques to generate hard instances of problems which can be utilized in areas such as cryptography and derandomization. Third, average-case complexity allows discriminating the most efficient algorithm in practice among algorithms of equivalent worst-case complexity.
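
The first motivation is easy to observe experimentally. The sketch below (an illustration in Python under the assumption of a naive first-element pivot; the function name is hypothetical) counts the comparisons made by quicksort, which is quadratic on already-sorted input but O(n log n) on average over random inputs:

    # Average-case vs. worst-case in practice: quicksort with a
    # first-element pivot makes ~n^2/2 comparisons on sorted input but
    # only O(n log n) comparisons on average over random permutations.
    import random

    def quicksort_comparisons(items):
        """Return the number of pivot comparisons made by a
        first-element-pivot quicksort on `items` (not modified)."""
        if len(items) <= 1:
            return 0
        pivot, rest = items[0], items[1:]
        left = [x for x in rest if x < pivot]
        right = [x for x in rest if x >= pivot]
        return len(rest) + quicksort_comparisons(left) + quicksort_comparisons(right)

    n = 200
    worst = quicksort_comparisons(list(range(n)))  # sorted input: the bad case
    trials = [quicksort_comparisons(random.sample(range(n), n)) for _ in range(20)]
    print("sorted input:", worst)                      # n*(n-1)/2 = 19900
    print("random inputs:", sum(trials) // len(trials))  # about 1.39 * n * log2(n)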

Cole-Vishkin Algorithm
In graph theory, graph coloring is a special case of graph labeling; it is an assignment of labels traditionally called "colors" to elements of a graph subject to certain constraints. In its simplest form, it is a way of coloring the vertices of a graph such that no two adjacent vertices are of the same color; this is called a vertex coloring. Similarly, an edge coloring assigns a color to each edge so that no two adjacent edges are of the same color, and a face coloring of a planar graph assigns a color to each face or region so that no two faces that share a boundary have the same color. Vertex coloring is often used to introduce graph coloring problems, since other coloring problems can be transformed into a vertex coloring instance. For example, an edge coloring of a graph is just a vertex coloring of its line graph, and a face coloring of a plane graph is just a vertex coloring of its dual. However, non-vertex coloring problems are often stated and studied as-is. This is partly pedagogical, and partly because some problems are best studied in their non-vertex form, as in the case of edge coloring.
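
The basic vertex-coloring constraint can be illustrated with a short greedy sketch (a generic Python illustration, not the Cole-Vishkin procedure itself, which is a distributed algorithm for 3-coloring cycles): scan the vertices in some order and give each the smallest color not already used by a neighbor.

    # Greedy vertex coloring: adjacent vertices always receive distinct
    # colors, though the number of colors used need not be minimal.

    def greedy_coloring(adjacency):
        """adjacency: dict vertex -> iterable of neighbors.
        Returns a dict vertex -> color (colors are 0, 1, 2, ...)."""
        color = {}
        for v in adjacency:
            taken = {color[u] for u in adjacency[v] if u in color}
            c = 0
            while c in taken:
                c += 1
            color[v] = c
        return color

    triangle = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
    print(greedy_coloring(triangle))  # {'a': 0, 'b': 1, 'c': 2}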

Information Processing Letters
''Information Processing Letters'' is a peer-reviewed scientific journal in the field of computer science, published by Elsevier, a Dutch academic publishing company specializing in scientific, technical, and medical content. The aim of the journal is to enable fast dissemination of results in the field of information processing in the form of short papers. Submissions are limited to nine double-spaced pages. Both theoretical and experimental research is covered. The journal was established in 1971 and appears semi-monthly.

Bubble Sort
Bubble sort, sometimes referred to as sinking sort, is a simple sorting algorithm that repeatedly steps through the input list element by element, comparing the current element with the one after it, swapping their values if needed. These passes through the list are repeated until no swaps have to be performed during a pass, meaning that the list has become fully sorted. The algorithm, which is a comparison sort, is named for the way the larger elements "bubble" up to the top of the list. This simple algorithm performs poorly in real-world use and is used primarily as an educational tool. More efficient algorithms such as quicksort, timsort, or merge sort are used by the sorting libraries built into popular programming languages such as Python and Java.

Analysis

Performance

Bubble sort has a worst-case and average complexity of O(n^2), where n is the number of items being sorted. Most practical sorting algorithms have substantially better worst-case or average complexity, often O(n \log n).
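
A minimal Python sketch of the procedure just described, stopping as soon as a pass performs no swaps:

    # Bubble sort: repeat passes over the list, swapping adjacent
    # out-of-order pairs, until a pass makes no swaps.

    def bubble_sort(items):
        """Sort `items` in place in ascending order and return it."""
        n = len(items)
        swapped = True
        while swapped:
            swapped = False
            for i in range(n - 1):
                if items[i] > items[i + 1]:
                    items[i], items[i + 1] = items[i + 1], items[i]
                    swapped = True
            n -= 1  # the largest unsorted element has bubbled into place
        return items

    print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]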

Polynomial Evaluation
In mathematics and computer science, polynomial evaluation refers to computation of the value of a polynomial when specific values are substituted for its indeterminates. In other words, evaluating the polynomial P(x_1, x_2) = 2x_1x_2 + x_1^3 + 4 at x_1=2, x_2=3 consists of computing P(2,3) = 2\cdot 2\cdot 3 + 2^3 + 4 = 24.

For evaluating the univariate polynomial a_nx^n + a_{n-1}x^{n-1} + \cdots + a_0, the most naive method would use n multiplications to compute a_n x^n, use n-1 multiplications to compute a_{n-1} x^{n-1}, and so on, for a total of \tfrac{n(n+1)}{2} multiplications and n additions. Using better methods, such as Horner's rule, this can be reduced to n multiplications and n additions. If some preprocessing is allowed, even more savings are possible.

Background

This problem arises frequently in practice. In computational geometry, polynomials are used to compute function approximations using Taylor polynomials. In cryptography and hash tables, polynomials are used to compute k-independent hashing.
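
Horner's rule achieves the n-multiplication bound by rewriting the polynomial as (...((a_n x + a_{n-1}) x + a_{n-2}) x + ...) x + a_0. A short Python sketch:

    # Horner's rule: evaluate a_n x^n + ... + a_0 with n multiplications
    # and n additions by nesting the multiplications.

    def horner(coefficients, x):
        """Evaluate a polynomial at x.  `coefficients` lists a_n, ..., a_0
        from the highest-degree term down to the constant term."""
        result = 0
        for a in coefficients:
            result = result * x + a
        return result

    # p(x) = 3x^3 + 2x + 1 evaluated at x = 2: 24 + 4 + 1 = 29
    print(horner([3, 0, 2, 1], 2))  # 29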

Fast Fourier Transform
A fast Fourier transform (FFT) is an algorithm that computes the discrete Fourier transform (DFT) of a sequence, or its inverse (IDFT). Fourier analysis converts a signal from its original domain (often time or space) to a representation in the frequency domain and vice versa. The DFT is obtained by decomposing a sequence of values into components of different frequencies. This operation is useful in many fields, but computing it directly from the definition is often too slow to be practical. An FFT rapidly computes such transformations by factorizing the DFT matrix into a product of sparse (mostly zero) factors. As a result, it manages to reduce the complexity of computing the DFT from O\left(N^2\right), which arises if one simply applies the definition of DFT, to O(N \log N), where N is the data size. The difference in speed can be enormous, especially for long data sets where ''N'' may be in the thousands or millions. In the presence of round-off error, many FFT algorithms are much more accurate than evaluating the DFT definition directly or indirectly.
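
The divide-and-conquer structure behind the O(N log N) bound is visible in a compact radix-2 Cooley-Tukey sketch (Python; it assumes the input length is a power of two, and production libraries such as numpy.fft use far more optimized variants):

    # Radix-2 Cooley-Tukey FFT: split into even- and odd-indexed halves,
    # transform each recursively, then combine with twiddle factors.
    import cmath

    def fft(x):
        """Return the DFT of x; len(x) must be a power of two."""
        n = len(x)
        if n == 1:
            return list(x)
        even = fft(x[0::2])
        odd = fft(x[1::2])
        twiddled = [cmath.exp(-2j * cmath.pi * k / n) * odd[k]
                    for k in range(n // 2)]
        return [even[k] + twiddled[k] for k in range(n // 2)] + \
               [even[k] - twiddled[k] for k in range(n // 2)]

    # Magnitudes of the DFT of a length-8 rectangular pulse.
    print([round(abs(v), 4) for v in fft([1, 1, 1, 1, 0, 0, 0, 0])])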

Comparison Sort
A comparison sort is a type of sorting algorithm that only reads the list elements through a single abstract comparison operation (often a "less than or equal to" operator or a three-way comparison) that determines which of two elements should occur first in the final sorted list. The only requirement is that the operator forms a total preorder over the data, with:

1. if ''a'' ≤ ''b'' and ''b'' ≤ ''c'' then ''a'' ≤ ''c'' (transitivity)
2. for all ''a'' and ''b'', ''a'' ≤ ''b'' or ''b'' ≤ ''a'' (connexity).

It is possible that both ''a'' ≤ ''b'' and ''b'' ≤ ''a''; in this case either may come first in the sorted list. In a stable sort, the input order determines the sorted order in this case.

A metaphor for thinking about comparison sorts is that someone has a set of unlabelled weights and a balance scale. Their goal is to line up the weights in order by their weight without any information except that obtained by placing two weights on the scale and seeing which one is heavier.
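
As a minimal sketch (insertion sort chosen for brevity; the `leq` parameter name is hypothetical), here is a sort that touches elements only through a single ≤ predicate, exactly as the definition requires. Shifting only strictly greater elements makes it stable:

    # A comparison sort that reads elements only through `leq(a, b)`,
    # which need only be a total preorder.

    def insertion_sort(items, leq):
        """Sort `items` in place using only the comparison `leq`."""
        for i in range(1, len(items)):
            current = items[i]
            j = i - 1
            while j >= 0 and not leq(items[j], current):
                items[j + 1] = items[j]  # shift strictly greater elements right
                j -= 1
            items[j + 1] = current
        return items

    # Sort pairs by first component only; equal keys keep input order.
    pairs = [(2, "a"), (1, "b"), (2, "c"), (1, "d")]
    print(insertion_sort(pairs, lambda a, b: a[0] <= b[0]))
    # [(1, 'b'), (1, 'd'), (2, 'a'), (2, 'c')]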

Polygon Triangulation
In computational geometry, polygon triangulation is the partition of a polygonal area (simple polygon) P into a set of triangles, i.e., finding a set of triangles with pairwise non-intersecting interiors whose union is P. Triangulations may be viewed as special cases of planar straight-line graphs. When there are no holes or added points, triangulations form maximal outerplanar graphs.

Polygon triangulation without extra vertices

Over time, a number of algorithms have been proposed to triangulate a polygon.

Special cases

It is trivial to triangulate any convex polygon in linear time into a fan triangulation, by adding diagonals from one vertex to all other non-nearest neighbor vertices. The total number of ways to triangulate a convex ''n''-gon by non-intersecting diagonals is the (''n''−2)nd Catalan number, which equals

:\frac{1}{n-1}\binom{2n-4}{n-2},

a formula found by Leonhard Euler. A monotone polygon can be triangulated in linear time with either the algorithm of A. Fournier and D.Y. Montuno, or the algorithm of Godfried Toussaint.
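
The fan triangulation of a convex polygon is simple enough to sketch directly (Python, assuming the vertices are given in order around the boundary; the function name is hypothetical):

    # Fan triangulation of a convex polygon in linear time: connect one
    # apex vertex to every other non-adjacent vertex.

    def fan_triangulation(vertices):
        """Return the triangles (as vertex triples) of a fan
        triangulation of the convex polygon `vertices`."""
        apex = vertices[0]
        return [(apex, vertices[i], vertices[i + 1])
                for i in range(1, len(vertices) - 1)]

    square = [(0, 0), (1, 0), (1, 1), (0, 1)]
    print(fan_triangulation(square))  # two triangles for a 4-gon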

Raimund Seidel
Raimund G. Seidel is a German and Austrian theoretical computer scientist and an expert in computational geometry. Seidel was born in Graz, Austria, and studied with Hermann Maurer at the Graz University of Technology. He received his M.Sc. in 1981 from the University of British Columbia under David G. Kirkpatrick. He received his Ph.D. in 1987 from Cornell University under the supervision of John Gilbert. After teaching at the University of California, Berkeley, he moved in 1994 to Saarland University. In 1997 he and Christoph M. Hoffmann were program chairs for the Symposium on Computational Geometry. In 2014, he took over as Scientific Director of the Leibniz Center for Informatics (LZI) from Reinhard Wilhelm. Seidel invented backwards analysis of randomized algorithms and used it to analyze a simple linear programming algorithm that runs in linear time for problems of bounded dimension. With his student Cecilia R. Aragon in 1989 he devised the treap data structure, and he is also known for the Kirkpatrick–Seidel convex hull algorithm.

Linear Search
In computer science, a linear search or sequential search is a method for finding an element within a list. It sequentially checks each element of the list until a match is found or the whole list has been searched. A linear search runs in at worst linear time and makes at most n comparisons, where n is the length of the list. If each element is equally likely to be searched, then linear search has an average case of (n+1)/2 comparisons, but the average case can be affected if the search probabilities for each element vary. Linear search is rarely practical because other search algorithms and schemes, such as the binary search algorithm and hash tables, allow significantly faster searching for all but short lists.

Algorithm

A linear search sequentially checks each element of the list until it finds an element that matches the target value. If the algorithm reaches the end of the list, the search terminates unsuccessfully.

Basic algorithm

Given a list of n elements with values or records L_0, ..., L_{n-1}, and target value T, a subroutine using linear search finds the index of the target T in the list, as in the sketch below.
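
A minimal Python rendering of this basic algorithm (the function name and the -1 not-found convention are illustrative choices):

    # Linear search: scan until a match is found or the list ends.

    def linear_search(items, target):
        """Return the index of the first occurrence of `target` in
        `items`, or -1 if the search terminates unsuccessfully."""
        for i, value in enumerate(items):
            if value == target:
                return i
        return -1

    data = [7, 3, 9, 3]
    print(linear_search(data, 9))  # 2
    print(linear_search(data, 5))  # -1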

Kadane's Algorithm
In computer science, the maximum sum subarray problem, also known as the maximum segment sum problem, is the task of finding a contiguous subarray with the largest sum, within a given one-dimensional array A[1...n] of numbers. Formally, the task is to find indices i and j with 1 \leq i \leq j \leq n, such that the sum

:\sum_{x=i}^{j} A[x]

is as large as possible. (Some formulations of the problem also allow the empty subarray to be considered; by convention, the sum of all values of the empty subarray is zero.) Each number in the input array A could be positive, negative, or zero. For example, for the array of values [−2, 1, −3, 4, −1, 2, 1, −5, 4], the contiguous subarray with the largest sum is [4, −1, 2, 1], with sum 6.

Some properties of this problem are:

1. If the array contains all non-negative numbers, then the problem is trivial; a maximum subarray is the entire array.
2. If the array contains all non-positive numbers, then a solution is any subarray of size 1 containing the maximal value of the array (or the empty subarray, if it is permitted).
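
Kadane's algorithm solves the problem in a single O(n) pass by tracking the best sum of a subarray ending at the current position. A standard sketch in Python (non-empty-subarray formulation):

    # Kadane's algorithm: `best_ending_here` is the largest sum of a
    # subarray ending at the current position; carrying it forward is
    # only worthwhile while it improves on starting afresh.

    def max_subarray_sum(numbers):
        """Return the largest sum of any non-empty contiguous subarray."""
        best = best_ending_here = numbers[0]
        for x in numbers[1:]:
            best_ending_here = max(x, best_ending_here + x)
            best = max(best, best_ending_here)
        return best

    print(max_subarray_sum([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # 6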