Unique Games Conjecture
In computational complexity theory, the unique games conjecture (often referred to as UGC) is a conjecture made by Subhash Khot in 2002. The conjecture postulates that the problem of determining the approximate ''value'' of a certain type of game, known as a ''unique game'', has NP-hard computational complexity. It has broad applications in the theory of hardness of approximation. If the unique games conjecture is true and P ≠ NP (the conjecture is vacuously true if P = NP, since every problem in NP would then also be NP-hard), then for many important problems it is not only impossible to get an exact solution in polynomial time (as postulated by the P versus NP problem), but also impossible to get a good polynomial-time approximation. The problems for which such an inapproximability result would hold include constraint satisfaction problems, which crop up in a wide variety of disciplines. The conjecture is unusual in that the academic world ...
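
A unique game can be viewed as a constraint graph in which each edge carries a bijection (permutation) between the label sets of its endpoints, and the ''value'' of the game is the largest fraction of edge constraints that any labeling can satisfy. The following is a minimal illustrative sketch of that definition in Python; the instance, the variable names, and the brute-force search are illustrative assumptions, not part of any standard library. Note that this exact evaluation takes exponential time, whereas the conjecture concerns approximating the value in polynomial time.

    from itertools import product

    # A toy unique game: vertices 0..2, labels {0, 1}, and for each edge a
    # permutation pi; the constraint on edge (u, v) is  label[v] == pi[label[u]].
    labels = [0, 1]
    edges = {
        (0, 1): {0: 1, 1: 0},  # label[1] must differ from label[0]
        (1, 2): {0: 1, 1: 0},  # label[2] must differ from label[1]
        (0, 2): {0: 1, 1: 0},  # label[2] must differ from label[0]
    }

    def value(edges, labels, n):
        """Exact value of the game: max fraction of satisfied constraints over all labelings."""
        best = 0.0
        for assignment in product(labels, repeat=n):
            satisfied = sum(1 for (u, v), pi in edges.items() if assignment[v] == pi[assignment[u]])
            best = max(best, satisfied / len(edges))
        return best

    # This odd cycle of "not equal" constraints cannot be fully satisfied: the value is 2/3.
    print(value(edges, labels, 3))  # 0.666...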

Computational Complexity Theory
In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and explores the relationships between these classifications. A computational problem is a task solved by a computer. A computational problem is solvable by mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition, by introducing mathematical models of computation to study these problems and quantifying their computational complexity, i.e., the amount of resources needed to solve them, such as time and storage. Other measures of complexity are also used, such as the amount of communication (used in communication complexity), the number of logic gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). O ...

Vertex Cover
In graph theory, a vertex cover (sometimes node cover) of a graph is a set of vertices that includes at least one endpoint of every edge of the graph. In computer science, the problem of finding a minimum vertex cover is a classical optimization problem. It is NP-hard, so it cannot be solved by a polynomial-time algorithm if P ≠ NP. Moreover, it is hard to approximate – it cannot be approximated up to a factor smaller than 2 if the unique games conjecture is true. On the other hand, it has several simple 2-factor approximations. It is a typical example of an NP-hard optimization problem that has an approximation algorithm. Its decision version, the vertex cover problem, was one of Karp's 21 NP-complete problems and is therefore a classical NP-complete problem in computational complexity theory. Furthermore, the vertex cover problem is fixed-parameter tractable and a central problem in parameterized complexity theory. The minimum vertex cover problem can be formulated ...
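
One of the simple 2-factor approximations mentioned above takes a maximal matching and returns both endpoints of every matched edge; since any cover must contain at least one endpoint of each matched edge, the result is at most twice the optimum. Below is a brief sketch of that idea in Python; the graph encoding and function name are illustrative assumptions, not from a particular library.

    def vertex_cover_2approx(edges):
        """Maximal-matching 2-approximation for minimum vertex cover.

        edges: iterable of (u, v) pairs.  Returns a set of vertices covering every edge,
        of size at most twice the minimum vertex cover.
        """
        cover = set()
        for u, v in edges:
            if u not in cover and v not in cover:
                # (u, v) joins a maximal matching; take both endpoints.
                cover.add(u)
                cover.add(v)
        return cover

    # Example: the path 0-1-2-3 has minimum cover {1, 2}; the approximation returns {0, 1, 2, 3}.
    print(vertex_cover_2approx([(0, 1), (1, 2), (2, 3)]))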


Sanjeev Arora
Sanjeev Arora (born January 1968) is an Indian-American theoretical computer scientist who works in AI and machine learning. He scored the number 1 rank in the IIT JEE in 1986. He was a visiting scholar at the Institute for Advanced Study in 2002–03. In 2008 he was inducted as a Fellow of the Association for Computing Machinery. In 2011 he was awarded the ACM Infosys Foundation Award (now renamed the ACM Prize in Computing), given to mid-career researchers in computer science. He is a two-time recipient of the Gödel Prize (2001 & 2010). Arora was awarded the Fulkerson Prize for 2012 for his work on improving the approximation ratio for graph separators and related problems from O(\log n) to O(\sqrt{\log n}) (jointly with Satish Rao and Umesh Vazirani). In 2012 he became a Simons Investigator. Arora was elected in 2015 to the American Academy of Arts and Sciences and in 2018 to the National Academy of Sciences. He was a plenary speaker at the 2018 International Congress of Math ...

Complete Bipartite Graph
In the mathematical field of graph theory, a complete bipartite graph or biclique is a special kind of bipartite graph where every vertex of the first set is connected to every vertex of the second set. Graph theory itself is typically dated as beginning with Leonhard Euler's 1736 work on the Seven Bridges of Königsberg. However, drawings of complete bipartite graphs were already printed as early as 1669, in connection with an edition of the works of Ramon Llull edited by Athanasius Kircher. Llull himself had made similar drawings of complete graphs three centuries earlier. A complete bipartite graph is a graph whose vertices can be partitioned into two subsets V_1 and V_2 such that no edge has both endpoints in the same subset, and every possible edge that could connect vertices in different subsets is part of the graph. That is, it is a bipartite graph (V_1, V_2, E) such that for every two vertices v_1 ∈ V_1 and v_2 ∈ V_2, v_1 v_2 is an edge in E. A complete bipartite graph w ...
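
As a concrete illustration of the definition, the sketch below builds the edge set of the complete bipartite graph K_{m,n} in Python; the function name and vertex numbering are illustrative assumptions.

    def complete_bipartite_edges(m, n):
        """Edges of K_{m,n}: one part is 0..m-1, the other is m..m+n-1.

        Every vertex of the first part is joined to every vertex of the second part,
        and no edge stays inside a single part.
        """
        first = range(m)
        second = range(m, m + n)
        return [(u, v) for u in first for v in second]

    edges = complete_bipartite_edges(2, 3)
    print(len(edges))  # 2 * 3 = 6 edges
    print(edges)       # [(0, 2), (0, 3), (0, 4), (1, 2), (1, 3), (1, 4)]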

Small Set Expansion Hypothesis
The small set expansion hypothesis or small set expansion conjecture in computational complexity theory is an unproven computational hardness assumption. Under the small set expansion hypothesis it is assumed to be computationally infeasible to distinguish between a certain class of expander graphs called "small set expanders" and other graphs that are very far from being small set expanders. This assumption implies the hardness of several other computational problems, and the optimality of certain known approximation algorithms. The small set expansion hypothesis is related to the unique games conjecture, another unproven computational hardness assumption according to which accurately approximating the value of certain games is computationally infeasible. If the small set expansion hypothesis is true, then so is the unique games conjecture. Background The ''edge expansion'' of a set X of vertices in a graph G is defined as \frac{|\partial X|}{|X|}, where the vertical bars denote the number of el ...
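
To make the quantity concrete, here is a small Python sketch that computes the edge expansion |∂X| / |X| of a vertex set in an undirected graph given as an edge list; the names and encoding are illustrative assumptions rather than a standard API.

    def edge_expansion(edges, X):
        """Edge expansion of the vertex set X: |boundary(X)| / |X|.

        edges: iterable of undirected edges (u, v).
        boundary(X) is the set of edges with exactly one endpoint in X.
        """
        X = set(X)
        boundary = sum(1 for u, v in edges if (u in X) != (v in X))
        return boundary / len(X)

    # A 4-cycle 0-1-2-3-0: the set {0, 1} has two boundary edges, so expansion 2/2 = 1.0.
    cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
    print(edge_expansion(cycle, {0, 1}))  # 1.0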

Marek Karpinski
Marek Karpinski is a computer scientist and mathematician known for his research in the theory of algorithms and their applications (biography at the Hausdorff Center for Mathematics, Excellence Cluster).

Parallel Repetition Theorem
The parallel repetition theorem, proved by Ran Raz, states that if a two-prover one-round game has value strictly less than 1, then the value of the game obtained by playing n independent copies in parallel decreases exponentially in n, at a rate depending on the size of the answer alphabet. Parallel repetition is a key tool for amplifying hardness in probabilistically checkable proofs and in the study of the unique games conjecture.

Graph (discrete Mathematics)
In discrete mathematics, particularly in graph theory, a graph is a structure consisting of a set of objects where some pairs of the objects are in some sense "related". The objects are represented by abstractions called ''vertices'' (also called ''nodes'' or ''points'') and each of the related pairs of vertices is called an ''edge'' (also called ''link'' or ''line''). Typically, a graph is depicted in diagrammatic form as a set of dots or circles for the vertices, joined by lines or curves for the edges. The edges may be directed or undirected. For example, if the vertices represent people at a party, and there is an edge between two people if they shake hands, then this graph is undirected because any person ''A'' can shake hands with a person ''B'' only if ''B'' also shakes hands with ''A''. In contrast, if an edge from a person ''A'' to a person ''B'' means that ''A'' owes money to ''B'', then this graph is directed, because owing mon ...
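
The sketch below shows one common way to represent the undirected hand-shaking example in Python, using an adjacency-set dictionary; the names and encoding are illustrative assumptions.

    from collections import defaultdict

    def add_undirected_edge(adjacency, a, b):
        """Record a handshake: an undirected edge appears in both adjacency sets."""
        adjacency[a].add(b)
        adjacency[b].add(a)

    party = defaultdict(set)
    add_undirected_edge(party, "Alice", "Bob")
    add_undirected_edge(party, "Bob", "Carol")

    # Undirected: if Alice shook hands with Bob, then Bob shook hands with Alice.
    print("Alice" in party["Bob"] and "Bob" in party["Alice"])  # True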

Maximum Cut
In a graph, a maximum cut is a cut whose size is at least the size of any other cut. That is, it is a partition of the graph's vertices into two complementary sets S and T, such that the number of edges between S and T is as large as possible. Finding such a cut is known as the max-cut problem. The problem can be stated simply as follows. One wants a subset S of the vertex set such that the number of edges between S and the complementary subset is as large as possible. Equivalently, one wants a bipartite subgraph of the graph with as many edges as possible. There is a more general version of the problem called weighted max-cut, where each edge is associated with a real number, its weight, and the objective is to maximize the total weight of the edges between S and its complement rather than the number of the edges. The weighted max-cut problem allowing both positive and negative weights can be trivially transformed into a weighted minimum cut problem by flipping the sign in all ...
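
As a small, concrete illustration, the sketch below computes the size of a cut and finds a maximum cut of a tiny graph by brute force in Python; the encoding and function names are illustrative assumptions, and the exhaustive search is feasible only for very small graphs, since max-cut is NP-hard in general.

    from itertools import combinations

    def cut_size(edges, S):
        """Number of edges with exactly one endpoint in S."""
        S = set(S)
        return sum(1 for u, v in edges if (u in S) != (v in S))

    def max_cut_bruteforce(vertices, edges):
        """Try every subset S of the vertices and keep the largest cut."""
        best_size, best_S = 0, set()
        for r in range(len(vertices) + 1):
            for S in combinations(vertices, r):
                size = cut_size(edges, S)
                if size > best_size:
                    best_size, best_S = size, set(S)
        return best_size, best_S

    # A triangle has maximum cut of size 2: no bipartition separates all three edges.
    triangle = [(0, 1), (1, 2), (0, 2)]
    print(max_cut_bruteforce([0, 1, 2], triangle))  # (2, {0})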


Semidefinite Programming
Semidefinite programming (SDP) is a subfield of mathematical programming concerned with the optimization of a linear objective function (a user-specified function that the user wants to minimize or maximize) over the intersection of the cone of positive semidefinite matrices with an affine space, i.e., a spectrahedron. Semidefinite programming is a relatively new field of optimization which is of growing interest for several reasons. Many practical problems in operations research and combinatorial optimization can be modeled or approximated as semidefinite programming problems. In automatic control theory, SDPs are used in the context of linear matrix inequalities. SDPs are in fact a special case of cone programming and can be efficiently solved by interior point methods. All linear programs and (convex) quadratic programs can be expressed as SDPs, and via hierarchies of SDPs the solutions of polynomial optimization problems can be approximated. Semidefinite programming ha ...
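For instance, the well-known Goemans–Williamson relaxation of max-cut is a semidefinite program: maximize the sum of w_{ij} (1 - X_{ij}) / 2 over symmetric positive semidefinite matrices X with unit diagonal. The sketch below sets this up in Python, assuming the cvxpy modelling library is installed; the instance and variable names are illustrative assumptions.

    import cvxpy as cp
    import numpy as np

    # Weighted adjacency matrix of a triangle (all weights 1).
    W = np.array([[0, 1, 1],
                  [1, 0, 1],
                  [1, 1, 0]])
    n = W.shape[0]

    # SDP relaxation of max-cut: X is positive semidefinite with X_ii = 1.
    X = cp.Variable((n, n), PSD=True)
    objective = cp.Maximize(cp.sum(cp.multiply(W, (1 - X))) / 4)
    constraints = [cp.diag(X) == 1]
    problem = cp.Problem(objective, constraints)
    problem.solve()

    # For the triangle the relaxation value is 9/4, slightly above the true max cut of 2.
    print(problem.value)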




Approximation Algorithms
In computer science and operations research, approximation algorithms are efficient algorithms that find approximate solutions to optimization problems (in particular NP-hard problems) with provable guarantees on the distance of the returned solution to the optimal one. Approximation algorithms naturally arise in the field of theoretical computer science as a consequence of the widely believed P ≠ NP conjecture. Under this conjecture, a wide class of optimization problems cannot be solved exactly in polynomial time. The field of approximation algorithms, therefore, tries to understand how closely it is possible to approximate optimal solutions to such problems in polynomial time. In an overwhelming majority of the cases, the guarantee of such algorithms is a multiplicative one expressed as an approximation ratio or approximation factor, i.e., the optimal solution is always guaranteed to be within a (predetermined) multiplicative factor of the returned solution. However, there a ...
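
The multiplicative guarantee can be checked empirically on small instances. The sketch below compares the matching-based 2-approximation for minimum vertex cover against a brute-force optimum on a tiny graph and reports the observed ratio; the instance and function names are illustrative assumptions.

    from itertools import combinations

    def covers(edges, C):
        return all(u in C or v in C for u, v in edges)

    def optimum_vertex_cover(vertices, edges):
        """Smallest vertex cover by exhaustive search (exponential; tiny graphs only)."""
        for r in range(len(vertices) + 1):
            for C in combinations(vertices, r):
                if covers(edges, set(C)):
                    return set(C)

    def approx_vertex_cover(edges):
        """Maximal-matching heuristic with a provable factor-2 guarantee."""
        cover = set()
        for u, v in edges:
            if u not in cover and v not in cover:
                cover.update((u, v))
        return cover

    edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
    opt = optimum_vertex_cover(range(5), edges)
    apx = approx_vertex_cover(edges)
    print(len(apx) / len(opt))  # observed ratio, never more than the guaranteed factor of 2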


Ryan O'Donnell (computer Scientist)
Ryan O'Donnell is a Canadian theoretical computer scientist and a professor at Carnegie Mellon University. He is known for his work on the analysis of Boolean functions and for authoring the textbook on this subject. He is also known for his work on computational learning theory, hardness of approximation, property testing, quantum computation and quantum information. O'Donnell attended the Gifted Program at O'Neill Collegiate and Vocational Institute in Oshawa, Ontario, and then went on to complete his B.Sc. in Mathematics and Computer Science at the University of Toronto. He then completed his Ph.D. at the Massachusetts Institute of Technology (MIT) in 2003, advised by Madhu Sudan. O'Donnell proved that the Goemans–Williamson approximation algorithm for MAX-CUT is optimal, assuming the unique games conjecture. The proof follows from two papers, one in 2004 with Subhash Khot, Guy Kindler, and Elchanan Mossel which reduced this statement to proving the ...