Token Reconfiguration
In computational complexity theory and combinatorics, the token reconfiguration problem is a reconfiguration problem on a graph with both an initial and desired state for tokens. Given a graph G, an initial state of tokens is defined by a subset V \subset V(G) of the vertices of the graph; let n = |V|. Moving a token from vertex v_1 to vertex v_2 is valid if v_1 and v_2 are joined by a path in G that does not contain any other tokens; note that the distance traveled within the graph is inconsequential, and moving a token across multiple edges sequentially is considered a single move. A desired end state is defined as another subset V' \subset V(G). The goal is to minimize the number of valid moves to reach the end state from the initial state.
Motivation
The problem is motivated by so-called sliding puzzles, which are in fact a variant of this problem, often restricted to rectangular grid graphs with no holes. The most famous such puzzle, the 15 puzzle, is a variant of this ...
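Because only the existence of a token-free path matters, not its length, the validity of a single move reduces to reachability in the graph with the occupied vertices removed, which a breadth-first search can decide. The sketch below is a minimal illustration of this check; the adjacency-list representation, the function name, and the choice of Python are assumptions made here, not part of the problem's formal statement.

    from collections import deque

    def is_valid_move(adj, tokens, v1, v2):
        """Check whether the token on v1 may legally move to v2.

        adj    -- adjacency list mapping each vertex to its neighbours
        tokens -- set of currently occupied vertices (v1 among them)
        A move is valid if some path from v1 to v2 avoids all other
        tokens; path length is irrelevant, so a BFS suffices.
        """
        if v1 not in tokens or v2 in tokens:
            return False
        blocked = tokens - {v1}  # the moving token vacates v1
        queue, seen = deque([v1]), {v1}
        while queue:
            u = queue.popleft()
            if u == v2:
                return True
            for w in adj.get(u, ()):
                if w not in seen and w not in blocked:
                    seen.add(w)
                    queue.append(w)
        return False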



Computational Complexity Theory
In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and relating these classes to each other. A computational problem is a task solved by a computer. A computational problem is solvable by mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition, by introducing mathematical models of computation to study these problems and quantifying their computational complexity, i.e., the amount of resources needed to solve them, such as time and storage. Other measures of complexity are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). One of the roles of computationa ...


Combinatorics
Combinatorics is an area of mathematics primarily concerned with counting, both as a means and an end in obtaining results, and certain properties of finite structures. It is closely related to many other areas of mathematics and has many applications ranging from logic to statistical physics and from evolutionary biology to computer science. Combinatorics is well known for the breadth of the problems it tackles. Combinatorial problems arise in many areas of pure mathematics, notably in algebra, probability theory, topology, and geometry, as well as in its many application areas. Many combinatorial questions have historically been considered in isolation, giving an ''ad hoc'' solution to a problem arising in some mathematical context. In the later twentieth century, however, powerful and general theoretical methods were developed, making combinatorics into an independent branch of mathematics in its own right. One of the oldest and most accessible parts of combinatorics is gra ...


Reconfiguration
In discrete mathematics and theoretical computer science, reconfiguration problems are computational problems involving reachability or connectivity of state spaces.
Types of problems
Here, a state space is a discrete set of configurations of a system or solutions of a combinatorial problem, called states, together with a set of allowed moves linking one state to another. Reconfiguration problems may ask:
* For a given class of problems, is the state space always connected? That is, can one transform every pair of states into each other with a sequence of moves? If not, what is the computational complexity of determining whether the state space for a particular problem is connected?
* What is the diameter of the state space, the smallest number D such that every two states can be transformed into each other with at most D moves?
* Given two states, what is the complexity of determining whether they can be transformed into each other, or of finding the shortest sequence of moves for tran ...
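When the state space is small enough to enumerate, the last two questions can be answered directly by breadth-first search over states. The following is a minimal sketch under that assumption; the state encoding and the moves callback are illustrative choices made here, not a standard interface.

    from collections import deque

    def shortest_reconfiguration(start, goal, moves):
        """Minimum number of moves from start to goal, by BFS.

        start, goal -- hashable states (e.g. frozensets of positions)
        moves(s)    -- yields the states reachable from s in one move
        Returns the distance, or None if goal is unreachable.
        """
        dist = {start: 0}
        queue = deque([start])
        while queue:
            s = queue.popleft()
            if s == goal:
                return dist[s]
            for t in moves(s):
                if t not in dist:
                    dist[t] = dist[s] + 1
                    queue.append(t)
        return None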



Sliding Puzzle
A sliding puzzle, sliding block puzzle, or sliding tile puzzle is a combination puzzle that challenges a player to slide (frequently flat) pieces along certain routes (usually on a board) to establish a certain end-configuration. The pieces to be moved may consist of simple shapes, or they may be imprinted with colours, patterns, sections of a larger picture (like a jigsaw puzzle), numbers, or letters. Sliding puzzles are essentially two-dimensional in nature, even if the sliding is facilitated by mechanically interlinked pieces (like partially encaged marbles) or three-dimensional tokens. In manufactured wood and plastic products, the linking and encaging is often achieved in combination, through mortise-and-tenon key channels along the edges of the pieces. In at least one vintage case of the popular Chinese cognate game Huarong Road, a wire screen prevents lifting of the pieces, which remain loose. Some sliding puzzles are mechanical puzzles. Howev ...



Reduction (complexity)
In computability theory and computational complexity theory, a reduction is an algorithm for transforming one problem into another problem. A sufficiently efficient reduction from one problem to another may be used to show that the second problem is at least as difficult as the first. Intuitively, problem ''A'' is reducible to problem ''B'' if an algorithm for solving problem ''B'' efficiently (if it existed) could also be used as a subroutine to solve problem ''A'' efficiently. When this is true, solving ''A'' cannot be harder than solving ''B''. "Harder" means having a higher estimate of the required computational resources in a given context (e.g., higher time complexity, greater memory requirement, expensive need for extra hardware processor cores for a parallel solution compared to a single-threaded solution, etc.). The existence of a reduction from ''A'' to ''B'' can be written in the shorthand notation ''A'' ≤_m ''B'', usually with a subscript on the ≤ to indicate the t ...
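As a concrete illustration of such a reduction (an example chosen here, not taken from the excerpt above): a graph G on n vertices has an independent set of size at least k exactly when it has a vertex cover of size at most n - k, so a solver for Vertex Cover decides Independent Set. A minimal sketch, with the solver's interface assumed for the example:

    def solve_independent_set(vertices, edges, k, solve_vertex_cover):
        """Decide Independent Set by many-one reduction to Vertex Cover.

        S is an independent set of G iff V(G) minus S is a vertex
        cover, so G has an independent set of size >= k iff it has
        a vertex cover of size <= |V| - k.
        solve_vertex_cover(vertices, edges, size) is an assumed
        oracle deciding whether a cover of at most `size` exists.
        """
        return solve_vertex_cover(vertices, edges, len(vertices) - k)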


Set Cover
The set cover problem is a classical question in combinatorics, computer science, operations research, and complexity theory. It is one of Karp's 21 NP-complete problems shown to be NP-complete in 1972. Given a set of elements (called the universe) and a collection of sets whose union equals the universe, the set cover problem is to identify the smallest sub-collection of sets whose union equals the universe. For example, consider the universe U = {1, 2, 3, 4, 5} and the collection of sets S = {{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}}. Clearly the union of S is U. However, we can cover all of the elements with the following, smaller number of sets: {{1, 2, 3}, {4, 5}}. More formally, given a universe \mathcal{U} and a family \mathcal{S} of subsets of \mathcal{U}, a ''cover'' is a subfamily \mathcal{C} \subseteq \mathcal{S} of sets whose union is \mathcal{U}. In the set covering decision problem, the input is a pair (\mathcal{U}, \mathcal{S}) and an integer k; the question is whether there is a set covering of size k or less. In the set covering optimization problem, the input is a pair (\ma ...
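The classical greedy heuristic for the optimization version picks, at each step, the set covering the most still-uncovered elements; it is known to achieve an approximation ratio of H(n) ≈ ln n. A minimal sketch of that greedy rule (names are illustrative choices made here):

    def greedy_set_cover(universe, subsets):
        """Greedy approximation for set cover.

        universe -- set of all elements
        subsets  -- list of sets whose union should be the universe
        Repeatedly take the set covering the most uncovered elements.
        """
        uncovered = set(universe)
        cover = []
        while uncovered:
            best = max(subsets, key=lambda s: len(s & uncovered))
            if not (best & uncovered):
                return None  # the subsets do not cover the universe
            cover.append(best)
            uncovered -= best
        return cover

    # The example from the text: two sets suffice.
    print(greedy_set_cover({1, 2, 3, 4, 5},
                           [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]))
    # -> [{1, 2, 3}, {4, 5}]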


Approximation Algorithm
In computer science and operations research, approximation algorithms are efficient algorithms that find approximate solutions to optimization problems (in particular NP-hard problems) with provable guarantees on the distance of the returned solution to the optimal one. Approximation algorithms naturally arise in the field of theoretical computer science as a consequence of the widely believed P ≠ NP conjecture. Under this conjecture, a wide class of optimization problems cannot be solved exactly in polynomial time. The field of approximation algorithms, therefore, tries to understand how closely it is possible to approximate optimal solutions to such problems in polynomial time. In an overwhelming majority of the cases, the guarantee of such algorithms is a multiplicative one expressed as an approximation ratio or approximation factor, i.e., the optimal solution is always guaranteed to be within a (predetermined) multiplicative factor of the returned solution. However, there are ...
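A standard example of such a multiplicative guarantee (chosen here for illustration) is the matching-based 2-approximation for minimum vertex cover: greedily build a maximal matching and take both endpoints of every matched edge. Since any cover must contain at least one endpoint of each matched edge, the result is at most twice the optimum.

    def vertex_cover_2approx(edges):
        """2-approximation for minimum vertex cover.

        Both endpoints of each greedily matched edge enter the
        cover; the matching is maximal, so every edge is covered,
        and the cover has at most twice the optimum size.
        """
        cover = set()
        for u, v in edges:
            if u not in cover and v not in cover:
                cover.update((u, v))
        return cover

    print(vertex_cover_2approx([(1, 2), (2, 3), (3, 4)]))
    # -> {1, 2, 3, 4}  (the optimum here is {2, 3})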


L-reduction
In computer science, particularly the study of approximation algorithms, an L-reduction (''linear reduction'') is a transformation of optimization problems which linearly preserves approximability features; it is one type of approximation-preserving reduction. L-reductions in studies of approximability of optimization problems play a similar role to that of polynomial reductions in the studies of computational complexity of decision problems. The term ''L reduction'' is sometimes used to refer to log-space reductions, by analogy with the complexity class L, but this is a different concept.
Definition
Let A and B be optimization problems and c_A and c_B their respective cost functions. A pair of functions ''f'' and ''g'' is an L-reduction if all of the following conditions are met:
* functions ''f'' and ''g'' are computable in polynomial time,
* if ''x'' is an instance of problem A, then ''f''(''x'') is an instance of problem B,
* if ''y''′ is a solution to ''f''(''x''), then ''g ...
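For reference, the standard definition completes the truncated list above as follows (this is the usual textbook formulation, not recovered from the excerpt): if y′ is a solution to f(x), then g(y′) is a solution to x, and there exist positive constants α and β such that

    \mathrm{OPT}_B(f(x)) \le \alpha \cdot \mathrm{OPT}_A(x),
    \qquad
    \bigl|\, c_A(g(y')) - \mathrm{OPT}_A(x) \,\bigr|
        \le \beta \cdot \bigl|\, c_B(y') - \mathrm{OPT}_B(f(x)) \,\bigr|.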



NP-complete Problems
In computational complexity theory, a problem is NP-complete when:
# it is a problem for which the correctness of each solution can be verified quickly (namely, in polynomial time) and a brute-force search algorithm can find a solution by trying all possible solutions.
# the problem can be used to simulate every other problem for which we can verify quickly that a solution is correct.
In this sense, NP-complete problems are the hardest of the problems to which solutions can be verified quickly. If we could find solutions of some NP-complete problem quickly, we could quickly find the solutions of every other problem to which a given solution can be easily verified. The name "NP-complete" is short for "nondeterministic polynomial-time complete". In this name, "nondeterministic" refers to nondeterministic Turing machines, a way of mathematically formalizing the idea of a brute-force search algorithm. Polynomial time refers to an amount of time that is considered "quick" for a dete ...
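The "verified quickly" requirement means a candidate solution (a certificate) can be checked in polynomial time even when finding one may be hard. As an illustration, a verifier for the NP-complete Vertex Cover problem (the example and names are choices made here):

    def verify_vertex_cover(edges, certificate, k):
        """Polynomial-time verifier for Vertex Cover certificates.

        Accepts iff the proposed certificate is a set of at most
        k vertices touching every edge. Checking takes time linear
        in the input, even though finding a certificate may not.
        """
        cover = set(certificate)
        return len(cover) <= k and all(u in cover or v in cover
                                       for u, v in edges)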


Computational Problems In Graph Theory
Computation is any type of arithmetic or non-arithmetic calculation that follows a well-defined model (e.g., an algorithm). Mechanical or electronic devices (or, historically, people) that perform computations are known as ''computers''. An especially well-known discipline of the study of computation is computer science.
Physical process of Computation
Computation can be seen as a purely physical process occurring inside a closed physical system called a computer. Examples of such physical systems are digital computers, mechanical computers, quantum computers, DNA computers, molecular computers, microfluidics-based computers, analog computers, and wetware computers. This point of view has been adopted by the physics of computation, a branch of theoretical physics, as well as the field of natural computing. An even more radical point of view, pancomputationalism, is the postulate of digital physics that argues that the evolution of the universe is itself a ...

