Graph Embedding
Graph Embedding
In topological graph theory, an embedding (also spelled imbedding) of a graph G on a surface \Sigma is a representation of G on \Sigma in which points of \Sigma are associated with vertices and simple arcs (homeomorphic images of [0,1]) are associated with edges in such a way that:
* the endpoints of the arc associated with an edge e are the points associated with the end vertices of e,
* no arcs include points associated with other vertices,
* two arcs never intersect at a point which is interior to either of the arcs.
Here a surface is a compact, connected 2-manifold. Informally, an embedding of a graph into a surface is a drawing of the graph on the surface in such a way that its edges may intersect only at their endpoints. It is well known that any finite graph can be embedded in 3-dimensional Euclidean space \mathbb{R}^3. A planar graph is one that ...
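As a concrete illustration (this sketch is not part of the article, and assumes the usual rotation-system description of an embedding on an orientable surface), the Python fragment below traces the faces of a rotation system and reads off the genus from Euler's formula V − E + F = 2 − 2g. The function names and the K4 example are hypothetical choices.

```python
# A minimal illustrative sketch: an embedding on an orientable surface can be
# encoded by a rotation system, i.e. a cyclic order of neighbours around every
# vertex.  Tracing the faces of that rotation system and applying Euler's
# formula V - E + F = 2 - 2g recovers the genus of the embedding surface.

def faces_of_rotation_system(rotation):
    """rotation: dict mapping each vertex v to the list of its neighbours in
    cyclic order around v.  Returns the faces as lists of darts (u, v)."""
    # Face-tracing rule: from dart (u, v), continue with (v, w) where w is
    # the neighbour following u in the rotation at v.
    next_dart = {}
    for v, nbrs in rotation.items():
        for i, u in enumerate(nbrs):
            next_dart[(u, v)] = (v, nbrs[(i + 1) % len(nbrs)])

    faces, seen = [], set()
    for start in next_dart:
        if start in seen:
            continue
        face, d = [], start
        while d not in seen:
            seen.add(d)
            face.append(d)
            d = next_dart[d]
        faces.append(face)
    return faces

def genus(rotation):
    """Orientable genus of the embedded surface via Euler's formula."""
    V = len(rotation)
    E = sum(len(nbrs) for nbrs in rotation.values()) // 2
    F = len(faces_of_rotation_system(rotation))
    return (2 - V + E - F) // 2

# A planar rotation system for K4: the embedding lives on the sphere (genus 0).
K4 = {1: [2, 4, 3], 2: [3, 4, 1], 3: [1, 4, 2], 4: [3, 1, 2]}
print(genus(K4))  # 0
```

A different rotation system for the same graph may produce fewer faces and hence a positive genus, which is how combinatorial descriptions distinguish embeddings of one graph on different surfaces.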




Heawood Graph And Map On Torus
Heawood is a surname. Notable people with the surname include:
*Jonathan Heawood, British journalist
*Percy John Heawood (1861–1955), British mathematician
**Heawood conjecture
**Heawood graph
**Heawood number
See also
*Heywood (surname)



Ribbon Graph
In topological graph theory, a ribbon graph is a way to represent graph embeddings, equivalent in power to signed rotation systems or graph-encoded maps. It is convenient for visualizations of embeddings, because it can represent unoriented surfaces without self-intersections (unlike embeddings of the whole surface into three-dimensional Euclidean space) and because it omits the parts of the surface that are far away from the graph, allowing holes through which the rest of the embedding can be seen. Ribbon graphs are also called fat graphs. Definition In a ribbon graph representation, each vertex of a graph is represented by a topological disk, and each edge is represented by a topological rectangle with two opposite ends glued to the edges of vertex disks (possibly to the same disk as each other). Embeddings A ribbon graph representation may be obtained from an embedding of a graph onto a surface (and a metric on the surface) by choosing a sufficiently small number \epsilon, and ...
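The following small sketch (not from the article) shows one plausible way to encode a ribbon graph in code under the signed-rotation-system view mentioned above: each vertex disk carries a cyclic order of half-edge attachments, and each edge rectangle records whether it is glued with a half twist. The class and field names are illustrative assumptions, not a standard API.

```python
# An illustrative encoding of a ribbon graph (hypothetical names): vertex disks
# carry a cyclic order of half-edge attachments, and each edge rectangle records
# whether it is glued with a half twist, which is what allows unoriented
# surfaces to be represented without self-intersection.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class RibbonGraph:
    # vertex disk -> half-edge ids in cyclic order along the disk boundary
    rotation: Dict[int, List[int]] = field(default_factory=dict)
    # edge rectangle -> (half-edge id, half-edge id, glued with a half twist?)
    edges: List[Tuple[int, int, bool]] = field(default_factory=list)

# One vertex disk with a single untwisted loop: the ribbon surface is an annulus.
annulus = RibbonGraph(rotation={0: [0, 1]}, edges=[(0, 1, False)])

# The same loop glued with a half twist: the ribbon surface is a Moebius band,
# the smallest example of a ribbon graph on an unoriented surface.
moebius = RibbonGraph(rotation={0: [0, 1]}, edges=[(0, 1, True)])
```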



Linear Time
In computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to be related by a constant factor. Since an algorithm's running time may vary among different inputs of the same size, one commonly considers the worst-case time complexity, which is the maximum amount of time required for inputs of a given size. Less common, and usually specified explicitly, is the average-case complexity, which is the average of the time taken on inputs of a given size (this makes sense because there are only a finite number of possible inputs of a given size). In both cases, the time complexity is generally expresse ...
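To make the operation-counting view concrete (an added sketch, not part of the article), the fragment below counts the comparisons made by linear search, whose worst case on a list of length n is n comparisons, i.e. linear time.

```python
# An added illustration: counting elementary operations for linear search.
# The worst case on a list of length n performs n comparisons, so the
# worst-case time complexity is O(n), i.e. linear time.

def linear_search(items, target):
    comparisons = 0
    for index, value in enumerate(items):
        comparisons += 1                 # one elementary comparison per step
        if value == target:
            return index, comparisons
    return -1, comparisons               # worst case: target absent

print(linear_search([3, 1, 4, 1, 5, 9], 5))  # found at index 4 after 5 comparisons
print(linear_search([3, 1, 4, 1, 5, 9], 7))  # worst case: (-1, 6), all 6 elements checked
```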


William Lawrence Kocay
William Lawrence Kocay is a Canadian professor in the department of computer science at St. Paul's College of the University of Manitoba and a graph theorist. He is known for his work in graph algorithms and the reconstruction conjecture and is affectionately referred to as "Wild Bill" by his students. Bill Kocay is a former managing editor (from January 1988 to May 1997) of ''Ars Combinatoria'', a Canadian journal of combinatorial mathematics, and is a founding fellow of the Institute of Combinatorics and its Applications. His research interests include algorithms for graphs, the development of mathematical software, the graph reconstruction problem, the graph isomorphism problem, projective geometry, Hamiltonian cycles, planarity, graph embedding algorithms, graphs on surfaces, and combinatorial designs.
Publications
* Some new methods in reconstruction theory, W. L. Kocay – Combinatorial mathematics, IX (Brisbane, 1981), LNM
* Some NP-complete problems for hypergraph degree seq ...


Wendy Myrvold
Wendy Joanne Myrvold is a Canadian mathematician and computer scientist known for her work on graph algorithms, planarity testing, and algorithms in enumerative combinatorics. She is a professor emeritus of computer science at the University of Victoria. Myrvold completed her Ph.D. in 1988 at the University of Waterloo. Her dissertation, ''The Ally and Adversary Reconstruction Problems'', was supervised by Charles Colbourn, a Canadian computer scientist and mathematician whose research concerns graph algorithms, combinatorial designs, and their applications.


John Reif
John H. Reif (born 1951) is an American academic and Professor of Computer Science at Duke University, who has made contributions to a large number of fields in computer science, ranging from algorithms and computational complexity theory to robotics and game theory. Biography John Reif received a B.S. (magna cum laude) from Tufts University in 1973, an M.S. from Harvard University in 1975, and a Ph.D. from Harvard University in 1977. From 1983 to 1986 he was Associate Professor at Harvard University, and since 1986 he has been Professor of Computer Science at Duke University. He currently holds the Hollis Edens Distinguished Professorship in the Trinity College of Arts and Sciences at Duke University. From 2011 to 2014 he was Distinguished Adjunct Professor in the Faculty of Computing and Information Technology (FCIT), King Abdulaziz University (KAU), Jeddah, Saudi Arabia. John Reif is President of Eagle Eye Research, Inc., which specializes in defense applications of DNA biotechnology. He has als ...




Gary Miller (Computer Scientist)
Gary Lee Miller is a professor of Computer Science at Carnegie Mellon University, Pittsburgh, United States. In 2003 he won the ACM Paris Kanellakis Award (with three others) for the Miller–Rabin primality test. He was made an ACM Fellow in 2002 and won the Knuth Prize in 2013. Early life and career Miller received his Ph.D. from the University of California, Berkeley in 1975 under the direction of Manuel Blum. Following periods on the faculty at the University of Waterloo, the University of Rochester, MIT and the University of Southern California, Miller moved to Carnegie Mellon University, where he is now Professor of Computer Science. In addition to his influential thesis on computational number theory and primality testing, Miller has worked on many central topics in computer science, including graph isomorphism, parallel algorithms, computational geometry and scientific computing. His most recent focus on scientific computing led to breakthrough results with student ...


ACM Symposium On Theory Of Computing
The Annual ACM Symposium on Theory of Computing (STOC) is an academic conference in the field of theoretical computer science. STOC has been organized annually since 1969, typically in May or June; the conference is sponsored by the Association for Computing Machinery special interest group SIGACT. The acceptance rate of STOC, averaged from 1970 to 2012, is 31%; the rate in 2012 was 29%. STOC and its annual IEEE counterpart FOCS (the Symposium on Foundations of Computer Science) are considered the two top conferences in theoretical computer science, considered broadly: they “are forums for some of the best work throughout theory of computing that promote breadth among theory of computing researchers and help to keep the community together.” Regular attendance at STOC and FOCS has been described as one of several defining characteristics of theoretical computer scientists. Awards The Gödel Prize for outstanding papers in theoretical computer science is presented alternately ...


Time Complexity
In computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm.


Polynomial Time
In computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm.




Fixed-parameter Tractability
In computer science, parameterized complexity is a branch of computational complexity theory that focuses on classifying computational problems according to their inherent difficulty with respect to ''multiple'' parameters of the input or output. The complexity of a problem is then measured as a function of those parameters. This allows the classification of NP-hard problems on a finer scale than in the classical setting, where the complexity of a problem is only measured as a function of the number of bits in the input. The first systematic work on parameterized complexity was done by Downey and Fellows. Under the assumption that P ≠ NP, there exist many natural problems that require superpolynomial running time when complexity is measured in terms of the input size only, but that are computable in a time that is polynomial in the input size and exponential or worse in a parameter k. Hence, if k is fixed at a small value and the growth of the function over k is relatively small, then such p ...
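As a concrete example of fixed-parameter tractability (an added sketch, not drawn from the article), the classic bounded-search-tree algorithm for Vertex Cover below decides whether a graph has a cover of size at most k in time roughly 2^k times a polynomial in the input size: exponential in the parameter k but polynomial in the graph.

```python
# An added sketch: the bounded-search-tree algorithm for Vertex Cover, a
# standard example of a fixed-parameter tractable problem.  The recursion
# branches on the two endpoints of an uncovered edge, so its depth is at most
# k and the running time is roughly 2^k times a polynomial in the input size.

def has_vertex_cover(edges, k):
    """edges: list of pairs (u, v); k: the parameter (cover size budget)."""
    if not edges:
        return True          # every edge is covered
    if k == 0:
        return False         # edges remain but the budget is exhausted
    u, v = edges[0]
    # Any vertex cover must contain u or v, so branch on both choices.
    return (has_vertex_cover([e for e in edges if u not in e], k - 1) or
            has_vertex_cover([e for e in edges if v not in e], k - 1))

# A 5-cycle needs 3 vertices to cover all of its edges.
cycle5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(has_vertex_cover(cycle5, 2))  # False
print(has_vertex_cover(cycle5, 3))  # True
```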



NP-complete
In computational complexity theory, a problem is NP-complete when:
# it is a problem for which the correctness of each solution can be verified quickly (namely, in polynomial time) and a brute-force search algorithm can find a solution by trying all possible solutions;
# the problem can be used to simulate every other problem for which we can verify quickly that a solution is correct.
In this sense, NP-complete problems are the hardest of the problems to which solutions can be verified quickly. If we could find solutions of some NP-complete problem quickly, we could quickly find the solutions of every other problem for which a given solution can be easily verified. The name "NP-complete" is short for "nondeterministic polynomial-time complete". In this name, "nondeterministic" refers to nondeterministic Turing machines, a way of mathematically formalizing the idea of a brute-force search algorithm. Polynomial time refers to an amount of time that is considered "quick" for a de ...
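To illustrate the two ingredients in this definition (an added sketch, not part of the article), the fragment below uses Boolean satisfiability, the canonical NP-complete problem: a proposed assignment can be verified in polynomial time, while the naive search tries all 2^n assignments. The encoding of formulas is an arbitrary illustrative choice.

```python
# An added illustration using Boolean satisfiability (SAT): verifying a
# proposed assignment is fast (polynomial time), while the naive brute-force
# search tries all 2^n assignments.

from itertools import product

# A CNF formula as a list of clauses; each literal is (variable index, is_positive).
# Hypothetical example: (x0 OR NOT x1) AND (x1 OR x2)
formula = [[(0, True), (1, False)], [(1, True), (2, True)]]

def verify(formula, assignment):
    """Polynomial-time check that the assignment satisfies every clause."""
    return all(any(assignment[var] == positive for var, positive in clause)
               for clause in formula)

def brute_force(formula, num_vars):
    """Exponential-time search over all 2^n truth assignments."""
    for bits in product([False, True], repeat=num_vars):
        if verify(formula, list(bits)):
            return list(bits)
    return None

print(brute_force(formula, 3))  # e.g. [False, False, True]
```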