Salem–Spencer Set




Salem–Spencer Set
In mathematics, and in particular in arithmetic combinatorics, a Salem–Spencer set is a set of numbers no three of which form an arithmetic progression. Salem–Spencer sets are also called 3-AP-free sequences or progression-free sets. They have also been called non-averaging sets, but that term has also been used to denote a set of integers none of which can be obtained as the average of any subset of the other numbers. Salem–Spencer sets are named after Raphaël Salem and Donald C. Spencer, who showed in 1942 that Salem–Spencer sets can have nearly-linear size. However, a later theorem of Klaus Roth shows that the size is always less than linear.

Examples
For k = 1, 2, \dots the smallest values of n such that the numbers from 1 to n have a k-element Salem–Spencer set are
1, 2, 4, 5, 9, 11, 13, 14, 20, 24, 26, 30, 32, 36, ...
For instance, among the numbers from 1 to 14, the eight numbers
1, 2, 4, 5, 10, 11, 13, 14
form the unique largest Salem–Spencer set. This example is shifted by adding one to the ele ...
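As a quick illustration of the definition, here is a small sketch of our own (in Python; the function name is not from the article) that tests whether a set of integers is progression-free by checking that no pair of elements has its midpoint in the set, and verifies the eight-element example above.

```python
from itertools import combinations

def is_salem_spencer(numbers):
    """Return True if no three elements form an arithmetic progression,
    i.e. no distinct x < y < z in the set satisfy y - x == z - y."""
    s = set(numbers)
    for x, z in combinations(sorted(s), 2):
        # x < z, so their midpoint (when it is an integer) lies strictly between them
        if (x + z) % 2 == 0 and (x + z) // 2 in s:
            return False
    return True

# The eight-element example above: the largest progression-free subset of {1, ..., 14}.
print(is_salem_spencer({1, 2, 4, 5, 10, 11, 13, 14}))   # True
print(is_salem_spencer({1, 2, 3, 5}))                    # False: 1, 2, 3 is a progression
```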


Roth's Theorem
In mathematics, Roth's theorem is a fundamental result in Diophantine approximation to algebraic numbers. It is of a qualitative type, stating that algebraic numbers cannot have many rational number approximations that are 'very good'. Over half a century, the meaning of ''very good'' here was refined by a number of mathematicians, starting with Joseph Liouville in 1844 and continuing with the work of Axel Thue, Carl Ludwig Siegel, Freeman Dyson, and Klaus Roth.

Statement
Roth's theorem states that every irrational algebraic number \alpha has approximation exponent equal to 2. This means that, for every \varepsilon>0, the inequality
\left|\alpha - \frac{p}{q}\right| > \frac{C(\alpha,\varepsilon)}{q^{2+\varepsilon}}
holds for every rational number p/q (with q > 0), where C(\alpha,\varepsilon) is a positive number depending only on \varepsilon>0 and \alpha.

Discussion
The first result in this direction is Liouville's theorem on approximation of algebraic numbers, which gives an approximation exponent of ''d'' for an algebraic number α of degree ''d'' ≥ 2. This is already enough to demonstrate the existence of transcen ...
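To make the exponent concrete, here is a small numeric illustration of our own (not from the article): the continued-fraction convergents p/q of the algebraic number \sqrt{2} already achieve |\sqrt{2} - p/q| \approx c/q^2, and Roth's theorem says that for an algebraic number no essentially better exponent than 2 is possible.

```python
from fractions import Fraction
from math import sqrt

def convergents_sqrt2(count):
    """Convergents of sqrt(2) = [1; 2, 2, 2, ...] via the recurrence
    p_n = 2*p_{n-1} + p_{n-2},  q_n = 2*q_{n-1} + q_{n-2}."""
    p_prev, q_prev = 1, 0
    p, q = 1, 1                      # first convergent is 1/1
    for _ in range(count):
        yield p, q
        p_prev, p = p, 2 * p + p_prev
        q_prev, q = q, 2 * q + q_prev

alpha = sqrt(2)
for p, q in convergents_sqrt2(8):
    error = abs(alpha - Fraction(p, q))
    # q^2 * |sqrt(2) - p/q| stays near 1/(2*sqrt(2)) ~ 0.354: exponent 2 is attained,
    # and by Roth's theorem it cannot be improved beyond 2 + epsilon for algebraic numbers.
    print(f"{p}/{q}: {q * q * error:.4f}")
```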


Strong Exponential Time Hypothesis
In computational complexity theory, the exponential time hypothesis is an unproven computational hardness assumption that was formulated by Russell Impagliazzo and Ramamohan Paturi. It states that satisfiability of 3-CNF Boolean formulas cannot be solved more quickly than exponential time in the worst case. The exponential time hypothesis, if true, would imply that P ≠ NP, but it is a stronger statement. It implies that many computational problems are equivalent in complexity, in the sense that if one of them has a subexponential time algorithm then they all do, and that many known algorithms for these problems have optimal or near-optimal time complexity.

Definition
The k-SAT problem is a version of the Boolean satisfiability problem in which the input to the problem is a Boolean expression in conjunctive normal form (that is, an ''and'' of ''ors'' of variables and their negations) with at most k variables per clause. The goal is to determine whether this expression can be made to be true by some assignment of Boolean values to its ...
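For a sense of the exponential baseline the hypothesis refers to, here is a brute-force satisfiability check of our own (the DIMACS-style clause encoding is an assumption, not from the article): it tries all 2^n assignments, and the hypothesis asserts that no algorithm for 3-SAT runs in time 2^{o(n)}.

```python
from itertools import product

def brute_force_sat(num_vars, clauses):
    """Clauses are tuples of nonzero ints: literal v means variable v is true,
    -v means variable v is false.  Tries all 2^n truth assignments."""
    for bits in product([False, True], repeat=num_vars):
        assignment = {v + 1: bits[v] for v in range(num_vars)}
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment          # satisfying assignment found
    return None                        # formula is unsatisfiable

# (x1 or x2 or not x3) and (not x1 or x3) and (not x2)
print(brute_force_sat(3, [(1, 2, -3), (-1, 3), (-2,)]))
```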


Graph Spanner
This is a glossary of graph theory. Graph theory is the study of graphs, systems of nodes or vertices connected in pairs by lines or edges. ...




Non-interactive Zero-knowledge Proof
Non-interactive zero-knowledge proofs are zero-knowledge proofs in which the prover can convince the verifier of the validity of a statement without revealing any specific information beyond the validity of the transaction itself. Because no direct communication between the prover and verifier is needed, any intermediaries are effectively removed. The core trustless cryptographic "proofing" involves a hash function applied to a random number, constrained within mathematical parameters (primarily to modulate hashing difficulty) determined by the prover and verifier. With this cryptographic engine, the prover must demonstrate the validity of the transaction by solving the hash of a random number. Finally, proof of the answer is returned to the verifier, without revealing its value. There are many different methods for establishing a cryptographic proof of hash validity. Perhaps the most notable method, proof of work, involves computing the ...
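As a minimal sketch of the hash-puzzle idea described above (our own illustration of a proof-of-work-style puzzle, not an actual non-interactive zero-knowledge protocol; it assumes SHA-256 via Python's hashlib and a leading-zero-bits difficulty parameter): finding a qualifying nonce takes many hash evaluations, while checking one takes a single hash.

```python
import hashlib

def solve_puzzle(statement: bytes, difficulty_bits: int) -> int:
    """Find a nonce so that SHA-256(statement || nonce), read as a 256-bit
    integer, falls below 2**(256 - difficulty_bits).  The difficulty
    parameter modulates how much work the prover must perform."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(statement + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(statement: bytes, nonce: int, difficulty_bits: int) -> bool:
    """The verifier re-hashes once: verification is cheap, solving is expensive."""
    digest = hashlib.sha256(statement + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

stmt = b"example transaction data"
nonce = solve_puzzle(stmt, difficulty_bits=16)    # roughly 2**16 attempts on average
print(nonce, verify(stmt, nonce, 16))             # some nonce, True
```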



Matrix Multiplication
In mathematics, particularly in linear algebra, matrix multiplication is a binary operation that produces a matrix from two matrices. For matrix multiplication, the number of columns in the first matrix must be equal to the number of rows in the second matrix. The resulting matrix, known as the matrix product, has the number of rows of the first and the number of columns of the second matrix. The product of matrices A and B is denoted as AB. Matrix multiplication was first described by the French mathematician Jacques Philippe Marie Binet in 1812, to represent the composition of linear maps that are represented by matrices. Matrix multiplication is thus a basic tool of linear algebra, and as such has numerous applications in many areas of mathematics, as well as in applied mathematics, statistics, physics, economics, and engineering. Computing matrix products is a central operation in all computational applications of linear algebra.

Notation
This article will use the following notati ...
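The defining row-by-column rule, including the dimension-compatibility requirement mentioned above, fits in a few lines; this is a plain schoolbook sketch of our own, not code from the article.

```python
def matmul(A, B):
    """Schoolbook matrix product: entry (i, j) of AB is the dot product of
    row i of A with column j of B.  Requires columns(A) == rows(B)."""
    n, m = len(A), len(A[0])
    m2, p = len(B), len(B[0])
    if m != m2:
        raise ValueError("columns of A must equal rows of B")
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

A = [[1, 2, 3],          # 2 x 3
     [4, 5, 6]]
B = [[7, 8],             # 3 x 2
     [9, 10],
     [11, 12]]
print(matmul(A, B))      # 2 x 2 result: [[58, 64], [139, 154]]
```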



Coppersmith–Winograd Algorithm
In theoretical computer science, the computational complexity of matrix multiplication dictates how quickly the operation of matrix multiplication can be performed. Matrix multiplication algorithms are a central subroutine in theoretical and numerical algorithms for numerical linear algebra and optimization, so finding the right amount of time it should take is of major practical relevance. Directly applying the mathematical definition of matrix multiplication gives an algorithm that requires n^3 field operations to multiply two n \times n matrices over that field (\Theta(n^3) in big O notation). Surprisingly, algorithms exist that provide better running times than this straightforward "schoolbook algorithm". The first to be discovered was Strassen's algorithm, devised by Volker Strassen in 1969 and often referred to as "fast matrix multiplication". The optimal number of field operations needed to multiply two square matrices up to constant factors is still unknown. This is a major open question ...
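As an illustration of the fast matrix multiplication mentioned above, here is a minimal sketch of Strassen's recursion for square matrices whose size is a power of two (our own simplified coding; practical implementations switch to the schoolbook method below a size cutoff). Each level uses seven recursive products instead of eight, which yields the O(n^{\log_2 7}) \approx O(n^{2.81}) bound.

```python
def add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def sub(X, Y):
    return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def strassen(A, B):
    """Strassen's algorithm for n x n matrices with n a power of two."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    # split both matrices into quadrants
    A11 = [row[:h] for row in A[:h]]; A12 = [row[h:] for row in A[:h]]
    A21 = [row[:h] for row in A[h:]]; A22 = [row[h:] for row in A[h:]]
    B11 = [row[:h] for row in B[:h]]; B12 = [row[h:] for row in B[:h]]
    B21 = [row[:h] for row in B[h:]]; B22 = [row[h:] for row in B[h:]]
    # the seven recursive products (instead of eight)
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    # recombine into the quadrants of the product
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    return ([c1 + c2 for c1, c2 in zip(C11, C12)] +
            [c1 + c2 for c1, c2 in zip(C21, C22)])

print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # [[19, 22], [43, 50]]
```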



Locally Linear Graph
In graph theory, a locally linear graph is an undirected graph in which every edge belongs to exactly one triangle. Equivalently, for each vertex of the graph, its neighbors are each adjacent to exactly one other neighbor, so the neighbors can be paired up into an induced matching. Locally linear graphs have also been called locally matched graphs. Many constructions for locally linear graphs are known. Examples of locally linear graphs include the triangular cactus graphs, the line graphs of 3-regular triangle-free graphs, and the Cartesian products of smaller locally linear graphs. Certain Kneser graphs, and certain strongly regular graphs, are also locally linear. The question of how many edges locally linear graphs can have is one of the formulations of the Ruzsa–Szemerédi problem. Although dense graphs can have a number of edges proportional to the square of the number of vertices, locally linear graphs have a smaller number of edges, falling short of the square by at l ...
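A direct way to test the defining condition from the first sentence is to count, for every edge, the common neighbors of its endpoints: the graph is locally linear exactly when that count is always one. The following sketch is our own (adjacency sets in Python; the small test graphs are just illustrations).

```python
from collections import defaultdict
from itertools import combinations

def is_locally_linear(edges):
    """True when every edge lies in exactly one triangle, i.e. the endpoints
    of every edge have exactly one common neighbor."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return all(len(adj[u] & adj[v]) == 1 for u, v in edges)

triangle = [(0, 1), (1, 2), (0, 2)]              # K3: every edge in one triangle
bowtie = triangle + [(0, 3), (0, 4), (3, 4)]     # two triangles sharing vertex 0 (a triangular cactus)
k4 = list(combinations(range(4), 2))             # K4: every edge in two triangles

print(is_locally_linear(triangle), is_locally_linear(bowtie), is_locally_linear(k4))
# True True False
```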



Ruzsa–Szemerédi Problem
In combinatorial mathematics and extremal graph theory, the Ruzsa–Szemerédi problem or (6,3)-problem asks for the maximum number of edges in a graph in which every edge belongs to a unique triangle. Equivalently it asks for the maximum number of edges in a balanced bipartite graph whose edges can be partitioned into a linear number of induced matchings, or the maximum number of triples one can choose from n points so that every six points contain at most two triples. The problem is named after Imre Z. Ruzsa and Endre Szemerédi, who first proved that its answer is smaller than n^2 by a slowly-growing (but still unknown) factor.

Equivalence between formulations
The following questions all have answers that are asymptotically equivalent: they differ by, at most, constant factors from each other.
* What is the maximum possible number of edges in a graph with n vertices in which every edge belongs to a unique triangle? The graphs with this property are called locally linear graph ...
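The link to the Salem–Spencer sets at the top of this page can be made concrete with the standard construction: from a progression-free set S \subseteq \{1, \dots, n\} one builds a tripartite graph whose only triangles are the triples x, x+s, x+2s, so every edge belongs to a unique triangle. The sketch below is our own coding of that construction (vertex labels are tagged tuples to keep the three parts disjoint).

```python
from collections import defaultdict

def ruzsa_szemeredi_graph(n, progression_free_set):
    """For each x in 1..n and each s in the progression-free set, add the
    triangle ('A', x), ('B', x + s), ('C', x + 2s).  Because the set has no
    three-term arithmetic progression, these are the graph's only triangles."""
    edges = set()
    for x in range(1, n + 1):
        for s in progression_free_set:
            a, b, c = ('A', x), ('B', x + s), ('C', x + 2 * s)
            edges.update({(a, b), (b, c), (a, c)})
    return edges

S = {1, 2, 4, 5, 10, 11, 13, 14}            # the example set from the first entry
G = ruzsa_szemeredi_graph(14, S)
print(len(G))                               # 3 * 14 * 8 = 336 edges

# check the unique-triangle property: every edge's endpoints share exactly one common neighbor
adj = defaultdict(set)
for u, v in G:
    adj[u].add(v)
    adj[v].add(u)
print(all(len(adj[u] & adj[v]) == 1 for u, v in G))   # True
```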



Linear Programming
Linear programming (LP), also called linear optimization, is a method to achieve the best outcome (such as maximum profit or lowest cost) in a mathematical model whose requirements are represented by linear relationships. Linear programming is a special case of mathematical programming (also known as mathematical optimization). More formally, linear programming is a technique for the optimization of a linear objective function, subject to linear equality and linear inequality constraints. Its feasible region is a convex polytope, which is a set defined as the intersection of finitely many half-spaces, each of which is defined by a linear inequality. Its objective function is a real-valued affine (linear) function defined on this polyhedron. A linear programming algorithm finds a point in the polytope where ...
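A tiny numeric instance of the setup just described (our own example; it assumes SciPy's scipy.optimize.linprog is available): a linear objective is maximized over the polytope cut out by two linear inequalities and nonnegativity bounds.

```python
from scipy.optimize import linprog

# Maximize 3x + 2y  subject to  x + y <= 4,  x + 3y <= 6,  x >= 0, y >= 0.
# linprog minimizes, so the objective coefficients are negated.
result = linprog(c=[-3, -2],
                 A_ub=[[1, 1], [1, 3]],
                 b_ub=[4, 6],
                 bounds=[(0, None), (0, None)])

print(result.x)       # optimal point, a vertex of the feasible polytope: [4. 0.]
print(-result.fun)    # maximum objective value: 12.0
```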



Branch-and-bound
Branch and bound (BB, B&B, or BnB) is an algorithm design paradigm for discrete and combinatorial optimization problems, as well as mathematical optimization. A branch-and-bound algorithm consists of a systematic enumeration of candidate solutions by means of state space search: the set of candidate solutions is thought of as forming a rooted tree with the full set at the root. The algorithm explores ''branches'' of this tree, which represent subsets of the solution set. Before enumerating the candidate solutions of a branch, the branch is checked against upper and lower estimated ''bounds'' on the optimal solution, and is discarded if it cannot produce a better solution than the best one found so far by the algorithm. The algorithm depends on efficient estimation of the lower and upper bounds of regions/branches of the search space. If no bounds are available, the algorithm degenerates to an exhaustive search. The method was first proposed by Ailsa Land and Alison Doig whilst ...
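As a compact illustration of the paradigm (our own example, not from the article), the sketch below applies branch and bound to the 0/1 knapsack problem: branching chooses to take or skip each item, and a fractional-relaxation upper bound is used to discard branches that cannot beat the best solution found so far.

```python
def knapsack_branch_and_bound(values, weights, capacity):
    """Maximize the total value packed without exceeding capacity."""
    # consider items in decreasing value density so the bound is tight
    order = sorted(range(len(values)), key=lambda i: values[i] / weights[i],
                   reverse=True)
    values = [values[i] for i in order]
    weights = [weights[i] for i in order]
    n, best = len(values), 0

    def bound(i, weight, value):
        # optimistic estimate: fill the remaining capacity greedily,
        # allowing a fraction of the first item that does not fit
        room, estimate = capacity - weight, value
        while i < n and weights[i] <= room:
            room -= weights[i]
            estimate += values[i]
            i += 1
        return estimate + (values[i] * room / weights[i] if i < n else 0)

    def explore(i, weight, value):
        nonlocal best
        if weight > capacity:
            return                                   # infeasible branch: discard
        best = max(best, value)
        if i == n or bound(i, weight, value) <= best:
            return                                   # bound cannot beat incumbent: prune
        explore(i + 1, weight + weights[i], value + values[i])   # take item i
        explore(i + 1, weight, value)                            # skip item i

    explore(0, 0, 0)
    return best

print(knapsack_branch_and_bound([60, 100, 120], [10, 20, 30], 50))   # 220
```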


Leo Moser
Leo Moser (11 April 1921, Vienna – 9 February 1970, Edmonton) was an Austrian-Canadian mathematician, best known for his polygon notation. A native of Vienna, Leo Moser immigrated with his parents to Canada at the age of three. He received his Bachelor of Science degree from the University of Manitoba in 1943, and a Master of Science from the University of Toronto in 1945. After two years of teaching he went to the University of North Carolina to complete a PhD, supervised by Alfred Brauer. There, in 1950, he began suffering recurrent heart problems. He took a position at Texas Technical College for one year, and joined the faculty of the University of Alberta in 1951, where he remained until his death at the age of 48. In 1966, Moser posed the question "What is the region of smallest area which will accommodate every planar arc of length one?".W. Moser, G. Bloind, V. Klee, C. Rousseau, J. Goodman, B. Monson, J. Wetzel, L. M. Kelly, G. Purdy, and J. Wilker, Fifth edition, ''Pr ...