Semidefinite Programming
Semidefinite programming (SDP) is a subfield of mathematical programming concerned with the optimization of a linear objective function (a function to be minimized or maximized) over the intersection of the cone of positive semidefinite matrices with an affine space, i.e., a spectrahedron. Semidefinite programming is a relatively new field of optimization which is of growing interest for several reasons. Many practical problems in operations research and combinatorial optimization can be modeled or approximated as semidefinite programming problems. In automatic control theory, SDPs are used in the context of linear matrix inequalities. SDPs are in fact a special case of cone programming and can be efficiently solved by interior-point methods. All linear programs and (convex) quadratic programs can be expressed as SDPs, and via hierarchies of SDPs the solutions of polynomial optimization problems can be approximated.
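As a rough illustration of such a program, the following sketch uses the cvxpy modeling library (an assumption of this example, not something the text above prescribes) to minimize tr(CX) over positive semidefinite X under a unit-trace constraint; for this particular constraint the optimal value equals the smallest eigenvalue of C.

    import cvxpy as cp
    import numpy as np

    # Minimal SDP sketch: minimize tr(C X) subject to tr(X) = 1 and X being
    # positive semidefinite. C is arbitrary illustrative data.
    n = 3
    rng = np.random.default_rng(0)
    C = rng.standard_normal((n, n)); C = (C + C.T) / 2  # symmetric data matrix
    X = cp.Variable((n, n), symmetric=True)
    prob = cp.Problem(cp.Minimize(cp.trace(C @ X)),
                      [cp.trace(X) == 1, X >> 0])       # X >> 0 means X is PSD
    prob.solve()
    print(prob.value, np.linalg.eigvalsh(C).min())      # the two values agree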
Mathematical Programming
Mathematical optimization (alternatively spelled ''optimisation'') or mathematical programming is the selection of a best element, with regard to some criterion, from some set of available alternatives. It is generally divided into two subfields: discrete optimization and continuous optimization. Optimization problems arise in all quantitative disciplines, from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries. More generally, an optimization problem consists of maximizing or minimizing a real function by systematically choosing input values from within an allowed set and computing the value of the function. The generalization of optimization theory and techniques to other formulations constitutes a large area of applied mathematics. Optimization problems can be divided into two categories, depending on whether the variables are continuous or discrete.
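As a small, hedged illustration of this "choose inputs from an allowed set, compute the function" view (the objective and bounds below are invented for the example, and scipy is an assumed dependency):

    from scipy.optimize import minimize

    # Minimize a real function of two variables over a box-shaped allowed set.
    objective = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2
    res = minimize(objective, x0=[0.0, 0.0], bounds=[(0, 2), (0, 2)])
    print(res.x, res.fun)  # best point found in the allowed set, and its value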
Dot Product
In mathematics, the dot product or scalar product is an algebraic operation that takes two equal-length sequences of numbers (usually coordinate vectors) and returns a single number. (The term ''scalar product'' means literally "product with a scalar as a result"; it is also used for other symmetric bilinear forms, for example in a pseudo-Euclidean space, and is not to be confused with scalar multiplication.) In Euclidean geometry, the dot product of the Cartesian coordinates of two vectors is widely used. It is often called the inner product (or rarely the projection product) of Euclidean space, even though it is not the only inner product that can be defined on Euclidean space (see ''Inner product space'' for more). It should not be confused with the cross product. Algebraically, the dot product is the sum of the products of the corresponding entries of the two sequences of numbers. Geometrically, it is the product of the Euclidean magnitudes of the two vectors and the cosine of the angle between them.
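A brief sketch contrasting the algebraic and geometric descriptions above (illustrative vectors; numpy is an assumed dependency):

    import numpy as np

    # Algebraically: the sum of the products of corresponding entries.
    a = np.array([1.0, 2.0, 3.0])
    b = np.array([4.0, -5.0, 6.0])
    algebraic = np.sum(a * b)          # 1*4 + 2*(-5) + 3*6 = 12
    # Geometrically: |a| |b| cos(theta), theta being the angle between a and b
    # (here cos(theta) is itself recovered from the dot product).
    cos_theta = algebraic / (np.linalg.norm(a) * np.linalg.norm(b))
    geometric = np.linalg.norm(a) * np.linalg.norm(b) * cos_theta
    assert np.isclose(algebraic, np.dot(a, b)) and np.isclose(algebraic, geometric)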
Strong Duality
Strong duality is a condition in mathematical optimization in which the primal optimal objective and the dual optimal objective are equal. By definition, strong duality holds if and only if the duality gap is equal to 0. This is opposed to weak duality (for a minimization primal problem, the primal optimal value is greater than or equal to the dual optimal value; in other words, the duality gap is greater than or equal to zero).
Sufficient conditions
Each of the following conditions is sufficient for strong duality to hold (the linear case is illustrated in the sketch after this list):
* F = F^{**}, where F is the perturbation function relating the primal and dual problems and F^{**} is the biconjugate of F (this follows by construction of the duality gap);
* F is convex and lower semi-continuous (equivalent to the first point by the Fenchel–Moreau theorem);
* the primal problem is a linear optimization problem;
* Slater's condition holds for a convex optimization problem.
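Since the linear case is among the sufficient conditions listed above, a small sketch (invented data; scipy assumed) can exhibit a zero duality gap by solving a primal linear program and its dual separately and comparing optimal values:

    from scipy.optimize import linprog

    # Primal: minimize 2*x1 + 3*x2  s.t.  x1 + x2 >= 4,  2*x1 + x2 >= 5,  x >= 0.
    # linprog uses "<=" constraints, so the ">=" rows are negated.
    primal = linprog(c=[2, 3], A_ub=[[-1, -1], [-2, -1]], b_ub=[-4, -5],
                     bounds=[(0, None)] * 2)
    # Dual: maximize 4*y1 + 5*y2  s.t.  y1 + 2*y2 <= 2,  y1 + y2 <= 3,  y >= 0,
    # posed as minimization of the negated objective.
    dual = linprog(c=[-4, -5], A_ub=[[1, 2], [1, 1]], b_ub=[2, 3],
                   bounds=[(0, None)] * 2)
    print(primal.fun, -dual.fun)  # both 8.0: the duality gap is zero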
Weak Duality
In applied mathematics, weak duality is a concept in optimization which states that the duality gap is always greater than or equal to 0. This means that for any minimization problem, called the ''primal problem'', the optimal value of the primal problem is always greater than or equal to the optimal value of the dual maximization problem. Alternatively, the optimal value of a primal maximization problem is always less than or equal to the optimal value of the dual minimization problem. In short: the objective value at any feasible solution of the dual problem bounds the optimal value of the primal problem (from below for a minimization primal, from above for a maximization primal). Weak duality is in contrast to strong duality, which states that the primal optimal objective and the dual optimal objective are ''equal''. Strong duality only holds in certain cases.
Uses
Many primal-dual approximation algorithms are based on the principle of weak duality.
Weak duality theorem
Consider a linear programming problem of the form: maximize \mathbf{c}^\mathsf{T} \mathbf{x} subject to A\mathbf{x} \le \mathbf{b} and \mathbf{x} \ge \mathbf{0}, with dual: minimize \mathbf{b}^\mathsf{T} \mathbf{y} subject to A^\mathsf{T} \mathbf{y} \ge \mathbf{c} and \mathbf{y} \ge \mathbf{0}. The weak duality theorem states that \mathbf{c}^\mathsf{T} \mathbf{x} \le \mathbf{b}^\mathsf{T} \mathbf{y} for every pair of feasible solutions \mathbf{x} and \mathbf{y}.
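A quick numeric sketch of this bound (invented data; numpy assumed): any feasible point of the dual bounds any feasible point of the primal, whether or not either point is optimal.

    import numpy as np

    # Primal: maximize c^T x  s.t.  A x <= b, x >= 0.
    # Dual:   minimize b^T y  s.t.  A^T y >= c, y >= 0.
    c = np.array([3.0, 2.0])
    A = np.array([[1.0, 1.0], [2.0, 1.0]])
    b = np.array([4.0, 6.0])
    x = np.array([1.0, 1.0])  # primal-feasible: A @ x = [2, 3] <= b
    y = np.array([2.0, 1.0])  # dual-feasible:   A.T @ y = [4, 3] >= c
    assert np.all(A @ x <= b) and np.all(x >= 0)
    assert np.all(A.T @ y >= c) and np.all(y >= 0)
    print(c @ x, "<=", b @ y)  # 5.0 <= 14.0, as the weak duality theorem asserts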
Dual Problem
In mathematical optimization theory, duality or the duality principle is the principle that optimization problems may be viewed from either of two perspectives, the primal problem or the dual problem. If the primal is a minimization problem then the dual is a maximization problem (and vice versa). The objective value at any feasible solution to the primal (minimization) problem is at least as large as the objective value at any feasible solution to the dual (maximization) problem. Therefore, the optimal value of the primal is an upper bound on the optimal value of the dual, and the optimal value of the dual is a lower bound on the optimal value of the primal. This fact is called weak duality. In general, the optimal values of the primal and dual problems need not be equal. Their difference is called the duality gap. For convex optimization problems, the duality gap is zero under a constraint qualification condition. This fact is called strong duality.
Dual problem
Usually the term "dual problem" refers to the ''Lagrangian dual problem'', but other dual problems are used as well.
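As a hedged sketch of the standard Lagrangian construction (the usual textbook definition; the excerpt above does not spell it out), the dual of a minimization problem with inequality constraints is built as follows:

    % Primal problem
    \min_{x} \; f(x) \quad \text{subject to } g_i(x) \le 0, \; i = 1, \dots, m
    % Lagrangian, with multipliers lambda_i >= 0
    L(x, \lambda) = f(x) + \sum_{i=1}^{m} \lambda_i \, g_i(x)
    % Dual function: a pointwise infimum of affine functions of lambda, hence concave
    g(\lambda) = \inf_{x} L(x, \lambda)
    % Lagrangian dual problem: a maximization, as stated above
    \max_{\lambda \ge 0} \; g(\lambda)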
Linear Program
Linear programming (LP), also called linear optimization, is a method to achieve the best outcome (such as maximum profit or lowest cost) in a mathematical model whose requirements and objective are represented by linear relationships. Linear programming is a special case of mathematical programming (also known as mathematical optimization). More formally, linear programming is a technique for the optimization of a linear objective function, subject to linear equality and linear inequality constraints. Its feasible region is a convex polytope, which is a set defined as the intersection of finitely many half-spaces, each of which is defined by a linear inequality. Its objective function is a real-valued affine (linear) function defined on this polytope. A linear programming algorithm finds a point in the polytope where this function has the largest (or smallest) value, if such a point exists. Linear programs are problems that can be expressed in standard form as:
\begin{align} & \text{maximize} && \mathbf{c}^\mathsf{T} \mathbf{x} \\ & \text{subject to} && A\mathbf{x} \le \mathbf{b} \\ & \text{and} && \mathbf{x} \ge \mathbf{0}. \end{align}
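The standard form above maps directly onto off-the-shelf solvers; the following sketch (invented data; scipy assumed, whose linprog routine minimizes, so the objective is negated) finds the maximizing vertex of a small polytope:

    from scipy.optimize import linprog

    # maximize 3*x1 + 5*x2  subject to
    #   x1 <= 4,  2*x2 <= 12,  3*x1 + 2*x2 <= 18,  x >= 0
    res = linprog(c=[-3, -5],                      # negated for maximization
                  A_ub=[[1, 0], [0, 2], [3, 2]],
                  b_ub=[4, 12, 18],
                  bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)                         # optimum at x = (2, 6), value 36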
Convex Cone
In linear algebra, a cone (sometimes called a linear cone to distinguish it from other sorts of cones) is a subset of a real vector space that is closed under positive scalar multiplication; that is, C is a cone if x\in C implies sx\in C for every s>0. This is a broad generalization of the standard cone in Euclidean space. A convex cone is a cone that is also closed under addition, or, equivalently, a subset of a vector space that is closed under linear combinations with positive coefficients. It follows that convex cones are convex sets. The definition of a convex cone makes sense in a vector space over any ordered field, although the field of real numbers is used most often.
Definition
A subset C of a vector space is a cone if x\in C implies sx\in C for every s>0. Here s>0 refers to (strict) positivity in the scalar field.
Competing definitions
Some other authors require [0,\infty)C\subset C or even 0\in C. Some require a cone to be convex and/or satisfy C\cap -C\subset\{0\}.
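One concrete convex cone, and the one relevant to semidefinite programming above, is the set of symmetric positive semidefinite matrices; the sketch below (numpy assumed, illustrative matrices) spot-checks closure under positive scaling and addition:

    import numpy as np

    def is_psd(M, tol=1e-10):
        # A symmetric matrix is PSD iff all its eigenvalues are non-negative.
        return np.all(np.linalg.eigvalsh(M) >= -tol)

    A = np.array([[2.0, 1.0], [1.0, 2.0]])
    B = np.array([[1.0, 0.0], [0.0, 3.0]])
    assert is_psd(A) and is_psd(B)
    assert is_psd(5.0 * A)   # closed under positive scalar multiplication: a cone
    assert is_psd(A + B)     # also closed under addition: a convex cone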
Cholesky Decomposition
In linear algebra, the Cholesky decomposition or Cholesky factorization is a decomposition of a Hermitian, positive-definite matrix into the product of a lower triangular matrix and its conjugate transpose, which is useful for efficient numerical solutions, e.g., Monte Carlo simulations. It was discovered by André-Louis Cholesky for real matrices, and posthumously published in 1924. When it is applicable, the Cholesky decomposition is roughly twice as efficient as the LU decomposition for solving systems of linear equations.
Statement
The Cholesky decomposition of a Hermitian positive-definite matrix \mathbf{A} is a decomposition of the form \mathbf{A} = \mathbf{L}\mathbf{L}^*, where \mathbf{L} is a lower triangular matrix with real and positive diagonal entries, and \mathbf{L}^* denotes the conjugate transpose of \mathbf{L}. Every Hermitian positive-definite matrix (and thus also every real-valued symmetric positive-definite matrix) has a unique Cholesky decomposition. The converse holds trivially: if \mathbf{A} can be written as \mathbf{L}\mathbf{L}^* for some invertible \mathbf{L}, lower triangular or otherwise, then \mathbf{A} is Hermitian and positive definite.
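A short sketch of the factorization and its typical use in solving linear systems (numpy and scipy assumed; the matrix is illustrative):

    import numpy as np
    from scipy.linalg import cho_factor, cho_solve

    # Factor a real symmetric positive-definite A as L @ L.T; for a complex
    # Hermitian A the factor satisfies A = L @ L.conj().T.
    A = np.array([[4.0, 2.0], [2.0, 3.0]])
    L = np.linalg.cholesky(A)            # lower triangular, positive diagonal
    assert np.allclose(L @ L.T, A)
    # Solve A x = b via forward/back substitution on the triangular factor.
    b = np.array([1.0, 2.0])
    x = cho_solve(cho_factor(A), b)
    assert np.allclose(A @ x, b)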
Scalar (mathematics)
A scalar is an element of a field which is used to define a ''vector space''. In linear algebra, real numbers or generally elements of a field are called scalars and relate to vectors in an associated vector space through the operation of scalar multiplication (defined in the vector space), in which a vector can be multiplied by a scalar in the defined way to produce another vector. Generally speaking, a vector space may be defined by using any field instead of real numbers (such as complex numbers). Then scalars of that vector space will be elements of the associated field (such as complex numbers). A scalar product operation – not to be confused with scalar multiplication – may be defined on a vector space, allowing two vectors to be multiplied in the defined way to produce a scalar. A vector space equipped with a scalar product is called an inner product space. A quantity described by multiple scalars, such as having both direction and magnitude, is called a vector.
Slack Variable
In an optimization problem, a slack variable is a variable that is added to an inequality constraint to transform it into an equality constraint, together with a non-negativity constraint on the slack variable. Slack variables are used in particular in linear programming, as in the sketch after this list. As with the other variables in the augmented constraints, the slack variable cannot take on negative values, as the simplex algorithm requires all variables to be non-negative.
* If a slack variable associated with a constraint is ''zero'' at a particular candidate solution, the constraint is binding there, as the constraint restricts the possible changes from that point.
* If a slack variable is ''positive'' at a particular candidate solution, the constraint is non-binding there, as the constraint does not restrict the possible changes from that point.
* If a slack variable is ''negative'' at some point, the point is infeasible (not allowed), as it does not satisfy the constraint.
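As a worked one-line example of the transformation described above (an invented constraint):

    % the inequality constraint ...
    x_1 + 2 x_2 \le 3
    % ... becomes an equality constraint plus a non-negative slack variable s:
    x_1 + 2 x_2 + s = 3, \qquad s \ge 0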
Trace (Linear Algebra)
In linear algebra, the trace of a square matrix \mathbf{A}, denoted \operatorname{tr}(\mathbf{A}), is the sum of the elements on its main diagonal, a_{11} + a_{22} + \dots + a_{nn}. It is only defined for a square matrix (n \times n). The trace of a matrix is the sum of its eigenvalues (counted with multiplicities). Also, \operatorname{tr}(\mathbf{AB}) = \operatorname{tr}(\mathbf{BA}) for any matrices \mathbf{A} and \mathbf{B} of the same size. Thus, similar matrices have the same trace. As a consequence, one can define the trace of a linear operator mapping a finite-dimensional vector space into itself, since all matrices describing such an operator with respect to a basis are similar. The trace is related to the derivative of the determinant (see Jacobi's formula).
Definition
The trace of an n \times n square matrix \mathbf{A} is defined as
\operatorname{tr}(\mathbf{A}) = \sum_{i=1}^n a_{ii} = a_{11} + a_{22} + \dots + a_{nn},
where a_{ii} denotes the entry on the i-th row and i-th column of \mathbf{A}. The entries of \mathbf{A} can be real numbers, complex numbers, or more generally elements of a field F. The trace is not defined for non-square matrices.
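A brief numeric sketch of the properties above (numpy assumed, illustrative matrices):

    import numpy as np

    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    B = np.array([[0.0, 1.0], [1.0, 0.0]])
    assert np.isclose(np.trace(A), A[0, 0] + A[1, 1])       # diagonal sum: 5.0
    assert np.isclose(np.trace(A),
                      np.sum(np.linalg.eigvals(A)).real)    # sum of eigenvalues
    assert np.isclose(np.trace(A @ B), np.trace(B @ A))     # tr(AB) = tr(BA)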
Inner Product Space
In mathematics, an inner product space (or, rarely, a Hausdorff pre-Hilbert space) is a real vector space or a complex vector space with an operation called an inner product. The inner product of two vectors in the space is a scalar, often denoted with angle brackets, as in \langle a, b \rangle. Inner products allow formal definitions of intuitive geometric notions, such as lengths, angles, and orthogonality (zero inner product) of vectors. Inner product spaces generalize Euclidean vector spaces, in which the inner product is the dot product or ''scalar product'' of Cartesian coordinates. Inner product spaces of infinite dimension are widely used in functional analysis. Inner product spaces over the field of complex numbers are sometimes referred to as unitary spaces. The first usage of the concept of a vector space with an inner product is due to Giuseppe Peano, in 1898. An inner product naturally induces an associated norm, \|x\| = \sqrt{\langle x, x \rangle}.
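A one-line check of the induced norm in the Euclidean case (numpy assumed):

    import numpy as np

    # On R^n the dot product is an inner product, and it induces the
    # Euclidean norm via ||x|| = sqrt(<x, x>).
    x = np.array([3.0, 4.0])
    assert np.isclose(np.sqrt(np.dot(x, x)), np.linalg.norm(x))  # both 5.0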