Non-negative Least Squares
In mathematical optimization, the problem of non-negative least squares (NNLS) is a type of constrained least squares problem where the coefficients are not allowed to become negative. That is, given a matrix \mathbf{A} and a (column) vector of response variables \mathbf{y}, the goal is to find \mathbf{x}:

:\operatorname{arg\,min}\limits_{\mathbf{x}} \|\mathbf{A}\mathbf{x} - \mathbf{y}\|_2^2 subject to \mathbf{x} \ge \mathbf{0}.

Here \mathbf{x} \ge \mathbf{0} means that each component of the vector \mathbf{x} should be non-negative, and \|\cdot\|_2 denotes the Euclidean norm. Non-negative least squares problems turn up as subproblems in matrix decomposition, e.g. in algorithms for PARAFAC and non-negative matrix/tensor factorization. The latter can be considered a generalization of NNLS. Another generalization of NNLS is bounded-variable least squares (BVLS), with simultaneous upper and lower bounds \boldsymbol{\alpha} \le \mathbf{x} \le \boldsymbol{\beta}.

Quadratic programming version

The NNLS problem is equivalent to a quadratic programming problem

:\operatorname{arg\,min}\limits_{\mathbf{x} \ge \mathbf{0}} \left(\frac{1}{2} \mathbf{x}^\mathsf{T} \mathbf{Q}\mathbf{x} + \mathbf{c}^\mathsf{T} \mathbf{x}\right),

where \mathbf{Q} = \mathbf{A}^\mathsf{T}\mathbf{A} and \mathbf{c} = -\mathbf{A}^\mathsf{T}\mathbf{y} ...
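
A minimal sketch in Python of solving a small NNLS instance with SciPy's scipy.optimize.nnls solver (the matrix and right-hand side below are illustrative, not from the source):

    import numpy as np
    from scipy.optimize import nnls

    # Illustrative over-determined system with 3 unknowns.
    A = np.array([[1.0, 0.5, 0.0],
                  [0.5, 2.0, 1.0],
                  [0.0, 1.0, 1.5],
                  [1.0, 1.0, 1.0]])
    y = np.array([1.0, 2.0, -1.0, 0.5])

    # Minimize ||Ax - y||_2 subject to x >= 0.
    x, rnorm = nnls(A, y)
    print(x)       # every component is non-negative
    print(rnorm)   # Euclidean norm of the residual at the solution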



Mathematical Optimization
Mathematical optimization (alternatively spelled ''optimisation'') or mathematical programming is the selection of a best element, with regard to some criterion, from some set of available alternatives. It is generally divided into two subfields: discrete optimization and continuous optimization. Optimization problems arise in all quantitative disciplines from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries. In the more general approach, an optimization problem consists of maximizing or minimizing a real function by systematically choosing input values from within an allowed set and computing the value of the function. The generalization of optimization theory and techniques to other formulations constitutes a large area of applied mathematics. More generally, opti ...
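
As a minimal illustration of choosing input values from an allowed set to minimize a real function (the function and interval here are made up for the example), using SciPy:

    from scipy.optimize import minimize_scalar

    # Minimize f(x) = (x - 2)^2 + 1 over the allowed set [0, 5].
    res = minimize_scalar(lambda x: (x - 2.0) ** 2 + 1.0,
                          bounds=(0.0, 5.0), method="bounded")
    print(res.x, res.fun)   # minimizer near 2.0, minimum value near 1.0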


Pseudocode
In computer science, pseudocode is a plain language description of the steps in an algorithm or another system. Pseudocode often uses structural conventions of a normal programming language, but is intended for human reading rather than machine reading. It typically omits details that are essential for machine understanding of the algorithm, such as variable declarations and language-specific code. The programming language is augmented with natural language description details, where convenient, or with compact mathematical notation. The purpose of using pseudocode is that it is easier for people to understand than conventional programming language code, and that it is an efficient and environment-independent description of the key principles of an algorithm. It is commonly used in textbooks and scientific publications to document algorithms and in planning of software and other algorithms. No broad standard for pseudocode syntax exists, as a program in pseudocode is not an executa ...
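
To make the contrast concrete, here is an arbitrary example (not from the source): a linear search written first as pseudocode, then as the Python it might become:

    # Pseudocode:
    #   for each item in the list:
    #       if the item equals the target, return its position
    #   report "not found"
    #
    # The pseudocode omits machine-oriented details (types, the indexing
    # scheme, the sentinel for failure) that the Python below must make explicit.
    def linear_search(items, target):
        for i, item in enumerate(items):
            if item == target:
                return i
        return None

    print(linear_search([3, 1, 4, 1, 5], 4))   # -> 2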


Perron–Frobenius Theorem
In matrix theory, the Perron–Frobenius theorem, proved by Oskar Perron (1907) and Georg Frobenius (1912), asserts that a real square matrix with positive entries has a unique largest real eigenvalue and that the corresponding eigenvector can be chosen to have strictly positive components, and also asserts a similar statement for certain classes of nonnegative matrices. This theorem has important applications to probability theory (ergodicity of Markov chains); to the theory of dynamical systems (subshifts of finite type); to economics (Okishio's theorem, Hawkins–Simon condition); to demography (Leslie population age distribution model); to social networks (DeGroot learning process); to Internet search engines (PageRank); and even to ranking of football teams. The first to discuss the ordering of players within tournaments using Perron–Frobenius eigenvectors is Edmund Landau.

Statement

Let positive and non-negative respectively describe matrices with exclusively positive real numbers as elements and matrices ...
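
A quick numerical check of the statement with NumPy (the positive matrix is an arbitrary example):

    import numpy as np

    # A real square matrix with strictly positive entries.
    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])

    w, v = np.linalg.eig(A)
    k = np.argmax(w.real)            # index of the unique largest real eigenvalue
    vec = v[:, k].real
    vec *= np.sign(vec[0])           # eigenvectors are defined only up to sign

    print(w[k].real)   # the Perron root
    print(vec)         # can be chosen with strictly positive components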


M-matrix
In mathematics, especially linear algebra, an ''M''-matrix is a ''Z''-matrix with eigenvalues whose real parts are nonnegative. The set of non-singular ''M''-matrices is a subset of the class of ''P''-matrices, and also of the class of inverse-positive matrices (i.e. matrices with inverses belonging to the class of positive matrices). The name ''M''-matrix was seemingly originally chosen by Alexander Ostrowski in reference to Hermann Minkowski, who proved that if a Z-matrix has all of its row sums positive, then the determinant of that matrix is positive.

Characterizations

An M-matrix is commonly defined as follows:

Definition: Let ''A'' be an n \times n real Z-matrix. That is, A = (a_{ij}) where a_{ij} \le 0 for all i \ne j. Then matrix ''A'' is also an ''M-matrix'' if it can be expressed in the form A = sI - B, where B = (b_{ij}) with b_{ij} \ge 0, for all 1 \le i,j \le n, where s is at least as large as the maximum of the moduli of the eigenvalues of B, and I is an identity matrix. For the non-singularity of A, according to the Perron–Frobenius theorem, it must be ...
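
A small NumPy check of this characterization on a 2 x 2 example (the matrix is illustrative):

    import numpy as np

    # A Z-matrix: off-diagonal entries are non-positive.
    A = np.array([[ 3.0, -1.0],
                  [-0.5,  2.0]])

    # Express A = s*I - B with B >= 0 entrywise; s = max diagonal entry works.
    s = A.diagonal().max()
    B = s * np.eye(2) - A
    assert (B >= 0).all()

    # A is a non-singular M-matrix when s exceeds the spectral radius of B.
    rho = max(abs(np.linalg.eigvals(B)))
    print(s, rho, s > rho)   # prints 3.0, ~1.37, True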



Coordinate Descent
Coordinate descent is an optimization algorithm that successively minimizes along coordinate directions to find the minimum of a function. At each iteration, the algorithm determines a coordinate or coordinate block via a coordinate selection rule, then exactly or inexactly minimizes over the corresponding coordinate hyperplane while fixing all other coordinates or coordinate blocks. A line search along the coordinate direction can be performed at the current iterate to determine the appropriate step size. Coordinate descent is applicable in both differentiable and derivative-free contexts.

Description

Coordinate descent is based on the idea that the minimization of a multivariable function F(\mathbf{x}) can be achieved by minimizing it along one direction at a time, i.e., solving univariate (or at least much simpler) optimization problems in a loop. In the simplest case of ''cyclic coordinate descent'', one cyclically iterates through the directions, one at a time, minimizing the objec ...
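
A sketch of cyclic coordinate descent on a convex quadratic, where each one-dimensional minimization can be done exactly (the matrix Q and vector b are illustrative):

    import numpy as np

    # Minimize F(x) = 1/2 x^T Q x - b^T x; along coordinate i the exact
    # minimizer solves dF/dx_i = 0, i.e. x_i = (b_i - sum_{j!=i} Q_ij x_j) / Q_ii.
    Q = np.array([[3.0, 1.0],
                  [1.0, 2.0]])        # symmetric positive definite
    b = np.array([1.0, 1.0])

    x = np.zeros(2)
    for sweep in range(50):           # fixed number of cyclic sweeps
        for i in range(len(x)):
            x[i] = (b[i] - Q[i] @ x + Q[i, i] * x[i]) / Q[i, i]

    print(x)                          # approaches the solution of Q x = b
    print(np.linalg.solve(Q, b))      # reference: [0.2, 0.4]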



Gradient Descent
In mathematics, gradient descent (also often called steepest descent) is a first-order iterative optimization algorithm for finding a local minimum of a differentiable function. The idea is to take repeated steps in the opposite direction of the gradient (or approximate gradient) of the function at the current point, because this is the direction of steepest descent. Conversely, stepping in the direction of the gradient will lead to a local maximum of that function; the procedure is then known as gradient ascent. Gradient descent is generally attributed to Augustin-Louis Cauchy, who first suggested it in 1847. Jacques Hadamard independently proposed a similar method in 1907. Its convergence properties for non-linear optimization problems were first studied by Haskell Curry in 1944, with the method becoming increasingly well-studied and used in the following decades.

Description

Gradient descent is based on the observation that if the multi-variable function F(\mathbf{x}) is def ...
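
A minimal gradient descent loop in Python with a fixed step size (the function, step size, and iteration count are illustrative choices):

    import numpy as np

    # Minimize F(x) = (x0 - 1)^2 + 2*(x1 + 2)^2; its minimizer is (1, -2).
    def grad_F(x):
        return np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] + 2.0)])

    x = np.zeros(2)      # starting point
    eta = 0.1            # fixed step size; a line search could choose it instead
    for _ in range(200):
        x = x - eta * grad_F(x)   # step opposite the gradient

    print(x)             # close to (1, -2)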




Landweber Iteration
The Landweber iteration or Landweber algorithm is an algorithm to solve ill-posed linear inverse problems, and it has been extended to solve non-linear problems that involve constraints. The method was first proposed in the 1950s by Louis Landweber, and it can now be viewed as a special case of many other more general methods.

Basic algorithm

The original Landweber algorithm attempts to recover a signal ''x'' from (noisy) measurements ''y''. The linear version assumes that y = Ax for a linear operator ''A''. When the problem is in finite dimensions, ''A'' is just a matrix. When ''A'' is nonsingular, then an explicit solution is x = A^{-1} y. However, if ''A'' is ill-conditioned, the explicit solution is a poor choice since it is sensitive to any noise in the data ''y''. If ''A'' is singular, this explicit solution doesn't even exist. The Landweber algorithm is an attempt to regularize the problem ...
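
A sketch of the linear Landweber update x_{k+1} = x_k + w A^T (y - A x_k) on synthetic noiseless data (the operator and step size are illustrative; w must satisfy 0 < w < 2 / sigma_max(A)^2 for convergence):

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [3.0, 4.0],
                  [5.0, 6.0]])            # forward operator
    x_true = np.array([1.0, -1.0])
    y = A @ x_true                         # noiseless measurements for the demo

    w = 1.0 / np.linalg.norm(A, 2) ** 2    # safe step size (2-norm = sigma_max)
    x = np.zeros(2)
    for _ in range(2000):
        x = x + w * A.T @ (y - A @ x)      # Landweber update

    print(x)   # approaches x_true; with noisy y, stopping early regularizes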



SciPy
SciPy (pronounced "sigh pie") is a free and open-source Python library used for scientific computing and technical computing. SciPy contains modules for optimization, linear algebra, integration, interpolation, special functions, FFT, signal and image processing, ODE solvers and other tasks common in science and engineering. SciPy is also a family of conferences for users and developers of these tools: SciPy (in the United States), EuroSciPy (in Europe) and SciPy.in (in India). Enthought originated the SciPy conference in the United States and continues to sponsor many of the international conferences as well as host the SciPy website. The SciPy library is currently distributed under the BSD license, and its development is sponsored and supported by an open community of developers. It is also supported by NumFOCUS, a community foundation for supporting reproducible and accessible science.

Components

The SciPy package is at the core of Python's scientific computing capabilit ...
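
Two short calls showing the flavor of the integration and optimization modules (the integrand and equation are arbitrary examples):

    import numpy as np
    from scipy import integrate, optimize

    # Numerical quadrature: the integral of sin(x) over [0, pi] is exactly 2.
    val, err = integrate.quad(np.sin, 0.0, np.pi)
    print(val)

    # Root finding: solve cos(x) = x on the bracket [0, 2].
    root = optimize.brentq(lambda x: np.cos(x) - x, 0.0, 2.0)
    print(root)   # ~0.739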



MATLAB
MATLAB (an abbreviation of "MATrix LABoratory") is a proprietary multi-paradigm programming language and numeric computing environment developed by MathWorks. MATLAB allows matrix manipulations, plotting of functions and data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other languages. Although MATLAB is intended primarily for numeric computing, an optional toolbox uses the MuPAD symbolic engine allowing access to symbolic computing abilities. An additional package, Simulink, adds graphical multi-domain simulation and model-based design for dynamic and embedded systems. As of 2020, MATLAB has more than 4 million users worldwide. They come from various backgrounds of engineering, science, and economics.

History

Origins

MATLAB was invented by mathematician and computer programmer Cleve Moler. The idea for MATLAB was based on his 1960s PhD thesis. Moler became a math professor at the University of New Mexico and starte ...


Active-set Method
In mathematical optimization, the active-set method is an algorithm used to identify the active constraints in a set of inequality constraints. The active constraints are then expressed as equality constraints, thereby transforming an inequality-constrained problem into a simpler equality-constrained subproblem. An optimization problem is defined using an objective function to minimize or maximize, and a set of constraints

: g_1(x) \ge 0, \dots, g_k(x) \ge 0

that define the feasible region, that is, the set of all ''x'' to search for the optimal solution. Given a point x_0 in the feasible region, a constraint

: g_i(x) \ge 0

is called active at x_0 if g_i(x_0) = 0, and inactive at x_0 if g_i(x_0) > 0. Equality constraints are alwa ...
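
A small Python illustration of classifying constraints as active or inactive at a point (the constraint functions, the point, and the tolerance are made up for the example):

    # Inequality constraints g_i(x) >= 0 over x = (x0, x1).
    constraints = [
        lambda x: x[0],                 # g_1(x) = x0
        lambda x: x[1],                 # g_2(x) = x1
        lambda x: 1.0 - x[0] - x[1],    # g_3(x) = 1 - x0 - x1
    ]

    point = (0.0, 0.4)
    tol = 1e-12                         # numerical tolerance for "equals zero"
    for i, g in enumerate(constraints, start=1):
        status = "active" if abs(g(point)) <= tol else "inactive"
        print(f"g_{i} = {g(point):+.3f} -> {status}")

    # g_1 holds with equality at this point, so an active-set method would
    # treat x0 = 0 as an equality constraint in the current subproblem.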


Constrained Least Squares
In constrained least squares one solves a linear least squares problem with an additional constraint on the solution. That is, the unconstrained equation \mathbf{X} \boldsymbol{\beta} = \mathbf{y} must be fit as closely as possible (in the least squares sense) while ensuring that some other property of \boldsymbol{\beta} is maintained. There are often special-purpose algorithms for solving such problems efficiently. Some examples of constraints are given below:

* Equality constrained least squares: the elements of \boldsymbol{\beta} must exactly satisfy \mathbf{L} \boldsymbol{\beta} = \mathbf{c} (see Ordinary least squares).
* Regularized least squares: the elements of \boldsymbol{\beta} must satisfy \|\mathbf{L} \boldsymbol{\beta} - \mathbf{y}\| \le \alpha (choosing \alpha in proportion to the noise standard deviation of ''y'' prevents over-fitting).
* Non-negative least squares (NNLS): the vector \boldsymbol{\beta} must satisfy the vector inequality \boldsymbol{\beta} \ge \boldsymbol{0} defined componentwise—that is, each component mus ...
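
For the bound-constrained cases, SciPy's lsq_linear accepts box constraints directly; with bounds (0, inf) it reduces to the NNLS special case (the data below are illustrative):

    import numpy as np
    from scipy.optimize import lsq_linear

    A = np.array([[1.0, 1.0],
                  [1.0, 2.0],
                  [1.0, 3.0]])
    y = np.array([1.0, -0.5, 2.0])

    # Bounded-variable least squares: 0 <= beta_i < inf componentwise.
    res = lsq_linear(A, y, bounds=(0.0, np.inf))
    print(res.x)      # componentwise non-negative solution
    print(res.cost)   # 0.5 * ||A beta - y||^2 at the solution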