Non-linear Least Squares
Non-linear least squares is the form of least squares analysis used to fit a set of m observations with a model that is non-linear in n unknown parameters (m ≥ n). It is used in some forms of nonlinear regression. The basis of the method is to approximate the model by a linear one and to refine the parameters by successive iterations. There are many similarities to linear least squares, but also some significant differences. In economic theory, the non-linear least squares method is applied in (i) the probit regression, (ii) threshold regression, (iii) smooth regression, (iv) logistic link regression, (v) Box–Cox transformed regressors (m(x,\theta_1,\theta_2,\theta_3) = \theta_1 + \theta_2 x^{\theta_3}).
Theory
Consider a set of m data points, (x_1, y_1), (x_2, y_2), \dots, (x_m, y_m), and a curve (model function) \hat{y} = f(x, \boldsymbol \beta), that in addition to the variable x also depends on n parameters, \boldsymbol \beta = (\beta_1, \beta_2, \dots, \beta_n), with m\ge n. ...
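
As an illustration of the approach described above (approximating the model by a linear one and refining the parameters by successive iterations), here is a minimal Gauss–Newton sketch in Python; the exponential model, the data and the starting values are invented for the example and are not taken from the text:

    # Gauss-Newton sketch: fit y ~ f(x, beta) = beta[0]*exp(beta[1]*x),
    # a model that is non-linear in its two parameters.
    import numpy as np

    def f(x, beta):
        return beta[0] * np.exp(beta[1] * x)

    def jacobian(x, beta):
        # partial derivatives of the model with respect to beta[0] and beta[1]
        return np.column_stack([np.exp(beta[1] * x),
                                beta[0] * x * np.exp(beta[1] * x)])

    x = np.linspace(0.0, 1.0, 20)
    y = 2.0 * np.exp(1.5 * x) + 0.01 * np.random.default_rng(0).standard_normal(20)

    beta = np.array([1.0, 1.0])                        # initial guess
    for _ in range(10):                                # successive linearizations
        r = y - f(x, beta)                             # current residuals
        J = jacobian(x, beta)
        delta, *_ = np.linalg.lstsq(J, r, rcond=None)  # linear least-squares step
        beta = beta + delta
    print(beta)                                        # should be close to (2.0, 1.5)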

Least Squares
The method of least squares is a standard approach in regression analysis to approximate the solution of overdetermined systems (sets of equations in which there are more equations than unknowns) by minimizing the sum of the squares of the residuals (a residual being the difference between an observed value and the fitted value provided by a model) made in the results of each individual equation. The most important application is in data fitting. When the problem has substantial uncertainties in the independent variable (the x variable), then simple regression and least-squares methods have problems; in such cases, the methodology required for fitting errors-in-variables models may be considered instead of that for least squares. Least squares problems fall into two categories: linear or ordinary least squares and nonlinear least squares, depending on whether or not the residuals are linear in all unknowns. The linear least-squares problem occurs in statistical regress ...
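
A minimal numerical sketch of the idea (the coefficient matrix and right-hand side are arbitrary, chosen only so that there are more equations than unknowns):

    # Solve an overdetermined linear system A x ~= b in the least-squares
    # sense, i.e. minimize the sum of squared residuals ||b - A x||^2.
    import numpy as np

    A = np.array([[1.0, 1.0],
                  [1.0, 2.0],
                  [1.0, 3.0],
                  [1.0, 4.0]])   # 4 equations, 2 unknowns
    b = np.array([6.0, 5.0, 7.0, 10.0])

    x, residual_ss, rank, sv = np.linalg.lstsq(A, b, rcond=None)
    print(x)                     # fitted intercept and slope
    print(residual_ss)           # sum of squared residuals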

Ellipse
In mathematics, an ellipse is a plane curve surrounding two focal points, such that for all points on the curve, the sum of the two distances to the focal points is a constant. It generalizes a circle, which is the special type of ellipse in which the two focal points are the same. The elongation of an ellipse is measured by its eccentricity e, a number ranging from e = 0 (the limiting case of a circle) to e = 1 (the limiting case of infinite elongation, no longer an ellipse but a parabola). An ellipse has a simple algebraic solution for its area, but only approximations for its perimeter (also known as circumference), for which integration is required to obtain an exact solution. Analytically, the equation of a standard ellipse centered at the origin with width 2a and height 2b is: : \frac{x^2}{a^2}+\frac{y^2}{b^2} = 1 . Assuming a \ge b, the foci are (\pm c, 0) for c = \sqrt{a^2-b^2}. The standard parametric equation is: : (x,y) = (a\cos(t),b\sin(t)) \quad \text{for} \quad 0\leq t\leq 2\pi. Ellipses ...
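
A small worked example under the assumption a \ge b (the values a = 5 and b = 3 are arbitrary), computing the focal distance c, the eccentricity e, and checking that a point from the parametric form satisfies the implicit equation:

    import math

    a, b = 5.0, 3.0
    c = math.sqrt(a**2 - b**2)       # foci at (+/- c, 0); here c = 4
    e = c / a                        # eccentricity; here 0.8

    t = 1.0                          # any parameter value in [0, 2*pi]
    x, y = a * math.cos(t), b * math.sin(t)
    print(c, e, x**2 / a**2 + y**2 / b**2)   # the last value is 1.0 up to rounding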

Line Search
In optimization, the line search strategy is one of two basic iterative approaches to find a local minimum \mathbf{x}^* of an objective function f:\mathbb R^n\to\mathbb R. The other approach is trust region. The line search approach first finds a descent direction along which the objective function f will be reduced and then computes a step size that determines how far \mathbf{x} should move along that direction. The descent direction can be computed by various methods, such as gradient descent or a quasi-Newton method. The step size can be determined either exactly or inexactly.
Example use
Here is an example gradient method that uses a line search in step 4 (a Python sketch of this scheme follows below).
1. Set the iteration counter k=0, and make an initial guess \mathbf{x}_0 for the minimum.
2. Repeat:
3.     Compute a descent direction \mathbf{p}_k.
4.     Choose \alpha_k to 'loosely' minimize h(\alpha_k)=f(\mathbf{x}_k+\alpha_k\mathbf{p}_k) over \alpha_k\in\mathbb R_+.
...
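
A Python sketch of the scheme above, using the negative gradient as the descent direction and a backtracking rule to choose the step size "loosely"; the objective function, the constants and the fixed iteration count are invented for illustration:

    import numpy as np

    def f(x):
        return 0.5 * x @ x + np.sin(x[0])      # example objective

    def grad(x):
        return x + np.array([np.cos(x[0]), 0.0])

    x = np.array([2.0, -3.0])                  # initial guess x_0
    for k in range(50):
        p = -grad(x)                           # descent direction p_k
        alpha = 1.0
        # backtracking line search: shrink alpha until f decreases enough
        while f(x + alpha * p) > f(x) - 1e-4 * alpha * (p @ p):
            alpha *= 0.5
        x = x + alpha * p
    print(x, f(x))                             # near a local minimum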

Cholesky Decomposition
In linear algebra, the Cholesky decomposition or Cholesky factorization is a decomposition of a Hermitian, positive-definite matrix into the product of a lower triangular matrix and its conjugate transpose, which is useful for efficient numerical solutions, e.g., Monte Carlo simulations. It was discovered by André-Louis Cholesky for real matrices, and posthumously published in 1924. When it is applicable, the Cholesky decomposition is roughly twice as efficient as the LU decomposition for solving systems of linear equations.
Statement
The Cholesky decomposition of a Hermitian positive-definite matrix A is a decomposition of the form : \mathbf{A} = \mathbf{L} \mathbf{L}^*, where L is a lower triangular matrix with real and positive diagonal entries, and L* denotes the conjugate transpose of L. Every Hermitian positive-definite matrix (and thus also every real-valued symmetric positive-definite matrix) has a unique Cholesky decomposition. The converse holds trivially: if A can ...
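
A short sketch of how the factorization is typically used; the matrix below is an arbitrary real symmetric positive-definite example:

    import numpy as np

    A = np.array([[4.0, 2.0, 2.0],
                  [2.0, 3.0, 1.0],
                  [2.0, 1.0, 3.0]])
    b = np.array([1.0, 2.0, 3.0])

    L = np.linalg.cholesky(A)        # lower triangular factor, A = L L^T
    y = np.linalg.solve(L, b)        # solve the triangular system L y = b
    x = np.linalg.solve(L.T, y)      # then L^T x = y
    print(np.allclose(A @ x, b))     # True: x solves A x = b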

Lineweaver–Burk Plot
In biochemistry, the Lineweaver–Burk plot (or double reciprocal plot) is a graphical representation of the Lineweaver–Burk equation of enzyme kinetics, described by Hans Lineweaver and Dean Burk in 1934. The Lineweaver–Burk plot for an inhibited enzyme can be compared with that of the uninhibited enzyme to determine how the inhibitor competes with the enzyme. The Lineweaver–Burk plot is correct when the enzyme kinetics obey ideal second-order kinetics, but non-linear regression is needed for systems that do not behave ideally. The double reciprocal plot distorts the error structure of the data, and is therefore not the most accurate tool for the determination of enzyme kinetic parameters. While the Lineweaver–Burk plot has historically been used for evaluation of the parameters, together with the alternative linear forms of the Michaelis–Menten equation such as the Hanes–Woolf plot or Eadie–Hofstee plot, all linearized forms of the Michaelis–Menten equation should be avoided ...
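
The transformation behind the plot can be sketched numerically as follows; the data are synthetic and noise-free, with assumed values V = 10 and K_M = 2, so the double-reciprocal fit recovers the parameters exactly:

    # Michaelis-Menten rates transformed to Lineweaver-Burk form:
    # 1/v = (Km/V) * (1/a) + 1/V, a straight line in 1/a.
    import numpy as np

    V_true, Km_true = 10.0, 2.0
    a = np.array([0.5, 1.0, 2.0, 4.0, 8.0])     # substrate concentrations
    v = V_true * a / (Km_true + a)              # noise-free rates

    slope, intercept = np.polyfit(1.0 / a, 1.0 / v, 1)
    V_fit = 1.0 / intercept
    Km_fit = slope * V_fit
    print(V_fit, Km_fit)                        # 10.0 and 2.0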

Michaelis–Menten Kinetics
In biochemistry, Michaelis–Menten kinetics, named after Leonor Michaelis and Maud Menten, is the simplest case of enzyme kinetics, applied to enzyme-catalysed reactions of one substrate and one product. It takes the form of an equation relating the reaction rate v (rate of formation of product P, with concentration p) to a, the concentration of the substrate A (using the symbols recommended by the IUBMB). Its formula is given by the Michaelis–Menten equation: : v = \frac{\mathrm{d}p}{\mathrm{d}t} = \frac{Va}{K_\mathrm{M} + a}. Here V, which is often written as V_\max, represents the limiting rate approached by the system at saturating substrate concentration for a given enzyme concentration. When the value of the Michaelis constant K_\mathrm{M} is numerically equal to the substrate concentration, the reaction rate is half of V. Biochemical reactions involving a single substrate are often assumed to follow Michaelis–Menten kinetics, without regard to the model's underlying assumptions. Only a small proportion of en ...
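
A sketch of fitting the equation directly by non-linear least squares (the approach usually recommended over the linearized plots); the data are synthetic, with assumed values V = 10 and K_M = 2 and an arbitrary noise level:

    import numpy as np
    from scipy.optimize import curve_fit

    def mm(a, V, Km):
        return V * a / (Km + a)                 # Michaelis-Menten equation

    rng = np.random.default_rng(1)
    a = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
    v = mm(a, 10.0, 2.0) + 0.1 * rng.standard_normal(a.size)

    (V_hat, Km_hat), cov = curve_fit(mm, a, v, p0=[5.0, 1.0])
    print(V_hat, Km_hat)                        # close to 10 and 2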

Log-normal Distribution
In probability theory, a log-normal (or lognormal) distribution is a continuous probability distribution of a random variable whose logarithm is normally distributed. Thus, if the random variable X is log-normally distributed, then Y = \ln(X) has a normal distribution. Equivalently, if Y has a normal distribution, then the exponential function of Y, X = \exp(Y), has a log-normal distribution. A random variable which is log-normally distributed takes only positive real values. It is a convenient and useful model for measurements in exact and engineering sciences, as well as medicine, economics and other topics (e.g., energies, concentrations, lengths, prices of financial instruments, and other metrics). The distribution is occasionally referred to as the Galton distribution or Galton's distribution, after Francis Galton. The log-normal distribution has also been associated with other names, such as McAlister, Gibrat and Cobb–Douglas. A log-normal process is the statistical realization of the mul ...
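
A tiny numerical sketch of the defining relationship (the values of mu and sigma are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma = 1.0, 0.5
    y = rng.normal(mu, sigma, size=100_000)     # Y is normally distributed
    x = np.exp(y)                               # X = exp(Y) is log-normal

    print(x.min() > 0)                          # True: only positive values
    print(np.log(x).mean(), np.log(x).std())    # close to mu and sigma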

Semi-log Plot
In science and engineering, a semi-log plot/graph or semi-logarithmic plot/graph has one axis on a logarithmic scale, the other on a linear scale. It is useful for data with exponential relationships, where one variable covers a large range of values, or to zoom in and show that what seems at first to be a straight line is in fact the slow start of a curve that is about to spike, with changes far larger than they initially appear. All equations of the form y=\lambda a^{\gamma x} form straight lines when plotted semi-logarithmically, since taking logs of both sides gives :\log_a y = \gamma x + \log_a \lambda. This is a line with slope \gamma and vertical intercept \log_a \lambda. The logarithmic scale is usually labeled in base 10; occasionally in base 2: :\log (y) = (\gamma \log (a)) x + \log (\lambda). A log–linear (sometimes log–lin) plot has the logarithmic scale on the y-axis, and a linear scale on the x-axis; a linear-log (some ...
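
A short numerical check of the identity above; the constants lambda, a and gamma are made up:

    import numpy as np

    lam, base, gamma = 3.0, 2.0, 0.7
    x = np.linspace(0.0, 5.0, 50)
    y = lam * base ** (gamma * x)

    # log10(y) against x is a straight line with slope gamma*log10(base)
    # and intercept log10(lam)
    slope, intercept = np.polyfit(x, np.log10(y), 1)
    print(slope, gamma * np.log10(base))        # equal up to rounding
    print(intercept, np.log10(lam))             # equal up to rounding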

Round-off Error
A roundoff error, also called rounding error, is the difference between the result produced by a given algorithm using exact arithmetic and the result produced by the same algorithm using finite-precision, rounded arithmetic. Rounding errors are due to inexactness in the representation of real numbers and the arithmetic operations done with them. This is a form of quantization error. When using approximation equations or algorithms, especially when using finitely many digits to represent real numbers (which in theory have infinitely many digits), one of the goals of numerical analysis is to estimate computation errors. Computation errors, also called numerical errors, include both truncation errors and roundoff errors. When a sequence of calculations with an input involving any roundoff error is made, errors may accumulate, sometimes dominating the calculation. In ill-conditioned problems, significant error may accumulate. In short, there are two major facets of roundoff errors ...
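
A small illustration of roundoff and its accumulation in ordinary double-precision arithmetic (a standard example, not taken from the text):

    # The decimal 0.1 has no exact binary representation, so each addition
    # is rounded, and the error accumulates over many operations.
    total = 0.0
    for _ in range(1000):
        total += 0.1
    print(total)                # about 99.9999999999986, not exactly 100
    print(0.1 + 0.2 == 0.3)     # False: both sides carry roundoff error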

Levenberg–Marquardt Algorithm
In mathematics and computing, the Levenberg–Marquardt algorithm (LMA or just LM), also known as the damped least-squares (DLS) method, is used to solve non-linear least squares problems. These minimization problems arise especially in least squares curve fitting. The LMA interpolates between the Gauss–Newton algorithm (GNA) and the method of gradient descent. The LMA is more robust than the GNA, which means that in many cases it finds a solution even if it starts very far off the final minimum. For well-behaved functions and reasonable starting parameters, the LMA tends to be slower than the GNA. LMA can also be viewed as Gauss–Newton using a trust region approach. The algorithm was first published in 1944 by Kenneth Levenberg, while working at the Frankford Army Arsenal. It was rediscovered in 1963 by Donald Marquardt, who worked as a statistician at DuPont, and independently by Girard, Wynne and Morrison. The LMA is used in many software applications for solving gen ...
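
A hedged usage sketch: SciPy's least_squares routine offers a Levenberg–Marquardt option (method="lm"); the residual function, the synthetic data and the starting point below are invented for illustration:

    import numpy as np
    from scipy.optimize import least_squares

    x = np.linspace(0.0, 3.0, 30)
    y = 4.0 * np.exp(-1.3 * x)                  # noise-free synthetic data

    def residuals(beta):
        return y - beta[0] * np.exp(beta[1] * x)

    fit = least_squares(residuals, x0=[2.0, -1.0], method="lm")
    print(fit.x)                                # approximately [4.0, -1.3]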

Cauchy Distribution
The Cauchy distribution, named after Augustin Cauchy, is a continuous probability distribution. It is also known, especially among physicists, as the Lorentz distribution (after Hendrik Lorentz), Cauchy–Lorentz distribution, Lorentz(ian) function, or Breit–Wigner distribution. The Cauchy distribution f(x; x_0,\gamma) is the distribution of the x-intercept of a ray issuing from (x_0,\gamma) with a uniformly distributed angle. It is also the distribution of the ratio of two independent normally distributed random variables with mean zero. The Cauchy distribution is often used in statistics as the canonical example of a "pathological" distribution since both its expected value and its variance are undefined (but see below). The Cauchy distribution does not have finite moments of order greater than or equal to one; only fractional absolute moments exist. The Cauchy distribution has no moment generating function. In mathematics, it is closely related to the P ...
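
An illustrative sketch of two properties mentioned above, generating standard Cauchy samples as the ratio of two independent mean-zero normal variables and observing that the sample mean does not settle down (the seed and sample sizes are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    z1 = rng.standard_normal(1_000_000)
    z2 = rng.standard_normal(1_000_000)
    c = z1 / z2                                 # standard Cauchy samples

    for n in (10**3, 10**4, 10**5, 10**6):
        print(n, c[:n].mean())                  # running means keep jumping around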