Least Squares Adjustment
Least-squares adjustment is a model for the solution of an overdetermined system of equations based on the principle of least squares of observation residuals. It is used extensively in the disciplines of surveying, geodesy, and photogrammetry, which collectively form the field of geomatics.


Formulation

There are three forms of least squares adjustment: ''parametric'', ''conditional'', and ''combined'':
* In parametric adjustment, one can find an observation equation ''h(X) = Y'' relating observations ''Y'' explicitly in terms of parameters ''X'' (leading to the A-model below).
* In conditional adjustment, there exists a condition equation ''g(Y) = 0'' involving only observations ''Y'' (leading to the B-model below), with no parameters ''X'' at all.
* Finally, in a combined adjustment, both parameters ''X'' and observations ''Y'' are involved implicitly in a mixed-model equation ''f(X, Y) = 0''.

Clearly, parametric and conditional adjustments correspond to the more general combined case when ''f(X, Y) = h(X) - Y'' and ''f(X, Y) = g(Y)'', respectively. Yet the special cases warrant simpler solutions, as detailed below. Often in the literature, ''Y'' may be denoted ''L''.
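The three functional models can be made concrete with a small illustration. In the following Python sketch (with hypothetical measured values), the three angles of a plane triangle are observed; the names `h`, `g`, and `f` mirror the article's notation, and the specific numbers are invented for illustration only:

```python
import numpy as np

# Hypothetical measurements: the three angles of a plane triangle,
# in degrees, which must sum to exactly 180.
Y = np.array([60.02, 59.97, 60.04])

# Parametric form h(X) = Y: pick two angles as parameters X = (x1, x2);
# the model predicts all three observations.
def h(X):
    return np.array([X[0], X[1], 180.0 - X[0] - X[1]])

# Conditional form g(Y) = 0: a single condition on observations alone.
def g(Y):
    return np.array([Y.sum() - 180.0])

# Combined form f(X, Y) = 0 contains the parametric case as f = h(X) - Y.
def f(X, Y):
    return h(X) - Y

X0 = np.array([60.0, 60.0])   # approximate parameter values
print(g(Y))                   # misclosure of the condition equation
print(f(X0, Y))               # misclosure of the combined model at X0
```

The conditional model needs no parameters at all, whereas the parametric model predicts every observation from the chosen parameters; both are special cases of `f`.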


Solution

The equalities above only hold for the estimated parameters \hat{X} and observations \hat{Y}, thus f\left(\hat{X},\hat{Y}\right)=0. In contrast, measured observations \tilde{Y} and approximate parameters \tilde{X} produce a nonzero ''misclosure'':
:\tilde{w} = f\left(\tilde{X},\tilde{Y}\right).
One can proceed to Taylor series expansion of the equations, which results in the Jacobians or design matrices: the first one,
:A = \partial f/\partial X;
and the second one,
:B = \partial f/\partial Y.
The linearized model then reads:
:\tilde{w} + A\hat{x} + B\hat{y} = 0,
where \hat{x}=\hat{X}-\tilde{X} are estimated ''parameter corrections'' to the ''a priori'' values, and \hat{y}=\hat{Y}-\tilde{Y} are post-fit ''observation residuals''.

In the parametric adjustment, the second design matrix is an identity, ''B = -I'', and the misclosure vector can be interpreted as the pre-fit residuals, \tilde{y}=\tilde{w}=h(\tilde{X})-\tilde{Y}, so the system simplifies to:
:A\hat{x} = \hat{y} - \tilde{y},
which is in the form of ordinary least squares. In the conditional adjustment, the first design matrix is null, ''A = 0''. For the more general cases, Lagrange multipliers are introduced to relate the two Jacobian matrices and transform the constrained least squares problem into an unconstrained one (albeit a larger one). In any case, their manipulation leads to the \hat{X} and \hat{Y} vectors as well as the respective ''a posteriori'' covariance matrices of the parameters and observations.
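The parametric case can be sketched numerically. In the following Python fragment (illustrative only, with hypothetical observations of a linear model h(X) = AX, so a single Gauss–Newton step solves the problem exactly), the parameter corrections, post-fit residuals, and an a posteriori covariance estimate are computed as described above:

```python
import numpy as np

# Hypothetical parametric adjustment: h(X) = A @ X is linear, so one
# Gauss-Newton step from the approximate values solves the problem.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])                 # design matrix, dh/dX
Y_tilde = np.array([2.1, 2.9, 4.2, 4.8])   # measured observations
X_tilde = np.zeros(2)                      # approximate parameters

y_pre = A @ X_tilde - Y_tilde              # pre-fit residuals (misclosure)
x_hat = np.linalg.solve(A.T @ A, -A.T @ y_pre)  # parameter corrections
X_hat = X_tilde + x_hat                    # adjusted parameters
y_hat = A @ X_hat - Y_tilde                # post-fit observation residuals

dof = A.shape[0] - A.shape[1]              # degrees of freedom (n - u)
s0_sq = (y_hat @ y_hat) / dof              # a posteriori variance factor
Cx = s0_sq * np.linalg.inv(A.T @ A)        # parameter covariance matrix
```

With equal weights this reduces to ordinary least squares; a weighted adjustment would insert the observation weight matrix between the factors of the normal equations.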


Computation

Given the matrices and vectors above, their solution is found via standard least-squares methods, e.g., forming the normal matrix and applying Cholesky decomposition, applying the QR factorization directly to the Jacobian matrix, or using iterative methods for very large systems.
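The first two routes can be compared directly. The following Python sketch (with the same kind of hypothetical design matrix as above) solves one least-squares problem both ways; the QR route avoids forming the normal matrix, whose condition number is the square of that of the Jacobian:

```python
import numpy as np

# Illustrative data: same least-squares problem solved two ways.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
b = np.array([2.1, 2.9, 4.2, 4.8])

# Route 1: normal matrix N = A^T A, Cholesky N = L L^T, two triangular solves.
N = A.T @ A
L = np.linalg.cholesky(N)
z = np.linalg.solve(L, A.T @ b)       # forward substitution
x_chol = np.linalg.solve(L.T, z)      # back substitution

# Route 2: QR factorization applied directly to A; solve R x = Q^T b.
# Better conditioned, since cond(A^T A) = cond(A)^2.
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)
```

Both routes yield the same solution for well-conditioned problems; QR (or iterative solvers for very large sparse systems) is preferred when the normal matrix is ill-conditioned.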


Worked-out examples
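As a minimal worked illustration (with hypothetical measurement values), consider the classic triangle-closure problem in the conditional (B-model) form: the three measured angles of a plane triangle must satisfy g(Y) = y1 + y2 + y3 - 180 = 0, and the misclosure is distributed via correlates (Lagrange multipliers):

```python
import numpy as np

# Hypothetical conditional adjustment: three measured triangle angles
# (degrees) with a closure misclosure of +0.03 degrees.
Y_tilde = np.array([60.02, 59.97, 60.04])   # measured angles

B = np.array([[1.0, 1.0, 1.0]])             # design matrix dg/dY
w = np.array([Y_tilde.sum() - 180.0])       # misclosure vector

# Correlates k solve (B B^T) k = -w; for equal weights the residuals
# are then y_hat = B^T k, spreading the misclosure evenly.
k = np.linalg.solve(B @ B.T, -w)
y_hat = B.T @ k                             # observation residuals
Y_hat = Y_tilde + y_hat                     # adjusted observations
```

With equal weights each angle receives a correction of -0.01 degrees, and the adjusted angles sum to exactly 180; unequal weights would distribute the misclosure in proportion to the observations' variances.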


Applications

* Leveling, traverse, and control networks
* Bundle adjustment
* Triangulation, trilateration, and triangulateration
* GPS/GNSS positioning
* Helmert transformation


Related concepts

* Parametric adjustment is similar to most of regression analysis and coincides with the Gauss–Markov model.
* Combined adjustment, also known as the Gauss–Helmert model (named after German mathematicians/geodesists C.F. Gauss and F.R. Helmert), is related to errors-in-variables models and total least squares.
* The use of an ''a priori'' parameter covariance matrix is akin to Tikhonov regularization.


Extensions

If rank deficiency is encountered, it can often be rectified by the inclusion of additional equations imposing constraints on the parameters and/or observations, leading to constrained least squares.
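A common instance is a leveling network observed only through height differences, which determines heights up to a common constant. The Python sketch below (hypothetical numbers) shows the singular normal matrix being rectified by one constraint equation, solved via the bordered (KKT) normal-equation system:

```python
import numpy as np

# Hypothetical rank-deficient leveling network: measured height
# differences h2-h1, h3-h2, h3-h1 fix heights only up to a constant.
A = np.array([[-1.0,  1.0,  0.0],
              [ 0.0, -1.0,  1.0],
              [-1.0,  0.0,  1.0]])
b = np.array([1.02, 0.98, 2.01])

# rank(A) = 2 < 3, so the normal matrix A^T A is singular.
# Add a constraint C x = d (fix point 1 at height 0) and solve the
# bordered normal-equation system [[N, C^T], [C, 0]] [x; k] = [A^T b; d].
N = A.T @ A
C = np.array([[1.0, 0.0, 0.0]])
d = np.array([0.0])
K = np.block([[N, C.T],
              [C, np.zeros((1, 1))]])
rhs = np.concatenate([A.T @ b, d])
sol = np.linalg.solve(K, rhs)
x = sol[:3]                 # adjusted heights, with point 1 held fixed
```

The constraint supplies the missing datum; alternatively, a minimum-norm (pseudoinverse) solution or a "free network" adjustment achieves the same end.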

