Optimal Estimation
In applied statistics, optimal estimation is a regularized matrix inverse method based on Bayes' theorem. It is used very commonly in the geosciences, particularly for atmospheric sounding. A matrix inverse problem looks like this:

\mathbf{A} \vec{x} = \vec{y}

The essential concept is to transform the matrix \mathbf{A} into a conditional probability and the variables \vec{x} and \vec{y} into probability distributions by assuming Gaussian statistics and using empirically-determined covariance matrices.

Derivation

Typically, one expects the statistics of most measurements to be Gaussian. So, for example, for P(\vec{y} \mid \vec{x}) we can write:

P(\vec{y} \mid \vec{x}) = \frac{1}{(2\pi)^{n/2} |\mathbf{S}_y|^{1/2}} \exp\left[ -\frac{1}{2} (\mathbf{A}\vec{x} - \vec{y})^T \mathbf{S}_y^{-1} (\mathbf{A}\vec{x} - \vec{y}) \right]

where m and n are the numbers of elements in \vec{x} and \vec{y} respectively, \mathbf{A} is the matrix to be solved (the linear or linearised forward model) and \mathbf{S}_y is the covariance matrix of the vector \vec{y}. ...
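As a minimal sketch of how this is used in practice, the snippet below computes the maximum a posteriori solution of the linear-Gaussian problem above, combining the measurement covariance \mathbf{S}_y with a Gaussian prior. The prior mean x_a, prior covariance S_a, and the toy numbers are illustrative assumptions, not taken from the text.

```python
# Minimal sketch of a linear optimal-estimation (maximum a posteriori) retrieval,
# assuming Gaussian statistics for both the measurement y and a prior estimate x_a.
# The names A, S_y, x_a, S_a and the toy values are illustrative, not from the text.
import numpy as np

def optimal_estimate(A, y, S_y, x_a, S_a):
    """Return the MAP solution of A x = y with measurement covariance S_y
    and Gaussian prior N(x_a, S_a), plus the posterior covariance."""
    S_y_inv = np.linalg.inv(S_y)
    S_a_inv = np.linalg.inv(S_a)
    # Posterior covariance: (A^T S_y^-1 A + S_a^-1)^-1
    S_hat = np.linalg.inv(A.T @ S_y_inv @ A + S_a_inv)
    # MAP estimate: x_a + S_hat A^T S_y^-1 (y - A x_a)
    x_hat = x_a + S_hat @ A.T @ S_y_inv @ (y - A @ x_a)
    return x_hat, S_hat

# Toy example: a poorly conditioned 3x3 forward model regularized by the prior.
rng = np.random.default_rng(0)
A = np.array([[1.0, 0.9, 0.8],
              [0.9, 1.0, 0.9],
              [0.8, 0.9, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
y = A @ x_true + 0.05 * rng.standard_normal(3)

x_hat, S_hat = optimal_estimate(A, y,
                                S_y=0.05**2 * np.eye(3),  # measurement covariance
                                x_a=np.zeros(3),          # prior mean
                                S_a=10.0 * np.eye(3))     # prior covariance
print(x_hat)
```

In this sketch the prior covariance plays the role of the regularization: it keeps the inverse well conditioned even when \mathbf{A} is nearly singular.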



Regularization (mathematics)
In mathematics, statistics, finance, and computer science, particularly in machine learning and inverse problems, regularization is a process that makes the answer to a problem "simpler". It is often used to obtain results for ill-posed problems or to prevent overfitting. Although regularization procedures can be divided in many ways, the following delineation is particularly helpful:

* Explicit regularization is regularization whenever one explicitly adds a term to the optimization problem. These terms could be priors, penalties, or constraints. Explicit regularization is commonly employed with ill-posed optimization problems. The regularization term, or penalty, imposes a cost on the optimization function to make the optimal solution unique.
* Implicit regularization is all other forms of regularization. This includes, for example, early stopping, using a robust loss function, and discarding outliers. Implicit regularization is essentially ubiquitous in modern machine learning approaches ...
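As a small illustration of explicit regularization, the sketch below adds an L2 (Tikhonov/ridge) penalty term to an ordinary least-squares objective; the function name ridge_fit, the penalty weight lam, and the toy data are illustrative choices, not part of the text above.

```python
# Sketch of explicit regularization: ridge (L2 / Tikhonov) regression adds a
# penalty term lam * ||w||^2 to the least-squares objective, which makes the
# optimal solution unique even when X^T X is nearly singular.
# The names ridge_fit and lam, and the toy data, are illustrative assumptions.
import numpy as np

def ridge_fit(X, y, lam):
    """Minimise ||X w - y||^2 + lam * ||w||^2 in closed form."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

# Toy example: two nearly collinear features make plain least squares unstable.
X = np.array([[1.0, 1.001],
              [2.0, 2.002],
              [3.0, 2.999]])
y = np.array([1.0, 2.0, 3.0])

w_unregularized = ridge_fit(X, y, lam=0.0)   # ill-conditioned, large weights
w_regularized = ridge_fit(X, y, lam=1e-2)    # penalty keeps weights small
print(w_unregularized, w_regularized)
```

With lam = 0 the nearly collinear columns leave the problem close to ill-posed; the added penalty term makes the objective strictly convex, so the optimal solution is unique, as described above.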


