Barnes interpolation, named after Stanley L. Barnes, is the interpolation of unevenly spread data points from a set of measurements of an unknown function in two dimensions into an analytic function of two variables. An example of a situation where the Barnes scheme is important is in weather forecasting, where measurements are made wherever monitoring stations may be located, the positions of which are constrained by topography. Such interpolation is essential in data visualisation, e.g. in the construction of contour plots or other representations of analytic surfaces.


Introduction

Barnes proposed an objective scheme for the interpolation of two-dimensional data using a multi-pass scheme. This provided a method for interpolating sea-level pressures across the entire United States of America, producing a synoptic chart across the country using dispersed monitoring stations. Researchers have subsequently improved the Barnes method to reduce the number of parameters required for calculation of the interpolated result, increasing the objectivity of the method. The method constructs a grid whose size is determined by the distribution of the two-dimensional data points. Using this grid, the function values are calculated at each grid point. To do this the method utilises a series of Gaussian functions, given a distance weighting in order to determine the relative importance of any given measurement on the determination of the function values. Correction passes are then made to optimise the function values by accounting for the spectral response of the interpolated points.
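
As an illustration of the grid construction described above, the following minimal Python sketch (assuming NumPy; the function name make_grid, the 0.4 Δn spacing factor and the synthetic station positions are illustrative choices, not part of the published scheme) builds a regular analysis grid covering the extent of the scattered observations:

import numpy as np

def make_grid(xk, yk, dn, spacing_factor=0.4):
    """Regular grid covering the extent of the observations (xk, yk)."""
    dx = spacing_factor * dn                      # grid spacing tied to the data spacing
    xg = np.arange(xk.min(), xk.max() + dx, dx)   # grid axes spanning the data extent
    yg = np.arange(yk.min(), yk.max() + dx, dx)
    return np.meshgrid(xg, yg)

# Example: 25 scattered station positions with an assumed characteristic spacing.
rng = np.random.default_rng(0)
xk, yk = rng.uniform(0, 10, 25), rng.uniform(0, 10, 25)
Xg, Yg = make_grid(xk, yk, dn=1.5)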


Method

Here we describe the multi-pass Barnes interpolation method.


First pass

For a given grid point (i, j), the interpolated function g(x_i, y_j) is first approximated by the inverse weighting of the data points. To do this a weighting value is assigned to each Gaussian for each grid point, such that

: w_{ij} = \exp\left(-\frac{r_{ij}^2}{\kappa}\right),

where κ is a falloff parameter that controls the width of the Gaussian function and r_{ij} is the distance from the grid point to the data point in question. The falloff parameter is controlled by the characteristic data spacing Δn: for a fixed Gaussian cutoff radius at which w_{ij} = e^{-1},

: \kappa = 5.052\,\left(\frac{2\Delta n}{\pi}\right)^2.

The initial interpolation of the function from the measured values f_k(x,y) then becomes:

: g_0(x_i,y_j) = \frac{\sum_k w_{ij}\, f_k(x,y)}{\sum_k w_{ij}}.
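
The first pass can be sketched in Python as follows (assuming NumPy; the function and variable names, the synthetic observations and the value of Δn are illustrative assumptions rather than part of the method's definition):

import numpy as np

def barnes_first_pass(xg, yg, xk, yk, fk, kappa):
    """Gaussian-weighted average g0 of the observations fk at each grid point."""
    g0 = np.empty((len(yg), len(xg)))
    for j, y in enumerate(yg):
        for i, x in enumerate(xg):
            r2 = (xk - x) ** 2 + (yk - y) ** 2     # squared distances r_ij^2
            w = np.exp(-r2 / kappa)                # Gaussian weights w_ij
            g0[j, i] = np.sum(w * fk) / np.sum(w)  # normalised (inverse) weighting
    return g0

# Example: 20 scattered observations of a synthetic field on a 50 x 50 grid.
rng = np.random.default_rng(0)
xk, yk = rng.uniform(0, 1, 20), rng.uniform(0, 1, 20)
fk = np.sin(3 * xk) * np.cos(3 * yk)               # synthetic "measurements"
dn = 0.2                                           # assumed characteristic data spacing
kappa = 5.052 * (2 * dn / np.pi) ** 2
xg = yg = np.linspace(0, 1, 50)
g0 = barnes_first_pass(xg, yg, xk, yk, fk, kappa)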


Second pass

The correction for the next pass then utilises the difference between the observed field and the interpolated values at the measurement points to optimise the result:

: g_1(x_i,y_j) = g_0(x_i,y_j) + \sum\left( f(x,y) - g_0(x,y)\right) \exp\left(-\frac{r_{ij}^2}{\gamma\kappa}\right).

It is worth noting that successive correction steps can be used in order to achieve better agreement between the interpolated function and the measured values at the experimental points.
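
A two-pass scheme can be sketched in Python as below (assuming NumPy; all names and the synthetic data are illustrative). Note that, as is common in successive-correction implementations, the correction term in this sketch is normalised by the sum of the weights, which is a variant of the expression above rather than a literal transcription of it:

import numpy as np

def barnes_estimate(xt, yt, xk, yk, values, kappa):
    """Gaussian-weighted average of 'values' at each target point (xt, yt)."""
    out = np.empty(len(xt))
    for n, (x, y) in enumerate(zip(xt, yt)):
        w = np.exp(-((xk - x) ** 2 + (yk - y) ** 2) / kappa)
        out[n] = np.sum(w * values) / np.sum(w)
    return out

def barnes_two_pass(xt, yt, xk, yk, fk, kappa, gamma=0.3):
    g0_target = barnes_estimate(xt, yt, xk, yk, fk, kappa)  # first pass at the target points
    g0_obs = barnes_estimate(xk, yk, xk, yk, fk, kappa)     # first pass at the observation sites
    residual = fk - g0_obs                                  # f(x,y) - g0(x,y)
    # Correction pass with the sharper falloff gamma * kappa.
    return g0_target + barnes_estimate(xt, yt, xk, yk, residual, gamma * kappa)

# Example usage on a small synthetic data set.
rng = np.random.default_rng(1)
xk, yk = rng.uniform(0, 1, 30), rng.uniform(0, 1, 30)
fk = np.sin(3 * xk) * np.cos(3 * yk)
dn = 0.18                                                   # assumed data spacing
kappa = 5.052 * (2 * dn / np.pi) ** 2
Xg, Yg = np.meshgrid(np.linspace(0, 1, 40), np.linspace(0, 1, 40))
g1 = barnes_two_pass(Xg.ravel(), Yg.ravel(), xk, yk, fk, kappa).reshape(Xg.shape)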


Parameter selection

Although described as an objective method, there are many parameters which control the interpolated field. The choice of the data spacing Δn, the grid spacing Δx, and the smoothing parameter γ all influence the final result. Guidelines for the selection of these parameters have been suggested; however, the final values used are free to be chosen within these guidelines. The data spacing used in the analysis, Δn, may be chosen either by calculating the true inter-point spacing of the experimental data, or by the use of a complete spatial randomness assumption, depending upon the degree of clustering in the observed data. The smoothing parameter γ is constrained to be between 0.2 and 1.0. For reasons of interpolation integrity, Δx is argued to be constrained between 0.3 and 0.5 times the data spacing Δn.
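
One possible way of choosing these parameters is sketched below (assuming NumPy and SciPy; the use of the mean nearest-neighbour distance for Δn, the value of γ and the grid-spacing factor are illustrative choices made within the guidelines above, not values prescribed by the method):

import numpy as np
from scipy.spatial import cKDTree

def characteristic_spacing(xk, yk):
    """Mean nearest-neighbour distance of the observation sites."""
    pts = np.column_stack([xk, yk])
    dist, _ = cKDTree(pts).query(pts, k=2)   # k=2: each point plus its nearest neighbour
    return dist[:, 1].mean()

rng = np.random.default_rng(2)
xk, yk = rng.uniform(0, 100, 50), rng.uniform(0, 100, 50)   # station coordinates (km)

dn = characteristic_spacing(xk, yk)       # data spacing Delta n
kappa = 5.052 * (2 * dn / np.pi) ** 2     # falloff parameter
gamma = 0.3                               # smoothing parameter, within 0.2-1.0
dx = 0.4 * dn                             # grid spacing, within 0.3-0.5 of Delta n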

