RBF Network

In the field of mathematical modeling, a radial basis function network is an artificial neural network that uses radial basis functions as activation functions. The output of the network is a linear combination of radial basis functions of the inputs and neuron parameters. Radial basis function networks have many uses, including function approximation, time series prediction, classification, and system control. They were first formulated in a 1988 paper by Broomhead and Lowe, both researchers at the Royal Signals and Radar Establishment.


Network architecture

Radial basis function (RBF) networks typically have three layers: an input layer, a hidden layer with a non-linear RBF activation function and a linear output layer. The input can be modeled as a vector of real numbers \mathbf{x} \in \mathbb{R}^n. The output of the network is then a scalar function of the input vector, \varphi : \mathbb{R}^n \to \mathbb{R}, and is given by

: \varphi(\mathbf{x}) = \sum_{i=1}^N a_i \rho\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big)

where N is the number of neurons in the hidden layer, \mathbf{c}_i is the center vector for neuron i, and a_i is the weight of neuron i in the linear output neuron. Functions that depend only on the distance from a center vector are radially symmetric about that vector, hence the name radial basis function. In the basic form, all inputs are connected to each hidden neuron. The norm is typically taken to be the Euclidean distance (although the Mahalanobis distance appears to perform better with pattern recognition) and the radial basis function is commonly taken to be Gaussian:

: \rho\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big) = \exp\left[-\beta_i \left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert^2\right].

The Gaussian basis functions are local to the center vector in the sense that

: \lim_{\left\Vert \mathbf{x} \right\Vert \to \infty} \rho\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big) = 0,

i.e. changing parameters of one neuron has only a small effect for input values that are far away from the center of that neuron.

Given certain mild conditions on the shape of the activation function, RBF networks are universal approximators on a compact subset of \mathbb{R}^n. This means that an RBF network with enough hidden neurons can approximate any continuous function on a closed, bounded set with arbitrary precision.

The parameters a_i, \mathbf{c}_i, and \beta_i are determined in a manner that optimizes the fit between \varphi and the data.
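A minimal sketch of this forward pass in Python/NumPy; the function and variable names are illustrative, not part of any standard library:

    import numpy as np

    def rbf_forward(x, centers, betas, weights):
        """Unnormalized RBF network: phi(x) = sum_i a_i * exp(-beta_i * ||x - c_i||^2)."""
        # Squared Euclidean distances from the input to every center vector c_i.
        sq_dists = np.sum((centers - x) ** 2, axis=1)
        # Gaussian activations of the hidden layer.
        hidden = np.exp(-betas * sq_dists)
        # Linear output neuron: weighted sum of the hidden activations.
        return weights @ hidden

    # Example: three hidden neurons on two-dimensional inputs.
    centers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # c_i
    betas = np.array([1.0, 1.0, 1.0])                           # beta_i
    weights = np.array([0.5, -0.2, 0.8])                        # a_i
    print(rbf_forward(np.array([0.2, 0.3]), centers, betas, weights))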


Normalized


Normalized architecture

In addition to the above ''unnormalized'' architecture, RBF networks can be ''normalized''. In this case the mapping is

: \varphi(\mathbf{x}) \ \stackrel{\mathrm{def}}{=}\ \frac{\sum_{i=1}^N a_i \rho\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big)}{\sum_{i=1}^N \rho\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big)} = \sum_{i=1}^N a_i u\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big)

where

: u\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big) \ \stackrel{\mathrm{def}}{=}\ \frac{\rho\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big)}{\sum_{j=1}^N \rho\big(\left\Vert \mathbf{x} - \mathbf{c}_j \right\Vert\big)}

is known as a ''normalized radial basis function''.
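A sketch of the normalized variant, reusing numpy as np from the sketch above (names illustrative):

    def normalized_rbf_forward(x, centers, betas, weights):
        """Normalized RBF network: phi(x) = sum_i a_i * u_i(x), with u_i = rho_i / sum_j rho_j."""
        sq_dists = np.sum((centers - x) ** 2, axis=1)
        rho = np.exp(-betas * sq_dists)
        u = rho / np.sum(rho)   # normalized radial basis functions u_i(x)
        return weights @ u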


Theoretical motivation for normalization

There is theoretical justification for this architecture in the case of stochastic data flow. Assume a stochastic kernel approximation for the joint probability density

: P\left(\mathbf{x} \land y\right) = \frac{1}{N} \sum_{i=1}^N \rho\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big) \, \sigma\big(\left\vert y - e_i \right\vert\big)

where the weights \mathbf{c}_i and e_i are exemplars from the data and we require the kernels to be normalized

: \int \rho\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big) \, d^n\mathbf{x} = 1

and

: \int \sigma\big(\left\vert y - e_i \right\vert\big) \, dy = 1.

The probability densities in the input and output spaces are

: P\left(\mathbf{x}\right) = \int P\left(\mathbf{x} \land y\right) \, dy = \frac{1}{N} \sum_{i=1}^N \rho\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big)

and

: P\left(y\right) = \int P\left(\mathbf{x} \land y\right) \, d^n\mathbf{x} = \frac{1}{N} \sum_{i=1}^N \sigma\big(\left\vert y - e_i \right\vert\big).

The expectation of y given an input \mathbf{x} is

: \varphi\left(\mathbf{x}\right) \ \stackrel{\mathrm{def}}{=}\ E\left(y \mid \mathbf{x}\right) = \int y \, P\left(y \mid \mathbf{x}\right) dy

where P\left(y \mid \mathbf{x}\right) is the conditional probability of y given \mathbf{x}. The conditional probability is related to the joint probability through Bayes' theorem

: P\left(y \mid \mathbf{x}\right) = \frac{P\left(\mathbf{x} \land y\right)}{P\left(\mathbf{x}\right)}

which yields

: \varphi\left(\mathbf{x}\right) = \int y \, \frac{P\left(\mathbf{x} \land y\right)}{P\left(\mathbf{x}\right)} \, dy.

This becomes

: \varphi\left(\mathbf{x}\right) = \frac{\sum_{i=1}^N e_i \rho\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big)}{\sum_{i=1}^N \rho\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big)} = \sum_{i=1}^N e_i u\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big)

when the integrations are performed.


Local linear models

It is sometimes convenient to expand the architecture to include local linear models. In that case the architectures become, to first order,

: \varphi\left(\mathbf{x}\right) = \sum_{i=1}^N \left(a_i + \mathbf{b}_i \cdot \left(\mathbf{x} - \mathbf{c}_i\right)\right) \rho\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big)

and

: \varphi\left(\mathbf{x}\right) = \sum_{i=1}^N \left(a_i + \mathbf{b}_i \cdot \left(\mathbf{x} - \mathbf{c}_i\right)\right) u\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big)

in the unnormalized and normalized cases, respectively. Here \mathbf{b}_i are weights to be determined. Higher order linear terms are also possible.

This result can be written

: \varphi\left(\mathbf{x}\right) = \sum_{i=1}^{2N} \sum_{j=1}^n e_{ij} v_{ij}\big(\mathbf{x} - \mathbf{c}_i\big)

where

: e_{ij} = \begin{cases} a_i, & \mbox{if } i \in [1,N] \\ b_{ij}, & \mbox{if } i \in [N+1,2N] \end{cases}

and

: v_{ij}\big(\mathbf{x} - \mathbf{c}_i\big) \ \stackrel{\mathrm{def}}{=}\ \begin{cases} \delta_{ij} \rho\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big), & \mbox{if } i \in [1,N] \\ \left(x_j - c_{ij}\right) \rho\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big), & \mbox{if } i \in [N+1,2N] \end{cases}

in the unnormalized case and

: v_{ij}\big(\mathbf{x} - \mathbf{c}_i\big) \ \stackrel{\mathrm{def}}{=}\ \begin{cases} \delta_{ij} u\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big), & \mbox{if } i \in [1,N] \\ \left(x_j - c_{ij}\right) u\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big), & \mbox{if } i \in [N+1,2N] \end{cases}

in the normalized case. Here \delta_{ij} is a Kronecker delta function defined as

: \delta_{ij} = \begin{cases} 1, & \mbox{if } i = j \\ 0, & \mbox{if } i \ne j \end{cases}.
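A sketch of the first-order local linear expansion in the unnormalized case, continuing the NumPy sketches above; B stores one slope vector \mathbf{b}_i per hidden neuron (names illustrative):

    def local_linear_rbf_forward(x, centers, betas, a, B):
        """phi(x) = sum_i (a_i + b_i . (x - c_i)) * rho_i(x), with B[i] = b_i."""
        diffs = x - centers                                # row i is x - c_i
        rho = np.exp(-betas * np.sum(diffs ** 2, axis=1))  # Gaussian activations rho_i(x)
        local = a + np.sum(B * diffs, axis=1)              # a_i + b_i . (x - c_i)
        return np.sum(local * rho)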


Training

RBF networks are typically trained from pairs of input and target values \mathbf{x}(t), y(t), t = 1, \dots, T, by a two-step algorithm.

In the first step, the center vectors \mathbf{c}_i of the RBF functions in the hidden layer are chosen. This step can be performed in several ways; centers can be randomly sampled from some set of examples, or they can be determined using k-means clustering. Note that this step is unsupervised.

The second step simply fits a linear model with coefficients w_i to the hidden layer's outputs with respect to some objective function. A common objective function, at least for regression/function estimation, is the least squares function:

: K(\mathbf{w}) \ \stackrel{\mathrm{def}}{=}\ \sum_{t=1}^T K_t(\mathbf{w})

where

: K_t(\mathbf{w}) \ \stackrel{\mathrm{def}}{=}\ \big[ y(t) - \varphi\big(\mathbf{x}(t), \mathbf{w}\big) \big]^2 .

We have explicitly included the dependence on the weights. Minimization of the least squares objective function by optimal choice of weights optimizes accuracy of fit.

There are occasions in which multiple objectives, such as smoothness as well as accuracy, must be optimized. In that case it is useful to optimize a regularized objective function such as

: H(\mathbf{w}) \ \stackrel{\mathrm{def}}{=}\ K(\mathbf{w}) + \lambda S(\mathbf{w}) \ \stackrel{\mathrm{def}}{=}\ \sum_{t=1}^T H_t(\mathbf{w})

where

: S(\mathbf{w}) \ \stackrel{\mathrm{def}}{=}\ \sum_{t=1}^T S_t(\mathbf{w})

and

: H_t(\mathbf{w}) \ \stackrel{\mathrm{def}}{=}\ K_t(\mathbf{w}) + \lambda S_t(\mathbf{w})

where optimization of S maximizes smoothness and \lambda is known as a regularization parameter.

A third optional backpropagation step can be performed to fine-tune all of the RBF net's parameters.
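A compact sketch of the two-step procedure in NumPy, here choosing centers by random sampling (k-means cluster means would be the other common choice) and fitting the linear weights by least squares; the width parameter beta and the helper name are assumptions for illustration:

    def fit_rbf(X, y, n_centers=10, beta=1.0, seed=0):
        """Two-step RBF training: (1) choose centers (unsupervised), (2) solve for linear weights."""
        rng = np.random.default_rng(seed)
        # Step 1: pick centers by sampling training inputs without replacement.
        centers = X[rng.choice(len(X), size=n_centers, replace=False)]
        # Step 2: hidden-layer design matrix G[t, i] = exp(-beta * ||x(t) - c_i||^2).
        sq_dists = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=2)
        G = np.exp(-beta * sq_dists)
        # Least-squares fit of the output weights minimizes K(w).
        weights, *_ = np.linalg.lstsq(G, y, rcond=None)
        return centers, weights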


Interpolation

RBF networks can be used to interpolate a function y: \mathbb{R}^n \to \mathbb{R} when the values of that function are known on a finite number of points: y(\mathbf{x}_i) = b_i, i = 1, \ldots, N. Taking the known points \mathbf{x}_i to be the centers of the radial basis functions and evaluating the values of the basis functions at the same points, g_{ji} = \rho(\left\Vert \mathbf{x}_j - \mathbf{x}_i \right\Vert), the weights can be solved from the equation

: \left[ \begin{matrix} g_{11} & g_{12} & \cdots & g_{1N} \\ g_{21} & g_{22} & \cdots & g_{2N} \\ \vdots & & \ddots & \vdots \\ g_{N1} & g_{N2} & \cdots & g_{NN} \end{matrix} \right] \left[ \begin{matrix} w_1 \\ w_2 \\ \vdots \\ w_N \end{matrix} \right] = \left[ \begin{matrix} b_1 \\ b_2 \\ \vdots \\ b_N \end{matrix} \right]

It can be shown that the interpolation matrix in the above equation is non-singular if the points \mathbf{x}_i are distinct, and thus the weights w can be solved by simple linear algebra:

: \mathbf{w} = \mathbf{G}^{-1} \mathbf{b}

where G = (g_{ji}).
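A sketch of exact interpolation with a Gaussian kernel, solving G w = b with a single linear solve (names and the value of beta are illustrative):

    def rbf_interpolate(points, values, beta=1.0):
        """Exact RBF interpolation: the data points themselves serve as centers."""
        # G[j, i] = rho(||x_j - x_i||) for a Gaussian radial basis function.
        sq_dists = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=2)
        G = np.exp(-beta * sq_dists)
        # G is non-singular for distinct points, so one solve recovers the weights w.
        return np.linalg.solve(G, values)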


Function approximation

If the purpose is not to perform strict interpolation but instead more general function approximation or classification, the optimization is somewhat more complex because there is no obvious choice for the centers. The training is typically done in two phases, first fixing the widths and centers and then the weights. This can be justified by considering the different nature of the non-linear hidden neurons versus the linear output neuron.


Training the basis function centers

Basis function centers can be randomly sampled among the input instances, obtained by the orthogonal least squares learning algorithm, or found by clustering the samples and choosing the cluster means as the centers. The RBF widths are usually all fixed to the same value, which is proportional to the maximum distance between the chosen centers.
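One common width heuristic ties a single shared width to the maximum distance between the chosen centers; the particular constant d_max / sqrt(2N) used below is an assumption, since the text only states proportionality:

    def shared_beta(centers):
        """Single shared RBF width derived from the maximum inter-center distance."""
        diffs = centers[:, None, :] - centers[None, :, :]
        d_max = np.sqrt(np.max(np.sum(diffs ** 2, axis=2)))
        sigma = d_max / np.sqrt(2 * len(centers))  # heuristic width (assumed constant)
        return 1.0 / (2.0 * sigma ** 2)            # beta for exp(-beta * ||x - c||^2)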


Pseudoinverse solution for the linear weights

After the centers c_i have been fixed, the weights that minimize the error at the output can be computed with a linear pseudoinverse solution:

: \mathbf{w} = \mathbf{G}^+ \mathbf{b},

where the entries of ''G'' are the values of the radial basis functions evaluated at the points x_i: g_{ji} = \rho(\left\Vert \mathbf{x}_j - \mathbf{c}_i \right\Vert).

The existence of this linear solution means that unlike multi-layer perceptron (MLP) networks, RBF networks have an explicit minimizer (when the centers are fixed).
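A sketch using NumPy's Moore–Penrose pseudoinverse; in practice np.linalg.lstsq gives the same minimum-norm least-squares solution with better numerical behavior:

    def solve_weights_pinv(X, centers, beta, y):
        """w = G^+ b, where G[j, i] = exp(-beta * ||x_j - c_i||^2)."""
        sq_dists = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=2)
        G = np.exp(-beta * sq_dists)
        return np.linalg.pinv(G) @ y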


Gradient descent training of the linear weights

Another possible training algorithm is gradient descent. In gradient descent training, the weights are adjusted at each time step by moving them in a direction opposite from the gradient of the objective function (thus allowing the minimum of the objective function to be found),

: \mathbf{w}(t+1) = \mathbf{w}(t) - \nu \frac{d}{d\mathbf{w}} H_t(\mathbf{w})

where \nu is a "learning parameter."

For the case of training the linear weights, a_i, the algorithm becomes

: a_i(t+1) = a_i(t) + \nu \big[ y(t) - \varphi\big(\mathbf{x}(t), \mathbf{w}\big) \big] \rho\big(\left\Vert \mathbf{x}(t) - \mathbf{c}_i \right\Vert\big)

in the unnormalized case and

: a_i(t+1) = a_i(t) + \nu \big[ y(t) - \varphi\big(\mathbf{x}(t), \mathbf{w}\big) \big] u\big(\left\Vert \mathbf{x}(t) - \mathbf{c}_i \right\Vert\big)

in the normalized case.

For local-linear architectures, gradient-descent training is

: e_{ij}(t+1) = e_{ij}(t) + \nu \big[ y(t) - \varphi\big(\mathbf{x}(t), \mathbf{w}\big) \big] v_{ij}\big(\mathbf{x}(t) - \mathbf{c}_i\big)
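A sketch of one online gradient-descent pass over the training pairs for the linear weights a_i in the unnormalized case (the values of nu and beta are illustrative):

    def gd_train_weights(X, y, centers, beta=5.0, nu=0.3, weights=None):
        """Stochastic gradient step: a_i <- a_i + nu * (y(t) - phi(x(t))) * rho_i(x(t))."""
        if weights is None:
            weights = np.zeros(len(centers))
        for x_t, y_t in zip(X, y):
            rho = np.exp(-beta * np.sum((centers - x_t) ** 2, axis=1))
            error = y_t - weights @ rho
            weights = weights + nu * error * rho
        return weights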


Projection operator training of the linear weights

For the case of training the linear weights, a_i and e_{ij}, the algorithm becomes

: a_i(t+1) = a_i(t) + \nu \big[ y(t) - \varphi\big(\mathbf{x}(t), \mathbf{w}\big) \big] \frac{\rho\big(\left\Vert \mathbf{x}(t) - \mathbf{c}_i \right\Vert\big)}{\sum_{i=1}^N \rho^2\big(\left\Vert \mathbf{x}(t) - \mathbf{c}_i \right\Vert\big)}

in the unnormalized case and

: a_i(t+1) = a_i(t) + \nu \big[ y(t) - \varphi\big(\mathbf{x}(t), \mathbf{w}\big) \big] \frac{u\big(\left\Vert \mathbf{x}(t) - \mathbf{c}_i \right\Vert\big)}{\sum_{i=1}^N u^2\big(\left\Vert \mathbf{x}(t) - \mathbf{c}_i \right\Vert\big)}

in the normalized case and

: e_{ij}(t+1) = e_{ij}(t) + \nu \big[ y(t) - \varphi\big(\mathbf{x}(t), \mathbf{w}\big) \big] \frac{v_{ij}\big(\mathbf{x}(t) - \mathbf{c}_i\big)}{\sum_{i=1}^{2N} \sum_{j=1}^n v_{ij}^2\big(\mathbf{x}(t) - \mathbf{c}_i\big)}

in the local-linear case.

For one basis function, projection operator training reduces to Newton's method.
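The projection-operator update differs from plain gradient descent only by the normalizing denominator; a sketch for the unnormalized architecture (the denominator follows the reconstruction above, which is itself an assumption about the garbled formula in the source):

    def projection_train_weights(X, y, centers, beta=5.0, nu=0.3, weights=None):
        """Projection update: a_i <- a_i + nu * (y(t) - phi(x(t))) * rho_i / sum_j rho_j^2."""
        if weights is None:
            weights = np.zeros(len(centers))
        for x_t, y_t in zip(X, y):
            rho = np.exp(-beta * np.sum((centers - x_t) ** 2, axis=1))
            error = y_t - weights @ rho
            weights = weights + nu * error * rho / np.sum(rho ** 2)
        return weights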


Examples


Logistic map

The basic properties of radial basis functions can be illustrated with a simple mathematical map, the logistic map, which maps the unit interval onto itself. It can be used to generate a convenient prototype data stream. The logistic map can be used to explore function approximation, time series prediction, and control theory. The map originated from the field of population dynamics and became the prototype for chaotic time series. The map, in the fully chaotic regime, is given by

: x(t+1) \ \stackrel{\mathrm{def}}{=}\ f\left[x(t)\right] = 4 x(t) \left[1 - x(t)\right]

where t is a time index. The value of x at time t+1 is a parabolic function of x at time t. This equation represents the underlying geometry of the chaotic time series generated by the logistic map.

Generation of the time series from this equation is the forward problem. The examples here illustrate the inverse problem: identification of the underlying dynamics, or fundamental equation, of the logistic map from exemplars of the time series. The goal is to find an estimate

: x(t+1) = f\left[x(t)\right] \approx \varphi(t) = \varphi\left[x(t)\right]

for f.
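A sketch for generating the prototype data stream and the (input, target) exemplars for the inverse problem (the initial value x0 is arbitrary):

    def logistic_series(x0=0.3, steps=100):
        """Iterate x(t+1) = 4 x(t) (1 - x(t)) to produce a chaotic time series."""
        xs = [x0]
        for _ in range(steps):
            xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
        return np.array(xs)

    series = logistic_series()
    X_train, y_train = series[:-1], series[1:]   # training pairs (x(t), x(t+1))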


Function approximation


Unnormalized radial basis functions

The architecture is

: \varphi(\mathbf{x}) \ \stackrel{\mathrm{def}}{=}\ \sum_{i=1}^N a_i \rho\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big)

where

: \rho\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big) = \exp\left[-\beta_i \left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert^2\right] = \exp\left[-\beta_i \left(x(t) - c_i\right)^2\right].

Since the input is a scalar rather than a vector, the input dimension is one. We choose the number of basis functions as N = 5 and the size of the training set to be 100 exemplars generated by the chaotic time series. The weight \beta is taken to be a constant equal to 5. The weights c_i are five exemplars from the time series. The weights a_i are trained with projection operator training:

: a_i(t+1) = a_i(t) + \nu \big[ x(t+1) - \varphi\big(\mathbf{x}(t), \mathbf{w}\big) \big] \frac{\rho\big(\left\Vert \mathbf{x}(t) - \mathbf{c}_i \right\Vert\big)}{\sum_{i=1}^N \rho^2\big(\left\Vert \mathbf{x}(t) - \mathbf{c}_i \right\Vert\big)}

where the learning rate \nu is taken to be 0.3. The training is performed with one pass through the 100 training points. The rms error is 0.15.
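Putting the pieces together for this example with the helper sketches above (the centers here are simply the first five exemplars; the quoted error values come from the source, not from this code):

    series = logistic_series(steps=100)
    X = series[:-1].reshape(-1, 1)          # scalar inputs x(t) as column vectors
    y = series[1:]                          # targets x(t+1)
    centers = X[:5]                         # five exemplars as centers c_i
    a = projection_train_weights(X, y, centers, beta=5.0, nu=0.3)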


Normalized radial basis functions

The normalized RBF architecture is

: \varphi(\mathbf{x}) \ \stackrel{\mathrm{def}}{=}\ \frac{\sum_{i=1}^N a_i \rho\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big)}{\sum_{i=1}^N \rho\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big)} = \sum_{i=1}^N a_i u\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big)

where

: u\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big) \ \stackrel{\mathrm{def}}{=}\ \frac{\rho\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big)}{\sum_{j=1}^N \rho\big(\left\Vert \mathbf{x} - \mathbf{c}_j \right\Vert\big)}.

Again,

: \rho\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big) = \exp\left[-\beta \left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert^2\right] = \exp\left[-\beta \left(x(t) - c_i\right)^2\right].

Again, we choose the number of basis functions as five and the size of the training set to be 100 exemplars generated by the chaotic time series. The weight \beta is taken to be a constant equal to 6. The weights c_i are five exemplars from the time series. The weights a_i are trained with projection operator training:

: a_i(t+1) = a_i(t) + \nu \big[ x(t+1) - \varphi\big(\mathbf{x}(t), \mathbf{w}\big) \big] \frac{u\big(\left\Vert \mathbf{x}(t) - \mathbf{c}_i \right\Vert\big)}{\sum_{i=1}^N u^2\big(\left\Vert \mathbf{x}(t) - \mathbf{c}_i \right\Vert\big)}

where the learning rate \nu is again taken to be 0.3. The training is performed with one pass through the 100 training points. The rms error on a test set of 100 exemplars is 0.084, smaller than the unnormalized error. Normalization yields accuracy improvement. Typically, accuracy with normalized basis functions increases even more over unnormalized functions as input dimensionality increases.


Time series prediction

Once the underlying geometry of the time series is estimated as in the previous examples, a prediction for the time series can be made by iteration:

: \varphi(0) = x(1)

: \tilde{x}(t) \approx \varphi(t-1)

: \tilde{x}(t+1) \approx \varphi(t) = \varphi[\varphi(t-1)].

A comparison of the actual and estimated time series is displayed in the figure. The estimated time series starts out at time zero with an exact knowledge of x(0). It then uses the estimate of the dynamics to update the time series estimate for several time steps.

Note that the estimate is accurate for only a few time steps. This is a general characteristic of chaotic time series, a consequence of the sensitive dependence on initial conditions: a small initial error is amplified with time. A measure of the divergence of time series with nearly identical initial conditions is known as the Lyapunov exponent.
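A sketch of this closed-loop prediction, feeding the network's own output back in as the next input (it reuses the trained scalar-input network from the example above; names illustrative):

    def iterate_prediction(x0, centers, a, beta=5.0, steps=20):
        """Iterate the fitted map: the prediction at step t becomes the input at step t+1."""
        preds = [x0]
        for _ in range(steps):
            x = np.array([preds[-1]])
            rho = np.exp(-beta * np.sum((centers - x) ** 2, axis=1))
            preds.append(float(a @ rho))
        return np.array(preds)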


Control of a chaotic time series

We assume the output of the logistic map can be manipulated through a control parameter c[x(t), t] such that

: x^c(t+1) = 4 x(t) \left[1 - x(t)\right] + c[x(t), t].

The goal is to choose the control parameter in such a way as to drive the time series to a desired output d(t). This can be done if we choose the control parameter to be

: c^{*}[x(t), t] \ \stackrel{\mathrm{def}}{=}\ -\varphi[x(t)] + d(t+1)

where

: y[x(t)] \approx f[x(t)] = x(t+1) - c[x(t), t]

is an approximation to the underlying natural dynamics of the system.

The learning algorithm is given by

: a_i(t+1) = a_i(t) + \nu \varepsilon \frac{\rho\big(\left\Vert \mathbf{x}(t) - \mathbf{c}_i \right\Vert\big)}{\sum_{i=1}^N \rho^2\big(\left\Vert \mathbf{x}(t) - \mathbf{c}_i \right\Vert\big)}

where

: \varepsilon \ \stackrel{\mathrm{def}}{=}\ f[x(t)] - \varphi[x(t)] = x(t+1) - c[x(t), t] - \varphi[x(t)] = x(t+1) - d(t+1).


See also

* Radial basis function kernel
* Instance-based learning
* In Situ Adaptive Tabulation
* Predictive analytics
* Chaos theory
* Hierarchical RBF
* Cerebellar model articulation controller
* Instantaneously trained neural networks


References


Further reading

* J. Moody and C. J. Darken, "Fast learning in networks of locally tuned processing units," Neural Computation, 1, 281-294 (1989). Also see Radial basis function networks according to Moody and Darken.
* T. Poggio and F. Girosi, "Networks for approximation and learning," Proc. IEEE 78(9), 1484-1487 (1990).
* Roger D. Jones, Y. C. Lee, C. W. Barnes, G. W. Flake, K. Lee, P. S. Lewis, and S. Qian, "Function approximation and time series prediction with neural networks," Proceedings of the International Joint Conference on Neural Networks, June 17–21, p. I-649 (1990).
* John R. Davies, Stephen V. Coggeshall, Roger D. Jones, and Daniel Schutzer, "Intelligent Security Systems."
* {{cite book |author=Simon Haykin |title=Neural Networks: A Comprehensive Foundation |edition=2nd |location=Upper Saddle River, NJ |publisher=Prentice Hall |year=1999 |isbn=0-13-908385-5}}
* S. Chen, C. F. N. Cowan, and P. M. Grant, "Orthogonal Least Squares Learning Algorithm for Radial Basis Function Networks," IEEE Transactions on Neural Networks, Vol. 2, No. 2 (Mar) 1991.