
Quickprop is an iterative method for determining the minimum of the loss function of an artificial neural network, following an algorithm inspired by Newton's method. It is sometimes classified as a second-order learning method. The algorithm uses a quadratic approximation built from the previous gradient step and the current gradient, which is expected to be close to the minimum of the loss function, under the assumption that the loss function is locally approximately quadratic, so that it can be described by an upwardly open parabola. The minimum is sought at the vertex of this parabola. The procedure requires only local information of the artificial neuron to which it is applied. The k-th approximation step is given by:

\Delta^{(k)} w_{ij} = \Delta^{(k-1)} w_{ij} \left( \frac{\nabla_{ij} E^{(k)}}{\nabla_{ij} E^{(k-1)} - \nabla_{ij} E^{(k)}} \right)

where w_{ij} is the weight of input i of neuron j, and E is the loss function. Quickprop is an implementation of the error backpropagation algorithm, but the network can behave chaotically during the learning phase due to large step sizes.
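The update rule above can be sketched in Python with NumPy. This is a minimal illustration, not Fahlman's full algorithm: the fallback learning rate lr, the growth limit mu (which caps each step at a multiple of the previous one to tame the chaotic behaviour mentioned above), and the bootstrap gradient-descent step are assumptions added for stability.

```python
import numpy as np

def quickprop_step(w, grad, prev_grad, prev_step, lr=0.1, mu=1.75):
    """One Quickprop update, applied element-wise to all weights.

    Jumps toward the vertex of the parabola fitted through the two most
    recent gradients; falls back to plain gradient descent where the
    denominator is (near) zero and the parabola is degenerate.
    """
    denom = prev_grad - grad
    usable = np.abs(denom) > 1e-12
    safe = np.where(usable, denom, 1.0)            # avoid division by zero
    step = np.where(usable, prev_step * grad / safe, -lr * grad)
    # Cap growth at mu * |previous step| to limit runaway step sizes.
    limit = np.maximum(mu * np.abs(prev_step), lr)
    step = np.clip(step, -limit, limit)
    return w + step, step

# Toy problem: E(w) = 0.5 * (w - 3)^2, so dE/dw = w - 3.
w = np.array([0.0])
prev_grad = w - 3.0
prev_step = -0.1 * prev_grad                       # bootstrap gradient step
w = w + prev_step
for _ in range(20):
    grad = w - 3.0
    w, prev_step = quickprop_step(w, grad, prev_grad, prev_step)
    prev_grad = grad
```

On this quadratic loss the parabola model is exact, so once the growth cap stops binding a single Quickprop step lands on the minimum at w = 3.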




Bibliography

*Scott E. Fahlman: An Empirical Study of Learning Speed in Back-Propagation Networks, September 1988