In deep learning, a multilayer perceptron (MLP) is a name for a modern feedforward neural network consisting of fully connected neurons with nonlinear activation functions, organized in layers, notable for being able to distinguish data that is not linearly separable (Cybenko, G. 1989. "Approximation by superpositions of a sigmoidal function". ''Mathematics of Control, Signals, and Systems'', 2(4), 303–314). Modern neural networks are trained using backpropagation (Rumelhart, David E., Geoffrey E. Hinton, and R. J. Williams. "Learning Internal Representations by Error Propagation". In David E. Rumelhart, James L. McClelland, and the PDP research group (editors), ''Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 1: Foundations''. MIT Press, 1986) and are colloquially referred to as "vanilla" networks. MLPs grew out of an effort to improve single-layer perceptrons, which could only be applied to linearly separable data. A perceptron traditionally used a Heaviside step function as its nonlinear activation function. However, the backpropagation algorithm requires that modern MLPs use continuous activation functions such as the sigmoid or ReLU. Multilayer perceptrons form the basis of deep learning, and are applicable across a vast set of diverse domains.
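The classic illustration of a function that is not linearly separable is XOR. The following is a minimal sketch in Python (assuming NumPy, with weights picked by hand purely for illustration rather than learned by training) of a two-layer perceptron that represents XOR, something no single-layer perceptron can do:

```python
import numpy as np

def step(x):
    """Heaviside step activation, as in the classical perceptron."""
    return (x >= 0).astype(float)

# Hidden layer: unit 0 fires for "x1 OR x2", unit 1 fires for "x1 AND x2".
W1 = np.array([[1.0, 1.0],     # weights into hidden unit 0
               [1.0, 1.0]])    # weights into hidden unit 1
b1 = np.array([-0.5, -1.5])    # thresholds: OR fires at sum >= 1, AND at sum >= 2

# Output layer: fires when OR is true but AND is false, i.e. XOR.
W2 = np.array([1.0, -1.0])
b2 = -0.5

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    h = step(W1 @ np.array(x, dtype=float) + b1)   # hidden activations
    y = step(W2 @ h + b2)                          # output activation
    print(x, "->", int(y))
```

The hidden layer remaps the four input points so that the output unit can separate them with a single line, which is exactly what the single-layer perceptron cannot achieve in the original input space.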


Timeline

* In 1943, Warren McCulloch and Walter Pitts proposed the binary artificial neuron as a logical model of biological neural networks.
* In 1958, Frank Rosenblatt proposed the multilayered perceptron model, consisting of an input layer, a hidden layer with randomized weights that did not learn, and an output layer with learnable connections.
* In 1962, Rosenblatt published many variants of and experiments on perceptrons in his book ''Principles of Neurodynamics'', including up to two trainable layers trained by "back-propagating errors". However, this was not the backpropagation algorithm, and he did not have a general method for training multiple layers.
* In 1965, Alexey Grigorevich Ivakhnenko and Valentin Lapa published the Group Method of Data Handling, one of the first deep learning methods, used to train an eight-layer neural net in 1971.
* In 1967, Shun'ichi Amari reported the first multilayered neural network trained by stochastic gradient descent, which was able to classify non-linearly separable pattern classes. Amari's student Saito conducted the computer experiments, using a five-layered feedforward network with two learning layers.
* Backpropagation was independently developed multiple times in the early 1970s. The earliest published instance was Seppo Linnainmaa's master's thesis (1970). Paul Werbos developed it independently in 1971, but had difficulty publishing it until 1982.
* In 1986, David E. Rumelhart et al. popularized backpropagation (Rumelhart, David E., Geoffrey E. Hinton, and R. J. Williams. "Learning Internal Representations by Error Propagation". In ''Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 1: Foundations''. MIT Press, 1986).
* In 2003, interest in backpropagation networks returned due to the successes of deep learning being applied to language modelling by Yoshua Bengio and co-authors.
* In 2021, a very simple NN architecture combining two deep MLPs with skip connections and layer normalizations was designed and called MLP-Mixer; its realizations featuring 19 to 431 million parameters were shown to be comparable to vision transformers of similar size on ImageNet and similar image classification tasks.


Mathematical foundations


Activation function

If a multilayer perceptron has a linear activation function in all neurons, that is, a linear function that maps the weighted inputs to the output of each neuron, then linear algebra shows that any number of layers can be reduced to a two-layer input–output model. In MLPs some neurons use a ''nonlinear'' activation function that was developed to model the frequency of action potentials, or firing, of biological neurons.

The two historically common activation functions are both sigmoids, and are described by

:y(v_i) = \tanh(v_i) \quad \text{and} \quad y(v_i) = (1+e^{-v_i})^{-1}.

The first is a hyperbolic tangent that ranges from −1 to 1, while the other is the logistic function, which is similar in shape but ranges from 0 to 1. Here y_i is the output of the ith node (neuron) and v_i is the weighted sum of the input connections. Alternative activation functions have been proposed, including the rectifier and softplus functions. More specialized activation functions include radial basis functions (used in radial basis networks, another class of supervised neural network models). In recent developments of deep learning the rectified linear unit (ReLU) is more frequently used as one of the possible ways to overcome the numerical problems related to the sigmoids.
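As a quick sketch of these functions in Python (assuming NumPy; the derivative forms are included because they reappear in the learning-rule derivation below):

```python
import numpy as np

def tanh(v):
    return np.tanh(v)                    # hyperbolic tangent, ranges from -1 to 1

def tanh_prime(v):
    return 1.0 - np.tanh(v) ** 2

def logistic(v):
    return 1.0 / (1.0 + np.exp(-v))      # logistic sigmoid, ranges from 0 to 1

def logistic_prime(v):
    s = logistic(v)
    return s * (1.0 - s)

def relu(v):
    return np.maximum(0.0, v)            # rectified linear unit

def relu_prime(v):
    return (v > 0).astype(float)
```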


Layers

The MLP consists of three or more layers (an input and an output layer with one or more ''hidden layers'') of nonlinearly-activating nodes. Since MLPs are fully connected, each node in one layer connects with a certain weight w_{ij} to every node in the following layer.
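A small sketch of this connectivity (the layer sizes and the use of tanh are illustrative assumptions): each layer's weights form a matrix in which row j holds the weights into node j, so a forward pass is a matrix product followed by the nonlinear activation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 3, 5, 2           # hypothetical layer sizes
W1 = rng.normal(size=(n_hidden, n_in))    # w_ji: row j = weights into hidden node j
W2 = rng.normal(size=(n_out, n_hidden))   # weights from hidden layer to output layer

x = rng.normal(size=n_in)                 # one input vector
h = np.tanh(W1 @ x)                       # hidden-layer activations
y = np.tanh(W2 @ h)                       # output-layer activations
```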


Learning

Learning occurs in the perceptron by changing connection weights after each piece of data is processed, based on the amount of error in the output compared to the expected result. This is an example of supervised learning, and is carried out through backpropagation, a generalization of the least mean squares algorithm in the linear perceptron.

We can represent the degree of error in an output node j for the nth data point (training example) by e_j(n)=d_j(n)-y_j(n), where d_j(n) is the desired target value for the nth data point at node j, and y_j(n) is the value produced by the perceptron at node j when the nth data point is given as an input. The node weights can then be adjusted based on corrections that minimize the error in the entire output for the nth data point, given by

:\mathcal{E}(n)=\frac{1}{2}\sum_j e_j^2(n).

Using gradient descent, the change in each weight w_{ji} is

:\Delta w_{ji}(n) = -\eta\frac{\partial\mathcal{E}(n)}{\partial v_j(n)} y_i(n)

where y_i(n) is the output of the previous neuron i, and \eta is the ''learning rate'', which is selected to ensure that the weights quickly converge to a response, without oscillations. In the previous expression, \frac{\partial\mathcal{E}(n)}{\partial v_j(n)} denotes the partial derivative of the error \mathcal{E}(n) with respect to the weighted sum v_j(n) of the input connections of neuron j.

The derivative to be calculated depends on the induced local field v_j, which itself varies. It is easy to prove that for an output node this derivative can be simplified to

:-\frac{\partial\mathcal{E}(n)}{\partial v_j(n)} = e_j(n)\phi^\prime(v_j(n))

where \phi^\prime is the derivative of the activation function described above, which itself does not vary. The analysis is more difficult for the change in weights to a hidden node, but it can be shown that the relevant derivative is

:-\frac{\partial\mathcal{E}(n)}{\partial v_j(n)} = \phi^\prime(v_j(n))\sum_k -\frac{\partial\mathcal{E}(n)}{\partial v_k(n)} w_{kj}(n).

This depends on the change in weights of the kth nodes, which represent the output layer. So to change the hidden layer weights, the output layer weights change according to the derivative of the activation function, and so this algorithm represents a backpropagation of the activation function.
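A minimal sketch of one such gradient-descent update for a two-layer network, following the equations above; the layer sizes, the tanh activation, and the learning rate are illustrative assumptions:

```python
import numpy as np

def phi(v):          # activation function
    return np.tanh(v)

def phi_prime(v):    # its derivative
    return 1.0 - np.tanh(v) ** 2

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(4, 3))    # hidden-layer weights w_ji
W2 = rng.normal(scale=0.5, size=(2, 4))    # output-layer weights w_kj
eta = 0.1                                  # learning rate

x = rng.normal(size=3)                     # one training input
d = np.array([0.5, -0.5])                  # desired output for this example

# Forward pass, keeping the induced local fields v for the backward pass.
v1 = W1 @ x;  y1 = phi(v1)                 # hidden layer
v2 = W2 @ y1; y2 = phi(v2)                 # output layer

# Backward pass.
e = d - y2                                 # e_j(n) = d_j(n) - y_j(n)
delta2 = e * phi_prime(v2)                 # output-node local gradients: e_j * phi'(v_j)
delta1 = phi_prime(v1) * (W2.T @ delta2)   # hidden-node gradients: phi'(v_j) * sum_k delta_k w_kj

# Weight corrections: delta_w_ji(n) = eta * delta_j(n) * y_i(n).
W2 += eta * np.outer(delta2, y1)
W1 += eta * np.outer(delta1, x)
```

Repeating this update over many data points drives the summed squared error \mathcal{E}(n) down, which is the training loop of a vanilla MLP.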


References


External links


Weka: Open source data mining software with multilayer perceptron implementation

Neuroph Studio documentation, implements this algorithm and a few others