Dynamic Neural Network
There are many types of artificial neural networks (ANN).
Artificial neural networks are computational models inspired by biological neural networks, and are used to approximate functions that are generally unknown. Particularly, they are inspired by the behaviour of neurons and the electrical signals they convey between input (such as from the eyes or nerve endings in the hand), processing, and output from the brain (such as reacting to light, touch, or heat). The way neurons semantically communicate is an area of ongoing research. Most artificial neural networks bear only some resemblance to their more complex biological counterparts, but are very effective at their intended tasks (e.g. classification or segmentation). Some artificial neural networks are adaptive systems and are used, for example, to model populations and environments, which constantly change. Neural networks can be hardware-based (neurons are represented by physical components) or software-based (computer models), and can use a variety of topologies and learning algorithms.


Feedforward

In feedforward neural networks the information moves from the input to the output directly in every layer. There can be hidden layers with or without cycles/loops to sequence inputs. Feedforward networks can be constructed with various types of units, such as binary McCulloch–Pitts neurons, the simplest of which is the perceptron. Continuous neurons, frequently with sigmoidal activation, are used in the context of backpropagation.
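As a toy illustration (not from the article), here is a minimal sketch of a single perceptron trained with the classic perceptron learning rule on an assumed linearly separable problem (an AND gate); the learning rate and data are illustrative:

```python
import numpy as np

# Toy linearly separable data: AND gate (assumed example data)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

w = np.zeros(2)   # weights
b = 0.0           # bias
lr = 0.1          # learning rate (assumed)

for epoch in range(20):
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0   # step activation
        # Perceptron rule: adjust the boundary only on mistakes
        w += lr * (target - pred) * xi
        b += lr * (target - pred)

print([1 if xi @ w + b > 0 else 0 for xi in X])  # -> [0, 0, 0, 1]
```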


Group method of data handling

The Group Method of Data Handling (GMDH) features fully automatic structural and parametric model optimization. The node activation functions are Kolmogorov–Gabor polynomials that permit additions and multiplications. It uses a deep multilayer perceptron with eight layers. It is a supervised learning network that grows layer by layer, where each layer is trained by regression analysis. Useless items are detected using a validation set, and pruned through regularization. The size and depth of the resulting network depend on the task.


Autoencoder

An autoencoder, autoassociator or Diabolo network is similar to the multilayer perceptron (MLP), with an input layer, an output layer and one or more hidden layers connecting them. However, the output layer has the same number of units as the input layer. Its purpose is to reconstruct its own inputs (instead of emitting a target value). Therefore, autoencoders are unsupervised learning models, used to learn efficient codings, typically for the purpose of dimensionality reduction and for learning generative models of data.
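A minimal sketch (assuming a single linear hidden layer and plain gradient descent on squared reconstruction error; data, bottleneck size and learning rate are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))           # toy data: 200 samples, 8 features
X = X - X.mean(axis=0)

k = 3                                   # bottleneck size (assumed)
W_enc = rng.normal(scale=0.1, size=(8, k))
W_dec = rng.normal(scale=0.1, size=(k, 8))
lr = 0.01

for step in range(2000):
    H = X @ W_enc                       # encode to k dimensions
    X_hat = H @ W_dec                   # decode back to input space
    err = X_hat - X                     # reconstruction error
    # Gradients of the mean squared reconstruction loss
    W_dec -= lr * H.T @ err / len(X)
    W_enc -= lr * X.T @ (err @ W_dec.T) / len(X)

print(np.mean(err ** 2))                # loss falls toward the PCA optimum
```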


Probabilistic

A probabilistic neural network (PNN) is a four-layer feedforward neural network. The layers are input, hidden pattern, hidden summation, and output. In the PNN algorithm, the parent probability distribution function (PDF) of each class is approximated by a Parzen window and a non-parametric function. Then, using the PDF of each class, the class probability of a new input is estimated and Bayes' rule is employed to allocate it to the class with the highest posterior probability. It was derived from the Bayesian network and a statistical algorithm called kernel Fisher discriminant analysis. It is used for classification and pattern recognition.
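A minimal sketch of this decision rule, using Gaussian Parzen windows and equal class priors; the smoothing parameter and the toy data are assumed:

```python
import numpy as np

def pnn_classify(x, X_train, y_train, sigma=0.5):
    """Classify x with a probabilistic neural network (Parzen windows).

    Each training point is a 'pattern' unit; the summation layer
    averages Gaussian kernel activations per class; the output layer
    picks the class with the highest estimated density (Bayes' rule
    with equal priors). sigma is an assumed smoothing parameter.
    """
    dists = np.sum((X_train - x) ** 2, axis=1)       # pattern layer
    activations = np.exp(-dists / (2 * sigma ** 2))  # Gaussian kernel
    classes = np.unique(y_train)
    densities = [activations[y_train == c].mean() for c in classes]
    return classes[int(np.argmax(densities))]        # output layer

# Toy usage with two Gaussian blobs (assumed data)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(4, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
print(pnn_classify(np.array([3.5, 4.2]), X, y))      # -> 1
```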


Time delay

A time delay neural network (TDNN) is a feedforward architecture for sequential data that recognizes features independently of sequence position. In order to achieve time-shift invariance, delays are added to the input so that multiple data points (points in time) are analyzed together. It usually forms part of a larger pattern recognition system. It has been implemented using a perceptron network whose connection weights were trained with backpropagation (supervised learning).
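To make the time-shift invariance concrete, here is a sketch of one TDNN layer written as a 1-D convolution over time, with the same weights applied at every position; all sizes are illustrative:

```python
import numpy as np

def tdnn_layer(x, W, b):
    """One TDNN layer as a 1-D convolution over time.

    x: (T, d_in) input sequence; W: (delay, d_in, d_out) weights shared
    across all time positions (this sharing gives time-shift invariance);
    b: (d_out,) bias. Sizes are illustrative, not from the article.
    """
    delay, d_in, d_out = W.shape
    T = x.shape[0]
    out = np.empty((T - delay + 1, d_out))
    for t in range(T - delay + 1):
        window = x[t:t + delay]                  # the delayed inputs
        out[t] = np.tanh(np.tensordot(window, W, axes=2) + b)
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(50, 3))                     # 50 time steps, 3 features
W = rng.normal(scale=0.1, size=(5, 3, 4))        # 5-step delay window
print(tdnn_layer(x, W, np.zeros(4)).shape)       # -> (46, 4)
```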


Convolutional

A convolutional neural network (CNN, ConvNet, shift-invariant or space-invariant network) is a class of deep network, composed of one or more convolutional layers with fully connected layers (matching those in typical ANNs) on top. It uses tied weights and pooling layers, in particular max-pooling, and is often structured via Fukushima's convolutional architecture. CNNs are variations of multilayer perceptrons that use minimal preprocessing. This architecture allows CNNs to take advantage of the 2D structure of input data. Their unit connectivity pattern is inspired by the organization of the visual cortex. Units respond to stimuli in a restricted region of space known as the receptive field. Receptive fields partially overlap, over-covering the entire visual field. Unit response can be approximated mathematically by a convolution operation. CNNs are suitable for processing visual and other two-dimensional data. They have shown superior results in both image and speech applications. They can be trained with standard backpropagation. CNNs are easier to train than other regular, deep, feed-forward neural networks and have many fewer parameters to estimate. Capsule neural networks (CapsNet) add structures called capsules to a CNN and reuse output from several capsules to form more stable (with respect to various perturbations) representations. Examples of applications in computer vision include DeepDream and robot navigation. CNNs have wide applications in image and video recognition, recommender systems and natural language processing.
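A minimal sketch of the two core operations, convolution with tied weights and max-pooling, in plain numpy; the image, filter and sizes are assumed:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation) with a tied-weight kernel.

    The same kernel slides over every spatial position, which is the
    weight sharing described above. Sizes are illustrative.
    """
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max-pooling that downsamples the feature map."""
    H, W = x.shape
    x = x[:H - H % size, :W - W % size]
    return x.reshape(H // size, size, W // size, size).max(axis=(1, 3))

rng = np.random.default_rng(0)
image = rng.normal(size=(28, 28))                    # toy grayscale image
edge_kernel = np.array([[1.0, -1.0], [1.0, -1.0]])   # assumed filter
features = np.maximum(conv2d(image, edge_kernel), 0) # ReLU
print(max_pool(features).shape)                      # -> (13, 13)
```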


Deep stacking network

A deep stacking network (DSN) (deep convex network) is based on a hierarchy of blocks of simplified neural network modules. It was introduced in 2011 by Deng and Yu. It formulates the learning as a convex optimization problem with a closed-form solution, emphasizing the mechanism's similarity to stacked generalization. Each DSN block is a simple module that is easy to train by itself in a supervised fashion, without backpropagation over the entire stack of blocks. Each block consists of a simplified multi-layer perceptron (MLP) with a single hidden layer. The hidden layer ''h'' has logistic sigmoidal units, and the output layer has linear units. Connections between these layers are represented by weight matrix ''U''; input-to-hidden-layer connections have weight matrix ''W''. Target vectors ''t'' form the columns of matrix ''T'', and the input data vectors ''x'' form the columns of matrix ''X''. The matrix of hidden units is \boldsymbol{H} = \sigma(\boldsymbol{W}^T\boldsymbol{X}). Modules are trained in order, so lower-layer weights ''W'' are known at each stage. The function \sigma performs the element-wise logistic sigmoid operation. Each block estimates the same final label class ''y'', and its estimate is concatenated with the original input ''X'' to form the expanded input for the next block. Thus, the input to the first block contains the original data only, while downstream blocks' input adds the output of preceding blocks. Learning the upper-layer weight matrix ''U'' given the other weights in the network can then be formulated as a convex optimization problem:
: \min_{\boldsymbol{U}} f = \| \boldsymbol{U}^T \boldsymbol{H} - \boldsymbol{T}\| ^2_F,
which has a closed-form solution. Unlike other deep architectures, such as DBNs, the goal is not to discover the transformed feature representation. The structure of the hierarchy of this kind of architecture makes parallel learning straightforward, as a batch-mode optimization problem. In purely discriminative tasks, DSNs outperform conventional DBNs.
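Because the upper-layer problem is convex, ''U'' can be obtained in closed form by (ridge-regularized) least squares. A sketch of training a single block under the conventions above; the regularization constant, random lower-layer weights and toy data are assumed:

```python
import numpy as np

def train_dsn_block(X, T, n_hidden=32, lam=1e-3, rng=None):
    """Train one DSN block: fixed lower weights W, then a closed-form
    least-squares solve for the upper weights U.

    X: (d, n) inputs as columns; T: (c, n) targets as columns, matching
    the matrix convention in the text. lam is an assumed ridge term.
    """
    rng = rng or np.random.default_rng(0)
    W = rng.normal(scale=0.5, size=(X.shape[0], n_hidden))
    H = 1.0 / (1.0 + np.exp(-(W.T @ X)))           # H = sigma(W^T X)
    # min_U ||U^T H - T||_F^2  =>  U = (H H^T + lam I)^{-1} H T^T
    U = np.linalg.solve(H @ H.T + lam * np.eye(n_hidden), H @ T.T)
    Y = U.T @ H                                    # the block's prediction
    return W, U, Y

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 200))                     # 10 features, 200 cases
T = rng.normal(size=(3, 200))                      # 3 targets
W, U, Y = train_dsn_block(X, T)
print(np.mean((Y - T) ** 2))                       # block's training error
# The next block would receive np.vstack([X, Y]) as its expanded input.
```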


Tensor deep stacking networks

This architecture is a DSN extension. It offers two important improvements: it uses higher-order information from covariance statistics, and it transforms the non-convex problem of a lower layer to a convex sub-problem of an upper layer. TDSNs use covariance statistics in a bilinear mapping from each of two distinct sets of hidden units in the same layer to predictions, via a third-order tensor. While parallelization and scalability are not considered seriously in conventional DNNs, all learning for DSNs and TDSNs is done in batch mode, to allow parallelization. Parallelization allows scaling the design to larger (deeper) architectures and data sets. The basic architecture is suitable for diverse tasks such as classification and regression.


Physics-informed

Such a neural network is designed for the numerical solution of mathematical equations, such as differential, integral, delay, fractional and other equations. As input parameters, a physics-informed neural network (PINN) accepts variables (spatial, temporal, and others) and transmits them through the network block. At the output, it produces an approximate solution and substitutes it into the mathematical model, taking the initial and boundary conditions into account. If the solution does not satisfy the required accuracy, backpropagation is used to rectify the solution. Besides PINNs, there exist the deep neural operator (DeepONet) and the Fourier neural operator (FNO).
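A minimal PINN-style sketch for a simple ODE, u'(x) = -u(x) with u(0) = 1: the network's output is substituted into the equation, and the residual plus the initial-condition mismatch form the loss. The tiny architecture, collocation points and the use of numerical parameter gradients are all choices assumed for brevity:

```python
import numpy as np

# PINN sketch for u'(x) = -u(x), u(0) = 1 (exact solution: exp(-x)).
# One hidden tanh layer; the derivative w.r.t. x is taken analytically,
# parameter gradients numerically for brevity. All settings are assumed.
rng = np.random.default_rng(0)
params = rng.normal(scale=0.5, size=(3, 10))    # rows: w, b, a

def u(x, p):
    w, b, a = p
    return np.tanh(np.outer(x, w) + b) @ a

def du_dx(x, p):                                # analytic d/dx of u
    w, b, a = p
    return (1 - np.tanh(np.outer(x, w) + b) ** 2) @ (w * a)

def loss(p):
    x = np.linspace(0, 2, 50)                   # collocation points
    residual = du_dx(x, p) + u(x, p)            # ODE residual u' + u
    boundary = u(np.array([0.0]), p)[0] - 1.0   # initial condition
    return np.mean(residual ** 2) + boundary ** 2

lr, eps = 0.05, 1e-5
for step in range(2000):                        # numeric-gradient descent
    g = np.zeros_like(params)
    for i in np.ndindex(params.shape):
        d = np.zeros_like(params); d[i] = eps
        g[i] = (loss(params + d) - loss(params - d)) / (2 * eps)
    params -= lr * g

print(u(np.array([1.0]), params), np.exp(-1))   # network vs exact solution
```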


Regulatory feedback

Regulatory feedback networks account for feedback found throughout brain recognition-processing areas. Instead of recognition-inference being feedforward (inputs-to-output) as in other neural networks, regulatory feedback assumes that inference iteratively compares inputs to outputs and that neurons inhibit their own inputs, collectively evaluating how important and unique each input is for the next iteration. This ultimately finds neuron activations that minimize mutual input overlap, estimating distributions during recognition and offloading the need for complex neural network training and rehearsal. Regulatory feedback processing suggests an important real-time recognition-processing role for the ubiquitous feedback found between the brain's pre- and post-synaptic neurons, which is meticulously maintained by homeostatic plasticity and kept in balance through multiple, often redundant, mechanisms. Regulatory feedback also inherently exhibits neuroscience phenomena such as excitation–inhibition balance, network-wide bursting followed by quieting, and the human cognitive search phenomena of difficulty with similarity and pop-out when multiple inputs are present, all without additional parameters. A regulatory feedback network makes inferences using negative feedback. The feedback is used to find the optimal activation of units. It is most similar to a non-parametric method but is different from k-nearest neighbors in that it mathematically emulates feedforward networks.


Radial basis function

Radial basis functions are functions that have a distance criterion with respect to a center. Radial basis functions have been applied as a replacement for the sigmoidal hidden-layer transfer characteristic in multi-layer perceptrons. RBF networks have two layers: in the first, input is mapped onto each RBF in the 'hidden' layer. The RBF chosen is usually a Gaussian. In regression problems the output layer is a linear combination of hidden-layer values representing the mean predicted output. The interpretation of this output-layer value is the same as a regression model in statistics. In classification problems the output layer is typically a sigmoid function of a linear combination of hidden-layer values, representing a posterior probability. Performance in both cases is often improved by shrinkage techniques, known as ridge regression in classical statistics. This corresponds to a prior belief in small parameter values (and therefore smooth output functions) in a Bayesian framework.

RBF networks have the advantage of avoiding the local minima that trouble multi-layer perceptrons. This is because the only parameters that are adjusted in the learning process are the linear mapping from hidden layer to output layer. Linearity ensures that the error surface is quadratic and therefore has a single, easily found minimum. In regression problems this can be found in one matrix operation. In classification problems the fixed non-linearity introduced by the sigmoid output function is most efficiently dealt with using iteratively re-weighted least squares.

RBF networks have the disadvantage of requiring good coverage of the input space by radial basis functions. RBF centres are determined with reference to the distribution of the input data, but without reference to the prediction task. As a result, representational resources may be wasted on areas of the input space that are irrelevant to the task. A common solution is to associate each data point with its own centre, although this can expand the linear system to be solved in the final layer and requires shrinkage techniques to avoid overfitting.

Associating each input datum with an RBF leads naturally to kernel methods such as support vector machines (SVM) and Gaussian processes (the RBF is the kernel function). All three approaches use a non-linear kernel function to project the input data into a space where the learning problem can be solved using a linear model. Like Gaussian processes, and unlike SVMs, RBF networks are typically trained in a maximum-likelihood framework by maximizing the probability (minimizing the error). SVMs avoid overfitting by instead maximizing a margin. SVMs outperform RBF networks in most classification applications. In regression applications they can be competitive when the dimensionality of the input space is relatively small.


How RBF networks work

RBF neural networks are conceptually similar to k-nearest neighbor (k-NN) models. The basic idea is that similar inputs produce similar outputs. Assume that each case in a training set has two predictor variables, x and y, and that the target variable has two categories, positive and negative. Given a new case with predictor values x=6, y=5.1, how is the target variable computed? The nearest-neighbor classification performed for this example depends on how many neighboring points are considered. If 1-NN is used and the closest point is negative, then the new point should be classified as negative. Alternatively, if 9-NN classification is used and the closest 9 points are considered, then the effect of the surrounding 8 positive points may outweigh the closest (negative) point.

An RBF network positions neurons in the space described by the predictor variables (x, y in this example). This space has as many dimensions as predictor variables. The Euclidean distance is computed from the new point to the center of each neuron, and a radial basis function (RBF, also called a kernel function) is applied to the distance to compute the weight (influence) for each neuron. The radial basis function is so named because the radius distance is the argument to the function:
: ''weight'' = RBF(''distance'')
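A sketch of this weighting scheme with Gaussian RBFs; the centers, radii and output weights below are assumed values, not from the text:

```python
import numpy as np

def rbf_predict(x_new, centers, radii, weights):
    """Weighted sum of Gaussian RBF activations, one per hidden neuron.

    centers: (n, d) neuron centers; radii: (n,) spreads; weights: (n,)
    output weights. All values here are illustrative.
    """
    dist = np.linalg.norm(centers - x_new, axis=1)   # Euclidean distance
    act = np.exp(-(dist / radii) ** 2)               # weight = RBF(distance)
    return act @ weights

centers = np.array([[5.0, 5.0], [7.0, 6.0], [2.0, 1.0]])
radii = np.array([1.0, 1.5, 1.0])
weights = np.array([-1.0, 1.0, 1.0])                 # e.g. class scores
print(rbf_predict(np.array([6.0, 5.1]), centers, radii, weights))
```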



The value for the new point is found by summing the output values of the RBF functions multiplied by weights computed for each neuron. The radial basis function for a neuron has a center and a radius (also called a spread). The radius may be different for each neuron, and, in RBF networks generated by DTREG, the radius may be different in each dimension. With larger spread, neurons at a distance from a point have a greater influence.


Architecture

RBF networks have three layers:

* Input layer: One neuron appears in the input layer for each predictor variable. In the case of categorical variables, N-1 neurons are used, where N is the number of categories. The input neurons standardize the value ranges by subtracting the median and dividing by the interquartile range. The input neurons then feed the values to each of the neurons in the hidden layer.
* Hidden layer: This layer has a variable number of neurons (determined by the training process). Each neuron consists of a radial basis function centered on a point with as many dimensions as predictor variables. The spread (radius) of the RBF function may be different for each dimension. The centers and spreads are determined by training. When presented with the vector ''x'' of input values from the input layer, a hidden neuron computes the Euclidean distance of the test case from the neuron's center point and then applies the RBF kernel function to this distance using the spread values. The resulting value is passed to the summation layer.
* Summation layer: The value coming out of a neuron in the hidden layer is multiplied by a weight associated with that neuron and added to the weighted values of other neurons. This sum becomes the output. For classification problems, one output is produced (with a separate set of weights and summation unit) for each target category. The value output for a category is the probability that the case being evaluated has that category.


Training

The following parameters are determined by the training process:

* the number of neurons in the hidden layer
* the coordinates of the center of each hidden-layer RBF function
* the radius (spread) of each RBF function in each dimension
* the weights applied to the RBF function outputs as they pass to the summation layer

Various methods have been used to train RBF networks. One approach first uses K-means clustering to find cluster centers, which are then used as the centers for the RBF functions. However, K-means clustering is computationally intensive and it often does not generate the optimal number of centers. Another approach is to use a random subset of the training points as the centers. DTREG uses a training algorithm with an evolutionary approach to determine the optimal center points and spreads for each neuron. It determines when to stop adding neurons to the network by monitoring the estimated leave-one-out (LOO) error and terminating when the LOO error begins to increase because of overfitting. The computation of the optimal weights between the neurons in the hidden layer and the summation layer is done using ridge regression. An iterative procedure computes the optimal regularization parameter lambda that minimizes the generalized cross-validation (GCV) error.
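A sketch of the K-means-centers-plus-ridge-regression recipe mentioned above (not DTREG's evolutionary algorithm); the number of centers, the shared-spread heuristic and the ridge constant are assumed:

```python
import numpy as np

def train_rbf(X, y, n_centers=10, lam=1e-2, seed=0):
    """Train an RBF network: pick centers with a tiny K-means, set one
    shared spread, then solve the output weights by ridge regression.
    n_centers, lam and the spread heuristic are assumed choices.
    """
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_centers, replace=False)]
    for _ in range(20):                                  # K-means loop
        labels = np.argmin(
            ((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(n_centers):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    spread = np.mean(np.linalg.norm(X[:, None] - centers[None], axis=-1))
    Phi = np.exp(-(np.linalg.norm(X[:, None] - centers[None], axis=-1)
                   / spread) ** 2)                       # hidden activations
    # Ridge regression for the summation-layer weights
    w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n_centers), Phi.T @ y)
    return centers, spread, w

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0])                                      # toy regression
centers, spread, w = train_rbf(X, y)
Phi = np.exp(-(np.linalg.norm(X[:, None] - centers[None], axis=-1)
               / spread) ** 2)
print(np.mean((Phi @ w - y) ** 2))                       # small train error
```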


General regression neural network

A GRNN is an associative memory neural network that is similar to the probabilistic neural network but it is used for regression and approximation rather than classification.


Deep belief network

A deep belief network (DBN) is a probabilistic, generative model made up of multiple hidden layers. It can be considered a composition of simple learning modules. A DBN can be used to generatively pre-train a deep neural network (DNN) by using the learned DBN weights as the initial DNN weights. Various discriminative algorithms can then tune these weights. This is particularly helpful when training data are limited, because poorly initialized weights can significantly hinder learning. These pre-trained weights end up in a region of the weight space that is closer to the optimal weights than random choices. This allows for both improved modeling and faster ultimate convergence.


Recurrent neural network

Recurrent neural networks (RNN) propagate data forward, but also backwards, from later processing stages to earlier stages. RNNs can be used as general sequence processors.


Fully recurrent

This architecture was developed in the 1980s. Its network creates a directed connection between every pair of units. Each unit has a time-varying, real-valued (more than just zero or one) activation (output). Each connection has a modifiable real-valued weight. Some of the nodes are called labeled nodes, some output nodes, the rest hidden nodes.

For supervised learning in discrete-time settings, training sequences of real-valued input vectors become sequences of activations of the input nodes, one input vector at a time. At each time step, each non-input unit computes its current activation as a nonlinear function of the weighted sum of the activations of all units from which it receives connections. The system can explicitly activate (independently of incoming signals) some output units at certain time steps. For example, if the input sequence is a speech signal corresponding to a spoken digit, the final target output at the end of the sequence may be a label classifying the digit. For each sequence, its error is the sum of the deviations of all activations computed by the network from the corresponding target signals. For a training set of numerous sequences, the total error is the sum of the errors of all individual sequences.

To minimize total error, gradient descent can be used to change each weight in proportion to its derivative with respect to the error, provided the non-linear activation functions are differentiable. The standard method is called "backpropagation through time" or BPTT, a generalization of backpropagation for feedforward networks. A more computationally expensive online variant is called "Real-Time Recurrent Learning" or RTRL. Unlike BPTT, this algorithm is ''local in time but not local in space''. An online hybrid between BPTT and RTRL with intermediate complexity exists, with variants for continuous time. A major problem with gradient descent for standard RNN architectures is that error gradients vanish exponentially quickly with the size of the time lag between important events. The long short-term memory architecture overcomes these problems.

In reinforcement learning settings, no teacher provides target signals. Instead a fitness function, reward function or utility function is occasionally used to evaluate performance, which influences the network's input stream through output units connected to actuators that affect the environment. Variants of evolutionary computation are often used to optimize the weight matrix.
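A sketch of the forward pass described above, with a directed connection between every pair of units and the input nodes clamped at each step; the unit counts and index assignments are assumed:

```python
import numpy as np

def rnn_forward(inputs, W, input_idx, output_idx):
    """Forward pass of a fully recurrent network: every unit connects to
    every other, and each non-input unit updates as a nonlinear function
    of the weighted sum of all activations. Indices and sizes are assumed.
    """
    n = W.shape[0]
    a = np.zeros(n)                        # unit activations
    outputs = []
    for x in inputs:                       # one input vector per step
        a[input_idx] = x                   # clamp the input nodes
        pre = W @ a                        # weighted sum from all units
        mask = np.ones(n, dtype=bool)
        mask[input_idx] = False            # input nodes are not updated
        a[mask] = np.tanh(pre[mask])
        outputs.append(a[output_idx].copy())
    return np.array(outputs)

rng = np.random.default_rng(0)
n_units = 8
W = rng.normal(scale=0.3, size=(n_units, n_units))
seq = rng.normal(size=(5, 2))              # 5 steps, 2 input features
print(rnn_forward(seq, W, [0, 1], [6, 7]).shape)   # -> (5, 2)
```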


Hopfield

The Hopfield network (like similar attractor-based networks) is of historic interest although it is not a general RNN, as it is not designed to process sequences of patterns. Instead it requires stationary inputs. It is an RNN in which all connections are symmetric, which guarantees that it will converge. If the connections are trained using Hebbian learning, the Hopfield network can perform as a robust content-addressable memory, resistant to connection alteration.
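A minimal sketch of Hebbian training and recall in a Hopfield network; the stored patterns and the corrupted probe are assumed examples:

```python
import numpy as np

def hopfield_train(patterns):
    """Hebbian learning: symmetric weights from outer products of the
    stored +/-1 patterns, with a zero diagonal."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0)
    return W

def hopfield_recall(W, state, steps=20):
    """Synchronous updates until the network settles in an attractor."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

patterns = np.array([[1, 1, 1, -1, -1, -1],
                     [1, -1, 1, -1, 1, -1]])
W = hopfield_train(patterns)
noisy = np.array([1, 1, -1, -1, -1, -1])    # corrupted first pattern
print(hopfield_recall(W, noisy))            # -> [ 1  1  1 -1 -1 -1]
```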


Boltzmann machine

The Boltzmann machine can be thought of as a noisy Hopfield network. It is one of the first neural networks to demonstrate learning of latent variables (hidden units). Boltzmann machine learning was at first slow to simulate, but the contrastive divergence algorithm speeds up training for Boltzmann machines and Products of Experts.


Self-organizing map

The self-organizing map (SOM) uses unsupervised learning. A set of neurons learns to map points in an input space to coordinates in an output space. The input space can have different dimensions and topology from the output space, and SOM attempts to preserve these.
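A compact SOM training sketch: each sample pulls its best-matching unit, and that unit's grid neighbours, toward it, with a shrinking neighbourhood; the grid size, rates and decay schedules are assumed:

```python
import numpy as np

def train_som(data, grid=(5, 5), epochs=200, lr0=0.5, sigma0=2.0, seed=0):
    """Train a 2-D self-organizing map: find the best-matching unit (BMU)
    for each sample and pull it and its grid neighbours toward the sample.
    Grid size, rates and decay schedule are assumed for the sketch.
    """
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h * w, data.shape[1]))
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)          # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)    # shrinking neighbourhood
        for x in rng.permutation(data):
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
            grid_dist = ((coords - coords[bmu]) ** 2).sum(axis=1)
            influence = np.exp(-grid_dist / (2 * sigma ** 2))
            weights += lr * influence[:, None] * (x - weights)
    return weights.reshape(h, w, -1)

data = np.random.default_rng(1).uniform(size=(500, 3))   # e.g. RGB colours
som = train_som(data)
print(som.shape)                                         # -> (5, 5, 3)
```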


Learning vector quantization

Learning vector quantization (LVQ) can be interpreted as a neural network architecture. Prototypical representatives of the classes parameterize, together with an appropriate distance measure, a distance-based classification scheme.


Simple recurrent

Simple recurrent networks have three layers, with the addition of a set of "context units" in the input layer. These units connect from the hidden layer or the output layer with a fixed weight of one. At each time step, the input is propagated in a standard feedforward fashion, and then a backpropagation-like learning rule is applied (not performing gradient descent). The fixed back-connections leave a copy of the previous values of the hidden units in the context units (since they propagate over the connections before the learning rule is applied).
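A sketch of one time step of a simple recurrent (Elman-style) network, with the context units holding a fixed-weight copy of the previous hidden state; all shapes are assumed:

```python
import numpy as np

def elman_step(x, context, Wx, Wc, Wo):
    """One step of a simple (Elman) recurrent network: the context units
    hold a copy of the previous hidden state and feed it back alongside
    the input. Matrix shapes are assumed for the sketch.
    """
    hidden = np.tanh(Wx @ x + Wc @ context)   # context joins the input
    output = Wo @ hidden
    new_context = hidden.copy()               # fixed-weight-one copy-back
    return output, new_context

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 3, 5, 2
Wx = rng.normal(scale=0.3, size=(n_hid, n_in))
Wc = rng.normal(scale=0.3, size=(n_hid, n_hid))
Wo = rng.normal(scale=0.3, size=(n_out, n_hid))

context = np.zeros(n_hid)
for x in rng.normal(size=(4, n_in)):          # a short input sequence
    y, context = elman_step(x, context, Wx, Wc, Wo)
print(y)
```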


Reservoir computing

Reservoir computing is a computation framework that may be viewed as an extension of neural networks. Typically an input signal is fed into a fixed (random) dynamical system called a ''reservoir'', whose dynamics map the input to a higher dimension. A ''readout'' mechanism is trained to map the reservoir to the desired output. Training is performed only at the readout stage. Liquid-state machines are a type of reservoir computing.


Echo state

The echo state network (ESN) employs a sparsely connected random hidden layer. The weights of the output neurons are the only part of the network that is trained. ESNs are good at reproducing certain time series.
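A sketch of an echo state network in which only the linear readout is trained, here by ridge regression; the reservoir size, spectral radius, ridge term and toy task are assumed:

```python
import numpy as np

def esn_fit(u, y, n_res=100, rho=0.9, lam=1e-6, seed=0):
    """Echo state network sketch: a fixed random reservoir driven by the
    input; only the linear readout is trained (ridge regression).
    Reservoir size, spectral radius and ridge term are assumed.
    """
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-0.5, 0.5, size=n_res)
    W = rng.normal(size=(n_res, n_res))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))  # set spectral radius
    states = np.zeros((len(u), n_res))
    x = np.zeros(n_res)
    for t, ut in enumerate(u):                       # run the reservoir
        x = np.tanh(W_in * ut + W @ x)
        states[t] = x
    # Train only the readout: states -> targets
    W_out = np.linalg.solve(states.T @ states + lam * np.eye(n_res),
                            states.T @ y)
    return states @ W_out

t = np.linspace(0, 20, 400)
u = np.sin(t)                                        # input signal
y = np.sin(t + 0.5)                                  # target: shifted copy
pred = esn_fit(u, y)
print(np.mean((pred - y) ** 2))                      # small fit error
```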


Long short-term memory

The long short-term memory (LSTM) network avoids the vanishing gradient problem. It works even with long delays between inputs and can handle signals that mix low- and high-frequency components. LSTM RNNs outperformed other RNNs and other sequence-learning methods such as HMMs in applications such as language learning and connected handwriting recognition.
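A sketch of a single LSTM step, showing how the gated, additive cell-state update lets information persist across long delays; the shapes and initialization are assumed:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, b):
    """One LSTM step: input, forget and output gates regulate a cell
    state c that can carry information across long delays. W stacks the
    four gate weight blocks; all shapes are assumed for the sketch.
    """
    n = h.size
    z = W @ np.concatenate([x, h]) + b           # all gates at once
    i = sigmoid(z[0:n])                          # input gate
    f = sigmoid(z[n:2 * n])                      # forget gate
    o = sigmoid(z[2 * n:3 * n])                  # output gate
    g = np.tanh(z[3 * n:4 * n])                  # candidate cell update
    c = f * c + i * g                            # additive cell update
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = rng.normal(scale=0.3, size=(4 * n_hid, n_in + n_hid))
b = np.zeros(4 * n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)):             # a short input sequence
    h, c = lstm_step(x, h, c, W, b)
print(h)
```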


Bi-directional

Bi-directional RNNs, or BRNNs, use a finite sequence to predict or label each element of the sequence based on both the past and future context of the element. This is done by adding the outputs of two RNNs: one processing the sequence from left to right, the other from right to left. The combined outputs are the predictions of the teacher-given target signals. This technique has proved especially useful when combined with LSTM.


Hierarchical

Hierarchical RNNs connect their elements in various ways to decompose hierarchical behavior into useful subprograms.


Stochastic

Distinct from conventional neural networks, a stochastic artificial neural network introduces random variations into the network and is used as an approximation to random functions.


Genetic scale

An RNN (often an LSTM) in which a series is decomposed into a number of scales, where every scale informs the primary length between two consecutive points. A first-order scale consists of a normal RNN, a second-order scale consists of all points separated by two indices, and so on. The Nth-order RNN connects the first and last node. The outputs from all the various scales are treated as a committee of machines, and the associated scores are used genetically for the next iteration.


Modular

Biological studies have shown that the human brain operates as a collection of small networks. This realization gave birth to the concept of modular neural networks, in which several small networks cooperate or compete to solve problems.


Committee of machines

A committee of machines (CoM) is a collection of different neural networks that together "vote" on a given example. This generally gives a much better result than individual networks. Because neural networks suffer from local minima, starting with the same architecture and training data but using randomly different initial weights often gives vastly different results. A CoM tends to stabilize the result. The CoM is similar to the general machine learning ''bagging'' method, except that the necessary variety of machines in the committee is obtained by training from different starting weights rather than training on different randomly selected subsets of the training data.
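A sketch of a committee that majority-votes over members trained from different random starting weights; tiny logistic classifiers stand in for full networks here, and the data and member count are assumed:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_member(X, y, seed, lr=0.1, steps=500):
    """Train one committee member from its own random initial weights."""
    rng = np.random.default_rng(seed)
    w, b = rng.normal(size=X.shape[1]), 0.0      # different start each time
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        w -= lr * X.T @ (p - y) / len(X)         # logistic-loss gradient
        b -= lr * np.mean(p - y)
    return w, b

def committee_predict(models, x):
    """Majority vote over the member networks' class predictions."""
    votes = [int(sigmoid(w @ x + b) > 0.5) for w, b in models]
    return int(np.round(np.mean(votes)))         # majority class

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
models = [train_member(X, y, seed) for seed in range(5)]
print(committee_predict(models, np.array([0.8, 1.2])))   # -> 1
```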


Associative

The associative neural network (ASNN) is an extension of the committee of machines that combines multiple feedforward neural networks and the k-nearest neighbor technique. It uses the correlation between ensemble responses as a measure of distance among the analyzed cases for the kNN. This corrects the bias of the neural network ensemble. An associative neural network has a memory that can coincide with the training set. If new data become available, the network instantly improves its predictive ability and provides data approximation (self-learns) without retraining. Another important feature of ASNN is the possibility to interpret neural network results by analysis of correlations between data cases in the space of models.


Physical

A physical neural network includes electrically adjustable resistance material to simulate artificial synapses. Examples include the ADALINE memristor-based neural network. An optical neural network is a physical implementation of an artificial neural network with optical components.


Dynamic

Unlike static neural networks, dynamic neural networks adapt their structure and/or parameters to the input during inference, showing time-dependent behaviour such as transient phenomena and delay effects. Dynamic neural networks in which the parameters may change over time are related to the fast weights architecture (1987), in which one neural network outputs the weights of another neural network.


Cascading

Cascade correlation is an architecture and supervised learning algorithm. Instead of just adjusting the weights in a network of fixed topology, Cascade-Correlation begins with a minimal network, then automatically trains and adds new hidden units one by one, creating a multi-layer structure. Once a new hidden unit has been added to the network, its input-side weights are frozen. This unit then becomes a permanent feature-detector in the network, available for producing outputs or for creating other, more complex feature detectors. The Cascade-Correlation architecture has several advantages: it learns quickly, determines its own size and topology, retains the structures it has built even if the training set changes, and requires no backpropagation.


Neuro-fuzzy

A neuro-fuzzy network is a fuzzy inference system (FIS) in the body of an artificial neural network. Depending on the FIS type, several layers simulate the processes involved in fuzzy inference, such as fuzzification, inference, aggregation and defuzzification. Embedding an FIS in the general structure of an ANN has the benefit of using available ANN training methods to find the parameters of the fuzzy system.


Compositional pattern-producing

Compositional pattern-producing networks (CPPNs) are a variation of artificial neural networks which differ in their set of activation functions and how they are applied. While typical artificial neural networks often contain only sigmoid functions (and sometimes Gaussian functions), CPPNs can include both types of functions and many others. Furthermore, unlike typical artificial neural networks, CPPNs are applied across the entire space of possible inputs, so that they can represent a complete image. Since they are compositions of functions, CPPNs in effect encode images at infinite resolution and can be sampled for a particular display at whatever resolution is optimal.


Memory networks

Memory networks incorporate long-term memory. The long-term memory can be read and written to, with the goal of using it for prediction. These models have been applied in the context of question answering (QA), where the long-term memory effectively acts as a (dynamic) knowledge base and the output is a textual response. In sparse distributed memory or hierarchical temporal memory, the patterns encoded by neural networks are used as addresses for content-addressable memory, with "neurons" essentially serving as address encoders and decoders. However, the early controllers of such memories were not differentiable.


One-shot associative memory

This type of network can add new patterns without re-training. It does so by creating a specific memory structure, which assigns each new pattern to an orthogonal plane using adjacently connected hierarchical arrays. The network offers real-time pattern recognition and high scalability; this requires parallel processing and is thus best suited for platforms such as wireless sensor networks, grid computing, and GPGPUs.


Hierarchical temporal memory

Hierarchical temporal memory (HTM) models some of the structural and
algorithm In mathematics and computer science, an algorithm () is a finite sequence of Rigour#Mathematics, mathematically rigorous instructions, typically used to solve a class of specific Computational problem, problems or to perform a computation. Algo ...
ic properties of the
neocortex The neocortex, also called the neopallium, isocortex, or the six-layered cortex, is a set of layers of the mammalian cerebral cortex involved in higher-order brain functions such as sensory perception, cognition, generation of motor commands, ...
. HTM is a
biomimetic Biomimetics or biomimicry is the emulation of the models, systems, and elements of nature for the purpose of solving complex human problems. The terms "biomimetics" and "biomimicry" are derived from (''bios''), life, and μίμησις ('' mīm ...
model based on memory-prediction theory. HTM is a method for discovering and inferring the high-level causes of observed input patterns and sequences, thus building an increasingly complex model of the world. HTM combines existing ideas to mimic the neocortex with a simple design that provides many capabilities. It combines and extends approaches used in
Bayesian networks A Bayesian network (also known as a Bayes network, Bayes net, belief network, or decision network) is a probabilistic graphical model that represents a set of variables and their Conditional dependence, conditional dependencies via a directed a ...
, and in spatial and temporal clustering algorithms, while using a tree-shaped hierarchy of nodes that is common in
neural networks A neural network is a group of interconnected units called neurons that send signals to one another. Neurons can be either Cell (biology), biological cells or signal pathways. While individual neurons are simple, many of them together in a netwo ...
.


Holographic associative memory

Holographic Associative Memory (HAM) is an analog, correlation-based, associative, stimulus-response system. Information is mapped onto the phase orientation of complex numbers. The memory is effective for
associative In mathematics, the associative property is a property of some binary operations that rearranging the parentheses in an expression will not change the result. In propositional logic, associativity is a valid rule of replacement for express ...
memory Memory is the faculty of the mind by which data or information is encoded, stored, and retrieved when needed. It is the retention of information over time for the purpose of influencing future action. If past events could not be remembe ...
tasks, generalization, and pattern recognition with changeable attention. Dynamic search localization is central to biological memory: in visual perception, humans focus on specific objects in a pattern and can shift focus from object to object without relearning. HAM mimics this ability by creating explicit representations for focus. It uses a bimodal representation of patterns and a hologram-like complex spherical weight state-space. HAMs are well suited to optical realization because the underlying hyper-spherical computations can be implemented with optical computing.
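The phase-based encoding can be illustrated with a minimal complex-valued correlation memory (a sketch only, assuming binary patterns and a single stored stimulus-response pair; the full HAM formulation is richer):

import numpy as np

rng = np.random.default_rng(0)

def encode(bits):
    # map binary patterns onto the unit circle: information lives in the phase
    return np.exp(1j * np.pi * bits)

s = rng.integers(0, 2, 64)               # stimulus pattern
r = rng.integers(0, 2, 64)               # response pattern
S, R = encode(s), encode(r)
M = np.outer(np.conj(S), R) / len(s)     # correlation (outer-product) storage

recalled = S @ M                         # stimulate with S to retrieve R
decoded = np.abs(np.angle(recalled)) > np.pi / 2
print(np.mean(decoded == (r == 1)))      # 1.0: perfect recall of the pair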


LSTM-related differentiable memory structures

Apart from
long short-term memory Long short-term memory (LSTM) is a type of recurrent neural network (RNN) aimed at mitigating the vanishing gradient problem commonly encountered by traditional RNNs. Its relative insensitivity to gap length is its advantage over other RNNs, ...
(LSTM), other approaches have also added differentiable memory to recurrent functions. Examples include (a sketch of differentiable push and pop operations follows the list):
* differentiable push and pop actions for alternative memory networks called neural stack machines
* memory networks where the control network's external differentiable storage is in the fast weights of another network
* LSTM forget gates
* self-referential RNNs with special output units for addressing and rapidly manipulating the RNN's own weights in a differentiable fashion (internal storage)
* learning to transduce with unbounded memory
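The sketch below illustrates the differentiable push/pop idea with a continuous stack in the style of the unbounded-memory transduction work; the scalar push strength d and pop strength u would normally be emitted by a recurrent controller, and all sizes here are arbitrary:

import numpy as np

def neural_stack_step(V, s, v_new, d, u):
    # V: list of stored vectors; s: their strengths in [0, 1]
    # d: push strength; u: pop strength (both produced by a controller)
    old = s.copy()
    for i in range(len(s)):              # pop: consume strength from the top
        above = old[i + 1:].sum()
        s[i] = max(0.0, old[i] - max(0.0, u - above))
    V = V + [v_new]                      # push the new value with strength d
    s = np.append(s, d)
    r = np.zeros_like(v_new)             # read: blend the top unit of strength
    for i in range(len(s)):
        above = s[i + 1:].sum()
        r += min(s[i], max(0.0, 1.0 - above)) * V[i]
    return V, s, r

V, s = [], np.zeros(0)
V, s, r = neural_stack_step(V, s, np.array([1.0, 0.0]), d=1.0, u=0.0)
V, s, r = neural_stack_step(V, s, np.array([0.0, 1.0]), d=0.5, u=0.0)
print(r)   # [0.5 0.5]: a blend weighted by the stack strengths

Because every operation is built from sums, max, and min, the read is differentiable almost everywhere with respect to d, u, and the stored vectors (with autodiff replacing numpy in practice).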


Neural Turing machines

Neural Turing machines (NTM) couple LSTM networks to external memory resources, with which they can interact by attentional processes. The combined system is analogous to a
Turing machine A Turing machine is a mathematical model of computation describing an abstract machine that manipulates symbols on a strip of tape according to a table of rules. Despite the model's simplicity, it is capable of implementing any computer algori ...
but is differentiable end-to-end, allowing it to be efficiently trained by
gradient descent Gradient descent is a method for unconstrained mathematical optimization. It is a first-order iterative algorithm for minimizing a differentiable multivariate function. The idea is to take repeated steps in the opposite direction of the gradi ...
. Preliminary results demonstrate that neural Turing machines can infer simple algorithms such as copying, sorting and associative recall from input and output examples.
Differentiable neural computer In artificial intelligence, a differentiable neural computer (DNC) is a memory augmented artificial neural network, neural network architecture (MANN), which is typically (but not by definition) recurrent in its implementation. The model was publ ...
s (DNC) are an NTM extension. They outperformed neural Turing machines,
long short-term memory Long short-term memory (LSTM) is a type of recurrent neural network (RNN) aimed at mitigating the vanishing gradient problem commonly encountered by traditional RNNs. Its relative insensitivity to gap length is its advantage over other RNNs, ...
systems and memory networks on sequence-processing tasks.
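For concreteness, the content-based addressing step that lets the controller interact with external memory can be sketched as follows (a simplification; the memory size, key width, and sharpness parameter beta are arbitrary here):

import numpy as np

def content_addressing(M, k, beta):
    # cosine similarity between the controller's key and every memory row,
    # sharpened by beta and normalized into attention weights
    sim = (M @ k) / (np.linalg.norm(M, axis=1) * np.linalg.norm(k) + 1e-8)
    w = np.exp(beta * sim)
    return w / w.sum()

M = np.random.randn(128, 20)       # external memory: 128 slots of width 20
k = np.random.randn(20)            # key emitted by the LSTM controller
w = content_addressing(M, k, beta=5.0)
read_vector = w @ M                # differentiable read, trainable end-to-end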


Semantic hashing

Approaches that represent previous experiences directly and use a similar experience to form a local model are often called nearest neighbor or k-nearest neighbors methods. Deep learning is useful in semantic hashing where a deep
graphical model A graphical model or probabilistic graphical model (PGM) or structured probabilistic model is a probabilistic model for which a graph expresses the conditional dependence structure between random variables. Graphical models are commonly used in ...
models the word-count vectors obtained from a large set of documents. Documents are mapped to memory addresses in such a way that semantically similar documents are located at nearby addresses. Documents similar to a query document can then be found by accessing all the addresses that differ by only a few bits from the address of the query document. Unlike
sparse distributed memory Sparse distributed memory (SDM) is a mathematical model of human long-term memory introduced by Pentti Kanerva in 1988 while he was at NASA Ames Research Center. This memory exhibits behaviors, both in theory and in experiment, that resemble thos ...
that operates on 1000-bit addresses, semantic hashing works on 32 or 64-bit addresses found in a conventional computer architecture.
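The address-based retrieval can be sketched as follows (illustrative only: it assumes document codes have already been produced by a trained deep model and uses a toy table of 32-bit codes):

# look up all documents whose code lies within Hamming distance 1 of the query
def hamming_ball_lookup(code, table, bits=32):
    hits = list(table.get(code, []))
    for b in range(bits):
        hits.extend(table.get(code ^ (1 << b), []))   # flip one bit at a time
    return hits

# toy index: doc id -> code assumed to come from a trained deep graphical model
codes = {0: 0b1010, 1: 0b1011, 2: 0b0110}
table = {}
for doc, c in codes.items():
    table.setdefault(c, []).append(doc)
print(hamming_ball_lookup(codes[0], table))   # [0, 1]: doc 1 differs by one bit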


Pointer networks

Deep neural networks can potentially be improved by increasing depth and reducing parameters, while maintaining trainability. While training extremely deep (e.g., one million layers) neural networks might not be practical,
CPU A central processing unit (CPU), also called a central processor, main processor, or just processor, is the primary processor in a given computer. Its electronic circuitry executes instructions of a computer program, such as arithmetic, log ...
-like architectures such as pointer networks and neural random-access machines overcome this limitation by using external
random-access memory Random-access memory (RAM; ) is a form of Computer memory, electronic computer memory that can be read and changed in any order, typically used to store working Data (computing), data and machine code. A random-access memory device allows ...
and other components that typically belong to a computer architecture such as registers, ALU and
pointers
. Such systems operate on
probability distribution In probability theory and statistics, a probability distribution is a Function (mathematics), function that gives the probabilities of occurrence of possible events for an Experiment (probability theory), experiment. It is a mathematical descri ...
vectors stored in memory cells and registers. Thus, the model is fully differentiable and trains end-to-end. The key characteristic of these models is that their depth, the size of their short-term memory, and the number of parameters can be altered independently.
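The defining operation of a pointer network, producing a distribution over positions in the input rather than over a fixed vocabulary, can be sketched as additive attention (a simplification; W1, W2, and v stand in for trained parameters):

import numpy as np

def pointer_attention(decoder_state, encoder_states, W1, W2, v):
    # score each input position against the decoder state, then softmax:
    # the output "points" at positions of the input sequence
    scores = np.array([v @ np.tanh(W1 @ e + W2 @ decoder_state)
                       for e in encoder_states])
    p = np.exp(scores - scores.max())
    return p / p.sum()

rng = np.random.default_rng(0)
d, n = 8, 5                                  # hidden size, input length
W1, W2 = rng.standard_normal((d, d)), rng.standard_normal((d, d))
v = rng.standard_normal(d)
enc = rng.standard_normal((n, d))            # one state per input element
dec = rng.standard_normal(d)
print(pointer_attention(dec, enc, W1, W2, v))   # distribution over 5 positions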


Hybrids


Encoder–decoder networks

Encoder–decoder frameworks are based on neural networks that map highly
structured
input to highly structured output. The approach arose in the context of
machine translation Machine translation is use of computational techniques to translate text or speech from one language to another, including the contextual, idiomatic and pragmatic nuances of both languages. Early approaches were mostly rule-based or statisti ...
, where the input and output are written sentences in two natural languages. In that work, an LSTM RNN or CNN was used as an encoder to summarize a source sentence, and the summary was decoded using a conditional RNN
language model A language model is a model of the human brain's ability to produce natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation,Andreas, Jacob, Andreas Vlachos, and Stephen Clark (2013)"S ...
to produce the translation. These systems share common building blocks: gated RNNs, CNNs, and trained attention mechanisms.
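Structurally, the framework reduces to two coupled recurrences. The sketch below uses a plain, untrained, randomly initialized RNN in place of the LSTM/CNN encoder and the conditional RNN language model, so the output is meaningless, but the data flow is the one described above:

import numpy as np

rng = np.random.default_rng(0)
V, d = 1000, 64                               # toy vocabulary and hidden size
E = rng.standard_normal((V, d)) * 0.1         # word embeddings (placeholder)
Wxh = rng.standard_normal((d, d)) * 0.1
Whh = rng.standard_normal((d, d)) * 0.1
Why = rng.standard_normal((V, d)) * 0.1

def rnn_step(x, h):
    return np.tanh(Wxh @ x + Whh @ h)

def translate(src_ids, bos=1, eos=2, max_len=20):
    h = np.zeros(d)
    for t in src_ids:                         # encoder: summarize the source
        h = rnn_step(E[t], h)
    y, out = bos, []
    for _ in range(max_len):                  # decoder: conditional RNN LM
        h = rnn_step(E[y], h)
        y = int(np.argmax(Why @ h))           # greedy choice of the next word
        if y == eos:
            break
        out.append(y)
    return out

print(translate([5, 42, 7]))                  # token ids of the "translation"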


Other types


Instantaneously trained

Instantaneously trained neural networks (ITNN) were inspired by the phenomenon of short-term learning that seems to occur instantaneously. In these networks the weights of the hidden and output layers are mapped directly from the training vector data. Ordinarily they work on binary data, but versions for continuous data that require only small additional processing also exist.
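A sketch of the idea for binary data, in the spirit of corner-classification training (the bias rule and radius-of-generalization parameter below follow that family of schemes; treat them as illustrative rather than the canonical ITNN):

import numpy as np

def train_itnn(X, y, r=0):
    # one hidden unit per training sample; weights are read directly off the
    # data: +1 where the input bit is 1, -1 where it is 0 (no iterations)
    W = np.where(X == 1, 1, -1)
    b = r - X.sum(axis=1) + 1        # a unit fires within Hamming radius r
    return W, b, y

def predict(W, b, labels, x):
    fired = (W @ x + b) > 0          # binary threshold hidden layer
    return labels[np.argmax(fired)] if fired.any() else None

X = np.array([[1, 0, 1, 1], [0, 1, 0, 1]])
W, b, labels = train_itnn(X, np.array([0, 1]), r=1)
print(predict(W, b, labels, np.array([1, 0, 1, 0])))   # 0: within radius 1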


Spiking

Spiking neural network Spiking neural networks (SNNs) are artificial neural networks (ANN) that mimic natural neural networks. These models leverage timing of discrete spikes as the main information carrier. In addition to Artificial neuron, neuronal and Electrical ...
s (SNN) explicitly consider the timing of inputs. The network input and output are usually represented as a series of spikes (
delta function In mathematical analysis, the Dirac delta function (or distribution), also known as the unit impulse, is a generalized function on the real numbers, whose value is zero everywhere except at zero, and whose integral over the entire real lin ...
or more complex shapes). SNNs can process information in the
time domain In mathematics and signal processing, the time domain is a representation of how a signal, function, or data set varies with time. It is used for the analysis of mathematical functions, physical signals or time series of economic or environmental ...
(signals that vary over time). They are often implemented as recurrent networks. SNNs are also a form of pulse computer. Spiking neural networks with axonal conduction delays exhibit polychronization, and hence could have a very large memory capacity. SNNs, together with the temporal correlations of neural assemblies in such networks, have been used to model figure/ground separation and region linking in the visual system.
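The central role of spike timing can be seen in a leaky integrate-and-fire neuron, the simplest common spiking unit (the parameters below are arbitrary illustrative values):

import numpy as np

dt, tau = 1.0, 20.0                  # time step and membrane time constant (ms)
v_thresh, v_reset, w_in = 1.0, 0.0, 0.3
T = 100
rng = np.random.default_rng(0)
in_spikes = (rng.random(T) < 0.2).astype(float)   # random input spike train

v, out_spikes = 0.0, []
for t in range(T):
    v += dt * (-v / tau) + w_in * in_spikes[t]    # leak plus synaptic input
    if v >= v_thresh:
        out_spikes.append(t)          # the output is a series of spike times
        v = v_reset                   # reset after firing
print(out_spikes)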


Spatial


Neocognitron

The
neocognitron __NOTOC__ The neocognitron is a hierarchical, multilayered artificial neural network proposed by Kunihiko Fukushima in 1979. It has been used for Japanese handwritten character recognition and other pattern recognition tasks, and served as the i ...
is a hierarchical, multilayered network that was modeled after the
visual cortex The visual cortex of the brain is the area of the cerebral cortex that processes visual information. It is located in the occipital lobe. Sensory input originating from the eyes travels through the lateral geniculate nucleus in the thalam ...
. It uses multiple types of units (originally two, called simple and complex cells) as a cascading model for use in pattern recognition tasks. Local features are extracted by S-cells, whose deformation is tolerated by C-cells. Local features in the input are integrated gradually and classified at higher layers. Among the various kinds of neocognitron are systems that can detect multiple patterns in the same input by using back propagation to achieve
selective attention Attentional control, commonly referred to as concentration, refers to an individual's capacity to choose what they pay attention to and what they ignore. It is also known as endogenous attention or executive attention. In lay terms, attentional c ...
. It has been used for
pattern recognition Pattern recognition is the task of assigning a class to an observation based on patterns extracted from data. While similar, pattern recognition (PR) is not to be confused with pattern machines (PM) which may possess PR capabilities but their p ...
tasks and inspired
convolutional neural network A convolutional neural network (CNN) is a type of feedforward neural network that learns features via filter (or kernel) optimization. This type of deep learning network has been applied to process and make predictions from many different ty ...
s.
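The S-cell/C-cell division of labor can be sketched as template matching followed by local pooling (a toy rendering of the idea, with arbitrary threshold and pooling sizes; the same pairing reappears as convolution and pooling in CNNs):

import numpy as np

def s_layer(img, template, theta=0.5):
    # S-cells: respond where a local patch matches the feature template
    k = template.shape[0]
    H, W = img.shape
    out = np.zeros((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = max(0.0, (img[i:i+k, j:j+k] * template).sum() - theta)
    return out

def c_layer(fmap, p=2):
    # C-cells: pool over a neighbourhood, tolerating small shifts (deformation)
    H, W = fmap.shape
    return np.array([[fmap[i:i+p, j:j+p].max()
                      for j in range(0, W - p + 1, p)]
                     for i in range(0, H - p + 1, p)])

img = np.random.default_rng(0).random((8, 8))
edge = np.array([[1.0, -1.0], [1.0, -1.0]])   # toy vertical-edge template
print(c_layer(s_layer(img, edge)).shape)      # (3, 3) pooled feature map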


Compound hierarchical-deep models

Compound hierarchical-deep models compose deep networks with non-parametric Bayesian models.
Features
can be learned using deep architectures such as DBNs, deep Boltzmann machines (DBM), deep auto encoders, convolutional variants, ssRBMs, deep coding networks, DBNs with sparse feature learning, RNNs, conditional DBNs,
denoising autoencoder An autoencoder is a type of artificial neural network used to learn Feature learning, efficient codings of unlabeled data (unsupervised learning). An autoencoder learns two functions: an encoding function that transforms the input data, and a ...
s. This provides a better representation, allowing faster learning and more accurate classification with high-dimensional data. However, these architectures are poor at learning novel classes with few examples, because all network units are involved in representing the input (a ''distributed representation'') and must be adjusted together (high
degree of freedom In many scientific fields, the degrees of freedom of a system is the number of parameters of the system that may vary independently. For example, a point in the plane has two degrees of freedom for translation: its two coordinates; a non-infinites ...
). Limiting the degree of freedom reduces the number of parameters to learn, facilitating learning of new classes from few examples. ''Hierarchical Bayesian (HB)'' models allow learning from few examples, for example for
computer vision Computer vision tasks include methods for image sensor, acquiring, Image processing, processing, Image analysis, analyzing, and understanding digital images, and extraction of high-dimensional data from the real world in order to produce numerical ...
,
statistics Statistics (from German language, German: ', "description of a State (polity), state, a country") is the discipline that concerns the collection, organization, analysis, interpretation, and presentation of data. In applying statistics to a s ...
and
cognitive science Cognitive science is the interdisciplinary, scientific study of the mind and its processes. It examines the nature, the tasks, and the functions of cognition (in a broad sense). Mental faculties of concern to cognitive scientists include percep ...
. Compound HD architectures aim to integrate characteristics of both HB and deep networks. The compound HDP-DBM architecture is a ''
hierarchical Dirichlet process In statistics and machine learning, the hierarchical Dirichlet process (HDP) is a nonparametric Bayesian approach to clustering grouped data. It uses a Dirichlet process for each group of data, with the Dirichlet processes for all groups sharing ...
(HDP)'' as a hierarchical model, incorporating a DBM architecture. It is a full
generative model In statistical classification, two main approaches are called the generative approach and the discriminative approach. These compute classifiers by different approaches, differing in the degree of statistical modelling. Terminology is inconsiste ...
, generalizing from abstract concepts flowing through the model's layers, and is able to synthesize new examples in novel classes that look "reasonably" natural. All the levels are learned jointly by maximizing a joint log-probability score. In a DBM with three hidden layers, the probability of a visible input ''\boldsymbol{\nu}'' is:

: p(\boldsymbol{\nu}, \psi) = \frac{1}{Z}\sum_{h} \exp\left(\sum_{ij} W_{ij}^{(1)} \nu_i h_j^1 + \sum_{j\ell} W_{j\ell}^{(2)} h_j^1 h_\ell^2 + \sum_{\ell m} W_{\ell m}^{(3)} h_\ell^2 h_m^3\right),

where \boldsymbol{h} = \{h^1, h^2, h^3\} is the set of hidden units, and \psi = \{W^{(1)}, W^{(2)}, W^{(3)}\} are the model parameters, representing visible-hidden and hidden-hidden symmetric interaction terms. A learned DBM model is an undirected model that defines the joint distribution P(\nu, h^1, h^2, h^3). One way to express what has been learned is the conditional model P(\nu, h^1, h^2 \mid h^3) and a prior term P(h^3). Here P(\nu, h^1, h^2 \mid h^3) represents a conditional DBM model, which can be viewed as a two-layer DBM but with bias terms given by the states of h^3:

: P(\nu, h^1, h^2 \mid h^3) = \frac{1}{Z(\psi, h^3)} \exp\left(\sum_{ij} W_{ij}^{(1)} \nu_i h_j^1 + \sum_{j\ell} W_{j\ell}^{(2)} h_j^1 h_\ell^2 + \sum_{\ell m} W_{\ell m}^{(3)} h_\ell^2 h_m^3\right).


Deep predictive coding networks

A deep predictive coding network (DPCN) is a predictive coding scheme that uses top-down information to empirically adjust the priors needed for a bottom-up
inference Inferences are steps in logical reasoning, moving from premises to logical consequences; etymologically, the word '' infer'' means to "carry forward". Inference is theoretically traditionally divided into deduction and induction, a distinct ...
procedure by means of a deep, locally connected,
generative model In statistical classification, two main approaches are called the generative approach and the discriminative approach. These compute classifiers by different approaches, differing in the degree of statistical modelling. Terminology is inconsiste ...
. This works by extracting sparse
features Feature may refer to: Computing * Feature recognition, could be a hole, pocket, or notch * Feature (computer vision), could be an edge, corner or blob * Feature (machine learning), in statistics: individual measurable properties of the phenome ...
from time-varying observations using a linear dynamical model. Then, a pooling strategy is used to learn invariant feature representations. These units compose to form a deep architecture and are trained by greedy layer-wise
unsupervised learning Unsupervised learning is a framework in machine learning where, in contrast to supervised learning, algorithms learn patterns exclusively from unlabeled data. Other frameworks in the spectrum of supervisions include weak- or semi-supervision, wh ...
. The layers constitute a kind of
Markov chain In probability theory and statistics, a Markov chain or Markov process is a stochastic process describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. Informally ...
such that the states at any layer depend only on the preceding and succeeding layers. DPCNs predict the representation of a layer using a top-down approach that combines the information in the layer above with temporal dependencies from previous states. DPCNs can be extended to form a convolutional network.


Multilayer kernel machine

Multilayer kernel machines (MKM) are a way of learning highly nonlinear functions by iterative application of weakly nonlinear kernels. They use
kernel principal component analysis In the field of multivariate statistics, kernel principal component analysis (kernel PCA) is an extension of principal component analysis (PCA) using techniques of kernel methods. Using a kernel, the originally linear operations of PCA are performe ...
(KPCA) as the method for the unsupervised greedy layer-wise pre-training step of deep learning. Layer \ell+1 learns the representation of the previous layer \ell, extracting the n_\ell
principal component Principal component analysis (PCA) is a linear dimensionality reduction technique with applications in exploratory data analysis, visualization and data preprocessing. The data is linearly transformed onto a new coordinate system such that th ...
(PC) of the projection of layer \ell's output in the feature domain induced by the kernel. To reduce the dimensionality of the updated representation in each layer, a supervised strategy selects the most informative features among those extracted by KPCA. The process is as follows (a sketch of the procedure in code appears after the list): * rank the n_\ell features according to their
mutual information In probability theory and information theory, the mutual information (MI) of two random variables is a measure of the mutual Statistical dependence, dependence between the two variables. More specifically, it quantifies the "Information conten ...
with the class labels; * for different values of ''K'' and m_\ell \in \{1, \ldots, n_\ell\}, compute the classification error rate of a ''
K-nearest neighbor In statistics, the ''k''-nearest neighbors algorithm (''k''-NN) is a non-parametric supervised learning method. It was first developed by Evelyn Fix and Joseph Hodges in 1951, and later expanded by Thomas Cover. Most often, it is used for cl ...
(K-NN)'' classifier using only the m_\ell most informative features on a
validation set In machine learning, a common task is the study and construction of algorithms that can learn from and make predictions on data. Such algorithms function by making data-driven predictions or decisions, through building a mathematical model from ...
; * the value of m_\ell with which the classifier has reached the lowest error rate determines the number of features to retain. Some drawbacks accompany the KPCA method for MKMs. A more straightforward way to use kernel machines for deep learning was developed for spoken language understanding. The main idea is to use a kernel machine to approximate a shallow neural net with an infinite number of hidden units, then use a deep stacking network to splice the output of the kernel machine and the raw input in building the next, higher level of the kernel machine. The number of levels in the deep convex network is a hyper-parameter of the overall system, to be determined by cross validation.
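The layer-wise KPCA-plus-feature-selection procedure can be sketched with standard tools (a sketch under assumptions: an RBF kernel, a fixed K for the K-NN check, and scikit-learn's mutual-information estimator standing in for the ranking step):

import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.feature_selection import mutual_info_classif
from sklearn.neighbors import KNeighborsClassifier

def mkm_layer(X_tr, y_tr, X_val, y_val, n_components=20, K=3):
    # step 1: unsupervised KPCA features for this layer
    kpca = KernelPCA(n_components=n_components, kernel="rbf").fit(X_tr)
    Z, Zv = kpca.transform(X_tr), kpca.transform(X_val)
    # step 2: rank features by mutual information with the class labels
    order = np.argsort(mutual_info_classif(Z, y_tr))[::-1]
    # step 3: keep the m features giving the lowest K-NN validation error
    best_m, best_err = 1, np.inf
    for m in range(1, n_components + 1):
        knn = KNeighborsClassifier(n_neighbors=K).fit(Z[:, order[:m]], y_tr)
        err = 1.0 - knn.score(Zv[:, order[:m]], y_val)
        if err < best_err:
            best_m, best_err = m, err
    keep = order[:best_m]
    return Z[:, keep], kpca, keep      # representation feeding the next layer

Stacking such layers, each trained on the selected features of the layer below, yields the iterated weakly nonlinear mapping described above.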


See also

* Adaptive resonance theory *
Artificial life Artificial life (ALife or A-Life) is a field of study wherein researchers examine systems related to natural life, its processes, and its evolution, through the use of simulations with computer models, robotics, and biochemistry. The discipline ...
*
Autoassociative memory Autoassociative memory, also known as auto-association memory or an autoassociation network, is any type of memory that is able to retrieve a piece of data from only a tiny sample of itself. They are very effective in de-noising or removing interfer ...
*
Autoencoder An autoencoder is a type of artificial neural network used to learn efficient codings of unlabeled data (unsupervised learning). An autoencoder learns two functions: an encoding function that transforms the input data, and a decoding function ...
*
Biologically inspired computing Bio-inspired computing, short for biologically inspired computing, is a field of study which seeks to solve computer science problems using models of biology. It relates to connectionism, social behavior, and emergence. Within computer science, b ...
*
Blue brain The Blue Brain Project was a Swiss brain research initiative that aimed to create a Artificial brain, digital reconstruction of the mouse brain. The project was founded in May 2005 by the Brain Mind Institute of ''École Polytechnique Fédérale de ...
* Connectionist expert system *
Decision tree A decision tree is a decision support system, decision support recursive partitioning structure that uses a Tree (graph theory), tree-like Causal model, model of decisions and their possible consequences, including probability, chance event ou ...
*
Expert system In artificial intelligence (AI), an expert system is a computer system emulating the decision-making ability of a human expert. Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented mainly as ...
*
Genetic algorithm In computer science and operations research, a genetic algorithm (GA) is a metaheuristic inspired by the process of natural selection that belongs to the larger class of evolutionary algorithms (EA). Genetic algorithms are commonly used to g ...
* In Situ Adaptive Tabulation * Large memory storage and retrieval neural networks *
Linear discriminant analysis Linear discriminant analysis (LDA), normal discriminant analysis (NDA), canonical variates analysis (CVA), or discriminant function analysis is a generalization of Fisher's linear discriminant, a method used in statistics and other fields, to fi ...
*
Logistic regression In statistics, a logistic model (or logit model) is a statistical model that models the logit, log-odds of an event as a linear function (calculus), linear combination of one or more independent variables. In regression analysis, logistic regres ...
*
Multilayer perceptron In deep learning, a multilayer perceptron (MLP) is a name for a modern feedforward neural network consisting of fully connected neurons with nonlinear activation functions, organized in layers, notable for being able to distinguish data that is ...
* Neural gas *
Neuroevolution Neuroevolution, or neuro-evolution, is a form of artificial intelligence that uses evolutionary algorithms to generate artificial neural networks (ANN), parameters, and rules. It is most commonly applied in artificial life, general game playing ...
, NeuroEvolution of Augmented Topologies (NEAT) * Ni1000 chip * Optical neural network *
Particle swarm optimization In computational science, particle swarm optimization (PSO) is a computational method that Mathematical optimization, optimizes a problem by iterative method, iteratively trying to improve a candidate solution with regard to a given measure of qu ...
*
Predictive analytics Predictive analytics encompasses a variety of Statistics, statistical techniques from data mining, Predictive modelling, predictive modeling, and machine learning that analyze current and historical facts to make predictions about future or other ...
*
Principal components analysis Principal component analysis (PCA) is a Linear map, linear dimensionality reduction technique with applications in exploratory data analysis, visualization and Data Preprocessing, data preprocessing. The data is linear map, linearly transformed ...
*
Simulated annealing Simulated annealing (SA) is a probabilistic technique for approximating the global optimum of a given function. Specifically, it is a metaheuristic to approximate global optimization in a large search space for an optimization problem. ...
*
Systolic array In parallel computer architectures, a systolic array is a homogeneous network of tightly coupled data processing units (DPUs) called cells or nodes. Each node or DPU independently computes a partial result as a function of the data received fro ...
* Time delay neural network (TDNN) *
Graph neural network Graph neural networks (GNN) are specialized artificial neural networks that are designed for tasks whose inputs are graphs. One prominent example is molecular drug design. Each input sample is a graph representation of a molecule, where atoms f ...

