Instantaneously trained neural networks
Instantaneously trained neural networks are feedforward artificial neural networks that create a new hidden neuron for each novel training sample. The weights to this hidden neuron separate out not only the new sample but also samples near it, thus providing generalization (Kak, S., "On training feedforward neural networks", ''Pramana'', vol. 40, 1993, pp. 35-42).

This separation is achieved with the nearest hyperplane, which can be written down instantaneously. In the two most important implementations the neighborhood of generalization either varies with the training sample (CC1 network) or remains constant (CC4 network). These networks use unary coding for an effective representation of the data sets. This type of network was first proposed in a 1993 paper by Subhash Kak. Since then, instantaneously trained neural networks have been proposed as models of short-term learning and used in web search and financial time-series prediction. They have also been used in instant classification of documents, and in deep learning and data mining. As in other neural networks, they are normally realized in software, but they have also been implemented in hardware using FPGAs and through optical implementation.
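The unary coding mentioned above can be sketched as follows. This is an illustrative example only; the function name and the fixed code length are assumptions, not details from the original papers:

```python
def unary_encode(value, length):
    """Unary (thermometer) code: `value` ones followed by zeros.

    With this coding, the Hamming distance between two codewords equals
    the absolute difference of the encoded integers, which is what makes
    a radius of generalization measured in Hamming distance meaningful.
    """
    if not 0 <= value <= length:
        raise ValueError("value must lie in [0, length]")
    return [1] * value + [0] * (length - value)

# For example, 3 on a scale with maximum 5 becomes [1, 1, 1, 0, 0].
```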


CC4 network

In the CC4 network, which is a three-stage network, the number of input nodes is one more than the size of the training vector, with the extra node serving as the bias node whose input is always 1. For a binary training vector, the weights from the input nodes to the hidden neuron (say of index j) created for that vector are given by the following formula:

:w_{ij} = \begin{cases} -1, & \mbox{if } x_i = 0\\ +1, & \mbox{if } x_i = 1\\ r-s+1, & \mbox{if } i = n+1 \end{cases}

where r is the radius of generalization and s is the Hamming weight (the number of 1s) of the binary sequence. From the hidden layer to the output layer the weights are +1 or -1 depending on whether or not the vector belongs to the given output class. The neurons in the hidden and output layers output 1 if the weighted sum of their inputs is 0 or positive, and 0 if it is negative:

:y = \begin{cases} 1 & \mbox{if } \sum x_i \ge 0\\ 0 & \mbox{if } \sum x_i < 0 \end{cases}
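The rules above can be sketched in NumPy. This is a minimal illustration under stated assumptions: the function names are invented here, labels are taken as binary with a single output neuron (the multi-class case simply repeats the output rule per class), and the thresholds follow the step rule exactly as given above:

```python
import numpy as np

def train_cc4(X, y, r=0):
    """Build CC4 weights from binary training vectors X (shape m x n)
    and binary class labels y.  One hidden neuron per training sample."""
    X = np.asarray(X)
    # Input-to-hidden weights: -1 for a 0 bit, +1 for a 1 bit, and
    # r - s + 1 for the always-on bias input, where s is the sample's
    # Hamming weight and r the radius of generalization.
    W1 = np.where(X == 1, 1, -1)
    s = X.sum(axis=1)
    bias = (r - s + 1).reshape(-1, 1)
    W1 = np.hstack([W1, bias])                  # shape m x (n+1)
    # Hidden-to-output weights: +1 if the stored sample belongs to the
    # class, -1 otherwise.
    W2 = np.where(np.asarray(y) == 1, 1, -1)
    return W1, W2

def predict_cc4(W1, W2, x):
    """Classify a binary vector x: each neuron outputs 1 when its
    weighted input sum is zero or positive, as stated above."""
    x = np.append(np.asarray(x), 1)             # bias input, always 1
    hidden = (W1 @ x >= 0).astype(int)
    return int(W2 @ hidden >= 0)
```

For a stored sample at Hamming distance d from the input, the hidden neuron's weighted sum works out to r + 1 - d, so nearby inputs activate the same hidden neurons as the stored sample, which is the source of the generalization described above.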


Other networks

The CC4 network has also been modified to include non-binary input with varying radii of generalization, so that it effectively provides a CC1 implementation (Tang, K.W. and Kak, S., "Fast classification networks for signal processing", ''Circuits, Systems, Signal Processing'' 21, 2002, pp. 207-224).
Among feedback networks, the Willshaw network as well as the Hopfield network are able to learn instantaneously.

