Leabra stands for local, error-driven and associative, biologically realistic algorithm. It is a model of learning which is a balance between Hebbian and error-driven learning with other network-derived characteristics. This model is used to mathematically predict outcomes based on inputs and previous learning influences. This model is heavily influenced by and contributes to neural network designs and models. This algorithm is the default algorithm in
''emergent'' (successor of PDP++) when making a new project, and is extensively used in various simulations.
Hebbian learning is performed using a conditional principal components analysis (CPCA) algorithm with a correction factor for sparse expected activity levels.
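As a rough sketch (omitting the sparseness correction factor), the CPCA Hebbian update for a single receiving unit moves each weight toward the sender's activation, gated by the receiver's activation. The function name and learning rate here are illustrative, not taken from the text:

```python
def cpca_dwt(x, y, w, lrate=0.01):
    """Sketch of the CPCA Hebbian rule (sparseness correction omitted):
    delta w_ij = lrate * y_j * (x_i - w_ij).
    x: sender activations, y: receiver activations,
    w: weight matrix indexed as w[receiver][sender]."""
    return [[lrate * yj * (xi - w[j][i]) for i, xi in enumerate(x)]
            for j, yj in enumerate(y)]
```

Because the weight is dragged toward the sender activation only when the receiver is active, each unit's weights come to represent the principal component of the inputs it participates in, conditional on its own activity.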
Error-driven learning is performed using GeneRec, which is a generalization of the recirculation algorithm, and approximates Almeida–Pineda recurrent backpropagation. The symmetric, midpoint version of GeneRec is used, which is equivalent to the contrastive Hebbian learning algorithm (CHL). See O'Reilly (1996; Neural Computation) for more details.
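A minimal sketch of the CHL rule (without the soft weight bounding applied later in the weight update): the weight change is the difference between the sender–receiver coproducts in the plus (outcome) and minus (expectation) phases. The learning rate is an illustrative default:

```python
def chl_dwt(x_minus, y_minus, x_plus, y_plus, lrate=0.01):
    """Contrastive Hebbian learning sketch:
    delta w_ij = lrate * (x_i^+ * y_j^+  -  x_i^- * y_j^-),
    where +/- are plus-phase and minus-phase activations."""
    return [[lrate * (yp * xp - ym * xm)
             for xp, xm in zip(x_plus, x_minus)]
            for yp, ym in zip(y_plus, y_minus)]
```

When the minus-phase (predicted) activations already match the plus-phase (observed) ones, the two coproducts cancel and no learning occurs.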
The activation function is a point-neuron approximation with both discrete
spiking and continuous rate-code output.
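A hedged sketch of what a point-neuron update with a rate-code output can look like: each conductance drives the membrane potential toward its reversal potential, and the rate-code output is an X/(X+1)-style function of the above-threshold drive. All parameter values here (reversal potentials, leak, threshold, gain) are illustrative defaults, not the model's published ones:

```python
def vm_update(vm, g_e, g_i, dt=0.3, g_l=0.1,
              e_e=1.0, e_l=0.3, e_i=0.25):
    """One Euler step of a point-neuron membrane potential:
    each conductance pulls v_m toward its reversal potential."""
    i_net = (g_e * (e_e - vm) + g_l * (e_l - vm) + g_i * (e_i - vm))
    return vm + dt * i_net

def rate_code(vm, theta=0.5, gain=100.0):
    """Continuous rate-code output: X/(X+1) saturating function
    of the above-threshold membrane potential."""
    x = gain * max(vm - theta, 0.0)
    return x / (x + 1.0)
```

Below threshold the unit is silent; above it the output rises steeply and saturates toward 1, which is what lets the same equations also be read as an instantaneous spiking probability.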
Layer or unit-group level inhibition can be computed directly using a
k-winners-take-all (KWTA) function, producing sparse distributed representations.
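The basic form of kWTA can be sketched as follows, assuming `g_q` holds each unit's threshold inhibitory conductance g_i^Q (the inhibition that would put that unit exactly at threshold); the layer inhibition is placed between the k-th and (k+1)-th highest values, so at most k units end up above threshold. The placement fraction `q` is an illustrative value:

```python
def kwta_inhib(g_q, k, q=0.25):
    """Basic k-winners-take-all sketch: set the layer-wide
    inhibitory conductance g_i between the k-th and (k+1)-th
    highest threshold conductances g_i^Q."""
    s = sorted(g_q, reverse=True)          # s[k-1] = k-th highest
    return s[k] + q * (s[k - 1] - s[k])    # between k-th and (k+1)-th
```

Only the k units whose g_i^Q exceeds the returned g_i can be active, which is what yields the sparse distributed representations.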
The net input is computed as an average, not a sum, over connections, based on normalized, sigmoidally transformed weight values, which are subject to scaling on a connection-group level to alter relative contributions. Automatic scaling is performed to compensate for differences in expected activity level in the different projections.
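A sketch of this averaged net input, with a hypothetical sigmoidal contrast-enhancement function for the weights; the exact functional form, gain, and scaling scheme used by Leabra may differ from these illustrative choices:

```python
def sig_wt(w, gain=6.0):
    """Hypothetical sigmoidal contrast enhancement of a linear
    weight in (0, 1): mid-range weights are pushed toward 0 or 1."""
    if w <= 0.0:
        return 0.0
    if w >= 1.0:
        return 1.0
    return 1.0 / (1.0 + ((1.0 - w) / w) ** gain)

def net_input(acts, wts, rel_scale=1.0):
    """Net input as an average (not a sum) over the connections,
    using contrast-enhanced weights and a projection-level
    relative scaling factor."""
    n = len(acts)
    return rel_scale * sum(a * sig_wt(w)
                           for a, w in zip(acts, wts)) / n
```

Averaging rather than summing keeps the net input in a comparable range regardless of how many connections a unit receives, which is what the projection-level scaling then adjusts.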
Documentation about this algorithm can be found in the book ''Computational Explorations in Cognitive Neuroscience: Understanding the Mind by Simulating the Brain'', published by MIT Press, and in the Emergent documentation.
Overview of the Leabra algorithm
The pseudocode for Leabra is given here, showing exactly how the pieces of the algorithm described in more detail in the subsequent sections fit together.
Iterate over minus and plus phases of settling for each event.
o At start of settling, for all units:
- Initialize all state variables (activation, v_m, etc).
- Apply external patterns (clamp input in minus, input & output in
plus).
- Compute net input scaling terms (constants, computed
here so network can be dynamically altered).
- Optimization: compute net input once from all static activations
(e.g., hard-clamped external inputs).
o During each cycle of settling, for all non-clamped units:
- Compute excitatory netinput (g_e(t), aka eta_j or net)
-- sender-based optimization by ignoring inactives.
     - Compute kWTA inhibition for each layer, based on g_i^Q:
* Sort units into two groups based on g_i^Q: top k and
remaining k+1 -> n.
       * If basic, find the k-th and (k+1)-th highest;
         if avg-based, compute avg of 1 -> k & k+1 -> n.
       * Set inhibitory conductance g_i from g^Q_k and g^Q_(k+1)
- Compute point-neuron activation combining excitatory input and
inhibition
o After settling, for all units, record final settling activations
as either minus or plus phase (y^-_j or y^+_j).
After both phases, update the weights (based on linear current
weight values), for all connections:
   o Compute error-driven weight changes with CHL with soft weight bounding
   o Compute Hebbian weight changes with CPCA from plus-phase activations
o Compute net weight change as weighted sum of error-driven and Hebbian
o Increment the weights according to net weight change.
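The final weight-update steps of the pseudocode can be sketched for a single connection as follows: a weighted sum of the CHL error-driven term (with soft weight bounding) and the CPCA Hebbian term. The learning rate and Hebbian mixing constant are illustrative values, not defaults from the text:

```python
def leabra_dwt(xm, ym, xp, yp, w, lrate=0.01, k_hebb=0.01):
    """Net Leabra weight change for one connection, per the
    pseudocode above. xm/ym: minus-phase sender/receiver acts,
    xp/yp: plus-phase acts, w: current linear weight in (0, 1)."""
    err = xp * yp - xm * ym                # CHL error-driven term
    err *= (1.0 - w) if err > 0 else w     # soft weight bounding
    hebb = yp * (xp - w)                   # CPCA from plus-phase acts
    return lrate * (k_hebb * hebb + (1.0 - k_hebb) * err)
```

Soft bounding shrinks positive changes as the weight nears 1 and negative changes as it nears 0, keeping the linear weight inside its valid range without hard clipping.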
Implementations
''Emergent'' is the original implementation of Leabra; its most recent implementation is written in Go. It was written chiefly by Dr. O'Reilly, but professional software engineers were recently hired to improve the existing codebase. This is the fastest implementation, suitable for constructing large networks. Although ''emergent'' has a graphical user interface, it is very complex and has a steep learning curve.
If you want to understand the algorithm in detail, it will be easier to read non-optimized code. For this purpose, check out the MATLAB version. There is also an R version available, which can be easily installed via
install.packages("leabRa")
in R and comes with a short introduction to how the package is used. The MATLAB and R versions are not suited for constructing very large networks, but they can be installed quickly and (with some programming background) are easy to use. Furthermore, they can also be adapted easily.
Special algorithms
* Temporal differences and general dopamine modulation. Temporal differences (TD) is widely used as a model of midbrain dopaminergic firing.
* Primary value learned value (PVLV). PVLV simulates behavioral and neural data on Pavlovian conditioning and the midbrain dopaminergic neurons that fire in proportion to unexpected rewards (an alternative to TD).
* Prefrontal cortex basal ganglia working memory (PBWM). PBWM uses PVLV to train a prefrontal cortex working memory updating system, based on the biology of the prefrontal cortex and basal ganglia.
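The TD error signal used in the temporal-differences mechanism above can be sketched in one line; a positive error models a phasic burst of dopaminergic firing (an unexpected reward), a negative error a dip below baseline. The discount factor is an illustrative value:

```python
def td_delta(r, v_s, v_s_next, gamma=0.99):
    """TD error sketch: delta = r + gamma * V(s') - V(s).
    r: reward received, v_s: value estimate of the current state,
    v_s_next: value estimate of the next state."""
    return r + gamma * v_s_next - v_s
```

When predictions are accurate (the reward was fully anticipated by V), the error is zero and dopaminergic firing stays at baseline.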
References
{{reflist}}
External links
* Emergent about Leabra
* O'Reilly, R.C. (1996). The Leabra Model of Neural Interactions and Learning in the Neocortex. PhD Thesis, Carnegie Mellon University, Pittsburgh, PA. tp://grey.colorado.edu/pub/oreilly/thesis/oreilly_thesis.all.pdf
* R version of Leabra
Machine learning algorithms
Artificial neural networks