Temporal difference (TD) learning refers to a class of model-free reinforcement learning methods which learn by bootstrapping from the current estimate of the value function. These methods sample from the environment, like Monte Carlo methods, and perform updates based on current estimates, like dynamic programming methods. While Monte Carlo methods only adjust their estimates once the final outcome is known, TD methods adjust predictions to match later, more accurate, predictions about the future before the final outcome is known. This is a form of bootstrapping, as illustrated with the following example:
Suppose you wish to predict the weather for Saturday, and you have some model that predicts Saturday's weather, given the weather of each day in the week. In the standard case, you would wait until Saturday and then adjust all your models. However, when it is, for example, Friday, you should have a pretty good idea of what the weather would be on Saturday – and thus be able to change, say, Saturday's model before Saturday arrives.
Temporal difference methods are related to the temporal difference model of animal learning.


Mathematical formulation

The tabular TD(0) method is one of the simplest TD methods. It is a special case of more general stochastic approximation methods. It estimates the state value function of a finite-state Markov decision process (MDP) under a policy \pi. Let V^\pi denote the state value function of the MDP with states (S_t)_{t \in \mathbb{N}}, rewards (R_t)_{t \in \mathbb{N}} and discount rate \gamma under the policy \pi:

V^\pi(s) = E_{a \sim \pi}\left\{ \sum_{t=0}^{\infty} \gamma^t R_{t+1} \,\Big|\, S_0 = s \right\}.

We drop the action from the notation for convenience. V^\pi satisfies the Hamilton–Jacobi–Bellman equation:

V^\pi(s) = E_\pi\left\{ R_1 + \gamma V^\pi(S_1) \,\big|\, S_0 = s \right\},

so R_1 + \gamma V^\pi(S_1) is an unbiased estimate for V^\pi(s). This observation motivates the following algorithm for estimating V^\pi.

The algorithm starts by initializing a table V(s) arbitrarily, with one value for each state of the MDP. A positive learning rate \alpha is chosen. We then repeatedly evaluate the policy \pi, obtain a reward r and update the value function for the current state using the rule:

V(S_t) \leftarrow (1 - \alpha) V(S_t) + \underbrace{\alpha}_{\text{learning rate}} \overbrace{\left[ R_{t+1} + \gamma V(S_{t+1}) \right]}^{\text{TD target}},

where S_t and S_{t+1} are the current and next states, respectively. The value R_{t+1} + \gamma V(S_{t+1}) is known as the TD target, and R_{t+1} + \gamma V(S_{t+1}) - V(S_t) is known as the TD error.
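
To make the update rule concrete, the following is a minimal sketch of tabular TD(0) in Python. The environment object env (with reset() and step() methods returning the next state, reward and a done flag), the policy function and the parameter defaults are illustrative assumptions, not part of any specific library.

# A minimal sketch of tabular TD(0), assuming a hypothetical environment
# object `env` and a fixed `policy(state)` function; these names are
# placeholders, not part of any particular library.
def td0(env, policy, num_episodes=1000, alpha=0.1, gamma=0.99):
    V = {}  # value table; unseen states default to 0.0
    for _ in range(num_episodes):
        state = env.reset()
        done = False
        while not done:
            action = policy(state)
            next_state, reward, done = env.step(action)
            v_next = 0.0 if done else V.get(next_state, 0.0)
            td_target = reward + gamma * v_next          # R_{t+1} + gamma V(S_{t+1})
            td_error = td_target - V.get(state, 0.0)     # TD error
            # Same update as (1 - alpha) V(S_t) + alpha * TD target
            V[state] = V.get(state, 0.0) + alpha * td_error
            state = next_state
    return V

Each step applies exactly the rule above: the table entry for the current state is moved a fraction \alpha of the way toward the TD target.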


TD-Lambda

TD-Lambda is a learning algorithm invented by Richard S. Sutton based on earlier work on temporal difference learning by Arthur Samuel. This algorithm was famously applied by Gerald Tesauro to create TD-Gammon, a program that learned to play the game of backgammon at the level of expert human players. The lambda (\lambda) parameter refers to the trace decay parameter, with 0 \leqslant \lambda \leqslant 1. Higher settings lead to longer lasting traces; that is, a larger proportion of credit from a reward can be given to more distant states and actions when \lambda is higher, with \lambda = 1 producing learning that parallels Monte Carlo RL algorithms.


In neuroscience

The TD algorithm has also received attention in the field of neuroscience. Researchers discovered that the firing rate of dopamine neurons in the ventral tegmental area (VTA) and substantia nigra (SNc) appears to mimic the error function in the algorithm. The error function reports back the difference between the estimated reward at any given state or time step and the actual reward received. The larger the error function, the larger the difference between the expected and actual reward. When this is paired with a stimulus that accurately reflects a future reward, the error can be used to associate the stimulus with the future reward.
Dopamine cells appear to behave in a similar manner. In one experiment, measurements of dopamine cells were made while training a monkey to associate a stimulus with the reward of juice. Initially the dopamine cells increased firing rates when the monkey received juice, indicating a difference in expected and actual rewards. Over time this increase in firing propagated back to the earliest reliable stimulus for the reward. Once the monkey was fully trained, there was no increase in firing rate upon presentation of the predicted reward. Subsequently, the firing rate for the dopamine cells decreased below normal activation when the expected reward was not produced. This mimics closely how the error function in TD is used for reinforcement learning.

The relationship between the model and potential neurological function has produced research attempting to use TD to explain many aspects of behavioral research. It has also been used to study conditions such as schizophrenia or the consequences of pharmacological manipulations of dopamine on learning.
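
The shift of the error signal from the time of reward back to the earliest predictive cue can be reproduced with a toy TD(0) simulation. The sketch below is purely illustrative; the time grid, parameters and unit reward are arbitrary assumptions, not data from the experiment described above.

# Toy TD(0) simulation of the error signal shifting from the reward time to
# the cue onset over repeated trials. All numbers (5 steps after the cue, a
# reward of 1 at the last step, alpha = 0.2) are arbitrary illustrative choices.
T, alpha, gamma = 5, 0.2, 1.0
V = [0.0] * (T + 1)              # V[t] for steps after cue onset; V[T] = 0 (terminal)

for trial in range(300):
    deltas = []
    # The cue itself is unpredicted, so the pre-cue value is held at 0 and the
    # TD error at cue onset is gamma * V[0] - 0.
    deltas.append(gamma * V[0])
    for t in range(T):
        reward = 1.0 if t == T - 1 else 0.0
        delta = reward + gamma * V[t + 1] - V[t]   # TD error at step t after the cue
        V[t] += alpha * delta
        deltas.append(delta)
    if trial in (0, 20, 299):
        print(f"trial {trial:3d}: TD errors = {[round(d, 2) for d in deltas]}")
# Early trials show the largest error at the reward step; after training the
# error at the reward step is near zero while a positive error appears at cue
# onset, mirroring the dopamine recordings described above.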


See also

* PVLV
* Q-learning
* Rescorla–Wagner model
* State–action–reward–state–action (SARSA)



External links


* Connect Four TDGravity Applet (+ mobile phone version) – self-learned using the TD-Leaf method (a combination of TD-Lambda with shallow tree search)
* Self Learning Meta-Tic-Tac-Toe – example web app showing how temporal difference learning can be used to learn state evaluation constants for a minimax AI playing a simple board game
* Reinforcement Learning Problem – document explaining how temporal difference learning can be used to speed up Q-learning
* TD-Simulator – temporal difference simulator for classical conditioning