Empowerment in the field of artificial intelligence formalises and quantifies (via information theory) the potential an agent perceives that it has to influence its environment. An agent which follows an empowerment-maximising policy acts to maximise its future options (typically up to some limited horizon). Empowerment can be used as a (pseudo) utility function that depends only on information gathered from the local environment to guide action, rather than seeking an externally imposed goal, and is thus a form of intrinsic motivation.

The empowerment formalism depends on a probabilistic model commonly used in artificial intelligence. An autonomous agent operates in the world by taking in sensory information and acting to change its state, or that of the environment, in a cycle of perceiving and acting known as the perception-action loop. Agent states and actions are modelled by random variables (S: s \in \mathcal{S}, A: a \in \mathcal{A}) and time (t). The choice of action depends on the current state, and the future state depends on the choice of action; the perception-action loop unrolled in time therefore forms a causal Bayesian network.
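As a concrete illustration of this probabilistic model, the sketch below represents one step of the perception-action loop as a conditional distribution p(s_{t+1} | s_t, a_t) stored in an array. It is written in Python with NumPy; the 1-D world, its five states, three actions and slip probability are illustrative assumptions, not taken from the empowerment literature.

```python
import numpy as np

# Hypothetical example: a 1-D world with 5 positions (states) and 3 actions
# (move left, stay, move right). One step of the perception-action loop is
# summarised by the conditional distribution p(s_{t+1} | s_t, a_t).
N_STATES, N_ACTIONS = 5, 3
MOVES = [-1, 0, +1]   # effect of each action on the position
NOISE = 0.1           # probability that the action "slips" and has no effect

def transition_model(n_states=N_STATES, noise=NOISE):
    """Return p[s, a, s'] = p(s_{t+1} = s' | s_t = s, a_t = a)."""
    p = np.zeros((n_states, N_ACTIONS, n_states))
    for s in range(n_states):
        for a, move in enumerate(MOVES):
            s_next = min(max(s + move, 0), n_states - 1)  # walls at both ends
            p[s, a, s_next] += 1.0 - noise
            p[s, a, s] += noise                           # slip: stay in place
    return p

p = transition_model()
assert np.allclose(p.sum(axis=2), 1.0)  # each (s, a) row is a distribution
```

Any discrete model of this form, however it is obtained, is sufficient input for the empowerment computation defined in the next section.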


Definition

Empowerment (\mathfrak{E}) is defined as the channel capacity (C) of the actuation channel of the agent, and is formalised as the maximal possible information flow between the actions of the agent and the effect of those actions some time later. Empowerment can be thought of as the future potential of the agent to affect its environment, as measured by its sensors.

\mathfrak{E}(A^n_t \longrightarrow S_{t+n}) = \max_{p(a^n_t)} I(A_t, \ldots, A_{t+n-1}; S_{t+n})

Here the maximum is taken over all distributions of the action sequence A^n_t = (A_t, \ldots, A_{t+n-1}), and S_{t+n} is the sensor state n steps later. The unit of empowerment depends on the logarithm base. Base 2 is commonly used, in which case the unit is bits.
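Because empowerment is a channel capacity, it can be computed for small discrete models with the standard Blahut-Arimoto algorithm. The sketch below is a minimal one-step (n = 1) version that expects a channel matrix p(s_{t+1} | a_t) for a fixed current state, such as one slice p[s] of the transition array sketched earlier; the function name, tolerance and iteration limit are illustrative choices.

```python
import numpy as np

def empowerment_1step(p_s_next_given_a, tol=1e-8, max_iter=1000):
    """1-step empowerment (in bits) of a channel p(s'|a) via Blahut-Arimoto.

    p_s_next_given_a: array of shape (n_actions, n_states), rows sum to 1.
    Returns the channel capacity max_{p(a)} I(A; S').
    """
    p_sa = np.asarray(p_s_next_given_a, dtype=float)
    n_actions = p_sa.shape[0]
    q_a = np.full(n_actions, 1.0 / n_actions)  # current action distribution p(a)
    capacity = 0.0
    for _ in range(max_iter):
        # Marginal over successor states under the current action distribution.
        p_s = q_a @ p_sa                                     # shape (n_states,)
        # Per-action divergence D_KL(p(s'|a) || p(s')) in bits.
        with np.errstate(divide="ignore", invalid="ignore"):
            log_ratio = np.where(p_sa > 0, np.log2(p_sa / p_s), 0.0)
        d = (p_sa * log_ratio).sum(axis=1)
        # Lower bound on capacity and multiplicative update of p(a).
        new_capacity = np.log2((q_a * np.exp2(d)).sum())
        q_a = q_a * np.exp2(d)
        q_a /= q_a.sum()
        if abs(new_capacity - capacity) < tol:
            return new_capacity
        capacity = new_capacity
    return capacity

# Example: 1-step empowerment of state s = 0 in the random-walk model above.
# print(empowerment_1step(p[0]))
```

For horizons n > 1 the channel inputs are action sequences, so the same routine can be applied after enumerating the compound n-step actions; since their number grows exponentially with n, approximations are used in practice for larger problems.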


Contextual Empowerment

In general, the choice of action (action distribution) that maximises empowerment varies from state to state. Knowing the empowerment of an agent in a specific state is useful, for example to construct an empowerment-maximising policy. State-specific empowerment can be found using the more general formalism of 'contextual empowerment', in which C is a random variable describing the context (e.g. state).
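As a rough illustration of using state-specific empowerment to guide behaviour, the sketch below reuses the hypothetical model p and the empowerment_1step routine from the earlier sketches: it evaluates the empowerment of every state and derives a greedy policy that steers towards successor states with high expected empowerment. This one-step greedy look-ahead is a simplification chosen here for brevity, not the policy construction used in the literature.

```python
import numpy as np

# Empowerment of every state in the hypothetical random-walk model.
emp = np.array([empowerment_1step(p[s]) for s in range(N_STATES)])

def greedy_empowerment_policy(p, emp):
    """For each state, pick the action with the highest expected successor empowerment."""
    expected_next_emp = p @ emp            # shape (n_states, n_actions)
    return expected_next_emp.argmax(axis=1)

policy = greedy_empowerment_policy(p, emp)
print("state empowerment (bits):", np.round(emp, 3))
print("greedy actions (0=left, 1=stay, 2=right):", policy)
```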


Applications

Empowerment has been applied in settings including intrinsically motivated reinforcement learning agents playing video games, and in the control of underwater vehicles.


References

Klyubin, A., Polani, D., and Nehaniv, C. (2005a). "All else being equal be empowered". Advances in Artificial Life, pages 744–753.
Klyubin, A., Polani, D., and Nehaniv, C. (2005b). "Empowerment: A universal agent-centric measure of control". In Evolutionary Computation, 2005. The 2005 IEEE Congress on, volume 1, pages 128–135. IEEE.
Klyubin, A., Polani, D., and Nehaniv, C. (2008). "Keep your options open: an information-based driving principle for sensorimotor systems". PLOS ONE, 3(12):e4018. https://dx.doi.org/10.1371%2Fjournal.pone.0004018
Salge, C., Glackin, C., and Polani, D. (2014). "Empowerment -- An Introduction". In Prokopenko, M. (ed.), Guided Self-Organization: Inception. Emergence, Complexity and Computation, volume 9, pages 67–114. Springer. doi:10.1007/978-3-642-53734-9_4. arXiv:1310.1863. ISBN 978-3-642-53733-2.
Volpi, N. C., De Palma, D., Polani, D., and Indiveri, G. (2016). "Computation of empowerment for an autonomous underwater vehicle". IFAC-PapersOnLine, 49(15), 81–87.
Mohamed, S., and Rezende, D. J. (2015). "Variational information maximisation for intrinsically motivated reinforcement learning". arXiv preprint arXiv:1509.08731.
Jung, T., Polani, D., and Stone, P. (2011). "Empowerment for continuous agent-environment systems". Adaptive Behavior, 19(1), 16–39.
Salge, C., Glackin, C., and Polani, D. (2013). "Approximation of empowerment in the continuous domain". Advances in Complex Systems, 16(02n03), 1250079.
Capdepuy, P., Polani, D., and Nehaniv, C. L. (2007). "Maximization of potential information flow as a universal utility for collective behaviour". In 2007 IEEE Symposium on Artificial Life, pages 207–213. IEEE.
Karl, M., Soelch, M., Becker-Ehmck, P., Benbouzid, D., van der Smagt, P., and Bayer, J. (2017). "Unsupervised real-time control through variational empowerment". arXiv preprint arXiv:1710.05101.