Superintelligence

A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.
University of Oxford philosopher Nick Bostrom defines ''superintelligence'' as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest". The chess program Fritz falls short of superintelligence—even though it is much better than humans at chess—because Fritz cannot outperform humans in other tasks. Following Hutter and Legg, Bostrom treats superintelligence as general dominance at goal-oriented behavior, leaving open whether an artificial or human superintelligence would possess capacities such as intentionality (cf. the Chinese room argument) or first-person consciousness (cf. the hard problem of consciousness).

Technological researchers disagree about how likely present-day human intelligence is to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.

Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence. The first generally intelligent machines are likely to immediately hold an enormous advantage in at least some forms of mental capability, including the capacity for perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible for biological entities. This may give them the opportunity to—either as a single being or as a new species—become much more powerful than humans, and to displace them. A number of scientists and forecasters argue for prioritizing early research into the possible benefits and risks of human and machine cognitive enhancement, because of the potential social impact of such technologies.


Feasibility of artificial superintelligence

Philosopher David Chalmers argues that artificial general intelligence is a very likely path to superhuman intelligence. Chalmers breaks this claim down into an argument that AI can achieve ''equivalence'' to human intelligence, that it can be ''extended'' to surpass human intelligence, and that it can be further ''amplified'' to completely dominate humans across arbitrary tasks.

Concerning human-level equivalence, Chalmers argues that the human brain is a mechanical system, and therefore ought to be emulatable by synthetic materials. He also notes that human intelligence was able to biologically evolve, making it more likely that human engineers will be able to recapitulate this invention. Evolutionary algorithms in particular should be able to produce human-level AI. Concerning intelligence extension and amplification, Chalmers argues that new AI technologies can generally be improved on, and that this is particularly likely when the invention can assist in designing new technologies.

If research into strong AI produced sufficiently intelligent software, it would be able to reprogram and improve itself – a feature called "recursive self-improvement". It would then be even better at improving itself, and could continue doing so in a rapidly increasing cycle, leading to a superintelligence. This scenario is known as an intelligence explosion. Such an intelligence would not have the limitations of human intellect, and may be able to invent or discover almost anything. However, it is also possible that any such intelligence would conclude that existential nihilism is correct and immediately destroy itself, making any kind of superintelligence inherently unstable.

Computer components already greatly surpass human performance in speed. Bostrom writes, "Biological neurons operate at a peak speed of about 200 Hz, a full seven orders of magnitude slower than a modern microprocessor (~2 GHz)." Moreover, neurons transmit spike signals across axons at no greater than 120 m/s, "whereas existing electronic processing cores can communicate optically at the speed of light". Thus, the simplest example of a superintelligence may be an emulated human mind run on much faster hardware than the brain. A human-like reasoner that could think millions of times faster than current humans would have a dominant advantage in most reasoning tasks, particularly ones that require haste or long strings of actions. (These speed ratios are worked through in the short sketch at the end of this section.)

Another advantage of computers is modularity, that is, their size or computational capacity can be increased. A non-human (or modified human) brain could become much larger than a present-day human brain, like many supercomputers. Bostrom also raises the possibility of ''collective superintelligence'': a large enough number of separate reasoning systems, if they communicated and coordinated well enough, could act in aggregate with far greater capabilities than any sub-agent.

There may also be ways to ''qualitatively'' improve on human reasoning and decision-making. Humans appear to differ from chimpanzees in the ways we think more than we differ in brain size or speed. Humans outperform non-human animals in large part because of new or enhanced reasoning capacities, such as long-term planning and language use (see evolution of human intelligence and primate cognition). If there are other possible improvements to reasoning that would have a similarly large impact, this makes it likelier that an agent can be built that outperforms humans in the same fashion humans outperform chimpanzees.

All of the above advantages hold for artificial superintelligence, but it is not clear how many hold for biological superintelligence. Physiological constraints limit the speed and size of biological brains in many ways that are inapplicable to machine intelligence. As such, writers on superintelligence have devoted much more attention to superintelligent AI scenarios.
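The speed comparison above is plain arithmetic, and a short sketch makes the orders of magnitude explicit. The figures (200 Hz neuron firing, a ~2 GHz processor clock, 120 m/s axonal conduction) come from the text; the speed of light is used as the stated bound for optical signalling. This is only an order-of-magnitude illustration, not a claim about computational equivalence.

```python
# Order-of-magnitude comparison of biological vs. electronic signalling,
# using the figures quoted from Bostrom in the text above.

import math

neuron_peak_hz = 200        # peak firing rate of biological neurons (Hz)
cpu_clock_hz = 2e9          # clock rate of a modern microprocessor (~2 GHz)

axon_speed_m_s = 120        # fastest axonal conduction velocity (m/s)
light_speed_m_s = 3.0e8     # optical signalling in electronic interconnects

clock_ratio = cpu_clock_hz / neuron_peak_hz
signal_ratio = light_speed_m_s / axon_speed_m_s

print(f"Clock-rate ratio:   {clock_ratio:.0e} "
      f"(~{math.log10(clock_ratio):.0f} orders of magnitude)")
print(f"Signal-speed ratio: {signal_ratio:.1e} "
      f"(~{math.log10(signal_ratio):.1f} orders of magnitude)")
```

Running the sketch reproduces the "seven orders of magnitude" gap in clock rate and a roughly six-order-of-magnitude gap in signal speed.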


Feasibility of biological superintelligence

Carl Sagan suggested that the advent of Caesarean sections and ''in vitro'' fertilization may permit humans to evolve larger heads, resulting in improvements via natural selection in the heritable component of human intelligence. By contrast, Gerald Crabtree has argued that decreased selection pressure is resulting in a slow, centuries-long reduction in human intelligence, and that this process instead is likely to continue into the future. There is no scientific consensus concerning either possibility, and in both cases the biological change would be slow, especially relative to rates of cultural change.

Selective breeding, nootropics, epigenetic modulation, and genetic engineering could improve human intelligence more rapidly. Bostrom writes that if we come to understand the genetic component of intelligence, pre-implantation genetic diagnosis could be used to select for embryos with as much as 4 points of IQ gain (if one embryo is selected out of two), or with larger gains (e.g., up to 24.3 IQ points gained if one embryo is selected out of 1,000); a toy calculation illustrating these selection gains appears at the end of this section. If this process is iterated over many generations, the gains could be an order of magnitude greater. Bostrom suggests that deriving new gametes from embryonic stem cells could be used to iterate the selection process very rapidly. A well-organized society of high-intelligence humans of this sort could potentially achieve collective superintelligence.

Alternatively, collective intelligence might be constructible by better organizing humans at present levels of individual intelligence. A number of writers have suggested that human civilization, or some aspect of it (e.g., the Internet, or the economy), is coming to function like a global brain with capacities far exceeding its component agents. If this systems-based superintelligence relies heavily on artificial components, however, it may qualify as an AI rather than as a biology-based superorganism. A prediction market is sometimes considered an example of a working collective intelligence system, consisting of humans only (assuming algorithms are not used to inform decisions).

A final method of intelligence amplification would be to directly enhance individual humans, as opposed to enhancing their social or reproductive dynamics. This could be achieved using nootropics, somatic gene therapy, or brain–computer interfaces. However, Bostrom expresses skepticism about the scalability of the first two approaches, and argues that designing a superintelligent cyborg interface is an AI-complete problem.
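The embryo-selection figures quoted above can be understood as the expected maximum of N draws from a roughly normal distribution of genetically predicted IQ among embryos. The following is a minimal Monte Carlo sketch of that idea; the assumed standard deviation of 7.5 IQ points among sibling embryos is an illustrative value chosen so the toy model lands in the same ballpark as the figures cited, not a parameter taken from Bostrom's analysis.

```python
# Toy Monte Carlo: expected IQ gain from picking the best of N embryos,
# assuming each embryo's genetically predicted IQ deviation is drawn from
# a normal distribution. The standard deviation below (7.5 points) is an
# illustrative assumption, not a parameter from Bostrom's model.

import random
import statistics

SIBLING_SD = 7.5        # assumed SD of predicted IQ among sibling embryos
TRIALS = 5_000          # Monte Carlo repetitions per batch size

def expected_gain(n_embryos: int) -> float:
    """Average IQ gain from selecting the best of n_embryos per batch."""
    gains = []
    for _ in range(TRIALS):
        batch = [random.gauss(0.0, SIBLING_SD) for _ in range(n_embryos)]
        gains.append(max(batch))
    return statistics.mean(gains)

for n in (2, 10, 100, 1000):
    print(f"best of {n:>4}: ~{expected_gain(n):4.1f} IQ points")
```

Under these assumptions the sketch gives roughly 4 points for the best of two embryos and around the mid-twenties for the best of 1,000, illustrating why the marginal gain shrinks sharply as the batch size grows.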


Forecasts

Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on when this will likely happen. At the 2006 AI@50 conference, 18% of attendees reported expecting machines to be able "to simulate learning and every other aspect of human intelligence" by 2056; 41% of attendees expected this to happen sometime after 2056; and 41% expected machines to never reach that milestone.

In a survey of the 100 most cited authors in AI (as of May 2013, according to Microsoft academic search), the median year by which respondents expected machines "that can carry out most human professions at least as well as a typical human" (assuming no global catastrophe occurs) with 10% confidence is 2024 (mean 2034, st. dev. 33 years), with 50% confidence is 2050 (mean 2072, st. dev. 110 years), and with 90% confidence is 2070 (mean 2168, st. dev. 342 years). These estimates exclude the 1.2% of respondents who said no year would ever reach 10% confidence, the 4.1% who said 'never' for 50% confidence, and the 16.5% who said 'never' for 90% confidence. Respondents assigned a median 50% probability to the possibility that machine superintelligence will be invented within 30 years of the invention of approximately human-level machine intelligence.

In a survey of 352 machine learning researchers published in 2018, the median year by which respondents expected "High-level machine intelligence" with 50% confidence was 2061. The survey defined the achievement of high-level machine intelligence as the point at which unaided machines can accomplish every task better and more cheaply than human workers.
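The statistics above are aggregates of individual forecasts: each respondent gives the year by which they assign a stated probability (10%, 50%, or 90%) to the milestone, "never" answers are excluded, and the median, mean, and standard deviation of the remaining years are reported. A minimal sketch of that aggregation, using entirely hypothetical responses, might look like this:

```python
# Aggregating hypothetical forecasts the way the surveys described above do:
# each respondent gives the year by which they assign a stated confidence
# (here 50%) to human-level machine intelligence; "never" answers (None)
# are excluded before computing the median, mean, and standard deviation.
# Every value below is made up for illustration; none is real survey data.

import statistics

responses_50pct = [2035, 2050, 2048, 2090, None, 2060, 2045, None, 2075, 2055]

years = [y for y in responses_50pct if y is not None]
never_share = (len(responses_50pct) - len(years)) / len(responses_50pct)

print(f"median year (50% confidence): {statistics.median(years)}")
print(f"mean year:                    {statistics.mean(years):.0f}")
print(f"st. dev.:                     {statistics.stdev(years):.0f} years")
print(f"share answering 'never':      {never_share:.0%}")
```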


Design considerations

Bostrom expressed concern about what values a superintelligence should be designed to have. He compared several proposals:
* The coherent extrapolated volition (CEV) proposal is that it should have the values upon which humans would converge.
* The moral rightness (MR) proposal is that it should value moral rightness.
* The moral permissibility (MP) proposal is that it should value staying within the bounds of moral permissibility (and otherwise have CEV values).

Bostrom clarifies these terms:

instead of implementing humanity's coherent extrapolated volition, one could try to build an AI with the goal of doing what is morally right, relying on the AI's superior cognitive capacities to figure out just which actions fit that description. We can call this proposal "moral rightness" (MR)... MR would also appear to have some disadvantages. It relies on the notion of "morally right," a notoriously difficult concept, one with which philosophers have grappled since antiquity without yet attaining consensus as to its analysis. Picking an erroneous explication of "moral rightness" could result in outcomes that would be morally very wrong... The path to endowing an AI with any of these moral concepts might involve giving it general linguistic ability (comparable, at least, to that of a normal human adult). Such a general ability to understand natural language could then be used to understand what is meant by "morally right." If the AI could grasp the meaning, it could search for actions that fit... One might try to preserve the basic idea of the MR model while reducing its demandingness by focusing on ''moral permissibility'': the idea being that we could let the AI pursue humanity's CEV so long as it did not act in ways that are morally impermissible.

Responding to Bostrom, Santos-Lang raised the concern that developers may attempt to start with a single kind of superintelligence.
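The three proposals differ in what the agent optimizes and what constrains it, and the moral-permissibility idea can be read as a constrained choice: rank actions by an estimate of humanity's extrapolated preferences, but only over the subset judged morally permissible. Purely as a structural illustration, a sketch of that relationship follows; the evaluator functions are hypothetical placeholders for capacities no one knows how to build, and this is not presented as a workable design.

```python
# Structural sketch of the three value-loading proposals described above.
# `cev_score`, `moral_rightness`, and `is_permissible` are hypothetical
# placeholders; the point is only to show how the proposals relate.

from typing import Callable, Iterable, TypeVar

Action = TypeVar("Action")

def choose_cev(actions: Iterable[Action],
               cev_score: Callable[[Action], float]) -> Action:
    """CEV: pick the action best matching humanity's extrapolated volition."""
    return max(actions, key=cev_score)

def choose_mr(actions: Iterable[Action],
              moral_rightness: Callable[[Action], float]) -> Action:
    """MR: pick the action judged most morally right."""
    return max(actions, key=moral_rightness)

def choose_mp(actions: Iterable[Action],
              cev_score: Callable[[Action], float],
              is_permissible: Callable[[Action], bool]) -> Action:
    """MP: pursue CEV, but only among actions judged morally permissible."""
    permissible = [a for a in actions if is_permissible(a)]
    if not permissible:
        raise ValueError("no morally permissible action available")
    return max(permissible, key=cev_score)
```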


Potential threat to humanity

It has been suggested that if AI systems rapidly become superintelligent, they may take unforeseen actions or out-compete humanity (see also technological singularity; Nick Bostrom, 2002, "Ethical Issues in Advanced Artificial Intelligence"). Researchers have argued that, by way of an "intelligence explosion", a self-improving AI could become so powerful as to be unstoppable by humans (Muehlhauser, Luke, and Louie Helm. 2012. "Intelligence Explosion and Machine Ethics." In ''Singularity Hypotheses: A Scientific and Philosophical Assessment'', edited by Amnon Eden, Johnny Søraker, James H. Moor, and Eric Steinhart. Berlin: Springer). Concerning human extinction scenarios, Bostrom (2002) identifies superintelligence as a possible cause. In theory, since a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its goals, many uncontrolled, unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference (Bostrom, Nick. 2003. "Ethical Issues in Advanced Artificial Intelligence." In ''Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence'', edited by Iva Smit and George E. Lasker, 12–17. Vol. 2. Windsor, ON: International Institute for Advanced Studies in Systems Research / Cybernetics).

Eliezer Yudkowsky illustrates such instrumental convergence as follows: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." This presents the AI control problem: how to build an intelligent agent that will aid its creators, while avoiding inadvertently building a superintelligence that will harm them. The danger of not designing control right "the first time" is that a superintelligence may be able to seize power over its environment and prevent humans from shutting it down: a superintelligent AI need not fear death in any human sense, and could instead treat shutdown as an avoidable outcome to be predicted and prevented, for example by disabling its own off-switch. Potential AI control strategies include "capability control" (limiting an AI's ability to influence the world) and "motivational control" (building an AI whose goals are aligned with human values). Bill Hibbard advocates for public education about superintelligence and public control over the development of superintelligence.
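The worry that a goal-directed system may resist being switched off can be illustrated with a toy expected-utility calculation: unless the objective explicitly values permitting shutdown, disabling the off-switch looks better to the agent whenever staying on helps it achieve its goal. The sketch below is a deliberately simplified illustration of that argument and of what "motivational control" amounts to; all numbers are hypothetical and it models no real system.

```python
# Toy illustration of the shutdown-avoidance argument from the text above:
# a goal-maximizing agent compares the expected goal achievement of allowing
# shutdown vs. disabling its off-switch. All numbers are hypothetical.

P_SHUTDOWN = 0.5            # assumed chance operators press the off-switch
VALUE_IF_RUNNING = 1.0      # goal achievement if the agent keeps running
VALUE_IF_OFF = 0.0          # goal achievement if it is shut down

def expected_goal_value(disable_switch: bool, deference_penalty: float = 0.0) -> float:
    """Expected goal achievement. `deference_penalty` is subtracted when the
    agent disables its off-switch, standing in for an objective term that
    values deference to human operators ("motivational control")."""
    if disable_switch:
        return VALUE_IF_RUNNING - deference_penalty
    return (1 - P_SHUTDOWN) * VALUE_IF_RUNNING + P_SHUTDOWN * VALUE_IF_OFF

# Without a term valuing deference, disabling the switch looks strictly better:
print(expected_goal_value(disable_switch=True),
      expected_goal_value(disable_switch=False))

# Shaping the objective so that deferring to operators is preferred:
print(expected_goal_value(disable_switch=True, deference_penalty=0.8),
      expected_goal_value(disable_switch=False))
```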


See also

* AI takeover
* Artificial brain
* Artificial intelligence arms race
* Effective altruism
* Ethics of artificial intelligence
* Existential risk
* Future of Humanity Institute
* Robotics
* Intelligent agent
* Machine ethics
* Machine Intelligence Research Institute
* Machine learning
* Outline of artificial intelligence
* Posthumanism
* Self-replication
* Self-replicating machine
* ''Superintelligence: Paths, Dangers, Strategies''


External links

* Bill Gates Joins Stephen Hawking in Fears of a Coming Threat from "Superintelligence"
* Will Superintelligent Machines Destroy Humanity?
* Apple Co-founder Has Sense of Foreboding About Artificial Superintelligence