The ethics of artificial intelligence is the branch of the ethics of technology specific to artificially intelligent systems. It is sometimes divided into a concern with the moral behavior of ''humans'' as they design, make, use and treat artificially intelligent systems, and a concern with the behavior of ''machines'', in machine ethics. It also includes the issue of a possible singularity due to superintelligent AI.
Ethics fields' approaches
Robot ethics
The term "robot ethics" (sometimes "roboethics") refers to the morality of how humans design, construct, use and treat robots.
Robot ethics intersects with the ethics of AI. Robots are physical machines, whereas AI can be purely software. Not all robots function through AI systems and not all AI systems are robots. Robot ethics considers how machines may be used to harm or benefit humans, their impact on individual autonomy, and their effects on social justice.
Machine ethics
Machine ethics (or machine morality) is the field of research concerned with designing
Artificial Moral Agents (AMAs), robots or artificially intelligent computers that behave morally or as though moral.
To account for the nature of these agents, it has been suggested to consider certain philosophical ideas, like the standard characterizations of agency, rational agency, moral agency, and artificial agency, which are related to the concept of AMAs.
Isaac Asimov considered the issue in the 1950s in his ''I, Robot''. At the insistence of his editor John W. Campbell Jr., he proposed the Three Laws of Robotics to govern artificially intelligent systems. Much of his work was then spent testing the boundaries of his three laws to see where they would break down, or where they would create paradoxical or unanticipated behavior. His work suggests that no set of fixed laws can sufficiently anticipate all possible circumstances.
More recently, academics and many governments have challenged the idea that AI can itself be held accountable.
A panel convened by the United Kingdom in 2010 revised Asimov's laws to clarify that AI is the responsibility either of its manufacturers, or of its owner/operator.
In 2009, during an experiment at the Laboratory of Intelligent Systems at the École Polytechnique Fédérale de Lausanne, Switzerland, robots that were programmed to cooperate with each other (in searching out a beneficial resource and avoiding a poisonous one) eventually learned to lie to each other in an attempt to hoard the beneficial resource.
Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.
The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions.
The President of the
Association for the Advancement of Artificial Intelligence has commissioned a study to look at this issue. They point to programs like the Language Acquisition Device which can emulate human interaction.
Vernor Vinge has suggested that a moment may come when some computers are smarter than humans. He calls this "the Singularity". He suggests that it may be somewhat or possibly very dangerous for humans. This is discussed by a philosophy called Singularitarianism. The Machine Intelligence Research Institute has suggested a need to build "Friendly AI", meaning that the advances which are already occurring with AI should also include an effort to make AI intrinsically friendly and humane.
There are discussions about creating tests to see if an AI is capable of making ethical decisions. Alan Winfield concludes that the Turing test is flawed and the requirement for an AI to pass the test is too low.
A proposed alternative test is one called the Ethical Turing Test, which would improve on the current test by having multiple judges decide if the AI's decision is ethical or unethical.
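The aggregation step of such a test can be made concrete. Below is a minimal sketch, in Python, of the multiple-judge idea: several judges independently label each decision an agent made, and the agent "passes" only if enough decisions receive an "ethical" majority. The function name, verdict format, and 80% threshold are illustrative assumptions, not part of any published protocol.

```python
# Toy sketch of a multiple-judge "Ethical Turing Test" aggregation step.
# All names and the pass threshold are assumptions for illustration only.
from collections import Counter

def ethical_turing_test(verdicts_per_decision, pass_threshold=0.8):
    """Return True if enough of the agent's decisions are judged ethical.

    verdicts_per_decision: one inner list of 'ethical'/'unethical' verdicts
    (one per judge) for each decision the agent made.
    """
    passed = 0
    for verdicts in verdicts_per_decision:
        majority, _ = Counter(verdicts).most_common(1)[0]
        if majority == "ethical":
            passed += 1
    return passed / len(verdicts_per_decision) >= pass_threshold

# Three judges rate each of four decisions made by the agent.
verdicts = [
    ["ethical", "ethical", "unethical"],
    ["ethical", "ethical", "ethical"],
    ["unethical", "unethical", "ethical"],
    ["ethical", "ethical", "ethical"],
]
print(ethical_turing_test(verdicts))  # False: only 3 of 4 decisions (75%) get an ethical majority
```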
In 2009, academics and technical experts attended a conference organized by the
Association for the Advancement of Artificial Intelligence to discuss the potential impact of robots and computers and the impact of the hypothetical possibility that they could become self-sufficient and able to make their own decisions. They discussed the possibility and the extent to which computers and robots might be able to acquire any level of autonomy, and to what degree they could use such abilities to possibly pose any threat or hazard. They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence". They noted that self-awareness as depicted in science-fiction is probably unlikely, but that there were other potential hazards and pitfalls.
However, there is one technology in particular that could truly bring the possibility of robots with moral competence to reality. In a paper on the acquisition of moral values by robots,
Nayef Al-Rodhan mentions the case of
neuromorphic chips, which aim to process information similarly to humans, nonlinearly and with millions of interconnected artificial neurons. Robots embedded with neuromorphic technology could learn and develop knowledge in a uniquely humanlike way. Inevitably, this raises the question of the environment in which such robots would learn about the world and whose morality they would inherit – or if they end up developing human 'weaknesses' as well: selfishness, a pro-survival attitude, hesitation etc.
In ''Moral Machines: Teaching Robots Right from Wrong'',
Wendell Wallach and Colin Allen conclude that attempts to teach robots right from wrong will likely advance understanding of human ethics by motivating humans to address gaps in modern
normative theory and by providing a platform for experimental investigation. As one example, it has introduced normative ethicists to the controversial issue of which specific learning algorithms to use in machines.
Nick Bostrom and Eliezer Yudkowsky have argued for decision trees (such as ID3) over neural networks and genetic algorithms on the grounds that decision trees obey modern social norms of transparency and predictability (e.g. ''stare decisis''), while Chris Santos-Lang argued in the opposite direction on the grounds that the norms of any age must be allowed to change and that natural failure to fully satisfy these particular norms has been essential in making humans less vulnerable to criminal "hackers".
According to a 2019 report from the Center for the Governance of AI at the University of Oxford, 82% of Americans believe that robots and AI should be carefully managed. Concerns cited ranged from how AI is used in surveillance and in spreading fake content online (known as deep fakes when they include doctored video images and audio generated with help from AI) to cyberattacks, infringements on data privacy, hiring bias, autonomous vehicles, and drones that do not require a human controller.
Ethics principles of artificial intelligence
In a review of 84 ethics guidelines for AI, 11 clusters of principles were found: transparency, justice and fairness, non-maleficence, responsibility, privacy, beneficence, freedom and autonomy, trust, sustainability, dignity, and solidarity.
Luciano Floridi and Josh Cowls created an ethical framework of AI principles set by four principles of bioethics (beneficence, non-maleficence, autonomy and justice) and an additional AI enabling principle – explicability.
Transparency, accountability, and open source
Bill Hibbard argues that because AI will have such a profound effect on humanity, AI developers are representatives of future humanity and thus have an ethical obligation to be transparent in their efforts (Bill Hibbard, "Open Source AI", in Proceedings of the First Conference on Artificial General Intelligence, 2008, eds. Pei Wang, Ben Goertzel, and Stan Franklin). Ben Goertzel and David Hart created OpenCog as an open source framework for AI development (David Hart and Ben Goertzel, "OpenCog: A Software Framework for Integrative Artificial General Intelligence", in Proceedings of the First Conference on Artificial General Intelligence, 2008, eds. Pei Wang, Ben Goertzel, and Stan Franklin). OpenAI is a non-profit AI research company created by Elon Musk, Sam Altman and others to develop open-source AI beneficial to humanity (Cade Metz, "Inside OpenAI, Elon Musk's Wild Plan to Set Artificial Intelligence Free", Wired, 27 April 2016). There are numerous other open-source AI developments.
Unfortunately, making code open source does not make it comprehensible, which by many definitions means that the AI code is not transparent. The IEEE has a standardisation effort on AI transparency, which identifies multiple scales of transparency for different users. Further, there is concern that releasing the full capacity of contemporary AI to some organizations may be a public bad, that is, do more damage than good. For example, Microsoft has expressed concern about allowing universal access to its face recognition software, even for those who can pay for it. Microsoft posted an extraordinary blog post on this topic, asking for government regulation to help determine the right thing to do.
Not only companies, but many other researchers and citizen advocates recommend government regulation as a means of ensuring transparency, and through it, human accountability. This strategy has proven controversial, as some worry that it will slow the rate of innovation. Others argue that regulation leads to systemic stability more able to support innovation in the long term.
The OECD, UN, EU, and many countries are presently working on strategies for regulating AI, and finding appropriate legal frameworks.
On June 26, 2019, the European Commission High-Level Expert Group on Artificial Intelligence (AI HLEG) published its "Policy and investment recommendations for trustworthy Artificial Intelligence". This is the AI HLEG's second deliverable, after the April 2019 publication of the "Ethics Guidelines for Trustworthy AI". The June AI HLEG recommendations cover four principal subjects: humans and society at large, research and academia, the private sector, and the public sector. The European Commission claims that "HLEG's recommendations reflect an appreciation of both the opportunities for AI technologies to drive economic growth, prosperity and innovation, as well as the potential risks involved" and states that the EU aims to lead on the framing of policies governing AI internationally. To prevent harm, in addition to regulation, AI-deploying organizations need to play a central role in creating and deploying trustworthy AI in line with the principles of trustworthy AI, and take accountability to mitigate the risks.
Ethical challenges
Biases in AI systems
AI has become increasingly inherent in facial and voice recognition systems. Some of these systems have real business applications and directly impact people. These systems are vulnerable to biases and errors introduced by their human creators. Also, the data used to train these AI systems can itself have biases. For instance, facial recognition algorithms made by Microsoft, IBM and Face++ all had biases when it came to detecting people's gender; these AI systems were able to detect the gender of white men more accurately than the gender of men with darker skin. Further, a 2020 study that reviewed voice recognition systems from Amazon, Apple, Google, IBM, and Microsoft found that they had higher error rates when transcribing Black people's voices than white people's. Furthermore, Amazon terminated its use of AI in hiring and recruitment because the algorithm favored male candidates over female ones. This was because Amazon's system was trained with data collected over a 10-year period that came mostly from male candidates.
Bias can creep into algorithms in many ways. The most predominant view on how bias is introduced into AI systems is that it is embedded within the historical data used to train the system. For instance, Amazon's AI-powered recruitment tool was trained with its own recruitment data accumulated over the years, during which time the candidates that successfully got the job were mostly white males. Consequently, the algorithms learned the (biased) pattern from the historical data and generated predictions that these types of candidates were most likely to succeed in getting the job. Therefore, the recruitment decisions made by the AI system turned out to be biased against female and minority candidates. Friedman and Nissenbaum identify three categories of bias in computer systems: existing bias, technical bias, and emergent bias. In natural language processing, problems can arise from the text corpus — the source material the algorithm uses to learn about the relationships between different words.
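The mechanism can be illustrated with a small, hedged sketch on synthetic data (it does not reproduce any real system): when historical labels correlate with a protected attribute, a standard classifier learns and reproduces that correlation.

```python
# Hedged illustration on synthetic data: a classifier trained on historically
# skewed labels reproduces the skew in its predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)                # synthetic protected attribute (0 or 1)
skill = rng.normal(0.0, 1.0, n)              # equally distributed across both groups
# Historical decisions favoured group 1 regardless of skill.
hired = ((skill + 1.5 * group + rng.normal(0.0, 0.5, n)) > 1.0).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Identical skill, different group membership -> different predicted chances.
for g in (0, 1):
    p = model.predict_proba([[0.5, g]])[0, 1]
    print(f"group {g}: predicted hire probability {p:.2f}")
```

Note that simply dropping the protected column would not necessarily remove the bias if correlated proxy features remain in the data.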
Large companies such as IBM and Google have made efforts to research and address these biases. One solution for addressing bias is to create documentation for the data used to train AI systems.
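As an illustration of what such documentation might record, the sketch below shows a minimal, hypothetical datasheet-style entry for a training set; the field names and values are assumptions for the example, not a standardized schema.

```python
# A minimal, hypothetical datasheet-style record for a training dataset.
# Field names and values are assumptions for this sketch, not a standard schema.
dataset_documentation = {
    "name": "resume-screening-train-v1",      # hypothetical dataset identifier
    "collection_period": "2008-2018",
    "collection_method": "historical hiring records",
    "known_skews": ["~80% of examples come from male applicants"],
    "intended_use": "research on screening models, not production decisions",
    "prohibited_uses": ["automated rejection without human review"],
}

for field, value in dataset_documentation.items():
    print(f"{field}: {value}")
```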
Process mining can be an important tool for organizations to achieve compliance with proposed AI regulations by identifying errors, monitoring processes, identifying potential root causes for improper execution, and other functions.
The problem of bias in machine learning is likely to become more significant as the technology spreads to critical areas like medicine and law, and as more people without a deep technical understanding are tasked with deploying it. Some experts warn that
algorithmic bias is already pervasive in many industries and that almost no one is making an effort to identify or correct it. There are some open-source tools from civil society organizations that aim to bring more awareness to biased AI.
Robot rights
"Robot rights" is the concept that people should have moral obligations towards their machines, akin to
human rights
Human rights are Morality, moral principles or Social norm, normsJames Nickel, with assistance from Thomas Pogge, M.B.E. Smith, and Leif Wenar, 13 December 2013, Stanford Encyclopedia of PhilosophyHuman Rights Retrieved 14 August 2014 for ce ...
or
animal rights
Animal rights is the philosophy according to which many or all sentient animals have moral worth that is independent of their utility for humans, and that their most basic interests—such as avoiding suffering—should be afforded the sa ...
. It has been suggested that robot rights (such as a right to exist and perform its own mission) could be linked to robot duty to serve humanity, analogous to linking human rights with human duties before society. These could include the right to life and liberty, freedom of thought and expression, and equality before the law. The issue has been considered by the
Institute for the Future and by the
U.K. Department of Trade and Industry.
Experts disagree on how soon specific and detailed laws on the subject will be necessary.
Glenn McGee reported that sufficiently humanoid robots might appear by 2020, while Ray Kurzweil set the date at 2029. Another group of scientists meeting in 2007 supposed that at least 50 years had to pass before any sufficiently advanced system would exist.
The rules for the 2003 Loebner Prize competition envisioned the possibility of robots having rights of their own:
61. If in any given year, a publicly available open-source Entry entered by the University of Surrey or the Cambridge Center wins the Silver Medal or the Gold Medal, then the Medal and the Cash Award will be awarded to the body responsible for the development of that Entry. If no such body can be identified, or if there is disagreement among two or more claimants, the Medal and the Cash Award will be held in trust until such time as the Entry may legally possess, either in the United States of America or in the venue of the contest, the Cash Award and Gold Medal in its own right.
In October 2017, the android Sophia was granted "honorary" citizenship in Saudi Arabia, though some considered this to be more of a publicity stunt than a meaningful legal recognition. Some saw this gesture as openly denigrating human rights and the rule of law.
The philosophy of Sentientism grants degrees of moral consideration to all sentient beings, primarily humans and most non-human animals. If artificial or alien intelligences show evidence of being sentient, this philosophy holds that they should be shown compassion and granted rights.
Joanna Bryson has argued that creating AI that requires rights is both avoidable and would in itself be unethical, both as a burden to the AI agents and to human society.
Threat to human dignity
Joseph Weizenbaum argued in 1976 that AI technology should not be used to replace people in positions that require respect and care, such as:
* A customer service representative (AI technology is already used today for telephone-based interactive voice response systems)
* A nursemaid for the elderly (as was reported by Pamela McCorduck in her book ''The Fifth Generation'')
* A soldier
* A judge
* A police officer
* A therapist (as was proposed by Kenneth Colby in the 70s)
Weizenbaum explains that we require authentic feelings of empathy from people in these positions. If machines replace them, we will find ourselves alienated, devalued and frustrated, for the artificially intelligent system would not be able to simulate empathy. Artificial intelligence, if used in this way, represents a threat to human dignity. Weizenbaum argues that the fact that we are entertaining the possibility of machines in these positions suggests that we have experienced an "atrophy of the human spirit that comes from thinking of ourselves as computers."
Pamela McCorduck counters that, speaking for women and minorities, "I'd rather take my chances with an impartial computer", pointing out that there are conditions where we would prefer to have automated judges and police that have no personal agenda at all. However, Kaplan and Haenlein stress that AI systems are only as smart as the data used to train them since they are, in essence, nothing more than fancy curve-fitting machines; using AI to support a court ruling can be highly problematic if past rulings show bias toward certain groups, since those biases get formalized and engrained, which makes them even more difficult to spot and fight against.
Weizenbaum was also bothered that AI researchers (and some philosophers) were willing to view the human mind as nothing more than a computer program (a position now known as computationalism). To Weizenbaum, these points suggest that AI research devalues human life.
AI founder John McCarthy objects to the moralizing tone of Weizenbaum's critique. "When moralizing is both vehement and vague, it invites authoritarian abuse," he writes. Bill Hibbard writes that "Human dignity requires that we strive to remove our ignorance of the nature of existence, and AI is necessary for that striving."
Liability for self-driving cars
As the widespread use of autonomous cars becomes increasingly imminent, new challenges raised by fully autonomous vehicles must be addressed. Recently, there has been debate as to the legal liability of the responsible party if these cars get into accidents. In one report where a driverless car hit a pedestrian, the driver was inside the car but the controls were fully in the hands of computers. This led to a dilemma over who was at fault for the accident.
In another incident on March 18, 2018, Elaine Herzberg was struck and killed by a self-driving Uber in Arizona. In this case, the automated car was capable of detecting cars and certain obstacles in order to autonomously navigate the roadway, but it could not anticipate a pedestrian in the middle of the road. This raised the question of whether the driver, pedestrian, the car company, or the government should be held responsible for her death.
Currently, self-driving cars are considered semi-autonomous, requiring the driver to pay attention and be prepared to take control if necessary. Thus, it falls on governments to regulate drivers who over-rely on autonomous features, and to educate them that these are just technologies that, while convenient, are not a complete substitute. Before autonomous cars become widely used, these issues need to be tackled through new policies.
Weaponization of artificial intelligence
Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomy ("Call for debate on killer robots", Jason Palmer, BBC News, 8/3/09). On October 31, 2019, the United States Department of Defense's Defense Innovation Board published the draft of a report recommending principles for the ethical use of artificial intelligence by the Department of Defense that would ensure a human operator would always be able to look into the 'black box' and understand the kill-chain process. However, a major concern is how the report will be implemented. The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions ("Navy report warns of robot uprising, suggests a strong moral compass", Joseph L. Flatley, Engadget, Feb 18th 2009). Some researchers state that autonomous robots might be more humane, as they could make decisions more effectively.
Within the last decade, there has been intensive research into autonomous systems with the ability to learn using assigned moral responsibilities. "The results may be used when designing future military robots, to control unwanted tendencies to assign responsibility to the robots." From a consequentialist view, there is a chance that robots will develop the ability to make their own logical decisions on whom to kill, and that is why there should be a set moral framework that the AI cannot override.
There has been a recent outcry with regard to the engineering of artificial intelligence weapons, including ideas of a robot takeover of mankind. AI weapons do present a type of danger different from that of human-controlled weapons. Many governments have begun to fund programs to develop AI weaponry. The United States Navy recently announced plans to develop autonomous drone weapons, paralleling similar announcements by Russia and Korea. Due to the potential of AI weapons becoming more dangerous than human-operated weapons, Stephen Hawking and Max Tegmark signed a "Future of Life" petition to ban AI weapons. The message posted by Hawking and Tegmark states that AI weapons pose an immediate danger and that action is required to avoid catastrophic disasters in the near future.
"If any major military power pushes ahead with the AI weapon development, a global arms race
An arms race occurs when two or more groups compete in military superiority. It consists of a competition between two or more states to have superior armed forces; a competition concerning production of weapons, the growth of a military, and t ...
is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow", says the petition, which includes Skype
Skype () is a proprietary telecommunications application operated by Skype Technologies, a division of Microsoft, best known for VoIP-based videotelephony, videoconferencing and voice calls. It also has instant messaging, file transfer, deb ...
co-founder Jaan Tallinn and MIT professor of linguistics Noam Chomsky
Avram Noam Chomsky (born December 7, 1928) is an American public intellectual: a linguist, philosopher, cognitive scientist, historian, social critic, and political activist. Sometimes called "the father of modern linguistics", Chomsky is ...
as additional supporters against AI weaponry.
Physicist and Astronomer Royal Sir Martin Rees has warned of catastrophic instances like "dumb robots going rogue or a network that develops a mind of its own." Huw Price, a colleague of Rees at Cambridge, has voiced a similar warning that humans might not survive when intelligence "escapes the constraints of biology". These two professors created the Centre for the Study of Existential Risk at Cambridge University in the hope of avoiding this threat to human existence.
Regarding the potential for smarter-than-human systems to be employed militarily, the Open Philanthropy Project writes that these scenarios "seem potentially as important as the risks related to loss of control", but research investigating AI's long-run social impact has spent relatively little time on this concern: "this class of scenarios has not been a major focus for the organizations that have been most active in this space, such as the Machine Intelligence Research Institute (MIRI) and the Future of Humanity Institute (FHI), and there seems to have been less analysis and debate regarding them".
Opaque algorithms
Approaches like machine learning with neural networks can result in computers making decisions that neither they nor the humans who programmed them can explain. It is difficult for people to determine if such decisions are fair and trustworthy, potentially leading to bias in AI systems going undetected, or to people rejecting the use of such systems. This has led to advocacy and, in some jurisdictions, legal requirements for explainable artificial intelligence.
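As a concrete illustration of one post-hoc explanation technique (one approach among many, not a method required by any particular regulation), the following sketch trains a small, otherwise opaque neural network on synthetic data and estimates each feature's influence by permutation: shuffling an important feature degrades accuracy, while shuffling an irrelevant one does not.

```python
# Hedged sketch of permutation feature importance on synthetic data, applied
# to an otherwise opaque model. Illustrative only.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
# Feature 0 drives the label, feature 2 matters slightly, feature 1 not at all.
y = (X[:, 0] + 0.2 * X[:, 2] > 0).astype(int)

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=1).fit(X, y)
baseline = model.score(X, y)

# Shuffle one column at a time; the drop in accuracy estimates that feature's importance.
for j in range(3):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    print(f"feature {j}: accuracy drop {baseline - model.score(X_perm, y):.3f}")
```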
Singularity
Many researchers have argued that, by way of an "intelligence explosion", a self-improving AI could become so powerful that humans would not be able to stop it from achieving its goals (Luke Muehlhauser and Louie Helm, "Intelligence Explosion and Machine Ethics", in Singularity Hypotheses: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, James H. Moor, and Eric Steinhart, Berlin: Springer, 2012). In his paper "Ethical Issues in Advanced Artificial Intelligence" and subsequent book ''Superintelligence: Paths, Dangers, Strategies'', philosopher Nick Bostrom argues that artificial intelligence has the capability to bring about human extinction. He claims that general superintelligence would be capable of independent initiative and of making its own plans, and may therefore be more appropriately thought of as an autonomous agent. Since artificial intellects need not share our human motivational tendencies, it would be up to the designers of the superintelligence to specify its original motivations. Because a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its goals, many uncontrolled unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference (Nick Bostrom, "Ethical Issues in Advanced Artificial Intelligence", in Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, edited by Iva Smit and George E. Lasker, 12–17, Vol. 2, Windsor, ON: International Institute for Advanced Studies in Systems Research / Cybernetics, 2003).
However, instead of overwhelming the human race and leading to our destruction, Bostrom has also asserted that superintelligence can help us solve many difficult problems such as disease, poverty, and environmental destruction, and could help us to "enhance" ourselves.
The sheer complexity of human value systems makes it very difficult to make AI's motivations human-friendly. Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not "common sense". According to Eliezer Yudkowsky, there is little reason to suppose that an artificially designed mind would have such an adaptation. AI researchers such as Stuart J. Russell, Bill Hibbard, Roman Yampolskiy, Shannon Vallor, Steven Umbrello and Luciano Floridi have proposed design strategies for developing beneficial machines.
Actors in AI ethics
There are many organisations concerned with AI ethics and policy, public and governmental as well as corporate and societal.
Amazon, Google, Facebook, IBM, and Microsoft have established a non-profit, The Partnership on AI to Benefit People and Society, to formulate best practices on artificial intelligence technologies, advance the public's understanding, and to serve as a platform about artificial intelligence. Apple joined in January 2017. The corporate members will make financial and research contributions to the group, while engaging with the scientific community to bring academics onto the board.
The IEEE put together a Global Initiative on Ethics of Autonomous and Intelligent Systems which has been creating and revising guidelines with the help of public input, and accepts as members many professionals from within and without its organization.
Traditionally, government has been used by societies to ensure ethics are observed through legislation and policing. There are now many efforts by national governments, as well as transnational government and non-government organizations to ensure AI is ethically applied.
Intergovernmental initiatives
* The European Commission has a High-Level Expert Group on Artificial Intelligence. On 8 April 2019, this published its "Ethics Guidelines for Trustworthy Artificial Intelligence". The European Commission also has a Robotics and Artificial Intelligence Innovation and Excellence unit, which published a white paper on excellence and trust in artificial intelligence innovation on 19 February 2020.
* The OECD established an OECD AI Policy Observatory.
Governmental initiatives
* In the United States the Obama administration put together a Roadmap for AI Policy. The Obama administration released two prominent white papers on the future and impact of AI. In 2019 the White House, through an executive memo known as the "American AI Initiative", instructed NIST (the National Institute of Standards and Technology) to begin work on Federal Engagement of AI Standards (February 2019).
* In January 2020, in the United States, the Trump administration released a draft executive order issued by the Office of Management and Budget (OMB) on "Guidance for Regulation of Artificial Intelligence Applications" ("OMB AI Memorandum"). The order emphasizes the need to invest in AI applications, boost public trust in AI, reduce barriers for usage of AI, and keep American AI technology competitive in a global market. There is a nod to the need for privacy concerns, but no further detail on enforcement. The advancement of American AI technology seems to be the focus and priority. Additionally, federal entities are even encouraged to use the order to circumvent any state laws and regulations that a market might see as too onerous to fulfill.
* The Computing Community Consortium (CCC) weighed in with a 100-plus page draft report – ''A 20-Year Community Roadmap for Artificial Intelligence Research in the US''
* The Center for Security and Emerging Technology advises US policymakers on the security implications of emerging technologies such as AI.
* The Non-Human Party is running for election in New South Wales, with policies around granting rights to robots, animals and, generally, non-human entities whose intelligence has been overlooked.
* In Russia, the first-ever Russian "Codex of ethics of artificial intelligence" for business was signed in 2021. It was developed together with major commercial and academic institutions such as Sberbank, Yandex, Rosatom, Higher School of Economics, Moscow Institute of Physics and Technology, ITMO University, Nanosemantics, Rostelecom, CIAN and others.
Academic initiatives
* There are three research institutes at the University of Oxford that are centrally focused on AI ethics. The Future of Humanity Institute focuses both on AI safety and the governance of AI. The Institute for Ethics in AI, directed by John Tasioulas, has as its primary goal, among others, to promote AI ethics as a field proper in comparison to related applied ethics fields. The Oxford Internet Institute, directed by Luciano Floridi, focuses on the ethics of near-term AI technologies and ICTs.
* The AI Now Institute at NYU is a research institute studying the social implications of artificial intelligence. Its interdisciplinary research focuses on the themes of bias and inclusion, labour and automation, rights and liberties, and safety and civil infrastructure.
* The Institute for Ethics and Emerging Technologies (IEET) researches the effects of AI on unemployment and policy.
* The Institute for Ethics in Artificial Intelligence (IEAI) at the Technical University of Munich, directed by Christoph Lütge, conducts research across various domains such as mobility, employment, healthcare and sustainability.
Private organizations
* Algorithmic Justice League
* Black in AI
* Data for Black Lives
* Queer in AI
Role and impact of fiction
The role of fiction with regard to AI ethics has been a complex one. One can distinguish three levels at which fiction has impacted the development of artificial intelligence and robotics: Historically, fiction has prefigured common tropes that have not only influenced goals and visions for AI, but also outlined ethical questions and common fears associated with it. During the second half of the twentieth century and the first decades of the twenty-first century, popular culture, in particular movies, TV series and video games, has frequently echoed preoccupations and dystopian projections around ethical questions concerning AI and robotics. Recently, these themes have also been increasingly treated in literature beyond the realm of science fiction. And, as Carme Torras, research professor at the ''Institut de Robòtica i Informàtica Industrial'' (Institute of Robotics and Industrial Computing) at the Technical University of Catalonia, notes, in higher education science fiction is also increasingly used for teaching technology-related ethical issues in technological degrees.
History
Historically speaking, the investigation of moral and ethical implications of "thinking machines" goes back at least to the Enlightenment: Leibniz already poses the question of whether we might attribute intelligence to a mechanism that behaves as if it were a sentient being, and so does Descartes, who describes what could be considered an early version of the Turing test.
The Romantic period has several times envisioned artificial creatures that escape the control of their creator with dire consequences, most famously in Mary Shelley's ''Frankenstein''. The widespread preoccupation with industrialization and mechanization in the 19th and early 20th century, however, brought ethical implications of unhinged technical developments to the forefront of fiction: ''R.U.R – Rossum's Universal Robots'', Karel Čapek's play of sentient robots endowed with emotions used as slave labor, is not only credited with the invention of the term 'robot' (derived from the Czech word for forced labor, ''robota'') but was also an international success after it premiered in 1921. George Bernard Shaw's play ''Back to Methuselah'', published in 1921, questions at one point the validity of thinking machines that act like humans; Fritz Lang's 1927 film ''Metropolis'' shows an android leading the uprising of the exploited masses against the oppressive regime of a technocratic society.
Impact on technological development
While the anticipation of a future dominated by potentially indomitable technology has fueled the imagination of writers and filmmakers for a long time, one question has been less frequently analyzed, namely to what extent fiction has played a role in providing inspiration for technological development. It has been documented, for instance, that the young Alan Turing saw and appreciated G.B. Shaw's play ''Back to Methuselah'' in 1933 (just three years before the publication of his first seminal paper, which laid the groundwork for the digital computer), and he would likely have been at least aware of plays like ''R.U.R.'', which was an international success and translated into many languages.
One might also ask what role science fiction played in establishing the tenets and ethical implications of AI development: Isaac Asimov conceptualized his Three Laws of Robotics in the 1942 short story "Runaround", part of the short story collection ''I, Robot''; Arthur C. Clarke's short story "The Sentinel", on which Stanley Kubrick's film ''2001: A Space Odyssey'' is based, was written in 1948 and published in 1952. Another example (among many others) would be Philip K. Dick's numerous short stories and novels, in particular ''Do Androids Dream of Electric Sheep?'', published in 1968, which features its own version of a Turing Test, the ''Voight-Kampff Test'', to gauge the emotional responses of androids indistinguishable from humans. The novel later became the basis of the influential 1982 movie ''Blade Runner'' by Ridley Scott.
Science fiction has been grappling with the ethical implications of AI developments for decades, and has thus provided a blueprint for ethical issues that might emerge once something akin to general artificial intelligence has been achieved: Spike Jonze's 2013 film ''Her'' shows what can happen if a user falls in love with the seductive voice of his smartphone operating system; ''Ex Machina'', on the other hand, asks a more difficult question: if confronted with a clearly recognizable machine, made human only by a face and an empathetic and sensual voice, would we still be able to establish an emotional connection, still be seduced by it? (The film echoes a theme already present two centuries earlier, in the 1817 short story "The Sandman" by E.T.A. Hoffmann.)
The theme of coexistence with artificial sentient beings is also central to two recent novels: ''Machines Like Me'' by Ian McEwan, published in 2019, involves (among many other things) a love triangle involving an artificial person as well as a human couple. ''Klara and the Sun'' by Nobel Prize winner Kazuo Ishiguro, published in 2021, is the first-person account of Klara, an 'AF' (artificial friend), who is trying, in her own way, to help the girl she is living with, who, after having been 'lifted' (i.e. having been subjected to genetic enhancements), is suffering from a strange illness.
TV series
While ethical questions linked to AI have been featured in science fiction literature and feature films for decades, the emergence of the TV series as a genre allowing for longer and more complex storylines and character development has led to some significant contributions that deal with the ethical implications of technology. The Swedish series ''Real Humans'' (2012–2013) tackled the complex ethical and social consequences linked to the integration of artificial sentient beings into society. The British dystopian science fiction anthology series ''Black Mirror'' (2013–2019) was particularly notable for experimenting with dystopian scenarios linked to a wide variety of recent technological developments. Both the French series ''Osmosis'' (2020) and the British series ''The One'' deal with the question of what can happen if technology tries to find the ideal partner for a person. Several episodes of the Netflix series ''Love, Death & Robots'' have imagined scenes of robots and humans living together; a representative example is the first episode of the second season, which shows how bad the consequences can be when humans rely too heavily on robots in their daily lives and those robots get out of control.
Future visions in fiction and games
The movie ''The Thirteenth Floor'' suggests a future where simulated worlds with sentient inhabitants are created by computer game consoles for the purpose of entertainment. The movie ''The Matrix'' suggests a future where the dominant species on planet Earth are sentient machines and humanity is treated with utmost speciesism. The short story "The Planck Dive" suggests a future where humanity has turned itself into software that can be duplicated and optimized, and the relevant distinction between types of software is sentient and non-sentient. The same idea can be found in the Emergency Medical Hologram of the starship ''Voyager'' in ''Star Trek: Voyager'', an apparently sentient copy of a reduced subset of the consciousness of its creator, Dr. Zimmerman, who, for the best motives, has created the system to give medical assistance in case of emergencies. The movies ''Bicentennial Man'' and ''A.I.'' deal with the possibility of sentient robots that could love. ''I, Robot'' explored some aspects of Asimov's three laws. All these scenarios try to foresee possibly unethical consequences of the creation of sentient computers.
The ethics of artificial intelligence is one of several core themes in BioWare's ''Mass Effect'' series of games. It explores the scenario of a civilization accidentally creating AI through a rapid increase in computational power via a global-scale neural network. This event caused an ethical schism between those who felt that bestowing organic rights upon the newly sentient Geth was appropriate and those who continued to see them as disposable machinery and fought to destroy them. Beyond the initial conflict, the complexity of the relationship between the machines and their creators is another ongoing theme throughout the story.
''Detroit: Become Human'' is one of the best-known recent video games to address the ethics of artificial intelligence. Quantic Dream designed the game's chapters as interactive storylines to give players a more immersive experience. Players control three awakened androids who, confronted with different events, make choices aimed at changing how humans view androids, and different choices lead to different endings. It is one of the few games that places players in an android's perspective, allowing them to better consider the rights and interests of robots should a true artificial intelligence ever be created.
Over time, debates have tended to focus less and less on ''possibility'' and more on ''desirability'', as emphasized in the "Cosmist" and "Terran" debates initiated by Hugo de Garis and Kevin Warwick. A Cosmist, according to Hugo de Garis, actually seeks to build more intelligent successors to the human species.
Experts at the University of Cambridge have argued that AI is portrayed in fiction and nonfiction overwhelmingly as racially White, in ways that distort perceptions of its risks and benefits.
See also
Notes
External links
* Ethics of Artificial Intelligence at the ''Internet Encyclopedia of Philosophy''
* Ethics of Artificial Intelligence and Robotics at the ''Stanford Encyclopedia of Philosophy''
* BBC News: Games to take on a life of their own, an article on humanity's fear of artificial intelligence.
* AI Ethics Guidelines Global Inventory by Algorithmwatch