A military artificial intelligence arms race is a competition or arms race between two or more states to have their military forces equipped with the best artificial intelligence (AI). Since the mid-2010s, many analysts have noted the emergence of such a global arms race between great powers for better military AI, coinciding with and driven by the increasing geopolitical and military tensions of what some have called a Second Cold War. The context of the AI arms race is the AI Cold War narrative, in which tensions between the US and China lead to a cold war waged in the area of AI technology.


Terminology

The AI arms race is the race by global powers to develop and deploy lethal autonomous weapons systems, also known as "slaughterbots" or "killer robots": weapons systems that use artificial intelligence (AI) to identify, select, and kill human targets without human intervention. More broadly, any competition for superior AI is sometimes framed as an "arms race". A quest for military AI dominance overlaps with a quest for dominance in other sectors, especially as a country pursues both economic and military advantages.


Risks

Stephen Cave of the Leverhulme Centre argues that the risks of an AI race are threefold, each with potential geopolitical implications.

The first risk is that even if no race exists, the terminology surrounding one is dangerous. The rhetoric around the AI race and the importance of being first does not encourage the kind of thoughtful deliberation with stakeholders required to produce AI technology that is most broadly beneficial to society, and it could self-fulfillingly induce a race.

The second risk is the AI race itself, whether or not the race is won by any one group. Because of the rhetoric and the perceived advantage of being first to develop advanced AI technology, strong incentives emerge to cut corners on safety considerations, leaving out important aspects such as bias and fairness. In particular, the perception that another team is on the brink of a breakthrough encourages teams to take shortcuts and deploy AI systems that are not ready, which can harm both others and the group possessing the system. As Paul Scharre warns in ''Foreign Policy'', "For each country, the real danger is not that it will fall behind its competitors in AI but that the perception of a race will prompt everyone to rush to deploy unsafe AI systems. In their desire to win, countries risk endangering themselves just as much as their opponents." Nick Bostrom and others developed a model that provides further evidence of this. The model found that a team possessing more information about other teams' capabilities took more risks and shortcuts in developing its AI systems, and that the greater the enmity between teams, the greater the risk of precautions being ignored, leading to an AI disaster. A further danger of an AI arms race is the risk of losing control of the AI systems; that risk is compounded in a race to artificial general intelligence, which Cave noted may present an existential risk.

The third risk arises if the race is actually won by one group, consolidating power and technological advantage in its hands. If one group achieves superior AI technology, "[i]t is only reasonable to conclude that AI-enabled capabilities could be used to threaten our critical infrastructure, amplify disinformation campaigns, and wage war."

Arms-race terminology is also sometimes used in the context of competition for economic dominance and "soft power". For example, the November 2019 'Interim Report' of the United States' National Security Commission on Artificial Intelligence, while stressing the role of diplomacy in engaging with China and Russia, adopts the language of a competitive arms race: it states that US military-technological superiority is vital to the existing world order, and stresses that the ongoing US militarization of AI, together with the militarization of AI by China and Russia, is for geopolitical purposes.
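The dynamic found in the model by Bostrom and others described above (knowing more about rivals' capabilities induces more corner-cutting, raising disaster risk) can be illustrated with a toy Monte Carlo sketch. This is an invented simplification for illustration, not the authors' actual model; the team count, safety levels, and corner-cutting bonus are assumed parameters.

```python
import random

def simulate(n_teams=3, n_rounds=10_000, informed=False, seed=0):
    """Toy Monte Carlo race between AI teams.

    Each round, every team draws a capability in [0, 1) and picks a
    safety level; forgone safety is converted into a capability bonus
    (corner-cutting). The team with the highest effective capability
    "wins", and a disaster occurs if the winner's precautions fail.
    """
    rng = random.Random(seed)
    disasters = 0
    for _ in range(n_rounds):
        caps = [rng.random() for _ in range(n_teams)]
        safety = [0.8] * n_teams            # default: cautious development
        if informed:
            # Teams that can see a rival ahead of them cut corners.
            best = max(caps)
            safety = [0.8 if c == best else 0.4 for c in caps]
        effective = [c + (1.0 - s) for c, s in zip(caps, safety)]
        winner = max(range(n_teams), key=effective.__getitem__)
        if rng.random() > safety[winner]:   # winner's safety measures fail
            disasters += 1
    return disasters / n_rounds
```

Under these assumptions the informed regime produces a markedly higher simulated disaster rate than the uninformed one, because the eventual winner is usually a team that cut its safety level to catch up.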


Stances toward military artificial intelligence


Russia

Russian General Viktor Bondarev, commander-in-chief of the Russian air force, stated that as early as February 2017, Russia was working on AI-guided missiles that could decide to switch targets mid-flight. Russia's Military Industrial Committee has approved plans to derive 30 percent of Russia's combat power from remote-controlled and AI-enabled robotic platforms by 2030. Reports by state-sponsored Russian media on potential military uses of AI increased in mid-2017. In May 2017, the CEO of Russia's Kronstadt Group, a defense contractor, stated that "there already exist completely autonomous AI operation systems that provide the means for UAV clusters, when they fulfill missions autonomously, sharing tasks between them, and interact", and that it is inevitable that "swarms of drones" will one day fly over combat zones. Russia has been testing several autonomous and semi-autonomous combat systems, such as Kalashnikov's "neural net" combat module, with a machine gun, a camera, and an AI that its makers claim can make its own targeting judgements without human intervention.

In September 2017, during a National Knowledge Day address to over a million students in 16,000 Russian schools, Russian President Vladimir Putin stated, "Artificial intelligence is the future, not only for Russia but for all humankind... Whoever becomes the leader in this sphere will become the ruler of the world". Putin also said it would be better to prevent any single actor achieving a monopoly, but that if Russia became the leader in AI, it would share its "technology with the rest of the world, like we are doing now with atomic and nuclear technology".

Russia is establishing a number of organizations devoted to the development of military AI. In March 2018, the Russian government released a 10-point AI agenda, which calls for the establishment of an AI and Big Data consortium, a Fund for Analytical Algorithms and Programs, a state-backed AI training and education program, a dedicated AI lab, and a National Center for Artificial Intelligence, among other initiatives. In addition, Russia recently created a defense research organization dedicated to autonomy and robotics, the Foundation for Advanced Studies (roughly equivalent to DARPA), and initiated an annual conference on "Robotization of the Armed Forces of the Russian Federation".

The Russian military has been researching a number of AI applications, with a heavy emphasis on semiautonomous and autonomous vehicles. In an official statement on November 1, 2017, Viktor Bondarev, chairman of the Federation Council's Defense and Security Committee, stated that "artificial intelligence will be able to replace a soldier on the battlefield and a pilot in an aircraft cockpit" and later noted that "the day is nearing when vehicles will get artificial intelligence". Bondarev made these remarks shortly after the successful test of Nerehta, a crewless Russian ground vehicle that reportedly "outperformed existing [crewed] combat vehicles". Russia plans to use Nerehta as a research and development platform for AI and may one day deploy the system in combat, intelligence gathering, or logistics roles.
Russia has also reportedly built a combat module for crewless ground vehicles that is capable of autonomous target identification (and, potentially, target engagement) and plans to develop a suite of AI-enabled autonomous systems. In addition, the Russian military plans to incorporate AI into crewless aerial, naval, and undersea vehicles and is currently developing swarming capabilities. It is also exploring innovative uses of AI for remote sensing and electronic warfare, including adaptive frequency hopping, waveforms, and countermeasures. Russia has also made extensive use of AI technologies for domestic propaganda and surveillance, as well as for information operations directed against the United States and U.S. allies. The Russian government has strongly rejected any ban on lethal autonomous weapons systems, suggesting that such a ban could be ignored.


China

China is pursuing a strategic policy of military-civil fusion on AI for global technological supremacy. According to a February 2019 report by Gregory C. Allen of the Center for a New American Security, China's leadership, including paramount leader Xi Jinping, believes that being at the forefront in AI technology is critical to the future of global military and economic power competition. Chinese military officials have said that their goal is to incorporate commercial AI technology to "narrow the gap between the Chinese military and global advanced powers". The close ties between Silicon Valley and China, and the open nature of the American research community, have made the West's most advanced AI technology easily available to China; in addition, Chinese industry has numerous home-grown AI accomplishments of its own, such as Baidu passing a notable Chinese-language speech recognition capability benchmark in 2015.

As of 2017, Beijing's roadmap aims to create a $150 billion AI industry by 2030. Before 2013, Chinese defense procurement was mainly restricted to a few conglomerates; however, as of 2017, China often sources sensitive emerging technology such as drones and artificial intelligence from private start-up companies. An October 2021 report by the Center for Security and Emerging Technology found that "[m]ost of the [Chinese military's] AI equipment suppliers are not state-owned defense enterprises, but private Chinese tech companies founded after 2010". The report estimated that Chinese military spending on AI exceeded $1.6 billion each year. ''The Japan Times'' reported in 2018 that annual private Chinese investment in AI is under $7 billion per year. AI startups in China received nearly half of total global investment in AI startups in 2017, and the Chinese filed for nearly five times as many AI patents as did Americans.

China published a position paper in 2016 questioning the adequacy of existing international law to address the eventuality of fully autonomous weapons, becoming the first permanent member of the U.N. Security Council to broach the issue. In 2018, Xi called for greater international cooperation in basic AI research. Chinese officials have expressed concern that AI such as drones could lead to accidental war, especially in the absence of international norms. In 2019, former United States Secretary of Defense Mark Esper lashed out at China for selling drones capable of taking life with no human oversight.


United States

In 2014, former Secretary of Defense Chuck Hagel posited the "Third Offset Strategy", holding that rapid advances in artificial intelligence will define the next generation of warfare. According to data science and analytics firm Govini, the U.S. Department of Defense increased investment in artificial intelligence, big data, and cloud computing from $5.6 billion in 2011 to $7.4 billion in 2016. However, the civilian NSF budget for AI saw no increase in 2017. ''The Japan Times'' reported in 2018 that United States private investment in AI is around $70 billion per year. The November 2019 'Interim Report' of the United States' National Security Commission on Artificial Intelligence confirmed that AI is critical to US technological military superiority.

The U.S. has many military AI combat programs, such as the ''Sea Hunter'' autonomous warship, which is designed to operate for extended periods at sea without a single crew member, and even to guide itself in and out of port. Since 2017, a temporary US Department of Defense directive has required a human operator to be kept in the loop when it comes to the taking of human life by autonomous weapons systems. On October 31, 2019, the Department of Defense's Defense Innovation Board published the draft of a report recommending principles for the ethical use of artificial intelligence by the Department, which would ensure that a human operator is always able to look into the 'black box' and understand the kill-chain process. A major concern, however, is how the report will be implemented.

Project Maven is a Pentagon project that uses machine learning and engineering talent to distinguish people and objects in drone videos, apparently giving the government real-time battlefield command and control and the ability to track, tag, and spy on targets without human involvement. Initially the effort was led by Robert O. Work, who was concerned about China's military use of the emerging technology. Reportedly, Pentagon development stops short of acting as an AI weapons system capable of firing on self-designated targets. The project was established in a memo by the U.S. Deputy Secretary of Defense on 26 April 2017. Also known as the Algorithmic Warfare Cross Functional Team, it is, according to United States Air Force Lt. Gen. Jack Shanahan in November 2017, a project "designed to be that pilot project, that pathfinder, that spark that kindles the flame front of artificial intelligence across the rest of the [Defense] Department". Its chief, U.S. Marine Corps Col. Drew Cukor, said: "People and computers will work symbiotically to increase the ability of weapon systems to detect objects." At the second Defense One Tech Summit in July 2017, Cukor also said that the investment in a "deliberate workflow process" was funded by the Department [of Defense] through its "rapid acquisition authorities" for about "the next 36 months".

The Joint Artificial Intelligence Center (JAIC) (pronounced "jake") is an American organization exploring the use of AI (particularly edge computing), Network of Networks, and AI-enhanced communication in actual combat. It is a subdivision of the United States Armed Forces and was created in June 2018. The organization's stated objective is to "transform the US Department of Defense by accelerating the delivery and adoption of AI to achieve mission impact at scale. The goal is to use AI to solve large and complex problem sets that span multiple combat systems; then, ensure the combat Systems and Components have real-time access to ever-improving libraries of data sets and tools."


United Kingdom

In 2015, the UK government opposed a ban on lethal autonomous weapons, stating that "international humanitarian law already provides sufficient regulation for this area", but that all weapons employed by UK armed forces would be "under human oversight and control".


Israel

Israel's Harpy anti-radar "fire and forget" drone is designed to be launched by ground troops and to fly autonomously over an area to find and destroy radar that fits pre-determined criteria. The application of artificial intelligence is also expected to advance in crewless ground systems and robotic vehicles such as the Guardium MK III and later versions. These robotic vehicles are used in border defense.


South Korea

The South Korean Super aEgis II machine gun, unveiled in 2010, is used both in South Korea and in the Middle East. It can identify, track, and destroy a moving target at a range of 4 km. While the technology can theoretically operate without human intervention, in practice safeguards are installed that require manual input. A South Korean manufacturer states, "Our weapons don't sleep, like humans must. They can see in the dark, like humans can't. Our technology therefore plugs the gaps in human capability", and wants to "get to a place where our software can discern whether a target is friend, foe, civilian or military".


European Union

The European Parliament holds the position that humans must have oversight and decision-making power over lethal autonomous weapons. However, it is up to each member state of the European Union to determine its stance on the use of autonomous weapons, and the member states' mixed stances are perhaps the greatest hindrance to the European Union's ability to develop autonomous weapons. Some members, such as France, Germany, Italy, and Sweden, are developing lethal autonomous weapons; others remain undecided about the use of autonomous military weapons, and Austria has even called for a ban on such weapons. Among the systems EU member states have developed or are developing, Germany has built an active protection system, the Active Defense System, that can respond to a threat with complete autonomy in less than a millisecond, and Italy plans to incorporate autonomous weapons systems into its future military plans.


Trends

In 2014, AI specialist Steve Omohundro warned that "An autonomous weapons arms race is already taking place". According to Siemens, worldwide military spending on robotics was US$5.1 billion in 2010 and US$7.5 billion in 2015. China became a top player in artificial intelligence research in the 2010s. According to the ''Financial Times'', in 2016, for the first time, China published more AI papers than the entire European Union. When restricted to AI papers in the top 5% of cited papers, China overtook the United States in 2016 but lagged behind the European Union. 23% of the researchers presenting at the 2017 American Association for the Advancement of Artificial Intelligence (AAAI) conference were Chinese. Eric Schmidt, the former chairman of Alphabet, has predicted China will be the leading country in AI by 2025.


Proposals for international regulation

The international regulation of autonomous weapons is an emerging issue for international law. AI arms control will likely require the institutionalization of new international norms embodied in effective technical specifications, combined with active monitoring and informal diplomacy by communities of experts, together with a legal and political verification process. As early as 2007, scholars such as AI professor Noel Sharkey warned of "an emerging arms race among the hi-tech nations to develop autonomous submarines, fighter jets, battleships and tanks that can find their own targets and apply violent force without the involvement of meaningful human decisions". Miles Brundage of the University of Oxford has argued an AI arms race might be somewhat mitigated through diplomacy: "We saw in the various historical arms races that collaboration and dialog can pay dividends".

Over a hundred experts signed an open letter in 2017 calling on the UN to address the issue of lethal autonomous weapons; however, at a November 2017 session of the UN Convention on Certain Conventional Weapons (CCW), diplomats could not agree even on how to define such weapons. The Indian ambassador and chair of the CCW stated that agreement on rules remained a distant prospect. As of 2019, 26 heads of state and 21 Nobel Peace Prize laureates had backed a ban on autonomous weapons; however, as of 2022, most major powers continued to oppose one. Many experts believe attempts to completely ban killer robots are likely to fail, in part because detecting treaty violations would be extremely difficult.

A 2017 report from Harvard's Belfer Center predicts that AI has the potential to be as transformative as nuclear weapons (Allen, Greg, and Taniel Chan, "Artificial Intelligence and National Security", Harvard Kennedy School, 2017). The report further argues that "Preventing expanded military use of AI is likely impossible" and that "the more modest goal of safe and effective technology management must be pursued", such as banning the attachment of an AI dead man's switch to a nuclear arsenal.


Other reactions to autonomous weapons

A 2015 open letter by the Future of Life Institute calling for the prohibition of lethal autonomous weapons systems has been signed by over 26,000 citizens, including physicist Stephen Hawking, Tesla magnate Elon Musk, Apple's Steve Wozniak and Twitter co-founder Jack Dorsey, and over 4,600 artificial intelligence researchers, including Stuart Russell, Bart Selman, and Francesca Rossi. The Future of Life Institute has also released two fictional films, ''Slaughterbots'' (2017) and ''Slaughterbots - if human: kill()'' (2021), to warn the world about the risks of autonomous weapons and the urgent need for a ban; both went viral. Professor Noel Sharkey of the University of Sheffield has warned that autonomous weapons will inevitably fall into the hands of terrorist groups such as the Islamic State.


Disassociation

Many Western tech companies are leery of being associated too closely with the U.S. military, for fear of losing access to China's market. Furthermore, some researchers, such as DeepMind's Demis Hassabis, are ideologically opposed to contributing to military work. For example, in June 2018, company sources at Google said that top executive Diane Greene told staff that the company would not follow up Project Maven after the current contract expired in March 2019.


See also

* A.I. Rising
* Arms race
* Artificial general intelligence
* Artificial intelligence
* Artificial Intelligence Cold War
* Cold War
* Ethics of artificial intelligence
* Existential risk from artificial general intelligence
* Lethal autonomous weapon
* Military robot
* Nuclear arms race
* Post–Cold War era
* Second Cold War
* Space Race
* Unmanned combat aerial vehicle
* Weak AI


References

{{Reflist


Further reading

* Paul Scharre, "Killer Apps: The Real Dangers of an AI Arms Race", ''Foreign Affairs'', vol. 98, no. 3 (May/June 2019), pp. 135–44. "Today's AI technologies are powerful but unreliable. Rules-based systems cannot deal with circumstances their programmers did not anticipate. Learning systems are limited by the data on which they were trained. AI failures have already led to tragedy. Advanced autopilot features in cars, although they perform well in some circumstances, have driven cars without warning into trucks, concrete barriers, and parked cars. In the wrong situation, AI systems go from supersmart to superdumb in an instant. When an enemy is trying to manipulate and hack an AI system, the risks are even greater." (p. 140.)
* The National Security Commission on Artificial Intelligence. (2019). ''Interim Report''. Washington, DC: Author.