Artificial intelligence arms race

A military artificial intelligence arms race is an arms race between two or more states to develop and deploy lethal autonomous weapons systems (LAWS). Since the mid-2010s, many analysts have noted the emergence of such an arms race between global superpowers for better military AI,[1][2] driven by increasing geopolitical and military tensions.

An AI arms race is sometimes placed in the context of an AI Cold War between the United States, Russia, and China.[3]

Terminology

Lethal autonomous weapons systems use artificial intelligence to identify and kill human targets without human intervention.[4] LAWS have colloquially been called "slaughterbots" or "killer robots". Broadly, any competition for superior AI is sometimes framed as an "arms race".[5][6] Advantages in military AI overlap with advantages in other sectors, as countries pursue both economic and military advantages.[7]

Risks

One risk concerns the AI race itself, whether or not the race is won by any one group. There are strong incentives for development teams to cut corners with regard to the safety of the system, which may result in increased algorithmic bias.[14][15] This is in part due to the perceived advantage of being the first to develop advanced AI technology. One team appearing to be on the brink of a breakthrough can encourage other teams to take shortcuts, ignore precautions and deploy a system that is less ready. Some argue that using "race" terminology at all in this context can exacerbate this effect.[16]


Another potential danger of an AI arms race is the possibility of losing control of the AI systems; the risk is compounded in the case of a race to artificial general intelligence, which may present an existential risk.[16]


A third risk concerns what happens if the race is actually won by one group: the consolidation of power and technological advantage in the hands of that group.[16] A US government report argued that "AI-enabled capabilities could be used to threaten critical infrastructure, amplify disinformation campaigns, and wage war"[17]:1, and that "global stability and nuclear deterrence could be undermined".[17]:11

Proposals for international regulation

The international regulation of autonomous weapons is an emerging issue for international law.[74] AI arms control will likely require the institutionalization of new international norms embodied in effective technical specifications combined with active monitoring and informal diplomacy by communities of experts, together with a legal and political verification process.[1][2][75] As early as 2007, scholars such as AI professor Noel Sharkey have warned of "an emerging arms race among the hi-tech nations to develop autonomous submarines, fighter jets, battleships and tanks that can find their own targets and apply violent force without the involvement of meaningful human decisions".[76][77]


Miles Brundage of the University of Oxford has argued that an AI arms race might be somewhat mitigated through diplomacy: "We saw in the various historical arms races that collaboration and dialog can pay dividends".[78] Over a hundred experts signed an open letter in 2017 calling on the UN to address the issue of lethal autonomous weapons;[79][80] however, at a November 2017 session of the UN Convention on Certain Conventional Weapons (CCW), diplomats could not agree even on how to define such weapons.[81] The Indian ambassador and chair of the CCW stated that agreement on rules remained a distant prospect.[82] As of 2019, 26 heads of state and 21 Nobel Peace Prize laureates had backed a ban on autonomous weapons.[83] However, as of 2022, most major powers continue to oppose a ban on autonomous weapons.[84]


Many experts believe attempts to completely ban killer robots are likely to fail,[85] in part because detecting treaty violations would be extremely difficult.[86][87] A 2017 report from Harvard's Belfer Center predicts that AI has the potential to be as transformative as nuclear weapons.[78][88][89] The report further argues that "Preventing expanded military use of AI is likely impossible" and that "the more modest goal of safe and effective technology management must be pursued", such as a ban on attaching an AI dead man's switch to a nuclear arsenal.[89]

Other reactions to autonomous weapons

A 2015 open letter by the Future of Life Institute calling for the prohibition of lethal autonomous weapons systems has been signed by over 26,000 citizens, including physicist Stephen Hawking, Tesla magnate Elon Musk, Apple's Steve Wozniak and Twitter co-founder Jack Dorsey, and over 4,600 artificial intelligence researchers, including Stuart Russell, Bart Selman and Francesca Rossi.[90][81] The Future of Life Institute has also released two fictional films, Slaughterbots (2017) and Slaughterbots - if human: kill() (2021), which portray threats of autonomous weapons and promote a ban, both of which went viral.


Professor Noel Sharkey of the University of Sheffield argues that autonomous weapons will inevitably fall into the hands of terrorist groups such as the Islamic State.[91]

Disassociation

Many Western tech companies avoid being associated too closely with the U.S. military, for fear of losing access to China's market.[40] Furthermore, some researchers, such as DeepMind CEO Demis Hassabis, are ideologically opposed to contributing to military work.[92]


For example, in June 2018, company sources at Google said that top executive Diane Greene told staff that the company would not pursue a follow-up to Project Maven after the current contract expired in March 2019.[57]

See also

AI alignment

A.I. Rising

Arms race

Artificial general intelligence

Artificial intelligence

Artificial intelligence detection software

Artificial Intelligence Cold War

Cold War

Ethics of artificial intelligence

Existential risk from artificial general intelligence

Lethal autonomous weapon

Military robot

Nuclear arms race

Post–Cold War era

Second Cold War

Space Race

Unmanned combat aerial vehicle

Weak AI

Further reading

Paul Scharre, "Killer Apps: The Real Dangers of an AI Arms Race", Foreign Affairs, vol. 98, no. 3 (May/June 2019), pp. 135–44. "Today's AI technologies are powerful but unreliable. Rules-based systems cannot deal with circumstances their programmers did not anticipate. Learning systems are limited by the data on which they were trained. AI failures have already led to tragedy. Advanced autopilot features in cars, although they perform well in some circumstances, have driven cars without warning into trucks, concrete barriers, and parked cars. In the wrong situation, AI systems go from supersmart to superdumb in an instant. When an enemy is trying to manipulate and hack an AI system, the risks are even greater." (p. 140.)

The National Security Commission on Artificial Intelligence. (2019). Interim Report. Washington, DC: Author.