Major advancements in AI weapons and defence technology have sparked fears mankind is heading towards a “Terminator future” of war.
And as these ultra-tech weapons advance, anti-war campaigners are increasingly concerned that rogue states or terrorist groups could get their hands on them.
It comes after USV Ranger, an AI-guided warship that is part of the US Navy’s “Ghost Fleet” of uncrewed vessels, test-launched a one-and-a-half-ton surface-to-air missile in a dramatic escalation of America’s autonomous weapons programme.
It’s not the first drone warship to launch a missile but, at 3,300lbs, the two-stage SM-6 missile is roughly 100 times larger than the Rafael SPIKE missile launched from an Israeli autonomous vessel in 2017.
The development of the Ghost Fleet Overlord project is a partnership between the US Navy and the US Department of Defense’s Strategic Capabilities Office.
Officially, the autonomous vessels are intended for use as support craft for conventional forces but this latest test demonstrates that the ships can be armed.
While official Pentagon policy dictates that autonomous weapons should not fire without a human controller pulling the trigger, Secretary of the Air Force Frank Kendall told the Air Force Association’s Air, Space & Cyber Conference on September 20 that artificial intelligence had been used to help identify a target or targets in “a live operational kill chain”.
Daan Kayser, an expert in autonomous weapons working for Dutch anti-war pressure group Pax For Peace, told Central Recorder these developments represent the latest steps in a deadly AI arms race.
“There are no winners in an AI arms race,” he said, predicting that once these weapons are developed it’s only a matter of time before they are obtained by rogue states and terror groups, making the military advantage gained by developing such weapons systems “temporary and limited”.
Former Pentagon AI weapons expert Paul Scharre warns that someone could soon build “a simple, autonomous weapon in their garage”.
He says the potential is almost here: “These tools are available for free download. You can download them online,” he says. “[It] took me about three minutes online to find all of the free tools you would need to download this technology and make it happen.”
Daan Kayser points out that while AI weapons are developing fast, they can still be fooled into interpreting a perfectly innocent object as a potential threat.
He cites the example of researchers at MIT who published a video of an AI “seeing” a plastic turtle as an assault rifle.
But the Pentagon is pushing ahead with giving AI weapons the ability to open fire without a human “in the loop”.
General John Murray of the US Army Futures Command told an audience at the US Military Academy that as new types of weapons emerge, for example massive swarms of explosive drones, it may not be possible for a human to react quickly enough to defend against them.
“Is it within a human’s ability to pick out which ones have to be engaged?” he asked, adding that if over a hundred individual combat decisions need to be made in a matter of seconds, “is it even necessary to have a human in the loop?”
Timothy Chung, a developer at US military technology workshop DARPA, said that humans only slow AI weapons down, claiming that actually “the systems can do better from not having someone intervene”.
Paul Scharre says developments like Russia’s fleet of “large, ground combat vehicles that have anti-tank missiles on them” could bring about “flash wars” where two fully-automated armies open fire on each other before their human masters can tell them to stop.
AI weapons have already killed a human enemy without a direct command from a human operator, and while uncrewed aircraft, ships and even tanks might seem a good way to reduce the body counts in future wars, some experts say their use only makes war more likely.
Sidharth Kaushal, from UK defence think tank the Royal United Services Institute, thinks that the proliferation of crewless ships and planes could make it easier for aggressors to decide to fire the first shot.
Referring to a 2016 incident in which China seized a small US underwater drone, he told New Scientist: “They may become targets for sub-threshold aggression, given that damaging or destroying them involves no loss of life”.
Daan Kayser agrees, telling us: “The use of these systems will lower the threshold to go to war as there is less risk to one’s own troops.
“Also,” he continues, “states are developing cheaper [and] more expendable systems, which also lowers the risk of losing expensive military hardware.”
“This will make it easier for states to look for military solutions, instead of working on political solutions, which are necessary for a sustainable and just peace,” Daan adds.
Several times over the last few years, United Nations officials have discussed whether to implement a ban on lethal autonomous weapons systems, or what most people might call “killer robots”.
But while the politicians talk, the AI arms race is moving forward at terrifying speed.