Last Thursday, the US State Department outlined a new vision for the development, testing, and verification of military systems, including weapons that use artificial intelligence.
The Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy represents a US attempt to steer the development of military artificial intelligence at a crucial time for the technology. The document is not legally binding on the US military, but it is hoped that allied nations will agree to its principles, creating a kind of global standard for the responsible creation of artificial intelligence systems.
Among other things, the declaration says that military artificial intelligence must be developed in accordance with international law, that countries must be transparent about the principles behind their technology, and that high standards must be applied to testing the performance of artificial intelligence systems. It also states that humans alone should make decisions about the use of nuclear weapons.
As for autonomous weapons systems, US military leaders have often given assurances that a human will remain "in the loop" for decisions to use lethal force. But the official policy, first issued by the Department of Defense in 2012 and updated this year, does not require that to be the case.
Attempts to establish an international ban on autonomous weapons have so far come to nothing. The International Committee of the Red Cross and campaign groups such as Stop Killer Robots have been pushing for an agreement at the United Nations, but some major powers – the US, Russia, Israel, South Korea, and Australia – are reluctant to commit.
One reason is that many in the Pentagon see increased use of artificial intelligence throughout the military, including in non-weapons systems, as vital and inevitable. They argue that a ban would slow US progress and put its technology at a disadvantage against rivals such as China and Russia. The war in Ukraine has shown how quickly autonomy, in the form of cheap, disposable drones made increasingly capable by machine-learning algorithms that help them perceive and act, can help secure an advantage in a conflict.
Earlier this month, I wrote about former Google CEO Eric Schmidt's personal mission to strengthen the Pentagon's AI capabilities to ensure the US keeps up with China. It was just one story to emerge from months of reporting on efforts to embed artificial intelligence in critical military systems, and on how AI is becoming central to US military strategy, even as many of the technologies involved remain nascent and untested in any crisis.
Lauren Kahn, a fellow at the Council on Foreign Relations, hailed the new US declaration as a potential building block for more responsible use of military artificial intelligence around the world.
Several countries already have weapons that operate without direct human control under limited conditions, such as anti-missile defenses that must respond with superhuman speed to be effective. Wider use of artificial intelligence could mean more scenarios where systems operate autonomously, such as when drones operate outside of communication range or in swarms too complex for human control.
Some claims about the need for AI in weapons, especially from the companies developing the technology, still seem far-fetched. There have been reports of fully autonomous weapons being used in recent conflicts and of AI assisting in targeted military strikes, but these have not been confirmed, and many soldiers may in fact be wary of systems that rely on algorithms that are far from flawless.
And yet, if autonomous weapons cannot be banned, their development will continue. That will make it vital to ensure that the AI involved behaves as expected, even though the engineering needed to fully realize intentions like those in the new US declaration has yet to be perfected.