The 1949 Geneva Conventions’ four main treaties and their three subsequent protocols, along with the Hague Conventions and the Geneva Protocol, lay down rules for the use of conventional, biological, and chemical weapons in armed conflict, defining the basic rights of military personnel, civilians, prisoners of war, and the wounded and sick in times of war.
These international agreements, particularly as they apply to the types and use of particular weaponry, are products of the time in which they were written. The most recent update was agreed more than 40 years ago, in the 1980 UN Convention on Certain Conventional Weapons. It is time for another.
New technological challenges
The history of developing new treaties shows that by the time an issue affecting warfare has been identified, interpreted against existing agreements, written as a document drafted to international legal standards, negotiated, and rewritten – it may be obsolete, as the nature of armed conflict may have already evolved.
Many believe that is the case with the existing agreements – particularly given the challenge posed by the “hyper leap” in capabilities that modern technologies such as artificial intelligence (AI), robotics, and cyber weapons are bringing to the battlefield.
The rapid advancements in AI and drone technology have accelerated as a result of expediency on the battlefield in Ukraine. This has raised ethical and moral concerns in the minds of politicians, scientists, human rights experts and, not least, the military. There are fears that the fusion of AI, autonomous decision-making, and lethal weapons has the potential to rapidly get out of control unless we make a conscious decision to apply the brakes.
The Biden Administration has been pressing for international consensus on the need to legislate ethical guidelines for any AI-driven process that results in lethal action, including a level of human supervision of any autonomous decision-making. This initiative has, to date, failed to win the support of nations including China, Russia, and North Korea.
Geoffrey Hinton, dubbed the “godfather of AI” for his work on artificial neural networks, quit Google because of his concerns about the technology. In a December interview with the Japanese news outlet Yomiuri Shimbun, he voiced concern about the direction research was taking in weapons development and other areas of decision-making.
In echoes of the “Terminator” movies, he said there was inherent danger in surrendering sole decision-making capability to AI systems themselves where grand-scale objectives such as climate change were involved. He said that AI could identify humanity itself as the problem and could institute safeguards to protect itself from human interference, to avoid being disconnected.
“AI systems could rapidly come to outperform humans in an increasing number of tasks… They threaten to amplify social injustice, erode social stability, and weaken our shared understanding of reality that is foundational to society,” Hinton said.
He said he was even more concerned about the development of AI-equipped weapons capable of independently making targeting and attack decisions, eliminating human control.
Referencing the development of the Geneva Conventions and the restrictions on chemical warfare, which only came about following the disastrous events of World War I, he was concerned that the international community would only act once there was an AI-instigated catastrophe.
Hinton said efforts to obtain international agreement on the regulation of AI were hamstrung “because of the competition between big tech companies as well as between nations.”
AI’s capability to improve the efficiency of weapon systems, especially combined with technological advances in sensors, communications, and computing, is already enhancing the ability of drones on the battlefield and undoubtedly has the potential to do the same for other tactical and strategic weapons.
Hinton was jointly awarded the 2024 Nobel Prize in Physics with John Hopfield “for foundational discoveries and inventions that enable machine learning with artificial neural networks.”
The need for legislation
There is growing momentum toward conferring stand-alone decision-making capability on AI-enabled weapons, such that the day when some become fully autonomous is not far off.
The failure to agree on, and adhere to, ethical principles and constraints in the application of AI to military use represents a potentially significant future risk to the rights and protections currently afforded to both the military and non-combatants.
Now is the time for governments and other organizations to get ahead of the curve and, at the very least, initiate the discussion on bringing the global AI arms race under control.
In my view that should go further than mere talk. The “powers that be” should begin the undoubtedly lengthy process of devising and adding a protocol to the Geneva Conventions that will enshrine the control of the threat posed by AI on the battlefield into international humanitarian law.
The views expressed in this opinion article are the author’s and not necessarily those of Kyiv Post.