To pick up on a theme from a previous post, a meeting is being held in Geneva from 13 to 17 April under the framework of the Convention on Certain Conventional Weapons (CCW), on the subject of lethal autonomous weapons systems (LAWS). This is the second multilateral meeting on the topic under the CCW framework.
So far, several governments, civil society organizations and experts have presented their views and questions on the potential development and deployment of autonomous systems programmed to use force on behalf of a state. Some positions expressed before the meeting can be found here.
While the discussions remain largely theoretical, these talks are fundamental to understanding the positions of UN member states on the subject and to determining whether there are grounds to launch a legal framework that would ban the development and deployment of such systems before it's too late. A campaign set up by several NGOs has gained ground over the last three years and calls for a preemptive ban on these systems. Among the most vocal NGOs in this effort are Human Rights Watch and Article 36.
It's understandable that many doubts remain about the potential of such systems. Given that LAWS do not yet exist, at least not as fully autonomous machines that seek out and attack targets based on prior programming, it's perfectly reasonable that a clarifying and enlightening debate should take place, which reinforces the usefulness of these discussions.
What is more worrying is when countries with large and technologically advanced defence industries, such as the UK, do not seem concerned about the potential of LAWS and take positions against an international ban on such systems. They claim either that international humanitarian law "already provides enough regulation on this area", as the British government put it, or assume that human control will always be exercised in such a way that no fully autonomous weapons system will ever be developed and deployed.
To be fair, this does not necessarily mean that the prospect of fully autonomous machines making life-or-death decisions poses no problem for those countries, but their expressed unwillingness to support a worldwide ban on the potential development and deployment of such systems is not the most reassuring position I've ever seen. It's almost as if the Galactic Republic were at some point to discharge the Jedi from pursuing the Sith because, after all, they don't exist anymore, right?
Once again, this is still a very murky topic, and key discussions about what constitutes autonomy and meaningful human control have yet to be held. What worries me is that countries with such large defence industries and globally reaching foreign policies are not supporting a ban on these systems.