These are not the droids (or robots) you are looking for

This is what I hope will become the first of several posts about lethal autonomous weapon systems (LAWS), or as it has become common to call them, killer robots.


LAWS are military robots which have the technology and the capacity to identify a target, assess the target’s level of threat and use lethal force against it. LAWS can be aerial, ground or water vehicles – the platform itself is not the main point; what matters is their autonomy and the fact that they can be programmed with a set of parameters that allows them to identify a target and attack it.

But let’s leave the theoretical side for a while and step into something much more practical: meet the Samsung SGR-A1. This stationary military robot has recently been deployed by the Republic of Korea (South Korea) to perform surveillance duties along the Demilitarized Zone (DMZ) between South and North Korea (the same Demilitarized Zone which is flanked by the most heavily militarized stretches of territory on the planet, by the way). In other words, it’s a sentry bot. Its purpose is to defend a border from an invading force.

Needs a bit of marketing…perhaps brighter colours and an array of emojis that would let us know if it’s in a fiery mood or not…

The SGR-A1 is equipped with sensors which identify invaders and it is able to use deadly force against them if it deems it necessary. The act of opening fire can either be ordered remotely by a human operator from a distance – which would make the SGR-A1 a human-in-the-loop system – or decided autonomously by the machine, with human operators observing and intervening only if they choose to, also known as a human-on-the-loop system. When a human remotely operates the SGR-A1 and is behind its every shot, this machine is just a weapon operated from a distance. When the SGR-A1 operates on its own, it is a Lethal Autonomous Weapon System.
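For readers who like things spelled out, the difference between the two modes comes down to where the default sits. Here is a minimal sketch in Python – purely illustrative, with invented names such as `detect_intruder`, `authorises` and `has_vetoed`, and in no way a reflection of the SGR-A1’s actual software:

```python
# Purely illustrative sketch: hypothetical objects and method names,
# not the SGR-A1's real control software.

def human_in_the_loop(sensor, operator, weapon):
    """The machine only proposes a target; a human must authorise every shot."""
    target = sensor.detect_intruder()
    if target is not None and operator.authorises(target):
        weapon.fire(target)  # a human decision sits behind every shot


def human_on_the_loop(sensor, operator, weapon):
    """The machine decides by default; the human can only veto in time."""
    target = sensor.detect_intruder()
    if target is not None and not operator.has_vetoed(target):
        weapon.fire(target)  # fires unless a human actively intervenes
```

The two functions look almost identical, and that is precisely the point: in the first mode nothing happens until a human says yes, while in the second the machine acts unless a human manages to say no in time.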

As with every technology, the trend is for development to become cheaper, faster and less complex. Advances could potentially bring new models of high-mobility LAWS with a greater operating range, equipped with more powerful weapons. Arguments in favour of the SGR-A1 point out its durability, resilience and constant state of alert, and the fact that it cannot be affected by stress, emotions, physical or mental fatigue or trauma of any kind. In other words, remove all the human factors and you have the ultimate defensive weapon.

Could this mean that the ultimate offensive weapon would also be much better from every point of view if the human factor were removed from combat?

In principle, this does not sound bad at all. Those who defend the use of this technology claim it can render armed conflicts less onerous, reduce military casualties significantly and provide fundamental help to soldiers by doing most of the physical work a human would otherwise do. In the long run, one can even imagine future wars being extremely confined in time and space, fought strictly between autonomous machines, with humans (both civilian and military) at a safe distance from hostilities. In other words, a politician’s dream – the glory of victory, with no footage of body bags on the news.

This view completely disregards a simple fact: the vast majority of present-day armed conflicts do not fit the classic frame of an open war between states with standing armies. The Korean DMZ example only stands out because it is a relic of a Cold War conflict and an extraordinary exception in today’s world. The vast majority of today’s conflicts are fought between a state (or a coalition of states) on one side and armed militants on the other – civilians or former soldiers who have been indoctrinated and have joined religiously and/or politically radical groups, which fall outside the scope of regular armed forces.

One doesn’t need to do very in-depth research – Iraq, Syria, Libya, DR Congo, Nigeria, the Central African Republic, Sudan and Afghanistan are all examples where the state has its authority and legitimacy rejected by non-state actors in the form of armed insurgent groups. These groups don’t hesitate to use brutal methods that violate every single piece of international law regulating the use of force. I mention this because regular armed forces and states have to abide by rules – rules such as the Geneva Conventions and, before them, the Hague Conventions of 1899 and 1907. Granted, a country’s armed forces may violate the rules if they think they can get away with it, but we have the instruments and the institutions to punish those who do so – they don’t always work as they should, but that’s a discussion for another time. Regular armed forces represent a sovereign state, which is a subject of international law, and if that state is party to conventions which regulate the use of force and its soldiers violate those same conventions, there is an array of tools and procedures available to address this.

What’s my problem then? Deploying autonomous machines that use lethal force on their own against armed insurgents not only will not decide a war by itself (the idea that technology alone can win a war does not stand, otherwise every conflict in which one side deployed technologically superior assets would have been settled very quickly), but also raises serious legal and ethical concerns regarding the way such machines engage their targets and use force.

Let’s take another look at the Samsung SGR-A1. Its technology allows it to distinguish a human being and to detect whether that human being lays down their weapon and shows an intention to surrender (under international law, a soldier cannot fire upon another soldier who wishes to surrender and who lays down their weapon; doing so is a war crime). Even assuming that future LAWS will have more advanced technologies, there is a simple issue of common sense that cannot be forgotten: war is the result of human agency. This is what stands behind all the pieces of international law we have that regulate the use of force. All those Conventions were written under the assumption that force is used by sentient humans who are aware of what they are doing. Take the proportionality criterion, for example. During an armed conflict, force can be used against a legitimate target, but only in a proportionate amount – determining what constitutes a proportionate amount of force requires a human assessment, and so does deciding whether the shots should kill or not. Machines can calculate the angle and trajectory of a shot, but they cannot assess what constitutes a proportionate amount of force against a target.
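To make the contrast concrete, here is a deliberately naive sketch (my own illustration in Python, with made-up function names – not anyone’s actual targeting code): the ballistic part of the problem reduces to arithmetic, while the proportionality judgement has nothing we can put in its place.

```python
import math

# Deliberately naive illustration: made-up functions, not real targeting code.

def lead_angle(target_speed_ms: float, projectile_speed_ms: float) -> float:
    """Geometry a machine can compute: how far ahead of a target crossing at a
    right angle to aim (simplified; assumes the target is slower than the projectile)."""
    return math.asin(target_speed_ms / projectile_speed_ms)


def proportionate_force(target, context):
    """There is no formula to write here: proportionality weighs expected military
    advantage against expected harm, case by case - a human judgement that cannot
    be reduced to the data reaching the machine's sensors."""
    raise NotImplementedError("requires human assessment")
```

The first function is a solved problem of trigonometry; the second is an empty placeholder, and that emptiness is exactly where the legal and ethical trouble begins.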

Within the Itchy & Scratchy world, any amount of force is acceptable.

This leads to a problem of legal liability. As we saw earlier, soldiers are bound by laws and rules. Not only do they have to obey the rules of the military force they are part of, but when involved in an armed conflict they have to obey international law as well. Let’s now look at a conflict between a state which deploys LAWS and an armed insurgent group. Such groups have commonly used child soldiers and civilians as human shields, and they have concealed weapons and explosives under the guise of civilian structures and vehicles. How could LAWS tell who they are fighting against: whether they are a legitimate target or not; whether the object they are carrying is a weapon or simply looks like one; what to do when a legitimate target is surrounded by unarmed civilians being held as human shields…in other words, how can LAWS be programmed to minimize civilian casualties and comply with international law?

Now let’s take this one step further: an autonomous robot shoots and kills someone who was not armed or who did not fit the parameters of an enemy combatant. What happens to that robot? A machine has no legal personality, a machine cannot answer in court, a machine cannot stand trial and explain why it committed that act. Who will be legally accountable for potential violations committed by a machine? The military officers who deployed it? The programmers who set its instructions? The engineers who planned and designed it? All of them? None of them? Does evolving military technology necessarily imply an erosion of legal accountability? Or are defence contractors and politicians placing so much blind faith in such systems that they assume they are perfect and incapable of making mistakes? Are we really ready to accept that, in the near future, the decision and the act of killing someone (even a legitimate target) can be left to machines?

I, [killer] Robot?

Fortunately, not all is gloomy. There is a growing awareness of this issue, thanks in great part to the Campaign to Stop Killer Robots, a platform of organizations and groups which is actively calling for a ban on such devices before they become part of our reality. The United Nations is looking into this as well: several meetings have been held on the matter (the next will take place from 13 to 17 April 2015 in Geneva) under the framework of the Convention on Certain Conventional Weapons, with some UN member states expressing serious concerns about the prospect of LAWS becoming common. To these concerns we should also add the dangers of proliferation, theft, hacking, reverse engineering and other nightmarish hypotheses which could allow non-state actors to use such systems in the future. In this regard, I think Hollywood will be much more effective than me at exploring potential scenarios.

Begun the robot wars have not…for the time being, and I hope their beginning will never be proclaimed. Let’s keep robots without any capacity to make decisions of life or death; those are definitely not the robots we should be looking for.

“Did you hear that, R2? These new roles they are assigning us are absolutely appalling!”
