


Ethics and Lethal Ground Robots: "Free the Monster!"

Earth Thought Notebooks
Science & technology

The debate over the use of combat drones for ground strikes is already raging, on grounds of international law, humanitarian law and ethics. One emerging field, however, will undoubtedly trigger a controversy at least as great: land-based weapon systems capable of deciding on their own whether their target lives or dies. This is what the author of this article convincingly presents to us.

Are autonomous lethal land-based weapon systems (SALATs) an ethical aberration?

Agincourt (1415), Dien Bien Phu (1953-1954): honour and courage alone are sometimes (often?) no match for cold determination and efficiently used technology!

Since the invention of the first throwing weapon, the aim has always been to strike while remaining as safe as possible from the opponent's blows. Beyond this primary and legitimate concern, so-called "civilised" armies have also sought to discriminate military objectives within their environment, including their human environment. Can the use of robots, particularly those with lethal effects, fully satisfy both concerns?

It would be harmful to neglect this technological and tactical, or even strategic, advance for an ill-defined moral reason or out of a priori apprehension. The path of autonomous lethal robots must therefore be explored, particularly for land combat, so as not to find ourselves, once again, one war behind.

Despite real risks, the true immorality would be to refuse research on SALATs!


We are not talking here about remote-controlled or autonomous combat aircraft, nor about ground robots acting as "load-carrying mules" or deminers, nor about exoskeletons. The former lie outside the land domain, and the latter raise no real ethical questions, confined as they are to supporting the combatant. A SALAT [1], as defined by Gérard de Boisboissel [2], is a "mobile system, reusable in all environments [on land, in particular here], with the possibility of firing autonomously [...]. It can be mobile, in which case we will speak of an armed autonomous robot, or an autonomous lethal robot". It is not a merely automated system but an autonomous one, implementing a genuine decision-making process from detection to possible, deliberate destruction [3].

From a tactical and practical point of view, this type of equipment saves personnel, both in numbers and by keeping soldiers out of particularly hostile environments. Subject to energy autonomy, robots can provide a permanent "friendly" presence in such environments or over a given period (night, CBRN contamination, etc.), alternating with a human detachment if necessary.

Tactics versus precaution?

As Colonel Fenon [4] reminds us, the purpose of tactics is to win a confrontation through the combination of lethal or kinetic effects, by destroying or neutralizing an enemy position or an opposing fighting force. While ethics or international law may frame its execution, the use of force and weapons remains at the heart of tactical thinking. This raises the spectre of the so-called precautionary principle, invoked to seek a moratorium on, or a ban of, the development of SALATs. Organisations such as Human Rights Watch (HRW) and the International Committee for Robot Arms Control (ICRAC) are already making their voices heard at the United Nations and NATO on this point. They often forget that, although closely related, moral questions (reflection in search of ideals) and legal questions (solutions framing a reality) are two distinct fields.

Why and how should the development of SALATs be framed? A counter-productive precautionary ethic?

Let us imagine for a moment two simultaneous premises. First, at the end of a preliminary research phase, one or more technologically capable states decide to pursue the robotisation of the land battlefield, judging it a promising field. At the same time, an emerging body of law, fearing a priori that SALATs cannot meet the accepted criteria of the law of armed conflict (discrimination, proportionality...), obtains, in the absence of constructive opposition, a UN and/or NATO ban or moratorium on such research. What happens then? Again, two phenomena. The first, public: states announce that they adhere to this legislation. Here we find the traditionally "humanist" states, in good faith, and those that cannot afford such research. Missing are those that have given priority to their defence. They will be the first to drive the second phenomenon: the continued development of SALATs in a semi-clandestine environment, far removed from any concrete technical or legal regulation. Note in passing that some countries ideologically opposed to signing regulations limiting the means of warfare are already at the forefront of military robotics (the United States of America, Israel...).

A minimum ethical or legal corpus

Just as Isaac Asimov developed the three laws of robotics [5] in science-fiction books that became starting points for many other authors, we must now be able to establish a legal corpus around what can command consensus in the field of SALATs. Such rules may already exist. George Lucas [6] sets out ten principles aimed at the self-discipline of actors, respect for the law of armed conflict, and even the definition of responsibilities in the event of collateral damage. These rules would be the outline of "best practice" serving as a basis for discussion for the establishment of positive law by lawyers and practitioners [7], without encouraging clandestine or delayed research. Of course, this legislation presupposes that existing law is respected: for example, no use on the territory of a neutral state. This point is not self-evident, however, as shown by certain targeted drone strikes in Pakistan against Afghan insurgents, which undermine the legitimacy of their use.

A necessarily limited use

SALATs will be equipped with sensors enabling them to detect, identify and discriminate their objectives. The nature of these sensors remains to be determined according to the desired effects and the zones of engagement: facial recognition may be envisaged for very specific targets, vibration detectors coupled with mass or silhouette recognition for tracked vehicles... They should not, however, be engaged in contexts where they could fail and thus cause collateral damage. Use in densely populated areas against an enemy taking on the appearance of local civilians would not be wise: a human soldier will always be better able than a set of sensors, however sophisticated, to distinguish between the attitudes of those around him [8]. On the other hand, the scenarios of no man's land or of isolated personnel still offer promising fields of application. A few illustrations:

  • defence of a night bivouac in a difficult environment, allowing the small detachment it protects to regain strength;
  • protective equipment for an aircraft pilot forced down in hostile territory and awaiting recovery: a SALAT could be fitted into the ejection seat, and the pilot could activate his robot during rest periods or in the event of a confirmed threat;
  • interdiction of a no man's land between two opposing lines of defence;
  • reconnaissance and destruction in difficult environments: jungle, Malian caves or tunnels, sewers, high altitude, polluted or contaminated areas...
  • interdiction of a clearly identified, highly sensitive area;
  • combat against equipment with a specific signature (armoured vehicles, tanks, artillery, etc.);
  • covering the breaking of contact of an isolated element for a specified time...
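The use cases above all share human-set boundaries on target signature, zone and duration. As a purely illustrative sketch (none of these names, classes or thresholds come from the article; they are invented for the example), such an engagement gate could be expressed as:

```python
from dataclasses import dataclass


@dataclass
class Detection:
    signature: str    # classification produced by onboard sensors, e.g. "tracked_vehicle"
    in_zone: bool     # inside the human-defined engagement area
    timestamp: float  # seconds since the robot was activated


def may_engage(d: Detection, allowed_signatures: set[str],
               active_until: float) -> bool:
    """Engagement is permitted only for a whitelisted signature,
    inside the authorised zone, and before the human-set deadline."""
    return (d.signature in allowed_signatures
            and d.in_zone
            and d.timestamp <= active_until)
```

Everything discretionary here (the whitelist, the zone, the deadline) is fixed by humans before activation; the machine only checks membership and bounds.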

Humans would always be present... or almost always...

What is most frightening about SALATs is leaving it to a machine to decide whether or not to destroy a human target. In reality, this decision-making capacity is anything but unlimited. First, the decision to use this equipment, and where, rests entirely with humans. Even pushing the development of these devices to the extreme, no one should, historically, bear more moral responsibility than those who ordered the bombings of Hiroshima and Nagasaki [9]! Upstream, SALATs will be configured by their operators, making them highly specialized "hunters". "Backup" modes can also be included, returning control of the craft to an operator on request, or automatically, in a pre-programmed way, if one or more sensors fail [10].
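The "backup" modes just described can be illustrated with a minimal, hypothetical sketch (the function, modes and threshold are invented for illustration): control reverts to the human operator either on explicit request or automatically when too few sensors remain healthy.

```python
from enum import Enum, auto


class Mode(Enum):
    AUTONOMOUS = auto()  # robot acts within its pre-set criteria
    OPERATOR = auto()    # a human holds the controls


def control_mode(operator_requested: bool,
                 sensor_health: dict[str, bool],
                 min_healthy: int) -> Mode:
    """Decide who holds the controls: hand the craft back to the
    operator on request, or automatically when fewer than
    `min_healthy` sensors report healthy."""
    healthy = sum(sensor_health.values())
    if operator_requested or healthy < min_healthy:
        return Mode.OPERATOR
    return Mode.AUTONOMOUS
```

The point of the sketch is that the fallback is not discretionary for the machine: both triggers (the operator's request and the sensor threshold) are defined by humans in advance.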

In reality, then, the robot would have genuine autonomy only within the space, time and criteria imposed by humans. The one field never left to the SALAT, once active, is responsibility [11]. There will always be a decision-maker giving the order to use this equipment. A real effort of traceability will then be needed in order to establish who was involved and what part humans played.

A training system for exercising this responsibility will have to be set up in parallel with the development of the robots. The SALAT remains a tool, and its autonomy is relative!

The use of a SALAT is not a tactico-ethical revolution.

If we take the ethical question down to the level of the soldier or tactical leader, the decision to use a SALAT is no weightier than the decision to call in artillery fire. True, shells are fired on human order and action, after human identification. Yet the number of collateral casualties, unavoidable to some extent, shows that errors are also possible when man is fully present in the kinetic chain; the situation can change even while the shells are in flight. Still at the fighter's level, if the robot is not as subtle as the human in analysing a potential danger, at least it is not subject to anger or hatred [12]. The notion of face-to-face combat [13] recedes further, but it has been receding ever since a man first thought of striking his opponent with a spear rather than a sword.

The preservation of non-combatants is a laudable and obligatory objective. So is the goal of minimizing damage to the environment and civilian property. It must be recognized, however, that the preservation of friendly troops is also a goal set by public opinion, if not by a body of law. The search for "zero death" is therefore a challenge, certainly utopian, and the use of SALAT is a step forward in this direction. If we cannot achieve a conflict without victims (!), we must resolve to confine them as much as possible to the ranks of the enemy combatants.

The next weapon of deterrence?

In addition to being credible and usable both politically and technically, a weapon of deterrence must be adapted to the threat, as General André Beaufre demonstrated [14]. Because it cannot be employed against a terrorist group or in response to a small-scale attack, a nuclear weapon does not deter an asymmetric enemy with few personnel and no subordination to a given country. As far as France is concerned, this type of enemy currently tends to take refuge in towns or villages, but also in caves; the latter offer shelters whose conquest may recall certain aspects of the fighting of the Great War. The use of SALATs to inspect and neutralize these lairs could be calmly considered, shifting some of the stress to the opposing camp. This enemy is ready to blow himself up for his cause, as long as he takes an enemy soldier with him; he may be less determined when facing a robot. Why not imagine him surrendering if he cannot run away?

Let us take the deterrence reasoning one step further. The nuclear weapon became a real deterrent and, paradoxically, a vehicle for peace once at least two great nations possessed it. If a nation develops SALATs on a large scale, that is, enough to saturate an area without necessarily sparing the human environment, or to neutralize a symmetrical battle corps, will it be appropriate or even possible to use nuclear weapons in retaliation? Graduating the response is not straightforward. But if one or more other nations are capable of fielding the same equipment, the balance of fears may lead to a status quo, as it once did with nuclear weapons.

In any case, seeing the development of SALATs as a means of combating a shameful asymmetric enemy, or as a new weapon of deterrence, may motivate further research in this area. With what limits? Probably, for a time at least, a perverse but already existing effect in the fight against fanatical enemies, as described by Caroline Galacteros [15]: "[...] In combat, our hyper-modernity generates hyper-archaism. Our unattainable technological advance in combat pushes the adversary to oppose his choice of sacrifice to our fear of death, his maximum risk-taking to our willingness to kill at a distance, his excess to our concern for proportionality... We produce the suicide bomber, the terrorist, the hostage-taker". This far-from-negligible drawback, however, stems more from a lack of discrimination in the objectives than from the vector used. On another scale, the "peacemaker" aspect of nuclear weapons was an accidental consequence of that weapon's development; whether pursuing such an outcome intentionally would have the desired effect is unknown. This unknown brings us back to the need for rules governing research, but the military and even geopolitical stakes make refusing such research impossible.

If the usefulness of developing robotics research must still be argued, note that many originally military projects have proved invaluable through their civilian applications, particularly in new technologies: the Internet, for example. And should we really forgo research that would give us a tactical advantage over potential adversaries?

Concern for combat ethics does not preclude the use of SALATs, if only to preserve our own combatants. It does, however, require a framework for the development of what could become a future weapon of deterrence or destruction. The responsibility of those who decide to use autonomous lethal robots is in fact the essential point not to be overlooked: always know what orders, limits and criteria are given and, above all, who answers for them. To define these, rules will have to be drawn up, and those drafting them must also be able to assess what the human sense of responsibility can bear. Finally, who, or what international organization, will be able to ensure that these rules are respected? This last question is perhaps the hardest...

[1] In English: Lethal Autonomous Robotics (LAR).

[2] Research engineer at the CREC Saint-Cyr, during the colloquium "Robots and lethality" on 8 April 2014 at Les Invalides, Paris.

[3] This excludes, for example, the "Ruperts" dropped during the night of 5 to 6 June 1944 behind the Normandy beaches, M93 Hornet mines, or Counter Rocket Artillery Mortar systems.

[4] General principles of tactical thinking, lectures given at DESTIA, June 2014.

[5] Les Robots, I. Asimov, 1967: "A robot may not injure a human being or, through inaction, allow a human being to come to harm..."

[6] Professor of Ethics and Public Policy at the Naval Postgraduate School, Monterey, California (no link with the Star Wars series of films). These rules were presented in particular during the colloquium "Robots and lethality" on 8 April 2014 at Les Invalides, Paris. They include, for example, the principles of "unnecessary risk", the moral asymmetry of adversaries, the greatest conformity to the principle of proportionality, prudence, criminal negligence...

[7] Like the conventions of the Pugwash movement on the limitation of nuclear weapons, winner of the 1995 Nobel Peace Prize.

[8] Children shouting and playing with fake weapons in the vicinity of a SALAT could be misjudged by it...

[9] It is even argued that they shortened the Second World War in the Pacific, and even saved lives...

[10] Examples presented by Gérard de Boisboissel, research engineer at the CREC Saint-Cyr, during the colloquium "Robots and lethality" on 8 April 2014 at Les Invalides, Paris.

[11] For an enriching philosophical deepening of this notion of responsibility: "Combat robots and morals: anticipating responsibility", Captain Emmanuel Goffi, Penser les ailes françaises n°29.

[12] Work by Professor Ronald C. Arkin for the United States Department of Defense: "that robots will make less mistakes [...] than humans do in the battlefield".

[13] Notably defended in Théorie du drone, Grégoire Chamayou, La Fabrique, 2013.

[14] "Introduction to Strategy", André Beaufre, Armand Colin, 1963.

[15] In Le Point online, 2 October 2014.

Title: Ethics and Lethal Ground Robots: "Free the Monster!"
Author(s): Chef d'escadron Jean CHAPON