Lethal Autonomous Weapons: Where do we go from here?


Armies around the world have always looked for ways to lessen the human cost of war and to gain military advantage. Unmanned vehicles have been developed and used for thousands of years to limit military casualties, but it was not until the two World Wars and the Cold War that the development and use of unmanned weapons truly accelerated. The last two decades have seen a rapid evolution of technology and of artificial intelligence (AI) and, as a consequence, a significant increase in the development of unmanned combat vehicles by states such as the United States, China, Russia, Israel, France and the UK. These unmanned vehicles still rely on the supervision of human operators for navigation, targeting and firing. This is changing: militaries around the world are developing new weapons with increasing autonomy that require limited or no supervision from human operators.

These weapons would constitute Lethal Autonomous Weapons Systems (LAWS). They could bring military advantages to the states that possess them and possibly reduce collateral damage to civilians. Western societies are increasingly unwilling to send troops into military operations and have them risk death or injury. LAWS would limit this risk, along with the long-term issues and costs related to the injury or death of troops; in addition, a damaged autonomous weapon can simply be repaired or destroyed. Concerning military advantage, LAWS could potentially target and fire on their own, fight for longer periods or lie in wait for specific targets, thereby reducing the risk of missing targets because of delays in decision making or in communication with the operator. LAWS would not possess human feelings and emotions, eliminating the risk of inappropriate, uncontrolled or unreliable behavior: unlike human soldiers, they cannot hate, panic or feel fear, and they have no survival instinct.

Even though the potential capabilities and military advantages of LAWS are easy to grasp in general terms, several important issues remain. They involve definitional, legal, practical and ethical considerations.

There is currently no international definition of what constitutes an autonomous weapon, or of how and under what regulations such weapons could be used. The absence of an international definition leaves many countries significant latitude to continue developing weapons with ever greater autonomy, with no overriding guidelines.

At the international level, states and NGOs have started to address the definitional and some of the legal aspects at the United Nations in Geneva, within the framework of the Convention on Certain Conventional Weapons [1]. Three meetings of experts on LAWS have taken place since 2014, and in 2016 the Fifth Review Conference decided to establish a Group of Governmental Experts on LAWS. States have tried to come up with an international definition of LAWS and have presented their views on whether these weapons could respect the rules of International Humanitarian Law (IHL).

Despite these efforts, no agreement has been reached so far on a common international definition, mostly because states cannot agree on what separates autonomy from automation, or on the level of human control involved. Today's unmanned weapons are considered automated because they are under the constant supervision and control of a human operator (except for some defensive weapons) and are incapable of learning new information or of choosing or changing their goals or targets on their own. LAWS, on the contrary, would potentially be able to choose a valid target, to fire, to change or stop attacks on their own, and to learn, evolve and possibly cooperate with each other. Most countries agree (so far) that they will not develop or launch fully autonomous weapons that would operate without any human supervision. However, no agreement exists on the degree of human control that separates an automated weapon from an autonomous one. The ICRC [2] and states such as Switzerland [3], France [4], the USA and the UK [5] have each proposed different and sometimes conflicting definitions of LAWS.

This lack of definition should not stop the international community from looking at the risks that the use of LAWS would pose, especially concerning their capacity to respect IHL: how can it be ensured that LAWS would integrate and respect IHL? Before they develop and field autonomous weapons, militaries must make sure that, once deployed, these weapons can be used in accordance with the rules of IHL. In particular, they should be able to distinguish between lawful and unlawful targets, to function in populated areas, and to operate in unclear situations such as counterinsurgency or entangled offensives. They would be bound by IHL rules such as distinction (Art. 48, 51(2) and 52(2) of Additional Protocol I (AP I)), proportionality (Art. 51(5)(b) AP I) and precaution in attack. IHL was created to be applied by humans, and its application sometimes involves decisions of a subjective or moral nature; so far, only humans can decide whether or not to comply with these rules. There is therefore considerable debate between technical and legal experts on whether LAWS can respect IHL rules. A few states (Cuba, Ecuador, Egypt, the Holy See and Pakistan), a coalition of NGOs such as the Campaign to Stop Killer Robots [6] and various scientists [7] have called for a preemptive ban on fully autonomous LAWS because they believe that these weapons will never be capable of respecting IHL rules. Other experts argue that LAWS can be programmed to follow IHL rules.

Beyond the definitional and legal issues, it is clear that weapons with increasing degrees of autonomy and of AI will also raise very important ethical and responsibility issues. Should humans delegate to machines the ability and the will to kill other human beings? Delegating life and death decisions to machines, even ones with artificial intelligence, remains a troubling, dangerous and dehumanizing notion. The use of such weapons could also create an accountability gap in cases of IHL violations, error, malfunction or hacking. For now, machines cannot willingly decide to violate IHL; but if one day they could, who would be accountable, given that International Criminal Law punishes only voluntary violations of IHL and applies only to humans? Who would be held responsible should a machine turn against its own troops, or against civilian populations?


Altogether, as development inevitably continues in the underlying technologies that will enable the gradual introduction of such weapons, it is urgent to better define and address the issues they will create and to press for an international framework governing LAWS.

[1] http://www.unog.ch/80256EE600585943/(httpPages)/8FA3C2562A60FF81C1257CE600393DF6?OpenDocument
[2] http://www.unog.ch/80256EDD006B8954/(httpAssets)/B3834B2C62344053C1257F9400491826/$file/2016_LAWS+MX_CountryPaper_ICRC.pdf
[3] http://www.unog.ch/80256EDD006B8954/(httpAssets)/A204A142AD3E3E29C1257F9B004FB74B/$file/2016.04.12+LAWS+Definitions_as+read.pdf
[4] http://www.unog.ch/80256EDD006B8954/(httpAssets)/5FD844883B46FEACC1257F8F00401FF6/$file/2016_LAWSMX_CountryPaper_France+CharacterizationofaLAWS.pdf
[5] http://www.unog.ch/80256EDD006B8954/(httpAssets)/44E4700A0A8CED0EC1257F940053FE3B/$file/2016_LAWS+MX_Towardaworkingdefinition_Statements_United+Kindgom.pdf
[6] http://www.stopkillerrobots.org/
[7] http://futureoflife.org/open-letter-autonomous-weapons/