The UK has taken the view that programming might in the future represent an acceptable form of meaningful human control, and that research into such possibilities should not be pre-emptively banned. In future, such systems might even reduce civilian casualties.

At the Geneva meeting on LARs, many nations and experts supported the idea of meaningful human control; however, China believes that there exists no clear definition and no clear guideline as to what constitutes meaningful control. As a result, the boundaries between human control and autonomy can be blurred in the development of future technologies. Hence China wishes to propose an amendment to help define the term "meaningful control" as a preventative measure against a possible significant humanitarian crisis. Currently, there are many disparities in the definition of meaningful control, and China seeks to clarify the definition.
Furthermore, China believes that all robots built and used for military warfare must fall under the restrictions of meaningful control under our proposed amendment.
Moreover, on the issue of meaningful control, China believes that, given the increasing rate of technological advancement, the day will come when fully autonomous robots are developed and considered for warfare. Fully autonomous robots are categorised on a completely different level from current autonomous systems, given their ability to select targets and engage without human intervention. Additionally, China believes that the root of the concern regarding autonomous robots is the fear of sentient robots. Therefore China will propose a ban on the production, testing, acquisition and deployment of fully autonomous robots. Too often, the UN acts upon a humanitarian issue only after it has happened; China believes it is time for the UN to take pre-emptive action on this issue and ban fully autonomous robots before they come into existence.
China believes that:

- Robotic weapons systems should not be making life-and-death decisions on the battlefield on their own, as this would be inherently wrong, morally and ethically
- Fully autonomous weapons are likely to violate international humanitarian law due to their lack of moral consideration and conscience
The idea is that human control over life and death decisions must always be significant; in other words, it must be considerably more than none at all and, putting it bluntly, it must also involve more than the mindless pressing of a button in response to machine-processed information. According to current practice, a human operator of weapons must have sufficient information about the target and sufficient control of the weapon, and must be able to assess its effects, in order to be able to make decisions in accordance with international law. But how much human judgment can be transferred into a technical system and exercised by algorithms before human control ceases to be meaningful; in other words, before warfare is quite literally dehumanized? One thing seems clear: in the future, certain time limits would have to apply if LAWS are not to become a reality across a broad front. The fact is that the human brain needs time for complex evaluation and decision-making processes, time which must not be denied to it in the interaction between human and machine, if the human role is to remain relevant; in other words, if the decision-making process is merely to be supported, not dominated, by the machine.

[Photo caption: Some of ICRAC's members in discussion at the UN in Geneva]

The concept of meaningful human control is, at present, not fully fleshed out, and in the further course of the CCW process there will undoubtedly be considerable wrangling over precisely how it should be filled with meaning. In that process, the Campaign will be pressing for the greatest possible role for the exercise of human judgment, not only in relation to killing but also in other decisions on the use of violence or non-lethal force. Against this background, members of ICRAC are currently (and have been for some time) thinking in more depth about what (meaningful) human control can and should be about, e.g. both in terms of differentiating discrete levels of supervisory control from an analytical perspective (Sharkey 2014) and in terms of a normative reminder to seek a definition that is as clear-cut and simple as possible, with as little ambiguity as possible (Gubrud 2014). In its working paper series, ICRAC has already been thinking even further ahead, pondering the design of legally binding instruments and suggesting verification and compliance measures for a possible future convention on autonomous weapons (Gubrud and Altmann 2013).