
Ethics for Robots: A Review of the 2017-2019 Literature

Anna Beck
Abstract:
Robots and artificial intelligence systems are increasingly present in homes, workplaces, and on public roads, yet there is no established ethical or legal framework governing their behavior. This review surveys literature from 2017-2019 on human-robot trust, robot-human interaction, machine emotion detection, algorithmic bias, and the construction of ethical robots. It argues that until problems such as split-second decision making and algorithmic bias are resolved, the risks of fully integrating robots into society outweigh the benefits.
Introduction:
The past decade has been a period of rapid technological advancement in many fields, including the automotive industry and personal entertainment. Every advancement, however, comes at the price of the problems that arise along with it. Unanticipated complications emerge from the technological growth that shapes the lives of people worldwide, and those complications can harm the very people a device was intended to help. Technology exists to enhance lives, not to diminish them; a device that causes more harm than good will likely be discarded. Yet many devices in use today have faults and cause problems that would not exist without them, while at the same time continuing to benefit society. Two prominent and controversial examples are autonomous cars and artificial intelligence assistants (such as Apple's Siri and Amazon's Alexa) in homes and public spaces. Both are intended to benefit society, reducing casualties and making life easier, but both can fail when faced with a situation that requires a moral decision.

Human Interactions with Robots:


Fifty years ago, the idea of humans and robots sharing a workplace was foreign not only to the general public but to engineers and scientists as well. The workplace was somewhere humans attended from nine to five, five days a week, working alongside people of similar backgrounds. With improvements in robotic technology, a human and a robot working together on a project is now a common sight. Here "robot" does not imply a human-like figure that walks and talks; for now, it means a machine with some human-like capabilities. Weighing the positive and negative effects of humans and robots working together, this review concludes that the negative effects currently outweigh the positive. As You and Robert describe in "Human-Robot Similarity and Willingness to Work with a Robotic Co-worker," robots and humans differ in many ways, sometimes benefiting and sometimes hurting each other. In their introduction they note the growing number of robots entering the workforce: "Amazon is adding 15,000 robots yearly to work alongside employees across its 20 fulfillment centers. In fact, robots are expected to replace as much as half the workforce in 10-20 years" (You & Robert 1). The two authors conducted a study evaluating the trust between a human and a robotic coworker. A question arises whether "trusting a robot" means trusting the robot itself and treating it as an equal being, or trusting the code that allows the robot to run. The study concluded that trust is vital for successful workplace collaboration, but that it does not come easily, as robots are machines and not beings [7].

Trust, difficult to establish even between humans, behaves differently in human-robot interaction. Studies have found that humans tend to trust robots more easily than they trust other people. One reason may be that robots will not lie, cheat, or steal, which are the main reasons humans lose trust in one another. As machines, robots are programmed to respond to particular human actions and background events, and those responses do not include the detrimental behaviors that erode trust between people. In his essay, Howard discusses the relationship between robots and humans: "The focus on trust emerges because research shows that humans tend to trust robots similarly to the way they trust other humans; thus, the concern is that people may underappreciate or misunderstand the risk associated with handing over decisions to a robot. For example, if a robotic hospital assistant is supposed to serve a drink but it cannot pick up a particular type of cup, the result may be an inconvenience for the patient receiving the drink. Or in the case of an autonomous vehicle, misplaced trust (assuming the car will stop) could be deadly for those inside and outside of the vehicle" (Howard). The two examples describe situations where trust between humans and robots can be broken. If humans place all of their trust in these machines, they may not be able to restore it once it is broken. A human can prove trustworthy again by making up for mistakes with emotional and meaningful gestures, but robots lack those skills, so the trust will not be regained as easily, if at all. Howard also discusses incorporating robots into therapy sessions for mental disorders and other issues: "Our own research shows that trust and compliance increase when participants interact with robot therapy partners rather than with a human therapist. In these therapy exercise scenarios, we show that participants are not only compliant with respect to the exercise guidance provided by their robot therapy partners, but there is a trend of higher self-reported trust" (Howard) [5]. As stated previously, robots may be seen as more trustworthy than humans because they are coded and cannot lie or overshare information they are not programmed to share. Ethics come into play in robot-led therapy, however, when a human tells a robot something illegal that authorities should be notified of. Robots need to be programmed to do the right thing when given certain harmful information (a concept discussed in depth later in this essay).

Robot Interactions with Humans


The fact that robots may be capable of carrying out a full conversation with a human can be difficult to accept, given how novel, and for some people unsettling, the idea of conversing with a machine still is. Robot-human interactions are often not direct conversations but rather interactions in which the robot processes information it obtains from the humans in its vicinity. Many recent examples of this form of communication come from artificial intelligence devices present in homes and on phones, such as Amazon's Alexa, Google Home, and Apple's Siri. These devices are capable of recording information whenever they are turned on, even when that information is not directed at them. Recently, such devices have assisted in criminal investigations, having recorded conversations that occurred before a murder or picked up words suggesting drug use. For example, as stated in "Ethics for Robots as Experimental Technologies: Pairing Anticipation with Exploration to Evaluate the Social Impact of Robotics," "A digital assistant faced with a drug-taking teen would weigh up the demands of the different points of view and try to find a course of action that pleases them all. It does this by mapping out the various arguments from each stakeholder, noting which ones clash ("involve the police" versus "respect individual autonomy", for example). Conflicting demands are removed, and the system decides a course of action based on those that remain." The robot must make a decision based on its "interaction" with a human in this situation. An advanced robot might ask the human follow-up questions, but a less advanced one, such as Apple's Siri or Amazon's Alexa, may not be able to hold a full conversation; it may simply record the data, store it, and possibly forward it to authorities when certain trigger keywords are heard. [1]
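
As a toy illustration of the keyword-trigger behavior just described, the sketch below flags a transcribed utterance when it contains a word from a watch list. The watch list, the reporting stub, and the transcript are all hypothetical assumptions; a real assistant's speech and intent pipeline is far more sophisticated.

```python
# Hypothetical sketch: keyword-triggered flagging of transcribed audio.
# The watch list and the reporting stub are illustrative assumptions,
# not the behavior of any real assistant.

WATCH_LIST = {"overdose", "deal", "stash"}  # assumed trigger words

def report_to_authorities(utterance: str) -> None:
    # Stand-in for whatever escalation path a device might implement.
    print(f"flagged for review: {utterance!r}")

def process_utterance(utterance: str) -> None:
    words = {w.strip(".,!?").lower() for w in utterance.split()}
    if words & WATCH_LIST:              # any trigger word heard?
        report_to_authorities(utterance)
    # Otherwise the utterance is simply stored or discarded.

process_utterance("Where did you hide the stash?")
```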

New technologies also allow robots to recognize humans in an abstract way. Robots can detect a human being using RGBD sensors and an OpenNI tracker, which establishes the person's 3D position. If someone is standing backwards or sideways, the sensor may detect the human incorrectly or mistake them for something else, though this does not usually happen. Increasing the contrast of the RGBD data can limit the number of such errors. [2]
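
To give a rough sense of how depth data can suggest a person, here is a minimal sketch that segments pixels in a plausible depth band and checks whether the resulting blob has roughly human proportions. The depth band, pixel-count cutoff, and aspect ratio are assumed values; real trackers such as OpenNI fit full skeleton models rather than bounding boxes.

```python
import numpy as np

# Hypothetical sketch: crude human detection from a depth image.
# The thresholds below are illustrative assumptions, not values
# from the reviewed work.

def looks_like_person(depth_m: np.ndarray,
                      near: float = 0.5, far: float = 3.0,
                      min_aspect: float = 1.5) -> bool:
    mask = (depth_m > near) & (depth_m < far)   # foreground depth band
    if mask.sum() < 500:                        # too few pixels: noise
        return False
    rows, cols = np.nonzero(mask)
    height = rows.max() - rows.min()
    width = cols.max() - cols.min()
    # A standing person's silhouette is taller than it is wide.
    return width > 0 and height / width >= min_aspect

frame = np.full((240, 320), 5.0)        # synthetic empty scene, 5 m away
frame[40:200, 140:180] = 1.8            # person-shaped blob at 1.8 m
print(looks_like_person(frame))         # True
```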

Robot Emotions:
You and Robert describe in their article how humans prefer to work alongside robots that have a personality similar to their own. The concept of giving robots a personality is unheard of and foreign to many, as people associate having a personality with a living organism that has a soul, not a machine built from wires and metal. Building a machine with a soul would be impossible, as a soul is something a person is born with rather than acquires, so the claim that a man-made machine has a personality seems equally impossible. Kanjo, the lead author of "Deep learning analysis of mobile physiological, environmental and location sensor data for emotion detection," writes, "With the increased availability of low cost wearable sensors (e.g., Fitbit, Microsoft writs [sic] bands), there is an emergence in research interest in using human physiological data (e.g. galvanic skin response (GSR), heart rate (HR), electroencephalography (EEG), etc.) for emotion detection" (E. Kanjo et al. 49). He describes how human emotions cover a range of expressions that cannot yet be represented by technology or applied to robots and artificial intelligence. This may be a problem: if a robot cannot express all of the emotions a human can, danger may arise when robots are viewed on the same level as humans, since expressing a wide variety of emotions is a distinguishing human characteristic. In the future this may be addressed, and robots may be able to hold the full range of human emotions, but the technology to permit that does not currently exist.

Kanjo introduces the concept of applying deep learning to the incorporation of human-like capabilities in technological devices. Although an advanced topic, the deep learning process integrated into the construction of many robots can be described simply. Deep learning trains artificial neural networks, layered mathematical models loosely inspired by the networks of neurons in the human brain, on large amounts of data. In a rough sense, deep learning builds an approximate model of brain-like processing, though not of the brain itself. These methods allow a robot to make decisions, retain a form of memory, and pick up background information [6]. Robots given these features can interact with humans, but flaws will remain, since it is impossible to perfectly replicate a brain.
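
To make the idea concrete, here is a minimal PyTorch sketch of the kind of network that could map physiological readings of the sort Kanjo et al. mention (GSR, heart rate) to emotion labels. The feature set, label set, and layer sizes are illustrative assumptions, not the authors' actual model.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: a tiny feedforward network mapping physiological
# features (e.g., GSR, heart rate) to emotion classes. Feature count,
# classes, and layer sizes are assumptions for illustration only.

N_FEATURES = 2            # e.g., [GSR, heart rate]
N_CLASSES = 3             # e.g., calm / stressed / excited (assumed labels)

model = nn.Sequential(
    nn.Linear(N_FEATURES, 16),
    nn.ReLU(),
    nn.Linear(16, N_CLASSES),
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on a synthetic batch of sensor readings.
x = torch.randn(32, N_FEATURES)          # fake normalized sensor data
y = torch.randint(0, N_CLASSES, (32,))   # fake emotion labels
loss = loss_fn(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.3f}")
```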

Construction of an Ethical Robot:


There is a set of legal rules or regulations for almost anything that comes to mind. Do robots have one? The prevalence of robots in the world has increased dramatically with advances in electrical and computer technology, yet there is currently no set of laws or regulations pertaining to them. Laws normally pertain to humans, who have the capacity to understand their meaning and the means to follow them. As stated before, robots are not beings and do not have souls, and therefore should not be considered humans under any circumstances. Does this mean they should have to follow a set of rules, or should any legal issue fall back on the robot's creator, even for a situation the creator did not account for? To be precise: if a self-driving car hit a person who was moving in a pattern the car was not programmed to stop for, would the human programmer be responsible for not covering that situation? A set of laws should be created before self-driving cars arrive in everyone's driveway, to avoid such legal disputes.

Amigoni and Schiaffonati describe the concept of roboethics, the branch of ethics that concerns the designers and users of robots, not the robots themselves. The term cannot be applied to the actions of robots because, regardless of what a robot does, humans are responsible for whatever happens as a result. The authors note that robots are currently considered experimental technologies, a term for technologies that are still developing and still limited in many ways. Robots are treated as experimental because they do not yet have all of the capabilities needed to be fully sufficient. They carry both risks and benefits, but currently the risks outweigh the benefits. Automated robots are also commonly studied as a social experiment, because people are still adjusting to their presence and the results of incorporating them into society are still unknown. [1]

When a robot is granted full control of itself, as with a device that includes an autopilot, there will always be an option to hand complete control back to the user; but when should that control be returned automatically? At what point should the robot say, "I cannot control this situation because I do not have the knowledge to take control successfully and cannot make the decision that needs to be made"? Complications that stem from coding errors direct attention back to the programmers and companies that built the machine. The programming needs to be exact, or errors will arise and the robot will not perform correctly in certain situations. Errors in coding and mechanical engineering have already harmed humans in many scenarios, for example in recent automotive defects. Errors in artificial intelligence machines may not only hurt users physically but also give them incorrect data, affecting their lives in other ways. Robots would need to be coded to know what to do in every situation, but that seems nearly impossible, as life is unpredictable and every situation can never be planned for.

“Suppose you are walking on a sidewalk and are nearing an intersection. A flashing sign indicates that you can cross the street. To cross safely you have to rely on nearby cars to respect the rules of the road. Yet what should happen when pedestrians (and others) are interacting with a self-driving car? Should we, as pedestrians, trust that this car, without a human behind the wheel, will stop when we enter a crosswalk? Are we willing to stake our lives on it?” (Howard) [5]. This situation arises frequently in discussions of whether to transition to a society dominated by self-driving cars, which are a form of robot programmed by humans to ease the lives of others. There have been cases where self-driving cars were at “fault” (in quotation marks, because can a robot really be at fault?) and people were killed or injured. Because there are still few automated cars in existence, such cases remain rare. Automated cars are being incorporated into society so that fewer accidents occur between two vehicles, or between a vehicle and a pedestrian; so before an entire society transitions to this new style of driving, self-driving cars need near-perfect code and algorithms for predicting what will happen in every situation [5].


The authors also propose a model for determining whether to grant control to the human operator in a given situation. The model consists of a complex equation, but in simplified terms it has two states: a state of failure, in which control is handed to the human, and a state of passing, in which the robot keeps control. Using optimization, the model computes the ratio of losses to actions and compares the result against a confidence threshold; if the confidence is high enough, the model passes and the robot maintains control, and otherwise the human operator takes over. As Howard writes, “Briefly stated, the algorithms might incorrectly interpret a situation because its decision is based on flawed or omitted data. One of the primary information sources that feeds the intelligence of many autonomous vehicles is computer vision algorithms. They allow the system to see and interpret its surroundings--whether that includes the road, other cars, or even pedestrians. Computer vision algorithms have their own inherent biases, though, especially when it comes to interpretation. For example, various facial recognition systems struggle with identifying non-Caucasian faces with the same rate of accuracy. Vision systems can also be "tricked" into seeing things that are not there” (Howard). Knowing that certain situations can trick a robot and lead to erroneous decisions, changes need to be made to prevent this from happening. Few, if any, algorithms can interpret every situation involving humans without error. If these systems cannot make correct split-second decisions about what to do and when, they will harm, rather than benefit, the humans using them [5].
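
A minimal sketch of the pass/fail handover logic described above might look like the following. The confidence source and threshold value are illustrative assumptions; the cited model derives its threshold from an optimization over losses, which is not reproduced here.

```python
# Hypothetical sketch of the pass/fail control handover described above.
# The confidence estimate and threshold are assumed values, not taken
# from the reviewed model.

CONFIDENCE_THRESHOLD = 0.9   # assumed value for illustration

def decide_controller(perception_confidence: float) -> str:
    """Return who should be in control for the current situation."""
    if perception_confidence >= CONFIDENCE_THRESHOLD:
        return "robot"       # state of passing: robot keeps control
    return "human"           # state of failure: hand control to operator

# The robot would evaluate its confidence every control cycle.
for confidence in (0.97, 0.92, 0.61):
    print(f"confidence={confidence:.2f} -> {decide_controller(confidence)}")
```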

The most important part of the algorithm above is that the robot must stop or divert its actions when a situation is too dangerous to proceed. A robot must not carry on through an emergency in a robot-controlled environment; a button must be installed that removes all control from the machine and grants it to the human operator. Failure to include an emergency stop will lead to problems such as injured people or damaged objects. [1] One recent example of a robot granting control back to a human was seen with a rehabilitation robot: when it judged its patient to be in danger, it displayed the message “please contact your physician; injury may result if you continue.” [5] This takes control away from the robot, giving the user the option either to continue on their own without the machine's assistance (placing all legal liability on them, not the robot, since the robot directed them to stop) or to contact the physician, the human operator who takes over in this case.
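
A brief sketch of how such an emergency stop could sit above the autonomy loop follows. The class structure is a hypothetical illustration; in a real system the flag would be set by a hardware interrupt wired to a physical button.

```python
# Hypothetical sketch: an emergency-stop flag that overrides autonomy.
# A plain variable stands in for a hardware-interrupt-driven signal.

class Controller:
    def __init__(self) -> None:
        self.estop_pressed = False

    def press_estop(self) -> None:
        self.estop_pressed = True

    def step(self) -> str:
        # The e-stop check comes before any autonomous behavior,
        # so no planned action can override it.
        if self.estop_pressed:
            return "halt: control returned to human operator"
        return "autonomous action"

ctrl = Controller()
print(ctrl.step())       # autonomous action
ctrl.press_estop()
print(ctrl.step())       # halt: control returned to human operator
```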

Algorithmic Bias in Robots:


Successfully creating a fully functioning artificial human, containing every aspect of the brain and all of the neurons and pathways used to make decisions, would be impossible; so what makes people think they can code a human-like robot that can make such decisions? The idea of a robot that could substitute for a human exists in the minds of many, whether or not they are comfortable with it. Some aspects of a human simply cannot be replicated in a machine, such as the billions of neural pathways that enable human decision making. In a robot, every behavior must be programmed, and the robot will not have the capabilities that a human brain gives the human species. For this reason, current robots cannot perceive everything at the level required for them to be fully integrated into the daily lives of humans.

The bias present in many current situations involving automated robots goes against social norms that have recently been accepted, such as allowing a person to change their gender and the acceptance of males and females doing things previously considered “activities of the opposite sex” (e.g., females playing video games, males wearing makeup). [5] As Howard says in “Trust and Bias in Robots,” “Bias involves the disposition to allow our thoughts and behaviors, often implicitly or subconsciously, to be affected by our generalizations or stereotypes about others. For example, depending on the cultural biases of the algorithm, self-driving cars might be more inclined to save younger rather than older pedestrians in cases where an accident cannot be avoided. While bias is not always problematic, it can frequently lead to unfairness or other negative consequences” (Howard). These biases result from the training robots receive before being integrated into human lives. Robots are commonly programmed to associate stereotypically female activities with girls and male activities with boys, but they do not currently have the ability to recognize exceptions to such rules. Lacking human eyes and a human brain, a robot cannot perceive a child's gender the way a person would; it relies on context clues, such as the length of the child's hair, even though a child with long hair may not be a girl. Situations like this could arise, and people may be deeply offended by the assumptions the machine makes.

One main source of bias recently identified involves self-driving cars, which must be able to distinguish certain things from one another. As Howard says, “The automated systems must have the capability of deciphering vague or ambiguous data--is that sunlight reflecting from the road, sensor noise, or a small child? Again, determining how best to construct these algorithms and weighing the resulting decisions ultimately gives rise to another aspect of algorithmic bias” (Howard) [5]. If the car detects an obstruction in the road but dismisses it as a shadow when it is actually a child, the child could be hit and potentially killed. Similar issues arise in many other settings, such as robots working in factories, whenever the machine assumes one thing incorrectly.

Recently, training methods have been used to prevent bias in these devices. Robots are taught to adapt to certain situations through penalization, which raises the question of whether a robot can even be penalized, given that it has no emotions. The goal is for a robot to be “punished” if it makes an untrue assumption about something (e.g., people, cars, colors, activities). The authors of “Learn from experience: Probabilistic prediction of perception performance to avoid failure” quote Tzeng et al. on one such approach: “This can be implemented under the recently proposed framework of adversarial training, whereby an additional adversarial network is trained to discriminate between the two domains, given the internal representations of the first network. The original network is then also trained to fool the adversary (i.e. by adding the negative of the discriminator loss), thereby encouraging it to learn representations that are domain-invariant. (Tzeng et al., 2016)” (Gurau et al., 983) [4]. This method can help prevent bias in future systems; it just needs to be refined so that every case is covered and no one is offended or hurt by a robot's actions.
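
For readers unfamiliar with the adversarial training Tzeng et al. describe, here is a minimal PyTorch sketch of the core idea: a feature extractor is trained to fool a domain discriminator by adding the negative of the discriminator loss, encouraging domain-invariant representations. The network sizes and synthetic data are assumptions; this illustrates the general technique, not the exact setup in Gurau et al.

```python
import torch
import torch.nn as nn

# Minimal sketch of adversarial training for domain-invariant features,
# in the spirit of Tzeng et al. (2016). Sizes and data are toy assumptions.

features = nn.Sequential(nn.Linear(10, 32), nn.ReLU())   # feature extractor
domain_clf = nn.Linear(32, 2)                            # domain adversary
ce = nn.CrossEntropyLoss()
opt_f = torch.optim.Adam(features.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(domain_clf.parameters(), lr=1e-3)

src = torch.randn(64, 10)          # synthetic "source domain" batch
tgt = torch.randn(64, 10) + 1.0    # synthetic shifted "target domain" batch
x = torch.cat([src, tgt])
domain = torch.cat([torch.zeros(64), torch.ones(64)]).long()

for step in range(100):
    # 1) Train the adversary to tell the two domains apart.
    d_loss = ce(domain_clf(features(x).detach()), domain)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the feature extractor to FOOL the adversary
    #    (the negative of the discriminator loss, as in the quote).
    #    A real system would also add the main task loss here.
    f_loss = -ce(domain_clf(features(x)), domain)
    opt_f.zero_grad(); f_loss.backward(); opt_f.step()

print(f"final discriminator loss: {d_loss.item():.3f}")
```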

Conclusion:
The capabilities of robots are increasing exponentially as they slowly spread through the world. Present in many homes and workplaces, robots and humans are constantly communicating, either directly or indirectly. Robots can record and interpret data much as a human being would. They are becoming more technologically advanced and have recently begun modeling internal systems that loosely resemble the human brain. Through deep learning, robots can model human emotions in their communications with people, a debatable development, since robots are machines and not humans. With all of the new advances in robotic technology, an ethical debate arises over whether to hold robots responsible for their actions; in practice, the consequences fall on the engineers and programmers who built them. Robots cannot be fully integrated into the world because of current limitations, including the inability to make split-second decisions and bias in certain situations. Evidence suggests that robots will become even more prevalent in society once those issues are resolved. Currently, however, the benefits of having robots in the world are outweighed by the negative aspects that accompany their presence, and so they are not as fully integrated into society as they could be.
References

[1] Amigoni, F., Schiaffonati, V., “Ethics for Robots as Experimental Technologies: Pairing Anticipation with Exploration to Evaluate the Social Impact of Robotics.” IEEE Robotics and Automation Magazine, 2018. 25: 30-36. Found via Computers and Applied Sciences (Complete) search.

[2] Duckworth, P., Hogg, D. C., Cohn, A. G., “Unsupervised human activity analysis for intelligent mobile
robots.” Artificial Intelligence, 2019. 270: 67-92. Found via Computers and Applied Sciences (Complete)
search.

[3] Frank, S., “AI could decide to snitch on you.” New Scientist, 2019. 241: 16-16. Found via Computers and
Applied Sciences (Complete) search.

[4] Gurau, C., Rao, D., Tong, C. H., Posner, I., “Learn from experience: Probabilistic prediction
of perception performance to avoid failure.” International Journal of Robotics Research,
2018. 37: 981-995. Found via Computers and Applied Sciences (Complete) search.

[5] Howard, A., Borenstein, J., “Trust and Bias in Robots.” American Scientist, 2019. 107: 86-89.
Found via Computers and Applied Sciences (Complete) search.

[6] Kanjo, E., Younus, E. M. G, Ang, C. S. “Deep learning analysis of mobile physiological,
environmental and location sensor data for emotion detection.” Information Fusion, 2019.
49: 46-56. Found via Computers and Applied Sciences (Complete) search.

[7] You, S., Robert Jr., L. P., “Human-Robot Similarity and Willingness to Work with a
Robotic Co-worker.” ACM/IEEE International Conference on Human-Robot Interaction,
2018. 251-260. Found via Computers and Applied Sciences (Complete) search.
