
Ethics for Robots: A Review of the 2017-2019 Literature

Commented [CAM1]: Missing most of the required information for the first page — see the syllabus. Intended citation style is missing, so it's impossible for me to give valid guidance on how well you've implemented that citation style. Your peer reviewers were at a similar disadvantage. Title itself is pretty good.

Anna Beck

Abstract:

Introduction:
The past decade has brought technological advances in many fields, including automotive technology and personal entertainment. These advancements come at the price of the problems that arise along with them. Unanticipated complications emerge with the technological growth that shapes the lives of people worldwide, and those issues can negatively impact people, doing the opposite of what such devices were intended to do: to have a positive impact and to make life easier. Technological growth exists to enhance one's life, not to detract from it. If a device causes numerous issues and negatively impacts the world as a whole, it will not be needed and will most likely be discarded. Many devices in the world today have faults and cause problems that would not have arisen without their existence, yet at the same time continue to benefit society. Two prominent and controversial examples are automated cars and the presence of artificial intelligence devices (such as Apple's Siri and Amazon's Alexa) in homes and in public. Both of these creations are intended to benefit society, reducing the number of casualties and making life easier for people; however, they fail under some circumstances when faced with a situation that requires them to make a moral decision.

Commented [CAM2]: Because this draft is so long and because we need to tighten it considerably to suit the lit review genre, I will use pink highlighting for things that can simply be cut.

Commented [CAM3]: NOTE: most of my notes are at the beginning. However, these notes are global — that is, they concern the whole draft. And each note is quite long . . . (apologies!)

Commented [CAM4]: Intro begins far far far too broadly. The title promises ethics for robots — start there. (I mean, consider, the first sentence is literally true for every decade.) Remember, in this paper, we need to present evidence first and present our claims only in the conclusion (this is exactly backwards from what other genres — and most student genres — do, so it's hard). This project needs to be tight and efficient — no extra stuff, no extra words. Avoid vaguely used sort-of academic words (like "prevalent") that are imprecise. I've highlighted a few in...

Human Interactions with Robots:

Fifty years ago, the idea of humans and robots co-working in the same workplace was a foreign concept not only to the general public, but to engineers and scientists as well. The workplace was a designated place humans attended from 9 to 5, five days a week, working alongside similarly educated people. With improvements in robotic technology, a common sight in a workplace may now be a human and a robot working together on a project. Saying "robot" does not imply a human-like figure that can walk and talk like a human; for now, it implies a machine that has some human-like capabilities. Weighing both the positive and negative effects of having humans and robots co-work, it can be concluded that the negative effects outweigh the positive. As described by You and Robert in their article "Human–Robot Similarity and Willingness to Work with a Robotic Co-worker," robots and humans differ in many ways, sometimes benefiting and sometimes hurting each other. In the introduction of their paper, they note the increasing number of robotic workers being placed in the workforce: "Amazon is adding 15,000 robots yearly to work alongside employees across its 20 fulfillment centers. In fact, robots are expected to replace as much as half the workforce in 10-20 years" (Sangseok & You 1). The two authors evaluated the trust between a human and a robotic coworker. The question arises whether "trusting a robot" means trusting the actual robot and treating it as an equal being, or trusting the code that allows the robot to run. Their results showed that trust is vital for successful workplace collaboration, but that it does not come easily, as robots are machines and not beings [7].

Commented [CAM5]: Perhaps begin the review with the ideas in this sentence? This sentence on its own is quite rich — it implies many things, and they all need to be discussed explicitly and individually: ...

Commented [CAM6]: Begin sentence here, add citation at end.

Commented [CAM7]: Can just say, "evaluated". Whenever you can, avoid the phrasing "the study" "conducted a study" and so forth ...

Commented [CAM8]: Says who? That is, if this is from you, delete. If this is from the study, we need to say something like, "Trusting a robot can be interpreted in two ways [cite]; Sangseok and You found that . . ."

Commented [CAM9]: Some citations are like this (Sanseok & You 1) and some are like this [7] — choose one method and use only that one method throughout. I recommend this one [7].
Trust, a concept that is difficult to establish in most human-to-human interaction, has proved to come more readily in human-to-robot interactions. Studies have found that humans tend to trust robots more easily than they would trust another person. One proposed reason for this phenomenon is that robots will not lie, cheat, or steal, which are the main reasons humans tend not to trust one another. As machines, robots are programmed to perform certain actions in response to human input and background activity, and those programs do not include the detrimental behaviors that cause humans to lose trust in each other. In his essay, Howard discusses the relationship between robots and humans: "The focus on trust emerges because research shows that humans tend to trust robots similarly to the way they trust other humans; thus, the concern is that people may underappreciate or misunderstand the risk associated with handing over decisions to a robot. For example, if a robotic hospital assistant is supposed to serve a drink but it cannot pick up a particular type of cup, the result may be an inconvenience for the patient receiving the drink. Or in the case of an autonomous vehicle, misplaced trust (assuming the car will stop) could be deadly for those inside and outside of the vehicle" (Howard). This passage shows that humans may lose trust in robots after an accident. The two examples provided are situations where trust between humans and robots can be broken. If humans put all of their trust in these machines, that trust may not be recoverable once broken. A human can prove trustworthiness by making up for mistakes with emotional and meaningful gestures, but robots lack those skills, so the trust will not be regained as easily, if at all. Howard also discusses the incorporation of robots into therapy sessions for mental disorders and for people seeking help with other issues: "Our own research shows that trust and compliance increase when participants interact with robot therapy partners rather than with a human therapist. In these therapy exercise scenarios, we show that participants are not only compliant with respect to the exercise guidance provided by their robot therapy partners, but there is a trend of higher self-reported trust" (Howard) [5]. As previously stated, robots may be seen as more trustworthy than humans because they are coded and cannot lie or overshare information they are not programmed to share. Ethics come into play in robot-human therapy sessions, however, if a human tells a robot something illegal that authorities need to be notified about. Robots need to be programmed to do the correct thing when given certain harmful information (a concept discussed in depth later in this essay).

Commented [CAM10]: The paragraph above (and many below) deal with only one study. This is fine for this draft, however, for the final, we will need to rearrange things. Instead of going study by study, you...

Commented [CAM11]: I think there's something important in here but this is not clear.

Commented [CAM12]: Says who? That is, this is a big claim, it's hard to prove "reasons" — save this for the end. Instead, just work on demonstrating that humans trust robots. Give the actual data from multiple studies.

Robot Interactions with Humans

Commented [CAM13]: Your sectioning is generally good — these sections make sense. So, the rearranging is mostly about rearranging things within sections.

The fact that robots may be capable of carrying out a full conversation with a human may be a tough concept to grasp for some, given that the novelty of communicating with a robot can make the idea frightening. Robot interactions with humans may not be direct interactions, but rather interactions in which the robot processes information it obtains directly from the human beings in its vicinity. Many recent examples of this form of communication come from artificial intelligence devices that many people have in their houses and on their phones, such as Amazon's Alexa, Google's Home, and Apple's Siri. These devices are new to the world and are capable of recording information at all times while turned on, even if that information is not intended for them. Recently, these devices have been of assistance at crime scenes, recording conversations that may have occurred before a murder or picking up words that may raise suspicion of drug use. For example, as stated in "Ethics for Robots as Experimental Technologies: Pairing Anticipation with Exploration to Evaluate the Social Impact of Robotics," "A digital assistant faced with a drug-taking teen would weigh up the demands of the different points of view and try to find a course of action that pleases them all. It does this by mapping out the various arguments from each stakeholder, noting which ones clash ("involve the police" versus "respect individual autonomy", for example). Conflicting demands are removed, and the system decides a course of action based on those that remain." The robot must make a decision based on its "interaction" with a human in this situation. An advanced robot may speak with the human and ask follow-up questions about the situation, but a less advanced one, such as Apple's Siri or Amazon's Alexa, may not be able to hold a full conversation and may simply record the data, storing it and possibly sending it to authorities when certain keywords trigger the device to do so. [1]

New technologies have allowed for the recognition of humans in an abstract way. Robots can detect a human being by using RGBD sensors and an OpenNI tracker, which establishes the person's 3D position. If someone is standing backwards or sideways, the sensor may detect the human incorrectly or mistake them for something else, but it usually does not go wrong. Increasing the contrast of the RGBD sensor's data further limits the number of errors that occur. [2]
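The detection idea in this paragraph can be caricatured in a few lines. The sketch below is not the OpenNI tracker's actual API (which is considerably more involved); it is a hypothetical, minimal illustration of depth-based person detection: scan a depth image for enough pixels at a plausibly human distance. The function name, thresholds, and toy frame are all invented for this example.

```python
# Hypothetical sketch of depth-based person detection -- NOT the OpenNI API.
# A depth image is a 2D grid of distances (in metres); a "person" here is
# simply a sufficiently large set of pixels within a plausible depth band.

def detect_person(depth_image, near=0.5, far=4.0, min_pixels=6):
    """Return True if enough pixels fall in the human-plausible depth band."""
    hits = [
        (r, c)
        for r, row in enumerate(depth_image)
        for c, d in enumerate(row)
        if near <= d <= far
    ]
    return len(hits) >= min_pixels

# A toy 4x4 depth frame: background at 8 m, a "person-sized" patch at ~2 m.
frame = [
    [8.0, 8.0, 8.0, 8.0],
    [8.0, 2.1, 2.0, 8.0],
    [8.0, 2.2, 1.9, 8.0],
    [8.0, 2.0, 2.1, 8.0],
]
print(detect_person(frame))        # a 6-pixel patch in range -> True
print(detect_person([[8.0] * 4]))  # empty scene -> False
```

A real tracker additionally checks the shape and connectivity of the region, which is exactly where the sideways/backwards errors described above can creep in.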

Robot Emotions:
Sangseok and You describe in their article how humans prefer to work alongside robots that have a personality similar to their own. The concept of giving robots a personality is unheard of and foreign to many, as people associate having a personality with a living organism that has a soul, not a machine built of wires and metal parts. To build a machine with a soul would be impossible, as a soul is something a person is born with rather than acquires over time, so saying that a machine created by mankind has a personality seems nearly impossible as well. Kanjo, the author of "Deep learning analysis of mobile physiological, environmental and location sensor data for emotion detection," writes, "With the increased availability of low cost wearable sensors (e.g., Fitbit, Microsoft writs [sic] bands), there is an emergence in research interest in using human physiological data (e.g. galvanic skin response (GSR), heart rate (HR), electroencephalography (EEG), etc.) for emotion detection" (E. Kanjo et al. 49). He describes how human emotions cover a large range of expressions that cannot yet be represented by technology or applied to robots and artificial intelligence. This may be an issue because, if a robot cannot express all of the emotions a human is capable of expressing, danger may arise when robots are viewed on the same level as humans, since expressing a wide variety of emotions is a distinguishing characteristic of humans that other species lack. In the future this may be addressed, and robots may come to incorporate the full range of human emotion, but the technology to permit that does not yet exist.

Kanjo introduces the concept of deep learning as applied to the incorporation of human-like aspects into technological devices. Although an advanced topic, the deep learning process integrated into the construction of many robots can be simplified. Deep learning trains man-made neural networks that are loosely inspired by the networks of neurons in the human brain. In a rough sense, deep learning constructs a simplified model of the brain, using layered networks of artificial neurons. These methods allow a robot to make decisions, keep a memory, and pick up background information [6]. Robots given these characteristic features can interact with humans, but flaws will remain, since it is impossible to perfectly replicate a brain.
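As a loose illustration of that idea, and not a reproduction of any model from the sources reviewed here, a neural network reduces to layers of weighted sums passed through nonlinear functions. A minimal two-layer forward pass, with weights chosen arbitrarily for the example, might look like:

```python
import math

# A minimal two-layer feed-forward network in pure Python.
# Weights and biases here are arbitrary example values, not trained ones.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One dense layer: weighted sum of inputs plus bias, then sigmoid."""
    return [
        sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
        for ws, b in zip(weights, biases)
    ]

def forward(inputs):
    hidden = layer(inputs, [[0.5, -0.2], [0.3, 0.8]], [0.1, -0.1])
    output = layer(hidden, [[1.2, -0.7]], [0.05])
    return output[0]

y = forward([0.9, 0.4])
print(0.0 < y < 1.0)  # a sigmoid output always lies strictly between 0 and 1
```

Training ("deep learning" proper) is the process of adjusting those weights from data; the forward pass is the part the deployed robot runs.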

Construction of an Ethical Robot:


There is a set of legal rules or regulations for almost anything that comes to mind. Does that include robots? The number of robots in the world has increased rapidly with the advances in electrical and computer technology, and there is currently no set of laws or regulations pertaining to their existence. Laws and regulations normally apply to humans, as humans have the capacity to fully understand them and the means to follow them. As stated before, robots are not beings, they do not have a soul, and therefore should not be considered humans under any circumstances. Does this mean that they should have to follow a set of rules, or should potential legal issues fall back on the creator of the robot, even if the creator did not account for a specific situation occurring? To be precise: if a robotic self-driving car hit a person who was moving in a pattern the car was not programmed to stop for, would the human creator be responsible for not precisely covering that situation? A set of laws should be created before the concept of self-driving cars is introduced into everyone's driveway, in order to avoid legal issues.

The authors describe the concept of roboethics, the type of ethics that relates to the designers and users of robots, not to the robots themselves. The term does not apply to the actions of robots because, regardless of those actions, humans are responsible for whatever happens as a result of them. They mention that robots are currently considered experimental technologies, a term for technologies that are still developing and are limited in many ways. Robots are treated as experimental technologies because they are still limited and do not have all of the capabilities needed for them to be fully sufficient. Robots have risks and benefits associated with them, but currently the risks outweigh the benefits. Automated robots are also commonly studied as a social experiment, because people are still adjusting to their presence and the results of their incorporation into society are still unknown. [1]

When a robot is granted full control of itself, as with a device that includes an autopilot, there will always be an option to give complete control back to the user; but when will that control be automatically given back? At what point will the robot say, "I cannot control this situation because I do not have the knowledge to successfully take control and cannot make the decision that needs to be made"? Complications that stem from coding errors direct attention to the programmers and companies that built the machine. The programming needs to be exact, or errors will arise and robots will not be able to perform correctly in certain situations. There are already many scenarios in which errors in coding and mechanical engineering have negatively impacted humans, for example, recent defects in car manufacturing. Errors in artificial intelligence machines and robots may not only hurt users physically, but also give users incorrect data, thereby impacting their lives in other ways. Robots would need to be coded to know what to do in every situation, but that seems nearly impossible, as life is unpredictable and every situation can never be planned for.

"Suppose you are walking on a sidewalk and are nearing an intersection. A flashing sign indicates that you can cross the street. To cross safely you have to rely on nearby cars to respect the rules of the road. Yet what should happen when pedestrians (and others) are interacting with a self-driving car? Should we, as pedestrians, trust that this car, without a human behind the wheel, will stop when we enter a crosswalk? Are we willing to stake our lives on it?" (Howard) [5]. This situation arises frequently in discussions of whether to transition to a society dominated by self-driving cars, which are a form of robot programmed by humans to ease the lives of others. There have been cases where self-driving cars were at "fault" (in quotation marks because can a robot really be at fault?) and people were killed or injured. Because of the small number of automated cars currently in existence, there are not many instances of this situation arising. Automated cars are being incorporated into society so that fewer accidents occur between two vehicles or between a vehicle and a pedestrian; so, before an entire society transitions to this new style of driving, self-driving cars need near-perfect code and algorithms for predicting what will happen in every situation [5].

Commented [CAM14]: 1. repetitive 2. you only show one source so how do your readers know that this is a frequent conversation? That is, you are providing a summary of the research AND an evaluation of the quality of that research AND then a description of what we don't know and what studies we need to do next. Only that.

Commented [CAM15]: 1. Yes, they can (by our current laws and insurance practices) 2. This is Anna commentary — this project should have no Anna commentary until the conclusion. So, there are a lot of things to be cut in this draft.

Help w/ data table!!! No data available!

The authors also propose a model that determines whether to grant control to the human operator in a given situation. The model consists of a complex equation, but in simplified terms it has two states: a state of failure, in which control is granted to the human, and a state of passing, in which the robot maintains control of the situation. Using optimization, the state of passing or failure is obtained by computing the ratio of losses to actions, from which a confidence threshold is calculated. If the confidence is high enough, the equation passes and the robot maintains control; otherwise, the human operator takes over. As stated in the essay, "Briefly stated, the algorithms might incorrectly interpret a situation because its decision is based on flawed or omitted data. One of the primary information sources that feeds the intelligence of many autonomous vehicles is computer vision algorithms. They allow the system to see and interpret its surroundings--whether that includes the road, other cars, or even pedestrians. Computer vision algorithms have their own inherent biases, though, especially when it comes to interpretation. For example, various facial recognition systems struggle with identifying non-Caucasian faces with the same rate of accuracy. Vision systems can also be "tricked" into seeing things that are not there" (Howard). Knowing that certain situations can trick robots into erroneous decisions, changes need to be made to prevent this from happening. As stated in the essay, few algorithms are perfected to interpret every situation involving humans without errors. If the systems cannot make correct split-second decisions about what to do and when, the process will harm, rather than benefit, the humans using it [5].

Commented [CAM16]: Is this actually true? That is, how did your sources find their information? Questionnaires? Surveys? Interviews? Surely they report some sort of data. Ah, I see — after looking at your references, I see that only about half are scholarly (research) articles. So, see my notes there. IMPORTANT: Add a paragraph that summarizes the methods used by each of your sources. How did they get their information?

Commented [CAM17]: Only use scare quotes if your sources do. Next, for this project, we should have no direct quotes from sources, only paraphrase and summary. Why? Because our goal is to show where our sources agree — and that requires paraphrasing from several of them all at once. A quote shows only what one source says.
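The pass/fail structure of the handover model described above can be sketched in a deliberately simplified form. The confidence formula and threshold value below are invented for illustration; the source's actual model is a more complex optimization.

```python
# Illustrative sketch of a control-handover rule -- a simplification,
# not the equation from the source. Confidence is taken here to be the
# fraction of recent actions that did NOT incur a loss.

def confidence(losses, actions):
    """1.0 = no losses; 0.0 = every action incurred a loss."""
    if actions == 0:
        return 0.0  # no track record: assume no confidence
    return 1.0 - losses / actions

def who_controls(losses, actions, threshold=0.9):
    """Robot keeps control only while confidence stays above the threshold."""
    return "robot" if confidence(losses, actions) >= threshold else "human"

print(who_controls(losses=2, actions=100))   # 0.98 >= 0.9 -> "robot"
print(who_controls(losses=30, actions=100))  # 0.70 <  0.9 -> "human"
print(who_controls(losses=0, actions=0))     # no data     -> "human"
```

The design question the sources raise is exactly where to set that threshold, since a lenient one leaves the robot in control too long and a strict one hands off constantly.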

The most important part of the algorithm above is that the robot must stop or divert its actions when a situation is too dangerous to proceed in a given manner. An emergency that arises in a robot-controlled environment must not be handled by the robot alone; a button must be installed that removes all control from the machine and grants it to the human operator. Failure to include an emergency stop button will lead to problems such as injured people or damaged objects. [1] A recent example of a robot granting control back to a human was seen with a rehabilitation robot: if it sensed that its patient was in danger, it projected a message saying "please contact your physician; injury may result if you continue." [5] This method takes control away from the robot, giving the user the option to continue on their own without the machine's assistance (placing all legal liability on the user, not the robot, because the robot directed them to stop) or to contact the physician (the human operator who takes over in this case).

Algorithmic Bias in Robots:


To successfully create a fully functioning human being, containing every aspect of the brain and all of the neurons and pathways used to make decisions, would be impossible; so what makes people think they can code a human-like robot that can make such decisions? The concept of a robot that can substitute for a human currently exists in the minds of many people, regardless of whether they are comfortable with it. There are simply aspects of a human that cannot be replicated in a machine, such as the billions of pathways in the brain that enable the decision-making processes humans are capable of. In a robot, every capability must be programmed, and the robot will not have the abilities that a human brain grants the human species. For this reason, current robots cannot detect everything at the level required for them to be completely fused into the daily lives of humans.

The bias present in many current situations involving automated robots goes against social norms that have recently been incorporated into society, such as allowing a person to change their gender at any age and the social acceptance of males and females doing things previously considered "activities of the opposite sex" (e.g. females playing video games, males wearing makeup, etc.). [5] As said by Howard in "Trust and Bias in Robots," "Bias involves the disposition to allow our thoughts and behaviors,
often implicitly or subconsciously, to be affected by our generalizations or stereotypes about others. For
example, depending on the cultural biases of the algorithm, self-driving cars might be more inclined to save
younger rather than older pedestrians in cases where an accident cannot be avoided. While bias is not always
problematic, it can frequently lead to unfairness or other negative consequences” (Howard). These biases
would result from the training the robots receive before being integrated into human lives. Robots are commonly programmed to associate female activities with girls and male activities with boys, but they do not currently have the capability to recognize that there may be exceptions to such rules. Since robots do not have a human's eyes and brain, they cannot perceive a child's gender the way a human observer watching the child play with a toy would. Robots rely on context clues, such as the length of the child's hair, even though a child with long hair may not be a girl. Situations like this one could arise, and people may be deeply offended by the assumptions made by the machine.

One main source of bias that has recently been found is with self-driving cars. Self-driving cars must have the
ability to distinguish certain things from each other. As said by Howard, “The automated systems must have
the capability of deciphering vague or ambiguous data--is that sunlight reflecting from the road, sensor noise,
or a small child? Again, determining how best to construct these algorithms and weighing the resulting
decisions ultimately gives rise to another aspect of algorithmic bias" (Howard) [5]. If the car detects an obstruction in the road but dismisses it as a shadow of something when it is actually a child, the child could be hit and potentially killed. Issues like this one are present in many other situations, such as robots working in factories and elsewhere, wherever they assume one thing incorrectly.

Recently, training methods have been used to prevent bias in these devices. The robots are taught to adapt to certain situations through penalization, which raises the question of whether robots can even be penalized, since they do not have emotions. The goal is for robots to be "punished" if they assume something untrue about a certain thing (e.g. people, cars, colors, activities, etc.). The authors of "Learn from experience: Probabilistic prediction of perception performance to avoid failure" quote Tzeng et al.: "This can be implemented under the recently proposed framework of adversarial training, whereby an additional adversarial network is trained to discriminate between the two domains, given the internal representations of the first network. The original network is then also trained to fool the adversary (i.e. by adding the negative of the discriminator loss), thereby encouraging it to learn representations that are domain-invariant. (Tzeng et al., 2016)" (Gurau et al., 983) [4]. This method will help prevent bias in future programming, but it needs to be perfected so that every case is covered and nobody is offended or hurt by the actions of a robot.
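The idea of training by penalization can be illustrated with a toy that is far simpler than the adversarial scheme the sources describe. In the hypothetical sketch below, every wrong prediction incurs a penalty that nudges the learner's weights, so assumptions the data contradict are gradually unlearned (this is a plain perceptron-style update, not Tzeng et al.'s adversarial network):

```python
# A toy "penalty" learner -- an illustration of training-by-penalization,
# far simpler than adversarial training. Each wrong prediction incurs a
# penalty that nudges the weights; assumptions the data contradict fade.

def train(samples, labels, epochs=20, lr=0.5):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            penalty = y - pred  # nonzero only on a mistake
            w = [wi + lr * penalty * xi for wi, xi in zip(w, x)]
            b += lr * penalty
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# The label depends only on the first feature; the second is a red herring,
# standing in for a spurious cue (like hair length in the example above).
xs = [(0, 0), (0, 1), (1, 0), (1, 1)]
ys = [0, 0, 1, 1]
w, b = train(xs, ys)
print([predict(w, b, x) for x in xs])  # -> [0, 0, 1, 1]
```

The adversarial version in [4] applies the same penalize-and-update principle, but the penalty comes from a second network trained to detect the unwanted cue.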

Conclusion:

The capabilities of robots are increasing exponentially, to the point that some describe them as slowly taking over the world and becoming the dominant species. Present in many homes and workplaces, robots and humans are constantly communicating, either directly or indirectly. Robots have the means to record and interpret data much as a human being would. They are becoming more technologically advanced, and more recently have been modeled around internal systems that loosely resemble the human brain. Robots can model human emotions by using deep learning in their communications with humans, which is a debatable practice, as robots are machines and not humans. With all of the new advances in robotic technology, an ethical debate arises over whether to hold robots responsible for their actions, with the consequences instead directed toward the engineers and programmers who constructed them. Robots cannot yet be fully integrated into the world because of their current limitations, which include the inability to make split-second decisions and bias in certain situations. Evidence suggests that in upcoming years, once those issues are resolved, robots will be even more prevalent in society. Currently, the benefits of having robots in the world are outweighed by the negative aspects that come with their presence, so they are not as fully integrated into society as they could be.

Commented [CAM18]: Oh heavens. Really? Too dramatic for a literature review. Our tone must be objective, neutral, scholarly throughout. So, this conclusion must do three things: • identify the gap in our current knowledge • identify the most promising line of inquiry/research (in this case, it may be a particular methodology for studying robot/human interactions) • lay out exactly what studies need to be done next. We don't need to recap anything or summarize further.
References

Commented [CAM19]: I have reformatted the first reference in the correct style. Please update the rest.

[1] Amigoni, F., Schiaffonati, V. "Ethics for Robots as Experimental Technologies: Pairing Anticipation with Exploration to Evaluate the Social Impact of Robotics," IEEE Robotics and Automation Magazine, 2018, 25: 30-36.

[2] Duckworth, P., Hogg, D. C., Cohn, A. G., "Unsupervised human activity analysis for intelligent mobile robots." Artificial Intelligence, 2019. 270: 67-92. Found via Computers and Applied Sciences (Complete) search.

[3] Frank, S., "AI could decide to snitch on you." New Scientist, 2019. 241: 16-16. Found via Computers and Applied Sciences (Complete) search.

Commented [CAM20]: Not a scholarly article — this is meant for a public audience. You may use this for your intro and conclusion only.

[4] Gurau, K., Rao, D., Tong, C. H., Posner, I., "Learn from experience: Probabilistic prediction of perception performance to avoid failure." International Journal of Robotics Research, 2018. 37: 981-995. Found via Computers and Applied Sciences (Complete) search.

[5] Howard, A., Borenstein, J., "Trust and Bias in Robots." American Scientist, 2019. 107: 86-89. Found via Computers and Applied Sciences (Complete) search.

Commented [CAM21]: Not a scholarly article — but this one has a very promising bibliography and you can track back to the scholarly sources they used.

[6] Kanjo, E., Younus, E. M. G., Ang, C. S. "Deep learning analysis of mobile physiological, environmental and location sensor data for emotion detection." Information Fusion, 2019. 49: 46-56. Found via Computers and Applied Sciences (Complete) search.

[7] Sangseok, Y., Robert Jr., L. P., "Human–Robot Similarity and Willingness to Work with a Robotic Co-worker." ACM/IEEE International Conference on Human-Robot Interaction, 2018. 251-260. Found via Computers and Applied Sciences (Complete) search.

Commented [CAM22]: This is a conference paper, not a scholarly article. We'll allow it here (but this would not be allowed in a regular-length course on campus). We must use scholarly, peer-reviewed articles only.


Things to work on/fix before final copy
● Citation style with source [x] and (authors) after using it - what exactly should it look like
● Find data for a table - it has been a bit challenging
