New technologies have allowed for the recognition of humans in an abstract way. Robots can detect a human being using their RGBD sensors together with an OpenNI tracker, which establishes the person's 3D position. If someone is standing backwards or sideways, the sensor may detect the human incorrectly or mistake them for something else, though this is uncommon. Increasing the contrast of the RGBD sensor data limits the number of such errors [2].
Robot Emotions:
You and Robert describe in their article how humans prefer to work alongside robots that have a personality similar to their own. The concept of giving robots a personality is unheard of and foreign to many, as people associate having a personality with a living organism that has a soul, not a machine built out of wires and metal parts. To build a machine with a soul would be impossible, as a soul is something a person is born with and does not obtain over time, so saying that a machine created by mankind has a personality seems nearly impossible as well. Kanjo, the lead author of “Deep learning analysis of mobile physiological, environmental and location sensor data for emotion detection,” writes, “With the increased availability of low cost wearable sensors (e.g., Fitbit, Microsoft writs [sic] bands), there is an emergence in research interest in using human physiological data (e.g. galvanic skin response (GSR), heart rate (HR), electroencephalography (EEG), etc.) for emotion detection” (E. Kanjo et al. 49). He describes how human emotions cover a range of expressions that technology cannot yet represent or apply to robots and artificial intelligence. This may be an issue: if a robot cannot express all of the emotions a human can, danger may arise if robots are viewed on the same level as humans, since expressing a wide variety of emotions is a distinguishing characteristic of humans that other species lack. In the future this may be addressed so that robots can incorporate the full range of human emotions, but the technology that would permit it does not yet exist.
Kanjo introduces the concept of applying deep learning to incorporate human-like aspects into technological devices. Although an advanced topic, the deep learning process integrated into the construction of many robots can be simplified. Deep learning equips robots with artificial neural networks, layered mathematical models loosely inspired by the structure of the human brain. In a way, deep learning constructs an approximate model of the brain, using these networks to represent its connections. These methods allow the robot to make decisions, retain a memory, and pick up background information [6]. Robots given these characteristics can interact with humans, but flaws will remain, since it is impossible to perfectly replicate a brain.
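To make the idea of an artificial neural network concrete, it can be sketched in a few lines of code. The layer sizes, weights, and inputs below are illustrative assumptions, not taken from any of the systems cited here.

```python
import numpy as np

def relu(x):
    # Simple nonlinearity applied by each "neuron"
    return np.maximum(0.0, x)

# Illustrative weights: 4 sensor inputs -> 8 hidden units -> 1 output.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 1))

def forward(sensor_readings):
    """Forward pass: each layer is a weighted sum plus a nonlinearity."""
    hidden = relu(sensor_readings @ W1)  # hidden "neurons"
    score = hidden @ W2                  # decision score
    return float(score[0])

decision_score = forward(np.array([0.2, 0.7, 0.1, 0.9]))
```

In a real system the weights would be learned from labeled training data rather than drawn at random; the sketch only shows how sensor data flows through layered "neurons" to produce a decision.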
The author describes the concept of roboethics, the branch of ethics that concerns the designers and users of robots, not the robots themselves. The term cannot be applied to the actions of robots because, regardless of what a robot does, humans are responsible for whatever happens as a result of those actions. He mentions that robots are currently considered experimental technologies, a term for technologies that are still developing and limited in many ways. Robots are treated as experimental technologies because they do not yet have all of the capabilities needed to be fully capable on their own. Robots carry both risks and benefits, but at present the risks outweigh the benefits. Automated robots are also commonly studied as a social experiment, because people are still adjusting to their presence and the results of their incorporation into society are still unknown [1].
When a robot is granted full control of itself, as with a device that includes an autopilot, there will always be an option to return complete control to the user; but when will that control be given back automatically? At what point will the robot say, “I cannot control this situation because I do not have the knowledge to take control successfully and cannot make the decision that needs to be made”? Complications that arise from coding errors in robots direct attention to the programmers and companies that built the machine. The programming must be exact, or errors will arise and robots will be unable to perform correctly in certain situations. There are already many scenarios in which errors in coding and mechanical engineering have negatively impacted humans, for example, recent defects in car manufacturing. Errors in artificial intelligence machines and robots may not only hurt users physically but also give them incorrect data, thereby impacting their lives in other ways. Robots need to be programmed to know what to do in every situation, but that seems nearly impossible, as life is unpredictable and not every situation can be planned for.
“Suppose you are walking on a sidewalk and are nearing an intersection. A flashing sign indicates that you can
cross the street. To cross safely you have to rely on nearby cars to respect the rules of the road. Yet what
should happen when pedestrians (and others) are interacting with a self-driving car? Should we, as pedestrians,
trust that this car, without a human behind the wheel, will stop when we enter a crosswalk? Are we willing to
stake our lives on it?” (Howard) [5]. This situation arises frequently in discussions of whether to transition to a society dominated by self-driving cars, a form of robot programmed by humans to ease the lives of others. There have been cases where self-driving cars were at “fault” (in quotation marks because can a robot really be at fault?) and people were killed or injured. Due to the small number of automated cars currently in existence, there are not many instances of this situation arising. Automated cars are being incorporated into societies so that fewer accidents occur between two vehicles or between a vehicle and a pedestrian, so before an entire society transitions to this new style of driving, self-driving cars need reliable code and algorithms for predicting what will happen in every situation [5].
The authors propose a model that determines whether to grant control to the human operator in a given situation. The model consists of a complex equation, but simplified, it has two states: a state of failure, where control is granted to the human, and a state of passing, where the robot maintains control. Using optimization, the state of passing or failure is obtained by computing the ratio of losses to actions, from which a confidence threshold is calculated. If the confidence threshold is high enough, the equation passes and the robot maintains control; otherwise the human operator takes over. As said in the essay, “Briefly stated, the algorithms might incorrectly interpret a situation because its decision is based on flawed or omitted data. One of the primary information sources that feeds the intelligence of many autonomous vehicles is computer vision algorithms. They allow the system to see and interpret its surroundings--whether that includes the road, other cars, or even pedestrians. Computer vision algorithms have their own inherent biases, though, especially when it comes to interpretation. For example, various facial recognition systems struggle with identifying non-Caucasian faces with the same rate of accuracy. Vision systems can also be "tricked" into seeing things that are not there” (Howard). Because certain situations can trick these systems into erroneous decisions, changes need to be made to prevent such failures. As stated in the essay, few algorithms are perfected enough to interpret every situation involving humans without error. If a system cannot make correct split-second decisions about what to do and when, it will harm, rather than benefit, the humans using it [5].
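The pass/fail structure of such a model can be sketched as follows. The threshold value and the way confidence is derived from the loss-to-action ratio are illustrative assumptions; the authors' actual equation is more involved.

```python
# Hedged sketch of a confidence-threshold handoff. The threshold value
# and the confidence formula are illustrative assumptions, not the
# cited model's actual equation.

CONFIDENCE_THRESHOLD = 0.8  # assumed value, for illustration only

def who_controls(expected_loss: float, num_actions: int) -> str:
    """Return "robot" (state of passing) or "human" (state of failure)."""
    if num_actions == 0:
        return "human"  # no viable actions: fail and hand over control
    confidence = 1.0 - expected_loss / num_actions  # loss-to-action ratio
    if confidence >= CONFIDENCE_THRESHOLD:
        return "robot"  # confidence high enough: robot keeps control
    return "human"      # otherwise the human operator takes over

# Low expected loss relative to available actions: robot keeps control.
assert who_controls(expected_loss=0.5, num_actions=10) == "robot"
# High expected loss: control is handed back to the human.
assert who_controls(expected_loss=8.0, num_actions=10) == "human"
```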
The most important part of the algorithm above is that the robot must stop or divert its actions when a situation is too dangerous to proceed in a certain manner. In an emergency, the robot must not be allowed to carry on by itself; a button must be installed that removes all control from the machine and grants it to the human operator. Failure to include an emergency stop button will lead to problems such as injured people or damaged objects [1]. A recent example of a robot granting control back to a human was seen with a rehabilitation robot: if it sensed that the patient was in danger, it projected a message saying, “please contact your physician; injury may result if you continue” [5]. This method takes control away from the robot, giving the user the option either to continue on their own without the assistance of the machine (placing all legal liability on them, not the robot, because the robot directed them to stop) or to contact the physician (the human operator who will take over in this case).
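A minimal sketch of such an emergency handoff might look like the following; the class and method names are hypothetical, invented for illustration rather than taken from the cited rehabilitation system.

```python
# Hypothetical sketch of an emergency handoff. The class and method
# names are assumptions for illustration only.

class AssistiveRobot:
    def __init__(self):
        self.autonomous = True  # robot starts in control

    def step(self, danger_detected: bool) -> str:
        if danger_detected:
            # Remove all control from the machine and warn the user,
            # as the rehabilitation robot described above does.
            self.autonomous = False
            return ("please contact your physician; "
                    "injury may result if you continue")
        return "ok"

robot = AssistiveRobot()
assert robot.step(False) == "ok"   # normal operation
message = robot.step(True)         # danger: control returns to the user
assert robot.autonomous is False
```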
The bias present in many current situations involving automated robots goes against social norms that have recently been incorporated into society, such as allowing a person to change their gender at any age and the social acceptance of males and females doing things previously considered “activities of the opposite sex” (e.g. females playing video games, males wearing makeup) [5]. As Howard says in “Trust and Bias in Robots,” “Bias involves the disposition to allow our thoughts and behaviors, often implicitly or subconsciously, to be affected by our generalizations or stereotypes about others. For example, depending on the cultural biases of the algorithm, self-driving cars might be more inclined to save younger rather than older pedestrians in cases where an accident cannot be avoided. While bias is not always problematic, it can frequently lead to unfairness or other negative consequences” (Howard). These biases result from the training the robots receive before being integrated into human lives. Robots are commonly programmed to associate female activities with girls and male activities with boys, but they do not currently have the capability to recognize that there may be exceptions to such rules. Since robots lack human eyes and a brain, they cannot see a child playing with a toy and perceive the child's gender; they rely on context clues, such as the length of the child's hair, even though a child with long hair may not be a girl. Situations like this could arise, and people may be deeply offended by the assumptions the machine makes.
One main source of bias recently identified is in self-driving cars, which must be able to distinguish certain things from one another. As Howard says, “The automated systems must have the capability of deciphering vague or ambiguous data--is that sunlight reflecting from the road, sensor noise, or a small child? Again, determining how best to construct these algorithms and weighing the resulting decisions ultimately gives rise to another aspect of algorithmic bias” (Howard) [5]. If the car detects an obstruction in the road but dismisses it as a shadow when it is actually a child, the child could be hit and potentially killed. Issues like this are present in many other settings, such as robots working in factories and elsewhere, where they incorrectly assume one thing to be another.
Recently, training methods have been used to prevent bias in these devices. The robots are taught to adapt to certain situations using penalization, which raises the concern of whether robots can even be penalized, given that they do not have emotions. The goal is for robots to be “punished” when they assume something untrue about a certain thing (e.g. people, cars, colors, activities). Quoting Tzeng, the authors of “Learn from experience: Probabilistic prediction of perception performance to avoid failure” write, “This can be implemented under the recently proposed framework of adversarial training, whereby an additional adversarial network is trained to discriminate between the two domains, given the internal representations of the first network. The original network is then also trained to fool the adversary (i.e. by adding the negative of the discriminator loss), thereby encouraging it to learn representations that are domain-invariant. (Tzeng et al., 2016)” (Gurau et al., 983) [4]. This method will help to prevent bias in future programming, but it still needs to be perfected so that every case is covered and nobody is offended or hurt by the actions of a robot.
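The quoted idea of adding the negative of the discriminator loss can be sketched numerically. The placeholder loss functions and the trade-off weight below are assumptions for illustration, not the method's actual losses.

```python
# Numeric sketch of the quoted idea: the feature network's total loss
# adds the *negative* of the discriminator loss, rewarding features the
# discriminator cannot tell apart. Both loss functions and LAMBDA are
# placeholder assumptions, not the cited method's actual objectives.

def task_loss(features):
    # Placeholder for the original network's task objective.
    return sum(f ** 2 for f in features)

def discriminator_loss(features):
    # Placeholder for how separable the domains are: here, the
    # spread (variance) of the feature values.
    mean = sum(features) / len(features)
    return sum((f - mean) ** 2 for f in features) / len(features)

LAMBDA = 0.5  # assumed trade-off weight

def feature_network_loss(features):
    # Subtracting the discriminator loss encourages representations
    # that are domain-invariant, as in Tzeng et al.'s formulation.
    return task_loss(features) - LAMBDA * discriminator_loss(features)

# Identical features: the discriminator cannot separate them at all,
# so the penalty term vanishes and only the task loss remains.
assert feature_network_loss([1.0, 1.0]) == 2.0
```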
Conclusion:
The capabilities of robots are increasing rapidly. Present in many homes and workplaces, robots and humans are constantly communicating, either directly or indirectly. Robots can record and interpret data much as a human being would. They are becoming more technologically advanced and have recently begun modeling internal systems that loosely represent the human brain. Through deep learning, robots can model human emotions in their communications with people, which remains a debated topic, as robots are machines and not humans. With all of the new advances in robotic technology, an ethical debate arises over whether robots can be held responsible for their actions; in practice, the consequences are directed towards the engineers and programmers who constructed the robot. Robots cannot yet be fully integrated into the world due to current limitations, which include the inability to make split-second decisions and bias in certain situations. Evidence suggests that robots will become even more prevalent in society once those issues are resolved. Currently, the benefits of having robots in the world are outweighed by the negative aspects that come along with their presence, so they are not as completely integrated into society as they could be.
References
[1] Amigoni, F., Schiaffonati, V., “Ethics for Robots as Experimental Technologies: Pairing Anticipation with Exploration to Evaluate the Social Impact of Robotics,” IEEE Robotics and Automation Magazine, 2018, 25: 30–36.
[2] Duckworth, P., Hogg, D. C., Cohn, A. G., “Unsupervised human activity analysis for intelligent mobile robots,” Artificial Intelligence, 2019, 270: 67–92.
[3] Frank, S., “AI could decide to snitch on you,” New Scientist, 2019, 241: 16.
[4] Gurau, K., Rao, D., Tong, C. H., Posner, I., “Learn from experience: Probabilistic prediction of perception performance to avoid failure,” International Journal of Robotics Research, 2018, 37: 981–995.
[5] Howard, A., Borenstein, J., “Trust and Bias in Robots,” American Scientist, 2019, 107: 86–89.
[6] Kanjo, E., Younus, E. M. G., Ang, C. S., “Deep learning analysis of mobile physiological, environmental and location sensor data for emotion detection,” Information Fusion, 2019, 49: 46–56.
[7] You, S., Robert Jr., L. P., “Human–Robot Similarity and Willingness to Work with a Robotic Co-worker,” ACM/IEEE International Conference on Human-Robot Interaction, 2018, 251–260.