
Cover Page

Title: Teach Our Robot To Dance

Investigators: Brian Scassellati, faculty; Dan Leyzberg, graduate student;
Eleanor Avrunin, undergraduate student, class of 2011

Department: Computer Science

Address: 51 Prospect St., New Haven, CT 06511

Phone: 732 642 8550

Email: dan.leyzberg@yale.edu

Project Period: 1 year

Research Site: Yale University, Social Robots Lab, Watson Hall, Room 400. (All
investigators are affiliated with this site.)

Certifications are attached separately.


Project Description
Introduction

We intend to conduct a psychological experiment about human-robot interaction.


We will examine whether people are better teachers/trainers (i.e. more generous
with their time and more invested in the act of teaching) when they are asked to
teach an emotionally-intelligent robot than when they are asked to teach an
emotionally-less-sophisticated robot.

The robot we'll use is called Keepon: it's yellow, a foot tall, and shaped like
two stacked tennis balls with dots for eyes and a dot for a nose. (See picture to
the right.) It is capable of rotating side to side, leaning side to side, leaning
forward and back, and squishing up and down. It can also speak (i.e. can play
audio), hear (i.e. has a microphone), and see (i.e. has a camera).

We will ask our participants to teach Keepon five short dances and we will
measure how many times they elect to teach each dance. There will be two
experimental groups: one will see the emotionally-sophisticated robot (the
treatment group); the other will see an emotionally-less-sophisticated version
(the control group). In both cases, the robot will learn at the same rate; in
fact, no matter how well or poorly any individual participant teaches, the robot
will perform the dances with the same sequence of accuracies for each participant.
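As a minimal sketch of this scripted-performance design: every participant sees the same score sequence for a song, regardless of teaching quality. All score values and song names below are hypothetical placeholders; the actual schedule is not specified in this document.

```python
# Illustrative sketch of the scripted-performance design: the robot's
# displayed scores are fixed in advance and identical for every participant.
# All numbers and song names are hypothetical placeholders.

SCORE_SCHEDULE = {
    "song_1": [45, 60, 75],  # score after the 1st, 2nd, 3rd teaching of song 1
    "song_2": [50, 70, 85],
}

def scripted_score(song, attempt):
    """Return the score displayed after the given (0-indexed) teaching attempt."""
    scores = SCORE_SCHEDULE[song]
    # Plateau at the last scripted score if the participant keeps re-teaching.
    return scores[min(attempt, len(scores) - 1)]
```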

Purpose
The purpose of the experiment is to find out whether people teach differently
when teaching an emotionally-intelligent robot than when teaching an
emotionally-less-sophisticated one. We hypothesize that people will be better
teachers in the case of the more emotionally-sophisticated robot.

Background

Last year, our lab studied how people talk when they teach a robot. [1] We found
that people approach robots like they do pets or other people: they talk a lot to
the robot in the form of guidance, encouragement, and feedback all throughout
the robot's actions. Humans-teaching-robots is a much-studied field at the
intersection of psychology and computer science: this research anticipates the
need for robot-builders to know how people will want to teach and use the
machines they build. [2-5] As artificial intelligence becomes more powerful, robots
become more useful; and as robots become more useful, those who build them
need to know how people can more easily control, train, and teach them.

In this experiment, we're studying whether the robot's emotional intelligence
will incentivize people to be better teachers. Throughout this document, when we
refer to an "emotionally-intelligent" (or "emotionally-sophisticated") robot, we
are referring specifically to the coping behavior described under Test
Procedures: a robot whose remarks about its mistakes follow a meaningful,
context-appropriate sequence of coping strategies rather than a random one.
Significance

Knowing how people behave when they teach our robot to dance will let us
generalize about better ways to build robotic assistants, teammates, and
therapeutic agents of many sorts. Robots are used now to help rehabilitate stroke
victims [6], help people stick to their diets [7], help elderly people cope with
depression [8], and many more applications are currently in development in our
lab and elsewhere. Our research will inform whether emotional-intelligence (of the
sort we're implementing) ought to be part of these sorts of robot therapies.

Test Procedures

Participants will be asked to teach Keepon a series of five short dances, displayed
on a screen behind the robot. They will stand on a Nintendo Wii Fit Balance Board,
a white scale-like device that measures how a person is standing (i.e. whether
he/she is tilting, leaning, rotating, scrunching) as a means of conducting the
robot's movements. The participants will be asked to follow the dance moves on
the screen in order to help the robot learn the moves. As they dance, the robot
will seem like it is imitating their movements. The dance moves are a series of
body poses (tilting left or right, leaning forward or back, rotating left or
right, or scrunching up or down). A song will last between 30 and 60 seconds,
after which a score will be displayed on the screen measuring the robot's
performance on that trial. Then the participant will be asked to choose whether
they want to teach that song again or move on to the next song.
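The balance-board-to-pose mapping described above could be sketched as follows. The four-corner sensor layout matches the Wii Balance Board hardware, but the thresholds and pose names are illustrative assumptions, and rotation (which the actual system also detects) is omitted for brevity.

```python
# Illustrative sketch of classifying a participant's stance from the Wii
# Balance Board's four corner load sensors. Thresholds and pose names are
# assumptions for illustration, not the experiment's actual implementation.

def classify_pose(top_left, top_right, bottom_left, bottom_right, total_weight):
    """Classify a stance from the four corner sensor readings (e.g. in kg)."""
    left = top_left + bottom_left
    right = top_right + bottom_right
    front = top_left + top_right
    back = bottom_left + bottom_right
    total = left + right
    if total == 0:
        return "neutral"
    if total < 0.8 * total_weight:       # reduced load suggests crouching down
        return "scrunch_down"
    if (left - right) / total > 0.15:    # weight shifted to the left side
        return "lean_left"
    if (right - left) / total > 0.15:
        return "lean_right"
    if (front - back) / total > 0.15:    # weight shifted toward the toes
        return "lean_forward"
    if (back - front) / total > 0.15:
        return "lean_back"
    return "neutral"
```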

Throughout this process, the robot will remark about how it feels about its
learning. In the case of the emotionally-sophisticated robot, it will deliver a
sequence of emotionally-intelligent coping strategies to deal with its mistakes: "At
least I'm getting some exercise!" (i.e. finding a silver lining), "I'll pay more
attention next time, I promise." (i.e. planning to improve), "I'm so sorry!" (i.e.
making amends). The less-sophisticated version will deliver these same lines, but
in random order.
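The two speech conditions can be sketched as below. The three lines are quoted from this section; the function name, trial indexing, and cycling logic are assumptions for illustration.

```python
import random

# The robot's coping remarks, quoted from the protocol above.
COPING_LINES = [
    "At least I'm getting some exercise!",            # finding a silver lining
    "I'll pay more attention next time, I promise.",  # planning to improve
    "I'm so sorry!",                                  # making amends
]

def next_remark(trial_index, condition, rng=random):
    """Return the robot's remark after a trial.

    Treatment: the coping strategies are delivered in their scripted order.
    Control: the same lines are delivered, but in random order.
    """
    if condition == "treatment":
        return COPING_LINES[trial_index % len(COPING_LINES)]
    return rng.choice(COPING_LINES)
```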

The experiment ends when the participant has played all five songs.

Deception

The experiment requires some deception: we will tell the participants that the
robot is learning from their actions, whereas, in reality, the robot's behavior will be
scripted and identical for all participants. In addition, during the non-dancing
portions of the interaction, we will be controlling the speech of the robot from
another room, secretly, in order to respond to any comments or questions that the
participant may make or ask the robot. These workarounds simulate what will one
day be possible with artificial intelligence.

Possible Risks To Participants

We don't anticipate significant emotional or physical risks for participants in
this experiment.
The physical component of this experiment has participants standing on a large
platform and leaning or rotating or scrunching their bodies slightly. We don't
anticipate any greater risk of bodily harm than that experienced while walking
or dancing.

The emotional component of this experiment leads participants to believe the
robot is somewhat intelligent, since it speaks English and can understand and
respond to their questions. We don't anticipate significant emotional risk or
disturbance when informing our participants that the robot was, in reality,
being remotely controlled throughout the experiment.

Benefits To Participants

We don't anticipate any direct benefits to participants in this experiment. This
experiment is designed to discover generalizable human inclinations and
preferences.

References
[1] Kim, E. S., Leyzberg, D., Tsui, K. M., and Scassellati, B. 2009. How people talk when teaching a
robot. In Proceedings of the 4th ACM/IEEE international Conference on Human Robot
interaction (La Jolla, California, USA, March 09 - 13, 2009). HRI '09. ACM, New York, NY, 23-30.

[2] Thomaz, A. L. and Breazeal, C. 2008. Teachable robots: Understanding human teaching
behavior to build more effective robot learners. Artif. Intell. 172, 6-7 (Apr. 2008), 716-737.

[3] Otero, N., Saunders, J., Dautenhahn, K., and Nehaniv, C. L. 2008. Teaching robot companions:
the role of scaffolding and event structuring. Connect. Sci. 20, 2-3 (Jun. 2008), 111-134.

[4] Chatila, R. 2008. Toward cognitive robot companions. In Proceedings of the 3rd ACM/IEEE
international Conference on Human Robot interaction (Amsterdam, The Netherlands, March 12 -
15, 2008). HRI '08. ACM, New York, NY, 391-392.

[5] Argall, B., Browning, B., and Veloso, M. 2007. Learning by demonstration with critique from a
human teacher. In Proceedings of the ACM/IEEE international Conference on Human-Robot
interaction (Arlington, Virginia, USA, March 10 - 12, 2007). HRI '07. ACM, New York, NY, 57-64.

[6] Mahoney, R. M., Van Der Loos, H. F., Lum, P. S., and Burgar, C. 2003. Robotic stroke therapy
assistant. Robotica 21, 1 (Jan. 2003), 33-44.

[7] Kidd, C. D. 2008. Designing for Long-Term Human-Robot Interaction and Application to Weight
Loss. Doctoral Thesis. UMI Order Number: AAI0819995. Massachusetts Institute of Technology.

[8] Heerink, M., Kröse, B., Wielinga, B., and Evers, V. 2008. Enjoyment intention to use and actual
use of a conversational robot by elderly people. In Proceedings of the 3rd ACM/IEEE international
Conference on Human Robot interaction. HRI '08. ACM, New York, NY, 113-120.

Subject Population

Treatment Of Data
