
2013 Electronics and Computer Science MSc Project Brief

Name Devangini Patel ID no 25896806 Email Dp2c12@soton.ac.uk

Supervisor Co-supervisor

Dr. Jason Noble

Date of 1st meeting

18-06-2013

Project title

Self-recognition of Charlie, the Robotic Human Head, using the mirror test

Description of project

The mirror test, developed by Gordon Gallup Jr., is a measure of self-awareness: it checks whether an individual can recognise itself in a mirror. Only a few species have passed it, among them humans, bonobos, chimpanzees, orang-utans, gorillas, bottlenose dolphins, orcas, elephants and European magpies. Human babies pass the rouge test, a variant of this test, at around 18 months of age. [1, 2, 3]

Charlie is a robotic human head developed for the Biologically Inspired Robotics (BIR) module. It can see, hear and sense touch on its skin; it can move its eyes, eyelids, jaw, neck and facial muscles; and it wears a realistic latex mask as its face. [4] When Charlie sees itself in the mirror, it has to determine that the face it is seeing is its own. If it can do this reliably, it has achieved self-recognition. Achieving this requires image processing. To test the accuracy of the code, still photographs and videos of Charlie will also be shown to it.

[1] Wikipedia: Mirror test, https://en.wikipedia.org/wiki/Mirror_test
[2] ScienceDaily: Mirror test, http://www.sciencedaily.com/articles/m/mirror_test.htm
[3] Gordon G. Gallup, Jr., James R. Anderson and Daniel J. Shillito (2002). The mirror test. In Marc Bekoff, Colin Allen and Gordon M. Burghardt (eds.), The Cognitive Animal: Empirical and Theoretical Perspectives on Animal Cognition, pp. 325-333. Cambridge, MA: MIT Press.
[4] Sriram Kumar, Devangini Patel, Sara Abad Guaman, Suraj Padhy and Stabak Nandi (2013). Charlie: the robot head. Biologically Inspired Robotics coursework, University of Southampton.
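As a minimal sketch of the face-detection step that this image processing will need, the snippet below uses OpenCV for Python with its bundled frontal-face Haar cascade (a recent OpenCV build that exposes cv2.data is assumed, and the photograph filename is only a placeholder for a test image of Charlie):

    # Minimal face-detection check on a still photograph of Charlie
    # (assumes OpenCV for Python, e.g. pip install opencv-python)
    import cv2

    # Placeholder path to a test photograph; replace with a real image
    image = cv2.imread("charlie_test_photo.jpg")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Frontal-face Haar cascade shipped with OpenCV
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    detector = cv2.CascadeClassifier(cascade_path)

    # Detect faces; returns a list of (x, y, width, height) rectangles
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    print("Faces found:", len(faces))
    for (x, y, w, h) in faces:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("charlie_test_photo_detected.jpg", image)

The same detector can be run frame by frame on the test videos mentioned above.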

Does your project involve laboratory work? YES
I have completed a risk assessment form: NO
Does your project involve human subjects? NO
I have completed an ethics application: NO

Detailed description of project aims, objectives and methods

Previous work: Self-recognition for robots has been approached in two ways: the first is to compare the motion the robot sees with its own motion, and the second is to compare facial expressions. [5] The first approach has been implemented on two robots, Nico [6] and Cog [7]. The methods used by these two are generic (they do not depend on what the robot looks like) and are therefore applicable to other robots. Khepera II follows a concept closer to the second approach, since it cannot see its own body with its own camera; however, it moves towards or away from the image it sees, and two Khepera II robots are used, so the method cannot be applied to heterogeneous robots. [8]

Nico is a humanoid robot that learns to classify objects in its visual field into three categories (inanimate objects, self, and other animate objects) using a dynamic Bayesian model. Nico moves its arms: if it sees an object moving at the same speed, that object is itself; other animate objects move at different speeds, and everything else is inanimate. Nico does not know what it looks like, so this concept is applicable to any robot. [6] Cog sees different parts of its body in the mirror and shakes them, and has to learn the association between the two views. It correlates multiple sensor modalities to understand that the reflection's motion and its own motion are related. [7] Khepera II calculates the coincidence rate of behaviour between itself and another robot. Junichi Takeno also considered the case where another robot imitates the first; the coincidence rate is then lower than expected, and so Khepera II succeeds in 100% mirror-image recognition. A framework of MoNADs arranged as a recurrent neural network was developed to replicate the consciousness needed for self-recognition. There are three MoNADs: (i) an imitation MoNAD, a simple reasoning system that copies the other robot; (ii) a distance MoNAD, a simple feelings system that moves the robot towards or away from the other depending on the distance between them; and (iii) a settlement MoNAD, a simple association system that restricts the behaviour of the other two subsystems. The coincidence rate is calculated continuously, and once it exceeds a threshold the other is inferred to be the self-image. [8]

Idea in detail: Charlie should first be able to detect human faces. It does not know what it looks like, so it has to perform actions from which it can decide whether a face it is seeing is its own; the same applies to the face it sees in the mirror. If Charlie is looking at another person, then when Charlie makes expressions or performs actions that person will not reproduce exactly the same actions, so Charlie can conclude that it is definitely not seeing itself. A mirror image, by contrast, replicates all of those actions, so Charlie will be unable to distinguish the reflection from itself and can conclude that the reflection is its own; it will thus recognise itself in the mirror. When Charlie is placed in front of a video of itself, some actions will not match and the difference will be detected (a minimal sketch of this decision loop is given below). For this to work, Charlie must have a notion of moving a given face part by some angle in some direction. For example, if Charlie drops its jaw, its self-image should also drop its jaw; and if Charlie stops moving its jaw, the self-image must also stop.
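A sketch of that decision loop is shown here. The two helper functions (perform_random_action, which commands a facial movement, and observe_moving_part, which reports which part of the detected face is seen moving) are hypothetical and do not exist yet; the trial count and agreement threshold are also assumptions:

    import random

    FACE_PARTS = ["jaw", "left_eyelid", "right_eyelid", "eyes", "neck"]

    def mirror_test(perform_random_action, observe_moving_part, trials=20, threshold=0.9):
        """Decide whether the observed face is Charlie's own reflection.

        perform_random_action(part) -> None  # hypothetical: command one face part to move
        observe_moving_part() -> str         # hypothetical: name of the face part seen moving
        """
        matches = 0
        for _ in range(trials):
            part = random.choice(FACE_PARTS)
            perform_random_action(part)
            observed = observe_moving_part()
            if observed == part:
                matches += 1
        # If almost every commanded action is mirrored back, conclude it is the self-image
        return matches / trials >= threshold

With a mirror, nearly every trial should match; with another person or a pre-recorded video, a noticeable fraction will not.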
Charlie therefore needs to know which part of its face it is moving and which part(s) of the observed face are moving. So Charlie has to first detect a human face, then detect the moving parts within that face and report which parts are moving. Charlie randomly decides to move some of its face parts. Charlie does not know what its face looks like, nor the speed at which a face part actually moves: even though the servo motors controlling that part run at a known speed, the visible part will not necessarily move at the same speed. Charlie therefore has to map the expected speed of each face part to the speed actually observed on the face it is watching. If the ratio between these two speeds is maintained at all times for every face part, then the face it is seeing is its own. Another way to achieve this would be to fit stress sensors under the skin and measure the pressure created when the motors controlling the facial muscles move; regression applied over a training period can learn how much displacement corresponds to how much pressure, and the predicted facial change can then be compared with the observed one. Charlie also has to account for the delay between deciding to move and the face part actually moving. This requires an acknowledgement from the microcontroller about the state of the servo motors, which is then mapped to how far the face part will have moved.
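The ratio-consistency check could look like the sketch below. It assumes that paired time series of commanded servo speed and the speed observed in the image are already available for each face part (how those series are collected, and the tolerance value, are assumptions):

    import numpy as np

    def ratios_consistent(commanded, observed, tolerance=0.2):
        """Return True if the observed speed stays in a roughly constant ratio to the commanded speed.

        commanded, observed: speeds sampled at the same instants for one face part.
        tolerance: allowed relative spread of the ratio (assumed value).
        """
        commanded = np.asarray(commanded, dtype=float)
        observed = np.asarray(observed, dtype=float)
        moving = commanded > 0               # ignore samples where the part was not commanded to move
        if not moving.any():
            return False
        ratios = observed[moving] / commanded[moving]
        if ratios.mean() == 0:
            return False                     # the observed face never moved when Charlie did
        spread = ratios.std() / ratios.mean()  # relative variation of the ratio over time
        return spread < tolerance

    def face_is_self(speeds_per_part):
        """speeds_per_part: {part_name: (commanded_series, observed_series)}"""
        return all(ratios_consistent(c, o) for c, o in speeds_per_part.values())

If the ratio holds for every face part over time, the observed face is judged to be Charlie's own; a video or another person will break the ratio for at least some parts.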

Possible extensions: If this is achieved, the human-face-recognition module can be removed and Charlie can instead be taught to recognise categories of objects, including human faces. When it then sees itself in the mirror, it can tell that the thing it is seeing is a human face. In this case Charlie's intentions are reversed: it is not testing whether the object it sees is itself, but exploring the object visually and noticing that the self-image is replicating its own movements. This would be an attempt to convert the top-down code into a bottom-up approach. After this phase, Charlie could also recognise itself in images and videos and form a concept of self.

[5] Haikonen, P. O. (2007). Reflections of consciousness: the mirror test. In Proceedings of the 2007 AAAI Fall Symposium on Consciousness and Artificial Intelligence, pp. 67-71.
[6] Gold, K. and Scassellati, B. (2009). Using probabilistic reasoning over time to self-recognize. Robotics and Autonomous Systems, 57(4), 384-392.
[7] Fitzpatrick, P. and Arsenio, A. (2004). Feel the beat: using cross-modal rhythm to integrate perception of objects, others, and self. pp. 59-66.
[8] Takeno, J. (2008). A robot succeeds in 100% mirror image cognition. International Journal on Smart Sensing and Intelligent Systems, 1(4), 891-911.

Work Plan and Milestones


Week numbers and week-beginning dates: 1: 3/6, 2: 10/6, 3: 17/6, 4: 24/6, 5: 1/7, 6: 8/7, 7: 15/7, 8: 22/7, 9: 29/7, 10: 5/8, 11: 12/8, 12: 19/8, 13: 26/8, 14: 2/9

Tasks scheduled across these weeks (see the milestone dates below):
- Develop serial communication code between Arduino and Python
- My holiday
- Stereo vision: track objects of interest, find moving objects
- Read about the mirror test and how babies and animals are able to self-recognise; find which parts of the face are moving and in which direction; randomly move some face part
- Improve the hardware of Charlie; combine all the modules so that they can communicate
- Learn object categorisation; convert to the bottom-up approach
- Writing-up
- Milestone: demonstrate to supervisor/examiner
- Milestone: dissertation draft complete
- Final corrections
- Milestone: hand-in

Description and dates of milestones:
1. 10/6/2013, Develop serial communication code between Arduino and Python
2. 24/6/2013, Stereo vision: track objects of interest, find moving objects (a minimal frame-differencing sketch for this step follows this list)
3. 8/7/2013, Read about the mirror test and how babies and animals are able to self-recognise; find which parts of the face are moving and in which direction; randomly move some face part
4. 15/7/2013, Improve the hardware of Charlie; combine all the modules so that they can communicate
5. 5/8/2013, Learn object categorisation; convert to the bottom-up approach
6. 19/8/2013, Demonstration to supervisor
7. 26/8/2013, Demonstration to second examiner
8. 2/9/2013, Writing-up
9. 6/9/2013, Hand in final copy of dissertation
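For milestone 2 ("find moving objects") and the moving-face-parts step of milestone 3, a minimal frame-differencing sketch is given here. It assumes OpenCV 4 for Python and a camera at index 0; the threshold and minimum region size are arbitrary starting values:

    import cv2

    # Simple frame differencing to highlight moving regions (assumes a camera at index 0)
    cap = cv2.VideoCapture(0)
    ok, prev = cap.read()
    if not ok:
        raise SystemExit("camera not available")
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev_gray)                  # pixel-wise change between frames
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        mask = cv2.dilate(mask, None, iterations=2)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) > 500:                     # ignore tiny regions (arbitrary size)
                x, y, w, h = cv2.boundingRect(c)
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
        cv2.imshow("moving regions", frame)
        prev_gray = gray
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()

Restricting the same differencing to the rectangle returned by the face detector would give the per-face-part motion needed in milestone 3.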
