
School of Engineering, University of Birmingham

EE4U: Human Factors & Interactive System Design
Assignment
A Future Human-Controlled/Supervised Robotic System for
Elderly Assisted Living

Matt Dennis (MEng Elec + Elec Engineering, FT)


mxd589@bham.ac.uk
Acknowledgements
The author would like to thank Professor Bob Stone for his engaging approach to
teaching Human Factors and Interactive Systems. The support that he has provided
throughout the project has been invaluable and has inspired the author to become more
considerate of what makes good design and of its effect on the end-user. The author would also
like to thank the guest lecturer, Dr Jim Knight, for his expertise in teaching user-centred design
and interactive human factors methods. The author's final acknowledgement goes to the
Human Interface Technologies Team, whose impressive demonstrations of their research
allowed the author to better understand the discipline of human factors and the importance
of its application to design.
Abstract
The aim of this assignment is to create a robotic support system to assist the elderly
within their own homes using a human centred design approach. A comprehensive literature
review was conducted to fully research and understand the final goal of the project. The first
area of the literature review discusses Human Factors and Human Centred design to further
understand how it could be implemented in the project. The second area discussed is the
end-user (the home-bound elderly population) and the types of impairment they are more
likely to experience as they age. The final area discussed is existing assistive technology and
how it supports the target user. The information gathered in the review is then used to detail
the design specification of an assistive robot that fully meets the requirements of the end-user.
The full design process is included to show how the robot developed from initial concepts to
the final design, which is then evaluated against the original design and usability criteria.
Where refinements to the original design were needed, these are stated in the future
developments section of the report, where other potential applications and opportunities for
the robot are also considered.
Contents
Acknowledgements
Abstract
0 Introduction
1 Literature Review
1.1 Human Factors and User-Centred Design
1.1.1 Human Factors
1.1.2 Human Centred Design
1.1.3 Usability and User interaction
1.1.4 Anthropometry
1.2 End-User Interaction
1.2.1 Cognitive impairments
1.2.2 Perceptual and physical impairments
1.2.3 Gerontechnology and KSA – TSAs
1.3 Existing Assistive Technology
2 Summary of the Literature Review
2.1 Human Factors and User Centred Design
2.2 End User Interaction
3 Interface Design Process
3.1 Understand and specify the context of use
3.1.1 The end user story
3.2 Task Analysis
3.3 Specify the user requirements
3.3.1 User requirements
3.4 Produce design solutions to meet user requirements
3.4.1 System requirements
4 Final design overview
4.1 Dimensions
4.1.1 Overall height, width and breadth
4.1.2 Arm adjustable height
4.1.3 Arm adjustable width
4.1.4 Screen size and position
4.2 User interface
5 Evaluation Plan
5.1 The user must be able to understand the robot clearly
5.2 The user must be able to receive assistance taking medicine
5.3 The user must have an SOS emergency routine
5.4 The user must be able to contact close friends through popular video telephony services
5.5 The user must be encouraged to exercise regularly
5.6 The user must be able to receive assistance making meals
5.7 The user must be able to tell the robot where they have left things in the house
5.8 The user must feel comfortable when using the robot at all times
6 Future Research Directions and Opportunities
7 Conclusion
0 Introduction
Human Factors is a discipline that creates a bridge between humans, engineering and
the systems we use every day to accomplish our desired goals. It is the reason why some
products feel that they are “made for us” and integrate seamlessly into our lives. This is
because they are extremely usable and ergonomic where the design authority has considered
the human or “end-user” at their focus before and during the product design process. When
a product is so easy to use and integrates well into the task at hand, it then becomes more
difficult to realise how well the product has been designed (Norman 2002) and we, as the
user, also become more satisfied and effective in our role. However, when the human is not
considered in the design process this can have serious implications for the product, even
when designed thoroughly. This is because if it is the human that is using the system, the
human will dictate the entire system performance. This means that “well designed” products
can be subject to bad performance if the user is unable to interact with them as intended.
Human factors concerns the interaction between humans and the systems they use, and the
methods that ensure system performance and human satisfaction are optimised (IEA 2019).
Each individual human has cognitive and physical limits, and products must be designed
carefully around them to meet the users' needs and ensure usability, satisfaction and
effectiveness.

To ensure that the design of the robot is within the physical and cognitive limits of the
end user, human centred design techniques will be employed throughout the assignment.
This will minimize the risk of human factors problems occurring and reducing overall system
performance once the design is finished. It will also allow an iterative design process in which
the design is evaluated against the original criteria; if the design is not aligned with the original
context of use, further improvements can be made until the specification is met. A key
document to consider during the assignment, which has standardized human-centred design,
is BS ISO 9241. The standard will be used to give the design process an iterative structure and
to make certain that human factors techniques are utilised to further understand and mitigate
the types of problem that humans in the system could potentially create.

The population of Europe is getting older (SilverEco 2018), which is heavily increasing
the demand for assistive care for people over 65. This in turn increases the need to design
and make products for this age group so that they can lead more independent and satisfying
lifestyles despite the many health problems that can debilitate them. Consequently, it is now
more important than ever to fully understand the physical and cognitive limits of the older
generation and their attitudes to technology. Better knowledge of this subject allows human
factors methods and the design process to be used more effectively to produce optimised
devices for the assistive care of the elderly.
Furthermore, human carers for the elderly help their patients with household tasks that
generally become more difficult in old age, perform health check-ups and provide
companionship, an important part of maintaining the senior's mental health. Humans naturally
form strong bonds with one another through relationships such as that between carer and
patient; this assignment explores how a similar connection can be formed between man and
machine.

EE4U is a module that explores the discipline of Human Factors and topics related to
Interactive System Design. During the course, students looked specifically at human factors
integration into design, human factors methods, display and control interfaces, usability and
Gerontechnology. This gave the student the foundation of knowledge needed to complete the
end goal of the assignment: to design a concept for an assistive robot for senior citizens living
alone, using a human-centred design approach. However, it was also imperative that the
student completed their own research to deepen their understanding of the topics presented
in the lectures and to consider the end-user, and their impact on the user requirements, in
more detail.

The report will be structured starting with a comprehensive literature review. The
literature review will consist of 3 main areas that will assist in the design of the robot. The
first area is a review of human factors, human centred design and the techniques that are
used to prevent human factors problems. The second area concerns the elderly population.
Older people naturally experience decline in their physical and cognitive function, which is
very important to consider when employing human-centred design techniques. The review
discusses the main problems that older people are likely to face, which will heavily influence
the design requirements. The final area discusses existing assistive technology and the
successful aspects of its design which can be integrated into the assignment. The information
gathered from the review will then be used to create a user requirements specification that
will form the foundation of the design of the robot. This part of the report will show how the
requirements were used to influence the design of the 3D model of the concept made in
SolidWorks. The human-centred design process in ISO 9241 is an iterative technique in which
the design is frequently evaluated against the original criteria and context of use. Whilst
multiple iterations of the design will not be included in this report, an evaluation of the first
iteration will be, followed by a section showing how future improvements and applications
could be made.
1 Literature Review
To produce a quality design of an assistive robot, an extensive literature review must
be completed. There are three broad subject areas that must be considered before starting
the design process. The first will include a review of Human Factors and User-Centred Design.
The robot is primarily made for interaction with elderly people, who therefore shape the
end-user requirements that must be considered at all stages of the design. Human Factors
and User-Centred Design methodologies offer the techniques and approach needed to
successfully create a product that operates seamlessly with the end-user. The second subject
area will review the end user (people aged 65 or over) in further detail and the increasing
importance of Gerontechnology. The population of the developed world continues to age and
the demand for assistive services will therefore increase. Gerontechnology presents an
opportunity to use the potential of modern technology to fill the expanding gap in the market.
The final area of research in this report is a review of the current assistive technologies that
are either available on the market or in production.

1.1 Human Factors and User-Centred Design


1.1.1 Human Factors
In 1933 the city of Chicago hosted an international exposition to celebrate 100 years
of progress in technological innovation. Exhibits included the best of its time in architecture,
transportation and engineering that had improved quality of life, underpinned by the motto
“Science Finds, Industry Applies, Man Conforms”. In the 21st century, however, this
statement no longer holds: humankind has redefined its relationship with technology. This
was captured by Norman (1993), who re-wrote the motto as “People Propose, Science Studies,
Technology Conforms”. To create a successful device for human use, it is imperative to ensure
that the technology is designed around the physical and cognitive limits of the human. This
makes certain that the integration of the user and the device is seamless; a well-designed
product fits the users' needs without drawing unwanted attention to itself (Norman 2002).

Despite the application of existing methods of design, analysis and review, system
performance does not always meet the required standard as expected from the end-user.
When human performance in the system is not considered, varied problems can manifest and
remain undiscovered during the design and review phases (Stanton, Salmon et al. 2006).
Ultimately, this renders the system less capable than intended. Stanton et al. (2006) explain
that the reason these problems can remain undiscovered throughout the design process is
that they are neither purely engineering nor purely human science problems. These design
failures are known as Human Factors problems. Human Factors, or Ergonomics, is a science-
based discipline that combines physiology, psychology, engineering and statistics (CIEHF
2019). Human Factors methods are used to identify the problems that humans in a system
could potentially create and can be implemented at all stages of the design process. This
ensures that the design will not force the user to adapt and work in an uncomfortable manner;
the system/product will be designed to suit them (CIEHF 2019).

A human factors method that is used frequently to gather user data to understand
how the product will work successfully with the user is Task Analysis. This is described as a set
of techniques aimed at drawing out what people do, representing the tasks, predicting
difficulties and evaluating the system against requirements (Preece 1994). There are two
approaches to Task Analysis. One is task decomposition, where the actions of people are
described and structured using tasks and subtasks. The other is Hierarchical Task Analysis
(HTA), a graphical representation of a task divided into a "hierarchy" of operations and plans.
HTA has the added advantage that it aims to identify tasks likely to fail and to propose
solutions. The hierarchical structure of the approach allows operations and sub-goals to be
created at whatever level of abstraction is needed, so the designer can go into the required
level of detail for each task. As more information about the task and the goals and sub-goals
of the user is collected, further evaluation against the user requirements can be made. This
allows operations to be modified where improvements are needed (for example, where the
task is too complicated or falls outside the physical or cognitive limits of the user) or validated
where they are aligned with the objectives of the end-user. This ensures that the system is
built around the behaviours of the end-user and the focus of the design is human centred.
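To make the structure of an HTA concrete, the short Python sketch below captures one goal broken into sub-operations with an ordering plan. The goal and sub-task names are illustrative assumptions only; they are not taken from the HTA produced later in this report.

```python
# Illustrative only: a minimal way to capture an HTA node (goal, sub-operations
# and the plan that orders them) so a task hierarchy can be inspected and reviewed.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Task:
    """One node in a hierarchical task analysis."""
    goal: str
    plan: str = ""                      # e.g. "do 1, then 2; repeat 2 if needed"
    subtasks: List["Task"] = field(default_factory=list)

    def outline(self, level: int = 0) -> str:
        """Render the hierarchy as an indented outline for review."""
        line = "  " * level + self.goal + (f"  [plan: {self.plan}]" if self.plan else "")
        return "\n".join([line] + [sub.outline(level + 1) for sub in self.subtasks])


# Hypothetical fragment of an "assist with medicine" goal.
take_medicine = Task(
    goal="0. Assist user to take daily medicine",
    plan="Do 1, then 2, then 3; if 3 fails, repeat 2 once then alert a contact",
    subtasks=[
        Task(goal="1. Announce that medicine is due"),
        Task(goal="2. Describe which pill to take"),
        Task(goal="3. Confirm the pill has been taken"),
    ],
)

print(take_medicine.outline())
```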

1.1.2 Human Centred Design


Human-centred design is an approach that focuses on the end user and applies human
factors methods, usability knowledge and techniques (BSI 2010) and has been standardized
for industrial use by the British Standard ISO 9241. The standard describes the benefits of
human-centred design that improve the quality of the product by:

• Increasing the productivity of users and the operational efficiency of organizations
• Being easier to understand and use, thus reducing training and support costs
• Increasing usability for people with a wider range of capabilities and thus increasing accessibility
• Improving user experience
• Reducing discomfort and stress
• Providing a competitive advantage, for example by improving brand image
• Contributing to sustainability objectives

An increase in productivity/efficiency of the user will occur as a result of the product
being easier to understand and use. For products designed for industrial use, this will increase
the competitive advantage of the company in the market. End-users of the product designed
for commercial use will benefit from an overall improved user experience. In addition,
(Stanton, Salmon et al. 2006) also argue that the early implementation of the methods can
save considerable time, effort and expense. This is because if the human cannot use the
system as intended it can be very expensive to make changes in the later stages of a design
process.

The human-centred design approach from planning to final design is depicted in Figure 1.
The diagram shows that the design solutions of the product are evaluated against the
requirements in a feedback loop that allows the design authority to further understand and
specify both the context of use and the user requirements. Figure 1 also stresses the
"interdependence" of all the activities in the design process; a linear process is not implied,
and each design activity uses outputs from other activities (BSI 2010) across a number of
iterations of the process before the designed solution meets all of the user requirements.

Figure 1 – Interdependence of Human-centred design activities (BSI 2010)

1.1.3 Usability and User interaction


Usability knowledge is fundamental to human centred design as stated by ISO 9241. The
international standard defines usability as the effectiveness, efficiency and satisfaction with
which specified users achieve specified goals in specified environments (ISO, 2010). As
described previously, a well-designed product will allow the end user to meet their goals, thus
increasing their productivity in their working role and/or increasing levels of satisfaction. The
Nielsen Norman Group further argue that usability is a condition for the survival of the
product: when users find the design difficult and cannot use the interface as intended, they
are likely to stop using the product and look for an alternative (Nielsen 2012). To improve the
usability of a product, the Nielsen group recommends evaluation against five quality
components:
• Learnability: How easy is it for users to accomplish basic tasks the first time they encounter the design?
• Efficiency: Once users have learned the design, how quickly can they perform tasks?
• Memorability: When users return to the design, how easily can they re-establish proficiency?
• Errors: How many errors do users make, how severe are these errors, and how easily can they recover?
• Satisfaction: How pleasant is it to use the design?

Each quality component contributes to the overall experience of using the product and
therefore reinforces design that prioritizes the end-user. Furthermore, it discourages the
design authority from "over-engineering" the product, where features and components are
added for design's sake rather than to increase usability. A well-designed product does not
draw attention to itself, making the quality of the design hard to notice (Norman 2002).

To increase the usability of the design, one must also consider the cognitive limits of
humans in the system. This is supported by the model human information processor devised
by Card, Moran & Newell in 1983.

Figure 2 - Human processor model (Card, Moran et al. 1986)

The Model Human Processor (MHP) is described as a set of memories and processors
operating under a set of principles, used to make approximate predictions of gross human
behaviour (Card, Moran et al. 1986). Figure 2 shows the three subsystems: perceptual,
cognitive and motor. Each subsystem is equipped with its own memories and processors.
They are interconnected and describe how humans interpret their senses and react to them,
depending on their short- and long-term memories. The principles of operation are the laws
and mathematical models by which the subsystems are bound. By modelling the cognitive
limits of the end-user in the system with the human processor model, it is possible to estimate
the time taken to perform a certain task and the likely level of usability that the end user will
experience. This tells the design authority how well the interface or product will meet the
user requirements and which features can be expected to work.
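As a rough illustration of how such an estimate is formed, the Python sketch below sums processor cycle times for a single interface step. The cycle times are the commonly quoted nominal values associated with the MHP (perceptual ≈ 100 ms, cognitive ≈ 70 ms, motor ≈ 70 ms); the age-related slowing factor is a purely illustrative assumption, not the published older-adult parameters discussed later in section 1.2.1.

```python
# Rough MHP-style time estimate for one interface step: perceive a prompt,
# decide on a response, press a button. Cycle times are nominal middle values;
# the slowing factor for an older user is illustrative only.

CYCLE_MS = {"perceptual": 100, "cognitive": 70, "motor": 70}


def step_time_ms(perceptual=1, cognitive=1, motor=1, slowing=1.0):
    """Approximate time for one interaction step as a sum of processor cycles."""
    base = (perceptual * CYCLE_MS["perceptual"]
            + cognitive * CYCLE_MS["cognitive"]
            + motor * CYCLE_MS["motor"])
    return base * slowing


# Young-adult baseline: see the prompt, choose between two options, press a button.
young = step_time_ms(perceptual=1, cognitive=2, motor=1)

# The same step with a hypothetical 1.5x slowing factor for an older user.
older = step_time_ms(perceptual=1, cognitive=2, motor=1, slowing=1.5)

print(f"Estimated step time: young adult ~ {young:.0f} ms, older adult ~ {older:.0f} ms")
```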

1.1.4 Anthropometry
The cognitive limits of the end-user must be considered for a successful design, as
discussed in section 1.1.3; however, the physical limits must also be included to ensure
maximum usability. Anthropometry involves the systematic measurement of the physical
properties of the human body, primarily dimensional descriptors of body size and shape
(Islam, Asadujjaman et al. 2013). This allows design authorities to specify the dimensions of
workspaces, equipment and so on using anthropometric data. The data therefore enables the
design authority to "fit the task to the man" (Grandjean 1980) and makes certain the design
process follows human-centred principles. In the context of this project, anthropometric data
will determine the location of the display and of the buttons used to operate the robot; by
design, the end-user will be able to use the robot with ease because their physical limits are
not exceeded. It is also important to acknowledge in this report the contribution of Henry
Dreyfuss to human factors and anthropometry. Dreyfuss had a pragmatic approach
to design, stating that when the point of contact between the product and people becomes
a point of friction, then the industrial designer has failed (Britannica 2019). This is clearly
reflected in his work, The Measure of Man; a study of human physical limits and
anthropometric data which is used as an industrial tool appropriate for implementing human-
centred design.

1.2 End-User Interaction


The Office for National Statistics reported that the UK population is living longer than
ever before and is also, like many other developed countries, ageing. The share of the
population aged over 65 has risen by 2.3 percentage points over the past decade and is
predicted to grow to at least 20.7% by 2027 (ONS 2018). These statistics highlight the
importance of developing assistive technology, as an increase in the elderly population will
subsequently increase the demand for their care. Furthermore, whilst the population and the
working-age population of the UK are both increasing, this may not directly increase the
number of people working in the assistive care industry, further stressing the requirement for
technical alternatives. Such alternatives, however, require careful consideration of the target
audience, and a human-centred design approach is essential for a successful product. As
stated in section 1.1.3, the age of the end-user will have a significant impact on the user
requirements of the robot, which must be specified to meet the design criteria of ISO 9241.
The process of ageing has a large impact on humans; the elderly population are affected by
many debilitating impairments that must be considered, as must their level of acceptance of
new technology into their lifestyle.
1.2.1 Cognitive impairments
Age itself, not including dementia or other forms of cognitive impairment, has a
significant impact on cognitive decline; as we are living longer it is becoming more feared and
an increasingly major problem (Gow, Penke et al. 2009). However, age as previously stated is
not the only factor to limit cognitive function. For example, dementia creates significant
impairments to cognition; people affected by the disease develop symptoms including
memory loss, language problems, lack of judgement and more (Alzheimer’s UK, 2019). 7.1%
of people over 65 suffer from the disease (Prince, Knapp et al. 2014), and debate is ongoing
as to whether age is the cause of the disease or a related issue (Gow and Gilhooly 2003).
There is currently no cure for dementia (NHS 2019). Many hold the belief that puzzle games
and quizzes such as chess and Sudoku increase cognitive function, but this is a misconception;
they improve performance only in the cognitive domain trained and have no overall benefit
to cognitive function (Kane, Butler et al. 2017) that would prevent the related diseases. This
means that the robot should be designed for usability under lowered cognitive function rather
than include more features to "prevent" it.

The Model Human Processor (MHP), as discussed in section 1.1.3, was created to
approximate human behaviour when using an interface. Whilst the model is highly regarded
and one of the most influential, its results were based on empirical data gained from college
students (Jastrzembski and Charness 2007). 85% of US college students in 1980 were between
the ages of 18 and 34 (NCES 2016), considerably younger than the end user of this project.
Younger people face fewer impairments than older people and are therefore likely to have
higher cognitive limits when using interfaces. This means that the MHP could overestimate
the ability of the end-user if appropriate adjustments to the model are not made. This was
shown in 2007, when Jastrzembski and Charness used the MHP to produce older-adult (aged
69) models that were validated against a focus group performing mobile phone tasks; when
the parameters were changed, the models predicted performance extremely well.

1.2.2 Perceptual and physical impairments


Vision is a sensory function that declines as humans age. Age-related macular
degeneration (AMD) is the second highest cause of sight loss in the UK, and 1.2 million people
over the age of 75 live with eye problems (RNIB 2018). Older adults face significantly more
difficulty than their younger counterparts in visual tasks, including processing visual
information quickly, light sensitivity, dynamic vision, near vision and visual search (Ball and
Owsley 1993). Therefore, it is important to understand how this relates to
the end user’s performance of everyday visual tasks that could be affected and the relevant
assistance that the robot would consequently provide. Furthermore, the user interface of the
robot itself must be designed to allow those with visual limitations to interact easily with the
device. Another sensory function that must be considered is the ability of the end-user to
hear. The term presbycusis derives from the Greek "presbys", meaning old, and "akousis",
meaning hearing (Etymonline 2019), and describes the progressive loss of hearing humans
experience when ageing. The term covers all the conditions that lead to impaired hearing in
the elderly (Understanding and Aging 1988). It is the second most common illness after
arthritis, affecting one in two people by age 75. It is relevant to the robot because people
affected experience more difficulty understanding speech and in detecting and localising
sounds (Gates and Mills 2005). This means that the robot, while interacting with the end user,
should use plain English that is easy to understand, supported with strong visual cues to aid
localisation.

The final impairment to discuss is the effect of age on the physical state and physiology
of the human. From simple observation, the effects of age are easy to detect; the elderly
require more physical support through mobility aids (Zimmer frames, walking sticks), often
move more slowly and are at greater risk of injury during physical tasks. While the robot will
not be designed as a mobility aid, it is still worth considering in the design the types of physical
task that the elderly population generally have problems completing and that the robot could
aid with, such as lifting heavy objects (shopping or housework). One contributing factor to the
decline of the physical state of the older generation is sarcopenia, a term used to describe the
loss of muscle mass and strength that occurs naturally with ageing (Morley, Baumgartner et
al. 2001). However, a preventative measure is to ensure regular exercise (Daley et al. 2012) to
slow down the process. Furthermore, physical activity in the older generation has been shown
to have other benefits; participation has the potential to increase social contacts, reduce
cardiovascular risk (Shepard 1990) and slow down the decline in cerebral cognitive function
(Chodzko-Zajko 1991). A robot that could encourage the end user to partake in regular
workouts would therefore be a beneficial aspect of its functionality. Exercise programmes
designed for building muscle in the elderly exist (Lantham et al. 2009), and their routines could
be adapted for use in the home. The end-user, as a result, could experience increased
independence (Campanelli 1996) as they become stronger and decrease their risk of falling
over (Lord et al. 2005).

1.2.3 Gerontechnology and KSA – TSAs


Assistive devices made for the elderly to improve their quality of life and level of
independence are part of an emerging field of research known as Gerontechnology (Bock,
Engelhard et al. 2008). The discipline focuses on research into the complex interaction
between the elderly and technology, in order to produce a better environment or better
medical care (Pinto et al. 1997). As stated previously, the ONS has reported that the
population of the UK is ageing and that, as a result, greater assistive care will be required in
the future as people live longer and the population continues to grow. As also mentioned
before, technology is becoming a more viable option to supplement the potential lack of
human assistive carers in the future; however, one problem remains to be solved: the level of
acceptance of new technologies among older generations. Most older people share a positive
attitude to new technologies (Chen and Chan 2011); however, their level of adoption remains
low, as usage rates for computers and mobile phones are lower than average (Chen and Chan
2011). The technology acceptance model, shown in the figure below, is used to model how
end users' perceived usefulness, attitudes and intentions contribute to the adoption of new
technology.

Figure 3 – The technology acceptance model (Davis, Bagozzi & Warshaw 1989)

The model appears to define the process by which people adopt new technology
thoroughly; however, Chen and Chan (2011) found that for the older generation additional
variables must be added to make the model accurate enough for use. This is because, as
stated previously, there are age-related factors, both physical and cognitive, that affect users'
ability to use technology. If the end-user is not able to use the technology with relative ease
for any reason, they are less likely to want to continue using it and will look for alternatives
(Nielsen 2012). Furthermore, the technology acceptance model lacks many of the influencing
factors that form the initial impressions of new technology, which can be divided into six
themes (Peek, G. Luijkx et al. 2017), as shown in figure 4:

Figure 4 - Model of pre-implementation acceptance (Peek, G. Luijkx et al. 2017)


However, a more promising model for the adoption of new technology by the older
generation is the Cycle of Technology Acquirement by Independent-Living Seniors (C-TAILS)
model, shown in figure 5. It is of benefit to the design authority because it provides more of
the contextual factors required to successfully create a product for the end user.

Figure 5 - Cycle of Technology Acquirement by Independent-Living Seniors (C-TAILS) (Peek, G. Luijkx et al. 2017)

The model shows how an independent-living older adult makes decisions to acquire
new technologies through a feedback loop between their motivation to improve their status
quo and their changing experiences of it. It is effective for use with older people because it
considers the end-user's requirements at any given point in time, when they are more subject
to change (Peek, G. Luijkx et al. 2017). Overall, the model is better suited to a human-centred
design process, as it allows the design authority to determine the end-user requirements more
accurately.

1.3 Existing Assistive Technology


The market for assistive technology is already lucrative and is predicted to surpass
$26 billion by 2024 (McCue 2017). A range of assistive robots already exist or are in
production, and this section of the report discusses the ways in which they are designed to
increase the independence and improve the quality of life of the elderly generation.
Broekens et al. (2011) describe two categories of assistive robot: service type robots and
companion type robots. Service type robots are defined as ones with the functionality to
"support independent living, mobility, household maintenance and maintain safety."
Companion type robots assist the general well-being and mental health of the end-user
through "pet-like" companionship.
An example of a companion type robot is PARO (Figure 6), an interactive seal robot
designed in Japan to improve the psychological welfare and emotional state of elderly users.
PARO was first developed in 2003, is currently in its 8th generation of design and is used in
Japan and Europe (parorobots 2019). Since its release, PARO has won an award as the most
therapeutic robot in the world, and parorobots.com reports four main benefits of its
functionality (parorobots 2019):

• PARO has been found to reduce stress in patients and their caregivers
• PARO stimulates interaction between patients and caregivers
• PARO has been shown to have a psychological effect on patients, improving their relaxation and motivation
• PARO improves the socialization of patients with each other and with caregivers

Figure 6 - User interaction with PARO

To accomplish these benefits, PARO uses 5 sensors (tactile, light, audio, temperature
and posture) to create a companion that reacts to people and the environment in a realistic
and pet-like manner. PARO uses its sensors to continually learn from the end user and find
the optimum way of behaving, using the behavioural generation system shown in figure 7. For
example, if the user hits the robot with enough force, the robot will understand that its
behaviour was not to the user's satisfaction and remember not to repeat it. On the other hand,
PARO can recognise when it is being stroked or held and is therefore more likely to act in a
similar manner again to get the desired response from the user. This is made possible by the
two hierarchical processing layers shown in figure 7, where the proactive and reactive layers
allow the seal robot to generate proactive, reactive and physiological behaviour (Wada,
Shibata et al. 2009).
Figure 7 – Behavioural generation system of Paro (Wada, Shibata et al. 2009)
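To illustrate the principle behind such a behaviour-generation layer, the Python sketch below weights candidate behaviours and nudges those weights according to tactile feedback. This is not PARO's actual (proprietary) architecture; the behaviour names, weights and update factors are illustrative assumptions only.

```python
# Toy sketch of a reactive behaviour layer: candidate behaviours carry weights,
# tactile feedback (stroke vs. hit) adjusts the weights, and the next behaviour
# is chosen with a bias toward what the user responded well to.
import random

weights = {"nuzzle": 1.0, "blink": 1.0, "cry": 1.0, "wag_tail": 1.0}


def choose_behaviour():
    """Pick a behaviour with probability proportional to its learned weight."""
    total = sum(weights.values())
    return random.choices(list(weights), [w / total for w in weights.values()])[0]


def apply_feedback(behaviour, feedback):
    """Reinforce behaviours that were stroked, suppress ones that were hit."""
    if feedback == "stroke":
        weights[behaviour] *= 1.2
    elif feedback == "hit":
        weights[behaviour] *= 0.5


# Simulated interaction: the user strokes nuzzling and hits crying.
for _ in range(20):
    b = choose_behaviour()
    apply_feedback(b, "stroke" if b == "nuzzle" else "hit" if b == "cry" else "none")

print(sorted(weights.items(), key=lambda kv: -kv[1]))
```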

The therapeutic effect on humans has been demonstrated, with end-user interaction
with PARO observed for five years in a Japanese health service facility. The overall effect on
the elderly people was that interaction with the seal robot improved mood and depression
over the time period (Wada, Shibata et al. 2009). This can be further illustrated through the
case study by Wada et al. of an elderly seal robot user over the five-year period. Using a "face
scale" that allowed the subject of the case study to self-evaluate their own emotional
wellbeing, Wada et al. were able to quantify how the subject felt before and after using PARO.

Figure 8 - Face scale score before and after using PARO from 2003 to 2008 (Wada, Shibata et
al. 2009)

Figure 8 shows how the subject's mood improved after using PARO, where a face
scale score of 1 indicates the highest level of happiness the user is experiencing. Overall,
the results show that the PARO seal robot has a beneficial effect on the user's level of
emotional wellbeing. In evaluation, one reason why this robot is successful is that it is very
heavily influenced by the end user: the robot adapts to the end-user during interactions,
meaning that increased usage results in increased satisfaction.

An example of a service type robot is RUDY, which has been available in the United
States since 2017. RUDY targets the elderly and those with disabilities, aiming to increase
their independence and to cut down on hospital visits (inforobotics 2019). Like PARO, RUDY
adapts to the user over time to cater specifically to their needs and to develop a strong bond
between the end user and the robot. RUDY does this using artificial intelligence and machine
learning built into the robot's software architecture. Figure 9 shows user interaction with
RUDY, where the user is talking to a friend through a video telephony service. This shows how
RUDY promotes social interaction to increase the wellbeing and mental health of the end user
by decreasing their loneliness and improving their level of social engagement. Given the skills
and knowledge gap that is prevalent amongst older people using newer technology, RUDY
could make it easier for them to access services like video telephony that they may not have
been able to use previously. Furthermore, RUDY's humanoid structure and the dexterity of its
arms allow it to complete tasks that would normally have been done by an assistive carer,
such as taking meals out of a microwave, carrying items and keeping track of commonly
misplaced belongings.

Figure 9 - End user interaction with RUDY (inforobotics 2019)


2 Summary of the Literature Review
2.1 Human Factors and User Centred Design
Over time, as humans have made more technical advancements and scientific
discoveries, our relationship with and attitudes toward technology have changed. Previously,
technology was treated as the priority and the human was expected to conform to it.
However, as the discipline of human factors has grown alongside the increasing complexity of
systems, we have found that where the design is human centred the end-user becomes more
efficient, effective and satisfied with their working experience. It is important to consider
human factors during the design process because the problems humans cause in a system
can render it less useful than intended (Stanton, Salmon et al. 2006). For example, older
people experience cognitive decline; if the system interface is not accessible to the end-user,
they will not be able to use it. This will result in the final product being a failure due to the
lack of consideration of the target audience.

There are many methods to mitigate human factors errors. In this assignment,
hierarchical task analysis will be used to define the operations and plans associated with
interaction with the robot. This method will allow close analysis of the operations that the
end-user may have difficulty completing, resulting in a system more closely aligned with the
original user requirements and context of use.
BS ISO 9241 standardizes human-centred design for use in industry. Its model of the
interdependence of human-centred design activities defines a design process focused on
meeting the design criteria specified by the users' needs and requirements. The model also
stresses the importance of iterating the design if it does not meet the specification. This
allows the design authority to consistently evaluate how successful each iteration of the
design is. A design that meets more of the requirements will result in the user being able to
use the system as intended and in increased performance.
Usability knowledge is an integral part of human-centred design. ISO 9241-11
describes methods of measuring and evaluating usability in designs. Usability is important
because it increases productivity and satisfaction and stops the user looking for alternatives
that are more usable (Nielsen 2012). The five quality components of usability (Nielsen 2012)
are questions against which the design authority can evaluate their design to test its usability.
User-centred design ensures that the technology does not fall outside the physical and
cognitive limits of the human, so both sets of limits must be considered in order to make the
device usable and pleasant to use. Cognitive limits were explored in the literature review,
where human perceptual, cognitive and motor processing were modelled as interconnected
subsystems governed by principles of operation. This model, the Model Human Processor, can
be used to calculate the approximate time to complete a task. However, when applying the
model to this assignment, it is important to consider that the cognitive limits of the end user
are different, so the parameters in the model must be changed (Jastrzembski and Charness
2007). Physical limits were also explored in the literature review, where anthropometric data
will be used to make certain that the robot fits the body dimensions of the end user. It was
also found that whilst the "average"-sized human does not exist, it is important to consider
the 5th, 50th and 95th percentiles depending on the part of the robot being designed. For
example, when deciding the position of the screen on the robot, the eye level of a seated
person must be taken into account, as shown in figure 10. To make sure all users can see the
screen comfortably when sat down and maintain a safe posture, the range of positions that
the screen can be adjusted to must span the 5th to the 95th percentile.

Figure 10 – Anthropometric data to show 95th and 5th percentiles of a man and woman sat
down (Tilley et al., 2002)
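The arithmetic behind this fitting rule is sketched below in Python. The seated eye-height figures and chair height are placeholder assumptions for illustration only; they are not values taken from Tilley et al. (2002), and the real design would use the published anthropometric tables.

```python
# Sketch of the 5th-to-95th percentile fitting rule for the screen position.
# The eye-height figures below are placeholders, not published anthropometric data.

SEAT_HEIGHT_MM = 450                 # assumed chair seat height above the floor

# Placeholder seated eye heights above the seat surface (mm).
EYE_ABOVE_SEAT_MM = {
    "5th_percentile_female": 685,
    "95th_percentile_male": 845,
}


def screen_centre_range():
    """Return (min, max) screen-centre heights above the floor so that both the
    smallest and largest users can view the display at roughly eye level."""
    low = SEAT_HEIGHT_MM + EYE_ABOVE_SEAT_MM["5th_percentile_female"]
    high = SEAT_HEIGHT_MM + EYE_ABOVE_SEAT_MM["95th_percentile_male"]
    return low, high


low, high = screen_centre_range()
print(f"Screen centre should adjust between about {low} mm and {high} mm, "
      f"a travel of {high - low} mm.")
```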

2.2 End User Interaction


The research conducted on the end user confirmed that understanding their physical
and cognitive limits and their attitudes to technology is now more important than ever as the
population continues to age (ONS 2019). When interacting with the robot, the end-user will
use their senses of sight and hearing to process the visual and auditory feedback generated.
However, these are senses in which humans experience decline as they age. This will
therefore form part of the user requirements, where special consideration must be given to
the ability of the end user to see the display and to hear the robot. Furthermore, it was found
that consideration must also be given to the cognitive limits of the end user. While there are
misconceptions around how to improve cognitive function (Kane, Butler et al. 2017), it has
been shown that physical exercise can help slow down the decline (Busse, Gil et al. 2009). In
addition, physical exercise has other benefits to the user, including becoming stronger and
more independent. This has generated initial ideas for the functionality of the robot,
including:

• Personal training programme
• Contact the emergency services
• Contact friends and family
• Remembering where important valuable items are
It was discovered that the technology acceptance model is not fit for purpose when
applied to the end-user unless more variables are added. The living situation of the elderly
person and their desire to improve it is a key factor in whether the end user is likely to accept
the technology (Peek 2017). This is demonstrated in the C-TAILS model, which is specifically
designed to represent independent-living seniors and the process by which they make
decisions to improve their lives.

3 Interface Design Process


This section of the assignment will show how a human-centred design process was
used to develop the robot from initial concepts to design solutions. The interdependent
design activities listed in ISO 9241 will be used to ensure that the design solutions meet the
initial user-focused requirements.

3.1 Understand and specify the context of use


3.1.1 The end user story
The literature review provided valuable information that can be used to paint a vivid
picture of the end user in their environment interacting with the robot. The following section
will describe an end user benefiting from the various functionalities in the robot and their
potential needs throughout the day. This will allow further understanding of the context of
use, which will enable a more detailed specification of the user requirements. Ultimately, this
will result in a robot that is more usable and satisfying to interact with.
Frank, 82, lives independently and is starting to require more assistance to perform
household chores and tasks which were once easier for him. Furthermore, Frank's health is
starting to deteriorate; he is still in reasonably good health, but he must take daily prescribed
medicine in order to maintain this. It is very important that the medicine, taken as pill capsules,
is taken every day, and sometimes different pills are required on different days. Frank already
has a 7-day pill organiser to help; however, it can often be difficult for him to organise it
without the help of a friend or family member.

Frank is also recovering from a stroke he had 10 months previously. He was lucky that
a family member came to visit him and noticed numbness in the left side of his face which
Frank himself was not aware of. Had Frank waited any longer to go to hospital, he potentially
could have suffered greater consequences.
Frank also misses his wife, who passed away 5 years ago. With no-one else currently
living in the house, limited contact with friends and fewer family visits than he would like,
Frank feels lonely and sometimes suffers with mild depression. Frank has seen, on television
and from his grandchildren, people using their mobile devices to talk to one another over
video. Frank would like to try such a service to contact those he feels close to, but he lacks the
skills to operate a computer or smartphone.
Frank enjoys exercise and often visits the local gym, where he takes part in aerobics
and swim classes for seniors. However, although Frank enjoys the social aspect of the classes,
it is starting to become difficult to attend them as his general level of mobility worsens. Frank
would like to continue to do more exercise, as he understands the benefits, but does not know
of any alternatives.

Very recently, Frank has been finding it more difficult to remember the order of many
familiar tasks. For example, sometimes he cannot remember the order in which to put on
clothes or how to make a meal. Furthermore, Frank occasionally misplaces items; a family
member found his house keys in the fridge and his shoes in the kitchen cupboard. Finally,
Frank has often found himself becoming very confused and lost, even in familiar places. He
momentarily couldn't remember which supermarket he was in when doing his weekly shop
for groceries, or why he'd even entered the shop in the first place. Eventually, Frank did come
to his senses, but he is worried that he might have that problem again.
Frank meets Ronnie (the robot) for the first time when it is provided to him as a present
from his family. The family believe that Ronnie will help Frank start to regain some of his
independence, ensure that he is safe and help him to continue exercise routines within the
comfort of his own home. Frank is initially sceptical of Ronnie, however, decides to at least try
to cooperate with his new android companion.
One year later, the partnership between Frank and Ronnie continues to be a success.
Ronnie reminds Frank at the correct time every day which medicine to take, using simple terms
to describe what each pill looks like in a way that Frank understands. Ronnie can do this
because it has learned over time how to communicate with Frank to gain a positive response.
Ronnie is also aware of how Frank usually behaves when he is in good health. If Frank has
fallen over, has not got out of bed at his usual time, is showing signs of stroke or has any other
potential health concern, Ronnie is able to use discretion to alert the family, the emergency
services or both. Frank can video call his close ones using simple voice commands such as
"Video call my son", which enables him to use Skype much more easily than before. Ronnie's
main USP is that it allows older people to exercise in their homes, motivates them to do it and
can track their progress. Ronnie slowly increases the amount of exercise in Frank's routine,
never pushing him too far beyond his limits. The exercises are designed to boost strength in
the major muscle groups, which will help Frank to perform tasks like getting out of a chair and
keeping his balance so as not to fall over. Ronnie's custom-designed exercise "arms" allow
Frank to complete a range of compound resistance exercises based on squats, deadlifts, the
bench press and the overhead press. As Frank completes the routine, he notices he is stronger
and as a result feels happy with his progress. Ronnie is also able to keep track of Frank's
important belongings and ensure that he puts them in the correct place. Ronnie also keeps a
log of when Frank puts things in the wrong place or has moments of confusion, which can be
accessed by a doctor, carer or family member. Ronnie is also able to assist in the kitchen, where
it can dictate the recipes for basic meals or microwave meals on request.
3.1.1.1 End user story summary
From the user story, we can extract basic user requirements from the problems that
Frank, the end-user, is likely to have and the ways in which Ronnie can assist. In summary, the
longer Frank spends time with Ronnie, the more chance that Ronnie will have to learn Frank’s
character and adapt to behave in a way that is satisfactory. This would create a personalised
experience where even though the design of the robot is human centred, Ronnie will continue
to redesign itself through software to adapt to the end-users’ specific requirements. For
example, Ronnie will speak more slowly or use simpler English if it detects that it is not being
understood. Ronnie is also able to assist with medicine, as many of the older generation may
struggle to keep track of the various medications that have been prescribed to them. There is
a safety concern where older people live independently: if an accident were to happen, there
is a risk that no-one would know or be able to help them immediately. Therefore, it is
important that Ronnie is able to analyse any compromising situation and contact the right
people. Calling the emergency services for a minor accident would waste local authority
resources, so Ronnie must be able to accurately determine the severity of the situation.
Loneliness has been shown to be associated with adverse health consequences that can be
both physical and mental (Luanaigh and Lawlor 2008), which supports the inclusion of a video
telephony service that is easy to use. Ronnie's main feature is to motivate and provide
exercises that allow Frank to maintain a level of fitness in his own home. As discussed
previously, Ronnie will learn from Frank and therefore be able to tailor workouts based on its
perception of Frank's strength and motivation. Ideally, Ronnie will start with smaller and less
frequent workouts and progress to more intense and frequent routines.
A basic list of fundamental requirements can be derived:
1. The robot must be able to adapt to the character of the end user over time
2. The robot must be able to assist with any medicine the end user is required to take
3. The robot must be able to detect if the user requires emergency medical attention or
support from a close friend
4. The robot must integrate with existing video telephony services e.g. (Skype, Google
Hangouts)
5. The robot must be able to create personalised training routines for the end-user
6. The robot must be able to use the microwave and dictate basic recipes for the user to follow

3.2 Task Analysis


The next step in the design process is to create a hierarchical task analysis of the robot's
interaction with the user and vice versa.
Figure 11 - Robot interaction with user
Figure 12 - User interaction with robot
3.2.1 Robot interaction with human
Further analysis of the tasks to be completed during robot-human interaction will be
conducted in this section of the report. As shown in figure 11, the robot undertakes 6 main
tasks based on the basic requirements derived in the end user story summary.
3.2.1.1 1: Adapt to the user
The first basic requirement, and the most important functionality of the robot, is to learn
from the user and adapt to them to create a more satisfying and personal experience. The
robot will use the average speech rate of 150 wpm (Martins, Vieira et al. 2007) and a literacy
level of Level 3 (Trust 2019) as its default settings. If the user is unable to understand the
robot, it will be able to detect confusion in the end-user's face and understand verbal feedback
from the user that may confirm this. The robot, however, will always repeat itself at a slightly
higher volume before changing the language settings, to ensure that the user has not simply
misheard it. If the user repeatedly cannot hear the robot, the robot will adjust its volume
accordingly, provided it stays within a safe limit. Where the user is unable to understand the
robot due to lower cognitive limits, the robot will lower the literacy level of its speech and
decrease its speech rate.
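A minimal sketch of this adaptation logic is given below. The say() helper and the two detector arguments are hypothetical stand-ins for the robot's speech synthesiser and its facial-expression/verbal-feedback analysis; the starting volume and step sizes are illustrative assumptions rather than values taken from the design.

# Illustrative sketch only: say() stands in for the text-to-speech output, and
# the detector callables stand in for the confusion and hearing analysis.

DEFAULT_WPM = 150      # average speech rate (Martins, Vieira et al. 2007)
DEFAULT_LITERACY = 3   # UK adult literacy Level 3 (Trust 2019)
MAX_VOLUME_DB = 85     # assumed safe upper volume limit (see SR1.2.1)


class SpeechSettings:
    def __init__(self):
        self.wpm = DEFAULT_WPM
        self.literacy_level = DEFAULT_LITERACY
        self.volume_db = 65            # assumed comfortable starting volume

    def repeat_louder(self):
        # Repeat slightly louder first, in case the user simply misheard,
        # without ever exceeding the safe volume limit.
        self.volume_db = min(self.volume_db + 5, MAX_VOLUME_DB)

    def simplify(self):
        # Lower the literacy level and slow the speech rate.
        self.literacy_level = max(self.literacy_level - 1, 1)
        self.wpm = max(self.wpm - 20, 90)


def say(message, settings):
    # Placeholder for the robot's speech output.
    print(f"[{settings.wpm} wpm, Level {settings.literacy_level}, "
          f"{settings.volume_db} dB] {message}")


def deliver_message(message, settings, user_confused, user_cannot_hear):
    """Speak a message, then adapt volume, pace and literacy level as needed."""
    say(message, settings)
    if user_cannot_hear():
        settings.repeat_louder()
        say(message, settings)
    if user_confused():
        settings.repeat_louder()       # rule out mishearing before simplifying
        say(message, settings)
        if user_confused():
            settings.simplify()
            say(message, settings)


# Example run with a detector that always reports confusion.
deliver_message("It is time for your morning walk.", SpeechSettings(),
                user_confused=lambda: True, user_cannot_hear=lambda: False)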
3.2.1.2 2: Assist with medicine
The second requirement is to ensure that the robot can assist with the user's
medicine. Using its camera, the robot is able to recognise and interpret the text on the labels
of the medication or on the prescription itself. This allows the robot to determine when and
how often the medicine should be taken, while taking the user's preferences into account.
The robot will be able to confirm through the camera that the medicine has been taken, and
will notify the user if the wrong medicine is being taken. The robot will also be able to help
the user sort the medication into a 7-day pill organiser if required or, if the user prefers, to
take it straight from the bottle or box packaging. Upon setup of the robot, a designated
contact will be assigned whom the robot can reach via text message alerts. If the user does
not take their medicine, the designated contact will be alerted for further assistance.
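The sketch below illustrates how the reminder and escalation step could work once a label has been parsed into a dose schedule. The Medication structure, the one-hour grace period and the send_text_alert helper are illustrative assumptions rather than part of the specified design.

# Illustrative sketch of the reminder and alert logic described above.
from dataclasses import dataclass, field
from datetime import datetime, timedelta


@dataclass
class Medication:
    name: str
    dose_times: list                    # e.g. ["08:00", "20:00"], read from the label
    taken_today: set = field(default_factory=set)


def send_text_alert(contact, message):
    # Placeholder: in the real system this would send an SMS to the
    # designated contact chosen during setup.
    print(f"ALERT to {contact}: {message}")


def check_medication(med, now, designated_contact, grace_minutes=60):
    """Remind the user of any due dose and alert the contact if it is missed."""
    for t in med.dose_times:
        due = datetime.combine(now.date(), datetime.strptime(t, "%H:%M").time())
        if t in med.taken_today:
            continue                    # dose already confirmed via the camera
        if due <= now < due + timedelta(minutes=grace_minutes):
            print(f"Reminder: time to take {med.name} ({t} dose).")
        elif now >= due + timedelta(minutes=grace_minutes):
            send_text_alert(designated_contact,
                            f"{med.name} {t} dose has not been taken.")


# Example: a twice-daily tablet checked at 09:30, after the morning dose was missed.
med = Medication("Example tablet", ["08:00", "20:00"])
check_medication(med, datetime(2019, 3, 1, 9, 30), "designated contact")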
3.2.1.3 3: Detect emergencies
The third, and potentially life-saving, basic requirement is for the robot to detect
emergencies that could fatally affect the user. The robot will use video footage of a large
number of people experiencing a stroke as a training data set for a machine learning
algorithm. The robot will therefore have the ability to detect stroke symptoms in faces with
a high degree of accuracy. The same technique will also be used to determine, if the user
falls, whether they are unconscious and/or require more urgent medical attention. The
microphones built into the robot will be used to detect whether the user is still breathing,
which will help determine the severity of the emergency. Where the user reports symptoms
that are directly related to serious health concerns, the robot will also treat this as a severe
risk factor. Once all of these factors have been taken into account, the robot will be able to
grade the severity of the situation and contact the appropriate people and/or services.
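An illustrative sketch of how these signals might be combined into a severity grade and a contact decision is shown below. The thresholds and contact choices are placeholder assumptions for illustration only, not clinically validated rules.

# Illustrative sketch: combine observed and reported signals into a contact decision.
def grade_emergency(stroke_score, fell, breathing, reported_severe_symptom):
    """Return 'emergency services', 'NHS Direct', 'emergency contact' or None.

    stroke_score: 0-1 output of the face-analysis model
    fell: True if a fall was detected
    breathing: True if the microphones still detect breathing
    reported_severe_symptom: True if the user reports a symptom matched in the
        database of severe symptoms
    """
    if stroke_score > 0.8 or not breathing or reported_severe_symptom:
        return "emergency services"          # potentially life-threatening
    if fell and stroke_score > 0.4:
        return "emergency services"
    if fell:
        return "NHS Direct"                  # minor accident, medical advice
    if stroke_score > 0.4:
        return "emergency contact"           # uncertain: ask a person to check
    return None                              # no action required


print(grade_emergency(stroke_score=0.2, fell=True, breathing=True,
                      reported_severe_symptom=False))    # -> NHS Direct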
3.2.1.4 4: Video Calling
The fourth basic requirement is to allow video calling to decrease the loneliness
perceived by the user. The APIs of the most common video telephony services (Skype, Google
Hangouts) will be integrated into the robot's software. This will allow simplified versions of
the apps to be created that are suitable for the end user.
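One way to keep the simplified front end independent of any single service is sketched below. The VideoCallService interface and the two stub classes are hypothetical illustrations; a real implementation would call each provider's own API behind these methods.

# Sketch of an abstraction layer for the simplified video-calling app.
from abc import ABC, abstractmethod


class VideoCallService(ABC):
    @abstractmethod
    def sign_in(self, account): ...

    @abstractmethod
    def call(self, contact_name): ...


class SkypeService(VideoCallService):
    def sign_in(self, account):
        print(f"Signing in to Skype as {account}")            # stub only
    def call(self, contact_name):
        print(f"Starting Skype video call with {contact_name}")  # stub only


class HangoutsService(VideoCallService):
    def sign_in(self, account):
        print(f"Signing in to Google Hangouts as {account}")     # stub only
    def call(self, contact_name):
        print(f"Starting Hangouts video call with {contact_name}")  # stub only


def call_a_friend(service: VideoCallService, account: str, contact: str):
    # The simplified UI only ever sees this one entry point.
    service.sign_in(account)
    service.call(contact)


call_a_friend(SkypeService(), "frank@example.com", "Daughter")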
3.2.1.5 5: Personal training
The fifth, and most defining, basic requirement is the ability of the robot to create
personal training regimes for the user to take part in from the comfort of their home. The
robot will start the programme by verbally asking a series of questions concerning the user's
general level of health and mobility. The robot will then ask the user to perform basic
exercises as a test of fitness. The robot will use both sets of data to create a training regime.
The robot will use its "arms", which are specifically designed to assist with resistance training.
As the user performs each exercise, the robot will be able to detect the range and intensity of
movement and therefore the level of effort. The robot will be able to increase the frequency
of the routines by decreasing the number of rest days, and the intensity of the exercise can
be changed by increasing the level of resistance in its "arms".
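A minimal sketch of how this progression could be applied between sessions is shown below, assuming a simple 0-10 perceived-effort score derived from the measured range and intensity of movement. The starting resistance, step sizes and thresholds are illustrative assumptions.

# Illustrative progressive-overload rule applied between routines.
def next_programme(resistance_kg, rest_days, perceived_effort):
    """Adjust resistance and training frequency based on perceived effort (0-10)."""
    if perceived_effort < 5:
        # The session felt easy: increase the resistance in the arms first,
        # then reduce the number of rest days to raise frequency.
        if resistance_kg < 20:
            resistance_kg += 1.0
        elif rest_days > 1:
            rest_days -= 1
    elif perceived_effort > 8:
        # The session was too hard: back off the resistance slightly.
        resistance_kg = max(resistance_kg - 1.0, 1.0)
    return resistance_kg, rest_days


# Example: a starting programme of 5 kg with 3 rest days, after an easy session.
print(next_programme(5.0, 3, perceived_effort=3))   # -> (6.0, 3)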
3.2.1.6 6: Culinary help
The final basic requirement is for the robot to assist with the preparation of meals.
The robot will be able to use the microwave and deliver meals from it to the user on request.
Recipes for basic meals will be pre-loaded into the robot's software. The user will be able
to confirm via voice or gesture (a thumbs-up) that each step of the chosen recipe has been
completed.
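The step-by-step guidance could follow a simple loop such as the sketch below, where wait_for_confirmation() is a hypothetical stand-in for the voice and gesture recognition; the recipe itself is illustrative.

# Illustrative sketch of step-by-step recipe guidance.
def wait_for_confirmation():
    # Stand-in for voice/gesture recognition; here we simply wait for Enter.
    input("(Say 'done' or give a thumbs-up) ")


def guide_recipe(name, steps):
    print(f"Let's make {name}.")
    for number, step in enumerate(steps, start=1):
        print(f"Step {number}: {step}")
        wait_for_confirmation()        # do not move on until the user confirms
    print("All steps complete. Enjoy your meal!")


guide_recipe("porridge", [
    "Put the oats and milk in the bowl.",
    "Place the bowl in the microwave for two minutes.",
    "Stir carefully and let it cool before eating.",
])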
3.2.2 User interaction with robot
Further analysis of the tasks to be completed during human-robot interaction will be
conducted in this section of the report. As shown in figure 12, the user can initiate 5 main
tasks based on the end user story.
3.2.2.1 1: Where’s my…?
The user will be able to alert the robot that they have misplaced an item through the
touchscreen interface or through a voice command. The robot is able to identify commonly
misplaced or valuable possessions and keep track of their approximate location. The robot
will then report verbally and via the screen where the item is most likely to be, based on
observation of the user.
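A minimal sketch of the underlying behaviour is given below, assuming the camera and image recognition have already produced a record of the most recent sighting of each tracked item; the data and phrasing are illustrative.

# Illustrative sketch of the "Where's my...?" lookup.
from datetime import datetime

# Most recent sighting of each tracked item: (location, timestamp).
last_seen = {
    "keys":    ("kitchen table", datetime(2019, 3, 1, 9, 15)),
    "glasses": ("living room sofa", datetime(2019, 3, 1, 8, 40)),
}


def wheres_my(item):
    """Report the most likely location of a misplaced item."""
    if item in last_seen:
        place, when = last_seen[item]
        return f"Your {item} were last seen on the {place} at {when:%H:%M}."
    return f"I have not seen your {item} recently, but I will keep looking."


print(wheres_my("keys"))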
3.2.2.2 2: Help me
The user will be able to trigger an SOS call through any form of interaction. The
emergency services of the local authority will be notified and a live video feed of the user
can be transmitted to the designated emergency contact.
3.2.2.3 3: Call me
The user can register or sign into an account to gain access to their video telephony
service of choice. The user can then call contacts using either a voice command or the touch
screen interface.
3.2.2.4 4: Progress
The user will be able to find satisfaction in viewing the progress they have made over
the time that they have used the robot. The robot records the level of resistance in its
“arms” and the range of movement and intensity of the exercise performed. This data will
be represented graphically in the user interface of the software of the robot.
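A sketch of how this data could be plotted is shown below, assuming a plotting library such as matplotlib is available on the robot's interface computer; the session data is illustrative.

# Illustrative sketch of the strength-progress chart.
import matplotlib.pyplot as plt

sessions = [1, 2, 3, 4, 5, 6]
resistance_kg = [5, 5, 6, 6, 7, 8]      # resistance recorded by the arms

plt.plot(sessions, resistance_kg, marker="o")
plt.title("Strength progress")
plt.xlabel("Workout session")
plt.ylabel("Arm resistance (kg)")
plt.show()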
3.2.2.5 5: Alone time
The user must be able to enjoy periods of privacy, where they may at times feel
more at ease completely on their own. This means that the robot can be told to leave the
vicinity at any given time and through any form of interaction.

3.3 Specify the user requirements


From the basic list of requirements, it was possible to extract more of the functionality
of the robot and the operations of the user through HTA diagrams. The breakdown of the
diagrams in the previous section of the report analyses each task and proposes potential
methods by which they can be completed. The following section develops this breakdown
into a list of functional user requirements that encompass all of the features the robot must
have, together with their corresponding priority of implementation.

3.3.1 User requirements

Function | Priority | Requirement
1 | 1 | The user must be able to understand the robot clearly
1.1 | 1 | The user must be able to indicate whether they are confused or cannot understand the robot
1.2 | 1 | The user must be able to indicate whether they can hear the robot clearly
1.2.1 | 1 | The hearing of the user must not be affected adversely when using the robot
1.3 | 1 | The UI must be simple and basic for an older person to operate
1.4 | 2 | The user must be able to tell the robot to leave the vicinity at any given time
2 | 1 | The user must be able to receive assistance taking medicine
2.1 | 1 | The user must be able to scan prescriptions or labels easily with the robot
2.2 | 1 | The user must be able to receive assistance organising medicine
2.2.1 | 1 | The user must have the option to take their medication straight from the packaging
2.2.2 | 1 | The user must have the option to sort their medication into a pill organiser
2.2.3 | 1 | The user must be notified if they are taking the wrong medicine
2.3 | 1 | The user must be able to confirm that they have taken their medicine
2.4 | 1 | The user must be able to influence the decision of when they take their medication
2.5 | 1 | The user's designated contact must be notified when the user has not taken their medicine
3 | 1 | The user must have an SOS emergency routine
3.1 | 1 | The user must be able to report to the robot any discomforts they are currently experiencing
3.2 | 1 | The user's face must be analysed periodically to detect signs of stroke
3.3 | 1 | The user must be able to access contacts in an emergency
3.3.1 | 1 | The user's local emergency services must be contacted in the event of an emergency
3.3.1.1 | 1 | The user must be able to speak to the emergency services
3.3.2 | 1 | The user's emergency contact must be notified in the event of an emergency
3.3.2.1 | 2 | The user's emergency contact must be able to stream live video of the user
3.3.3 | 1 | The user's local NHS Direct must be contacted in the event of a minor accident
3.3.3.1 | 1 | The user must be able to speak to NHS Direct via the telephone
3.3.4 | 1 | The user's emergency contact must be contacted in the event of a minor accident
3.3.4.1 | 1 | The user's emergency contact must be able to stream live video of the user
4 | 2 | The user must be able to contact close friends through popular video telephony services
4.1 | 2 | The user can create an account to access video telephony services
4.2 | 2 | The user can use existing accounts to access video telephony services
4.3 | 2 | The user must be able to add new contacts to their video telephony address book
5 | 1 | The user must be encouraged to exercise regularly
5.1 | 1 | The user's training regime must be personalised
5.2 | 1 | The user must take advantage of progressive overload to make strength progress
5.3 | 1 | The user must be able to stop the exercise routine at any time
5.4 | 1 | The user can visually see the strength progress (gains) they have made over the time they have been using the robot
5.5 | 1 | The user must be able to use the robot to exercise comfortably
5.5.1 | 1 | The user must be able to perform a squat using the robot
5.5.2 | 1 | The user must be able to perform a chest press using the robot
5.5.3 | 1 | The user must be able to perform a lateral pull down using the robot
5.5.4 | 1 | The user must be able to perform an overhead press using the robot
5.5.5 | 1 | The user must be able to perform a deadlift using the robot
6 | 2 | The user must be able to receive assistance to make meals
6.1.1 | 3 | The user must be able to receive assistance to make meals using the microwave
6.2.1 | 3 | The user must be able to receive assistance to cook basic meals
6.2.1.1 | 3 | The user must be able to confirm that each step of the recipe has been completed
7 | 2 | The user must be able to tell the robot where they have left things in the house
8 | 1 | The user must feel comfortable when using the robot at all times
8.1 | 1 | The user must be able to use the robot in any position
8.1.1 | 1 | The user must be able to use the robot while seated
8.1.2 | 1 | The user must be able to use the robot while standing

3.4 Produce design solutions to meet user requirements


3.4.1 System requirements
Each user requirement generates a system requirement that details the technical
functionality needed to deliver the feature to the user. Where the table is left blank, the
requirement is already covered by another entry. A table is not the best way to display the
anthropometric data required; therefore, although it is listed as a system requirement, more
in-depth descriptions are included in the next section of the assignment.
Function | System Requirement
1 | The robot will use Level 3 language and a 150 wpm speech rate, which will decrease if the user indicates confusion or a lack of understanding
1.1 | The robot will use machine learning algorithms to detect the emotion of the user and respond to negative feedback if not understood
1.2 | The robot will respond to negative vocal feedback concerning the user's inability to hear
1.2.1 | The robot's maximum volume will not exceed 85 dB
1.3 | The UI of the robot will have limited functionality and will be fully operable via voice control
1.4 | The robot will respond to keywords that instruct it to leave
2.1 | The robot will use the OpenCV image recognition library to recognise text (see the label-reading sketch after this table)
2.2 |
2.2.1 |
2.2.2 | The robot will use image recognition to recognise the pill organiser being used
2.2.3 | The robot will use image recognition to recognise if the wrong bottle/packet has been opened
2.3 | The robot will use image recognition to recognise that the user has taken their medicine
2.4 |
2.5 |
3 | The robot must always have an SOS trigger available through any means of interaction
3.1 | The robot will be able to compare user-reported symptoms as well as observed symptoms (slurred speech, shortness of breath) to a database of severe symptoms that require further medical attention
3.2 | The robot will scan the user's face periodically and use machine learning algorithms to determine if the user is having a stroke
3.3 |
3.3.1 | The robot will be able to contact the emergency services and give a report of the user's condition via voice or text
3.3.1.1 | The robot will use the user's video telephony account to enable telecare from the emergency services
3.3.2 | The robot will send a message alert to the designated contact(s)
3.3.2.1 | The robot will use the user's video telephony account to enable telecare from the emergency contact
3.3.3 | The robot will be able to contact the local NHS Direct and give a report of the user's condition via voice or text
3.3.3.1 | The robot will use the user's video telephony account to enable telecare from NHS Direct
3.3.4 | The robot will send a message alert to the designated contact(s)
3.3.4.1 | The robot will use the user's video telephony account to enable telecare from the emergency contact
4 | APIs of the most popular video telephony services will be used to create a simplified app
4.1 |
4.2 |
4.3 |
5 | The robot will create a training programme based on the user's perceived strength and motivation
5.1 |
5.2 | The robot will increase the resistance in its "arms" during the programme
5.3 | The robot will be able to respond to negative vocal feedback from the user, indicating that it should stop
5.4 |
5.5 | The robot's exercise "arms" will be designed using anthropometric data (see section 3.4.1.1)
5.5.1 |
5.5.2 |
5.5.3 |
5.5.4 |
5.5.5 |
6 |
6.1.1 | The robot will be able to carry meals to and from the microwave
6.2.1 | The robot will come pre-installed with recipe instructions
6.2.1.1 | The robot will respond to voice or gesture feedback indicating that the task has been completed
7 | The robot will use image recognition to detect the user's valuable belongings and where they are likely to put them
8 | The robot will be designed using anthropometric data (see section 3.4.1.1)
8.1 |
8.1.1 |
8.1.2 |
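System requirement 2.1 relies on reading printed text from a camera image. A minimal sketch of how this could be done is shown below; OpenCV handles the image clean-up, and the text recognition itself is assumed to be delegated to an OCR engine such as Tesseract (via the pytesseract package), which would need to be installed alongside OpenCV.

# Sketch of the label-reading step (SR2.1): OpenCV pre-processing plus OCR.
import cv2
import pytesseract


def read_label(image_path):
    """Return the text printed on a photographed medicine label."""
    image = cv2.imread(image_path)
    grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Threshold to boost the contrast of printed text before OCR.
    _, clean = cv2.threshold(grey, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return pytesseract.image_to_string(clean)

# The returned text would then be parsed for the drug name and dosing
# instructions, as described in section 3.2.1.2.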

3.4.1.1 Anthropometry
The system and user requirements ensure that the robot will not fall outside of the user's
cognitive limits and, in part, their physical limits. To make certain that the physical limits will not
be exceeded, anthropometric data will be used to design the robot, thus making it ergonomic
and user-centred. The data will be used to determine:

- The shoulder width of the robot
- The position of the screen
- The arm length and height of the robot

3.4.1.1.1 Shoulder width


When helping the user to exercise, the robot will need to match the shoulder
width of the user to ensure that they are comfortable whilst performing the exercise, in
accordance with user requirements 8 and 5.5 (UR8, UR5.5).

Figure 13 – Female 5th percentile user shoulder width


The shoulder pads must fit all users and will therefore be designed according to the
width of the 5th percentile elderly woman, as shown in figure 13. The shoulder breadth is
307mm and the head width is 132mm. This leaves a space of 87.5mm per shoulder. The
width of the shoulder pads will be 70mm to allow 10% give on each side of the shoulder and
ensure a snug fit for safety. The distance between the shoulders must be adjustable between
the 5th percentile woman and the 95th percentile man. Therefore the minimum distance
between the pads will be 264.5mm and the maximum will be 410.5mm to account for both
extremes.
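The pad arithmetic above can be checked directly from the quoted figures; the short calculation below reproduces the 87.5mm space per shoulder and the 70mm pad width. (The 264.5mm and 410.5mm adjustment limits follow in the same way from the 5th and 95th percentile breadth data, which is not reproduced here.)

# Check of the shoulder-pad dimensions quoted in the text.
shoulder_breadth = 307.0   # mm, 5th percentile elderly female
head_width = 132.0         # mm

space_per_shoulder = (shoulder_breadth - head_width) / 2
print(space_per_shoulder)                 # 87.5 mm of space per shoulder

# 10% give on each side of the shoulder leaves 80% of that space for the pad.
pad_width = space_per_shoulder * 0.8
print(pad_width)                          # 70.0 mm pad width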
3.4.1.1.2 Position of the screen
The optimum distance of the screen from the user must be calculated to ensure that it is at a
safe viewing angle for all users of the robot. This will make certain that UR8 is met, where the
user maintains comfort when using the robot.

Figure 10 – Anthropometric data to show 95th and 5th percentiles of a man and woman sat
down (Tilley et al., 2002)

The anthropometric data shows that range B is the optimum distance between the screen
and the user's eyes. Range B is stated to be from 457mm to 559mm. Therefore, the
robot will be assumed to stand at a distance of 508mm away from the user's eyes. It is also
important to mention that Tilley et al. (2002) recommend that the screen should remain at
the eye level of the user, which varies between users of different heights. This means that the
screen must be adjustable in height by 305mm, where the upper limit of the centre of the
screen should be 861mm from the ground and the lower limit 714mm.
3.4.1.1.3 Length and height of the arms of the robot
The height of the arms must be able to accommodate the user's height both standing
up and sitting down to perform the various exercises. The range of the arm height must
therefore be between 1171mm (5th percentile shoulder height, female) and 1514mm (95th
percentile shoulder height, male).
Figure 14 – Concept art of Ronnie to show user interaction
The robot will stand at a distance of 20 inches from the user; however, to perform the
chest press, the arms must be compressible. The minimum compression must be based on
the maximum arm extension of the 95th percentile elderly man, which is 923mm.

4 Final design overview


From the user and system requirements, the next step was to use 3D modelling
software to make a first iteration of the robot. The software used to create the model was
Solidworks, due to the author’s previous experience and familiarity with the design
programme. This section of the report will focus on the dimensions of the robot and the user
interface.

Figure 15 – Ronnie
Figure 15 shows an isometric view of Ronnie, where it is possible to see all of the
functionality and features of the robot. Ronnie has an oblong design so that it fits more easily
into homes as part of the furniture, where it will not look "out of place", resulting in increased
satisfaction from the user. The mechanical arms allow the user to take part in the training
routines and perform multiple strength-based resistive movements. The grooves in the
mechanical arm allow Ronnie to adjust the height of the arms for greater user comfort
and safety when completing the exercises. Ronnie displays user information on the screen,
which can also be adjusted along its groove to suit different eye-level heights. Ronnie is
able to move around the house using the wheels fixed to its undercarriage. There are only
two wheels, therefore Ronnie must turn by driving them at different speeds ("skid-steer"),
similar to tanks and other tracked vehicles.

4.1 Dimensions
4.1.1 Overall height, width and breadth

Figure 16 – Ronnie height

Figure 17 – Ronnie width and breadth


Key dimensions:
Ronnie total height – 1800mm
Ronnie total width (without arms) - 1000mm
Ronnie total breadth – 500mm

Ronnie's height is 1800mm, which ensures that the robotic arms can adjust to a
range of heights and accommodate most users. Ronnie's width is 1000mm, which ensures
that all of the electrical components have ample space inside the robot and creates space
for a large widescreen interface (44 inches). Furthermore, the wide base helps ensure that
the robot does not fall over and potentially cause injury to the user. The breadth of Ronnie
is 500mm, which is mostly to allow space for larger wheels. Larger wheels travel further per
revolution, allowing a lower-powered motor to achieve the same travel speed and therefore
increasing the efficiency of the system.

4.1.2 Arm adjustable height

Figure 18 – Arm adjustable height


Key dimensions:

Min arm height from ground – 472mm


Max arm height from ground - 1464mm

The arm height is 50mm less than the original 5th percentile female seated height
to account for the extra height that the wheels add to the total height. This range of
heights ensures that 90% of all elderly men and women can use Ronnie to exercise with.
4.1.3 Arm adjustable width

Figure 19 – Arm adjustable width


Key dimensions:
Maximum shoulder pad distance – 410.5mm
Minimum shoulder pad distance – 264.5mm
Shoulder pad width – 70mm

To ensure safety during the workout, the shoulder pads must be able to fit the user's
dimensions. As discussed previously in section 3.4.1.1.1, the dimensions used are
ergonomically selected to ensure that 90% of users can use the device. It is also worth
mentioning that the material used in the shoulder pads should be cushioned to avoid causing
injury to the user.
4.1.4 Screen size and position

Figure 20 – Screen size

Figure 21 - Screen adjustable position


Key dimensions:

Screen width – 980mm


Screen height - 551.25mm
Screen size - 1124.4mm (44 inches)
Minimum screen height from ground - 743.53mm
Maximum screen height from ground - 1046.53mm
Adjustable length - 305mm
A 44-inch screen is an ample size for enjoying multimedia and also allows the robot to
stand further from the user if required. The larger screen size will help users with
deteriorating vision, as the interface will appear bigger. The adjustable slider allows the user
to change the position of the screen until it is comfortable. The values are derived from the
anthropometric data shown previously in the assignment to cater for 90% of users. The screen
is also where the microphone, speakers and camera of the robot are housed, to allow further
interaction with the user.

4.2 User interface

Figure 22 - UI

Figure 22 shows the UI of the robot. It is designed to be simple so as not to exceed the
cognitive limits that the older generation may face, and to bridge the generational gap in the
skills and knowledge needed to operate a computer. There are 4 main options that are based
on the original basic requirements of the robot derived in section 3.1.1.1. The UI uses colours
that contrast with the black background to allow elderly users who are experiencing a decline
in their vision to recognise the different functions. Furthermore, the shapes will also help the
elderly person discern between the functions, where each shape is related to the action: the
"call a friend" shape is a smiley face, the medicine shape is a red cross, the workout shape is
a notepad and the "where's my?" lost-item service is a quotation mark. The SOS is a yellow
triangle indicating a warning and is included on every page of the user interface to remind
the user that help is always available. Settings for more advanced features of the robot (e.g.
Wi-Fi setup, emergency contact details) are available; however, they are locked to ensure that
the elderly user does not accidentally change anything. To increase the bond between the
robot and the user, Ronnie is voice activated only and there is no touchscreen available. This
allows Ronnie to gather useful data about the user and adapt accordingly. Furthermore, if the
user feels as if they are having a conversation with Ronnie, the robot will better serve as a
companion to the user. This may also help decrease the level of loneliness felt by the user
and improve their morale and overall satisfaction. It also reduces the hygiene risk of bacteria
developing on the screen over time if it is not looked after and cleaned regularly.
4.2.1 Call a friend, medicine, workout and “where’s my?” functions

Figure 22, 23, 24, and 25 – User functions


The user will prompt the robot using their voice to go to any of the 4 screens shown
above in figures 22 – 25. The user does not have to say exactly what is on the cards; they act
as a guideline of what could be said, and this freedom of word choice reinforces the bond
between the user and Ronnie. As a guideline, no more than 3 options are available on any
screen, as multiple options may become too complex for the older generation. The UI is
designed to reduce the number of steps taken before reaching a goal, and to do so in the
most natural way possible, via voice.
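A minimal sketch of how a spoken request could be mapped onto the UI functions without requiring an exact phrase is shown below. The keyword lists are illustrative assumptions, and a real implementation would sit behind a proper speech-recognition front end.

# Illustrative keyword-based intent matching for the voice-driven UI.
INTENTS = {
    "call":      ["call", "ring", "speak to", "phone"],
    "medicine":  ["medicine", "pills", "tablets", "prescription"],
    "workout":   ["workout", "exercise", "training"],
    "wheres_my": ["where's my", "where is my", "find my", "lost"],
    "sos":       ["help", "sos", "emergency"],
}


def match_intent(utterance):
    """Return the UI function that best matches what the user said."""
    text = utterance.lower()
    for intent, keywords in INTENTS.items():
        if any(keyword in text for keyword in keywords):
            return intent
    return None     # nothing matched: ask the user to rephrase


print(match_intent("Ronnie, where is my reading glasses?"))   # -> wheres_my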

5 Evaluation Plan
The first iteration (the final design) will be evaluated against the original user
requirements criteria. This will provide feedback on the design process used and show how
well the design is suited to the user. User criteria that are not met will either have to be
redefined, retried or scrapped completely where the requirement falls outside of the user's
needs. Furthermore, as this assignment has used BS ISO 9241-210 as the foundation of its
design process, the benefits of the standard for the human centred design of systems will
additionally be used to evaluate against the requirements:

a) Is the system increasing the productivity of users and the operational efficiency of
organizations?
b) Is the system easier to understand and use, thus reducing training and support costs?
c) Is the system increasing usability for people with a wider range of capabilities and
thus increasing accessibility?
d) Is the system improving user experience?
e) Is the system reducing discomfort and stress?
f) Is the system providing a competitive advantage, for example by improving brand
image?
g) Is the system contributing towards sustainability objectives?
Finally, Nielsen's 5 quality components for evaluating usability will also be used as a
reference in this section of the report.

5.1 The user must be able to understand the robot clearly


UR1 was met by the first system requirement, SR1, which proposed using the UK
standard "levels" of literacy to allow the robot to decrease or potentially increase the
difficulty of the language used so the user can understand the robot. Furthermore, the speed
at which people talk can affect comprehension, and therefore the robot is also able to increase
or decrease the speed of its responses. UR1.1 was met through the use of machine learning
algorithms to detect whether or not the user is confused based on their facial expression.
Machine learning is also used to detect negative feedback, thereby meeting the
requirements of UR1.2.1 and UR1.4. The UI created meets UR1.3, where it was designed to
make certain that there were as few steps as possible to reach each goal. Furthermore, the
minimalistic design did not allow the user to make mistakes often, where very few options
(never more than 4) were available at any time. UR1 meets b, c, d and e of the ISO benefits,
where the robot continues to learn and adapt to the user, thus improving the user experience
and becoming easier to use. This also meets the efficiency, errors and satisfaction criteria of
the Nielsen quality components, indicating that UR1 is beneficial to the user experience.

5.2 The user must be able to receive assistance taking medicine


The robot makes use of image recognition and text processing to understand, and
therefore make decisions on, when the user should be taking medicine. Many older adults,
especially those experiencing cognitive or visual decline, find it difficult to organise and
remember to take their medicine. The robot can help in whichever way the user specifies and
will alert the emergency contact if needed. This therefore meets all requirements listed in
UR2. From the ISO benefits, UR2 promotes e, where previously this task could have caused
discomfort and/or stress to the user. In the Nielsen quality components, UR2 promotes
learnability and efficiency. This is because the robot instructs the user in the exact steps required
to take and organise their medicine and therefore allows the user to complete the task easily.
In addition, if the user previously struggled with the task, the robot's assistance helps
increase the efficiency of the operation.

5.3 The user must have an SOS emergency routine


The SOS routine is an integral part of the UI, where it is included on every screen in the
interface. The robot also uses machine learning algorithms to detect symptoms in the user
(e.g. stroke) which could potentially be life-threatening. The robot also takes the user's
feedback into account and can contact the correct person or organisation in any given situation.
UR3 has been met, and it reduces the user's discomfort and stress where they receive the
correct level of assistance required. Although UR3 is not part of the system to be learned and
"used", the user can gain satisfaction from the extra security that the robot ensures.

5.4 The user must be able to contact close friends through popular video telephony
services
A simple video telephony service was created in the UI, meeting all of the requirements
in UR4. As the UI creation is part of UR1, it inherits the same evaluation.

5.5 The user must be encouraged to exercise regularly


No regular alerts or notifications were included in the UI in the final design, meaning
that it remains to be seen whether UR5 is fully met. The robot has the ability to increase the
resistance in its arms; however, little to no information is given on the robot's ability to
provide a safe level of resistance for the user or an appropriate training regime.
Anthropometric data was used to design the custom arms allowing the user to perform
exercises, meaning UR5.5 was met. Due to its incompleteness, UR5 remains a proof of concept.

5.6 The user must be able to receive assistance making meals


This part of the robot was deemed too expansive given all of the other features required
by the user, and so it was not included in the final design.

5.7 The user must be able to tell the robot where they have left things in the house
Included as part of the UI.
5.8 The user must feel comfortable when using the robot at all times
The robot was designed using anthropometric data for elderly people, meaning that
UR8 was fully achieved. It is the most important part of the design as it ensures the safety
and comfort of the user when using the product. UR8 is validated through evaluation against
the ISO benefits, where it meets a, b, c, d and e. Users become more productive and efficient
when they feel comfortable using the system as it is easier to use. The measurements are
designed to reduce discomfort and therefore ultimately improve the user experience and reduce
stress. In terms of Nielsen's quality components, introducing anthropometry (UR8) into the
design increases user satisfaction.

6 Future Research Directions and Opportunities


From the literature review it was clear that there were many different functionalities
that the robot could have; some are already used in today's technology and others have
been proposed as ideas but are yet to be developed beyond concepts. This section
discusses the potentially beneficial functionalities that could be added to the robot and the
different ways in which it could be used outside of the context of the end user.
The evaluation showed what worked well in the robot and, more importantly, what
did not go as well as originally planned. The first design failure encountered was the
robot's ability to help in the kitchen (UR6). This is mainly due to the conflicting
requirements of UR6 and UR5. It would be difficult to create a robot that had exercise
shoulder pads but also the level of dexterity required to operate a microwave without any
assistance. Another design conflict which prevented UR6 is that the robot is 1000mm
wide. This is wider than most doorways, meaning that the mobility of the robot is very low
and it is unlikely to be able to move in and out of rooms easily. A future implementation would
compromise between the size of the screen and the mobility of the robot. Furthermore, the
robot could have different sets of arms to perform different tasks; the robot need not be
limited to cooking and exercise but could also assist with gardening and housework to benefit
the end-user.
Another failure the evaluation highlighted was UR5, where more specification could
have been included in the final design of the robot to show how it would train the user.
Furthermore, whilst the concept of working out from home is a good idea, progress alone may
not be enough to motivate the user to complete the entire programme by themselves.
One advantage that group exercise classes have over home workouts is the social aspect the
end user may enjoy. It is difficult to recreate the same atmosphere within the home; however,
there could be a solution. The robot already has video telephony capability and could therefore
be used to connect users who are currently exercising with it. Users could
compare progress, play exercise games and co-operate on an internet platform. This would
create an online community of end-users, which may have potential physical and mental
health benefits due to its social and physical aspects.

In future it would have been better to include end-users at all stages of the design
process, as recommended in ISO 9241-210. Although research allowed the collection of
knowledge to paint a picture of the general life, health risks and desires of the older
generation, it would have been more valuable to interview older people about their experiences
to gain even deeper insight into their lives. Furthermore, whilst it was necessary to conduct
an evaluation with the resources to hand, end-user input may have proved more valuable,
as end users would be more likely to report the types of usability problems they may
experience with the design. This information could then be used in the next iteration of the
design, thus making it more user focused.
The assignment was to create a robot for an independent elderly person who lives at
home; however, outside of this scope there are many applications that the robot could be
used for. A similar version of the robot could be used in hospitals to take care of patients and
provide regular workouts depending on their level of fitness. Some of the robot's features
could be used in place of medical staff, leaving them with more time and resources for
more urgent calls. For example, the robot can organise medication based on the prescription
of the end user, which is a function that would translate seamlessly into a hospital. In addition,
the robot's use of machine learning allows it to record and monitor symptoms of the end-user,
which in turn could provide useful information to medical staff. Users could contact people
that they are close to through the easy-to-use video telephony service and take part
in light exercises to ensure that their mental health does not begin to deteriorate.

In the end-user story, Frank had started to have problems making it to his exercise
classes at the gym. A future use for the robot is that it could develop the ability to assist the
user even outside of the house. Frank also had problems remembering where he was when
he visited his local supermarket. For many in old age, going outside independently becomes
more of a challenge. If the robot could also assist the end user in all aspects of their life, it
may increase the confidence of older people that they will be secure having someone, or
"something", watching over them. This may also strengthen the relationship between the user
and the robot, where it would become a better companion in less time than if it were only
used in the home.
With companionship between man and machine becoming more important in an
ageing population, the ability of robots to speak and interact like humans must be improved.
Artificial intelligence and machine learning continue to become increasingly
popular due to the breadth of their potential applications. Machine learning has already been
proposed in this assignment; however, one use that could potentially benefit the man-machine
bond was not discussed. The use of "chatbots" is becoming increasingly common in the service
industry. This is because they can imitate human speech through text with far better accuracy
than previous methods. Therefore, whilst the robot currently has predetermined responses to
interact with the user, a chatbot could be implemented in the future. This would create a more
satisfying experience, where end users could relate to the robot better due to its more human
character.

The robot is controlled via voice; however, an interesting technology that is already in
use in various devices is eye and gesture tracking. The robot, equipped with a camera and
image processing libraries, already has the capability to track the user accurately; however, no
implementation of this technology was included in the specification. This would mainly help
end-users who have difficulty speaking and provide a more comfortable way to access
the UI without the user having to frequently adjust their position when seated.

7 Conclusion
In this assignment, a robotic support system to assist the elderly has been developed
using the standards contained within ISO 9241-210 to apply a human centred design
approach. Whilst the aesthetic design of the robot could improve, the objective of this
assignment was to consider human factors and the overall context of use. Therefore, the
robot possesses a strong capability to meet many user requirements that stem from the likely
needs and desires of the elderly population in their homes.
Before making any designs, concepts or ideas for the robot, it was crucial to gain
more knowledge about the discipline of human factors, which forms the first section of the
literature review. This led to the understanding that all designs in which humans control part
of the system must consider the human and the appropriate methods first, or face the
consequences of human factors problems. Naturally, this explained the importance of
ISO 9241-210, where human centred design is used to ensure that the system is designed
around the end-user. Finally, information concerning the evaluation of the usability of the
design was included, which was used after the final design was created.

Understanding the end-user, the types of problems they experience in daily life and
the types of problems they are likely to create in a system was very valuable information for
the assignment. This section of the literature review discussed that developed nations are
ageing, and that there is also a gap in assistive care skills. It was confirmed that older people
have lower physical and cognitive limits as they age, as well as different attitudes to and skills
in technology. This information formed many of the foundations of the user requirements
later in the assignment.

Whilst there is a gap in assistive care skills, which is likely to become larger as more
people age, the discipline of Gerontechnology is also growing. Therefore, a review on existing
assistive technologies was conducted to understand why they had become successful. One
major discovery is that both PARO and RUDY adapt to the user during interactions to increase
their bond. This formed part of the inspiration for Ronnie to also follow this trait that has
flourished with existing users in the market.

With the research complete, the knowledge gained was used to create a user story
depicting an average end user who may use the product. The central character's problems,
and the ways in which Ronnie might be able to assist him, were described. From the user story,
a basic list of fundamental requirements was derived. These requirements then formed
the foundation of an HTA diagram, from which it was possible to analyse the tasks and predict
any problems that may arise.

From the analysis of the HTA diagram, a final list of user requirements was created,
together with their respective system requirements, including anthropometric data. With a
full set of requirements based around the end user, the final design (1st iteration) of the robot
was made with an interactive UI. The final design was then evaluated against both the usability
requirements and the user requirements developed earlier in the report.
Overall this confirmed that, despite missing some of the user requirements, the robot
has mostly been a success. Ronnie has an ergonomic design that will fit 90% of users, has a UI
that is designed specifically for an older audience and functionalities based upon the common
problems an elderly person is likely to face.
References
Ball, K. and C. Owsley (1993). The Useful Field of View Test: A new technique for evaluating age-
related declines in visual function.
Bock, O., K. Engelhard, P. Guardiera, H. Allmer and J. Kleinert (2008). "Gerontechnology and
human cognition." IEEE Engineering in Medicine and Biology Magazine 27(4): 23-28.
Britannica, E. (2019). "Henry Dreyfuss." Retrieved 25/02, 2019, from
https://www.britannica.com/biography/Henry-Dreyfuss.
BSI (2010). Ergonomics of human-system interaction. Human-centred design for interactive systems.
UK, Standards Policy and Strategy Committee.
Busse, A. L., G. Gil, J. M. Santarém and W. Jacob Filho (2009). "Physical activity and cognition in the
elderly: A review." Dementia & neuropsychologia 3(3): 204-208.
Card, S. K., T. P. Moran and A. Newell (1986). The model human processor: An engineering model of
human performance. Oxford, England, John Wiley & Sons.
Chen, K. and A. H. Chan (2011). "A review of technology acceptance by older adults."
Gerontechnology 10(1): 1-12.
CIEHF. (2019). "What is ergonomics?" Retrieved 25/02, 2019, from
https://www.ergonomics.org.uk/Public/Resources/What_is_Ergonomics_.aspx.
Etymonline. (2019). "presbycousis (n.)." Retrieved 25/02, 2019, from
https://www.etymonline.com/word/presbycousis.
Gates, G. A. and J. H. Mills (2005). "Presbycusis." The Lancet 366(9491): 1111-1120.
Gow, A. J., L. Penke, L. M. Houlihan, I. J. Deary, S. E. Harris, J. M. Starr, J. Corley, R. E. Marioni and S.
B. Rafnsson (2009). "Age-associated cognitive decline." British Medical Bulletin 92(1): 135-152.
Gow, J. and M. Gilhooly (2003). Risk factors for dementia and cognitive decline, NHS Health
Scotland.
Grandjean, E. (1980). Fitting the task to the man : an ergonomic approach / E. Grandjean ;
[translated by Harold Oldroyd]. London, Taylor & Francis.
IEA. (2019). "What is ergonomics?" Retrieved 25/02, 2019, from https://iea.cc/whats/index.html.
inforobotics (2019).
Islam, M., M. Asadujjaman, M. Nuruzzaman and M. Hossain (2013). Ergonomics Consideration for
Hospital Bed Design: A Case Study in Bangladesh.
Jastrzembski, T. S. and N. Charness (2007). "The Model Human Processor and the older
adult: Parameter estimation and validation within a mobile phone task." Journal of
Experimental Psychology: Applied 13(4): 224.
Kane, R. L., M. Butler, H. A. Fink, M. Brasure, H. Davila, P. Desai, E. Jutkowitz, E. McCreedy, V. A.
Nelson and J. R. McCarten (2017). "Interventions to prevent age-related cognitive decline, mild
cognitive impairment, and clinical Alzheimer’s-type dementia."
Luanaigh, C. Ó. and B. A. Lawlor (2008). "Loneliness and the health of older people." International
Journal of Geriatric Psychiatry 23(12): 1213-1221.
Martins, I. P., R. Vieira, C. Loureiro and M. E. Santos (2007). "Speech Rate and Fluency in Children
and Adolescents." Child Neuropsychology 13(4): 319-332.
McCue, T. (2017). "Elderly And Disabled Assistive Technology Market To Surpass $26 Billion By
2024." from https://www.forbes.com/sites/tjmccue/2017/03/21/elderly-and-disabled-assistive-
technology-market-to-surpass-26-billion-by-2024/#431b35ab69ea.
Morley, J. E., R. N. Baumgartner, R. Roubenoff, J. Mayer and K. S. Nair (2001). "Sarcopenia." Journal
of Laboratory and Clinical Medicine 137(4): 231-243.
NCES (2016). Total fall enrollment in degree-granting postsecondary institutions, by attendance
status, sex, and age: Selected years, 1970 through 2027.
NHS. (2019). "Is there a cure for dementia?" Retrieved 25/02, 2019, from
https://www.nhs.uk/conditions/dementia/cure/.
Nielsen, J. (2012). "Usability 101: Introduction to Usability." Retrieved 25/02, 2019, from
https://www.nngroup.com/articles/usability-101-introduction-to-usability/.
Norman, D. A. (1993). Things that make us smart: defending human attributes in the age of
the machine, Addison-Wesley Longman Publishing Co., Inc.
Norman, D. A. (2002). The Design of Everyday Things, Basic Books, Inc.
ONS. (2018). "Living longer: how our population is changing and why it matters." from
https://www.ons.gov.uk/peoplepopulationandcommunity/birthsdeathsandmarriages/agein
g/articles/livinglongerhowourpopulationischangingandwhyitmatters/2018-08-13.
parorobots. (2019). "PARO Therapeutic Robot." from http://www.parorobots.com/.
Peek, S., K. G. Luijkx, H. Vrijhoef, M. Nieboer, S. Aarts, C. S. van der Voort, M. D. Rijnaard and
E. J. M. Wouters (2017). Origins and consequences of technology acquirement by
independent-living seniors: Towards an integrative model.
Prince, M., M. Knapp, M. Guerchet, P. McCrone, M. Prina, A. Comas Herrera, A. Wittenberg,
R. Adelaja, B. Hu, B. King, D. Rehill and D. Salimkumar (2014). Dementia UK: update.
RNIB. (2018). "Eye health and sight loss stats and facts." Retrieved 25/02, 2019, from
https://www.rnib.org.uk/sites/default/files/Eye%20health%20and%20sight%20loss%20stats
%20and%20facts.pdf.
SilverEco. (2018). "2018 Ageing Report: Europe’s population is getting older." Retrieved
25/02, 2019, from http://www.silvereco.org/en/2018-ageing-report-europes-population-is-
getting-older/.
Stanton, N. A., P. M. Salmon, G. H. Walker, C. Baber and D. P. Jenkins (2006). Human Factors
Methods: A Practical Guide for Engineering And Design, Ashgate Publishing Company.
Trust, N. L. (2019). "What do adult literacy levels mean? | National Literacy Trust."
Retrieved 24/02, 2019, from https://literacytrust.org.uk/parents-and-families/adult-
literacy/what-do-adult-literacy-levels-mean/.
Working Group on Speech Understanding and Aging (1988). "Speech understanding and
aging." Journal of the Acoustical Society of America 83(3): 859-895.
Wada, K., T. Shibata and Y. Kawaguchi (2009). Long-term robot therapy in a health service
facility for the aged - A case study for 5 years. 2009 IEEE International Conference on
Rehabilitation Robotics.
