
UX Evaluation Methods
MEETING 4 – UI/UX EVALUATION

HANIFAH M AZ-ZAHRA, S.Sn. M.Ds


What is evaluation?
• Evaluation is a process that critically
examines a system.
• It involves collecting and analyzing
information about a system’s
activities, characteristics, and
outcomes.
• Its purpose is to make judgments about
a system, to improve its effectiveness,
and/or to inform the stakeholders
involved.
Patton, M.Q. (1987). Qualitative Research Evaluation Methods. Thousand Oaks, CA: Sage Publishers.
When?
• Ideally, evaluation should occur throughout
the design life cycle, with the results of the
evaluation feeding back into modifications to
the design.
• Clearly, it is not usually possible to perform
extensive experimental testing continuously
throughout the design, but analytic and
informal techniques can and should be used.
• We will consider evaluation techniques under
two broad headings: expert analysis and user
participation.

Alan Dix, Janet E. Finlay, Gregory D. Abowd, and Russell Beale. 2003.
Human-Computer Interaction (3rd Edition). Prentice-Hall, Inc., Upper Saddle River, NJ, USA.
• Evaluation throughout the design life
cycle has the advantage that problems can
be ironed out before considerable effort
and resources have been expended on the
implementation itself: it is much easier
to change a design in the early stages of
development than in the later stages.

Why?
1. Improve system design and
implementation.
• It is important to periodically
assess and adapt your activities to
ensure they are as effective as they
can be.
• Evaluation can help you identify
areas for improvement and ultimately
help you realize your goals more
efficiently.
http://meera.snre.umich.edu/evaluation-what-it-and-why-do-it#
2. Demonstrate system impact.
• Evaluation enables you to
demonstrate your system’s success or
progress.
• The information you collect allows
you to better communicate your
system's impact to others, which is
critical for public relations, staff
morale, and attracting and retaining
support from current and potential
funders.
What type of evaluation?
There are many different dimensions to
consider when choosing the best assessment
approach:
• Goal: Summative (on the final product) or
formative (during the process)
• Approach: Objective or subjective
• Data: Quantitative or qualitative
• Moments: Momentary, episodic, or overall
UX
• Setup: Lab or field
When?
• Do user research at whatever stage you’re
in right now. The earlier the research,
the more impact the findings will have on
your product, and by definition, the
earliest you can do something on your
current project (absent a time machine) is
today.
• Do user research at all the stages. As we
show below, there’s something useful to
learn in every single stage of any
reasonable project plan, and each research
step will increase the value of your
product by more than the cost of the
research.
https://www.nngroup.com/articles/ux-research-cheat-sheet/
• Do most user research early in the
project (when it’ll have the most
impact), but conserve some budget for a
smaller amount of supplementary
research later in the project. This
advice applies in the common case that
you can’t get budget for all the
research steps that would be useful.

Design Life Cycle (Nielsen)
Discover
• The discovery stage is when you try to
illuminate what you don’t know and
better understand what people need.
• An important goal at this stage is to
validate and discard assumptions, and
then bring the data and insights to the
team.
Explore
• Exploration methods are for
understanding the problem space and
design scope and addressing user needs
appropriately.
Test
• Testing and validation methods are for
checking designs during development and
beyond, to make sure systems work well
for the people who use them.
Listen
• Listen throughout the research and
design cycle to help understand
existing problems and to look for new
issues. Analyze gathered data and
monitor incoming information for
patterns and trends.
Top UX Research Methods

Discover
• Field study
• Diary study
• User interview
• Stakeholder interview
• Requirements & constraints gathering

Explore
• Competitive analysis
• Design review
• Persona building
• Task analysis
• Journey mapping
• Prototype feedback & testing (clickable or paper prototypes)
• Write user stories
• Card sorting

Test
• Qualitative usability testing (in-person or remote)
• Benchmark testing
• Accessibility evaluation

Listen
• Survey
• Analytics review
• Search-log analysis
• Usability-bug review
• Frequently-asked-questions (FAQ) review
Summative vs. Formative
Evaluations fall into one of two broad
categories:
• Formative evaluations are conducted during
system development and implementation and
are useful if you want direction on how to
best achieve your goals or improve your
program.
• Summative evaluations should be completed
once your system is well established and
will tell you to what extent the system is
achieving its goals.
http://meera.snre.umich.edu/evaluation-what-it-and-why-do-it
Type of Evaluation and Purpose

Formative

1. Needs Assessment: determines who needs the system, how great the need is, and what can be done to best meet that need; identifies which audiences are not currently served by existing systems and provides insight into what characteristics new systems should have to meet those audiences’ needs.

How to conduct a needs assessment:
https://coast.noaa.gov/needsassessment/#/

2. Process or Implementation Evaluation: examines the process of implementing the system and determines whether the system is operating as planned. It can be done continuously or as a one-time assessment, and its results are used to improve the system. A process evaluation may focus on the number and type of participants reached and/or on determining how satisfied these individuals are with the system.
Summative

1. Outcome Evaluation: investigates to what extent the system is achieving its outcomes. These outcomes are the short-term and medium-term changes in system participants that result directly from the system. For example, outcome evaluations may examine improvements in participants’ knowledge, skills, attitudes, intentions, or behaviors.

2. Impact Evaluation: determines any broader, longer-term changes that have occurred as a result of the system. These impacts are the net effects, typically on an entire school, community, organization, society, or environment.
Phases of Product Development
1. STRATEGIZE: In the beginning phase of
product development, you typically
consider new ideas and opportunities for
the future. Research methods in this
phase can vary greatly.
2. EXECUTE: Eventually, you will reach a
"go/no-go" decision point, when you
transition into a period when you are
continually improving the design
direction that you have chosen. Research
in this phase is mainly formative and
helps you reduce the risk of execution.
3. ASSESS: At some point, the product or
service will be available for use by
enough users so that you can begin
measuring how well you are
doing. This is typically summative
in nature, and might be done against
the product’s own historical data or
against its competitors.
Product Development Phase

Goal
• Strategize: inspire, explore, and choose new directions and opportunities
• Execute: inform and optimize designs in order to reduce risk and improve usability
• Assess: measure product performance against itself or its competition

Approach
• Strategize: qualitative and quantitative
• Execute: mainly qualitative (formative)
• Assess: mainly quantitative (summative)

Typical Methods
• Strategize: field studies, diary studies, surveys, data mining, or analytics
• Assess: usability benchmarking, online assessments, surveys, A/B testing
Objective vs. Subjective:
User vs. Expert
So, what's the difference?
• An expert is a UX/UI specialist who
inspects a product to identify
potential UX/UI problems.
• A user is a representative of the target
audience who evaluates your product
while performing tasks.

https://www.webcredible.com/blog/expert-usability-review-vs-usability-testing/
User vs Expert
They're quite similar in many ways in
that both:
• Find and prioritise UX/UI problems
• Evaluate designs in the context of
tasks

Advantages/Disadvantages
Expert
+ Tends to find high-level breaches of design rules and consistency
+ Finds some issues that users don't
+ Cheaper and quicker to do
+ Can be involved at any stage in the development process
– Misses usability issues that arise during testing with users
– Reports false alarms (i.e. not real issues)
Advantages/Disadvantages
User
+ Better at finding issues related to special domain knowledge and task flows
+ Gives a truer picture of the real problems people encounter, because findings are derived from real users in the first place
+ Less conjecture: feedback comes straight from real users
– Takes more time to plan and organise, and is more expensive

User vs Expert
• Expert reviews are good, but user
testing is da Daddy!
• The most effective approach is to try
and integrate both techniques.
• In practice, people often use expert
usability reviews early on to
straighten up their design in
preparation for user testing.
• Use real users and accept no
substitute!
Quantitative vs. Qualitative
Two types of data can be collected:
• Qualitative (qual) data → how and why
• Quantitative (quant) data → how many and how much

https://www.nngroup.com/articles/quant-vs-qual/
Qual
• Qualitative data offer a direct assessment of
the system: researchers will observe
participants struggle with specific UI
elements and infer which aspects of the design
are problematic and which work well.
• They can always ask participants follow-up
questions and change the course of the study
to get insights into the specific issue that
the participant experiences.
• Then, based on their own UX knowledge and
possibly on observing other participants
encounter (or not) the same difficulty,
researchers will determine whether the
respective UI element is indeed poorly
designed.
• There is no formal assurance that the
findings from a qual study are objective
and representative of the whole target
population.

Quant
• Quantitative data offer an indirect
assessment of the usability of a design.
• They can be based on users’ performance on
a given task (e.g., task-completion times,
success rates, number of errors) or can
reflect participants’ perception of
usability (e.g., satisfaction ratings).
• Quantitative metrics are simply numbers,
and as such, they can be hard to interpret
in the absence of a reference point. For
example, if 60% of the participants in a
study were able to complete a task, is
that good or bad?

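One way to give such a number a reference frame is to report it with a confidence interval. A minimal Python sketch, assuming a hypothetical study in which 12 of 20 participants completed the task (the normal approximation used here is only one of several interval methods):

```python
import math

def success_rate_ci(successes, n, z=1.96):
    """95% confidence interval for a task-success rate,
    using the normal (Wald) approximation."""
    p = successes / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - margin), min(1.0, p + margin)

# Hypothetical study: 12 of 20 participants completed the task.
rate, low, high = success_rate_ci(12, 20)
print(f"success rate = {rate:.0%}, 95% CI = [{low:.0%}, {high:.0%}]")
```

With only 20 participants the interval is wide (roughly 39% to 81%), which is exactly why a raw 60% is hard to judge on its own.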
• While quant data can tell us that our
design may not be usable relative to a
reference point, they do not point out
what problems users encountered. Even
worse, they don’t tell us what changes
to make in the design to get a better
result next time.
• One advantage of quant over qual
is statistical significance: a degree of
protection against randomness.

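The statistical-significance check mentioned above can be sketched with a pooled two-proportion z-test comparing task-success rates between two designs. The success counts below are hypothetical:

```python
import math

def two_proportion_z(s1, n1, s2, n2):
    """z statistic of a pooled two-proportion test comparing
    success rates s1/n1 and s2/n2."""
    p1, p2 = s1 / n1, s2 / n2
    pooled = (s1 + s2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# Hypothetical: old design 60/100 successes, redesign 75/100.
z = two_proportion_z(60, 100, 75, 100)
print(f"z = {z:.2f}")  # |z| > 1.96 means significant at the 5% level
```

Here z ≈ 2.26, so an improvement of this size on these sample sizes would be unlikely to be pure chance.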
Quant vs Qual
• Both qualitative and quantitative
testing are essential in the iterative
design cycle.
• Qual studies are more common in the UI/UX industry.
• Quant studies are the only ones that allow us to put a number on a redesign and clearly say how much our new version improved over the old one.

Qual Research vs. Quant Research

Questions answered
• Qual: why?
• Quant: how many and how much?

Goals
• Qual: both formative and summative (inform design decisions; identify usability issues and find solutions for them)
• Quant: mostly summative (evaluate the usability of an existing site; track usability over time; compare the site with competitors; compute ROI)

When it is used
• Qual: anytime: during a redesign, or when you have a final working product; well suited for identifying the main problems in a design
• Quant: when you have a working product (either at the beginning or end of a design cycle); best for evaluating the overall system

Outcome
• Qual: findings based on the researcher’s impressions, interpretations, and prior knowledge
• Quant: statistically meaningful results that are likely to be replicated in a different study

Methodology
• Qual: few participants; flexible study conditions that can be adjusted according to the team’s needs
• Quant: many participants; well-defined, strictly controlled study conditions; usually no think-aloud
Attitudinal vs. Behavioral
Attitude → what people say
• Attitudinal research usually aims to
understand or measure people's stated
beliefs, which is why it is used
heavily in marketing departments.
Behaviour → what people do
• Behavioral research seeks to understand
what people do with the product or
service in question.

https://www.nngroup.com/articles/which-ux-research-methods/
Momentary vs.
Episodic
UX Moments
• Anticipated UX refers to the period of
time before first use and focuses on the
expectations a person has of the product,
service, or system.
• Momentary UX refers to any perceived
change during the interaction in the very
moment it occurs.
• Episodic UX is an appraisal of a specific
usage episode extrapolated from a wider
interaction event.
• Remembered UX is the memory the user has
after having used the system for a while.
http://ieeexplore.ieee.org/document/7733474/
20 UX Methods in Brief
1. Usability-Lab Studies: participants
are brought into a lab, one-on-one
with a researcher, and given a set of
scenarios that lead to tasks and
usage of specific interest within a
product or service.
2. Ethnographic Field Studies:
researchers meet with and study
participants in their natural
environment, where they would most
likely encounter the product or
service in question.
3. Participatory Design: participants
are given design elements or creative
materials in order to construct their
ideal experience in a concrete way
that expresses what matters to them
most and why.
4. Focus Groups: groups of 3–12 participants are led through a discussion about a set of topics, giving verbal and written feedback through discussion and exercises.

5. Interviews: a researcher meets with
participants one-on-one to discuss in
depth what the participant thinks
about the topic in question.
6. Eyetracking: an eyetracking device is
configured to precisely measure where
participants look as they perform
tasks or interact naturally with
websites, applications, physical
products, or environments.

7. Usability Benchmarking: tightly
scripted usability studies are
performed with several participants,
using precise and predetermined
measures of performance.
8. Moderated Remote Usability
Studies: usability studies conducted
remotely with the use of tools such
as screen-sharing software and remote
control capabilities.

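For usability benchmarking (method 7), time on task is a typical predetermined measure. Because task times are right-skewed, the geometric mean is often reported instead of the arithmetic mean; a sketch with hypothetical timings:

```python
import math

def geometric_mean(times):
    """Geometric mean of task-completion times; less sensitive to
    the occasional very slow participant than the arithmetic mean."""
    return math.exp(sum(math.log(t) for t in times) / len(times))

# Hypothetical benchmark run: seconds to complete the same task.
times = [34.0, 41.0, 28.0, 97.0, 36.0]
print(f"arithmetic mean = {sum(times) / len(times):.1f} s")
print(f"geometric mean  = {geometric_mean(times):.1f} s")
```

The one 97-second outlier pulls the arithmetic mean up to 47.2 s, while the geometric mean stays near 42 s.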
9. Unmoderated Remote Panel Studies: a
panel of trained participants who
have video recording and data
collection software installed on
their own personal devices uses a
website or product while thinking
aloud, having their experience
recorded for immediate playback and
analysis by the researcher or
company.

10. Concept Testing: a researcher shares
an approximation of a product or
service that captures the key essence
(the value proposition) of a new
concept or product in order to
determine if it meets the needs of
the target audience; it can be done
one-on-one or with larger numbers of
participants, and either in person or
online.

11. Diary/Camera Studies: participants
are given a mechanism (diary or
camera) to record and describe
aspects of their lives that are
relevant to a product or service, or
simply core to the target
audience; diary studies are typically
longitudinal and can only be done for
data that is easily recorded by
participants.

12. Customer Feedback: open-ended and/or
close-ended information provided by a
self-selected sample of users, often
through a feedback link, button,
form, or email.
13. Desirability Studies: participants
are offered different visual-design
alternatives and are expected to
associate each alternative with a set
of attributes selected from a closed
list; these studies can be both
qualitative and quantitative.
14. Card Sorting: a quantitative or
qualitative method that asks users to
organize items into groups and assign
categories to each group. This method
helps create or refine the information
architecture of a site by exposing
users’ mental models.
15. Clickstream Analysis: analyzing the record of screens or pages that users click on and see as they use a site or software product; it requires the site to be instrumented properly or the application to have telemetry data collection enabled.
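At its simplest, clickstream analysis means grouping logged page views into per-session paths and counting how often the same path recurs. A toy sketch over a hypothetical event log:

```python
from collections import Counter

# Hypothetical clickstream log: (session_id, page) events in time order.
events = [
    ("s1", "/home"), ("s1", "/search"), ("s1", "/product"),
    ("s2", "/home"), ("s2", "/search"), ("s2", "/product"),
    ("s3", "/home"), ("s3", "/help"),
]

# Group pages into per-session paths, then count identical paths.
paths = {}
for session, page in events:
    paths.setdefault(session, []).append(page)

path_counts = Counter(tuple(p) for p in paths.values())
for path, n in path_counts.most_common():
    print(n, " -> ".join(path))
```

Real telemetry pipelines perform the same aggregation at scale over instrumented page-view events.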
16. A/B Testing (also known as
“multivariate testing,” “live
testing,” or “bucket testing”): a
method of scientifically testing
different designs on a site by
randomly assigning groups of users to
interact with each of the different
designs and measuring the effect of
these assignments on user behavior.

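The random assignment in A/B testing is commonly implemented by hashing a stable user id, so a returning user always sees the same design. A sketch (the experiment name and user ids here are hypothetical):

```python
import hashlib
from collections import Counter

def variant(user_id, experiment="exp-hypothetical", arms=("A", "B")):
    """Deterministically map a user to one arm of an experiment
    by hashing the (experiment, user) pair."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return arms[int(digest, 16) % len(arms)]

# The same user always lands in the same bucket,
# and across many users the split is close to 50/50.
print(variant("user-42"))
print(Counter(variant(f"user-{i}") for i in range(1000)))
```

Measuring the effect then reduces to comparing a behavioral metric between the two buckets.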
17. Unmoderated UX Studies: a quantitative or qualitative, automated method that uses a specialized research tool to capture participant behaviors (through software installed on participant computers/browsers) and attitudes (through embedded survey questions), usually by giving participants goals or scenarios to accomplish with a site or prototype.
18. True-Intent Studies: a method that asks
random site visitors what their goal or
intention is upon entering the site,
measures their subsequent behavior, and
asks whether they were successful in
achieving their goal upon exiting the
site.
19. Intercept Surveys: a survey that is
triggered during the use of a site or
application.
20. Email Surveys: a survey in which
participants are recruited from an email
message.

https://www.nngroup.com/articles/which-ux-research-methods/