
Achieving Design Enlightenment: Defining a New User Experience Measurement
Framework

Alexander Thayer
University of Washington
huevos@alumni.washington.edu
Thérèse E. Dugan
University of Washington
rezza21@gmail.com






Abstract
By using self-reported metrics, user researchers, designers, and usability experts can pinpoint the aspects of a user experience that require modification during, rather than after, the development process. Comparative data are potentially the most valuable to designers and researchers because these data provide insight into users' perceptions and expectations about specific aspects of the user experience. In this paper we extend the theory of expectation disconfirmation and suggest that, by collecting comparative data on users' perceptions and experiences as they relate to the specific experience goals of a project, project teams can make the right choices and trade-offs when designing the user experience. We also suggest specific methodologies from the field of education research that can inform project teams' decisions at the outset of product design projects.
Keywords: user experience, user-centered design,
experience metrics, experience framework.


Defining User Experience
Although there are many competing definitions of the term "user experience," scholars and practitioners agree that a user experience is not a directly measurable event. In fact, a user experience is not even an event, but "an infinite amount of smaller experiences, relating to contexts, people, and products" [1, p. 420]. Davis (2003) claims that, however you may define the idea of the user experience, "it cannot be captured, stored, or transmitted, only the data which occasion experiences in human minds can" [2, p. 51]. Ultimately, experience is "as much about what individuals bring to the interaction as it is about what the designer leaves there" [3, p. 18:9], and "What designers can do is provide resources through which users structure their experiences" [ibid].
Clearly it is difficult to collect and analyze data that result from "an intangible process of interaction among humans and the world that has its existence in human minds" [2, p. 46]. That challenge has not stopped the legions of practitioners who have examined user experience design and drawn conclusions about how to improve their products and interfaces through iterative design and user testing.
Many of the present methods for studying the user
experience are reflective, post-hoc methods that are used
after the product under study has been released into the
world. There are many current examples of experience
assessment techniques, including sales figures, customer
satisfaction and recommendation scores, consumer and
technical reviews, and user feedback from usability tests.
While these methods are useful to project teams in a reflective way, they provide limited insight into users' feelings about their experience with a specific product, and even less insight to project team members who must make experience design decisions about a product that has not yet been released.
The goal of this paper is threefold:
1. Help practitioners understand how to establish testable metrics for the product experiences they are creating;
2. Quantify users' feelings about a given product experience using those metrics; and
3. Use those data to modify the user experience of a product that is still being developed.
We will examine current user experience frameworks,
describe how to create practical, testable metrics that
measure an experience, and conclude with the exposition
and application of a new user experience measurement
framework that yields experience-related data when
implemented professionally.
Throughout this paper, the word "product" encompasses not just software and hardware products and interfaces, but all products and interfaces, everything from shoes to sushi to seesaws.
Selecting a Framework for Understanding
the User Experience
Product designers know that expressing an opinion about the meaning of user experience says something about their philosophy of design and the design process. If their perspective is too product-focused, they may be seen as modernists: people who favor form over function at the expense of user needs, or who seem unaware that other people will attempt to use the product being designed. However, if their perspective on user experience is too user-focused, they may be regarded not as designers but as stenographers who slavishly transmit the stated needs and goals of target users into design sketches and interface prototypes.
One solution to this dilemma is to be magnanimous in one's approach to design, encompassing product- and user-centered philosophies simultaneously. Tullis and Albert (2008) claim that "Usability is usually considered the ability of the user to use the thing to carry out a task successfully," whereas user experience "takes a broader view [than usability], looking at the individual's entire interaction with the thing, as well as the thoughts, feelings, and perceptions that result from that interaction" [4, p. 4]. This definition is useful because it is so broad, distinguishing between "use" and "things" in the most abstract way and making a very general statement about the elements that comprise the user experience. However, it is perhaps too broad as a mission statement for practitioners in the fields of product design and research.
There is a third philosophical possibility. Forlizzi and Battarbee (2004) explore three approaches to understanding the nature of experience itself, which they describe as "product-centered, user-centered, and interaction-centered" [5, p. 262]. Forlizzi and Battarbee align themselves with the interaction-centered approach to understanding user experience, and they separate the user experience concept into three dimensions: "experience, an experience, co-experience" [5, p. 263]. Ultimately, their framework for understanding user experience focuses on "interactions between individuals and products and the experiences that result" [5, p. 262]. This framework is a useful step toward connecting design research activities, usability methods, and traditional ethnographic and sociocultural anthropology research methods to the process of measuring the user experience. In particular, their distinction of "an experience" from the morass of experience in general is useful in that an experience has "a beginning and an end" [5, p. 263]. In other words, an experience is a specific event that can be measured during a usability study, and about which data can be gathered.
The most compelling framework for understanding the user experience, however, comes from the work of Wright, Wallace, and McCarthy (2008). They describe the process of "sense making" [3, p. 18:6] that humans undergo as they have experiences, a process that can be mapped to well-understood, validated practices of professional usability experts and user researchers. Wright et al. cite six processes of sense making, although we argue that one of those processes (reflecting) should be split into two parts. This modified set of processes is provided in Table 1, adapted from the framework provided by Wright et al. (refer to [3, p. 18:6-18:7] for the original list and all quoted information):
Table 1. Wright et al. (2008) user experience framework elements explained.

Anticipating: Considering what type of experience to have with a product; the expectations that people bring to an experience, which help shape the experience itself and their level of satisfaction.

Connecting: The preattentive response to seeing a product for the first time; occurs very rapidly, before meaning is made of the experience.

Interpreting: Absorption of the initial details of the experience; may involve changing expectations to fit the actual experience rather than the anticipated experience.

Immediate Reflecting: Initial judgments about the experience as it unfolds, and the act of assigning some sort of value to that experience.

Future Reflecting: Thinking beyond the immediate experience to consider it in the context of other experiences; this stage also involves considering how future experiences might be better or worse than the first experience.

Recounting: Subjectively describing the experience to other people and considering their responses as potential modifiers to the original thoughts about the experience.

Appropriating: Creating a personal mythology around the experience; internalizing an experience and potentially fitting that experience and the associated product into one's life (or adjusting one's life around that product or experience).
As Table 1 demonstrates, the process of having an
experience is complex. Some of the elements above
suggest clear ways in which they could be measured in a
lab- or field-based study. However, some of the elements
may be more challenging to capture from a data collection
perspective: The process of connecting, for example, may
require eye tracking or more in-depth neurological
assessment to detect preattentive brain activity, if such
data seem necessary or useful.
The remainder of this paper describes how to take this
essential framework and modify it so that usability
experts and user researchers can implement it in actual
user studies and needs assessments.
Creating Experience Metrics for Your
Projects
Before any user studies are conducted, the project team
charged with the design of a specific product and user
experience must understand the experience goals for the
product. These goals must be clearly outlined at the outset
of the project, typically after initial market research and
other planning activities are completed. This outlining process is often a collaboration among the business owners, stakeholders, and project leaders (such as the program and product managers, the marketing manager, the researchers, and so on). This team
writes a preliminary set of goals that are expressed as a
series of statements that describe what type of experience
the team is attempting to create. Although these goals
may be refined later in the project, they should not be
abandoned or significantly rewritten to suit the outcomes
of user testing; rather, they should hold steady as the
tenets of the user experience on which the product design
is based.
The process of validating experience goals is
challenging, in part because the goals themselves are not
usually worded as quantifiable, measurable objectives.
For example, a typical goal for a user experience is simplicity, as in, "I find this interface simple to understand." The project team members who are responsible for building the product and designing the corresponding experience must bear this goal in mind as they make decisions about how to deliver that experience.
The goal of simplicity, as stated, is so broad that the
project team may struggle to understand when their
product has achieved this goal. Additionally, different
users of the product will define simplicity in various ways
based on their backgrounds, contexts of use, and so on.
The project team must keep this ambiguity in mind as
they strike a balance between rich functionality and
interaction simplicity. It is up to the usability experts and
user researchers to determine, through user studies,
whether each design decision helps the product meet the
stated experience goals.
User experience designers and researchers have only
recently begun creating validated, broadly accepted
systems of measurement to determine the potential
success of a user experience as it is being designed and
developed. In fact, many designers may not even regard
this process as a necessary component of their work. In their recent survey of 103 user-centered design (UCD) professionals, Mao, Vredenburg, Smith, and Carey (2005) found that respondents mentioned a total of 191 indicators of UCD effectiveness with little consensus; "Fifteen individuals reported that no effectiveness measure was in place" [6, p. 107]. Additionally, they found that "aside from external (customer) satisfaction, none [of the indicators] were mentioned by more than 20% of the respondents" [ibid]. Mao et al. ultimately conclude that "the measurement of customer satisfaction was seen as outside of the UCD process" [6, p. 108] and that common characteristics of an ideal UCD process were not found in practice, namely "focusing on the total user experience, end-to-end user involvement in the development process, and tracking customer satisfaction" [6, p. 109].
These results are not encouraging, particularly if you
believe that an ideal UCD process exists and can be
followed. One reason for this lack of adoption may be the
absence of clear metrics that enable the measurement of
the total user experience, or that provide direct inputs
from user testing back into the development process.
However, there is a solution to this dilemma: develop
experience metrics for the project on which you are
working. Better still, develop experience metrics that
produce self-reported data from users who take part in
usability studies.
Self-reported metrics enable a truly user-centered perspective on a user experience. Rather than concocting objective metrics or relying on task performance information, usability experts can ask the users themselves to report what they think or how they feel about a specific experience. More importantly, users can report their anticipated feelings before undergoing an experience, and then reflect on that experience and report their feelings afterward. This before-and-after comparison is the basis of expectation disconfirmation. These comparative data are potentially the most valuable to designers and researchers because they provide insight into users' perceptions and expectations about specific aspects of the user experience.
The theory of expectation disconfirmation has been used for decades in the marketing and business research disciplines to assess customer satisfaction. Anderson (1973) uses expectation disconfirmation theory as an approach to understanding why customers are dissatisfied with certain products, claiming that "Consumer dissatisfaction…might be measured by the degree of disparity between expectations and perceived product performance" [7, p. 38]. Oliver (1977) expresses the core of expectation disconfirmation succinctly: "one's expectations will be negatively disconfirmed if the product performs more poorly than expected, confirmed if the product performs as expected, and positively disconfirmed if performance is better than anticipated" [8, p. 480]. Further, Oliver reasons that "confirmation is more properly the midpoint on a disconfirmation continuum ranging from unfavorable to favorable disconfirmation" [ibid].
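Read computationally, Oliver's continuum reduces to simple signed arithmetic on paired ratings. The following is a minimal sketch (ours, not Oliver's; the function name and the assumption of a shared 1-7 rating scale are illustrative):

```python
def disconfirmation(expected: int, perceived: int) -> int:
    """Signed disconfirmation score for one paired rating.

    Positive -> favorable disconfirmation (better than expected)
    Zero     -> confirmation, the midpoint of Oliver's continuum
    Negative -> unfavorable disconfirmation (worse than expected)

    Both ratings are assumed to use the same scale, e.g. 1-7.
    """
    return perceived - expected

# A participant who expected a 6 ("agree") but rated the actual
# experience a 3 is negatively disconfirmed by three scale points.
print(disconfirmation(expected=6, perceived=3))  # -3
```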
This description of expectation disconfirmation applies directly to user experience design and is in fact an elegant way to assess not just customer satisfaction, but specific elements of a user experience. Albert and Dixon (2003) recognize this value and recommend using "expectation measures" [9, p. 1] to assess each user's relative satisfaction with a product in a usability study. They focus on understanding user satisfaction with specific elements of a product or experience. We want to extend the theory of expectation disconfirmation and suggest that project teams collect comparative data on users' perceptions and experiences as they relate to the specific experience goals for a product. Using these data, project teams can make the right choices and trade-offs when designing each element of the user experience for the product.
Consider the following example of a typical
professional situation. A project team has limited time
and resources to develop a new product that has not yet
been released to the public. The team agrees that a
primary experience goal for the product is performance:
The interface must be immediately responsive and must
not appear to lag between user actions and system
responses. At the same time, market researchers have found through repeated testing that a specific feature is necessary; without it, many of the target users may not purchase the product. The team can
devote its resources to improving the performance of the
system in support of the experience goal, or the team can
spend its time creating and testing the new feature that
seems equally necessary for the product to succeed in the
market. Which path should the team take?
One solution to this dilemma is to create measurable
experience metrics that are directly related to the user
experience goals for the product, and then gather data on
those metrics as part of a study that examines how users
interact with a prototype of the user experience. In this
particular example, the metrics should assess users' expectations and perceptions of system performance (responsiveness to commands, overall time to load, and so on), as well as the desirability, necessity, and usability of specific features (including the feature that the team is considering adding to the product). These metrics should be phrased as statements and presented as items on a 5- or 7-point Likert scale (refer to [4] for a deeper discussion of how to establish Likert scales and questions).
For example, one metric for this user study might be, "I expect this interface to respond immediately when I touch the screen," with possible responses ranging from "strongly disagree" to "strongly agree." This same question can be asked of the users before and after they experience the product in the study. The resulting data can be analyzed to determine whether the users' expectations are being positively or negatively disconfirmed. Positive disconfirmation may indicate unanticipated user delight with specific elements of the experience, whereas negative disconfirmation may dictate where the project team needs to spend more time improving the overall user experience.
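To make this concrete, the sketch below pairs each participant's pre-experience rating of the responsiveness statement with the corresponding post-experience rating and classifies the result along Oliver's continuum. The participant IDs and ratings are invented for illustration:

```python
# Ratings for "I expect this interface to respond immediately when
# I touch the screen" (pre) and its past-tense counterpart (post),
# on a 7-point scale: 1 = strongly disagree, 7 = strongly agree.
pre  = {"P1": 6, "P2": 7, "P3": 5, "P4": 6}
post = {"P1": 4, "P2": 7, "P3": 6, "P4": 3}

for pid in pre:
    delta = post[pid] - pre[pid]  # signed disconfirmation score
    verdict = ("positively disconfirmed" if delta > 0 else
               "confirmed" if delta == 0 else
               "negatively disconfirmed")
    print(f"{pid}: {delta:+d} ({verdict})")
```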
In order to collect accurate data, the prototype
experience must precisely reflect the current system
performance, and must also include a prototype of the
additional feature that may be added to the product. This
feature prototype does not need to be fully functional (it
could be a paper prototype if need be), but it does need to
appear, from the user perspective, to match the level of
fidelity of the rest of the product and experience under
study. The prototype of the feature also needs to convey
an accurate user experience, just as the overall prototype
experience needs to reflect system performance and other
characteristics as closely as possible in order to obtain
useful data.
The outcome of this study will dictate the direction
that the team should take. If enough users agree that the
current level of system performance meets or exceeds
their expectations (determined through positive or neutral
expectation disconfirmation), the team may choose to
focus on developing the new feature instead. On the other
hand, if enough users agree that system performance fails
to meet their expectations (negative expectation
disconfirmation), the project team may need to explore the specific areas where the experience underperforms relative to users' expectations.
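What counts as "enough users" is a judgment call for the team. The sketch below simply aggregates the per-participant scores from the previous sketch and applies one illustrative decision rule; the zero threshold is our assumption, not a validated standard:

```python
from statistics import mean

# Pre/post ratings from the previous sketch (repeated for completeness).
pre  = {"P1": 6, "P2": 7, "P3": 5, "P4": 6}
post = {"P1": 4, "P2": 7, "P3": 6, "P4": 3}

deltas = [post[p] - pre[p] for p in pre]
negatives = [d for d in deltas if d < 0]

# Illustrative rule: treat a non-negative mean delta as
# "expectations met or exceeded".
if mean(deltas) >= 0:
    print("Performance meets or exceeds expectations; "
          "consider investing in the new feature instead.")
else:
    print(f"{len(negatives)} of {len(deltas)} participants were "
          "negatively disconfirmed; investigate performance first.")
```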
These data should be compiled into a larger body of information that lets the project team know whether the first version of their product must favor performance over additional features, or whether slower performance is preferable to the absence of a specific feature.
experience goals for the project should extend beyond the
first version of the product and dictate what future
versions of the same product may do differently, or better,
from an experience perspective. Therefore, while user-
reported expectations about the experience are important,
the project team should rely on these data in the context
of all available data when making such an important
choice about the product.
Establishing a New User Experience
Measurement Framework
The establishment of experience goals leads to the
determination of specific experience metrics; however, as
stated earlier, it is extremely challenging to understand or interpret what is happening inside people's heads as they have experiences. For this reason, it is necessary to rely
on a framework that separates the elements of a user
experience into discrete, measurable parts that can be
analyzed using established usability and design research
techniques. The user experience framework that Wright et
al. (2008) established is well suited as a baseline for
understanding the elements of an experience that occur
internally. However, these elements must be associated
with qualitative and quantitative data collection methods
that enable these internal feelings and thoughts to be
expressed externally and captured as data.
Table 2, below, lists the elements of the Wright et al.
(2008) user experience framework and the associated data
collection methods for each element. The elements and
methods are listed in the order in which they are
implemented in a usability study (refer to [3, p. 18:6-18:7]
for the original list and definitions of the experience
framework elements).
Table 2. New user experience measurement framework combining experience elements from [3] and commonly used data collection methods.

Stage 1: Pre-context questionnaire
  Experience framework element(s): Anticipating
  Data collection method(s): Pre-experience interview, pre-test questionnaire

Stage 2: Post-context questionnaire
  Experience framework element(s): Anticipating, Connecting, Interpreting
  Data collection method(s): Post-task questionnaire

Stage 3: Controlled user test
  Experience framework element(s): Interpreting, Immediate Reflecting
  Data collection method(s): Performance data, think-aloud protocol, evaluator observations

Stage 4: Post-test questionnaire
  Experience framework element(s): Immediate Reflecting, Future Reflecting, Recounting
  Data collection method(s): Post-test questionnaire, satisfaction questionnaire

Stage 5: Post-experience interview
  Experience framework element(s): Appropriating
  Data collection method(s): Post-experience interview
As Table 2 shows, the elements of the framework from
[3] map to five study stages, each with a specific usability
testing goal. This new user experience measurement
framework can be employed in the lab, in the field, or in a
remote user testing environment.
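Teams that script their study logistics can also carry the Table 2 mapping around as a small data structure. The sketch below is our own representation, derived directly from the table; the structure and field ordering are assumptions, not part of the framework itself:

```python
# Each stage: (stage name, framework elements, data collection methods).
STUDY_PLAN = [
    ("Stage 1: Pre-context questionnaire",
     ["Anticipating"],
     ["pre-experience interview", "pre-test questionnaire"]),
    ("Stage 2: Post-context questionnaire",
     ["Anticipating", "Connecting", "Interpreting"],
     ["post-task questionnaire"]),
    ("Stage 3: Controlled user test",
     ["Interpreting", "Immediate Reflecting"],
     ["performance data", "think-aloud protocol",
      "evaluator observations"]),
    ("Stage 4: Post-test questionnaire",
     ["Immediate Reflecting", "Future Reflecting", "Recounting"],
     ["post-test questionnaire", "satisfaction questionnaire"]),
    ("Stage 5: Post-experience interview",
     ["Appropriating"],
     ["post-experience interview"]),
]

for name, elements, methods in STUDY_PLAN:
    print(f"{name}\n  elements: {', '.join(elements)}"
          f"\n  methods:  {', '.join(methods)}")
```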
Applying the Experience Framework in a
Lab-Based Study
In this section we describe the process that a usability
expert should follow to gather information about a user
experience prototype, enabling an assessment of the
experience while it is still in development. This method is
based on a process that the first author of this paper developed in a corporate product design and development organization and refined through repeated user testing led by professional user researchers and usability experts.
Study setup
The study setup depends on the type of user experience
that will be tested. Specifically, the physical layout and
arrangement of study materials will differ based on the
type of product that will serve as the focus for the
experience. All stages of the study should be video recorded from multiple angles simultaneously, with an observer on hand who can adjust the camera angles if necessary.
If the product under study is a physical object, the
study setup should involve a collection of competing
products placed on a shelf or countertop (initially covered
with an opaque cloth), arranged to reflect the way in
which people would encounter the products in a store. If
the product is a software user interface, Web site, or other
interface intended for use primarily on a desktop
computer, a large display (such as an LCD TV) can be
used to show several applications, Web sites, or interfaces
at the same time. If the product is a software user
interface, Web site, or other interface intended for use
primarily on a mobile device, several identical mobile
devices can be arranged on a shelf or table (initially
covered with an opaque cloth), each with a different
interface loaded and ready for use. Regardless of the type
of product under study, the study setup should conceal all
of the product materials until the study moderator is ready
to unveil them at the appropriate time.
The following sections describe each of the five stages
of the usability study outlined in Table 2. Each section
explains the purpose and goal of the stage, as well as any
additional notes that may be helpful to practitioners who
hope to apply this framework professionally.
Stage 1: Pre-context questionnaire
The purpose of this stage is to gather data about each participant's expectations and frames of reference for the user experience to be tested. This stage of the study is called the pre-context stage because participants should
not yet know what they will be testing. While they may be
able to guess what they will be assessing based on the company that is running the study or the location of the study, the moderator should not intentionally expose any
products or information that might enable participants to
determine what they will be testing.
The pre-experience interview during this stage of the
study helps the moderator achieve two goals. First, the
moderator should ask participants for category-wide
information about a general product or interface type,
such as all kitchen appliances or e-commerce Web sites,
without giving away too much information about the
specific product that the participants will be using. By
masking the identity of the product, the moderator can
ensure that the participant will provide general
information about the product category, brands, and more.
This process of information gathering takes the form of a
pre-test questionnaire; refer to [10] for additional details
about designing pre-test questionnaires.
The second goal of this stage is to gather information about participants' trusted sources of information.
Although some of this information can be gathered using
screeners when recruiting specific participants, the pre-context questionnaire can drill down into areas that are more specific to the product or product category in
question. For example, when testing an interface on a
mobile device, it may make sense to ask participants how
they find out about new applications for their current
mobile devices, which sources of information they trust
for reviews of mobile applications, and so on.
Broadly speaking, the pre-experience interview allows the moderator to obtain rich, qualitative details about the participants' current experience with similar products or product categories. By conducting this interview at the
start of each study session, the moderator can identify
particularly interesting areas of participant feedback that
may require deeper investigation during the next stages of
the study.
Stage 2: Post-context questionnaire
The purpose of this stage is to set the context for
participants, observe and record their initial reactions, and
gather data on their expectations for the product under
study. First, the moderator reveals the set of products included in the study and records the participants' initial reactions to those products, without explaining which product is specifically under study.
Unlike a typical usability study, the point of this stage is
not to outline a specific task or scenario, but to allow
participants to explore the set of products included in the
study. This activity might include allowing participants to
pick up and read product packaging, conduct brief
explorations of Web sites or software user interfaces, or
engage in some other form of interaction with the
products that catch their attention.
The post-task questionnaire for this stage of the study
is based in part on the best practices outlined in [10]. As
Dumas and Redish (1999) recommend, this questionnaire and any associated interview questions
should be brief. The questions should use a Likert-type
scale, and should focus on the product under study and
the one or two competing or similar products included in
the study. The questionnaire must capture the participants' expectations of specific elements of these products as they relate to the experience goals and metrics
that the project team has established. For example, the
moderator may need to ask each participant to describe
his or her expectations for ease of use, or ease of setup, or
richness of functionality of the product.
The goal of this stage is to gather baseline quantitative
data about participant expectations. It is also a good idea to assess expectations for one or two competing products; these data can help reveal participants' brand and quality perceptions prior to their actual use of a product. Refer to [4] for a thorough
description of questionnaire design and a description of
how to gather expectation data from study participants.
Stage 3: Controlled user test
The purpose of this stage is to run each participant
through a set of tasks with the product under study.
Maguire (2001) describes the controlled user test method as the chance to gather information about "the users' performance with the system, their comments as they operate it, their post-test reactions and the evaluator's observations" [11, p. 617]. Throughout this stage, the
moderator should encourage the participant to think aloud
as they complete the tasks. Again, [10] provides guidance
on how to encourage this behavior without distracting
participants from their tasks.
The goal of this stage is to gather feedback on the
specific features or areas of the product that relate to the
experience goals. The video recordings of this portion of the study are helpful to the design and development teams, who can see whether any patterns of use or discussion emerge that might affect decisions about product
functionality and interaction. Additionally, this is the part
of the study when the moderator should arrange for
developers, program managers, business managers, and
other members of the extended project team to observe
the participants and see how real people use the product
under study. This stage of the study yields many interesting participant behaviors that, when seen firsthand, help other members of the team recognize areas of quality or deficiency in the product design, feature set, and so on.
Stage 4: Post-test questionnaire
The purpose of this stage is to ask the participant to
reflect on his or her experience with the product under
study, and then compare the actual experience to the
anticipated experience. This questionnaire is essentially
the same as the post-context questionnaire administered in
the second stage of the study; however, this time the
questions are phrased in the past tense and participants are
asked to describe whether certain areas of the product
exceeded, met, or fell short of their expectations as stated
earlier in the study. Refer to [8] for an example of a Likert
scale and corresponding set of questions that can acquire
expectation disconfirmation data.
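Because the post-test items mirror the Stage 2 items in the past tense, it can help to author them as matched pairs so the pre/post comparison stays aligned. A small illustrative sketch follows; the item wordings and metric names are examples of our own, not a validated instrument:

```python
# Matched (pre-experience, post-experience) item pairs; each pair
# shares one experience metric so responses can be compared directly.
ITEM_PAIRS = {
    "responsiveness": (
        "I expect this interface to respond immediately "
        "when I touch the screen.",
        "This interface responded immediately when I touched the screen.",
    ),
    "ease_of_setup": (
        "I expect the setup process to be easy.",
        "The setup process was easy.",
    ),
}

for metric, (pre_item, post_item) in ITEM_PAIRS.items():
    print(f"{metric}:\n  pre:  {pre_item}\n  post: {post_item}")
```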
The goal of this stage is to gather participant feedback
about the total user experience, but to gather that feedback
in a way that provides direction to the development team.
For example, if the designers create a software interface
prototype with over 100 visible icons, the participants can
report whether that number of icons is easy or difficult to
use, and whether the interface as a whole is easier or more
difficult to use than they expected. In this example, it is
quite possible that many members of the project team
would consider the prototype too confusing or overly
complicated; the usability experts and user researchers on
the team would then support or reject this hypothesis by
gathering, analyzing, and reporting participants' expectation disconfirmation data.
This stage of the study also enables the moderator to
dig deeper into areas of the experience that the participant
considered important earlier in the study. For example,
during the first stage a participant might claim that easy
setup is crucial to a good experience with any software
product. If the study materials were not designed to ask
participants about setup experience, the moderator should
feel free to ask additional questions clarifying whether the
setup experience in the study was easy, and whether it
exceeded, met, or fell short of the participant's expectations. Although these data may not be gathered for
all participants, future versions of the study could
incorporate new questions that were not initially included.
In this case, the data gathered may be qualitative but
useful: Even if a software setup process takes 30 minutes, participants may consider that process "easy" based on their expectations for that type of process and the context
within which they use that software application. Refer to
[4] for additional details on gathering expectation data
from study participants.
Stage 5: Post-experience interview
The purpose of this stage is to gather general information about participants' responses to the experience they had during the study. Maguire (2001) states that the post-experience interview is "a pre-specified list of items…allowing the user freedom to express additional views that they feel are important" [11, p. 619]. The interview is valuable as "a quick and inexpensive way to obtain subjective feedback from users based on their practical experience of a system or product" [ibid].
The goal of this stage is to determine how participants
might appropriate the product under study into their lives.
This is a tricky determination to make because the lab-
based study is brief and conducted in an unnatural setting,
and the participants do not know what type of product
they will be assessing until after the study begins.
Therefore, the goal of this stage is to gather as much
relevant information as possible about what might
motivate participants to purchase or use the product under
study. For example, the moderator could ask whether
participants would buy the product, what aspects of
functionality or design might encourage or discourage
product usage, how they would describe the product to
friends and family, what forms of media they would use
to convey their opinions about the product, whether they
can describe or guess at the value proposition of the
product, and so on.
The information gathered during this interview is
useful to all project team members, but for different
reasons depending on their roles. For example, the
marketing manager will be quite interested to know
whether participants can guess the value proposition of
the product, whether they would consider purchasing the
product, and what they think of the packaging or the
marketing campaign ideas (if they are included in the
study). The interaction designers will be excited to hear
whether participants enjoy using the product, how they
might describe the product to their friends, and whether
participants think the product is "cool" or fun to use. And the information architects will be eager to hear whether the participants read the documentation and how they liked the layout and style of the content.
This stage of the study yields some of the best sound
bites for use in meetings with managers and other
stakeholders who did not attend the study sessions. By
selecting some of the best quotes and compiling a "highlight reel" of sorts, the researchers can tell the story of the study results in the participants' own words, lending additional credibility and a more human touch to the data analysis results.
Extending the Experience Framework
through Time and Space
As mentioned earlier, this user experience
measurement framework can be applied in settings
outside of the usability lab. The first author of this paper
has conducted lab- and field-based studies using a mix of
experience metrics and the measurement framework, which resulted in a far richer understanding of users' needs, expectations, and potential barriers to product adoption than would have been possible in either setting alone. While the physical layout of the study would necessarily change depending on the setting, the methods used to collect data would remain the same. In the author's experience, field studies yield more situational information than lab-based studies; when in their own
homes, participants tend to place the product under study
into their environment in ways that reflect how they
would actually interact with the product at home. This
information can be quite important depending on the
product or interface being studied, and the experience
goals of that product.
Additionally, this experience measurement framework
can be applied in a longitudinal, ethnographic manner that
enables researchers to develop a deeper understanding of
how users actually experience products in more natural,
everyday settings. The lab environment is contrived and
study participants arrive expecting to work with some sort
of new product; this arrangement does not reflect the
ways in which people truly encounter new products for
the first time. For example, when people see advertising
for a new or unfamiliar product on TV, their experience
with that product begins. The processes of anticipating
and connecting are initiated, processes that may help
determine whether they pursue further information about
the product or, ultimately, obtain the product and have an
end-to-end user experience with it. This initial encounter
would be useful to study in a controlled way, but in a naturalistic setting, such as in a store or in people's homes as they watch TV or surf the Web.
The final processes of a user experience, recounting
and appropriation, occur when people use a specific
product as they would naturally use that product in their
everyday lives. This behavior is almost impossible to
replicate or encourage in lab- or field-based studies with
externally imposed time limits. Instead, this behavior must be captured in the course of users' lives, at the moments when recounting or appropriating behaviors
occur. Unfortunately, capturing these moments and
recording them as data requires a significant investment
of time, resources, and recording equipment. However,
methods do exist that can enable this sort of data
collection.
For example, Isomursu, Tähti, Väinämö, and Kuutti (2007) created the "experience clips" methodology [12, p. 414] to capture data about "dynamic interaction events evoking fleeting emotions with context information which would help the designer to interpret the data" [ibid]. This method uses "pairs of volunteer users" [ibid], one of whom recorded video clips while the other "was using the application under evaluation" [ibid]. The brilliance of the experience clips method lies in its avoidance of the typical think-aloud contrivance of the lab, and in its use of the study participant's partner to elicit information from the participant at the appropriate moments of product use.
This method of gathering data, combined with the removal of the researcher from the equation, yields much more promising data because the setting in which the product is used, and in which the experience occurs, is far more natural.
Additionally, the product experience that this method
enables is similar to the socially-constructed co-
experience described by [5], which again reflects the
reality of using products in natural environments and with
other people present (as opposed to lab-based studies in
which the participant is often alone). By including pairs of
participants who have an existing friendship, this method
makes use of that existing relationship to generate data
that outsiders (researchers) may not be able to extract
during a study.
Considering Future Research Opportunities
for the Experience Framework
Future research into the experience measurement
framework defined in this paper should explain how the
framework can be modified to fit the setting and
requirements of the experience clips methodology, the
successful application of which would yield data relating to users' expectations and perceptions of a product as they convey that information to their confidants. The value of this method, when used within
the framework described herein, is potentially high. For
example, the data related to the recounting and
appropriating behaviors of a set of users interacting with a
product over time would reflect how users actually
describe the product experience to friends and family.
These data could help the marketing team determine how
to position the product for a successful launch, or how to ensure that the word-of-mouth buzz the product generates is primarily positive and related to the experience goals for the product.
There is also a powerful opportunity for future
research into study methodologies from the field of
education research. For example, when a project team is
at the earliest stage of the development process, the team
frequently conducts initial inquiries and investigations
into the types of products that could be made. At this
ideation phase of a project, the team may not have any experience goals defined yet, only a vague business opportunity and a sense of the type of product to be built.
Project teams may employ the contextual inquiry [13]
methodology during this phase, but they may also take a
page from the grounded theory [14, 15] perspective of
education research and sociology.
Grounded theory requires the user researcher or usability expert to serve as "the primary instrument of data collection and analysis [assuming] an inductive stance and [striving] to derive meaning from the data" [14, p. 17]. Using this methodology, the researcher develops an understanding of how study participants use a product by observing them, recording qualitative data, and then analyzing those data in a specific way. In this type of research, "the theory emerges from, or is grounded in, the data" [ibid]. Ultimately, the researcher creates a "substantive theory" [ibid] that specifically explains why the study participants use a product in a certain way, yielding insights that can guide the project team to make specific choices about the product early in the project.
For example, a team of user researchers might go into a project with a very broad question about something they would like to study, such as, "When do people play music on their iPhones?" While in the field, they observe and record participants' interactions and behaviors with their iPhones until they begin to see patterns of use or behavioral themes develop over time. For this reason, a longitudinal field study is helpful to a project team during the first stage of product design, when user needs and technology constraints are not yet clearly understood. Inserting low-fidelity prototypes into people's lives and observing how people attempt to appropriate those prototypes yields data that can help guide the development team's user experience goals and user scenarios throughout the life of the product line.
The difference between this method and traditional UCD methods, such as context-of-use analyses and observational field studies, lies in its unstructured style and its use of emergent coding to identify patterns of behavior or areas of user interest and engagement. Instead of entering
the study with clear observational goals, or a structured
approach to obtaining information, the researchers insert a product into people's lives and then observe what happens next. By gathering data over the course of days or weeks,
the researcher enables the participants to spend enough
time with the product prototype to allow for richer
appropriation.
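Emergent coding is qualitative work, but once observations have been tagged with codes, even a simple frequency tally can surface candidate patterns for deeper analysis. A minimal sketch follows; the field notes and code labels are invented for illustration:

```python
from collections import Counter

# Hypothetical coded observations: (participant, emergent code).
observations = [
    ("P1", "plays music while commuting"),
    ("P2", "plays music while cooking"),
    ("P1", "plays music while commuting"),
    ("P3", "shares earbuds with a friend"),
    ("P2", "plays music while commuting"),
]

code_counts = Counter(code for _, code in observations)
for code, n in code_counts.most_common():
    print(f"{code}: {n} occurrence(s)")
```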
As Wright et al. (2008) state, "the focus [in experience design] should be on how an artifact is appropriated into someone's life and how this [appropriation] is shaped by his/her prior expectations, how his/her activities change to accommodate the technology, and how s(he) changes the technology to assimilate it into his/her world" [3, p. 18:19]. It is quite challenging to study processes of
appropriation, assimilation, and accommodation; the
mythologies that people develop for themselves about
how they relate to the products they own are rich sources
of information for design and research practitioners.
However, by using the framework outlined in this paper
and by conducting additional ethnographic and grounded
theory studies, we believe it is possible to interpret the
many ways in which people incorporate products into
their everyday lives.




References
[1] Forlizzi, J. and S. Ford. The Building Blocks of Experience:
An Early Framework for Interaction Designers. In Proceedings
of the 3rd Conference on Designing Interactive Systems:
Processes, Practices, Methods, and Techniques, New York,
419-423, 2000.

[2] Davis, M. Theoretical foundations for experiential systems
design. In Proceedings of the 2003 ACM SIGMM Workshop on
Experiential Telepresence, New York, 45-52, 2003.

[3] Wright, P., J. Wallace, and J. McCarthy. Aesthetics and
experience-centered design. ACM Transactions on Computer-
Human Interaction 15(4): 1-21, 2008.

[4] Tullis, T. and B. Albert, Measuring the User Experience,
Morgan Kaufmann, Burlington, 2008.

[5] Forlizzi, J. and K. Battarbee. Understanding Experience in
Interactive Systems. In Proceedings of the 5th Conference on
Designing Interactive Systems: Processes, Practices, Methods,
and Techniques, New York, 261-268, 2004.

[6] Mao, J., K. Vredenburg, P.W. Smith, and T. Carey. The state
of user-centered design practice. Communications of the
ACM 48(3): 105-109, 2005.

[7] Anderson, R.E. Consumer Dissatisfaction: The Effect of
Disconfirmed Expectancy on Perceived Product Performance.
Journal of Marketing Research 10: 38-44, 1973.

[8] Oliver, R.L. Effect of Expectation and Disconfirmation on
Postexposure Product Evaluations: An Alternative
Interpretation. Journal of Applied Psychology 62(4): 480-486,
1977.

[9] Albert, W. and E. Dixon. Is this what you expected? The use
of expectation measures in usability testing. In Proceedings of
the Usability Professionals' Association, Scottsdale, 2003.

[10] Dumas, J.S. and J.C. Redish, A Practical Guide to Usability
Testing, Intellect, Exeter, 1999.

[11] Maguire, M. Methods to support human-centered design.
International Journal of Human-Computer Studies 55: 587-634,
2001.

[12] Isomursu, M., M. Tähti, S. Väinämö, and K. Kuutti.
Experimental evaluation of five methods for collecting emotions
in field settings with mobile applications. International Journal
of Human-Computer Studies 65: 404-418, 2007.

[13] Beyer, H. and K. Holtzblatt. Principles of contextual inquiry. In Contextual Design: Defining Customer-Centered Systems, Morgan Kaufmann, San Francisco, 1998.

[14] Merriam, S.B. Qualitative research and case study
applications in education, San Francisco, Jossey-Bass, 1998.

[15] Glaser, B.G. and A.L. Strauss. The Discovery of Grounded
Theory: Strategies for Qualitative Research, Aldine, 1967.




About the Authors
Alexander Thayer has Master of Science and Bachelor
of Science degrees in Technical Communication from the
University of Washington (UW), as well as more than ten
years of professional information architecture and design
experience at Microsoft, IBM, and technical consulting
firms. Thayer is currently a PhD student and research
assistant in the Human Centered Design & Engineering
program at the UW.

Thérèse E. Dugan is a PhD candidate in the University of Washington's College of Education Learning Sciences
program. Dugan is a researcher in the Learning in
Informal and Formal Environments (LIFE) Center and a
member of the Learning, Media, and Interactivity
research group. Her academic background is in cinema,
visual anthropology, and photography; her professional
background is in documentary filmmaking and television
production for the Public Broadcasting Service.
