Good afternoon, and thanks for having me here. In this talk I want to look at the
design challenges of systems that anticipate users' needs and then act on them.
That topic sits at the intersection of the Internet of Things, user experience
design, and machine learning, which to me is new territory for designers, who may
have dealt with one of those disciplines before, but rarely all three at once.
The talk is divided into several parts: it starts with an overview of how I think
Internet of Things devices are primarily components of services, rather than being
self-contained experiences, and of how predictive behavior enables key components of
those services; then I finish by trying to identify user experience issues
around predictive behavior and suggesting patterns to ameliorate those issues.
A couple of caveats:
- I focus almost exclusively on the consumer Internet of Things. Although predictive
behavior is an important part of the Industrial Internet of Things, for things like
preventive maintenance or energy savings in building environments, I feel it's
REALLY key to the consumer IoT because of its potential ability to cut through the
information and data fog we live in.
- I want to point out that few, if any, of the issues I raise are new. Though the term
Internet of Things is hot right now, the ideas have been discussed in research
circles for decades. Search for ubiquitous computing, ambient intelligence, and
pervasive computing, and it'll help you keep from reinventing the wheel.
- Finally, most of my slides don't have words on them, so I'll make the complete
deck with a transcript available as soon as I'm done.
Today I work for PARC, the famous research lab that invented the
personal computer and laser printer, as a principal in its Innovation
Services group. We help companies reduce the risk of adopting
novel technologies using a mix of ethnographic research, user
experience design and innovation strategy. We do everything from
developing novel products for our clients to coaching teams in how
they can be much more strategic and effective with their innovation
efforts.
R. G. Shoup, 1971
PARC
PARC also started thinking about what we call the IoT long before
most other companies.
It was at PARC in 1971 that Dick Shoup, an early PARC
researcher, wrote that eventually processors would be as
common, and as invisible, as electric motors. This clearly
outlines the destiny of the connected computer: that eventually it will
become as boring and as common as electric motors are today.
But it didn't appear all at once. We've only started the transition to
the ubiquitous computing world, and as such, we're seeing a lot
of bad ideas about what the Internet of Things is and isn't.
There are so many bad ideas, in fact, that there are entire
Tumblrs dedicated to mocking stupid IoT ideas. One is about
dumb smart things, and the other is just about smart
refrigerators.
+ CONNECTING STUFF TO
or an egg carton, you still have the same problem, and it's a user
experience problem.
The UX problem is that end users have to connect all the dots to
coordinate between a wide variety of devices, and to interpret
the meaning of all of these sensors to create personal value. For
many simply connected products there is so little efficiency to be
had relative to the cognitive load that it's just not worth it. What's
worse, the extra cognitive load is exactly opposite to what the
product promises, and customers feel intensely disappointed,
perhaps even betrayed, when they realize how little they get out
of such a product. That makes most such products effectively
WORSE than useless. That promise gap is what distinguishes
an optional and marginal gadget from a tool.
This strategy worked very poorly for Quirky.
SERVICE
AVATARS
Amazon really gets this. Here's a telling older ad from Amazon for
the Kindle. It's saying: look, use whatever device you want. We
don't care, as long as you stay loyal to our service. You can buy our
specialized devices, but you don't have to.
When the Fire was released three years ago, Jeff Bezos even called it a
service.
+ PREDICTIVE BEHAVIOR,
ENABLED BY MACHINE
LEARNING
I think the real consumer value connected services offer is their ability to make
sense of the world on people's behalf: to reduce cognitive load by enabling
people to interact with devices at a higher level than simple telemetry, at
the level of intentions and goals rather than data and control. Humans are
not built to collect and make sense of huge amounts of data across many
devices, or to articulate our needs as systems of mutually interdependent
components. Computers are great at it.
They can make statistical models from many data sources across
space and time and then try to maximize the probability of a
desired outcome. A model learned from thousands of samples,
across many people and long periods of time, can compensate for a
much wider variety of situations, in a more nuanced way, than an
individual will ever be able to. Because people and their machines
act pretty consistently, these systems can essentially predict the
future, which is how Waze knows that you're probably driving
home when you get in the car after work, without you ever telling it
your home address and schedule.
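The Waze-style prediction described above can be sketched as a tiny frequency model: count where past trips at a given hour ended up, then predict the most common destination. This is a hypothetical illustration, not how Waze actually works; all names and data are invented.

```python
from collections import Counter, defaultdict

class DestinationPredictor:
    """Toy model: predict a likely destination from the hour a trip starts."""

    def __init__(self):
        # Maps hour of day -> counts of observed destinations at that hour.
        self.history = defaultdict(Counter)

    def observe(self, hour, destination):
        """Record one completed trip."""
        self.history[hour][destination] += 1

    def predict(self, hour):
        """Return the most likely destination for this hour and its probability."""
        trips = self.history[hour]
        if not trips:
            return None, 0.0
        dest, count = trips.most_common(1)[0]
        return dest, count / sum(trips.values())

model = DestinationPredictor()
for _ in range(40):
    model.observe(18, "home")  # most 6pm trips end at home
for _ in range(5):
    model.observe(18, "gym")

dest, p = model.predict(18)
print(dest, round(p, 2))  # → home 0.89
```

A real system would learn across many people and many signals at once, but the principle is the same: consistent behavior makes the future statistically guessable.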
The Birdi smart smoke alarm says it will learn over time, which is
again the same thing.
+ ISSUE 1: EXPECTATIONS OF AUTONOMOUS BEHAVIOR
+ ISSUE 2: UNCERTAINTY
The irony in predictive systems is that they're pretty unpredictable, at least at first.
When machine learning systems are new, they're often inaccurate, which is not
what we expect from our digital devices. 60%-70% accuracy is typical for a first
pass, but even 90% accuracy isn't enough for a predictive system to feel right,
since if it's making decisions all the time, it's going to be making mistakes all the
time, too. It's fine if your house is a couple of degrees cooler than you'd like, but
what if your wheelchair refuses to go to a drinking fountain next to a door because
it's been trained on doors and it can't tell that's not what you mean in this one
instance? For all the times a system gets it right, it's on the mistakes that we judge
it, and a couple of such instances can shatter people's confidence. Anxiety is a kind of
cognitive load, and a little doubt about whether a supposedly smart system is going
to do the right thing is enough to turn a UX that's right most of the time into one
that's more trouble than it's worth. When that happens, you've more than likely lost
your customer.
Unfortunately, sooner than we think, such inaccurate predictive behavior isn't going to
be an isolated incident. Soon we're going to have 100 connected devices
simultaneously acting on predictions about us. If each is 99% accurate, then on
average one is always wrong. So the problem is: how can you design a user experience
to make a device still functional, still valuable, still fun, even when it's spewing junk
behavior? How can you design for uncertainty?
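The arithmetic behind that claim is worth making explicit. With 100 independent devices, each right 99% of the time, the expected number of devices that are wrong at any moment is one, and the chance that at least one is wrong is roughly two in three:

```python
# Back-of-the-envelope check: 100 devices, each acting on 99%-accurate
# predictions, assumed independent of one another.
n_devices = 100
accuracy = 0.99

# Expected number of devices making a mistake at any given moment.
expected_wrong = n_devices * (1 - accuracy)

# Probability that at least one device is wrong right now.
p_at_least_one_wrong = 1 - accuracy ** n_devices

print(round(expected_wrong, 2))         # → 1.0
print(round(p_at_least_one_wrong, 2))   # → 0.63
```

So even with individually excellent devices, the household as a whole misbehaves essentially all the time, which is why the design has to assume error rather than treat it as exceptional.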
Photo CC BY 2.0 photo 2011 Pop Culture Geek taken by Doug Kline:
https://www.flickr.com/photos/popculturegeek/6300931073/
+ ISSUE 3: CONTROL
The last issue comes as a result of the previous two: control. How
can we maintain some level of control over these devices, when
their behavior is by definition statistical and unpredictable?
On the one hand, you can mangle your device's predictive behavior
by giving it too much data. When I visited Nest once, they told me
that none of the Nests in their office worked well, because they're
constantly fiddling with them. In machine learning this is called
overtraining. On the other hand, if I have no direct way to control it
other than through my own behavior, how do I adjust it? Amazon's
and Netflix's recommendation systems, which are machine
learning systems for predicting what you may like, give you some
context about why they recommended something, but what do I do
when my only interface is a garden hose?
The computer offers no assistance: humans must make all decisions and
take all actions.
+ ADDRESSING PREDICTIVE UX ISSUES
Extract
Train
Classify
Model
My first pattern isn't really a pattern, but a general approach. To design these
systems you need to have a user model for every stage of the machine
learning and prediction process. There needs to be a story to tell about each
step, even if it's a step that seems like it would be invisible to customers.
Starting with acquisition: how will you incentivize people to add data to the
system at all? Why should I upload my car's dashcam video to your traffic
prediction system EVERY DAY? Next, how will you communicate that you're
extracting features? I like the way that Google speech-to-text shows you
partial phrases as you're speaking into it, and how it corrects itself. That
small bit of feedback tells people it's pulling information out, and it trains
users how to meet the algorithm halfway. How do machine-generated
classifications compare to people's organization of the same phenomena?
How is a context model presented to end users and developers? How will
you get people to train it and tell you when the model is wrong? Does the
final behavior actually match their expectations?
Machine learning algorithms used to be strictly behind the scenes, but in the
IoT they are actors in our lives, so as designers it's our responsibility to
understand the situations where the algorithms and the devices they control
interact with people's lives, especially since there's a deep symbiotic
relationship between the data that comprises the models, the behavior those
models induce, and the people who are the intended beneficiaries.
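One way to make this approach concrete is to treat each stage of the pipeline as something that must report to the user, not just to the next stage. The sketch below is purely illustrative (every name in it is invented): a toy extract/train/classify loop where each stage calls a user-facing `notify` hook, the place where the designer's "story for this step" would live.

```python
from collections import Counter

class PredictivePipeline:
    """Toy pipeline where every ML stage has a user-facing feedback hook."""

    def __init__(self, notify):
        self.notify = notify   # UX hook: surfaces each stage to the user
        self.model = Counter()

    def acquire(self, raw_events):
        # Acquisition: tell people why giving data is worth it.
        self.notify(f"Collected {len(raw_events)} events to improve your predictions")
        return raw_events

    def extract(self, events):
        # Feature extraction: show partial results, like live speech-to-text.
        features = [e.lower().strip() for e in events]
        self.notify(f"Understood so far: {features}")
        return features

    def train(self, features):
        # Training: invite correction when the model is wrong.
        self.model.update(features)
        self.notify("Model updated; tell us if this looks wrong")
        return self.model

    def classify(self):
        # Prediction: the final behavior users actually judge.
        label, _ = self.model.most_common(1)[0]
        self.notify(f"Predicted: {label}")
        return label

messages = []
pipe = PredictivePipeline(notify=messages.append)
events = pipe.acquire([" Coffee ", "coffee", "Tea"])
pipe.train(pipe.extract(events))
result = pipe.classify()
print(result)  # → coffee
```

The point isn't this particular structure; it's that each stage has an explicit seam where the user model, and the story you tell at that step, can attach.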
TRUST
VS
+ COMPONENTS OF TRUST IN PREDICTIVE UX
Benevolence
Humor
Integrity
Balance
Familiarity
Control
From: Yoo and Gretzel, 2011; Kantor et al., 2011
What makes for a trustworthy experience? Computers are social actors, and predictive
systems are especially so; the qualities of a trustworthy digital assistant are
essentially the same as those of a trustworthy human apprentice.
Here are some factors recommender system researchers identified as building blocks
of trust when interacting with a system making decisions on your behalf. Why do
you trust a Slack chatbot more than one from, say, AT&T? Is it because it's funny,
and if it's funny then it can't be evil, right? For me, every one of these bricks
represents an interesting cluster of UX, branding, and messaging challenges that
are unique to each device.
- Benevolence: the recommender system caring about the user and acting in the
user's interest.
- Integrity: adherence to a set of principles (e.g. honesty) that the user finds acceptable.
- Balance: several studies have demonstrated that communicators can enhance their
trustworthiness when they provide both sides of the argument, the pros and the
cons, rather than arguing only in their own favor.
- Humor: a number of studies found positive effects of humor on judgments of
communicator trustworthiness, but rarely on judgments of expertise.
- Familiarity: products that were familiar to users were helpful in establishing users' trust
in recommender systems.
- Control: users showed more positive affective reactions to recommender systems
when they had increased control.
EXPECTATIONS
Our current expectation for digital systems is that they'll behave consistently
and that the reasons for their behavior will be clear. Neither of these is true for the
user experience of predictive systems, which don't necessarily behave
identically in what appear to be similar circumstances, whose behavior
changes over time, and where the reasons for the behavior may not be
obvious. If we undermine people's confidence in a system by violating their
expectations, they're likely to be disappointed and stop using it.
The first thing a predictive UX needs to do is to set people's expectations
appropriately. It needs to explain the nature of the device: to describe what it is
trying to predict, that it's trying to adapt, that it's sometimes going to be wrong,
to explain how it's learning, and how long it'll take before it crosses over from
creating more trouble than benefit to actually being useful.
Recommender systems, such as Google Now, describe why a certain kind of
content was selected, and that sets the expectation that in the future the
system will recommend other things based on other kinds of content you've
requested. Nest's FAQ explains that you shouldn't expect your thermostat to
make a model of when you're home or not until it's been operating for a week
or so.
About ten years ago Timo Arnall and his students tried to address a
similar set of questions around interactions with RFID-enabled
devices by creating an iconography system that communicated
to potential users that these devices had functionality that was
invisible from the outside. Perhaps we need something like this
for behavior created by predictive analytics?
+ EXPLANATION COMPONENTS
Finally, for me the IoT is not about the things, but about the experience
created by the services for which the things are avatars.
Marshall McLuhan,
The Medium is the Massage, 1967
+ Thank you!
Mike Kuniavsky
mikek@parc.com
@mikekuniavsky