(Excerpts From) Investigating Performance: Design and Outcomes With Xapi
About this ebook

This mini-book of excerpts from the upcoming release "Investigating Performance" features four chapters that help learning and development professionals use data to design for improved performance outcomes. It addresses different types of data (quantitative vs. qualitative) and how to make use of each, and covers how to define business objectives and the performance indicators that describe how those objectives are met.

Language: English
Publisher: BookBaby
Release date: Jun 7, 2016
ISBN: 9781483572826


    Book preview

    (Excerpts From) Investigating Performance - Janet Laane Effron

    data.

    1.1: Quantitative and Qualitative Data

    When we think of data, we typically think of numbers. Height and weight, miles-per-gallon, ticket resolution time, test scores, time-to-completion. Big Data sounds like The Matrix, like a stream of endless numbers. Numbers are only part of the story, however. In the chapters ahead we’re going to take a close look at both quantitative and qualitative data. Taken together, they provide the two sides of good (meaningful) analysis.

    Quantitative data is about numbers

    Quantitative data can be counted or measured. It’s the kind of data we see when we look at:

    Test scores (raw numbers, pass/fail)

    Course related statistics (number of posts, learning objects completed)

    Performance measures (sales data, error rate, customer satisfaction)

    Benchmarking (gap reduction, goals met)


    Numbers are useful and important, but numbers can be limiting when we seek to answer key questions around learning, such as:

    Did it work?

    What made it work?

    Who didn’t it work for? Why not?


    The questions that drive an effective learning design are questions that go beyond linear regression to tell the whole story. They try to step beyond correlation and into causality; they try to establish not just the what, but the why.

    Qualitative data is descriptive

    Qualitative data concerns that which cannot be measured by a meter stick, a test score, or a time on the clock. It can include:

    user feedback

    sentiment

    content

    demographics


    Qualitative data puts metaphorical flesh on the bare bones of quantitative data. For example, if we look at text responses to a question, we can measure the word count but that won’t tell us all we need to know about the quality of the responses. Analyzing the content of the responses to that same question will give us qualitative data: a sense of keywords, the degree of understanding, or the application of a concept. Or, in another scenario, we may receive a lot of positive user feedback after a course (qualitative data), but that data doesn’t tell us the whole story. Users may say they liked the course, but we will need some quantitative (and possibly qualitative) performance data to determine if it was an effective course.
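    The contrast can be sketched in a few lines of Python. The responses, stopword list, and keyword approach below are invented for illustration; real content analysis goes much further, but even crude keyword extraction reveals things a bare word count cannot:

```python
from collections import Counter
import re

# Hypothetical free-text responses to a single course-feedback question.
responses = [
    "Segmenting learners by role made the dashboard far more useful.",
    "I liked it.",
    "The xAPI statements let us trace which activities preceded errors.",
]

# Quantitative view: word counts alone say little about response quality.
word_counts = [len(r.split()) for r in responses]

# A first qualitative step: surface recurring substantive keywords
# (with a trivial stopword list, purely for the example).
stopwords = {"the", "a", "an", "it", "i", "us", "by", "which", "far",
             "more", "let", "made"}
words = re.findall(r"[a-z]+", " ".join(responses).lower())
keywords = Counter(w for w in words if w not in stopwords)

print(word_counts)             # [10, 3, 10]
print(keywords.most_common(3))
```

    The two ten-word responses score identically on the quantitative measure, yet only the keyword view hints at which ones carry substance.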


    In later chapters we’ll explore how to work with both quantitative and qualitative data and look at some basic examples of using them together in analysis. Before we think about using data, however, we need to be sure our data is capable of doing what we need it to do. We need to talk about data quality.

    1.2: Data Quality

    In an ideal world, all our data would be complete, consistent, readily available, and accurate. When using two (or more!) data sets together, those data sets would have perfectly matched keys, that is, a common set of values, such as user identification, that allow us to connect data across different data sets. And of course we’d love for our data to have consistent formatting, with no duplicate or incomplete records. In reality, this isn’t the case, and so when we start to do analysis, we first have to spend time finding what’s inconsistent and what’s missing. There’s always some data cleaning to be done.
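    A minimal sketch of that key-matching check, assuming two hypothetical sets of user IDs exported from an LMS and a CRM:

```python
# Hypothetical user IDs from two systems we want to join on a common key.
lms_users = {"u101", "u102", "u103", "u104"}
crm_users = {"U102", "u103", "u105"}

# Set arithmetic immediately shows what joins cleanly and what doesn't.
matched = lms_users & crm_users
only_lms = lms_users - crm_users
only_crm = crm_users - lms_users

print(sorted(matched))   # ['u103'] -- 'U102' vs 'u102' fails on case alone
print(sorted(only_lms))  # ['u101', 'u102', 'u104']
print(sorted(only_crm))  # ['U102', 'u105']
```

    A single casing inconsistency silently drops a user from the joined data, which is exactly the kind of problem cleaning has to catch before analysis.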


    Data quality refers to how fit a data set is for its intended purpose, according to characteristics of completeness, validity, accuracy, consistency, availability and timeliness (ISO 9001). In practical terms, this means we need to look at the data and see if it represents the real world accurately. We need to make sure that different data sets are aligned. In other words, do the data sets represent the same elements consistently?


    For example, we may want to know how much time users spend during each visit to our eLearning portal. When we examine our data, we might note that the majority of visits last between 15 and 30 minutes, but one or two visits clock in at, say, 15 hours. Since it’s unlikely that anyone has spent 15 hours continuously doing training, this alerts us that there’s likely a problem in the data somewhere. Whether the problem lies in how we are set up to log user time on the portal, or something else, it’s worth checking into.
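    One way to surface such records, sketched here with invented durations and an arbitrary two-hour threshold:

```python
# Hypothetical session durations, in minutes, from the eLearning portal.
durations = [18, 22, 25, 17, 29, 21, 900]  # 900 minutes = 15 hours

# A simple fixed threshold is enough to flag records worth investigating;
# the cutoff itself is a judgment call, not a rule.
SUSPECT_MINUTES = 120
suspect = [d for d in durations if d > SUSPECT_MINUTES]

print(suspect)  # [900]
```

    Flagged values are not automatically wrong; they are prompts to check how the logging is set up before trusting averages computed from the data.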


    When we’re working with multiple data sets, data consistency is especially necessary. The data might come from different systems, or from the same system over time, and it’s not uncommon to find different date or other numeric formats, different user IDs for the same person, or even different criteria for a field (for example, how we define "active user" or "time spent on portal" or even "pass").
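    A small sketch of normalizing one such inconsistency, using two hypothetical date formats from different systems:

```python
from datetime import datetime

# Hypothetical date strings exported from two systems in different formats.
raw_dates = ["2016-06-07", "06/07/2016"]
KNOWN_FORMATS = ["%Y-%m-%d", "%m/%d/%Y"]

def parse_date(value):
    """Try each known format until one parses; fail loudly if none do."""
    for fmt in KNOWN_FORMATS:
        try:
            return datetime.strptime(value, fmt).date()
        except ValueError:
            continue
    raise ValueError(f"unrecognized date format: {value!r}")

normalized = [parse_date(d) for d in raw_dates]
print(normalized)  # both parse to 2016-06-07
```

    Failing loudly on an unknown format is deliberate: silently guessing (is 06/07 June 7 or July 6?) is how inconsistent data corrupts an analysis unnoticed.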


    There’s no getting around the fact that we need to take a hard look at our data before analysis. We can do this through basic data exploration, like reading the raw data, counting types, and matching key-value pairs. We also need to understand thoroughly where the numbers are coming from in terms of actions, criteria, and which interactions in an application interface create which pieces of data. When we do this, the resulting analysis is likely to be more valid, more useful, and less frustrating. We will also gain knowledge about ways to improve future data collection efforts so that we will generate data that will most effectively meet
