
A Rubric for Evaluating E-Learning Tools in Higher Education
Authors: Lauren Anstey and Gavan Watson
Published: Monday, September 10, 2018

The Rubric for E-Learning Tool Evaluation offers educators a framework, with criteria
and levels of achievement, to assess the suitability of an e-learning tool for their
learners' needs and for their own learning outcomes and classroom context.

Credit: AnggunFaith / EDUCAUSE © 2018

As educational developers supporting the incorporation of technology into teaching, we
are often asked by instructors for a tailored recommendation of an e-learning tool to use
in a particular course. When they use the phrase e-learning tool, instructors are typically
asking for some kind of digital technology, mediated through the use of an internet-
connected device, that is designed to support student learning. Such requests tend to
be accompanied by statements of frustration over the selection process they've
undertaken. These frustrations often result from two factors. First, instructors are
typically experts in their course's subject matter, yet they are not necessarily fluent in
the best criteria for evaluating e-learning tools. Second, the number and the variety of e-
learning tools continue to proliferate. Both of these factors make it increasingly
challenging for faculty members to evaluate and select an e-learning tool that aligns
with their course design and meaningfully supports their students' learning experience.
Yet, we firmly believe that instructors should be the ultimate decision-makers in
selecting the tools that will work for their courses and their learners. Thus, we saw an
opportunity to develop a framework that would assist with the predictive evaluation of e-
learning tools—a framework that could be used by non-tech experts and applied in a
variety of learning contexts to help draw their attention to the cogent aspects of
evaluating any e-learning tool. To address this need, we created the Rubric for E-
Learning Tool Evaluation.

At our institution, Western University, the Rubric for E-Learning Tool Evaluation is
currently being utilized in two ways. First, educational developers are using the rubric to
review the tools and technologies profiled on the eLearning Toolkit, a university online
resource intended to help instructors discover and meaningfully integrate technologies
into their teaching. Second, we have shared our rubric with instructors and staff so that
they can independently review tools of interest to them. These uses of the framework
are key to our intended purpose for the rubric: to serve as a guide for instructors and
staff in their assessment and selection of e-learning tools through a multidimensional
evaluation of functional, technical, and pedagogical aspects.

Foundations of the Framework


In the 1980s, researchers began creating various models for choosing, adopting, and
evaluating technology. Some of these models assessed readiness to adopt technology
(be it by instructors, students, or institutions)—for example, the technology
acceptance model (TAM) or its many variations. Other models aimed to measure
technology integration into teaching or the output quality of specific e-learning software
and platforms. Still other researchers combined models to support decision-making
throughout the process of integrating technology into teaching, from initial curriculum
design to the use of e-learning tools.

However, aside from the SECTIONS model,1 existing models fell short in two key areas:

- They were not typically intended for ad hoc instructor use.
- They did not enable critique of specific tools or technology for informing adoption
  by instructors.

To address this, we integrated, reorganized, and presented existing concepts using an
instructor-based lens to create an evaluative, predictive model that lets instructors and
support staff—including instructional designers and courseware developers—evaluate
technologies for their appropriate fit to a course's learning outcomes and classroom
contexts.

Why a Rubric?
Educators often use rubrics to articulate "the expectations for an assignment by listing
the criteria or what counts, and describing levels of quality."2 We have adapted these
broad aims to articulate the appropriate assessment criteria for e-learning tools using
the standard design components of other analytical rubrics: categories, criteria,
standards, and descriptors.

We organized our rubric's evaluation criteria into eight categories. Each category has a
specific set of characteristics, or criteria, against which e-learning tools are evaluated,
and each criterion is assessed against three standards: works well, minor concerns,
or serious concerns. Finally, the rubric offers individual descriptions of the qualities an
e-learning tool must have to achieve a standard.

Although our rubric integrates a broad range of functional, technical, and pedagogical
criteria, it is not intended to be overly prescriptive. Our goal is for the framework to
respond to an instructor's needs and be adapted as appropriate. For example, when a
rubric criterion is not relevant to the assessment of a particular tool, it can be excluded
without impacting the overall quality of the assessment.
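
To make these design components concrete, here is a minimal sketch (in Python, purely as an illustration; the class and function names are ours and are not part of the rubric) of how a category, its criteria, and the three standards might be represented, and of how a criterion excluded as not relevant simply drops out of the summary:

```python
from dataclasses import dataclass, field
from typing import Optional

# The three standards used throughout the rubric.
STANDARDS = ("works well", "minor concerns", "serious concerns")

@dataclass
class Criterion:
    name: str                                          # e.g., "Scale" or "Ease of Use"
    descriptors: dict = field(default_factory=dict)    # standard -> qualities required to achieve it
    rating: Optional[str] = None                       # one of STANDARDS, or None if excluded as not relevant

@dataclass
class Category:
    name: str                                          # e.g., "Functionality"
    criteria: list = field(default_factory=list)

def summarize(category: Category) -> dict:
    """Count ratings per standard, skipping any criterion the reviewer excluded."""
    counts = {standard: 0 for standard in STANDARDS}
    for criterion in category.criteria:
        if criterion.rating in counts:                 # excluded criteria (rating is None) are ignored
            counts[criterion.rating] += 1
    return counts

# Example: a partial review of the Functionality category for some hypothetical tool.
functionality = Category(
    name="Functionality",
    criteria=[
        Criterion("Scale", rating="works well"),
        Criterion("Ease of Use", rating="minor concerns"),
        Criterion("Hypermediality", rating=None),      # judged not relevant, so excluded
    ],
)
print(summarize(functionality))
# {'works well': 1, 'minor concerns': 1, 'serious concerns': 0}
```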

The rubric reflects our belief that instructors should choose e-learning tools in the
context of the learning experience. We therefore encourage an explicit alignment
between the instructor's intended outcomes and the tool, based on principles of
constructive alignment.3 Given the diversity of outcomes across learning experiences, e-
learning tools should be chosen on a case-by-case basis and should be tailored to each
instructor's intended learning outcomes and planned instructional activities. We
designed the rubric with this intention in mind.

The Rubric Categories


The rubric is intended to be used as a stand-alone resource. The following is an
explanation of each category and how we framed it to meet our development goals.

Functionality

Broadly speaking, functionality considers a tool's operations or affordances and the
quality or suitability of these functions to the intended purpose—that is, does the tool
serve its intended purpose well? In the case of e-learning tools, the intended purpose is
classroom use. 

Scale. Postsecondary classrooms vary in format and size, ranging from small seminars
to large-enrollment courses. In larger courses, creating small groups increases contact
among students, fosters cooperative learning, and enhances social presence among
learners.4 An e-learning tool should therefore not only be flexible in accommodating
various class sizes but also be capable of supporting small-group work.
Hence, scale focuses on the tool's affordances to accommodate the size and nature of
the classroom environment.

Ease of Use. When a tool is inflexible, is cumbersome in design, is difficult to navigate,
or behaves in unexpected ways, it is likely to be negatively perceived by instructors and
students. Comparatively, a tool tends to be more positively perceived when it feels
intuitive and easy to use and offers guidance through user engagement. The ease of
use criterion therefore focuses on design characteristics that contribute to user-
friendliness and intuitive use.

Tech Support / Help Availability. When technical problems or lack of user know-how
impairs the function of a tool, users must know where to turn for help. Timely support
helps instructors feel comfortable and competent with e-learning tools and helps
students self-regulate their learning.5 While such support can come from a variety of
sources—including peers, experts, IT staff, and help documentation—we believe that
the optimal support is localized, up-to-date, responsive to users' needs, and timely.
Such support is often best provided either through campus-based technical support or
through robust support from the platform itself.

Hypermediality. Cognitive psychology emphasizes the importance of giving learners
multiple, diverse forms of representation organized in a way that lets them control their
own engagement.6 Hypermediality is achieved by providing multiple forms of media
(audio, video, and textual communication channels), as well as the ability to organize
lessons in a non-sequential way.7 This criterion therefore focuses on assessing how a
tool's functions support and encourage instructors and students to engage with and
communicate through different forms of media in a flexible, nonlinear fashion.

Accessibility

Here, we define accessibility both broadly—as outlined by the Universal Design for
Learning (UDL) principles of flexible, adaptable curriculum design to support multiple
learning approaches and engagement for all students—and in terms of legislative
requirements for meeting the specific accessibility needs of learners with disabilities.

Accessibility Standards. At a minimum, an e-learning tool should adhere to mandated
requirements for accessibility, including those of legislative accessibility standards, as
well as generally accepted guidelines, such as the World Wide Web Consortium
(W3C) Web Accessibility Initiative. The documentation for an e-learning tool should
provide information regarding the degree and nature of a tool's ability to meet
accessibility standards. Unfortunately, such information is often missing, raising a
serious concern that developers have not valued accessibility standards in their design
and support of the e-learning tool.
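
Automated checks cannot substitute for a vendor's documented conformance, but a reviewer can still probe the basics directly. As one small, hypothetical illustration (not part of the rubric itself), the following Python sketch uses the standard library's HTML parser to flag images that lack alternative text, one of the most elementary checks associated with the W3C guidelines:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Flag <img> tags without an alt attribute, a basic check related to WCAG 1.1.1 (non-text content)."""

    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attributes = dict(attrs)
            if "alt" not in attributes:
                self.missing_alt.append(attributes.get("src", "<unknown source>"))

# Example: run the check on a hypothetical fragment of a tool's output.
sample_page = '<p>Lesson 1</p><img src="diagram.png"><img src="logo.png" alt="Course logo">'
checker = AltTextChecker()
checker.feed(sample_page)
print(checker.missing_alt)   # ['diagram.png'] -- images the reviewer should ask the vendor about
```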

User-Focused Participation. Whereas standards serve as a foundation for
accessibility, they are not the only set of criteria to consider in adopting a framework of
universal design. Drawing on an Accessibility 2.0 model,8 the user-focused participation
criterion rewards e-learning tools that address the needs of diverse users and include
broader understandings of literacies and student capabilities.

Required Equipment. Given that inaccessibility is a mismatch between a learner's
needs in a particular environment and the format in which content is presented,9 we
examine environmental factors that impact accessibility. These factors include
necessary hardware (e.g., speakers, a microphone, and a mobile phone) and the
technology or service (e.g., high-speed internet connection) that users need to engage
with an e-learning tool. Generally, the less equipment required, the more accessible the
tool will be to a broad group of users, regardless of socioeconomic, geographic, or other
environmental considerations.

Cost of Use. Continuing with a consideration of socioeconomic factors as a broader
question of accessibility, this criterion evaluates the financial costs of a tool. In addition
to tuition costs, students regularly face significant (and unregulated) expenses for
course resources.10 The burden increases if students are required to buy e-learning
tools. Instructors play an integral role in balancing tool use and costs incurred; at best,
tool use is open access, covered by tuition, or otherwise subsidized by the institution.

Technical

In a review of e-learning readiness models,11 researchers found that a user's
technology—that is, internet access, hardware, software, and computer availability—was integral
to successful e-learning implementation. This category thus considers the basic
technologies needed to make a tool work.

Integration/Embedding within a Learning Management System (LMS). LMSs are
internet-based, institutionally-backed platforms that support teaching and learning at the
course level. Any e-learning tool adopted for teaching should be able to "play well" with
an institution's LMS.

A fully integrated tool shares data back and forth with the institution's LMS. Currently,
tool integration is most often achieved by being Learning Tools Interoperability (LTI)
compliant. Using an LTI-compliant tool should mean a seamless experience for users.
For example, accounts are created in the integrated e-learning tool without user input,
and assessments occurring within the tool are synced automatically to the LMS
gradebook. In contrast, an embedded tool is added as an object with HTML code—that
is, the tool is "inserted" into a webpage. An example here is adding a streaming video,
which users can start, stop, and pause, to an LMS web page. While both integration and
embedding permit student interaction with the tools, only integrated tools offer a two-
way flow of data.

Overall, if students can access a tool directly and consistently within an LMS, as
allowed by both embedding and integration, the learning experience is strengthened.
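
The difference is easiest to see side by side. The sketch below is hypothetical Python, not any real LMS or LTI API: the embedding case only produces HTML to drop into a page (one-way), while the integration case assumes a gradebook line item that the tool can write scores back to (two-way). The field names scoreGiven and scoreMaximum echo the vocabulary of LTI's grade services but are used here purely for illustration.

```python
from dataclasses import dataclass

def embed_video(video_url: str, title: str) -> str:
    """Embedding: the tool is added to an LMS page as an HTML object.
    Students can interact with it, but no data flows back to the LMS."""
    return (f'<iframe src="{video_url}" title="{title}" '
            'width="640" height="360" allowfullscreen></iframe>')

@dataclass
class GradebookColumn:
    """Stand-in for an LMS gradebook line item exposed through an LTI-style integration."""
    scores: dict

    def post_score(self, student_id: str, score: float, maximum: float) -> None:
        # Integration: an assessment completed inside the tool syncs to the LMS gradebook
        # without the instructor re-entering grades.
        self.scores[student_id] = {"scoreGiven": score, "scoreMaximum": maximum}

# Embedded tool: one-way, display and interaction only.
lms_page_html = embed_video("https://video.example.edu/lecture1", "Week 1 lecture")

# Integrated tool: two-way, results flow into the gradebook automatically.
quiz_column = GradebookColumn(scores={})
quiz_column.post_score(student_id="stu123", score=8.5, maximum=10.0)
print(quiz_column.scores)
```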

Desktop/Laptop Operating Systems and Browser. Although operating systems and
browsers are distinct, we describe these separate rubric criteria as one here since they
relate to the same underlying question: Can learners effectively use the e-learning tool
on a desktop or laptop computer if they have a standard, up-to-date operating system
(OS) and/or browser? (We consider mobile OSs later, in the Mobile Design category.)
We define standard here as any commonly used OS and up-to-date as any OS still
supported by its vendor. The more OSs or browsers a tool supports, the better: any tool
that can be used only by users of one OS or browser is cause for concern. Selecting an
e-learning tool that can be installed and run on up-to-date versions of Windows and Mac
OS enables access for nearly all desktop and laptop users.

Additional Downloads. A tool that requires learners to install additional software or
browser plug-ins—whether on their own system or in the tool itself—is problematic.
Consider Adobe Flash Player: initially popular, it was later blocked by many browsers due
to security issues. If an e-learning tool relies on another piece of software in order to
work, it risks being rendered obsolete due to factors beyond the tool developers' control.

Mobile Design

With the continued adoption of mobile devices worldwide, instructional methods and
tools that deliver content using mobile technology will continue to grow and therefore
warrant their own assessment category.

Access. For e-learning tools accessed using a mobile device, the best ones will be OS-
and device-agnostic. This means that students, regardless of the mobile device they
choose to use, should be able to access and interact with the tool either through the
download of an application ("app") built for their OS or through the browser.

Functionality. Ideally, the mobile version will have few to no differences from the
desktop version. If there are multiple mobile versions for different OSs, the functionality
of different versions should be the same. In addition, the user experience should
consider the constraints of smaller mobile device screens, either by using responsive
design or by offering a mobile app.

Offline Access. To enhance its flexibility, any e-learning tool that accesses the internet
should offer an offline mode to expand access for those who have limited or intermittent
connectivity.

Privacy, Data Protection, and Rights

While e-learning tools offer numerous potential benefits for learners and instructors,
they also can entail risks. The primary concerns relate to personal information and
intellectual property (IP).

Sign Up / Sign In. Institutions have a responsibility to protect student data, including
name, student number or ID, geolocation information, and photos, videos, or audio files
containing a student's face or voice.12 When students are asked to create an account
with a third-party e-learning tool, the tool often requires them to disclose the same
personal information that higher education institutions are responsible to protect. Ideally,
no user of an e-learning tool will be required to disclose personal information when
accessing a tool—thus guaranteeing the protection of information. If personal
information is to be collected, instructors should be the only ones required to provide
that information (thereby protecting students), or the tool needs to have been vetted
through appropriate channels (e.g., an institution's procedures for IT risk assessment) to
ensure that the collection of student data by a third-party group is being protected
according to local and institutional standards.

Data Privacy and Ownership. E-learning tools can also raise various copyright and IP
concerns. Tools are increasingly hosted by for-profit companies on servers external to
an institution; these companies can sometimes claim ownership of the work that is
residing on their servers.13 Further, some e-learning tools may protect users' IP but
make their content publicly available by default. Other tools give users greater
autonomy over how content will be shared. The key factors to assess here are the IP
policies of the e-learning tool and the user's control over how content is shared.
Ultimately, users should maintain their IP rights and be able to exercise full control over
how their content is made public.

Archiving, Saving, and Exporting Data. The platforms used for hosting a tool may not
reliably ensure adequate protection against data loss. Instructors should thus analyze e-
learning tools to determine how data or content can be migrated back and forth between
the service and its user. In part, this guards against data loss through export and
backup while also offering learners the flexibility to freely move their content between
tools rather than being locked into or committed to one tool.

Social Presence

The final three categories of the rubric stem from the Communities of Inquiry (CoI)
model,14 which considers, in part, how the design of online learning environments might
best create and sustain a sense of community among learners. D. Randy Garrison,
Terry Anderson, and Walter Archer define social presence as the ability of participants
"to project their personal characteristics into the community, thereby presenting
themselves to the other participants as 'real people.'"15 This category focuses on
establishing a safe, trusting environment that fosters collaboration, teamwork, and an
overall sense of community.

Collaboration. Based on the principles of the CoI model, instructors are encouraged to
design learning activities and environments that provide students with frequent and
varied opportunities to interact with their peers and collaborate on activities to build a
sense of community. This manifests not only in providing synchronous and
asynchronous channels for knowledge exchange but also in establishing a sense of
community between users—for example, through the prompted creation of online
profiles that allow participants to establish an online identity.

User Accountability. If students are to engage willingly, a safe and trusted
environment is essential. Establishing and maintaining this requires that instructors be
able to identify students, even if they are otherwise using pseudonyms to protect their
privacy. Instructors must also be able to control users' contributions by moderating
forums and managing forum privileges. These features not only support social presence
but also aid in supporting student assessment.

Diffusion. The diffusion concept holds that commonly used tools are more readily
adopted than foreign ones.16 That is, students who feel familiar with a tool are more
likely to feel comfortable with and positive about using it, thus contributing to regular
use, endorsement, and sense of belonging.

Teaching Presence

Garrison defines teaching presence as "the crucial integrating force that structures and
leads the educational process in a constructive, collaborative and sustained
manner."17 In our rubric, we interpret teaching presence as related to tool elements that
enable instructors to establish and maintain their teaching presence through facilitation,
customization, and feedback.

Facilitation. Effective teaching presence requires a facilitative approach, characterized
by providing timely input, information, and feedback; questioning or challenging
students' thinking; modeling inquiry; and demonstrating cognitive engagement. Some e-
learning tools support these activities better than others. The rubric gives preference to
tools providing easy-to-use features that enhance an instructor's ability to effectively
engage in facilitation activities.

Customization. As educational developers, we value alignment between intended
learning outcomes and learning activities and assessments, advocating for technologies
and tools that complement learning outcomes. A tool can support this alignment when it
gives instructors the flexibility to customize how learners will engage with a tool, thus
enabling them to focus on specific uses while disregarding other, distracting functions.
Ideally, the tool also supports instructors in communicating this intentionality—making it
clear to students why they are doing what they are doing.

Learning Analytics. The NMC Horizon Report: 2018 Higher Education
Edition defines learning analytics as a diverse array of tools and applications that
enable instructors, students, and institutions to "collect, connect, combine, and interpret
data to more clearly understand learner capabilities and progress [and] fuel
personalized and adaptive learning experiences."18 Reviewers of an e-learning tool
should assess the availability, quality, and user-friendliness of the analytics offered to
ensure the tool provides the data required for tracking performance, providing
feedback on learning, and informing course design.
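
As a small, hypothetical illustration of what "collect, connect, combine, and interpret" can mean in practice, the Python sketch below combines a tool's exported attempt data into a per-student progress summary an instructor could act on. The column names are assumptions about a generic CSV export, not any particular tool's format:

```python
import csv
from collections import defaultdict
from io import StringIO

# Hypothetical export from an e-learning tool: one row per quiz attempt.
export = StringIO(
    "student,activity,score,max_score\n"
    "alice,quiz1,7,10\n"
    "alice,quiz2,9,10\n"
    "bob,quiz1,4,10\n"
)

def summarize_progress(rows):
    """Combine attempts into a per-student percentage so progress is easy to interpret."""
    totals = defaultdict(lambda: {"earned": 0.0, "possible": 0.0})
    for row in rows:
        totals[row["student"]]["earned"] += float(row["score"])
        totals[row["student"]]["possible"] += float(row["max_score"])
    return {
        student: round(values["earned"] / values["possible"] * 100, 1)
        for student, values in totals.items()
    }

print(summarize_progress(csv.DictReader(export)))   # {'alice': 80.0, 'bob': 40.0}
```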

Cognitive Presence

The third and final CoI framework category is cognitive presence: engagement in "the
process of inquiry that moves from problem definition to exploration of relevant content
and ideas, integrating those ideas into a meaningful structure or solution."19 In our
evaluation context, this category considers a tool's ability to support students' cognitive
engagement in learning tasks.

Enhancement of Cognitive Task(s). Ideally, an e-learning tool enhances or transforms
learning. Ruben Puentedura's SAMR (Substitution, Augmentation, Modification, and
Redefinition) model provides a framework for evaluating how e-learning technologies
influence cognitive tasks.20 The model gives preference to tools that transform learning
by modifying or redefining engagement in the targeted task, either by redesigning the
activity or by establishing new approaches previously inconceivable or unachievable
through other means. Our rubric encourages tool evaluators to select technologies that
modify or redefine tasks rather than simply substituting one task for another without
adding functional value.

Higher-Order Thinking. This criterion measures a tool's ability to help learners
integrate, rearrange, or extend new and existing information to achieve a purpose or
find answers to a complex problem. In considering a tool's cognitive elements,
reviewers should consider its ability to support higher-order learning tasks such as
critical thinking, problem solving, and reasoning.

Metacognitive Engagement. Metacognitive activities are those that prompt students to
understand, regulate, and reflect on their own cognitive activities across the
learning process. Many e-learning tools offer avenues for metacognitive engagement
and are considered successful on this criterion when they help students "think about
their learning, how they approach specific tasks, and the success of their
strategies."21 This is commonly achieved through formative feedback—the opportunity to
test knowledge, track performance, and monitor improvement in an effort to modify
thinking or behavior for improved learning.22 The rubric directly links formative feedback
and metacognitive practice, giving priority to those tools that enable instructors to
provide formative feedback in support of students' growth through self-regulated
learning and reflective practice.

Conclusion
The Rubric for E-Learning Tool Evaluation offers educators a framework, with criteria
and levels of achievement, to assess the suitability of an e-learning tool for their
learners' needs and for their own learning outcomes and classroom context. The rubric
was designed with utility in mind: it is intended to help decision-makers independently
evaluate e-learning tools.

We designed the rubric to be practical for everyday use. Although we wrote it within a
Canadian postsecondary context, our hope is that other researchers will adapt the
rubric for their local context in tertiary education. The rubric is freely available under
a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International
License. Given the rapid developments in e-learning and technology-enhanced
instruction, we hope others will use and build on our rubric to ensure its ongoing
relevance in light of future advances in the field.
Addendum
To assist with the sharing and adaptation of the Rubric for E-Learning Tool Evaluation,
the authors created a GitHub repository for the rubric.

Notes

1. For more on the SECTIONS (Students; Ease of use; Costs; Teaching and
Learning; Interactivity; Organization; Novelty; Speed) model, see A. W. Bates
and Gary Poole, Effective Teaching with Technology in Higher Education:
Foundations for Success (San Francisco: Jossey-Bass Publishers, 2003). ↩
2. Y. Malini Reddy and Heidi Andrade, "A Review of Rubric Use in Higher
Education," Assessment & Evaluation in Higher Education 35, no. 4 (2010),
435–448. ↩
3. John Biggs, "Constructive Alignment in University Teaching," HERDSA
Review of Higher Education 1 (July 2014), 1–18. ↩
4. Barbara J. Millis, "Why Faculty Should Adopt Cooperative Learning
Approaches," in Cooperative Learning in Higher Education: Across the
Disciplines, Across the Academy, ed. Barbara J. Millis (Sterling, VA: Stylus
Publishing, 2012), 1–9; Chih Hsiung Tu and Marina McIsaac, "The Relationship
of Social Presence and Interaction in Online Classes," American Journal of
Distance Education 16, no. 3 (2002), 131–150. ↩
5. Kim Watty, Jade McKay, and Leanne Ngo, "Innovators or Inhibitors?
Accounting Faculty Resistance to New Educational Technologies in Higher
Education," Journal of Accounting Education 36 (September 2016), 1–15; S.
Sharma, Geoff Dick, Wynne Chin, and Lesley Land, "Self-Regulation and e-
Learning," ECIS 2007 Proceedings 45 (2007); Hong Zhao and Li Chen, "How
Can Self-Regulated Learning Be Supported in E-Learning 2.0 Environment:
A Comparative Study," Journal of Educational Technology Development and
Exchange (JETDE) 9, no. 2 (2016), 1–20. ↩
6. Susan A. Ambrose, Michael W. Bridges, Michele DiPietro, Marsha C. Lovett,
Marie K. Norman, and Richard E. Mayer, How Learning Works: Seven
Research-Based Principles for Smart Teaching (San Francisco, CA: Jossey-Bass,
2010). ↩
7. Carmelo Ardito, Maria Francesca Costabile, Marilena De Marsico, Rosa
Lanzilotti, Stefano Levialdi, Paola Plantamura, Teresa Roselli, et al., "Towards
Guidelines for Usability of E-Learning Application," in User-Centered
Interaction Paradigms for Universal Access in the Information Society UI4ALL
2004, LNCS vol. 3196, ed. Christian Stary and Constantine Stephanidis (Berlin:
Springer, 2004), 82. ↩
8. Brian Kelly et al., "Accessibility 2.0: Next Steps for Web
Accessibility," Journal of Access Services 6, no. 1–2 (2009). ↩
9. Greg Gay, "Accessibility in E-Learning: What You Need to Know," Council of
Ontario Universities, [n.d.]. ↩
10. "Policy Brief: Ancillary Fees," Ontario Undergraduate Student Alliance,
2016. ↩
11. Asma A. Mosa, Mohammad N. Mahrin, and Roslina Ibrahim, "Technological
Aspects of E-Learning Readiness in Higher Education: A Review of the
Literature," Computer and Information Science 9, no. 1 (2016), 113–127. ↩
12. Andreas Schroeder, Shailey Minocha, and Christoph Schneider, "The
Strengths, Weaknesses, Opportunities and Threats of Using Social
Software in Higher and Further Education Teaching and Learning," Journal
of Computer Assisted Learning 26, no. 3 (2010), 159–174; Anthony E. Kelly and
Mika Seppala, "Changing Policies Concerning Student Privacy and Ethics in
Online Education," International Journal of Information and Education
Technology 6, no. 8 (2016), 652–655. ↩
13. Julia E. Rodriguez, "Social Media Use in Higher Education: Key Areas to
Consider for Educators," Journal of Online Learning and Teaching 7, no. 4
(2011), 539–550. ↩
14. D. Randy Garrison, E-Learning in the 21st Century: A Framework for Research
and Practice, 2nd ed. (London: Routledge, 2011). ↩
15. D. Randy Garrison, Terry Anderson, and Walter Archer, "Critical Inquiry in a
Text-Based Environment: Computer Conferencing in Higher Education
Model," The Internet and Higher Education 2, nos. 2-3 (2000), 89. ↩
16. Ronnie Cheung and Doug Vogel, "Predicting User Acceptance of
Collaborative Technologies: An Extension of the Technology Acceptance
Model for E-Learning," Computers &amp; Education 63 (2013), 160–175. ↩
17. D. Randy Garrison, "Online Collaboration Principles," Journal of
Asynchronous Learning Networks 10, no. 1 (2006), 26. ↩
18. Samantha Adams Becker et al., NMC Horizon Report: 2018 Higher Education
Edition (Louisville, CO: EDUCAUSE, 2018), 38. ↩
19. Garrison, "Online Collaboration Principles," 28. ↩
20. Ruben R. Puentedura, "SAMR: A Contextualized Introduction" [n.d.]. ↩
21. Zehra Akyol and D. Randy Garrison, "Assessing Metacognition in an Online
Community of Inquiry," The Internet and Higher Education 14 (2011), 183–
190. ↩
22. Valerie J. Shute, "Focus on Formative Feedback," Review of Educational
Research 78, no. 1 (2008), 153. ↩

Lauren M. Anstey is an educational developer with a focus on e-learning and
curriculum in the Centre for Teaching and Learning and is Adjunct Research Faculty in
the Centre for Research on Teaching and Learning in Higher Education at Western
University.
Gavan P. L. Watson is Associate Director, eLearning, in the Centre for Teaching and
Learning and is Adjunct Research Faculty in the Centre for Research on Teaching and
Learning in Higher Education at Western University.

© 2018 Lauren M. Anstey and Gavan P. L. Watson. The text of this article is licensed
under the Creative Commons Attribution 4.0 International License.

https://er.educause.edu/articles/2018/9/a-rubric-for-evaluating-e-learning-tools-in-higher-education
