A Rubric for Evaluating E-Learning Tools in Higher Education
By Lauren Anstey and Gavan Watson
Published: Monday, September 10, 2018
The Rubric for E-Learning Tool Evaluation offers educators a framework, with criteria
and levels of achievement, to assess the suitability of an e-learning tool for their
learners' needs and for their own learning outcomes and classroom context.
However, aside from the SECTIONS model,1 we found that existing models fell short of our needs.
Why a Rubric?
Educators often use rubrics to articulate "the expectations for an assignment by listing
the criteria or what counts, and describing levels of quality."2 We have adapted these
broad aims to articulate the appropriate assessment criteria for e-learning tools using
the standard design components of other analytical rubrics: categories, criteria,
standards, and descriptors.
We organized our rubric's evaluation criteria into eight categories. Each category has a
specific set of characteristics, or criteria, against which e-learning tools are evaluated,
and each criterion is assessed against three standards: works well, minor concerns,
or serious concerns. Finally, the rubric offers individual descriptions of the qualities an
e-learning tool must have to achieve a standard.
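The rubric's design components described above can be sketched as a simple data structure. This is an illustrative sketch only, not part of the published rubric; the class and function names are our own, and the example criteria are hypothetical.

```python
from dataclasses import dataclass, field

# The three standards against which every criterion is assessed.
STANDARDS = ("works well", "minor concerns", "serious concerns")

@dataclass
class Criterion:
    """A single characteristic against which a tool is evaluated."""
    name: str
    # One descriptor per standard: the qualities a tool must have
    # to achieve that standard for this criterion.
    descriptors: dict

@dataclass
class Category:
    """One of the rubric's eight categories, grouping related criteria."""
    name: str
    criteria: list = field(default_factory=list)

def summarize(tool_ratings: dict) -> dict:
    """Count how many rated criteria fall under each standard.

    `tool_ratings` maps a criterion name to the standard chosen
    by the evaluator for that criterion.
    """
    summary = {standard: 0 for standard in STANDARDS}
    for standard in tool_ratings.values():
        if standard not in STANDARDS:
            raise ValueError(f"unknown standard: {standard}")
        summary[standard] += 1
    return summary

# Hypothetical evaluation of a tool on two criteria.
ratings = {"Scale": "works well", "Mobile Access": "minor concerns"}
print(summarize(ratings))
```

Because a criterion that is irrelevant to a given tool can simply be omitted from `tool_ratings`, the sketch mirrors the rubric's intent of adapting to the instructor's needs rather than prescribing a fixed checklist.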
Although our rubric integrates a broad range of functional, technical, and pedagogical
criteria, it is not intended to be overly prescriptive. Our goal is for the framework to
respond to an instructor's needs and be adapted as appropriate. For example, when a
rubric criterion is not relevant to the assessment of a particular tool, it can be excluded
without impacting the overall quality of the assessment.
The rubric reflects our belief that instructors should choose e-learning tools in the
context of the learning experience. We therefore encourage an explicit alignment
between the instructor's intended outcomes and the tool, based on principles of
constructive alignment.3 Given the diversity of outcomes across learning experiences, e-
learning tools should be chosen on a case-by-case basis and should be tailored to each
instructor's intended learning outcomes and planned instructional activities. We
designed the rubric with this intention in mind.
Functionality
Scale. Postsecondary classrooms vary in format and size, ranging from small seminars
to large-enrollment courses. In larger courses, creating small groups increases contact
among students, fosters cooperative learning, and enhances social presence among
learners.4 An e-learning tool should therefore not only be flexible in accommodating
various class sizes but also be capable of supporting small-group work.
Hence, scale focuses on the tool's affordances to accommodate the size and nature of
the classroom environment.
Accessibility
Technical
A fully integrated tool shares data back and forth with the institution's LMS. Currently,
tool integration is most often achieved through Learning Tools Interoperability (LTI)
compliance. Using an LTI-compliant tool should mean a seamless experience for users.
For example, accounts are created in the integrated e-learning tool without user input,
and assessments occurring within the tool are synced automatically to the LMS
gradebook. In contrast, an embedded tool is added as an object with HTML code—that
is, the tool is "inserted" into a webpage. An example here is adding a streaming video,
which users can start, stop, and pause, to an LMS web page. While both integration and
embedding permit student interaction with the tools, only integrated tools offer a two-
way flow of data.
Overall, if students can access a tool directly and consistently within an LMS, as
allowed by both embedding and integration, the learning experience is strengthened.
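The one-way versus two-way distinction can be illustrated with a minimal sketch. This is not real LTI code; the `IntegratedTool` class and `lms_gradebook` parameter are hypothetical stand-ins for the LMS integration the text describes.

```python
def embed_video(video_url: str) -> str:
    """Embedding: the tool is inserted into a page as an HTML object.
    Students can interact with it, but no data flows back to the LMS."""
    return f'<iframe src="{video_url}" allowfullscreen></iframe>'

class IntegratedTool:
    """Integration: the tool exchanges data with the LMS, for example
    auto-created accounts and assessment scores synced to the gradebook."""

    def __init__(self, lms_gradebook: dict):
        # Stands in for the institution's LMS gradebook.
        self.lms_gradebook = lms_gradebook

    def record_score(self, student_id: str, score: float) -> None:
        # Two-way flow: an assessment completed inside the tool is
        # written back to the LMS gradebook automatically.
        self.lms_gradebook[student_id] = score

# Embedding produces markup only; integration updates LMS data.
gradebook = {}
tool = IntegratedTool(gradebook)
tool.record_score("student-42", 0.85)
```

The embed function can only hand the browser something to display, while the integrated tool mutates LMS state, which is exactly the difference the rubric asks evaluators to notice.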
Mobile Design
With the continued adoption of mobile devices worldwide, instructional methods and
tools that deliver content using mobile technology will continue to grow and therefore
warrant their own assessment category.
Access. The best e-learning tools accessed from a mobile device are OS- and
device-agnostic. This means that students, regardless of the mobile device they
choose to use, should be able to access and interact with the tool either through a
downloadable application ("app") built for their OS or through the browser.
Functionality. Ideally, the mobile version will have few to no differences from the
desktop version. If there are multiple mobile versions for different OSs, the functionality
of different versions should be the same. In addition, the user experience should
consider the constraints of smaller mobile device screens, either by using responsive
design or by offering a mobile app.
Offline Access. To enhance its flexibility, any e-learning tool that relies on internet
access should offer an offline mode, expanding access for those who have limited or
intermittent connectivity.
Privacy, Data Protection, and Rights
While e-learning tools offer numerous potential benefits for learners and instructors,
they can also entail risks. The primary concerns relate to personal information and
intellectual property (IP).
Data Privacy and Ownership. E-learning tools can also raise various copyright and IP
concerns. Tools are increasingly hosted by for-profit companies on servers external to
an institution; these companies can sometimes claim ownership of the work that is
residing on their servers.13 Further, some e-learning tools may protect users' IP but
make their content publicly available by default. Other tools give users greater
autonomy over how content will be shared. The key factors to assess here are the IP
policies of the e-learning tool and the user's control over how content is shared.
Ultimately, users should maintain their IP rights and be able to exercise full control over
how their content is made public.
Archiving, Saving, and Exporting Data. The platforms used for hosting a tool may not
reliably ensure adequate protection against data loss. Instructors should thus analyze e-
learning tools to determine how data or content can be migrated back and forth between
the service and its user. In part, this guards against data loss through export and
backup while also offering learners the flexibility to freely move their content between
tools rather than being locked into or committed to one tool.
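A minimal sketch of the export-and-reimport round trip described above, using a local JSON file as the backup format. The function names and the choice of JSON are illustrative assumptions, not features of any particular tool.

```python
import json

def export_content(entries: list, path: str) -> None:
    """Write the user's content to a local file they control,
    guarding against data loss on the hosting platform."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(entries, f, indent=2)

def import_content(path: str) -> list:
    """Re-import the backup into another tool or a later session,
    so learners are not locked into a single service."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)
```

A tool that supports this kind of round trip lets content move freely between services; a tool whose data can only be viewed inside its own platform fails the criterion.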
Social Presence
The final three categories of the rubric stem from the Communities of Inquiry (CoI)
model,14 which considers, in part, how the design of online learning environments might
best create and sustain a sense of community among learners. D. Randy Garrison,
Terry Anderson, and Walter Archer define social presence as the ability of participants
"to project their personal characteristics into the community, thereby presenting
themselves to the other participants as 'real people.'"15 This category focuses on
establishing a safe, trusting environment that fosters collaboration, teamwork, and an
overall sense of community.
Diffusion. The diffusion concept holds that commonly used tools are more readily
adopted than foreign ones.16 That is, students who feel familiar with a tool are more
likely to feel comfortable with and positive about using it, thus contributing to regular
use, endorsement, and sense of belonging.
Teaching Presence
Garrison defines teaching presence as "the crucial integrating force that structures and
leads the educational process in a constructive, collaborative and sustained
manner."17 In our rubric, we interpret teaching presence as related to tool elements that
enable instructors to establish and maintain their teaching presence through facilitation,
customization, and feedback.
Cognitive Presence
The third and final CoI framework category is cognitive presence: engagement in "the
process of inquiry that moves from problem definition to exploration of relevant content
and ideas, integrating those ideas into a meaningful structure or solution."19 In our
evaluation context, this category considers a tool's ability to support students' cognitive
engagement in learning tasks.
Conclusion
The Rubric for E-Learning Tool Evaluation offers educators a framework, with criteria
and levels of achievement, to assess the suitability of an e-learning tool for their
learners' needs and for their own learning outcomes and classroom context. The rubric
was designed with utility in mind: it is intended to help decision-makers independently
evaluate e-learning tools.
We designed the rubric to be practical for everyday use. Although we wrote it within a
Canadian postsecondary context, our hope is that other researchers will adapt the
rubric for their local context in tertiary education. The rubric is freely available under
a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International
License. Given the rapid developments in e-learning and technology-enhanced
instruction, we hope others will use and build on our rubric to ensure its ongoing
relevance in light of future advances in the field.
Addendum
To assist with the sharing and adaptation of the Rubric for E-Learning Tool Evaluation,
the authors created a GitHub repository for the rubric.
Notes
1. For more on the SECTIONS (Students; Ease of use; Costs; Teaching and
Learning; Interactivity; Organization; Novelty; Speed) model, see A. W. Bates
and Gary Poole, Effective Teaching with Technology in Higher Education:
Foundations for Success (San Francisco: Jossey-Bass Publishers, 2003). ↩
2. Y. Malini Reddy and Heidi Andrade, "A Review of Rubric Use in Higher
Education," Assessment & Evaluation in Higher Education 35, no. 4 (2010),
435–448. ↩
3. John Biggs, "Constructive Alignment in University Teaching," HERDSA
Review of Higher Education 1 (July 2014), 1–18. ↩
4. Barbara J. Millis, "Why Faculty Should Adopt Cooperative Learning
Approaches," in Cooperative Learning in Higher Education: Across the
Disciplines, Across the Academy, ed. Barbara J. Millis (Sterling, VA: Stylus
Publishing, 2012), 1–9; Chih Hsiung Tu and Marina McIsaac, "The Relationship
of Social Presence and Interaction in Online Classes," American Journal of
Distance Education 16, no. 3 (2002), 131–150. ↩
5. Kim Watty, Jade McKay, and Leanne Ngo, "Innovators or Inhibitors?
Accounting Faculty Resistance to New Educational Technologies in Higher
Education," Journal of Accounting Education 36 (September 2016), 1–15; S.
Sharma, Geoff Dick, Wynne Chin, and Lesley Land, "Self-Regulation and e-
Learning," ECIS 2007 Proceedings 45 (2007); Hong Zhao and Li Chen, "How
Can Self-Regulated Learning Be Supported in E-Learning 2.0 Environment:
A Comparative Study," Journal of Educational Technology Development and
Exchange (JETDE) 9, no. 2 (2016), 1–20. ↩
6. Susan A. Ambrose, Michael W. Bridges, Michele DiPietro, Marsha C. Lovett,
Marie K. Norman, and Richard E. Mayer, How Learning Works: Seven
Research-Based Principles for Smart Teaching (San Francisco, CA: Jossey
Bass, 2010). ↩
7. Carmelo Ardito, Maria Francesca Costabile, Marilena De Marsico, Rosa
Lanzilotti, Stefano Levialdi, Paola Plantamura, Teresa Roselli, et al., "Towards
Guidelines for Usability of E-Learning Application," in User-Centered
Interaction Paradigms for Universal Access in the Information Society UI4ALL
2004, LNCS vol. 3196, ed. Christian Stary and Constantine Stephanidis (Berlin:
Springer, 2004), 82. ↩
8. Brian Kelly et al., "Accessibility 2.0: Next Steps for Web
Accessibility," Journal of Access Services 6, no. 1–2 (2009). ↩
9. Greg Gay, "Accessibility in E-Learning: What You Need to Know," Council of
Ontario Universities, [n.d.]. ↩
10. "Policy Brief: Ancillary Fees," Ontario Undergraduate Student Alliance,
2016. ↩
11. Asma A. Mosa, Mohammad N. Mahrin, and Roslina Ibrahim, "Technological
Aspects of E-Learning Readiness in Higher Education: A Review of the
Literature," Computer and Information Science 9, no. 1 (2016), 113–127. ↩
12. Andreas Schroeder, Shailey Minocha, and Christoph Schneider, "The
Strengths, Weaknesses, Opportunities and Threats of Using Social
Software in Higher and Further Education Teaching and Learning," Journal
of Computer Assisted Learning 26, no. 3 (2010), 159–174; Anthony E. Kelly and
Mika Seppala, "Changing Policies Concerning Student Privacy and Ethics in
Online Education," International Journal of Information and Education
Technology 6, no. 8 (2016), 652–655. ↩
13. Julia E. Rodriguez, "Social Media Use in Higher Education: Key Areas to
Consider for Educators," Journal of Online Learning and Teaching 7, no. 4
(2011), 539–550. ↩
14. D. Randy Garrison, E-Learning in the 21st Century: A Framework for Research
and Practice, 2d ed. (London: Routledge, 2011). ↩
15. D. Randy Garrison, Terry Anderson, and Walter Archer, "Critical Inquiry in a
Text-Based Environment: Computer Conferencing in Higher Education
Model," The Internet and Higher Education 2, nos. 2–3 (2000), 89. ↩
16. Ronnie Cheung and Doug Vogel, "Predicting User Acceptance of
Collaborative Technologies: An Extension of the Technology Acceptance
Model for E-Learning," Computer Education 63 (2013), 160–175. ↩
17. D. Randy Garrison, "Online Collaboration Principles," Journal of
Asynchronous Learning 10, no. 1 (2006), 26. ↩
18. Samantha Adams Becker et al., NMC Horizon Report: 2018 Higher Education
Edition (Louisville, CO: EDUCAUSE, 2018), 38. ↩
19. Garrison, "Online Collaboration Principles," 28. ↩
20. Ruben R. Puentedura, "SAMR: A Contextualized Introduction" [n.d.]. ↩
21. Zehra Akyol and D. Randy Garrison, "Assessing Metacognition in an Online
Community of Inquiry," The Internet and Higher Education 14 (2011), 183–
190. ↩
22. Valerie J. Shute, "Focus on Formative Feedback," Review of Educational
Research 78, no. 1 (2008), 153. ↩
© 2018 Lauren M. Anstey and Gavan P. L. Watson. The text of this article is licensed
under the Creative Commons Attribution 4.0 International License.
https://er.educause.edu/articles/2018/9/a-rubric-for-evaluating-e-learning-tools-in-higher-education