
Learning Object Reusability Metrics: Some Ideas from Software Engineering

Juan-José Cuadrado, University of Valladolid, SPAIN
Miguel-Ángel Sicilia, University of Alcalá, SPAIN, msicilia@uah.es


Abstract

Reusability is considered to be an essential characteristic of learning objects, which are the central notion of current approaches to standardized learning content design. In consequence, measurement instruments for learning object reusability should be developed for the sake of quality assessment, and also to serve as criteria to choose between alternate designs. In this paper, the applicability of metrics borrowed from the field of Software Engineering is analyzed, providing analogies for several metrics that can be given an interpretation in terms of learning objects.

Introduction and motivation

Reusability is considered to be an essential characteristic of the concept of learning object as the central notion for modern digital learning content design. For example, Polsani (2003) includes reuse in his definition of a learning object as "an independent and self-standing unit of learning content that is predisposed to reuse in multiple instructional contexts", and Wiley (2001) also mentions the term in his definition: "any digital resource that can be reused to support learning". Nonetheless, the concept of learning object reusability as a key quality factor for content design is difficult to characterize and measure, since it encompasses not only the evaluation of the contents by themselves (Vargo et al., 2003), but also a balance between their usability in specific contexts and the range of educational contexts they explicitly target (Sicilia and García, 2003).

Current specifications and standards have focused on the possibility of moving learning objects from one platform to another without changes. But this portability is not concerned with the actual educational design of learning objects, so standard-conformant learning objects may in fact be of little reusability in practice. Reusability in diverse educational contexts requires a careful design of the contents and their associated metadata records, so that they are consistent and complete enough (Pagés et al., 2003) to be useful for automated or manual selection. Reusability can be judged by humans by considering three interrelated aspects: (a) the quality of the separation of contents from presentation, which is required for effective reuse in technical and formatting terms; (b) the quality of the metadata record, especially its comprehensiveness and the clarity and precision of the educational context explicitly targeted; and (c) the quality of the instructional design for each of the educational contexts targeted.

Obviously, a reliable and consistent process for judging these three aspects would be time-consuming and expert-intensive. In fact, this process could be considered a structured version of the peer-review process of MERLOT (Cafolla, 2002), enhanced with tools to gather machine-readable data in a commonly agreed format. Such a process would result in highly reliable assessments, fostering the reuse of quality resources. Nonetheless, lighter approaches to assessing reusability would also be desirable for two important reasons. First, assessment could ideally be integrated in the learning design process itself, by augmenting computer-based design tools, or by providing automated analyzers useful for obtaining quick findings that could guide the rest of the evaluation. And second, software systems that automatically retrieve and compose learning objects could use these automated metrics as quality criteria guiding their actions.

Reusability characterizations aimed at automated analysis can be found in the literature on Software Engineering metrics (Fenton, 1991), so an exploration of such software metrics could be useful for the crafting of learning object metrics. Software engineering techniques, and in particular object technology, have been used as a source of inspiration for learning object design (Sosteric and Hesemeier, 2002; Sicilia and Sánchez, 2003). Nonetheless, the analogies are mostly metaphorical, since learning objects and software component instances have fairly different usage constraints.
Learning objects are, in a general sense, particular pieces of software, e.g. Web content, possibly with script code (or other similar artifacts) for interaction. But learning objects are intended for use by human users; that is, they are interface objects in the sense of being intended for browser-based interaction, while software components are characterized by being developer technology artifacts with an interface (not to be confused with a user interface) with strict type conformance. In addition, the intention of a software component is embodied in its interface(s), while the intended usages and constraints of a learning object are (or at least should be) described in its (separate) metadata record. Nonetheless, some of the metrics developed and used in Software Engineering (SE) in the last decades (Chidamber and Kemerer, 1994) deal with concepts like dependencies and complexity that have clear correlates in learning object technology. This suggests that SE metrics could shed some light on the problem of obtaining learning object metrics that are connected directly or indirectly to reusability. In the rest of this paper, some insights about the techniques that could be applied to learning objects are sketched.

Some Reusability Metrics applicable to Learning Objects

Although many software metrics are intended to measure actual reuse, only a few of them address reusability directly (Washizaki et al., 2003). Here we will analyze the classic Chidamber and Kemerer (1994) metrics that can be used to measure reusability, namely Weighted Methods per Class (WMC), Depth of Inheritance Tree (DIT), Coupling between Object Classes (CBO), and Lack of Cohesion in Methods (LCOM). In what follows, the underlying ideas of these metrics are revisited in an attempt to find analogies in the field of learning objects.

The WMC metric is the aggregation of the complexities of the methods of a given class, which could be used as a predictor for the reusability of the class, under the view that classes with large numbers of methods are likely to be more application-specific, limiting the possibility of reuse. The following analogies can be considered for this metric. The learning object under consideration stands for the class as the unit of analysis; this analogy has also been established elsewhere (Sicilia and Sánchez, 2003). The concept of method, as a capability of the class, can be assimilated to the concept of interactive activity inside a composite learning object. A broad notion of activity is that of the different interaction units inside the learning object; for example, a learning object with explanatory text followed by a questionnaire can be considered as having two methods (activities). Finally, the notion of complexity can be stated in terms of the granularity of activities. This concept of granularity is dependent on the type of content under consideration, e.g. for texts it can be counted in terms of number of words, while for questionnaires it could be measured in terms of the number and type of questions. The resulting metric for learning objects would be consistent with the current consideration that only learning objects of fine granularity may offer a high degree of reusability, e.g. (Wiley, 2003), and the decomposition idea fits well in content aggregation schemes like that of SCORM [1] or the activity-based language of the IMS Learning Design [2] specification.
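As an illustration of how such a WMC-like measure could be automated, the following Python sketch aggregates granularity-based complexities over the activities of a composite learning object. It is a minimal sketch under our own assumptions: the activity kinds, weights and scaling factors are illustrative and not taken from any specification.

# Hypothetical WMC analog for learning objects: sums type-dependent,
# granularity-based complexities over the activities of one object.
from dataclasses import dataclass

@dataclass
class Activity:
    kind: str            # e.g. "text" or "questionnaire" (assumed vocabulary)
    words: int = 0       # granularity measure for textual activities
    questions: int = 0   # granularity measure for questionnaires

def activity_complexity(a: Activity) -> float:
    # Type-dependent granularity, as discussed above; the scales are assumptions.
    if a.kind == "text":
        return a.words / 100.0      # assumed scale: 100 words = 1 complexity unit
    if a.kind == "questionnaire":
        return 1.5 * a.questions    # assumed weight per question
    return 1.0                      # default: one complexity unit per activity

def wmc_like(activities: list[Activity]) -> float:
    # Direct analog of Weighted Methods per Class: sum of "method" complexities.
    return sum(activity_complexity(a) for a in activities)

# Example: explanatory text followed by a questionnaire (two "methods").
lo = [Activity("text", words=400), Activity("questionnaire", questions=5)]
print(wmc_like(lo))  # 4.0 + 7.5 = 11.5; higher values suggest lower reusability

Under this reading, a coarse-grained learning object accumulates complexity across its activities, matching the intuition that fine-grained objects are the more reusable ones.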

The DIT metric is the depth of a class in the inheritance tree of a software framework. This metric is related to reusability under the view that classes that are deeper in the inheritance tree are more complex (having probably inherited more features), and thus less predisposed to reuse. Here the key concept is that of inheritance, i.e. sub-typing. Types of learning objects are expressed in current learning object metadata by simply putting a label in a metadata field (Learning Resource Type in LOM [3]), but richer approaches, involving actual inheritance of features and properties, have been proposed elsewhere (Sicilia et al., 2004). Inheritance depth as a driver for increased complexity applies in a similar way to learning objects, since subtyping entails the requirement of more detailed metadata elements; e.g. a meta-cognitive questionnaire activity can be defined as a specialization of learning object, questionnaire and meta-cognitive resource, thus requiring detailed descriptions of the three aspects involved.

The CBO metric is the count of the number of classes to which the one under consideration is coupled. Higher CBO values prevent reuse since they are detrimental to modular design. This metric has a direct translation in terms of learning object relationships, which can be defined with current metadata schemas like LOM, even though these are not free of ambiguities, as described in (Sánchez and Sicilia, 2004). Provided that metadata records declare those relationships explicitly, the couplings are easy to count automatically. Obviously, a high number of relationships to other learning objects (except in the case of versionOf-like relationships, which do not entail dependencies) indicates either that the learning object is of a high level of granularity (i.e. it is aggregated or composed of many other learning objects) or that it is not self-contained, in the sense that it declares dependencies on other learning objects which are required to properly use the one under consideration.

Finally, the LCOM metric is a measure of the (lack of) overlap in the use of attributes by the methods of a class. If the methods use separate subsets of the attributes of the class, it can be hypothesized that the methods are not correctly grouped, and the class should probably be split into several ones. Classes with high LCOM hinder reuse, since such disparateness makes them difficult to understand. To assess the utility of this metric for learning objects, an analogy for class attributes is required. If we consider that the different activities (methods) inside the class (learning object) deal with some concepts that are the objectives for the learner, it can be stated that learning objectives (i.e. the intended outcomes of the process of learner-learning object interaction) can be regarded metaphorically as attributes of the class. In consequence, disparateness of objectives (as stated in metadata records) across the activities that are part of a learning object is an indicator of ill-defined objectives, which hampers reuse driven by a particular learning need. Hedged sketches of the DIT, CBO and LCOM analogies follow below.
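In the same illustrative spirit as the WMC sketch, a DIT analog could be computed as the depth of a learning object type in a declared type hierarchy. The hierarchy below is a hypothetical encoding of the meta-cognitive questionnaire example; no current metadata standard prescribes such a structure.

# Hypothetical type hierarchy (child -> parent types), allowing multiple
# parents as in the meta-cognitive questionnaire example discussed above.
TYPE_PARENTS = {
    "learning-object": [],
    "questionnaire": ["learning-object"],
    "meta-cognitive-resource": ["learning-object"],
    "meta-cognitive-questionnaire": ["questionnaire", "meta-cognitive-resource"],
}

def dit_like(lo_type: str) -> int:
    # Depth of the longest inheritance path from the type up to a root type.
    parents = TYPE_PARENTS.get(lo_type, [])
    if not parents:
        return 0
    return 1 + max(dit_like(p) for p in parents)

print(dit_like("meta-cognitive-questionnaire"))  # 2: deeper types demand richer metadata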
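A CBO analog could, in turn, be derived by counting the relation entries of a metadata record, leaving out versionOf-like kinds as argued above. The record layout is an illustrative simplification of a parsed LOM record, not an actual API.

# Hypothetical CBO analog: count declared relations that entail a dependency.
NON_COUPLING_KINDS = {"isversionof", "hasversion"}  # versionOf-like relations

def cbo_like(relations: list[dict]) -> int:
    # Each relation is a parsed entry such as {"kind": ..., "target": ...}.
    return sum(1 for r in relations
               if r.get("kind", "").lower() not in NON_COUPLING_KINDS)

record = [
    {"kind": "requires", "target": "lo-042"},
    {"kind": "isPartOf", "target": "course-07"},
    {"kind": "isVersionOf", "target": "lo-001"},  # excluded: not a dependency
]
print(cbo_like(record))  # 2: more couplings, less self-contained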

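Finally, an LCOM analog over declared objectives could follow the original Chidamber-Kemerer formulation (P - Q, floored at zero), with learning objectives playing the role of class attributes. The objective identifiers below are hypothetical.

# Hypothetical LCOM analog: P = activity pairs sharing no objective,
# Q = pairs sharing at least one; max(P - Q, 0) as in Chidamber-Kemerer.
from itertools import combinations

def lcom_like(objectives_per_activity: list[set[str]]) -> int:
    p = q = 0
    for a, b in combinations(objectives_per_activity, 2):
        if a & b:
            q += 1  # the two activities share at least one objective
        else:
            p += 1  # disjoint objectives: a sign of disparateness
    return max(p - q, 0)

# Three activities; the third shares no objective with the other two.
acts = [{"obj-recursion"}, {"obj-recursion", "obj-sorting"}, {"obj-testing"}]
print(lcom_like(acts))  # P=2, Q=1 -> 1: a nonzero value hints the object should be split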
Conclusions and Outlook

[1] http://www.adlnet.org
[2] http://www.imsproject.org
[3] http://ltsc.ieee.org/wg12/

Learning object reusability is a concept that is difficult to characterize, due to its multidimensional nature, encompassing formatting, content and metadata considerations. As a consequence, thorough reusability evaluation requires time-consuming techniques carried out by human experts. A less expensive complement to such techniques may come from metrics that can be automated and used to provide a quick analysis of the potential reusability of learning objects. Several analogies can be found between classical software reusability metrics and learning object characteristics, especially the analogy of granularity as a form of complexity, and the consideration of learning object dependencies. The discussion about the applicability of ideas taken from classical software metrics to the learning object domain suggests that it would be worth investigating the validity of some of their correlates in current learning object repositories. This would require metadata records of better quality than many of those that can be obtained in current systems, and also the clarification of several aspects regarding learning objects, including a consistent interpretation of their relationships. In other words, enhanced metadata creation practices can be considered a prerequisite for the crafting of reliable learning object metrics.

References

Cafolla, R. (2002). Project Merlot: Bringing Peer Review to Web-based Educational Resources. In Proceedings of the USA Society for Information Technology and Teacher Education International Conference, pp. 614-618.

Chidamber, S. and Kemerer, C. (1994). A Metrics Suite for Object Oriented Design. IEEE Transactions on Software Engineering, 20(6).

Fenton, N. (1991). Software Metrics: A Rigorous Approach. Chapman & Hall, London, UK.

Pagés, C., Sicilia, M.A., García, E., Martínez, J.J. and Gutiérrez, J.M. (2003). On the Evaluation of Completeness of Learning Object Metadata in Open Repositories. In Proceedings of the Second International Conference on Multimedia and Information & Communication Technologies in Education (m-ICTE 2003), pp. 1760-1764.

Polsani, P. R. (2003). Use and Abuse of Reusable Learning Objects. Journal of Digital Information, 3(4). Retrieved May 11, 2004 from http://jodi.ecs.soton.ac.uk/Articles/v03/i04/Polsani/

Sánchez, S. and Sicilia, M.A. (2004). On the semantics of aggregation and generalization in learning object contracts. In Proceedings of the 4th IEEE International Conference on Advanced Learning Technologies (ICALT 2004), Joensuu, Finland.

Sicilia, M.A. and García, E. (2003). On the Concepts of Usability and Reusability of Learning Objects. International Review of Research in Open and Distance Learning, 4(2).

Sicilia, M.A. and Sánchez, S. (2003). Learning Object "Design by Contract". WSEAS Transactions on Systems, 2(3), 612-617.

Sicilia, M.A., García, E., Sánchez, S. and Rodríguez, E. (2004). Describing learning object types in ontological structures: towards specialized pedagogical selection. In Proceedings of ED-MEDIA 2004 - World Conference on Educational Multimedia, Hypermedia and Telecommunications, Lugano, Switzerland.

Sosteric, M. and Hesemeier, S. (2002). When a Learning Object is not an Object: A first step towards a theory of learning objects. International Review of Research in Open and Distance Learning, 3(2).

Vargo, J., Nesbit, J., Belfer, K. and Archambault, A. (2003). Learning object evaluation: Computer mediated collaboration and inter-rater reliability. International Journal of Computers and Applications, 25(3).

Washizaki, H., Yamamoto, Y. and Fukazawa, Y. (2003). A Metrics Suite for Measuring Reusability of Software Components. In Proceedings of the 9th IEEE International Symposium on Software Metrics.

Wiley, D. A. (2001). The Instructional Use of Learning Objects. Association for Educational Communications and Technology, Bloomington.

Wiley, D. A., Gibbons, A. and Recker, M. M. (2000). A reformulation of learning object granularity. Retrieved July 2003 from http://reusability.org/granularity.pdf
