Introduction
ABSTRACT: Despite the well-documented advantages of switching to instruction based on assessment of learning outcomes, many academic disciplines, including food science, are still based on the traditional mode of instruction. The problems of converting from traditional to assessment-driven instruction are numerous and change in the university setting is slow. However, certain guidelines can be followed to start the process for change and evaluate the effects on student learning. A partnership between the industry being served and academic instructors is needed to ensure that assessment-based instruction is focused on the proper principles. Methods of assessment of learning outcomes need to be carefully chosen and developed to bring industry standards and student learning together. This can be done only if both direct and indirect assessments at the program level provide faculty with means to answer their most pressing questions about what students know and are able to do as a result of Food Science education.
The transition to a curriculum based on assessment of learning outcomes is being embraced by numerous professional fields and is based on results from the education literature that demonstrate the benefits of such an approach (Diamond 1998; Palomba and Banta 1999). For example, the engineering disciplines and their standardizing body, the Accreditation Board for Engineering and Technology (ABET), have recently switched to an outcomes-based learning approach. Other fields utilizing this approach include nursing, accounting, and medicine, among others (Gainen and Locatelli 1995; Bowyer 1996; Stone 1996). In the field of Food Science, the Institute of Food Technologists (IFT) has reviewed programs in Food Science since 1977, based on minimum standards developed by leaders in the field. IFT approval carries some weight in that only students from IFT-approved programs are eligible for IFT scholarships. In 2001, the IFT policies for program review were changed significantly. Prior to 2001, a series of courses with set content material were prescribed and only departments that met these minimum course requirements were approved. Currently, the IFT Education Standards (www.ift.org/education/standards.shtml) require a program of assessment of student learning based on specified outcomes. IFT now requires (1) that a certain set of core competencies be met within the curriculum (not a specified set of courses), (2) that specific learning outcomes (statements of desired learning that are measurable using some assessment tool) be written both for individual courses and for the curriculum as a whole, (3) that adequate assessment tools be used for measuring student learning, both for individual classes and for the curriculum as a whole, and (4) that there be some well-thought-out process of curricular reform based on the results of the assessment data.
Assessment tools in this sense cover a wide range of measurements that allow the instructor to quantify the degree of competency of students for specific tasks. Details on the various assessment tools available are given later. To assist departments in making this transition, IFT provides a Guide Book for Food Science Programs (www.ift.org/education/guidebook/). The Guide Book provides the rationale for making this transition and some resources for outcomes-based education; however, each program is responsible for how it makes the transition. The process of change towards assessment of learning outcomes at each institution should be based on several factors. These include the experience of the instructors in education principles, the resources available, and the needs of the students at that particular institution. At some schools, the students may be primarily prepared for industrial jobs immediately upon receipt of the BS degree, whereas at other schools, the majority of students may be destined for Ph.D. degrees and jobs in research and development. Although the needs may be different, a program must know where its students end up and what their needs are in their careers. Instruction, in terms of course content and instructional style, must be geared towards the needs of the program. Assessments, in turn, must be used to provide faculty with the feedback they require to be sure that program needs are addressed. Changes to assessment typically result in changes to instruction (Madaus 1988). If assessments are designed to measure both content knowledge and student performance in key ability areas, then new forms of instruction, often emphasizing active student learning, are the result (Glatthorn 1999).
2003 Institute of Food Technologists
Despite all the recent activity in the education field regarding the benefits of assessment of learning outcomes, it is still common for most food science courses to be taught in a traditional manner. That is, the instructor lectures and gives the students a grade based on whether students can essentially repeat on an exam the material the instructor has asked them to learn. Perhaps some exam questions are used to test how well students can integrate the course material with their previous learning, but for the most part exams are based on reproducing the material covered in that class with little extension to other areas. Most of us understand that the ability to do well on an exam does not reflect a student's true understanding and competence with the material. To help them practice their responses and ensure that they do reasonably well on the exams, homework may be assigned such that students become familiar with the type of questions being asked. In each class, students must get accustomed to the style of the instructor's questions and how the exams are graded. In a previous publication (Hartel 2002), it was suggested that traditional education is similar to open-loop control of a process (similar to that used in a laundry washing machine). Students enter our program and progress through the curriculum taking all the required courses. We evaluate some aspect of learning, much as we would look at the clothes from our washing machine to see if they are clean or not. However, there are often students who do not perform as well as others upon graduation. Are we really sure that we have taught them as well as we possibly can? The food industry, a major recipient of our graduates, often says that graduates do not have the skills needed to succeed immediately upon graduation; instead, they take considerable time and energy to develop to the point where they can really contribute.
Although continual improvement is always desired, it is important that change not cause us to lose the good aspects of what is already done. Any transition in educational approaches should maintain the good and useful aspects of the previous model.
A change to assessment of learning outcomes
When an organizing body like IFT requires a change in program requirements, as has happened every 10 years or so since the standards were initiated in 1977, there are several automatic responses. The first and foremost is one of disagreement (or something stronger). As faculty, we do not take well to others telling us what to do. Faculty are a fairly autonomous group and prefer to maintain their strong independence. There may also be an element of dislike of any change; it is often said that there is a 40-year half-life for change at the university. There is a sense that the traditional model of instruction does a reasonable job at training students, is fairly efficient at training high numbers of students, and there is no need to change. Furthermore, the new IFT Education Standards require an understanding of the concepts of learning outcomes and of how we can assess whether or not our students have developed a certain skill. Most of us are not trained in education, and these activities will require substantial retraining on our part. However, the aim of the new Education Standards, which comes from a committee of faculty peers, is to promote continual improvement of food science education. It's hard to argue against having an attitude that we should always be striving to be better at what we do, and that includes our teaching skills. Shouldn't we faculty always be striving for excellence in whatever we do? We do this in our research programs, why not in teaching as well?
Available on-line at: www.ift.org
One of the simplest approaches to meeting the new Education Standards is to simply add on a few assessment methods. Many universities already require some assessment of our programs and these methods can be used to meet the IFT Standards. Such exercises as alumni surveys, exit interviews, and even employer surveys are things we can do to get some feedback about the students' experience and how well they perform in their jobs. This is all useful information, but the typical add-on type of assessment (that is, assessment methods that simply add on to the traditional system) still does not really get at the details needed for true and substantial improvement. Details of various assessment methods, including advantages and disadvantages of each, are summarized in a later section. A true change to a coordinated instructional and assessment program requires significant faculty cooperation (both among ourselves and with outside reviewers) and commitment. Faculty must carefully evaluate the curriculum to understand what learning outcomes are desired, based on the projected responsibilities of the students (and not just a subset of the students). Changes to the curriculum as well as the approaches to instruction must then be made to promote student learning. Many of the details must be decided by faculty discussion and consensus building because the needs of each individual program will be different. A plan outlined in the following sections details the general approach recommended by Palomba and Banta (1999) as "assessment essentials." The plan suggests the following steps: (1) agree on goals and objectives for learning; (2) design and implement a thoughtful approach to assessment planning; (3) involve individuals from on and off campus; (4) select or design and implement data collection approaches; (5) examine, share and act on assessment findings; and (6) regularly reexamine the assessment process.
Ultimately, these steps lead to development of a coherent curriculum designed for assessment of student progress throughout their stay in our department. An approach to this problem can be found in action science or action research, where faculty work together (among themselves and with outside influences, like industry personnel and education consultants) to develop the approach that works best for them (Argyris and others 1985; Zuber-Skerritt 1992a and b). Action research, as used in the education field, involves using social science methods to solve immediate problems to improve educational practices. The following example, on developing teamwork skills, indicates what is meant by a coordinated curriculum with a coordinated assessment program. Developing teamwork skills is an important aim of university instruction, as we all know that industry desires students with the skills to work together as a team. The first step to developing this skill in our students, however, is to truly define what we mean by teamwork skills. The input of industry personnel and alumni who can help us identify the specific skills would be quite useful here. Exactly what do they think is needed in terms of teamwork skills when a student first enters the field? Is it that our graduates need to learn to work together, communicate effectively in a group setting, and know how to resolve differences that arise? Once the specific skills have been identified and developed into true learning outcomes, where a statement describes the specific skill (or technical knowledge) desired, faculty must decide how to develop this skill within the curriculum and then to assess whether or not students have attained this skill by the time they graduate. For example, the following approach might be used to develop teamwork skills in our students and to assess how well they have
Vol. 2, 2003 JOURNAL OF FOOD SCIENCE EDUCATION
skills in our individual courses. To our surprise, we found that many of us use various instructional approaches to help our students learn these skills, but to no one's surprise, our approaches were not coordinated. The result of that exercise was to start people thinking about how we might first introduce the concepts of group activities and teamwork in a course early in the curriculum, and then build on that in a coordinated fashion in subsequent courses. In this way, by the time the students graduate, they will have developed these skills to the point where they can use them directly upon employment. In a similar fashion, one can envision coordinating the entire curriculum, both technical knowledge and success skills, around each of the desired outcomes defined in the first step. To do this, faculty need to review their existing courses and individual instructional approaches to see where and how aspects of each outcome are addressed. This will require substantial coordination among the teaching faculty and should be coordinated with departmental groups working on curriculum development and instructional improvement, made up of interested faculty and students. A series of retreats, workshops (coordinated with Industry Advisory groups), and meetings may be necessary to organize and coordinate instruction in the required courses so that, as a faculty, a consensus is reached on the best approach to promoting student learning. One specific goal within this objective is to develop learning outcomes for each required course in Food Science (as required by the IFT Education Standards). Since faculty are not experts in developing learning outcomes, it is critical to bring in outside expertise to provide assistance and training in writing good learning outcomes.
By evaluating what each course contributes towards the curricular outcomes, specific learning outcomes for each course can be developed that help facilitate student learning. A more coordinated curriculum will be the result. Develop a coordinated assessment program. After defining the curricular outcomes and developing the curriculum to emphasize these outcomes, the next step is to develop a coordinated assessment program that allows evaluation of how well students meet the desired outcomes. A variety of assessment tools (from standard exams and class projects to alumni surveys and exit interviews) are already in place even in the traditional model of instruction. However, these are not coordinated in any way to truly assess student learning and are rarely targeted towards specific outcomes, nor is there a coordinated approach to monitoring student progress through the curriculum. Assessment strategies, both within individual courses and for the curriculum as a whole, must be developed. Numerous resources exist that provide guidelines for developing an assessment program (Diamond 1998; Palomba and Banta 1999). One approach may be to distinguish assessment tools as examples of direct or indirect indicators of learning. Specific examples are listed below (LEAD Center; www.cae.wisc.edu/~lead/):
Direct indicators of learning:
- capstone course evaluation
- courses with embedded assessment
- tests and examinations
- portfolio evaluation
- thesis evaluation
- videotape/audiotape evaluation of performance
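To make the idea of coordination concrete, the mapping from program outcomes to the courses and tools that assess them can be kept as a simple, reviewable data structure. The sketch below is only an illustration: the outcome labels, course numbers, and tool names are hypothetical, and the direct/indirect split follows the distinction used in this article.

```python
# Hypothetical curriculum/assessment map: program outcome -> list of
# (course, assessment tool) pairs that gather evidence for it.
# All labels, course numbers, and tools are invented for illustration.
assessment_map = {
    "critical thinking":   [("FS 301", "exam"), ("FS 410", "capstone project")],
    "teamwork":            [("FS 210", "group lab report"), ("FS 410", "capstone project")],
    "oral communication":  [("FS 320", "alumni survey")],  # indirect evidence only
    "food chemistry core": [],                             # no evidence gathered yet
}

# Tools treated here as direct measures of student work (vs. surveys, interviews).
DIRECT_TOOLS = {"exam", "capstone project", "group lab report",
                "portfolio", "videotaped presentation"}

def uncovered_outcomes(mapping):
    """Outcomes for which no course gathers any assessment evidence."""
    return sorted(o for o, sources in mapping.items() if not sources)

def indirect_only(mapping, direct_tools):
    """Outcomes assessed somewhere, but never by a direct measure."""
    return sorted(
        o for o, sources in mapping.items()
        if sources and not any(tool in direct_tools for _, tool in sources)
    )

print(uncovered_outcomes(assessment_map))           # ['food chemistry core']
print(indirect_only(assessment_map, DIRECT_TOOLS))  # ['oral communication']
```

A department would of course build and maintain such a map through faculty discussion; the point of the sketch is that even a minimal explicit map makes gaps (outcomes with no evidence, or with only indirect evidence) easy to spot.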
specific changes. Such recommendations may be as simple as suggesting a slight change in instructional practices or a complete overhaul of the curriculum to ensure better coordination. Assessment programs begin with developing clear learning goals or outcomes, in this case starting with the Core Competencies listed in the IFT Standards, and then selecting the best assessment tools to gather information about student achievement of the goals (AAHE 1992).
Learning outcomes
Assessment is a goal-oriented process, and the statement of intended outcomes is the first step in any assessment plan. Learning outcomes are precise statements of what faculty expect students to know and be able to do as a result of completing a program, course, unit, or lesson (Huba and Freed 2000), although each of these levels suggests a distinct level of precision (Palomba and Banta 1999). These outcomes are based on broad educational standards like those of IFT, but must be translated into program- and course-specific outcomes or objectives. Learning outcomes are of two types: content and performance (Glatthorn 1999). Content outcomes are the discipline-specific knowledge that successful students have. Performance skills are what IFT calls success skills. Outcome statements enable faculty to develop assessments of content knowledge and its application to specific situations in areas like food processing or chemistry (Kendall and Marzano 1997). Broad goals or standards of both types must then be translated into learning objectives that describe specific demonstrations that students must perform successfully at both program and course levels (Erwin 1991). The challenge with developing learning objectives is to capture faculty's intentions so that they can be communicated effectively to students and can guide faculty in making choices about curriculum and instruction. Outcomes developed jointly with industry associates can assure that the outcomes align favorably with career expectations. Furthermore, it may not be necessary to reinvent the wheel, as colleagues at other campuses may have developed learning outcomes for programs and courses that can serve as templates. The best learning objectives take intended learning goals and make them clear to students, and are measurable without reducing their complexity.
Outcome statements for goals and objectives often begin with "Students will be able to..." (Huba and Freed 2000). As this sentence stem suggests, expected outcomes are best described in active terms such as design, create, analyze, apply (Maki 2002). For example, a broad professional or institutional standard about critical thinking, when it is converted into a specific objective for a program, becomes "Students will be able to define a problem, identify potential causes and possible solutions, and apply critical thinking skills to new situations" (IFT 2003). For a course, such as food processing, this statement is further specified to a level of increased precision. In an introductory food processing course, the objective "students will be able to select the best cleaning solution for a given process and justify their choice" makes it possible to bring knowledge and skill together in such a way that faculty can assess it. Specifying how critical thinking matters in a food science program gives faculty a place to start to develop course-level learning objectives. Without making a learning outcome statement for the food processing course, the goals of the course remain as vague and immeasurable as the original
Assessment tools are selected based on the type of information about student learning sought and how that information will be used. The best assessment programs use a variety of tools so that data from the assessments can be compared and differentially used to develop a fuller picture of what students have learned. These tools include both direct and indirect measures of student learning achievement. Each direct and indirect assessment tool has advantages and disadvantages, depending on the kind of information sought and how that information will be used. Taking these factors into account, assessment of student learning outcomes can be designed to meet local requirements and limitations and to match faculty needs for information about student learning in the program. Learning outcomes also become focal points for faculty collaboration in action science projects that ask significant questions about what students are learning and how learning opportunities may be increased program-wide. In the case of any assessment tool, teaching and learning in the program may be shaped by the assessments used (Madaus 1988). Therefore, assessment tools should be thoughtfully selected to provide rich, diverse, and complementary views of student learning outcomes and enable faculty to design learning experiences that build towards complex performances that use student knowledge and skill (Glatthorn 1999). Direct indicators of learning: These assessments take direct samples or measures of student performance in order to assess student learning and provide faculty with feedback about program effectiveness. These direct assessments should comprise the bulk of the assessment program and may include traditional testing as well as more authentic and performance-based assessments that measure student learning in success skills (Wiggins 1993).
At the program level, these assessments are not ready add-ons to existing programs, but rather capture the spirit of assessment that is intended to enhance overall student learning achievement. They require faculty collaboration across courses to design, implement, and make use of results, but are worth the effort because of the rich picture of student learning complementary direct assessments provide. Direct assessment information should be gathered only if it is going to be used by faculty to ensure that the learning they intend for students is accomplished in the program. A sampling of direct assessment tools follows. - Capstone course evaluation: Capstone courses are those that are designed as culminating experiences in a program and typically require advanced students to complete integrated performances that combine core program-level knowledge and abilities. Direct assessment of student portfolios, student research, or other culminating projects allows faculty to determine how well students understand and apply what they know (Lunde and others 1995). To accomplish the intended goals, capstone courses should be designed so that students have multiple chances to apply technical knowledge and success skills so these can be observed and measured. A food science capstone course allows for a culminating view of student achievement in many of the IFT standards. There are several challenges with capstone-based assessments: (1) coordinating outcomes across courses and faculty; (2) making sure that the assessments capture intended outcomes; and (3) developing reliable measurements and scoring guides (Palomba and Banta 1999). It may take several semesters of running the capstone to develop and refine it based on student performance and the department's changing need to gather evidence
of student learning. The course design will also have to be sensitive to changes in the field and will require faculty collaboration to make the assessments in the capstone useful. - Course-embedded assessment: A course-embedded assessment plan uses existing course assessments to systematically gather information about program-level learning outcomes. For example, any assessment of student learning that is currently used in a course can provide evidence of student learning achievements useful at the program level. Ideally, existing assessments form the core of the assessment plan, and faculty carefully select and coordinate these assessments to gather information that they can and will use about student performance across courses. In a food science program, the IFT problem-solving and critical thinking standard cited above is assessed repeatedly so that these skills are applied in ways that are specific to food chemistry, processing and engineering, and other core curricular areas. To know if students can consistently solve problems using the specific technical knowledge acquired in their courses, these assessment results can be used by the department as a whole to improve learning outcomes generally by coordinating learning opportunities. Course-embedded assessments have several advantages. First, they are cost effective and acceptable to faculty. Second, coordination between courses and sections can be facilitated by sharing assessment evidence. Also, student motivation to perform well is not compromised in such assessments, unlike out-of-course assessments of overall program effectiveness wherein students see no personal stake (Huba and Freed 2000). The success of course-embedded assessment hinges on faculty coordination of assessments and joint use of assessment data to improve the instructional program (Wergin and Swingen 2000). - Tests and examinations: Tests and examinations of content knowledge are the most familiar form of assessment in higher education.
Their advantages include the ability to determine what students know and wide faculty acceptance (Schilling and Schilling 1998). There are two main types of tests and examinations. Both are familiar, but they assess different aspects of learning and must be used judiciously in order to be valid and reliable measures of intended learning outcomes. First, large-scale multiple choice tests are easily administered and enable faculty to compare their students to others. These assessments are favored for accountability purposes, but do not provide information that is easily used to improve the program. Other sources of information are necessary to make these multiple choice test results useful. Second, course-specific tests and examinations are useful because they align to what has been taught in the program and can be used to respond when students do not meet faculty expectations. In food science programs, the majority of assessments will be of this type, and these examinations can serve as course-embedded assessments. Within the familiar examination, faculty are free to use a variety of item types, the most useful of which will require thoughtful responses or solutions to problems. Examinations that rely on students' short-term or working memory are associated with forgetting once the exam is completed and do little to build the kinds of cognitive processes associated with developing expertise. The disadvantages of large-scale multiple choice tests and some course examinations are clear, however: tests often cannot provide evidence of higher-order success skills like critical thinking and problem-solving or student ability to apply what has been learned. Multiple choice
tion to the field. Programs that are more interested in placing students in graduate school than in the field may find theses most appealing as part of an assessment plan. - Videotape/audiotape evaluation of performance: Performance assessments, including presentations, are recorded on video or audiotape to allow for reviewing the assessment or sharing it among assessors. Video and audiotaped assessments are often included in portfolios and can be used to show progression in student learning and achievement at authentic tasks (Angelo and Cross 1993). They can be cumbersome to store and time-consuming to assess, however. - Exit competency exam: The exit competency exam is a culminating, program-level assessment that requires students to draw on knowledge learned across courses. The exit competency exam shares with tests, examinations, and thesis evaluation a level of acceptance and the ability to capture both faculty and student motivation. In food science, a locally developed comprehensive examination can assure faculty that students have mastered core content knowledge in the field, and each subdiscipline can make its unique contribution to the exam, bolstering confidence that students are ready to graduate. One limitation of such exams is a traditional emphasis on content knowledge alone, failing to allow students an opportunity to develop and demonstrate complex abilities for professional success and knowledge application. Like all assessment tools, the exit competency exam can be a useful element in an overall assessment plan that does provide such opportunities. The exam requires ongoing fine-tuning to be certain that it reflects current curriculum and changes in the field. 
Indirect indicators of learning: Indirect assessment tools do not directly measure student learning, but seek information and insight about student performance through surveys, self-reports, interviews, focus groups, and other means of capturing faculty, student, alumni, or employer perceptions about students' preparation. They complement direct assessments and provide means to gather insights about program effectiveness from various stakeholder groups, but they do not take the place of direct assessments and should not dominate the program's assessment plan because of their relative ease of implementation, nor should they preclude the level of faculty collaboration required by direct assessment at the program level. As with direct assessment measures, no indirect assessment information should be gathered if it will not be used for program improvement and development. There are some commonly used indirect assessment tools that can complement a program that is based on direct assessments (Palomba and Banta 1999). Like direct assessments, indirect assessments provide information for faculty to ask and answer their own questions about student learning achievement. - External reviewers: External reviewers are members of stakeholder groups such as alumni, employers, or higher education colleagues that assist the program in assessment or evaluation. External reviewers can serve two complementary functions in an assessment program. First, program assessment should include the use of external reviewers or assessors of student performance as a matter of principle. This addresses a key accountability issue: faculty should not be the only ones judging student learning outcomes. External assessors provide a neutrality that benefits both assessment and evaluation of programs (Alverno College 1985).
Second, professional guidance from the food industry is useful for keeping knowledge applications current and building career connections for students by reviewing programs and providing advice to faculty (Palomba and Banta 1999). Students find the feedback from working professionals useful in their development of important knowledge and abilities as well as professional networks. Disadvantages are logistical and economic: use of external assessors means recruiting, training, and maintaining positive working relationships with a variety of industry and professional
bases must be developed and maintained. Here too, the issue of selective responses may lead to biases in the overall responses. - Curriculum/Syllabus Analysis: Analysis of written curriculum and course syllabi can yield information about course-by-course learning as well as an overall picture of a program's learning opportunities. Such an analysis also yields information about how learning is already assessed that can be used to select course-embedded assessments or to develop a program assessment plan (Palomba and Banta 1999). One advantage is the ability to study the program's parts in relation to the whole to identify gaps between intended outcomes and actual learning opportunities. Another advantage is the relative ease with which syllabi can be changed to reflect the development and refinement of learning outcomes without a cumbersome curriculum change process. Standardized course syllabi or syllabi templates based on IFT Standards can be created so that objectives can be clearly understood. Disadvantages include the possibility that curriculum documents and syllabi do not adequately or accurately represent the learning experiences of students. The process, if it is to yield useful results, requires a commitment of time for comprehensive review, analysis, and discussion.
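As a rough illustration of the syllabus analysis described above, simple keyword matching can give a first-pass coverage picture before the fuller faculty review. The course names, syllabus excerpts, and keyword lists below are invented for the sketch; a real analysis would read the department's actual syllabus files and treat the output only as a starting point for discussion.

```python
# Hypothetical syllabus excerpts keyed by course; in practice these would be
# loaded from the department's actual syllabus documents.
syllabi = {
    "FS 201": "Lectures on food chemistry; weekly problem solving sets.",
    "FS 310": "Unit operations; team design project with oral presentation.",
    "FS 410": "Capstone: problem solving, teamwork, and written reports.",
}

# Outcome -> keywords suggesting the syllabus addresses it (illustrative only).
keywords = {
    "problem solving": ["problem solving"],
    "teamwork":        ["team", "group"],
    "communication":   ["presentation", "report", "writing"],
}

def coverage(syllabi, keywords):
    """Map each outcome to the courses whose syllabus mentions a related term."""
    return {
        outcome: sorted(course for course, text in syllabi.items()
                        if any(term in text.lower() for term in terms))
        for outcome, terms in keywords.items()
    }

for outcome, courses in coverage(syllabi, keywords).items():
    print(f"{outcome}: {', '.join(courses) if courses else 'NO COURSE FOUND'}")
```

Such a matrix can only flag candidate gaps; whether a course genuinely develops an outcome, rather than merely mentioning it in the syllabus, is exactly the question the comprehensive review and discussion must answer.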
Conclusion
Integration of learning standards, such as those of IFT, means changing and developing new means of assessment and moving from traditional to assessment-driven instruction. Often, course-level assessments are changed and program-level assessments used for the first time. Assessment drives changes to curriculum as well as instructional practices and requires faculty to work together to clarify their intentions for student learning based on standards, to develop a system of complementary direct and indirect assessments, and to use assessment information to inquire about the effectiveness of courses and the program and to improve them. In this way, assessment is not unlike the research that faculty are accustomed to. Questions are raised and methods developed to answer the questions. Industry partners play a key role in assessment as advisors and assessors. Students and alumni also serve as sources of information and insight about program effectiveness. In any case, assessments should always be designed and implemented so that the information gathered matches the program's need for it, and faculty must commit to collaborative use of the evidence assessment yields.
References
Alverno College Faculty. 1985. Assessment at Alverno College. Milwaukee, Wis.: Alverno Publications. 83 p.
[AAHE] American Association for Higher Education. 1992. Principles of good practice for assessing student learning. Washington, D.C.: AAHE. 4 p.
Angelo TA, Cross KP. 1993. Classroom assessment techniques: A handbook for college teachers. 2nd ed. San Francisco: Jossey-Bass. 427 p.
Argyris C, Putnam R, Smith DM. 1985. Action science: Concepts, methods, and skills for research and intervention. San Francisco: Jossey-Bass. 480 p.
Bowyer KA. 1996. Efforts to continually improve a nursing program. In: Banta TW, Lund JP, Black KE, Oblander FW, editors. Assessment in practice: putting principles to work on college campuses. San Francisco: Jossey-Bass. p 128-9.
Diamond RM. 1998. Designing and assessing courses and curricula: a practical guide. San Francisco: Jossey-Bass.
Dillon WT. 1997. Involving community in program assessment. Assessment Update 4(2):4+.
Erwin TD. 1991. Assessing student learning and development: a guide to the principles, goals, and methods of determining college outcomes. San Francisco: Jossey-Bass. 208 p.
Gainen J, Locatelli P. 1995. Assessment for the new curriculum: a guide for professional accounting programs. Sarasota, Fla.: American Accounting Association. 158 p.
Glatthorn AA. 1999. Performance standards and authentic learning. Larchmont, N.Y.: Eye on Education. 176 p.
Hartel RW. 2002. Core competencies in food science: background information on the development of the IFT Education Standards. J Food Sci Educ