Making the Transition to a Food Science Curriculum Based on Assessment of Learning Outcomes

R.W. Hartel and D. Gardner

ABSTRACT: Despite the well-documented advantages of switching to instruction based on assessment of learning outcomes, many academic disciplines, including food science, are still based on the traditional mode of instruction. The problems of converting from traditional to assessment-driven instruction are numerous, and change in the university setting is slow. However, certain guidelines can be followed to start the process of change and evaluate its effects on student learning. A partnership between the industry being served and academic instructors is needed to ensure that assessment-based instruction is focused on the proper principles. Methods of assessment of learning outcomes need to be carefully chosen and developed to bring industry standards and student learning together. This can be done only if both direct and indirect assessments at the program level provide faculty with the means to answer their most pressing questions about what students know and are able to do as a result of Food Science education.

Introduction

The transition to a curriculum based on assessment of learning outcomes is being embraced by numerous professional fields, based on results from the education literature that demonstrate the benefits of such an approach (Diamond 1998; Palomba and Banta 1999). For example, the engineering disciplines and their accrediting body, the Accreditation Board for Engineering and Technology (ABET), have recently switched to an outcomes-based learning approach. Other fields utilizing this approach include nursing, accounting, and medicine, among others (Gainen and Locatelli 1995; Bowyer 1996; Stone 1996).

In the field of Food Science, the Institute of Food Technologists (IFT) has reviewed programs in Food Science since 1977, based on minimum standards developed by leaders in the field. IFT approval carries some weight in that only students from IFT-approved programs are eligible for IFT scholarships. In 2001, the IFT policies for program review changed significantly. Prior to 2001, a series of courses with set content material was prescribed, and only departments that met these minimum course requirements were approved. Currently, the IFT Education Standards (www.ift.org/education/standards.shtml) require a program of assessment of student learning based on specified outcomes. IFT now requires (1) that a certain set of core competencies be met within the curriculum (not a specified set of courses); (2) that specific learning outcomes (statements of desired learning that are measurable using some assessment tool) be written both for individual courses and for the curriculum as a whole; (3) that adequate assessment tools be used for measuring student learning, both for individual classes and for the curriculum as a whole; and (4) that there be a well-thought-out process of curricular reform based on the results of the assessment data. Assessment tools in this sense cover a wide range of measurements that allow the instructor to quantify the degree of competency of students at specific tasks; details on the various assessment tools available are given later. To assist departments in making this transition, IFT provides a Guide Book for Food Science Programs (www.ift.org/education/guidebook/). The Guide Book provides the rationale for making this transition and some resources for outcomes-based education; however, each program is responsible for how it makes the transition.

The process of change towards assessment of learning outcomes at each institution should be based on several factors, including the instructors' experience with education principles, the resources available, and the needs of the students at that particular institution. At some schools, students may be prepared primarily for industrial jobs immediately upon receipt of the B.S. degree, whereas at other schools the majority of students may be destined for Ph.D. degrees and jobs in research and development. Although the needs may differ, a program must know where its students end up and what their needs are in their careers. Instruction, in terms of course content and instructional style, must be geared towards the needs of the program. Assessments, in turn, must be used to provide faculty with the feedback they require to be sure that program needs are addressed. Changes to assessment typically result in changes to instruction (Madaus 1988).
If assessments are designed to measure both content knowledge and student performance in key ability areas, then new forms of instruction, often emphasizing active student learning, are the result (Glatthorn 1999).

The traditional curriculum

Despite all the recent activity in the education field regarding the benefits of assessment of learning outcomes, it is still common for most food science courses to be taught in a traditional manner. That is, the instructor lectures and gives the students a grade based on whether students can essentially repeat on an exam the material the instructor has asked them to learn. Perhaps some exam questions are used to test how well students can integrate the course material with their previous learning, but for the most part exams are based on reproducing the material covered in that class, with little extension to other areas. Most of us understand that the ability to do well on an exam does not reflect a student's true understanding of and competence with the material. To help them practice their responses and ensure that they do reasonably well on the exams, homework may be assigned so that students become familiar with the type of questions being asked. In each class, students must get accustomed to the instructor's style of questioning and grading.

In a previous publication (Hartel 2002), it was suggested that traditional education is similar to open-loop control of a process (such as that used in a laundry washing machine). Students enter our program and progress through the curriculum, taking all the required courses. We evaluate some aspect of learning only at the end, like looking at the clothes from our washing machine to see whether or not they are clean. However, there are often students who do not perform as well as others upon graduation. Are we really sure that we have taught them as well as we possibly can? The food industry, a major recipient of our graduates, often says that graduates do not have the skills needed to succeed immediately upon graduation; instead, they take considerable time and energy to develop to the point where they can really contribute.

Although continual improvement is always desired, it is important that change not discard the good aspects of what is already done. Any transition in educational approaches should maintain the good and useful aspects of the previous model.
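The control analogy can be made concrete with a toy sketch. The code below is purely illustrative and not from the cited paper: the course codes and the skill scale are invented. In the open-loop version, the curriculum runs to completion before learning is inspected; in the closed-loop version, an assessment after each course feeds back into instruction.

```python
# A toy sketch (hypothetical courses, made-up skill scale) contrasting
# open-loop instruction, which inspects learning only at graduation,
# with closed-loop instruction, which assesses and adjusts each term.

COURSES = ["FS 101", "FS 250", "FS 410", "FS 532"]

def teach(skill):
    # Stand-in for one course; assume a nominal gain per course.
    return skill + 0.8

def open_loop():
    skill = 0.0
    for course in COURSES:
        skill = teach(skill)
    # Like checking the laundry only after the washer stops:
    return skill  # any shortfall is discovered too late to correct

def closed_loop(target_per_course=1.0):
    skill = 0.0
    for i, course in enumerate(COURSES, start=1):
        skill = teach(skill)
        gap = i * target_per_course - skill  # assessment after each course
        if gap > 0:
            skill += gap  # remediation: instruction responds to feedback
    return skill

print(open_loop(), closed_loop())  # 3.2 vs. 4.0
```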
A change to assessment of learning outcomes

When an organizing body like IFT requires a change in program requirements, as has happened every 10 years or so since the standards were initiated in 1977, there are several automatic responses. The first and foremost is one of disagreement (or something stronger). As faculty, we do not take well to others telling us what to do. Faculty are a fairly autonomous group and prefer to maintain their strong independence. There may also be an element of dislike of any change; it is often said that there is a 40-year half-life for change at the university. There is a sense that the traditional model of instruction does a reasonable job of training students, is fairly efficient at training high numbers of students, and therefore needs no change. Furthermore, the new IFT Education Standards require substantial retraining in the concepts of learning outcomes and how we can assess whether or not our students have developed a certain skill. Most of us are not trained in education, and these activities will require substantial retraining on our part. However, the aim of the new Education Standards, which come from a committee of faculty peers, is to promote continual improvement of food science education. It's hard to argue against an attitude that we should always be striving to be better at what we do, and that includes our teaching skills. Shouldn't we faculty always be striving for excellence in whatever we do? We do this in our research programs; why not in teaching as well?
One of the simplest approaches to meeting the new Education Standards is simply to add on a few assessment methods. Many universities already require some assessment of our programs, and these methods can be used to meet the IFT Standards. Exercises such as alumni surveys, exit interviews, and even employer surveys can provide some feedback about students' experience and how well they perform in their jobs. This is all useful information, but the typical add-on type of assessment (that is, assessment methods simply added on to the traditional system) still does not really get at the details needed for true and substantial improvement. Details of various assessment methods, including the advantages and disadvantages of each, are summarized in a later section.

A true change to a coordinated instructional and assessment program requires significant faculty cooperation (both among ourselves and with outside reviewers) and commitment. Faculty must carefully evaluate the curriculum to understand what learning outcomes are desired, based on the projected responsibilities of the students (and not just a subset of the students). Changes to the curriculum, as well as to the approaches to instruction, must then be made to promote student learning. Many of the details must be decided by faculty discussion and consensus building, because the needs of each individual program will be different. The plan outlined in the following sections details the general approach recommended by Palomba and Banta (1999) as "assessment essentials." The plan suggests the following steps: (1) agree on goals and objectives for learning; (2) design and implement a thoughtful approach to assessment planning; (3) involve individuals from on and off campus; (4) select or design and implement data collection approaches; (5) examine, share, and act on assessment findings; and (6) regularly reexamine the assessment process. Ultimately, these steps lead to development of a coherent curriculum designed for assessment of student progress throughout their stay in our department. An approach to this problem can be found in action science or action research, where faculty work together (among themselves and with outside influences, like industry personnel and education consultants) to develop the approach that works best for them (Argyris and others 1985; Zuber-Skerritt 1992a, b). Action research, as used in the education field, involves using social science methods to solve immediate problems in order to improve educational practices.

The following example, on developing teamwork skills, indicates what is meant by a coordinated curriculum with a coordinated assessment program. Developing teamwork skills is an important aim of university instruction, as we all know that industry desires graduates with the skills to work together as a team. The first step to developing this skill in our students, however, is to define exactly what we mean by teamwork skills. The input of industry personnel and alumni who can help us identify the specific skills would be quite useful here. Exactly what do they think is needed in terms of teamwork skills when a student first enters the field? Is it that our graduates need to learn to work together, communicate effectively in a group setting, and know how to resolve differences that arise?
Once the specific skills have been identified and developed into true learning outcomes, in which a statement describes the specific skill (or technical knowledge) desired, faculty must decide how to develop this skill within the curriculum and then how to assess whether or not students have attained it by the time they graduate. For example, the following approach might be used to develop teamwork skills in our students and to assess how well they have learned those skills.
First, teamwork concepts may be introduced in a course early in the curriculum to familiarize students with the idea. A simple team problem may be assigned at this point, with the instructor providing lots of advice and encouragement. The focus of this problem may not be on the problem being solved, but on the process used by a team to solve the problem. Later, in mid-curriculum, a course (or several of them) would require students to practice teamwork skills, and perhaps introduce more advanced concepts. Again, the instructor must be capable of helping students work through the issues involved in team or group activities. At the end of the curriculum, perhaps in a capstone course, a team project would be required of all students, and their performance on this project would be used to assess their competency level in these skills. Outside input (industry review of a portfolio, for example) or a faculty team of assessors may be used to evaluate how well the students have developed these skills. Ultimately, the level of performance of students should dictate whether or not any changes are made to the curriculum or teaching approach. For example, if students are performing, on average, at a low level in a certain skill, then the faculty must coordinate changes to either the curriculum or the teaching approach to improve learning. This is the heart of assessment.

The general approach to building a curriculum based on assessment of learning outcomes should include the following steps:

Define intended outcomes (technical and success skills) for the program. The first step is to identify the skills that our students need upon graduation. Most of us already have some idea of what those skills are, and we feel we already do a decent job of teaching our students those skills, since industry hires all of our students. However, very few of the faculty have worked in the food industry, especially at the B.S. level, where many of our students complete their education. Thus, it is important for the faculty to get a better sense of the range of job skills needed by these students so that they can write meaningful outcomes for this target group. The specific needs of students who go on to graduate school for M.S. and Ph.D. degrees must also be considered relative to their specific job activities (whether in the food industry as product developers, in research and development, or in academia). To fully understand the technical knowledge and skills needed by all our students, substantial faculty collaboration is necessary, both within the university and with outside input. Faculty workshops, retreats, and focus groups that include students, alumni, employers, faculty/teaching staff, and administrators are potential methods of soliciting input and developing a set of skills needed by our students. The IFT Education Standards are based on a set of Core Competencies that should be required of all students earning a degree in food science. These competencies form the basis for specific learning outcomes, but each program should look further to decide whether it wishes to require additional competencies beyond those listed in the IFT Standards. The main product of these retreats and focus groups would be a set of learning outcomes for the food science curriculum that are based on the needs of the particular students in that program.
These outcomes will encompass not only the technical knowledge and skills needed by our students to be successful scientists, but also the professional (or success) skills needed to advance in their careers (as described in the IFT Education Standards).

Develop a coordinated curriculum based on meeting desired outcomes. Once the curricular outcomes have been defined, it is necessary to look to individual courses to see how and where the expected knowledge and skills are developed. The development of teamwork and interpersonal skills within the curriculum can again be used as an example. Our department (Food Science at the University of Wisconsin-Madison) developed a grid to log all activities that lead to development of teamwork and interpersonal skills in our individual courses. To our surprise, we found that many of us use various instructional approaches to help our students learn these skills, but to no one's surprise, our approaches were not coordinated. The result of that exercise was to start people thinking about how we might first introduce the concepts of group activities and teamwork in a course early in the curriculum, and then build on that foundation in a coordinated fashion in subsequent courses. In this way, by the time the students graduate, they will have developed these skills to the point where they can use them directly upon employment.
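A minimal sketch of such a grid follows; the course numbers and activities are hypothetical and stand in for whatever a department actually logs. The point is that recording activities by course makes gaps and redundancies visible.

```python
# Hypothetical curriculum grid for one outcome (teamwork skills):
# rows are required courses, columns are stages of development, and
# cells record the activity (if any) that addresses each stage.

grid = {
    "FS 120 (intro)":    {"introduce": "team problem with coaching", "practice": None, "assess": None},
    "FS 301 (mid)":      {"introduce": None, "practice": "group lab reports", "assess": None},
    "FS 410 (mid)":      {"introduce": None, "practice": "team design exercise", "assess": None},
    "FS 499 (capstone)": {"introduce": None, "practice": None, "assess": "team project rubric"},
}

# Report which courses cover each stage, and flag stages no course covers.
for stage in ("introduce", "practice", "assess"):
    courses = [c for c, acts in grid.items() if acts[stage] is not None]
    if courses:
        print(f"{stage:9s}: {', '.join(courses)}")
    else:
        print(f"{stage:9s}: GAP -- no course addresses this stage")
```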
In a similar fashion, one can envision coordinating the entire curriculum, both technical knowledge and success skills, around each of the desired outcomes defined in the first step. To do this, faculty need to review their existing courses and individual instructional approaches to see where and how aspects of each outcome are addressed. This will require substantial coordination among the teaching faculty and should be tied to departmental work on curriculum development and instructional improvement, carried out by interested faculty and students. A series of retreats, workshops (coordinated with industry advisory groups), and meetings may be necessary to organize and coordinate instruction in the required courses so that, as a faculty, a consensus is reached on the best approach to promoting student learning. One specific goal within this objective is to develop learning outcomes for each required course in Food Science (as required by the IFT Education Standards). Since faculty are generally not experts in developing learning outcomes, it is important to bring in outside expertise to provide assistance and training in writing good learning outcomes. By evaluating what each course contributes towards the curricular outcomes, specific learning outcomes for each course can be developed that help facilitate student learning. A more coordinated curriculum will be the result.

Develop a coordinated assessment program. After defining the curricular outcomes and developing the curriculum to emphasize these outcomes, the next step is to develop a coordinated assessment program that allows evaluation of how well students meet the desired outcomes. A variety of assessment tools (from standard exams and class projects to alumni surveys and exit interviews) are already in place even in the traditional model of instruction. However, these are not coordinated in any way to truly assess student learning and are rarely targeted towards specific outcomes, nor is there a coordinated approach to monitoring student progress through the curriculum. Assessment strategies, both within individual courses and for the curriculum as a whole, must be developed. Numerous resources exist that provide guidelines for developing an assessment program (Diamond 1998; Palomba and Banta 1999). One approach is to distinguish assessment tools as direct or indirect indicators of learning. Specific examples are listed below (LEAD Center; www.cae.wisc.edu/~lead/):
Direct indicators of learning:
- capstone course evaluation
- courses with embedded assessment
- tests and examinations
- portfolio evaluation
- thesis evaluation
- videotape/audiotape evaluation of performance
- exit competency exam

Indirect indicators of learning:
- external reviewers
- student surveys/exit interviews
- alumni surveys
- employer surveys
- curriculum/syllabus analysis

A combination of these techniques for assessing student learning must be developed through discussion and planning among the food science faculty, with the assistance of an industry advisory group and an education consultant. The main goal of the assessment program is to ensure that student learning progresses according to the desired level of technical and professional skills laid out in the learning outcomes. Assessment at the appropriate point in the curriculum, targeted at skill development at the appropriate level of Bloom's taxonomy, will help ensure that students progress through the desired stages of learning. The use of standardized skills exams, such as the Cornell Critical Thinking Test, at various stages in the curriculum, from incoming freshmen to outgoing seniors, should be considered. This approach allows evaluation of whether or not students are gaining specific skills as they progress through the curriculum. However, it may be necessary to develop assessment tools that are more specific to the skills needed in food science. For example, we may choose to develop questions designed to evaluate students' use of critical thinking skills in problems of importance to food science. Alternatively, we may assess student capabilities prior to their use in a course, as with the quantitative skills needed for the food engineering course. Evaluating the math skills of students as they enter the engineering course would allow the instructor to offer remedial assistance to those who need it. We envision developing tests such as this for other skills as needed.

Numerous resources are available to help individuals and departments develop an assessment program. A resource available at UW-Madison for assistance with classroom assessment is the National Institute for Science Education's (NISE) College-Level One (CL-1) Team. This is a nationwide community of post-secondary science, mathematics, engineering, and technology (SMET) faculty, education researchers, faculty developers, and students. Their Web site, the Field-tested Learning Assessment Guide (FLAG) for science, math, engineering and technology instructors, provides detailed descriptions of various assessment tools useful for science-based instruction (www.wcer.wisc.edu/nise/cl1/flag/).

Evaluate whether changes in the curriculum, based on the points above, have impacted student learning. An important question to ask is whether switching to assessment of learning outcomes, as now required by IFT, truly enhances student learning. It is important to evaluate whether the changes implemented through the process described above are truly reflected in the skills of our graduates. Since numerous factors impact student learning, many of which are beyond our control, probably the best we can do is get a sense of whether the skills of our students are improving over time as we go through the transitions described above. Nevertheless, it is important to ensure that all assessment activities are continuously monitored and improved to enhance the reliability of the results. Ultimately, the better we understand the range of skills and abilities of our students, the better able we will be to improve our programs and provide them with the best possible education.
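As one illustration of how such pre/post skills testing might be summarized, the sketch below compares an entering cohort with a graduating one. All scores are invented for illustration; the instrument is unspecified and could be any standardized skills exam like the one mentioned above.

```python
# Hypothetical summary of a standardized skills exam (for example, a
# critical-thinking instrument) given to incoming freshmen and
# graduating seniors. All scores below are invented.
from statistics import mean, stdev

freshmen = [52, 48, 55, 60, 51, 47, 58]
seniors  = [66, 61, 70, 72, 59, 68, 74]

print(f"freshman mean {mean(freshmen):.1f} (sd {stdev(freshmen):.1f})")
print(f"senior mean   {mean(seniors):.1f} (sd {stdev(seniors):.1f})")
print(f"apparent gain through the curriculum: "
      f"{mean(seniors) - mean(freshmen):.1f} points")
# Any gain is only suggestive: the cohorts differ, and many factors
# beyond the curriculum influence scores, as noted in the text.
```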
Continually improve the curriculum and instructional methods based on assessment results. In any assessment program, a formalized mechanism must be in place to continually evaluate the assessment results and make changes to either the curriculum or instructional policies. A committee of faculty, students, and perhaps industry personnel may be responsible for evaluating all assessment data and making recommendations to the faculty for specific changes.
Such recommendations may be as simple as a slight change in instructional practices or as sweeping as a complete overhaul of the curriculum to ensure better coordination. Assessment programs begin with developing clear learning goals or outcomes, in this case starting with the Core Competencies listed in the IFT Standards, and then selecting the best assessment tools to gather information about student achievement of those goals (AAHE 1992).
Learning outcomes

Assessment is a goal-oriented process, and the statement of intended outcomes is the first step in any assessment plan. Learning outcomes are precise statements of what faculty expect students to know and be able to do as a result of completing a program, course, unit, or lesson (Huba and Freed 2000), although each of these levels implies a distinct degree of precision (Palomba and Banta 1999). These outcomes are based on broad educational standards like those of IFT, but must be translated into program- and course-specific outcomes or objectives.

Learning outcomes are of two types: content and performance (Glatthorn 1999). Content outcomes are the discipline-specific knowledge that successful students have. Performance skills are what IFT calls success skills. Outcome statements enable faculty to develop assessments of content knowledge and its application to specific situations in areas like food processing or chemistry (Kendall and Marzano 1997). Broad goals or standards of both types must then be translated into learning objectives that describe specific demonstrations that students must perform successfully, at both program and course levels (Erwin 1991). The challenge in developing learning objectives is to capture faculty's intentions so that they can be communicated effectively to students and can guide faculty in making choices about curriculum and instruction. Outcomes developed jointly with industry associates can assure that the outcomes align favorably with career expectations. Furthermore, it may not be necessary to reinvent the wheel, as colleagues at other campuses may have developed learning outcomes for programs and courses that can serve as templates.

The best learning objectives take intended learning goals and make them clear to students, and are measurable without reducing their complexity. Outcome statements for goals and objectives often begin with "Students will be able to . . ." (Huba and Freed 2000). As this sentence stem suggests, expected outcomes are best described in active terms such as design, create, analyze, and apply (Maki 2002). For example, a broad professional or institutional standard about critical thinking, when it is converted into a specific objective for a program, becomes "Students will be able to define a problem, identify potential causes and possible solutions, and apply critical thinking skills to new situations" (IFT 2003). For a course, such as food processing, this statement is further specified to a level of increased precision. In an introductory food processing course, the objective "students will be able to select the best cleaning solution for a given process and justify their choice" makes it possible to bring knowledge and skill together in such a way that faculty can assess it. Specifying how critical thinking matters in a food science program gives faculty a place to start in developing course-level learning objectives. Without such a learning outcome statement for the food processing course, the goals of the course remain as vague and immeasurable as the original broad standard for critical thinking, regardless of its application.
Finally, statements of intended learning outcomes might themselves suggest assessment tools that make it possible to judge what students know and can do.
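One way to keep the three levels of specificity linked is to record them together, so that each course objective traces back to the program outcome and broad standard it serves. The sketch below uses the critical-thinking example from this section; the "assessed_by" entry is a hypothetical choice of tool, not one prescribed by IFT.

```python
# Sketch of the standard -> program -> course progression described
# above. The outcome texts come from the examples in this section;
# the assessment pairing is a hypothetical illustration.

critical_thinking = {
    "standard": "Critical thinking and problem solving (IFT success skill)",
    "program_outcome": ("Students will be able to define a problem, "
                        "identify potential causes and possible solutions, "
                        "and apply critical thinking skills to new situations."),
    "course_objectives": [
        {"course": "introductory food processing",
         "objective": ("Students will be able to select the best cleaning "
                       "solution for a given process and justify their choice."),
         "assessed_by": "embedded exam problem with scoring rubric"},  # hypothetical
    ],
}

# A simple check that each objective uses the measurable sentence stem.
for obj in critical_thinking["course_objectives"]:
    assert obj["objective"].startswith("Students will be able to")
```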
Assessment tools

Assessment tools are selected based on the type of information about student learning that is sought and how that information will be used. The best assessment programs use a variety of tools so that data from the assessments can be compared and used together to develop a fuller picture of what students have learned. These tools include both direct and indirect measures of student learning achievement. Each direct and indirect assessment tool has advantages and disadvantages, depending on the kind of information sought and how that information will be used. Taking these factors into account, assessment of student learning outcomes can be designed to meet local requirements and limitations and to match faculty needs for information about student learning in the program. Learning outcomes also become focal points for faculty collaboration in action science projects that ask significant questions about what students are learning and how learning opportunities may be increased program-wide. In the case of any assessment tool, teaching and learning in the program may be shaped by the assessments used (Madaus 1988). Therefore, assessment tools should be thoughtfully selected to provide rich, diverse, and complementary views of student learning outcomes and to enable faculty to design learning experiences that build towards complex performances drawing on student knowledge and skill (Glatthorn 1999).

Direct indicators of learning: These assessments take direct samples or measures of student performance in order to assess student learning and provide faculty with feedback about program effectiveness. Direct assessments should comprise the bulk of the assessment program and may include traditional testing as well as more authentic, performance-based assessments that measure student learning in success skills (Wiggins 1993). At the program level, these assessments are not ready add-ons to existing programs; rather, they capture the spirit of assessment that is intended to enhance overall student learning achievement. They require faculty collaboration across courses to design, implement, and make use of results, but they are worth the effort because of the rich picture of student learning that complementary direct assessments provide. Direct assessment information should be gathered only if it is going to be used by faculty to ensure that the learning they intend for students is accomplished in the program. A sampling of direct assessment tools follows.

- Capstone course evaluation: Capstone courses are designed as culminating experiences in a program and typically require advanced students to complete integrated performances that combine core program-level knowledge and abilities. Direct assessment of student portfolios, student research, or other culminating projects allows faculty to determine how well students understand and apply what they know (Lunde and others 1995). To accomplish the intended goals, capstone courses should be designed so that students have multiple chances to apply technical knowledge and success skills where these can be observed and measured. A food science capstone course allows for a culminating view of student achievement on many of the IFT standards. There are several challenges with capstone-based assessments: (1) coordinating outcomes across courses and faculty; (2) making sure that the assessments capture intended outcomes; and (3) developing reliable measurements and scoring guides (Palomba and Banta 1999).
It may take several semesters of running the capstone to develop and refine it, based on student performance and the department's changing needs to gather evidence of student learning.
The course design will also have to be sensitive to changes in the field and will require faculty collaboration to make the assessments in the capstone useful.

- Course-embedded assessment: A course-embedded assessment plan uses existing course assessments to systematically gather information about program-level learning outcomes. For example, any assessment of student learning that is currently used in a course can provide evidence of student learning achievement that is useful at the program level. Ideally, existing assessments form the core of the assessment plan, and faculty carefully select and coordinate these assessments to gather information about student performance across courses that they can and will use. In a food science program, the IFT problem-solving and critical thinking standard cited above can be assessed repeatedly so that these skills are applied in ways that are specific to food chemistry, processing and engineering, and other core curricular areas. To know whether students can consistently solve problems using the specific technical knowledge acquired in their courses, these assessment results can be used by the department as a whole to improve learning outcomes generally by coordinating learning opportunities (a sketch of such pooling appears at the end of this section). Course-embedded assessments have several advantages. First, they are cost effective and acceptable to faculty. Second, coordination between courses and sections can be facilitated by sharing assessment evidence. Also, student motivation to perform well is not compromised in such assessments, unlike out-of-course assessments of overall program effectiveness in which students see no personal stake (Huba and Freed 2000). The success of course-embedded assessment hinges on faculty coordination of assessments and joint use of assessment data to improve the instructional program (Wergin and Swingen 2000).

- Tests and examinations: Tests and examinations of content knowledge are the most familiar form of assessment in higher education. Their advantages include the ability to determine what students know and wide faculty acceptance (Schilling and Schilling 1998). There are two main types of tests and examinations. Both are familiar, but they assess different aspects of learning and must be used judiciously in order to be valid and reliable measures of intended learning outcomes. First, large-scale multiple choice tests are easily administered and enable faculty to compare their students to others. These assessments are favored for accountability purposes, but do not provide information that is easily used to improve the program; other sources of information are necessary to make such test results useful. Second, course-specific tests and examinations are useful because they align to what has been taught in the program and can be used to respond when students do not meet faculty expectations. In food science programs, the majority of assessments will be of this type, and these examinations can serve as course-embedded assessments. Within the familiar examination, faculty are free to use a variety of item types, the most useful of which will require thoughtful responses or solutions to problems.
The disadvantages of large-scale multiple choice tests and some course examinations are also clear: such tests often cannot provide evidence of higher order success skills like critical thinking and problem-solving, or of students' ability to apply what has been learned. Multiple choice tests have also been linked to a narrowing of higher order learning opportunities and may be expensive to administer and score (Madaus 1988; Palomba and Banta 1999).
Despite their widespread acceptance, typical in-course examinations that rely on students' short-term or working memory are associated with forgetting once the exam is completed and do little to build the kinds of cognitive processes associated with developing expertise (National Research Council 2000).

- Portfolio evaluation: Portfolios are organized collections of artifacts that are assembled in order to demonstrate competency (Glatthorn 1999). They are chiefly designed to gather varied evidence of learning over time and to demonstrate higher order abilities and applications (Huba and Freed 2000; Wiggins 1998). Portfolios have also been linked to improving student learning outcomes (Hutchings 1998). Student self-assessment, or reflection on the learning process and outcomes, is another advantage. Portfolios do require, however, that clear conceptual frameworks, performance criteria, and media choices be developed and refined. The conceptual framework consists of a clearly articulated purpose and major themes to help students and faculty select the work that best demonstrates achievement. In food science, the IFT Standards comprise a logical conceptual framework to guide faculty portfolio assignments and student artifact selection. Without a clear conceptual framework, portfolios quickly devolve into scrapbooks that are not useful to students or faculty. Performance criteria must also be developed to make expectations, as goal statements for the portfolio, clear and useful in promoting learning. Portfolio media can be paper or electronic, based on practical considerations such as expense and storage. In a food science portfolio, the artifacts students systematically collect from courses demonstrate growth and provide faculty with a powerful tool for assessing that growth. A portfolio is scored with a reliable scale, developed and refined for the purpose, making the assessment useful for comparing student cohorts over time. As this description suggests, assessing portfolios can be complicated and time-consuming, although they yield rich evidence of student learning and often gain acceptance once faculty experience shows they are worth the effort (Palomba and Banta 1999). As with assessments generally, portfolios should not be developed and then expected to serve a valuable function without ongoing evaluation and refinement of the assessments themselves. In the case of portfolios, the department must be committed to them for the long run. As with most other assessments that are useful at the program level, faculty collaboration and support are essential.

- Thesis evaluation: The assessment of student research projects, thesis evaluation, shares with testing and examinations a wide faculty acceptance (Schilling and Schilling 1998). Thesis evaluation allows for assessment of complex and integrated knowledge and abilities based on student interest. Students also understand the thesis to be a significant assessment of high-level, integrated knowledge and ability, and are thus highly motivated to do well. In food science, independent research projects are an excellent means to assess the IFT standards, and departments find the quality of performance on theses highly useful for assessing student achievement.
One disadvantage to theses is that they do little to address the overreliance on writing to assess student learning in higher education programs generally, and they may be inadequate to demonstrate certain kinds of learning outcomes, such as application to the field. Programs that are more interested in placing students in graduate school than in the field may find theses most appealing as part of an assessment plan.
- Videotape/audiotape evaluation of performance: Performance assessments, including presentations, are recorded on video or audiotape to allow for reviewing the assessment or sharing it among assessors. Video- and audiotaped assessments are often included in portfolios and can be used to show progression in student learning and achievement at authentic tasks (Angelo and Cross 1993). They can be cumbersome to store and time-consuming to assess, however.

- Exit competency exam: The exit competency exam is a culminating, program-level assessment that requires students to draw on knowledge learned across courses. It shares with tests, examinations, and thesis evaluation a level of acceptance and the ability to capture both faculty and student motivation. In food science, a locally developed comprehensive examination can assure faculty that students have mastered core content knowledge in the field, and each subdiscipline can make its unique contribution to the exam, bolstering confidence that students are ready to graduate. One limitation of such exams is a traditional emphasis on content knowledge alone, which fails to allow students an opportunity to develop and demonstrate complex abilities for professional success and knowledge application. Like all assessment tools, the exit competency exam can be a useful element in an overall assessment plan that does provide such opportunities. The exam requires ongoing fine-tuning to be certain that it reflects the current curriculum and changes in the field.

Indirect indicators of learning: Indirect assessment tools do not directly measure student learning, but seek information and insight about student performance through surveys, self-reports, interviews, focus groups, and other means of capturing faculty, student, alumni, or employer perceptions about students' preparation. They complement direct assessments and provide means to gather insights about program effectiveness from various stakeholder groups. However, they do not take the place of direct assessments and should not dominate the program's assessment plan simply because they are relatively easy to implement, nor should they preclude the level of faculty collaboration required by direct assessment at the program level. As with direct assessment measures, no indirect assessment information should be gathered if it will not be used for program improvement and development. There are several commonly used indirect assessment tools that can complement a program based on direct assessments (Palomba and Banta 1999). Like direct assessments, indirect assessments provide information for faculty to ask and answer their own questions about student learning achievement.

- External reviewers: External reviewers are members of stakeholder groups, such as alumni, employers, or higher education colleagues, who assist the program in assessment or evaluation. External reviewers can serve two complementary functions in an assessment program. First, program assessment should include the use of external reviewers or assessors of student performance as a matter of principle. This addresses a key accountability issue: faculty should not be the only ones judging student learning outcomes. External assessors provide a neutrality that benefits both assessment and evaluation of programs (Alverno College 1985).
Second, professional guidance from the food industry is useful for keeping knowledge applications current and for building career connections for students, as reviewers examine programs and provide advice to faculty (Palomba and Banta 1999). Students find the feedback from working professionals useful in developing important knowledge and abilities as well as professional networks. The disadvantages are logistical and economic: the use of external assessors means recruiting, training, and maintaining positive working relationships with a variety of industry and professional colleagues, all of which require investments of time and money.
Advisory panels and focus groups can provide guidance that keeps the program in tune with students' aspirations, maintains dialogue between faculty and industry about what matters most in higher education, and develops reviewers into program advocates who can spread the word about the program's value (Dillon 1997).

- Student surveys/exit interviews: Surveys of current students and exit interviews with graduating seniors gather information about their characteristics, values and beliefs, aspirations and expectations, behaviors, attitudes, and perceptions of program quality in a structured fashion. The limitations of surveys are well understood, including low response rates, the possibility of misinterpretation of questions, and missing responses (Pfatteicher and others 1998). Exit interviews with graduating students may have a practical advantage over surveys of this group, because recent alumni who are transitioning from the university to careers are notoriously hard to track down. Exit interviews are also useful in small programs or when students can be effectively sampled from a larger group. Interviews may be difficult to analyze and interpret, however, especially if the interview is intended to explore complex program issues. Both surveys and exit interviews incur costs for postage, phone charges, or, in the case of interviews, faculty time. In-class surveys are a compromise that saves resources. Focus groups may be used effectively in lieu of individual student exit interviews if a skilled facilitator can be found.

- Alumni surveys: Surveys are widely used to gather information about the program's effectiveness from alumni at different career stages. Such surveys are useful for following up with graduates on employment and continuing education. Employment-related themes and real-world applications of learning are suitably explored using this method. Programs find alumni responses about poor preparation for an important on-the-job ability particularly useful (Palomba and Banta 1999). Survey research has the common limitations stated above; in this particular case, response rates are a concern, as are the expense and the inability to explore themes in depth. Graduates may be hard to track, particularly recent ones who are seeking professional employment; parents' addresses are often gathered as a backup. Ongoing use of alumni surveys requires that a database be developed and maintained. Furthermore, inherent biases in results, based on who responds, must be understood. If not every graduate gives honest feedback, it is difficult to tell how representative the results of an alumni survey truly are.

- Employer surveys: The connection between Food Science departments and industry is at the heart of the change to assessment-driven instruction, so surveying the employers of graduates makes perfect sense. These surveys can serve two functions: (1) to provide information about the knowledge and skills employers seek, as well as to determine who is likely to be hired; and (2) to gather information about student performance on the job. In order to get employer cooperation, the intentions of the survey must be clear, and the survey must be well structured to gather precise information.
Employer and alumni surveys can also be coordinated to develop a nuanced picture of graduates' experiences in the workplace (Palomba and Banta 1999). Employer surveys, however, raise ethical and legal questions about students' privacy, and in some cases may not be possible at all because of these concerns. As with alumni surveys, databases must be developed and maintained.
Here too, the issue of selective responses may lead to biases in the overall results.

- Curriculum/syllabus analysis: Analysis of written curriculum and course syllabi can yield information about course-by-course learning as well as an overall picture of a program's learning opportunities. Such an analysis also yields information about how learning is already assessed, which can be used to select course-embedded assessments or to develop a program assessment plan (Palomba and Banta 1999). One advantage is the ability to study the program's parts in relation to the whole, to identify gaps between intended outcomes and actual learning opportunities. Another advantage is the relative ease with which syllabi can be changed to reflect the development and refinement of learning outcomes, without a cumbersome curriculum change process. Standardized course syllabi, or syllabi templates based on the IFT Standards, can be created so that objectives are clearly understood. Disadvantages include the possibility that curriculum documents and syllabi do not adequately or accurately represent the learning experiences of students. The process, if it is to yield useful results, requires a commitment of time for comprehensive review, analysis, and discussion.
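To make the coordination of direct assessments concrete, the sketch below (referenced in the course-embedded assessment discussion above) pools course-embedded rubric results for a single program outcome so faculty can see where students fall short. The courses, scores, and target are all invented for illustration.

```python
# Hypothetical pooling of course-embedded rubric scores (1-4 scale)
# for the problem-solving outcome. Courses, scores, and the target
# level are invented; a real program would substitute its own data.
from statistics import mean

embedded = {
    "food chemistry":   [3.2, 2.8, 3.5, 3.0],
    "food processing":  [2.1, 2.4, 2.0, 2.6],
    "food engineering": [2.9, 3.1, 2.7, 3.3],
}
TARGET = 3.0  # faculty-agreed performance level for the outcome

for course, scores in embedded.items():
    avg = mean(scores)
    note = "below target -> review instruction here" if avg < TARGET else "on target"
    print(f"{course:16s} mean {avg:.2f}  {note}")
```

A report like this is only the starting point: the heart of assessment is the faculty discussion of why students underperform in a given area and what curricular or instructional change should follow.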

Conclusion
Integration of learning standards, such as those of IFT, means changing and developing new means of assessment and moving from traditional to assessment-driven instruction. Often, course-level assessments are changed and program-level assessments are used for the first time. Assessment drives changes to curriculum as well as instructional practices, and it requires faculty to work together to clarify their intentions for student learning based on standards, to develop a system of complementary direct and indirect assessments, and to use assessment information to inquire into the effectiveness of courses and the program and to improve them. In this way, assessment is not unlike the research that faculty are accustomed to: questions are raised and methods are developed to answer those questions. Industry partners play a key role in assessment as advisors and assessors. Students and alumni also serve as sources of information and insight about program effectiveness. In any case, assessments should always be designed and implemented so that the information gathered matches the program's need for it, and faculty must commit to collaborative use of the evidence that assessment yields.

References
Alverno College Faculty. 1985. Assessment at Alverno College. Milwaukee, Wis.: Alverno Publications. 83 p.
[AAHE] American Association for Higher Education. 1992. Principles of good practice for assessing student learning. Washington, D.C.: AAHE. 4 p.
Angelo TA, Cross KP. 1993. Classroom assessment techniques: a handbook for college teachers. 2nd ed. San Francisco: Jossey-Bass. 427 p.
Argyris C, Putnam R, Smith DM. 1985. Action science: concepts, methods, and skills for research and intervention. San Francisco: Jossey-Bass. 480 p.
Bowyer KA. 1996. Efforts to continually improve a nursing program. In: Banta TW, Lund JP, Black KE, Oblander FW, editors. Assessment in practice: putting principles to work on college campuses. San Francisco: Jossey-Bass. p 128-9.
Diamond RM. 1998. Designing and assessing courses and curricula: a practical guide. San Francisco: Jossey-Bass.
Dillon WT. 1997. Involving community in program assessment. Assessment Update 4(2):4+.
Erwin TD. 1991. Assessing student learning and development: a guide to the principles, goals, and methods of determining college outcomes. San Francisco: Jossey-Bass. 208 p.
Gainen J, Locatelli P. 1995. Assessment for the new curriculum: a guide for professional accounting programs. Sarasota, Fla.: American Accounting Association. 158 p.
Glatthorn AA. 1999. Performance standards and authentic learning. Larchmont, N.Y.: Eye on Education. 176 p.
Hartel RW. 2002. Core competencies in food science: background information on the development of the IFT Education Standards. J Food Sci Educ 1(1):3-5.
Huba ME, Freed JE. 2000. Learner-centered assessment on college campuses: shifting the focus from teaching to learning. Boston, Mass.: Allyn and Bacon. 286 p.
Hutchings P. 1998. The course portfolio: how faculty can examine their teaching to advance practice and improve student learning. Washington, D.C.: American Association for Higher Education. 121 p.
[IFT] Institute of Food Technologists. 2003. Undergraduate education standards for degrees in food science. Accessed 28 April 2003. Available on-line from IFT at http://www.ift.org/education/standards.shtml?L+mystore+.
Kendall JS, Marzano RJ. 1997. Content knowledge: a compendium of standards and benchmarks for K-12 education. 2nd ed. Aurora, Colo.: Mid-Continent Regional Educational Laboratory. ERIC document ED 414303. 645 p.
Lunde JP, Baker M, Buelow FH, Hayes LS. 1995. Reshaping curricula: revitalization programs at three land-grant universities. Boston, Mass.: Anker Publishing. 262 p.
Madaus G. 1988. The influence of testing on the curriculum. In: Tanner LN, editor. Critical issues in curriculum. 87th Yearbook of the National Society for the Study of Education, Part I. Chicago, Ill.: Univ. of Chicago Press. p 83-121.
Maki PL. 2002. Developing an assessment plan to learn about student learning. Accessed 28 April 2003. Available on-line from the American Association for Higher Education at http://www.aahe.org/Assessment/assessmentplan.htm.
National Research Council. 2000. How people learn: brain, mind, experience, and school. Washington, D.C.: National Academy Press. Available at http://www.nap.edu/catalog/9853.html. 374 p.
Palomba CA, Banta TW. 1999. Assessment essentials: planning, implementing, and improving assessment in higher education. San Francisco: Jossey-Bass. 405 p.
Pfatteicher SKA, Bowcock D, Kushner J. 1998. Program assessment tool kit: a guide to conducting surveys and interviews. Madison, Wis.: LEAD Center.
Schilling KM, Schilling KL. 1998. Proclaiming and sustaining excellence: assessment as a faculty role. Washington, D.C.: The George Washington Univ., Graduate School of Education and Human Development. ERIC-ASHE Education Reports 26:1-105. ERIC document ED 420245. 116 p.
Stone HL. 1996. An ability-based assessment program at the Medical School, Univ. of Wisconsin-Madison. Lessons Learned from the FIPSE Projects III. Accessed 28 April 2003. Available on-line from the Fund for the Improvement of Postsecondary Education at http://www.ed.gov/offices/OPE/FIPSE/LessonsIII/madis.html.
Wergin JF, Swingen JN. 2000. Departmental assessment: how some campuses are effectively evaluating the collective work of faculty. Washington, D.C.: American Association for Higher Education. 40 p.
Wiggins GP. 1993. Assessing student performance: exploring the purpose and limits of testing. San Francisco: Jossey-Bass. 316 p.
Wiggins GP. 1998. Educative assessment: designing assessments to inform and improve student performance. San Francisco: Jossey-Bass. 361 p.
Zuber-Skerritt O. 1992a. Professional development in higher education: a theoretical framework for action research. London: Kogan Page. 277 p.
Zuber-Skerritt O. 1992b. Action research in higher education: examples and reflections. London: Kogan Page. 129 p.

MS 20030122 Submitted 3/4/03, Revised 4/3/03, Accepted 4/4/03, Received 4/6/03

Author Hartel is with the Dept. of Food Science, Univ. of Wisconsin, 1605 Linden Dr., Madison, WI 53706, and author Gardner is with the Dept. of Educational Administration and Foundations, Illinois State Univ., Normal, Ill. Direct inquiries to author Hartel (E-mail: hartel@calshp.cals.wisc.edu).
