
Developing an Assessment System

Abdul Hameed
Assistant Professor
http://informationtechnology.pk

Objectives
Identify components of an assessment system
Identify what is important for an assessment system
Identify a set of assessment tools

Why assess?
Whatever we measure, we tend to improve
D. Leach

Purpose of Assessment
Did the students achieve the standards for that educational
experience?
What knowledge, skills, or attitudes do the students need
to work on?
How might the course use aggregate performance data to
improve education?
Assessment results can provide formative and summative
feedback to students

What is an assessment system?


A comprehensive and integrated set of evaluation
measures that provides information for use in
monitoring candidate performance and
managing and improving unit operations and
programs for the preparation of professional
educators.
(http://highered.colorado.gov/Academics/TeacherEd/glossary.html)

An assessment system:
Is a collection of assessment tools that measure a resident's
performance
Defines who the evaluators are
Describes what performance will be evaluated
Indicates how often the evaluation occurs

Assessment System Model

Expectations
The new context means that the assessment of students
should include their:
ability to communicate.
adaptability to change.
ability to work in teams.
preparedness to solve problems.
ability to analyze and conceptualize.
ability to reflect on and improve performance.
ability to manage oneself.
ability to create, innovate, and criticize.
ability to engage in learning new things at all times.
ability to cross specialist borders.

Components of an Assessment System


The Purposes of Assessment
Teachers regularly use tests to determine if
students understand the concepts they are
trying to teach and if additional instruction is
needed. Increasingly, school systems are
looking to use tests to gauge teachers' abilities
to improve student performance.

Coherence
The system aligns curriculum, instruction,
and assessment around the key learning
goals spelled out in the standards for
college and career readiness.

Comprehensiveness
The system consists of a toolbox of
assessments that meet a variety of different
purposes and that provide various users with
information they need to make decisions.
(Teachers, Students, Parents, Community,
Policy makers, etc.)

Accuracy and Credibility


The information from assessments supports valid
inferences about students' progress, proficiency
levels, achievements, and attitudes, and yields
predictions about future performance as well as
actionable information for multiple users.

Fairness
The assessments enable all students to
demonstrate what they know and are able
to do.
Students should know what the expectations are,
and assessments should measure what they are
expected to learn.

Matching Learning Outcomes/Standards


The types of assessment tasks should match the
learning outcomes that students are expected
to demonstrate.

Clarity in Reporting
The design of the reports should be considered at
the outset and should provide different
audiences with actionable information that
informs next steps to further advance student
learning.

Teacher Engagement
Involving teachers in the development, scoring,
and use of assessments can strengthen
assessment and improve instruction.

Fundamentals of Assessment System

Reliability
Consistency of scores on the assessment tool (see the sketch below)

Validity
The assessment tool measures what it says it measures

Feasibility
Is this tool practical to implement?

Value of information
Is the information obtained valuable in determining a
student's competence or implementing curricular change?

Feedback
Formative Assessment
Summative Assessment
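
A minimal Python sketch of the reliability idea above, treating reliability as test-retest consistency: the correlation between scores from two administrations of the same assessment. The score lists are hypothetical and purely illustrative.

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length score lists.
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

# Hypothetical scores of the same ten students on two sittings of one test.
first_sitting  = [55, 62, 70, 48, 81, 90, 66, 74, 59, 68]
second_sitting = [58, 60, 72, 50, 79, 88, 69, 71, 61, 70]

print(f"Test-retest reliability estimate: {pearson(first_sitting, second_sitting):.2f}")
# A value near 1.0 indicates consistent scores; a low value suggests the
# tool may not be reliable.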

ASSESSMENT CONCERNS
GENERAL PUBLIC
1. Unfair
2. Lack of opportunities
3. Inadequate resources
4. Ethnic and gender bias
5. Unequal treatment
6. Unfair comparisons
7. Face validity

ASSESSMENT CONCERNS
CONCERNS OF CERTIFICATION AUTHORITY
1. Qualified assessors
2. Test administration
3. Scoring
4. Interpretation

ASSESSMENT CONCERNS
CONCERNS OF ASSESSORS
1. Accuracy
(definitions, predetermined answers, correct application of criteria)
2. Generalizability
Three dimensions (domain, times and setting)
3. Meaning
4. Utility (e.g. efficiency: speed and economy of data collection; sensitivity: to detect small differences)

Usual Mistakes in Paper Marking

Separate marks not awarded in short-answer questions.
Answer crossed out without remarks.
Answers left unmarked.
Marks awarded to repeated questions.
Over-attempted questions counted wrongly.
Wrong totalling of the parts of a question.
Wrong transfer of marks to the title page.
Award-list mistakes.

Usual Mistakes in Paper Marking (Cont.)

Wrong bubble filling.
Award of excess marks.
No signature on answer sheet or award list.
Entering the award against the wrong roll number.
Difference between marks in words and in figures.
No initials on cutting or overwriting.
Use of ink remover.
And many more… (a simple totalling check is sketched below)
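
The totalling and transfer mistakes in the lists above are easy to catch with a simple arithmetic check. The Python sketch below assumes a hypothetical record of part marks per question, the recorded question totals, and the total transferred to the title page; it illustrates the check rather than any actual marks-entry system.

def check_script(part_marks, recorded_totals, title_page_total):
    # Report totalling and transfer mistakes for one answer script.
    problems = []
    for question, parts in part_marks.items():
        expected = sum(parts)
        recorded = recorded_totals.get(question)
        if recorded != expected:
            problems.append(f"Q{question}: parts sum to {expected}, recorded {recorded}")
    grand_total = sum(sum(parts) for parts in part_marks.values())
    if title_page_total != grand_total:
        problems.append(f"Title page shows {title_page_total}, script sums to {grand_total}")
    return problems

# Hypothetical data for one script.
part_marks = {1: [3, 4, 2], 2: [5, 5], 3: [2, 3, 3]}
recorded_totals = {1: 9, 2: 10, 3: 9}   # Q3 total is wrong (parts sum to 8)
title_page_total = 28                   # transfer error: script sums to 27

for issue in check_script(part_marks, recorded_totals, title_page_total):
    print(issue)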

Quality Examinations
What makes a good examination paper?
There are many issues here; we will mention just a few:
Validity
Accessibility
Reliability
Targeting the ability of students
Tell us what students can do

Cont.
It should represent the National curriculum
(content validity)
Repeated applications of the test should give the
same results (reliability)
It should provide for the range of student abilities
It should allow students to demonstrate what they
know (criterion referenced).
It should provide a measure of student ability relative
to the cohort (norm referenced); both reference modes
are sketched below.
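
A minimal Python sketch of the criterion-referenced versus norm-referenced distinction in the list above. The cohort scores and the 60-mark cut-off are hypothetical.

cohort_scores = [42, 55, 58, 61, 63, 67, 70, 74, 78, 85]
cut_off = 60  # the fixed standard for criterion-referenced judgement

def criterion_referenced(score, cut_off):
    # Pass/fail against a fixed standard, independent of other students.
    return "pass" if score >= cut_off else "fail"

def norm_referenced(score, cohort):
    # Percentile rank: how the student compares with the cohort.
    below = sum(1 for s in cohort if s < score)
    return 100 * below / len(cohort)

student_score = 63
print(criterion_referenced(student_score, cut_off))                         # pass
print(f"{norm_referenced(student_score, cohort_scores):.0f}th percentile")  # 40th percentile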

How to Evaluate Essay-Type Questions: Rubrics
Rubrics are performance-based assessments that
evaluate student performance on any given task or
set of tasks that ultimately leads to a final product
or learning outcome.
Rating scales can be either holistic or analytical.
Holistic scales consider several dimensions together,
while analytical scales offer a separate scale for
each dimension.

The disadvantage of holistic scoring is that it does not
provide detailed information about student performance
in specific areas of content or skill.
Analytic scoring breaks down the objective or final
product into component parts, and each part is scored
independently. In this case, the total score is the sum
of the ratings for all of the parts being evaluated.
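
A minimal Python sketch of analytic scoring as described above: each dimension is rated on its own scale and the total is the sum of the dimension ratings. The dimensions, maximum ratings, and awarded ratings are hypothetical.

# Maximum rating available on each analytic dimension.
analytic_scale = {"content": 4, "organization": 4, "language use": 4, "mechanics": 4}

# Ratings awarded to one student's essay on each dimension.
ratings = {"content": 3, "organization": 4, "language use": 2, "mechanics": 3}

total = sum(ratings.values())
maximum = sum(analytic_scale.values())
print(f"Analytic score: {total}/{maximum}")  # 12/16, with a per-dimension breakdown

# A holistic scale would instead assign a single overall rating (e.g. 1-6)
# that weighs all of these dimensions together, so no breakdown is available.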

Whether holistic or analytical scales are used, the
important factors in developing effective rubrics are
the use of clear criteria for rating a student's work
and ensuring that the performance being evaluated is
directly observable.

Keep in mind
Establish a rubric for grading all essay assignments.
For example, rather than assigning 10 points for
grammar on a 30-point essay, the grammar section can
be more specific to include items like subject-verb
agreement, punctuation, spelling, clarity and
vocabulary. A five-item list like this would make each of
these items worth two points on the essay, rather than
just assigning a broad grade based on your perception of
what "grammar" entails.

How to Develop a Rubric?


The scoring system should be objective and
consistent. The tasks should be appropriate
to students' abilities to avoid or minimize
scoring error. Be practical when designing
the scoring system.

A Sample Rubric for Essay-Type Items
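
An illustrative analytic rubric for essay-type items, expressed as a small Python structure so it can double as a scorer. The criteria, point values, and descriptors are assumptions based on the 30-point essay example from the "Keep in mind" slide, not the rubric shown in the original presentation.

sample_rubric = {
    # criterion: (maximum points, what the marker looks for)
    "content and relevance":  (10, "answers the question; ideas are accurate and developed"),
    "organization":           (10, "clear introduction, body and conclusion; logical flow"),
    "subject-verb agreement":  (2, "verbs agree with their subjects"),
    "punctuation":             (2, "sentences are punctuated correctly"),
    "spelling":                (2, "words are spelled correctly"),
    "clarity":                 (2, "sentences are easy to follow"),
    "vocabulary":              (2, "word choice is precise and varied"),
}

def score_essay(awarded):
    # Sum the awarded points, capping each criterion at its maximum.
    return sum(min(points, sample_rubric[criterion][0])
               for criterion, points in awarded.items())

awarded = {"content and relevance": 8, "organization": 7,
           "subject-verb agreement": 2, "punctuation": 1,
           "spelling": 2, "clarity": 1, "vocabulary": 2}
print(f"Essay score: {score_essay(awarded)}/30")  # 23/30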

Any questions?

Thanks
