
1. Why do we assess? Why is assessment necessary? Can you briefly discuss the differences between testing, assessment, and evaluation?

The primary purpose of classroom assessment is to gather information that helps us make decisions leading to beneficial consequences for stakeholders (learners and teachers).
Testing:
done at the end of a learning period; the result is expressed by a mark, a grade, or a ratio; students are compared with each other.
Assessment:
involves the collection of information or evidence of a learner's progress or achievement over a period of time for the purpose of improving teaching and learning; not based on a single test or task; each student is measured against his or her own starting point.
Evaluation:
involves making an overall judgment about one's work or a whole school's work.
Testing is for schools' and parents' benefit.
Assessment is for teachers' and learners' benefit.
Evaluation is for the authorities' benefit.
Speaking of assessment methods, many teachers immediately think of tests. Assessment includes testing, but it is definitely not limited to testing.
2. Can you briefly discuss the difference between formative and summative assessment?
In the classroom, we use two kinds of language assessment: formative and summative.
Formative assessment: based on information collected in the classroom during the teaching process; provides ongoing information that can be fed back into teaching and learning.
Evaluates students in the process of forming their competencies and skills with the goal of helping them
to continue the growth process. The key to such formation is the delivery (by the teacher) and
internalization (by the student) of appropriate feedback on performance, with an eye toward the future
continuation (or formation) of learning.
Summative assessment: based on testing, done at the end of a learning period; focuses on memory work; cannot be used to inform teachers' teaching and students' learning. It aims to measure, or summarize, what a student has learned, and typically occurs at the end of a course or unit of instruction.

3. In any consideration of educational testing a distinction must be made between informal, teacher-made tests and formal, large-scale, "standardised" tests. Can you briefly discuss the difference between these two notions?

Teacher-made tests are written or oral assessments that are not commercially produced or standardized. In other words, a teacher designs a test specifically for his or her students. Testing refers to any kind of school activity that results in some type of mark or comment being entered in a checklist, grade book, or anecdotal record.
Formal Tests
Formal tests may be standardized. They are designed to be given under a standard set of circumstances, they have time limits, and they have sets of directions that must be followed exactly. Examples: SAT, FCAT, ACT.
Informal Tests
Informal tests generally do not have a set of standard directions. They have a great deal of flexibility in how they are
administered. They are constructed by teachers and have unknown validity and reliability. Examples: review games, quizzes.
4. Can you briefly discuss the difference between subjective and objective testing?
Subjective and objective are terms used to refer to the scoring of tests. All tests are constructed subjectively by the tester, who decides which areas of language to test, how to test those particular areas, and what kind of items to use for this purpose. Thus, it is only the scoring of a test that can be described as objective. This means that the testee will score the same mark no matter which examiner marks the test. Objective tests usually have only one correct answer, so they are easier to score but harder to construct, while subjective tests such as writing or oral assignments are easier to construct but harder to score if the tester wants to be as objective as possible. On the whole, objective tests require far more careful preparation than subjective tests. With subjective tests, examiners tend to spend a relatively short time on setting the questions but considerable time on marking; with objective tests, it is the other way round.

Objective tests can be constructed by means of multiple-choice questions, true/false questions, or matching.
5. Specify and describe the procedure for the construction of a language test.
Some practical steps to test construction:
Set clear and specific objectives: what language knowledge and/or skills are you assessing?
It is essential that all instructions are clearly written and that examples are given. Instructions should not be long, and they should not become a reading comprehension task in themselves.
Determine a simple and practical outline of the skills to be tested, and decide on the forms of item types and tasks.
Devise the test tasks by drafting the questions, revising the draft, asking a colleague for help, and imagining yourself as a student taking the test.
Design multiple-choice items by checking their practicality, reliability, and how easily they allow cheating.
Give even weight and points to each section.
After testing, give students feedback identifying their successes and challenges.
6. What kinds of tests and testing are there? And what purpose do they serve?
According to Hughes, there are four types of tests: proficiency, achievement, diagnostic and placement
tests. All types are discussed later on.
1. Proficiency tests measure students' ability in a language, regardless of any previous training: their proficiency with reference to a particular task they will be required to perform. They test what a student has to be able to do in order to be considered proficient, for example in order to follow a course of study at a university abroad, to work as a language translator, etc. These tests do not measure general attainment but a specific skill in the light of the language demands made later by a course of study, a certain job, or any other particular purpose.
2. Achievement tests are directly related to language courses. Their purpose is to establish how successful individual students, groups of students, or the courses themselves have been in achieving the objectives. There are two kinds of achievement tests: final achievement tests, administered at the end of a course of study, and progress achievement tests, intended to measure the progress that students are making. They are difficult to construct, and there is a lack of good ones.
3. Diagnostic tests are used to identify students' weaknesses and strengths, raising the question of what further teaching is necessary. Through them, teachers can make profiles of students' abilities.
4. Placement tests are intended to provide information that will help to place students at the stage of the teaching programme most appropriate to their abilities; that is, these tests are used to assign students to classes at different levels, most usually in private schools. Constructing them involves defining the characteristics of each level of proficiency.

Testing is said to be direct when it requires the candidate to perform precisely the skill which we wish to measure. Indirect testing attempts to measure the abilities which underlie the skills in which we are interested. Discrete point testing refers to the testing of one element at a time, item by item. Integrative testing requires the candidate to combine many language elements in the completion of a task. Norm-referenced testing relates one candidate's performance to that of other candidates. Criterion-referenced testing has the purpose of classifying people according to whether or not they are able to perform some task or set of tasks satisfactorily. The scoring is objective if no judgment is required; if judgment is required, the scoring is said to be subjective.
7. List some of the commonest test methods and discuss their advantages and disadvantages.
Direct versus indirect testing
Testing is said to be direct when it requires the candidate to perform precisely the skill which we wish to
measure. The tasks and the texts that are used should be as authentic as possible. Direct testing is easier to
carry out when it is intended to measure the productive skills of speaking and writing. It is relatively
straightforward to create the conditions which will elicit the behavior on which to base our judgments.
Indirect testing attempts to measure the abilities that underlie the skills in which we are interested. It offers a representative sample of the finite number of abilities that underlie a potentially indefinitely large number of manifestations of them. Indirect testing is said to be superior to direct testing in that its results are more generalisable. The problem with indirect tests is that the relationship between performance on them and performance on the skills in which we are usually more interested tends to be rather weak in strength and uncertain in nature. It is therefore preferable to rely principally on direct testing. Direct tests are generally easier to construct, and they will always include an indirect element.
Discrete point versus integrative testing
Discrete point testing refers to the testing of one element at a time, item by item. Integrative testing requires
the candidate to combine many language elements in the completion of a task. Discrete point tests will
almost always be indirect, while integrative tests will tend to be direct.
Norm referenced versus criterion referenced testing
A norm-referenced test relates a candidate's performance to that of other candidates. We are not told directly what the student is capable of doing in the language. In a criterion-referenced test we learn something about what he or she can actually do in the language. The purpose of these tests is to classify people according to whether or not they are able to perform some task or set of tasks satisfactorily.
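As an illustration, here is a minimal Python sketch (the scores and the cutoff of 75 are hypothetical) of how the same raw mark can be reported in the two ways: as a percentile rank relative to the other candidates, or as a pass/fail decision against a fixed criterion.

def percentile_rank(score, cohort):
    # Norm-referenced view: where does this score sit relative to the other candidates?
    below = sum(1 for s in cohort if s < score)
    return 100.0 * below / len(cohort)

def meets_criterion(score, cutoff):
    # Criterion-referenced view: can the candidate perform the task (score at or above the cutoff)?
    return score >= cutoff

cohort_scores = [42, 55, 61, 68, 70, 74, 80, 88]   # hypothetical class results
candidate_score = 70

print("Percentile rank:", round(percentile_rank(candidate_score, cohort_scores)))   # relative standing
print("Meets criterion (cutoff 75):", meets_criterion(candidate_score, 75))          # absolute standing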
Objective testing versus subjective testing
If no judgement is required on the part of the scorer, then the scoring is objective (for example, a multiple-choice test). If judgement is called for, the scoring is said to be subjective. The less subjective the scoring, the greater the agreement there will be between two different scorers.
Computer adaptive testing
This type of testing offers a potentially more efficient way of collecting information on people's ability. The computer presents individual candidates with items that are appropriate for their apparent level of ability, raising or lowering the level of difficulty until a dependable estimate of their ability is achieved.
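The loop below is a toy Python sketch of that idea, not a real computer-adaptive algorithm (operational systems use item response theory): it presents an item near the candidate's apparent level, moves the difficulty up after a correct answer and down after an incorrect one, and shrinks the step size as the estimate settles. The simulated candidate and the 0-10 difficulty scale are assumptions made purely for illustration.

import math
import random

def answer_item(true_ability, difficulty):
    # Simulated candidate: success is more likely when the item is easier than the candidate's ability.
    p_correct = 1.0 / (1.0 + math.exp(difficulty - true_ability))
    return random.random() < p_correct

def adaptive_estimate(true_ability, n_items=20):
    difficulty, step = 5.0, 1.0                    # start in the middle of a 0-10 difficulty scale
    for _ in range(n_items):
        correct = answer_item(true_ability, difficulty)
        difficulty += step if correct else -step   # raise or lower the difficulty
        step = max(step * 0.8, 0.1)                # smaller adjustments as the estimate settles
    return difficulty                              # the final difficulty approximates the ability

print(round(adaptive_estimate(true_ability=7.0), 1))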

8. What are the characteristics of a good test? How can we make tests more reliable, valid and
practical?
Characteristics of a good test are: reliability, validity and practicability.
Validity tells us whether a test is measuring what it claims to measure. The goal of a test is to measure a particular skill, and if the test is measuring other skills at the same time then it is not valid. For example, a reading test is not valid if answering it depends on information that is not provided in the text. So, in order to make a test more valid, we should make sure that it is measuring the intended skill.
A test is reliable if, when it is administered to the same group of candidates on different occasions, it produces similar results; if the results are very different, it is not reliable. There are a couple of methods of measuring the reliability of a test. One method is to administer the same test again after a certain time. Another method is to give an equivalent test: it needs to be of the same difficulty, length, rubric, etc. as the first test. If the results are similar, then the test is reliable.
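To make the test-retest idea concrete, here is a minimal Python sketch with hypothetical marks from two sittings of the same (or an equivalent) test by the same six students; a correlation close to 1.0 suggests the test is reliable, while a low correlation suggests it is not.

from math import sqrt

def pearson(xs, ys):
    # Pearson correlation between two sets of scores from the same students.
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

first_sitting  = [55, 62, 70, 74, 81, 90]   # hypothetical marks, first occasion
second_sitting = [53, 65, 68, 76, 80, 92]   # the same students, second occasion

print("Test-retest reliability estimate:", round(pearson(first_sitting, second_sitting), 2))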
A test is practical if it is not expensive, is relatively easy to administer, and is easy to score. If a test is expensive, if a student needs five hours to complete it, or if the teacher needs several hours to mark a test that took students only a few minutes to complete, then the test is impractical.

9. What are the advantages and disadvantages of testing?


Many language teachers harbor a deep mistrust of tests and of testers. It cannot be denied that a great deal
of language testing is of very poor quality. The effect of testing on teaching and learning is known as
backwash and can be harmful or beneficial. If the test content and testing techniques are at variance with
the objectives of the course, there is likely to be harmful backwash. Tests often fail to measure accurately whatever it is that they are intended to measure, and students' true abilities are not always reflected in the test
scores that they obtain. There are two main sources of inaccuracy. The first of these concerns test content
and test techniques. The second source of inaccuracy is lack of reliability. Unreliability has two origins. The
first is the interaction between the person taking the test and features of the test itself. The second origin is
to be found in the scoring of the test. Tests provide information about the achievements of groups of
learners.
Advantages: with exams, a person can gauge his or her own performance and knowledge. They can encourage learners to work and learn, and they can also help in developing one's personality and confidence.
Disadvantages: exams cause some people stress, because there is too much pressure from their parents and teachers. Also, tests can be poorly designed.
10. Designing Classroom Language Tests.

The following five questions should form the basis of your approach to
designing tests for your classroom.
Question 1: What is the purpose of the test?
Why am I creating this test?
For an evaluation of overall proficiency? (Proficiency Test)
To place students into a course? (Placement Test)
To measure achievement within a course? (Achievement Test)
Once you have established the major purpose of a test, you can determine its objectives.
Question 2: What are the objectives of the test?
What specifically am I trying to find out?
What language abilities are to be assessed?

Question 3: How will the test specifications reflect both the purpose
and objectives?
When a test is designed, the objectives should be incorporated into a
structure that appropriately weights the various competencies being
assessed.
Question 4: How will the test tasks be selected and the separate
items arranged?
The tasks need to be practical.
They should also achieve content validity by presenting tasks that mirror
those of the course being assessed.
They should be evaluated reliably by the teacher or scorer.
The tasks themselves should strive for authenticity, and the progression of
tasks ought to be biased for best performance.
Question 5: What kind of scoring, grading, and/or feedback is expected?
Tests vary in the form and function of feedback, depending on their purpose.
For every test, the way results are reported is an important consideration.
Under some circumstances a letter grade or a holistic score may be appropriate; other circumstances may require that a teacher offer substantive washback to the learner.

11. What are some of the problems in measuring speaking ability? What types of oral production
tests are there?
Testing the ability to speak is a most important aspect of language testing. In many tests of oral production
it is neither possible nor desirable to separate the speaking skills from the listening skills. This very
interdependence of the speaking and listening skills increases the difficulty of any serious attempt to
analyse precisely what is being tested at any one time. Moreover, since the spoken language is transient, it
is impossible without a tape recorder to apply such procedures as in the marking of compositions, where
examiners are able to check back and make an assessment at leisure. The examiner of an oral production
test is working under great pressure all the time, making subjective judgments as quickly as possible.
Another difficulty in oral testing is that of administration: it is very difficult to test large numbers of students because of the limited time involved. A further difficulty when it comes to testing speaking is scoring. Even though it is advisable to use a tape recorder whenever possible, a recording alone is not a fully accurate means of checking the score, because it cannot capture the entire context in which the test took place.
Some types of oral production are:
Reading aloud
Many oral tests include reading aloud, in which the student is given a short time to glance through an extract before being required to read it aloud. Tests involving reading aloud are generally used when it is desired to assess pronunciation as distinct from the total speaking skills.
Conversational exchanges
These drills are suitable for the language laboratory and can serve to focus attention on certain aspects of
the spoken language.
Using pictures for assessing oral production

Pictures of single objects can be used for testing the production of significant phoneme contrasts, while a
picture of a scene or an incident can be used for examining the total oral skills.
The oral interview
Here the interviewer should put the student at ease at the beginning of the interview, adopting a sympathetic attitude and trying to hold a genuine conversation. The interviewer should never attempt to note down marks or comments while the student is still engaged in the interview. This dual role of conversation partner and assessor is always a most difficult one.

Presentations and descriptions
The candidate has to give a short presentation on a topic, or describe or explain something. The examiner just listens. Topics can include personal experiences and current issues.
The aspects of speaking that are considered part of its assessment include grammar, pronunciation, fluency, content, organization, and vocabulary.
12. Should we write our own tests? How often should students be
tested? In what language should English language learners be
tested?
Tests are useful not only for evaluating students and assigning marks, but also as a device for self-evaluation. Testing can be teaching.
The frequency and types of assessments used depend on the class, the teacher, and the reasons for assessing students' learning progress.
First, we can use day-to-day testing, where we test students' mastery of yesterday's lesson. In this way we make students review lessons every day. This is an informal form of language testing, so there is no need for a written test paper: we simply check a few points, and students can answer orally or by writing on any piece of paper.
In addition, we can use unit-based testing, where we measure students' mastery of the unit. This can be a formal form of testing and should have a written test paper.
Also, we can use mid-term testing. This is formal testing, and it is carried out in the middle of the semester. Students should go over what has been learnt in the first part of the semester and carry out a systematic review.
Last, we should have end-of-term testing. This is also a formal form of testing, and it is taken at the end of the semester. Students are expected to review all the knowledge of the course systematically.
When the test group is monolingual, the teacher may use the students' first language, but this is only recommended when the group is at the elementary level. Otherwise, English language learners should be tested in English.
13. Describe briefly the types of techniques which you find most useful for testing speaking.
The most useful techniques for testing speaking, in my opinion, are: role play, interpreting, and discussion.
Role playing
Candidates can be asked to assume a role in a particular situation. This allows the ready elicitation of other
language functions.

Interpreting
Simple interpreting tasks can test both production and comprehension in a controlled way. One of the testers acts as a monolingual speaker of the candidate's native language, the other as a monolingual speaker of the language being tested.
Discussion
This can be a valuable source of information. Discussions can be about some topic, or held in order to come to a decision.
Through discussion and role play the teacher can discover how students are thinking and using the target language.
Another good technique for testing speaking is using pictures. Students are given a picture and are supposed to tell a story based on it. When doing this type of exercise, the teacher gives the students a few minutes to think about the picture and study it, and then they have to describe it within a given time (three or four minutes). Exercises of this type help to control the basic vocabulary required. What is also very useful is that this type of exercise may include not only narration or picture description, but also discussion about the picture.

