
BIR6014
QUANTITATIVE RESEARCH METHODS IN LANGUAGE TEACHING
TOPIC:
VARIABLES, VALIDITY AND RELIABILITY
PRESENTER:
SREENIVASA RAO A/L HANUMANTHA RAO
MATRIC NUMBER:
M20141000152
LECTURER:
DR. GOH HOCK SENG

VARIABLES
- Research objects
- Properties of the object
- Value of properties
RESEARCH OBJECT
Definition
OBJECT:
- Persons: students
- Things: curriculum, programs, handout materials
- Places: school, day-care centres
PROPERTIES OF THE OBJECT
DEFINITION
PROPERTIES:
Characteristics or attributes of an object
EXAMPLES:
- Students: gender
- School: length of school day
VALUE OF PROPERTIES
DEF 1:
A number that represents the magnitude of the variable
Exp: Weight = 18kg-20kg
DEF 2:
Or the category of the variable
Exp: Gender = male/female
Exercise
Exp: Measuring the height of a group of students
Q1: What are the variables?
Q2: What is the magnitude and category involved?
VARIABLES
- Height (cm): magnitude
- Gender (male/female): category
[Chart: heights in cm (0-120) of a group of boys and girls; gender is the category of the variable, height in cm is the magnitude]
CONCLUSIONS:
- The larger the magnitude, the greater the height
- Variables must have at least 2 categories of measure; if there is only one, they are constants, not variables
- Values of variables must be:
  - Exhaustive: each object can be assigned a value
  - Mutually exclusive: each object has one and only one value
Fraenkel, J. R., & Wallen, N. E. (2009). How to design and evaluate research in education (7th ed.). New York, NY: McGraw-Hill.
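As a quick illustration (not from the source), here is a minimal Python sketch of an exhaustive and mutually exclusive coding of the height example; the category labels and cut-offs are hypothetical.

# Hypothetical recoding of a continuous variable (height in cm) into categories.
# The bins are exhaustive (every height falls somewhere) and mutually exclusive
# (the intervals do not overlap), so each object gets exactly one value.
def height_category(height_cm):
    if height_cm < 100:
        return "short"
    elif height_cm < 120:
        return "medium"
    else:  # catch-all branch keeps the scheme exhaustive
        return "tall"

for h in (80, 105, 133):
    print(h, "cm ->", height_category(h))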
Types of variables
- Dependent & independent variables
- Categorical variables
- Continuous variables
- Attribute variables
- Extraneous variables
- Confounding variables
Dependent & independent variables
Independent variable (presumed or possible cause) affects Dependent variable (presumed result)
NOTE:
- Reciprocal causation: a causal relation that flows in both directions; each variable causes the other.
Fraenkel, J. R., & Wallen, N. E. (2009). How to design and evaluate research in education (7th ed., pp. 42-43). New York, NY: McGraw-Hill.
Exercise
Research Topic: Effects of Think Pair Share Technique on descriptive writing
Continuous Variable
Measured on a scale. Exp: Test scores ranging from 0-100
Categorical Variable
Measured and assigned to groups with specific characteristics. Exp: Gender
Attribute Variable
A pre-existing quality of the research subjects that is measured or classified.
Exp: Anxiety level of students affects test performance.
Independent variable: anxiety level; Dependent variable: test performance; Attribute variable: test-taking experience
Cohen, L., Manion, L., & Morrison, K. (2007). Research methods in education (6th ed.). New York, NY: Routledge.
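A minimal, hypothetical Python sketch of how the variable types from the example above could be recorded for one student; the field names and values are illustrative only.

# Hypothetical record for one student in the anxiety/test-performance example.
student = {
    "test_score": 72.5,       # continuous variable: measured on a 0-100 scale
    "gender": "female",       # categorical variable: assigned to a group
    "anxiety_level": "high",  # independent variable (presumed cause)
    "test_experience": 3,     # attribute variable: pre-existing quality
}
# test_score is the dependent variable (the presumed result).
print(student["anxiety_level"], "->", student["test_score"])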
Extraneous Variable
Alternative factors which unintentionally influence the dependent variable. Exp: size of the class, age of the teacher, length of class hours.
Can be controlled by holding them constant.
Confounding Variable
When another extraneous variable changes along with the deliberate change in the independent variable, the independent variable is confounded with the extraneous variable.
Exp: If 2 methods of teaching were studied by comparing one method in the fall and the other in spring, then the teaching method is confounded with the time of year (the extraneous variable).
Cohen, L., Manion, L., & Morrison, K. (2007). Research methods in education (6th ed.). New York, NY: Routledge.
Validity
- How well the instrument used measures what it purports to measure
- The ability of a scale or measuring instrument to measure what it is intended to measure
- Validation: the process of collecting and analyzing data or evidence to support the inferences made
Colin, P., & Julie, W. (2005-06). Exploring reliability in academic assessment. Retrieved September 20, 2014, from http://www.uni.edu/chfasoa/reliabilityandvalidity.htm
Why is validity important?
- Validity refers to the degree to which data or evidence support the inferences a researcher makes
- It ensures the collected data can be turned into the specific, meaningful inferences intended
What is validity and why is it important in research? Retrieved September 25, 2014, from http://psucd8.wordpress.com/2011/11/20/why-is-validity-important-in-research
Types of evidence
- Content-related evidence
- Criterion-related evidence
- Construct-related evidence
Content-related evidence
The instrument includes an adequate and representative set of items that cover the concept
Key elements:
- the adequacy of the questions
- the format of the instrument (clarity of print, clarity of directions, appropriate language, etc.)
The validity of the evidence is usually determined by content experts
Colin, P., & Julie, W. (2005-06). Exploring reliability in academic assessment. Retrieved September 20, 2014, from http://www.uni.edu/chfasoa/reliabilityandvalidity.htm
Criterion-related evidence
The test results (of the instrument being validated) are compared with other test results (the criterion)
Criterion: another assessment that measures the same variable
2 types of criterion-related validity:
- Predictive validity: instrument data and criterion data are obtained over a period of time
- Concurrent validity: instrument data and criterion data are obtained at the same time
Colin, P., & Julie, W. (2005-06). Exploring reliability in academic assessment. Retrieved September 20, 2014, from http://www.uni.edu/chfasoa/reliabilityandvalidity.htm
The key index for both types is the correlation coefficient (r)
- Positive relationship: the scores on the instrument and the criterion move in the same direction
- Negative relationship: the scores on the instrument and the criterion move in opposite directions
Validity coefficient: the relationship between the scores obtained by the same individuals on the particular instrument and their scores on the criterion
r = correlation coefficient between instrument scores and criterion scores (the validity coefficient)
Colin, P., & Julie, W. (2005-06). Exploring reliability in academic assessment. Retrieved September 20, 2014, from http://www.uni.edu/chfasoa/reliabilityandvalidity.htm
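A minimal Python sketch of computing the validity coefficient r for hypothetical instrument and criterion scores from the same individuals; for concurrent validity both score sets would be collected at the same time, for predictive validity the criterion scores would come later.

from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

instrument = [55, 60, 72, 80, 90]  # scores on the instrument being validated
criterion = [50, 58, 70, 79, 88]   # scores on another measure of the same variable
print(round(pearson_r(instrument, criterion), 2))  # near +1: positive relationship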
The choice of the criterion is crucial and will determine the validity:
- The criterion must be relevant
- The criterion must be reliable
- The criterion should be free from bias
Colin, P., & Julie, W. (2005-06). Exploring reliability in academic assessment. Retrieved September 20, 2014, from http://www.uni.edu/chfasoa/reliabilityandvalidity.htm
Construct-related evidence
A variety of different types of evidence are collected.
To obtain construct-related evidence of validity:
- The variable being measured is clearly defined
- Hypotheses are formed (based on the theory underlying the variable)
- The hypotheses are tested both logically and empirically
Colin, P., & Julie, W. (2005-06). Exploring reliability in academic assessment. Retrieved September 20, 2014, from http://www.uni.edu/chfasoa/reliabilityandvalidity.htm
Construct-related evidence:
- Convergent validity: multiple measures of the same construct operate in similar ways
- Divergent validity: measures that should not be related are, in reality, not related
Note:
Construct validation involves various types of procedures and evidence, including content-related and criterion-related evidence
Colin, P., & Julie, W. (2005-06). Exploring reliability in academic assessment. Retrieved September 20, 2014, from http://www.uni.edu/chfasoa/reliabilityandvalidity.htm
Internal Validity
- Any relationship observed between two or more variables should mean what it appears to mean, rather than being due to something else
- Differences observed on the dependent variable are directly attributable to the independent variable, and not to an unintended variable
Cohen, L., Manion, L., & Morrison, K. (2007). Research methods in education (6th ed.). New York, NY: Routledge.
Threats to Internal Validity
1. Subject characteristics
2. Loss of subjects (mortality)
3. Location
4. Instrumentation
5. Testing
6. History
7. Maturation
8. Attitude of subjects
9. Regression
10. Implementation
Cohen, L., Manion, L., & Morrison, K. (2007). Research methods in education (6th ed.). New York, NY: Routledge.
How can the researcher minimize the threats?
1. Standardize the conditions under which the study occurs
2. Obtain more information on the subjects of the study
3. Obtain more information on the details of the study
4. Choose an appropriate design
Cohen, L., Manion, L., & Morrison, K. (2007). Research methods in education (6th ed.). New York, NY: Routledge.
External Validity
Def:
The extent to which the results of a study can be generalized to a wider population, other cases, or situations
1. Population generalizability: the degree to which a sample represents the population of interest
2. Ecological generalizability: the degree to which the results of a study can be extended to other settings or conditions
Cohen, L., Manion, L., & Morrison, K. (2007). Research methods in education (6th ed.). New York, NY: Routledge.
Threats to External Validity
1. Selection effects
2. Setting effects
3. History effects
4. Construct effects
Cohen, L., Manion, L., & Morrison, K. (2007). Research methods in education (6th ed.). New York, NY: Routledge.
Reliability
- Reliability indicates how consistently a test measures whatever it does measure
- A test or instrument is considered reliable if it gives the same result over and over again
- The consistency and stability of the scores/data obtained
- Reliability coefficient: the relationship between scores obtained by the same individuals on the same instrument at two different times, or on two parts of the same instrument
Trochim, W. M. The Research Methods Knowledge Base (2nd ed.). Retrieved September 21, 2014, from http://www.socialresearchmethods.net/kb/reliable.php
Why is reliability important?
Errors of measurement: reliability estimates give researchers an idea of how much variation to expect
Trochim, W. M. The Research Methods Knowledge Base (2nd ed.). Retrieved September 21, 2014, from http://www.socialresearchmethods.net/kb/reliable.php
Ways to obtain a reliability coefficient:
- Test-retest method
- Equivalent-forms method
- Internal-consistency methods
Test-retest reliability
- The same test is given twice to the same individuals after a period of time
- The reliability coefficient will be affected by the length of the time interval between the tests
- For most educational studies, stability of scores over a two-to-three-month period is viewed as sufficient evidence of test-retest reliability
Fraenkel, J. R., & Wallen, N. E. (2009). How to design and evaluate research in education (7th ed.). New York, NY: McGraw-Hill.
Equivalent-forms reliability
Reliability can be achieved through:
1. Using two different but equivalent forms of the test/instrument, given to the same individuals in the same period
2. Inter-rater reliability: all researchers must come to the same agreement by ensuring each researcher enters the data in the same way
The inter-rater agreement can be calculated as a percentage:
(Number of actual agreements / Number of possible agreements) x 100
Fraenkel, J. R., & Wallen, N. E. (2009). How to design and evaluate research in education (7th ed.). New York, NY: McGraw-Hill.
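A minimal Python sketch of the percentage-agreement calculation above, using hypothetical codings from two raters.

# Hypothetical codings of the same six items by two raters.
rater_a = ["yes", "no", "yes", "yes", "no", "yes"]
rater_b = ["yes", "no", "no", "yes", "no", "yes"]

actual = sum(a == b for a, b in zip(rater_a, rater_b))  # actual agreements
possible = len(rater_a)                                 # possible agreements
print(actual / possible * 100)  # 5/6 x 100, about 83.3 % agreement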
Internal-consistency measures of reliability
- The instrument/test is run only once
i. Split-half method
- Test items are divided into two halves (each half is matched in terms of item difficulty and content) and the individuals' scores on the two halves are correlated
- The marks obtained from one half of a test should match the marks on the other half
- The reliability coefficient is calculated using the Spearman-Brown prophecy formula:
Reliability of scores on total test = 2r / (1 + r)
where r = correlation coefficient between the two halves
Fraenkel, J. R., & Wallen, N. E. (2009). How to design and evaluate research in education (7th ed.). New York, NY: McGraw-Hill.
Internal-consistency measures of reliability
i. Split-half method
- r is obtained with the Pearson correlation coefficient
- If the r obtained by comparing one half of the test items to the other half is 0.56, then the reliability score for the whole test would be:
Reliability of scores on total test = (2 x 0.56) / (1 + 0.56) = 1.12 / 1.56 = 0.72
Fraenkel, J. R., & Wallen, N. E. (2009). How to design and evaluate research in education (7th ed.). New York, NY: McGraw-Hill.
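A minimal Python sketch of the Spearman-Brown prophecy formula, reproducing the worked example above.

def spearman_brown(r):
    """Whole-test reliability from the correlation r between the two halves."""
    return 2 * r / (1 + r)

print(round(spearman_brown(0.56), 2))  # (2 x 0.56) / 1.56 -> 0.72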
Internal-consistency measures of reliability
ii. Kuder-Richardson approaches
- The KR20 and KR21 formulas
- The latter formula requires three pieces of information:
  K = the number of items on the test
  M = the mean of the set of test scores
  SD = the standard deviation of the set of test scores
- The KR21 formula can be used only under the assumption that all items in the test are of equal difficulty:
Reliability of the whole test = [K / (K - 1)] x [1 - M(K - M) / (K x SD^2)]
Fraenkel, J. R., & Wallen, N. E. (2009). How to design and evaluate research in education (7th ed.). New York, NY: McGraw-Hill.
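A minimal Python sketch of the KR21 formula; the test statistics in the example call are hypothetical.

def kr21(k, mean, sd):
    """KR21 reliability: k items of equal difficulty, mean and SD of test scores."""
    return (k / (k - 1)) * (1 - mean * (k - mean) / (k * sd ** 2))

# Hypothetical 50-item test with mean score 40 and standard deviation 4:
print(round(kr21(50, 40, 4), 2))  # -> 0.51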
Internal-consistency measures of reliability
iii. Alpha coefficient (Cronbach alpha)
A general form of the KR20 formula, used to calculate the reliability of items that are not scored right versus wrong, such as essay tests where more than one answer is possible
Fraenkel, J. R., & Wallen, N. E. (2009). How to design and evaluate research in education (7th ed.). New York, NY: McGraw-Hill.
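A minimal Python sketch of Cronbach's alpha for items scored on a scale rather than right/wrong; the score matrix is hypothetical (rows = examinees, columns = essay items).

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(scores):
    k = len(scores[0])  # number of items
    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    total_var = variance([sum(row) for row in scores])  # variance of total scores
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

data = [[4, 3, 5], [2, 2, 3], [5, 4, 5], [3, 3, 4]]
print(round(cronbach_alpha(data), 2))  # -> 0.95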
The summary of methods for checking the validity and the reliability of an instrument:

Validity (Truthfulness)
Method | Procedure
Content-related evidence | Obtain expert judgment
Criterion-related evidence | Relate to another measure of the same variable
Construct-related evidence | Assess evidence on predictions made from theory

Reliability (Consistency)
Method | Content | Time interval | Procedure
Test-retest | Identical | Varies | Give identical instrument twice
Equivalent forms | Different | None | Give two forms of instrument
Equivalent forms/retest | Different | Varies | Give two forms of instrument, with time interval
Internal consistency | Different | None | Divide instrument into halves and score each, or use Kuder-Richardson approach
Fraenkel, J. R., & Wallen, N. E. (2009). How to design and evaluate research in education (7th ed., p. 158). New York, NY: McGraw-Hill.
The Standard Error of Measurement (SEMeas)
- An index showing the extent to which a measurement would vary under changed circumstances
- The longer the time elapsed between measurements, the more the score fluctuates
Fraenkel, J. R., & Wallen, N. E. (2009). How to design and evaluate research in education (7th ed.). New York, NY: McGraw-Hill.
Exp: A person's scores on an IQ test
The formula for the standard error of measurement:
SEMeas = SD x sqrt(1 - r)
where SD = standard deviation and r = reliability coefficient
[Fig. 1 and Fig. 2: a person's scores on an IQ test]
Fraenkel, J. R., & Wallen, N. E. (2009). How to design and evaluate research in education (7th ed.). New York, NY: McGraw-Hill.
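A minimal Python sketch of the SEMeas formula; the values are illustrative (SD = 15 is typical for IQ scales, the reliability r = 0.96 is hypothetical).

from math import sqrt

def sem(sd, r):
    """Standard error of measurement: sd x sqrt(1 - reliability)."""
    return sd * sqrt(1 - r)

print(round(sem(15, 0.96), 1))  # -> 3.0 points of expected fluctuation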
References:
Cohen, L., Manion, L., & Morrison, K. (2007). Research methods in education (6th ed.). New York, NY: Routledge.
Fraenkel, J. R., & Wallen, N. E. (2009). How to design and evaluate research in education (7th ed.). New York, NY: McGraw-Hill.
Zamalia Mahmud (2009). Handbook of research methodology: A simplified version. Shah Alam, Malaysia: UiTM Press.
Research designs in education. Retrieved September 21, 2014, from http://adhi301126117.wordpress.com/2010/07/26/the-quantitative-research-and-appreciative-inquiry/
Trochim, W. M. The Research Methods Knowledge Base (2nd ed.). Retrieved September 21, 2014, from http://www.socialresearchmethods.net
What is validity and why is it important in research? Retrieved September 25, 2014, from http://psucd8.wordpress.com/2011/11/20/why-is-validity-important-in-research/