
RESEARCH DESIGN AND MEASUREMENT

UNIT 2

RESEARCH DESIGN

A research design is the arrangement of conditions for the collection and analysis of data in a manner that aims to combine relevance to the research purpose with economy in procedure. According to Green and Tull, "A research design is the specification of methods and procedures for acquiring the information needed. It is the overall operational pattern or framework of the project that stipulates what information is to be collected, from which sources, and by what procedures." In short, it is the conceptual structure within which research is conducted: the decisions regarding what, where, when, how much, and by what means an inquiry or research study will be carried out.

Need for research design

The research design has to be prepared on account of the following reasons:

1. Research design is the blueprint of the proposed research to be conducted. It enables the researcher to plan the various activities and gives an insight into the type of difficulties that may arise, so that the researcher is prepared to tackle them.

2. It gives an idea of the type of resources required in terms of money, manpower, time and effort.

3. It enables the smooth and efficient conduct of the various research operations.

4. The research design affects the reliability of the research findings and as such constitutes the foundation of the entire research work.

FEATURES

It is a plan that specifies the sources and types of information relevant to the research problem. It is a strategy specifying which approach will be used for gathering and analyzing the data. It also includes the time and cost budgets, since most studies are done under these two constraints.

A research design must contain:
1. A clear statement of the research problem.
2. The procedures and techniques to be used for gathering information.
3. The population to be studied.
4. The methods to be used in processing and analyzing data.

Research design can be split into:
1. Sampling design, which deals with the method of selecting the items to be observed for the given study.
2. Observational design, which relates to the conditions under which the observations are to be made.
3. Statistical design, which concerns how many items are to be observed and how the information and data gathered are to be analyzed.
4. Operational design, which deals with the techniques by which the procedures specified in the sampling, statistical and observational designs can be carried out.

CONCEPTS RELATING TO RESEARCH DESIGN

1. Variables: A concept which can take on quantitative values. Ex: weight, height, income.
   - Continuous variables: can take any value, including fractional (decimal) values. Ex: age.
   - Discrete (non-continuous) variables: values are expressed only in integers. Ex: number of children.
2. Dependent variable: a variable that depends upon, or is the consequence of, another variable.
3. Independent variable: a variable that is antecedent to the dependent variable. Ex: if height depends on age, height is the dependent variable and age is the independent variable.

4. Extraneous variables (experimental error): independent variables that are not related to the purpose of the study but may affect the dependent variable.

5. Control: minimizing the effect of extraneous variables.

6. Confounded relationships: When the dependent variable is not free from the influence of extraneous variables, the relationship between the dependent and independent variables is said to be confounded.

7. Research hypothesis: When a prediction or a hypothesized relationship is to be tested by scientific methods, it is termed a research hypothesis. A research hypothesis is a predictive statement that relates an independent variable to a dependent variable.

8. Hypothesis-testing research: When the purpose of research is to test a research hypothesis, it is termed hypothesis-testing research. It can be of the experimental or the non-experimental design.

EXPERIMENTAL AND CONTROL GROUPS:


In experimental hypothesis-testing research, when a group is exposed to usual conditions it is termed a control group, but when the group is exposed to special conditions it is termed an experimental group.

TREATMENTS: The different conditions under which experimental and control groups are put are usually referred to as treatments.

Experiment: The process of examining the truth of a statistical hypothesis relating to some research problem is known as an experiment. Experiments may be: a. absolute experiments, or b. comparative experiments.

Experimental Units:
The pre-determined plots or blocks where different treatments are used are known as experimental units.

DIFFERENT RESEARCH DESIGNS:

1. Research design in case of exploratory research studies:
   a. The survey of concerning literature
   b. Experience survey
   c. Analysis of insight-stimulating examples

2. Research design in case of descriptive or diagnostic research studies:
   - Descriptive research: describing the characteristics of a particular individual or group.
   - Diagnostic research: determining the frequency with which something occurs, or its association with something else.

3. Research design in case of hypothesis-testing research studies (experimental design).

The research design differs by type of study (exploratory vs. descriptive):

1. Overall design. Exploratory: flexible design (one that provides opportunity for considering different aspects of the problem). Descriptive: rigid design (one that makes enough provision for protection against bias and maximizes reliability).
2. Sampling design. Exploratory: non-probability sampling design. Descriptive: probability sampling design.
3. Statistical design. Exploratory: no pre-planned design for analysis. Descriptive: pre-planned design for analysis.
4. Observational design. Exploratory: unstructured instruments for collecting data. Descriptive: structured instruments for collection of data.
5. Operational design. Exploratory: no fixed decisions about the operational procedures. Descriptive: advance decisions about operational procedures.

COMPONENTS OF RESEARCH DESIGN:

1. Title of the Problem
2. Nature of the Study
3. Objectives of the Study
4. Scope of the Study
5. Survey of Literature
6. Formulation of Hypothesis
7. Selection of Sample
8. Data Collection
9. Data Analysis
10. Report Writing
11. Bibliography

TYPES OF RESEARCH DESIGN

Exploratory / Formulative Research Design:
Purpose: formulating a problem for more precise investigation, or developing a working hypothesis from an operational point of view; discovering new ideas and insights.
Methods:
1. Survey of concerning literature.
2. Experience survey.
3. Analysis of insight-stimulating examples.

Descriptive Research Design:
Concerned with describing the characteristics of a particular individual or of a group, and with specific predictions and the narration of facts and characteristics concerning an individual, group or situation.
Points to be focused:
a. Formulating the objective of the study
b. Designing the methods of data collection
c. Selecting the samples
d. Collecting the data
e. Processing and analyzing the data
f. Reporting the findings

Experimental / Hypothesis-testing Research Design:
Experimental design is the framework or structure of an experiment, used for testing hypotheses of causal relationships between variables.
Principles:
1. Principle of replication: the experiment should be repeated more than once.
2. Principle of randomization: design and plan the experiment in such a way that the variations caused by extraneous factors can all be combined under the general heading of chance.
3. Principle of local control: measuring and eliminating the variability due to extraneous factors from the experimental error.

Types of experimental design:
1. Informal experimental designs
   (i) Before-and-after without control design
   (ii) After-only with control design
   (iii) Before-and-after with control design
2. Formal experimental designs
   (i) Completely randomized design
       a. Two-group simple randomized design
       b. Random replications design
   (ii) Randomized block design
   (iii) Latin square design
   (iv) Factorial designs
       a. Simple factorial
       b. Complex factorial

INFORMAL EXPERIMENTAL DESIGNS

Designs that normally use a less sophisticated form of analysis based on differences in magnitudes.

1. Before-and-after without control design:
Test area: the level of the phenomenon before treatment (X) is measured (dependent variable); the treatment is introduced; the level of the phenomenon after treatment (Y) is measured (dependent variable again).
Treatment effect = Y - X

2. After-only with control design:
Test area: the treatment is introduced and the level of the phenomenon after treatment (Y) is measured.
Control area: the level of the phenomenon without treatment (Z) is measured.
Treatment effect = Y - Z

3. Before-and-after with control design:
Test area: level of the phenomenon before treatment (X); treatment introduced; level after treatment (Y).
Control area: level of the phenomenon without treatment at the start (A) and at the end (Z).
Treatment effect = (Y - X) - (Z - A)
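
The three treatment-effect formulas are simple arithmetic. A minimal Python sketch, with made-up levels for X, Y, Z and A, shows how each design isolates the effect:

```python
# Treatment-effect arithmetic for the three informal designs,
# using hypothetical illustrative numbers for X, Y, Z and A.

# 1. Before-and-after without control
X = 40  # level of phenomenon before treatment (test area)
Y = 55  # level after treatment (test area)
effect_1 = Y - X  # 15

# 2. After-only with control
Z = 42  # level without treatment (control area)
effect_2 = Y - Z  # 13

# 3. Before-and-after with control
A = 41  # control-area level before the experiment period
effect_3 = (Y - X) - (Z - A)  # 15 - 1 = 14

print(effect_1, effect_2, effect_3)
```

Note the design choice in the third form: subtracting the change in the control area (Z - A) cancels out any general drift that affects both areas, which the first two designs cannot do.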

Completely randomized (CR) design:
- Involves the principles of randomization and replication; subjects are randomly assigned to the experimental treatments.
- One-way ANOVA is used to analyze the results, as in the sketch below.
- Even unequal replication can work in this design.
- Provides the maximum number of degrees of freedom to the error.
- Used when the experimental areas are homogeneous, i.e., when all the variance due to uncontrolled extraneous factors can be included under the heading of chance variation.
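
A minimal sketch of the analysis step, assuming hypothetical observations for three treatments; scipy.stats.f_oneway performs the one-way ANOVA mentioned above:

```python
# One-way ANOVA for a completely randomized design.
# The observations are hypothetical; f_oneway tests whether the
# three treatment means differ beyond chance variation.
from scipy.stats import f_oneway

treatment_a = [20, 22, 19, 24, 25]
treatment_b = [28, 30, 27, 26, 29]
treatment_c = [18, 20, 22, 19, 21]  # unequal replication would also work

f_stat, p_value = f_oneway(treatment_a, treatment_b, treatment_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```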

Two-group simple randomized design:

    Population --(random selection)--> Sample --(random assignment)--> Experimental group: Treatment 1
                                                                       Control group:      Treatment 2

The treatments constitute the independent variables.

- Used in the behavioural sciences.
- It is simple and randomizes the differences among the sample items (see the assignment sketch below).
- It does not control the extraneous variables, so the experiment may not give a correct picture.
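
A minimal sketch of the selection and assignment steps, with hypothetical subject labels; shuffling before the split is what makes the assignment random:

```python
# Two-group simple randomized design: subjects in the sample are
# shuffled and split at random into an experimental group and a
# control group. Subject names are hypothetical.
import random

sample = [f"subject_{i}" for i in range(1, 21)]  # randomly selected sample
random.shuffle(sample)                           # random assignment

experimental_group = sample[:10]  # exposed to the special treatment
control_group = sample[10:]       # kept under usual conditions

print(experimental_group)
print(control_group)
```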

Random replications design:

Two populations are involved: the population available for study and the population available to conduct the treatments. A sample to be studied is drawn by random selection from the first and split by random assignment into several experimental and control groups (e.g., Groups 1-3 experimental, Groups 4-6 control); similarly, a sample of those who conduct the treatments is drawn at random from the second population. The experimental groups receive Treatment 1 and the control groups Treatment 2; the treatments are the independent or causal variables.

- Provides controls for the differential effects of extraneous independent variables.
- It randomizes any individual differences among those conducting the treatments.

RATING SCALES:
- Dichotomous scale
- Category scale
- Likert scale
- Numerical scale
- Semantic differential scale
- Itemized rating scale
- Fixed or constant sum rating scale
- Stapel scale
- Graphic rating scale

Semantic Differential Scale:

Several bipolar attributes are identified at the extremes of the scale, and respondents are asked to indicate their attitudes, on what may be called a semantic space, toward a particular individual, object or event on each of the attributes.

Example: Please indicate your opinions about the 'Jolly Boys' TV show by checking one box in each row below:

                very much  somewhat  neither  somewhat  very much
    Enjoyable      [ ]       [ ]       [ ]      [ ]       [ ]      Boring
    Likely         [ ]       [ ]       [ ]      [ ]       [ ]      Challenging
    Noisy          [ ]       [ ]       [ ]      [ ]       [ ]      Fascinating
    Silly          [ ]       [ ]       [ ]      [ ]       [ ]      Ridiculous

Itemized rating scale:

A 5-point or 7-point scale with anchors is provided for each item, as needed, and the respondent states the appropriate number on the side of each item.

MEASUREMENT:
Measurement is a process of mapping aspects of a domain onto other aspects of a range according to some rule of correspondence.

Measurement scales:
Nominal scale: simply assigning number symbols to events in order to label them.
Ordinal scale: places events in order, but there is no attempt to make the intervals of the scale equal in terms of some rule.

Interval scale: In the case of an interval scale, the intervals are adjusted in terms of some rule that has been established as a basis for making the units equal. It has an arbitrary zero but not an absolute zero.

RATIO SCALE: Ratio scales have an absolute or true zero of measurement. They represent the actual amounts of variables. Measures of physical dimensions such as height, distance, etc. are examples.
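
A short illustration of why the arbitrary-zero vs. absolute-zero distinction matters, using temperature (the values are made up): ratios are meaningful only on a ratio scale.

```python
# Interval scales (arbitrary zero) do not support meaningful ratios;
# ratio scales (true zero) do.
celsius_a, celsius_b = 10.0, 20.0    # interval: 0 degC is an arbitrary zero
kelvin_a = celsius_a + 273.15        # ratio: 0 K is absolute zero
kelvin_b = celsius_b + 273.15

print(celsius_b / celsius_a)  # 2.0 -- but 20 degC is NOT "twice as hot"
print(kelvin_b / kelvin_a)    # ~1.035 -- the physically meaningful ratio
```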

Sources of error in Measurement: a. Respondent b. Situation c. Measurer d. Instrument.

TESTS OF SOUND MEASUREMENT


Sound measurement must meet the tests of:
- Validity: the extent to which a test measures what we actually wish to measure.
- Reliability: the accuracy and precision of a measurement procedure.
- Practicality: a wide range of factors of economy, convenience and interpretability.

Test of Validity:
It indicates the degree to which an instrument measures what it is supposed to measure.
Types of validity:
a. Content validity
b. Criterion-related validity
c. Construct validity

Content Validity:

The extent to which a measuring instrument provides adequate coverage of the topic.

Criterion-Related Validity:
It relates to our ability to predict some outcome.

Qualities of Criterion-related Validity:


- Relevance
- Freedom from bias
- Reliability (stable or reproducible)
- Availability

Criterion-related validity in turn comprises a. predictive validity and b. concurrent validity.

Test of Reliability: A measuring instrument is reliable if it provides consistent results.

Aspects of Reliability: 1. Stability Aspect 2. Equivalence Aspect.

Stability Aspect:

It is concerned with securing consistent results with repeated measurements.


Equivalence Aspect: This aspect considers how much error may get introduced by different investigators or different samples of the items being studied.

Reliability can be improved by:
- standardizing the conditions under which the measurement takes place;
- carefully designing directions for measurement, with no variation from group to group;
- using trained and motivated persons to conduct the research.

Both aspects of reliability can be quantified, as in the sketch below.
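A minimal sketch with hypothetical scores: a test-retest correlation for the stability aspect, and a split-half correlation stepped up with the standard Spearman-Brown formula, r_full = 2r / (1 + r), for the equivalence/consistency side (the Spearman-Brown step is standard practice, not spelled out in these notes).

```python
# Two common reliability checks, with hypothetical scores.
import numpy as np

# Stability: correlate scores from the same instrument at two times.
test = np.array([12, 15, 11, 18, 14, 16, 13, 17])
retest = np.array([13, 14, 12, 17, 15, 16, 12, 18])
stability = np.corrcoef(test, retest)[0, 1]

# Consistency: correlate the two halves of one administration, then
# step up with Spearman-Brown to estimate full-length reliability.
odd_half = np.array([6, 8, 5, 9, 7, 8, 6, 9])   # score on odd-numbered items
even_half = np.array([6, 7, 6, 9, 7, 8, 7, 9])  # score on even-numbered items
r_half = np.corrcoef(odd_half, even_half)[0, 1]
split_half = 2 * r_half / (1 + r_half)

print(f"test-retest r = {stability:.2f}, split-half = {split_half:.2f}")
```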

Test of Practicality:

It can be judged in terms of economy, convenience and interpretability.

SCALING: Scaling describes the procedures of assigning numbers to various degrees of opinion, attitude and other concepts.

This can be done in two ways:
a. Making a judgment about some characteristic of an individual and placing him directly on a scale.
b. Constructing questionnaires in such a way that the score of an individual's responses assigns him a place on a scale.

Important scaling techniques include the ranking scales:
a. Paired comparison method
b. Forced choice method

Scale Construction Techniques:

    Scale construction approach      Name of the scales developed
    1. Arbitrary approach            Arbitrary scales
    2. Consensus scale approach      Differential scales (Thurstone-type scales)
    3. Item analysis approach        Summated scales
    4. Cumulative scale approach     Cumulative scales (scalogram analysis)
    5. Factor analysis approach      Semantic differential scale

ARBITRARY SCALES

Arbitrary scales are designed largely through the researcher's own subjective selection of items.
The researcher first collects a few items which he believes are unambiguous and appropriate to a given topic. Some of these are selected for inclusion in the measuring instrument, and then people are asked to check in a list the statements with which they agree.

GOODNESS OF DATA

Reliability:
- Stability: test-retest reliability; parallel-form reliability
- Consistency: interitem consistency; split-half reliability

Validity:
- Logical (content) validity: face validity
- Criterion-related validity: predictive validity; concurrent validity
- Congruent (construct) validity: convergent validity; discriminant validity

VALIDITY IN EXPERIMENTATION
Internal validity: the confidence we place in the cause-and-effect relationship. Threats: history, maturation, testing, instrumentation, selection, statistical regression, and experimental mortality.

History
Certain events that would have an impact on the independent variable-dependent variable relationship might unexpectedly occur while the experiment is in progress; this history of events may confound the cause-effect relationship between the two variables.

Maturation
Changes in the physical, intellectual, or emotional characteristics of participants that occur naturally over time and influence the results of a research study. In longitudinal studies, for instance, individuals grow older, become more sophisticated, and may become more set in their ways.

Testing
Also called pretest sensitization, this refers to the effects of taking a test upon performance on a second testing. Merely having been exposed to the pretest may influence performance on a posttest. Testing becomes a more viable threat to internal validity as the time between pretest and posttest is shortened.

Instrumentation
Changes in the way a test or other measuring instrument is calibrated that could account for the results of a research study (different forms of a test can have different levels of difficulty). This threat typically arises from unreliability in the measuring instrument, and it can also be present when observers are used.

Statistical Regression
Occurs when individuals are selected for an intervention or treatment on the basis of extreme scores on a pretest. Extreme scores are more likely to reflect larger (positive or negative) errors in measurement (chance factors). Such extreme measurement errors are NOT likely to occur on a second testing.
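
A small simulation (the numbers are hypothetical throughout) makes this threat concrete: subjects picked for their extreme pretest scores drift back toward the mean on a second testing even when no treatment is applied at all.

```python
# Regression to the mean: observed score = true ability + random error.
# Selecting on extreme pretest scores also selects on large errors,
# which do not repeat on the posttest.
import numpy as np

rng = np.random.default_rng(0)
true_ability = rng.normal(50, 10, size=10_000)
pretest = true_ability + rng.normal(0, 5, size=10_000)   # ability + error
posttest = true_ability + rng.normal(0, 5, size=10_000)  # fresh error, NO treatment

extreme = pretest > 70                 # selected for extreme pretest scores
print(pretest[extreme].mean())         # roughly 72-73
print(posttest[extreme].mean())        # noticeably lower, despite no treatment
```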

Differential Selection
This can occur when intact groups are compared.
The groups may have been different to begin with. If three different classrooms are each exposed to a different intervention, the classroom performances may differ only because the groups were different to begin with.

Research Mortality
The differential loss of individuals from treatment and/or comparison groups. This is often a problem when research participants are volunteers.
Volunteers may drop out of the study if they find it is consuming too much of their time. Others may drop out if they find the task too arduous.

External validity:
Does an observed causal relationship generalize across persons, settings and times? Threats: the situation and the type of subjects.

VARIABLES: A variable is anything that can take on differing or varying values. The values can differ at various times for the same object or person, or at the same time for different objects or persons.

TYPES OF VARIABLES:
- The dependent variable
- The independent variable
- The moderating variable
- The intervening variable

Dependent Variable: The dependent variable is the variable of primary interest to the researcher.

Independent variable:
The variable that influences the dependent variable in either a positive or a negative way.
Moderating variable: modifies the relationship between the independent and dependent variables.
Intervening variable: one that surfaces between the time the independent variables start operating to influence the dependent variable and the time their impact is felt on it.

TECHNIQUE OF DEVELOPING MEASUREMENT TOOLS


The technique involves a four-stage process:
1. Concept development: the researcher should arrive at an understanding of the major concepts pertaining to the study.
2. Specification of concept dimensions: adopting a more or less intuitive approach, or empirically correlating the individual dimensions with the total concept and/or other concepts.
3. Selection of indicators: the researcher must develop indicators for measuring each concept element. Indicators are specific questions, scales or other devices by which a respondent's knowledge, opinions and expectations are measured.
4. Formation of an index: combining the various indicators into an index, as in the sketch below.
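
A minimal sketch of the fourth stage, with hypothetical indicator values; here the index is simply the mean of the standardized indicator scores, which is one common (but not the only) way to combine indicators.

```python
# Combining several indicators of one concept into a single index.
import numpy as np

# Each row is a respondent; columns are three indicators of the concept.
indicators = np.array([
    [4, 15, 3.2],
    [2, 10, 2.5],
    [5, 18, 4.1],
    [3, 12, 3.0],
])

# Standardize each indicator, then average across indicators.
z = (indicators - indicators.mean(axis=0)) / indicators.std(axis=0)
index = z.mean(axis=1)  # one index score per respondent
print(index.round(2))
```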

SCALING
The procedures of assigning numbers to various degrees of opinion, attitude and other concepts. Ways:
(i) Making a judgment about some characteristic of an individual and placing him directly on a scale that has been defined in terms of that characteristic.
(ii) Constructing questionnaires in such a way that the score of an individual's responses assigns him a place on a scale.

Bases for classification of scales:
- Subject orientation
- Response form
- Degree of subjectivity
- Scale properties
- Number of dimensions
- Scale construction techniques: arbitrary approach, consensus approach, item analysis approach, cumulative scales, factor scales

SCALING TECHNIQUES
1. Rating/categorical scales: qualitative description of a limited number of aspects of a thing or of the traits of a person. Ex: like-dislike, above average-below average, always-never.
(a) Graphic rating scale: various points are usually put along a line to form a continuum, and the rater indicates his rating by simply making a mark at the appropriate point on a line that runs from one extreme to the other.
(b) Itemized rating/numeric scale: presents a series of statements from which a respondent selects the one that best reflects his evaluation.

2. Ranking/comparative scales: relative judgments against other similar objects, i.e., comparing two or more objects and making choices among them.
(a) Method of paired comparisons: making a choice between two objects. Number of judgments required: N = n(n-1)/2, where n = number of objects (see the sketch below).
(b) Method of rank order: the respondents are asked to rank their choices.
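
The formula N = n(n-1)/2 can be checked by enumerating the pairs directly; the object names below are hypothetical.

```python
# Paired-comparison count: enumerate all unordered pairs of n objects
# and confirm the count matches n(n-1)/2.
from itertools import combinations

objects = ["brand_A", "brand_B", "brand_C", "brand_D"]  # n = 4
pairs = list(combinations(objects, 2))

print(len(pairs))        # 6
print(4 * (4 - 1) // 2)  # 6, matching n(n-1)/2
```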

SCALE CONSTRUCTION TECHNIQUES
Arbitrary scales: a few statements or items which are unambiguous and appropriate to a given topic are collected, and respondents are asked to check in a list the statements with which they agree.
Differential scales (Thurstone scales): items are selected by a panel of judges who evaluate them in terms of whether they are relevant to the topic area and unambiguous in implication.

Summated scales (Likert scales): a particular item is evaluated on the basis of how well it discriminates between those persons whose total score is high and those whose total score is low. The items or statements that best meet this discrimination test are included in the final instrument. The scale consists of statements which express either a favourable or an unfavourable attitude towards the given object, to which the respondent is asked to react.

Cumulative scales (Guttman's scalogram): consist of a series of statements to which a respondent expresses his agreement or disagreement; the statements form a cumulative series.
Factor scales: developed on the basis of intercorrelations of items which indicate that a common factor accounts for the relationships between items.
- Osgood's semantic differential scale: measures the psychological meaning of an object to an individual.
- Multi-dimensional scaling: can scale objects, individuals or both with a minimum of information; can be characterized as a set of procedures for portraying the perceptual or affective dimensions of substantive interest. Approaches: metric and non-metric.

Rating Scale
Dichotomous scale: yes-or-no answer type (nominal scale). Ex: Do you own a car? Yes / No.
Category scale: uses multiple items (nominal scale). Ex: Where in northern California do you reside? North Bay / South Bay / East Bay / Peninsula / Other.
Likert scale: examines how strongly a respondent agrees or disagrees with a statement (interval scale). Ex: "My work is very interesting." Strongly disagree = 1, Disagree = 2, Neither agree nor disagree = 3, Agree = 4, Strongly agree = 5.
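
A minimal sketch of how Likert responses become a single summated score; the item names and the reverse-keyed item are assumptions for illustration.

```python
# Summated (Likert) scoring: responses on a 1-5 agreement scale are
# added into one attitude score per respondent. Negatively worded
# items are flipped before summing (a common, assumed practice).
responses = {"item_1": 4, "item_2": 5, "item_3_reversed": 2, "item_4": 4}

score = 0
for item, value in responses.items():
    if item.endswith("_reversed"):
        value = 6 - value  # flip a negatively worded item on a 1-5 scale
    score += value

print(score)  # 4 + 5 + 4 + 4 = 17
```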

Semantic differential scale: several bipolar attributes are identified at the extremes of the scale, and respondents are asked to indicate their attitudes (interval scale). Ex: Beautiful _ _ _ _ _ Ugly.
Numerical scale: numbers are provided on a 5-point or 7-point scale with bipolar attributes at both ends (interval scale). Ex: How pleased are you with your new agent? Extremely pleased 5 4 3 2 1 Extremely displeased.
Itemized rating scale: the respondent gives or circles the relevant number against each item (interval scale). Ex: "I will be changing my job within 2 months ____"; "I will take a new assignment in the future ____". Very unlikely = 1, Unlikely = 2, Neither unlikely nor likely = 3, Likely = 4, Very likely = 5.

Fixed or constant sum scale: respondents are asked to distribute a given number of points across various items (ordinal scale). Ex: In choosing a shirt, indicate the importance you attach to each of the following four aspects: Colour ___ Design ___ Price ___ Quality ___ (total points = 100).

Stapel scale: measures both the direction and the intensity of the attitude towards an item (interval scale). Ex: How would you rate your supervisor's ability with respect to each characteristic? Circle a number from +3 to -3 in the column under each characteristic:

    +3                +3
    +2                +2
    +1                +1
    Innovative        Interpersonal skills
    -1                -1
    -2                -2
    -3                -3

Graphic rating scale: respondents indicate their answer to a particular question by placing a mark on a line. Ex: How would you rate your worker? 10 = excellent, 5 = all right, 1 = very bad.

Ranking scale

Used to elicit preferences between two or more objects or items: relative judgments against other similar objects, i.e., comparing two or more objects and making choices among them.
(a) Method of paired comparisons: making a choice between two objects. Number of judgments required: N = n(n-1)/2, where n = number of objects.
(b) Method of rank order: the respondents are asked to rank their choices.
- Forced choice: respondents rank the objects relative to one another among the alternatives. Ex: Which magazine would you like to subscribe to among the following? Economic Times / Hindu Chronicle / Business Line.
- Comparative scale: provides a point of reference to assess attitudes toward the current object, event or situation. Ex: In the current financial environment, compared to stock, how wise or useful is it to invest in bonds? More useful 1 2 3 4 5 Less useful (3 = about the same).
