Research
for
BBS 4th year
prepared by
Sijan Raj Joshi
Reference (Book):
Pant, P. R. Fundamentals of Business Research Methods. Kathmandu: Buddha Publication Pvt. Ltd.
RESEARCH
(search for knowledge, again and again)
Research is a systematic, controlled, empirical, and critical investigation of hypothetical propositions about the presumed relations among natural phenomena.
•Scientific and systematic search for information on a particular topic or issue.
•Systematic or organized effort to investigate a specific problem that needs a solution.
•Search for knowledge.
•Investigation or inquiry into new facts/findings in any branch of knowledge.
(Diagram: systematic, controlled, empirical, critical investigation of hypothetical propositions → Research)
1) Purposiveness (purpose of research)
• Focused/specific.
• Research without purpose leads nowhere.
• Purpose influences the activities of a researcher.
• Basis of procedure.
• Methodology of execution.
• Interpretation of findings.
• Reduces major errors.
• Increases the possibility of meaningful results.
2) Testability
• To develop and test hypotheses.
• Various statistical tools/techniques are used to test the hypothesis.
• A major outcome of research is a tested hypothesis.
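The idea of testability can be made concrete with a small sketch: a hand-computed independent-samples t statistic, one of the statistical tools mentioned above. The data and group names below are invented for illustration only.

```python
import math

def t_statistic(sample_a, sample_b):
    """Independent-samples t statistic with pooled variance."""
    na, nb = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / na
    mean_b = sum(sample_b) / nb
    # Unbiased sample variances (divide by n - 1).
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (nb - 1)
    # Pooled variance assumes the two groups share a common variance.
    pooled = ((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)
    return (mean_a - mean_b) / math.sqrt(pooled * (1 / na + 1 / nb))

# Hypothetical hypothesis: "trained employees score higher than untrained ones".
trained = [72, 75, 78, 71, 74, 77]
untrained = [65, 68, 70, 66, 69, 67]
t = t_statistic(trained, untrained)
print(round(t, 2))  # a large t value is evidence against the null hypothesis
```

Comparing the t value against a critical value from a t table (at a chosen significance level) is what lets the researcher accept or reject the hypothesis.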
3) Replicability
• Replicability makes research acceptable and gives confidence that it is scientific.
• Similar methods must give similar results.
• Repeating the previous methods of data collection and analysis should produce similar, acceptable outcomes.
4) Objectivity
• More objective = more scientific.
• Reduces measurement errors that introduce biases/subjectivity into findings.
• Research should be bias-free.
• Others should be able to understand/replicate a finding before it is considered dependable.
5) Rigour
• Carefulness.
• Degree of exactitude.
• Collecting the right information from an appropriate sample with minimum bias.
Steps in the research process
2) Problem identification
3) Review of literature
4) Theoretical framework
6) Formulation of hypotheses
7) Research design
4) Access sites
• The problem
• The methodology
• Data gathering
• Data analysis
• Report writing
Features of research design
• Objectivity: objectivity means value-free research. While designing a research project, we must think about the possibility and chances of biases entering the research process.
• Reliability: reliability means consistency in responses.
• Validity: validity means accurate measurement of the concept.
• Generalization: the goal of scientific research is to make generalizations.
Types of research design
1. Exploratory research design
2. Descriptive research design
• Descriptive research
• Developmental research
• Case study Research
3. Comparative research design
• Correlational research
• Causal-comparative research
4. Interventional research design
• Lab-based experimental research
• Field-based experimental research
Principles and criteria of good
research design
1. Theory-grounded: good research designs reflect the theories being investigated. Where specific theoretical expectations can be hypothesized, these are incorporated into the design.
2. Situational: good research designs reflect the settings of the investigation.
3. Feasible: good research designs can be implemented. The sequence and timing of events are carefully thought out.
4. Redundant: good research designs have some flexibility built into them. Often this flexibility results from duplication of essential design features.
5. Efficient: good research designs strike a balance between redundancy and the tendency to overdesign. Where it is reasonable, other, less costly strategies for ruling out potential threats to validity are utilized.
Validity and reliability
Validity
Validity refers to the extent to which you are measuring what you intended to measure. It indicates the accuracy of a measure.
According to Joppe, “Validity refers to the truthfulness of findings. It determines whether the research truly measures that which it was intended to measure, or how truthful the research results are.”
For example, if a teacher wants to test the grammatical skill of students, but the test is designed in such a way that it measures their listening skill instead, it is not valid for the purpose for which it was originally intended.
A valid measure should satisfy three types/criteria;
Content Validity, Construct Validity and Criterion-
related Validity.
Content Validity
• It ensures that the measure/test includes an adequate
and representative set of items that tap the concept.
• The more the scale items represent the domain or
universe of the concept being measured, the greater the
content validity.
• Example, if we want to test knowledge on American
Geography it is not fair to have most questions limited to
the geography of New England.
• A panel of judges can attest to the content validity of the instrument.
• Kidder and Judd cite the example where a test designed
to measure degrees of speech damage can be considered
as having validity if it is so evaluated by a group of expert
judges (i.e. professional speech therapists).
Construct Validity
• Construct validity testifies to how well the results obtained
from the use of the measure fit the theories around which
the test is designed.
• It seeks agreement between a theoretical concept and our
specific measuring procedure.
• In other words, it is concerned with the factors that lie behind the measurement scores obtained, i.e. with what factors (constructs) are responsible for the variance in measurement scores.
• This is assessed through convergent and discriminant validity.
• Convergent Validity: it is established when the scores obtained with two different instruments measuring the same concept are highly correlated.
• Discriminant Validity: it is established when, based on theory, two variables are predicted to be uncorrelated, and the scores obtained by measuring them are indeed empirically found to be uncorrelated.
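As a sketch of how convergent and discriminant validity are checked in practice, the snippet below correlates hypothetical scores from two instruments measuring the same concept (a high r is expected) and from two theoretically unrelated measures (a near-zero r is expected). All scores and variable names are invented for illustration.

```python
import math

def pearson_r(x, y):
    """Karl Pearson's coefficient of correlation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Two instruments intended to measure the same concept (job satisfaction):
scale_a = [4, 5, 3, 4, 5, 2]
scale_b = [4, 5, 3, 5, 5, 2]
# A theoretically unrelated measure taken from the same respondents:
unrelated = [8, 7, 8, 9, 9, 8]

convergent = pearson_r(scale_a, scale_b)      # expect a high correlation
discriminant = pearson_r(scale_a, unrelated)  # expect a near-zero correlation
print(round(convergent, 2), round(discriminant, 2))
```

A high convergent correlation and a near-zero discriminant correlation together support the construct validity of the instrument.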
Criterion-Related Validity
• Criterion-related validity is used to demonstrate
the accuracy of a measure or procedure by
comparing it with another measure or procedure
which has been demonstrated to be valid.
• A job applicant takes a performance test during the
interview process. If this test accurately predicts
how well the employee will perform on the job, the
test is said to have criterion validity.
• A graduate student takes the GRE. The GRE has
been shown as an effective tool (i.e. it has criterion
validity) for predicting how well a student will
perform in graduate studies.
Types of Criterion-Related Validity
i) Predictive Validity: It is judged by the degree to
which an instrument can forecast an outcome.
ii) Concurrent Validity: concurrent validity measures how well a new test compares to a well-established test. It can also refer to the practice of testing two groups at the same time, or asking two different groups of people to take the same test.
Ex: if you create a new test for depression levels, you can compare its performance to previous depression tests (like a 42-item depression survey) that have high validity. If the results of both are the same, the new test is valid.
Reliability
• Reliability is the degree of consistency of a measure. A
test will be reliable when it gives the same repeated
result under the same conditions.
• In other words, it refers to whether your data collection techniques and analytical procedures would produce consistent findings if they were repeated on another occasion or replicated by another researcher.
• It is assessed on the basis of the stability (test-retest and alternative-form) and consistency (split-half and inter-item) of measures.
Types of Reliability
• Test-Retest Reliability: the reliability coefficient obtained by repetition of the same measure on a second occasion is called test-retest reliability.
• That is, when a questionnaire containing items that are supposed to measure a concept is administered to the same set of respondents at two points in time, the correlation between the two sets of scores is called test-retest reliability.
• Alternative-Form Reliability: a measure of reliability obtained by administering two different tests (both must contain items that address the same construct, skill, knowledge base, etc.) to the same group of individuals within a short period of time. The scores from the two tests can then be correlated to evaluate the consistency of results across the alternate versions.
Split-Half Reliability
• It is used for measuring the internal consistency of the
test.
• In split-half reliability, a test for a single knowledge area is split into two parts, and both parts are given to one group of students at the same time. The scores from the two halves are then correlated.
• The correlation can be computed with Karl Pearson's coefficient of correlation or Spearman's rank-difference method.
• A reliable test will have high correlation, indicating that a
student would perform equally well (or as poorly) on
both halves of the test.
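A minimal sketch of the split-half procedure, assuming invented item scores: a ten-item test is split into odd-numbered and even-numbered items, and the half scores are correlated with Karl Pearson's formula. The Spearman-Brown step-up formula (r_full = 2r / (1 + r)) is a standard companion to the split-half method, added here for completeness; it is distinct from Spearman's rank-difference method.

```python
import math

def pearson_r(x, y):
    """Karl Pearson's coefficient of correlation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Each row: one student's scores on a 10-item test (1 = correct, 0 = wrong).
students = [
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 1, 1, 1, 0, 0],
    [1, 1, 1, 1, 1, 1, 0, 1, 0, 0],
    [1, 1, 1, 0, 1, 1, 0, 0, 0, 0],
    [1, 1, 0, 1, 0, 0, 0, 0, 0, 0],
]
# Split each student's test into odd-numbered and even-numbered items.
odd = [sum(row[0::2]) for row in students]
even = [sum(row[1::2]) for row in students]

r_half = pearson_r(odd, even)
# Spearman-Brown: estimate the reliability of the full-length test.
r_full = 2 * r_half / (1 + r_half)
print(round(r_half, 2), round(r_full, 2))
```

Here the high correlation indicates a reliable test: a student who does well on one half also tends to do well on the other.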
Relationship between Validity and
Reliability
• In research, a researcher wants measuring instruments that are high on both validity and reliability. But in practice we may find any of four relationships between validity and reliability:
1. High reliability, but low validity: the indicators measure something consistently, but not the intended concept. Consider the SAT, used as a predictor of success in college. It is a reliable test (it yields consistent scores), though only a moderately valid indicator of success.
2. High validity, but low reliability: The indicator represents the
concept well, but does not produce consistent measurement.
3. Low validity and low reliability: the worst case; the indicators neither measure the concept nor produce consistent results of whatever they do measure.
4. High validity and high reliability: what we hope for; the indicators consistently measure what we intend to measure.
We can use the target analogy to understand this relationship.