
Everything about

Research

for
BBS 4th year
prepared by
Sijan Raj Joshi
Reference (Book):
Pant, P. R. Fundamentals of Business Research Methods. Kathmandu: Buddha Publication Pvt. Ltd.
RESEARCH
(to search for knowledge again and again)
Research is a systematic, controlled, empirical, and critical investigation of hypothetical propositions about the presumed relations among natural phenomena.
•Scientific and systematic search for information on a particular topic or issue.
•Systematic or organized effort to investigate a specific problem that needs a solution.
•Search for knowledge.
•Investigation or inquiry into new facts/findings in any branch of knowledge.
(Diagram: research = a systematic, controlled, empirical, and critical investigation of hypothetical propositions about the presumed relations among natural phenomena.)
Purpose of research
• To solve a currently existing problem in the work setting (applied/action research).
• To generate new knowledge upon which new theory can be built (basic/fundamental research).
Features of research
• Purposiveness
• Testability
• Replicability
• Objectivity
• Rigour
• Generalizability
1) Purposiveness
• Focused/specific.
• Research without purpose leads nowhere.
• Purpose influences the activities of a researcher.
• Basis of the procedure.
• Methodology of execution.
• Interpretation of findings.
• Reduces major errors.
• Increases the possibility of meaningful results.
2) Testability
• To develop and test hypotheses.
• Various statistical tools/techniques are used to test the hypothesis.
• A major outcome of research is testing the hypothesis.
3) Replicability
• Gives confidence that the research is scientific and its findings acceptable.
• Similar methods must give similar results.
• Repeating the previous methods of data collection and analysis should yield similar, acceptable outcomes.
4) Objectivity
• More objective = more scientific.
• Reduces measurement errors that introduce biases/subjectivity into findings.
• Research should be bias-free.
• Others should be able to understand/replicate a finding before it is considered dependable.
5) Rigour
• Carefulness.
• Degree of exactitude.
• Collect the right information from an appropriate sample with minimum bias.
• Lack of rigour may lead to:
• Selection of a faulty research design.
• Inappropriate or biased data collection.
• Wrong interpretation / wrong conclusions.
6) Generalizability
• Wider applicability of research findings.
• The sampling design must be logically developed.
• Careful steps in data collection methods must be followed.
• More generalizable = more useful and valuable.
Steps in scientific research
1) Sensing or realizing a problem
2) Problem identification
3) Review of literature
4) Theoretical framework
5) Formulation of hypotheses
6) Research design
7) Data collection and data analysis
8) Refinement of theory (interpretation)

Ethical concerns in research
 Not pressuring participants to provide information.
 No deception of participants.
 No fabrication of data.
 No dishonest presentation.
 No manipulation.
 No discrimination.
 Respect for intellectual property.
 Respect for social culture, norms, and values.
 Protection of participants' rights.
UNIT-2
LITERATURE
SEARCHING AND
THEORETICAL
FRAMEWORK
CONCEPT OF LITERATURE REVIEW
• N. Walliman (2010):
• "A literature review (or overview) is a summary and analysis of current knowledge about a particular topic or area of enquiry."
• P. Haywood and E. C. Wragg (1996):
• "It is a process of locating, obtaining, reading and evaluating the research literature in the area of your interest."
STEPS IN LITERATURE REVIEW PROCESS
Working with literature
1) Locating: listing topics relevant to your subject
2) Obtaining: from libraries, online sources, and other sources
3) Reading: effective and selective reading, keeping track of references
4) Evaluating: content analysis and critical review
DESCRIPTION
• In terms of literature review,
• "The literature" means the work you consulted in order to understand and investigate your research problem.
• "Review" is a process of systematic, meticulous, and critical summary of the published literature in your field of research.
• A literature review is a way to discover what other research in the area of your problem has uncovered.
• It is a way to avoid investigating problems that have already been definitively answered.
Purpose And Need For Literature Review
• The primary purposes of a literature review are:
• To learn how others have identified and measured key concepts;
• To identify data sources that other researchers have used;
• To identify potential relationships between concepts; and
• To identify researchable hypotheses.
• "The purpose of a literature review is, thus, to find out what research studies have been conducted in your chosen field of study, and what remains to be done. Hence, a literature survey helps you to avoid duplication of effort."
Specifically stating, the purposes of a literature survey are as follows:
• To give continuity in research.
• To place the research in a historical context and show familiarity with state-of-the-art developments.
• To synthesize and gain new perspectives.
• To draw a theoretical framework and define the research parameters.
• To discover important variables relevant to the topic.
• To generate hypotheses.
• To identify the methodology and techniques of research.
Need for a good literature review
• It allows you to establish your theoretical framework and methodological focus. Even if you are proposing a new theory or a new method, you are doing so in relation to what has been done.
• It is the knowledge of your field which allows you to identify the gap which your research could fill.
LITERATURE SEARCH THROUGH THE INTERNET
1) Decide to search the internet
2) Access the internet
3) Enter the address (search using keywords or by selecting a topic)
4) Access sites
5) Bookmark the sites that appear
(Note down the full address of all material you intend to reference.)
Kinds of literature survey
1. Historical review : This type of literature review
traces the issues, concepts or events over time.
2. Methodological review : This kind of review assesses
and evaluates methodological techniques used and the
strengths of different studies.
3. Theoretical review : This type of review focuses on
the theories or concepts related to the research issue
under study.
4. Integrative review : This type of review summarizes and integrates the current state of knowledge on the topic under study.
Relation Of Literature To Research
• Core component of any scientific research.
• A research project cannot take shape, be complete, and be accepted without a proper literature survey.
• It demonstrates a strong knowledge of the current state of research in your field or topic;
• It shows what issues are being discussed or debated and where research is headed;
• It helps you to avoid duplication, identifies knowledge gaps, and provides directions for future research;
• It provides excellent background information and establishes the need for research in your selected subject or area;
• It provides a 'mental road map' of the past, present and future of research in a particular field.
THEORETICAL FRAMEWORK
• Sekaran and Bougie (2013):
• "The theoretical framework is the foundation on which the entire thesis is based. It is a logically developed, described and elaborated network of associations among variables that have been identified through such processes as interviews, observations and literature survey."
• It describes the relationships among the variables.
• It elaborates the theory underlying these relations.
• It describes the nature and direction of the relationships.
THEORY AND RESEARCH
• A theory is a statement concerning the relationships between or among concepts.
• The purpose of theory is to define, establish, and explain relationships between concepts or constructs.
• Criteria which need to be met by a theory are:
• Definition of terms, concepts, and variables
• Domain (where the theory applies)
• A set of relationships of concepts
• Specific predictions
• Research is closely related to theory. Research and theory are inseparable, supplementary components of scientific investigation. A theory provides a conceptual framework for research. Research, in return, contributes to the development of theory.
Unit-3
RESEARCH
DESIGN
CONCEPT OF RESEARCH DESIGN
• Research design is defined as a framework of methods and techniques chosen by a researcher to combine various components of research in a reasonably logical manner so that the research problem is efficiently handled.
• A research design is a systematic plan to study a scientific problem.
• William Zikmund (2013): "Research design is a master plan specifying the methods and procedures for collecting and analyzing the needed information."
Essentials of a good research design
• A research design is an overall plan for the activities to be undertaken during the course of a research study.
• It is an organized and integrated system that guides the researcher in formulating, implementing, and controlling the study.
• The research design is a strategy for obtaining information for the purpose of conducting a study and making generalizations about the population.
Elements of a research design

• The problem
• The methodology
• Data gathering
• Data analysis
• Report writing
Features of research design
• Objectivity: objectivity means value-free research. While designing a research project, we must think about the possibility and chances of biases entering the research process.
• Reliability: reliability means consistency in responses.
• Validity: validity means accurate measurement of the concept.
• Generalization: the goal of scientific research is to make generalizations.
Types of research design
1) Exploratory research design
2) Descriptive research design
• Descriptive research
• Developmental research
• Case study research
3) Comparative research design
• Correlational research
• Causal-comparative research
4) Interventional research design
• Lab-based experimental research
• Field-based experimental research
Principles and criteria of good research design
1. Theory-grounded: good research designs reflect the theories being investigated. Where specific theoretical expectations can be hypothesized, these are incorporated into the design.
2. Situational: good research designs reflect the settings of the investigation.
3. Feasible: good research designs can be implemented. The sequence and timing of events are carefully thought out.
4. Redundant: good research designs have some flexibility built into them. Often this flexibility results from duplication of essential design features.
5. Efficient: good research designs strike a balance between redundancy and the tendency to overdesign. Where it is reasonable, other, less costly strategies for ruling out potential threats to validity are utilized.
Validity and reliability
Validity
 Validity refers to the extent to which you are measuring what you intended to measure. It indicates the accuracy of a measure.
 According to Joppe, "Validity refers to the truthfulness of findings. It determines whether the research truly measures what it was intended to measure, or how truthful the research results are".
 For example, if a teacher wants to test the grammatical skill of students but the test is designed in such a way that it measures listening skill instead of grammatical skill, it is not valid for the purpose it was originally intended.
 A valid measure should satisfy three types/criteria: Content Validity, Construct Validity, and Criterion-related Validity.
Content Validity
• It ensures that the measure/test includes an adequate
and representative set of items that tap the concept.
• The more the scale items represent the domain or
universe of the concept being measured, the greater the
content validity.
• For example, if we want to test knowledge of American Geography, it is not fair to have most questions limited to the geography of New England.
• A panel of judges can attest to the content validity of the instrument.
• Kidder and Judd cite the example where a test designed
to measure degrees of speech damage can be considered
as having validity if it is so evaluated by a group of expert
judges (i.e. professional speech therapists).
Construct Validity
• Construct validity testifies to how well the results obtained from the use of the measure fit the theories around which the test is designed.
• It seeks agreement between a theoretical concept and our specific measuring procedure.
• In other words, it is concerned with the factors that lie behind the measurement scores obtained; with what factors (i.e. constructs) are responsible for the variance in measurement scores.
• This is assessed through convergent and discriminant validity.
• Convergent validity is established when the scores obtained with two different instruments measuring the same concept are highly correlated.
• Discriminant validity is established when, based on theory, two variables are predicted to be uncorrelated, and the scores obtained by measuring them are indeed empirically found to be uncorrelated.
Criterion-Related Validity
• Criterion-related validity is used to demonstrate
the accuracy of a measure or procedure by
comparing it with another measure or procedure
which has been demonstrated to be valid.
• A job applicant takes a performance test during the
interview process. If this test accurately predicts
how well the employee will perform on the job, the
test is said to have criterion validity.
• A graduate student takes the GRE. The GRE has
been shown as an effective tool (i.e. it has criterion
validity) for predicting how well a student will
perform in graduate studies.
Criterion-Related Validity
i) Predictive Validity: It is judged by the degree to which an instrument can forecast an outcome.
ii) Concurrent Validity: Concurrent validity measures how well a new test compares to a well-established test. It can also refer to the practice of concurrently testing two groups at the same time, or asking two different groups of people to take the same test.
Example: If you create a new test for depression levels, you can compare its performance to previous depression tests (like a 42-item depression level survey) that have high validity. If the results of both are similar, the new test is valid.
Reliability
• Reliability is the degree of consistency of a measure. A test is reliable when it gives the same repeated result under the same conditions.
• In other words, it refers to whether your data collection techniques and analytical procedures would reproduce consistent findings if they were repeated on another occasion or if they were replicated by another researcher.
• It is assessed on the basis of stability (including test-retest and alternative-form) and consistency (split-half and inter-item) of measures.
Types of Reliability
• Test-Retest Reliability: The reliability coefficient obtained by repetition of the same measure on a second occasion is called test-retest reliability.
• That is, when a questionnaire containing items that are supposed to measure a concept is administered twice to the same set of respondents, with an interval of time between the administrations, the correlation between the two sets of scores is called test-retest reliability.
• Alternative-Form Reliability: a measure of reliability obtained by administering two different tests (both tests must contain items that address the same construct, skill, knowledge base, etc.) to the same group of individuals within a short period of time. The scores from the two tests can then be correlated in order to evaluate the consistency of results across alternate versions.
Split-Half Reliability
• It is used for measuring the internal consistency of a test.
• In split-half reliability, a test for a single knowledge area is split into two parts, and both parts are given to one group of students at the same time. The scores from both parts of the test are correlated.
• The scores from both parts of the test are correlated with the help of Karl Pearson's coefficient of correlation or Spearman's rank difference method.
• A reliable test will have a high correlation, indicating that a student would perform equally well (or as poorly) on both halves of the test.
Relationship between Validity and Reliability
• In research, a researcher wants to use measuring instruments that are high on both validity and reliability. But in practice we may find any of four relationships between validity and reliability:
1. High reliability, but low validity: The indicators measure something consistently, but not the intended concept. Consider the SAT, used as a predictor of success in college. It is a reliable test (high scores relate to high GPA), though only a moderately valid indicator of success.
2. High validity, but low reliability: The indicator represents the concept well, but does not produce consistent measurements.
3. Low validity and low reliability: The worst case; the indicators neither measure the concept nor produce consistent results of whatever they measure.
4. High validity and high reliability: What we hope for; the indicators consistently measure what we intended to measure.
We can use the target analogy to understand this relationship.