
ASSIGNMENT SET-1

Q 1. Give examples of specific situations that would call for the following types of
research, explaining why – a) Exploratory research b) Descriptive research c) Diagnostic
research d) Evaluation research.

Ans.: Research may be classified crudely according to its major intent or the methods used.
According to its intent, research may be classified as:
Basic (aka fundamental or pure) research is driven by a scientist's curiosity or interest in a
scientific question. The main motivation is to expand man's knowledge, not to create or invent
something. There is no obvious commercial value to the discoveries that result from basic
research.

For example, basic science investigations probe for answers to questions such as:
• How did the universe begin?

• What are protons, neutrons, and electrons composed of?

• How do slime molds reproduce?

• What is the specific genetic code of the fruit fly?

Most scientists believe that a basic, fundamental understanding of all branches of science is
needed in order for progress to take place. In other words, basic research lays down the
foundation for the applied science that follows. If basic work is done first, then applied spin-offs
often eventually result from this research. As Dr. George Smoot of LBNL says, "People cannot
foresee the future well enough to predict what's going to develop from basic research. If we only
did applied research, we would still be making better spears."

Applied research is designed to solve practical problems of the modern world, rather than to
acquire knowledge for knowledge's sake. One might say that the goal of the applied scientist is to
improve the human condition.

For example, applied researchers may investigate ways to:


• Improve agricultural crop production

• Treat or cure a specific disease

• Improve the energy efficiency of homes, offices, or modes of transportation

Some scientists feel that the time has come for a shift in emphasis away from purely basic
research and toward applied science. This trend, they feel, is necessitated by the problems
resulting from global overpopulation, pollution, and the overuse of the earth's natural resources.
Exploratory research provides insights into and comprehension of an issue or situation. It
should draw definitive conclusions only with extreme caution. Exploratory research is a type of
research conducted because a problem has not been clearly defined. Exploratory research helps
determine the best research design, data collection method and selection of subjects. Given its
fundamental nature, exploratory research often concludes that a perceived problem does not
actually exist.
Exploratory research often relies on secondary research such as reviewing available literature
and/or data, or qualitative approaches such as informal discussions with consumers, employees,
management or competitors, and more formal approaches through in-depth interviews, focus
groups, projective methods, case studies or pilot studies. The Internet allows for research
methods that are more interactive in nature: E.g., RSS feeds efficiently supply researchers with
up-to-date information; major search engine search results may be sent by email to researchers
by services such as Google Alerts; comprehensive search results are tracked over lengthy
periods of time by services such as Google Trends; and Web sites may be created to attract
worldwide feedback on any subject.
The results of exploratory research are not usually useful for decision-making by themselves, but
they can provide significant insight into a given situation. Although the results of qualitative
research can give some indication as to the "why", "how" and "when" something occurs, it cannot
tell us "how often" or "how many."
Exploratory research is not typically generalizable to the population at large.
A defining characteristic of causal research is the random assignment of participants to the
conditions of the experiment; e.g., an Experimental and a Control Condition... Such assignment
results in the groups being comparable at the beginning of the experiment. Any difference
between the groups at the end of the experiment is attributable to the manipulated variable.
Observational research typically looks for differences among intact, pre-defined groups. A common
example compares smokers and non-smokers with regard to health problems. Causal
conclusions can't be drawn from such a study because of other possible differences between the
groups; e.g., smokers may drink more alcohol than non-smokers. Other unknown differences
could exist as well. Hence, we may see a relation between smoking and health, but a conclusion
that smoking is a cause would not be warranted in this situation.
Descriptive research, also known as statistical research, describes data and characteristics
about the population or phenomenon being studied. Descriptive research answers the questions
who, what, where, when and how.
Although the data description is factual, accurate and systematic, the research cannot describe
what caused a situation. Thus, descriptive research cannot be used to create a causal
relationship, where one variable affects another. In other words, descriptive research can be said
to have a low requirement for internal validity.
The description is used for frequencies, averages and other statistical calculations. Often the best
approach, prior to writing descriptive research, is to conduct a survey investigation. Qualitative
research often has the aim of description and researchers may follow-up with examinations of
why the observations exist and what the implications of the findings are.
In short, descriptive research deals with everything that can be counted and studied, but there are
always restrictions to that. The research must have an impact on the lives of the people around
you. For example, consider a study identifying the most frequent disease that affects the children of
a town: the reader of the research will know what to do to prevent that disease, and thus more
people will live a healthy life.
Diagnostic study: This is similar to a descriptive study but with a different focus. It is directed towards
discovering what is happening and what can be done about it. It aims at identifying the causes of a
problem and the possible solutions for it. It may also be concerned with discovering and testing
whether certain variables are associated. This type of research requires prior knowledge of the
problem, its thorough formulation, clear-cut definition of the given population, adequate methods
for collecting accurate information, precise measurement of variables, statistical analysis and test
of significance.
Evaluation Studies: it is a type of applied research. It is made for assessing the effectiveness of
social or economic programmes implemented or for assessing the impact of development of the
project area. It is thus directed to assess or appraise the quality and quantity of an activity and its
performance and to specify its attributes and conditions required for its success. It is concerned
with causal relationships and is more actively guided by hypothesis. It is concerned also with
change over time.
Action research is a reflective process of progressive problem solving led by individuals working
with others in teams or as part of a "community of practice" to improve the way they address
issues and solve problems. Action research can also be undertaken by larger organizations or
institutions, assisted or guided by professional researchers, with the aim of improving their
strategies, practices, and knowledge of the environments within which they practice. As designers
and stakeholders, researchers work with others to propose a new course of action to help their
community improve its work practices (Center for Collaborative Action Research). Kurt Lewin,
then a professor at MIT, first coined the term “action research” in about 1944, and it appears in
his 1946 paper “Action Research and Minority Problems”. In that paper, he described action
research as “a comparative research on the conditions and effects of various forms of social
action and research leading to social action” that uses “a spiral of steps, each of which is
composed of a circle of planning, action, and fact-finding about the result of the action”.
Action research is an interactive inquiry process that balances problem solving actions
implemented in a collaborative context with data-driven collaborative analysis or research to
understand underlying causes enabling future predictions about personal and organizational
change (Reason & Bradbury, 2001). After six decades of action research development, many
methodologies have evolved that adjust the balance to focus more on the actions taken or more
on the research that results from the reflective understanding of the actions. This tension exists
between
• those that are more driven by the researcher’s agenda and those more driven by
participants;
• those that are motivated primarily by instrumental goal attainment and those motivated
primarily by the aim of personal, organizational, or societal transformation; and
• 1st-, 2nd-, and 3rd-person research, that is, my research on my own action, aimed
primarily at personal change; our research on our group (family/team), aimed
primarily at improving the group; and ‘scholarly’ research aimed primarily at
theoretical generalization and/or large-scale change.
Action research challenges traditional social science, by moving beyond reflective knowledge
created by outside experts sampling variables to an active moment-to-moment theorizing, data
collecting, and inquiring occurring in the midst of emergent structure. “Knowledge is always
gained through action and for action. From this starting point, to question the validity of social
knowledge is to question, not how to develop a reflective science about action, but how to
develop genuinely well-informed action — how to conduct an action science” (Tolbert 2001).

Q 2.In the context of hypothesis testing, briefly explain the difference between a) Null and
alternative hypothesis b) Type 1 and type 2 error c) Two tailed and one tailed test d)
Parametric and non-parametric tests.

Ans.: Some basic concepts in the context of testing of hypotheses are explained below -
1) Null Hypotheses and Alternative Hypotheses: In the context of statistical analysis,
we often talk about null and alternative hypotheses. If we are to compare the
superiority of method A with that of method B and we proceed on the assumption that
both methods are equally good, then this assumption is termed as a null hypothesis.
On the other hand, if we think that method A is superior, then it is known as an
alternative hypothesis.
These are symbolically represented as:
Null hypothesis = H0 and Alternative hypothesis = Ha
Suppose we want to test the hypothesis that the population mean µ is equal to the hypothesized
mean µH0 = 100. Then we would say that the null hypothesis is that the population mean is equal
to the hypothesized mean 100, and symbolically we can express it as: H0: µ = µH0 = 100.
If our sample results do not support this null hypothesis, we should conclude that something else
is true. What we conclude on rejecting the null hypothesis is known as the alternative hypothesis. If
we accept H0, then we are rejecting Ha, and if we reject H0, then we are accepting Ha. For
H0: µ = µH0 = 100, we may consider three possible alternative hypotheses, as follows:

Alternative hypothesis    To be read as follows
Ha: µ ≠ µH0               The population mean is not equal to 100 (i.e., it may be more or less than 100)
Ha: µ > µH0               The population mean is greater than 100
Ha: µ < µH0               The population mean is less than 100

The null hypotheses and the alternative hypotheses are chosen before the sample is drawn (the
researcher must avoid the error of deriving hypotheses from the data he collects and testing the
hypotheses from the same data). In the choice of null hypothesis, the following considerations are
usually kept in view:
a. The alternative hypothesis is usually the one which is to be proved, and the null
hypothesis is the one that is to be disproved. Thus a null hypothesis represents
the hypothesis we are trying to reject, while the alternative hypothesis represents
all other possibilities.
b. If the rejection of a certain hypothesis when it is actually true involves great risk, it
is taken as null hypothesis, because then the probability of rejecting it when it is
true is α (the level of significance) which is chosen very small.
c. The null hypothesis should always be a specific hypothesis i.e., it should not state
an approximate value.
Generally, in hypothesis testing, we proceed on the basis of the null hypothesis, keeping the
alternative hypothesis in view. Why so? The answer is that on the assumption that the null
hypothesis is true, one can assign the probabilities to different possible sample results, but this
cannot be done if we proceed with alternative hypotheses. Hence the use of null hypotheses (at
times also known as statistical hypotheses) is quite frequent.
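To make the formulation above concrete, the following is a minimal sketch (added here for illustration, not part of the original answer) of testing H0: µ = µH0 = 100 against the two-sided alternative Ha: µ ≠ 100 with a one-sample t-test in Python; the sample values are invented.

# Hypothetical illustration: one-sample t-test of H0: mu = 100 vs Ha: mu != 100.
# The data are invented; scipy.stats supplies the test.
import numpy as np
from scipy import stats

sample = np.array([102.1, 98.4, 101.3, 105.0, 99.7, 103.2, 100.8, 97.9])
hypothesised_mean = 100
alpha = 0.05  # level of significance, fixed before looking at the data

t_stat, p_value = stats.ttest_1samp(sample, popmean=hypothesised_mean)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
if p_value < alpha:
    print("Reject H0 in favour of Ha: mu != 100")
else:
    print("Fail to reject H0")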
2) The Level of Significance: This is a very important concept in the context of
hypothesis testing. It is always some percentage (usually 5%), which should be
chosen with great care, thought and reason. In case we take the significance
level at 5%, then this implies that H0 will be rejected when the sampling result
(i.e., observed evidence) has a less than 0.05 probability of occurring if H0 is
true. In other words, the 5% level of significance means that the researcher is
willing to take as much as a 5% risk of rejecting the null hypothesis when it (H0)
happens to be true. Thus the significance level is the maximum value of the
probability of rejecting H0 when it is true and is usually determined in advance
before testing the hypothesis.
3) Decision Rule or Test of Hypotheses: Given a null hypothesis H0 and an
alternative hypothesis Ha, we make a rule, which is known as a decision rule,
according to which we accept H0 (i.e., reject Ha) or reject H0 (i.e., accept Ha).
For instance, if H0 is that a certain lot is good (there are very few defective items
in it), against Ha, that the lot is not good (there are many defective items in it),
then we must decide the number of items to be tested and the criterion for
accepting or rejecting the hypothesis. We might test 10 items in the lot and plan
our decision saying that if there are none or only 1 defective item among the 10,
we will accept H0; otherwise we will reject H0 (or accept Ha). This sort of basis is
known as a decision rule.
34) Type I & II Errors: In the context of testing of hypotheses, there are basically two
types of errors that we can make. We may reject H0 when H0 is true and we may
accept H0 when it is not true. The former is known as Type I and the latter is
known as Type II. In other words, Type I error means rejection of hypotheses,
which should have been accepted, and Type II error means accepting of
hypotheses, which should have been rejected. Type I error is denoted by α
(alpha), also called as level of significance of test; and Type II error is denoted by
β(beta).
Decision
                  Accept H0                  Reject H0
H0 is true        Correct decision           Type I error (α error)
H0 is false       Type II error (β error)    Correct decision

The probability of Type I error is usually determined in advance and is understood as the level of
significance of testing the hypotheses. If type I error is fixed at 5%, it means there are about 5
chances in 100 that we will reject H0 when H0 is true. We can control type I error just by fixing it
at a lower level. For instance, if we fix it at 1%, we will say that the maximum probability of
committing type I error would only be 0.01.
But with a fixed sample size n, when we try to reduce the Type I error, the probability of committing
a Type II error increases. Both types of errors cannot be reduced simultaneously, since there is a
trade-off between them. In business situations, decision makers decide the appropriate level of
Type I error by examining the costs or penalties attached to both types of errors. If a Type I error
involves the time and trouble of reworking a batch of chemicals that should have been accepted,
whereas a Type II error means taking the chance that an entire group of users of this chemical
compound will be poisoned, then in such a situation one should prefer a Type I error to a Type II
error. As a result, one must set a very high level for the Type I error in one’s testing technique for
the given hypothesis.
Hence, in testing of hypotheses, one must make all possible efforts to strike an adequate balance
between Type I & Type II error.
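The trade-off described above can be seen numerically. The following simulation sketch (an illustration added here, not part of the original text) estimates the Type I error rate when H0 is true and the Type II error rate when the true mean is actually 105, using repeated one-sample t-tests on simulated normal data.

# Hypothetical simulation of Type I and Type II error rates for H0: mu = 100.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n, n_trials = 0.05, 30, 5000

def rejection_rate(true_mean):
    """Fraction of trials in which H0: mu = 100 is rejected at level alpha."""
    rejections = 0
    for _ in range(n_trials):
        sample = rng.normal(loc=true_mean, scale=10, size=n)
        _, p = stats.ttest_1samp(sample, popmean=100)
        rejections += p < alpha
    return rejections / n_trials

type_1 = rejection_rate(true_mean=100)      # H0 true: any rejection is a Type I error
type_2 = 1 - rejection_rate(true_mean=105)  # H0 false: failing to reject is a Type II error
print(f"Estimated Type I error: {type_1:.3f} (close to alpha = {alpha})")
print(f"Estimated Type II error when mu = 105: {type_2:.3f}")

Lowering alpha in this sketch reduces the estimated Type I error but visibly raises the Type II error, which is exactly the trade-off described above.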
5) Two-Tailed Test & One-Tailed Test: In the context of hypothesis testing, these two terms
are quite important and must be clearly understood. A two-tailed test rejects the null hypothesis if,
say, the sample mean is significantly higher or lower than the hypothesized value of the population
mean. Such a test is appropriate when we have H0: µ = µH0 and Ha: µ ≠ µH0, which may mean
µ > µH0 or µ < µH0. If the significance level is 5% and a two-tailed test is applied, the probability of
the rejection region will be 0.05 (split equally between the two tails of the curve as 0.025 each) and
that of the acceptance region will be 0.95. If we take µH0 = 100 and our sample mean deviates
significantly from 100, we shall reject the null hypothesis; otherwise we shall accept it. But there are
situations when only a one-tailed test is considered appropriate. A one-tailed test is used when we are to
test, say, whether the population mean is either lower than or higher than some hypothesized value.
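As a rough illustration of the difference (an added sketch, not from the original text), recent versions of SciPy let the same one-sample t-test be run with a two-sided or a one-sided alternative via the 'alternative' argument; the data below are invented.

# Hypothetical comparison of two-tailed and one-tailed tests of H0: mu = 100.
import numpy as np
from scipy import stats

sample = np.array([103.2, 101.5, 99.8, 104.1, 102.7, 100.9, 98.6, 105.3])

_, p_two_sided = stats.ttest_1samp(sample, 100, alternative="two-sided")  # Ha: mu != 100
_, p_one_sided = stats.ttest_1samp(sample, 100, alternative="greater")    # Ha: mu > 100

print(f"two-tailed p = {p_two_sided:.3f}, one-tailed p = {p_one_sided:.3f}")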
Parametric statistics is a branch of statistics that assumes the data come from a particular type of
probability distribution and makes inferences about the parameters of that distribution. Most
well-known elementary statistical methods are parametric.
Generally speaking, parametric methods make more assumptions than non-parametric
methods. If those extra assumptions are correct, parametric methods can produce more accurate
and precise estimates. They are said to have more statistical power. However, if those
assumptions are incorrect, parametric methods can be very misleading. For that reason they are
often not considered robust. On the other hand, parametric formulae are often simpler to write
down and faster to compute. In some, but definitely not all cases, their simplicity makes up for
their non-robustness, especially if care is taken to examine diagnostic statistics.
Because parametric statistics require a probability distribution, they are not distribution-free.
Non-parametric models differ from parametric models in that the model structure is not
specified a priori but is instead determined from data. The term nonparametric is not meant to
imply that such models completely lack parameters but that the number and nature of the
parameters are flexible and not fixed in advance.
Kernel density estimation can provide better estimates of a density than histograms.
Nonparametric regression and semi parametric regression methods have been developed based
on kernels, splines, and wavelets.
Data Envelopment Analysis provides efficiency coefficients similar to those obtained
by Multivariate Analysis without any distributional assumption.
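To illustrate the parametric/non-parametric distinction, the sketch below (added here, with invented data) runs the same two-group comparison with a parametric independent-samples t-test and with its common distribution-free counterpart, the Mann-Whitney U test.

# Hypothetical comparison of a parametric and a non-parametric two-sample test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(loc=50, scale=5, size=25)
group_b = rng.normal(loc=54, scale=5, size=25)

_, p_parametric = stats.ttest_ind(group_a, group_b)        # assumes normal populations
_, p_nonparametric = stats.mannwhitneyu(group_a, group_b)  # makes no distributional assumption

print(f"t-test p = {p_parametric:.4f}, Mann-Whitney p = {p_nonparametric:.4f}")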

Q 3. Explain the difference between a causal relationship and correlation, with an example
of each. What are the possible reasons for a correlation between two variables?
Ans.: Correlation: Correlation, in this context, is knowing what the consumer wants and providing it.
Marketing research looks at trends in sales and studies all of the variables, i.e. price, color,
availability, and styles, and the best way to give the customer what he or she wants. If you can
give customers what they want, they will buy, and let friends and family know where they got it.
Keeping them happy brings in the money.

Causal relationship: Relationship Marketing was first defined as a form of marketing developed from
direct response marketing campaigns, which emphasizes customer retention and satisfaction rather
than a dominant focus on sales transactions.

As a practice, Relationship Marketing differs from other forms of marketing in that it recognizes
the long term value of customer relationships and extends communication beyond intrusive
advertising and sales promotional messages.

With the growth of the internet and mobile platforms, Relationship Marketing has continued to
evolve and move forward as technology opens more collaborative and social communication
channels. This includes tools for managing relationships with customers that go beyond simple
demographic and customer service data. Relationship Marketing extends to include Inbound
Marketing efforts (a combination of search optimization and Strategic Content), PR, Social Media
and Application Development.

Just like Customer relationship management(CRM), Relationship Marketing is a broadly


recognized, widely-implemented strategy for managing and nurturing a company’s interactions
with clients and sales prospects. It also involves using technology to organize and synchronize
business processes (principally sales and marketing activities) and, most importantly, to automate
those marketing and communication activities as concrete marketing sequences that can run on
autopilot. The overall goals are to find, attract, and win new
clients, nurture and retain those the company already has, entice former clients back into the fold,
and reduce the costs of marketing and client service. [1] Once simply a label for a category of
software tools, today, it generally denotes a company-wide business strategy embracing all client-
facing departments and even beyond. When an implementation is effective, people, processes,
and technology work in synergy to increase profitability and reduce operational costs.

Reasons for a correlation between two variables: Chance association, (the relationship is due
to chance) or causative association (one variable causes the other).
The information given by a correlation coefficient is not enough to define the dependence
structure between random variables. The correlation coefficient completely defines the
dependence structure only in very particular cases, for example when the distribution is a
multivariate normal distribution. In the case of elliptic distributions it
characterizes the (hyper-)ellipses of equal density, however, it does not completely characterize
the dependence structure (for example, a multivariate t-distribution's degrees of freedom
determine the level of tail dependence).

Distance correlation and Brownian covariance / Brownian correlation [8][9] were introduced to
address the deficiency of Pearson's correlation that it can be zero for dependent random
variables; zero distance correlation and zero Brownian correlation imply independence.
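The deficiency mentioned above is easy to demonstrate. In the sketch below (added for illustration, with simulated data), y is fully determined by x, yet the Pearson correlation is close to zero because the relationship is non-linear.

# Hypothetical demonstration: perfect non-linear dependence, near-zero Pearson correlation.
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, size=10_000)
y = x ** 2                      # y depends entirely on x, but not linearly

r = np.corrcoef(x, y)[0, 1]
print(f"Pearson r = {r:.4f}")   # approximately 0 despite complete dependence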

The correlation ratio is able to detect almost any functional dependency, while the entropy-based
mutual information/total correlation is capable of detecting even more general dependencies. The
latter are sometimes referred to as multi-moment correlation measures, in comparison to those
that consider only second-moment (pairwise or quadratic) dependence.

The polychoric correlation is another correlation applied to ordinal data that aims to estimate the
correlation between theorised latent variables.

One way to capture a more complete view of the dependence structure is to consider a copula
between the variables.

Q 4. Briefly explain any two factors that affect the choice of a sampling technique. What
are the characteristics of a good sample?

Ans.: The difference between non-probability and probability sampling is that non-probability
sampling does not involve random selection and probability sampling does. Does that mean that
non-probability samples aren't representative of the population? Not necessarily. But it does
mean that non-probability samples cannot depend upon the rationale of probability theory. At
least with a probabilistic sample, we know the odds or probability that we have represented the
population well. We are able to estimate confidence intervals for the statistic. With non-probability
samples, we may or may not represent the population well, and it will often be hard for us to know
how well we've done so. In general, researchers prefer probabilistic or random sampling methods
over non probabilistic ones, and consider them to be more accurate and rigorous. However, in
applied social research there may be circumstances where it is not feasible, practical or
theoretically sensible to do random sampling. Here, we consider a wide range of non-probabilistic
alternatives.
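As a small illustration of why a probability sample supports such statements (an added sketch with simulated data, not part of the original answer), a simple random sample allows a confidence interval to be attached to the sample mean.

# Hypothetical example: 95% confidence interval from a simple random sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
population = rng.normal(loc=60, scale=12, size=100_000)   # invented population
sample = rng.choice(population, size=200, replace=False)  # simple random sample

mean = sample.mean()
sem = stats.sem(sample)                                   # standard error of the mean
ci_low, ci_high = stats.t.interval(0.95, len(sample) - 1, loc=mean, scale=sem)
print(f"sample mean = {mean:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")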

We can divide non-probability sampling methods into two broad types:


Accidental or purposive.

Most sampling methods are purposive in nature because we usually approach the
sampling problem with a specific plan in mind. The most important distinctions among these types
of sampling methods are the ones between the different types of purposive sampling approaches.

Accidental, Haphazard or Convenience Sampling


One of the most common methods of sampling goes under the various titles listed here. I
would include in this category the traditional "man on the street" (of course, now it's probably the
"person on the street") interviews conducted frequently by television news programs to get a
quick (although non representative) reading of public opinion. I would also argue that the typical
use of college students in much psychological research is primarily a matter of convenience. (You
don't really believe that psychologists use college students because they believe they're
representative of the population at large, do you?). In clinical practice, we might use clients who
are available to us as our sample. In many research contexts, we sample simply by asking for
volunteers. Clearly, the problem with all of these types of samples is that we have no evidence
that they are representative of the populations we're interested in generalizing to -- and in many
cases we would clearly suspect that they are not.

Purposive Sampling
In purposive sampling, we sample with a purpose in mind. We usually would have one or
more specific predefined groups we are seeking. For instance, have you ever run into people in a
mall or on the street who are carrying a clipboard and who are stopping various people and
asking if they could interview them? Most likely they are conducting a purposive sample (and
most likely they are engaged in market research). They might be looking for Caucasian females
between 30-40 years old. They size up the people passing by and anyone who looks to be in that
category they stop to ask if they will participate. One of the first things they're likely to do is verify
that the respondent does in fact meet the criteria for being in the sample. Purposive sampling can
be very useful for situations where you need to reach a targeted sample quickly and where
sampling for proportionality is not the primary concern. With a purposive sample, you are likely to
get the opinions of your target population, but you are also likely to overweight subgroups in your
population that are more readily accessible.
All of the methods that follow can be considered subcategories of purposive sampling
methods. We might sample for specific groups or types of people as in modal instance, expert, or
quota sampling. We might sample for diversity as in heterogeneity sampling. Or, we might
capitalize on informal social networks to identify specific respondents who are hard to locate
otherwise, as in snowball sampling. In all of these methods we know what we want -- we are
sampling with a purpose.

• Modal Instance Sampling


In statistics, the mode is the most frequently occurring value in a distribution. In sampling, when
we do a modal instance sample, we are sampling the most frequent case, or the "typical" case. In
a lot of informal public opinion polls, for instance, they interview a "typical" voter. There are a
number of problems with this sampling approach. First, how do we know what the "typical" or
"modal" case is? We could say that the modal voter is a person who is of average age,
educational level, and income in the population. But, it's not clear that using the averages of these
is the fairest (consider the skewed distribution of income, for instance). And, how do you know
that those three variables -- age, education, income -- are the only or even the most relevant for
classifying the typical voter? What if religion or ethnicity is an important discriminator? Clearly,
modal instance sampling is only sensible for informal sampling contexts.

• Expert Sampling
Expert sampling involves the assembling of a sample of persons with known or demonstrable
experience and expertise in some area. Often, we convene such a sample under the auspices of
a "panel of experts." There are actually two reasons you might do expert sampling. First, because
it would be the best way to elicit the views of persons who have specific expertise. In this case,
expert sampling is essentially just a specific sub case of purposive sampling. But the other reason
you might use expert sampling is to provide evidence for the validity of another sampling
approach you've chosen. For instance, let's say you do modal instance sampling and are
concerned that the criteria you used for defining the modal instance are subject to criticism. You
might convene an expert panel consisting of persons with acknowledged experience and insight
into that field or topic and ask them to examine your modal definitions and comment on their
appropriateness and validity. The advantage of doing this is that you aren't out on your own trying
to defend your decisions -- you have some acknowledged experts to back you. The disadvantage
is that even the experts can be, and often are, wrong.

• Quota Sampling
In quota sampling, you select people non-randomly according to some fixed quota. There are two
types of quota sampling: proportional and non-proportional. In proportional quota sampling you
want to represent the major characteristics of the population by sampling a proportional amount
of each. For instance, if you know the population has 40% women and 60% men, and that you
want a total sample size of 100, you will continue sampling until you get those percentages and
then you will stop. So, if you've already got the 40 women for your sample, but not the sixty men,
you will continue to sample men but even if legitimate women respondents come along, you will
not sample them because you have already "met your quota." The problem here (as in much
purposive sampling) is that you have to decide the specific characteristics on which you will base
the quota. Will it be by gender, age, education, race, religion, etc.?
Non-proportional quota sampling is a bit less restrictive. In this method, you specify the
minimum number of sampled units you want in each category. Here, you're not concerned with
having numbers that match the proportions in the population. Instead, you simply want to have
enough to assure that you will be able to talk about even small groups in the population. This
method is the non-probabilistic analogue of stratified random sampling in that it is typically used
to assure that smaller groups are adequately represented in your sample.
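The arithmetic of proportional quota allocation, and its probability-based analogue (proportionate stratified random sampling), can be sketched as follows; the data frame, column names and percentages below are invented purely for illustration.

# Hypothetical sketch: proportional quota allocation and a stratified random analogue.
import pandas as pd

# Invented population frame: 40% women, 60% men.
frame = pd.DataFrame({
    "id": range(1000),
    "gender": ["female"] * 400 + ["male"] * 600,
})
total_sample = 100

# Quota allocation: 40 women and 60 men, filled however respondents are encountered.
quotas = (frame["gender"].value_counts(normalize=True) * total_sample).round().astype(int)
print(quotas.to_dict())                      # {'male': 60, 'female': 40}

# Probability-based analogue: draw each quota at random within its stratum.
stratified = frame.groupby("gender", group_keys=False).apply(
    lambda g: g.sample(n=quotas[g.name], random_state=0)
)
print(stratified["gender"].value_counts().to_dict())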

• Heterogeneity Sampling
We sample for heterogeneity when we want to include all opinions or views, and we aren't
concerned about representing these views proportionately. Another term for this is sampling for
diversity. In many brainstorming or nominal group processes (including concept mapping), we
would use some form of heterogeneity sampling because our primary interest is in getting a broad
spectrum of ideas, not identifying the "average" or "modal instance" ones. In effect, what we
would like to be sampling is not people, but ideas. We imagine that there is a universe of all
possible ideas relevant to some topic and that we want to sample this population, not the
population of people who have the ideas. Clearly, in order to get all of the ideas, and especially
the "outlier" or unusual ones, we have to include a broad and diverse range of participants.
Heterogeneity sampling is, in this sense, almost the opposite of modal instance sampling.

• Snowball Sampling
In snowball sampling, you begin by identifying someone who meets the criteria for inclusion in
your study. You then ask them to recommend others who they may know who also meet the
criteria. Although this method would hardly lead to representative samples, there are times when
it may be the best method available. Snowball sampling is especially useful when you are trying
to reach populations that are inaccessible or hard to find. For instance, if you are studying the
homeless, you are not likely to be able to find good lists of homeless people within a specific
geographical area. However, if you go to that area and identify one or two, you may find that they
know very well who the other homeless people in their vicinity are and how you can find them.
Characteristics of a good sample: The decision process is a complicated one. The researcher
has to first identify the limiting factor or factors and must judiciously balance the conflicting
factors. The various criteria governing the choice of the sampling technique are:
1. Purpose of the Survey: What does the researcher aim at? If he intends to
generalize the findings based on the sample survey to the population, then an
appropriate probability sampling method must be selected. The choice of a particular
type of probability sampling depends on the geographical area of the survey and the
size and the nature of the population under study.
2. Measurability: The application of statistical inference theory requires computation of
the sampling error from the sample itself. Only probability samples allow such
computation. Hence, where the research objective requires statistical inference, the
sample should be drawn by applying simple random sampling method or stratified
random sampling method, depending on whether the population is homogenous or
heterogeneous.
3. Degree of Precision: Should the results of the survey be very precise, or could even
rough results serve the purpose? The desired level of precision is one of the criteria
for sampling method selection. Where a high degree of precision of results is desired,
probability sampling should be used. Where even crude results would serve the
purpose (E.g., marketing surveys, readership surveys etc), any convenient non-
random sampling like quota sampling would be enough.
4. Information about Population: How much information is available about the
population to be studied? Where no list of population and no information about its
nature are available, it is difficult to apply a probability sampling method. Then an
exploratory study with non-probability sampling may be done to gain a better idea of
the population. After gaining sufficient knowledge about the population through the
exploratory study, an appropriate probability sampling design may be adopted.
5. The Nature of the Population: In terms of the variables to be studied, is the
population homogenous or heterogeneous? In the case of a homogenous population,
even simple random sampling will give a representative sample. If the population is
heterogeneous, stratified random sampling is appropriate.
6. Geographical Area of the Study and the Size of the Population: If the area
covered by a survey is very large and the size of the population is quite large, multi-
stage cluster sampling would be appropriate. But if the area and the size of the
population are small, single stage probability sampling methods could be used.
7. Financial Resources: If the available finance is limited, it may become necessary to
choose a less costly sampling plan like multistage cluster sampling, or even quota
sampling as a compromise. However, if the objectives of the study and the desired
level of precision cannot be attained within the stipulated budget, there is no
alternative but to give up the proposed survey. Where the finance is not a constraint,
a researcher can choose the most appropriate method of sampling that fits the
research objective and the nature of population.
8. Time Limitation: The time limit within which the research project should be
completed restricts the choice of a sampling method. Then, as a compromise, it may
become necessary to choose less time consuming methods like simple random
sampling, instead of stratified sampling/sampling with probability proportional to size;
or multi-stage cluster sampling, instead of single-stage sampling of elements. Of
course, the precision has to be sacrificed to some extent.
9. Economy: It should be another criterion in choosing the sampling method. It means
achieving the desired level of precision at minimum cost. A sample is economical if
the precision per unit cost is high, or the cost per unit of variance is low. The above
criteria frequently conflict with each other and the researcher must balance and blend
them to obtain a good sampling plan. The chosen plan thus represents an adaptation
of the sampling theory to the available facilities and resources. That is, it represents a
compromise between idealism and feasibility. One should use simple workable
methods, instead of unduly elaborate and complicated techniques.

Q 5. Select any topic for research and explain how you will use both secondary and
primary sources to gather the required information.

Ans.: Primary Sources of Data


Primary sources are original sources from which the researcher directly collects data that has not
been previously collected, e.g., collection of data directly by the researcher on brand awareness,
brand preference, brand loyalty and other aspects of consumer behavior, from a sample of
consumers by interviewing them. Primary data is first hand information collected through various
methods such as surveys, experiments and observation, for the purposes of the project
immediately at hand.
The advantages of primary data are –
1. It is unique to a particular research study
2. It is recent information, unlike published information that is already available
The disadvantages are –
1. It is expensive to collect, compared to gathering information from available sources
2. Data collection is a time-consuming process
3. It requires trained interviewers and investigators
Secondary Sources of Data
These are sources containing data, which has been collected and compiled for another purpose.
Secondary sources may be internal sources, such as annual reports, financial statements, sales
reports, inventory records, minutes of meetings and other information that is available within the
firm, in the form of a marketing information system. They may also be external sources, such as
government agencies (e.g. census reports, reports of government departments), published
sources (annual reports of currency and finance published by the Reserve Bank of India,
publications of international organizations such as the UN, World Bank and International
Monetary Fund, trade and financial journals, etc.), trade associations (e.g. Chambers of
Commerce) and commercial services (outside suppliers of information).
Methods of Data Collection:
The researcher directly collects primary data from its original sources. In this case, the researcher
can collect the required data precisely according to his research needs and he can collect them
when he wants and in the form that he needs it. But the collection of primary data is costly and
time consuming. Yet, for several types of social science research, required data is not available
from secondary sources and it has to be directly gathered from the primary sources.
Primary data has to be gathered in cases where the available data is inappropriate, inadequate or
obsolete. It includes: socio economic surveys, social anthropological studies of rural communities
and tribal communities, sociological studies of social problems and social institutions, marketing
research, leadership studies, opinion polls, attitudinal surveys, radio listening and T.V. viewing
surveys, knowledge-awareness practice (KAP) studies, farm management studies, business
management studies etc.
There are various methods of primary data collection, including surveys, audits and panels,
observation and experiments.
Survey Research
A survey is a fact-finding study. It is a method of research involving collection of data directly from
a population or a sample at a particular time. A survey has certain characteristics:
1 It is always conducted in a natural setting. It is a field study.
2 It seeks responses directly from the respondents.
3 It can cover a very large population.
4 It may include an extensive study or an intensive study
5 It covers a definite geographical area.

A survey involves the following steps -


1 Selection of a problem and its formulation
2 Preparation of the research design
3 Operation concepts and construction of measuring indexes and scales
4 Sampling
5 Construction of tools for data collection
6 Field work and collection of data
7 Processing of data and tabulation
8 Analysis of data
9 Reporting

There are four basic survey methods, which include:


1 Personal interview
2 Telephone interview
3 Mail survey and
4 Fax survey
Personal Interview
Personal interviewing is one of the prominent methods of data collection. It may be defined as a
two-way systematic conversation between an investigator and an informant, initiated for obtaining
information relevant to a specific study. It involves not only conversation, but also learning from
the respondent’s gestures, facial expressions and pauses, and his environment.
Interviewing may be used either as a main method or as a supplementary one in studies of
persons. Interviewing is the only suitable method for gathering information from illiterate or less
educated respondents. It is useful for collecting a wide range of data, from factual demographic
data to highly personal and intimate information relating to a person’s opinions, attitudes, values,
beliefs, experiences and future intentions. Interviewing is appropriate when qualitative information
is required, or probing is necessary to draw out the respondent fully. Where the area covered for
the survey is compact, or when a sufficient number of qualified interviewers are available,
personal interview is feasible.
Interview is often superior to other data-gathering methods. People are usually more willing to talk
than to write. Once rapport is established, even confidential information may be obtained. It
permits probing into the context and reasons for answers to questions.
Interview can add flesh to statistical information. It enables the investigator to grasp the
behavioral context of the data furnished by the respondents. It permits the investigator to seek
clarifications and brings to the forefront those questions, which for some reason or the other the
respondents do not want to answer. Interviewing as a method of data collection has certain
characteristics. They are:
1. The participants – the interviewer and the respondent – are strangers;
hence, the investigator has to get himself/herself introduced to the
respondent in an appropriate manner.
2. The relationship between the participants is a transitory one. It has a
fixed beginning and termination points. The interview proper is a fleeting,
momentary experience for them.
3. The interview is not a mere casual conversational exchange, but a
conversation with a specific purpose, viz., obtaining information relevant
to a study.
4. The interview is a mode of obtaining verbal answers to questions put
verbally.
5. The interaction between the interviewer and the respondent need not
necessarily be on a face-to-face basis, because the interview can also be
conducted over the telephone.
6. Although the interview is usually a conversation between two persons, it
need not be limited to a single respondent. It can also be conducted with
a group of persons, such as family members, or a group of children, or a
group of customers, depending on the requirements of the study.
7. The interview is an interactive process. The interaction between the
interviewer and the respondent depends upon how they perceive each
other.
8. The respondent reacts to the interviewer’s appearance, behavior,
gestures, facial expression and intonation, his perception of the thrust of
the questions and his own personal needs. As far as possible, the
interviewer should try to be closer to the social-economic level of the
respondents.
9. The investigator records information furnished by the respondent in the
interview. This poses a problem of seeing that recording does not
interfere with the tempo of conversation.
10. Interviewing is not a standardized process like that of a chemical
technician; it is rather a flexible, psychological process.
Telephone Interviewing
Telephone interviewing is a non-personal method of data collection. It may be used as a major
method or as a supplementary method. It will be useful in the following situations:
1. When the universe is composed of those persons whose names are
listed in telephone directories, e.g. business houses, business
executives, doctors and other professionals.
2. When the study requires responses to five or six simple questions, e.g. a
radio or television program survey.
3. When the survey must be conducted in a very short period of time,
provided the units of study are listed in the telephone directory.
4. When the subject is interesting or important to respondents, e.g. a survey
relating to trade conducted by a trade association or a chamber of
commerce, a survey relating to a profession conducted by the concerned
professional association.
5. When the respondents are widely scattered and when there are many
call backs to make.
Group Interviews
A group interview may be defined as a method of collecting primary data in which a number of
individuals with a common interest interact with each other. Unlike in a personal interview, the flow
of information here is multi-dimensional. The group may consist of about six to eight
individuals with a common interest. The interviewer acts as the discussion leader. Free
discussion is encouraged on some aspect of the subject under study. The discussion leader
stimulates the group members to interact with each other. The desired information may be
obtained through self-administered questionnaire or interview, with the discussion serving as a
guide to ensure consideration of the areas of concern. In particular, the interviewers look for
evidence of common elements of attitudes, beliefs, intentions and opinions among individuals in
the group. At the same time, he must be aware that a single comment by a member can provide
important insight. Samples for group interviews can be obtained through schools, clubs and other
organized groups.
Mail Survey
The mail survey is another method of collecting primary data. This method
involves sending questionnaires to the respondents with a request to complete them and return
them by post. This can be used in the case of educated respondents only. The mail
questionnaires should be simple so that the respondents can easily understand the questions and
answer them. It should preferably contain mostly closed-ended and multiple choice questions, so
that it could be completed within a few minutes. The distinctive feature of the mail survey is that
the questionnaire is self-administered by the respondents themselves and the responses are
recorded by them and not by the investigator, as in the case of personal interview method. It does
not involve face-to-face conversation between the investigator and the respondent.
Communication is carried out only in writing and this requires more cooperation from the
respondents than verbal communication. The researcher should prepare a mailing list of the
selected respondents, by collecting the addresses from the telephone directory of the association
or organization to which they belong. The following procedures should be followed:
• A covering letter should accompany a copy of the questionnaire. It must explain to the respondent
the purpose of the study and the importance of his cooperation to the success of the project.
• Anonymity must be assured.
• The sponsor’s identity may be revealed. However, when such information may bias the result, it
is not desirable to reveal it. In this case, a disguised organization name may be used.
• A self-addressed stamped envelope should be enclosed with the covering letter.
After a few days from the date of mailing the questionnaires to the respondents, the researcher
can expect the return of completed ones from them. The progress of returns may be watched and,
at the appropriate stage, follow-up efforts can be made.

The response rate in mail surveys is generally very low in developing countries like India. Certain
techniques have to be adopted to increase the response rate. They are:
1. Quality printing: The questionnaire may be neatly printed on quality light colored paper,
so as to attract the attention of the respondent.
2. Covering letter: The covering letter should be couched in a pleasant style, so as to
attract and hold the interest of the respondent. It must anticipate objections and answer
them briefly. It is desirable to address the respondent by name.
3. Advance information: Advance information can be provided to potential respondents by
a telephone call, or advance notice in the newsletter of the concerned organization, or by
a letter. Such preliminary contact with potential respondents is more successful than
follow-up efforts.
4. Incentives: Money, stamps for collection and other incentives are also used to induce
respondents to complete and return the mail questionnaire.
5. Follow-up contacts: In the case of respondents belonging to an organization, they may
be approached through someone in that organization known to the researcher.
6. Larger sample size: A larger sample may be drawn than the estimated sample size. For
example, if the required sample size is 1000, a sample of 1500 may be drawn. This may
help the researcher to secure an effective sample size closer to the required size.
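The over-sampling suggestion in the last point amounts to simple arithmetic: divide the required number of returns by the expected response rate. A small sketch follows; the response rate used is an assumed figure, purely for illustration.

# Hypothetical calculation of how large an initial mail-out should be.
required_responses = 1000
expected_response_rate = 2 / 3            # assumed response rate for illustration

initial_sample = round(required_responses / expected_response_rate)
expected_returns = round(initial_sample * expected_response_rate)
print(initial_sample, expected_returns)   # 1500 drawn, about 1000 expected back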

Q 6. Case Study: You are engaged to carry out a market survey on behalf of a leading
Newspaper that is keen to increase its circulation in Bangalore City, in order to
ascertain reader habits and interests. Develop a title for the study; define the
research problem and the objectives or questions to be answered by the study.

Ans.: Title: Newspaper Reading Habits and Choices of Readers in Bangalore City

Research problem: A research problem is the situation that causes the researcher to feel
apprehensive, confused and ill at ease. It is the demarcation of a problem area within a certain
context involving the WHO or WHAT, the WHERE, the WHEN and the WHY of the problem
situation.

There are many problem situations that may give rise to research. Three sources usually
contribute to problem identification. Own experience or the experience of others may be a source
of problem supply. A second source could be scientific literature. You may read about certain
findings and notice that a certain field was not covered. This could lead to a research problem.
Theories could be a third source. Shortcomings in theories could be researched.

Research can thus be aimed at clarifying or substantiating an existing theory, at clarifying


contradictory findings, at correcting a faulty methodology, at correcting the inadequate or
unsuitable use of statistical techniques, at reconciling conflicting opinions, or at solving existing
practical problems

Types of questions to be asked: For more than 35 years, the news about newspapers and
young readers has been mostly bad for the newspaper industry. Long before any competition
from cable television or Nintendo, American newspaper publishers were worrying about declining
readership among the young.

As early as 1960, at least 20 years prior to Music Television (MTV) or the Internet, media
research scholars [1] began to focus their studies on young adult readers' decreasing interest in
newspaper content. The concern over a declining youth market preceded and perhaps
foreshadowed today's fretting over market penetration. Even where circulation has grown or
stayed stable, there is rising concern over penetration, defined as the percentage of occupied
households in a geographic market that are served by a newspaper. [2] Simply put, population
growth is occurring more rapidly than newspaper readership in most communities.

This study looks at trends in newspaper readership among the 18-to-34 age group and examines
some of the choices young adults make when reading newspapers.

One of the underlying concerns behind the decline in youth newspaper reading is the question of
how young people view the newspaper. A number of studies explored how young readers
evaluate and use newspaper content.

Comparing reader content preferences over a 10-year period, Gerald Stone and Timothy
Boudreau found differences between readers ages 18-34 and those 35-plus. [16] Younger readers
showed increased interest in national news, weather, sports, and classified advertisements over
the decade between 1984 and 1994, while older readers ranked weather, editorials, and food
advertisements higher. Interest in international news and letters to the editor was less among
younger readers, while older readers showed less interest in reports of births, obituaries, and
marriages.

David Atkin explored the influence of telecommunication technology on newspaper readership


among students in undergraduate media courses. [17] He reported that computer-related
technologies, including electronic mail and computer networks, were unrelated to newspaper
readership. The study found that newspaper subscribers preferred print formats over electronic.
In a study of younger, school-age children, Brian Brooks and James Kropp found that electronic
newspapers could persuade children to become news consumers, but that young readers would
choose an electronic newspaper over a printed one. [18]

In an exploration of leisure reading among college students, Leo Jeffres and Atkin assessed
dimensions of interest in newspapers, magazines, and books, [19] exploring the influence of media
use, non-media leisure, and academic major on newspaper content preferences. The study
discovered that overall newspaper readership was positively related to students' focus on
entertainment, job / travel information, and public affairs. However, the students' preference for
reading as a leisure-time activity was related only to a public affairs focus. Content preferences
for newspapers and other print media were related. The researchers found no significant
differences in readership among various academic majors, or by gender, though there was a
slight correlation between age and the public affairs readership index, with older readers more
interested in news about public affairs.

Methodology

Sample

Participants in this study (N=267) were students enrolled in 100- and 200-level English courses at
a midwestern public university. Courses that comprise the framework for this sample were
selected because they could fulfill basic studies requirements for all majors. A basic studies
course is one that is listed within the core curriculum required for all students. The researcher
obtained permission from seven professors to distribute questionnaires in the eight classes during
regularly scheduled class periods. The students' participation was voluntary; two students
declined. The goal of this sampling procedure was to reach a cross-section of students
representing various fields of study. In all, 53 majors were represented.

Of the 267 students who participated in the study, 65 (24.3 percent) were male and 177 (66.3
percent) were female. A total of 25 participants chose not to divulge their genders. Ages ranged
from 17 to 56, with a mean age of 23.6 years. This mean does not include the 32 respondents
who declined to give their ages. A total of 157 participants (58.8 percent) said they were of the
Caucasian race, 59 (22.1 percent) African American, 10 (3.8 percent) Asian, five (1.9 percent)
African/Native American, two (0.8 percent) Hispanic, two (0.8 percent) Native American, and one (0.4
percent) Arabic. Most (214) of the students were enrolled full time, whereas a few (28) were part-
time students. The class rank breakdown was: freshmen, 45 (16.9 percent); sophomores, 15 (5.6
percent); juniors, 33 (12.4 percent); seniors, 133 (49.8 percent); and graduate students, 16 (6
percent).

Procedure

After two pre-tests and revisions, questionnaires were distributed and collected by the
investigator. In each of the eight classes, the researcher introduced herself to the students as a
journalism professor who was conducting a study on students' use of newspapers and other
media. Each questionnaire included a cover letter with the researcher's name, address, and
phone number. The researcher provided pencils and was available to answer questions if anyone
needed further assistance. The average time spent on the questionnaires was 20 minutes, with
some individual students taking as long as an hour. Approximately six students asked to take the
questionnaires home to finish. They returned the questionnaires to the researcher's mailbox
within a couple of days.

Assignment Set- 2

Q 1. Discuss the relative advantages and disadvantages of the different methods of distributing questionnaires to the respondents of a study.

Ans.: There are some alternative methods of distributing questionnaires to the respondents.
They are:
1) Personal delivery,
2) Attaching the questionnaire to a product,
3) Advertising the questionnaire in a newspaper or magazine, and
4) Newsstand inserts.
Personal delivery: The researcher or his assistant may deliver the questionnaires to the
potential respondents, with a request to complete them at their convenience. After a day or two,
the completed questionnaires can be collected from them. Often referred to as the self-
administered questionnaire method, it combines the advantages of the personal interview and the
mail survey. Alternatively, the questionnaires may be delivered in person and the respondents
may return the completed questionnaires through mail.
Attaching questionnaire to a product: A firm test marketing a product may attach a
questionnaire to a product and request the buyer to complete it and mail it back to the firm. A gift
or a discount coupon usually rewards the respondent.
Advertising the questionnaire: The questionnaire with the instructions for completion may be
advertised on a page of a magazine or in a section of newspapers. The potential respondent
completes it, tears it out and mails it to the advertiser. For example, the committee of Banks
Customer Services used this method for collecting information from the customers of commercial
banks in India. This method may be useful for large-scale studies on topics of common interest.
Newsstand inserts: This method involves inserting the covering letter, questionnaire and self
addressed reply-paid envelope into a random sample of newsstand copies of a newspaper or
magazine.
Advantages and Disadvantages:
The advantages of the questionnaire method, particularly when it is administered repeatedly to a panel of respondents, are:
• This method facilitates collection of more accurate data for longitudinal studies than any other method, because under this method the event or action is reported soon after its occurrence.
• This method makes it possible to have before-and-after designs for field-based studies. For example, the effect of public relations or advertising campaigns or welfare measures can be measured by collecting data before, during and after the campaign.
• The panel method offers a good way of studying trends in events, behavior or attitudes. For example, a panel enables a market researcher to study how brand preferences change from month to month; it enables an economics researcher to study how employment, income and expenditure of agricultural laborers change from month to month; a political scientist can study the shifts in inclinations of voters and the causative influential factors during an election. It is also possible to find out how the constituency of the various economic and social strata of society changes through time, and so on.
• A panel study also provides evidence on the causal relationship between variables. For example, a cross-sectional study of employees may show an association between their attitude to their jobs and their positions in the organization, but it does not indicate which comes first - favorable attitude or promotion. A panel study can provide data for finding an answer to this question.
• It facilitates depth interviewing, because panel members become well acquainted with the field workers and will be willing to allow probing interviews.
The major limitations or problems of the questionnaire (panel) method are:
• This method is very expensive. The selection of panel members, the payment of premiums, periodic training of investigators and supervisors, and the costs involved in replacing dropouts all add to the expenditure.
• It is often difficult to set up a representative panel and to keep it representative. Many persons may be unwilling to participate in a panel study. In the course of the study, there may be frequent dropouts. Persons with similar characteristics may replace the dropouts; however, there is no guarantee that the emerging panel would be representative.
• A real danger with the panel method is “panel conditioning”, i.e., the risk that repeated interviews may sensitize the panel members so that they become untypical as a result of being on the panel. For example, the members of a panel study of political opinions may try to appear consistent in the views they express on consecutive occasions. In such cases, the panel becomes untypical of the population it was selected to represent. One possible safeguard against panel conditioning is to give members of a panel only a limited panel life and then to replace them with persons taken randomly from a reserve list.
• The quality of reporting may tend to decline, due to decreasing interest, after a panel has been in operation for some time. Cheating by panel members or investigators may be a problem in some cases.

Q 2. In processing data, what is the difference between measures of central tendency and
measures of dispersion? What is the most important measure of central tendency and
dispersion?

Ans.: Measures of Central tendency:


Arithmetic Mean
The arithmetic mean is the most common measure of central tendency. It is simply the sum of the numbers divided by the number of numbers. The symbol m is used for the mean of a population; the symbol M is used for the mean of a sample. The formula for m is shown below:

m = ΣX / N

where ΣX is the sum of all the numbers in the sample and N is the number of numbers in the sample. As an example, the mean of the numbers 1, 2, 3, 6 and 8 is (1 + 2 + 3 + 6 + 8) / 5 = 20 / 5 = 4, regardless of whether the numbers constitute the entire population or just a sample from the population.
The table, Number of touchdown passes (Table 1), shows the number of touchdown (TD) passes thrown by each of the 31 teams in the National Football League in the 2000 season. The mean number of touchdown passes thrown is 20.4516, as shown below:

m = ΣX / N = 634 / 31 = 20.4516
37 33 33 32 29 28 28 23
22 22 22 21 21 21 20 20
19 19 18 18 18 18 16 15
14 14 14 12 12 9 6
Table 1: Number of touchdown passes
Although the arithmetic mean is not the only "mean" (there is also a geometric mean), it is by far
the most commonly used. Therefore, if the term "mean" is used without specifying whether it is
the arithmetic mean, the geometric mean, or some other mean, it is assumed to refer to the
arithmetic mean.
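
For illustration only, the calculation above can be reproduced in a few lines of Python; the list simply restates the Table 1 data, and the variable names are made up for this sketch.

# Mean of the touchdown-pass data in Table 1: m = ΣX / N
td_passes = [37, 33, 33, 32, 29, 28, 28, 23,
             22, 22, 22, 21, 21, 21, 20, 20,
             19, 19, 18, 18, 18, 18, 16, 15,
             14, 14, 14, 12, 12, 9, 6]
mean = sum(td_passes) / len(td_passes)
print(round(mean, 4))   # 20.4516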
Median
The median is also a frequently used measure of central tendency. The median is the midpoint of
a distribution: the same number of scores is above the median as below it. For the data in the
table, Number of touchdown passes, there are 31 scores. The 16th highest score (which equals
20) is the median because there are 15 scores below the 16th score and 15 scores above the
16th score. The median can also be thought of as the 50th percentile.
Let's return to the made-up example of the quiz on which you scored a 3, discussed previously in the module Introduction to Central Tendency and shown in Table 2.
Student Dataset 1 Dataset 2 Dataset 3
You 3 3 3
John's 3 4 2
Maria's 3 4 2
Shareecia's 3 4 2
Luther's 3 5 1
Table 2: Three possible datasets for the 5-point make-up quiz
For Dataset 1, the median is three, the same as your score. For Dataset 2, the median is 4.
Therefore, your score is below the median. This means you are in the lower half of the class.
Finally for Dataset 3, the median is 2. For this dataset, your score is above the median and
therefore in the upper half of the distribution.
Computation of the Median: When there is an odd number of numbers, the median is simply the middle number. For example, the median of 2, 4, and 7 is 4. When there is an even number of numbers, the median is the mean of the two middle numbers. Thus, the median of the numbers 2, 4, 7, 12 is (4 + 7) / 2 = 5.5.
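
The rule just described translates directly into a short, illustrative Python sketch (the function name is a made-up label):

# Median: the middle value for an odd count, the mean of the two middle values for an even count
def median(values):
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    if n % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

print(median([2, 4, 7]))       # 4
print(median([2, 4, 7, 12]))   # 5.5

Python's built-in statistics.median function applies the same rule.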
Mode
The mode is the most frequently occurring value. For the data in the table, Number of touchdown
passes, the mode is 18 since more teams (4) had 18 touchdown passes than any other number
of touchdown passes. With continuous data such as response time measured to many decimals,
the frequency of each value is one since no two scores will be exactly the same (see discussion
of continuous variables). Therefore the mode of continuous data is normally computed from a
grouped frequency distribution. The Grouped frequency distribution table shows a grouped
frequency distribution for the target response time data. Since the interval with the highest
frequency is 600-700, the mode is the middle of that interval (650).
Range Frequency
500-600 3
600-700 6
700-800 5
800-900 5
900-1000 0
1000-1100 1
Table 3: Grouped frequency distribution
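
Both versions of the mode described above can be computed with a small illustrative sketch; the raw scores are the Table 1 data and the intervals are taken from Table 3.

# Mode of the raw touchdown data, and mode of a grouped frequency distribution
from collections import Counter

td_passes = [37, 33, 33, 32, 29, 28, 28, 23, 22, 22, 22, 21, 21, 21, 20, 20,
             19, 19, 18, 18, 18, 18, 16, 15, 14, 14, 14, 12, 12, 9, 6]
value, count = Counter(td_passes).most_common(1)[0]
print(value, count)                      # 18 4: the score 18 occurs most often

# Grouped data from Table 3 as (lower bound, upper bound, frequency)
grouped = [(500, 600, 3), (600, 700, 6), (700, 800, 5),
           (800, 900, 5), (900, 1000, 0), (1000, 1100, 1)]
low, high, freq = max(grouped, key=lambda interval: interval[2])
print((low + high) / 2)                  # 650.0, the midpoint of the modal interval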

Measures of Dispersion: A measure of statistical dispersion is a real number that is zero if all
the data are identical, and increases as the data becomes more diverse. It cannot be less than
zero.
Most measures of dispersion have the same scale as the quantity being measured. In other
words, if the measurements have units, such as metres or seconds, the measure of dispersion
has the same units. Such measures of dispersion include:

• Standard deviation
• Interquartile range
• Range
• Mean difference
• Median absolute deviation
• Average absolute deviation (or simply called average deviation)
• Distance standard deviation

These are frequently used (together with scale factors) as estimators of scale parameters, in
which capacity they are called estimates of scale.

All the above measures of statistical dispersion have the useful property that they are location-
invariant, as well as linear in scale. So if a random variable X has a dispersion of SX then a linear
transformation Y = aX + b for real a and b should have dispersion SY = |a|SX.
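
This property is easy to verify numerically. The following illustrative sketch, using NumPy on made-up data, checks that the standard deviation and the interquartile range both satisfy SY = |a|SX under the transformation Y = aX + b:

import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])   # made-up data
a, b = -3.0, 10.0
y = a * x + b                                             # linear transformation Y = aX + b

sx_sd, sy_sd = x.std(), y.std()
sx_iqr = np.percentile(x, 75) - np.percentile(x, 25)
sy_iqr = np.percentile(y, 75) - np.percentile(y, 25)

print(np.isclose(sy_sd, abs(a) * sx_sd))    # True: shifting by b has no effect on dispersion
print(np.isclose(sy_iqr, abs(a) * sx_iqr))  # True: scaling multiplies dispersion by |a|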

Other measures of dispersion are dimensionless (scale-free). In other words, they have no
units even if the variable itself has units. These include:

• Coefficient of variation
• Quartile coefficient of dispersion
• Relative mean difference, equal to twice the Gini coefficient
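
For example, the coefficient of variation (the standard deviation divided by the mean) does not change when the data are re-expressed in different units, which is what makes it dimensionless. A minimal sketch with made-up height data:

import numpy as np

heights_m = np.array([1.55, 1.62, 1.70, 1.74, 1.81])   # made-up heights in metres
heights_cm = heights_m * 100.0                         # the same data in centimetres

cv_m = heights_m.std() / heights_m.mean()
cv_cm = heights_cm.std() / heights_cm.mean()
print(np.isclose(cv_m, cv_cm))                         # True: the units cancel out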

There are other measures of dispersion:

• Variance (the square of the standard deviation) — location-invariant but not linear in
scale.
• Variance-to-mean ratio — mostly used for count data when the term coefficient of
dispersion is used and when this ratio is dimensionless, as count data are themselves
dimensionless: otherwise this is not scale-free.

Some measures of dispersion have specialized purposes, among them the Allan variance and
the Hadamard variance.

For categorical variables, it is less common to measure dispersion by a single number. See
qualitative variation. One measure that does so is the discrete entropy.
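
As an illustration of that last point, the discrete (Shannon) entropy of a categorical variable can be computed from its relative frequencies; the data below are made up for this sketch.

from collections import Counter
from math import log2

colours = ["red", "red", "blue", "green", "red", "blue"]   # made-up categorical sample
counts = Counter(colours)
n = len(colours)

# H = -Σ p * log2(p): zero when every observation falls in one category,
# largest when all categories are equally frequent
entropy = -sum((c / n) * log2(c / n) for c in counts.values())
print(round(entropy, 3))   # about 1.459 bits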

Sources of statistical dispersion

In the physical sciences, such variability may result only from random measurement errors:
instrument measurements are often not perfectly precise, i.e., reproducible. One may assume
that the quantity being measured is unchanging and stable, and that the variation between
measurements is due to observational error.

In the biological sciences, this assumption is false: the variation observed might be intrinsic to the
phenomenon: distinct members of a population differ greatly. This is also seen in the arena of
manufactured products; even there, the meticulous scientist finds variation. The simple model of a
stable quantity is preferred when it is tenable. Each phenomenon must be examined to see if it
warrants such a simplification.
Q 3. What are the characteristics of a good research design? Explain how the research
design for exploratory studies is different from the research design for descriptive and
diagnostic studies.

Ans.: Good research design: Much contemporary social research is devoted to examining whether a program, treatment, or manipulation causes some outcome or result. For example, we might wish to know whether a new educational program causes subsequent achievement score gains, whether a special work release program for prisoners causes lower recidivism rates, whether a novel drug causes a reduction in symptoms, and so on. Cook and Campbell (1979) argue that three conditions must be met before we can infer that such a cause-effect relation exists:

1. Covariation. Changes in the presumed cause must be related to changes in the presumed effect. Thus, if we introduce, remove, or change the level of a treatment or program, we should observe some change in the outcome measures.
2. Temporal Precedence. The presumed cause must occur prior to the presumed effect.
3. No Plausible Alternative Explanations. The presumed cause must be the only
reasonable explanation for changes in the outcome measures. If there are other factors that could be responsible for changes in the outcome measures, we cannot be
confident that the presumed cause-effect relationship is correct.

In most social research the third condition is the most difficult to meet. Any number of factors
other than the treatment or program could cause changes in outcome measures. Campbell and
Stanley (1966) and later, Cook and Campbell (1979) list a number of common plausible
alternative explanations (or, threats to internal validity). For example, it may be that some
historical event which occurs at the same time that the program or treatment is instituted was
responsible for the change in the outcome measures; or, changes in record keeping or
measurement systems which occur at the same time as the program might be falsely attributed to
the program. The reader is referred to standard research methods texts for more detailed
discussions of threats to validity.

This paper is primarily heuristic in purpose. Standard social science methodology textbooks
(Cook and Campbell 1979; Judd and Kenny, 1981) typically present an array of research designs
and the alternative explanations, which these designs rule out or minimize. This tends to foster a
"cookbook" approach to research design - an emphasis on the selection of an available design
rather than on the construction of an appropriate research strategy. While standard designs may
sometimes fit real-life situations, it will often be necessary to "tailor" a research design to
minimize specific threats to validity. Furthermore, even if standard textbook designs are used, an
understanding of the logic of design construction in general will improve the comprehension of
these standard approaches. This paper takes a structural approach to research design. While this
is by no means the only strategy for constructing research designs, it helps to clarify some of the
basic principles of design logic.

Minimizing Threats to Validity

Good research designs minimize the plausible alternative explanations for the hypothesized
cause-effect relationship. But such explanations may be ruled out or minimized in a number of
ways other than by design. The discussion, which follows, outlines five ways to minimize threats
to validity, one of which is by research design:

1. By Argument. The most straightforward way to rule out a potential threat to validity is to
simply argue that the threat in question is not a reasonable one. Such an argument may
be made either a priori or a posteriori, although the former will usually be more
convincing than the latter. For example, depending on the situation, one might argue that
an instrumentation threat is not likely because the same test is used for pre and post test
measurements and did not involve observers who might improve, or other such factors.
In most cases, ruling out a potential threat to validity by argument alone will be weaker
than the other approaches listed below. As a result, the most plausible threats in a study
should not, except in unusual cases, be ruled out by argument only.
2. By Measurement or Observation. In some cases it will be possible to rule out a threat
by measuring it and demonstrating that either it does not occur at all or occurs so
minimally as to not be a strong alternative explanation for the cause-effect relationship.
Consider, for example, a study of the effects of an advertising campaign on subsequent
sales of a particular product. In such a study, history (i.e., the occurrence of other events
which might lead to an increased desire to purchase the product) would be a plausible
alternative explanation. For example, a change in the local economy, the removal of a
competing product from the market, or similar events could cause an increase in product
sales. One might attempt to minimize such threats by measuring local economic
indicators and the availability and sales of competing products. If there is no change in
these measures coincident with the onset of the advertising campaign, these threats
would be considerably minimized. Similarly, if one is studying the effects of special
mathematics training on math achievement scores of children, it might be useful to
observe everyday classroom behavior in order to verify that students were not receiving
any additional math training to that provided in the study.
3. By Design. Here, the major emphasis is on ruling out alternative explanations by adding
treatment or control groups, waves of measurement, and the like. This topic will be
discussed in more detail below.
4. By Analysis. There are a number of ways to rule out alternative explanations using
statistical analysis. One interesting example is provided by Jurs and Glass (1971). They
suggest that one could study the plausibility of an attrition or mortality threat by
conducting a two-way analysis of variance. One factor in this study would be the original
treatment group designations (i.e., program vs. comparison group), while the other factor
would be attrition (i.e., dropout vs. non-dropout group). The dependent measure could be
the pretest or other available pre-program measures. A main effect on the attrition factor
would be indicative of a threat to external validity or generalizability, while an interaction between group and attrition factors would point to a possible threat to internal validity (an illustrative sketch of this analysis appears at the end of this section).
Where both effects occur, it is reasonable to infer that there is a threat to both internal
and external validity.

The plausibility of alternative explanations might also be minimized using covariance analysis. For example, in a study of the effects of “workfare” programs on social welfare
caseloads, one plausible alternative explanation might be the status of local economic
conditions. Here, it might be possible to construct a measure of economic conditions and
include that measure as a covariate in the statistical analysis. One must be careful when
using covariance adjustments of this type -- "perfect" covariates do not exist in most
social research and the use of imperfect covariates will not completely adjust for potential
alternative explanations. Nevertheless causal assertions are likely to be strengthened by
demonstrating that treatment effects occur even after adjusting on a number of good
covariates.

5. By Preventive Action. When potential threats are anticipated some type of preventive
action can often rule them out. For example, if the program is a desirable one, it is likely
that the comparison group would feel jealous or demoralized. Several actions can be
taken to minimize the effects of these attitudes including offering the program to the
comparison group upon completion of the study or using program and comparison
groups which have little opportunity for contact and communication. In addition, auditing
methods and quality control can be used to track potential experimental dropouts or to
ensure the standardization of measurement.
The five categories listed above should not be considered mutually exclusive. The inclusion of
measurements designed to minimize threats to validity will obviously be related to the design
structure and is likely to be a factor in the analysis. A good research plan should, where possible,
make use of multiple methods for reducing threats. In general, reducing a particular threat by
design or preventive action will probably be stronger than by using one of the other three
approaches. The choice of which strategy to use for any particular threat is complex and depends
at least on the cost of the strategy and on the potential seriousness of the threat.
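
To make the 'By Analysis' strategy more concrete, the sketch below shows one way the Jurs and Glass attrition check described in point 4 might be run in Python using pandas and statsmodels; the file name and column names are assumptions made purely for illustration.

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Assumed input: one row per participant, with a pre-program score ('pretest'),
# the original assignment ('group': program vs. comparison), and whether the
# participant later dropped out ('attrition': dropout vs. non-dropout).
df = pd.read_csv("pretest_scores.csv")

# Two-way ANOVA of the pretest scores by group and attrition status
model = smf.ols("pretest ~ C(group) * C(attrition)", data=df).fit()
print(anova_lm(model, typ=2))

# Following Jurs and Glass (1971):
# - a significant main effect of attrition points to a threat to external validity;
# - a significant group x attrition interaction points to a threat to internal validity.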

Design Construction

Basic Design Elements. Most research designs can be constructed from four basic elements:

1. Time. A causal relationship, by its very nature, implies that some time has elapsed
between the occurrence of the cause and the consequent effect. While for some
phenomena the elapsed time might be measured in microseconds and therefore might be
unnoticeable to a casual observer, we normally assume that the cause and effect in
social science arenas do not occur simultaneously. In design notation we indicate this
temporal element horizontally - whatever symbol is used to indicate the presumed cause
would be placed to the left of the symbol indicating measurement of the effect. Thus, as
we read from left to right in design notation we are reading across time. Complex designs
might involve a lengthy sequence of observations and programs or treatments across
time.
2. Program(s) or Treatment(s). The presumed cause may be a program or treatment
under the explicit control of the researcher or the occurrence of some natural event or
program not explicitly controlled. In design notation we usually depict a presumed cause
with the symbol "X". When multiple programs or treatments are being studied using the
same design, we can keep the programs distinct by using subscripts such as "X1" or "X2".
For a comparison group (i.e., one which does not receive the program under study) no
"X" is used.
3. Observation(s) or Measure(s). Measurements are typically depicted in design notation
with the symbol "O". If the same measurement or observation is taken at every point in
time in a design, then this "O" will be sufficient. Similarly, if the same set of measures is
given at every point in time in this study, the "O" can be used to depict the entire set of
measures. However, if different measures are given at different times it is useful to
subscript the "O" to indicate which measurement is being given at which point in time.
4. Groups or Individuals. The final design element consists of the intact groups or the
individuals who participate in various conditions. Typically, there will be one or more
program and comparison groups. In design notation, each group is indicated on a
separate line. Furthermore, the manner in which groups are assigned to the conditions
can be indicated by an appropriate symbol at the beginning of each line. Here, "R" will represent a group that was randomly assigned, "N" will depict a group that was nonrandomly assigned (i.e., a nonequivalent group or cohort), and "C" will indicate that the group was assigned using a cutoff score on a measurement.
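
Putting these four elements together, for example, a classic randomized pretest-posttest control group design would be written on two lines, one per group:

R O X O
R O O

Each group is randomly assigned (R), both groups are measured before and after (O), and only the first group receives the program (X); reading from left to right follows the passage of time, and the absence of an "X" on the second line marks it as the comparison group.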

Q 4. How is the Case Study method useful in Business Research? Give two specific
examples of how the case study method can be applied to business research.

Ans.: While case study writing may seem easy at first glance, developing an effective case study
(also called a success story) is an art. Like other marketing communication skills, learning how to
write a case study takes time. What’s more, writing case studies without careful planning usually
produces suboptimal results.
Savvy case study writers increase their chances of success by following these proven techniques for writing an effective case study:

• Involve the customer throughout the process. Involving the customer throughout the case study development process helps ensure customer cooperation and approval, and results in an improved case study. Obtain customer permission before writing the document, solicit input during the development, and secure approval after drafting the document.
• Write all customer quotes for their review. Rather than asking the customer to draft
their quotes, writing them for their review usually results in more compelling material.

• Establish a document template. A template serves as a roadmap for the case study
process, and ensures that the document looks, feels, and reads consistently. Visually, the
template helps build the brand; procedurally, it simplifies the actual writing. Before
beginning work, define 3-5 specific elements to include in every case study, formalize
those elements, and stick to them.
• Start with a bang. Use action verbs and emphasize benefits in the case study title and
subtitle. Include a short (less than 20-word) customer quote in larger text. Then,
summarize the key points of the case study in 2-3 succinct bullet points. The goal should
be to tease the reader into wanting to read more.
• Organize according to problem, solution, and benefits. Regardless of length, the
time-tested, most effective organization for a case study follows the problem-solution-
benefits flow. First, describe the business and/or technical problem or issue; next,
describe the solution to this problem or resolution of this issue; finally, describe how the
customer benefited from the particular solution (more on this below). This natural story-
telling sequence resonates with readers.
• Use the general-to-specific-to-general approach. In the problem section, begin with a
general discussion of the issue that faces the relevant industry. Then, describe the
specific problem or issue that the customer faced. In the solution section, use the
opposite sequence. First, describe how the solution solved this specific problem; then
indicate how it can also help resolve this issue more broadly within the industry.
Beginning more generally draws the reader into the story; offering a specific example
demonstrates, in a concrete way, how the solution resolves a commonly faced issue; and
concluding more generally allows the reader to understand how the solution can also
address their problem.
• Quantify benefits when possible. No single element in a case study is more compelling
than the ability to tie quantitative benefits to the solution. For example, “Using Solution X
saved Customer Y over $ZZZ,ZZZ after just 6 months of implementation;” or, “Thanks to
Solution X, employees at Customer Y have realized a ZZ% increase in productivity as
measured by standard performance indicators.” Quantifying benefits can be challenging,
but not impossible. The key is to present imaginative ideas to the customer for ways to
quantify the benefits, and remain flexible during this discussion. If benefits cannot be
quantified, attempt to develop a range of qualitative benefits; the latter can be quite
compelling to readers as well.
• Use photos. Ask the customer if they can provide shots of personnel, ideally using the
solution. The shots need not be professionally done; in fact, “homegrown” digital photos
sometimes lead to surprisingly good results and often appear more genuine. Photos
further personalize the story and help form a connection to readers.
• Reward the customer. After receiving final customer approval and finalizing the case
study, provide a pdf, as well as printed copies, to the customer. Another idea is to frame
a copy of the completed case study and present it to the customer in appreciation for
their efforts and cooperation.
Writing a case study is not easy. Even with the best plan, a case study is doomed to failure if the
writer lacks the exceptional writing skills, technical savvy, and marketing experience that these
documents require. In many cases, a talented writer can mean the difference between an
ineffective case study and one that provides the greatest benefit. If a qualified internal writer is
unavailable, consider outsourcing the task to professionals who specialize in case study writing.

Q 5. What are the differences between observation and interviewing as methods of data
collection? Give two specific examples of situations where either observation or
interviewing would be more appropriate.

Ans.: Observation means viewing or seeing. Observation may be defined as a systematic viewing of a specific phenomenon in its proper setting for the specific purpose of gathering data for a particular study. Observation is a classical method of scientific study.

The prerequisites of observation consist of:


• Observations must be done under conditions that permit accurate results. The observer must be in a vantage point from which to see clearly the objects to be observed. The distance and the light must be satisfactory. Any mechanical devices used must be in good working condition and operated by skilled persons.

• Observation must cover a sufficient number of representative samples of the cases.

• Recording should be accurate and complete.

• The accuracy and completeness of recorded results must be checked. A certain number of cases can be observed again by another observer or another set of mechanical devices, as the case may be. If it is feasible, two separate observers and sets of instruments may be used in all or some of the original observations. The results could then be compared to determine their accuracy and completeness.

Advantages of observation
o The main virtue of observation is its directness: it makes it possible to study behavior as it occurs. The researcher need not ask people about their behavior and interactions; he can simply watch what they do and say.

o Data collected by observation may describe the observed phenomena as they occur in their natural settings. Other methods introduce elements of artificiality into the researched situation; for instance, in an interview the respondent may not behave in a natural way. There is no such artificiality in observational studies, especially when the observed persons are not aware of being observed.

o Observation is more suitable for studying subjects who are unable to articulate meaningfully, e.g. studies of children, tribals, animals, birds, etc.

o Observation improves the opportunities for analyzing the contextual background of behavior. Furthermore, verbal reports can be validated and compared with behavior through observation. The validity of what men of position and authority say can be verified by observing what they actually do.

o Observation makes it possible to capture the whole event as it occurs. For example, only observation can provide an insight into all the aspects of the process of negotiation between union and management representatives.

o Observation is less demanding of the subjects and has less biasing effect on
their conduct than questioning.

o It is easier to conduct disguised observation studies than disguised questioning.

o Mechanical devices may be used for recording data in order to secure more accurate data and also for making continuous observations over longer periods.

Interviews are a crucial part of the recruitment process for all organisations. Their purpose is to
give the interviewer(s) a chance to assess your suitability for the role and for you to demonstrate
your abilities and personality. As this is a two-way process, it is also a good opportunity for you to
ask questions and to make sure the organisation and position are right for you.
Interview format
Interviews take many different forms. It is a good idea to ask the organisation in advance what
format the interview will take.

• Competency/criteria based interviews - These are structured to reflect the competencies or qualities that an employer is seeking for a particular job, which will usually have been detailed in the job specification or advert. The interviewer is looking for evidence of your skills and may ask such things as: ‘Give an example of a time you worked as part of a team to achieve a common goal.’

The organisation determines the selection criteria based on the roles they are recruiting
for and then, in an interview, examines whether or not you have evidence of possessing
these.
Recruitment Manager, The Cooperative Group
• Technical interviews - If you have applied for a job or course that requires technical
knowledge, it is likely that you will be asked technical questions or have a separate
technical interview. Questions may focus on your final year project or on real or
hypothetical technical problems. You should be prepared to prove yourself, but also to
admit to what you do not know and stress that you are keen to learn. Do not worry if you
do not know the exact answer - interviewers are interested in your thought process and
logic.
• Academic interviews - These are used for further study or research positions.
Questions are likely to center on your academic history to date.
• Structured interviews - The interviewer has a set list of questions, and asks all the
candidates the same questions.
• Formal/informal interviews - Some interviews may be very formal, while others will feel
more like an informal chat about you and your interests. Be aware that you are still being
assessed, however informal the discussion may seem.
• Portfolio based interviews - If the role is within the arts, media or communications
industries, you may be asked to bring a portfolio of your work to the interview, and to
have an in-depth discussion about the pieces you have chosen to include.
• Senior/case study interviews - These range from straightforward scenario questions
(e.g. ‘What would you do in a situation where…?’) to the detailed analysis of a
hypothetical business problem. You will be evaluated on your analysis of the problem,
how you identify the key issues, how you pursue a particular line of thinking and whether
you can develop and present an appropriate framework for organising your thoughts.

Specific types of interview

The Screening Interview

Companies use screening tools to ensure that candidates meet minimum qualification
requirements. Computer programs are among the tools used to weed out unqualified candidates.
(This is why you need a digital resume that is screening-friendly. See our resume center for help.)
Sometimes human professionals are the gatekeepers. Screening interviewers often have honed
skills to determine whether there is anything that might disqualify you for the position. Remember: they do not need to know whether you are the best fit for the position, only whether you are not
a match. For this reason, screeners tend to dig for dirt. Screeners will hone in on gaps in your
employment history or pieces of information that look inconsistent. They also will want to know
from the outset whether you will be too expensive for the company.

Some tips for maintaining confidence during screening interviews:

• Highlight your accomplishments and qualifications.


• Get into the straightforward groove. Personality is not as important to the screener as
verifying your qualifications. Answer questions directly and succinctly. Save your winning
personality for the person making hiring decisions!
• Be tactful about addressing income requirements. Give a range, and try to avoid giving
specifics by replying, "I would be willing to consider your best offer."
• If the interview is conducted by phone, it is helpful to have note cards with your vital
information sitting next to the phone. That way, whether the interviewer catches you
sleeping or vacuuming the floor, you will be able to switch gears quickly.

The Informational Interview

On the opposite end of the stress spectrum from screening interviews is the informational
interview. A meeting that you initiate, the informational interview is underutilized by job-seekers
who might otherwise consider themselves savvy to the merits of networking. Job seekers
ostensibly secure informational meetings in order to seek the advice of someone in their current
or desired field as well as to gain further references to people who can lend insight. Employers
that like to stay apprised of available talent even when they do not have current job openings, are
often open to informational interviews, especially if they like to share their knowledge, feel
flattered by your interest, or esteem the mutual friend that connected you to them. During an
informational interview, the jobseeker and employer exchange information and get to know one
another better without reference to a specific job opening.

This takes off some of the performance pressure, but be intentional nonetheless:

• Come prepared with thoughtful questions about the field and the company.
• Gain references to other people and make sure that the interviewer would be comfortable
if you contact other people and use his or her name.
• Give the interviewer your card, contact information and resume.
• Write a thank you note to the interviewer.

The Directive Style

In this style of interview, the interviewer has a clear agenda that he or she follows unflinchingly.
Sometimes companies use this rigid format to ensure parity between interviews; when
interviewers ask each candidate the same series of questions, they can more readily compare the
results. Directive interviewers rely upon their own questions and methods to tease from you what
they wish to know. You might feel like you are being steam-rolled, or you might find the
conversation develops naturally. Their style does not necessarily mean that they have dominance
issues, although you should keep an eye open for these if the interviewer would be your
supervisor.

Either way, remember:

• Flex with the interviewer, following his or her lead.


• Do not relinquish complete control of the interview. If the interviewer does not ask you for
information that you think is important to proving your superiority as a candidate, politely
interject it.

The Meandering Style

This interview type, usually used by inexperienced interviewers, relies on you to lead the
discussion. It might begin with a statement like "tell me about yourself," which you can use to your
advantage. The interviewer might ask you another broad, open-ended question before falling into
silence. This interview style allows you tactfully to guide the discussion in a way that best serves
you.

The following strategies, which are helpful for any interview, are particularly important when
interviewers use a non-directive approach:

• Come to the interview prepared with highlights and anecdotes of your skills, qualities and
experiences. Do not rely on the interviewer to spark your memory-jot down some notes
that you can reference throughout the interview.
• Remain alert to the interviewer. Even if you feel like you can take the driver's seat and go
in any direction you wish, remain respectful of the interviewer's role. If he or she becomes
more directive during the interview, adjust.
• Ask well-placed questions. Although the open format allows you significantly to shape the
interview, running with your own agenda and dominating the conversation means that
you run the risk of missing important information about the company and its needs.

Q 6. Case Study: You are engaged to carry out a market survey on behalf of a leading
Newspaper that is keen to increase its circulation in Bangalore City, in order to ascertain
reader habits and interests. What type of research report would be most appropriate?
Develop an outline of the research report with the main sections.

Ans.: There are four major interlinking processes in the presentation of a literature review:

1. Critiquing rather than merely listing each item. A good literature review is led by your own
critical thought processes - it is not simply a catalogue of what has been written.

Once you have established which authors and ideas are linked, take each group in turn
and really think about what you want to achieve in presenting them this way. This is your
opportunity for showing that you did not take all your reading at face value, but that you
have the knowledge and skills to interpret the authors' meanings and intentions in relation
to each other, particularly if there are conflicting views or incompatible findings in a
particular area.

Rest assured that developing a sense of critical judgment in the literature surrounding a
topic is a gradual process of gaining familiarity with the concepts, language, terminology
and conventions in the field. In the early stages of your research you cannot be expected
to have a fully developed appreciation of the implications of all findings.

As you get used to reading at this level of intensity within your field you will find it easier
and more purposeful to ask questions as you read:

o What is this all about?


o Who is saying it and what authorities do they have?
o Why is it significant?
o What is its context?
o How was it reached?
o How valid is it?
o How reliable is the evidence?
o What has been gained?
o What do other authors say?
o How does it contribute?
o So what?
2. Structuring the fragments into a coherent body. Through your reading and discussions
with your supervisor during the searching and organising phases of the cycle, you will
eventually reach a final decision as to your own topic and research design.

As you begin to group together the items you read, the direction of your literature review
will emerge with greater clarity. This is a good time to finalise your concept map, grouping
linked items, ideas and authors into firm categories as they relate more obviously to your
own study.

Now you can plan the structure of your written literature review, with your own intentions
and conceptual framework in mind. Knowing what you want to convey will help you
decide the most appropriate structure.
A review can take many forms; for example:

o An historical survey of theory and research in your field


o A synthesis of several paradigms
o A process of narrowing down to your own topic

It is likely that your literature review will contain elements of all of these.

As with all academic writing, a literature review needs:

o An introduction
o A body
o A conclusion

The introduction sets the scene and lays out the various elements that are to be
explored.

The body takes each element in turn, usually as a series of headed sections and
subsections. The first paragraph or two of each section mentions the major authors in
association with their main ideas and areas of debate. The section then expands on
these ideas and authors, showing how each relates to the others, and how the debate
informs your understanding of the topic. A short conclusion at the end of each section
presents a synthesis of these linked ideas.

The final conclusion of the literature review ties together the main points from each of
your sections and this is then used to build the framework for your own study. Later,
when you come to write the discussion chapter of your thesis, you should be able to
relate your findings in one-to-one correspondence with many of the concepts or
questions that were firmed up in the conclusion of your literature review.

3. Controlling the 'voice' of your citations in the text (by selective use of direct quoting,
paraphrasing and summarizing)

You can treat published literature like any other data, but the difference is that it is not
data you generated yourself.

When you report on your own findings, you are likely to present the results with reference
to their source, for example:

o 'Table 2 shows that sixteen of the twenty subjects responded positively.'

When using published data, you would say:

o 'Positive responses were recorded for 80 per cent of the subjects (see table 2).'
o 'From the results shown in table 2, it appears that the majority of subjects
responded positively.'

In these examples your source of information is table 2. Had you found the same results
on page 17 of a text by Smith published in 1988, you would naturally substitute the name,
date and page number for 'table 2'. In each case it would be your voice introducing a fact
or statement that had been generated somewhere else.
You could see this process as building a wall: you select and place the 'bricks' and your
'voice' provides the ‘mortar’, which determines how strong the wall will be. In turn, this is
significant in the assessment of the merit and rigor of your work.

There are three ways to combine an idea and its source with your own voice:

o Direct quote
o Paraphrase
o Summary

In each method, the author's name and publication details must be associated with the
words in the text, using an approved referencing system. If you don't do this you would be
in severe breach of academic convention, and might be penalized. Your field of study has
its own referencing conventions, which you should investigate before writing up your results.

Direct quoting repeats exact wording and thus directly represents the author:

o 'Rain is likely when the sky becomes overcast' (Smith 1988, page 27).

If the quotation is run in with your text, single quotation marks are used to enclose it, and
it must be an identical copy of the original in every respect.

Overuse or simple 'listing' of quotes can substantially weaken your own argument by
silencing your critical view or voice.

Paraphrasing is repeating an idea in your own words, with no loss of the author's
intended meaning:

o As Smith (1988) pointed out in the late eighties, rain may well be indicated by the
presence of cloud in the sky.

Paraphrasing allows you to organize the ideas expressed by the authors without being
rigidly constrained by the grammar, tense and vocabulary of the original. You retain a
degree of flexibility as to whose voice comes through most strongly.

Summarizing means to shorten or crystallize a detailed piece of writing by restating the main points in your own words and in the order in which you found them. The original writing is 'described' as if from the outside, and it is your own voice that is predominant:

o Referring to the possible effects of cloudy weather, Smith (1988) predicted the
likelihood of rain.
o Smith (1988) claims that some degree of precipitation could be expected as the
result of clouds in the sky: he has clearly discounted the findings of Jones (1986).
4. Using appropriate language
Your writing style represents you as a researcher, and reflects how you are dealing with
the subtleties and complexities inherent in the literature.

Once you have established a good structure with appropriate headings for your literature
review, and once you are confident in controlling the voice in your citations, you should
find that your writing becomes more lucid and fluent because you know what you want to
say and how to say it.
The good use of language depends on the quality of the thinking behind the writing, and
on the context of the writing. You need to conform to discipline-specific requirements.
However, there may still be some points of grammar and vocabulary you would like to
improve. If you have doubts about your confidence to use the English language well, you
can help yourself in several ways:

o Ask for feedback on your writing from friends, colleagues and academics
o Look for specific language information in reference materials
o Access programs or self-paced learning resources which may be available on
your campus

Grammar tips - practical and helpful


The following guidance on tenses and other language tips may be useful.

Which tense should I use?

Use present tense:

o For generalizations and claims:
- The sky is blue.
o To convey ideas, especially theories, which exist for the reader at the time of reading:
- I think therefore I am.
o For authors' statements of a theoretical nature, which can then be compared on equal terms with others:
- Smith (1988) suggests that...
o In referring to components of your own document:
- Table 2 shows...

Use present perfect tense for:

o Recent events or actions that are still linked in an unresolved way to the present:
- Several studies have attempted to...

Use simple past tense for:

o Completed events or actions:
- Smith (1988) discovered that...

Use past perfect tense for:

o Events which occurred before a specified past time:
- Prior to these findings, it had been thought that...

Use modals (may, might, could, would, should) to:

o Convey degrees of doubt:
- This may indicate that ... this would imply that...

Other language tips


o Convey your meaning in the simplest possible way. Don't try to use an
intellectual tone for the sake of it, and do not rely on your reader to read your
mind!
o Keep sentences short and simple when you wish to emphasise a point.
o Use compound (joined simple) sentences to write about two or more ideas which
may be linked with 'and', 'but', 'because', 'whereas' etc.
o Use complex sentences when you are dealing with embedded ideas or those that
show the interaction of two or more complex elements.
o Verbs are more dynamic than nouns, and nouns carry information more densely
than verbs.
o Select active or passive verbs according to whether you are highlighting the
'doer' or the 'done to' of the action.
o Keep punctuation to a minimum. Use it to separate the elements of complex
sentences in order to keep subject, verb and object in clear view.
o Avoid densely packed strings of words, particularly nouns.

The total process

The story of a research study

Introduction
I looked at the situation and found that I had a question to ask about it. I wanted to investigate
something in particular.

Review of literature
So I read everything I could find on the topic - what was already known and said and what had
previously been found. I established exactly where my investigation would fit into the big picture,
and began to realise at this stage how my study would be different from anything done previously.

Methodology
I decided on the number and description of my subjects, and with my research question clearly in
mind, designed my own investigation process, using certain known research methods (and
perhaps some that are not so common). I began with the broad decision about which research
paradigm I would work within (that is, qualitative/quantitative, critical/interpretive/ empiricist). Then
I devised my research instrument to get the best out of what I was investigating. I knew I would
have to analyse the raw data, so I made sure that the instrument and my proposed method(s) of
analysis were compatible right from the start. Then I carried out the research study and recorded
all the data in a methodical way according to my intended methods of analysis. As part of the
analysis, I reduced the data (by means of my preferred form of classification) to manageable
thematic representation (tables, graphs, categories, etc). It was then that I began to realise what I
had found.

Findings/results
What had I found? What did the tables/graphs/categories etc. have to say that could be pinned
down? It was easy enough for me to see the salient points at a glance from these records, but in
writing my report, I also spelled out what I had found truly significant to make sure my readers did
not miss it. For each display of results, I wrote a corresponding summary of important
observations relating only elements within my own set of results and comparing only like with like.
I was careful not to let my own interpretations intrude or voice my excitement just yet. I wanted to
state the facts - just the facts. I dealt correctly with all inferential statistical procedures, applying
tests of significance where appropriate to ensure both reliability and validity. I knew that I wanted
my results to be as watertight and squeaky clean as possible. They would carry a great deal more
credibility, strength and thereby academic 'clout' if I took no shortcuts and remained both rigorous
and scholarly.
Discussion
Now I was free to let the world know the significance of my findings. What did I find in the results
that answered my original research question? Why was I so sure I had some answers? What
about the unexplained or unexpected findings? Had I interpreted the results correctly? Could
there have been any other factors involved? Were my findings supported or contested by the
results of similar studies? Where did that leave mine in terms of contribution to my field? Can I
actually generalise from my findings in a breakthrough of some kind, or do I simply see myself as
reinforcing existing knowledge? And so what, after all? There were some obvious limitations to
my study, which, even so, I'll defend to the hilt. But I won't become over-apologetic about the
things left undone, or the abandoned analyses, the fascinating byways sadly left behind. I have
my memories...

Conclusion
We'll take a long hard look at this study from a broad perspective. How does it rate? How did I
end up answering the question I first thought of? The conclusion needs to be a few clear, succinct
sentences. That way, I'll know that I know what I'm talking about. I'll wrap up with whatever
generalizations I can make, and whatever implications have arisen in my mind as a result of
doing this thing at all. The more you find out, the more questions arise. How I wonder what you
are ... how I speculate. OK, so where do we all go from here?

Three stages of research

1. Reading
2. Research design and implementation
3. Writing up the research report or thesis

Use an active, cyclical writing process: draft, check, reflect, revise, redraft.

Establishing good practice

1. Keep your research question always in mind.


2. Read widely to establish a context for your research.
3. Read widely to collect information, which may relate to your topic, particularly to your
hypothesis or research question.
4. Be systematic with your reading, note-taking and referencing records.
5. Train yourself to select what you do need and reject what you don't need.
6. Keep a research journal to reflect on your processes, decisions, state of mind, changes
of mind, reactions to experimental outcomes etc.
7. Discuss your ideas with your supervisor and interested others.
8. Keep a systematic log of technical records of your experimental and other research data,
remembering to date each entry, and noting any discrepancies or unexpected
occurrences at the time you notice them.
9. Design your research approaches in detail in the early stages so that you have
frameworks to fit findings into straightaway.
10. Know how you will analyse data so that your formats correspond from the start.
11. Keep going back to the whole picture. Be thoughtful and think ahead about the way you will
consider and store new information as it comes to light.
