
Master of Business Administration - MBA Semester 3
Subject Code: MB0050, Subject Name: Research Methodology, 4 Credits (Book ID: B1206)
Assignment Set 1 (60 Marks)


Q.1 Define Research. What are the features and types of Research?

Answer: Research simply means a search for facts, answers to questions and solutions to problems. It is a purposive investigation and an organized inquiry. It seeks to find explanations for unexplained phenomena, to clarify doubtful facts and to correct misconceived facts.

Features:
1) It is a systematic and critical investigation into a phenomenon.
2) It is a purposive investigation aiming at describing, interpreting and explaining a phenomenon.
3) It adopts the scientific method.
4) It is objective and logical, applying every possible test to validate the measuring tools and the conclusions reached.
5) It is based upon observable experience or empirical evidence.
6) It is directed towards finding answers to pertinent questions and solutions to problems.
7) It emphasizes the development of generalizations, principles or theories.
8) Its purpose is not only to arrive at an answer but also to stand up to the test of criticism.

Types of Research: Although any typology of research is inevitably arbitrary, research may be classified crudely according to its major intent or its methods. According to intent, research may be classified as follows.

Pure Research: It is undertaken for the sake of knowledge, without any intention to apply it in practice, e.g., Einstein's theory of relativity, Newton's contributions, Galileo's contribution, etc. It is also known as basic or fundamental research. It is undertaken out of intellectual curiosity or inquisitiveness and is not necessarily problem-oriented. It aims at the extension of knowledge and may lead either to the discovery of a new theory or to the refinement of an existing one. It lays the foundation for applied research and offers solutions to many practical problems: it helps to find the critical factors in a practical problem and develops many alternative solutions, thus enabling us to choose the best one.

Applied Research: It is carried on to find a solution to a real-life problem requiring an action or policy decision. It is thus problem-oriented and action-directed. It seeks an immediate and practical result, e.g., marketing research carried on for developing a new market or for studying the post-purchase experience of customers. Though the immediate purpose of applied research is to find solutions to a practical problem, it may incidentally contribute to the development of theoretical knowledge by leading to the discovery of new facts, the testing of a theory or conceptual clarity. It can put theory to the test, may aid in conceptual clarification, and may integrate previously existing theories.

Exploratory Research: It is also known as formulative research. It is a preliminary study of an unfamiliar problem about which the researcher has little or no knowledge. It is ill-structured and much less focused on pre-determined objectives. It usually takes the form of a pilot study. The purpose of this research may be to generate new ideas, to increase the researcher's familiarity with the problem, to make a precise formulation of the problem, to gather information for clarifying concepts, or to determine whether it is feasible to attempt the study. Katz conceptualizes two levels of exploratory studies: at the first level is the discovery of the significant variables in the situation; at the second, the discovery of relationships between variables.

Descriptive Study: It is a fact-finding investigation with adequate interpretation. It is the simplest type of research and is more specific than exploratory research. It aims at identifying the various characteristics of a community, institution or problem under study, and also at a classification of the range of elements comprising the subject matter of the study. It contributes to the development of a young science and is useful in verifying focal concepts through empirical observation.
It can highlight important methodological aspects of data collection and interpretation. The information obtained may be useful for prediction about areas of social life outside the boundaries of the research. Descriptive studies are also valuable in providing the facts needed for planning social action programmes.

Diagnostic Study: It is similar to a descriptive study but with a different focus. It is directed towards discovering what is happening, why it is happening and what can be done about it. It aims at identifying the causes of a problem and the possible solutions for it. It may also be concerned with discovering and testing whether certain variables are associated. This type of research requires prior knowledge of the problem, its thorough formulation, a clear-cut definition of the given population, adequate methods for collecting accurate information, precise measurement of variables, statistical analysis and tests of significance.

Evaluation Studies: This is a type of applied research. It is made for assessing the effectiveness of social or economic programmes implemented, or for assessing the impact of developmental projects on the development of the project area. It is thus directed to assess or appraise the quality and quantity of an activity and its performance, and to specify its attributes and the conditions required for its success. It is concerned with causal relationships and is more actively guided by hypotheses. It is concerned also with change over time.

Action Research: It is a type of evaluation study. It is a concurrent evaluation study of an action programme launched for solving a problem or for improving an existing situation. Its major steps are: diagnosis, sharing of diagnostic information, planning, developing a change programme, initiation of organizational change, implementation of the participation and communication process, and post-experimental evaluation.

Q.2 How is a research problem formulated? What are the sources from which one may be able to identify research problems?

Answer: The selection of one appropriate, researchable problem out of the identified problems requires evaluation of those alternatives against certain criteria, which may be grouped into:

A. Internal Criteria

1) Researcher's interest: The problem should interest the researcher and be a challenge to him. Without interest and curiosity, he may not develop sustained perseverance; even a small difficulty may become an excuse for discontinuing the study. Interest in a problem depends upon the researcher's educational background, experience, outlook and sensitivity.

2) Researcher's competence: A mere interest in a problem will not do. The researcher must be competent to plan and carry out a study of the problem. He must have the ability to grasp and deal with it, and must possess adequate knowledge of the subject matter, relevant methodology and statistical procedures.

3) Researcher's own resources: In the case of research to be done by a researcher on his own, consideration of his own financial resources is pertinent. If the study is beyond his means, he will not be able to complete the work unless he gets some external financial support. The time resource is even more important than finance: research is a time-consuming process, and the time available should be properly utilized.

B. External Criteria

1) Researchability of the problem: The problem should be researchable, i.e., amenable to finding answers to the questions involved in it through the scientific method. To be researchable, a question must be one for which observation or other data collection in the real world can provide the answer.
2) Importance and urgency: Problems requiring investigation are unlimited, but the available research effort is very much limited. Therefore, in selecting problems for research, their relative importance and significance should be considered. An important and urgent problem should be given priority over an unimportant one.

3) Novelty of the problem: The problem must have novelty. There is no use wasting one's time and energy on a problem already studied thoroughly by others. This does not mean that replication is always needless; in the social sciences, in some cases, it is appropriate to replicate (repeat) a study in order to verify the validity of its findings in a different situation.

4) Feasibility: A problem may be new and also important, but if research on it is not feasible, it cannot be selected. Hence feasibility is a very important consideration.

5) Facilities: Research requires certain facilities such as a well-equipped library, suitable and competent guidance, data analysis facilities, etc. Hence the availability of the facilities relevant to the problem must be considered.

6) Usefulness and social relevance: Above all, the study of the problem should make a significant contribution to the concerned body of knowledge or to the solution of some significant practical problem. It should be socially relevant. This consideration is particularly important in the case of higher-level academic research and sponsored research.

7) Research personnel: Research undertaken by professors and by research organizations requires the services of investigators and research officers. But in India and other developing countries, research has not yet become a prospective profession; hence talented persons are not attracted to research projects.

Each identified problem must be evaluated in terms of the above internal and external criteria, and the most appropriate one may be selected by the research scholar.

The sources from which one may be able to identify research problems, or develop problem awareness, are: review of literature, academic experience, daily experience, exposure to field situations, consultations, brainstorming, research and intuition.

Q.3 What are the types of Observations? What is the utility of Observation in Business Research?

Answer: Observations may be classified in different ways. With reference to the investigator's role, observation may be classified into (a) participant observation and (b) non-participant observation. In terms of the mode of observation, it may be classified into (c) direct observation and (d) indirect observation. With reference to the rigour of the system adopted, observation is classified into (e) controlled observation and (f) uncontrolled observation.

Participant Observation: In this observation, the observer is a part of the phenomenon or group which is observed, and he acts as both an observer and a participant; for example, a study of tribal customs by an anthropologist who takes part in tribal activities like folk dance. The persons who are observed should not be aware of the researcher's purpose; only then will their behaviour be natural.
The concealment of the research objective and the researcher's identity is justified on the ground that it makes it possible to study certain aspects of the group's culture which are not revealed to outsiders.

Advantages: The advantages of participant observation are:
1. The observer can understand the emotional reactions of the observed group and get a deeper insight into their experiences.
2. The observer will be able to record the context which gives meaning to the observed behaviour and heard statements.

Disadvantages: Participant observation suffers from some demerits.
1. The participant observer narrows his range of observation. For example, if there is a hierarchy of power in the group/community under study, he comes to occupy one position within it, and thus other avenues of information are closed to him.

2. To the extent that the participant observer participates emotionally, objectivity is lost.
3. Another limitation of this method is the dual demand made on the observer: recording can interfere with participation, and participation can interfere with observation. Recording on the spot is not possible and has to be postponed until the observer is alone; such a time lag results in some inaccuracy in recording.

Non-participant observation: In this method, the observer stands apart and does not participate in the phenomenon observed. Naturally, there is no emotional involvement on the part of the observer. This method calls for skill in recording observations in an unnoticed manner.

Direct observation: This means observation of an event personally by the observer when it takes place. This method is flexible and allows the observer to see and record subtle aspects of events and behaviour as they occur. He is also free to shift places and change the focus of the observation. A limitation of this method is that the observer's perception circuit may not be able to cover all relevant events when the latter move quickly, resulting in incompleteness of the observation.

Indirect observation: This does not involve the physical presence of the observer; the recording is done by mechanical, photographic or electronic devices, e.g., recording customer and employee movements by a special motion-picture camera mounted in a department of a large store. This method is less flexible than direct observation, but it is less biasing and less erratic in recording accuracy. It also provides a permanent record for analysis of different aspects of the event.

Controlled observation: This involves the standardization of observational techniques and the exercise of maximum control over extrinsic and intrinsic variables by adopting an experimental design and systematically recording observations. Controlled observation is carried out either in the laboratory or in the field.
It is typified by clear and explicit decisions on what, how and when to observe.

Uncontrolled observation: This does not involve control over extrinsic and intrinsic variables. It is primarily used for descriptive research. Participant observation is a typical uncontrolled observation.

Observation is suitable for a variety of research purposes. It may be used for studying:
(a) the behaviour of human beings in purchasing goods and services: lifestyle, customs and manners, interpersonal relations, group dynamics, crowd behaviour, leadership styles, managerial styles, and other behaviours and actions;
(b) the behaviour of other living creatures like birds, animals, etc.;
(c) the physical characteristics of inanimate things like stores, factories, residences, etc.;
(d) the flow of traffic and parking problems;
(e) the movement of materials and products through a plant.

Q.4 What is Research Design? What are the different types of Research Designs?

Answer: The research designer understandably cannot hold all his decisions in his head. Even if he could, he would have difficulty in understanding how these are interrelated. Therefore, he records his decisions on paper or a record disc by using relevant symbols or concepts. Such a symbolic construction may be called the research design or model.

A research design is a logical and systematic plan prepared for directing a research study. It specifies the objectives of the study and the methodology and techniques to be adopted for achieving the objectives. It constitutes the blueprint for the collection, measurement and analysis of data. It is the plan, structure and strategy of investigation conceived so as to obtain answers to research questions; the plan is the overall scheme or programme of the research. A research design is the programme that guides the investigator in the process of collecting, analyzing and interpreting observations, and it provides a systematic plan of procedure for the researcher to follow. As Selltiz, Jahoda, Deutsch and Cook describe it, "A research design is the arrangement of conditions for collection and analysis of data in a manner that aims to combine relevance to the research purpose with economy in procedure."

The different types of research designs: Because there are a number of crucial research choices, various writers advance different classification schemes, some of which are:
1. Experimental, historical and inferential designs (American Marketing Association).
2. Exploratory, descriptive and causal designs (Selltiz, Jahoda, Deutsch and Cook).
3. Experimental and ex post facto designs (Kerlinger).
4. Historical method, and case and clinical studies (Goode and Scates).
5. Sample surveys, field studies, experiments in field settings, and laboratory experiments (Festinger and Katz).
6. Exploratory, descriptive and experimental studies (Boyd and Westfall).
7. Exploratory, descriptive and causal designs (Green and Tull).
8. Experimental and quasi-experimental designs (Nachmias and Nachmias).
9. True experimental, quasi-experimental and non-experimental designs (Smith).
10. Experimental, pre-experimental and quasi-experimental designs, and survey research (Kidder and Judd).
These different categorizations exist because research design is a complex concept.
In fact, there are different perspectives from which any given study can be viewed. They are:
a. the degree of formulation of the problem (the study may be exploratory or formalized);
b. the topical scope, breadth and depth, of the study (a case or a statistical study);
c. the research environment: field setting or laboratory (survey, laboratory experiment);
d. the time dimension (one-time or longitudinal);
e. the mode of data collection (observational or survey);
f. the manipulation of the variables under study (experimental or ex post facto);
g. the nature of the relationship among variables (descriptive or causal).

Q.5 Explain the Sampling Process and briefly describe the methods of Sampling.

Answer: The decision process of sampling is a complicated one. The researcher has to first identify the limiting factor or factors and must judiciously balance the conflicting factors. The various criteria governing the choice of the sampling technique are:

1. Purpose of the Survey: What does the researcher aim at? If he intends to generalize the findings based on the sample survey to the population, then an appropriate probability sampling method must be selected. The choice of a particular type of probability sampling depends on the geographical area of the survey and the size and nature of the population under study.

2. Measurability: The application of statistical inference theory requires computation of the sampling error from the sample itself. Only probability samples allow such computation. Hence, where the research objective requires statistical inference, the sample should be drawn by applying the simple random sampling method or the stratified random sampling method, depending on whether the population is homogeneous or heterogeneous.

3. Degree of Precision: Should the results of the survey be very precise, or would even rough results serve the purpose? The desired level of precision is one of the criteria for selecting a sampling method. Where a high degree of precision of results is desired, probability sampling should be used. Where even crude results would serve the purpose (e.g., marketing surveys, readership surveys, etc.), any convenient non-random sampling like quota sampling would be enough.

4. Information about the Population: How much information is available about the population to be studied? Where no list of the population and no information about its nature are available, it is difficult to apply a probability sampling method. An exploratory study with non-probability sampling may then be made to gain a better idea of the population. After gaining sufficient knowledge about the population through the exploratory study, an appropriate probability sampling design may be adopted.

5. The Nature of the Population: In terms of the variables to be studied, is the population homogeneous or heterogeneous? In the case of a homogeneous population, even simple random sampling will give a representative sample. If the population is heterogeneous, stratified random sampling is appropriate.

6. Geographical Area of the Study and the Size of the Population: If the area covered by the survey is very large and the size of the population is quite large, multi-stage cluster sampling would be appropriate. But if the area and the size of the population are small, single-stage probability sampling methods could be used.

7. Financial Resources: If the available finance is limited, it may become necessary to choose a less costly sampling plan like multi-stage cluster sampling, or even quota sampling as a compromise. However, if the objectives of the study and the desired level of precision cannot be attained within the stipulated budget, there is no alternative but to give up the proposed survey. Where finance is not a constraint, a researcher can choose the most appropriate method of sampling that fits the research objective and the nature of the population.

8. Time Limitation: The time limit within which the research project should be completed restricts the choice of a sampling method. As a compromise, it may then become necessary to choose less time-consuming methods, like simple random sampling instead of stratified sampling or sampling with probability proportional to size, or multi-stage cluster sampling instead of single-stage sampling of elements. Of course, precision has to be sacrificed to some extent.

9. Economy: This should be another criterion in choosing the sampling method. It means achieving the desired level of precision at minimum cost. A sample is economical if the precision per unit cost is high, or the cost per unit of variance is low.
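Criterion 2 (measurability) rests on being able to estimate the sampling error from the sample itself. For a simple random sample this is the familiar standard-error computation, s divided by the square root of n; a minimal sketch with made-up data:

```python
import math
import statistics

# Hypothetical sample of 10 measurements drawn by simple random sampling
# (the figures are illustrative, not from the text).
sample = [12, 15, 11, 14, 13, 16, 12, 15, 14, 13]

n = len(sample)
s = statistics.stdev(sample)       # sample standard deviation (n - 1 divisor)
standard_error = s / math.sqrt(n)  # estimated sampling error of the mean
```

A non-probability sample offers no such computation, since the selection probabilities are unknown; that is why statistical inference (criterion 2) pushes the researcher towards probability sampling.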

The above criteria frequently conflict, and the researcher must balance and blend them to obtain a good sampling plan. The chosen plan thus represents an adaptation of sampling theory to the available facilities and resources; that is, it represents a compromise between idealism and feasibility. One should use simple, workable methods instead of unduly elaborate and complicated techniques.

Sampling techniques or methods may be classified into two generic types: probability (random) sampling and non-probability (non-random) sampling.

Probability or Random Sampling: Probability sampling is based on the theory of probability; it is also known as random sampling. It provides a known, non-zero chance of selection for each population element. It is used when generalization is the objective of the study and a greater degree of accuracy in the estimation of population parameters is required. The cost and time required are high, hence the benefit derived from it should justify the costs. The following are the types of probability sampling.

I. Simple Random Sampling: This sampling technique gives each element an equal and independent chance of being selected. An equal chance means an equal probability of selection; an independent chance means that the draw of one element will not affect the chances of other elements being selected. The procedure of drawing a simple random sample consists of:
a. preparation of a list of all elements in the population, giving them numbers in serial order 1, 2, 3, and so on; and
b. drawing the sample numbers by using (i) the lottery method, (ii) a table of random numbers, or (iii) a computer.

Suitability: This type of sampling is suited to a small homogeneous population.

Advantages: It is one of the easiest methods; all the elements in the population have an equal chance of being selected; it is simple to understand; and it does not require prior knowledge of the true composition of the population.
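The two-step drawing procedure just described (enumerate all elements serially, then draw the sample numbers by lottery, random-number table or computer) can be sketched in Python; the frame of 100 serially numbered elements is purely illustrative:

```python
import random

def simple_random_sample(frame, n, seed=None):
    """Draw n elements without replacement: every element has an
    equal and independent chance of selection (the computer
    analogue of the lottery method)."""
    return random.Random(seed).sample(frame, n)

# Step (a): a list of all elements, numbered serially 1, 2, 3, ...
frame = list(range(1, 101))
# Step (b): draw the sample numbers by computer.
chosen = simple_random_sample(frame, 10, seed=7)
```

Because the draw is without replacement from the full enumerated list, every element of the frame is equally likely to appear in the sample, which is exactly the property that makes the sampling error computable.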
Disadvantages: It is often impractical because of the non-availability of a population list or the difficulty of enumerating the population; it does not ensure proportionate representation; and it may be expensive in time and money. The amount of sampling error associated with any sample drawn can easily be computed, but it is greater than that in other probability samples of the same size, because this method is less precise than the others.

II. Stratified Random Sampling: This is an improved type of random or probability sampling. In this method, the population is sub-divided into homogeneous groups or strata, and from each stratum a random sample is drawn. E.g., university students may be divided on the basis of discipline, and each discipline group may again be divided into juniors and seniors. Stratification is necessary for increasing a sample's statistical efficiency, providing adequate data for analyzing the various sub-populations, and applying different methods to different strata. Stratified random sampling is appropriate for a large heterogeneous population. The stratification process involves three major decisions: the stratification base or bases, the number of strata, and the strata sample sizes. Stratified random sampling may be classified into:

a) Proportionate stratified sampling: This involves drawing a sample from each stratum in proportion to the latter's share in the total population. It gives proper representation to each stratum, and its statistical efficiency is generally higher; this method is therefore very popular. E.g., suppose the Management Faculty of a University consists of the following specialization groups:

Specialization stream    No. of students    Proportion of each stream
Production               40                 0.4
Finance                  20                 0.2
Marketing                30                 0.3
Rural development        10                 0.1
Total                    100                1.0

The researcher wants to draw an overall sample of 30. Then the strata sample sizes would be:

Stratum                  Sample size
Production               30 x 0.4 = 12
Finance                  30 x 0.2 = 6
Marketing                30 x 0.3 = 9
Rural development        30 x 0.1 = 3
Total                    30

Advantages: Stratified random sampling enhances the representativeness of the sample, gives higher statistical efficiency, is easy to carry out, and gives a self-weighting sample.

Disadvantages: It requires prior knowledge of the composition and distribution of the population, it is expensive in time and money, and identification of the strata may lead to classification errors.

b) Disproportionate stratified random sampling: This method does not give proportionate representation to strata; it necessarily involves giving over-representation to some strata and under-representation to others. The desirability of disproportionate sampling is usually determined by three factors, viz., (a) the sizes of the strata, (b) internal variances among strata, and (c) sampling costs.

Suitability: This method is used when the population contains some small but important subgroups, when certain groups are quite heterogeneous while others are homogeneous, and when appreciable differences are expected in the response rates of the subgroups in the population.

Advantages: It is less time-consuming and facilitates giving appropriate weighting to particular groups which are small but more important.

Disadvantages: It does not give each stratum proportionate representation, requires prior knowledge of the composition of the population, is subject to classification errors, and its practical feasibility is doubtful.
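The proportionate allocation arithmetic in the Management Faculty example above (strata of 40, 20, 30 and 10 students, overall sample of 30) can be verified with a short script:

```python
# Proportionate stratified allocation: each stratum's sample size is the
# overall sample size times the stratum's share of the population.
strata = {"Production": 40, "Finance": 20, "Marketing": 30, "Rural development": 10}
N = sum(strata.values())  # population size: 100
n = 30                    # overall sample size

# Integer division is exact for these illustrative figures; in general
# the products must be rounded to whole numbers.
allocation = {name: n * size // N for name, size in strata.items()}
```

The allocation reproduces the table: 12, 6, 9 and 3, summing to the overall sample of 30, which is what makes the proportionate design self-weighting.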

c) Systematic Random Sampling: This method of sampling is an alternative to random selection. It consists of taking every kth item in the population after a random start with an item from 1 to k. It is also known as the fixed-interval method; e.g., the 1st, 11th, 21st items and so on. Strictly speaking, this method is not pure probability sampling: it possesses the characteristic of randomness along with some non-probability traits.

Suitability: Systematic selection can be applied to various populations, such as students in a class, houses in a street, a telephone directory, etc.

Advantages: It is simpler than random sampling, easy to use, easy to instruct, requires less time, is cheaper and easier to check; the sample is spread evenly over the population; and it is statistically more efficient.

Disadvantages: It ignores all elements between two selected kth elements, each element does not have an equal chance of being selected, and this method sometimes gives a biased sample.

Cluster Sampling: This means the random selection of sampling units, each of which is a cluster of population elements. Then, from each selected sampling unit, a sample of population elements is drawn by either simple random selection or stratified random selection. Where a list of population elements is not readily available, the use of the simple or stratified random sampling method would be too expensive and time-consuming; in such cases cluster sampling is usually adopted. The cluster sampling process involves: identifying the clusters, examining the nature of the clusters, and determining the number of stages.

Suitability: The application of cluster sampling is extensive in farm management surveys, socio-economic surveys, rural credit surveys, demographic studies, ecological studies, public opinion polls, large-scale surveys of political and social behaviour, attitude surveys and so on.
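The fixed-interval rule described under systematic random sampling above, a random start between 1 and k followed by every kth item, can be sketched as follows (the population and sample sizes are illustrative):

```python
import random

def systematic_sample(N, n, seed=None):
    """Select every kth item after a random start r in 1..k,
    where k = N // n is the sampling interval."""
    k = N // n
    r = random.Random(seed).randint(1, k)
    return [r + i * k for i in range(n)]

# With N = 100 and n = 10 the interval k is 10, so a start of 1
# yields the 1st, 11th, 21st, ... items mentioned in the text.
picks = systematic_sample(100, 10, seed=3)
```

Note that once the start r is drawn, the whole sample is determined, which is why the text says the method possesses randomness along with some non-probability traits.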
Advantages: This method is easier and more convenient; its cost is much less; it promotes the convenience of field work, as the work can be done in compact areas; it does not require much time; units of study can readily be substituted for other units; and it is more flexible.

Disadvantages: The cluster sizes may vary, and this variation can increase the bias of the resulting sample. The sampling error in this method is greater, and adjacent units of study tend to have more similar characteristics than units far apart.

Area sampling: This is an important form of cluster sampling. In larger field surveys, clusters consisting of specific geographical areas like districts, taluks, villages or blocks in a city are randomly drawn. As geographical areas are selected as sampling units in such cases, this sampling is called area sampling. It is not a separate method of sampling, but forms part of cluster sampling.

Multi-stage and sub-sampling: In the multi-stage sampling method, sampling is carried out in two or more stages. The population is regarded as composed of a number of first-stage sampling units, each of which is composed of a number of second-stage units, and so forth. That is, at each stage, a sampling unit is a cluster of the sampling units of the subsequent stage. First, a sample of the first-stage sampling units is drawn; then, from each of the selected first-stage sampling units, a sample of the second-stage sampling units is drawn. The procedure continues down to the final sampling units or population elements. An appropriate random sampling method is adopted at each stage. It is appropriate where the population is spread over a wide geographical area and the survey has to be made within a limited time and cost budget. The major disadvantage is that the procedure for estimating the sampling error and cost advantage is complicated.

Sub-sampling is a part of the multi-stage sampling process. In multi-stage sampling, the sampling in the second and subsequent stage frames is called sub-sampling. Sub-sampling balances the two conflicting effects of clustering, i.e., cost and sampling errors.

Random Sampling with Probability Proportional to Size: The procedure of selecting clusters with probability proportional to size (PPS) is widely used. If one primary cluster has twice as large a population as another, it is given twice the chance of being selected. If the same number of persons is then selected from each of the selected clusters, the overall probability of selection of any person will be the same. Thus PPS is a better method for securing a representative sample of population elements in multi-stage cluster sampling.

Advantages: Clusters of various sizes get proportionate representation; PPS, with a constant sampling fraction at the second stage, leads to greater precision than would a simple random sample of clusters; and equal-sized samples from each selected primary cluster are convenient for field work.

Disadvantages: PPS cannot be used if the sizes of the primary sampling clusters are not known.

Double Sampling and Multiphase Sampling: Double sampling refers to the sub-selection of the final sample from a pre-selected larger sample that provided information for improving the final selection. When the procedure is extended to more than two phases of selection, it is then called multiphase sampling. This is also known as sequential sampling, as sub-sampling is done from a main sample in phases. Double sampling or multiphase sampling is a compromise solution for a dilemma posed by undesirable extremes.
The statistics based on the sample of n elements can be improved by using ancillary information from a wider base, but this is too costly to obtain from the entire population of N elements. Instead, the information is obtained from a larger preliminary sample which includes the final sample of n elements.

Replicated or Interpenetrating Sampling: This involves selection of a certain number of sub-samples, rather than one full sample, from a population. All the sub-samples should be drawn using the same sampling technique, and each is a self-contained and adequate sample of the population. Replicated sampling can be used with any basic sampling technique: simple or stratified, single-stage or multi-stage, single-phase or multi-phase sampling. It provides a simple means of calculating the sampling error. It is practical. The replicated samples can throw light on variable non-sampling errors. The disadvantage is that it limits the amount of stratification that can be employed.

Non-probability or Non-random Sampling: Non-probability or non-random sampling is not based on the theory of probability. This sampling does not provide a chance of selection to each population element.

Advantages: The only merits of this type of sampling are simplicity, convenience and low cost.

Disadvantages: The demerits are that it does not ensure a selection chance to each population unit; the resulting sample may not be a representative one; the selection probability is unknown; and it suffers from sampling bias, which will distort the results.

The reasons for using this sampling are: when there is no other feasible alternative due to non-availability of a list of the population; when the study does not aim at generalizing the findings to the population; when the costs required for probability sampling may be too large; and when probability sampling would require more time than the time constraints and the time limit for completing the study permit. It may be classified into:

Convenience or Accidental Sampling: This means selecting sample units in a hit-and-miss fashion, e.g., interviewing people whom we happen to meet. This sampling also means selecting whatever sampling units are conveniently available, e.g., a teacher may select the students in his class. This method is also known as accidental sampling because the respondents whom the researcher meets accidentally are included in the sample.

Suitability: Though this type of sampling has no scientific status, it may be used for simple purposes such as testing ideas or gaining a rough impression about a subject of interest.

Advantages: It is the cheapest and simplest method, it does not require a list of the population and it does not require any statistical expertise.

Disadvantages: It is highly biased because of the researcher's subjectivity, it is the least reliable sampling method and its findings cannot be generalized.

Purposive (or Judgment) Sampling: This method means deliberate selection of sample units that conform to some pre-determined criteria. This is also known as judgment sampling. It involves selection of cases which we judge to be the most appropriate ones for the given study. It is based on the judgment of the researcher or some expert. It does not aim at securing a cross-section of a population. The chance that a particular case will be selected for the sample depends on the subjective judgment of the researcher.
Suitability: This is used when what is important is the typicality and specific relevance of the sampling units to the study, and not their overall representativeness of the population.

Advantages: It is less costly and more convenient, and it guarantees the inclusion of relevant elements in the sample.

Disadvantages: It is less efficient for generalizing, does not ensure representativeness, requires more extensive prior information and does not lend itself to the use of inferential statistics.

Quota Sampling: This is a form of convenience sampling involving selection of quota groups of accessible sampling units by traits such as sex, age, social class, etc. It is a method of stratified sampling in which the selection within strata is non-random. It is this non-random element that constitutes its greatest weakness.

Suitability: It is used in studies like marketing surveys, opinion polls and readership surveys which do not aim at precision, but at getting some crude results quickly.

Advantages: It is less costly, takes less time, needs no list of the population, and field work can easily be organized.

Disadvantages: It is impossible to estimate the sampling error, strict control of field work is difficult, and it is subject to a higher degree of classification error.
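As a rough illustration, the quota procedure described above can be sketched in Python. The quota sizes, traits and respondent stream below are hypothetical, invented purely for the example; the point is the non-random element: interviewers simply accept accessible units in arrival order until each quota group is filled.

```python
# Hypothetical pre-set quotas by one trait (sex); real quota designs may
# cross several traits such as age and social class.
quotas = {"male": 2, "female": 3}

# Hypothetical stream of accessible respondents, in the order they are met.
stream = [("r1", "male"), ("r2", "male"), ("r3", "male"),
          ("r4", "female"), ("r5", "female"), ("r6", "female"), ("r7", "female")]

def quota_sample(stream, quotas):
    """Accept units in arrival order until every quota group is filled."""
    remaining = dict(quotas)
    chosen = []
    for unit, trait in stream:
        if remaining.get(trait, 0) > 0:   # quota for this group not yet full
            chosen.append(unit)
            remaining[trait] -= 1
    return chosen

sample = quota_sample(stream, quotas)     # r1, r2 fill the male quota; r4-r6 the female
```

Because selection within each group depends on who happens to be accessible first, the within-stratum selection probabilities are unknown, which is exactly why the sampling error cannot be estimated.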

Snowball Sampling: This is the colourful name for a technique of building up a list or a sample of a special population by using an initial set of its members as informants. This sampling technique may also be used in socio-metric studies.

Suitability: It is very useful in studying social groups, informal groups in a formal organization, and the diffusion of information among professionals of various kinds.

Advantages: It is useful for smaller populations for which no frames are readily available.

Disadvantages: It does not allow the use of probability statistical methods. It is difficult to apply when the population is large. It does not ensure the inclusion of all the elements in the list.

Q.6 What is a Research Report? What are the contents of Research Report? Answer: A research report is a means for communicating the research experience to others. It is a formal statement of the research process and its results. It narrates the problem studied, the methods used for studying it, and the findings and conclusions of the study.

Contents of the Research Report: The outline of a research report is given below:

I. Prefatory Items
Title page
Declaration
Certificates
Preface/acknowledgements
Table of contents
List of tables
List of graphs/figures/charts
Abstract or synopsis

II. Body of the Report
Introduction
Theoretical background of the topic
Statement of the problem
Review of literature
The scope of the study
The objectives of the study
Hypothesis to be tested
Definition of the concepts
Models, if any
Design of the study
Methodology
Method of data collection
Sources of data
Sampling plan
Data collection instruments
Field work
Data processing and analysis plan
Overview of the report
Limitation of the study
Results: findings and discussions
Summary, conclusions and recommendations

III. Reference Material
Bibliography
Appendix: copies of data collection instruments, technical details on sampling plan, complex tables
Glossary of new terms used

Assignment Set- 2
Q.1 Differentiate between nominal, ordinal, interval and ratio scales with an example of each. Answer: Measurement may be classified into four different levels, based on the characteristics of order, distance and origin.

1. Nominal measurement: This level of measurement consists in assigning numerals or symbols to different categories of a variable. The example of male and female applicants to an MBA program mentioned earlier is an example of nominal measurement. The numerals or symbols are just labels and have no quantitative value. The number of cases under each category is counted. Nominal measurement is therefore the simplest level of measurement. It does not have characteristics such as order, distance or arithmetic origin.

2. Ordinal measurement: In this level of measurement, persons or objects are assigned numerals which indicate ranks with respect to one or more properties, either in ascending or descending order. Example: Individuals may be ranked according to their socio-economic class, which is measured by a combination of income, education, occupation and wealth. The individual with the highest score might be assigned rank 1, the next highest rank 2, and so on, or vice versa. The numbers in this level of measurement indicate only rank order and not equal distances or absolute quantities. This means that the distance between ranks 1 and 2 is not necessarily equal to the distance between ranks 2 and 3. Ordinal scales may be constructed using rank order, rating and paired comparisons. Variables that lend themselves to ordinal measurement include preferences, ratings of organizations and economic status. Statistical techniques that are commonly used to analyze ordinal scale data are the median and rank order correlation coefficients.

3. Interval measurement: This level of measurement is more powerful than the nominal and ordinal levels of measurement, since it has one additional characteristic: equality of distance.
However, it does not have an origin or a true zero. This implies that it is not possible to multiply or divide the numbers on an interval scale. Example: The Centigrade or Fahrenheit temperature scale is an example of the interval level of measurement. A temperature of 50 degrees is exactly 10 degrees hotter than 40 degrees and 10 degrees cooler than 60 degrees. Since interval scales are more powerful than nominal or ordinal scales, they also lend themselves to more powerful statistical techniques, such as the standard deviation, product moment correlation, and t-tests and F-tests of significance.

4. Ratio measurement: This is the highest level of measurement and is appropriate when measuring characteristics which have an absolute zero point. This level of measurement has all three characteristics: order, distance and origin.

Examples: Height, weight, distance and area. Since there is a natural zero, it is possible to multiply and divide the numbers on a ratio scale. Apart from being able to use all the statistical techniques that are used with the nominal, ordinal and interval scales, techniques like the geometric mean and coefficient of variation may also be used. The main limitation of ratio measurement is that it cannot be used for characteristics such as leadership quality, happiness, satisfaction and other properties which do not have natural zero points. The different levels of measurement and their characteristics may be summed up in the table below:

Level of measurement    Characteristics
Nominal                 No order, distance or origin
Ordinal                 Order, but no distance or origin
Interval                Both order and distance, but no origin
Ratio                   Order, distance and origin
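The permissible statistics at each level can be illustrated with Python's standard `statistics` module. This is only a sketch; the variable names and data values below are hypothetical, one small data set per level of measurement.

```python
import statistics

# Hypothetical data sets, one per level of measurement.
gender = ["M", "F", "F", "M", "F"]       # nominal: only counting/mode is meaningful
ranks = [1, 2, 3, 4, 5]                  # ordinal: rank order -> median
temps_c = [40, 50, 60]                   # interval: equal distances -> mean, s.d.
weights_kg = [50.0, 60.0, 75.0]          # ratio: true zero -> geometric mean allowed

mode_gender = statistics.mode(gender)                     # most frequent category
median_rank = statistics.median(ranks)                    # middle rank
mean_temp = statistics.mean(temps_c)                      # arithmetic mean is valid
geo_mean_weight = statistics.geometric_mean(weights_kg)   # needs a true zero point
```

Note that the sketch deliberately avoids, say, a mean of the gender labels or a geometric mean of the Centigrade temperatures, since those operations are not defined at the nominal and interval levels respectively.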

Q.2 What are the types of Hypothesis? Explain the procedure for testing Hypothesis. Answer: Types of Hypothesis: There are many kinds of hypotheses the researcher has to work with. One type of hypothesis asserts that something is the case in a given instance; that a particular object, person or situation has particular characteristics. Another type of hypothesis deals with the frequency of occurrence or of association among variables; this type of hypothesis may state that X is associated with a certain proportion of Y items (e.g., urbanism tends to be accompanied by mental disease), or that something is greater or lesser than some other thing in a specific setting. Yet another type of hypothesis asserts that a particular characteristic is one of the factors which determine another characteristic, i.e., X is the producer of Y. Hypotheses of this type are called causal hypotheses.

Null Hypotheses and Alternative Hypotheses: In the context of statistical analysis, we often talk about null and alternative hypotheses. If we are to compare the superiority of method A with that of method B and we proceed on the assumption that both methods are equally good, then this assumption is termed the null hypothesis. On the other hand, if we think that method A is superior, then that is known as the alternative hypothesis. These are symbolically represented as: Null hypothesis = H0 and Alternative hypothesis = Ha.

Suppose we want to test the hypothesis that the population mean μ is equal to the hypothesized mean μH0 = 100. Then we would say that the null hypothesis is that the population mean is equal to the hypothesized mean 100, and symbolically we can express it as: H0: μ = μH0 = 100. If our sample results do not support this null hypothesis, we should conclude that something else is true. What we conclude by rejecting the null hypothesis is known as an

alternative hypothesis. If we accept H0, then we are rejecting Ha, and if we reject H0, then we are accepting Ha. For H0: μ = μH0 = 100, we may consider three possible alternative hypotheses as follows:

Ha: μ ≠ μH0 (the population mean is not equal to 100, i.e., it may be more or less than 100)
Ha: μ > μH0 (the population mean is greater than 100)
Ha: μ < μH0 (the population mean is less than 100)

The null hypothesis and the alternative hypothesis are chosen before the sample is drawn (the researcher must avoid the error of deriving hypotheses from the data he collects and then testing the hypotheses from the same data). In the choice of the null hypothesis, the following considerations are usually kept in view: The alternative hypothesis is usually the one which is to be proved, and the null hypothesis is the one which is to be disproved. Thus a null hypothesis represents the hypothesis we are trying to reject, while the alternative hypothesis represents all other possibilities. If the rejection of a certain hypothesis when it is actually true involves great risk, it is taken as the null hypothesis, because then the probability of rejecting it when it is true is α (the level of significance), which is chosen to be very small. The null hypothesis should always be a specific hypothesis, i.e., it should not state an approximate value. Generally, in hypothesis testing, we proceed on the basis of the null hypothesis, keeping the alternative hypothesis in view. Why so? The answer is that on the assumption that the null hypothesis is true, one can assign probabilities to different possible sample results, but this cannot be done if we proceed with the alternative hypothesis. Hence the use of null hypotheses (at times also known as statistical hypotheses) is quite frequent.
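For the hypothesized mean of 100, the choice between H0 and Ha can be sketched as a one-sample z-test. This is a simplified illustration only: it assumes the population standard deviation is known, and the sample figures (mean 96, σ = 10, n = 25) are hypothetical, not from the text.

```python
from math import erf, sqrt

def z_test(sample_mean, mu0, sigma, n):
    """Two-tailed one-sample z-test; returns the z statistic and p-value."""
    z = (sample_mean - mu0) / (sigma / sqrt(n))
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))   # standard normal CDF at |z|
    return z, 2 * (1 - phi)                   # two tails, for Ha: mu != mu0

# Hypothetical sample: mean 96 from n = 25 observations, sigma assumed to be 10.
z, p = z_test(96, 100, 10, 25)                # z = -2.0, p ≈ 0.0455
# Since p < 0.05, H0: mu = 100 would be rejected at the 5% level
# in favour of Ha: mu != 100.
```

For the one-sided alternatives (Ha: μ > μH0 or μ < μH0), only the single relevant tail of the distribution would be used instead of both.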
Procedure for Testing Hypothesis: To test a hypothesis means to tell (on the basis of the data the researcher has collected) whether or not the hypothesis seems to be valid. In hypothesis testing the main question is: should we accept the null hypothesis or reject it? The procedure for hypothesis testing refers to all those steps that we undertake for making a choice between the two actions, i.e., rejection and acceptance of a null hypothesis. The various steps involved in hypothesis testing are stated below:

1. Making a Formal Statement: This step consists in making a formal statement of the null hypothesis (H0) and also of the alternative hypothesis (Ha). This means that the hypotheses should be clearly stated, considering the nature of the research problem. For instance, if Mr. Mohan of the Civil Engineering Department wants to test whether the load-bearing capacity of an old bridge is more than 10 tons, he can state his hypotheses as under:

Null hypothesis H0: μ = 10 tons
Alternative hypothesis Ha: μ > 10 tons

Take another example. The average score in an aptitude test administered at the national level is 80. To evaluate a state's education system, the average score of 100 of the state's students selected on a random basis was 75. The state wants to know if

there is a significant difference between the local scores and the national scores. In such a situation the hypotheses may be stated as under:

Null hypothesis H0: μ = 80
Alternative hypothesis Ha: μ ≠ 80

The formulation of hypotheses is an important step which must be accomplished with due care in accordance with the object and nature of the problem under consideration. It also indicates whether we should use a one-tailed test or a two-tailed test. If Ha is of the type 'greater than', we use a one-tailed test, but when Ha is of the type 'whether greater or smaller', we use a two-tailed test.

2. Selecting a Significance Level: The hypothesis is tested on a pre-determined level of significance, and as such the same should be specified. Generally, in practice, either the 5% level or the 1% level is adopted for the purpose. The factors that affect the level of significance are: the magnitude of the difference between the samples; the size of the samples; the variability of measurements within samples; and whether the hypothesis is directional or non-directional (a directional hypothesis is one which predicts the direction of the difference between, say, means). In brief, the level of significance must be adequate in the context of the purpose and nature of the enquiry.

3. Deciding the Distribution to Use: After deciding the level of significance, the next step in hypothesis testing is to determine the appropriate sampling distribution. The rules for selecting the correct distribution are similar to those which we have stated earlier in the context of estimation.

4. Selecting a Random Sample and Computing an Appropriate Value: The next step is to select a random sample and compute an appropriate value from the sample data concerning the test statistic, utilizing the relevant distribution. In other words, draw a sample to furnish empirical data.

5.
Calculation of the Probability: One has then to calculate the probability that the sample result would diverge as widely as it has from expectations, if the null hypothesis were in fact true.

6. Comparing the Probability: Yet another step consists in comparing the probability thus calculated with the specified value of α, the significance level. If the calculated probability is equal to or smaller than the α value in the case of a one-tailed test (and α/2 in the case of a two-tailed test), then reject the null hypothesis (i.e., accept the alternative hypothesis); but if the calculated probability is greater, then accept the null hypothesis. If we reject H0, we run the risk of committing a Type I error (at most α, the level of significance), but if we accept H0, then we run some risk of committing a Type II error.

Q.3 What are the advantages and disadvantages of Case Study Method? How is Case Study method useful to Business Research? Answer: Advantages of Case Study Method: The case study method is of particular value when a complex set of variables may be at work in generating observed results and intensive study is needed to unravel the complexities. For example, an in-depth study of a firm's top salespeople and a comparison with its worst salespeople might reveal characteristics common to stellar performers. Here again, the exploratory investigation is best served by an active curiosity and a willingness to deviate from the initial plan when findings suggest that new courses of inquiry might prove more productive. It is easy to see how the exploratory

research objectives of generating insights and hypotheses would be well served by the use of this technique.

Disadvantages of Case Study Method: Blumer points out that, independently, case documents hardly fulfil the criteria of reliability, adequacy and representativeness, but to exclude them from any scientific study of human life would be a blunder, inasmuch as these documents are necessary and significant both for theory building and for practice.

Case Study as a Method of Business Research: In-depth analysis of selected cases is of particular value to business research when a complex set of variables may be at work in generating observed results and intensive study is needed to unravel the complexities. For instance, an in-depth study of a firm's top salespeople and a comparison with its worst salespeople might reveal characteristics common to stellar performers. The exploratory investigator is best served by an active curiosity and a willingness to deviate from the initial plan when the findings suggest that new courses of enquiry might prove more productive.

A case is a written description of a business-related problem or situation and often contains organizational and financial data specific to the situation or problem. It may also contain external data and facts about the social, economic or other micro-economic circumstances impinging upon that business situation. The case offers the student the highest possible realism in management study (as compared to experimentation or hands-on projects in engineering or science), as it brings before the student a real situation and the facts surrounding it: how things actually happen in business.

Q.4 What are the Primary and Secondary sources of Data?
Answer: Primary Sources of Data: Primary sources are original sources from which the researcher directly collects data that have not been previously collected, e.g., collection of data directly by the researcher on brand awareness, brand preference, brand loyalty and other aspects of consumer behaviour from a sample of consumers by interviewing them. Primary data are first-hand information collected through various methods such as observation, interviewing, mailing, etc.

Advantages of Primary Data: It is an original source of data. It is possible to capture the changes occurring in the course of time. It is flexible to the advantage of the researcher. Extensive research study is based on primary data.

Disadvantages of Primary Data: Primary data are expensive to obtain. They are time consuming. They require extensive research personnel who are skilled. They are difficult to administer.

Methods of Collecting Primary Data: Primary data are directly collected by the researcher from their original sources. In this case, the researcher can collect the required data precisely according to his research needs; he can collect them when he wants them and in the form he needs them. But the collection of primary data is costly

and time consuming. Yet, for several types of social science research, the required data are not available from secondary sources and have to be directly gathered from primary sources. In such cases where the available data are inappropriate, inadequate or obsolete, primary data have to be gathered. They include: socio-economic surveys; social anthropological studies of rural communities and tribal communities; sociological studies of social problems and social institutions; marketing research; leadership studies; opinion polls; attitudinal surveys; readership, radio listening and T.V. viewing surveys; knowledge-awareness-practice (KAP) studies; farm management studies; business management studies; etc.

There are various methods of data collection. A method is different from a tool: while a method refers to the way or mode of gathering data, a tool is an instrument used for the method. For example, a schedule is used for interviewing. The important methods are (a) observation, (b) interviewing, (c) mail survey, (d) experimentation, (e) simulation and (f) projective technique. Each of these methods is discussed in detail in the subsequent sections in the later chapters.

Secondary Sources of Data: These are sources containing data which have been collected and compiled for another purpose. The secondary sources consist of readily available compendia and already compiled statistical statements and reports whose data may be used by researchers for their studies, e.g., census reports, annual reports and financial statements of companies, statistical statements, reports of government departments, annual reports on currency and finance published by the Reserve Bank of India, statistical statements relating to co-operatives and regional banks published by NABARD, reports of the National Sample Survey Organisation, reports of trade associations, publications of international organizations such as the UNO, IMF, World Bank, ILO, WHO, etc., and trade and financial journals, newspapers, etc.
Secondary sources consist of not only published records and reports, but also unpublished records. The latter category includes various records and registers maintained by firms and organizations, e.g., accounting and financial records, personnel records, registers of members, minutes of meetings, inventory records, etc.

Features of Secondary Sources: Though secondary sources are diverse and consist of all sorts of materials, they have certain common characteristics. First, they are readymade and readily available, and do not require the trouble of constructing tools and administering them. Second, they consist of data over whose collection and classification the researcher has no original control; both the form and the content of secondary sources are shaped by others. Clearly, this is a feature which can limit the research value of secondary sources. Finally, secondary sources are not limited in time and space; that is, the researcher using them need not have been present when and where they were gathered.

Use of Secondary Data: Secondary data may be used in three ways by a researcher. First, some specific information from secondary sources may be used for reference purposes. For example, general statistical information on the number of co-operative credit societies in the country, their coverage of villages, their capital structure, volume of business, etc., may be taken from published reports and quoted as background

information in a study on the evaluation of performance of co-operative credit societies in a selected district/state. Second, secondary data may be used as benchmarks against which the findings of research may be tested; e.g., the findings of a local or regional survey may be compared with the national averages, the performance indicators of a particular bank may be tested against the corresponding indicators of the banking industry as a whole, and so on. Finally, secondary data may be used as the sole source of information for a research project. Such studies as securities market behaviour, financial analysis of companies, trade in credit allocation in commercial banks, sociological studies on crimes, historical studies, and the like depend primarily on secondary data. Year books, statistical reports of government departments, reports of public organizations like the Bureau of Public Enterprises, census reports, etc., serve as major data sources for such research studies.

Advantages of Secondary Data: Secondary sources have some advantages: Secondary data, if available, can be secured quickly and cheaply. Once the sources of documents and reports are located, collection of data is just a matter of desk work. Even the tediousness of copying the data from the source can now be avoided, thanks to xeroxing facilities. A wider geographical area and a longer reference period may be covered without much cost; thus, the use of secondary data extends the researcher's space and time reach. The use of secondary data broadens the data base from which scientific generalizations can be made. No special environmental and cultural settings are required for the study. The use of secondary data enables a researcher to verify the findings based on primary data; it readily meets the need for additional empirical support, and the researcher need not wait until additional primary data can be collected.

Disadvantages of Secondary Data: The use of secondary data has its own limitations.
The most important limitation is that the available data may not meet our specific needs. The definitions adopted by those who collected the data may be different; units of measure may not match; and time periods may also be different. The available data may not be as accurate as desired; to assess their accuracy we need to know how the data were collected. The secondary data may not be up to date and may have become obsolete by the time they appear in print, because of the time lag in producing them. For example, population census data are published two or three years after compilation, and no new figures will be available for another ten years. Finally, information about the whereabouts of sources may not be available to all social scientists. Even if the location of the source is known, accessibility depends primarily on proximity; for example, most unpublished official records and compilations are located in the capital city, and they are not within easy reach of researchers based in far-off places.

Q.5 Differentiate between Schedules and Questionnaire. What are the alternative modes of sending Questionnaires? Answer: Difference between Schedules and Questionnaire: Questionnaires are mailed to the respondents, whereas schedules are carried by the investigator himself. A questionnaire can be filled in by the respondent only if he is able to understand the language in which it is written, i.e., he is supposed to be literate. This problem is overcome in the case of a schedule, since the investigator himself carries the schedule and records the respondent's responses. A questionnaire is filled in by the respondent himself, whereas a schedule is filled in by the investigator.

There are some alternative methods of distributing questionnaires to the respondents. They are: (1) personal delivery, (2) attaching the questionnaire to a product, (3) advertising the questionnaire in a newspaper or a magazine, and (4) newsstand inserts.

1) Personal delivery: The researcher or his assistant may deliver the questionnaires to the potential respondents, with a request to complete them at their convenience. After a day or two, the completed questionnaires can be collected from them. Often referred to as the self-administered questionnaire method, it combines the advantages of the personal interview and the mail survey. Alternatively, the questionnaires may be delivered in person and the respondents may return the completed questionnaires through mail.

2) Attaching questionnaire to a product: A firm test-marketing a product may attach a questionnaire to the product and request the buyer to complete it and mail it back to the firm. A gift or a discount coupon usually rewards the respondent.

3) Advertising questionnaire in a newspaper or a magazine: The questionnaire, with instructions for completion, may be advertised on a page of a magazine or in a section of a newspaper. The potential respondent completes it, tears it out and mails it to the advertiser.
For example, the Committee on Banks' Customer Services used this method for collecting information from the customers of commercial banks in India. This method may be useful for large-scale studies on topics of common interest.

4) Newsstand inserts: This method involves inserting the covering letter, questionnaire and self-addressed reply-paid envelope into a random sample of newsstand copies of a newspaper or magazine.

The significance of the questionnaire method is that it affords great facility in collecting data from large, diverse and widely scattered groups of people. It is used in gathering objective, quantitative data as well as for securing information of a qualitative nature. In some studies, the questionnaire is the sole research tool utilised, but it is more often used in conjunction with other methods of investigation. In the questionnaire technique, great reliance is placed on the respondent's verbal report for data on the stimuli or experiences to which he is exposed, as well as for data on his behaviour.

Advantages of Questionnaires: The advantages of mail surveys are: They are less costly than personal interviews, as the cost of mailing is the same throughout the country, irrespective of distance. They can cover extensive geographical areas.

Mailing is useful in contacting persons such as senior business executives who are difficult to reach in any other way. The respondents can complete the questionnaires at their convenience. Mail surveys, being more impersonal, provide more anonymity than personal interviews. Mail surveys are totally free from the interviewer's bias, as there is no personal contact between the respondents and the investigator. Certain personal and economic data may be given accurately in an unsigned mail questionnaire.

Disadvantages of Questionnaires: The disadvantages of mail surveys are: 1. The scope for mail surveys is very limited in a country like India, where the percentage of literacy is very low. 2. The response rate of mail surveys is low. Hence, the resulting sample may not be a representative one.

Q.6 Explain the various steps in processing of Data. Answer: The various steps in the processing of data may be stated as:
Identifying the data structures
Editing the data
Coding and classifying the data
Transcription of data
Tabulation of data

Objectives: After studying this lesson you should be able to understand:
Checking for analysis
Editing
Coding
Classification
Transcription of data
Tabulation
Construction of frequency tables
Components of a table
Principles of table construction
Frequency distribution and class intervals
Graphs, charts and diagrams
Types of graphs and general rules
Quantitative and qualitative analysis
Measures of central tendency
Dispersion
Correlation analysis
Coefficient of determination

1. Checking for Analysis: In the data preparation step, the data are prepared in a data format which allows the analyst to use modern analysis software such as SAS or SPSS. The major criterion in this is to define the data structure. A data structure is a

dynamic collection of related variables and can be conveniently represented as a graph whose nodes are labelled by variables. The data structure also defines the stages of the preliminary relationships between variables/groups that have been pre-planned by the researcher. Most data structures can be graphically presented to give clarity to the framed research hypothesis. A sample structure could be a linear structure, in which one variable leads to another and, finally, to the resultant end variable. The identification of the nodal points and the relationships among the nodes can sometimes be a more complex task than estimated. When the task is complex, involving several types of instruments being collected for the same research question, the procedure for drawing the data structure would involve a series of steps. In several intermediate steps, the heterogeneous data structures of the individual data sets can be harmonised to a common standard and the separate data sets then integrated into a single data set. A clear definition of such data structures helps in the further processing of data. 2. Editing: The next step in the processing of data is editing of the data instruments. Editing is a process of checking to detect and correct errors and omissions. Data editing happens at two stages: one at the time of recording of the data and the second at the time of analysis of the data. a. Data Editing at the Time of Recording of Data: Document editing and testing of the data at the time of data recording is done with the following questions in mind: Do the filters agree or are the data inconsistent? Have missing values been set to values which are the same for all research questions? Have variable descriptions been specified? Have labels for variable names and value labels been defined and written?
All editing and cleaning steps are documented, so that the redefinition of variables or later analytical modification requirements can be easily incorporated into the data sets. b. Data Editing at the Time of Analysis of Data: Data editing is also a requisite before the analysis of data is carried out. This ensures that the data are complete in all respects for subjecting them to further analysis. Some of the usual checklist questions a researcher can use for editing data sets before analysis are: 1. Is the coding frame complete? 2. Is the documentary material sufficient for the methodological description of the study? 3. Is the storage medium readable and reliable? 4. Has the correct data set been framed? 5. Is the number of cases correct? 6. Are there differences between the questionnaire, the coding frame and the data? 7. Are there undefined and so-called wild codes? 8. Has the first counting of the data been compared with the original documents of the researcher? The editing step checks for the completeness, accuracy and uniformity of the data as created by the researcher. Completeness: The first step of editing is to check whether there is an answer to all the questions/variables set out in the data set. If there is any omission, the researcher

sometimes would be able to deduce the correct answer from other related data on the same instrument. If this is possible, the data set has to be rewritten on the basis of the new information. For example, the approximate family income can be inferred from answers to other probes such as the occupation of family members, sources of income, approximate spending, and the saving and borrowing habits of family members. If the information is vital and has been found to be incomplete, then the researcher can take the step of contacting the respondent personally again and soliciting the requisite data. If none of these steps is possible, the data must be marked as missing. Accuracy: Apart from checking for omissions, the accuracy of each recorded answer should be checked. A random check process can be applied to trace errors at this step. Consistency in responses can also be checked here: cross-verification of a few related responses helps in checking for consistency. The reliability of the data set depends heavily on this step of error correction. While clear inconsistencies should be rectified in the data sets, fake responses should be dropped from them. Uniformity: In editing data sets, another keen lookout should be for any lack of uniformity in the interpretation of questions and instructions by the data recorders. For instance, the responses towards a specific feeling could have been queried from a positive as well as a negative angle. While interpreting the answers, care should be taken to record each answer uniformly as a positive-question or a negative-question response, and to check for consistency in coding throughout the questionnaire/interview schedule data set. The final point in the editing of a data set is to maintain a log of all corrections that have been carried out at this stage.
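The completeness and accuracy checks described above can be sketched in a few lines of Python. This is only an illustration: the record fields and the plausibility rule (income must not be negative) are invented, not taken from the text.

```python
# Sketch of editing checks on a small data set. Records are dicts keyed
# by variable name; the field names and values are invented examples.
records = [
    {"id": 1, "occupation": "Salaried", "income": 25000},
    {"id": 2, "occupation": "Business", "income": None},   # omission
    {"id": 3, "occupation": "Retired",  "income": -500},   # implausible
]

REQUIRED = ("id", "occupation", "income")

def completeness_check(rows):
    """Return ids of records with a missing answer for any variable."""
    return [r["id"] for r in rows
            if any(r.get(field) is None for field in REQUIRED)]

def accuracy_check(rows):
    """Return ids whose income is implausible (negative), for follow-up."""
    return [r["id"] for r in rows
            if r["income"] is not None and r["income"] < 0]

print(completeness_check(records))  # [2]
print(accuracy_check(records))      # [3]
```

Records flagged by the completeness check would be candidates for deduction from related answers or for re-contacting the respondent; only if neither works is the value marked as missing.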
The documentation of these corrections helps the researcher to retain the original data set. 3. Coding: The edited data are then subjected to codification and classification. The coding process assigns numerals or other symbols to the several responses of the data set. It is therefore a prerequisite to prepare a coding scheme for the data set. The recording of the data is done on the basis of this coding scheme. The responses collected in a data sheet vary: sometimes the response could be a choice among multiple responses, sometimes it could be in terms of values, and sometimes it could be alphanumeric. If some codification is done to the responses at the recording stage itself, it will be useful in the data analysis. When codification is done, it is imperative to keep a log of the codes allotted to the observations. This code sheet helps in the identification of variables/observations and the basis for such codification. The first coding done to primary data sets concerns the individual observations themselves. This response-sheet coding benefits the research in that verification and editing of recordings, and further contact with respondents, can be achieved without difficulty. The codification can be made at the time of distribution of the primary data sheets itself. The codes can be alphanumeric, to keep track of where and to whom each sheet has been sent. For instance, if the data are collected from the public at different localities, the sheets distributed in a specific locality may carry a unique alphabetic part code. To this alphabetic code, a numeric code can be attached to distinguish the person to whom the primary instrument was distributed. This also helps the researcher to keep track of who the respondents are and who are the probable respondents from

whom primary data sheets are yet to be collected. Even at a later stage, any queries on a specific response sheet can be clarified. The variables or observations in the primary instrument would also need codification, especially when they are categorised. The categorisation could be on a scale, i.e., most preferable to not preferable, or it could be very specific, such as gender classified as male and female. Certain classifications can lead to open-ended categories, such as an education classification of Illiterate, Graduate, Professional, Others (please specify). In such instances, the codification needs to be carefully done to include all possible responses under "Others, please specify". If the preparation of an exhaustive list is not feasible, then it is better to create a separate variable for the "Others, please specify" category and record all such responses as given. Numeric Coding: Coding need not necessarily be numeric; it can also be alphabetic. Coding has to be compulsorily numeric when the variable is subject to further parametric analysis. Alphabetic Coding: A mere tabulation, frequency count or graphical representation of the variable may be based on an alphabetic coding. Zero Coding: A code of zero has to be assigned carefully to a variable. In many instances, when manual analysis is done, a code of 0 implies no response from the respondents. Hence, if a value of 0 is to be given to specific responses in the data sheet, it should not lead to the same interpretation as non-response. For instance, if "no" responses are coded 0, then non-responses should be given a code other than 0 in the data sheet. An illustration of the coding process of some of the demographic variables is given in the following table.
Question   Variable/          Response categories                          Code
Number     Observation
1.1        Organisation       Private / Public / Government                Pt / Pb / Go
3.4        Owner of Vehicle   Yes / No                                     1 / 2
4.2        Vehicle performs   Excellent / Good / Adequate / Bad / Worst    1 / 2 / 3 / 4 / 5
5.1        Age                Up to 20 years / 21-40 years / 40-60 years   1 / 2 / 3
5.2        Occupation         Salaried / Professional / Technical /        S / P / T /
                              Business / Retired / Housewife / Others      B / R / H / =

= Could be treated as a separate variable/observation and the actual response could be recorded. The new variable could be termed "other occupation". The coding sheet needs to be prepared carefully if the data recording is not done by the researcher but is outsourced to a data-entry firm or individual. In order to enter the data in the same perspective as the researcher would like to view it, the data coding sheet is to be prepared first, and a copy of it should be given to the outsourcer to help in the data-entry procedure. Sometimes, the researcher might not be able to code the data from the primary instrument itself. He may need to classify the responses and then code them. For this purpose, classification of data is also necessary at the data-entry stage. 4. Classification: When open-ended responses have been received, classification is necessary to code the responses. For instance, the income of the respondent could be an open-ended question. From all responses, a suitable classification can be arrived at. A classification method should meet certain requirements or be guided by certain rules. First, classification should be linked to the theory and the aim of the particular study. The objectives of the study will determine the dimensions chosen for coding. The categorisation should meet the information required to test the hypothesis or investigate the questions. Second, the scheme of classification should be exhaustive; that is, there must be a category for every response. For example, the classification of marital status into the three categories married, single and divorced is not exhaustive, because responses like widowed or separated cannot be fitted into the scheme. Here, an open-ended question will be the best mode of getting the responses. From the responses collected, the researcher can fit a meaningful and theoretically supportive classification.
The inclusion of the category "Others" tends to absorb the scattered, infrequent responses from the data sheets. But the "Others" categorisation has to be carefully used by the researcher, since it tends to defeat the very purpose of classification, which is to distinguish between observations in terms of the properties under study. The classification "Others" is most useful when only a minority of respondents in the data set give varying answers. For instance, the newspaper reading habits of 100 respondents may be surveyed: 95 of the respondents could be easily classified into five large reading groups, while 5 respondents give unique answers. These answers, rather than being separately considered, can be clubbed under the "Others" heading for meaningful interpretation of the respondents and their reading habits. Third, the categories must also be mutually exclusive, so that each case is classified only once. This requirement is violated when some of the categories overlap or different dimensions are mixed up.
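The classification rules above can be made concrete with a short sketch that codes an open-ended income response into classes. The class boundaries, labels and codes are invented for illustration; the point is that the scheme is exhaustive (every amount falls somewhere) and mutually exclusive (no amount falls in two classes).

```python
# Sketch: classifying open-ended income responses into an exhaustive,
# mutually exclusive scheme and assigning numeric codes. Boundaries,
# labels and codes are invented for illustration.
CLASSES = [
    (0, 10000, 1, "Low"),
    (10000, 30000, 2, "Middle"),
    (30000, float("inf"), 3, "High"),
]

def classify_income(amount):
    """Map a raw income figure to (code, label); lower bound inclusive,
    upper bound exclusive, so the classes cannot overlap."""
    for low, high, code, label in CLASSES:
        if low <= amount < high:
            return code, label
    raise ValueError("scheme is not exhaustive for %r" % amount)

print(classify_income(25000))  # (2, 'Middle')
```

Using half-open intervals is one simple way to guarantee mutual exclusivity: a boundary value such as 10000 belongs to exactly one class.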

The number of categories for a specific question/observation at the coding stage should be the maximum permissible, since reducing the number of categories at the analysis stage is easier than splitting an already classified group of responses. However, the number of categories is limited by the number of cases and the anticipated statistical analyses to be used on the observations. 5. Transcription of Data: When the observations collected by the researcher are not very large, the simple inferences which can be drawn from the observations can be transferred to a data sheet, which is a summary of all responses on all observations from a research instrument. The main aim of transcription is to minimise the shuffling process between several responses and several observations. Suppose a research instrument contains 120 responses and observations have been collected from 200 respondents; a simple summary of one response across all 200 observations would require shuffling through 200 pages. The process is quite tedious if several summary tables are to be prepared from the instrument. The transcription process helps in the presentation of all responses and observations on data sheets, which can help the researcher to arrive at preliminary conclusions as to the nature of the sample collected, etc. Transcription is, hence, an intermediary process between data coding and data tabulation. a. Methods of Transcription: The researcher may adopt manual or computerised transcription. Long worksheets, sorting cards or sorting strips could be used by the researcher to manually transcribe the responses. Computerised transcription could be done using a database package, spreadsheets, text files or other databases. The main requisite for a transcription process is the preparation of data sheets where the observations are the rows of the database and the responses/variables are the columns.
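A data sheet of this kind, with observations as rows and labelled variables as columns, can be sketched as follows. The labels CS1 to CS10 stand for ten consumer-satisfaction statements; the coded responses shown are invented.

```python
# Sketch of a transcription data sheet: each observation is a row, each
# labelled variable (CS1..CS10) is a column. The responses are invented.
labels = ["CS%d" % i for i in range(1, 11)]   # CS1, CS2, ..., CS10

# Each inner list is one respondent's coded answers to the ten statements.
responses = [
    [4, 5, 3, 4, 4, 5, 2, 3, 4, 5],
    [3, 3, 4, 2, 5, 4, 4, 3, 2, 4],
]

# Build the data sheet: one dict per observation, keyed by variable label.
data_sheet = [dict(zip(labels, row)) for row in responses]
print(data_sheet[0]["CS1"])  # 4
```

Because every column carries a label, any summary table or later analysis can refer to the short label instead of repeating the full statement from the research instrument.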
Each variable should be given a label, so that long questions can be covered under the label names. The label names are thus the links to specific questions in the research instrument. For instance, opinion on consumer satisfaction could be identified through a number of statements (say 10); the data sheet does not contain the details of each statement, but gives a link to the question in the research instrument through variable labels. In this instance the variable names could be given as CS1, CS2, CS3, CS4, CS5, CS6, CS7, CS8, CS9 and CS10, the label CS indicating consumer satisfaction and the numbers 1 to 10 indicating the statements measuring consumer satisfaction. Once the labelling process has been done for all the responses in the research instrument, the transcription of the responses is done. b. Manual Transcription: When the sample size is manageable, the researcher need not use any computerised process to analyse the data; he could prefer manual transcription and analysis of responses. Manual transcription would be chosen when the number of responses in a research instrument is very small, say 10 responses, and the number of observations collected is within 100. A transcription sheet of 100x50 rows/columns (assuming each response has 5 options) can be easily managed by a researcher manually. If, on the other hand, the variables in the research instrument number more than 40 and each variable has 5 options, this leads to a worksheet of size 100x200, which might not be easily managed manually. In that instance, if the number of observations is less than 30, the worksheet could still be attempted manually. In all other instances, it is advisable to use a computerised transcription process. c. Long Worksheets: Long worksheets require quality paper, preferably chart sheets, thick enough to last several usages. These worksheets normally are ruled both

horizontally and vertically, allowing responses to be written in the boxes. If one sheet is not sufficient, the researcher may use multiple ruled sheets to accommodate all the observations. The headings of responses, which are the variable names, and their coding (options) are filled in the first two rows. The first column contains the code of the observations. For each variable, the responses from the research instrument are then transferred to the worksheet by ticking the specific option that the observer has chosen. If a variable cannot be coded into categories, requisite length for recording the actual response of the observer should be provided in the worksheet. The worksheet can then be used for preparing the summary tables or can be subjected to further analysis of the data. The original research instruments can now be kept aside as safe documents. Copies of the data sheets can also be kept for future reference. As discussed under the editing section, the transcribed data have to be subjected to testing to ensure error-free transcription. A sample worksheet is given below for reference.
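A long worksheet of this layout can be sketched as follows: the first two rows carry the variable names and their option codes, the first column carries the observation code, and a tick marks the option chosen. All variable names, codes and ticks below are invented for illustration.

```python
# Sketch of a long-worksheet layout (invented headings): row 1 holds
# variable names, row 2 their option codes, column 1 the observation
# code, and "x" marks the ticked option.
header_vars = ["Obs.",   "Gender", "Gender", "Vehicle", "Vehicle"]
header_opts = ["code",   "M",      "F",      "Yes",     "No"]

rows = [
    ["A01", "x", "",  "x", ""],
    ["A02", "",  "x", "",  "x"],
]

for line in [header_vars, header_opts] + rows:
    print("".join(cell.ljust(9) for cell in line))

# A frequency count can be read straight off the ticks, e.g. vehicle owners:
yes_count = sum(1 for r in rows if r[3] == "x")
print(yes_count)  # 1
```

The same tick-counting step is what the next section formalises as tabulation.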

Transcription can be made as and when the edited instrument is ready for processing. Once all schedules/questionnaires have been transcribed, the frequency tables can be constructed straight from the worksheet. Other methods of manual transcription include the adoption of sorting strips or cards. In olden days, data entry and processing were done through mechanical and semi-automatic devices such as key punches using punch cards. The arrival of computers has changed the data processing methodology altogether. 6. Tabulation: The transcription of data can be used to summarise and arrange the data in compact form for further analysis. This process is called tabulation. Thus, tabulation is a process of summarising raw data and displaying them in compact statistical tables for further analysis. It involves counting the number of cases falling into each of the categories identified by the researcher. Tabulation can be done manually or through the computer. The choice depends upon the size and type of study, cost considerations, time pressures and the availability of software packages. Manual tabulation is suitable for small and simple studies.
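Computerised tabulation of coded responses amounts to a frequency count over the categories. As a minimal sketch, the occupation codes below follow the earlier coding illustration (S, P, T, B, ...), but the responses themselves are invented:

```python
# Sketch of tabulation: counting the cases falling into each category.
# Occupation codes follow the earlier illustration; data are invented.
from collections import Counter

occupation_codes = ["S", "P", "S", "B", "T", "S", "P", "B", "S"]

frequency_table = Counter(occupation_codes)
for code, count in sorted(frequency_table.items()):
    print(code, count)
# B 2
# P 2
# S 4
# T 1
```

The resulting counts are exactly the body of a one-way frequency table; cross-tabulation of two variables extends the same idea to pairs of codes.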
