
JOURNAL OF OCCUPATIONAL BEHAVIOUR, Vol. 1, 163-179 (1980)

Toward a typology of organization development research¹


JERRY I. PORRAS
Graduate School of Business, Stanford University

NANCY ROBERTS
School of Education, Stanford University

SUMMARY

OD research is currently in a confused yet rapidly developing state. It is argued here that this situation exists partly because of a basic confusion as to exactly what constitutes OD research. A definitional typology is proposed based on the intentions the researcher has for how the findings are to be used. Two primary intentions are identified: (a) use of research results to guide practice, and (b) use of research results to develop theory. A typology using these two intentions individually or in combination is developed and discussed.

INTRODUCTION

Since 1970, Organization Development (OD) has been evolving and expanding at an accelerating rate. Books and articles on OD have proliferated. OD divisions in professional groups such as the American Society of Training Directors (ASTD) have experienced substantial increases in membership. The OD Network, an organization of change practitioners, is flourishing. All are testimony to OD's rapid growth in popularity among both academics and practitioners. Most will agree that OD is 'in'.

But, despite a growing recognition of OD as a field, there remains strong disagreement over what OD is, the impact it has on organizations and their members, and how to determine this impact. These disagreements have many sources. One relates to the difficulty in conceptually and theoretically defining OD. As Kahn (1974) noted, the status of OD is confused because it is not precisely defined, nor is it reducible to specific uniform behaviours. It lacks a prescribed or verifiable place in a network of logically related concepts or in a clearly stated theory. One outgrowth of this theoretical poverty is that no organized thrusts exist in the investigations of the various aspects of a change process. OD researchers
¹Information for this analysis was derived from a computer search of the OD research literature covering the period 1975-78, plus approximately 100 articles (published and unpublished) received in response to a request for OD research materials sent to over 800 members of the American Academy of Management's OD division. Most of these articles were unpublished and reflected the current 'state of the art' in OD research.

0142-2774/80/0301-0163$01.00 © 1980 by John Wiley & Sons, Ltd.

Received 2 April 1979


find it difficult to identify specific sets of variables to investigate in replication studies. As Ault (Nielsen, et al., 1977) observes: 'everyone want(s) to measure everything for every reason and we end up not knowing what we have measured or expecting one sort of assessment to satisfy all sorts of constituencies' (p. 20).

The state of current OD research methodologies further contributes to the confusion over the effects of OD and the best ways to assess them. Several articles examining various aspects of current OD research and research methodologies have recently appeared in the literature. Cummings, Molloy and Glen (1977) evaluated the quality of fifty-eight studies of job change projects. Pate, Nielsen and Bacon (1977) identified thirty-eight OD studies and attempted to assess important methodological characteristics. Porras and Berg (1978a) selected thirty-five empirical studies of human processual approaches to OD and carefully evaluated their methodological quality. On a slightly different tack, White and Mitchell (1976), using facet theory, developed a method for classifying OD research variables. Porras and Berg (1978b) used a grounded theory approach to the same problem and presented a classification scheme of the variables used in the empirical studies they analysed.

On a more operational level, further difficulties in assessing OD are rooted in the lack of support for the assessment process. Universities tend to reward research with quicker turn-around times. OD research projects normally take two to three years. Tenure clocks run much too quickly to accommodate this sort of timetable. At the practitioner level, there is relatively little reinforcement given to OD change agents for researching their interventions. Porras and Patterson (1979) outline several barriers to assessment by practitioners and discuss a variety of reasons why it continues to be systematically avoided.
The career of most change practitioners, by far the greatest number of individuals engaged in planned organizational change activities, does not depend on performing effective assessments. In fact, almost all of the rewards given to applied change agents are for bringing about changes and not for measuring them. As a consequence, typically there are no strong organizational motives for engaging in more formal assessment activity. In fact, some would argue that not only is assessment not rewarding, it is actually punishing. Running the risk of discovering that a well-accepted and talked-about programme was, in actuality, ineffectual (a common finding) is not a desired experience for any practitioner. Performing assessments only to find that nobody really seems to care can be equally frustrating. It is also quite difficult for a successful practitioner to attempt to conduct assessments while being deluged with requests to engage in new or expanded planned-change programmes. The normal response would be to avoid the assessment process, a reaction which is totally reasonable, given the organization dynamics described above (p. 40).

It is not our intention here to explore these issues more fully. Rather, our purpose is to examine OD research from a somewhat different perspective. We intend to more precisely describe the OD research process by specifically


identifying and defining the various types of research presently conducted (having found in our literature review that the term research is used in several different ways to accomplish vastly different objectives and ends). However, before proceeding it seems appropriate to provide a working definition of OD so as to establish a context for this discussion. Friedlander and Brown (1974) suggest one of the more adequate descriptions of OD presently available in the literature. They view OD as:

. . . a method for facilitating change and development in people (e.g., styles, values, skills), in technology (e.g., greater simplicity, complexity), and in organizational processes and structures (e.g., relationships, roles) . . . organization development calls for change in technology and structure (technostructural) or change in individuals and their interaction processes (human processual), rather than efforts to change only people, only the structure/process, or only the technology of the organization (p. 314).

Armed with this definition of OD we now turn to the question of defining what OD research is.

WHAT IS OD RESEARCH?

The question of what research is in a certain field is an old query in some disciplines, but in the area of OD it is relatively new and critical. In a survey of OD research conducted at the OD '78 Conference of Current Theory and Practice in Organization Development in San Francisco (March 16-17, 1978), King, Sherwood and Manning (1978) found that:

. . . everyone has their favorite study. Of those who could cite studies, . . . there is little agreement over what research provides the foundation for developmental work in organizations. OD research appears to be an unorganised and fragmented literature, having diverse appeal to practitioners and providing little if any structure for the strategies employed in planned change efforts. There simply is no central body of OD research recognized by those performing OD activities (p. 3).
Based on this lack of agreement on what constitutes OD research, additional calls for more research without clarification of what is meant by research would seem to be dysfunctional and inappropriate. The critical task, as we see it, is to begin defining OD research by constructing a framework for organizing this 'fragmented literature'. Fundamental to the search for an acceptable definition of OD research and a framework to organize it is an assumption that the goal of OD research is to link theory and practice² (Figure 1). OD research should stand at the interface
²The ultimate objective of OD research is similar to the goals of applied science research in the fields of education, medicine, and engineering.


[Figure 1: a diagram showing Research as the link between Theory and Practice.]

Figure 1. The role of research in organization development

between theory building and the development of practice so as to provide the necessary linkage for converting organizational realities of planned change into generalized theoretical concepts and vice versa. In this sense, the research process leads to a mutual reinforcement and expansion of these two elements: theory building on practice, and practice testing the validity of theory. The relationship can be viewed as cyclical and symbiotic. Without this critical interaction process, one can argue that neither theory nor practice will develop. As Brown (1973) noted, 'a behavioural science that separates research and practice may avoid internal conflicts but at the high cost of virtual paralysis' (p. 3)³.

Given this expressed ideal of linking behavioural science theory and practice via the research process, it is important to first ask whether or not current OD research reflects and supports this ideal. A reading of the literature gives some cause for concern. The relationship between theory building and practice in OD has been described as almost nonexistent by some (Kahn, 1974; Sanford, 1970), strained by others (Brown, 1973; White, Cochran, and Latham, 1977), and increasing in distance by still others (Margulies and Raia, 1968). Regardless of the dimensions of this separation, most agree that a gap between theory-building and practice does exist. Brown (1973) cites three categories of differences which contribute to the gulf between theory building and practice: '(a) differences in norms and goals, (b) differences in organizational bases, and (c) differences in methodology' (p. 5). Briefly summarized, he sees practitioners as emphasizing workability and seeking answers to specific organizational problems that require immediate solutions, with little importance placed on rigorous research design and methodology.
Theoreticians, on the other hand, usually associated with research-based institutions, typically seek general
³Brown uses the terms research and practice as we are using the terms theory building and practice. Our substitution of the term theory building for research avoids the confusion of using the term research in different ways, a problem we discuss in more detail later on.


theoretical explanations for phenomena. Ideally this means examination of change processes over long periods of time using controlled experiments marked by careful research designs and statistical procedures.

For the most part we agree with Brown's conclusion that the gap between theory and practice is at least partially due to the conflicts between practitioners and theoreticians. However, we would extend his analysis one step further. It is our contention that the gap also is due to the confusion in the OD literature over what constitutes the OD research process. Similar to the conclusions reached by King, Sherwood and Manning (1978), our review of the OD literature reveals different definitions and interpretations of OD research. More importantly, these definitions in turn vary in their intention to link theory with practice.

Following this line of reasoning, two alternatives for dealing with this state of OD research emerge. Research can be narrowly defined to include only those investigative activities performed by theoreticians (usually referred to as basic or academic research) and the rest eliminated from consideration. Or, the definition of OD research can be broadened to include those OD activities which, in their attempts to link theory and practice, contribute in varying degrees to the research process. The latter path is chosen for this paper.

Broadly stated, OD research can be described as the investigation of the dynamics and outcomes of Organization Development processes. This general perspective permits OD research to be viewed as a multifaceted process, composed of several distinct types rather than the one single, narrowly-defined type often referred to as 'academic research'. In this view, each OD research type can therefore have its own unique characteristics, purpose, and outcomes. Specifying the research process in this manner should lead to a more useful understanding of OD research and its associated problems.
In the next section, different types of OD research will be described. Definitions of the various types will be derived from an analysis of the intentions of the researcher as he/she designs the research process.
OD RESEARCH DEFINED BY INTENTION

A careful review of the literature reveals variations in the intentions of the OD researcher: intentions as to the end use of the research results. Two 'pure' uses appear prevalent. The first is the application of findings to guide practice. The second is the use of research to develop theory. These intentions seem to exist both in their pure form as well as in combination with each other. Using intention as a basis for categorizing OD research studies, four common patterns emerge: (a) use of findings for the real-time guidance of interventions (implementation research); (b) use of findings to determine the global outcomes (i.e., the overall impact) of an intervention (evaluation research); (c) use of findings to determine global outcomes plus an understanding of the processes through which the organization achieves them (assessment research); and finally (d) research results used to uncover the fundamental relationships existing in any planned change process (theory-building research). Before more fully describing each type, it is important to warn the reader that some of these terms, e.g., assessment and evaluation, are often used


interchangeably in the literature. One important purpose of this discussion is to more clearly distinguish and label the various types of current research so as, first, to avoid the confusion that exists when the same terms are used to describe somewhat different activities and, second, to provide the basis for a more precise discussion of the problems faced in OD research. This latter issue is especially important since most research studies warn against research problems unique to their domain, problems which often are not appropriate concerns for the other types of OD research. For example, advising the practitioner who focuses on implementation research to conduct assessment with the rigour required for theory-building research may not only be impractical, but also inappropriate. One additional caution: the discussion below is organised around the use of research results and does not, at this stage, deal with the appropriateness or inappropriateness of whatever research methodologies were used to collect the data. At this point we wish to remain separated from questions and debates over which methodology is more suited to OD research.⁴

One way to conceptualize the differences among the four categories identified is to graphically demonstrate their location on a spectrum running from total application of research results to practice, to total application of research results to theory building. Figure 2 depicts the general relationship of the four research approaches along these two dimensions. The right-hand vertical axis on the figure reflects the degree to which research results are used for theory building, the left-hand axis the degree to which they are used for practice. Therefore, any point along the horizontal axis will reflect a combination of theory building and practice along the vertical axes.
For example, the right extreme would represent 100 per cent use of research results for theory building while the left extreme would reflect the total use of research results for practice. All points in between

[Figure 2: a horizontal spectrum with 'Results used for practice' at the left and 'Results used for theory development' at the right; implementation research, evaluation research, assessment research, and theory-building research are arranged from left to right along it.]

Figure 2. Research classifications based on use of results

⁴For further elaboration on different models of inquiry see Bowen (1978), Argyris (1978), Dunn and Swierczek (1977), Akin (1978) (for a rebuttal of Dunn and Swierczek) and Diesing (1971).


would represent combinations of the two uses. As the figure shows and the discussion above has specified, the four categories of research (implementation, evaluation, assessment, and theory-building), taken in order, reflect increasingly greater emphasis on theory development and less orientation toward direct application of research results.
Implementation research

Probably the most common form of research in OD is what we have labelled 'implementation research'. Its specific purpose is to guide the intervention process being carried out by the OD consultant. Within this general category three types are distinguishable: (a) diagnostic research; (b) survey feedback research; and (c) action-research. Each of these three types is more complex than the one immediately preceding it on the list, and each typically incorporates the activities of the types before it. Figure 3 demonstrates these relationships.

The first type, diagnostic research, is to one degree or another a part of every quality OD intervention. Diagnostic research typically consists of the measurement of attitudes, behaviours, values, and goals of the organizational members for the purpose of isolating organizational problems and determining appropriate intervention strategies and techniques to be applied. This type of research is generally somewhat informal and unstructured and, depending on the predilections of the consultant, the client system may participate in varying degrees in the collection and interpretation of the data generated.⁵

Survey feedback, a second type of implementation research, incorporates many of the processes of diagnostic research but goes quite a bit further in rigour and use. Data are collected, analysed, fed back to selected members of the organization, actions planned, and actions taken. Unlike diagnostic research, however, the participation of the client is generally much higher (mostly occurring at the feedback stage), and research results are available on a more systematic, preplanned basis. In this sense, the survey feedback process has an iterative character to it, i.e., it occurs on a regular basis rather than only at those points in the intervention process which might call for a diagnosis.
Methods of feeding back data are systematic and include a wide range of organizational participation, typically involving every person who provided data for the analysis. The progression of data feedback characteristically begins with the top management group of the target system and cascades down the
Diagnostic research: data collection in the initial phase of the intervention cycle.
Survey feedback research: data collection in the initial phase of the intervention cycle; periodically scheduled data collection, feedback, action planning and action.
Action-research: data collection in the initial phase of the intervention cycle; periodically scheduled data collection, feedback, action planning and action; ongoing collection of data, action planning, action, and assessment of results.

Figure 3. Types of implementation research


⁵For examples of diagnostic research see Beckhard (1969) and Czepiel and Greller (1978).


organization to include succeedingly lower levels of management and their respective work groups. The systematic nature of this feedback process further implies regularly scheduled periods of data collection and feedback to the client system. In this manner, the overall development of the change process is guided by the data.⁶

Action-research, a third form of implementation research, typically incorporates both diagnostic activities and survey feedback procedures.⁷ French and Bell (1978) describe action-research as a '. . . process of systematically collecting research data about an ongoing system relative to some objective, goal, or need of that system; feeding these data back into the system; taking actions by altering selected variables within the system based both on the data and on hypotheses; and evaluating results of actions by collecting more data' (p. 88). The main difference between action-research and survey feedback research is that the former involves a more intimate and real-time connection between research and action than the latter. With action-research, the research process is more tightly intertwined with the action process so that the two can operate in closer concert to guide change activity and monitor intervention effects. The interplay between action and research in the survey feedback approach described above is much less complex and more preplanned than organic. In action-research, the research process grows out of the action process and vice versa.

Frohman, Sashkin, and Kavanagh (1976) have developed similar distinctions between survey feedback and action-research. As they use the term, survey feedback also suggests more of a standardized data collection and feedback process without consideration of alternative methods.
In contrast, action-research emphasizes the process of client-consultant collaboration for exploring problems, generating data, developing action plans, implementing these plans and ultimately evaluating the total process. Action-research, then, is an emergent process, organically developing from the interaction between client and consultant, while survey feedback is more prescriptive, planned and less flexible in its approach to data collection.

Although most definitions of action-research concur with this brief analysis (Lewin, 1946; Lippitt, 1950; Corey, 1953; Beckhard, 1969; Bowers and Franklin, 1972; French and Bell, 1978), there is one aspect of action-research on which there seems to be less accord. Kurt Lewin (1946), who first coined the term action-research, believed that scientific insight could be derived by applying research methods to the study of social programmes in order to determine whether or not change takes place. Notwithstanding Lewin's call for the transfer of this knowledge gained in practice to the more general theoretical realm, there is not much evidence in the literature to suggest that the type of
⁶See Bowers and Franklin (1972) for an example of survey-guided development, an approach to change that uses survey feedback research, and Bass (1974).
⁷For examples of action-research see Culbert (1972), McGill and Horton (1973), Hautaluoma and Gavin (1975) and Manley and McNichols (1977).


action-research done today generates new behavioural science knowledge and general theory. Brown (1973), Rapoport (1970), and Tichy and Hornstein (in press) concur in this assessment, generally concluding that 'action-research has been shorn of its original commitment to developing general theory and knowledge in addition to specific problem solutions' (Brown, 1973, p. 2).

In summary, there appears to be general agreement in the literature on the various types of research characterized here as implementation research: (a) diagnostic research; (b) survey feedback research; and (c) action-research. As presently conducted, all three tend to ignore or de-emphasize the need to link practice with general behavioural science theory and all put primary emphasis on finding solutions to specific organization problems.
Evaluation research

Evaluation research, although also predominantly focused on the practice of OD, attempts to go beyond implementation research and allow for some theory building. In evaluation research, the impact of change interventions on prespecified organizational outcome variables is measured. It aims at testing applications and the ability to influence variables through controlled interventions (Suchman, 1967). Its particular purpose, according to Nicholas (1978), 'is to provide guidance for specific programmes involving specific decisions to be made by particular individuals' (p. 6). Nielsen and Kimberly (1976) describe evaluation research as consisting of: '. . . the analysis and interpretation of the consequences of the interaction between organization resources (e.g., employees, time and money) and procedures (e.g., work rules and job descriptions) in order that past activities can be evaluated and future courses of action can be determined'⁸ (p. 33). Pate describes evaluation as 'a decision tool that can be used to provide organizational members with relevant information for planning future courses of action. This view emphasizes the careful use of research methods to obtain reliable and valid data' (Nielsen et al., 1977, p. 9).

The results of evaluation research are used for decision-making on the efficacy and possible continuation of the OD project from which they are derived. By specifying if a predetermined set of objectives has been met, decision-makers can ascertain whether the intervention is accomplishing the changes in organizational outputs that they desire and what further programme planning and refinement is necessary.⁹ Variables typically selected to evaluate the outcomes of OD interventions

⁸We should note that Nielsen and Kimberly (1976) actually used the term 'assessment' rather than 'evaluation'.
⁹For examples of evaluation research see Stahl, McNichols, and Manley (1978) and Pate, Nielsen, and Mowday (1977).


generally focus on organizational outcomes such as production levels, productivity, costs, quality, turnover or absenteeism, and satisfaction.

In contrast to implementation research, which places virtually no emphasis on the use of research results for theory development, in this case there is more of a balance between practice and research. Evaluation research results are at times used to build theory. A second important difference between implementation research and evaluation research concerns the selection of research variables. In the latter case, variables which 'bracket' the intervention (i.e., variables which describe outcomes of the system's activities) are of more interest. In the former, the research process focuses not only on outcomes but on internal organizational dynamics, for the purpose of knowing exactly which processes need intervention activity. If the organization is considered a 'black box', evaluation research tends to measure what comes out of the box, while implementation research measures not only what comes out, but also what goes on inside the box. System outputs are typically less important to the implementation researcher than the knowledge of internal changes occurring in the organization. On the other hand, for the decision-maker faced with deciding about the continuation or development of OD programmes, knowing the 'end results' of an OD project tends to be the issue of key interest.

It appears that if evaluation research continues to be primarily programme evaluation (i.e., one specific programme being measured by a limited set of outcome variables), the interest of OD researchers will be primarily on OD practice. When the concern turns to general variables that underlie multiple studies, evaluation research begins to move toward theory building and the development of behavioural science knowledge.
Evaluation research should contribute to our knowledge of planned change (Beer, 1970), but at this point there exist few evaluation studies that actually do contribute to theory development.
Assessment research

Assessment research is the investigation and study of both the processes and outcomes of OD interventions for the express purpose of producing a series of conclusions about the impact of the change activities (to aid in future interventions with the same or other organizations) and to widen the theory base of change activities in general (for academic theory development purposes). This third category of OD research is much less clearly defined in the OD literature than the other three. Most often in the literature 'assessment' is used interchangeably with 'evaluation', with neither term completely describing the research activities discussed in this analysis. Yet, the currently available research studies indicate that in actual fact this category of research does exist and, as a consequence, should be isolated from the others.¹⁰ This lack of clarity in the OD research literature represents an important void in our understanding of the function of research in OD.

It follows from the definition above that assessment research includes a
¹⁰For examples of assessment research see Keys and Bartunek (1977), Keller (1978), and Zand, Steele, and Zalkind (1969).


broader range of investigative activities than the other research categories previously discussed. For example, in examining both assessment research and implementation research, many differences surface. (a) The methodology of assessment research is much more rigorous and sophisticated than that of implementation research. Conscious of threats to validity and reliability, the assessment researcher employs quasi-experimental designs and statistical procedures for methodological rather than action considerations. (b) In assessment research, data are collected at points in time typically too far apart to be useful for guiding action. Furthermore, the complexity of the data normally collected requires a longer time frame for analysis than is suited to the needs of the implementation researcher. (c) The results of action-research guide the next day's intervention and specific ongoing action, while assessment research, which puts less emphasis on data feedback to the organization, looks toward the development of theory rather than action plans. (d) The choice of variables for assessment research ideally draws from a relevant theory of change, although at present no generally accepted theory of organizational change exists. Consequently, this type of research often has to substitute micro-level theory to guide the selection of variables for measurement. In this latter situation, assessment research and action-research are relatively dissimilar. The consultant/researcher going into an action-research intervention would not know what actions she/he would take and, as a consequence, would not know which specific variables to measure. Since change activities are expected to develop 'organically' in response to evolving needs, a priori variable selection is rather difficult.
In contrast, the assessment researcher is guided in variable selection by either a micro-level theory, which only looks at subparts of the overall change process, or by a strategy of measuring a wide range of variables in the hope of capturing the really important ones. This latter approach has been supported through the development of 'general organizational instruments' such as the Survey of Organizations (Taylor and Bowers, 1972). (e) And lastly, there are different levels of generalizability distinguishing assessment research from action-research. Consultants and practitioners of action-research organically develop their personal experiences and expertise from one system intervention to another and tend not to move toward a more general theory of action or intervention. Assessment researchers, on the other hand, tend to use knowledge and information gained from assessments to alter their future activities. This point is discussed more fully below in the comparison between assessment research and evaluation research.

As would be expected, the contrasts between assessment and evaluation research are not as distinct as the ones above. Since assessment research examines both process and outcome variables, it provides a much more comprehensive overview of the change process than does evaluation research, which tends to emphasize the measurement of outcome variables. As a consequence, in contrast to evaluation research, assessment research results are more often used for theory development than for action. In assessment research, theory development generally refers both to the direct development of applied theory by the OD practitioner and to the evolution of more abstract general theory by the OD theoretician. The former notion requires a bit of elaboration since the theory development process of the change practitioner is seldom specified. We see the OD consultant

174

Jerry I. Porras and Nancy Roberts

using assessment research results in two important ways. First, findings from this research process can be used to develop theories about how to change the specific organization in which the practitioner is working. In this regard, the change agent is developing an organization-specific theory of change. A second use of assessment research results by the practitioner is the development of theory which might be idiosyncratic to the particular style of the change agent, and which works for him/her in all types of organizations. This latter theory could be viewed as an intervener-specific theory of change. In terms of methodological rigour and the time frame of research, evaluation research and assessment research are quite similar. Clearly the differences between the two lie in the variables measured, the frequency of measurement, and the use of the research results. In most other respects, the two types of research are basically the same. In summary, the evaluation researcher is more interested in discovering the outcomes of the change process so as to provide information to decision-makers about the scope and direction of future programmes. By contrast, whatever the organizational setting, the main objective of the assessment researcher is to modify and refine his/her knowledge of general planned change dynamics through the findings gained from assessment research in specific organizations. It is this important linkage between practice and theory that is one of the most significant aspects of assessment research.
Theory-building research

Sometimes referred to in the literature as basic scientific or academic research, theory-building research is the fourth category of OD research identified.* A representative definition of this type of research is one developed by Kerlinger (1973), who defines research as a 'systematic, controlled, empirical, and critical investigation of hypothetical propositions about the presumed relations among natural phenomena' (p. 11). One major purpose of scientific research, and the reason it is called theory-building research, is to investigate whether or not organizational 'facts' support theory-based predictions about organizational change behaviour and phenomena. Theory as defined here is 'a set of interrelated constructs (concepts), definitions, and propositions that present a systematic view of phenomena by specifying relations among variables, with the purpose of explaining and predicting the phenomena' (Kerlinger, 1973, p. 9). In brief, this systematic view of phenomena is concerned with: (a) measurement and the properties of measures (e.g., reliability and validity); (b) measurement methods (e.g., observation, questionnaires, interviews, and unobtrusive measures); (c) sampling procedures; (d) the manner in which a study is planned and executed (e.g., pre-experimental, experimental or quasi-experimental designs; Campbell and Stanley, 1966); and (e) strategies researchers may employ to study any given phenomenon (e.g., field studies, laboratory experiments, field experiments, sample surveys, or case studies).

*For examples of theory-building research see Hand, Estafen, and Sims (1975); Toronto (1975); or Van Gundy (1976).


In contrast to assessment research, theory-building research is only indirectly concerned with immediate application of findings to practice, although, like assessment research, it studies both the processes and outcomes of change. Theory-building research is usually predictive in character, specifying relations among variables derived from hypothetical propositions. This hypothesis generation process characteristically distinguishes theory-building research from the other categories, which typically fail to systematically and overtly specify the variables selected and to predict relationships among them. Although to some degree all of the discussed forms of research have underlying assumptions of relationships and causality, the degrees of specificity, overtness and methodological rigour are major points of difference which separate theory-building research from the other three. The primary orientation of theory-building research is to develop basic and general theories of planned change, applicable to a wide range of change situations. Emphasis here, as compared to the other types, is on methodological rigour and the investigation of more abstract theoretical issues.

SUMMARY AND CONCLUSIONS

A summary of the characteristics of the four categories of OD research is shown in Table 1, which lists the basic points of similarity and difference mentioned above. This matrix is in some respects similar to the model developed by Berkowitz (1969) differentiating evaluation research in terms of its various audiences, and to Thompson's (1978) model, which was adapted from Berkowitz to expand on his original points. Both authors reviewed evaluation research from the points of view of decision-makers, change agents (technical specialists or consultants) and behavioural scientists, arguing that different types of studies were more appropriate for different types of audiences. However, since Berkowitz's categories of audiences are neither exhaustive nor mutually exclusive (e.g., 'participants' can be 'organizational decision-makers' or 'behavioural scientists'), they can result in overlap and some confusion in determining the appropriateness of any research strategy. The intention perspective presented here considers the audience as one dimension rather than as the basis for the entire classification scheme. This allows for some potential fuzziness in the determination of audiences without jeopardizing the overall approach. Critical to the continued development of the OD field is the conceptualization of the research process as a linking mechanism between theory building and practice. In order to understand this process, clear differentiation among the different types of research presently conducted is imperative. We need to know what research is being done, how it is done, how it differs in its approach, and ultimately how to coordinate the results. In order to avoid paralysis of the OD field, behavioural science practice must be derived from theory, and theory must be applied and tested in practice.
Our goal here has been to conceptualize the research process in such a way that the coordination and the linkage of theory and practice are more probable in the future.


Table 1. Characteristics of the four categories of OD research [original table illegible in this reproduction]



ACKNOWLEDGEMENTS


An earlier version of this paper was presented at the 1978 National Academy of Management meetings, San Francisco. The authors are indebted to Tom Cummings, Reuben Harris, John Nicholas, and Kerry Patterson for their comments on earlier drafts. This research was supported in part by the Shell Companies Foundation, Inc.

REFERENCES

Akin, G. (1978). 'Grounded theory doesn't come easily: a response to Dunn and Swierczek', Journal of Applied Behavioral Science, 14(4), 557-560.
Argyris, C. (1978). 'How normal science makes leadership research less additive and less applicable'. Paper presented for the Leadership Symposium at Southern Illinois University, Carbondale Campus, October.
Bass, B. M. (1974). 'The substance and the shadow', American Psychologist, 29, 870-886.
Beckhard, R. (1969). Organization Development: Strategies and Models, Addison-Wesley, Reading, Mass.
Beer, M. (1970). 'Evaluating organizational and management development programs: Trials, tribulations and prospects'. Symposium on the Evaluation of Psychological Programs in Organizations, Bowling Green State University, December 4.
Berkowitz, I. N. (1969). 'Audiences and their implications for evaluation research', Journal of Applied Behavioral Science, 5(3), 411-428.
Bowen, D. D. (1978). 'OD as inquiry: Alternatives and issues', unpublished manuscript, University of Tulsa.
Bowers, D. G., and Franklin, J. L. (1972). 'Survey guided development: Using human resources measurement in organizational change', Journal of Contemporary Business, 1, 43-55.
Brown, L. D. (1973). 'Intergroup relations and action research'. Paper presented to the meeting of the Academy of Management in Boston, Mass., August, revised.
Campbell, D. T., and Stanley, J. C. (1966). Experimental and Quasi-experimental Designs for Research, Rand McNally, Chicago.
Corey, S. M. (1953). Action Research to Improve School Practices, Bureau of Publications, Teachers College, Columbia University, New York.
Culbert, S. (1972). 'Using research to guide an organization development project', Journal of Applied Behavioral Science, 8(2), 203-236.
Cummings, T. G., Malloy, E. S., and Glen, R. (1977). 'A methodological critique of fifty-eight selected work experiments', Human Relations, 30(8), 675-708.
Czepiel, J. A., and Greller, M. M. (1978). 'Improving satisfaction in a public agency setting: An interdisciplinary approach'. (Report 78-23) New York: New York University, Graduate School of Business Administration, College of Business and Public Administration.
Diesing, P. (1971). Patterns of Discovery in the Social Sciences, Aldine-Atherton, Chicago.
Dunn, W. N., and Swierczek, F. W. (1977). 'Planned organizational change: toward grounded theory', Journal of Applied Behavioral Science, 13(2), 135-157.
French, W. L., and Bell, C. H. Jr. (1978). Organization Development, 2nd ed., Prentice-Hall, Englewood Cliffs, N.J.
Friedlander, F., and Brown, L. D. (1974). 'Organization development', Annual Review of Psychology, 25, 313-341.
Frohman, M. A., Sashkin, M., and Kavanaugh, M. J. (1976). 'Action-research as applied to organization development', Organization and Administrative Sciences, 7(1,2), 129-161.
Hand, H. H., Estafen, B. D., and Sims Jr., H. P. (1975). 'How effective is data survey and feedback as a technique of organization development? An experiment', Journal of Applied Behavioral Science, 29(8), 22-27.


Hautaluoma, J. E., and Gavin, J. F. (1975). 'Effects of organizational diagnosis and intervention on blue-collar "blues"', Journal of Applied Behavioral Science, 11(4), 475-496.
Kahn, R. L. (1974). 'Organization Development: Some problems and proposals', Journal of Applied Behavioral Science, 10, 485-502.
Keller, R. T. (1978). 'A longitudinal assessment of an Organization Development intervention', unpublished manuscript, University of Houston.
Kerlinger, F. (1973). Foundations of Behavioral Research, 2nd ed., Holt, Rinehart and Winston, New York.
Keys, C. B., and Bartunek, J. M. (1977). 'Organization Development in schools: Goal agreement, process skills, and diffusion of change'. Paper presented at the meeting of the American Educational Research Association.
King, D. C., Sherwood, J. J., and Manning, M. R. (1978). 'OD's research-base: How to expand and utilize it'. Paper No. 665, West Lafayette, Indiana: Purdue University, Institute for Research in the Behavioral, Economic, and Management Sciences.
Lewin, K. (1946). 'Action research and minority problems', Journal of Social Issues, 4, 34-46.
Lippitt, R. (1950). 'Value-judgment problems of the social scientist participating in action-research'. Paper presented at the annual meeting of the American Psychological Association, September.
Manley, T. R., and McNichols, C. W. (1977). 'OD at a major government research laboratory', Public Personnel Management, January-February, 51-60.
Margulies, N., and Raia, A. P. (1968). 'Action-research and the consultative process', Business Perspectives, 26-21.
McGill, M. E., and Horton, M. E. (1973). Action Research Designs for Training and Development, National Training and Development Service Press, Washington, D.C.
Nicholas, J. M. (1978). 'Evaluation research in organizational change interventions: considerations and some suggestions', Journal of Applied Behavioral Science, in press.
Nielsen, W. R., Kimberly, J. R., Pate, L. E., Golembiewski, R. T., Frame, R. M., Wakefield, J. J., and Ault, R. E. (1977). 'Organization development assessment: An exchange of ideas between researchers and practitioners'. Wisconsin Working Paper 12-77-48, Madison: University of Wisconsin, Graduate School of Business, December.
Nielsen, W. R., and Kimberly, J. R. (1976). 'Designing assessment strategies for Organization Development', Human Resource Management, 15(1), 32-39.
Pate, L. E., Nielsen, W. R., and Mowday, R. T. (1977). 'A longitudinal assessment of the impact of Organization Development on absenteeism, grievance rates and product quality', Academy of Management Proceedings, August, 353-357.
Pate, L. E., Nielsen, W. R., and Bacon, P. C. (1977). 'Advances in research on organization development: toward a beginning', Group and Organization Studies, 2(4), 449-460.
Porras, J. I., and Berg, P. O. (1978a). 'Evaluation methodology in organization development: an analysis and critique', Journal of Applied Behavioral Science, 14(2), 151-173.
Porras, J. I., and Berg, P. O. (1978b). 'The impact of organization development', Academy of Management Review, 3(2), 249-266.
Porras, J. I., and Patterson, K. (1979). 'Assessing planned change', Group and Organization Studies, 4(1), 39-58.
Rapoport, R. N. (1970). 'Three dilemmas in action research', Human Relations, 23, 499-513.
Sanford, N. (1970). 'Whatever happened to action research?' Journal of Social Issues, 26, 3-23.
Stahl, M. J., McNichols, C. W., and Manley, T. R. (1978). 'An assessment of team development at the Air Force Flight Dynamics Laboratory: Research design and baseline measurement'. Technical Report No. 78-3, Air Force Institute of Technology, School of Engineering, Wright-Patterson Air Force Base, Ohio.
Suchman, E. A. (1967). Evaluation Research, Russell Sage, New York.
Taylor, J., and Bowers, D. G. (1972). The Survey of Organizations: A Machine-Scored Standardized Questionnaire Instrument, Institute for Social Research, Ann Arbor, MI.
Thompson, J. T. (1978). 'Toward an evaluation strategy for OD practitioners', unpublished manuscript.


Tichy, N. M., and Hornstein, H. A. 'Collaborative organization model building'. In: Lawler, E., Nadler, D., and Cammann, C. (Eds.), Organizational Assessment: Perspectives on the Measurement of Organizational Behavior and the Quality of Working Life, Wiley-Interscience, New York, in press.
Toronto, R. S. (1975). 'A general systems model for the analysis of organizational change', Behavioral Science, 20(3), 145-156.
Van Gundy, A. B. (1976). 'Integration of techno-structural and human-processual approaches in a field-based Organizational Development program', University of Oklahoma, March.
White, D., Cochran, D. S., and Latham, D. R. (1977). 'Enhancing academic research through the consulting engagement: A case in point', Academy of Management Proceedings '77, 158-162.
White, S. E., and Mitchell, T. R. (1976). 'Organization Development: A review of research content and research design', Academy of Management Review, 1(2), 57-73.
Zand, D. E., Steele, F. I., and Zalkind, S. S. (1969). 'The impact of an Organizational Development program on perceptions of interpersonal, group, and organizational functioning', Journal of Applied Behavioral Science, 5(3), 393-410.

Author's address: Dr Jerry I. Porras, Graduate School of Business, Stanford University, Stanford, California, U.S.A.
