PHENOMENOLOGY

Phenomenology comes from the academic disciplines of
philosophy and psychology, and is based upon the work of
the 20th-century philosopher Edmund Husserl, whose
ideas were later developed by Martin Heidegger.

Introduction
In its broadest sense, 'phenomenology' refers to a
person's perception of the meaning of an event, as
opposed to the event as it exists externally to (outside
of) that person.
The focus of phenomenological inquiry is what people
experience in regard to some phenomenon and how they
interpret those experiences.
A phenomenological research study is a study that
attempts to understand people's perceptions,
perspectives and understandings of a particular situation
(or phenomenon).
In other words, a phenomenological research study tries
to answer the question 'What is it like to experience
such and such?'.
By looking at multiple perspectives of the same
situation, a researcher can start to make some
generalisations of what something is like as an
experience from the 'insider's' perspective.
What is phenomenology?
The objective of phenomenology is the direct
investigation and description of phenomena as
consciously experienced, without theories about their
causal explanations or their objective reality.
It therefore seeks to understand how people construct
meaning.

What is ethnography?
Ethnography is the study of social interactions, behaviours, and perceptions that occur within
groups, teams, organisations, and communities. Its roots can be traced back to
anthropological studies of small, rural (and often remote) societies that were undertaken in
the early 1900s, when researchers such as Bronislaw Malinowski and Alfred Radcliffe-
Brown participated in these societies over long periods and documented their social
arrangements and belief systems. This approach was later adopted by members of the
Chicago School of Sociology (for example, Everett Hughes, Robert Park, Louis Wirth) and
applied to a variety of urban settings in their studies of social life.

The central aim of ethnography is to provide rich, holistic insights into people’s views and
actions, as well as the nature (that is, sights, sounds) of the location they inhabit, through the
collection of detailed observations and interviews. As Hammersley states, “The task [of
ethnographers] is to document the culture, the perspectives and practices, of the people in
these settings. The aim is to ‘get inside’ the way each group of people sees the world.”1 Box
1 outlines the key features of ethnographic research.

Box 1 Key features of ethnographic research2

 A strong emphasis on exploring the nature of a particular social phenomenon, rather
than setting out to test hypotheses about it
 A tendency to work primarily with “unstructured data”, that is, data that have not
been coded at the point of data collection as a closed set of analytical categories
 Investigation of a small number of cases

QUASI-EXPERIMENTAL DESIGNS IN EVALUATION

As stated previously, quasi-experimental designs are commonly employed in the
evaluation of educational programs when random assignment is not possible or
practical. Although quasi-experimental designs are commonly used, they are
subject to numerous problems of interpretation. Frequently used types of quasi-
experimental designs include the following:

Nonequivalent group, posttest only (Quasi-experimental).

The nonequivalent group, posttest only design consists of administering an outcome
measure to two groups: a program/treatment group and a comparison group. For
example, one group of students might receive reading instruction using a whole
language program while the other receives a phonetics-based program. After
twelve weeks, a reading comprehension test can be administered to see which
program was more effective.
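The comparison itself is simple to compute. Below is a minimal sketch with invented posttest scores (the group names and numbers are hypothetical, not from any actual study), using a Welch's t statistic as one plausible choice of test:

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical posttest reading-comprehension scores (0-100)
whole_language = [72, 68, 75, 80, 66, 71, 78, 74]
phonetics = [79, 83, 76, 85, 80, 77, 82, 81]

def independent_t(a, b):
    """Welch's t statistic for two independent samples."""
    na, nb = len(a), len(b)
    se = sqrt(stdev(a) ** 2 / na + stdev(b) ** 2 / nb)
    return (mean(a) - mean(b)) / se

t = independent_t(phonetics, whole_language)
print(f"mean difference: {mean(phonetics) - mean(whole_language):.1f}, t = {t:.2f}")
```

Even a clearly significant difference here cannot by itself show that the program caused it, because the groups were not randomly formed.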

A major problem with this design is that the two groups might not necessarily be
the same before any instruction takes place, and may differ in important ways
that influence what reading progress they are able to make. For instance, if it is
found that the students in the phonetics group perform better, there is no way
of determining whether they were better prepared or better readers even before
the program began, or whether other factors influenced their growth.

Nonequivalent group, pretest-posttest.

The nonequivalent group, pretest-posttest design partially eliminates a major
limitation of the nonequivalent group, posttest only design. At the start of the
study, the researcher empirically assesses the differences between the two groups.
Therefore, if the researcher finds that one group performs better than the other
on the posttest, s/he can rule out initial differences (if the groups were in fact
similar on the pretest) and normal development (e.g. resulting from typical home
literacy practices or other instruction) as explanations for the differences.

Some problems still might result from students in the comparison group being
incidentally exposed to the treatment condition, being more motivated than
students in the other group, having more motivated or involved parents, etc.
Additional problems may result from discovering that the two groups do differ on
the pretest measure. If groups differ at the onset of the study, any differences
that occur in test scores at the conclusion are difficult to interpret.
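One simple way to use the pretest information is to compare gain scores, so each group's growth is measured against its own starting point. A minimal sketch with invented pretest/posttest pairs (an actual evaluation would normally use a formal analysis such as ANCOVA):

```python
# Hypothetical (pretest, posttest) pairs for each group
program = [(40, 62), (35, 58), (50, 70), (45, 66)]
comparison = [(42, 55), (38, 49), (48, 60), (44, 54)]

def mean_gain(pairs):
    """Average posttest-minus-pretest gain for one group."""
    return sum(post - pre for pre, post in pairs) / len(pairs)

print("program gain:", mean_gain(program))        # 21.5
print("comparison gain:", mean_gain(comparison))  # 11.5
```

Even so, if the groups' pretest means differ substantially, gain scores remain difficult to interpret.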

TRUE EXPERIMENTAL DESIGNS

The strongest comparisons come from true experimental designs in which
subjects (students, teachers, classrooms, schools, etc.) are randomly assigned to
program and comparison groups. It is only through random assignment that
evaluators can be assured that groups are truly comparable and that observed
differences in outcomes are not the result of extraneous factors or pre-existing
differences. For example, without random assignment, what inference can we
draw from findings that students in reform classrooms outperformed students in
non-reform classrooms if we suspect that the reform teachers were more
qualified, innovative, and effective prior to the reform? Do we attribute the
observed difference to the reform program or to pre-existing differences between
groups? In the former case, the reform appears to be effective, likely worth the
investment, and possibly justifying expansion; in the latter case, alternative
inferences are warranted. There are several types of true experimental design:

Posttest Only, Control Group.

Posttest only, control group designs differ from previously discussed designs in
that subjects are randomly assigned to one of the two groups. Given sufficient
numbers of subjects, randomization helps to assure that the two groups (or
conditions, raters, occasions, etc.) are comparable or equivalent in terms of
characteristics which could affect any observed differences in posttest scores.
Although a pretest can be used to assess or confirm whether the two groups were
initially the same on the outcome of interest (as in pretest-posttest, control group
designs), a pretest is likely unnecessary when randomization is used and large
numbers of students and/or teachers are involved. With smaller samples,
pretesting may be advisable to check on the equivalence of the groups.
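Random assignment itself is mechanically simple. A sketch with hypothetical subject IDs (the fixed seed is only there to make the example reproducible):

```python
import random

subjects = [f"student_{i:02d}" for i in range(1, 41)]  # 40 hypothetical students
random.seed(7)            # fixed seed for a reproducible illustration
random.shuffle(subjects)  # every ordering equally likely

half = len(subjects) // 2
program_group = subjects[:half]
control_group = subjects[half:]

# Each subject had an equal chance of landing in either group, so with
# enough subjects the groups should be comparable on average.
print(len(program_group), len(control_group))  # 20 20
```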

Other Designs.

Some other general types of designs include counterbalanced and matched
subjects (for a more detailed discussion of different designs, see Campbell &
Stanley, 1966). With counterbalanced designs, all groups participate in more
than one randomly ordered treatment (and control) condition. In matched
designs, pairs of students matched on important characteristics (for example,
pretest scores or demographic variables) are assigned to one of the two treatment
conditions. These approaches are effective if randomization is employed.
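The matched-subjects idea can be sketched as ranking subjects by pretest score, pairing neighbours, and randomly splitting each pair between conditions (the names and scores below are invented):

```python
import random

# Hypothetical (name, pretest score) records
subjects = [("a", 55), ("b", 90), ("c", 57), ("d", 88),
            ("e", 70), ("f", 72), ("g", 61), ("h", 63)]

random.seed(3)
ranked = sorted(subjects, key=lambda s: s[1])  # order by pretest score
treatment, control = [], []
for i in range(0, len(ranked), 2):
    pair = list(ranked[i:i + 2])
    random.shuffle(pair)                       # randomize within each matched pair
    treatment.append(pair[0])
    control.append(pair[1])

print([name for name, _ in treatment])
print([name for name, _ in control])
```

Because the random choice happens inside each pair, the two groups end up closely matched on the pretest while still being randomly formed.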

Even true experimental designs, however, can be problematic (Cook & Campbell,
1979). One threat is that the control group can be inadvertently exposed to the
program; a similar threat occurs when key aspects of the program also exist in
the comparison group. Additionally, one of the conditions (groups), such as one
of the instructional programs, may be perceived as more desirable than the other.
If participants in the study learn of the other group, then important motivational
differences (being demoralized or even trying harder to compensate) could impact
the results. Differences in the quality with which a program or comparison
treatment is implemented can also influence results (for example, the teachers
implementing one or the other may have greater content or pedagogical
knowledge). Still another threat to the validity of a design is differential
participant mortality (attrition) in the two groups.

Experimental designs also are limited by the narrow range of evaluation purposes
they address. When conducting an evaluation, the researcher certainly needs to
develop adequate descriptions of programs, as they were intended as well as how
they were realized in the specific setting. Also, the researcher frequently needs to
provide timely, responsive feedback for purposes of program development or
improvement. Although less common, access and equity issues within a critical
theory framework may be important. Experimental designs do not address these
facets of evaluation.
With complex educational programs, rarely can we control all the important
variables which are likely to influence program outcomes, even with the best
experimental design. Nor can the researcher necessarily be sure, without
verification, that the implemented program was really different in important
ways from the program of the comparison group(s), or that the implemented
program (not other contemporaneous factors or events) produced the observed
results. Being mindful of these issues, it is important for evaluators not to
develop a false sense of security.

Finally, even when the purpose of the evaluation is to assess the impact of a
program, logistical and feasibility issues constrain experimental frameworks.
Randomly assigning students in educational settings frequently is not realistic,
especially when the different conditions are viewed as more or less desirable.
This often leads the researcher to use quasi-experimental designs. Problems
associated with the lack of randomization are exacerbated as the researcher
begins to realize that the programs and settings are in fact dynamic, constantly
changing, and almost always unstandardized.

Time series designs.

In time series designs, several assessments (or measurements) are obtained from
both the treatment group and the control group, prior to and after the
application of the treatment. The series of observations before and after
can provide rich information about students' growth. Because measures at
several points in time prior and subsequent to the program are likely to provide
a more reliable picture of achievement, the time series design is sensitive to
trends in performance. Thus, this design, especially if a comparison group of
similar students is used, provides a strong picture of the outcomes of interest.
Nevertheless, although to a lesser degree, limitations and problems of the
nonequivalent group, pretest-posttest design still apply to this design.

The defining feature of time series research designs is that each participant or sample is
observed multiple times, and its performance is compared to its own prior performance. In
other words, each participant or population serves as its own control. The outcome—
depression or smoking rates, for example—is measured repeatedly for the same subject or
population during one or more baseline and treatment conditions.

When the researcher studies only one or a few individuals, these are called Single Subject
Research Designs (SSRD). They are particularly useful when:

 Few participants are available—problems with low incidence rates, for example
 Participants are relatively heterogeneous
 Participants demonstrate variability from day to day

When the researcher studies an entire population, such as a community, city, or health care
delivery system, these are interrupted time series designs (ITSD). They are particularly useful
when you want to evaluate the effects of a law, policy or public health campaign that has
been implemented in a community.

We will examine both of these designs together because they share many similar
considerations.

Basic Terms

In order to critically examine the quality of a time series study, it is helpful to understand
some of the basic terms.

 Baseline

Baseline refers to a period of time in which the target behavior or outcome (dependent
variable) is observed and recorded as it occurs before introducing a new intervention.
The baseline behavior provides the frame of reference against which future behavior
is compared. In some designs, the term baseline can also refer to a period of time
following a treatment in which conditions match what was present in the original
baseline.

 Treatment Condition

Treatment condition or treatment phase in these designs describes the period of time
during which the experimental manipulation is introduced and the target behavior
continues to be observed and recorded.
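Using these two terms, a minimal single-subject sketch (the daily counts are invented) compares the level of the target behaviour across the baseline and treatment phases:

```python
# Hypothetical daily counts of a target behaviour for one participant
baseline = [9, 8, 10, 9, 11, 10]   # phase before the intervention
treatment = [7, 6, 5, 5, 4, 4]     # phase after the intervention is introduced

def phase_mean(obs):
    """Average level of the behaviour within one phase."""
    return sum(obs) / len(obs)

change = phase_mean(treatment) - phase_mean(baseline)
print(f"baseline mean {phase_mean(baseline):.1f}, "
      f"treatment mean {phase_mean(treatment):.1f}, change {change:.1f}")
```

In an A-B-A design, a second baseline phase would follow the treatment phase to see whether the behaviour returns toward its original level.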

Ethnography

The ethnographic approach to qualitative research comes largely from the field of
anthropology. The emphasis in ethnography is on studying an entire culture. Originally, the
idea of a culture was tied to the notion of ethnicity and geographic location (e.g., the culture
of the Trobriand Islands), but it has been broadened to include virtually any group or
organization. That is, we can study the "culture" of a business or defined group (e.g., a Rotary
club).

Ethnography is an extremely broad area with a great variety of practitioners and methods.
However, the most common ethnographic approach is participant observation as a part of
field research. The ethnographer becomes immersed in the culture as an active participant and
records extensive field notes. As in grounded theory, there is no preset limiting of what will
be observed and no real ending point in an ethnographic study.
Phenomenology

Phenomenology is sometimes considered a philosophical perspective as well as an approach
to qualitative methodology. It has a long history in several social research disciplines
including psychology, sociology and social work. Phenomenology is a school of thought that
emphasizes a focus on people's subjective experiences and interpretations of the world. That
is, the phenomenologist wants to understand how the world appears to others.

Field Research

Field research can also be considered either a broad approach to qualitative research or a
method of gathering qualitative data. The essential idea is that the researcher goes "into the
field" to observe the phenomenon in its natural state or in situ. As such, it is probably most
related to the method of participant observation. The field researcher typically takes extensive
field notes which are subsequently coded and analyzed in a variety of ways.

Grounded Theory

Grounded theory is a qualitative research approach that was originally developed by Glaser
and Strauss in the 1960s. The self-defined purpose of grounded theory is to develop theory
about phenomena of interest. But this is not just abstract theorizing they're talking about.
Instead the theory needs to be grounded or rooted in observation -- hence the term.

Grounded theory is a complex iterative process. The research begins with the raising of
generative questions which help to guide the research but are not intended to be either static
or confining. As the researcher begins to gather data, core theoretical concept(s) are
identified. Tentative linkages are developed between the theoretical core concepts and the
data. This early phase of the research tends to be very open and can take months. Later on the
researcher is more engaged in verification and summary. The effort tends to evolve toward
one core category that is central.

There are several key analytic strategies:

 Coding is a process for both categorizing qualitative data and for describing the
implications and details of these categories. Initially one does open coding,
considering the data in minute detail while developing some initial categories. Later,
one moves to more selective coding where one systematically codes with respect to a
core concept.
 Memoing is a process for recording the thoughts and ideas of the researcher as they
evolve throughout the study. You might think of memoing as extensive marginal
notes and comments. Again, early in the process these memos tend to be very open
while later on they tend to increasingly focus in on the core concept.
 Integrative diagrams and sessions are used to pull all of the detail together, to help
make sense of the data with respect to the emerging theory. The diagrams can be any
form of graphic that is useful at that point in theory development. They might be
concept maps or directed graphs or even simple cartoons that can act as summarizing
devices. This integrative work is best done in group sessions where different members
of the research team are able to interact and share ideas to increase insight.
Eventually one approaches conceptually dense theory as new observation leads to new
linkages which lead to revisions in the theory and more data collection. The core concept or
category is identified and fleshed out in detail.
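The open-coding step can be sketched as attaching category labels to data segments and tallying them; frequent codes become candidates for the core concept pursued during selective coding. The interview fragments and category names below are invented for illustration:

```python
# Hypothetical coded interview segments: (segment, open codes assigned)
coded_segments = [
    ("I never felt I could ask for help", ["isolation", "help-seeking"]),
    ("my supervisor checked in every week", ["support"]),
    ("I kept problems to myself", ["isolation"]),
    ("the team lunch made me feel included", ["support", "belonging"]),
]

# Tally how often each open code appears across the data.
tally = {}
for _segment, codes in coded_segments:
    for code in codes:
        tally[code] = tally.get(code, 0) + 1

print(sorted(tally.items(), key=lambda kv: -kv[1]))
```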

When does this process end? One answer is: never! Clearly, the process described above
could continue indefinitely. Grounded theory doesn't have a clearly demarcated point for
ending a study. Essentially, the project ends when the researcher decides to quit.

What do you have when you're finished? Presumably you have an extremely well-considered
explanation for some phenomenon of interest -- the grounded theory. This theory can be
explained in words and is usually presented with much of the contextually relevant detail
collected.

Ex Post Facto Defined

Sometimes you want to study things you can't control - things you can't ethically or
physically control. For instance, you can't make someone overweight to study the effects it
has on their brain. You can't alter someone's eyesight to see how it affects their motor skills.

Ex post facto design is a quasi-experimental study examining how an independent variable,
present prior to the study, affects a dependent variable. So like we just said, there is
something about the participant that we're going to study that we don't have to alter in the
participant. We will make this a little clearer a little later with some examples and
descriptions.

But first, quasi-experimental simply means participants are not randomly assigned. In a true
experiment, you have what is called random assignment, which is where a participant has an
equal chance of being in the experimental or control group. Random assignment helps ensure
that when you apply some kind of condition to the experimental and control groups, there
isn't some predisposition in one group to respond differently than the other.

A true experiment and ex post facto both are attempting to say: this independent variable is
causing changes in a dependent variable. This is the basis of any experiment - one variable is
hypothesized to be influencing another. This is done by having an experimental group and a
control group. So if you're testing a new type of medication, the experimental group gets the
new medication, while the control group gets the old medication. This allows you to test the
efficacy of the new medication.

Ex post facto designs are different from true experiments because ex post facto designs do
not use random assignment. In a true experiment, the researcher creates the difference
between groups; in ex post facto, you are looking at a prior variable already present in the
participant.

In an ex post facto design, you are not randomly assigning people to an experimental group
or control group. You are purposefully putting people in a particular group based on some
prior thing they have. I say 'thing' because it could be 'must have glasses,' or 'must be
overweight.' There is no limit to the ways you could divide up the population.

This prior thing that they must have is something you can't just create or apply to people.
Commonly, an ex post facto design is used for health psychology because, like gender, you
can't assign obesity, organ defects or brain damage. I mean, you could give someone brain
damage, but it's really unethical.
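The grouping step can be sketched in code: instead of assigning a condition, we partition participants by a characteristic they already have. The records, cutoff, and outcome measure below are all hypothetical:

```python
# Hypothetical participant records: (id, body mass index, memory score)
participants = [
    ("p1", 23.0, 88), ("p2", 31.5, 74), ("p3", 27.0, 81),
    ("p4", 33.2, 70), ("p5", 22.1, 90), ("p6", 30.8, 77),
]

CUTOFF = 30.0  # illustrative threshold only, not a clinical claim

# No random assignment: membership is decided by the pre-existing variable.
group_a = [p for p in participants if p[1] >= CUTOFF]
group_b = [p for p in participants if p[1] < CUTOFF]

def mean_score(group):
    return sum(p[2] for p in group) / len(group)

print(f"group sizes: {len(group_a)} vs {len(group_b)}")
print(f"mean memory score: {mean_score(group_a):.1f} vs {mean_score(group_b):.1f}")
```

Because the groups formed themselves, any difference in scores may reflect other variables that travel with the grouping characteristic, which is exactly the interpretive caution ex post facto designs demand.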

Chapter 9: Textual Analysis


I. Introduction
A. Textual analysis is the method communication researchers use to describe and interpret
the characteristics of a recorded or visual message.
1. The purpose of textual analysis is to describe the content, structure, and functions of the
messages contained in texts.
2. The important considerations in textual analysis include selecting the types of texts to be
studied, acquiring appropriate texts, and determining which particular approach to employ
in analyzing them.
3. There are two general categories of texts:
a. Transcripts of communication (verbatim recordings)
b. Outputs of communication (messages produced by communicators)
4. In terms of acquiring texts, outputs of communication are more readily available than
transcripts.
a. Archival communication research involves examining the communication embedded in
existing records of human behavior kept in archives.
b. Acquisition of texts is important, as is the representativeness of the texts selected, since
sampling is typically used.
c. Another issue is determining how complete and accurate the texts are in order to conduct
a sound analysis.
II. Approaches to Textual Analysis
A. There are four major approaches to textual analysis: rhetorical criticism, content analysis,
interaction analysis, and performance studies.
B. Rhetorical Criticism
1. The terms rhetoric and criticism conjure up interesting images.
a. Rhetoric often carries negative connotations, such as when it is applied to grand,
eloquent, bombastic, or verbose discourse.
b. Andrews believes that criticism is typically associated with tearing down or denigrating
comments, despite its function as constructive advice.
c. For scholars, the word rhetoric is associated with Aristotle’s definition, “the available
means of persuasion,” and criticism is the “systematic process of illuminating and
evaluating products of human activity” (Andrews, 1983, p. 4).
2. Rhetorical criticism, therefore, is a systematic method for describing, analyzing,
interpreting, and evaluating the persuasive force of messages embedded within texts.
3. The process serves five important functions (Andrews, 1983):
a. sheds light on the purposes of a persuasive message
b. can aid in understanding historical, social, and cultural contexts
c. can be used as a form of social criticism to evaluate society
d. can contribute to theory building by showing how theories apply to persuasive discourse
e. serves a pedagogical function by teaching people how persuasion works and what
constitutes effective persuasion.
4. Classical rhetoric examined the characteristics and effect of persuasive public speaking
during the Greek and Roman civilizations.
5. Contemporary rhetoric has expanded to incorporate a wide range of philosophical,
theoretical, and methodological perspectives that are used to study the persuasive
impact of many different types of texts and messages.
6. There are four steps to conducting rhetorical criticism:
a. Choosing a text(s) to study
b. Choosing a specific type of rhetorical criticism
c. Analyzing the text(s) according to the method chosen
d. Writing the critical essay
7. There are several types of rhetorical criticism, and they may be used to answer a wide
range of questions, including:
a. What is the relationship between a text and its context?
b. How does a text construct reality for an audience?
c. What does a text suggest about the rhetor?
i. Historical Criticism examines how important past events shape and are shaped by
rhetorical messages. Researchers go beyond merely describing and recreating past
events from documents to evaluate the reasons why the past events occurred as
they did.
ii. Oral Histories investigate spoken, as opposed to written, accounts of personal
experiences to understand more fully what happened in the past.
iii. Historical Case Studies examine texts related to a single, salient historical event to
understand the role played by communication.
iv. Biographical Studies examine public and private texts of prominent, influential, or
otherwise remarkable individuals. They analyze how the messages used by these
individuals helped them to accomplish what they did.
v. Social Movement Studies examine persuasive strategies used to influence the
historical development of specific campaigns and causes.
vi. Neo-Aristotelian Criticism evaluates whether the most appropriate and effective
means, as articulated in the specific set of criteria given in Aristotle’s Rhetoric, were
used to create the rhetorical text(s) intended to influence a particular audience.
vii. Genre Criticism rejects using a single set of criteria to evaluate all persuasive
messages, arguing instead that standards vary according to the particular type, or
genre, of text being studied.
(a) Forensic rhetoric deals with the past and concerns issues involving legality and
justice.
(b) Epideictic rhetoric concerns the present and is ceremonial.
(c) Deliberative rhetoric speaks to the future and involves political oratory.
viii. Dramatistic Criticism primarily analyzes texts according to philosopher Kenneth
Burke’s view that all communication can be seen in terms of five essential elements
that comprise a dramatic event:
(a) Act: A particular message produced by a communicator.
(b) Purpose: The reason for the message.
(c) Agent: The person who communicated the message.
(d) Agency: The medium used to express the message.
(e) Scene: The setting or context in which the message occurred.
Pentadic Analysis, as it is called, uses these five elements to isolate essential
characteristics of and differences between symbolic acts.
ix. Metaphoric Criticism assumes that we can never know reality directly.
x. Narrative Criticism assumes that many (or all) persuasive messages function as
narratives: stories, accounts, or tales.
xi. Fantasy Theme Analysis, based on the work of Ernest Bormann, examines the
common images used to portray narrative elements of situations described in a text.
Fantasy themes are mythic stories present in communication that involve characters
with which people identify.
xii. Feminist Criticism analyzes how conceptions of gender are produced and
maintained in persuasive messages.

A correlational research design is useful to researchers who are interested in determining to
what degree two variables are related; however, correlational research “does not ‘prove’ a
relationship; rather, it indicates an association between two or more variables” (Creswell,
2008).

When a correlational research design is appropriate for a study, it can be designed by
following the steps outlined by Creswell (2008) and Lodico et al. (2006):

 Identify two variables that may be related - researchers often select variables to study
with a correlational research design by reading published studies previously conducted
by researchers. Other individuals tend to select variables from real-world situations, as
they are interested in findings that are specific to their own situation.
 Select a sample - samples should be selected randomly and include at least 30
individuals willing to partake in the study.
 Select a method of measurement - often the most complex part of a correlational study
is determining how to effectively measure each variable. As with other research designs,
tools should be determined to be both valid and reliable. Usually in correlational
research two measurement tools are required, as each tool measures one of the two
variables involved in the study.
 Collect necessary data - correlational studies require that researchers obtain data for
each variable from each participant.
 Analyze the data - data from correlational research are analyzed by using statistical
tests that depend greatly on the type of variables being studied. Variables can be either
continuous, meaning that they change according to small increments (e.g. test scores), or
dichotomous, in which the variable is divided into categories (e.g. gender, grade).
 Interpret results - when attempting to interpret results, researchers consider both the
strength and the size of the correlation coefficient.
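For two continuous variables, the analysis step usually means computing a Pearson correlation coefficient. A minimal sketch with invented paired scores (variable names are hypothetical):

```python
from math import sqrt

# Hypothetical paired scores from the same participants
hours_studied = [2, 4, 5, 7, 8, 10]
test_score = [55, 60, 62, 70, 74, 80]

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson_r(hours_studied, test_score)
print(f"r = {r:.3f}")  # close to +1: strong positive association
```

As the Creswell quotation above cautions, a large r indicates an association, not proof of a causal relationship.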

CAUSAL-COMPARATIVE RESEARCH:
It is a type of descriptive research since it describes
conditions that already exist. It is a form of investigation in which
the researcher has no direct control over the independent variable, as its
expression has already occurred or because it is essentially non-manipulable.
It also attempts to identify reasons or causes of pre-existing
differences in groups of individuals, i.e. if a researcher
observes that two or more groups are different on a variable, he tries
to identify the main factor that has led to this difference. Another
name for this type of research is ex post facto research (which in
Latin means "after the fact") since both the hypothesised cause and
the effect have already occurred and must be studied in retrospect.
Causal-comparative studies attempt to identify cause-effect
relationships; correlational studies do not. Causal-comparative
studies involve comparison; correlational studies involve
relationships. However, neither method provides researchers with true
experimental data. On the other hand, causal-comparative and
experimental research both attempt to establish cause-and-effect
relationships and both involve comparisons. In an experimental
study, the researcher selects a random sample and then randomly
divides the sample into two or more groups. Groups are assigned to
the treatments and the study is carried out. However, in causal-comparative
research, individuals are not randomly assigned to
treatment groups because they already were selected into groups
before the research began. In experimental research, the independent
variable is manipulated by the researcher, whereas in causal-comparative
research, the groups are already formed and already
different on the independent variable.
Inferences about cause-and-effect relationships are made
without direct intervention, on the basis of concomitant variation of
independent and dependent variables. The basic causal-comparative
method starts with an effect and seeks possible causes. For example,
a researcher may observe that the academic achievement of students
differs from school to school. He may hypothesise the possible cause for
this as the type of management of the schools, viz. private-aided,
private-unaided, or government schools (local or state or any other).
He therefore decides to conduct a causal-comparative research in
which academic achievement of students is the effect that has
already occurred and school type by management is the possible
hypothesised cause. This approach is known as retrospective causal-comparative
research since it starts with the effects and investigates
the causes.
In another variation of this type of research, the investigator
starts with a cause and investigates its effect on some other variable,
i.e. such research is concerned with the question 'What is the effect
of X on Y when X has already occurred?' For example, what long-term
effect has occurred on the self-concept of students who are
grouped according to ability in schools? Here, the investigator
hypothesises that students who are grouped according to ability in
schools are labelled 'brilliant', 'average' or 'dull', and this over a
period of time could lead to unduly high or unduly poor self-concept
in them. This approach is known as prospective causal-comparative
research since it starts with the causes and investigates the effects.
However, retrospective causal-comparative studies are far more
common in educational research.
Causal-comparative research involves two or more groups
and one independent variable. The goal of causal-comparative
research is to establish cause-and-effect relationships just like
experimental research. However, in causal-comparative research, the
researcher is able to identify past experiences of the subjects that are
consistent with a 'treatment' and compares them with those subjects
who have had a different treatment or no treatment. Causal-comparative
research may also involve a pre-test and a post-test. For
instance, a researcher wants to compare the effect of the paper on
"Environmental Education" in the B.Ed. syllabus on student-teachers'
awareness of environmental issues and problems and their attitude towards
environmental protection. Here, the researcher can develop and administer
a pre-test before the paper on "Environmental Education" is taught and a
post-test after it has been taught. At the same time, the pre-test
as well as the post-test are also administered to a group which was
not taught the paper on "Environmental Education". This is
essentially a non-experimental research as there is no manipulation
of the treatment although it involves a pre-test and a post-test. In this
type of research, the groups are not randomly assigned to exposure
to the paper on "Environmental Education". Thus it is possible that
other variables could also affect the outcome variables. Therefore, in
a causal-comparative research, it is important to think whether
differences other than the independent variable could affect the
results.
In order to establish cause-and-effect in causal-comparative
research, it is essential to build a convincing rational argument that
the independent variable is influencing the dependent variable. It is
also essential to ensure that other uncontrolled variables do not have
an effect on the dependent variable. For this purpose, the researcher
should try to draw a sample that minimises the effects of other
extraneous variables. According to Picciano, "In stating a hypothesis
in a causal-comparative study, the word 'effect' is frequently used."