

Auditors' going-concern judgments: rigid, adaptive, or both?

Andrew J. Rosman
University of Connecticut, Storrs, Connecticut, USA

Abstract
Purpose – The purpose of this paper is to examine when auditors’ decision behavior is rigid and
adaptive in the going-concern judgment. Because rigid behavior has been found to produce
inappropriate outcomes, understanding when decision behavior is rigid or adaptive can lead to
improved decision making.
Design/methodology/approach – An experiment is conducted using cases based on real
companies to produce information search traces as dependent measures that are studied in the
ill-structured and structured parts of the going-concern task.
Findings – Auditors are adaptive in ill-structured tasks and rigid in structured tasks as predicted by
theory. Evidence of flawed decision making commonly found in studies of fixation and related concepts
was not found.
Research limitations/implications – The findings suggest the importance of explicitly
accounting for task structure when studying decision behavior in situated contexts. Future
research could assess whether task structure similarly impacts behavior in non-auditing contexts.
Practical implications – Researchers and practitioners have long been concerned about
inappropriate rigid behavior. This paper helps practitioners better understand when rigid or
adaptive behavior is likely to occur, which can help improve decision making.
Originality/value – Taking a novel approach to reconcile two well established but conflicting bodies
of literature by focusing on “when” not “whether” people are rigid or adaptive, this paper resolves a
long-standing paradox. The implication for the literature is that reframing the question and directly
measuring behavior demonstrates that individuals are neither rigid nor adaptive, but can be both as
they follow behavior that is consistent with the demands of the task when the demands are defined in
terms of task structure.
Keywords Auditors, Decision making, Cognition, Individual behaviour, Going concern value
Paper type Research paper

I. Introduction
Research on fixation and related concepts in the cognitive and Gestalt psychology
literatures focuses on whether decision makers learn rigid decision behaviors
(i.e. persistently use a previously developed approach to solving a problem) or
adaptive decision behaviors (i.e. modify their decision processes in response to changes in
information), and generally finds the former in accounting and other contexts (Ashton,
1976; Barnes and Webb, 1986; Bloom et al., 1984; Chang and Birnberg, 1977; Dearborn
and Simon, 1958). But, might experienced decision makers also learn adaptive
behaviors, and might some rigid behavior be appropriate? While the empirical evidence
suggests otherwise, some question the research focus used to derive that evidence.
Gibbins and Jamal (1993, p. 453) observed that research "has led to theory development
[. . .] of the persons who carry out a specific kind of task rather than a theory of the task
itself [. . .]." They, therefore, propose "shifting the focus more toward the task setting"
to which they may adapt." Similarly, others warn that research that fails to consider the
environment or setting will miss important determinants of performance, since
environmental factors affect motivation, knowledge, and ability (Libby and Luft, 1993).
One way to shift the focus from the individual to the task setting in which decision
makers learn is to reframe the research question: what is it about novel (routine)
situations that would cause decision makers to learn to be adaptive (rigid), and when is
being adaptive (rigid) appropriate? In other words, rather than asking whether individuals'
behavior is rigid or adaptive, the question becomes under what conditions, or when, will
individuals learn to be rigid or adaptive? To investigate how task setting may affect how
rigid and adaptive behaviors are learned, this paper uses theories of situated cognition
and adaptive behavior to examine the role of task structure in determining rigid and
adaptive decision making by auditors in structured and ill-structured tasks.
Theories of situated cognition and adaptive learning are used to examine when
auditors learn rigid and adaptive behavior relative to changes in task structure that are
consistent with recommendations by situated cognition theorists (Elsbach et al., 2005) as
well as learning theorists who caution that:
[. . .] a theory of thinking and problem solving cannot predict behavior unless it encompasses
both an analysis of the structure of task environments and an analysis of the limits of rational
adaptation to the environment (Newell and Simon, 1972, p. 55).
Auditor judgment is well suited to this study because it is learned through formal
training and by accumulating on-the-job “situated” experiences working in small groups
(i.e. audit teams). The distinctive two-stage going-concern judgment is an ideal task for
studying adaptive and rigid behavior because of the way the stages are structured. The
first stage (evaluation) is “ill-structured”: auditors are not restricted in the type of
information acquisition they engage in to evaluate whether they have substantial doubt,
but rather can freely apply intuition based on experience to adapt information
acquisition. Should they express substantial doubt, they proceed with the second stage
(audit opinion), which is “well structured.” Here, auditing standards require that
auditors rigidly follow specific rules that restrict their decision making, even limiting the
type of information that they can consider in rendering their final judgment.
I find that auditors were adaptive in the ill-structured component of the going-concern
task and rigid in the well-structured component. The implication for the literature is that
reframing the question and directly measuring behavior demonstrates that individuals
are neither rigid nor adaptive, but can be both as they follow behavior that is consistent
with the demands of the task when the demands are defined in terms of task structure.
The next section summarizes the two competing literatures on rigidity and adaptive
behavior and discusses how each relates to task structure. Section III describes situated
cognition as a framework within which rigid and adaptive behavior can be better
understood. Section IV provides a task analysis of the two-stage going-concern
judgment and develops hypotheses. Section V describes the experiment. The results of
the experiment and related statistical analysis are presented in Section VI. Finally,
Section VII offers a discussion of results and concluding observations.

II. Rigidity, adaptability, and task structure


Research on rigidity and adaptation dates back over 70 years to Gestalt psychologists
who investigated “insight problems” (i.e. problems for which participants do not have
a solution available from memory and for which there was no available schema
that guaranteed a solution). They were the first to theorize that as individuals gain
experience with a task, their knowledge acts as a mental set that limits their search for
possible solutions, and thus promotes rigid decision processes. One of the earliest
demonstrations of this effect is Maier's two-string problem (Maier, 1931), where
participants were unable to solve the problem because of their fixation on the usual uses
for an object. Subsequent "insight" studies replicated and extended this result (Ashton,
1976; Chang and Birnberg, 1977; Duncker, 1945; Maier, 1931; Marchant, 1990).
Cognitive psychologists and decision process researchers extended the relevance
of the rigidity problem to ill-structured tasks (Anderson, 1990; Cyert and March, 1963;
Frensch and Sternberg, 1989; Holyoak, 1991; Katz, 1982; Mason and Mitroff, 1981;
Rosman et al., 1994, 1999; Turner, 1976; Walsh, 1988). Task structure is the extent
to which the components of a task are well organized, interrelated, and understood
(Abdolmohammadi, 1999; Bowrin, 1998; Prawitt, 1995) and can be defined in terms of the
number of constraints left unspecified by the initial problem statement (Reitman, 1965).
Structured tasks are well organized and understood so they typically are handled in
a routine or rigid way because it is appropriate to persistently apply the same schema.
Ill-structured tasks “have not been encountered in quite the same form” so that “no
predetermined and explicit set of ordered responses exist in the organization”
(Mintzberg et al., 1976, p. 246). That is, by definition ill-structured tasks are novel and are
more likely to be approached by the decision maker in a flexible or adaptive way because
the decision maker understands that persistent and stable schemas would produce
flawed decisions (Elsbach et al., 2005).
Consistent with the Gestalt psychologists’ studies about insight, most studies
concluded that experienced decision makers, when confronted with an ill-structured
problem, will tend to rigidly rely on their prior domain-specific knowledge, rather than
adjust their behaviors to the specific information presented in problems. Examples
include study of chess masters (Chase and Simon, 1973), bridge champions (Frensch and
Sternberg, 1989), physicists (Chi et al., 1981), and tax consultants (Marchant et al., 1991).
From these and other studies have come warnings about rigidity and its analogues,
selective perception (Dearborn and Simon, 1958), perceptual screens (Cyert and March,
1963), functional fixedness (Katz, 1982), tunnel vision (Mason and Mitroff, 1981),
collective blindness (Turner, 1976), escalation of commitment (Staw, 1981), and
commitment to the status quo (Geletkanycz and Black, 2001).
Perhaps, the most widely cited study among those listed above is one on functionally
trained decision makers (Dearborn and Simon, 1958), which found that when experts’
domain-specific knowledge does not efficiently map onto the structure of a decision task,
they tend to display a “departmental” bias; that is, functionally trained decision makers
(e.g. those employed in the marketing department) will tend to identify organizational
problems in terms of their specific functional experiences, regardless of the nature of the
problem.
Notwithstanding this extensive literature, another well-established body of research
shows that individuals have the capacity to adapt behaviors in the short term to respond
to unique contexts, to create new knowledge to solve problems over the intermediate
period, and to evolve to a new way of thinking through learning over a longer period
of time (Newell and Simon, 1972; Simon, 1981). Moreover, characteristics of tasks,
including setting and structure, are critical in understanding human behavior.
Analyzing the demands of task environments enables researchers to examine both
whether and when individuals adapt.
A well-structured task can engender appropriate rigid decision processes because
decision makers, who have had repeated experience with that task structure, should
recognize that their domain-specific knowledge efficiently (at minimum cost) maps onto
the structure of the decision task (Shanteau, 1992; Stewart et al., 1997). That is, because
well-structured decision tasks have few unspecified constraints, the domain-specific
knowledge learned by experienced decision makers should prompt them to cut short
information search and rigidly (but appropriately) jump to solutions used in the past
without much conscious thought. In contrast, ill-structured tasks should engender
appropriate non-rigid decision behaviors because they are not as constraining.
Nonetheless, few studies explicitly account for task structure when analyzing decision
behavior other than performance. This omission may be because the focus of most studies
is on the question of whether experienced decision makers display rigid decision behaviors,
rather than on when, or under what task conditions, rigid behaviors are appropriate
(Gibbins and Jamal, 1993; Libby and Luft, 1993). Therefore, the many warnings about
rigidity that frequently appear in the literature may be an artifact of the research design.

III. Situated cognition for rigid and adaptive behaviors


Situated cognition, which is “thinking that is embedded in the context in which it occurs”
(Elsbach et al., 2005, p. 423), suggests that rigid behavior results when pre-existing
schemas are used that do not reflect or react to changes in situation or context. Being able
to identify when interactions between schemas and context produce flawed decisions
(e.g. inappropriate rigid behavior) is critical to learning how to “manage cognitive
processes” (Elsbach et al., 2005, p. 431) in successful organizations. The ability to do so
often is derived from actively engaging a professional work environment where learning
is tacit from personal and shared experience among colleagues rather than exclusively
from formal and passive training (i.e. “social interaction” (Contu and Willmott, 2003) and
"knowing in action and in practice" (Handley et al., 2007)).
Situated cognition is related to the concepts of situated learning (Brown et al., 1989;
Griffin, 1995) and legitimate peripheral participation (Lave and Wenger, 1991) in
the education literature. Situated cognition promotes active rather than passive learning
that is contextual rather than isolated and abstract (Herbert and Burt, 2004; Handley et al.,
2007). For instance, learning vocabulary from use in conversation with others is more
effective than rote memorization by oneself from a dictionary. Situated cognition
suggests among other things that schemas tend to be relatively stable and persistent
(i.e. rigid) unless particularly novel situations are encountered (Elsbach et al., 2005).
In contrast to the research on adaptive behavior, empirical studies of situated
cognition in education focus on novice behavior (Brown et al., 1989; Griffin, 1995) with
the goal of developing experience-based schemas through approaches such as
mentorship. Research on more experienced decision makers tends to be limited to
examples of bounded or rigid behavior (Elsbach et al., 2005), perhaps because situated
cognition is rooted in the education literature. Thus, research needs to extend the theory
of situated cognition to more comprehensively address the rigid and adaptive behavior
of experienced decision makers in real-world contexts.
Bridging the theories of situated context and adaptive behavior, I draw on the insights
from Holyoak (1991), who distinguishes between “routine” and “adaptive” experts. Holyoak
reasons that the expertise of routine experts is based on mastering foundation skills, rules,
and procedures in familiar (i.e. routine) tasks and settings, while adaptive experts rely more
on an abstract understanding that can be applied to both familiar and unfamiliar tasks and
settings. Adaptive experts develop a deeper conceptual understanding of the task and
context because of the richness of their learning environment, which is more "variable"
and "unpredictable" than the "stereotyped" environment of the routine expert (i.e. more
unbounded and ill-structured) (Holyoak, 1991, p. 310). In short, adaptive experts have
a larger set of experiences in different situations that enable them to adapt to the current
context.
Although Newell and Simon (1972) suggest that everyone with high domain
knowledge has the capacity to adapt to both structured and ill-structured task demands, it
may be that only those who have learned or acquired domain knowledge in ill-structured
tasks are able to “invent new procedures” to fit the challenge of ill-structured tasks
(Holyoak, 1991, p. 310 emphasis added). Thus, being exposed to ill-structured situated
contexts is critical to developing the ability to adapt.
Consistent with Holyoak (1991), chess masters (Chase and Simon, 1973), bridge
playing experts (Frensch and Sternberg, 1989), and physicists (Chi et al., 1981), who
rely primarily on a mastery of routinized knowledge, are routine experts. In contrast,
various types of business functional experts, including marketing decision makers
(Beyer et al., 1997) are adaptive experts. The latter acquire their expertise from repeated
exposure to complex tasks in ill-structured (unbounded) learning environments, each
with some degree of underlying structure, but clearly with a moderate to high degree of
ill structure.
Adaptive experts use a combination of rigid and non-rigid decision processes
depending upon the mix of ill-structured and well-structured informational cues
contained in the decision task (Holyoak, 1991). Routine experts, on the other hand,
inappropriately apply routine (fixated or rigid) decision processes to ill-structured tasks
and settings because they are more bound to procedural knowledge, and thus are less
aware of unspecified constraints in that task. Thus, it follows that individuals should
follow similar procedures across structured tasks because the task structure requires
continued use of the same routine response. In contrast, individuals should adapt
procedures across ill-structured tasks, which demand different responses.

IV. Task analysis and hypotheses


The going-concern task has two stages (evaluation and opinion) that involve the same
judgment (i.e. how doubtful the auditor is about the firm’s continued existence) but
contain very different core constraints or task structures. Recall that ill-structured tasks
are unique, address problems that have few or no specified guidelines, and need judgment
and insight to resolve, while well-structured tasks are routine, address well-defined
problems with well-defined alternatives, and require little judgment. The literature
identifies the first stage as “ill-structured” and the second as “structured”
(Abdolmohammadi, 1999; Prawitt, 1995).
In stage 1, the auditor assesses whether there is substantial doubt that the firm
will remain a viable going concern. The official accounting literature neither defines the
going-concern concept nor specifies how to evaluate whether there is substantial doubt, but
ambiguously leaves the judgment to be based on the aggregate data collected in the audit.
As one practitioner explains, "[b]ecause the assumption itself is not defined, there are
wide-ranging interpretations of what an exception comprises" (Venuti, 2004, p. 42).
At the end of stage 1, the auditor arrives at a preliminary judgment of the firm's
going-concern viability by deciding what information is salient, whether the information
suggests a problem that might threaten the firm's status, and, if a problem is detected,
what the cause might be. As such, the task requires decision processes that are inherently
iterative, complex, and unbounded. Because every firm is unique and may contain many
problems and causes that are too subtle to detect with simple ratio analysis, the auditor
must look into the qualitative context of the business that frames and gives meaning to
the numbers, without the aid of any specific audit procedures. The evaluation stage is
thus ill-structured because it involves an unconstrained task that is not well defined.
Should the evaluation in stage 1 suggest substantial doubt about viability, then before
issuing a final report, the auditor must judge how effective management’s plans are for
addressing the firm’s core problems. This stage, which results in an audit opinion, is
bounded by the findings of the evaluation stage (i.e. the need to consider management’s
plan arises solely in reaction to a finding of substantial doubt in the first stage). For
instance, if liquidity is the area of concern, the auditor analyzes and tests viability with
respect to liquidity only. Because rules and procedures limit the actions that can be taken
by the auditor, this second (“opinion”) stage is thus constrained and well structured. Put
differently, stage 2 is well defined because the problems that might affect the type of
opinion to be issued have been defined in stage 1, and in turn, they lead to clearly
articulated steps to assess the ability of management to mitigate the problems[1].
Building upon the theoretical frameworks of situated cognition and adaptive
behavior and the task analysis of the going-concern judgment, I hypothesize that:
H1. Auditors will use adaptive decision processes for ill-structured situations
(evaluative stage).
H2. Auditors will use rigid decision processes for well-structured situations
(opinion stage).

V. Experiment
Sample and task description
The sample consists of 23 auditors from international accounting firms who had
attained the status of senior or manager to help ensure that each had expertise with the
going-concern task. The experiment required about 60 minutes to complete and was
presented in the field via Search Monitor, which is menu-driven software well suited for
investigating information acquisition because it unobtrusively and completely records
all information acquired by the user (Biggs et al., 1993).
The experiment included data from six companies that were extracted from
public documents. Using real companies was intended to help ensure external validity,
although fictitious names were used. Case order was varied across participants to
minimize an order effect. Each session began with a practice session to allow
participants to become familiar with the task.
Each case began with a description of the company. The second screen listed ten
categories of information and then requested the selection of one category. Auditors
could select from five categories of financial information (profitability, liquidity,
financial leverage, inventory turnover, and capital intensity). Each measure had three
years of data. Seven pieces of strategic information were also available (e.g. biographies
of the president/CEO, senior vice presidents, and vice presidents; market demand;
competition; description of products; and description of any patents) consistent with
studies that have shown the importance of non-financial information in going-concern
judgments (Parker et al., 2005).

Dependent variables and research design


Rigid and adaptive behaviors pertain to specific underlying decision processes
(e.g. hypothesis generation, information acquisition, and hypothesis evaluation) that
result in a judgment (Hogarth, 1987). Situated cognition specifically refers to the
“ongoing cognitive processes,” referred to as “sensemaking,” including environmental
scanning, interpretation, and understanding, as mediating between schemas and actions
(Elsbach et al., 2005, p. 424). I directly measure one tangible aspect of the ongoing
cognitive process, information acquisition, as the dependent construct. Information
acquisition provides important evidence of a decision maker’s schema (i.e. beliefs about
the task environment including its structure). Information acquisition is the first step in
the process of learning, which relates to Holyoak’s (1991) distinction between adaptive
and routine experts in that the former are able to “learn” how to adjust to changing task
demands. Information acquisition can be measured in a verifiable and reliable way
(Biggs et al., 1993), which overcomes some of the limitations to existing data collection
methods for research on situated cognition (Elsbach et al., 2005).
Relying on the literature, which established that the first stage of the going-concern
task is ill-structured (and the second stage is structured), I developed measures of
processing behavior that are conceptually driven, conform to aspects of the task of
interest, and are directly observed rather than inferred, to help improve the construct
validity of the measures. While Wilner and Birnberg (1986) specifically mention verbal
protocols as a source of process data, I selected computer process data because the latter
have been documented to be more complete traces of information acquisition (Biggs et al.,
1993). Moreover, the processing measures conform to the nature of the going-concern
task, in that they represent observable aspects of rigidity and adaptation that are
tailored to reflect the nature of the structure underlying each stage of the going-concern
decision. This allows measurement of the change in processing that occurs with changes
in the task (Payne and Bettman, 2004).

Measures for testing H1


In their review of the literature on adaptive decision making, Payne and Bettman
(2004, p. 120) summarize four possible “observable aspects of processing” that “vary with
changes in the decision task”: amount of information processed, selectivity of information
processed, alternative vs attribute-based processing, and attribute processing involving
multiple attributes. The latter two measures, which examine attribute and
alternative-based processing, have specific application in the literature; that is, they are
used to identify the type of strategy selected by decision makers generally in order to be
able to describe whether compensatory processing was being followed (Payne, 1976).
As such, they are not relevant to the research question about “when.”
In contrast, the first two measures, amount and selectivity, are relevant to the present
inquiry. It has been well established that amount of information is a valid measure
of information acquisition (Ford et al., 1989; Swain and Haka, 2000) and adaptation
(Payne and Johnson, 1988; Rosman and Bedard, 1999; Rosman et al., 1994; Walsh, 1988).
While amount of information is usually a simple count, the second measure, selectivity,
is related more specifically to the type of information that is examined. Below, amount
and selectivity measures are developed that relate to the specific task in the experiment.
Because employing a single dependent measure can threaten construct validity, four
measures of rigidity of information acquisition are used in the ill-structured evaluation
stage. "Amount financial" compares a simple (continuous) count of the number of
financial informational cues that were acquired by each participant in the experiment for
one company, with the similarly computed count for a second company. “Amount
strategic” is the equivalent comparison score for the qualitative cues[2].
“Amount” measures are coarse grained since they do not inform us as to what that
information was. Two finer grained, within-subject measures (same financial and same
strategic), are also used. These two additional sameness measures are consistent with
the taxonomy offered by Payne and Bettman (2004) relating to measures of selectivity,
and are constructed using the following three steps.
In step 1, “same financial” was constructed by determining whether a cue from one of
the five categories of financial information (e.g. capital intensity) was acquired for one
of the two near-bankrupt companies. If no cue was acquired, then the measure was
scored a “0”. If at least one cue was acquired, then the measure was scored a “1.” In step 2,
the scores for the two companies were subtracted. In step 3, the absolute value of each
difference score was obtained and then summed over the five categories.
Absolute values were used since the direction of a difference does not matter as described
previously for the “amount” scores. Rather, what matters is only that a difference
occurred. A score of “1” implies non-rigidity or adaptation (i.e. the auditor approached
the two companies differently, rather than automatically acquiring the same category of
information). For example, the auditor may have judged that a particular category of
financial information contained cues salient to one company, but not to another.
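Expressed compactly (the notation below is introduced here only for illustration and does not appear in the original), steps 1-3 amount to

$$\text{same financial} \;=\; \sum_{k=1}^{5} \bigl|\, I_{1,k} - I_{2,k} \,\bigr|, \qquad
I_{c,k} \;=\;
\begin{cases}
1 & \text{if at least one cue in financial category } k \text{ was acquired for company } c,\\
0 & \text{otherwise,}
\end{cases}$$

so a total of zero indicates that exactly the same categories were consulted for both companies (rigidity), while larger totals indicate adaptation.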
To illustrate, assume that an individual acquired at least one cue in the liquidity and
leverage categories for the first company, and at least one cue in the profitability,
liquidity, and capital intensity categories for the second company (see Appendix 1).
A score of “1” would be assigned to profitability for Company 2, to liquidity for both
companies, to leverage for Company 1, and to capital intensity for Company 2. All other
category/company combinations would be assigned a score of “0.” Difference scores
would be calculated and then the absolute value would be obtained. The absolute value
scores would be summed across the five categories to produce the metric for data
analysis. In this example, the absolute value of “1” would be scored for profitability,
leverage, and capital intensity, whereas liquidity and inventory turnover would receive
a score of “0.” The sum of the absolute value of difference scores is three, which is the
value used for data analysis. A similar analysis was conducted using the five categories
of strategic information (management, market demand, competition, description of
products, and patents) to obtain a metric for each participant for “same strategic.”
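As a concrete illustration of how the evaluation-stage measures could be computed from an acquisition trace, the following sketch is offered (Python; the cue counts and variable names are hypothetical and are not taken from the search-monitor output). It reproduces the "amount financial" and "same financial" scores for the worked example above.

```python
# Minimal sketch (hypothetical data): evaluation-stage rigidity measures for
# one auditor and two companies. Cue counts per category are illustrative
# values matching the worked example (company 1: liquidity and leverage;
# company 2: profitability, liquidity, and capital intensity).

FINANCIAL_CATEGORIES = ["profitability", "liquidity", "financial leverage",
                        "inventory turnover", "capital intensity"]

cues_company_1 = {"profitability": 0, "liquidity": 2, "financial leverage": 1,
                  "inventory turnover": 0, "capital intensity": 0}
cues_company_2 = {"profitability": 1, "liquidity": 3, "financial leverage": 0,
                  "inventory turnover": 0, "capital intensity": 2}

# "Amount financial": absolute difference in the total number of financial
# cues acquired for the two companies (direction does not matter).
amount_financial = abs(sum(cues_company_1.values()) - sum(cues_company_2.values()))

# "Same financial": score each category 1 if at least one cue was acquired
# (step 1), take the between-company difference (step 2), and sum the
# absolute values of the differences (step 3).
same_financial = sum(
    abs((cues_company_1[c] > 0) - (cues_company_2[c] > 0))
    for c in FINANCIAL_CATEGORIES
)

print(amount_financial)  # 3 with these illustrative counts
print(same_financial)    # 3, as in the worked example
```

The "amount strategic" and "same strategic" measures follow the same pattern over the strategic categories.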

Measures for testing H2


Auditors participate in the opinion stage only if they identified substantial doubt in the
evaluation stage. In the opinion stage, they list factors that they consider important for
determining whether management might be able to mitigate the substantial doubt
expressed in the evaluation stage. These mitigating factors were then classified
according to the guidance provided by the authoritative audit literature appearing in
SAS 59 (AICPA, 1988, paragraphs 7-9), which provides auditors with general guidance
on factors that might mitigate any doubts expressed in the initial evaluation stage.
These six measures are:
(1) the number of factors consistent with the guidance provided in SAS 59 (SAS 59 yes);
(2) the number of factors not specifically referred to in SAS 59 (SAS 59 no);
(3) the number of factors dealing with financial issues (financial);
(4) the number of factors dealing with non-financial issues (strategic);
(5) the number of factors addressing past activity (past); and
(6) the number of factors addressing future possible activities (future).

Because auditors proceed to the opinion stage of the going-concern task and seek
additional information only when they conclude in the evaluation stage that there is
substantial doubt about the entity’s viability, the within-subject analysis used in the
opinion stage is performed on a reduced set of participants (i.e. those who expressed
substantial doubt in the evaluation stage). In contrast, the within-subject analysis used
in the evaluation stage is performed for all participants.
Measures in the second stage are constructed to be similar to those in the first stage.
The opinion stage measure similar to amount of information acquired in the evaluation
stage is the number or amount of mitigating factors identified by participants.
The opinion stage measures similar to “sameness” measures in the evaluation stage
are based on the characteristics of the mitigating factors identified previously in SAS 59:
financial, strategic, past, future, consistent with or not consistent with SAS 59.
The calculation of the sameness measures in the opinion stage is similar to that in the
evaluation stage (see the illustration in Appendix 2). In sum, there are four measures of rigidity
in the evaluation stage (amount and same for financial and strategic) and seven
measures of rigidity in the opinion stage.
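A comparable sketch for the opinion stage is given below (again Python; the factor codings are entirely hypothetical, whereas in the study each mitigating factor was classified using the SAS 59-based scheme described above). It shows how the amount and sameness scores could be derived once each factor has been coded on the six characteristics.

```python
# Minimal sketch (hypothetical data): opinion-stage measures for one auditor
# who expressed substantial doubt for two companies. Each mitigating factor
# is coded on six characteristics.

CHARACTERISTICS = ["financial", "strategic", "past", "future",
                   "sas59_yes", "sas59_no"]

factors_company_1 = [
    {"financial": True, "strategic": False, "past": True, "future": False,
     "sas59_yes": True, "sas59_no": False},
]
factors_company_2 = [
    {"financial": True, "strategic": False, "past": False, "future": True,
     "sas59_yes": True, "sas59_no": False},
    {"financial": False, "strategic": True, "past": False, "future": True,
     "sas59_yes": False, "sas59_no": True},
]

# Amount: absolute difference in the number of mitigating factors identified.
amount_mitigating = abs(len(factors_company_1) - len(factors_company_2))

# Sameness per characteristic: 1 if at least one factor with that
# characteristic was identified for exactly one of the two companies,
# 0 if the companies were treated the same on that characteristic.
same_scores = {
    ch: abs(any(f[ch] for f in factors_company_1)
            - any(f[ch] for f in factors_company_2))
    for ch in CHARACTERISTICS
}

print(amount_mitigating)  # 1 with these illustrative codings
print(same_scores)        # e.g. {'financial': 0, 'strategic': 1, 'past': 1, ...}
```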

VI. Results
Table I presents the within-subject results of the paired t-tests that address H1 for the
ill-structured evaluation stage. H1 states that auditors will exhibit adaptive behavior in
this stage of the going-concern task because the ill-structured nature of this task lends
itself to adaptive behavior. Each t-test analyzes whether the difference in the means is
statistically different from zero. Statistically significant differences provide evidence of
adaptive behavior.

Table I. Analysis of H1 for the ill-structured stage

                     n    Mean difference     SE      t      p*
Amount financial    23         0.58          0.09    6.67   0.00
Amount strategic    23         0.64          0.09    7.00   0.00
Same financial      23         0.93          0.13    7.38   0.00
Same strategic      23         0.83          0.11    7.47   0.00

Notes: *Non-Bonferroni corrected p-values; tests of whether the absolute value of difference scores are statistically different from zero
Table I shows data across all six cases. Each of the mean differences tested in Table I is
statistically significant at p = 0.00, consistent with the expectation in H1, and provides
evidence of adaptive behavior. Recognizing that the use of separate t-tests may bias the
p-values in favor of significant findings, it is appropriate to apply the Bonferroni
family-wise error correction to adjust p-values. Doing so, however, does not change the
results in Table I (i.e. each reported p-value remains significant at below the 0.01 level).
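For readers who wish to reproduce the form of this analysis, a minimal sketch follows (Python with NumPy and SciPy; the score arrays are illustrative placeholders, not the study's data). Each within-auditor absolute-difference measure is tested against a mean of zero, and a Bonferroni family-wise correction is then applied across the measures; the same procedure, with the family size set to the seven opinion-stage measures, applies to Table II below.

```python
# Minimal sketch (illustrative arrays, not the study's data): one-sample
# t-tests of within-auditor absolute-difference scores against zero, with a
# Bonferroni family-wise correction across the evaluation-stage measures.
import numpy as np
from scipy import stats

measures = {
    "amount financial": np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 1]),
    "amount strategic": np.array([0, 1, 1, 1, 0, 1, 0, 1, 1, 1]),
    "same financial":   np.array([1, 2, 0, 1, 1, 0, 2, 1, 1, 0]),
    "same strategic":   np.array([0, 1, 1, 2, 0, 1, 1, 0, 1, 1]),
}

n_tests = len(measures)  # family size for the Bonferroni correction
for name, scores in measures.items():
    result = stats.ttest_1samp(scores, popmean=0.0)
    p_adj = min(1.0, result.pvalue * n_tests)  # Bonferroni-adjusted p-value
    print(f"{name}: t = {result.statistic:.2f}, p = {result.pvalue:.3f}, "
          f"adjusted p = {p_adj:.3f}")
```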
Table II presents the results of the t-tests in the well-structured opinion stage in cases
where an auditor expressed substantial doubt about going-concern viability. H2 states
that auditors will exhibit rigid behavior in this stage of the going-concern task because the
structured nature of this task promotes rigid behavior. Each t-test analyzes whether the
difference in the means is statistically different from zero. Statistically significant
differences provide evidence of adaptive behavior, which would not be consistent with H2.
The variable in stage 2 that most closely resembles the amount variable in stage 1 is
shown in Table II as “amount (no. of mitigating factors).” It is a count of mitigating
factors. The amount variable is statistically significant overall, which suggests adaptive
behavior that is not consistent with the expectation in H2. One of the other variables in
Table II, which resembles the sameness measures in Table I, is statistically significant
(same SAS 59 yes; t = 2.45, p = 0.04) and another is marginally significant (same
strategic; t = 1.96, p = 0.08). Otherwise, the remaining four paired t-tests are not
statistically significant. However, a Bonferroni family-wise error correction to adjust
p-values for same SAS 59 yes and same strategic would produce non-statistically
significant results. Therefore, after the Bonferroni correction, the p-value for only the
amount (number of mitigating factors) variable would remain statistically significant.

Table II. Analysis of H2 for the structured stage

                                       n    Mean difference     SE      t      p*
Amount (no. of mitigating factors)    10         0.80          0.13    6.00   0.00
Same financial                        10         0.20          0.13    1.50   0.17
Same strategic                        10         0.30          0.15    1.96   0.08
Same past                             10         0.20          0.13    1.50   0.17
Same future                           10         0.10          0.10    1.00   0.34
Same SAS 59 no                        10         0.10          0.10    1.00   0.34
Same SAS 59 yes                       10         0.40          0.16    2.45   0.04

Notes: *Non-Bonferroni corrected p-values; tests of whether the absolute value of difference scores are statistically different from zero

VII. Discussion and conclusion


Motivated by research on situated cognition and adaptive decision making, I investigated
the influence of task structure on rigid and adaptive information acquisition across
different task settings in the going-concern task. To minimize threats to construct validity,
I relied on the literature to establish that the first stage of the going-concern task is
ill-structured (H1) and the second stage is structured (H2), and employed multiple dependent
measures.
Auditors, whose professional experience involves interacting with and learning from others
across both structured and ill-structured tasks, generally performed consistently with
theory. When task structure was manipulated, auditors were shown to adapt to the changing
task when appropriate in ill-structured tasks. Adaptation occurs because individuals
have at their disposal the schemas necessary to apply to the novel (ill-structured)
context. Auditors also remained rigid when appropriate in structured tasks when
rigidity was measured by the more fine-grained sameness variables, but rigidity was not
observed for the coarse-grained count (amount) measures after the Bonferroni
correction.
Amount measures are coarse grained while the sameness measures are more fine
grained. Given the complex nature of the judgments involved in the going-concern task,
it can be argued that the fine-grained measures are more revealing about behavior and
diagnostic regarding rigidity vs adaptation. However, it also means that research in
other tasks using similar coarse-grained and fine-grained measures should investigate
whether decision behavior is adaptive across ill-structured tasks and rigid across
structured tasks. If a similar pattern is observed, then researchers may be able to
conclude from the accumulated evidence that rigidity exists in the underlying substance
of the information acquired even if the absolute amount of information acquired differs
(i.e. the decision maker is adaptive).
It remains an open question as to whether the conclusions will generalize to other
professionals, which is another reason to support additional research. However, task
structure is a characteristic of virtually all decision contexts regardless of the domain.
For instance, neurologists and CEOs make decisions in structured and ill-structured
settings. Thus, the findings with auditors should hold across domains.
In sum, the findings suggest the importance of explicitly accounting for the nature of
the task when conducting research about decision behaviors in situated contexts. It is
important to consider task structure when examining behavior because “understanding
the response requires understanding the stimulus” (Gibbins and Jamal, 1993, p. 453).
Although such concepts as selective perception, cognitive embeddedness, functional
fixedness, strategic myopia, and strategic blind spots may have merit, they were
uncovered by research designs that only implicitly accounted for task structure, and
therefore may overstate the dark side of expertise. Understanding decision behavior of
experts can only be enhanced:
[. . .] "[b]y learning what types of interactions of schema and context lead to the most effective
outcomes." Doing so shifts the focus of learning "from overly rigid models and routines that are
unresponsive to change [. . .] toward routines that encourage the development of the transitory
perceptual frames that are relevant to the current context" (Elsbach et al., 2005, p. 431).

Notes
1. The paper has described the going-concern judgment task using terminology and procedures
consistent with generally accepted auditing standards (GAAS) in the USA because the
participants in this study are auditors in the USA who follow these standards. A review of the
comparable international standards (IFAC, 2009) describes essentially the same stages as
under US standards. For example, in paragraph 16 of ISA 570, the standard discusses what the
auditor should do once events or conditions are found that “may cast significant doubt on the
entity’s ability to continue as a going concern” including the evaluation of mitigating factors.
That is, like US GAAS, ISA 570 specifically considers two stages, one in which substantial
doubt is identified and a subsequent stage in which the event or condition that raised doubt is
then investigated to see if it can be mitigated.
2. This study is interested in differences in acquisition behavior across the companies rather
than in the direction of differences. Therefore, the absolute value of the difference scores is
used since it does not matter whether a higher or lower amount of information was acquired
for company one compared with company two, but rather what matters is only that
a difference occurred. A non-zero score for either of the two "amount" measures implies that the
auditor demonstrated non-rigidity (flexibility) in the acquisition of information for the two
companies in the experiment.

References
Abdolmohammadi, M. (1999), “A comprehensive taxonomy of audit task structure, professional
rank, and decision aids for behavioral research”, Behavioral Research in Accounting,
Vol. 11, pp. 51-92.
AICPA (1988), Statement on Auditing Standards (SAS) No. 59: The Auditor's Consideration of
an Entity’s Ability to Continue as a Going Concern, American Institute of Certified Public
Accountants, New York, NY.
Anderson, J.R. (1990), The Adaptive Character of Thought, Erlbaum, Hillsdale, NJ.
Ashton, R.H. (1976), “Cognitive changes induced by accounting changes: experimental evidence
on the functional fixation hypothesis”, Journal of Accounting Research, Vol. 14,
pp. 1-17 (Supplement).
Barnes, P. and Webb, J. (1986), “Management information changes and functional fixation: some
experimental evidence from the public sector”, Accounting, Organizations and Society,
Vol. 11 No. 1, pp. 1-18.
Beyer, J.M., Chattopadhyay, P., George, E., Glick, W.H., Olgivie, D.T. and Pugliese, D. (1997),
“The selective perception of decision makers revisited”, Academy of Management Journal,
Vol. 40 No. 3, pp. 716-37.
Biggs, S., Rosman, A. and Sergenian, G. (1993), “Methodological issues in judgment and
decision-making research: concurrent verbal protocol validity and simultaneous traces of
process data”, Journal of Behavioral Decision Making, Vol. 6 No. 3, pp. 187-206.
Bloom, R., Elgers, P.T. and Murray, D. (1984), “Functional fixation in product pricing:
a comparison of individuals and groups”, Accounting, Organizations and Society, Vol. 9
No. 1, pp. 1-11.
Bowrin, A.R. (1998), “Review and synthesis of audit structure literature”, Journal of Accounting
Literature, Vol. 17, pp. 40-71.
Brown, J.S., Collins, A. and Duguid, P. (1989), “Situated cognition and the culture of learning”,
Educational Researcher, Vol. 18, January-February, pp. 32-42.
Chang, D.L. and Birnberg, J.G. (1977), “Functional fixity in accounting research: perspective and
new data”, Journal of Accounting Research, Vol. 15 No. 2, pp. 300-12.
Chase, W.G. and Simon, H.A. (1973), “The mind’s eye in chess”, in Chase, W.G. (Ed.), Visual
Information Processing, Academic Press, New York, NY.
Chi, M.T.H., Feltovich, P. and Glaser, R. (1981), “Categorization and representation of physics
problems by experts and novices”, Cognitive Science, Vol. 5 No. 2, pp. 121-52.
Contu, A. and Willmott, H. (2003), “Re-embedding situatedness: the importance of power
relations in learning theory”, Organization Science, Vol. 14 No. 3, pp. 283-96.
Cyert, R. and March, J. (1963), A Behavioral Theory of the Firm, Prentice-Hall, Englewood Cliffs, NJ.
Dearborn, D.C. and Simon, H.A. (1958), “Selective perception: a note on the departmental
identifications of executives”, Sociometry, Vol. 21 No. 2, pp. 140-4.
Duncker, K. (1945), “On problem solving”, Psychological Monographs 58, No. 5 (Whole No. 270),
American Psychological Association, Washington, DC.
Elsbach, K.D., Barr, P.S. and Hargadon, A.B. (2005), "Identifying situated cognition in
organizations", Organization Science, Vol. 16 No. 4, pp. 422-33.
Ford, J.K., Schmitt, N., Schechtman, S.L., Hults, B.M. and Doherty, M.L. (1989), “Process tracing
methods: contributions, problems, and neglected research questions”, Organizational
Behavior and Human Decision Processes, Vol. 43 No. 1, pp. 75-117.
Frensch, P.A. and Sternberg, R.J. (1989), “Expertise and intelligent thinking: when is it worse to
know better?", in Sternberg, R.J. (Ed.), Advances in the Psychology of Human Intelligence,
Erlbaum, Hillsdale, NJ.
Geletkanycz, M.A. and Black, S.S. (2001), “Bound by the past? Experience-based effects on
commitment to the strategic status quo”, Journal of Management, Vol. 27 No. 1, pp. 3-21.
Gibbins, M. and Jamal, K. (1993), “Problem-centered research and knowledge-based theory in the
professional accounting setting”, Accounting, Organizations and Society, Vol. 18 No. 5,
pp. 451-66.
Griffin, M.M. (1995), “You can’t get there from here: situated learning, transfer, and map skills”,
Contemporary Educational Psychology, Vol. 20 No. 1, pp. 65-87.
Handley, K., Clark, T., Fincham, R. and Sturdy, A. (2007), “Researching situated learning:
participation, identity and practices in client-consultant relationships”, Management
Learning, Vol. 38 No. 2, pp. 173-91.
Herbert, D.M.B. and Burt, J.S. (2004), “What do students remember? Episodic memory and the
development of schematization”, Applied Cognitive Psychology, Vol. 18, pp. 77-88.
Hogarth, R.M. (1987), Judgement and Choice: The Psychology of Decision, 2nd ed., Wiley,
New York, NY.
Holyoak, K.J. (1991), “Symbolic connectionism: toward third-generation theories of expertise”,
in Ericsson, K.A. and Smith, J. (Eds), Toward a General Theory of Expertise: Prospects and
Limits, Cambridge University Press, New York, NY, pp. 301-35.
IFAC (2009), International Standard on Auditing (ISA) 570, “Going Concern”, International
Federation of Accountants, available at: http://web.ifac.org/download/a031-2010-iaasb-
handbook-isa-570.pdf (accessed April 20, 2010).
Katz, R. (1982), “The influence of job and group longevities”, in Katz, R. (Ed.), Career Issues in
Human Resource Management, Prentice-Hall, Englewood Cliffs, NJ.
Lave, J. and Wenger, E. (1991), Situated Learning: Legitimate Peripheral Participation,
Cambridge University Press, Cambridge.
Libby, R. and Luft, J. (1993), “Determinants of judgment performance in accounting settings:
ability, knowledge, motivation, and environment”, Accounting, Organizations and Society,
Vol. 18 No. 5, pp. 425-50.
Maier, N.R.F. (1931), “Reasoning in humans II. The solution of a problem and its appearance in
consciousness”, Journal of Comparative Psychology, Vol. 12, pp. 181-94.
Marchant, G. (1990), “Accounting changes and information processing: some further empirical
evidence”, Behavioral Research in Accounting, Vol. 2, pp. 93-103.
Marchant, G., Robinson, J., Anderson, U. and Schadewald, M. (1991), “Analogical transfer and
expertise in legal reasoning”, Organizational Behavior and Human Decision Processes,
Vol. 48 No. 2, pp. 272-90.
Mason, J. and Mitroff, I. (1981), Challenging Strategic Planning Assumptions, Wiley, New York, NY.
Mintzberg, H., Raisinghani, D. and Theoret, A. (1976), “The structure of ‘unstructured’ decision
processes”, Administrative Science Quarterly, Vol. 21 No. 2, pp. 246-75.
Newell, A. and Simon, H.A. (1972), Human Problem Solving, Prentice-Hall, Englewood Cliffs, NJ.
Parker, S., Peters, G.F. and Turetsky, H.F. (2005), "Corporate governance factors and auditor
going concern assessments", Review of Accounting and Finance, Vol. 14 No. 3, pp. 5-29.
Payne, J.W. (1976), "Task complexity and contingent processing in decision-making: an
information search and protocol analysis", Organizational Behavior and Human
Performance, Vol. 16 No. 2, pp. 366-87.
Payne, J.W. and Bettman, J.R. (2004), “The information-processing approach to decision making”,
in Koehler, D.J. and Harvey, N. (Eds), Blackwell Handbook of Judgment and Decision
Making, Blackwell, Malden, MA, pp. 110-32.
Payne, J.W., Bettman, J.R. and Johnson, E.J. (1988), “Adaptive strategy selection in decision
making”, Journal of Experimental Psychology: Learning, Memory, and Cognition, Vol. 14
No. 3, pp. 534-52.
Prawitt, D.F. (1995), “Staffing assignments for judgment-oriented audit tasks: the effects of
structured audit technology and environment”, The Accounting Review, Vol. 70 No. 3,
pp. 443-65.
Reitman, W. (1965), Cognition and Thought, Wiley, New York, NY.
Rosman, A. and Bedard, J. (1999), “Lenders’ decision strategies and loan structure decisions”,
Journal of Business Research, Vol. 46 No. 1, pp. 83-94.
Rosman, A., Lubatkin, M. and O’Neill, H. (1994), “Rigidity in decision behaviors: a within-subject
test of information acquisition using strategic and financial informational cues”, Academy
of Management Journal, Vol. 37 No. 4, pp. 1017-33.
Rosman, A., Seol, I. and Biggs, S. (1999), “The effect of stage of development and financial health
on auditor decision behavior in the going-concern task”, Auditing: A Journal of Practice
& Theory, Vol. 18 No. 1, pp. 37-54.
Shanteau, J. (1992), “The psychology of experts: an alternative view”, in Wright, G. and
Bolger, F. (Eds), Expertise and Decision Support, Plenum, New York, NY, pp. 11-23.
Simon, H.A. (1981), The Sciences of the Artificial, 2nd revised and enl. ed., MIT Press,
Cambridge, MA.
Staw, B.M. (1981), “The escalation of commitment to a course of action”, Academy of
Management Review, Vol. 6 No. 4, pp. 577-87.
Stewart, T.R., Roebber, P.J. and Bosart, L.F. (1997), “The importance of the task in analyzing
expert judgment”, Organizational Behavior and Human Decision Processes, Vol. 69 No. 3,
pp. 205-19.
Swain, M.R. and Haka, S.F. (2000), “Effects of information load on capital budgeting decisions”,
Behavioral Research in Accounting, Vol. 12, pp. 171-98.
Turner, B.A. (1976), “The organizational and interorganizational development of disasters”,
Administrative Science Quarterly, Vol. 21 No. 3, pp. 378-97.
Venuti, E.K. (2004), “The going-concern assumption revisited: assessing a company’s future
viability”, CPA Journal, Vol. 74 No. 5, pp. 40-3.
Walsh, J.P. (1988), “Selectivity and selective perception: an investigation of decision makers’
belief structures and information processing”, Academy of Management Journal, Vol. 31
No. 4, pp. 873-96.
Wilner, N. and Birnberg, J. (1986), “Methodological problems in functional fixation research:
criticism and suggestions”, Accounting, Organizations and Society, Vol. 11, pp. 71-80.
Further reading
Abdolmohammadi, M. and Wright, A. (1987), "An examination of the effects of experience and
task complexity on audit judgments”, The Accounting Review, Vol. 62, pp. 1-13.
Arunachalam, V. and Beck, G. (2002), “Functional fixation revisited: the effects of feedback and a
repeated measures design on information processing changes in response to an accounting
change”, Accounting, Organizations and Society, Vol. 27 Nos 1/2, pp. 1-25.
Cook, T.D. and Campbell, D.T. (1979), Quasi-experimentation: Design and Analysis Issues for
Field Settings, Houghton Mifflin, Boston, MA.
Spilker, B.C. and Prawitt, D.F. (1997), “Adaptive responses to time pressure: the effects of experience
on tax information search behavior”, Behavioral Research in Accounting, Vol. 9, pp. 172-98.

Appendix 1. Calculation of “same” financial measure for H1


Assume that an individual acquired at least one cue in the liquidity and leverage categories for the
first company, and at least one cue in the profitability, liquidity, and capital intensity categories for
the second company. To calculate the “same” financial metric, a score of “0” is assigned for each
category for each company when information was not acquired and a “1” when information was
acquired (step 1). In step 2, a difference score would be calculated (Company 1 minus Company 2). In step 3, the
absolute value of the difference scores was obtained and the difference scores were summed. Here,
the resulting metric used in data analysis was “3.” A similar analysis was conducted using the five
categories of strategic information (management, market demand, competition, description of
products, and patents) to obtain a metric for each participant for “same strategic.”

Table AI. Calculation of the "same" financial measure for H1

                              Step 1                    Step 2       Step 3
Category                 Company 1   Company 2       Difference   Absolute value of difference
1. Profitability             0           1               -1                    1
2. Liquidity                 1           1                0                    0
3. Financial leverage        1           0                1                    1
4. Inventory turnover        0           0                0                    0
5. Capital intensity         0           1               -1                    1
Total                                                                          3

Appendix 2. Calculation of “same” measure for H2


Assume that an individual identified at least one mitigating factor relating to a financial cue for the
first company and at least one mitigating factor relating to a financial cue for the second company. To calculate the "same"
financial metric a score of “0” is assigned for each category for each company when information
was not acquired and a “1” when information was acquired (step 1). In step 2, a difference score
would be calculated (Company 1 minus Company 2). In step 3, the absolute value of the difference scores was
obtained and the difference scores were summed. Here, the resulting metric used in data analysis
was “0” indicating that there was no difference across the two companies. If the individual had
identified at least one mitigating factor for either Company 1 or 2 but not both, then the difference
would be 1 or -1, respectively, and the absolute value of the difference would be 1. A similar
analysis was conducted for each of the remaining opinion stage variables (strategic, past, future,
SAS 59 no, and SAS 59 yes).

Table AII. Calculation of the "same" measure for H2

                         Step 1                    Step 2       Step 3
Variable            Company 1   Company 2       Difference   Absolute value of difference
Financial               1           1                0                    0

About the author
Andrew J. Rosman is an Associate Professor at the University of Connecticut and a University
Teaching Fellow. He has been at the University of Connecticut since 1989 and teaches global
financial reporting and analysis. Andrew J. Rosman's primary research focus has been on how
decision makers use accounting information with the objective of identifying ways to improve
decision behavior. Other interests include accounting regulation issues and research methods. He
has published research in the Journal of Accounting and Economics; Journal of Accounting,
Auditing and Finance; Auditing: A Journal of Practice & Theory; Academy of Management
Journal; Journal of Behavioral Decision Making; Journal of Business Venturing; Journal of Business
Research; and Research in Accounting Regulation. Andrew J. Rosman can be contacted at:
andy.rosman@business.uconn.edu
