
Behavioural Ethics

The Fraud Triangle
The term fraud triangle was coined by American sociologist Donald R. Cressey, who worked extensively
in the fields of criminology and white-collar crime. Fraud is often, but not always, a white-collar crime.
The fraud triangle is a framework designed to explain the reasoning behind a worker's decision to commit
workplace fraud. The three stages, categorised by their effect on the individual, can be summarised as
pressure, opportunity and rationalisation. Broken down, they are:

Step 1 – the pressure on the individual – is the motivation behind the crime. It can be personal financial
pressure, such as debt problems; workplace pressure, such as a shortfall in revenue; or peer/social
pressure. The pressure is seen by the individual as unsolvable by orthodox, legal, sanctioned routes and
unshareable with others who may be able to offer assistance. Maintenance of a lifestyle is another
common example.

Step 2 – the opportunity to commit fraud – is the means by which the individual will defraud the
organisation. In this stage the worker sees a clear course of action by which they can abuse their position
to solve the perceived unshareable financial problem in a way that – again, perceived by them – is unlikely
to be discovered. In many cases the ability to solve the problem in secret is key to the perception of a
viable opportunity.

Step 3 – the ability to rationalise the crime – is the final stage in the fraud triangle. This is a cognitive stage
and requires the fraudster to be able to justify the crime in a way that is acceptable to his or her internal
moral compass. Most fraudsters are first-time criminals and do not see themselves as criminals, but rather
as victims of circumstance. Rationalisations are often based on external factors, such as a need to take care
of a family or a dishonest employer, and are seen to minimise or mitigate the harm done by the crime.

Elements, Psychological Biases, and Rationalizations of Behavioural Ethics

Role Morality
Role morality is the notion that people sometimes fail to live up to their own ethical standards because they
see themselves as playing a certain role that excuses them from those standards.
For example, say a person views herself as a loyal employee of a company. In that role, she might act
unethically to benefit her employer in ways that she would never do to help herself. To paraphrase
researcher Keith Leavitt, the same person may make a completely different decision based on what hat – or
occupational role – she may be wearing at the time, often without even realizing it.
In one study people were asked to judge the morality of a company selling a drug that caused unnecessary
deaths when its competitors’ drugs did not. 97% of people concluded that it would be unethical to sell the
drug. Then, the researchers placed different people into groups, and asked each group to assume the role
of the company’s directors. Acting as directors, every one of the 57 groups decided to sell the drug. They
framed the issue as a business decision in dollars-and-cents terms. They ignored the harmful impact their
decision would have on others.
So, ethical behavior requires maintaining the same moral standards regardless of the roles we play at
home, at work, or in society.
Sometimes organizational and psychological pressures cause even good people to act unethically. In a
lawsuit over a car wreck, an insurance company representing the defendant demanded the right to have
its doctor examine the plaintiff. When he did, the doctor found that the plaintiff had a life-threatening
brain aneurysm. Because it would have disadvantaged the insurance company’s defense, the doctor did
not tell the plaintiff, who did not find out for two more years. Why would a doctor keep this vital
information from an injured man? Obviously, the doctor viewed his job as protecting the insurance
company’s financial interests, Hippocratic Oath be damned. This is an example of something ethicists call
role morality.

Role morality has been defined as feeling that you have permission to harm others in ways that would be
wrong if it weren’t for the role that you are playing. Role morality often involves people acting in ways that
they would view as clearly unethical if they were acting on their own behalf, but because they are acting
on behalf of their employer or a client, they view their actions as permissible.

In a detailed study of a corporation, sociologist Robert Jackall found that many employees segregated their
personal beliefs from the ethics of their workplace. He quoted an officer as saying: “What is right in the
corporation is not what is right in a man’s home or in his church. What is right in the corporation is what
the guy above you wants from you. That’s what morality is in the corporation.”
When people check their personal moral code at the door, they can suddenly become capable of doing
horrendous things. After World War II, Albert Speer, Hitler’s Minister of Armaments and War Production,
said that he viewed his role as an “administrator.” As a mere administrator, he convinced himself that
matters relating to human beings, including, of course, the Holocaust, were not his concern. This man
checked his humanity at the door.
A study by professors at Brigham Young University found that family businesses are more likely to act in a
socially responsible way than bigger companies. The family name is on the door and officers want to act in
ways that reflect well upon their family. However, people working in bigger corporations find it easier to
separate their personal feelings of how business should be done from their role inside the organization.
We cannot leave behind our personal beliefs as to right and wrong when we walk through our office doors.

Ethical Fading
Ethical fading occurs when the ethical aspects of a decision disappear from view.
This happens when people focus heavily on some other aspect of a decision, such as profitability or
winning. People tend to see what they are looking for, and if they are not looking for an ethical issue, they
may miss it altogether.
Psychologist Ann Tenbrunsel and colleagues find that innate psychological tendencies often cause us to
engage in self-deception, which blinds us to the ethical components of a decision. For example,
euphemisms like “We didn’t bribe anyone… we just ‘greased the wheels,’” help people disguise and
overlook their own wrongdoing.
Ethical fading is similar to moral disengagement. Moral disengagement is when people restructure reality
in order to make their own actions seem less harmful than they actually are. Both ethical fading and moral
disengagement help people minimize the guilt they feel from violating ethical standards.
So, while ethical fading is common, we can try to counteract it by learning to recognize when we put
ethical concerns behind other factors in making decisions.
In the book he wrote about his crimes, disgraced lobbyist Jack Abramoff – "Casino Jack" – asked: "What was
I thinking?" This is a familiar refrain among white-collar criminals. Why can they see their ethical failings in
retrospect, but not earlier when it really mattered?
Part of the explanation is what professors Ann Tenbrunsel and David Messick call ethical fading. Imagine
that you work for a company in internal audit and your boss asks you to inappropriately massage some
earnings numbers. And it happens to be the week that the company is deciding whom to lay off in the
most recent round of cutbacks. And you want to keep your job, of course. It is possible that you will not
even notice the ethical dimensions of the action you have just been asked to take by your boss. These
ethical dimensions may just fade from view.
Ethical decisions are often made almost automatically by the parts of our brain that process emotions.
Only later do our cognitive processes kick in. When we think we are reasoning to an ethical conclusion,
often all we are really doing is searching for rationalizations to support the decision that we have already
made instinctively.
As time distances us from the decision we have made, the ethical issues may start to reappear. We may
feel the need to reduce the dissonance that results from the conflict between our view of ourselves as ethical
people and the unethical actions we have committed. Studies show that offering people an opportunity to
wash their hands after behaving immorally is often enough to restore their self-image. There's a reason
we talk about starting with a "clean" slate.

Even if our minds cannot cause an ethical issue to fade from view, a process known as moral
disengagement can mitigate the sting of an unethical decision. Moral disengagement is a process by which
our brain enables us to turn off our usual ethical standards when we feel the psychological need to do so,
just like we’d turn off a TV when a show comes on that makes us uncomfortable.

Studies show, for example, that people who want to buy an article of clothing that they know was
manufactured with child labor will suddenly view child labor as less of a societal problem than they
thought before. Moral disengagement allows us to suspend our personal codes of ethics, yet continue to
view ourselves as ethical people.
There is no easy cure for ethical fading and moral disengagement. Our only option is to be vigilant in
looking out for ethical issues and equally circumspect in monitoring our own actions and rationalizations.

Fundamental Attribution Error


Attribution Theory
Attribution theory holds that individuals tend to decide that a behavior is caused by a particular
characteristic or event. We make these attributions or judgments about what caused the resulting
behavior based on our personal observation or evaluation of the situation. For instance, after being fired
from a position, one might blame the dismissal on an internal factor or personal characteristic, such as being
an incompetent worker. Or the individual might blame the dismissal on an external factor, such as a
declining economy. Understanding how and why we make these attributions is important because several
achievement theories stress that future decisions and behaviors are based more on our perception of why
something happened rather than on the actual outcome. Therefore, we tend to reinforce our beliefs of
ourselves and others based on the perceptions we gain from these experiences.
The fundamental attribution error is the tendency people have to overemphasize personal characteristics
and ignore situational factors in judging others’ behavior. Because of the fundamental attribution error, we
tend to believe that others do bad things because they are bad people. We’re inclined to ignore situational
factors that might have played a role.
For example, if someone cuts us off while driving, our first thought might be “What a jerk!” instead of
considering the possibility that the driver is rushing someone to the airport. On the flip side, when we cut
someone off in traffic, we tend to convince ourselves that we had to do so. We focus on situational
factors, like being late to a meeting, and ignore what our behavior might say about our own character.
For example, in one study when something bad happened to someone else, subjects blamed that person’s
behavior or personality 65% of the time. But, when something bad happened to the subjects, they blamed
themselves only 44% of the time, blaming the situation they were in much more often.
So, the fundamental attribution error explains why we often judge others harshly while letting ourselves
off the hook at the same time by rationalizing our own unethical behavior.
Think about the last time you were driving and someone passed you going well over the speed limit. What
did you think to yourself? Commonly, people say: “What an idiot!” But, if you are like most drivers, you’ve
sped yourself. Of course, you had a good reason to speed. You were late for a test, perhaps. But maybe
that “idiot” had a good reason, too.
The fundamental attribution error is the tendency we have to attribute causes of other people’s behavior
to their character rather than to situational factors. In other words, we tend to take circumstances into
account (indeed to exaggerate them) in judging our own behavior, but tend not to do so when judging
other people’s behavior.
The relevance this has for business ethics is significant. We conclude that the other guy cheated on his
wife because he's a bad person. I cheated on my wife because I had too much to drink. The other guy
fudged the numbers at work because he is a criminal. I fudged the numbers because my boss made me.
The other guy padded his expense account because he's a crook. I padded my expense account because
I'm working really hard and my boss underpays me.
The bottom line is that when we read in the newspaper that someone has been involved in a scandal, we
tend to say to ourselves: “That person did a bad thing. She must be a bad person. I’m a good person. I
wouldn’t do a bad thing.” And we dismiss the possibility of ever being caught in such an ethical blunder or
dilemma ourselves.

But if we think about it, we realize that good people do bad things all the time. Good people are subject to
many psychological tendencies and organizational pressures that influence human decision making —
things such as the desire to please authority and to be part of a team, the vulnerability to role morality
and incrementalism, the often overwhelming self-serving bias, and the like.

And we all tend to be overconfident in our own ethicality. Indeed, eighty percent or so of us just know that
we’re more ethical than our peers. If this overconfidence makes us too cocky, we may be blindsided by the
fundamental attribution error and become one of the many good people who do bad things every day.
When we hear about other people who have made ethical mistakes, perhaps the best thing we can do is
put ourselves in their shoes and try to understand why they made the mistakes they made. We must avoid
automatically assuming that we are better people than those who made an ethical misstep. A healthy dose
of “there but for the grace of God go I” might be in order. If we can be humble about our own morality and
learn from the mistakes of others, perhaps we can guard against making those same mistakes ourselves.

Moral Equilibrium
Moral equilibrium is the idea that most people keep a running mental scoreboard where they compare
their self-image as a good person with what they actually do.

When we do something inconsistent with our positive self-image, we naturally feel a deficit on the good
side of our scoreboard. Then, we will often actively look for an opportunity to do something good to bring
things back into equilibrium. This is called moral compensation.

Conversely, when we have done something honorable, we feel a surplus on the good side of our mental
scoreboard. Then we may give ourselves permission not to live up to our own ethical standards. This
is called moral licensing.

For example, Oral Suer, the hard-working CEO of the Washington D.C.-area United Way, raised more than
$1 billion for local charities. Unfortunately, Suer gave himself license to divert substantial sums intended
for the charity to his personal use, rewarding himself for his good deeds.

So, our tendency to maintain moral equilibrium may mean that we will act unethically. Indeed, we must
guard against our natural inclination to give ourselves permission to depart from our usual moral
standards.

Over the years we’ve all seen high-profile televangelists and “family values” politicians involved in sex
scandals. You might have also noticed numerous cases of embezzlement by employees of charitable
organizations. How is it that seemingly good people can act so unethically?

One factor is a psychological phenomenon known as moral equilibrium. The basic idea is that most of us
want and indeed need to think of ourselves as good people. We keep a sort of running scoreboard in our
heads, comparing our mental image of ourselves as good people to our actual behavior.
When we act in ways that don’t live up to our own ethical standards, we tend to feel bad and look for
ways to make up for it. So we might do good deeds in order to restore balance to our internal
scoreboard. This is called moral compensation.
On the flip side, when we do something good, we add points to the plus side of our mental scoreboard,
and we then may give ourselves permission to fail to meet our own ethical standards. This is called moral
licensing.
Moral compensation and moral license are the two components of moral equilibrium. Moral licensing is
the scary one. It is what allows TV evangelists, family values politicians, and people who work for
charities to start telling themselves how wonderful they are, and then to give themselves permission to
depart from their own ethical standards. Importantly, these people don’t even realize how their past
actions are affecting their current decisions.
One study asked two groups of people to write about themselves. The first group wrote about something
they did that they were NOT proud of, and the second group wrote about something they did that they
WERE proud of. Afterwards, both groups were asked to donate to charity or to volunteer.

The first group donated more to charity and volunteered more than the second group. The first group –
bad deeds fresh in their mind – was engaged in moral compensation. The second group – focused on
their own goodness – was practicing moral license.
There are many more studies on moral equilibrium, and they all make the same point: don't get cocky!
Just when you're feeling especially good about yourself, you're most in danger of giving yourself license
to screw up.

Moral Myopia
Moral myopia refers to the inability to see ethical issues clearly.

The term, coined by Minette Drumwright and Patrick Murphy, describes what happens when we do not
recognize the moral implications of a problem or we have a distorted moral vision. An extreme version of
moral myopia is called moral blindness.

For example, people may become so focused on other aspects of a situation, like pleasing their professor
or boss or meeting sales targets, that ethical issues are obscured.

Organizations can experience moral myopia too, as Major League Baseball did during the steroid era. For
more than a decade, players got bigger, hit more home runs, and revenues rose dramatically. But the
League didn’t see it, even as evidence of steroid use was rampant.

Societies may also suffer moral myopia, as they often have done at the expense of minorities. For
instance, the treatment of Native Americans and the enslavement of African-Americans are two examples
of moral blindness in the history of the United States.

Moral myopia is closely related to ethical fading. In both cases, people’s perception of reality becomes
altered so that ethical issues are indistinct and hidden from view.

The truth is that there are many people with good intentions out there, people who pledge to abide by
honor codes in college and ethics codes in the workplace, who make bad decisions and get caught up in
ethical problems and even scandals. How is it that these people who don’t intend to do anything wrong
get into trouble?

My coauthor, Patrick Murphy, and I have found in our research that some people have moral lapses
because of what we have called “moral myopia.” “Moral myopia” is a distortion of moral vision that
keeps ethical issues from coming clearly into focus. In fact, moral myopia can be so severe that an
individual is blind to ethical lapses and doesn’t see them at all.

Moral myopia can take many forms, but it generally occurs at one of three levels— the individual, the
organization, or society. At the individual level, a person with moral myopia may not see a problem with
something like fudging the numbers on a timesheet or an expense report or with lying to a supervisor or a
client in order to look a little better.
Assume that a salesperson falsely claims to have sold a major piece of equipment to a client in
the current quarter when she knows that the client will not actually want to purchase the equipment
until next quarter. She does this to qualify for a bonus. But think of the costs—the order has to be
processed; the equipment has to ship to a warehouse, and the company has to bear the costs of storage
until next quarter. Sales information is inaccurate, and there are distortions in expectations that can
jeopardize effective decision-making. There also will be the cost of records clean-up when the distortion
eventually comes to light; and so on. This can create an addictive cycle because the salesperson has
cannibalized next quarter’s sales. Most likely, she’ll have to find a way to inflate next quarter’s sales to
compensate. And all this assumes that the salesperson is not caught and penalized for gaming the
system.
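The compounding in this cycle can be made concrete with a small sketch. The figures below are invented purely for illustration: a salesperson who "borrows" sales from the next quarter to hit a fixed target starts every subsequent quarter deeper in the hole.

```python
# Hypothetical illustration of the "addictive cycle" of pulled-forward sales.
# All figures are invented; only the direction of the trend matters.

target = 100_000      # fixed quarterly sales target
real_demand = 90_000  # genuine sales available each quarter
borrowed = 0          # sales already booked early in a prior quarter

for quarter in range(1, 6):
    available = real_demand - borrowed   # demand left after earlier borrowing
    shortfall = target - available       # gap the salesperson must now fill
    borrowed = shortfall                 # booked early from the next quarter
    print(f"Q{quarter}: genuine sales {available:>7,}, pulled forward {borrowed:>7,}")
```

Under these assumed numbers, the amount that must be pulled forward grows every quarter, which is why the pattern is addictive: the inflation has to escalate just to stand still.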
At the organizational level, an advertising executive might say, "I could just never advertise cigarettes,"
but if her agency simultaneously has a tobacco account as a client and she doesn't see an ethical
problem, then she has a form of moral myopia.
We found that moral myopia tends to occur most often at the societal level, and here’s an example of the
form it might take. Again, think about an advertising executive. Assume that she knows that ultra-thin
models in ads can have a negative impact on young women’s perceptions of beauty and contribute to
problems such as eating disorders, but she doesn’t see any connection between the models that she
selects for the ads and this societal problem or feel any responsibility for contributing to it.

How can smart people miss these things that should be so apparent? The culprit seems to be
rationalizations – we use them with our parents, with our teachers and supervisors, and we use them
with ourselves. Some of the most common rationalizations that underpin moral myopia are "If it's legal,
it must be moral" and "If it's not illegal, it must be ethical." Listen to what the CEO of a major company
said to me:

I think this is probably one of the most ethical businesses there is. It is so regulated. Everything that we
do has to go through our lawyers to make sure it's conforming to law, and then our client's lawyers. . . .
It's really hard to be unethical in this business even if you wanted to.

He's making a classic mistake. Almost all ethicists and legal scholars view the law as the minimum, yet we
take comfort from the law. Guess what this CEO's industry is: advertising. In poll after poll on industry
ethics, advertising comes in second to last. The only industry less trusted than advertising is used car
sales.
What occurs in companies – and in other types of organizations as well – is that someone gets so
caught up in the enthusiasm of her organization and its efforts to reach certain goals that she doesn't see
signs that should be red flags.
And then there’s the ostrich syndrome—just sticking your head in the sand and ignoring ethical issues,
and we all know that that’s never a solution.

It's important to be aware of moral myopia and the rationalizations that support it, so that the
rationalizations will raise red flags and prompt a careful examination. It's also important to talk about the
issues that prompt rationalizations with people whom we respect. It can be helpful to have trusted
advisers outside of our work unit, company, industry, or profession because sometimes an entire group
of people can suffer from moral myopia. We all know that ethical issues can be difficult, but we are certainly
more likely to make sound ethical decisions if those issues come clearly into focus.

Incentive Gaming

Organizations and institutions frequently use financial incentives to motivate productive behavior. The
majority of people dislike effort to some degree, which forces authorities to either monitor people
intensely to ensure that they contribute, or to pay them based on their observable performance.
Salespeople are given commissions, bankers are given bonuses, and even teachers are paid for student
performance.

The problem with these incentives, of course, is that you need to decide on which metrics to base the
incentives, and then communicate those rules to people in order to motivate their performance.

You can only pay people based on what you observe, and you can’t observe everything. Which is why we
get incentive gaming.

Incentive gaming is when people manipulate pay-for-performance schemes in ways that increase their
compensation without benefiting the party that pays. Often referred to as “rewarding A while hoping for
B”, incentive gaming is an example of how opportunistic and strategic people can be when there are
financial rewards involved. People will focus all their effort on those incentives that pay them the best,
and will even manipulate information to represent their performance on those dimensions as higher than
it actually is.

The financial crisis of 2008 provided several excellent examples of incentive gaming. Some mortgage
brokers, who were paid commissions for originating mortgages, quickly learned they could earn more
money if they relaxed the credit requirements for homebuyers. They were compensated based on
originating a loan, not on whether that loan defaulted in subsequent years, a costly outcome for the bank
and homeowner.
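A minimal sketch of that misalignment, using entirely hypothetical numbers: the broker's payoff depends only on origination volume, so relaxing credit standards raises the broker's income even as it destroys value for the bank.

```python
# Hypothetical sketch of "rewarding A while hoping for B" in mortgage lending.
# The broker is paid per origination; default losses land on the bank alone.

def broker_payoff(loans: int, commission: float) -> float:
    # Commission depends only on volume, never on loan quality.
    return loans * commission

def bank_payoff(loans: int, income_per_loan: float,
                default_rate: float, loss_per_default: float) -> float:
    # The bank earns interest on every loan but absorbs every default.
    return loans * income_per_loan - loans * default_rate * loss_per_default

# Relaxed standards mean more loans but a much higher default rate (invented figures).
for name, loans, default_rate in [("strict", 100, 0.02), ("relaxed", 150, 0.10)]:
    print(f"{name:>7}: broker {broker_payoff(loans, 2_000):>9,.0f}  "
          f"bank {bank_payoff(loans, 5_000, default_rate, 100_000):>10,.0f}")
```

Under these assumed numbers, relaxing standards raises the broker's pay by half while turning the bank's profit into a loss: exactly the gap between the metric that is rewarded and the outcome that is hoped for.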

The incentives designed to motivate effort and entrepreneurial behavior also motivate people to increase
their earnings in ways that hurt both their customers and market efficiency.
Examples of incentive gaming are everywhere. When teachers are paid based on the standardized test
performance of their students, they focus much of their effort on teaching to the test and hurt student
education. When salespeople are given bonuses for reaching monthly sales targets, they offer customers
unnecessary discounts to buy now rather than later. When workers are paid based on their relative
rankings, they may focus their effort on sabotaging their coworkers instead of improving their own
performance.

The implication of incentive gaming is that managers and policy-makers need to understand that humans
are clever and opportunistic beings. If you give them an incentive system, many of them will figure out
how to manipulate it to maximize pay and minimize effort. Designers of incentive-based compensation
systems must think carefully about unintended consequences, put themselves in the shoes of their
employees, and ask, "If I were given these incentives, what might I do to game them?"

Moral Muteness

Moral muteness occurs when people witness unethical behavior and choose not to say anything. It can
also occur when people communicate in ways that obscure their moral beliefs and commitments.

When we see others acting unethically, often the easiest thing to do is look the other way. Studies show
that less than half of those who witness organizational wrongdoing report it. To speak out risks conflict,
and we tend to avoid conflict because we pay an emotional and social cost for it.

For example, in one study, psychologist Harold Takooshian planted fur coats, cameras, and TVs inside 310
locked cars in New York City. He sent a team of volunteers to break into the cars and steal the valuables,
asking the "thieves" to act in an obviously suspicious manner. About 3,500 people witnessed the
break-ins, but only 9 people took any kind of action. Of those who spoke up, five were policemen.
Indeed, only a relatively small percentage of people who see wrongdoing speak up. But, if we wish to be
ethical people, we must strive to combat moral muteness in all areas of our lives.

We know problems occur when people don't see ethical issues. And there are also problems that develop
when people know deep down that something is wrong, but they find some way to rationalize not
speaking up about it. Scholars Frederick Bird and James Waters refer to this as "moral muteness." Moral
muteness is when people either don't voice moral sentiments or they communicate in ways that obscure
their moral beliefs and commitments.

In a study of people working in advertising agencies, Patrick Murphy and I found that moral muteness
was supported by a number of rationalizations, and we think that these rationalizations are applicable to
business more generally. For example, one common rationalization was that ethics is bad for business. As
one person said:

If you bring up ethics, people will think that you don’t have business savvy. They say, “If you’re going to
talk like that, go run a church.”

The fact of the matter is, when push comes to shove, many people would prefer to win than to be ethical,
but this is often a short-term approach that can actually hurt both the individual and the organization in
the long term.

Moral muteness can be supported by another kind of rationalization that involves a competing goal –
something that seems like a higher good that trumps moral concerns. But usually, the analysis of the
other goal is superficial. For example, the rationalization "The client is always right" can lead to a
please-o-holic syndrome in which you don't tell your clients things they don't want to hear – like moral
concerns. There is a big difference between a trusted business advisor and a please-o-holic. Trusted
business advisors sometimes have to give uncomfortable advice in order to do their jobs well.
Please-o-holics may feel like they are doing their jobs well in the short term, but in the long run, research
shows that a different type of relationship is more productive and more profitable.

One way to avoid moral muteness obviously is to make a habit of talking about issues with people whom
we trust. When we raise concerns, we can position ourselves as someone who is trying to help the
organization, to protect it from problems, rather than as someone who is trying to slap hands, or judge.
Often making persuasive arguments by illustrating the potential consequences for individuals and for the
organization can be helpful. And it’s important to view our role as that of a trusted business advisor – an
indispensable professional in any organization.

Conflict of Interest

A conflict of interest arises when what is in a person’s best interest is not in the best interest of another
person or organization to which that individual owes loyalty.

For example, an employee may simultaneously help himself but hurt his employer by taking a bribe to
purchase inferior goods for his company’s use.
A conflict of interest can also exist when a person must answer to two different individuals or groups
whose needs are at odds with each other. In this case, serving one individual or group will injure the other.

In business and law, having a “fiduciary responsibility” to someone is known as having a “duty of loyalty.”
For example, auditors owe a duty of loyalty to investors who rely upon the financial reports that the
auditors certify. But auditors are hired and paid directly by the companies whose reports they review. The
duty of loyalty an auditor owes to investors can be at odds with the auditor’s need to keep the company –
its client – happy, as well as with the company’s desire to look like a safe investment.

So, those of us who wish to be ethical people must consciously avoid situations where we benefit ourselves
by being disloyal to others.

Tangible & Abstract

The bias of tangible and abstract describes the fact that people are influenced more by what is
immediately observable than by factors that are hypothetical or distant, such as something that could
happen in the future or is happening far away.

For example, people may make decisions about natural resources without adequately considering the
impact those decisions may have on future generations, or on people in other countries.
In a famous example, the Pinto automobile flunked almost every routine safety test involving rear-end
collisions, but Ford put it on the market anyway in 1971. The company was racing to get a small car on the
market to challenge popular Japanese imports. Ford decided not to withhold the car from the market
because it wanted to avoid the immediate negative consequences of delaying: a stock price hit, employee
layoffs, and a public relations crisis. All those factors were very tangible for Ford. The considerations
against selling the
car were much more removed and abstract. For example, any potential crash victims, at that point, were
nameless and faceless. Their injuries would occur, if ever, off in the future, and they would likely be
someone else’s worry.

So, the principle of the tangible and abstract underscores how we can become blind to the negative
consequences of our actions. Indeed, we make moral errors by discounting factors outside our immediate
frame of reference.

Psychological studies show that human decision making is naturally impacted more by vivid, tangible,
contemporaneous factors than by factors that are removed in time and space. For example, people are
more moved by relatively minor injuries to their family, friends, neighbors and even pets than they are by
the starvation of millions of people abroad. This tendency we have – to give greater value to the tangible
over the abstract – can cause problems that have very real ethical dimensions.

Consider a corporate CFO who realizes that if he does not sign false financial statements, the company’s
stock price will immediately plummet. Not only will his firm's reputation be seriously damaged today,
but employees whom he knows and likes may well lose their jobs tomorrow. Those losses are vivid and
immediate. On the other hand, to fudge the numbers will visit a loss, if at all, mostly upon a mass of
nameless, faceless investors sometime off in the future. Perhaps unconsciously, the CFO will feel
substantial pressure to go ahead and fudge the numbers to protect against immediate and tangible losses.
Sociologist Robert Jackall studied in detail the inner workings of a corporation in his book Moral Mazes.
Jackall interviewed a manager of a chemical company who said that of course he didn’t want to damage
the environment or hurt people’s health. When the manager was faced with a choice between putting a
chemical in water that would kill twenty people out of a million versus spending $25 million of the
company’s money to spare those lives, he said: “Is it worth it to spend that much money? I don’t know
how to answer that question as long as I’m not one of those twenty people. As long as those people can’t
be identified, as long as they are not specific people, it’s OK to put the chemical in the water. Isn’t that
strange?”
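Put as bare arithmetic, the trade-off the manager could not bring himself to evaluate prices each anonymous life at $25,000,000 ÷ 20 = $1,250,000. The point of the tangible-and-abstract bias is that this calculation feels answerable only so long as the twenty people remain faceless.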

Psychologists Max Bazerman and Ann Tenbrunsel tell the story of a Goldman Sachs employee who blew
the whistle on a “late trading” scandal that allowed certain favored clients to trade to the detriment of
most of Goldman’s average clients. The Goldman Sachs employee had originally viewed victims of the
practice as part of "a nameless, faceless business." She said, "…In this business this is how you look at it.
You don’t look at it with a face.” But when her own sister asked for investing advice for her 401(k),
suddenly she saw things differently, “I saw one face—my sister’s face—and then I saw the faces of
everyone whose only asset was a 401(k). At that point I felt the need to try and make the regulators look
into [these] abuses.”

These are just a couple of examples of the tangible and the abstract at work, and often it’s not a pretty
picture. Working inside big corporate bureaucracies often causes people to feel largely separated from the
consequences of their decisions. Likewise, working in global or multinational corporations with offices
around the world can also cause people to feel that they are not responsible for the impact of their
decisions. Just because we can’t vividly perceive the impact of our decisions on those around us, it doesn’t
mean there isn’t one. To be ethical, we must look to the horizon and beyond when making business
decisions.
