
ASSESSING CANDIDATES

So Who Have We Really Been Hiring?


We make assumptions about the tools we use to hire people.
But sometimes, we’re assuming a bit too much.
By Gary C. Townsend, head of HR Business Integration and Workforce Analytics, Coca-Cola Africa

Measurement as the “scientific” basis of the hiring process is generally unquestioned. The assumption is that the tools in use have been validated and are generally accepted in practice as being statistically sound. As a consequence, most of the ongoing efforts of recruiters have been focused on figuring out how the assessments help deliver strategies such as “recruiting the best,” “building world-class talent,” “driving diversity through creative recruiting strategies,” “finding more innovative ways to hire the best,” and “building pipelines for leadership talent.”

Similarly, there’s debate about the transactional aspects of assessments — should we administer them with paper or online, for example. In other words, the use of assessments has become a given. In many ways, the perception of the recruiting community is that it has moved along to bigger and better things, as though the assessment part of the process has been neatly packaged and dealt with by virtue of its historical contribution.

The question, however, remains: Are the assessments we’re using in fact providing the solid foundation we assume they are?

While all the inspirational strategies provide most recruiters with great material for planning sessions and personal performance-development plans, one cannot help but reflect on where it all points. What is fundamentally driving all the fanciful models, and more important, what is everyone ultimately attempting to achieve? Once distilled, it appears relatively straightforward — measuring for the most suitable person for the job — nothing more and nothing less.

Hiring is the one objective opportunity that every organization has to get it right. Everything else is a consequence of this choice: coaching, communication, creativity and innovation, diversity, empowerment, initiative and risk-taking, mentoring, personal integrity, planning and organizing, problem solving and decision making, quality of results, teamwork, technical competency, vision, and the list goes on. All said and done, the selection process, more often than not, is really a guesstimate based on interviewer experience, some “timeless” tools (and the accompanying assumptions we inherit with them), the odd gut feel (often lumped into the “fit factor”), and the internal politics of the day.

Given all this, maybe the recruiting community should pause to consider the tremendous implications of how we construct our assessments and how they can be affected by things that may seem trivial or academic. It is often left in the hands of the apparent experts from various outsourced companies to select and administer these products because of the perceived overly technical nature of this part of the recruiting exercise. This is understandable, given the effort it takes to go through any assessment’s detailed technical report to validate its proposed accuracy and applicability. Yet these very impressive-looking manuals, usually filled with equally impressive statistics, are often not worth the paper they are written on.

The truth of the matter is that any interpretation can only be as good as the quality of its measure. Herein lies the fundamental problem we’re facing — that recruiters are often unaware of how these measures are constructed. When one considers the diligence exhibited throughout the recruiting process, the selection of tools and instruments, and sometimes even the choice of statistical analysis, it would make sense to focus equally on the primary question of how the measurement itself is constructed. It’s impossible to make inferences about who is the better person for the job without rigorous measurement. This has to be the pivotal issue once we decide to use any form of psychological assessment as part of the selection battery.

So, to the point. If we’re looking for the best, how do we measure for “the best”? And even more important, how do we know we have a tool that is calibrated to measure the best in a way that is invariant and can be reproduced whenever necessary?

For a bit more clarity, let’s deconstruct this using a typical example of how these rating scales are put together when developing an occupational or personality instrument. First, the researcher would establish a group of items intended to measure a specific construct such as leadership. After these items were administered to a selected sample of individuals, the responses would be aggregated and presented as a total scale value. In our example, let’s assume that the researcher is developing a scale to measure leadership, with high total scores representing more of the quality being measured and low scores indicating less. The items are scored on a five-point Likert scale: 1 (N, never), 2 (S, seldom), 3 (NSM, neither seldom nor most of the time), 4 (M, most of the time), and 5 (A, always). Given this context, let’s assume that the following represent a few of the items measuring this leadership construct.

[Figure 1: three sample items from the leadership scale. Item 1 reads “Exploits information from various constituents to formulate plans for competing in the marketplace.”]

Let us now assume that the rating-scale responses for an individual on the three items represented above are 1, 3, and 5, respectively. Traditionally, this person would be assigned a score of 9 on the leadership scale, and this 9 would then be used as the measure in all further statistical analyses. Now let’s consider another individual who responded 4, 4, and 1. It is plain that this person receives exactly the same score of 9.
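As a quick illustration of how this traditional raw-score approach collapses very different response profiles into one number, here is a minimal Python sketch using the hypothetical ratings from the example above:

```python
# Two candidates answer the same three leadership items on the
# 1-5 Likert scale: 1=N, 2=S, 3=NSM, 4=M, 5=A.
candidate_a = [1, 3, 5]   # never on item 1, NSM on item 2, always on item 3
candidate_b = [4, 4, 1]   # most of the time on items 1 and 2, never on item 3

# Traditional scoring simply adds the raw ratings.
score_a = sum(candidate_a)
score_b = sum(candidate_b)

print(score_a, score_b)      # 9 9, identical totals
print(score_a == score_b)    # True, despite very different profiles
```

Every downstream statistic computed from these totals inherits the same blindness: the two candidates are indistinguishable even though they endorsed very different items.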
Considering this situation, it is clear that we are making the following assumptions when we add up the ratings in this fashion: 1) that each item contributes equally to the measurement of the construct, and 2) that each item is measured on the same scale.

When considering the first assumption, we’re concluding that each item has exactly the same qualitative value when measuring this imaginary idea of what constitutes leadership. However, it is glaringly obvious that each of the items represented above brings a distinctly different qualitative value to the overall leadership measure we’re attempting. Consider items 1 and 3 in Figure 1. It is obvious that a high score on item 1 should carry more weight than an equally high score on item 3. One could argue as well that item 2 holds a relatively low standing on this leadership hierarchy relative to item 1, and possibly a higher ranking than item 3. This brings us to the heart of the discussion: if our items reflect a distinct difference in the levels of endorsement they bring to the leadership ability we’re seeking to measure, then we cannot avoid analyzing our data in a way that distinguishes the value each item brings to the measurement of the whole. There can be very little doubt that a score of 4 on item 1 contributes significantly more to leadership than a score of 4 on item 3.
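To see what relaxing the first assumption would mean in practice, here is a small sketch in which the items are allowed to carry different weights. The weights below are invented purely for illustration; a real instrument would have to estimate the items’ relative contributions rather than assert them:

```python
# Hypothetical weights expressing how much each item "counts" toward
# leadership: item 1 is the strongest indicator, item 3 the weakest.
# These values are invented for illustration only.
weights = [2.0, 1.2, 0.5]

candidate_a = [1, 3, 5]
candidate_b = [4, 4, 1]

def weighted_score(ratings, weights):
    """Weight each raw rating by its item's assumed contribution."""
    return sum(r * w for r, w in zip(ratings, weights))

print(weighted_score(candidate_a, weights))  # 1*2.0 + 3*1.2 + 5*0.5 = 8.1
print(weighted_score(candidate_b, weights))  # 4*2.0 + 4*1.2 + 1*0.5 = 13.3
```

Once the items are allowed to contribute unequally, the two candidates who were tied at 9 separate sharply, and the one who endorsed the stronger leadership items comes out ahead.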
The second assumption raises the issue of the interval scale. By using a five-point Likert scale, we assume a uniform distribution between each point on the item scale, as well as across all the items. Let’s consider the first item in our example. Given the assumption that each point on the scale is equidistant from the next, we would imagine that (N) never is as far from (S) seldom as (M) most of the time is from (A) always, or as (S) seldom is from (NSM) neither seldom nor most of the time, and so on.

However, considering the item statement “Exploits information from various constituents to formulate plans for competing in the marketplace,” it could very well be that (N) never and (S) seldom are psychologically much closer to each other (in the minds of the respondents), as are (M) most of the time and (A) always. Let us explore this graphically using item 1 (see Figure 2). Responses based on the assumption of linearity and equidistance would look something like this (the assumption being that the difference between N and S is the same as that between S and NSM, and so on).

[Figure 2: the five response options N, S, NSM, M, and A laid out on a line at equal intervals.]

In reality, and as explained above, there is a very distinct difference in respondents’ psychological interpretation of the distances between these various options. Respondents to this and other items would find the choice between (M) most of the time and (A) always much easier than the choice between (S) seldom and (NSM) neither seldom nor most of the time. (N) never and (S) seldom are also much more difficult to choose between in terms of endorsement. If we were to lay this out graphically, the relative distances in terms of strength of endorsement would look something like this:

[Figure 3: the same five response options, now spaced according to the relative psychological distances between them; the gaps labeled (b), (c), and (d) mark the distances from M to A, from NSM to M, and from N to A, respectively.]

This spatial representation of the psychological impact of making these choices demonstrates that there is a large psychological difference between endorsing A on this fictitious leadership item and rejecting it with N — (d). The decision is relatively definitive — you either exploit information from various constituents or you don’t. However, when one considers the psychological shift that has to be made in deciding to endorse either (M) most of the time or (A) always, the boundaries start blurring considerably — (b). The choice now becomes more reflexive and could be based on a host of psychological precursors. The respondent, for example, could have just completed an exercise gathering strategic market-related intelligence. On the other hand, the respondent could be drawing on similar but more historical personal actions. These two scenarios could be all that separates the choice of always from the choice of most of the time! However, it’s very unlikely that, given the suggested context, the second scenario would result in a tossup between (NSM) neither seldom nor most of the time and (M) most of the time — (c).

In much the same way, this lack of linearity is usually evident across items as well. Here’s what I mean by that: the psychological distance (value) between NSM and M — (c) — on item 1 may be very different from the corresponding distance on item 3. Two prospective applicants with exactly the same psychological style and the same associated skills identified by item 1 could very possibly view the psychological distinction between NSM, M, and A quite differently. One individual, for example, could feel comfortable selecting M as a response, while another would have no problem selecting NSM. This would have minimal impact when hiring a frontline supervisor, but when we start dealing with high-stakes personnel it can easily translate into either an unqualified success or a devastating failure for the organization. So one cannot assume that the “value” of a move from NSM to M is the same as that of a move from M to A.

Further, simply tallying raw scores and using them as an indicator of the strongest candidate on this particular item will most definitely bias the result against the individual who selected (M) most of the time as opposed to (A) always.
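To make that bias concrete, here is a small sketch that re-scores responses using hypothetical “psychological” positions for the categories instead of the raw 1-to-5 coding. The positions are invented purely to mirror the spacing sketched in Figure 3:

```python
# Raw Likert coding treats every pair of adjacent categories as one unit apart.
raw_position = {"N": 1, "S": 2, "NSM": 3, "M": 4, "A": 5}

# Hypothetical positions on the endorsement continuum (invented for
# illustration): N and S sit close together, there is a wide jump up to
# NSM and again to M, and M and A are nearly indistinguishable.
latent_position = {"N": 1.0, "S": 1.25, "NSM": 3.0, "M": 4.5, "A": 4.75}

def gap(scale, lower, upper):
    """Distance between two response categories on the given scale."""
    return scale[upper] - scale[lower]

# Under raw coding, moving from M to A "gains" exactly as much as moving
# from S to NSM...
print(gap(raw_position, "M", "A"), gap(raw_position, "S", "NSM"))        # 1 1
# ...but on the hypothetical continuum the two one-point moves differ a lot.
print(gap(latent_position, "M", "A"), gap(latent_position, "S", "NSM"))  # 0.25 1.75
```

A one-point difference in raw score can therefore stand for either a trivial or a substantial difference in actual endorsement, which is exactly the distortion that penalizes the candidate who chose M rather than A.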
In essence, raw item ratings are unable to factor in this lack of linearity, both within and across the various items measuring a specific quality such as leadership. The majority of instruments currently in circulation perpetuate this fundamental weakness in their designs. They confuse counts with measures.

The quantitative observations these instruments use to arrive at a final score are grounded in counting observed events or, as in this example, leadership properties, whereas for any measurement to be meaningful it must be based on the arithmetical properties of the interval scales used. So, before we even begin to consider whether one candidate is better suited for a particular job than another, we have to be assured of this fundamental prerequisite: measurement can only take place once a calibrated measure, with a well-defined origin and unit, has been constructed in a way that can be used consistently and reliably. We have to ensure that we are measuring and not simply counting observations.

To date we have readily accepted the various reports and analyses stemming from a host of assessments without any reservations. I propose that the next time we pick up the phone to call our favorite assessment consultancy, or reach for an instrument off the shelf, we should be able to answer this question: Does this instrument allow me to independently estimate the person’s ability, or level of endorsement, on the latent trait and the level of difficulty of the various items on that same latent trait, and yet compare the two explicitly to one another? After all, how accurate could any selection process be if the determining psychological construct has never been comprehensively measured in the first place?
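That closing question, whether person ability and item difficulty can be estimated independently yet expressed on one latent scale, describes the separability property associated with Rasch-family measurement models. The article does not name a model, so the following is offered only as a minimal sketch of the idea, using the dichotomous Rasch model with invented parameter values:

```python
import math

def rasch_probability(ability, difficulty):
    """Dichotomous Rasch model: the probability that a person endorses an
    item, with both parameters expressed on the same logit scale."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# Invented calibrations: item 1 is "harder" to endorse than item 3, so
# endorsing it says more about the candidate than endorsing item 3.
item_difficulty = {"item 1": 1.5, "item 2": 0.4, "item 3": -1.0}

candidate_ability = 0.8   # a person measure, also in logits (invented)

for item, difficulty in item_difficulty.items():
    p = rasch_probability(candidate_ability, difficulty)
    print(f"{item}: probability of endorsement = {p:.2f}")
```

Because person and item parameters share one interval scale with a defined origin and unit, endorsing a hard item counts for more than endorsing an easy one, which is one way of getting the calibrated measure, rather than the count, that the article is asking for.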

Gary C. Townsend is head of HR Business Integration and Workforce Analytics at Coca-Cola Africa. gtownsend@afr.ko.com

Gary Townsend joined the Coca-Cola Southern & East Africa Division as part of a team that developed and managed a unique marketing management training program for graduates. He was tasked with the curriculum design, assessments, recruitment, training, management, and placement. After three years, he moved into the role of human resources information strategy manager for the Coca-Cola Southern and East Africa Division. In 2005, he assumed the role of head of HR Business Integration and Workforce Analytics as a member of the Center of Excellence for Coca-Cola Africa, with an accountability extending across 56 countries. In this role he provides strategic direction and thought leadership in supporting the Africa Group and local division strategies.

