1. Introduction
Hierarchical ranking is the most common and simplest instrument for
comparing discrete indicators. It is the quickest way to compare
competing entities, as long as objectives, rules of behavior and the
relevant measurement tools are shared within a community (school class,
athletic group, business, technology… measured by marks, speed, profit,
financial assets, etc.). As an instrument of evaluation, ranking can be
applied to almost any criterion. From school marks to the book of records
(which holds attention on the individual who first achieved a record
figure), ranking is used everywhere as a means of measurement and of
comparison. How can one improve this unavoidable measure? If one agrees
with the choice of data, ranking presents no difficulties except for the
quality of data collection. According to a criterion or a set of criteria,
the researcher ranks an entity number one until another entity surpasses
it under the accepted criteria. This process has accelerated within an
environment marked by increasingly open societies and expanding economies,
ushering ranking into a new era of development.
Even so, ranking appears highly problematic when dealing with complex
and intangible goods such as knowledge, science, technology and
innovation. In the case of the “production” of universities, simple criteria do
not apply due to the high complexity level (which is the case in most
dimensions of Social Sciences). Ranking may help at first in making crude
distinctions, but it immediately becomes a limited instrument, for there is no
“unique best way” to apply it in any human activity.
Given the fact that there are many possibilities to improve the ranking
process within its own rules and limits, this chapter intends to “drain” from
the methodology of evaluation several elements with which to improve
ranking of “world-class universities”. The author will begin with the
and with their results. As these partners become “drivers”, their
requirements differ widely from one another. That is to say:
Public authorities involved in university support are increasingly
concerned with the use of public money. The main share of universities’
budgets still depends on public decisions (in line with Adam Smith’s and
Alfred Marshall’s theory of the external effects attributed to knowledge
and education). In a period of relative shortage of public budgets
– due to increasing competition between states, to non-interventionist
ideology and to cuts in budget deficits – increasing responsibility is put
on public project managers, with more attention being paid to results (and
consequently to the universities’ mode of management).
Taxpayers are increasingly reactive to the way their money is used by
public as well as private research institutions. This often justifies
short-term political and financial views, in contrast to the long-term
dimension of the research process and the complex articulation between
Fundamental and Applied Sciences.
Universities have become increasingly decisive tools for economic
competitiveness, knowledge and innovation. This has led industries to be
directly concerned with university processes (e.g., hiring skilled students
and benefiting as directly as possible from new research). This concerns not
only high-tech industries, but also every “mid-tech” and “low-tech”
business involved in innovation and in the increasing use of general
purpose technology (Helpman, 1998).
Finally, journalists and other opinion makers are very active in shaping
universities’ visibility. They create and convey images – presented to the
public as proven reality – emphasizing both fictive and real strengths and
weaknesses of universities.
One can thus see that universities have become increasingly indebted
to, or at least dependent upon, a growing number of external partners,
such as taxpayers, government administrations and politicians, national and
international organizations, business managers, journalists, as well as
foundations, NGOs, etc. For various reasons, these external stakeholders
focus on the final “outputs” of universities. At best, they require
information on the relation between material and labor “inputs” (what they
have paid for) and “output” or “production”. Thus, external stakeholders are
largely unconcerned with the two central processes of university activity,
i.e., the production of new knowledge and the teaching-learning process
is very rarely the case. Yet, one can consider the ongoing processes of
evaluation as attempts to identify useful indicators for a given set of
questions and a given set of universities. The characterization of
universities is thus a first step toward the benchmarking of universities
worldwide.
In the evaluation processes, various types of practices and meanings can
be considered:
─ discipline-based versus institution-based evaluation;
─ inputs-based versus outputs-based evaluation;
─ internal versus external evaluation;
─ qualitative versus quantitative evaluation.
3. Evaluation Indicators
This section will present data and indicators commonly used for evaluation,
the objective being to extract a few especially well-grounded data that can
be adapted and used for general ranking processes. In so doing, however, it
is vital to keep in mind that ranking on a worldwide scale imposes very
strict constraints that limit the number of possible indicators that can be
employed. Therefore, indicators will be limited to those that are already
existing, comparable, and easy to collect.
This explains why the selection of indicators will be very limited. It also
brings new credit to some very restrictive measures (such as the Nobel Prize
award) due to their worldwide recognition and already existing status.
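As a purely hypothetical illustration of this screening, the sketch below filters a handful of invented candidate indicators against the three constraints just stated; the indicator names and flags are assumptions for demonstration, not drawn from any actual indicator list.

```python
# A hypothetical screening of candidate indicators against the three
# constraints above: already existing, comparable, easy to collect.
# Names and flags are illustrative assumptions.

candidates = [
    {"name": "Nobel Prize awards",    "existing": True,  "comparable": True,  "easy": True},
    {"name": "Graduate satisfaction", "existing": False, "comparable": False, "easy": False},
    {"name": "Citations per faculty", "existing": True,  "comparable": True,  "easy": True},
]

# Only indicators meeting all three constraints survive the screen.
selected = [c["name"] for c in candidates
            if c["existing"] and c["comparable"] and c["easy"]]
print(selected)
```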
Figure 4. The “third mission” dimension, eight items for data collection
1. Human resources
– Competencies trained through research transferred to industry (typical case of “embodied
knowledge”).
The essential indicator is: PhD students who work in industry, built upon both numbers and
ratios. The combination is important, since a ratio of 100 percent based on a single PhD
delivered may be far less relevant for industry than a ratio of 25 percent based on twenty
PhD students (see the sketch after this figure).
2. Ownership
– Research leading to publications or patents; with a changing balance between them.
The key indicators are: patent inventors (number and ratio) and returns to the university (via
licenses from patents, copyrights, etc., calculated as a total amount and as a ratio to
non-public resources). Other complementary indicators reflect the proactive attitude of the
university (existence of a patent office, number of patents held by the university).
3. Spin-offs
– Relevant indicators here are composite ones, that is to say they take into consideration the
three following entries:
– the number of incorporated firms;
– the number of permanent staff involved;
– more qualitative involvement, such as: the existence of support staff funded by the
university; the presence of business incubators; incentives for firm creation; funds for seed
capital; strategic alliances with venture capital firms, etc.
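As a worked illustration of item 1 above, the following sketch combines the absolute number of PhD students in industry with the corresponding ratio into one figure; the geometric-mean combination rule is an assumption chosen for illustration, not part of the original indicator definition.

```python
# Combine count and ratio so that one PhD at 100 percent does not
# outrank twenty PhDs at 25 percent, as the figure argues.
# The geometric-mean rule is an illustrative assumption.

def phd_industry_indicator(phds_in_industry: int, phds_delivered: int) -> float:
    """Geometric mean of the count and the ratio: rewards both scale and intensity."""
    if phds_delivered == 0:
        return 0.0
    ratio = phds_in_industry / phds_delivered
    return (phds_in_industry * ratio) ** 0.5

print(phd_industry_indicator(1, 1))   # one PhD, 100 percent  -> 1.0
print(phd_industry_indicator(5, 20))  # twenty PhDs, 25 percent -> ~1.12, ranked higher
```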
science” (and the Nobel Prizes it brings) with new forms of “co-operative
science” and more “internally driven” research strategies. This new
landscape, with a wider variety of dynamic models, must now be taken into
account.
At this point, strong arguments exist to advocate a radical divergence
between evaluation (and its characterization of universities) and ranking.
From a strictly critical standpoint, there exist, on one side, ranking
processes limited to structurally crude bibliometric approaches, based on
the smallest and most visible parts of the “output” of the complex process
of knowledge. The risk is that such a limited focus will lead to a
caricatured vision of university missions, providing almost no possibility
to draw useful relations between input and output. The existing set of
indicators for the Jiao Tong University ranking is:
Criterion              Indicator
Quality of Education   Alumni of an institution winning Nobel Prizes and Fields Medals
Quality of Faculty     Staff of an institution winning Nobel Prizes and Fields Medals
                       Highly cited researchers in 21 broad subject categories
Research Output        Articles published in Nature and Science*
                       Articles in Science Citation Index-expanded and Social Science
                       Citation Index
Size of Institution    Academic performance with respect to the size of an institution
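To make concrete how such a set of indicators is typically folded into a single ranking score, the sketch below scales each indicator against the best performer and applies weights; the weight values and university figures are illustrative assumptions, not the official Jiao Tong parameters.

```python
# A hedged sketch of a weighted composite ranking score: each indicator
# is scaled to 100 for the top performer, then weighted and summed.
# Weights and scores below are illustrative assumptions.

WEIGHTS = {
    "alumni_awards":    0.10,
    "staff_awards":     0.20,
    "highly_cited":     0.20,
    "nature_science":   0.20,
    "indexed_articles": 0.20,
    "per_capita":       0.10,
}

def composite_score(indicators: dict[str, float], best: dict[str, float]) -> float:
    """Scale each indicator to 100 for the field leader, then apply weights."""
    return sum(WEIGHTS[k] * 100 * indicators[k] / best[k] for k in WEIGHTS)

# Example: a hypothetical university scored against the field leader.
best = {k: 1.0 for k in WEIGHTS}  # leader normalized to 1.0 on every indicator
uni = dict(zip(WEIGHTS, [0.2, 0.5, 0.4, 0.6, 0.8, 0.3]))
print(round(composite_score(uni, best), 1))  # -> 51.0
```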
On the other side, there exist evaluation processes, which are over-
complex, too qualitative and subjective, appearing restricted to internal use
by each individually evaluated university. External comparisons are thus
limited to the benchmarking of specific functions between selected
universities. At first glance, therefore, it seems evaluation processes may not
be well adapted to making global comparisons between universities.
From this perspective, ranking and evaluation processes stand opposed
to one another. Yet, from the perspective of their objectives, they seem very
close. That is, both aim to meet the need for better efficiency via better
management of university missions.
The “ways and means” for institutional ranking have already progressed,
but they can still be greatly improved. In this respect, ranking can greatly
benefit from certain indicators used in the act of evaluation, but only
when they can be generalized. The question now remaining is how best to
identify relevant indicators and to produce them within the strict limits
of available means.
4.2. The set of characteristics should cover input, output and governance
indicators
The “relative utility” or relevance of an indicator is its ability to be used as a
tool for university management (finance, governance and work). Indicators
must provide access to the university’s production spectrum and
differentiation profile. The first two improvements concern the
discontinuance of some existing indicators and the adoption of more
appropriate ones, for example:
4.3. These changes will require specific collections of data: new indicators
mean new work
Such a renewal project will demand specific computation of existing
information as well as the creation of new information. New methodological
work has to be done, in addition to the creation of normalized measures
necessary for the rebuilding of global indicators. For this to happen,
effective connections with the OECD are crucial.
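As one hedged illustration of such normalized measures, the sketch below rescales raw indicator values from heterogeneous sources onto a common 0 to 100 range before they are rebuilt into global indicators; the min-max rule is one common convention, assumed here for demonstration.

```python
# Min-max rescaling of a raw indicator series to a common 0-100 range,
# so that values from heterogeneous national sources become comparable.
# The min-max convention is an illustrative assumption.

def normalize(values: list[float]) -> list[float]:
    """Rescale to 0-100; a constant series maps to all zeros."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [100 * (v - lo) / (hi - lo) for v in values]

# Example: patent returns (arbitrary national units) for four universities.
print(normalize([120.0, 45.0, 300.0, 45.0]))  # ~[29.4, 0.0, 100.0, 0.0]
```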
4.4. Enrich the debate in order to enrich the process and the result
The debate on academic ranking will grow in importance in the future, and
will not be limited to a simple evaluation-ranking dispute. At this point, four
questions arise:
─ What does excellence mean, and what is its impact on research and
teaching orientations and activities?
─ How important is the degree of diversity within globalization (not
being limited to the dualistic global/local debate)?
─ What are the differences and specificities within processes of
production, productivity and visibility?
─ Finally (under a transversal approach), a debate arises questioning
the quality of the data themselves and their adaptation to the diversity
of legal, financial and administrative structures of the bodies that form
“universities”.
References