
Intellectual Capital Reporting for Universities: Conceptual background

and application within the reorganisation of Austrian universities

Dr. Karl-Heinz Leitner


Department of Technology Policy
Systems Research Technology-Economy-Environment
ARC Seibersdorf research GmbH
A-2444 Seibersdorf

Paper prepared for the Conference


“The Transparent Enterprise. The Value of Intangibles.”
Autonomous University of Madrid
Ministry of Economy
November 25-26, 2002, Madrid, Spain

Abstract: European universities are faced with numerous challenges caused by the
political aim to harmonise the different national university systems, by new modes of
research, and by the claims of various stakeholders such as industry and society. In many
countries universities are increasingly provided with greater autonomy regarding their
organisation, management, and budget allocation, which requires new management and
reporting systems. As organisations that are mainly funded by the public sector, they are
also faced with an increased demand for transparency and accountability. Universities are
knowledge producers per se; their most important output is knowledge, incorporated in new
research results, publications and educated students. A university’s most valuable
resources are its researchers and students with their relations and organisational routines.
These resources can be interpreted as intangible assets, even though the term has so far not
been used within the context of universities. The instrument of intellectual capital reporting
and methods for valuing intangibles seem to be a rich foundation for the development of
the necessary new management and reporting systems. The Austrian Ministry of Education
has decided to study the potential of intellectual capital reporting and will introduce this
new instrument in the course of the reorganisation of the Austrian universities in the future.
The conceptual framework will be presented in this paper, embedded in the international
discussion of new management and valuation systems for universities.

Key words: Research valuation, universities, evaluation, education, Intellectual Capital
Report, research management, science system, performance management, New Public
Management

1. Introduction
In most European countries universities are continuously faced with various new
challenges caused by political initiatives, an altered economy and society as well as new
research modes. A major challenge for universities, driven by European-wide political
activities, such as the Bologna declaration and its mission to create the European Higher
Education Area, is the harmonisation of the individual national university systems. The
aim of this declaration is to achieve greater compatibility and comparability of the higher
education sector. Measures such as the establishment of a system of credits as a means to
promote student mobility, the adoption of a system of comparable degrees as well as the
promotion of quality assurance are among the first initiatives (Salamanca 2001).

In recent years, universities in many countries have been provided with greater autonomy
regarding their organisation, management, and budget allocation. This has also raised
questions regarding their basic functions and tasks within society, expressed, for instance,
in discussions about Humboldt’s ideal with its basic idea of the ‘unity of
teaching and research’. Furthermore, there is a discussion of the ‘entrepreneurial
university’, an institution which is no longer an ‘ivory tower’ but directly linked to the
socio-economic demands of society and the economy, especially in the region where it is
embedded. Universities fulfil various tasks: these include carrying out basic research and
education, but more and more frequently also applied research and development in
co-operation with industry or public agencies, thereby trying to provide solutions for the
immediate needs of society. These additional tasks are sometimes termed a ‘third
mission’ (OECD 1999).

Moreover, the development of universities is influenced by the idea of New Public
Management. The two major principles of New Public Management are output and
performance orientation for public organisations, which, in turn, require new budget
allocation mechanisms and management systems. Management systems based on this idea
have already been successfully implemented in many areas of the public sector.

As organisations which are mainly financed by public funding, universities are also
confronted with an increased demand by the owners and citizens for transparency
regarding the use of those funds. This call for public accountability requires disclosure
of the social and economic outcomes of universities.

Furthermore, universities are constantly faced with new paradigms of knowledge
production, often labelled ‘Mode 2’ knowledge production, which are characterised
by a stronger orientation towards applied research and the necessity of an interdisciplinary
research approach (Gibbons 1994). This implies pressure on universities to collaborate
more frequently with other research institutions, but also with private firms or public
organisations, and even to participate in international research networks. This development
has clearly been accelerated by information technologies, which enable more intensive
co-operation and also provide efficient new techniques for scientific analysis and modelling.

Universities are part of the science, education, and innovation system of a nation and are
knowledge producers per se; their most important output is knowledge, incorporated in
new research results, publications and well-educated students. Obviously, the most
valuable resources of universities are thus their researchers and students with their
relationship networks as well as their organisational routines. These resources can be
interpreted as intangible resources and assets, even though the term has so far not been
used within the context of universities. Considering a wider definition of intangible assets -
independent of the regulations of accounting standards committees and national laws,
such as the right of disposal - as followed by most academics in the IC literature,
intangible assets can doubtless be identified in universities.

Due to their legal status, most universities do not have owner structures like private firms.
Consequently, universities usually do not need to produce the kind of annual reports
required by commercial law. Nevertheless, they often have to implement financial
accounting systems, increasingly based on double-entry bookkeeping as a substitute for
the public accounting system.

All the above-mentioned challenges call for the implementation of new management and
reporting systems within universities. In contrast to the past, universities now have to
allocate budgets themselves and define their organisational goals and strategies more
explicitly; due to their broader autonomy and the extended competition with other research
organisations, they also gain new leeway for setting strategic priorities. Whereas in former
times universities essentially carried out only scientific research and education, today their
research processes are much more manifold, ranging from basic research to the development
of technologies. They educate full-time students but also offer training courses and provide
labs for industrial applications. Nevertheless, university systems and traditions vary a great
deal throughout Europe, a fact which has to be taken into account when dealing with the
question of the management and measurement of the knowledge production process within
universities.

The instrument of intellectual capital (IC) reporting and the general methods for valuing
intangible assets seem to be a rich foundation for the development of these new
management and reporting systems. Therefore in 2001 the Austrian Ministry for
Education, Science and Culture initiated a study of the potential of IC reporting. In 2002
the Austrian Parliament finally decided that in the course of the reorganisation of Austrian
universities, universities would be obliged to publish IC reports. In German, IC
reports are termed Wissensbilanz, which literally means ‘knowledge account’.

The general conceptual framework and background for the implementation of IC reports
will be first presented in this paper. Then this concept will be discussed in the context of
other proposed management and valuation systems for universities. In universities the
assessment of research, education, and knowledge-based processes has been mainly treated
within the concepts of evaluation. In addition, the application of performance management
systems, based on the principles of New Public Management, has been discussed and
partly implemented. The paper compares the approach of evaluation, performance
management and IC reporting and tries to point out the potential benefits and opportunities
of IC reporting for universities. The differences between valuing intangible assets in
universities, research organisations, and private firms will also be highlighted.

2. A new management and reporting system for universities based on the
principles of Intellectual Capital Reporting
2.1. The international development
The concept of IC reporting for universities is mostly based on experiences and methods
developed within industry. The idea of IC reporting was born in the Scandinavian countries
at the beginning of the nineties. Skandia, the Swedish insurance company, published the
first IC report in 1992 as a supplement to its annual report. In the following years a new
movement towards the disclosure of information about the value and development of
intangibles attracted attention. In Europe and the US, companies have issued
“accounts of intellectual capital“, which enable them to measure intangibles and their
results. In industrial applications, approaches have been established which record
intangible assets with the help of financial and non-financial indicators. Hereby, different
forms of intangible assets are differentiated and each asset is valued with the help of
indicators, for instance the length of product development, customer satisfaction, etc. (see
for instance Sveiby 1997 and Edvinsson 1997). Besides quantitative assessments, methods
such as qualitative valuations and narrations are also applied in these reports (Mouritsen et
al. 1998).

So far, the idea of IC reporting has not gained much attention within the research and
science sector. Only Research Technology Organisations, which carry out applied research
and development, have published intellectual capital reports. In 1999 the Austrian Research
Centers Seibersdorf was the first European organisation to publish an IC report for the entire
organisation, followed by the German DLR, which published its first IC report in 2000.
Both reports are based on a similar conceptual framework, developed within ARC, which
at the same time allows the comparison of indicators between the two organisations. The
Optics and Fluid Dynamics Department of the Danish Risø research organisation has also
gained experience with IC reporting since 1999.

In Germany, Maul (2000) dealt with the question of the explanatory power of the financial
statements published by certain German universities in the course of the reorganisation in
some German Federal States. He argues that the traditional financial account does not
deliver sufficient information on the financial situation, earnings and assets, which is the
main task of the annual account according to the German HGB (Handelsgesetzbuch), the
German accounting law. This is obviously due to the huge amount of intangible assets of
universities. Therefore, he claims that universities, which have to produce financial
statements, should integrate information about intangible assets within the management
report of the financial statements.

Based on the concepts and experiences with IC Reporting in general and with research
organisations in particular, a concept for the application of IC Reporting for Austrian
universities was developed for the Austrian Ministry for Education, Science and Culture
(see Leitner et al. 2001 and www.weltklasse-uni.at)1. The reorganisation of Austrian
universities revealed a high demand for such a new concept and consequently a demand for
the implementation of new management and reporting systems.

1 The research project was carried out by ARC Seibersdorf research GmbH, Systems Research, and the
Montanuniversität Leoben, Institute for Economics and Business Management, and was funded by the
Austrian Ministry for Education, Science and Culture.

2.2. The role of IC reporting within the Austrian reorganisation of universities
The main task of the IC project was to develop a model for universities which meets the
specifics of their knowledge production process in the new organisational and legal context
of universities, formed by the national and European frameworks. The IC report also had to
be integrated in the newly created administration and governance system of the university
sector.

The reorganisation of Austrian universities is based on the principles of New Public
Management, with the premises of increased autonomy, output orientation and
performance-based funding (Titscher et al. 1999). Apart from new governance structures
and labour laws, an important element in implementing the transformation according to the
new paradigm is the introduction of ‘performance contracts’ (Leistungsvereinbarungen).
These contracts define the duties of both the universities (studies offered, human resources,
research programs, co-operations and social goals) and the Ministry (funding), and assign
a global budget for a duration of three years. After three years, new contracts are to be
drawn up. The funding, which is based on four criteria - need, demand, performance and
social goals - is agreed between the universities and the Ministry. It will be partly based on
performance indicators: up to 20% of the funds will be allocated depending on the
development of selected performance measures fixed within the performance contracts. In
addition, every year the universities have to generate a performance report, which provides
information about the development and achievement of the contract.
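To make the funding mechanism concrete, the following is a minimal sketch of how a performance-dependent budget share could be computed. The 20% share is taken from the text above; the indicator names, target values and the linear scoring rule are purely illustrative assumptions, not the actual allocation formula used by the Ministry.

# Illustrative sketch of a performance-dependent budget allocation.
# The 20% performance-based share is taken from the text; all indicator
# names, targets and the linear scoring rule are hypothetical assumptions.

PERFORMANCE_SHARE = 0.20  # up to 20% of the global budget

def performance_score(achieved: dict, targets: dict) -> float:
    """Average degree of target achievement across the agreed measures, capped at 1."""
    ratios = [min(achieved[name] / targets[name], 1.0) for name in targets]
    return sum(ratios) / len(ratios)

def allocate_budget(global_budget: float, achieved: dict, targets: dict) -> float:
    fixed_part = global_budget * (1 - PERFORMANCE_SHARE)
    performance_part = global_budget * PERFORMANCE_SHARE * performance_score(achieved, targets)
    return fixed_part + performance_part

if __name__ == "__main__":
    # Hypothetical measures fixed within a performance contract
    targets = {"graduations": 1200, "refereed_publications": 450}
    achieved = {"graduations": 1100, "refereed_publications": 480}
    print(round(allocate_budget(100_000_000, achieved, targets), 2))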

According to the new university law (see UOG 2002), the IC report has to be published in
parallel with the development of the performance contract and the performance report.
While the performance report only deals with the topics addressed within the performance
contract, the idea behind IC reports is to give universities the opportunity to report on their
full range of activities and to provide an instrument for internal management tasks and
decision-making. The IC report has a wider scope than the performance contract and
should allow the universities to report on their whole nature, unrestricted by a few
measures. Both reports are generated by the university itself and it is
the Vice Chancellor (Rektor) who is finally responsible for their proper implementation.

IC reporting for Austrian universities should thus fulfil two aims: First, it should provide
information for the management of intangible resources. Universities have to decide
whether and how much to invest in the training of scientists, with whom co-operation
should be fostered, which research programs should be emphasised, etc. The
implementation of an IC report requires a discussion of goals and strategies, and the IC
model should support this discussion. With the IC report, which can be realised for
departments, institutes and the whole university, goals can be operationalised and their
achievement monitored. Secondly, IC reports should provide external stakeholders with
information about the development and productive use of the intellectual capital. Thus, the
Ministry gets useful information for the allocation of its resources and the management of
research programs but also for the definition of the national science and education policy.

Besides the performance contract and IC reporting, evaluation is the third main instrument
for the governance and development of Austrian universities. In Austria, evaluations do not
have a long history and were only occasionally carried out in the past. In the course of the
reorganisation, evaluations are to be institutionalised for all universities. They have a clear
focus on in-depth assessments of the quality of research and education as well as of the
research personnel, and are to be carried out every three to five years by external peers (see
also the next chapter).

Considering this background and guiding principles, the new university law finally defines:
“The Wissensbilanz explicitly has to show i) the aim related to society, the self-defined
strategy as well as the universities’ strategy, ii) the intangible assets, separated in Human
Capital, Structural Capital, Relational Capital, iii) the performance processes and output
measurements, in correspondence to the definition within the performance contract” (UOG
2002). Every Austrian university has to produce the IC report annually by April 30th and
submit it to the Ministry. It is up to the university whether to also publish it for other stakeholders.

2.3. The IC model for Austrian universities


The conceptual framework for IC reporting is based on a model which tries to trace the
knowledge production process within universities (see Fig. 1). It consists of four main
elements: the goals, the intellectual capital, the performance processes and the impacts.
The model conceptualises how intangible resources are transformed when different activities
(research, education, etc.) are carried out, resulting in the production of different outputs
according to the specific and general goals.

[Figure: the model arranges intellectual capital (human capital, structural capital, relational
capital) as input, which is transformed - guided by political and organisational goals and
under given framework conditions - through the performance processes (research, education,
training, commercialising of research, knowledge transfer to the public, services,
infrastructure, etc.) into outputs and impacts on stakeholders such as the Ministry, students,
industry, the public and the science community.]

Fig. 1: Model for IC reporting of Austrian universities. Source: Leitner et al. (2001)

The development of intellectual capital is guided by political goals, set by the Ministry,
which, in turn, are based on the Austrian Science and Education policy, but also by the
organisational goals defined by the universities themselves. Within this model intellectual
capital or intangible resources are interpreted as the input for the knowledge-production
process within universities. Three elements of intellectual capital are differentiated: human
capital, structural capital and relational capital. The model thus adopts the widely diffused
classification also proposed by the MERITUM group (MERITUM 2001). Human capital
comprises the researchers and the non-scientific staff of the university. Structural capital
comprises the routines and processes, an often neglected infrastructure of universities.
Relational capital comprises the relationships and networks of the researchers as well as of
the entire organisation.

The model differentiates six performance processes which can be reduced or enlarged
during the implementation, depending on the specific profile and type of the university
(e.g. universities of art, business schools). Apart from scientific research and education,
which are the kernel of university activities, training, commercialising of research,
knowledge transfer to the public, services, and infrastructure services are distinguished. The
latter can be summarised as the third mission of the modern university (OECD 1999).
These elements are mainly captured with process and output measures.

In the category of impact, the achievements of the performance processes are assessed. The
different stakeholders addressed by the universities are the scientific community, students,
citizens, industry, etc. Naturally, impacts are the most difficult element to evaluate with
quantitative data.

As in the case of IC reporting by industrial firms the different elements of the model will
be measured and assessed by financial and non-financial indicators, as well as by
qualitative information and valuations.

Within the project a list of indicators was developed. Whereas some of them are obligatory
indicators which every university has to publish, others are optional and can be used
depending on the context and aims. The design and selection of indicators was based on i)
the set of measures used in the past within Austrian universities, ii) indicators proposed
within the intellectual capital literature, and iii) the findings of evaluation research. A list
of 200 indicators was proposed, of which 24 will be obligatory for universities2. Even
though most indicators are non-financial by nature, some indicators are financial figures.
As mentioned, the financial assessment of outcomes is the most difficult one. In the case of
commercialising, licensing income and the sale of spin-out firms are possible answers to
that problem.
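To illustrate how such an indicator set might be organised in practice, the following sketch models the structure described above: indicators grouped into the IC categories, performance processes and impact, each flagged as obligatory or optional. The indicator names are taken from the appendix; the data model itself is an illustrative assumption, not the official Austrian specification.

# Minimal sketch of a data structure for a university IC report following
# the model's categories; an illustrative assumption, not the official data model.
from dataclasses import dataclass, field

@dataclass
class Indicator:
    name: str
    category: str     # IC category (human/structural/relational capital),
                      # a performance process (e.g. "research"), or "impact"
    obligatory: bool  # obligatory indicators must be published by every university
    value: float
    unit: str = ""

@dataclass
class ICReport:
    university: str
    year: int
    indicators: list = field(default_factory=list)

    def obligatory_indicators(self) -> list:
        return [i for i in self.indicators if i.obligatory]

report = ICReport("Example University", 2003, [
    Indicator("Number of scientific staff total", "human capital", True, 850, "persons"),
    Indicator("Investments in library and electronic media", "structural capital", True, 1.2e6, "EUR"),
    Indicator("Publications (refereed)", "research", True, 430, "papers"),
    Indicator("Number of spin-offs", "commercialising", False, 3, "firms"),
])
print(len(report.obligatory_indicators()))  # -> 3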

Although scientific research and education are equally important for all universities,
universities also have the choice to set certain priorities regarding their performance
processes. This can be represented as opposite ends of a continuum: universities can, for
example, focus purely on basic research or, at the other end, carry out applied research.
The latter is becoming increasingly important and can in turn even deliver new
fundamental research questions. Universities can co-operate with industry or public
agencies to solve concrete problems or commercialise their research findings via spin-offs.
Universities may choose to educate only the best students and provide an elite education,
while others provide education for the masses.
to their specific goals and the relevant processes. Thus, the list of indicators reflects the
strategic priorities of a university.

Finally, within the project a rough guideline for the implementation was also developed.
The critical points for the implementation are the procedure and organisation of the
implementation within the university (bottom-up versus top-down), the question of the
participation of professors, administrators, research staff, etc., the co-ordination with other
internal and external reporting systems as well as the successful use of the information
produced via the IC report and the establishment of feed-back loops. This implementation
guideline will be further elaborated in 2003, when the new university law is implemented
within Austrian universities. So far, in Austria only the Institute for Economics and

Business Management at the Montanuniversität Leoben, which was also a partner in the IC
project and has long experience in quality management, has developed an IC report based
on the presented IC model (Biedermann et al. 2002).

2 See Leitner et al. (2001) or http://www.weltklasse-uni.at/upload/attachments/170.pdf for the full list.

3. Intellectual Capital reporting in the context of evaluation and performance management
3.1. The evaluation of research and education
Evaluations have the task of assessing the effects, impacts and efficiency of publicly
funded institutions and programs. Evaluations started in the US in the 1960s with the
assessment of different public programs (regional development, employment, health, etc.).
They are linked to the long debate about how to assess public spending and how to
estimate the social and economic effects of public programs and public research
expenditures. In the meantime, there is a considerable body of work that evaluates
university research outcomes on the level of projects, programs, disciplines and institutions.
In general, various approaches and concepts have been developed, which differ in the aims,
scope and output of the evaluations.

While the US, the UK, and the Netherlands soon started to evaluate the research and
education provided by universities, in other European countries the use of these concepts
started quite late, and the application and extent of evaluations still varies considerably
across Europe. In most countries, evaluations are primarily carried out for institutes and
departments or specific research programs, seldom for the entire university.

In general, evaluations should deliver information for public organisations and
administrations with the aim of solving or overcoming identified problems and weaknesses.
Evaluations are also seen as an instrument to promote reforms and self-organisation within
universities. However, in practice, they often fail to establish feed-back loops because,
frequently, the reports are not discussed intensively across the evaluated institution
(Blalock 1999).

In addition to the purpose of delivering information for the public authority, some authors
argue that evaluations should also deliver information for the general public and thus meet
the demand for social accountability (Richter 1991).

Finally, modern evaluations have the task of providing information for the resource
allocation. In practice, evaluations are partly related to resource allocation decisions. While
in the UK evaluations are directly associated with government resource allocations, in
other countries, such as the Netherlands, evaluation reports are to a greater extent aimed at
supporting research groups and universities in their management tasks.

Evaluations are based on different methods: quantitative tests, qualitative approaches,
observations, bibliometrics, self-assessments and benchmarking (Turpin et al. 1999).
Evaluations are faced with methodological problems, especially regarding validity and
credibility (Kieser et al. 1996). Despite the use of quantitative methods and data, most
evaluation reports are still qualitative by nature. Because of their complex nature and
diverse impacts, evaluations of public initiatives and programs are often restricted to
qualitative analyses.

Evaluations are usually carried out by external experts; in addition, internal evaluations have
also gained popularity, especially because they are better suited to enhancing learning within
the organisation. The Dutch evaluation approach, for instance, starts with the preparation
of the internal evaluation report, prepared by the institution (self-evaluation), followed by
an evaluation by external experts (peer-group) (Richter 1991).

When evaluating research and education, the goals have to be defined explicitly. However,
goals of institutions and programs are often manifold, vague or even contradictory. A
crucial task is thus the definition of valuation criteria, which are related to the goals and
interest groups. Considering the specific aims, a major challenge for evaluations is the
common definition of the output – which is especially difficult in the case of education and
research, e.g. “What is a good education?” or “What are basic skills and learning
capabilities?” This definition is, in turn, again dependent on the national and political
context of education (see above).

The problems discussed above also lead to the debate as to whether evaluations should
generate recommendations. Rossi et al. (1988), for instance, argue that evaluation reports
should only deliver data, whereas the assessment should be done by the contracting
authority of the evaluation. Another variant is that the evaluator assesses on the basis of the
goals that have been officially articulated. Shadish et al. (1991) proposed a “goal-free
evaluation”, which means that the evaluators have to work out the goals considering the
context of the entire program or institution. Finally, constructivist approaches argue that
an objective specification of the conditions and an objective assessment can never be
achieved (Guba and Lincoln 1989). Even the evaluators have biased points of view and can
thus never serve as an objective authority.

There is a rich literature in evaluation research and innovation theory which deals with
the scope of indicators. Important issues refer to the use of publications or patents to
measure research output or university-industry relationships (e.g. Dodgson and Hinze
2000, Schartinger et al. 2000). The limitation of the indicator ‘number of
publications’ is widely recognised. The mere counting of publications is quite problematic
since quantity can hardly ever be equated with the quality of publications, and because
occasionally quite subjective scientific paradigms and either the very conservative or very
progressive attitudes of certain journals can restrict or determine the publication of
innovative results (Kieser et al. 1996).

Regarding the organisation of and responsibility for evaluations, in many countries
national agencies have developed guidelines for the preparation of evaluations. In the US
and the UK, government legislation is the driving force for the systematic performance of
evaluation and the definition of measures and indicators (the GPRA in the US, the Office of
Science and Technology in the UK), whereas in the Netherlands the responsibility for
evaluation has been decentralised and delegated to intermediary organisations, such as
funding foundations or university associations. The above-mentioned Dutch model focuses
strongly on self-assessment, combined with a peer review approach.

3.2. The concept of performance management


In the 80s, especially in the Anglo-American countries, public organisations such as
hospitals, schools or public agencies started to implement performance management
systems, which are rooted in the idea of New Public Management and in the adoption
of ideas and methods successfully used in the private sector. Some proponents even
explicitly state that they desire to make the public sector more like the private sector with
respect to accountability (Ittner and Larcker 1998). Herein, the idea of output and outcome
orientation is central. In terms of classical accounting, performance management can be
classified as a “management control system” (Cunningham 2000).

Performance management is clearly a management and planning tool. Like evaluations,
performance management systems try to base judgements of the effectiveness of social
programs and public institutions on more appropriate and trustworthy information;
however, the approach is different.

Performance management is the process of defining goals and performance categories,
defining performance standards, and reporting and communicating to the public owners
(National Academy of Public Administration, cited in English and Lindquist 1998).
Thereby, initial objectives for programs and institutions have to be defined, for which, in
turn, relevant indicators have to be developed. This process allows the achievement of
objectives to be assessed. A crucial activity in designing performance management
systems is the development of performance measures, which include measures for outputs
and outcomes, sometimes also for inputs and processes. This leads to the necessity of
defining the outputs as well as the products. As discussed above, the definition and
measurement of the outputs of universities is difficult. If the inputs and outputs are
quantified by indicators, it is possible to link inputs and outputs and to develop efficiency
measures (Davies 1999). In addition, these indicators can be used for the allocation of
resources. In some European countries universities and ministries have started to use
contracts to allocate funds in accordance with performance indicators, a method which
is in line with the idea of performance management (Ziegele 2001).
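As a minimal sketch of this idea, the snippet below relates quantified outputs to quantified inputs in simple efficiency ratios; the indicator names and figures are invented for illustration and are not taken from any actual performance management system.

# Sketch of simple efficiency measures linking output to input indicators.
# Indicator names and figures are purely illustrative assumptions.

inputs = {
    "scientific_staff_fte": 620.0,   # full-time equivalents
    "total_budget_eur": 95_000_000.0,
}
outputs = {
    "refereed_publications": 430.0,
    "graduations": 1100.0,
}

def efficiency(output_name: str, input_name: str) -> float:
    """Output-per-input ratio, e.g. publications per staff FTE."""
    return outputs[output_name] / inputs[input_name]

print(f"Publications per FTE: {efficiency('refereed_publications', 'scientific_staff_fte'):.2f}")
print(f"Budget per graduation (EUR): {inputs['total_budget_eur'] / outputs['graduations']:.0f}")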

Basically, performance management systems use financial as well as non-financial
indicators with the main aim of defining the outcome, and they collect quantitative
measures, often based on IT systems or Management Information Systems. In practice,
however, they often produce simple statistics to answer critical planning and managerial
questions, instead of conducting more complex data analyses. Thus, they are often
criticised for using quick-turn-around data in internal automated Management Information
Systems that deliver relatively simple comparisons of gross outcomes against
performance goals (Blalock 1999).

Though it is criticised that performance management systems do not capture the true
complexity of programs and institutions, proponents argue that these systems can free
agencies and staff from rigid regulations (Cunningham 2000). Furthermore, the definition of
performance measures is consistent with program or institutional goals and hence supports
the strategic development process at all levels of an institution. Hereby, one important
challenge for public organisations is the shift from output to outcome indicators
(Cunningham 2000).

A widely recognised problem with performance management is ‘goal displacement’,
which occurs when performance management creates incentives that direct effort towards
meeting the requirements of measuring and reporting rather than towards the overall aims of
the institution (Davies 1999). This leads to a behaviour where people will work “first for
the indicators”. Empirical studies also stress the problem of “bean counting“ in practice
(Cunningham 2000). Obviously, this is a problem for quantitative evaluations, too.

Critical factors for a successful implementation are top leadership support, the personal
involvement of senior management, the participation of the relevant stakeholders and
clarity about how the performance information shall be used (Greene 1999). Cunningham
(2000) found that the commitment and involvement of all actors as well as the integration
into the strategic planning process is critical. The benefits arise from the process itself
rather than from the end result of the performance reporting activity. In particular, the
implementation of performance management systems requires communication about
performance areas that was not present before. Thus, it is the communication rather than
the performance information itself that leads to benefits.

3.3. IC reporting, evaluation and performance management at a glance


The three instruments presented have different aims and scopes of application, which
clearly overlap to some extent. In the following the three instruments will be discussed
according to their different features and aims (see Fig. 2 for an overview).

Regarding the approach and focus of the instruments, evaluation is the scientifically
oriented approach with in-depth analysis, carried out at certain points in time, while
performance management systems are characterised by a pragmatic approach, primarily
based on output indicators derived from the goals. In practice, both are often implemented
simultaneously. IC reporting for universities, as presented in this paper, is a tool which
encompasses the entire knowledge production process within universities, with the aim of
generating information for management decisions. It is mainly based on indicators which
are categorised within a model. In addition, it also incorporates qualitative information,
which should especially express the complex nature of, and interdependencies between,
driving factors and results.

In contrast to performance management systems and evaluation, IC reports focus explicitly
on the intellectual capital and hence enlarge the existing input and output categories of
performance management systems. Structural capital in particular has to be considered a
blind spot within universities. The model presented in this paper highlights the importance
of intangible resources for universities. IC reporting relies on quantitative as well as
qualitative valuations, whereas performance management systems often focus on a few
indicators.

All three instruments apply different kinds of qualitative and quantitative methods.
Although evaluations rely more often on qualitative methods, in recent years they have
increasingly integrated different kinds of indicators. Indicators have sometimes already
become an integral part or method of certain evaluations, especially in the US, where
data about financial resources, student satisfaction, or academic reputation, partly
gathered by questionnaires, are aggregated via a standardised procedure (Kellermann 1992).

The classification into inputs, processes and results is commonly used in all three
instruments. Stufflebeam (1983), for instance, proposes that evaluations have to analyse
the context (e.g. which aims are addressed by the program?), the inputs (e.g. human and
tangible resources), the process (the activities of the program or institution), and the
products (the results of the program). This scheme is also applied by the IC model presented
in this paper.

Focus - performance management: measuring results via indicators; external evaluation:
assessment of efficiency and effectiveness; IC reporting: intangible resources and their
management.

Approach / methods - performance management: performance indicators gathered via
information systems, mainly based on information systems and databases; external
evaluation: scientifically oriented and in-depth, tailored to the specific goals, various
qualitative and quantitative methods, often a qualitative report; IC reporting: indicator-based,
obligatory indicators but flexible for adaptation, indicators, narrations, report.

Data collection - performance management: continuous, based on information systems;
external evaluation: periodically (3-5 years) or at discrete points; IC reporting: periodically,
preferably based on information systems.

Support for shareholders / public authority - performance management: partly; external
evaluation: major aim; IC reporting: information for policy decisions.

Internal management support - performance management: delivers information for
evaluations; external evaluation: restricted; IC reporting: delivers information for
evaluations.

Resource allocation - performance management: yes, based on indicators; external
evaluation: yes, mainly in the US; IC reporting: possible.

Recommendations - performance management: no; external evaluation: partly; IC
reporting: no.

Responsibility - performance management: organisation (unit); external evaluation:
national agencies; IC reporting: organisation (unit).

Fig. 2: Comparison of evaluation, performance management and IC reporting (IC reporting
as conceptualised for the implementation in Austrian universities)

With respect to the measurement of results, the separation into output, outcome and
impact, whereby the latter two are sometimes used synonymously, is used frequently in
evaluations (Garrett-Jones 2000). In general, output refers to the routine products of
research activities, such as publications, conference papers, training courses, degrees etc.
Outcome means the achievements of the activity such as new theories, new devices or
analytical techniques. Outcome is not treated separately in the IC model. Impact is a
measure of the influence or benefit of the research outcome or output, either within the
research community itself or the wider society. It thus includes the social, economic or
environmental benefit. The assessment of impacts is explicitly treated in the IC model.

The aspects of learning and supporting management are addressed differently by the three
instruments. Performance management systems document outcomes, but to a lesser extent
provide information for understanding the complex nature of knowledge production. They
collect data on a continuous basis, which tends to rely on easy-to-obtain quantitative
measures conditioned by cost considerations. In the field of evaluation, internal evaluations
have a strong focus on increasing performance and efficiency through internal discussions.
IC reports
should also enhance the academic discussion within the faculty board (dean) and the
university’s governing boards and should therefore support the strategic development
process.

The linking of performance measurement to budgeting is a global phenomenon and is
frequently treated within performance management systems. IC reports have the potential
to support this task. However, in Austria, the allocation of financial resources will be based
on the performance contract (see above). Within this procedure, the IC report supports the
management and decision process, either for the Ministry, or for the internal resource
allocation decisions between university departments, research programs, etc. In such a
case, data also have to be reported at the department level. However, the latter is not
demanded by the new university law and is the responsibility of the universities
themselves.
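Where indicator data are also kept at department level, university-level figures can be obtained by straightforward aggregation, as the following sketch illustrates; the department names and figures are hypothetical, and simple summation is only appropriate for additive count indicators, not for ratios or averages.

# Sketch of aggregating department-level indicator values to university level.
# Department names and figures are hypothetical; summing is only valid for
# additive count indicators, not for ratios or averages.

departments = {
    "Physics":     {"refereed_publications": 120, "graduations": 180},
    "Economics":   {"refereed_publications": 85,  "graduations": 420},
    "Engineering": {"refereed_publications": 95,  "graduations": 310},
}

def aggregate(indicator: str) -> int:
    return sum(values[indicator] for values in departments.values())

for indicator in ("refereed_publications", "graduations"):
    print(f"{indicator} (university total): {aggregate(indicator)}")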

As regards the responsibility for implementing the instruments, traditional evaluations
are carried out by an external expert team which, based on different methods, generates
data, interprets it and writes a report. Internal evaluations are produced by the members of
the organisation itself. Performance management is implemented within the organisation
and administered by the top management or administration. IC reports are produced by the
members of the organisation, as is the case in industry. In the future, external authorities
such as auditors or Ministry agents will probably control the process and ensure the quality
of its production.

4. Discussion and Outlook


Considering the long history of assessing results of research and education, the idea of
measuring and reporting knowledge-based resources and results for application at
universities is not really brand new. However, IC reporting as conceptualised here focuses
on the identification of intangible assets and tries to link them to the outcome of the
universities, which is a new idea in the context of universities. The preparation of an IC
report requires the discussion and definition of a university’s aims and strategies, which, in
turn, stimulates learning inside the organisation. Regularly published IC reports based on
information systems deliver information for monitoring the development of the
organisation and support the university in developing a coherent strategic profile. They
foster the establishment of an organisational structure and culture within universities,
which is increasingly important for many universities in the new competitive
environment. When taking intangibles explicitly into account, the efficient use of resources
can be improved. However, aims, rationales and the knowledge production processes of
universities are considerably different in comparison to private industry. This phenomenon
has to be reflected by the specific indicators and aims incorporated in the IC model and
reports of universities.

Like knowledge-based enterprises, universities preparing IC reports are faced with the
methodological problem of measuring ‘soft’, non-physical processes and outputs.
There is a long history and a vast literature dealing with the measurement of scientific and
educational processes and their performance by indicators within evaluation research
and innovation theory. A common problem is that indicators may lack validity and may not
be the best proxy for outcome variables. In general, as one moves from outputs to impacts,
the results become broader in their effect, take time to manifest, are harder to
quantify and are less traceable to a particular research project, program or organisation
(Garrett-Jones 2000). The definition of the output of universities – a public good - is, as
already described, difficult. Due to the limitations of a purely indicator-based method, IC
reports should also integrate qualitative methods (best practice, narration, etc.), for
instance, to illustrate the connection between research and education and to show the
synergies and flows between the different elements of the model. However, most IC
approaches currently provide no quantitative measures which would allow the flows or
synergies between the different elements to be monitored.

In general, in the field of research there are still concerns within the community regarding
the measurement of research outputs and the establishment of quantitative or financial
methods, which go hand in hand with the fear that curiosity, discovery, creativity, and
innovation will be restricted too much. However, this may be a misperception: the aim is
not to restrain the research process itself, but to take decisions on scarce resources in an
increasingly complex environment. IC reporting should serve as an instrument for taking
more strategic, efficient and transparent decisions, taking into account different
perspectives simultaneously. This debate also has similarities with the discussion within
the knowledge management literature as to whether or not knowledge can be managed at all.

In industry there is a strong demand for the valuation of intangible assets by financial
figures, with the solutions offered so far meeting with mixed success. In the case of
universities, this call is not as loud, but it exists. The financial assessment of results is
related to the estimation of the economic effects of research and education. Market
approaches are seldom applicable for universities. Nevertheless, some indicators based on
questionnaires, eliciting information such as the salaries of former students, can be seen as
such financial measures. Based on the employment or profits generated by spin-off firms,
an economic benefit might also be estimated. However, this development is still in its early
stages and has to be treated with caution within universities.

In general, the valuation of IC indicators is dependent on the specific goals and the context
of the university or institute. This issue is also widely stressed in the IC literature (e.g.
Roberts 1999). IC reports can be developed for the entire university, for institutes,
departments and research programs. Clearly, this causes additional problems of
aggregation and comparison. An interesting question, for instance, is the possibility of
valuing and comparing the performance of a research-oriented institute with that of a more
training-oriented institute. In practice, the interpretation of the reports depends on the
context. This becomes especially evident for education, since the meaning and value of
education are contingent on the cultural context. In the German Humboldtian tradition, for
instance, education is referred to as “scientific”; for the French, education is regarded as
“professional”, which also explains France’s phenomenon of the “grandes écoles”; finally,
the Anglo-American tradition stresses the “liberal” nature of education (Scott 2002). These
few examples already point out the necessity for IC reports to deliver contextual
information, but also show the limits of comparability. This is thus a further complicating
aspect of IC reports of universities in contrast to IC reports of industrial firms. While in the
latter case all measures can eventually be linked (explicitly) to the financial performance
of the firm, in the former case the performance measurement is per se multi-dimensional,
expressing different aims and objectives.

If IC reports serve as a management instrument, the generated information has to be linked
to action and behaviour, and hence the establishment of learning loops is crucial. In
comparison to IC reporting in industry, the presented IC model has a stronger focus on the
management perspective. The general lack of accounting, management and control
instruments in universities thus has to be kept in mind in this context. Empirical studies
carried out in industry regarding the introduction of the Balanced Scorecard delivered
some evidence that indicator systems helped firms to make their corporate goals and
strategies more concrete and measurable, which might also hold true in the context of
universities (Hoque and James 1999).

Obviously, some problems might emerge from the two aims, namely to serve as a
management instrument for the university itself and also for the Ministry. Institutes will not
be willing to deliver information, and especially sensitive information, if they fear that this
will have negative consequences for their funding. The problem of tactical behaviour in
response to evaluations is well recognised in the corresponding literature, especially if
results are bound to financial allocation mechanisms (Vroeijenstijn 1992, Blalock 1999).

As far as the structure of the IC model applied to the Austrian universities is concerned, it
differs from most classical IC models for industry since it follows a process or input-output
logic and should thus be labelled a ‘process-oriented model’. Classical models of IC
reporting separate different elements of intellectual capital but seldom link them with the
production process of the organisation. An exception are IC reports based on the EFQM
model, such as the one the Danish company Ramboll applies for its IC measurement
(Ramboll 2001). In the present model, the development of the intangible resources is
explicitly embedded in the context of goals, processes and outputs. However, like most
other approaches to IC measurement, the presented model is not directly compatible with,
and cannot be incorporated into, the classical financial accounting system.

As described above, evaluation, performance management and IC reporting overlap to
some extent, but offer different solutions for the same problems. The latter two are partly
substitutes for evaluation because the organisation and its performance can be analysed by
internal and external stakeholders without the help of evaluators. The question of the
balance between internal self-assessment and external evaluation is a crucial one for
further development, and IC reporting has to be linked to this question. Within this context,
universities and ministries will have to develop their systems for governance, management,
and control, considering the integration efforts within Europe. But there is no doubt that,
with the variety of information provided for the stakeholders, IC reporting allows
comparability between different universities and enables quality assurance at universities,
something which is demanded in the course of the harmonisation of European university
systems, with its principles of transparency, performance and quality assessment and an
open dialogue with the community.

What are possible development trajectories for the future? The presented functional model
for application in Austrian universities is a first step. It is flexible enough for individual
adaptations and adjustments and has the potential for further improvement and elaboration.
Furthermore, the definition of categories and indicators should make comparability possible.
considerably, this instrument, which is flexible for modification and improvement,
addresses some general issues relevant for all universities, independent of their national
specifics. On the whole, there is so far no common model or standard for valuing
intangible assets and preparing IC reports for universities at the international level. In
many countries, national agencies have developed guidelines for the preparation of
evaluations and have partly also defined some indicators, which have to be linked to and
co-ordinated with the further development within the IC movement at universities. In
addition, work carried out at the national and international level regarding IC management
and measurement provides valuable information for the development of specific guidelines
for the application within universities (e.g. MERITUM for HEROs – Higher Education and
Research Organisations, currently in development).

There is also a need to further investigate the synergies between the different approaches
and to link the three disciplines: research evaluation, performance measurement and IC
theory. Furthermore, learning need not run only from the private sector and IC theory to
universities; private and not-for-profit research organisations as well as R&D departments
of companies can also benefit from the experiences of universities with the management of
research and human capital.

References
Austrian Research Centers Seibersdorf (2000): Intellectual Capital Report 1999.
Seibersdorf 2000.

Biedermann, H., Graggober, M., Sammer, M. (2002): Die Wissensbilanz als Instrument
zur Steuerung von Schwerpunktbereichen am Beispiel eines Universitätsinstitutes. In:
Wissensmanagement: Konzepte und Erfahrungsberichte aus der betrieblichen Praxis.
Bornemann, M., Sammer, M. (Edt.), Wiesbaden 2002, S. 53-72.

Blalock, A.B. (1999): Evaluation Research and the Performance Management Movement,
Evaluation, 5, 2, 117-149.

Bundesministerium für Bildung, Wissenschaft und Kultur - BM:BWK (2002):
Bundesgesetz über die Organisation der Universitäten und ihrer Studien
(Universitätsgesetz 2002), Wien. URL: http://www.weltklasse-uni.at/upload/attachments/150.pdf

Bonaccorsi, A., Piccaluga, A. (1994): A Theoretical Framework for the Evaluation of
University-Industry Relationships, R&D Management, 24, 3, 154-169.

Bormans, M.J., Brouwer, R., In’t Veld, R.J., Merten, F.J. (1987): The role of performance
indicators in improving the dialogue between government and universities, in:
International Journal of Institutional Management in Higher Education, 11, 181-194.

Broadbent, J., Guthrie, J. (1992): Changes in the Public Sector: A review of recent
“alternative” accounting research, Accounting, Auditing & Accountability Journal, 5, 2, 3-31.

Canibano, L., Covarsi, M., Sanchez, M.P. (1999): The value relevance and managerial
implications of intangibles: A literature review, International Symposium: Measuring and
Reporting Intellectual Capital: Experiences, Issues, and Prospects, OECD, 9-10 June,
Amsterdam.

Conceicao, P., Heitor, M. (1999): On the Role of the University in the Knowledge
Economy. Science and Public Policy, 26, 1, 37-51.

Cooper, R. et al. (1981): Accounting in organised anarchies: Understanding and designing
accounting systems in ambiguous situations, Accounting, Organizations and Society, 6, 3,
175-191.

Cronbach, L.J., Ambron, S., Dornbusch, S.M., Hess, S.M., Hornik, R.C. et al. (1980):
Towards Reform of Program Evaluation, San Francisco.

Cunningham, G. (2000): Towards A Theory of Performance Reporting in Achieving
Public Sector Accountability: A Field Study. Paper presented at the Annual Meeting of the
British Accounting Association 2000.

Davies, I.C. (1999): Evaluation and Performance Management in Government. In:
Evaluation, 5, 5, 150-159.

Deutsches Zentrum für Luft- und Raumfahrt DLR (2001): Intellectual Capital Report 2000,
Köln 2001.

Dodgson, M., Hinze, S. (2000): Indicators used to measure the innovation process: defects
and possible remedies, Research Evaluation, 8, 2, 101-114.

Edvinsson, L. (1997): Intellectual Capital, Skandia, Stockholm 1997.

English, J., Lindquist, E. (1998): Performance Management: Linking Results to Public
Debate, Ottawa.

Etzkowitz, H., Webster, A., Healey, P. (1998): Capitalizing Knowledge: New
Intersections of Industry and Academia, State University of New York Press, Albany.

Etzkowitz, H., Leydesdorff, L. (2000): The dynamics of innovation: from National
Systems and “Mode 2” to a Triple Helix of university-industry-government relations,
Research Policy, 29, 109-123.

European Foundation for Quality Management: URL: http://www.efqm.org/

Garrett-Jones, S. (2000): International trends in evaluating university research outcomes:
what lessons for Australia?

Gibbons, M. (1994): The New Production of Knowledge, Pinter Publishers, London and
New York.

Ginsberg, P.E. (1984): The Dysfunctional Side of Quantitative Indicator Production. In:
Evaluation and Program Planning, 7, 1-12.

Guba, E.G., Lincoln, Y.S. (1989): Fourth generation evaluation, Newbury Park, CA.

Hoque, Z., James, W. (1999): Strategic Priorities, Balanced Scorecard Measures and their
Interaction with Organizational Effectiveness: An Empirical Investigation, British
Accounting Association, Annual Conference, March 29-31 1999, University of Glasgow.

Ittner, C.D., Larcker, D.F. (1998): Innovations in performance measurement: Trends and
research implications, Journal of Management Accounting Research, 10, 205-238.

Jaffe, A. (1989): Real effects of academic research. In: American Economic Review, 79,
957-970.

Kieser, A. (1998): Going Dutch – Was lehren niederländische Erfahrungen mit der
Evaluation universitärer Forschung. In: Die Betriebswirtschaft, 58, 2, 208-224.

Leitner, K.-H., Sammer, M., Graggober, M., Schartinger, D., Zielowski, C. (2001):
Wissensbilanzierung für Universitäten, Auftragsprojekt für das Bundesministerium für
Bildung, Wissenschaft und Kunst. 2001. URL: http://www.weltklasse-uni.ac.at.

Lenn, M.P. (1992): The US accreditation system, in: Craft, A. (Edt.): Quality Assurance in
Higher Education. Proceedings of an International Conference Hong Kong, 1991, London,
161-186.

Lindgren, L. (2001): The Non-profit Sector Meets the Performance-management
Movement. In: Evaluation, 7, 3, 285-303.

Lundvall, B. (2002): The University in the Learning Economy, DRUID Working Paper No
02-06, Copenhagen

Maul, K-H. (2000): Wissensbilanzen als Teil des handelsrechtlichen Jahresabschlusses. In:
Deutsches Steuerrecht, 38, 4, 2009-2016.

MERITUM Project (2001): Guidelines for Managing and Reporting on Intangibles
(Intellectual Capital Report).

Mouritsen, J., Larsen, H.T., Bukh, P.N.D. (1998): Intellectual Capital and the ‘Capable
Firm’: Narrating, Visualising and Numbering for Managing Knowledge, Copenhagen
Business School and Aarhus School of Business 1998.

National Science Board (1998): Science and Engineering Indicators - 1998, National
Science Foundation, Arlington, VA.

OECD (1999): University Research in Transition, Paris 1999.

Ramboll (2001): Annual Report 2000, Copenhagen.

Richter, R. (1991): Qualitätsevaluation von Lehre und Forschung an den Universitäten der
Niederlande. Eine Bilanz der letzten 10 Jahre, in: Weber, W., Otto, H. (Edt.): Der Ort der
Lehre in der Hochschule, Weinheim, 337-362.

Rossi, P., Freeman, H.E., Hofmann, G. (1988): Programm-Evaluation. Einführung in die
Methoden angewandter Sozialforschung, Stuttgart.

Risø National Laboratory Optics and Fluid Dynamic Department (1999): Intellectual
Capital Accounts, Roskilde.

Roberts, H. (1999): The Control of Intangibles in the Knowledge-intensive Firm. Paper
presented at the 22nd Annual Conference of the European Accounting Association,
Bordeaux, 1999.

Rosenberg, N., Nelson, R.R. (1996): The Roles of Universities in the Advance of Industrial
Technology. In: Rosenbloom, R.S. and W.J. Spencer (Edt.): Engines of Innovation.
Cambridge: Harvard Business School Press.

Salamanca (2001): The European Higher Education Area. Joint declaration of the
European Ministers of Education. Convened in Bologna on the 19th of June 1999.
http://www.salamanca2001.org/

Scott, P. (2002): The future of general education in mass higher education systems, Higher
Education Policy 15, 61-75.

Scriven, M. (1993): Hard-won lessons in program evaluation, San Francisco, CA

Schartinger D., Schibany A., Gassler H. (2001): Evidence of Interactive Relations Between
the Academic Sector and Industry. In: The Journal of Technology Transfer, 26, 3.

Schenker-Wicki, A. (1996): Evaluation von Hochschulleistungen, Leistungsindikatoren
und Performance Measurement. Wiesbaden 1996.

Schimanek, U., Winnes, M. (2000): Beyond Humboldt? The relationship between teaching
and research in European university systems. In: Science and Public Policy, 27, 6, 397-
408.

Stufflebeam, D.L. (1983): The CIPP model for program evaluation, in: Madaus, G.F.,
Scriven, M.S., Stufflebeam, D.L. (Edt.): Evaluation models. Viewpoints on Education and
Human Service Evaluation, Boston, 117-142.

Stephan, P.E. (1996): The Economics of Science, Journal of Economic Literature, 34,
1199-1235.

Sveiby, K.E. (1997): The New Organizational Wealth: Managing and Measuring
Knowledge-Based Assets, Berrett-Koehler, San Francisco.

Titscher, S. et al. (Edt.) (2000): Universitäten im Wettbewerb. München 2000.

Turpin, T., Garrett-Jones, S., Aylward, D. et al. (1999): Valuing University Research:
International Experiences in Monitoring and Evaluation Research Outputs and Outcomes.
Australian Research Council and Centre for Research Policy, University of Wollongong,
May 1999.

Vroeijenstijn, T. (1992): External quality assessment: Servant of two masters? The
Netherlands university perspective, in: Craft, A. (Edt.): Quality Assurance in Higher
Education: Proceedings of an International Conference, Hong Kong, 1991, London,
109-132.

Ziegele, F. (2001): Akademisches Controlling: Theoretische Grundlagen, Ziele, Inhalte
und Ergebnisse. In: Akademisches Controlling und hochschulinterne Zielvereinbarung,
Kooperationsprojekt der Technischen Universität München und des CHE Centrum für
Hochschulentwicklung, München, Gütersloh, Jänner 2001.

Appendix: Exemplary indicators for Austrian universities
(obligatory indicators are labelled with an asterisk)

Human Capital
Number of scientific staff total*
Number of scientific staff total (employed)*
Number of full-time professors
Number of student assistants
Fluctuation of scientific staff (as % of all scientific staff)*
Fluctuation of scientific staff (not empl.) (as % of total scientific staff (not empl.))*
Growth of scientific staff (in%)*
Growth of scientific staff (not employed) (in%)*
Average duration of scientific staff
Expenses for training
Structural Capital
Investments in library and electronic media*
Relational Capital
Research grants abroad (as % of scientific staff)*
International scientists at the university (total in months)*
Number of conferences visited*
Number of employees financed by non-institutional funds
Number of activities in committees etc.
Hit rate EC research programs
New co-operation partners
Research
Publications (refereed)*
Publications (proceedings etc.)
Publications total*
Number of publications with co-authors from industry
Habilitation*
PhDs
Non-institutional funds (contract research etc.)
Education
Graduations
Average duration of studies
Teachers per student
Drop-out-ratio
PhDs and master theses finalised
Commercialising
Number of spin-offs
Jobs created by spin-offs
Income generated from licences
Knowledge transfer to the public
Hits on internet site
Lectures (non-scientific)
Services
Measurement and lab services and expert opinions
Leasing of rooms and equipment
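Several of the obligatory human capital indicators listed above are simple ratios. As a minimal sketch under assumed figures, they could be derived from raw staff counts as follows; the numbers are invented and the official calculation rules may differ.

# Sketch of deriving two obligatory human capital indicators from raw counts.
# All figures are invented; the official calculation rules may differ.

staff_current_year = 850    # scientific staff, current reporting year
staff_previous_year = 820   # scientific staff, previous year
staff_leavers = 60          # scientific staff who left during the year

fluctuation_pct = 100 * staff_leavers / staff_current_year
growth_pct = 100 * (staff_current_year - staff_previous_year) / staff_previous_year

print(f"Fluctuation of scientific staff: {fluctuation_pct:.1f}%")  # 7.1%
print(f"Growth of scientific staff: {growth_pct:.1f}%")            # 3.7%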
