Abstract: European universities are faced with numerous challenges caused by the
political aim to harmonise the different national university systems and new research
modes and by the claims of various stakeholders such as the industry and society. In many
countries universities are increasingly provided with greater autonomy regarding their
organisation, management, and budget allocation, which requires new management and
reporting systems. As organisations that are mainly funded by the public sector, they are
also faced with an increased demand for transparency and accountability. Universities are
knowledge producers per se; their most important output is knowledge embodied in new
research results, publications, and educated students. A university's most valuable
resources are its researchers and students, with their relations and organisational routines.
These resources can be interpreted as intangible assets, even though the term has so far not
been used within the context of universities. The instrument of intellectual capital reporting
and methods for valuing intangibles seem to be a rich foundation for the development of
the necessary new management and reporting systems. The Austrian Ministry of Education
has decided to study the potential of intellectual capital reporting and will introduce this
new instrument in the course of the forthcoming reorganisation of the Austrian universities.
The conceptual framework will be presented in this paper, embedded in the international
discussion of new management and valuation systems for universities.
1. Introduction
In most European countries universities are continuously faced with various new
challenges caused by political initiatives, an altered economy and society as well as new
research modes. A major challenge for universities, driven by European-wide political
activities, such as the Bologna declaration and its mission to create the European Higher
Education Area, is the harmonisation of the individual national university systems. The
aim of this declaration is to achieve greater compatibility and comparability of the higher
education sector. Measures such as the establishment of a system of credits as a means to
promote student mobility, the adoption of a system of comparable degrees as well as the
promotion of quality assurance are among the first initiatives (Salamanca 2001).
In recent years, universities in many countries have been provided with greater autonomy
regarding their organisation, management, and budget allocation. This has also raised
questions regarding their basic functions and tasks within society, which are, for instance,
expressed by discussions about Humboldt's ideal with its basic idea of the 'unity of
teaching and research’. Furthermore, there is a discussion of the ‘entrepreneurial
university’, an institution which is no longer an ‘ivory tower’ but directly linked to the
socio-economic demands of society and economy especially in the region where it is
embedded. Universities fulfil various tasks; these include carrying out basic research and
education, but more and more frequently also include applied research and development in
co-operation with industry or public agencies, thereby trying to provide solutions for the
immediate needs of society. These additional tasks are sometimes termed as a ‘third
mission’ (OECD 1999).
As organisations which are mainly financed by public funding, universities are also
confronted with an increased demand by their owners and citizens for transparency
regarding the use of those funds. This call for public accountability requires disclosure
of the social and economic outcomes of universities.
Furthermore, universities are constantly faced with new paradigms for knowledge
production, often labelled 'Mode 2' of knowledge production, which are characterised
by a stronger orientation towards applied research and the necessity for an interdisciplinary
research approach (Gibbons 1994). This implies the urge for universities to collaborate
more frequently with other research institutions, but also with private firms or public
organisations, or even to participate in international research networks. This development
has been clearly accelerated by information technologies, which enable more intensive
co-operation and also provide new techniques for efficient scientific analysis and modelling.
Universities are part of the science, education, and innovation system of a nation and
knowledge producers per se; their most important output is knowledge, embodied in
new research results, publications, and well-educated students. Obviously, the most
valuable resources of universities are thus their researchers and students, with their
relationship networks as well as their organisational routines. These resources can be
interpreted as intangible resources and assets, even though these terms have so far not been
used within the context of universities. Considering the wider definition of intangible assets
followed by most academics in the IC literature - one independent of the regulations of
accounting standards committees and national laws, such as the right of disposal -
intangible assets can doubtless be identified in universities.
Due to their legal status, most universities do not have owner structures like private firms.
Consequently, universities usually do not need to produce the kind of annual reports
required by commercial law. Nevertheless, they often have to implement financial
accounting systems, increasingly based on double-entry bookkeeping as a substitute for
the public accounting system.
All the above-mentioned challenges call for the implementation of new management and
reporting systems within universities. In contrast to the past, universities are now faced
with the task of allocating budgets and have to define their organisational goals and
strategies more explicitly; due to their broader autonomy and the extended competition
with other research organisations, universities gain new leeway for setting strategic
priorities. Whereas universities formerly carried out only scientific research and education,
today their research processes are far more manifold, ranging from basic research to the
development of technologies. They educate full-time students but also offer training
courses and provide labs for industrial applications. Nevertheless, university systems and
traditions vary a great deal throughout Europe, a fact which has to be taken into account
when dealing with the question of managing and measuring the knowledge production
process within universities.
The instrument of intellectual capital (IC) reporting and the general methods for valuing
intangible assets seem to be a rich foundation for the development of these new
management and reporting systems. Therefore, in 2001 the Austrian Ministry for
Education, Science and Culture initiated a study of the potential of IC reporting. In 2002
the Austrian Parliament finally decided that, in the course of the reorganisation of Austrian
universities, universities would be obliged to publish IC reports. In German, IC reports are
termed Wissensbilanz, which literally means knowledge account.
This paper first presents the general conceptual framework and background for the
implementation of IC reports. This concept is then discussed in the context of other
proposed management and valuation systems for universities. In universities, the
assessment of research, education, and knowledge-based processes has mainly been treated
within the concepts of evaluation. In addition, the application of performance management
systems, based on the principles of New Public Management, has been discussed and
partly implemented. The paper compares the approaches of evaluation, performance
management, and IC reporting and tries to point out the potential benefits and opportunities
of IC reporting for universities. The differences between valuing intangible assets in
universities, research organisations, and private firms will also be highlighted.
2. A new management and reporting system for universities based on the
principles of Intellectual Capital Reporting
2.1. The international development
The concept of IC reporting for universities is mostly based on experiences and methods
developed within industry. The idea of IC reporting was born in the Scandinavian countries
at the beginning of the nineties. Skandia, the Swedish insurance company, published the
first IC report in 1992 as a supplement to its annual report. In the following years, a new
movement towards the disclosure of information about the value and development of
intangibles attracted attention. In Europe and the US, companies have issued "accounts of
intellectual capital", which enable them to measure intangibles and their results. In
industrial applications, approaches have been established which record intangible assets
with the help of financial and non-financial indicators. Hereby, different forms of
intangible assets are differentiated and each asset is valued with the help of indicators, for
instance the length of product development, customer satisfaction, etc. (see for instance
Sveiby 1997 and Edvinsson 1997). Besides quantitative assessments, methods such as
qualitative valuations and narrations are also applied in these reports (Mouritsen et al.
1998).
The idea of IC reporting has not gained much attention within the research and science
sector. So far, only research technology organisations, which carry out applied research
and development, have published intellectual capital reports. In 1999 the Austrian Research
Centers Seibersdorf (ARC) was the first European organisation to publish an IC report for
the entire organisation, followed by the German DLR, which published its first IC report in
2000. Both reports are based on a similar conceptual framework, developed within ARC,
which at the same time allows the comparison of indicators between these two
organisations. The Institute of Optics and Fluid Dynamics of the Danish Risø research
organisation has also had experience with IC reporting since 1999.
In Germany, Maul (2000) dealt with the question of the explanatory power of financial
statements published by certain German universities in the course of the reorganisation in
some German Federal States. He argues that the traditional financial account does not
deliver sufficient information on the financial situation, earnings, and assets, which is the
main task of the annual account according to the German HGB (Handelsgesetzbuch), the
German accounting law. This is obviously due to the huge amount of intangible assets of
universities. Therefore, he claims that universities, which have to produce financial
statements, should integrate information about intangible assets within the management
report of the financial statements.
Based on the concepts and experiences with IC Reporting in general and with research
organisations in particular, a concept for the application of IC Reporting for Austrian
universities was developed for the Austrian Ministry for Education, Science and Culture
(see Leitner et al. 2001 and www.weltklasse-uni.at)1. The reorganisation of Austrian
universities revealed a high demand for such a new concept and consequently a demand for
the implementation of new management and reporting systems.
1 The research project was carried out by the ARC Seibersdorf research GmbH Systems Research and the
Montanuniversität Leoben, Institute for Economics and Business Management, and was funded by the
Austrian Ministry for Education, Science and Culture.
2.2. The role of IC reporting within the Austrian reorganisation of universities
The main task of the IC project was to develop a model for universities which meets the
specifics of their knowledge production process in the new organisational and legal context
of universities, formed by the national and European frameworks. The IC report also had to
be integrated in the newly created administration and governance system of the university
sector.
According to the new university law (see UOG 2002), the IC report has to be published in
parallel with the development of the performance contract and the performance report.
While the performance report only deals with the topics addressed within the performance
contract, the idea behind IC reports is to give universities the opportunity to report on their
full range of activities and to provide an instrument for internal management tasks and
decision-making. The IC report has a wider scope than the performance contract and
should allow the universities to inform about their whole nature, unrestricted by a few
measures. Both reports are generated by the university itself, and it is the Vice Chancellor
(Rektor) who is ultimately responsible for their proper implementation.
IC reporting for Austrian universities should thus fulfil two aims. First, it should provide
information for the management of intangible resources. Universities have to decide
whether and how much to invest in the training of scientists, with whom co-operation
should be fostered, which research programs should be emphasised, etc. The
implementation of an IC report requires the discussion of goals and strategies; the IC
model should support this discussion. With the IC report, which can be realised for
departments, institutes, and the whole university, goals can be operationalised and their
achievement monitored. Second, IC reports should provide external stakeholders with
information about the development and productive use of intellectual capital. Thus, the
Ministry obtains useful information for the allocation of its resources and the management
of research programs, but also for the definition of national science and education policy.
Besides the performance contract and IC reporting, evaluation is the third main instrument
for the governance and development of Austrian universities. In Austria, evaluations do not
have a long history and were only occasionally carried out in the past. In the course of the
reorganisation, evaluations are to be institutionalised for all universities. They have a clear
focus on in-depth assessments of the quality of research and education as well as the
research personnel, and are to be carried out every three to five years by external peers (see
also next chapter).
Considering this background and these guiding principles, the new university law finally defines:
“The Wissensbilanz explicitly has to show i) the aims related to society, the self-defined
strategy as well as the universities' strategy, ii) the intangible assets, separated into Human
Capital, Structural Capital, and Relational Capital, iii) the performance processes and output
measurements, in correspondence with the definitions within the performance contract” (UOG
2002). Every Austrian university has to produce the IC report annually by 30 April and
submit it to the Ministry. It is up to the university whether to also publish it for other stakeholders.
[Figure: The IC model for universities. Political and organisational goals guide intellectual
capital - human capital, structural capital, and relational capital - as the input to the
performance processes (research, education, training, commercialising of research,
knowledge transfer to the public, services, infrastructure), whose output is directed at
stakeholders such as the Ministry, students, industry, the public, and the science
community.]
The development of intellectual capital is guided by political goals, set by the Ministry,
which, in turn, are based on the Austrian Science and Education policy, but also by the
organisational goals defined by the universities themselves. Within this model intellectual
capital or intangible resources are interpreted as the input for the knowledge-production
process within universities. Three elements of intellectual capital are differentiated: human
capital, structural capital and relational capital. The model thus adopts the widely diffused
classification also proposed by the MERITUM group (MERITUM 2001). Human capital
comprises the researchers and non-scientific staff of the university. Structural capital
comprises the routines and processes, an often neglected infrastructure of universities.
Relational capital comprises the relationships and networks of the researchers as well as of
the entire organisation.
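The three-part classification above can be made concrete as a small data-structure sketch. This is purely illustrative: the class names, indicator names, and values below are hypothetical and not part of the Austrian model's official indicator set.

```python
from dataclasses import dataclass, field

# The three capital categories named in the text.
CATEGORIES = ("human_capital", "structural_capital", "relational_capital")

@dataclass
class Indicator:
    """A single IC indicator; most are non-financial by nature."""
    name: str
    category: str            # one of CATEGORIES
    value: float
    financial: bool = False

    def __post_init__(self):
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown category: {self.category}")

@dataclass
class ICReport:
    """A report groups indicators by capital category."""
    indicators: list = field(default_factory=list)

    def by_category(self, category):
        return [i for i in self.indicators if i.category == category]

# Illustrative, made-up indicators:
report = ICReport([
    Indicator("researchers (FTE)", "human_capital", 412.0),
    Indicator("co-operation agreements", "relational_capital", 57.0),
    Indicator("IT expenditure per staff (EUR)", "structural_capital", 1800.0,
              financial=True),
])
```

Grouping indicators this way mirrors the reporting categories prescribed by the law (human, structural, and relational capital) without implying any particular valuation method.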
The model differentiates six performance processes, which can be reduced or enlarged
during the implementation depending on the specific profile and type of university
(e.g. universities of art, business schools). Apart from scientific research and education,
which are the core of university activities, training, commercialising of research,
knowledge transfer to the public, services, and infrastructure services are distinguished. The
latter can be summarised as the third mission of the modern university (OECD 1999).
These elements are mainly captured with process and output measures.
In the category of impact, the achievements of the performance processes are assessed. The
different stakeholders of the universities addressed are the scientific community, students,
citizens, industry, etc. Naturally, these impacts are the most difficult elements to evaluate
with quantitative data.
As in the case of IC reporting by industrial firms, the different elements of the model will
be measured and assessed by financial and non-financial indicators, as well as by
qualitative information and valuations.
Within the project, a list of indicators was developed. Whereas some of them are obligatory
indicators that every university has to publish, others are optional and can be used
depending on the context and aims. The design and selection of indicators was based on
i) the set of measures used in the past within Austrian universities, ii) indicators proposed
within the intellectual capital literature, and iii) the findings of evaluation research. A list
of 200 indicators was proposed, of which 24 will be obligatory for universities2. Even
though most indicators are non-financial by nature, some indicators are financial figures.
As mentioned, the financial assessment of outcomes is the most difficult one. In the case of
commercialising, the amount of licensing income and the sale of spin-out firms are
possible answers to that problem.
Although scientific research and education are equally important for all universities,
universities also have the choice to set certain priorities regarding their performance
processes. These can be represented as the opposite ends of a continuum. On the one hand,
for example, they can focus purely on basic research; on the other hand, they can carry out
applied research. The latter is becoming increasingly important and can in turn deliver even
new fundamental research questions. Universities can co-operate with industry or public
agencies to solve concrete problems or commercialise their research findings via spin-offs.
Some universities may choose to educate only the best students and provide an elite
education, while others provide education for the masses. Universities will select the
indicators according to their specific goals and the relevant processes. Thus, the list of
indicators reflects the strategic priorities of a university.
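The selection logic described above, a fixed obligatory core plus optional indicators chosen according to a university's strategic profile, can be sketched as a simple filter. The catalogue entries here are invented for illustration; the actual obligatory set is the one defined in Leitner et al. (2001).

```python
# Hypothetical indicator catalogue: each entry records whether the
# indicator is obligatory and which performance processes it concerns.
CATALOGUE = {
    "publications per researcher": {"obligatory": True, "processes": {"research"}},
    "graduates per year": {"obligatory": True, "processes": {"education"}},
    "licensing income (EUR)": {"obligatory": False, "processes": {"commercialising"}},
    "public lectures held": {"obligatory": False, "processes": {"knowledge_transfer"}},
}

def select_indicators(priority_processes):
    """Return all obligatory indicators plus optional ones that match
    the university's strategic priorities."""
    return sorted(
        name for name, spec in CATALOGUE.items()
        if spec["obligatory"] or spec["processes"] & priority_processes
    )

# A commercialisation-oriented university picks up the licensing
# indicator; the obligatory core is always included.
chosen = select_indicators({"commercialising"})
```

The resulting list of chosen indicators is what, in the model, makes a university's strategic priorities visible to its stakeholders.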
Finally, within the project a rough guideline for the implementation was also developed.
The critical points for the implementation are the procedure and organisation of the
implementation within the university (bottom-up versus top-down), the question of the
participation of professors, administrators, research staff, etc., the co-ordination with other
internal and external reporting systems, as well as the successful use of the information
produced via the IC report and the establishment of feedback loops. This implementation
guideline will be further elaborated in 2003, when the new university law is executed
within Austrian universities.
within Austrian universities. So far in Austria only the Institute for Economics and
2
See Leitner et al. (2001) or http://www.weltklasse-uni.at/upload/attachments/170.pdf for the full list.
7
Business Management from the Montanuniversität Leoben, which had been also a partner
of the IC project and has a long experience in quality management, developed an IC report
based on the presented IC model (Biedermann et al. 2002).
While the US, the UK, and the Netherlands soon started to evaluate the research and
education provided by universities, in other European countries the use of these concepts
started quite late, and the application and extent of evaluations still varies considerably
across Europe. In most countries, evaluations are primarily carried out for institutes and
departments or specific research programs, seldom for the entire university.
In addition to the purpose of delivering information for the public authority, some authors
argue that evaluations should also deliver information for the general public and thus meet
the demand for social accountability (Richter 1991).
Finally, modern evaluations have the task of providing information for resource allocation.
In practice, evaluations are partly related to resource allocation decisions. While in the UK
evaluations are directly associated with government resource allocations, in other countries
such as the Netherlands evaluation reports are more strongly aligned with supporting
research groups and universities in their management tasks.
Evaluations are usually executed by external experts; in addition, internal evaluations have
also gained popularity, especially because they are better suited to enhancing learning
within the organisation. The Dutch evaluation approach, for instance, starts with the
preparation of an internal evaluation report, prepared by the institution itself
(self-evaluation), followed by an evaluation by external experts (peer group) (Richter 1991).
When evaluating research and education, the goals have to be defined explicitly. However,
goals of institutions and programs are often manifold, vague or even contradictory. A
crucial task is thus the definition of valuation criteria, which are related to the goals and
interest groups. Considering the specific aims, a major challenge for evaluations is the
common definition of the output – which is especially difficult in the case of education and
research, e.g. “What is a good education?” or “What are basic skills and learning
capabilities?” This definition is, in turn, again dependent on the national and political
context of education (see above).
The problems discussed above also lead to the debate as to whether evaluations should
generate recommendations. Rossi et al. (1988), for instance, argue that evaluation reports
should only deliver data, whereas the assessment should be done by the contracting
authority of the evaluation. Another variant is that the evaluator assesses on the basis of the
goals that have been officially articulated. Shadish et al. (1991) proposed a
"goal-free evaluation", which means that the evaluators have to work out the goals
considering the context of the entire program or institution. Finally, constructivist
approaches argue that an objective specification of the conditions and an objective
assessment can never be achieved (Guba and Lincoln 1998). Even the evaluators have
biased points of view and can thus never serve as an objective authority.
There is a rich literature on research evaluation and innovation theory which deals with the
scope of indicators. Important issues refer to the use of publications or patents to measure
research output or university-industry relationships (e.g. Dodgson and Hinze 2000,
Schartinger et al. 2000). The limitations of the indicator 'number of publications' are
widely recognised. The mere counting of publications is quite problematic, since quantity
can hardly ever be equated with the quality of publications, and because occasionally quite
subjective scientific paradigms and either very conservative or very progressive attitudes
of certain journals can restrict or determine the publication of innovative results (Kieser et
al. 1996).
Performance management is based on the transfer of ideas and methods successfully used
in the private sector. Some proponents even explicitly state that they desire to make the
public sector more like the private sector with respect to accountability (Ittner and Larcker
1998). Herein, the idea of output and outcome orientation is central. In terms of classical
accounting, performance management can be classified as a "management control system"
(Cunningham 2000).
Though it is criticised that the true complexity of programs and institutions is not captured
by performance management systems, proponents argue that these systems can free
agencies and staff from rigid regulations (Cunningham 2000). Furthermore, the definition
of performance measures is consistent with program or institutional goals and hence
supports the strategic development process at all levels of an institution. Hereby, one
important challenge for public organisations is the move from output to outcome indicators
(Cunningham 2000).
Critical factors for a successful implementation are top leadership support, the personal
involvement of senior management, the participation of the relevant stakeholders, and
clarity about how the performance information shall be used (Greene 1999). Cunningham
(2000) found that the commitment and involvement of all actors, as well as the integration
into the strategic planning process, is critical. The benefits arise from the process itself
rather than from the end result of the performance reporting activity. In particular, the
implementation of performance management systems requires communication about
performance areas that was not present before. Thus, it is the communication that leads to
benefits rather than the performance information itself.
Regarding the approach and focus of the instruments, evaluation is the scientifically
oriented approach with in-depth analysis, carried out at certain points in time, while
performance management systems are characterised by a pragmatic approach, primarily
based on output indicators derived from the goals. In practice, both are often implemented
simultaneously. IC reporting for universities, as presented in this paper, is a tool which
encompasses the entire knowledge production process within universities, with the aim of
generating information for management decisions. It is mainly based on indicators which
are categorised in a model. In addition, it also incorporates qualitative information, which
should especially express the complex nature of, and interdependencies between, driving
factors and results.
All three instruments apply different kinds of qualitative and quantitative methods.
Although evaluations rely more often on qualitative methods, in recent years they have
increasingly integrated different kinds of indicators. Indicators have sometimes already
become an integral part or method of certain evaluations, especially in the US, where data
about financial resources, student satisfaction, or academic reputation, partly gathered by
questionnaires, are aggregated via a standardised procedure (Kellermann 1992).
The classification of inputs, processes, and results is commonly used in all three
instruments. Stufflebeam (1983), for instance, proposes that evaluations have to analyse
the context (e.g. what aims are addressed by the program?), the inputs (e.g. human and
tangible resources), the process (the activities of the program or institution), and the
products (the results of the program). This is also applied by the IC model presented in this
paper.
|                 | performance management | external evaluation | intellectual capital reporting3 |
|-----------------|------------------------|---------------------|---------------------------------|
| focus           | measuring results via indicators | assessment of efficiency and effectiveness | intangible resources and their management |
| approach / methods | performance indicators gathered via information systems; mainly based on information systems & databases | scientifically oriented and in-depth; tailored to the specific goals; various qualitative and quantitative methods; often a qualitative report | indicator-based; obligatory indicators, but flexible for adaptation; indicators and narrations; report |
| data collection | continuous, based on information systems | periodically (3-5 years) or at discrete points | periodically, preferably based on information systems |
| support of shareholder / public authority | partly | major aim | information for policy decisions |
| recommendations | no | partly | no |
With respect to the measurement of results, the separation into output, outcome, and
impact, whereby the latter two are sometimes used synonymously, is frequently applied in
evaluations (Garrett-Jones 2000). In general, output refers to the routine products of
research activities, such as publications, conference papers, training courses, degrees etc.
Outcome means the achievements of the activity such as new theories, new devices or
analytical techniques. Outcome is not treated separately in the IC model. Impact is a
measure of the influence or benefit of the research outcome or output, either within the
research community itself or the wider society. It thus includes the social, economic or
environmental benefit. The assessment of impacts is explicitly treated in the IC model.
The aspects of learning and supporting management are addressed differently by the three
instruments. Performance management systems document outcomes but, to a lesser extent,
provide information to understand the complex nature of knowledge production. They
collect data on a continuous basis, which tends to rely on easy-to-obtain quantitative
measures conditioned by cost acceptance. In the field of evaluations, internal evaluations
have a strong focus on increasing performance and efficiency through internal discussions.
IC reports should also enhance the academic discussion within the faculty board (dean) and
the university's governing boards and should therefore support the strategic development
process.
3 As conceptualised for the implementation in Austrian universities.
to support this task. However, in Austria, the allocation of financial resources will be based
on the performance contract (see above). Within this procedure, the IC report supports the
management and decision process, either for the Ministry, or for the internal resource
allocation decisions between university departments, research programs, etc. In such a
case, data also have to be reported at the department level. However, the latter is not
demanded by the new university law and is the responsibility of the universities
themselves.
are no quantitative measures available in most IC approaches which would allow the flows
or synergies between the different elements to be monitored.
In general, in the field of research there are still concerns within the community regarding
the measurement of research outputs and the establishment of quantitative or financial
methods, which go hand in hand with the fear that curiosity, discovery, creativity, and
innovation will be restricted too much. However, this might be the wrong perception: it is
not about restraining the process itself, but about taking decisions on scarce resources in an
increasingly complex environment. IC reporting should serve as an instrument for taking
more strategic, efficient, and transparent decisions, taking different perspectives into
account simultaneously. This debate also has similarities with the discussion within the
knowledge management literature and the question as to whether one can manage
knowledge at all.
In industry there is a strong demand for the valuation of intangible assets by financial
figures, with solutions of mixed success offered so far. In the case of universities, this call
is not as loud, but it exists. The financial assessment of results is related to the estimation
of the economic effects of research and education. Market approaches are seldom
applicable for universities. Nevertheless, some indicators based on questionnaires, eliciting
information such as the salaries of former students, can be seen as such financial measures.
Based on the employment or profits generated by spin-out firms, an economic benefit
might also be estimated. However, this development is still in its early stages and has to be
treated with caution within universities.
In general, the valuation of IC indicators is dependent on the specific goals and the context
of the university or institute. This issue is also widely stressed in the IC literature (e.g.
Roberts 1999). IC reports can be developed for the entire university, for institutes,
departments and research programs. Clearly, this causes additional problems of
aggregation and comparison. An interesting question, for instance, is whether the
performance of a research-oriented institute can be valued and compared with that of a
more training-oriented institute. In practice, the interpretation of the reports depends on the
context. This becomes especially evident for education, since the meaning and value of
education is contingent on the cultural context. In the German Humboldtian tradition, for
instance, education is regarded as “scientific”; for the French, education is regarded as
“professional”, which also explains France’s phenomenon of the “grandes écoles”; finally,
the Anglo-American tradition stresses the “liberal” nature of education (Scott 2002). These
few examples already point out the necessity for IC reports to deliver contextual
information, but they also show the limits of comparability. This is thus a further
complicating aspect of IC reports of universities in contrast to IC reports of industrial
firms. While in the latter case all measures can ultimately be linked (explicitly) to the
financial performance of the firm, in the former the performance measurement is per se
multi-dimensional, expressing different aims and objectives.
Performance measurement instruments of this kind can also make strategies more concrete
and measurable, which might also hold true in the context of universities (Hoque and
James 1999).
Obviously, some problems might emerge from the instrument’s two aims, namely to serve
as a management instrument for the university itself and as a reporting instrument for the
Ministry. Institutes will not be willing to deliver information, and especially sensitive
information, if they fear that this will have negative consequences for their funding. The
problem of tactical behaviour in response to evaluations is well recognised in the
corresponding literature, especially if results are bound to financial allocation mechanisms
(Vroeijenstijn 1991, Blalock 1999).
As far as the structure of the IC model applied to the Austrian universities is concerned, it
differs from most classical IC models for industry, since it follows a process or
input-output logic and should thus be labelled a ‘process-oriented model’. Classical
models of IC reporting separate different elements of intellectual capital but seldom link
them with the production process of the organisation. An exception are IC reports based on
the EFQM model, such as the one the Danish company Ramboll applies for its IC
measurement (Ramboll 2001). In the model presented here, the development of the
intangible resources is explicitly embedded in the context of goals, processes and outputs.
However, like most other approaches to IC measurement, the presented model is not
directly compatible with, and cannot be incorporated into, the classical financial
accounting system.
What are possible development trajectories for the future? The presented functional model
for application in Austrian universities is a first step. It is flexible enough for individual
adaptations and adjustments and has the potential for further improvement and elaboration.
Furthermore, the definition of categories and indicators should make comparability
possible. Although the various national university systems still vary considerably, this
instrument, which remains open to modification and improvement,
addresses some general issues relevant for all universities, independent of their national
specifics. On the whole, there is so far no common model or standard for valuing
intangible assets and preparing IC reports for universities at the international level. In
many countries, national agencies have developed guidelines for the preparation of
evaluations and have partly also defined some indicators, which have to be linked to and
co-ordinated with the further development within the IC movement at universities. In
addition, work carried out at the national and international level regarding IC management
and measurement provides valuable information for the development of specific guidelines
for the application within universities (e.g. MERITUM for HEROs – Higher Education and
Research Organisations, currently in development).
There is also a demand to investigate further the synergies of the different approaches and
to link the three disciplines: evaluation of research, performance measurement and IC
theory. Furthermore, it is not only universities that can learn from the private sector and IC
theory; private research organisations and not-for-profit research organisations, as well as
the R&D departments of companies, can also benefit from the experiences of universities
with the management of research and human capital.
References
Austrian Research Centers Seibersdorf (2000): Intellectual Capital Report 1999.
Seibersdorf 2000.
Biedermann, H., Graggober, M., Sammer, M. (2002): Die Wissensbilanz als Instrument
zur Steuerung von Schwerpunktbereichen am Beispiel eines Universitätsinstitutes. In:
Wissensmanagement: Konzepte und Erfahrungsberichte aus der betrieblichen Praxis.
Bornemann, M., Sammer, M. (Eds.), Wiesbaden 2002, pp. 53-72.
Blalock, A.B. (1999): Evaluation Research and the Performance Management Movement,
Evaluation, 5, 2, 117-149.
Bormans, M.J., Brouwer, R., In’t Veld, R.J., Merten, F.J. (1987): The role of performance
indicators in improving the dialogue between government and universities, in:
International Journal of Institutional Management in Higher Education, 11, 181-194.
Broadbent, J., Guthrie, J. (1992): Changes in the Public sector: A review of recent
“alternative” accounting research, Accounting, Auditing & Accountability Journal, 5, 2, 3-
31.
Canibano, L., Covarsi, M., Sanchez, M.P. (1999): The value relevance and managerial
implications of intangibles: A literature review, International Symposium: Measuring and
Reporting Intellectual Capital: Experiences, Issues, and Prospects, OECD, 9-10 June,
Amsterdam.
Conceicao, P., Heitor, M. (1999): On the Role of the University in the Knowledge
Economy. Science and Public Policy, 26, 1, 37-51.
Cronbach, L.J., Ambron, S., Dornbusch, S.M., Hess, S.M., Hornik, R.C. et al. (1980):
Towards Reform of Program Evaluation, San Francisco.
Deutsches Zentrum für Luft- und Raumfahrt DLR (2001): Intellectual Capital Report 2000,
Köln 2001.
Dodgson, M., Hinze, S. (2000): Indicators used to measure the innovation process: defects
and possible remedies, Research Evaluation, 8, 2, 101-114.
Gibbons, M. (1994): The New Production of Knowledge, Pinter Publishers, London and
New York.
Ginsberg, P.E. (1984): The Dysfunctional Side of Quantitative Indicator Production. In:
Evaluation and Program Planning, 7, 1-12.
Guba, E.G., Lincoln, Y.S. (1989): Fourth generation evaluation, Newbury Park, CA.
Hoque, Z., James, W. (1999): Strategic Priorities, Balanced Scorecard Measures and their
Interaction with Organizational Effectiveness: An Empirical Investigation, British
Accounting Association, Annual Conference, March 29-31 1999, University of Glasgow.
Ittner, C.D., Larcker, D.F. (1998): Innovations in performance measurement: Trends and
research implications, Journal of Management Accounting Research, 10, 205-238.
Jaffe, A. (1989): Real effects of academic research. In: American Economic Review, 79,
957-970.
Kieser, A. (1998): Going Dutch – Was lehren niederländische Erfahrungen mit der
Evaluation universitärer Forschung. In: Die Betriebswirtschaft, 58, 2, 208-224.
Leitner, K.-H., Sammer, M., Graggober, M., Schartinger, D., Zielowski, C. (2001):
Wissensbilanzierung für Universitäten, Auftragsprojekt für das Bundesministerium für
Bildung, Wissenschaft und Kunst. 2001. URL: http://www.weltklasse-uni.ac.at.
Lenn, M.P. (1992): The US accreditation system, in: Craft, A. (Ed.): Quality Assurance in
Higher Education. Proceedings of an International Conference Hong Kong, 1991, London,
161-186.
Lundvall, B. (2002): The University in the Learning Economy, DRUID Working Paper No
02-06, Copenhagen.
Maul, K-H. (2000): Wissensbilanzen als Teil des handelsrechtlichen Jahresabschlusses. In:
Deutsches Steuerrecht, 38, 4, 2009-2016.
Mouritsen, J., Larsen, H.T., Bukh, P.N.D. (1998): Intellectual Capital and the ‘Capable
Firm’: Narrating, Visualising and Numbering for Managing Knowledge, Copenhagen
Business School and Aarhus School of Business 1998.
National Science Board (1998): Science and Engineering Indicators - 1998, National
Science Foundation, Arlington, VA.
Richter, R. (1991): Qualitätsevaluation von Lehre und Forschung an den Universitäten der
Niederlande. Eine Bilanz der letzten 10 Jahre, in: Weber, W., Otto, H. (Eds.): Der Ort der
Lehre in der Hochschule, Weinheim, 337-362.
Risø National Laboratory Optics and Fluid Dynamic Department (1999): Intellectual
Capital Accounts, Roskilde.
Rosenberg, N., Nelson, R.R. (1996): The Roles of Universities in the Advance of Industrial
Technology. In: Rosenbloom, R.S. and W.J. Spencer (Eds.): Engines of Innovation.
Cambridge: Harvard Business School Press.
Salamanca (2001): The European Higher Education Area. Joint declaration of the
European Ministers of Education. Convened in Bologna on the 19th of June 1999.
http://www.salamanca2001.org/
Scott, P. (2002): The future of general education in mass higher education systems, Higher
Education Policy 15, 61-75.
Schartinger D., Schibany A., Gassler H. (2001): Evidence of Interactive Relations Between
the Academic Sector and Industry. In: The Journal of Technology Transfer, 26, 3.
Schimank, U., Winnes, M. (2000): Beyond Humboldt? The relationship between teaching
and research in European university systems. In: Science and Public Policy, 27, 6, 397-
408.
Stufflebeam, D.L. (1983): The CIPP model for program evaluation, in: Madaus, G.F.,
Scriven, M.S., Stufflebeam, D.L. (Eds.): Evaluation models. Viewpoints on Education and
Human Service Evaluation, Boston, 117-142.
Stephan, P.E. (1996): The Economics of Science, Journal of Economic Literature, 34,
1199-1235.
Sveiby, K.E. (1997): The New Organizational Wealth: Managing and Measuring
Knowledge-Based Assets, Berrett-Koehler, San Francisco.
Turpin, T., Garrett-Jones, S., Aylward, D. et al. (1999): Valuing University Research:
International Experiences in Monitoring and Evaluation Research Outputs and Outcomes.
Australian Research Council and Centre for Research Policy, University of Wollongong,
May 1999.
Appendix: Exemplary indicators for Austrian universities
(obligatory indicators are labelled with an asterisk)
Human Capital
Number of scientific staff total*
Number of scientific staff total (employed)*
Number of full-time professors
Number of student assistants
Fluctuation of scientific staff (as % of all scientific staff)*
Fluctuation of scientific staff (not empl.) (as % of total scientific staff (not empl.))*
Growth of scientific staff (in%)*
Growth of scientific staff (not employed) (in%)*
Average length of service of scientific staff
Expenses for training
Structural Capital
Investments in library and electronic media*
Relational Capital
Research grants abroad (as % of scientific staff)*
International scientists at the university (total in months)*
Number of conferences attended*
Number of employees financed by non-institutional funds
Number of activities in committees etc.
Hit rate in EC research programs
New co-operation partners
Research
Publications (refereed)*
Publications (proceedings etc.)
Publications total*
Number of publications with co-authors from industry
Habilitation*
PhDs
Non-institutional funds (contract research etc.)
Education
Graduations
Average duration of studies
Teachers per student
Drop-out ratio
PhD and master theses completed
Commercialising
Number of spin-offs
Jobs created by spin-offs
Income generated from licences
Knowledge transfer to the public
Hits on internet site
Lectures (non-scientific)
Services
Measurement and lab services and expert opinions
Leasing of rooms and equipment