The Capitalization of Knowledge
A Triple Helix of University–Industry–Government

Edited by

Riccardo Viale
Fondazione Rosselli, Turin, Italy

Henry Etzkowitz
Stanford University, H-STAR, the Human-Sciences and
Technologies Advanced Research Institute, USA and the
University of Edinburgh Business School, Centre for
Entrepreneurship Research, UK

Edward Elgar
Cheltenham, UK • Northampton, MA, USA
© The Editors and Contributors Severally 2010

All rights reserved. No part of this publication may be reproduced, stored in a
retrieval system or transmitted in any form or by any means, electronic,
mechanical or photocopying, recording, or otherwise without the prior
permission of the publisher.

Published by
Edward Elgar Publishing Limited
The Lypiatts
15 Lansdown Road
Cheltenham
Glos GL50 2JA
UK

Edward Elgar Publishing, Inc.
William Pratt House
9 Dewey Court
Northampton
Massachusetts 01060
USA

A catalogue record for this book is available from the British Library

Library of Congress Control Number: 2009941141

ISBN 978 1 84844 114 9

Printed and bound by MPG Books Group, UK


Contents
List of contributors vii
Acknowledgements ix
Abbreviations x

Introduction: anti-cyclic triple helix 1
Riccardo Viale and Henry Etzkowitz

PART I HOW TO CAPITALIZE KNOWLEDGE

1 Knowledge-driven capitalization of knowledge 31
Riccardo Viale
2 ‘Only connect’: academic–business research collaborations and
the formation of ecologies of innovation 74
Paul A. David and J. Stanley Metcalfe
3 Venture capitalism as a mechanism for knowledge governance 98
Cristiano Antonelli and Morris Teubal
4 How much should society fuel the greed of innovators? On
the relations between appropriability, opportunities and rates
of innovation 121
Giovanni Dosi, Luigi Marengo and Corrado Pasquali
5 Global bioregions: knowledge domains, capabilities and
innovation system networks 143
Philip Cooke
6 Proprietary versus public domain licensing of software and
research products 167
Alfonso Gambardella and Bronwyn H. Hall

PART II TRIPLE HELIX IN THE KNOWLEDGE ECONOMY

7 A company of their own: entrepreneurial scientists and the
capitalization of knowledge 201
Henry Etzkowitz
8 Multi-level perspectives: a comparative analysis of national
R&D policies 218
Caroline Lanciano-Morandat and Eric Verdier

9 The role of boundary organizations in maintaining separation
in the triple helix 243
Sally Davenport and Shirley Leitch
10 The knowledge economy: Fritz Machlup’s construction of a
synthetic concept 261
Benoît Godin
11 Measuring the knowledge base of an economy in terms of
triple-helix relations 291
Loet Leydesdorff, Wilfred Dolfsma and Gerben Van der Panne
12 Knowledge networks: integration mechanisms and
performance assessment 312
Matilde Luna and José Luis Velasco

Index 335
Contributors
Cristiano Antonelli, Dipartimento di Economia S. Cognetti de Martiis,
Università di Torino, Italy and BRICK (Bureau of Research in Innovation,
Complexity and Knowledge), Collegio Carlo Alberto, Italy.
Philip Cooke, Centre for Advanced Studies, Cardiff University, Wales,
UK.
Sally Davenport, Victoria Management School, Victoria University of
Wellington, New Zealand.
Paul A. David, Department of Economics, Stanford University, CA,
USA.
Wilfred Dolfsma, School of Economics and Business, University of
Groningen, The Netherlands.
Giovanni Dosi, Laboratory of Economics and Management, Sant’Anna
School of Advanced Studies, Pisa, Italy.
Henry Etzkowitz, Stanford University, H-STAR, the Human-Sciences
and Technologies Advanced Research Institute, USA and the University
of Edinburgh Business School, Centre for Entrepreneurship Research,
UK.
Alfonso Gambardella, Department of Management, Università Luigi
Bocconi, Milano, Italy.
Benoît Godin, Institut National de la Recherche Scientifique, Montréal,
Québec, Canada.
Bronwyn H. Hall, Department of Economics, University of California at
Berkeley, Berkeley, CA, USA.
Caroline Lanciano-Morandat, Laboratoire d’économie et de sociologie
du travail (LEST), CNRS, Université de la Méditerranée et Université de
Provence, Aix-en-Provence, France.
Shirley Leitch, Swinburne University of Technology, Melbourne,
Australia.
Loet Leydesdorff, Amsterdam School of Communications Research,
University of Amsterdam, The Netherlands.
Matilde Luna, Instituto de Investigaciones Sociales, Universidad Nacional
Autónoma de México.
Luigi Marengo, Sant’Anna School of Advanced Studies, Pisa, Italy.
J. Stanley Metcalfe, ESRC-CRIC, University of Manchester, UK.
Corrado Pasquali, Università degli Studi di Teramo, Italy.
Morris Teubal, Department of Economics, The Hebrew University,
Jerusalem, Israel.
Gerben Van der Panne, Economics of Innovation, Delft University of
Technology, Delft, The Netherlands.
José Luis Velasco, Instituto de Investigaciones Sociales, Universidad
Nacional Autónoma de México.
Eric Verdier, Laboratoire d’économie et de sociologie du travail (LEST),
CNRS, Université de la Méditerranée et Université de Provence,
Aix-en-Provence, France.
Riccardo Viale, Fondazione Rosselli, Turin, Italy.
Acknowledgements
The publishers wish to thank the following, who have kindly given permis-
sion for the use of copyright material.
Elsevier for articles by Alfonso Gambardella and Bronwyn H. Hall
(2006), ‘Proprietary versus public domain licensing of software and
research products’, in Research Policy, 35 (6), 875–92; Loet Leydesdorff,
Wilfred Dolfsma and Gerben Van der Panne (2006), ‘Measuring the
knowledge base of an economy in terms of triple-helix relations among
“technology, organization, and territory”’ in Research Policy, 35 (2),
181–99; and G. Dosi, L. Marengo and C. Pasquali (2006), ‘How much
should society fuel the greed of innovators? On the relations between
appropriability, opportunities and rates of innovation’, in Research Policy,
35 (8), 1110–21.
Every effort has been made to trace all the copyright holders, but if any
have been inadvertently overlooked the publishers will be pleased to make
the necessary arrangements at the first opportunity.
Some of the chapters in this book are completely new versions of the
papers presented at the 5th Triple Helix Conference held in Turin in 2005
and organized by Fondazione Rosselli. Other chapters have been written
especially for this book or are a new version of already published papers.

Abbreviations
ACRI Association of Crown Research Institutes (New Zealand)
AIM Alternative Investment Market (UK)
CADs Complex Adaptive Systems
CAFC Court of Appeals for the Federal Circuit (USA)
CEO chief executive officer
CNRS National Centre for Scientific Research (France)
CRI Crown Research Institute (New Zealand)
DARPA Defense Advanced Research Projects Agency (USA)
DBF dedicated biotechnological firms
EPO European Patent Office
ERISA Employee Retirement Income Security Act (USA)
FDI foreign direct investment
GDP gross domestic product
GM genetic modification
GMO genetically modified organism
GNF Genomics Institute of the Novartis Research Foundation
(USA)
GPL General Public License
HEI higher education institution
HMS Harvard Medical School
HR human resources
ICND Institute for Childhood and Neglected Diseases (USA)
ICT information and communication technology
INRIA National Institute for Research in Computer Science and Automation (France)
IP intellectual property
IPO initial public offering
IPR intellectual property rights
JCSG Joint Center for Structural Genomics (USA)
KIS knowledge-intensive services
LGPL Lesser General Public License
LSN Life Sciences Network (New Zealand)
MIT Massachusetts Institute of Technology
NACE Nomenclature générale des Activités économiques dans les
Communautés européennes
NASDAQ National Association of Securities Dealers Automated Quotations
NIBR Novartis Institutes for Biomedical Research (USA)
NIH National Institutes of Health (USA)
NSF National Science Foundation (USA)
NUTS Nomenclature des Unités Territoriales Statistiques
NYU New York University
OECD Organisation for Economic Co-operation and
Development
OTC over the counter
PD public domain
PLACE proprietary, local, authoritarian, commissioned, expert
PoP professor of practice
PR proprietary research
PRO public research organization
R&D research and development
RCGM Royal Commission on Genetic Modification (New Zealand)
RDI research, development and innovation
RIO regional innovation organizer
RoP researcher of practice
RSNZ Royal Society of New Zealand
SHIPS Strategic, founded on Hybrid and interdisciplinary
communities, able to stimulate Innovative critique and
should be Public and based on Scepticism
SMEs small and medium-sized enterprises
TIP Technology Innovation Program (USA)
TTO technology transfer office
UCSD University of California San Diego
UCSF University of California San Francisco
VCE virtual centre of excellence
WYSIWYG what you see is what you get (IT)
Introduction: anti-cyclic triple helix
Riccardo Viale and Henry Etzkowitz

THE TRIPLE HELIX IN ECONOMIC CYCLES

The year 2009 may have represented a turning point for research and
innovation policy in Western countries, with apparently contradictory
effects. Many traditional sources of financing have dried up, although
some new ones have emerged, for example as a result of the US stimu-
lus package. Manufacturing companies are cutting their R&D budgets
because of the drop in demand. Universities saw their endowments fall
by 25 per cent or more because of the collapse in financial markets.
Harvard interrupted the construction of its new science campus, while
Newcastle University speeded up its building projects in response to the
economic crisis. Risk capital is becoming increasingly prudent because
of the increased risk of capital loss (according to the International
Monetary Fund, the ratio between bank regulatory capital and risk-
weighted assets increased on average by between 0.1 and 0.4 for the main
OECD countries during 2009) while sovereign funds, like Norway’s,
took advantage of the downturn to increase their investments. According
to the National Venture Capital Association, American venture capital
shrank from US$7.1 billion in the first quarter of 2008 to US$4.3
billion in the first quarter of 2009 (New York Times, 13 April 2009).
Many of the pension funds, endowments and foundations that invested
in venture capital firms have signalled that they are cutting back on the
asset class. The slowdown is attributable in part to venture capitalists
and their investors taking a wait-and-see approach until the economy
improves.
The outlook for R&D looks poor unless a ‘white knight’ comes
to its rescue. This help may come from an actor whose role was down-
played in recent years, but that now, particularly in the USA, seems to
be in the ascendant again. It is the national and regional government
that will have to play the role of the white knight to save the R&D
system in Western economies (Etzkowitz and Ranga, 2009). In the previ-
ous 20 years the proportion of public financing had gradually fallen in
percentage terms, while the private sector had become largely dominant
(the percentage of Gross Domestic Expenditure in R&D financed by
industry now exceeds 64 per cent in OECD countries). In some technolog-
ical sectors, such as biotechnology, the interaction between academy and
industry has become increasingly autonomous from public intervention.
University and corporate labs established their own agreements, created
their own joint projects and laboratories, exchanged human resources
and promoted the birth of spin-off and spin-in companies without signifi-
cant help from local and national bodies. Cambridge University biotech
initiatives or University of California at San Diego relations with biotech
companies are just some of many examples of double-helix models of
innovation. In other countries and in other technological sectors the
double-helix model didn’t work and needed the support of the public
helix. Some European countries, like France, Germany and Italy, saw a
positive intervention of public institutions. In France, Sophia Antipolis
was set up with national and regional public support. In Italy, support
from Piedmont regional government to the Politecnico of Turin allowed
the development of an incubator that has hosted more than 100 spin-off
companies.
In sectors such as green technologies, aerospace, security and energy,
public intervention to support the academy–industry relationship is
unavoidable. Silicon Valley venture capitalists invested heavily in renew-
able energy technology in the upturn, and then looked to government
to provide funding to their firms and rescue their investments once the
downturn took hold. In emerging and Third World economies, the role of
the public helix in supporting innovation is also unavoidable. In the least
developed countries industry is weak, universities are primarily teach-
ing institutions and government is heavily dependent upon international
donors to carry out projects. In newly developed countries the universities
are developing research and entrepreneurship activities and industry is
taking steps to promote research, often in collaboration with the universi-
ties, while government plays a creative role in developing a venture capital
industry and in offering incentives to industry to support research through
tax breaks and grants.
The novelty of the current crisis is that the public helix becomes crucial
even in countries and in sectors where the visible public role was minimal
in the past. The Advanced Technology Program, the US answer to the
European Framework Programmes, shrank to virtual inactivity with zero
appropriations under the Bush Administration but has found a second
life under the Obama Administration and has been renamed the TIP (the
Technology Innovation Program).
The triple-helix model seems to play an anti-cyclic role in innovation.
It is a default model that guarantees optimal or quasi-optimal levels of
academy–industry interaction through public intervention. It expresses
its potential when the interaction is not autonomous, as is now the case in
times of crisis, and the collaboration between universities and companies
calls for financial support and organizational management. It works as a
‘nudge tool’ (Thaler and Sunstein, 2008), whose aim is to maintain a suf-
ficient flow of innovation through the right incentives and institutional
mechanisms for academy–industry collaboration.
In this book we will examine various models for the capitalization of
knowledge and attempt to discern the features of the new relationship that
is emerging between the state, universities and industry. Are they converg-
ing in a certain way across different sociopolitical cultures and political
institutions?
Which of the key groups (scientists, politicians, civil servants, agency
officials, industrialists, lobby groups, social movements and organized
publics) are emerging as relevant players in the science and technology
(S&T) policy arenas? What strategies are they pursuing, how divergent are
those strategies, and at what levels of the policy-making process do they
take part?
What are the ‘appropriate’ policies that respond to these changes? Do
they call for a radical paradigm-like shift from previously established
research policy?
What degrees of freedom and autonomy can universities gain within the
new triangular dynamics? Are the new patterns of interaction among those
sectors designing a new mode of knowledge production? How are such
changes altering the structure and operations of the knowledge-producing
organizations inside these sectors?

POLYVALENT KNOWLEDGE: THREATS AND BENEFITS TO ACADEMIC LIFE

The triple helix is a model for capitalizing knowledge in order to pursue
innovation (Etzkowitz, 2008). Academic communities are fearful that
capitalization will diminish the university goal of knowledge produc-
tion per se. This fear seems to be linked to a traditional image of the
division of labour in universities. Curiosity-driven research is separated
from technology-driven research. Therefore, if a university focuses on
the latter, it handicaps and weakens the former. On the contrary, in our
opinion, in many technological fields knowledge production simultane-
ously encompasses various aspects of research. The theory of polyvalent
knowledge (Etzkowitz and Viale, 2009) implies that, contrary to the
division of knowledge into divergent spheres – applied, fundamental,
technological – or into mode 1 (disciplinary knowledge) and mode 2
(applied knowledge) (Stokes, 1997; Gibbons et al., 1994), a unified
approach to knowledge is gradually becoming established. In frontier
areas such as nanotechnologies and life sciences, in particular, practical
knowledge is often generated in the context of theorizing and fundamen-
tal research. And, on the other hand, new scientific questions, ideas and
insights often come from the industrial development of a patent and the
interaction of basic researchers and industrial labs. The polyvalence of
knowledge encourages the multiple roles of academics and their involve-
ment in technology firms, and vice versa for industrial researchers in
academic labs.
One way of testing the reliability of this theory is to verify whether or
not there is any complementarity between scientific and technological
activities, measured by the number of publications and patents respec-
tively. In the case of polyvalent knowledge, the same type of knowledge
is able to generate both scientific output and technological output. Since
the scientific knowledge contained in a publication generates technological
applications represented by patents, and technological exploitation gener-
ates scientific questions and answers, we should expect to see some com-
plementarity between publishing and patenting. Researchers who take
out patents should show greater scientific output and a greater capacity to
affect the scientific community, measured by the impact factor or citation
index.
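
A minimal sketch of such a check, on synthetic per-researcher data (the
counts and the scipy calls are illustrative assumptions, not the analyses
discussed below):

# Illustrative complementarity check on synthetic per-researcher counts.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical records for 200 researchers: patents, publications, citations.
patents = rng.poisson(lam=1.0, size=200)
publications = rng.poisson(lam=5 + 2 * patents)   # positive link built in, purely for illustration
citations = rng.poisson(lam=3 * publications)

# Complementarity would show up as positive correlations between patenting
# and both publication output and citation impact.
for name, outcome in [("publications", publications), ("citations", citations)]:
    r, p = stats.pearsonr(patents, outcome)
    print(f"patents vs {name}: r = {r:.2f}, p = {p:.3g}")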
In other words, increasing integration between basic science and tech-
nology implies that there is no rivalry between scientific and technological
output. The rivalry hypothesis holds that there is a crowding-out effect
between publication activities and patenting. The substitution phenome-
non between publications and patents stems from the inclusion of market-
related incentives into the reward structure of scientists (Dasgupta and
David, 1985; Stephan and Levin, 1996). Scientists increasingly choose to
allocate their time to consulting activities and research agreements with
industrial partners. They spend time locating licensees for their patents
or working with the licensee to transfer the technology. Time spent
doing research may be compromised. These market goals substitute for
peer-review judgement and favour short-term research trajectories and
lower-quality research (David, 1998). Moreover, the lure of economic
rewards encourages scientists to seek IP (intellectual property) protection
for their research results. They may postpone or neglect publication and
therefore public disclosure. Industry funding, commercial goals and con-
tract requirements may lead researchers to increase secrecy with regard
to research methodology and results (Blumenthal et al., 1986; Campbell
et al., 2002). Both these mechanisms may reduce the quantity and the
quality of scientific production. This behaviour supports the thesis of a
trade-off between scientific research and industrial applications.
On the contrary, a non-rivalry hypothesis between publishing and
patenting is based on complementarity between the two activities. The
decision of whether or not to patent is made at the end of research and not
before the selection of scientific problems (Agrawal and Henderson, 2002).
Moreover, relations with the licensee and the difficulties arising from the
development of patent innovation can generate new ideas and suggestions
that point to new research questions (Mansfield, 1995). In a study, 65 per
cent of researchers reported that interaction with industry had positive
effects on their research. A scientist said: ‘There is no doubt that working
with industry scientists has made me a better researcher. They help me to
refine my experiments and sometimes have a different perspective on a
problem that sparks my own ideas’ (Siegel et al., 1999).
On the other hand, the opposition between basic and technological
research seems to have been overcome in many fields. In particular, in
the area of key technologies such as nanotechnology, biotechnology,
ICT (information and communication technologies), new materials and
cognitive technologies, there is continuous interaction between curiosity-
driven activities and control of the technological consequences of the
research results. This is also borne out by the epistemological debate. The
Baconian ideal of a science that has its raison d’être in practical applica-
tion is becoming popular once again after years of oblivion. And the
technological application of a scientific hypothesis, for example regarding
a causal link between two classes of phenomena, represents an empirical
verification. An attempt at technological application can reveal anomalies
and incongruities that make it possible to define initial conditions and
supplementary hypotheses more clearly.
In short, the technological ‘check’ of a hypothesis acts as a ‘positive
heuristic’ (Lakatos, 1970) to develop a ‘progressive research programme’
and extend the empirical field of the hypothesis. These epistemological
reasons are sustained by other social and economic reasons. In many
universities, scientists wish to increase the visibility and weight of their
scientific work by patenting. Collaboration with business and licensing
revenues can bring additional funds for new researchers and new equip-
ment, as well as meeting general research expenses. This in turn makes it
possible to carry out new experiments and to produce new publications.
In fact Jensen and Thursby (2003) suggest that a changing reward struc-
ture may not alter the research agenda of faculty specializing in basic
research. Indeed, the theory of polyvalent knowledge suggests that dual
goals may enhance the basic research agenda.

COMPLEMENTARITY BETWEEN PUBLISHING AND PATENTING

The presence of a complementarity or substitution effect between publishing
and patenting has been studied empirically in recent years. Agrawal
and Henderson (2002) have explored whether at the Departments of
Mechanical and Electrical Engineering of MIT patenting acts as a substi-
tute or a complement to the process of fundamental research. Their results
suggest that while patent counts are not a good predictor of publication
counts, they are a reasonable predictor of the ‘importance’ of a professor’s
publications as measured by citations. Professors who patent more write
papers that are more highly cited, and thus patenting volume may be cor-
related with research impact. These results offer some evidence that, at
least at the two departments of MIT, patenting is not substituting for more
fundamental research, and it might even be an accelerating activity.
Stephan et al. (2007) used the Survey of Doctorate Recipients to
examine the question of who is patenting in US universities. They found
patents to be positively and significantly correlated to the number of publi-
cations. When they broke the analysis down into specific fields, they found
that the patent–publishing results persisted in the life sciences and in the
physical/engineering sciences. The complementarity between publishing
and patenting in life sciences has been studied by Azoulay et al. (2005).
They examined the individual, contextual and institutional determinants
of academic patenting in a panel data set of 3884 academic life scientists.
Patenting is often accompanied by a flurry of publication activity in the
year preceding the patent application. A flurry of scientific output occurs
when a scientist unearths a productive domain of research. If patenting is
a by-product of a surge of productivity, it is reasonable to conclude that a
patent is often an opportunistic response to the discovery of a promising
area.
In the past, senior scientists and scientists with the most stellar academic
credentials were usually also the most likely to be involved in commercial
endeavours. But a feature of the Second Academic Revolution and the
birth and diffusion of entrepreneurial universities is that the academic
system is evolving in a way that accommodates deviations from tradi-
tional scientific norms of openness and communalism (Etzkowitz, 2000).
In fact, Azoulay et al.’s (2005) data indicate that many patenting events
now take place in the early years of scientists’ careers and the slope of the
patent experience curve has become steeper with more recent cohorts of
scientists. Patents are becoming legitimate forms of research output in
promotion decisions. Azoulay et al. (2005) show that patents and papers
encode similar pieces of knowledge and correspond to two types of output
[Figure I.1 Distribution of publication count for patenting and
non-patenting scientists: two panels (non-patenters, patenters) plotting
the percentage of scientists against total number of publications]

that have more in common than previously believed. Figure I.1 shows the
complementarity of patenting and publishing in Azoulay et al. (2005). It
plots the histogram of the distribution of publication counts for their 3884
scientists over the complete sample period, separately for patenting and
non-patenting scientists.
The study that makes the most extensive analysis of the complementa-
rity between patenting and publishing is by Fabrizio and Di Minin (2008).
It uses a broad sample drawn from the population of university inventors
across all fields and universities in the USA, with a data set covering 21
years. Table I.1 provides the annual and total summary statistics for the
entire sample and by inventor status. A difference of mean test for the
number of publications per year for inventors and non-inventors suggests
that those researchers holding a patent applied for between 1975 and 1995
generate significantly more publications per year than non-inventors. The
inventors in their sample are more prolific in terms of annual publications,
on the order of 20–50 per cent more publications than their non-inventor
colleagues. The results also suggest that there is no significant positive
relationship between patenting and citations to a faculty member’s
publications.
Nor was evidence of a negative trade-off between publishing and pat-
enting found in Europe. Van Looy et al. (2004) compared the publishing
Table I.1 Patenting and publishing summary statistics for inventors and
non-inventors

              Inventors           Non-inventors       All
              Mean     St. dev.   Mean     St. dev.   Mean     St. dev.
Annual pubs   3.99     5.18       2.24     2.96       3.12     4.32
Annual pats   0.56     1.55       0        0          0.28     1.14
Total pubs    79.93    84.78      43.71    47.72      62.00    71.30
Total pats    11.02    16.21      0        0          5.57     12.77

output of a sample of researchers in the contract research unit at the
Catholic University of Leuven in Belgium with a control sample from the
same university. The researchers involved in contract research published
more than their colleagues in the control sample. Univalent single-sourced
formats are less productive than the polyvalent research groups at the
Catholic University of Leuven that ‘have developed a record of applied
publications without affecting their basic research publications and, rather
than differentiating between applied and basic research publications, it is
the combination of basic and applied publications of a specific academic
group that consolidates the group’s R&D potential’ (Ranga et al., 2003,
pp. 301–20). This highly integrated format of knowledge production
evolved from two divergent sources: industrial knowledge gained from
production experience and scientific knowledge derived from theory and
experimentation.
In Italy an empirical analysis of the consequences of academic patenting
on scientific publishing has been made by Calderini and Franzoni (2004),
in a panel of 1323 researchers working in the fields of engineering chem-
istry and nanotechnologies for new materials over 30 years. As shown in
Table I.2, the impact of patenting on the quantity of publications is positive.
Development activities are likely to generate additional results that are
suitable for subsequent publications, although there might be one or two
years of lag. Moreover, quality of research measured by the impact factor
is likely to increase with the number of patents filed in the period follow-
ing the publication. Scientific performance increases in the proximity of a
patent event. This phenomenon can be explained in two ways. Top-quality
scientific output generates knowledge that can be exploited technologi-
cally. And technological exploitation is likely to generate questions and
problems that produce further insights and, consequently, additional pub-
lications. Similar results are found by Breschi et al. (2007), in a
study done on a sample of 592 Italian academic inventors (see Table I.3).
Table I.2 Results of T-test and of test of proportions (two samples)

T-test                        Mean      Mean      Variance,  Variance,  Stat T       P-value
                              patent-   non-      patent-    non-       H0: 0 diff.  (two tails)
                              holders   patent-   holders    patent-    in mean
                                        holders              holders
Observations                  133       1190      133        1190
Total publications            32.41     19.30     1026.59    897.86     4.50         0.00*
Impact factor                 1.70      1.48      1.88       2.82       0.01         0.00*
Total cites received at 2003  8.21      5.17      39.97      33.30      5.30         0.00*

Test of proportions           Mean      Mean      Stat Z       P-value
                              patent-   non-      H0: 0 diff.  (two tails)
                              holders   patent-   in mean
                                        holders
Observations                  133       1190
Academic personnel            0.98      0.96      0.89         0.37
Technician                    0.02      0.38      1.35         0.187
Area                          0.68      0.70      0.42         0.68

Note: * p < 0.05.
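
The comparisons in Table I.2 are standard two-sample tests on the
patent-holder and non-patent-holder groups. A minimal sketch of that kind
of computation, on synthetic data (the group sizes and counts below are
invented for illustration, not Calderini and Franzoni’s data):

# Two-sample comparisons of the kind reported in Table I.2 (synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical total publication counts: 133 patent-holders, 1190 non-holders.
pubs_holders = rng.negative_binomial(n=2, p=0.06, size=133)
pubs_others = rng.negative_binomial(n=2, p=0.09, size=1190)

# T-test, H0: zero difference in mean publications between the two groups.
t, p = stats.ttest_ind(pubs_holders, pubs_others, equal_var=False)
print(f"t = {t:.2f}, two-tailed p = {p:.4f}")

# Test of proportions, H0: equal share of (say) academic personnel per group.
count = np.array([130, 1140])            # hypothetical successes per group
nobs = np.array([133, 1190])
p1, p2 = count / nobs
p_pool = count.sum() / nobs.sum()
z = (p1 - p2) / np.sqrt(p_pool * (1 - p_pool) * (1 / nobs[0] + 1 / nobs[1]))
print(f"z = {z:.2f}, two-tailed p = {2 * stats.norm.sf(abs(z)):.4f}")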

TRIPLE HELIX: LABORATORY OF INNOVATION

The incorporation of economic development into university missions
and the further integration of the knowledge infrastructure into innova-
tion systems take different forms in various countries and regions. Most
regions, however, lack innovation systems; rather they are innovation
environments in which some elements to encourage innovation are present
and others missing. In such situations it is important for some group or
organization to play the role of regional innovation organizer (RIO)
and bring the various elements of the triple helix together to foster new
projects. Momentum starts to grow around concepts such as Silicon Alley
in New York and Oresund in Copenhagen/southern Sweden, uniting
politicians, business persons and academics. Imagery is also important
since there often are not strong market reasons to allocate resources to the
development of a region.
Table I.3 Publications per year, inventors versus controls, 1975–2003; by field

                                  N     Mean   Std    Median
Inventors
  Chem. eng. & materials tech.**  63    2.0    1.75   1.5
  Pharmacology*                   83    2.2    1.21   2.0
  Biology*                        78    2.5    2.10   2.0
  Electronics & telecom*          72    1.7    1.04   1.4
  All fields                      296   2.1    1.60   1.8

Controls
  Chem. eng. & materials tech.    63    1.3    1.10   1.1
  Pharmacology                    83    1.7    1.11   1.6
  Biology                         78    1.8    1.27   1.5
  Electronics & telecom           72    1.3    1.18   1.0
  All fields                      296   1.6    1.28   1.3

Note: */** inventor–control distribution difference significant at 0.90/0.95
(Kolmogorov–Smirnov test).

Source: Elaborations on EP–INV database and ISI Science Citation Index.
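
The asterisks in Table I.3 refer to two-sample Kolmogorov–Smirnov tests on
the inventor and control publication distributions. A hedged sketch of that
comparison, again on synthetic data rather than the EP–INV/ISI sample:

# Two-sample Kolmogorov-Smirnov comparison of the kind noted under Table I.3.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical publications per year for 63 inventors and 63 matched controls.
inventors = rng.gamma(shape=2.0, scale=1.0, size=63)
controls = rng.gamma(shape=2.0, scale=0.65, size=63)

# H0: both samples are drawn from the same distribution.
stat, p = stats.ks_2samp(inventors, controls)
print(f"KS statistic = {stat:.3f}, p = {p:.3f}")
print("significant at 0.95" if p < 0.05 else "not significant at 0.95")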

Rather than importing innovation mechanisms that appear to have
worked well elsewhere, it is important as an initial step to analyse a local
situation in order to determine:

(1) the available resources that can be used to start the incubation
process for knowledge-based development;
(2) what is missing and how and where those missing resources can be
found, either locally or internationally.

Arriving at such a determination and follow-on collaboration means that
there must be discussions among the potential actors rather than govern-
ment saying by itself: this is what should be done.
A consensus space, a forum that brings together the different triple-
helix actors in a region, is often the source of new ideas and plans for
knowledge-based development. From the analysis of the resources in a
region, an awareness can be generated of the potential of its knowledge
space, the research units, formal and informal, in the sciences and arts that,
in turn, can become the basis for the creation of an innovation space, a
mechanism to translate ideas into reality. The invention of the venture
capital firm in 1940s New England is one example. These ‘triple-helix
spaces’ may be created in any order, with any one of them used as the basis
for the development of others (Etzkowitz and Ranga, 2010).
Creating new technology-based economic niches has become a third
strategy for regional and local development. As the number of niches
for science-based technology increases, the opportunity for more players
to get involved also increases. Universities not traditionally involved in
research are becoming more research-oriented, often with funding from
their state and local governments, which increasingly realize that research
is important to local economic growth. A firm may start from a business
concept looking for a technology to implant within it or a technology
seeking a business concept to realize its commercial potential. The entre-
preneur propelling the technology may be an amateur or an experienced
professional. Whichever the case, the technology comes with a champion
who is attempting to realize its commercial potential by forming a firm.
Universities, as well as firms, are developing strategic alliances and
joint ventures. The Karolinska Institute has recruited schools in the health
and helping professions across Sweden into collaborations in order to
increase its ‘critical mass’ in research. Groups of universities in Oresund,
Uppsala and Stockholm have formed ‘virtual universities’, which are then
translated into architectural plans for centres and science parks to link the
schools physically.
As entrepreneurial academic activities intensify, they may ignite a self-
generating process of firm-formation, no longer directly tied to a particu-
lar university. The growth of industrial conurbations around universities,
supported by government research funding, has become the hallmark of
a regional innovation system, exemplified by Silicon Valley; the profile of
knowledge-based economic development was further raised by the found-
ing of Genentech and other biotechnology companies based on academic
research in the 1980s. Once take-off occurs in the USA, only the private
sector is usually credited; the role of government, for example, the Defense
Advanced Research Projects Agency (DARPA), in founding SUN, Silicon Graphics
and Cisco is forgotten.
The triple helix denotes not only the relationship of university, industry
and government, but also the internal transformation within each of these
spheres. The transformation of the university from a teaching institution
into one that combines teaching with research is still ongoing, not only in
the USA, but in many other countries. There is a tension between the two
activities, but nevertheless they coexist in a more or less compatible rela-
tionship. Although some academic systems still operate on the principle of
separating teaching and research, it has generally been found to be both
more productive and more cost-effective to combine the two functions, for
example by linking research to the PhD training process. Will the same
relationship hold for three functions, with the emerging third mission of
economic and social development combined with teaching and research?
A recent experiment at Newcastle University points the way towards
integration of the three academic functions. A project for the redevelop-
ment of the region’s economy as a Science City was largely predicated on
building new laboratories for academic units and for firms in the expecta-
tion that the opportunity to ‘rub shoulders’ with academics in related fields
would be a sufficient attractor. However, a previous smaller-scale project,
the Centre for Life, based on the same premise, did not attract a signifi-
cant number of firms and the allotted space was turned over to academic
units. To jump-start Science City, the professor of practice model, based
on bringing distinguished practitioners into the university as teachers, has
been ‘turned on its head’ to attract researchers of a special kind: PhD sci-
entific entrepreneurs who have started successful firms but may have been
pushed aside as the firm grew and hired professional managers.
Newcastle University, in collaboration with the Regional Development
Agency in Northeast UK, established four professors of practice (PoPs),
one in each of the Science City themed areas – a scheme for knowledge-
based economic development from advanced research. The PoPs link
enterprise to university and are intentionally half-time in each venue so
that they retain their industrial involvement at a high level and do not
become traditional academics. The PoPs have initiated various projects,
ranging from an interdisciplinary centre drawing together the university’s
drug discovery expertise, which aims to undertake larger projects and
attract higher levels of funding, to a new PhD programme integrating
business, engineering and medical disciplines to train future academic and
industrial leaders in the medical devices field.
The next step in developing the PoP model is to extend it down the
academic ladder by creating researchers of practice (RoPs), postdoctoral
fellows and lecturers, who will work half-time in an academic unit and
half-time in the business development side of the university, e.g. technol-
ogy transfer office, incubator facility or science park. The RoPs would be
expected to involve their students in analysing feasibility of technology
transfer projects and in developing business plans with firms in the uni-
versity’s incubator facility. Each PoP could mentor three or four RoPs,
extending the reach of the senior PoPs as they train their junior colleagues.
Moreover, the PoP model is relevant to all academic fields with practi-
tioner constituencies, including the arts, humanities and social sciences.
Until this happens, entrepreneurial activities will typically be viewed as an
adjunct to, rather than an equal partner with, the now traditional missions
of teaching and research.
In the medium term, the PoP model may be expected to become a
forward linear model, as professors spinning out firms reduce to a half-time
academic workload, superseding the typical current UK practice of forced
choice. As some professors reduce to half-time, additional RoPs may be
hired to share their positions. When 25 per cent (one in four) of academics
are PoPs or RoPs, the entrepreneurial academic model will be institution-
alized. The RoPs are intended to link academic units with the new business
development units of the university. Incubators and tech transfer offices
are typically established as administrative arms rather than as extensions
of academic units, their natural location once the ‘third mission’ for eco-
nomic and social development is fully accepted. In the interim, there is a
need to bridge internal ‘silos’ as well as external spheres.
The university is a flexible and capacious organization. Like the church,
its medieval counterpart, it is capable of reconciling apparent contradic-
tions while pursuing multiple goals in tandem. As the university takes up
a new role in promoting innovation, its educational and research missions
are also transformed. As the university expands its role in the economy,
from a provider of human resources to a generator of economic activity,
its relationship to industry and government is enhanced. Paradoxically, as
the university becomes more influential in society, it is also more subject
to influence, with academic autonomy increased in some instances and
reduced in others. When bottom-up initiatives that have proved success-
ful, such as the incubator movement in Brazil, are reinforced by top-down
policies and programmes, perhaps the most dynamic and fruitful result is
achieved. It also means that universities and other knowledge-producing
institutions can play a new role in society, not only in training students
and conducting research but also in making efforts to see that knowledge
is put to use.
There is also a convergence between top-down and bottom-up initia-
tives, which ideally reinforce one another. The flow of influence can go
in both directions. If top-down, the local or regional level may adapt the
policy or programme to local or regional needs. Bottom-up may also be
the source of public pressure for action and the creation of models that
can later be generalized top-down or through isomorphic mimesis. The US
agricultural innovation system is a classic example of the ‘hybridization’ of
top-down and bottom-up approaches, with government at various levels
acting as a ‘public entrepreneur’ (Etzkowitz et al., 2001).
The result is an interactive model, with intermediate mechanisms that
integrate the two traditional starting points of science and technology
policy. In contrast to biological evolution, which arises from mutations
and natural selection, social evolution occurs through ‘institution forma-
tion’ and conscious intervention. The triple helix provides a flexible frame-
work to guide our efforts, working from different starting points to achieve
the common goal of knowledge-based economic and social development.


Innovation thus becomes an endless transition, a self-organizing process
of initiatives among the institutional spheres.

CAPITALIZING KNOWLEDGE AND THE TRIPLE HELIX

This book is structured into two parts. Part I deals with the ways, proprie-
tary and not, to obtain an economic return from scientific and technologi-
cal research. One of its focuses is how the epistemological and cognitive
features of knowledge, its generation and utilization, can constrain and
shape the way in which it is capitalized.
Economic and social factors are necessary to explain the capitalization
of knowledge. However, they are not sufficient, because they neglect
the constraints on the capitalization process deriving from the epistemo-
logical structure and cognitive dimension of knowledge. According to
Chapter 1 by Riccardo Viale, these are crucial factors in the organizational
and institutional changes taking place in the capitalization of knowledge.
For example, consider the different ontologies and languages present
in physics compared to materials science or biology. The different use of
quantitative measures versus qualitative and pictorial representations, the
different types of laws and the role of experiments constrain the organiza-
tional ways knowledge can be capitalized. Any attempt to devise the right
format for capitalization should consider these aspects. ‘Nudging’ (Thaler
and Sunstein, 2008) the capitalization of knowledge means impressing
institutions and organizations on the minds of the actors involved in the
process of knowledge generation and application. Higher cognitive com-
plexity, as in the case of converging technologies, cannot be coped with
by isolated individual minds but calls for increasing division of compu-
tational labour. Greater epistemological generality, as in the case of the
application of inclusive theories of physics and chemistry, allows better
knowledge transfer but requires the involvement of many disciplines to
widen the innovation field. Different background knowledge between
university and companies hinders the reciprocal understanding that can
be improved by better face-to-face interaction. Different cognitive styles
in problem-solving, reasoning and decision-making hamper collaboration
in research projects, but that can be remedied by the emergence of hybrid
roles and organizations. Two ‘nudge’ suggestions made in the chapter are
greater proximity between universities and companies and the emergence
of a two-faced Janus scientist, a hybrid figure that combines the cognitive
styles and values of both the academic and the industrial scientist.
While Viale’s chapter sees the collaboration between businesses and
universities as being difficult owing to differences in values and cogni-
tive styles, the thesis proposed by Paul David and Stanley Metcalfe in
Chapter 2 offers an alternative, but not incompatible, account. After
the Bayh–Dole law, US universities are under increasing pressure to capi-
talize their knowledge and strengthen IPR (intellectual property rights).
Collaboration with companies is not intermediated by faculty researchers
but by employees of university ‘service units’ (e.g. technology transfer
office, university research services, sponsored research office, external rela-
tions office). The main task of these offices is to sell licences and patents
derived from the knowledge produced in university labs. Their main exper-
tise is property right protection and avoidance of legal liability. When they
have the chance to collaborate with companies on a joint research project,
their risk aversion towards legal liability and concerns over IPR often
hinder the agreement. Firms are always complaining about the propri-
etary approach taken by universities. Interaction has become difficult. As
Stuart Feldman, IBM’s vice-president for computer science, explained to
the New York Times: ‘Universities have made life increasingly difficult to
do research with them because of all the contractual issues around intel-
lectual property . . . We would like the universities to open up again’. Thus,
in order to be more useful for companies, universities should become less
entrepreneurial in managing IPR, according to IBM.
The paradox is obvious and the thesis is counterintuitive. Most innova-
tion policies in the USA and Europe were informed by the opposite equa-
tion: more knowledge transfer equals a more entrepreneurial university.
Without reorienting the incentive structure of universities towards com-
mercial aims, it seemed difficult to increase the knowledge transfer towards
businesses. In short, if it is true that joint collaboration implies more simi-
larity in background knowledge and cognitive styles between academic
and industrial researchers, and if it is true that universities should abandon
their overly aggressive IPR policy and that there is a growing need for
open innovation and science commons in universities, the conclusion is
that companies will bear the burden of cultural change. Their researchers
should acquire more academic values and styles in order to pursue fruitful
joint collaborations with universities. New ways of connecting universities
with companies, like academic public spaces and new academic and indus-
trial research roles, like the Janus scientists or professors of practice, must
be introduced to support a rewarding collaboration.
Older technology companies are especially fearful of universities creat-
ing competitors to themselves. They are concerned that by licensing IP
to start-ups, including those founded by faculty and students, new firms
may be created that will displace their prominence on the commercial
landscape. Thus, we have the irony of formerly closed firms espousing
‘open innovation’ and imploring universities to donate their IP rights. Not
surprisingly, most universities take the innovation side of the IP debate
and increasingly assist new firm formation. Universities are making a
long-term bet that equity and employment created in a growth firm will
redound to the benefit of the region where they are located and to the
university’s endowment. As universities invest in the formation of firms
from the IP they generate, they become closely tied to the venture capital
phenomenon that, in an earlier era, they helped create.
A clear example of ‘knowledge-driven capitalization of knowledge’ is
the emergence of venture capitalism. Until the beginning of the twentieth
century, technological knowledge was mainly idiosyncratic and tacit.
These epistemological features limited its tradability
but also limited the need for the legal protection of intellectual property.
With the widening of the scientific base of innovation, knowledge became
more general and explicit. As tacitness decreased, so the tradability of
knowledge and the need for IPR grew. Recognition of this epistemologi-
cal change is essential to understanding the institutional phenomena that
brought about the emergence of venture capitalism, the current major
institution in the capitalization of knowledge. Cristiano Antonelli and
Morris Teubal in Chapter 3 outline the evolutionary dynamics of the
financial markets for innovation.
Whoever finances innovation must assess two combined sets of risk:
that the innovative project could fail and that the result cannot be appro-
priated by the inventor. The second risk is better faced by equity finance,
e.g. corporate bodies, because the investors have the right to claim a share
of the profits of successful companies whereas the lenders, e.g. banks, can
claim only repayment of their loans. The first risk is better faced by banks because their
polyarchic decision-making (i.e. great variety of expertise and number of
experts less tied to vested interests) results in a higher chance of including
outstanding projects, whereas the hierarchical (i.e. less variety of expertise
and experts more closely tied to vested interests) decision-making of cor-
porate bodies tends to favour only minor incremental innovation. Venture
capitalism seems to be able to combine the advantages of both, that is the
screening procedure performed by competent polyarchies, a distinctive
feature of banks, and the equity-based provision of finance to new under-
takings, a distinctive feature of corporations.
Since the early days venture capital firms have specialized in the provi-
sion of ‘equity finance’ to new science-based start-up companies, together
with business services and management advice. Limited partnerships,
which were the leading form of organization for start-ups during the
1960s and 1970s, converged progressively into private stock companies
based upon knowledge-intensive property rights shares in the new science-
based start-up companies. Private investors and financial companies
elaborated exit strategies for collecting the value of these new firms after
their creation and successful growth. Exit took place mainly through the
sale of knowledge-intensive property rights. Initially these were private
transactions over the counter. Later a public market emerged, character-
ized by automatic quoting mechanisms to report the prices and quantities
of private transactions. This mechanism, better known as NASDAQ,
evolved into a marketplace for selling knowledge-intensive property rights
to the public at large. The demand for new knowledge-intensive property
rights by investment funds, pension funds and retail investors accelerated
the diffusion of NASDAQ with a snowball effect. The growing size of the
market enabled it to become an efficient mechanism for identifying the
correct value of knowledge-intensive property rights, a key function for
the appreciation of the large share of intangible assets in the value of the
new science-based companies.
Intellectual property rights (IPR) have become the currency of technol-
ogy deals, the ‘bargaining chips’ in the exchange of technology among dif-
ferent firms. What kind of justification is there for this strong emphasis on
IPR? Are IPR the best way to strengthen the capitalization of knowledge?
Is the current explosion of patents the right explanation for the innova-
tion rate? Are there other factors, detached from proprietary incentives,
capable of driving innovation pathways? For example, in the case of
converging technologies, the high interdisciplinarity of their epistemo-
logical problem-solving dynamics tends to overcome any attempt to create
proprietary monopolies of knowledge. The rate of change of knowledge
is very fast; the knowledge useful for innovation is tacit in the minds of
scientists; innovative problem-solving stems from the conceptual recombi-
nation and theoretical integration of different sources of knowledge; only
open discussion, without IPR constraints, among academic and indus-
trial researchers can generate the ‘gestalt shift’ that will afford the
proper solution. Moreover, each particular body of knowledge drives the
dynamics of knowledge change and capitalization of knowledge.
The many components (ontic, deontic, epistemological and cognitive)
of knowledge shape the directions of technological development towards
given innovative products. They address the technological trajectories of
technological paradigms described by Giovanni Dosi, Luigi Marengo and
Corrado Pasquali in Chapter 4. The authors discuss the classic dilemma of
the relation between IPR and innovation: on the one hand, the intellectual
property monopolies afforded by patents or copyright raise product prices,
while on the other, IPR provide a significant economic incentive for pro-
ducing new knowledge. The answer to this question is not straightforward.
It is important to emphasize that, as far as product innovations are con-
cerned, the most effective mechanisms are secrecy and lead time, while
patents are the least effective, with the partial exception of drugs and
medical equipment (Levin et al., 1987; Cohen et al., 2000). Moreover, the
effects of IPR seem to be deleterious for innovation in the case of strongly
cumulative technologies in which each innovation builds on previous ones.
To the extent that a given technology is critical for further research, the
attribution of broad property rights might hamper further developments.
For example, in the case of the Leder and Stewart patent on the genetically
engineered mouse that develops cancer, if the patent (the ‘onco-mouse’)
protects the whole class of products that could be produced (‘all transgenic
non-human mammals’) or all the possible uses of a patented invention (a
gene sequence), it represents a serious obstacle to research and innovation.
On the other hand, Stanford University’s Office of Technology Licensing
demonstrated that, by proactively licensing the Cohen–Boyer patent for
recombinant DNA at reasonable rates, it helped create a new industry.
How to navigate between the Scylla of fears about ‘free riders’ in the
absence of clear IPR and the Charybdis of overprotection stifling innovation
is a persisting question.
Symmetrically, a ‘tragedy of the anti-commons’ is also likely to arise when
IP regimes give too many subjects the right to exclude others from using
fragmented and overlapping pieces of knowledge with no one having the
effective privilege of use. In the software industry, extensive portfolios of
legal rights are considered means for entry deterrence and for infringement
and counter-infringement suits against rivals. When knowledge is so finely
subdivided into separate property claims on complementary chunks of
knowledge, a large number of costly negotiations might be needed in order
to secure critical licences. Finally, the history of innovation highlights
many cases in which industry developed strongly with weak IPR regimes.
For example, the core technologies of ICT – including transistors, semi-
conductors, software and telecommunication technologies like the mobile
phone – were developed under weak IPR regimes.
The organizational effect of the epistemological structure of knowledge
is evident in some science-based innovations, such as biotechnology,
in particular biopharmaceuticals. The high level of interdisciplinarity
between biotechnology and biochemistry, informatics, mathematics, nan-
otechnology, biophysics, immunology and so on makes the knowledge
very unstable. The potential problem-solving resulting from the intersec-
tion, hybridization and conceptual recombination of different and con-
nected models and theories is very high. Thus there are many inventive
solutions that continuously transform the field. As pointed out by Philip
Cooke in Chapter 5, skills are in short supply and requirements change
rapidly. The potential inventions are in the scientists' minds. Therefore
the level of useful and crucial tacit knowledge is very high. Knowledge in
the scientist’s mind is often taken as a tacit quasi proof-of-concept of an
invention waiting to be disclosed. This is why we need proximity and direct
interaction between academics and business. Interaction and face-to-face
discussion can improve focusing on the right technological solution,
making knowledge transfer easier. Thus, at the early stage of knowledge
exploration, the cluster of dedicated biotechnological firms (DBFs) and
research institutes is geographically localized. Only when the cluster enters
the stage of knowledge exploitation does it tend to globalize.
Knowledge-driven capitalization of knowledge is evident also in another
way. Why did Germany experience so many difficulties and false starts in
biotechnology? Because the predominant knowledge and methodology
was that of organic chemistry and pharmaceutics in which Germany was
the world leader. Germany tried to implant the embryo of a biotechno-
logical industry in an epistemological environment characterized by the
knowledge structure of industrial chemistry, the methodological tech-
niques of pharmaceutics and the reasoning and decision-making processes
of organic chemists. On the contrary, the UK epistemological environment
was much more fertile because it was the birthplace of molecular biology,
of which biotechnology is an applied consequence. Nevertheless, there
was a significant gap between the discovery of the double helix and the
rise of a UK biotech industry. In the interim, the US took world leader-
ship through a plethora of ‘companies of their own’ founded by entrepre-
neurial scientists at universities that had made an early bet on molecular
biology. Consequently, different epistemological path dependencies lead
to different ways of capitalizing knowledge. Lastly, the birth of biopharma
clusters is a clear example of a triple-helix model of innovation. The holy
trinity of research institutes, DBFs and big pharmaceutical companies
within the cluster is often triggered by local governments or national
agencies. For example, the driver of cluster-building in Washington
and Maryland’s Capitol region was, mainly, the National Institutes of
Health. From this point of view, biopharma clusters conform neither to
the Schumpeterian model nor to Porterian clusters. They tend to be more
milieu than market. They need public financial support in order to imple-
ment research programmes.
The phenomenon of knowledge-driven capitalization of knowledge is
evident in the different architectures of capitalization depending on the
different conceptual domains and disciplines. In a few cases the propri-
etary approach may be useful at the first stages of research, while in many
other cases it is better only for downstream research. Sometimes for the
same kind of knowledge a change happens in the way it is capitalized.
The change is linked to cultural and economic factors. In some instances
the proprietary domain is perceived to be socially bad (because of
hyperfocusing and exaggerated risk-perception of the consequences of
the ‘tragedy of anti-commons’) and there are emergent successful cases
of open innovation having good economic returns. Therefore scientific
communities tend to shift from a proprietary approach to an open one
and back again as their interest shifts from working with existing firms to
starting new ones. Indeed, the same individual may pursue both courses
of action simultaneously with different research lines or even with pieces
of the same one.
Software research is a case of a knowledge capitalization that has shifted
in recent years from the first to the second category in upstream research,
maintaining the proprietary domain only for the downstream stages. This
is because if the proprietary domain is applied to the upstream basic soft-
ware knowledge, there is a risk of lack of development and improvement
that is achieved mainly through an open source approach. If the public
domain is applied to the downstream commercial developments, there is
a risk of lack of economic private resources because companies don’t see
any real economic incentive to invest. Alfonso Gambardella and Bronwyn
H. Hall in Chapter 6 analyse and explain in economic terms the evolution
of the proprietary versus public domain in the capitalization of knowl-
edge. The proprietary regime assigns clear property rights and provides
powerful incentives at the cost of creating temporary monopolies that will
tend to restrict output and raise prices. The public regime does not provide
powerful incentives but the dissemination of knowledge is easy and is
achieved at low cost.
There is an alternative third system, that of ‘collective invention’. This
system allows ‘the free exchange and spillover of knowledge via person-
nel contact and movement, as well as reverse engineering, without resort
to intellectual property protection’. Collective invention in the steel and
iron industry, in the semiconductor industry, and in the silk industry are
some of the historical examples of this third alternative. The production
of knowledge is supported by commercial firms that finance it through
the sale of end products. Firms are willing to share information because rewards come from product sales rather than from information about incre-
mental innovations. This system works when an industry is advancing and
growing rapidly and the innovation areas are geographically localized, but
it doesn’t work when an industry is mature or the innovation areas are not
geographically localized. In these cases, how can an open science approach
to upstream research be supported? Without coordination, scientists don’t
perceive the advantages of a public rather than a proprietary approach,
namely the utility of a larger stock of public knowledge and the visibility
of their research and achievements. Therefore, without coordination they
tend to behave egoistically and collective action is hard to sustain (as in
Mancur Olson’s famous theory of collective action).
A policy device that is particularly useful in software research and that could sustain the right amount of coordination is the General Public License
(GPL), also dubbed the copyleft system: ‘the producer of an open source
program requires that all modifications and improvements of the program
are subject to the same rules of openness, most notably the source code
of all the modifications ought to be made publicly available like the origi-
nal program' (Gambardella and Hall, this volume). In order to make the GPL function, legal enforcement is needed, because the norms and social values of the scientific community, and the reputational effect of infringing them, seem to be insufficient.
Part II deals with the growing importance of the triple-helix model in the
knowledge economy. The generation of knowledge, in particular knowl-
edge that can be capitalized, seems to be linked to emerging knowledge
networks characterized by academy–industry–government interaction.
As an institution, the university has changed its mission many times in the
last hundred years. From the ivory tower of the German university model
in the nineteenth century, focused mainly on basic research and education,
the shift has been towards a growing involvement in solving social and
economic problems at the beginning of the twentieth century. The growing
involvement of universities in social and economic matters reached its acme
with the birth and prevalence of the MIT–Stanford model. According to
Henry Etzkowitz in Chapter 7 academy–industry relations are radically
changing the actors of the knowledge economy. Universities are becoming
the core of a new knowledge and creative economy. Paradoxically, it is by
holding to the values of basic research that radical innovations with the
highest market value are created. Capitalization of knowledge is becoming
a central target of research policy in most American and many European
universities. Technology transfer officers (TTOs) are adopting strong and proactive policies for IPR, the sale of licences and the creation of spin-off enterprises. A dual career is an ever more popular option. Academic scientists
in some fields such as life sciences feel increasingly at ease in collaborating
with companies.
The market culture is not a novelty in the American university. Already
during and especially after the 1980s the fall in public funding for research
obliged academics to seek resources in a competitive way from companies,
foundations and public agencies. The concept of the ‘quasi firm’ was born
at that time. Researchers joined together to form groups of colleagues with common research and economic objectives. They competed with other groups
for a slice of the funding cake. Universities that organized their funding
successfully through 'quasi firms' were the most suitable to become entre-
preneurial and to capitalize knowledge.
The capitalization of knowledge through IPR is losing ground accord-
ing to the analysis made by Caroline Lanciano-Morandat and Eric Verdier
in Chapter 8. National R&D policies can be divided into four categories, each resting on a distinct underlying convention:

1. the Republic of Science is summarized by the Mertonian ethos and
has as its main aim the development of codified knowledge. Its incen-
tive structure is based on peer evaluation that implies disclosure and
priority norms;
2. the state as an entrepreneur is based on the convention of a mission-
oriented public policy aimed at pursuing national priorities in tech-
nology and innovation. It uses top-down planning exemplified by
the traditional French technology policy successful in big military
projects or in aerospace and energy. The incentives shaping individual
behaviour are mainly public power over the actors of the scientific and
industrial worlds;
3. the state as a regulator promotes the transfer of scientific knowledge
to the business world. The objectives of academic research should be
shaped by market expectations. The incentives are focused on the
definition of property rights in order to promote the creation of high-
tech academic startups and the development of contractual relations
between universities and firms;
4. the state as facilitator of technological projects is represented by the
triple helix of the joint co-production of knowledge by universities,
companies and public agencies.

The emergence of hybrid organizations, strategic alliances and spin-offs
relies on local institutional dynamics. The information generated by
invention is ephemeral and rapidly depreciates due to the speed of tech-
nological change. This tends to reduce the protective role of contracts
and IPR for the capitalization of knowledge. Therefore, in science and technology districts, incentives rely less on royalties and more on capitalization through shares, revenues and stock options arising from participation in new industrial initiatives. Individual competences include the ability to
cooperate, to work in networks and to combine different types of knowl-
edge. ‘Janus scientists’ capable of interfacing knowledge and markets and
of integrating different conceptual tools pertaining to different disciplines
will become increasingly sought after in the labour market. This R&D
policy is gaining ground in most Western countries.
The triple helix is characterized by the birth of ever new and changing
hybrid organizations. One specific example of a hybrid organization is
that of boundary organizations that group the representatives of the three
helixes and aim to bridge the gap between science and politics. In Chapter 9
Sally Davenport and Shirley Leitch present a case study on the Life Sciences
Network (LSN) in New Zealand that acted as the boundary between the
position of the scientific community and industry and that of the political
parties in the discussion about the moratorium on GMOs. LSN increased the
chances of pro-GMO arguments being accepted by public opinion because
of its supposed neutrality and authoritative status compared to those of
member organizations. In this way it increased the chances of achieving the
common aims of the representatives of the three helixes.
The economic role of the triple helix in the knowledge economy is more
than just a way to capitalize knowledge. Knowledge is not economically
important only because it can be capitalized. The concept of the knowl-
edge economy is justified by a wider interpretation of the economic impact
of knowledge. The economic utility of knowledge is present not only when
it becomes an innovation. All economic activities involved in knowledge
production and distribution are relevant. The problem is: how can we
define what knowledge is and how can we measure its economic value?
Chapter 10 by Benoît Godin introduces the pioneering work of Fritz
Machlup. He tries to define knowledge with the help of epistemology,
cybernetics and information theory.
Knowledge is not only an explicit set of theories, hypotheses and empiri-
cal statements about the world. It is also the, often implicit and tacit, set
of procedures, skills and abilities that allows individuals to interact in the
real world. Or in the words of Ryle, and of Polanyi, knowledge is not only
represented by ‘know-that’ statements but also by ‘know-how’ abilities
(or to put it differently, by ontic and deontic knowledge). If knowledge
does not need to be merely explicit, and a true linguistic representation of
certified events and tested theories, and if it can also be subjective, con-
jectural, implicit and tacit, then it can include many expressions of social
and economic life: practical, intellectual, entertaining, spiritual, as well
as accidentally acquired. From an operational point of view, knowledge
should be analysed in two phases: generation and transmission. Therefore
R&D, education, media and information services and machines are the
four operational elements of knowledge.
Knowledge is not only a static concept, that is to say what we know, but
a dynamic one, that is what we are learning to know. The first is knowledge
as state, or result, while the second means knowledge as process, or activ-
ity. How can we measure knowledge? Not by using the Solow production-function approach. With this approach Solow formalized early work on growth accounting (breaking down GDP into capital and
labour) and equated the residual in his equation with technical change.
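As a worked illustration of the approach being discussed (in standard textbook notation, not Machlup's or Solow's own symbols), the growth-accounting decomposition can be written as

\[ Y = A\,K^{\alpha}L^{1-\alpha} \quad\Longrightarrow\quad \frac{\dot{A}}{A} = \frac{\dot{Y}}{Y} - \alpha\,\frac{\dot{K}}{K} - (1-\alpha)\,\frac{\dot{L}}{L}, \]

where Y is output, K capital, L labour and α capital's share of income; the residual \(\dot{A}/A\), the part of output growth not accounted for by the growth of capital and labour, is the term equated with technical change.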
According to Machlup, the production function is only an abstract con-
struction that correlates input and output, without any causal meaning.
The only reliable way to measure knowledge is by national accounting,
that is the estimate of costs and sales of knowledge products and services
(according to his broad definition). Where the data were not available, as
in the case of internal production and the use of knowledge inside a firm,
he looked at complementary data, such as occupational classes of the
census, differentiating white-collar workers from workers who were not
knowledge producers, like blue-collar workers. His policy prescriptions
were in favour of basic research and sceptical about the positive influence
of the patent system on inventive activity.
Basic research is an investment, not a cost. It leads to an increase in eco-
nomic output and productivity. Too much emphasis on applied research
is a danger because it drives out pure research, which is its source of
knowledge. Finally, his policy stance towards information technologies was highly supportive. Information technologies are a source of productivity growth
because of improved records, improved decision-making and improved
process controls, and are responsible for structural changes in the labour
market, encouraging continuing movement from manual to mental and
from less to more highly skilled labour.
The knowledge economy is difficult to represent. The representation
must not focus only on economic growth and knowledge institutions.
It should focus also on the knowledge base and on the dynamic dis-
tribution of knowledge. To reach this goal, knowledge should not be
represented only as a public good but as a mechanism for coordinating
society. Machlup was the first to describe knowledge as a coordination
mechanism when he qualified it in terms of the labour force. In Chapter
11, Loet Leydesdorff, Wilfred Dolfsma and Gerben Van der Panne try
to define a model for measuring the knowledge base of an economy. In
their opinion it can be measured as an overlay of communications among
differently codified communications. The relevant variables are the ter-
ritorial economy, its organization and technology. The methodological
tools are scientometrics, which measures knowledge flow, and economic
geography. Territorial economies are created by the proximity – in terms
of relational dimensions – of organizations and technologies. New niches
of knowledge production emerge as densities of relations and as a conse-
quence of the self-organization of these interactions. The triple helix is an
exemplification of this dynamics. It is the emergence of an overlay from
the academy–industry–government interaction. In some cases feedback
from the reflective overlay can reshape the network relations from which
it emerged.
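To give a hedged illustration of how such an overlay can be quantified (the notation below follows the information-theoretic formulation commonly used in this line of scientometric work, rather than reproducing the chapter's own symbols), one can compute the mutual information among three distributions of firms, classified, for example, by territory (g), technology (t) and organization (o):

\[ T_{gto} = H_{g} + H_{t} + H_{o} - H_{gt} - H_{go} - H_{to} + H_{gto}, \qquad H_{x} = -\sum_{i} p_{x_i}\log_2 p_{x_i}, \]

where the H terms are Shannon entropies of the marginal and joint distributions. In this literature a negative value of T is usually read as a reduction of uncertainty at the system level, that is, as a sign of an emergent overlay among the three dimensions.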
Academy–industry relations aim to establish knowledge networks that
function as complex problem-solvers devoted to the generation and diffu-
sion of knowledge. They are Complex Adaptive Systems (CASs), whose
emergent dynamics are difficult to predict but whose micromotives driving
the individual behaviours can be represented (Viale and Pozzali, 2010).
According to Chapter 12 by Matilde Luna and José Luis Velasco, there
are four mechanisms for integrating triple-helix actors with different and
diverging norms, interests, resources, theories and abilities: trust, trans-
lation, negotiation and deliberation. As is highlighted in other chapters
of the book, communication and collaboration between the members of
the three helixes offset the difficulties posed by a different set of values,
interests and skills. It is difficult for industrial scientists to coordinate with
academic researchers if their perception of time and money is different:
if for one the aim is commercial and for the other it is epistemological,
or if one has expertise of the more practical problem-solving kind while
the other tends towards more theoretical solutions. If there are too many
differences, no trust can be generated and therefore collaboration is dif-
ficult. Moreover, often there is linguistic distance, and cognitive styles
of reasoning and decision-making are different. This calls for translation between the two worlds. Translation can be provided by players who fulfil a bridging role between academy and industry (e.g. technology transfer officers) or by the scientists themselves (e.g. Janus scientists).
Without translation it is impossible to find a rational ground for
deliberation about the goals, methods, techniques and timescale of the
research project. There can be only tiring and long negotiations, often with
irrational and unbalanced results. Academy–industry relations can be
assessed on two different functional and operative levels. If a knowledge
network is capable of generating outputs that satisfy normative, episte-
mological and pragmatic desiderata, and if these outputs are achieved
with the lowest costs (time, technical resources, money, physical effort
etc.), they show a positive functional performance. If they become stable
and ‘robust’ in their organizational structure and activities, they show a
positive operative feature. Under these conditions The Capitalization of
Knowledge both advances knowledge and presages a new mode of pro-
duction beyond industrial society in which the university is co-equal with
industry and government.

ACKNOWLEDGEMENTS

We wish to thank Laura Angela Gilardi of Fondazione Rosselli for her
invaluable help with the secretarial and editorial work. The book could
not have been completed without her support. Thanks to Chiara Biano
for her editorial processing. We also thank Raimondo Iemma for supply-
ing some of the data for the introduction. We also wish to thank the staff
of Fondazione Rosselli and in particular Daniela Italia, Paola Caretta,
Rocío Ribelles Zorita, Elisabetta Nay, Carlotta Affatato, Giulia Datta,
Elena Bazzanini, Anna Mereu, Giovanni De Rosa, Michele Salcito,
Maria Cristina Capetti, Francesca Villa, Fabiana Manca and Laura
Alessi for the excellent organization of the conference and the follow-up
initiatives.

REFERENCES

Agrawal, A. and R. Henderson (2002), ‘Putting patents in context: exploring
knowledge transfer from MIT’, Management Science, 48 (1), 44–60.
Azoulay P., W. Ding and T. Stuart (2005), ‘The determinants of faculty pat-
enting behavior: demographics or opportunities?’, NBER Working Paper
11348.
Blumenthal, D., M. Gluck, K. Lewis, M. Stoto and D. Wise (1986), ‘University–
industry relations in biotechnology: implications for the university’, Science, 232
(4756), 1361–66.
Breschi, S., F. Lissoni and F. Montobbio (2007), ‘The scientific productivity of
academic inventors: new evidence from Italian data’, Economics of Innovation
and New Technology, 16 (2), 101–18.
Calderini, M. and C. Franzoni (2004), ‘Is academic patenting detrimental to high
quality research? An empirical analysis of the relationship between scientific
careers and patent applications’, Università Bocconi, CESPRI Working Paper
162.
Campbell, E.G., B.R. Clarridge, M. Gokhale, L. Birenbaum, S. Hilgartner, N.A.
Holtzman and D. Blumenthal (2002), ‘Data withholding in academic genetics’,
JAMA, 287 (4), 473–80.
Cohen, W., R.R. Nelson and J. Walsh (2000), ‘Protecting their intellectual assets:
appropriability conditions and why US manufacturing firms patent or not’,
NBER Discussion Paper 7552.
Dasgupta, P. and P. David (1985), ‘Information disclosure and the economics of
science and technology’, CEPR Discussion Papers 73.
David, P.A. (1998), ‘Common agency contracting and the emergence of “open
science” institutions’, The American Economic Review, 88 (2), 15–21.
Etzkowitz, H. (2000), ‘Bridging the gap: the evolution of industry–university links
in the United States’, in L. Branscomb et al. (eds), Industrializing Knowledge,
Cambridge: MIT Press, pp. 203–33.
Etzkowitz, H. (2008), The Triple Helix: University–Industry–Government Innovation
in Action, London: Routledge.
Etzkowitz, H. and M. Ranga (2009), ‘A transKeynesian vision of government’s
role in innovation: picking winners revisited’, Science and Public Policy, 36 (10),
799–808.
Etzkowitz, H. and M. Ranga (2010), ‘From spheres to spaces: a triple helix system
for knowledge-based regional development', http://www.triplehelix8.org/, last
accessed 22 June 2010.
Etzkowitz, H. and R. Viale (2009), ‘The third academic revolution: polyvalent
knowledge and the future of the university’, Critical Sociology, 34: 4 July 2010.
Etzkowitz, H., M. Gulbrandsen and J. Levitt (2001), Public Venture Capital, 2nd
edn, New York: Aspen Kluwer.
Fabrizio, K.R. and A. DiMinin (2008), ‘Commercializing the laboratory: faculty
patenting and the open science environment’, Research Policy, 37 (5), 914–31.
Gibbons, M., C. Limoges, H. Nowotny, S. Schwartzman, P. Scott and M. Trow
(1994), The New Production of Knowledge: The Dynamics of Science and
Research in Contemporary Societies, London: Sage.
Jensen, R. and M. Thursby (2003), ‘The academic effects of patentable research’,
paper presented at the NBER Higher Education Meeting, 2 May, Cambridge,
MA.
Lakatos, I. (1970), ‘Falsification and the methodology of scientific research pro-
grammes’, in I. Lakatos and A. Musgrave (eds), Criticism and the Growth of
Knowledge, Cambridge: Cambridge University Press, pp. 51–8.
Levin, R., A. Klevorick, R.R. Nelson and S. Winter (1987), ‘Appropriating the
returns from industrial R&D’, Brookings Papers on Economic Activity, 18 (3),
783–832.
Mansfield, E. (1995), ‘Academic research underlying industrial innovation:
sources, characteristics, and financing’, Review of Economics and Statistics, 77
(1), 55–65.
Ranga, L.M., K. Debackere and N. von Tunzelman (2003), ‘Entrepreneurial uni-
versities and the dynamics of academic knowledge production: a case of basic vs.
applied research in Belgium’, Scientometrics, 58 (2), 301–20.
Siegel, D., D. Waldman and A. Link (1999), ‘Assessing the impact of organiza-
tional practices on the productivity of university technology transfer offices: an
exploratory study’, NBER Working Paper 7256.
Stephan, P.E. and S.G. Levin (1996), ‘Property rights and entrepreneurship in
science', Small Business Economics, 8 (3), 177–88.
Stephan, P.E., S. Gurmu, A.J. Sumell and G. Black (2007), ‘Who’s patenting in
the university? Evidence from the survey of doctorate recipients’, Economics of
Innovation and New Technology, 16 (2), 71–99.
Stokes, D.E. (1997), Pasteur’s Quadrant: Basic Science and Technological
Innovation, Washington, DC: Brookings Institution Press.
Thaler, R.H. and C.R. Sunstein (2008), Nudge. Improving Decisions About Health,
Wealth, and Happiness, New Haven, CT: Yale University Press.
Van Looy, B., L.M. Ranga, J. Callaert, K. Debackere and E. Zimmerman (2004),
‘Balancing entrepreneurial and scientific activities: feasible or mission impos-
sible? An examination of the performance of university research groups at K.U.
Leuven, Belgium’, Research Policy, 33 (3), 443–54.
Viale, R. and A. Pozzali (2010), ‘Complex adaptive systems and the evolutionary
triple helix’, Critical Sociology, 34.
PART I

How to capitalize knowledge


1. Knowledge-driven capitalization of
knowledge
Riccardo Viale

INTRODUCTION

Capitalization of knowledge happens when knowledge generates an eco-
nomic added value. The generation of economic value can be said to be
‘direct’ when one sells the knowledge for some financial, material or behav-
ioural good. The generation of economic value is considered ‘indirect’ when
it allows the production of some material or service goods that are sold on
the market. The direct mode comprises the sale of personal know-how, such
as in the case of a plumber or of a sports instructor. It also comprises the
sale of intellectual property, as in the case of patents, copyrights or teaching.
The indirect mode comprises the ways in which organizational, declarative
and procedural knowledge is embodied in goods or services. The economic
return in both cases can be financial (e.g. cash), material (e.g. the exchange
of consumer goods) or behavioural (e.g. the exchange of personal services).
In ancient times, the direct and indirect capitalization of knowledge was
based mainly on procedural knowledge. Artisans, craftsmen, doctors and
engineers sold their know-how in direct or indirect ways within a market
or outside of it. Up to the First Industrial Revolution, the knowledge that
could be capitalized remained mainly procedural. Few were the inventors
that sold their designs and blueprints for the construction of military or
civil machines and mechanisms. There were some exceptions, as in the case
of Leonardo da Vinci and several of his inventions, but, since technological
knowledge remained essentially tacit, it drove a capitalization based pri-
marily on the direct collaboration and involvement of the inventors in the
construction of machines and in the direct training of apprentices.
In the time between the First and Second Industrial Revolutions, there
was a progressive change in the type of knowledge that could be capital-
ized. The ‘law of diminishing returns’, as it manifested itself in the eco-
nomic exploitation of invention, pushed companies and inventors, lacking
a scientific base, to look for the causal explanation of innovations (Mokyr,
2002a, 2002b). For example, Andrew Carnegie, Eastman Kodak, DuPont,
AT&T, General Electric, Standard Oil, Alcoa and many others under-
stood the importance of scientific research for innovation (Rosenberg and
Mowery, 1998). Moreover, the revolution in organic chemistry in Germany
shifted industrial attention towards the fertility of collaboration between
universities and companies. Searching for a scientific base for inventions
meant developing specific parts of declarative knowledge. Depending on
the different disciplines, knowledge could be more or less formalized and
could contain more or fewer tacit features. In any case, from the Second
Industrial Revolution onwards, the capitalization of technological knowl-
edge began to change: a growing part of knowledge became protected by
intellectual property rights (IPR); patents and copyrights were sold to
companies; institutional links between academic and industrial labora-
tories grew; companies began to invest in R&D laboratories; universities
amplified the range and share of applied and technological disciplines
and courses; and governments enacted laws to protect academic IPR and
introduced incentives for academy–industry collaboration. New institu-
tions and new organizations were founded with the aim of strengthening
the capitalization of knowledge.
The purpose of this chapter is to show that one of the important deter-
minants of the new forms of the capitalization of knowledge is its episte-
mological structure and cognitive processing. The thesis of this chapter
is that the complexity of the declarative part of knowledge and the three
tacit dimensions of knowledge – competence, background and cognitive
rules (Pozzali and Viale, 2007) – have a great impact on research behav-
iours and, consequently, on the ways of capitalizing knowledge. This
behavioural impact drives academy–industry relations towards greater
face-to-face interactions and has led to the development of a new aca-
demic role, that of the ‘Janus scientist’1. The need for stronger and more
extensive face-to-face interaction is manifested through the phenomenon
of the close proximity between universities and companies and through
the creation of hybrid organizations of R&D. The emergence of the new
academic role of the Janus scientist, one who is able to interface both with
the academic and industrial dimensions of research, reveals itself through
the introduction of new institutional rules and incentives quite different
from traditional academic ones.

EPISTEMOLOGICAL AND COGNITIVE
CONSTRAINTS IN KNOWLEDGE TRANSFER

Scientific knowledge is variegated according to different fields and disci-
plines. The use of formal versus natural language, conceptual complexity
versus simplicity, and explicit versus tacit features of knowledge vary a
great deal from theoretical physics to entomology (to remain within the
natural sciences). Different epistemological structures depend mainly on
the ontology of the relative empirical domain. For example, in particle
physics the ontology of particles allows the use of numbers and of natural
laws written in mathematical language. On the contrary, in entomology
the ontology of insects allows us to establish empirical generalizations
expressed in natural language. Different epistemological structures mean
different ways of thinking, reasoning and problem-solving. And this cog-
nitive dimension influences behavioural and organizational reality. To
better illustrate the role of epistemological structure, I shall introduce
several elementary epistemological concepts.
Knowledge can be subdivided into the categories ontic and deontic.
Ontic knowledge analyses how the world is, whereas deontic knowledge
is focused on how it can be changed. These two forms of knowledge can
be represented according to two main modes: the analytical mode deals
with the linguistic forms that we use to express knowledge; the cognitive
mode deals with the psychological ways of representing and processing
knowledge. Two main epistemological features of knowledge influence
the organizational means of knowledge generation and transfer. The first
is the rate of generality. The more general the knowledge is, the easier it is
to transfer and apply it to subjects different from those envisioned by the
inventor. The second is complexity. The more conceptually and computa-
tionally complex the knowledge is, the more there will be a concomitant
organizational division of work in problem-solving and reasoning.

1 Analytical Mode of Ontic Knowledge

Analytical ontic knowledge is divided into two main types, descriptive and
explanatory.

Descriptive
The first type comprises all the assertions describing a particular event
according to given space-time coordinates. These assertions have many
names, such as ‘elementary propositions’ or ‘base assertions’. They cor-
respond to the perceptual experience of an empirical event by a human
epistemic agent at a given time.2 A descriptive assertion has a predicative
field limited to the perceived event at a given time. The event is excep-
tional because its time-space coordinates are unique and not reproducible.
Moreover, this uniqueness is made stronger by the irreproducibility of
the perception of the agent. Even if the same event were reproducible, the
perception of it would be different because of the continuous changes in
perceptual ability. Perception is related to cortical top-down influences cor-
responding to schemes, expectations, frames and other conceptual struc-
tures that change constantly. The perception of an event causes a change
in the conceptual categorization to which the event belongs. This change
can modify the perception of a similar event that happens afterwards.3
Therefore a singular descriptive assertion can correspond only to the time-
space particular perception of a given epistemic agent and cannot have
any general meaning. For example, the observational data in an experi-
ment can be transcribed in a laboratory diary. An observational assertion
has only a historical meaning because it cannot be generalized. Therefore a
process technique described by a succession of descriptive assertions made
by an epistemic agent cannot be transferred and replicated precisely by a
different agent. It will lose part of its meaning and, consequently, replica-
tion will be difficult. Inventions, before and during the First Industrial
Revolution, were mainly represented as a set of idiosyncratic descriptive
assertions made by the inventor. Understanding the assertions and repli-
cating the data were only possible for the inventor. Therefore technology
transfer was quite impossible at that time. Moreover, the predicative
field of an invention was narrow and fixed. It applied only to the events
described in the assertions. There was no possibility of enlarging the
semantic meaning of the assertions, that is, of enlarging the field of the
application of the invention in order to produce further innovations. As
a result, the law of diminishing returns manifested itself very quickly and
effectively. Soon the economic exploitation of the invention reached its
acme, and the diminishing returns followed. Only a knowledge that was
based not on descriptive assertions but on explanatory ones could provide
the opportunity to enlarge and expand an invention, to generate corollary
innovations and, thus, to invalidate the law of diminishing returns. This
economic motive, among others, pushed inventors and, mainly, entrepre-
neurs to look for the explanatory basis of an invention, that is, to pursue
collaborations with university labs and to establish internal research and
development labs (Mowery and Rosenberg, 1989; Rosenberg and Birdzell,
1986).

Explanatory
These assertions, contrary to descriptive ones, have a predicative field that
is wide and unfixed. They apply to past and future events and, in some
cases (e.g. theories), to events that are not considered by the discoverer.
They can therefore allow the prediction of novel facts. These goals are
achieved because of the syntactic and semantic complexity and flexibility
of explanatory assertions. Universal or probabilistic assertions, such as
the inductive generalization of singular observations (e.g. ‘all crows are
black’ or ‘a large percentage of crows are black’) are the closest to descrip-
tive assertions. They have little complexity and their application outside
the predicative field is null. In fact, their explanatory and predictive power
is narrow, and the phenomenon is explained in terms of the input–output
relations of a ‘black box’ (Viale, 2008). In contrast, theories and models
tend to represent inner parts of a phenomenon. Usually, hypothetical
entities are introduced that have no direct empirical meaning. Theoretical
entities are then linked indirectly to observations through bridge principles
or connecting statements. Models and metaphors often serve as heuristic
devices used to reason more easily about the theory. The complexity,
semantic richness and plasticity of a theory allow it to have wider applica-
tions than empirical generalizations. Moreover, theories and models tend
not to explain a phenomenon in a ‘black box’ way, but to represent the
inner mechanisms that connect input to output. Knowing the inner causal
mechanisms allows for better management of variables that can change
the output. Therefore they offer better technological usage.

Inductive generalizations were the typical assertions made by individual
inventors during the First Industrial Revolution. Compared to descrip-
tive assertions, they represent progress because they lend themselves to
greater generalization. They thus avoid being highly idiosyncratic and, in
principle, can be transferred to other situations. Nevertheless, inductive
generalizations are narrow in their epistemological meaning and don’t
allow further enlargement of the invention. This carries with it the inevita-
ble consequence of their inability to generate other innovations. Therefore
inductive generalizations remain locked within the law of diminishing returns. In contrast, theories attempting to give causal explanations of an invention offered the possibility of invalidating the law of diminishing returns. They
opened the black box of the invention and allowed researchers to better
manipulate the variables involved in order to produce different outputs, or
rather, different inventions. A better understanding of inventions through
the discovery of their scientific theoretical bases began to be pursued
during and after the Second Industrial Revolution.
To better exemplify the relations between descriptive assertions, empiri-
cal generalizations and theories in technological innovation, I now describe
a historical case (Viale, 2008, pp. 23–5; Rosenberg and Birdzell, 1986).
At the end of the eighteenth century and the beginning of the nineteenth,
the growth in urban populations, as people moved out of the country to
search for work in the city, posed increasing problems for the provision-
ing of food. Long distances, adverse weather conditions and interruptions
in supplies as a result of social and political unrest meant that food was
often rotten by the time it reached its destination. The authorities urgently
needed to find a way to preserve food. In 1795, at the height of the French
Revolution, Nicolas Appert, a French confectioner who had been testing
various methods of preserving edibles using champagne bottles, found a
solution. He placed the bottles containing the food in boiling water for a
certain length of time, ensuring that the seal was airtight. This stopped
the food inside the bottle from fermenting and spoiling. This apparently
commonplace discovery would be of fundamental importance in years
to come and earned Appert countless honours, including a major award
from the Napoleonic Society for the Encouragement of Industry, which
was particularly interested in new victualling techniques for the army. For
many years, the developments generated by the original invention were
of limited application, such as the use of tin-coated steel containers intro-
duced in 1810. When Appert developed his method, he was not aware of
the physical, chemical and biological processes that prevented deteriora-
tion once the food had been heated. His invention was a typical example
of know-how introduced through ‘trial and error’. The extension of the
invention into process innovation was therefore confined to descriptive
knowledge and to an empirical generalization. It was possible to test new
containers or to try to establish a link between the change in tempera-
ture, the length of time the container was placed in the hot water and the
effects on the various bottled foods, and to then draw up specific rules for
the preservation of food. However, this was a random, time-consuming
approach, involving endless possible combinations of factors and lacking
any real capacity to establish a solid basis for the standardization of the
invention. Had it been a patent, it would have been a circumscribed inno-
vation, whose returns would have been high for a limited initial period
and would then have gradually decreased in the absence of developments
and expansions of the invention itself.4 The scientific explanation came
some time later, in 1873. Louis Pasteur discovered the function of bacte-
ria in certain types of biological activity, such as in the fermentation and
deterioration of food. Microorganisms are the agents that make it difficult
to preserve fresh food, and heat has the power to destroy them. Once the
scientific explanation was known, chemists, biochemists and bacteriolo-
gists were able to study the effects of the multiple factors involved in food
spoilage: ‘food composition, storage combinations, the specific microor-
ganisms, their concentration and sensitivity to temperature, oxygen levels,
nutritional elements available and the presence or absence of growth
inhibitors’ (Rosenberg and Birdzell, 1986; Italian translation 1988, pp.
300–301). These findings and many others expanded the scope of the inno-
vation beyond its original confines. The method was applied to varieties of
fruit, vegetables and, later, meats that could be heated. The most suitable
type of container was identified, and the effects of canning on the food’s
flavour, texture, colour and nutritional value were characterized. As often
happens when the scientific bases of an invention are identified, the inno-
vation generated a cascade effect of developments in related fields, such as
insulating materials, conserving-agent chemistry, and genetics and agri-
culture for the selection and cultivation of varieties of fruit and vegetables
better suited to preservation processes.
Why is it that the scientific explanation for an invention can expand
innovation capacity? To answer this question, reference can again be made
to Appert’s invention and Pasteur’s explanation. When a scientific expla-
nation is produced for a phenomenon, two results are obtained. First, a
causal relationship is established at a more general level. Second, once a
causal agent for a phenomenon has been identified, its empirical character-
istics can be analysed. As far as the first result is concerned, the microbic
explanation furnished by Pasteur does not apply simply to the specific phe-
nomenon of putrefaction in fruit and vegetables; bacteria have a far more
general ability to act on other foods and to cause pathologies in people
and animals. The greater generality of the causal explanation compared
with the ‘local’ explanation – the relationship between heat and the preser-
vation of food – means that the innovation can be applied on a wider scale.
The use of heat to destroy microbes means that it is possible to preserve
not only fruit and vegetables, but meat as well, to sterilize milk and water,
to prepare surgical instruments before an operation, to protect the human
body from bacterial infection (by raising body temperature) and so on. All
this knowledge about the role of heat in relation to microbes can be applied
in the development of new products and new innovative processes, from
tinned meat to autoclaves to sterilized scalpels. As to the second result,
once a causal agent has been identified, it can be characterized and, in the
case of Pasteur’s microbes, other methods can be developed to neutralize
or utilize them. An analysis of a causal agent begins by identifying all pos-
sible varieties. Frequently, as in the case of microbes, the natural category
identified by the scientific discovery comprises a huge range of entities.
And each microbe – bacterium, fungus, yeast, and so on – presents differ-
ent characteristics depending on the natural environment. Some of these
properties can be harnessed for innovative applications. Yeasts and bacilli,
for example, are used to produce many kinds of food, from bread and beer
to wine and yoghurt; bacteria are used in waste disposal. And this has led
scientists, with the advent of biotechnology, to attempt to transform the
genetic code of microorganisms in order to exploit their metabolic reac-
tions for useful purposes. Returning to our starting point, the preservation
of food, once the agent responsible for putrefaction had been identified, it
also became possible to extend the range of methods used to neutralize it.
It was eventually discovered that microbes could also be destroyed with
ultraviolet light, with alcohol or with other substances, which, for a variety
of reasons, would subsequently be termed disinfectants.
So to answer our opening question, the scientific explanation for an
invention expands the development potential of the original innovation
because it ‘reduces’ the ontological level of the causes and extends the
predicative reach of the explanation. Put simply, if the phenomenon to
be explained is a ‘black box’, the explanation identifies the causal mecha-
nisms inside the box (‘reduction’ of the ontological level) that are common
to other black boxes (extension of the ‘predicative reach’ of the explana-
tion). Consequently, it is possible to develop many other applications or
micro-inventions, some of which may constitute a product or process
innovation. Innovation capacity cannot be expanded, however, when
the knowledge that generates the invention is simply an empirical gener-
alization describing the local relationship between antecedent and causal
consequent (in the example of food preservation, the relationship between
heat and the absence of deterioration). In this case, knowledge merely
describes the external reality of the black box, that is, the relationship
between input (heat) and output (absence of deterioration); it does not
extend to the internal causal mechanisms and the processes that generate
the causal relationship. It is of specific, local value, and may be applied
to other contexts or manipulated to generate other applications to only a
very limited degree.
The knowledge inherent in Appert’s invention, which can be described
as an empirical generalization, is regarded as genuine scientific knowledge
by some authors (Mokyr, 2002a, Italian translation 2004). We do not want
to get involved here in the ongoing epistemological dispute over what
constitutes scientific knowledge (Viale, 1991): accidental generalizations
of ‘local’ value only (e.g. the statement ‘the pebbles in this box are black’),
empirical generalizations of ‘universal’ value (e.g. Appert’s invention) and
causal nomological universals (e.g. a theory such as Pasteur’s discovery).
The point to stress is that although an empirical generalization is ‘useful’
in generating technological innovation (useful in the sense adopted by
Mokyr, 2002b, p. 25, derived from Kuznets, 1965, pp. 84–7), it does not
possess the generality and ontological depth that permit the potential of
the innovation to be easily expanded in the way that Pasteur’s discovery
produced multiple innovative effects. In conclusion, after Pasteur’s dis-
covery of the scientific basis of Appert’s invention, a situation of ‘increas-
ing economic returns’ developed, driven by the gradual expansion of the
potential of the innovation and a causal concatenation of micro-inventions
and innovations in related areas. This could be described as a recursive
cascade phenomenon, or as a ‘dual’ system (Kauffman, 1995), where the
explanation of the causal mechanism for putrefaction gave rise to a tree
structure of other scientific problems whose solution would generate new
technological applications and innovations, and also raise new problems
in need of a solution.
The example of Appert’s invention versus Pasteur’s discovery is also
revealed in another phenomenon. The discovery of the scientific basis
of an invention allows the horizontal enlargement of the invention into
areas different from the original one (e.g. from alimentation to hygiene
and health). The interdisciplinarity of inventive activities has grown
progressively from the Second Industrial Revolution until now, with the
recent birth of the converging technology programme (National Science
Foundation, 2002). The new technologies are often the result of the expan-
sion of a theory outside its original borders. This phenomenon implies the
participation of different disciplines and specializations in order to be able
to understand, grasp and exploit the application of the theory. The strong
interdisciplinarity of current inventions implies a great division of expert
labour and increased collaboration among different experts in various
disciplines. Thus, only complex organizations supporting many different
experts can cope with the demands entailed in the strong interdisciplinarity
of current inventive activity.

2 Cognitive Mode of Ontic Knowledge

The cognitive approach to science (Giere, 1988; Viale, 1991; Carruthers et
al., 2002; Johnson-Laird, 2008) considers scientific activity as a dynamic
and interactive process between mental representation and processing
on the one hand, and external representation in some media by some
language on the other. According to this approach, scientific knowledge
has two dimensions: the mental representations of a natural or social
phenomenon; and its linguistic external representation. The first dimen-
sion includes the mental models stemming from perceptive and memory
input and from their cognitive processing. This cognitive processing is
mainly inductive, deductive or abductive. The models are realized by a set
of rules: heuristics, explicit rules and algorithms. The cognitive processing
and progressive shaping of the mental representations of a natural phe-
nomenon utilize external representations in natural or formal language.
The continuous interaction between the internal mental representation
and the external linguistic one induces the scientist to generate two prod-
ucts: the mental model of the phenomenon and its external propositional
representation.
What is the nature of the representation of knowledge in the mind? It
seems to be different in the case of declarative (ontic) knowledge than in
the case of procedural (deontic) knowledge. The first is represented by
networks, while the second is represented by production systems. The
ACT-R (Adaptive Control of Thought–Rational) networks of Anderson
(1983, 1996) include images of objects and corresponding spatial con-
figurations and relationships; temporal information, such as relationships
involving the sequencing of actions, events and the order in which items
appear; and information about statistical regularities in the environ-
ment. As with semantic networks (Collins and Quillian, 1969) or schemas
(Barsalou, 2000), there is a mechanism for retrieving information and a
structure for storing it. In all network models, a node represents a piece of
information that can be activated by external stimuli, such as sensations,
or by internal stimuli, such as memories or thought processes. Given each
node’s receptivity to stimulation from neighbouring nodes, activation
can easily spread from one node to another. Of course, as more nodes are
activated and the spread of activation reaches greater distances from the
initial source of the activation, the activation weakens. In other words,
when a concept or a set of concepts that constitutes a theory contains a
wide and dense hierarchy of interconnected nodes, the connection of one
node to a distant node will be difficult to detect. It will, therefore, be diffi-
cult to pay the same attention to all the consequences of a given assertion.
For example (Sternberg, 2009), as the conceptual category of a predicate
(e.g. ‘animal’) becomes more hierarchically remote from the category of
the subject of the statement (e.g. ‘robin’), people generally take longer to
verify a true statement (e.g. ‘a robin is an animal’) in comparison with a
statement that implies a less hierarchically remote category (e.g. ‘a robin
is a bird’). Moreover, since working memory can process only a limited
amount of information (according to Miller's magical number of 7 ± 2 items), a single mind cannot compute a large amount of structured information
or too many complex concepts, such as those contained in theories. These
cognitive aspects explain various features of knowledge production and
capitalization in science and technology.
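Before turning to those features, a minimal sketch may help to fix the spreading-activation mechanism just described; the toy network, decay factor and threshold below are illustrative assumptions, not parameters taken from ACT-R or from the studies cited above.

```python
# Illustrative sketch: spreading activation over a toy semantic network.
# Activation decays at every hop, so concepts hierarchically remote from the
# stimulus (e.g. 'animal' from 'robin') receive weaker activation than close
# ones (e.g. 'bird').

from collections import defaultdict

# Hypothetical network: each node lists its neighbouring nodes.
network = {
    'robin':  ['bird'],
    'bird':   ['robin', 'animal', 'canary'],
    'canary': ['bird'],
    'animal': ['bird', 'mammal'],
    'mammal': ['animal'],
}

def spread_activation(source, initial=1.0, decay=0.5, threshold=0.05):
    """Breadth-first spread of activation from a stimulated node.

    Activation is multiplied by `decay` at each hop, so remote nodes end up
    with little activation, a rough analogue of the longer verification
    times observed for hierarchically distant statements.
    """
    activation = defaultdict(float)
    activation[source] = initial
    frontier = [(source, initial)]
    while frontier:
        node, level = frontier.pop(0)
        next_level = level * decay
        if next_level < threshold:
            continue
        for neighbour in network.get(node, []):
            if next_level > activation[neighbour]:
                activation[neighbour] = next_level
                frontier.append((neighbour, next_level))
    return dict(activation)

print(spread_activation('robin'))
# {'robin': 1.0, 'bird': 0.5, 'animal': 0.25, 'canary': 0.25, 'mammal': 0.125}
```

In this toy run activation halves at every hop, so 'animal' receives a quarter of the activation of 'robin' itself, a rough analogue of the slower verification of hierarchically remote statements.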
First, the great importance given to external representation in natural
and formal language and the institutional value of publication satisfy
two goals: because of the natural limitations of memory, these serve as
memory devices, useful for allowing the cognitive processing of perceptive
and memory input; because of the complexity of concepts and the need for
different minds working within the same subject, these are social devices,
useful for allowing the communication of knowledge and the interaction
and collaboration among peers.
Second, before and during the First Industrial Revolution, the compu-
tational effort of inventors was made apparent primarily in their percep-
tual ability in detecting relevant features of the functioning of machines
and prototypes and in elaborating mental figurative concepts or models
that were depicted externally in diagrams, designs, figures, flow charts,
drafts, sketches and other representations. The single mind of an inventor
could cope with this computational burden. Interaction with other sub-
jects consisted mainly of that with artisans and workers in order to prepare
and tune the parts of a machine or of that with apprentices involving
knowledge transfer. Few theoretical concepts were needed, and cognitive
activity was focused on procedural knowledge (i.e. practical know-how
represented, mentally, by production systems) and simple declarative
knowledge (i.e. simple schemes that generalize physical phenomena, like
the Appert scheme involving the relation between heat and food preser-
vation). The situation changes dramatically after the Second Industrial
Revolution with the growing role of scientific research, particularly in the
life sciences. Conceptual categories increase in number; concepts become
wider with many semantic nodes; and there are increasing overlaps among
different concepts. One mind alone cannot cope with this increased com-
plexity, so a growing selective pressure to share the computational burden
between different minds arose. The inadequacy of a single mind to manage
this conceptual complexity brought about the emergence of the division
of expert labour, or in other words, the birth of specializations, collective
organizations of research and different roles and areas of expertise.5
Third, knowledge complexity and limited cognition explain the emer-
gence of many institutional phenomena in scientific and technological
research, such as the importance of publication, the birth of disciplines
and specializations, the division of labour and the growth in the size of
organizations. What were the effects of these emergent phenomena on
the capitalization of knowledge? While the inventor in the First Industrial
Revolution could capitalize his knowledge by ‘selling his mind’ and the
incomplete knowledge represented in the patent or in the draft, since the
Second Industrial Revolution, many minds now share different pieces
of knowledge that can fill the gaps in knowledge contained in the patent
or publication. This is particularly true in technological fields, where the
science push dimension is strong. In an emerging technology like biotech-
nology, nanotechnology, ICT, new materials or the converging technologies
(National Science Foundation, 2002), the complexity
of knowledge and its differentiation lead to interdisciplinary organiza-
tions and collaborations and to the creation of hybrid organizations.
Knowledge contained in a formal document, be it patent, publication or
working paper, is not the full representation of the knowledge contained
in the invention. There are tacit aspects of the invention that are crucial to
its transfer and reproduction and are linked to the particular conceptual
interpretation and understanding of the invention occasioned by the pecu-
liar background knowledge and cognitive rules of the inventors (Balconi et
al., 2007; Pozzali and Viale, 2007). Therefore the only way to allow transfer
is to create hybrid organizations that put together, face-to-face, the varied
expertise of inventors with that of entrepreneurs and industrial researchers
aiming to capitalize knowledge through successful innovations.

BACKGROUND KNOWLEDGE AND COGNITIVE RULES

An important part of knowledge is not related to the representation of
the physical and human world but to the ways in which we can interact
with it. Deontic knowledge corresponds to the universe of norms. Rules,
prescriptions, permissions, technical norms, customs, moral principles and
ideal rules (von Wright, 1963) are the main categories of norms. Various
categories of norms are implied in research and technological innovation.
Customs or social norms represent the background knowledge that guides
and gives meaning to the behaviour of scientists and industrial research-
ers. Some norms are moral principles and values that correspond to a
professional deontology or academic ethos (Merton, 1973). They represent
norms for moral actions and are slightly different from ideal rules (Moore,
1922), which are a way of characterizing a model of goodness and virtue (as
in the Greek meaning of aretē), or in this case, of what it means to be a good
researcher. Prescriptions and regulations are the legal norms established
by public authorities that constrain research activity and opportunities for
the capitalization of knowledge. Rules are mainly identifiable in reasoning
cognitive rules applied in solving problems, drawing inferences, making
computations and so forth. Lastly, technical norms are those methodo-
logical norms and techniques that characterize the research methodology
and procedures in generating and reproducing a given innovation. From
this point of view, it is possible to assert that a scientific theory or a tech-
nological prototype is a mixture of ontic knowledge (propositions and
mental models) and deontic knowledge (values, principles, methodologies,
techniques, practical rules and so on).
Deontic knowledge has been examined analytically as involving a logic
of action by some authors (e.g. von Wright, 1963; but see also the work of
Davidson, Chisholm and Kenny).6 An analytic approach has been applied
primarily to the representation of legal and moral norms. For the purposes
of this chapter, the analytic mode of deontic knowledge doesn’t appear
relevant. First, it is difficult to apply a truth-functional logic to norms
whose ontological existence is not clear. Second, unlike ontic knowledge,
where the knowledge relevant for technological innovation is, to a certain
extent, expressed in some language and transcribed in some media, deontic
knowledge relevant for technological innovation is mainly present at a
socio-psychological level.
As we shall see, norms are greatly involved in shaping the behaviours
responsible for knowledge capitalization. Moral principles and social
values are part of the background knowledge that influences the social
behaviour of scientists and entrepreneurs, as well as the modalities of their
interaction and collaboration. They play an important role in determin-
ing different styles of thinking, problem-solving, reasoning and decision-
making between academic and industrial researchers (Viale, 2009) that
can also have an effect on shaping the institutions and organizations for
capitalizing knowledge.
Before analysing background knowledge and cognitive rules, I wish to
focus briefly on technical norms. According to von Wright (1963), techni-
cal norms correspond to the means of reaching a given goal. Analytically,
they can be represented as conditional assertions of the elliptical form
‘if p, then q’ where the antecedent p is characterized by the goal and the
consequent q by the action that should be taken to reach the goal. They
represent the bridge between ontic and deontic knowledge. In fact, the
antecedent is characterized not only explicitly by the goal but also implic-
itly by the empirical initial conditions and knowledge that allow the selec-
tion of the proper action. In other words, a technical norm would be better
represented in the following way: if (p & a), then q where a represents the
empirical initial conditions and theoretical knowledge for action. From
this analytical representation of technical norms we infer certain features:
(1) the more a corresponds to complex theoretical knowledge, the more
computationally complex the application of the norm will be; (2) the more
a corresponds to descriptive assertions, the more difficult it will be to gen-
eralize the understanding and application of the norm; and (3) the more
the relevant knowledge contained in a is characterized by tacit features,
the more difficult it will be to generalize to others the understanding and
application of the norm. Technical norms corresponding to the procedures
and techniques needed to generate an invention can manifest these fea-
tures. Inventions from the First Industrial Revolution, such as Appert’s,
presented technical norms characterized by descriptive assertions and tacit
knowledge (mainly of the competential type). Thus knowledge transfer
was very difficult, and there was no need for the division of expert labour.
Inventions after the Second Industrial Revolution, however, involved a
growing share of theoretical knowledge and a decrease in competential
tacit knowledge; therefore the transfer of knowledge could be, in theory,
easier. In any case, it required a greater amount of expertise that was pos-
sible only with a complex division of expert labour. This was particularly
necessary in disciplines such as physics and chemistry, where the particular
ontology and mathematical language to represent the phenomena allow
the generation of complex theoretical structures.
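To fix ideas, the two formulations of a technical norm just discussed can be restated as a small display. The decomposition of a into a conjunction of assertions is added here purely for illustration and is not part of von Wright’s notation:

\[ \text{elliptical form: } p \rightarrow q \qquad\qquad \text{expanded form: } (p \wedge a) \rightarrow q, \quad a = a_1 \wedge a_2 \wedge \cdots \wedge a_n \]

where p stands for the goal, q for the action to be taken and a for the empirical initial conditions and theoretical knowledge presupposed by the norm. Read in this way, features (1)–(3) say that applying the norm becomes computationally heavier as the conjuncts a_i grow in number and theoretical complexity, and that its understanding and application generalize less well the more those conjuncts are merely descriptive or tacit.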
From a cognitive point of view, technical norms correspond to ‘prag-
matic schemes’ (Cheng and Holyoak, 1985, 1989) that have the form of
production systems composed of condition–action rules (corresponding
to conditional assertions in logic and to production rules in artificial intel-
ligence). Pragmatic schemes are a set of abstract and context-dependent
rules corresponding to actions and goals relevant from a pragmatic point
of view. According to the analytical formulation of von Wright (1963),
the main cognitive rules in pragmatic schemes are those of permission
and obligation. More generally, a schema (an evolution of the ‘semantic
network’ of Collins and Quillian, 1969) is a structured representation
that captures the information that typically applies to a situation or event
(Barsalou, 2000). Schemas establish a set of relations that link proper-
ties. For example, the schema for a birthday party might include guests,
gifts, a cake and so on. The structure of a birthday party is that the guests
give gifts to the birthday celebrant, everyone eats cake and so on. The
pragmatic schema links information about the world with the goal to be
attained according to this information.
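As a purely illustrative sketch, the production-system reading of a pragmatic schema can be made concrete in a few lines of code. The four condition–action rules below paraphrase the permission schema studied by Cheng and Holyoak (1985); the function name, data structure and example situation are hypothetical conveniences, not anything proposed in this chapter.

# Toy rendering of a pragmatic schema as a production system of
# condition-action rules (paraphrasing Cheng and Holyoak's permission schema).
permission_schema = [
    ("the action is to be taken", "the precondition must be satisfied"),
    ("the action is not to be taken", "the precondition need not be satisfied"),
    ("the precondition is satisfied", "the action may be taken"),
    ("the precondition is not satisfied", "the action must not be taken"),
]

def fired_rules(facts, schema):
    # Return the deontic conclusions licensed by the rules whose
    # conditions figure among the facts currently represented.
    return [action for condition, action in schema if condition in facts]

# Hypothetical situation: the agent knows only that the precondition fails.
print(fired_rules({"the precondition is not satisfied"}, permission_schema))
# -> ['the action must not be taken']

The point of the sketch is simply that such rules are abstract (they mention no specific content) yet context-sensitive: they fire only once a situation has been represented in terms of actions, preconditions, permissions and obligations.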
A pragmatic schema can serve as a cognitive theory for most deontic
knowledge relevant in innovation. It can represent values and princi-
ples characterizing background knowledge. Social norms ruling research
behaviour, moral principles characterizing the ethos of the academic
community, pragmatic goals driving the decision-making of industrial
researchers and social values given to variables such as time, risk, money
and property can be represented by pragmatic schemes. These schemes
also seem to influence the application of cognitive rules, such as those used
in deduction, induction, causality, decision-making and so forth. The topic
is controversial. The dependence of cognitive rules on pragmatic schemes
is not justified by theories supporting an autonomous syntactic mental
logic. According to these theories (Beth and Piaget, 1961; Braine, 1978;
Rumain et al., 1983), the mind contains a natural deductive logic (which
for Piaget amounts to the propositional calculus) that allows the inference of
some things and not others. For example, the human mind is able to apply
modus ponens but not modus tollens. In the same way, we could also pre-
suppose the existence of a natural probability calculus, a causal reasoning
rule and a risk assessment rule, among others. Many empirical studies and
several good theories give alternative explanations that neglect the exist-
ence of mental logic and of other syntactic rules (for the pragmatic scheme
theories, see Cheng and Holyoak, 1985, 1989; Cheng and Nisbett, 1993;
for the mental models theory, see Johnson-Laird, 1983, 2008; for the con-
ceptual semantic theory see Jackendoff, 2007). The first point is that there
are many rules that are not applied when the format is abstract but are
applied when the format is pragmatic – that is, when it is linked to every-
day experience. For example, the solution of the ‘selection task problem’,
namely, the successful application of modus tollens, is possible only when
the questions are not abstract but are linked to problems of everyday life
(Politzer, 1986; Politzer and Nguyen-Xuan, 1992). The second point is that
most of the time rules are implicitly learned through pragmatic experience
(Reber, 1993; Cleeremans, 1995; Cleeremans et al., 1998). The phenom-
enon of implicit learning seems so strong that it occurs even when the
cognitive faculties are compromised. From recent studies (Grossman et
al., 2003) conducted with Alzheimer patients, it appears that they are able
to learn rules implicitly but not explicitly. Lastly, the rules that are learnt
explicitly in a class or that are part of the inferential repertoire of experts
are often not applied in everyday life or in tests based on intuition (see the
experiments with statisticians of Tversky and Kahneman, 1971).
At the same time, pragmatic experience and the meaning that people
give to social and natural events is driven by background knowledge
(Searle, 1995, 2008; Smith and Kosslyn, 2007). The values, principles
and categories of background knowledge, stored in memory, allow us to
interpret reality, to make inferences and to act, that is, to have a pragmatic
experience. Therefore background knowledge affects implicit learning and
the application of cognitive rules through the pragmatic and semantic
dimension of reasoning and decision-making.7 What seems likely is that
the relationships within schemas and among different schemas allow us
to make inferences, that is, they correspond to implicit cognitive rules.
For example, let us consider our schema for glass. It specifies that if an
object made of glass falls onto a hard surface, the object may break. This
is an example of causal inference. Similar schemas can allow you to make
inductive, deductive or analogical inferences, to solve problems and to take
decisions (Markman and Gentner, 2001; Ross, 1996). In conclusion, the
schema theory seems to be a good candidate to explain the dependence of
cognitive rules on background knowledge. If this is the case, we can expect
that different cognitive rules should correspond to different background
knowledge, characterizing, in this way, different cognitive styles. Nisbett
(2003) has shown that the relation between background knowledge and
cognitive rules supports the differences of thinking and reasoning between
Americans and East Asians. These differences can explain the difficulties
in reciprocal understanding and cooperation between people of different
cultures. If this is the situation in industrial and academic research, we
can expect obstacles to collaboration and the transfer of knowledge, and
the consequent emergence of institutions and organizations dedicated to
overcoming these obstacles to the capitalization of knowledge.

THE EFFECTS OF DIFFERENT DEONTIC KNOWLEDGE ON ACADEMY–INDUSTRY RELATIONS

Usually, the obstacles to the collaboration between universities and
companies are analysed by comparing entrepreneurs and managers with
academic scientists (plus the academic technology transfer officers, as in
the case of Siegel et al., 1999). In my opinion, this choice is correct in the
case of the transfer of patents and in licensing technology, because here
the relationship is between an academic scientist and an entrepreneur
or manager, often mediated by an academic technology transfer officer
(TTO). The situation is different in the collaboration between a university
and industrial labs in order to achieve a common goal, such as the devel-
opment of a prototype, the invention of a new technology, the solution to
an industrial problem and so on. In these cases, interaction occurs mainly
between academic and industrial researchers. Entrepreneurs, managers
and TTOs might only play the role of establishing and facilitating the
relationship. Since academy–industry relations are not simply reducible
to patents and licences (Agrawal and Henderson, 2002), but prioritize
joint research collaboration, I prefer to focus on academic and industrial
researcher behaviours. Previous studies on obstacles between universities
and companies analysed only superficial economic, legal and organiza-
tional aspects and focused mainly on the transfer of patents and licences
(Nooteboom et al., 2007; Siegel et al., 1999). Since research collaboration
implies a complex phenomenon of linguistic and cognitive coordination
and adjustment among members of the research group, I think that a
deeper cognitive investigation into this dimension might offer some inter-
esting answers to the academy–industry problem. The main hypothesis is
that there can be different cognitive styles in thinking, problem-solving,
reasoning and decision-making that can hamper collaboration between
academic and industrial researchers. These different cognitive styles are
linked and mostly determined by different sets of values and norms that
are part of background knowledge (as we have seen above). Different
background knowledge is also responsible for poor linguistic coordina-
tion, misunderstanding and for impeding the successful psychological
interaction of the group. The general hypotheses that will be inferred
in the following represent a research programme of empirical tests to
examine the effects of cognitive styles across different scientific and technologi-
cal domains and geographical contexts (for a more complete analysis see
Viale, 2009).

1 Background Knowledge

What is the difference in background knowledge as between the university
and industrial labs, and how can this influence cognitive styles?
Studies in the sociology of science have focused on the values and
principles that drive scientific and industrial research. Academic research
seems to be governed by a set of norms and values that are close to the
Mertonian ethos (Merton, 1973). Qualities such as communitarianism,
scepticism, originality, disinterestedness, universalism and so on were
proposed by Robert Merton as the social norms of the scientific com-
munity. He justified the proposal theoretically. Other authors, such as
Mitroff (1974), criticized the Mertonian ethos on an empirical basis. He
discovered that scientists often follow Mertonian norms, but that there
are, nevertheless, cases in which scientists seem to follow the opposite of
these norms. More recent studies (Broesterhuizen and Rip, 1984) confirm
most of Merton’s norms. These studies assert that research should be
Strategic, founded on Hybrid and interdisciplinary communities, able
to stimulate Innovative critique, and should be Public and based on
Scepticism (the acronym SHIPS). Other recent studies (Siegel et al., 1999;
Viale, 2001) confirm the presence of social norms that are reminiscent of
the Mertonian ethos. Scientists believe in the pursuit of knowledge per
se, in the innovative role of critique, in the universal dimension of the
scientific enterprise and in science as a public good. They believe in sci-
entific method based on empirical testing, the comparison of hypotheses,
enhanced problem-solving and truth as a representation of the world
(Viale, 2001, pp. 216–19). But the simple fact that scientists have these
beliefs doesn’t prove that they act accordingly. Beliefs can be put on hold
by contingent interests and opportunistic reasons. They can also represent
the invented image that scientists wish to show to society. They can vary
from one discipline and specialization to another. Nevertheless, the pres-
ence of these beliefs seems to characterize the cultural identity of academic
scientists. They constitute part of their background knowledge and can,
therefore, influence the implicit cognitive rules for reasoning and decision-
making. On the contrary, industrial researchers are driven by norms that
are quite different from academic ones. They can be summarized by the
acronym PLACE (Ziman, 1987): Proprietary, Local, Authoritarian, Com-
missioned and Expert. Research is commissioned by the company, which
has ownership of the results, which can’t be diffused, and which are valid
locally to improve the competitiveness of the company. The researchers
are subjected to the authoritarian decisions of the company and develop a
particular expertise valid locally. PLACE is a set of norms and values that
characterizes the cultural identity of industrial researchers. These norms
constitute part of their background knowledge and may influence the
inferential processes of reasoning and decision-making.
In summary, the state of the art of studies on social norms in academic
and industrial research seems insufficient and empirically obsolete. A new
empirical study of norms contained in background knowledge is essential.
This study should test for the main features characterizing the cultural
identity of academic and industrial researchers as established by previous
studies. These main features can be summarized as follows.

● Criticism versus dogmatism: academic researchers follow the norm
of systematic critique, scepticism and falsificatory control of knowl-
edge produced by colleagues; industrial researchers aim at maintain-
ing knowledge that works in solving technological problems.
● Interest versus indifference: academic researchers are not impelled in
their activity primarily by economic interest but by epistemological
goals; industrial researchers are motivated mainly by economic ends
such as technological competitiveness, commercial primacy and
capital gain.
● Universalism versus localism: academic researchers believe in a uni-
versal audience of peers and in universal criteria of judgement that
can establish their reputation; industrial researchers think locally in
terms of both the audience and the criteria of judgement and social
promotion.
● Communitarianism versus exclusivism: academic researchers believe
in the open and public dimension of the pool of knowledge to which
they must contribute in order for it to increase; industrial research-
ers believe in the private and proprietary features of knowledge.

To the different backgrounds I should also add the different contingent
features of the contexts of decision-making (I refer here to the decision-
making context of research managers who are heads of a research unit or
of a research group) that become operational norms. The main features
are related to time, results and funding.
In a pure academic context,8 the time allowed for conducting research
is usually loose. There are certain temporal requirements when one is
working with funds coming from a public source (particularly in the
case of public contracts). However, in a contract with a public agency or
government department, the deadline is usually not as strict as that of a
private contract, and the requested results are not as well defined or
as specific to a particular product (e.g. a prototype or a new molecule or
theorem). Thus time constraints don’t weigh as heavily on the reason-
ing and decision-making processes of the researchers. In contrast, when
an academic researcher works with an industrial contract, the time con-
straints are similar to those of the corporate researcher. Moreover, in a
fixed given time, a precise set of results must be produced and presented
to the company. According to private law, the clauses of a contract with
a company can be very punitive for the researcher and the university if
the agreed contractual requirements are not met. In any case, the
consequences of suboptimal results from an academic working with a
company are less punitive than for a corporate researcher. For the latter,
the time pressure is heavier because the results, in a direct or semi-direct
way, are linked to the commercial survival of the company. Suboptimal
behaviour increases the risks to their career and job security. As a result,
the expectation of rapidly producing concrete, positive results
presses on them more heavily. Different environmental pressures may
generate a different adaptive psychology of time and a different adaptive
ontology of what the result of the research might be. In the case of aca-
demic research, time might be less discounted. That is, future events tend
not to be as underestimated as they may be in industrial research. The cor-
porate researcher might fall into the bias of time discounting and myopia
because of the overestimation of short-term results.
Even the ontology of an academic researcher in respect of the final prod-
ucts of the research might be different from that of a corporate researcher.
While the former is interested in a declarative ontology that aims at the
expression of the result in linguistic form (e.g. a report, a publication, a
speech and so on), the latter aims at an object ontology. The results for
the latter should be linked in a direct or indirect way to the creation of
an object (e.g. a new molecule, a new machine, a new material, or a new
process to produce them or a patent that describes the process of produc-
ing them).
The third operational norm concerns financial resources. In this case,
the problem does not concern the quantity of funding.
Funding for academic research is usually lower for each unit of research
(or, better, for each researcher) than for industrial research. The crucial
problem is the psychological weight of the funds and how much the funds
constrain and affect the reasoning and decision-making processes of the
researchers. In other words (all other things being equal), this involves the
amount of money at their disposal and the degree to which cognitive processes
and, in particular, attentional processes refer to a sort of value-for-money
judgement in deciding how to act. It is still a topic to be investigated, but
from this point of view, it seems that the psychological weight of money
on academic researchers is less than that on industrial researchers. Money
is perceived to have less value and, therefore, influences decision-making
less. The reasons for this different mental representation and evaluation
may derive from: (1) the way in which funding is communicated and the
ways it can constitute a decision frame (with more frequency and relevance
within the company because it is linked to important decisions concerning
the annual budget); (2) the symbolic representation of money (with much
greater emphasis in the company, whose raison d’être is the commercial
success of its products and increased earnings); (3) the social identity of
the researchers is linked more or less strongly to the level of their wage
(with wage level counting more heavily as an indicator of a
successful career in a private company than in the university). The differ-
ent psychological weight of money has been analysed by many authors,
and in particular by Thaler (1999).
To summarize, operational norms can be schematized in loose time
versus pressing time; undefined results versus well-defined results; and
financial lightness versus financial heaviness.
How can the values in background knowledge and operational norms
influence the implicit cognitive rules of reasoning and decision-making,
and how can they be an obstacle to collaboration between industrial and
academic researchers?
Many aspects of cognition are important in research activity. We can
say that every aspect is involved, from motor activity to perception, to
memory, to attention, to reasoning, to decision-making and so on. My
aim, however, is to focus on the cognitive obstacles to reciprocal commu-
nication, understanding, joint decision-making and coordination between
academic and corporate researchers, and how these might hinder their
collaboration.
I shall analyse briefly three dimensions of interaction: language, group
and inference (i.e. the cognitive rules in thinking, problem-solving, reason-
ing and decision-making).

2 Language

My focus is on the pragmatic aspects of language and communication.
To collaborate on a common project means to communicate, mainly by
natural language. To collaborate means to exchange information in order
to coordinate one’s own actions with those of others in the pursuit of a
common aim. This means ‘using language’, as the title of Clark’s book
(1996) suggests, in order to reach the established common goal. Any lin-
guistic act is at the same time an individual and a social act. It is individual
because it is the individual who by motor and cognitive activity articulates
the sounds that correspond to words and phrases, and who receives and
interprets these sounds. Or, in Goffman’s (1981) terminology on linguistic
roles, it is the subject that ‘vocalizes’, ‘formulates’, and ‘means’, and it is
another subject that ‘attends the vocalization, identifies the utterances
and understands the meaning’ (Clark, 1996, p. 21). It is social because
every linguistic act of a speaker has the aim of communicating something
to one or more addressees (even in the case of private settings where we
talk to ourselves, since here we ourselves play the role of an addressee).
In order to achieve this goal, there should be coordination between the
speaker’s meaning and the addressee’s understanding of the communica-
tion. However, meaning and understanding are based on shared knowl-
edge, beliefs and suppositions, namely, on shared background knowledge.
Therefore the first important point is that it is impossible for two or more
actors in a conversation to coordinate meaning and understanding without
reference to common background knowledge. ‘A common background is
the foundation for all joint actions, and that makes it essential to the
creation of the speaker’s meaning and addressee’s understanding as well’
(Clark, 1996, p. 14). A common background is shared by the members of
the same cultural community.
A second important point is that the coordination between meaning
and understanding is more effective when the same physical environment
is shared (e.g. the same room at a university or the same bench in a park)
and the vehicle of communication is as rich as possible. The environ-
ment represents a communicative frame that can influence meaning and
understanding. Moreover, gestures and facial expressions are rich in non-
linguistic information and are also very important aids in coordination.
From this point of view, face-to-face conversation is considered the basic
and most powerful setting for communication.
The third point is that the simpler and more direct the coordination,
the more effective the communication. There are different ways of com-
plicating communication. The roles of speaking and listening (see above
regarding linguistic roles) can be decoupled. The use of spokesmen, ghost
writers and translators is an example of decoupling. A spokeswoman for
a minister is only a vocalizer, while the formulation vocalized is the ghost
writer’s, and the meaning is the minister’s. Obviously, in this case, the
coordination of meaning and understanding becomes more difficult (and
even more so because it is an institutional setting with many addressees).
The non-verbal communication of the spokeswoman might be inconsist-
ent with the meaning of the minister, and the ghost writer might not be
able to formulate this meaning correctly. Moreover, in many types of dis-
course, such as plays, story telling, media news and reading, there is more
than one layer of action. The first layer is that of the real conversation.
The second layer concerns the hypothetical domain that is created by the
speaker (when he is describing an event). Through recursion there can be
further layers as well. For example, a play requires three layers: the first
is the real-world interaction among the actors, the second is the fictional
role of the actors; and the third is the communication with the audience.
In face-to-face conversation there is only one layer and no decoupling.
The roles of vocalizing, formulating and producing meaning are per-
formed by the same person. The domain of action identifies itself with the
conversation; coordination is direct without intermediaries. Thus face-to-
face conversation is the most effective way of coordinating meaning and
understanding, resulting in only minor distortions of meaning and fewer
misunderstandings. Academic and industrial researchers are members
of different cultural communities and, therefore, have different back-
ground knowledge. In the collaboration between academic and industrial
researchers, coordination between meanings and understandings can be
difficult if background knowledge is different. When this is the case, as we
have seen before, the result of the various linguistic settings will probably
be the distortion of meaning and an increase in misunderstanding. When
fundamental values are different (as in SHIPS versus PLACE), and also
when the operational norms of loose time versus pressing time, undefined
product versus well-defined product and financial lightness versus finan-
cial heaviness are different, it is impossible to transfer knowledge without
losing or distorting shares of meaning.
Moreover, difficulty in coordination will increase in settings that utilize
intermediaries between the academic inventor and the potential industrial
user (‘mediated settings’ in Clark, 1996, p. 5). These are cases in which
an intermediate technology transfer agent tries to transfer knowledge
from the university to corporate labs. In this case, there is a decoupling
of speech. The academic researcher is the one who formulates and gives
meaning to the linguistic message (also in a written format), while the
technology transfer (TT) agent is merely a vocalizer. As a result, there
may be frequent distortion of the original meaning, in particular when
the knowledge contains a large share of tacit knowledge. This distortion
is strengthened by the likely difference in background knowledge between
the TT agent and that of the other two actors in the transfer. TT agents are
members of a different cultural community (if they are professional, from
a private TT company) or come from different sub-communities inside the
university (if they are members of a TT office). Usually, they are neither
active academic researchers nor corporate researchers. Finally, the trans-
fer of technology can also be accompanied by the complexity of having
more than one domain of action. For example, if the relation between an
academic and an industrial researcher is not face-to-face, but is instead
mediated, there is an emergent second layer of discourse. This is the layer
of the story told by the intermediary about the original process and the
techniques needed to generate the technology invented by the academic
researchers. The story can also be communicated with the help of a written
setting, for example, a patent or publication. All three points show that
common background knowledge is essential for reciprocal understanding
and that face-to-face communication is a prerequisite for minimizing the
distortion of meaning and the misunderstandings that can undermine the
effectiveness of knowledge transfer.

3 Group

The second dimension of analysis is that of the group. When two or more
persons collaborate to solve a common problem, they elicit interesting
emergent phenomena. In theory, a group can be a powerful problem-
solver (Hinsz et al., 1997). But in order to be so, members of the group
must share information, models, values and cognitive processes (ibid.). It
is likely that heterogeneity of skill and knowledge is very useful for detect-
ing solutions more easily. Some authors have analysed the role of hetero-
geneity in cognitive tasks (e.g. the solution of a mathematical problem)
and the generation of ideas (e.g. the production of a new logo), and have
found a positive correlation between it and the successful completion of
these tasks (Jackson, 1992). In theory, this result seems very likely, since
finding a solution entails looking at the problem from different points of
view. Different perspectives allow the phenomenon of entrenched mental
set to be overcome; that is, the fixation on a strategy that normally works
well in solving many problems but that does not work well in solving
this particular problem (Sternberg, 2009). However, the type of diver-
sity that works concerns primarily cognitive skills or personality traits
(Jackson, 1992). In contrast, when diversity is based on values, social
categories and professional identity, it can hinder the problem-solving
ability of the group. This type of heterogeneity generates the categoriza-
tion of differences and similarities between the self and others, and results
in the emergent phenomenon of the conflict/distance between ‘ingroup’
and ‘outgroup’ (Van Knippenberg and Schippers, 2007). The relational
conflict/distance of ingroup versus outgroup is the most social expres-
sion of the negative impact of diversity of background knowledge on
group problem-solving. As was demonstrated by Manz and Neck (1995),
without a common background knowledge, there can be no sharing of
goals, of the social meaning of the work, of the criteria to assess and to
correct the ongoing activity, of foresight on the results or on the impact
of the results and so on. As described by the theory of ‘teamthink’ (Manz
and Neck, 1995), the establishment of an effective group in problem-
solving relies on the common sharing of values, beliefs, expectations and,
a priori, on the physical and social world. For example, academic and
industrial researchers present different approaches concerning disciplinary
identity. Academics have a strong faith in the ‘disciplinary matrix’ (Kuhn,
1962) composed of the members of a discipline with their particular set
of disciplinary knowledge and methods. In contrast, industrial research-
ers tend to be opportunistic both in using knowledge and in choosing
peers. They don’t feel the need to be a member of an invisible disciplinary
college of peers and instead choose à la carte which peers are helpful to
them and what knowledge is useful to attain the goal of the research. This
asymmetry between academic and corporate researchers is an obstacle
to the proper functioning of ‘teamthink’. The epistemological and social
referents are different, and communication here can resemble a dialogue
of the deaf. Lastly, there is the linguistic dimension. As we
have seen above, without common background knowledge, the coordina-
tion of meaning and understanding among the members of the group (i.e.
the fundamental basis of collaboration) is impossible. Moreover, without
common background knowledge, the pragmatic context of communica-
tion (Grice, 1989; Sperber and Wilson, 1986) doesn’t allow the generation
of correct automatic and non-automatic inferences between speaker and
addressee. For example, the addressee will not be able to generate proper
‘implicatures’ (Grice, 1989) to fill in the lack of information and the
elliptical features of the discourse.

4 Cognitive Rules

Finally, different background knowledge influences inference, that is,
the cognitive rules in thinking, problem-solving, reasoning and decision-
making activity. Different implicit cognitive rules mean asymmetry, asyn-
chrony and dissonance in the cognitive coordination among the members
of the research group. This generates obstacles to the transfer of knowl-
edge, to the application of academic expertise and knowledge to the indus-
trial goal, and to the development of an initial prototype or technological
idea towards a commercial end. I shall discuss this subject only briefly; it is
fully developed in Viale (2009).
There are two systems of thinking that affect the way we reason, decide
and solve problems. The first is the associative system, which involves
mental operations based on observed similarities and temporal contiguities
(Sloman, 1996). It can lead to speedy responses that are highly sensitive to
patterns and to general tendencies. This system corresponds to system 1 of
Kahneman (2003), and represents the intuitive dimension of thinking. The
second is the rule-based system, which involves manipulations based on
the relations among symbols (Sloman, 1996). This usually requires the use
of deliberate, slow procedures to reach conclusions. Through this system,
we carefully analyse relevant features of the available data based on rules
stored in memory. This corresponds to system 2 of Kahneman (2003). The
intuitive and analytical systems can produce different results in reasoning
and decision-making. The intuitive system is responsible for the biases and
errors of everyday-life reasoning, whereas the analytical system allows us
to reason according to the canons of rationality. The prevalence of one
of these two systems in the cognitive activity of academic and industrial
researchers will depend on contingent factors, such as the need to finish the
work quickly, and on the diverse styles of thinking. I hypothesize that the
operational norms of pressing time and well-defined results and the social
norms of dogmatism and localism will support a propensity towards the
activity of the intuitive system. In contrast, the operational norms of loose
time and undefined results, and the social norms of criticism and univer-
salism will support the activity of the analytical system. The role of time in
the activation of the two systems is evident. Industrial researchers are used
to following time limits and to giving value to time. Thus this operational
norm influences the speed of reasoning and decision-making and the acti-
vation of the intuitive system. The contrary happens in academic labs. The
operational norm concerning results seems less evident. Those operating
without the constraints of well-defined results have the ability to indulge
in slow and attentive ways of analysing the features of the variables and in
applying rule-based reasoning.
Those who must deliver a finished piece of work can’t stop to analyse
the details and must proceed quickly to the final results. The social norm
of criticism is more evident. The tendency to check, monitor and to criti-
cize results produced by other scientists strengthens the analytical attitude
in reasoning. Any form of control requires a slow and precise analysis of
the logical coherence, methodological fitness and empirical foundations
of a study. On the contrary, in corporate labs the aim is to use high-
quality knowledge for practical results and not to increase the knowledge
pool by overcoming previous hypotheses through control and critique.
Finally, the social norm of universalism versus localism is less evident.
Scientists believe in a universal dimension to their activity. The rules of the
scientific community should be clear and comprehensible to their peers.
Furthermore, the scientific method, reasoning style and methodological
techniques can’t be understood and followed by only a small and local
subset of scientists, but must be explicit to all in order to allow the dif-
fusion of their work to the entire community. Thus universality tends to
strengthen the analytical system of mind. The contrary happens where
there is no need for the explicitness of rules and where evaluation is made
locally by peers according to the success of the final product.
The other dimension concerns problem-solving. At the end of the 1950s,
Herbert Simon and his colleagues analysed the effect of professional
knowledge on problem representation. They discovered the phenomenon
of ‘selective perception’ (Dearborn and Simon, 1958), that is, the relation
between different professional roles and different problem representations.
In the case of industrial and academic scientists, I presume that selective
perception will be effective not only in relation to professional and discipli-
nary roles but also as regards social values and operational norms. These
norms and values might characterize the problem representation and,
therefore, might influence reasoning and decision-making. For example,
in representing the problem of the failure of a research programme, indus-
trial researchers might point more to variables like cost and time, whereas
academic scientists might point more towards an insufficiently critical
attitude and too local an approach.
In any case, it might prove interesting to analyse the time spent by aca-
demic and industrial researchers in problem representation. The hypoth-
esis is that time pressure together with an intuitive system of thinking,
might cause the industrial researchers to dedicate less time to problem
representation than academic researchers.
Time pressure can affect the entire problem-solving cycle, which includes
problem identification, definition of a problem, constructing a strategy for
problem-solving, organizing information about a problem, allocation of
resources, monitoring problem-solving, and evaluating problem-solving
(Sternberg, 2009). In particular, it might be interesting to analyse the effect
of pressing versus loose time on the monitoring and evaluation phases of
the cycle. An increase in time pressures could diminish the time devoted to
these phases. Dogmatism can shorten the time spent on monitoring and
evaluation, whereas criticism might lead to better and deeper monitoring
and evaluation of the problem solution.
Finally, time pressure might also have an effect on incubation. In order
to allow old associations resulting from negative transfer to weaken, one
needs to put a problem aside for a while without consciously thinking
about it. One allows for the possibility that the problem will be processed
subconsciously in order to find a solution. There are several possible
mechanisms for enhancing the beneficial effects of incubation (Sternberg,
2009). Incubation needs time. Thus the pressing-time norm of industrial
research may hinder success in problem-solving.
The third dimension concerns reasoning. Reasoning is the process of
drawing conclusions from principles and from evidence. In reasoning, we
move from what is already known to infer a new conclusion or to evaluate
a proposed conclusion. There are many features of reasoning that differ-
entiate academic from corporate scientists.
Probabilistic reasoning is aimed at updating an hypothesis according
to new empirical evidence. Kahneman and Tversky (1973) and Tversky
and Kahneman (1980, 1982a, 1982b) have proven experimentally that we
forget prior probability and give excessive weight to new data. According
to Bar-Hillel (1980), we give more weight to new data because we consider
them more relevant compared to old data. Relevance in this case might
mean that more affective or emotional weight is given to the data and,
consequently, that stronger attentional processes are focused on them. An oppo-
site conservative phenomenon happens when old data are more relevant.
In this case we tend to ignore new data. In the case of industrial research-
ers, it can be hypothesized that time pressure, financial weight and well-
defined results tend to give more relevance to new data. New experiments
are costly and should be an important step towards the conclusion of the
work. They are, therefore, more relevant and privileged by the mecha-
nisms of attention. In contrast, academic scientists, without the pressures
of cost, time and the swift conclusion of the project, can have a more
balanced perception of the relevance of both old and new data.
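A minimal numerical sketch of what neglecting prior probability amounts to in Bayesian terms may be helpful here; the figures are invented and are not taken from Kahneman and Tversky’s experiments.

# Bayes' rule for a binary hypothesis H given a new piece of data D.
def posterior(prior, p_d_given_h, p_d_given_not_h):
    evidence = p_d_given_h * prior + p_d_given_not_h * (1 - prior)
    return p_d_given_h * prior / evidence

prior = 0.05         # base rate: prior probability that H is true
hit = 0.80           # P(D | H)
false_alarm = 0.10   # P(D | not H)

print(round(posterior(prior, hit, false_alarm), 2))  # ~0.30: the prior still matters
print(round(posterior(0.50, hit, false_alarm), 2))   # ~0.89: treating the base rate as
                                                     # 50/50, i.e. 'forgetting' it,
                                                     # inflates confidence in H

The contrast between the two outputs is the base-rate effect in miniature: the same new datum supports very different conclusions depending on whether the prior is retained or ignored.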
In deductive reasoning and, in particular, hypothesis-testing, Wason
(1960) and Johnson-Laird (1983) proved that, in formal tests, people
tend to commit confirmation bias. New studies analysing the emotional
and affective dimension of hypothesis-testing have found that when an
individual is emotionally involved in a thesis, he/she will tend to commit
confirmation bias. The involvement can be varied: economic (when one
has invested money in developing an idea), social (when social position is
linked to the success of a project), organizational (when a leader holding
a thesis is always right) or biographical (when one has spent many years
of one’s life in developing a theory). The emotional content of a theory
causes a sort of regret phenomenon that can push the individual to avoid
falsification of the theory. From this point of view, it is likely that financial
heaviness and dogmatism, together with other social and organizational
factors, will induce industrial researchers to commit confirmation bias
more easily. Research is costly but fundamental to the commercial sur-
vival of a company. Therefore researchers’ work should be successful and
the results well defined in order for them to retain or to improve their posi-
tion. Moreover, industrial researchers don’t follow the academic norm
of criticism that prescribes the falsificationist approach towards scientific
knowledge. This is contrary to the situation of academic scientists, who
tend to be critics, and who are not (and should not be) obliged to be suc-
cessful in their research. It is, therefore, likely that they are less prone to
confirmation bias.
Causal reasoning, according to Mackie (1974), is based on a ‘causal
field’, that is, the set of relevant variables able to cause an effect. It is well
known that each individual expert presented with the same event will
support a particular causal explanation. Often, once the expert has identi-
fied one of the suspected causes of a phenomenon, he/she stops searching
for additional alternative causes. This phenomenon is called ‘discounting
error’. From this point of view, the hypothesis posits that the different
operational norms and social values of academic and corporate research
may produce different discounting errors. Financial heaviness, pressing
time and well-defined results, compared to financial lightness, loose time
and undefined results, may delimit different causal fields within the same project.
For example, the corporate scientist can consider time as a crucial causal
variable for the success of the project, whereas the academic researcher is
unconcerned with it. At the same time, the academic researcher can con-
sider the value of universal scientific excellence of the results to be crucial,
whereas the industrial researcher is unconcerned with it.
The fourth dimension deals with decision-making. Decision-making
involves evaluating opportunities and selecting one choice over another.
There are many effects and biases connected to decision-making. I shall
focus on certain aspects of decision-making that can differentiate academic
from industrial researchers.
The first deals with risk. According to ‘prospect theory’ (Kahneman
and Tversky, 1979; Tversky and Kahneman, 1992), risk propensity is
stronger in situations of loss and weaker in situations of gain. A loss of
$5 causes a negative utility bigger than the positive utility caused by the
gain of $5. Therefore people react to a loss with risky choices aimed at
recovering the loss. Two other conditions that increase risk propensity are
overconfidence (Fischhoff et al., 1977; Kahneman and Tversky, 1996) and
illusion of control (Langer, 1973). People often tend to overestimate the
accuracy of their judgements and the probability of the success of their
performance. Both the perception of loss and overconfidence occur when
there is competition; decisions are charged with economic meaning and
have economic effects. The operational norm of financial heaviness and
pressing time, and the social value of exclusivity and the interests of the
industrial researcher can increase the economic value of choices and inten-
sify the perception of competitiveness. This, consequently, can increase
risk propensity. In contrast, the social values of communitarianism and
indifference, and the operational norms of financial lightness and the loose
time of academic scientists may create an environment that doesn’t induce
a perception of loss or overconfidence. Thus behaviour tends to be more
risk-averse.
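The asymmetry between gains and losses invoked here can be made concrete with the standard value function of prospect theory. The sketch below uses the median parameter estimates reported by Tversky and Kahneman (1992), α = β = 0.88 and loss-aversion coefficient λ = 2.25, purely as indicative values.

# Prospect-theory value function: concave for gains, steeper and convex for losses.
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

print(round(value(5), 2))    # 4.12  : subjective value of a $5 gain
print(round(value(-5), 2))   # -9.27 : a $5 loss looms more than twice as large

The second line is the quantitative counterpart of the claim above: a loss of $5 produces a disutility considerably larger than the utility produced by a gain of $5.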
A second feature of decision-making is connected to regret and loss
aversion. We saw before that, according to prospect theory, an indi-
vidual doesn’t like to lose, and reacts with increased risk propensity.
Loss aversion is based on the regret that loss produces in the individual.
This regret is responsible for many effects. One of the most important is
‘irrational escalation’ (Stanovich, 1999) in all kinds of investments (not
only economic, but also political and affective). When one is involved in
the investment of money in order to reach a goal, such as the building of
a new missile prototype or the creation of a new molecule to cure AIDS,
one has to consider the possibility of failure. One should monitor the
various steps of the programme and, especially when funding ends, one
must coldly analyse the project’s chances for success. In this case, one must
consider the monies invested in the project as sunk cost, forget them and
proceed rationally. People tend, however, to become affectively attached
to their project (Nozick, 1990; Stanovich, 1999). They feel strong regret in
admitting failure and the loss of money, and tend to continue investment
in an irrational escalation of wasteful spending in an attempt to attain the
established goal. This psychological mechanism is also linked to prospect
theory and risk propensity under conditions of loss. Irrational escalation
is stronger when there is a stronger emphasis on the economic importance
of the project. This is the typical situation of a private company, which
links the success of its technological projects to its commercial survival.
Industrial researchers have the perception that their job and the possibil-
ity of promotion are linked to the success of their technological projects.
Therefore they are likely to succumb more easily to irrational escalation
than academic researchers, who have the operational norm of financial
lightness and the social norm of indifference, and whose career is only
loosely linked to the success of research projects.
The third aspect of decision-making has to do with an irrational bias
called ‘myopia’ (Elster, 1979) or temporal discounting. People tend to
strongly devalue long-term gains over time. They prefer small, immediate
gains to big gains projected in the future. Usually, this behaviour is associ-
ated with overconfidence and the illusion of control. Those who discount
time prefer the present, because they imagine themselves able to control
output and results beyond any chance estimations. In the case of industrial
researchers, and of entrepreneurial culture in general, the need to have
results at once, to find fast solutions to problems and to assure sharehold-
ers and the market that the company is stable and growing seems to align
with the propensity towards time discounting. Future results don’t matter.
What is important is the ‘now’ and the ability to have new competitive
products in order to survive commercially. Financial heaviness, pressing
time and well-defined results may be responsible for the tendency to give
more weight to the attainment of fast and complete results at once, even
at the risk of making products that in the future will be defective, obso-
lete and easily overcome by competing products. In the case of academic
scientists, temporal discounting might be less strong. In fact, the three
operational norms of financial lightness, loose time and undefined results,
together with criticism and universalism, may help immunize them from
myopic behaviours. Criticism is important because it pushes scientists
not to be easily satisfied with quick and premature results that can be easily
falsified by their peers. Universalism is important because academics want
to pursue results that are not valid locally but that can be recognized and
accepted by the entire community and that can increase their scientific
reputation. In the academic community, it is well known that reputation is
built through a lengthy process and can be destroyed very quickly.
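The contrast between far-sighted and myopic evaluation of delayed results can be sketched with two standard discounting rules: exponential discounting at a constant per-period rate, and the hyperbolic form usually associated with present bias. Both the functional forms and the numbers are illustrative conventions from the behavioural literature, not formulas given in this chapter.

# Exponential versus hyperbolic discounting of a delayed payoff.
def exponential(amount, delay, r=0.10):    # constant per-period discount rate r
    return amount / ((1 + r) ** delay)

def hyperbolic(amount, delay, k=0.50):     # hyperbolic discounting, steep at short delays
    return amount / (1 + k * delay)

# Hypothetical choice: a result worth 100 now versus one worth 150 in three periods.
print(round(exponential(150, 3), 1))  # 112.7 > 100: the patient evaluator waits
print(round(hyperbolic(150, 3), 1))   # 60.0  < 100: the myopic evaluator takes 100 now

On this toy reading, the industrial researcher’s operational norms push towards the second pattern of evaluation, the academic researcher’s towards the first.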

‘NUDGE’9 SUGGESTIONS TO THE TRIPLE HELIX: THE JANUS SCIENTIST AND PROXIMITY

The capitalization of knowledge is usually analysed by recourse to exter-
nal socioeconomic factors. An example is the way in which the model
of the triple helix is proposed. The main determinants of the interaction
between university, industry and government in supporting innovation
and of the emergence of hybrid organizations, entrepreneurial universi-
ties, dual academic careers and so forth (Etzkowitz, 2008) are economic
(mainly industrial competitiveness and academic fundraising) and politi-
cal (mainly regional primacy). Economic and political forces are able to
shape organizations and to change institutional norms.
In contrast, the thesis of this chapter is that we can’t explain and predict
the organizational and institutional development of the capitalization
of knowledge without considering the internal dynamics driven by the
epistemological and cognitive features of knowledge. Various authors
have pinpointed the importance of the features of knowledge and cogni-
tion in shaping organizations. For Jim March and Herbert Simon (1993),
‘bounded rationality’ is the conceptual key to understanding the emer-
gence of the organization, the division of labour and of routines. When
the human mind cannot process the amount of information that it faces in
complex problem-solving, it needs to share this burden with other minds.
Different complementary roles in problem-solving emerge. These roles
include a great amount of routine, that is, reasoning and decision-making
realized in automatic or quasi-automatic ways. Moreover, according to
Douglass North (2005) an organization is characterized by the structure
of institutional values and norms. The norms and values, or in other words
background knowledge, are responsible for shaping the organization and
for pushing the actors to act and interact in particular ways.
If we follow those authors who wish to explain, predict and also inter-
vene in the organization, we should consider, primarily, variables such
as complexity of information, limited cognition and the background
knowledge of the actors. It is pointless to try to design organizations and
social actions through top-down economic and political planning without
considering the microstructure of motivations, norms, cognitive resources
and knowledge. ‘Nudge’ (Thaler and Sunstein, 2008) is a thesis that starts
from these simple observations. When a policy-maker, a social planner
and an economic planner want to reach certain collective goals, they must
single out the right institutional tools capable of nudging the individual
actors to behave coherently according to the planned aim. In order to
nudge the actors effectively, one must be able to consider their cognitive
limitations and motivations, and the environmental complexity in which
they are placed. If a policy-maker wants to devise successful policy recipes,
he/she should reason as a cognitive rule ergonomist; that is, he/she should
‘extract’ the rules from the knowledge of the minds of the actors interact-
ing within a given initial environment.
In this chapter, I have analysed the effects of the epistemological and
cognitive features of knowledge on the capitalization of knowledge. In
particular, I have hypothesized that some intrinsic features of knowledge
can have effects on how knowledge can be generated, transferred and
developed in order to achieve commercial aims. These effects, in turn,
constrain the organizational and institutional forms aimed at capital-
izing knowledge. The following is a summary of the main features of the
knowledge relevant for capitalization.

1 Generality versus Singularity

When knowledge is composed of descriptive assertions (i.e. elementary
propositions or base assertions), it refers to singular empirical events
without any claim of generality. As was the case with the descriptive asser-
tions of eighteenth-century inventors, the predicative field was limited to
the empirical experience of inventors themselves. The reason, however,
is not only epistemological but cognitive as well. In fact, the conceptual
categorization of an empirical event changes with the experience. Thus
we have a different mental representation of the same object at differ-
ent times. In any case, descriptive assertions have no explanatory power
and can’t allow the enlargement of the field of innovation. The effect of
singularity on knowledge was a capitalization that was weakened by the
law of diminishing returns. Exploitation was rapid, and only slow and
small incremental innovations were generated from the original invention.
Research was conducted mainly by individuals, outside the university,
with the participation of apprentices. The short-range nature of the work
and other institutional and economic factors (Rosenberg and Birdzell,
1986; Mowery and Rosenberg, 1989; Mokyr, 2002a, 2002b) pushed
industrial companies to try to widen the scientific base of inventions in
order to increase generality in knowledge. As we saw in the Appert versus
Pasteur case, a theory explaining the causal mechanisms of food preser-
vation allowed the improvement of the same innovation and, moreover,
its application outside the original innovative field. The effect of general
explanatory knowledge was the development of a capitalization that over-
comes the constraints of the law of diminishing returns. Research needs to
be conducted in laboratories, inside or in collaboration with a university,
concurrent with the birth and development of new applied specializations
(i.e. applications of a general theory to find solutions to practical problems
and to invent commercial products). Moreover, general theories often
apply across different natural and disciplinary domains. For example,
DNA theory applies to agriculture, the environment, human health,
animal health and so on. Information theory and technology can also be
applied in many different areas, from genomics to industrial robotics. The
generality of the application of a theory requires interdisciplinary train-
ing and research organizations that are able to single out promising areas
of innovation and to generate the proper corresponding technological
knowledge. This implies an interdisciplinary division of labour that can be
afforded only by research universities and by the largest of companies.

2 Complexity versus Simplicity

Analytically, simple knowledge is characterized by a syntactic structure
composed of few assertions with few terms whose meaning is conceptu-
ally evident (because they are empirically and directly linked to external
objects that have a well-defined categorization). A descriptive assertion,
such as ‘this crow is black’, or an empirical generalization, such as ‘all
crows are black’, is an example of simple knowledge. These analytical
features correspond to cognitive ones. The semantic network representing
this knowledge is composed of a few interrelated nodes. Complexity, on
the other hand, is analytically represented by a great number of asser-
tions, containing many terms whose meaning is conceptually obscure (as,
e.g., when there are theoretical terms that have indirect empirical mean-
ings that derive from long linguistic chains, such as in the case of quarks
or black holes). Quantum mechanics and string theory are examples of
complex knowledge, mainly from the point of view of the meaning of
the terms. Linnaeus’s natural taxonomy and Mendeleev’s periodic table
of elements are examples mainly from the point of view of the number
of assertions they contain. Analytical complexity implies computational
overloading. The cognitive representation of a theory or of several theories
might correspond to an intricate semantic network with many small, inter-
related and distant nodes. For an individual mind, it is usually impossible
to have a complete representation of a complex theory, let alone several
theories. The cognitive network will represent the conceptual structure
of the theory only partially. Usually, some mental model of a theory will
play the role of a heuristic device in reasoning and problem-solving. The
model serves as a pictorial analogy of the theory and, therefore, does not
ensure the completeness or consistency of the problem-solving results. It
is evident from what I have previously stated that knowledge simplicity
doesn’t require organization in knowledge generation. An individual mind
can represent, process and compute knowledge and its consequences. The
more complex knowledge becomes, the greater the organizational division
of labour needed to cope with it. Only a network of interacting minds
can have a complete representation, process the relevant information and
compute the deductive consequences of complex knowledge. An organiza-
tion should be shaped to give room to theoretical scientists working on
concepts, to experimental scientists working on bridge laws between theo-
retical concepts and natural phenomena, and to applied scientists working
on technological applications. Statisticians, mathematicians and experi-
mental technicians will also play an important role. ‘Big Science’ projects
such as the Los Alamos nuclear bomb, human genome mapping, nuclear
fusion and particle physics are examples of this collective problem-solving.
When complexity is also linked to generality, the division of labour will
be reflected in the interdisciplinarity of the experts involved in collective
problem-solving. Most companies will not be endowed with this level of
expertise and will, therefore, rely increasingly on academic support in
applying knowledge to commercial aims. Consequently, increasing com-
plexity and generality means a growing ‘industrial’ role for universities.
The Obama programme for green technologies might be a future example
of the generation and capitalization of complex and general knowledge
that will see universities playing a central ‘industrial’ role.

3 Explicitness versus Tacitness

To capitalize knowledge, one should be able to sell it or use it to produce
economic added value. In both cases, knowledge must be completely
understandable and reproducible by both the inventor and others. In the
latter case, knowledge must not lose part of its meaning in transfer. When
knowledge was mainly composed of descriptive assertions and technical
norms, it was highly idiosyncratic. Descriptive assertions corresponded
to the perceptual and conceptual apparatus of the inventor. Technical
norms were represented by competential know-how. Thus knowledge was
mainly tacit, and its transfer through linguistic media almost impossible.
The organizational centre of capitalization was the inventor’s laboratory,
where he/she attempted to transfer knowledge to apprentices through face-
to-face teaching and learning by doing and interacting. Selling patents was pointless
without ‘transfer by head’ or proper apprenticeship. According to some
authors, with the growth of science-based innovation the situation changed
substantially. In life sciences, for example, ontic knowledge is composed
of explanatory assertions, mainly theories and models. Technical norms
are less represented by competential know-how than by explicit condition–
action rules. Thus the degree of tacitness seems, at first sight, to be less.
Ontic knowledge explaining an invention might be represented, explicitly,
by general theories and models, and the process for reproducing the inven-
tion would depend little on know-how. A patent might be sold
because it would allow complete knowledge transfer. Academic labs and
companies might interact at a distance, and there would be no need for
university–industry proximity. The explicitness of technological knowl-
edge would soon become complete with the ICT revolution (Cowan et al.,
2000), which would even be able to automatize know-how. As I have shown
in previous articles (Balconi et al., 2007; Pozzali and Viale, 2007), this opti-
mistic representation of the disappearance of tacit knowledge is an error. It
considers tacitness only at the level of competential know-how and does not
account for the other two aspects of tacitness, namely, background knowl-
edge and cognitive rules. Background knowledge not only includes social
norms and values but also principles and categories that give meaning to
actions and events. Cognitive rules serve to apply reason to the data and
to find solutions to problems. Both tend to be individually variable. The
knowledge represented in a patent is, obviously, elliptical from this point
of view. A patent can’t explicitly contain background knowledge and cog-
nitive rules used to reason and interpret information contained in it. These
irreducible tacit aspects of knowledge oblige technology generators and
users to interact directly in order to stimulate a convergent calibration of
the conceptual and cognitive tools needed to reason and interpret knowl-
edge. This entails a stimulus towards proximity between university and
company and the creation of hybrid organizations between them to jointly
develop knowledge towards commercial aims.

4 Shared Background Knowledge versus Unshared Background Knowledge

Norms and values used for action, together with principles and concepts
used for understanding, constitute background knowledge. Beyond knowl-
edge transfer, shared background knowledge is necessary for linguistic
communication and for effective collaboration in a group. The linguistic
dimension has never been analysed in knowledge capitalization. Its impor-
tance is evident both in patent transfer and in research collaboration. As I
stated above, academic and industrial researchers are members of different
cultural communities and, therefore, have different background knowl-
edge. In the collaboration between academic and industrial researchers, the
coordination between meanings and understandings can be difficult if back-
ground knowledge is different. When this is the case, as I noted above,
the effect on the various linguistic settings will probably be the distortion
of meaning and the creation of misunderstandings. Moreover, difficulties
in coordination will increase in settings that utilize intermediaries (such as
the members of the TT office of a university or the TT agent of a private
company or government) between the academic inventor and the potential
industrial user (‘mediated settings’ in Clark, 1996, p. 5). In this case, there
is a decoupling of speaking. The academic researcher formulates and gives
meaning to the linguistic message (also in a written setting), while the TT
agent is merely a vocalizer of the message. Thus there may be frequent
distortion of the original meaning, in particular when the knowledge con-
tains a great share of tacit knowledge. This distortion is strengthened by
the likely differences in background knowledge between the TT agent and
the other actors in the transfer. Finally, in the transfer of technology, the
complexity of having more than one domain of action can also arise. For
example, if the relation between an academic and industrial researcher is not
face-to-face but is instead mediated by an intermediary, there is an emergent
second layer of discourse. This is the layer of the story that is told by the
intermediary about the original process and techniques used to generate the
technology invented by the academic researchers. All three points show that
common background knowledge is essential for reciprocal understanding
and that face-to-face communication is a prerequisite for minimizing the
distortion of meaning and the misunderstandings that can undermine effec-
tive knowledge transfer. Organizations of knowledge capitalization must,
therefore, emphasize the feature of proximity between knowledge produc-
ers and users, and support the creation of public spaces for meeting and cul-
tural exchange between members of universities and companies. Moreover,
universities primarily, but companies also, should promote the emergence
of a new professional figure, a researcher capable of cultural mediation
between academic and industrial background knowledge.

5 Shared Cognitive Style versus Unshared Cognitive Style

Analytically, cognitive rules for inference are part of deontic knowledge.
Cognitively, they can be considered the emergent results of pragmatic
knowledge. In any case, they are influenced by norms and values contained
in background knowledge, as was shown by Nisbett (2003) in his study on
American and East Asian ways of thinking. The hypothesis of different
cognitive rules generated by different background knowledge seems likely
but must still be confirmed empirically (Viale et al., forthcoming). I shall
now look at some examples of these differences (analysed in the pilot
study of Fondazione Rosselli, 2008). Time perception and the operational
norm of loose time versus pressing time differentiate business-oriented
academics from entrepreneurial researchers. For the latter, time is press-
ing, and it is important to find concrete results quickly and not waste
money. Their responses show a clear temporal discounting. The business
participants charge academics with looking too far ahead and not caring
enough about the practical needs of the present. The short-term logic of
the industrial researchers seems to follow the Latin saying Primum vivere
deinde philosophare (‘First live, then philosophize’). For them, it is better
to concentrate their efforts on the application of existing models in order
to obtain certain results. The academic has the opposite impetus, that is,
to explore boundaries and uncertain knowledge. The different temporal
perceptions are linked to risk assessment. The need to obtain fast results
for the survival of the company increases the risk perception of the money
spent on R&D projects. In contrast, even if the academic participants
are not pure but business oriented, they don’t exhibit the temporal dis-
counting phenomenon, and for them risk is perceived in connection with
scientific reputation inside the academic community (the social norm of
universalism). What is risky to the academic researchers is the possibil-
ity of failing to gain scientific recognition (vestiges of academic values).
Academic researchers also are more inclined towards communitarianism
than exclusivity (vestiges of academic values). They believe that knowl-
edge should be open and public and not used as exclusive private property
to be monopolized. For all participants, misunderstandings concerning
time and risk are the main obstacles to collaboration. University members
accuse company members of being too short-sighted and overly prudent
in the development of new ideas; entrepreneurial participants charge
university members with being too high-minded and overly far-sighted in
innovation proposals. This creates organizational dissonance in planning
the milestones of the projects and in setting the amount of time needed for
the various aspects of research. Differences in cognitive rules are a strong
factor in creating dissonance among researchers. The likely solution to this
dissonance is the emergence in universities of a new research figure trained
in close contact with industrial labs. This person should have the academic
skills of his/her pure scientist colleagues and, at the same time, knowledge
of industrial cognitive styles and values. Obviously, hybrid organizations
can also play an important role, acting as a type of ‘gym’ in which to train
towards the convergence between cognitive styles and values.

In conclusion, once the main hypotheses of this chapter have been empiri-
cally tested, we shall know what the main epistemological and cogni-
tive determinants in capitalizing knowledge are. This will give us clues
on how to ‘nudge’ (Thaler and Sunstein, 2008) the main stakeholders in
academy–industry relations to act to improve collaboration and knowl-
edge transfer. In my opinion, ‘nudging’ should be the main strategy of
public institutions and policy-makers wishing to strengthen the triple-helix
model of innovation.
For example, if results confirm the link between social values, opera-
tional norms and cognitive style, it might be difficult to overcome the
distances between pure academic scientists and entrepreneurial research-
ers. What would be more reasonable would be to support the emergence
of a new kind of researcher. Together with the pure academic researcher
and the applied researcher, universities must promote a mestizo, a hybrid
figure that, like a two-faced Janus (Viale, 2009), is able to activate men-
tally two inconsistent sets of values and operational norms, that is, the
academic and the entrepreneurial. These Janus scientists would not believe
the norms, but would accept them as if they believed them (Cohen, 1992).
They would act as the cultural mediators and translators between the two
worlds. They should be members of the same department as the pure and
applied scientists, and should collaborate with them as well as with the
industrial scientists. A reciprocal figure such as this is difficult to introduce
into a company unless it is very large and financially well endowed. In
fact, commercial competitiveness is the main condition for company sur-
vival. Time and risk are leading factors for competitiveness. And cognitive
style is strongly linked to them. This creates a certain rigidity when faced
with the possibility of change. Thus adaptation should favour the side of
the university, where there is more potential elasticity in shaping behaviour.
Moreover, the two-faced Janus figure is different from that involved in a
TT office. It is a figure that should collaborate directly in research activity
with corporate scientists, whereas a member of a TT office has the func-
tion of establishing the bridge between academics and the company. The
first allows R&D collaboration, whereas the second facilitates technology
transfer. Empirical confirmation of the emergence of these figures can
be found in the trajectories of the development of strongly science-based
sectors such as biotechnologies, which have followed a totally different
path in America than in Europe (Orsenigo, 2001). While the American
system is characterized by a strong proximity between the industrial
sector and the world of research, with the universities in the front line in
internalizing and taking on many functions typical of the business
world, in Europe universities have been much more reluctant to take on a
similar propulsive role.
A second ‘nudge’ suggestion that may emerge from this chapter, and in
particular from the growing generality and complexity of the knowledge
involved in innovation, is the importance of face-to-face interaction and
proximity between universities and companies. The need for proximity has
been underlined in recent studies (Arundel and Geuna, 2004; for an expla-
nation according to the theory of complexity see Viale and Pozzali, 2010).
Virtual clusters and meta districts can’t play the same role in innovation.
Proximity and face-to-face interactions are not only important for mini-
mizing the tacitness bottleneck in technology transfer; they are also
fundamental to collaboration because of the linguistic and pragmatic
effects on understanding (see above). They also improve the
degree of trust, as has been proved by neuroeconomics (Camerer et al.,
2005). Proximity can also increase the respective permeability of social
values and operational norms. From this point of view, universities might
promote the birth of ‘open spaces’ of discussion and comparison where
academicians and business members might develop a kind of learning by
interacting. Lastly, proximity allows better interaction between compa-
nies and the varied academic areas of expertise and knowledge resources.
Indeed, only the university has the potential to cope with the growing
complexity and interdisciplinarity of the new ways of generating innova-
tion. Emergent and convergent technologies require a growing division
of expert labour that includes the overall knowledge chain, from pure
and basic research to development. Only a company that can interact
and rely on the material and immaterial facilities of a research university
can find proper commercial solutions in the age of hybrid technological
innovation.

NOTES

1. Janus is a two-faced god of the Roman tradition. One face looks to
the past (or to tradition) and the other looks toward the future (or to innovation).
2. From a formal point of view, the descriptive assertion may be expressed in the following
way:

(∃x, t) (aSx → bx)

This means that an event x exists in a given time t such that if x is perceived by the agent
a, then it has the features b. Contrary to the pure analytical interpretation, this formula-
tion is epistemic; that is, it includes the epistemic actor a who is responsible for perceiving
the feature b of event x.
3. In theory, this variability could be overcome by artificial epistemic agents without
plasticity in conceptual categorization. Some artificial intelligence systems used at the
industrial level have these features.
4. These cases may generate ‘compound techniques’ (Mokyr, 2002b, p. 35) based on new
combinations of known techniques (e.g. the use of a metal container instead of a glass
bottle) without knowledge of the causal mechanisms behind the phenomenon. This type
of innovation is short-lived, however, and soon exhausts its economic potential.
5. The organizational impact of the cognitive features of scientific knowledge has been
singled out in some studies on scientific disciplines and specializations. For example, the
different organization of experimental physicists compared to organic chemists or biolo-
gists is explained mainly by the different complexity of knowledge (Shinn, 1983).
6. Deontic logic (the name was proposed by Broad to von Wright) uses two operators:
O for obligation and P for permission. The attempt to build a logic based on these
two operators as prefixes to names of acts A, B, C, and so on, which is similar to propo-
sitional logic, has been strongly criticized by many, among them von Wright himself
(1963).
7. It is not clear if the process is linear or circular and recursive. In this case, cognitive rules
might become part of background knowledge, and this could change its role in pragmatic
experience and in reasoning and decision-making processes.
8. The analysis refers mainly to the academic environment of the universities of continental
Europe.
9. Nudge (Thaler and Sunstein, 2008) is the title of a book and a metaphor characterizing
a way to gently push social actors towards certain collective aims according to their
cognitive characteristics.

REFERENCES

Agrawal, A. and R. Henderson (2002), ‘Putting patents in context: exploring
knowledge transfer from MIT’, Management Science, 48 (1), 44–60.
Anderson, J.R. (1983), The Architecture of Cognition, Cambridge, MA: Harvard
University Press.
Anderson, J.R. (1996), ‘ACT: A simple theory of complex cognition’, American
Psychologist, 51, 355–65.
Arundel, A. and A. Geuna (2004), ‘Proximity and the use of public science by
innovative European firms’, Economic Innovation and New Technologies, 13 (6),
559–80.
Balconi, M., A. Pozzali and R. Viale (2007), ‘The “codification debate” revisited:
a conceptual framework to analyse the role of tacit knowledge in economics’,
Industrial and Corporate Change, 16 (5), 823–49.
Bar Hillel, M. (1980), ‘The base-rate fallacy in probabilistic judgements’, Acta
Psychologica, 44, 211–33.
Barsalou, L.W. (2000), ‘Concepts: structure’, in A.E. Kazdin (ed.), Encyclopedia
of Psychology, Vol. 2, Washington, DC: American Psychological Association,
pp. 245–8.
Beth, E. and J. Piaget (1961), Etudes d’Epistemologie Genetique, XIV: Epistemologie
Mathematique et Psichologie, Paris: PUF.
Braine, M.D.S. (1978), ‘On the relation between the natural logic of reasoning and
standard logic’, Psychological Review, 85, 1–21.
Broesterhuizen, E. and A. Rip (1984), ‘No Place for Cudos’, EASST Newsletter,
3.
Camerer, C., G. Loewenstein and D. Prelec (2005), ‘Neuroeconomics: how neuro-
science can inform economics’, Journal of Economic Literature, 43 (1), 9–64.
Carruthers, P., S. Stich and M. Siegal (eds) (2002), The Cognitive Basis of Science,
Cambridge: Cambridge University Press.
Cheng, P.W. and K.J. Holyoak (1985), ‘Pragmatic versus syntactic approaches to
training deductive reasoning’, Cognitive Psychology, 17, 391–416.
Cheng, P.W. and K.J. Holyoak (1989), ‘On the natural selection of reasoning theo-
ries’, Cognition, 33, 285–313.
Cheng, P.W. and R. Nisbett (1993), ‘Pragmatic constraints on causal deduction’,
in R. Nisbett (ed.), Rules for Reasoning, Hillsdale, NJ: Erlbaum, pp. 207–27.
Clark, H.H. (1996), Using Language, Cambridge: Cambridge University Press.
Cleeremans, A. (1995), ‘Implicit learning in the presence of multiple cues’, in
Proceedings of the 17th Annual Conference of the Cognitive Science Society,
Hillsdale, NJ: Erlbaum, pp. 298–303.
Cleeremans, A., A. Destrebecqz and M. Boyer (1998), ‘Implicit learning: news
from the Front’, Trends in Cognitive Science, 2, 406–16.
Cohen, J. (1992), An Essay on Belief and Acceptance, Oxford: Blackwell.
Collins, A.M. and M.R. Quillian (1969), ‘Retrieval time from semantic memory’,
Journal of Verbal Learning and Verbal Behaviour, 8, 240–48.
Cowan, R., P.A. David and D. Foray (2000), ‘The explicit economics of knowl-
edge codification and tacitness’, Industrial and Corporate Change, 9, 211–53.
Dearborn, D.C. and H.A. Simon (1958), ‘Selective perception: a note on the
departmental identifications of executives’, Sociometry, 21 (June), 140–44.
Elster, J. (1979), Studies in Rationality and Irrationality, Cambridge: Cambridge
University Press.
Etzkowitz, H. (2008), The Triple Helix: University–Industry–Government Innovation
in Action, London: Routledge.
Fischhoff, B., P. Slovic and S. Lichtenstein (1977), ‘Knowing with certainty: the
appropriateness of extreme confidence’, Journal of Experimental Psychology, 3,
552–64.
Fondazione Rosselli (2008), ‘Different cognitive styles between academic and
industrial researchers: a pilot study’, www.fondazionerosselli.it.
Giere, R.N. (1988), Explaining Science, Chicago, IL: Chicago University Press.
Goffman, E. (1981), Forms of Talk, Philadelphia, PA: University of Pennsylvania
Press.
Grice, H.P. (1989), Studies in the Way of Words, Cambridge, MA: Harvard
University Press.
Grossman, M., E.E. Smith, P. Koenig, G. Glosser, J. Rhee, and K. Dennis (2003),
‘Categorization of object descriptions in Alzheimer’s disease and frontal tem-
poral dementia: limitation in rule-based processing’, Cognitive Affective and
Behavioural Neuroscience, 3 (2), 120–32.
Hinsz, V., S. Tindale and D. Vollrath (1997), ‘The emerging conceptualization of
groups as information processors’, Psychological Bulletin, 96, 43–64.
Jackendoff, R. (2007), Language, Consciousness, Culture, Cambridge, MA: MIT
Press.
Jackson, S. (1992), ‘Team composition in organizational settings: issues in manag-
ing an increasingly diverse work force’, in S. Worchel, W. Wood and J. Simpson
(eds), Group Process and Productivity, Newbury Park, CA: Sage, pp. 138–73.
Johnson-Laird, P. (1983), Mental Models, Cambridge: Cambridge University
Press.
Johnson-Laird, P. (2008), How We Reason, Oxford: Oxford University Press.
Kahneman, D. (2003), ‘Maps of Bounded Rationality’, in T. Frangsmyr (ed.),
Nobel Prizes 2002, Stockholm: Almquist & Wiksell, pp. 449–89.
Kahneman, D. and A. Tversky (1973), ‘On the psychology of prediction’,
Psychological Review, 80, 237–51.
Kahneman, D. and A. Tversky (1979), ‘Prospect theory: an analysis of decision
under risk’, Econometrica, 47, 263–91.
Kahneman, D. and A. Tversky (1996), ‘On the reality of cognitive illusions’,
Psychological Review, 103, 582–91.
Kauffman, S. (1995), At Home in the Universe: The Search for the Laws of Self-
Organization and Complexity, New York: Oxford University Press.
Kuhn, T. (1962), The Structure of Scientific Revolutions, Chicago, IL: University
of Chicago Press.
Kuznets, S. (1965), Economic Growth and Structure, New York: W.W. Norton.
Langer, E. (1973), ‘Reduction of psychological stress in surgical patients’, Journal
of Experimental Social Psychology, 11, 155–65.
Mackie, J. (1974), The Cement of the Universe. A Study on Causation, Oxford:
Oxford University Press.
Manz, C.C. and C.P. Neck (1995), ‘Teamthink. Beyond the groupthink syndrome
in self-managing work teams’, Journal of Managerial Psychology, 10, 7–15.
March, J. and H.A. Simon (1993), Organizations, 2nd edn, Cambridge, MA:
Blackwell Publishers.
Markman, A.B. and D. Gentner (2001), ’Thinking’, Annual Review of Psychology,
52, 223–47.
Merton, R. (1973), The Sociology of Science. Theoretical and Empirical
Investigations, Chicago, IL: University of Chicago Press.
Mitroff, I.I. (1974), The Subject Side of Science, Amsterdam: Elsevier.
Mokyr, J. (2002a), The Gifts of Athena: Historical Origins of the Knowledge
Economy (Italian translation 2004, I doni di Atena, Bologna: Il Mulino),
Princeton, NJ: Princeton University Press.
Mokyr, J. (2002b), ‘Innovations in an historical perspective: tales of technol-
ogy and evolution’, in B. Steil, G. Victor and R. Nelson (eds), Technological
Innovation and Economic Performance, Princeton, NJ: Princeton University
Press, pp. 23–46.
Moore, G.E. (1922), ‘The nature of moral philosophy’, in Philosophical Studies,
London: Routledge & Kegan Paul.
Mowery, D. and N. Rosenberg (1989), Technology and the Pursuit of Economic
Growth, Cambridge: Cambridge University Press.
National Science Foundation (2002), Converging Technologies for Improving
Human Performance, Washington, DC: National Science Foundation.
Nisbett, R.E. (2003), The Geography of Thought, New York: The Free Press.
Nooteboom, B., W. Van Haverbeke, G. Duysters, V. Gilsing and A. Van den Oord
(2007), ‘Optimal cognitive distance and absorptive capacity’, Research Policy,
36, 1016–34.
North, D.C. (2005), Understanding the Process of Economic Change (Italian trans-
lation 2006, Capire il processo di cambiamento economico, Bologna: Il Mulino),
Princeton, NJ: Princeton University Press.
Nozick, R. (1990), ‘Newcomb’s problem and two principles of choice’, in
P.K. Moser (ed.), Rationality in Action. Contemporary Approach, New York:
Cambridge University Press, pp. 207–35.
Orsenigo, L. (2001), ‘The (failed) development of a biotechnology cluster: the case
of Lombardy’, Small Business Economics, 17 (1–2), 77–92.
Politzer, G. (1986), ‘Laws of language use and formal logic’, Journal of
Psycholinguistic Research, 15, 47–92.
Politzer, G. and A. Nguyen-Xuan (1992), ‘Reasoning about promises and warn-
ings: Darwinian algorithms, mental models, relevance judgements or pragmatic
schemas?’, Quarterly Journal of Experimental Psychology, 44, 401–21.
Pozzali, A. and R. Viale (2007), ‘Cognition, types of “tacit knowledge” and tech-
nology transfer’, in R. Topol and B. Walliser (eds), Cognitive Economics: New
Trends, Oxford: Elsevier, pp. 205–24.
Reber, A.S. (1993), Implicit Learning and Tacit Knowledge. An Essay on the
Cognitive Unconscious, Oxford: Oxford University Press.
Rosenberg, N. and L.E. Birdzell (1986), How The West Grew Rich. The Economic
Transformation of the Economic World (Italian translation 1988, Come l’Occidente
è diventato ricco, Bologna: Il Mulino), New York: Basic Books.
Rosenberg, N. and D. Mowery (1998), Paths of Innovation. Technological Change
in 20th-Century America (Italian translation 2001, Il Secolo dell’Innovazione,
Milano: EGEA), Cambridge: Cambridge University Press.
Ross, B.H. (1996), ‘Category learning as problem solving’, in D.L. Medin (ed.),
The Psychology of Learning and Motivation: Advances in Research and Theory,
Vol 35, San Diego, CA: Academic Press, pp. 165–92.
Rumain, B., J. Connell and M.D.S. Braine (1983), ‘Conversational comprehension
processes are responsible for fallacies in children as well as in adults: it is not the
biconditional’, Developmental Psychology, 19, 471–81.
Searle, J. (1995), The Construction of Social Reality, New York: Free Press.
Searle, J. (2008), Philosophy in a New Century, Cambridge: Cambridge University
Press.
Shinn, T. (1983), ‘Scientific disciplines and organizational specificity: the social and
cognitive configuration of laboratory activities’, Sociology of Science, 4, 239–64.
Siegel, D., D. Waldman and A. Link (1999), ‘Assessing the impact of organiza-
tional practices on the productivity of university technology transfer offices: an
exploratory study’, NBER Working Paper 7256.
Sloman, S.A. (1996), ‘The empirical case for two systems of reasoning’,
Psychological Bulletin, 119, 3–22.
Smith, E.E. and S.M. Kosslyn (2007), Cognitive Psychology. Mind and Brain,
Upper Saddle River, NJ: Prentice Hall.
Sperber, D. and D. Wilson (1986), Relevance. Communication and Cognition,
Oxford: Oxford University Press.
Stanovich, K. (1999), Who Is Rational?: Studies of Individual Differences in
Reasoning, Mahwah, NJ: Erlbaum.
Sternberg, R.J. (2009), Cognitive Psychology, Belmont, CA: Wadsworth.
Thaler, R. (1999), ‘Mental accounting matters’, Journal of Behavioural Decision
Making, 12, 183–206.
Thaler, R.H. and C.R. Sunstein (2008), Nudge. Improving Decisions About Health,
Wealth, and Happiness, New Haven, CT: Yale University Press.
Tversky, A. and D. Kahneman (1971), ‘Belief in the law of small numbers’,
Psychological Bulletin, 76, 105–10.
Tversky, A. and D. Kahneman (1980), ‘Causal schemas in judgements under
uncertainty’, in M. Fishbein (ed.), Progress in Social Psychology, Hillsdale, NJ:
Erlbaum, pp. 49–72.
Tversky, A. and D. Kahneman (1982a), ‘Judgements of and by representativeness’,
in D. Kahneman, P. Slovic and A. Tversky (eds), Judgement under Uncertainty:
Heuristics and Biases, Cambridge: Cambridge University Press, pp. 84–98.
Tversky, A. and D. Kahneman (1982b), ‘Evidential impact of base rate’, in
D. Kahneman, P. Slovic and A. Tversky (eds), Judgement under Uncertainty:
Heuristics and Biases, Cambridge: Cambridge University Press, pp. 153–60.
Tversky, A. and D. Kahneman (1992), ‘Advances in prospect theory: cumulative
representation of uncertainty’, Journal of Risk and Uncertainty, 5, 547–67.
Van Knippenberg, D. and M.C. Schippers (2007), ‘Work group diversity’, Annual
Review of Psychology, 58, 515–41.
Viale, R. (1991), Metodo e società nella scienza, Milano: Franco Angeli.
Viale, R. (2001), ‘Reasons and reasoning: what comes first’, in R. Boudon,
P. Demeulenaere and R. Viale (eds), L’explication des normes sociales, Paris:
PUF, pp. 215–36.
Viale R. (2008), ‘Origini storiche dell’innovazione permanente’, in R. Viale (a cura
di), La cultura dell’innovazione, Milano: Editrice Il Sole 24 Ore, pp. 19–70.
Viale, R. (2009), ‘Different cognitive styles between academic and industrial
researchers: an agenda of empirical research’, New York: Columbia University,
http://www.italianacademy.columbia.edu/publications_working.html#0809.
Viale, R. and A. Pozzali (2010), ‘Complex adaptive systems and the evolutionary
triple helix’, Critical Sociology, 34.
Viale, R., F. Del Missier, R. Rumiati and C. Franzoni (forthcoming), Different
Cognitive Styles between Academic and Industrial Researchers: An Empirical
Study.
von Wright, G.H. (1963), Norms and Action. A Logical Enquiry, London:
Routledge.
Wason, P.C. (1960), ‘On the failure to eliminate hypotheses in a conceptual task’,
Quarterly Journal of Experimental Psychology, 12, 129–40.
Ziman, J.M. (1987), ‘L’individuo in una professione collettivizzata’, Sociologia e
Ricerca Sociale, 24, 9–30.
2. ‘Only connect’: academic–business
research collaborations and the
formation of ecologies of innovation
Paul A. David and J. Stanley Metcalfe

UNIVERSITY PATENTING FOR TECHNOLOGY TRANSFER – MIRACULOUS OR MISTAKEN MOVEMENT?

The international movement to emulate the US institutional reforms of
the early 1980s that gave universities and publicly funded technology
research organizations the right (rather than a privilege granted by a spon-
soring agency) to own and derive income from the commercialization of
IP (intellectual property) based on their researchers’ inventions has devel-
oped remarkable momentum since its inception at the end of the 1990s
(see, e.g., the survey in Mowery and Sampat, 2005). The process of change
and adaptation that was thereby set in motion among the EU member
states has not yielded the dramatic effects on innovation and employment
growth in Europe that had been promised by those who enthusiastically
prescribed a dose of ‘the Bayh–Dole solution’ for the region’s sluggish
economies.
But such expectations were at best unrealistic, and in too many instances
stemmed from contemporary European observers’ mistaken suppositions
regarding the sources of the revival of productivity growth and the ‘infor-
mation technology’ investment boom in the American economy during
the late 1990s; and more widely shared misapprehensions regarding the
fundamental factors that were responsible for the rising frequency with
which patent applications filed at the USPTO during the 1980s and 1990s
were citing scientific papers by academic authors.1
The movement to promote ‘technology transfers’ from universities
to industry through the medium of patent licensing was fueled by a
widespread supposition that European academic research was danger-
ously disconnected from the processes of private sector innovation. This
belief rested largely on the observation at the turn of the century that the
regions’ universities were not extensively involved as corporate entities
in filing applications for patents, and negotiating the terms on which the
inventions could be commercially exploited (whether by being ‘worked’
or not) via business licensees. The obvious contrast was that drawn with
the contemporary scene in the USA during the frenzied era of the dot-
com and biogenetics boom, where research universities’ patenting and the
licensing of technology to venture-capital-fueled startups were growing
rapidly. Whatever the accuracy of European perceptions about the reali-
ties of events taking place on the far side of the Atlantic Ocean, it has
become clear that there was a serious misconception of the realities of
university–industry technology transfers closer to home. Several recent
studies have revealed, however belatedly, that much of the university
research leading to patents in Europe does not show up readily in the sta-
tistics, because private firms rather than the universities themselves apply
for the patent.2
The impression that university professors in the physical sciences and
engineering were not engaged in patent-worthy inventive activities whose
results were of interest to industrial firms was firmly dispelled for the case
of Italy by a study of the identities of inventors named in patents issued
by the European Patent Office (EPO) during 1978–99. Balconi et al. (2004,
Table 3) found that for many research areas the Italian academic inventors
of those patents formed quite a sizeable share of all the professors working
in those fields on the faculties of Italy’s universities and polytechnics at the
close of that period.3 In 11 of the 20 research fields studied, 13.9 percent or
more of the professors working in the field were identified as the inventors
of EPO patents issued for inventions in the corresponding field; in the case
of quite a few specialty areas, such as mechanical and chemical bioengi-
neering, and industrial and materials chemistry, the corresponding propor-
tions were much higher – ranging from one-third to one-half. The transfer
of the ownership of those patents to industrial firms was the norm in Italy,
as was the case elsewhere in Western Europe during this era.4 According
to Crespi et al. (2006), about 80 percent of the EPO patents with at least
one academic inventor are not owned by the university, which means that
no statistical indication of a university’s involvement in the technology’s
creation would be found by studying the patent office records.
Thus the appearance of a lack of ‘university patents’ in Europe must
be understood to be a lack of university-owned patents, and not neces-
sarily indicative of any dearth of university-invented patents. Once the
data are corrected to take into account the different ownership structure
in Europe and in the USA, very simple calculations made by Crespi et
al. (2006) indicate that the European academic system seems to perform
considerably better than was formerly believed to be the case: indeed, the
patenting output of European universities lags behind only one among the
US universities – and in that exception the difference was quite marginal.
If there are grounds for suspecting that it may not really have been
necessary for Europe to embrace the Bayh–Dole regime’s approach to
effecting ‘technology transfers’ from academic labs to industrial firms,
there also are doubts as to whether the likelihood of innovative success
ensuing from such transactions is raised by having universities rather than
firms own the patents on academic inventions. There are theoretical argu-
ments about this, pro and con, because the issue turns essentially on the
comparative strength of opposing effects: are firms likely to make a better
job of the innovation process because they have greater control over the
development of their own inventions? Or is it less likely that viable aca-
demic inventions will be shelved if the inventor’s institution retains control
of the patent and has incentives to find a way of licensing it to a company
that will generate royalty earnings by direct exploitation?
Since the issue is one that might be settled on empirical grounds, it is
fortunate that Crespi et al. (2006) have recently carried out a statistical
analysis of the effects of university ownership on the rate of commercial
application (diffusion) of patents, and on patents’ commercial values,
based upon the experience of European academic inventions for which
patents were issued by the EPO. Their analysis controls for the different
(ex ante observed) characteristics of university-owned and non-university-
owned patents, and therefore accords with theoretical considerations
that suggest one should view university ownership of a patent as the
endogenously determined outcome of a bargaining game.5 Both before
and after controlling for such differences between patents, they find no
statistically significant effects of university ownership of patents. The only
significant (positive) effect reported is that university-owned patents are
more often licensed out, but this does not lead to an overall increase in
the rate of commercial use. Hence the authors conclude that they can find
no evidence of ‘market failure’ that would call for additional legislation
in order to make university patenting more attractive in Europe. Their
inference is that whether or not universities own commercially interesting
patents resulting from their research makes little difference, because what-
ever private economic benefits might be conveyed by ownership per se are
being adjusted for by the terms set in the inter-organizational bargaining
process. This interpretation of the findings surely should gratify admirers
of the Coase Theorem’s assertion that the locus of ownership of valuable
property does not carry efficiency implications when transactions costs are
not very high.
Nonetheless, even though impelled by misconceptions of the realities
both in the USA and in Europe, there is now a general sense that, by
following the American example, governments in the EU have forced a
reconsideration of the administrative system into which Europe’s univer-
sities had settled in the era following the rapid post-World War II pro-
liferation of new institutional foundations; and that this shock has been
on balance salutary in its effects for the longer term. Perhaps that is so.
It certainly has encouraged fresh thinking about the potential payoffs of
publicly funded research in terms of commercial innovation in small and
medium-sized industries, and of the support of applied research in areas
where new science might spawn new technologies of interest to major new
industries. It has precipitated and legitimized the assertion of university
rights to ownership of intellectual property vis-à-vis the claims of their
employees – an alteration in institutional norms that had occurred almost
universally in the USA before the 1970s. More significantly, perhaps,
it had the effect of encouraging a general re-examination of university
regulations affecting the activities of academic researchers in Europe. The
liberalization – for the benefit of universities – of many rules that had been
imposed uniformly on state institutions and their employees, in turn, has
opened the way to a broader consideration of the need for greater institu-
tional independence and autonomy. That appears to have brought more
realistic attention from university leaders to the possibilities of adopting
or creating new incentive mechanisms that would redirect individual
activities and raise productivity among those who worked within those
organizations.6

PRODUCTIVE SHOCKS AND LASTING PROBLEMATIC TENSIONS

These have been important steps toward the flexibility needed for R&D
collaborations throughout the European Research Area (ERA), even
though a considerable distance remains to be traveled by the respec-
tive national government authorities along the path towards granting
greater autonomy to their institutions; and also by consortia and regional
coalitions of the institutions themselves to remove the impediments to
collaboration and inter-university mobility of personnel that continues
to fragment the European market for academic science and engineering
researchers.
Furthermore, although European governments have not hesitated
to urge business corporations to accept the necessity of investments in
‘organizational re-engineering’ to take full advantage of new technolo-
gies and consequent new ways of working, they have not been so quick
to put this good advice into practice ‘closer to home’ – when urging
‘modernization’ upon their respective educational and research institu-
tions. Yet it is now more widely recognized that the ‘modernizing’ of
university governance and management is not a costless process, and,
like ‘business re-engineering’, requires up-front incremental expenditures
to effect the transformations that are expected to yield sustainable future
gains in the efficiency of resource use.
There is thus an obvious tension between two key assertions about
university–business interactions in many current policy recommendations,
and in the programs that seek to respond to their advice. Insistence on giving
priority to ‘market-driven’ technology transfers – based upon the licens-
ing or direct exploitation of intellectual property arising from university
research – creates impediments to inter-organizational collaboration,
and, at the very least, tends to inhibit the recommendation that universi-
ties strive to develop more frequent interpersonal collaborative contacts
to encourage exchange of scientific and technological information with
industry. That this tension remained unresolved is not surprising, but that
it continued to pass without much comment in policy circles for so long
was nonetheless unfortunate.
Most welcome, therefore, are the growing signs of a shift of thinking
in Europe, evidenced in statements such as the following from the Report
of the Forum on University-based Research (European Commission,
2005, p. 28): ‘From a
societal perspective, more will be gained by letting our universities excel
in knowledge creation while encouraging closer links with the rest of
society, than by insisting that they should fund themselves mainly through
commercializing their knowledge.’
This may intimate that the orientation of policy development for the
ERA, particularly that aiming to ‘strengthen the link between the public
research base and industry’,7 is now moving into closer alignment with
what appears to be the emergent trend in industry–university collabora-
tion in the USA. The latter, however, is not another new institutional
model. Quite the opposite, in fact, as the signs are indicating a growing
movement to recover a mode of interaction that seemed to have been all
but lost in the post-Bayh–Dole era. One harbinger of this trend reversal
might be seen in the recently announced Open Collaborative Research
Program, under which IBM, Hewlett-Packard, Intel, Cisco Systems and
seven US universities have agreed to embark on a series of collaborative
software research undertakings in areas such as privacy, security and
medical decision-making.8 The intriguing feature of the agreement is the
parties’ commitment to make their research results freely and publicly
available. Their avowed purpose in this is to be able to begin cooperative
work, by freeing themselves from the lengthy delays and costly, frustrat-
ing negotiations over IPR that proposals for such collaborative projects
typically encounter.
This development reflects a growing sense in some corporate and uni-
versity circles during the past five years that the Bayh–Dole legislation
had allowed (and possibly encouraged) too great a swing of the pendulum
towards IP protection as the key to appropriating economic returns from
public and private R&D investments alike; that the vigorous assertion of
IPR was being carried too far, so that it was impeding the arrangement
of inter-organization collaborations involving researchers in the private
and publicly funded spheres. As Stuart Feldman, IBM’s vice-president for
computer science, explained to the New York Times: ‘Universities have
made life increasingly difficult to do research with them because of all the
contractual issues around intellectual property . . . We would like the uni-
versities to open up again.’ A computer scientist at Purdue University is
quoted in the same report as having echoed that perception: ‘Universities
want to protect their intellectual property but more and more see the
importance of collaboration [with industry].’
The empirical evidence about the effects of Bayh–Dole-inspired legisla-
tion in the EU that has begun to appear points, similarly, to some negative
consequences for research collaboration. Thus a recent study has investi-
gated the effect of the January 2000 Danish Law on University Patenting
and found that it led to a reduction in academic–industry collaboration
within Denmark (Valentin and Jensen, 2007). But the new law, which gave
the employing university patent rights to inventions produced by faculty
scientists and engineers who had worked alone or in collaboration with
industry, appears also to have been responsible for an increase in Danish
biotech firms’ readiness to enter into research collaborations with scientists
working outside Denmark – an outcome that must have been as surprising
as it was unwelcome to the legislation’s proponents. Clearly, the transfer
of institutional rules from the USA to Europe is not a matter to be treated
lightly; their effects in different regimes may not correlate at all well.
It remains to be seen just how widely shared are these skeptical ‘second
thoughts’ about the wisdom of embracing the spirit of the Bayh–Dole
experiment, and how potent they eventually may become in altering
the modus of industry–university interactions that enhance ‘technology
knowledge transfers’, as distinguished from ‘technology ownership trans-
fers’. At present it is still too early to speculate as to whether many other
academic institutions will spontaneously follow the example of the Open
Collaborative Research Program. Moreover, it seems unlikely that those
with substantial research programs in the life sciences and portfolios of
biotechnology and medical device patents will find themselves impelled to
do so by the emergence of enthusiasm for such open collaboration agree-
ments on the part of drug development firms and major pharmaceutical
manufacturers.
Nevertheless, from the societal viewpoint, the issue of whether IPR
protection is getting in the way of the formation of fruitful collabora-
tions between industry and university researchers is fundamentally a
question about the conditions that would maximize the marginal social
rate of return on public investment in exploratory research. This could be
achieved by making it more attractive for R&D-intensive firms with inter-
ests and capabilities in the potential commercial applications to collabo-
rate with publicly funded academic research groups because they hoped
subsequently to exploit the knowledge base thereby created. This issue is
not unrelated to an important aspect of the concerns that have been raised
in regard to potential ‘anti-commons effects’ of the academic patenting of
research tools, and the resulting impediments to downstream R&D invest-
ment that are created not only by ‘blocking patents’, but by ‘patent thick-
ets’ formed by a multiplicity of IP ownership rights that are quite likely
to be distributed among different public research organizations (PROs).
The latter would contribute to prospects of ‘royalty stacking’ that would
reduce the prospective revenues from a technically successful innovation,
and to higher investment costs due to the transactions costs of conducting
extensive patent searches and multiple negotiations for the rights to use
the necessary set of upstream patents.9
It would seem possible to address the source of this particular problem
by allowing, or indeed encouraging, the cooperative formation of efficient
‘common-use pools’ of PRO patents on complementary collections of
research tools. While this would strengthen the bargaining position of the
collectivity of patent-owning institutions, and it would be necessary to
have supervision by the competition authorities to prevent abuses, it might
well increase the licensing of those technologies to downstream innova-
tors. Of course, it is a second-best solution from the societal viewpoint, as
the award of ownership rights on inventions that have resulted from pub-
licly funded academic research will result in a ‘deadweight loss’ – due to
the effect of the licensing charges that curtail the downstream exploitation
of those inventions.10
The specific functionality of the information-disclosure norms and
social organization of open science, which until very recently (by his-
torical standards) was strongly associated with the ethos and conduct
of academic, university-based research, rests upon the greater efficacy of
data and information-sharing as a basis for the cooperative, cumulative
generation of eventually reliable additions to the stock of knowledge.
Treating new findings as tantamount to being in the public domain fully
exploits the ‘public-goods’ properties that make it possible for data and
information to be concurrently shared in use and reused indefinitely, and
thereby promote the faster growth of the stock of reliable knowledge. This
contrasts with the information control and access restrictions that gener-
ally are required in order to appropriate private material benefits from the
possession of (scientific and technological) knowledge. In the proprietary
research regime, discoveries and inventions must either be held secret or
be ‘protected’ by gaining monopoly rights to their commercial exploita-
tion. Otherwise, the unlimited entry of competing users could destroy the
private profitability of investing in R&D.11
One may then say, somewhat baldly, that the regime of proprietary
technology (qua social organization) is conducive to the maximization
of private wealth stocks that reflect current and expected future flows of
economic rents (extra-normal profits). While the prospective award of
exclusive ‘exploitation rights’ has this effect by strengthening incentives
for private investments in R&D and innovative commercialization based
on the new information, the restrictions that IP monopolies impose on the
use of that knowledge perversely curtail the social benefits that it will yield.
By contrast, because open science (qua social organization) calls for liberal
dissemination of new information, it is more conducive to both the maxi-
mization of the rate of growth of society’s stocks of reliable knowledge
and to raising the marginal social rate of return from research expendi-
tures. But it, too, is a flawed institutional mechanism: rivalries for priority
in the revelation of discoveries and inventions induce the withholding of
information (‘temporary suspension of cooperation’) among close com-
petitors in specific areas of ongoing research. Moreover, adherents to open
science’s disclosure norms cannot become economically self-sustaining:
being obliged to quickly disclose what they learn and thereby to relinquish
control over its economic exploitation, their research requires the support
of charitable patrons or public funding agencies.
The two distinctive organizational regimes thus serve quite different pur-
poses within a complex division of creative labor, purposes that are com-
plementary and highly fruitful when they coexist at the macro-institutional
level. This functional juxtaposition suggests a logical explanation for their
coexistence, and the perpetuation of institutional and cultural separations
between the communities of researchers forming ‘the Republic of Science’
and those who are engaged in commercially oriented R&D conducted
under proprietary rules. Yet these alternative resource allocation mecha-
nisms are not entirely compatible within a common institutional setting; a
fortiori, within the same project organization there will be an unstable com-
petitive tension between the two and the tendency is for the more fragile,
cooperative micro-level arrangements and incentives to be undermined.

SOCIAL INTERACTIONS ACROSS ORGANIZATIONAL BOUNDARIES AND THE FACILITATION OF COLLABORATIVE RESEARCH

Beyond the overtly commercial and explicitly contractual interactions
involving IP, whose role at the macro-system level in supporting R&D
investment and innovation tends to be accorded prime place in general
policy prescriptions, the importance of other channels of ‘interaction’ with
business is often stressed in discussions of what the leadership of Europe’s
universities should be doing in that regard. Prominent on this list are the
variety of interpersonal and inter-organizational connections that bring
participants in academic research into regular contact with members of
the local, regional and national business communities. Under the heading
‘The role of the universities in promoting business–university collabora-
tion’, the Lambert Review (2003, p. 41), for example, remarked on the
growing role that universities (in the UK) have taken in their cities and
regions during recent decades:

Vice-chancellors often have links with the CEOs of major local companies, with
chambers of commerce, with their development agency and with NHS Trusts
and other community service providers in their region. Academics work with
individual businesses through consultancy, contract or collaborative research
services. University careers services cooperate with the businesses which wish to
recruit their graduates or provide work placements for their students.

The trend toward organized institutional involvement – as distinct
from personal connections between university professors and industrial
and financial firms in their locale – is indeed an ongoing process for many
of Europe’s HEIs (higher education institutions). But the reader of the
Lambert Review, who was familiar with the US university scene, especially
that among the public (land grant) institutions, would be struck by its sug-
gestion of the novelty of top-level administrators having links with CEOs
and local business leaders, inasmuch as this would be presumed to be the
case for their American counterparts.
Also noteworthy, as a reflection of the ‘top-down’ impetus for the
establishment of such relationships, is the quoted passage’s emphasis on
the cooperation of university careers services with recruiters from business
firms. At most major US research universities – where the organized place-
ment services of the professional schools, as well as those of the under-
graduate colleges, have long been established – the important recruiting
contacts with graduate scientists and engineers are typically arranged at
the level of the individual departments, and often are linked with a variety
of ‘industrial affiliates’ programs. This is significant in view of the expert
screening functions that are implicitly performed for potential employers
by universities’ graduate educational programs and faculty supervisors.
Screening of scientific and engineering talent, as well as assessment of
graduate and postdoctoral research contributions, is a publicly subsidized
service (provided as it is without fee) that is especially valuable for com-
panies seeking promising researchers who have been working in frontier
areas. All but the largest firms are likely to lack the internal expertise,
and are unable to arrange the extended opportunities that internships
could provide for observation and evaluation of the capabilities of current
graduates. Even the large R&D-intensive corporations find it worthwhile
to contribute to the ‘industry associates programs’ of leading departments
in the sciences and engineering, if only for the opportunities these afford
to form contacts with advanced students and younger faculty who may be
approached with offers of employment in the near future.12 The formation
of enduring ties for the transfer of knowledge through the movement of
personnel gives business organizations access to the craft aspects of apply-
ing new techniques, contacts with new recruits’ personal network of other
young researchers, and an advantage in spotting exceptional capabilities
to conduct high-caliber research. Such ties are sustained by personal rela-
tionships with the student’s professors, and strengthened by ‘repeat play’,
which tend to inhibit the latter’s inclination to ‘oversell’ members of their
current crop of PhDs and postdocs; similarly, the prospects of having to
try to recruit next year from the same source work to induce the firms to
be more candid in describing the nature of the employment opportunities
that professors may recommend to their good students. The point here is
that the direct participation of the parties, rather than institutionally pro-
vided third-party intermediation services, will generally be a requirement
for successful ‘relationship management’ in the market for young research
talent.
Perhaps the greater prevalence of such arrangements that can be
observed in science and engineering departments and research groups
at US universities can be attributed to the greater degree of autonomy
that university administrations there have allowed to these units, permit-
ting them (and indeed providing them with initial help) to create special
programs of lectures, seminars and gatherings of ‘industry associates’
by soliciting and using funds contributed by the business invitees who
participate as sponsors of those events. Initiatives of this kind, it must be
said, are also an aspect of the traditions of local community and regional
involvement that were developed in the agricultural and engineering
schools of state (public land grant) universities in America. This form of
direct engagement with the society beyond the precincts of the academy
has been further reinforced and extended to the private HEIs in the USA
by the generally more intense competition among them in the placement of
graduating students in national and regional job markets – which is espe-
cially pronounced in the cases of the professional schools and graduate
science and engineering faculties.
Whatever the precise sources of these contrasts may be, the obvious
suggestion to be registered here is that interesting interactions and pro-
ductive engagements of this kind arise under conditions that have allowed
greater scope for initiative, and attached rewards to actions taken not by
vice-chancellors, but at the levels within these institutions where one is
most likely to find the specific information and technical judgments about
the subjects of mutual interest to academic researchers and knowledge-
seeking corporate personnel. It implies also that, when they work suc-
cessfully, they do so within an ecological niche that provides a web of
supporting connections and mutually reinforcing incentives that need to
be studied and understood before attempting to transplant and adapt this
important mechanism for ‘connectivity’ in new institutional and cultural
settings.13
It is only reasonable that considerable effort will be needed in order to
properly align mutual expectations among the parties to a collaboration
when they approach the negotiating table with quite different, and con-
flicting, goals that have been organizationally mandated. Nevertheless, the
extent to which that investment is undertaken by both sides does appear
to shape strongly whether, and how well, business–university research col-
laborations turn out to benefit both parties, and whether they are able to
evolve into more enduring ‘connected’ relationships. When one starts the
alignment process at the upper echelons of the administrative hierarchies
of organizations that are differentiated in their purposes and concerns
as business companies and universities, the conflicts are likely to appear
most salient and the prospective negotiation process more difficult and
protracted, and uncertain in outcome, whereas the existence or absence of
common interests and appreciation of the magnitude and division of pro-
spective gains from cooperation usually will be quite readily established.
The question then is whether the benefits in terms of the enhanced capacity
to carry out the projected line of research are deemed sufficiently impor-
tant to their respective (academic and business) organizations that mutual
accommodations will be reached to ‘make it happen’.
In the organizational structure of most research universities, the
upper levels of administration typically have at best only a derived inter-
est in pursuing the particular substantive research programs that animate
members of their research faculty, and are likely to eschew any attempt to eval-
uate and prioritize among them on the basis of their comparative scientific
interest or societal worth. Accordingly, university administrators rarely
if ever approach firms with proposals to engage in particular research
projects that would involve collaborations between specified groups or
individual faculty scientists and engineers and counterparts employed in
the business R&D labs. Instead, the research director of a company that
has decided that sponsoring a collaborative project with certain university-
based research scientists would be beneficial to the organization’s ‘bottom
line’ will usually have authority to take the initiative of approaching the
prospective academic partners to discuss such an arrangement. But, as the
latter, in their capacities of research faculty members rather than officers
of the university, do not usually have corresponding authority to negotiate
formal inter-organizational agreements, the business firm’s representatives
find themselves told that they must deal with the university administration,
and more precisely with one or a number of ‘service units’ within the insti-
tution (variously described as the office of external relations, ‘sponsored
research office’, ‘university research services’, ‘technology transfer office’,
all of which will in one way or another be equipped with legal counsel and
contract negotiators).
Reasonable as this may appear as a procedure reflecting the different
specializations of the people whose expertise the university calls upon,
problems with its operation in practice often arise precisely because
the primary concerns of these specialized services typically have little
to do with the specifics of the professors’ interests in the research col-
laboration.14 Rather, their professional purpose is to secure such financial
benefits as can be extracted by ‘the university’ (directly or indirectly) in
exchange for agreeing that its facilities and faculty resources will be per-
mitted to perform their part of the contemplated collaborative work, and
that the university will bear responsibility should they refuse to perform in
accordance with the terms of the contract. Their competence and role also
require their performance of ‘due diligence’ – by trying to identify all the
conceivable risks and costs that could stem from their institution’s expo-
sure to legal liabilities and adverse publicity occasioned by participating in
the proposed collaboration.
The uncertainties about the nature of the products and processes of
research, conjoined with the professional incentives of those charged with
performing ‘due diligence’ – and their inability to calculate the counter-
vailing value of the losses entailed in not doing the research – tend to
promote behaviors that reflect extreme risk aversion. In other words, these
agents of the university are predisposed to advocate and adopt a tough
bargaining stance, trying to get the other collaborating party (or parties)
to bear the liabilities, or the costs of insuring against them; and when that
appears to be infeasible, they are not hesitant to counsel that the project
not be undertaken by their institution. That this can be an unwelcome
surprise to corporate representatives who were under the impression that
‘the university’ would be symmetrically responding to the interest of the
faculty counterparts of their own research group is perhaps responsible
for the shocked and disparaging terms in which research directors of large,
R&D-intensive US companies relate their experiences in negotiations with
universities over the IPR to joint R&D ventures.15
What happens in such cases appears to depend upon whether or not
the faculty researchers who are keen to do the science are able to per-
suade people at some higher levels in the university administration that
it would not be in the institution’s long-term interest to refuse to allow
their research groups to seize the opportunity of a collaboration with
the firm in question. When the individuals concerned are valued by their
university administration, whether for their academic prestige or for
their ability to recruit talented young faculty, or for their track record of
success in securing large public research grants and the overhead support
that these bring, their persuasive efforts to find a compromise arrange-
ment in which the university does not try to extract the maximum con-
cessions from the firm, or bears more of the risk than its lawyers think
is prudent, are likely to be successful. This is especially likely if there is
a credible threat that professors will go to another research institution –
where, as the formulaic expression puts it in such conversations, they ‘will
feel really wanted’.
The point of entering into these seemingly sordid details is to highlight
the way that complex innovation systems emerge. In the case at hand it
will be seen that more active competition among research institutions for
productive scientists – especially where it receives additional impetus from
the usefulness of their talents in their university’s competition for public
research funding – will have the indirect effect of working as a counter-
vailing force against the internal organizational impediments to the
formation of spontaneous ‘connectivity’ between academic and business
researchers. Regulatory structures that permit universities to compete to
attract and retain research faculties that have attained great peer esteem,
and public research funding programs whose allocation criteria give weight
to excellence and thereby provide high-level administrators with justifica-
tions for being seen to depart from risk-averse institutional guidelines in
order to accommodate those individuals’ pursuit of interesting research
opportunities, therefore affect positively the formation of university–
industry connections that are likely to give rise to future innovations.
The perspective thus gained might be contrasted favorably with the
thrust of the enthusiastic notice given by the Lambert Review (2003, p. 42)
to the recent trend toward the opening of ‘corporate liaison offices’ at UK
universities:
Academic–business research collaborations 87

Partly in recognition of the number and complexity of these [business–


university] relationships, many universities have developed corporate or busi-
ness liaison offices, with a specific remit to act as the interface with business.
These offices have taken on an increasing number of tasks as universities’
engagement with their wider community has developed. These include develop-
ing networks of businesses; marketing the research strengths of the university;
advising on consultancy agreements and contract research; arranging complex
collaborative research agreements or major joint ventures.

For the university to present business corporations’ representatives
with a well-organized corporate academic face, and a central office whose
concerns are regulation of external relationships and internal manage-
ment control of the exploitation of the university’s marketable ‘knowl-
edge assets’, may succeed in making European upper-level executives at
both institutions feel increasing ‘at home’ in their new contacts. Yet this
organizational measure strikes one as perhaps neither so important nor
so well designed to respond to the challenge of drawing R&D managers
and research personnel into dense and fruitful networks of knowledge
exchange with university-based experts.16
Viewed against these findings, the emphasis that was placed by the
text of the Lambert Review upon the mission (‘remit’) of the newly
established corporate liaison office to be the university ‘interface with
business’ is quite striking. To have a liaison officer advising firms of
the formal requirements the university will impose upon consultancy
agreements and contract research, particularly those involving complex
collaborative research agreements, certainly is appropriately instruc-
tive when there is no room for flexibility. Yet putting this function
in the hands of a central liaison office encourages pre-commitment
of the university to the inflexibility of ‘standard-form contracts’, and
thus tends to reduce the scope for exploring a variety of possible legal
arrangements for the assignment of intellectual property rights, obliga-
tions and liabilities that would be responsive to the particular needs of
the research collaborators, as well as the concerns of the participating
corporate entities. Liaison officers, as the agents of university admin-
istrations, are likely to have much stronger career incentives to attend
to the priorities of those responsible for monitoring and regulating the
formalities of the university’s external transactions than to seek ways
of fulfilling the actual research raison d’être that provides the impetus
for the formation of successful and more sustained inter-organizational
connections.17 To appreciate the tangled lines of influence and indirect
effects is to recognize why systems analysis is so necessary in the diag-
nosis of institutional problems and the design of corrective measures for
the innovation process.

‘ONLY CONNECT’ – TOWARDS FOSTERING A VIBRANT ORGANIZATIONAL ECOLOGY OF INNOVATION

To form ‘a system of innovation’, the organizations and the individuals
in them have to interact in a way that contributes solutions to innovation
problems. Systems depend on connections (interactions) and cannot be
described or understood simply in terms of their components. What is at
stake here is an idea that goes back to Alfred Marshall’s concept of the
internal and external organization of a firm (in Industry and Trade, 1919,
and Principles of Economics (8th edition) 1920). Flows of knowledge from
outside the firm’s boundaries are important determinants of its capabili-
ties and actions, but this information is not simply ‘in the ether’. A firm
has to invest in the organization to gather this information and feed it into
and adapt it to its internally generated information.
Innovative activity is perhaps the most important case of the firm’s reli-
ance on external sources of information, and leads to the idea that the firm
is embedded in a wider matrix of relations that shape its ability to inno-
vate. Hence the concern for the various ways in which universities may
contribute to the innovation process that we have outlined above. But it
is important to recognize that a firm’s internal and external organization
constitutes an operator that is simultaneously facilitating and constrain-
ing. The codes and information structuring routines that firms invest in to
interact with other external sources of information may also serve to filter
and blinker the firm’s appreciation of the information that is important
and that which isn’t (Arrow, 1974). Thus the innovation systems that a
firm is part of are not always plastic in the face of changes in the knowl-
edge environment; and, as a consequence, they may fail because their reading
of new information is deficient. We should not lose sight of the probability
that an innovation system generated to solve one set of problems may
prove counterproductive in the context of a new and different set of prob-
lems, which is why the processes for flexibly assembling and disassembling
specific innovation systems add greatly to the adaptability of any economy
(see Metcalfe, 2007).
The policy problem may then be put starkly: ‘Is it possible to improve
on the spontaneous self-organization process of the already existing and
refined interaction between firms and research universities in Europe?’
That the answer may be in the affirmative suggests that the innovation
policy response falls into two related branches:

1. Policy to improve the chances of innovation systems being formed
from the knowledge ecologies of the member states, a problem that is
largely about barriers and incentives to collaborate in the solution of
innovation problems.
2. Policy to improve the quality of the knowledge ecologies in the
member states assessed in terms of the overall supply of researchers
in different disciplines and the way in which they are organized to
produce knowledge.

The preceding sections have focused on the role of research universities,
and less so on other public research organizations in relation to these two
policy problems. But two general points should be recognized to underlie
the whole discussion: it is business firms that occupy the central role in
the realization of innovations, and it is the mix of market and non-market
interactions that shapes the incentives, the available resources and the
opportunities to innovate. Innovation, obviously, is more than a matter
of invention, and so it is particularly important not to equate innova-
tion policy with policy for science and technology. University–business
linkages form only part of this system and their influence on innovation
cannot be independent of the many other factors at play.
Thus, for example, the competitive implications of the single market will
influence the incentives to innovate, whether interpreted as opportunities
or threats to a firm’s position. Consequently, competition policy is de facto
an important component of a broad innovation policy just as innovation
policy is de facto an important component of competition policy. The
fact that the knowledge ecology of the EU has been changing rapidly in
the past two decades, and that there are important differences in the rich-
ness of the ecologies in different member states, adds further problems in
understanding the implications for the innovation process.
The prevailing division of labor in the European knowledge ecology has
not arisen by chance, but rather as a reflection of many years of evolution
in the comparative advantages of different organizations in producing
and using knowledge. Firms, for example, have evolved in ways quite dif-
ferent from universities because they perform different sets of tasks and
fulfill quite different societal functions. This division of labor needs to be
respected and understood, for it would be as foolish to make universities
behave like firms as it would be economically disastrous to make private
firms operate like universities. The differences in their respective modes of
operation are not accidental, but have a functional purpose.
The origins of the current ecology, of which the governance of uni-
versities is a part, can be traced back to a historical epoch when the
knowledge foundations of industrial processes owed little to systematic
scientific understanding and the formal organization and conduct of R&D
activities. The modern age is different, however: the great expansion of
organized public and private science and engineering research activities
that took place during the second half of the twentieth century, and accel-
erated the shift in the structures of the ‘industrialized’ economies toward
‘services’ and away from commodity production, are two important
transformations that have in a sense made the university as an institution
appear to be, at least outwardly, less distinct from other corporate entities
than was formerly the case.18
The relevant issue, then, must remain how best to achieve coordination
of this division of labor and thereby enhance innovation processes. As we
have explored above, the different ‘cultures’ of business and the public
research sector need special attention. The distinguishing feature of fun-
damental research in science and technology is its open nature, its nature
as a science commons (see Cook-Deegan, 2007; Nelson, 2004). Open
science (including engineering technology) is a collective endeavor that
bases the reliability of the knowledge production processes on widespread
agreement as to methods of evaluation and replication, but bases radical
progress of knowledge on disagreement, the scientific equivalent of crea-
tive enterprise. This tension between order and agreement, on the one hand,
and change and disagreement, on the other, is at the core of the institutions that shape science.
Similarly, in regard to commercial innovation, disagreement is the
defining characteristic of any significant innovative enterprise that is nec-
essarily based on a conjecture that imagines that the economic world can
be ordered differently. It is the open market system that facilitates adapta-
tions to such disagreement and generates powerful incentives to disagree:
the instituted procedures of science and business are open ‘experiment
generating systems’; both work within different principles of order and
both depend for their progress on the productive channeling of disagree-
ment. The consequences are that the knowledge-generating and using
processes of businesses and of PROs operate with different cultures, dif-
ferent value systems, different time frames, and different notions of what
their principal activities are. Thus the principal outputs of universities
are educated minds and new understandings of the natural and artificial
worlds, economy, society and so on. The outputs of business are different,
and involve new understandings of productive and commercial processes
for the purpose of producing outputs of goods and services to be sold
at a profit. Universities operate with one kind of governance system to
achieve their aims, private firms with quite different governance systems,
and these differences materially influence their interactions in the pursuit
of innovation. As has been pointed out, this results in very different norms
for the production and sharing of knowledge within and between the two
systems.
In both business and the academy, positive feedback processes are in
operation so that success breeds success. The profits from existing activi-
ties that provide the basis for subsequent innovation in a firm have their
equivalent in the university in terms of research reputations that serve to
attract high-quality staff and funding. Indeed, the institutions of science
are partly designed to create and reinforce this process. The currently
articulated attempts by some member states to accelerate this reputation
effect through the competitive allocation of teaching and research funds
are bound to further concentrate reputations on a relatively small number
of universities.
Because there are strong potential complementarities between the
conduct of exploratory, fundamental research in institutions organized
on the ‘open science’ principle, and closed proprietary R&D activities in
the private business sector, it is doubly important to establish market and
non-market arrangements that facilitate information flows between the
two kinds of organization. The returns on public investment in research
carried on by PROs can be captured through complementary, ‘valorizing’
private R&D investments that are commercially oriented, rather than by
encouraging PROs to engage in commercial exploitation of their knowl-
edge resources. This is why the strategy that has been expressed in the EU’s
Barcelona targets is important: by raising the rate of business investment
in R&D, Europe can more fully utilize the knowledge gained through its
public research and training investments, and correspondingly capture the
(spillover) benefits that private producers and consumers derive from the
application of advances in scientific and technological knowledge.
Knowledge transfer processes can be made more effective by attention
to the arrangements that are in place at the two main points of the public
research institutions’ connections with their external environments. That
a research institute or a university may acquire the attributes of an iso-
lated, inward-looking ‘ivory tower’ is well understood, and their internal
processes in many cases tend to encourage this. Universities in the EU are
frequently criticized for operating with internal incentive structures that
reward academic excellence in teaching and research independently of
any potential application to practice in the business or policy realms. This
concern is reflected in the newly attributed ‘third stream’ or ‘triangulation’
of the university system, defined as ‘the explicit integration of an economic
development mission with the traditional university activities of scholar-
ship, research and teaching’.19 Third-stream activities are of many different
kinds, and here it is important to distinguish those activities that seek the
commercialization of university research (technology licenses, joint ven-
tures, spin-offs and so on) from activities of a more sociopolitical nature
that include professional advice to policy-makers, and contributions to
cultural and social life (see OEU, 2007). What is significant about the
current debate is the emphasis on the commercialization activities. What
is less well understood, and possibly will remain elusive, is how to design
institutional arrangements that successfully support commercialization
while not inhibiting the performance of research and teaching functions
that are the primary social raison d’être for the continued maintenance of
the universities as a distinctive organizational and cultural form.

A SUMMARY

Researchers in Western universities do make important, fruitful connec-
tions with business firms, and indeed have done so for many, many years.
But the pressures and changes that Europe’s universities now face in
markedly different innovation ecologies have raised questions that focus
attention on the purposes and efficacy of the current extent and modes
of university–business interactions. Innovation ecologies only form into
innovation systems when the different organizations from the ecology
are connected for the purpose of solving innovation problems. Since uni-
versities and firms are part of a complex division of labor in which each
has evolved unique characteristics relative to their primary functions, it
is not to be wondered at that the practices that support these functions
do not automatically facilitate interactions among these differentiated
organizations.
Public and private policy consequently has an important
role to play in respect of the richness and diversity of Europe’s innova-
tion ecology, and with respect to the ways in which connections can be
formed and re-formed to promote a higher rate of innovation. But a fresh
look, and a ‘rethink’ within an explicit ‘systems’ or organizational ecology
framework is in order, because some of the main institutional innovations
that have been promoted with a view to enhancing the exploitation of uni-
versity research do not seem to be the most beneficial ways of ensuring that
university knowledge is translated into greater economic wealth. Indeed,
continuing to seek to overcome the barriers to connecting publicly funded
research conducted in academic institutions with commercial application,
by having those organizations become dependent upon commercialization
of research findings and behave as proprietary performers of R&D,
simply is not sensible. It would jeopardize the open science arrangements
that are more effective for the conduct of fundamental, exploratory
research – a function that must be fulfilled by some institution if a basis for
long-run productivity growth is to be sustained.
Policies that would add to existing pressures on academic communities
and their leaders to take on new and different missions, for which their
historical evolution and specialized characteristics have not equipped
them, run the risk of damaging their ability to fulfill critical functions
that no other organizations in the society are prepared to perform with
comparable effectiveness. The recognition of a need for new missions in
the generation and transmission of knowledge suited to solve problems of
innovation in the economy, therefore, should redirect attention to more
creative solutions. These are likely to involve the development of alterna-
tive equally specialized bridging organizations that would gain expertise in
the forging of diverse inter-organizational links between the worlds of the
academy and the worlds of business.
To explore that option as a promising way forward, however, lies well
beyond the scope of the present chapter and so must be left to stand as a
grand challenge for ‘mechanism design’.

ACKNOWLEDGMENT

This chapter was previously published in Research Policy, Vol. 35, No. 8,
2006, pages 1110–21.

NOTES

1. It is unnecessary to review the details of these misunderstandings, which are discussed
in David (2007). For a comparison of the questionable effects of Bayh–Dole on the
licensing activities of three major US universities see Mowery et al. (2001).
2. Balconi et al. (2004); Geuna and Nesta (2006); Crespi et al. (2006).
3. The statistics presented by Balconi et al. (2004) in table 3 refer to 20 specific science
and engineering research fields in which at least 20 academic inventors (of all nationali-
ties) could be observed in the EPO patent data for the years 1978–99. The proportions
referred to in the following text pertain to Italian academic inventors as a fraction
of total faculty enrolments in the corresponding fields at Italian universities and
polytechnics on 31 October 2000.
4. Paradoxically, this was the practice despite the fact that at that time Italian universities
had titular rights to own the patents filed by their employees, which was anomalous in
the context of the German, Dutch and other national universities at the time; the prac-
tice in Italy removed the anomaly by permitting professors to assign the rights directly
to industrial companies – a practice that was subsequently ratified by a change in the
Italian law. That change seemed, quixotically, to run against the stream of Bayh–Dole-
inspired ‘reforms’ that were under way in other nations’ university systems at the time,
giving patent rights formerly held as the professors’ prerogatives, to their employers.
Operationally, however, the Italian reform was more in accord with the intention
of facilitating the transfer of new technologies to industry, legalizing the way it had
previously been done.
5. The identity of the parties in such bargains, of course, is defined by the regulations
governing the initial assignment of the inventor’s patent rights, which, in the general
situation, reside in the first instance with the institution at which the R&D work was
conducted. Italy is the signal exception in the EU, as changes in Italian law placed the
patent in the hands of the inventing professor(s). In principle the latter could assign the
rights to a university, which could in turn bargain with a firm over the terms of a license
to exploit it.
6. In this regard it is significant that the latter considerations led the Italian government
to award ownership rights in patents to their faculty employees, whereas the industrial
treatment of ‘work for hire’ by employed inventors was applied to university faculty
by all the other European states. Thus, in Denmark, PROs including universities were
given the rights to all inventions funded by the Ministry of Research and Technology
(in 1999); French legislation authorized the creation of TTOs (technology transfer
offices) at universities (in 1999), and university and PRO assertion of rights to employee
inventions was ‘recommended’ by the Ministry of Research (in 2001); the ‘professor’s
privilege’ was removed in Germany by the Ministry of Science and Education (in 2002);
in Austria, Ireland, Spain and other European countries the employment laws have
been altered to remove the ‘professor’s exemption’ from the assignment to employers
of the IP rights to the inventions of their employees. See OECD (2003); Mowery and
Sampat (2005).
7. The quoted phrase is the single most frequently cited national policy development
among those listed in a country-by-country summary of the 25 EU member states’
‘National policies toward the Barcelona Objective’, in European Commission (2003),
Table 2.1, pp. 29ff.
8. See Lohr (2006). The universities involved are UC Berkeley, Carnegie Mellon,
Columbia University, UC Davis, Georgia Institute of Technology, Purdue University
and Rutgers University.
9. For further discussion of the literature on the economics of the so-called ‘anti-
commons’, and the critical importance of ‘multiple-marginalization’ as a source of
inefficiency that is potentially more serious than that which would result from the
formation of a cartel, or profit-maximizing pool among the holders of complementary
patents, see David (2008).
10. There was something not so foolish, after all, in the old-fashioned idea of upstream
public science ‘feeding’ downstream research opportunities to innovative firms. The
worry that this will not happen in the area of nanotechnology (see Lemley, 2005) brings
home the point about the unintended consequences of the success of national policies
that aimed at building a university-based research capacity in that emerging field. The
idea was not to allow domestic enterprise to be blocked by fundamental patents owned
by other countries. That they might now be blocked by the readiness of PROs on their
home terrain seeking to exploit their control of those tools is a disconcerting thought.
For points of entry into the growing economics literature on the impact of academic
patenting upon exploratory research investments, and the ‘anti-commons’ question
(specifically, the ambiguities of recent empirical evidence regarding its seriousness), see
David (2003); Lemley and Shapiro (2007).
11. This and the following discussion draw upon Dasgupta and David (1994) and David
(2003).
12. The value of the screening function for employers is the other side of the coin of the
‘signaling’ benefits that are obtained by young researchers who trained and chose to
continue in postdoctoral research positions in academic departments and labs where
publication policies conform to open science norms of rapid and complete disclosure.
On job market signaling and screening externalities in this context see, e.g. Dasgupta
and David (1994), section 7.1, pp. 511–513.
13. This caution might be subsumed as part of the general warning against the ‘mix-and-
match’ approach to institutional reform and problem selection in science and policy-
making, a tendency that is encouraged by international comparative studies that seek
to identify ‘best practices’, as has been pointed out by more than one observer of this
fashionable practice (but see, e.g., David and Foray, 1995). Examining particular insti-
tutions, organizational forms, regulatory structures, or cultural practices in isolation
from the ecologies in which they are likely to evolve, and searching for correlations
between desired system-level outcomes and their presence in the country or regional
cross-section data, has been fashionable but as a rule offers little if any guidance about
how to move from one functional configuration to another that will be not only viable
but more effective.
14. The difficulties occasioned by this internal organizational structure of universities,
which contributes to separating the interest of the institution as a ‘research host’
from that of its faculty researchers, thereby placing these research ‘service units’ in a
regulatory role vis-à-vis the latter, are considerable. But they are far from arbitrary or
capricious, in view of the potential legal complexities that contractual agreements for
collaborative research performance may entail. For further discussion see David and
Spence (2003).
15. See the 2003 survey results reported by Hertzfeld et al. (2006). See also David (2007),
esp. table 1 and text discussion.
16. It is consequently a bit surprising to find the following statement, attributed to the
Lambert Review of Business–University Collaboration (HM Treasury, 2003), p. 52, n.
110: ‘Indeed, the best forms of knowledge transfer involve human interaction, and
European society would greatly benefit from the cross-fertilization between university
and industry that flows from the promotion of inter-sectoral mobility.’
17. These issues are examined in some detail in David and Spence (2003).
18. While this does not imply that other institutions and organizations are more inter-
changeable with the universities in the performance of a number of the latter’s key
functions in modern society, it has contributed to the recent tendency of some observers
to suggest that universities as deliverers of research and training services might be more
effective if they emulated business corporations that perform those tasks.
19. See Minshull and Wicksteed (2005). Activities of this nature are not linked solely
to academy–industry interactions. The tripartite missions in health care to link bio-
medical research with clinical service delivery and clinical education across hospitals
and university medical schools have been widely adopted in the USA and UK. In
the latter they are known as academic clinical partnerships, and they provide the
framework within which much NHS-funded research is carried out. See Wicksteed
(2006).

REFERENCES

Arrow, K.J. (1974), The Limits of Organization, London and New York: W.W.
Norton.
Balconi, M., S. Breschi and F. Lissoni (2004), ‘Networks of inventors and the role
of academia: an exploration of Italian patent data’, Research Policy, 33 (1),
127–45.
Cook-Deegan, R. (2007), ‘The science commons in health research: structure,
function and value’, Journal of Technology Transfer, 32, 133–56.
Crespi, G.A., A. Geuna and B. Verspagen (2006), ‘University IPRs and knowl-
edge transfer. Is the IPR ownership model more efficient?’, presented to the
6th Annual Roundtable of Engineering Research, Georgia Tech College of
Management, 1–3 December, available at http://mgt.gatech.edu/news_room/
news/2006/reer/files/reer_university_iprs.pdf.
Dasgupta, P. and P.A. David (1994), ‘Toward a new economics of science’,
Research Policy, 23, 487–521.
David, P.A. (2003), ‘The economic logic of “open science” and the balance
between private property rights and the public domain in scientific data and
information: a primer’, in J. Esanu and P.F. Uhlir (eds), The Role of the Public
Domain in Scientific and Technical Data and Information: A National Research
Council Symposium, Washington, DC: Academy Press.
David, P.A. (2007), ‘Innovation and Europe’s academic institutions – second
thoughts about embracing the Bayh–Dole regime’, in F. Malerba and S. Brusoni
(eds), Perspectives on Innovation, Cambridge: Cambridge University Press.
(Available as SIEPR Policy Paper 04-027, May 2005, from http://siepr.stanford.
edu/papers/pdf/04-27.html, accessed January 2008.)
David, P.A. (2008), ‘New moves in “legal jujitsu” to combat the anti-commons’,
Keynote presentation to the COMMUNIA Conference on the Public Domain
in the Digital Age, Louvain-le-Neuve, Belgium, 30 June–1 July, available at
http://communia-project.eu/node/115.
David, P.A. and D. Foray (1995), ‘Accessing and expanding the science and tech-
nology knowledge base’, STI Review: O.E.C.D. Science, Technology, Industry,
16 (Fall).
David, P.A. and M. Spence (2003), Towards Institutional Infrastructures for
e-Science: The Scope of the Challenges, A Report to the Joint Information Systems
Committee of the Research Councils of Great Britain, Oxford Internet Institute
Report No. 2, September, available at http://www.oii.ox.ac.uk/resources/publi-
cations/OIIRR_E-Science_0903.pdf, last accessed 28 January 2008.
European Commission (2003), ‘National policies toward the Barcelona Objective’,
in Investing in research: an action plan for Europe, Brussels EUR 20804 (COM,
2003, 226 final).
European Commission (2005), ‘European universities: enhancing Europe’s
research base’, Report by the Forum on University-based Research, EC-DG
Science and Society, May, available at http://eur-lex.europa.eu.Result.
do?idReg=14&page=7.
European Commission (2007), ‘EC Staff Working Document’ (COM, 2007,
161/2).
Geuna, A. and L. Nesta (2006), ‘University patenting and its effects on academic
research: the emerging European evidence’, Research Policy, 35 (June–July),
[P.A. David and B.H. Hall (eds), Special Issue on Property and the Pursuit of
Knowledge: IPR Issues Affecting Scientific Research].
Hertzfeld, H.R., A.N. Link and N.S. Vonortas (2006), ‘Intellectual property pro-
tection mechanisms in research partnerships’, Research Policy, 35 (June–July)
[P.A. David and B.H. Hall (eds), Special Issue on Property and the Pursuit of
Knowledge: IPR Issues Affecting Scientific Research].
HM Treasury (2003), The Lambert Review of Business–University Collaboration,
London: HMSO, available at http://www.lambertreview.org.uk.
Lambert, Richard (2007), Lambert Review in EC Staff Working Document (COM,
2007, 161/2).
Lemley, M.A. (2005), ‘Patenting nanotechnology’, October, available at: http://
siepr.stanford.edu/programs/SST_Seminars/index.html.
Lemley, M.A. and C. Shapiro (2007), ‘Royalty Stacking and patent hold-up’,
January, available at http://siepr.stanford.edu/programs/SST_Seminars/index.
html.
Lohr, Steven (2006), ‘I.B.M. and university plan collaboration’, New York Times, 14
December, available at http://www.nytimes.com/2006/12/14/technology/14blue.
html.
Marshall, A. (1919), Industry and Trade, London: Macmillan.
Marshall, A. (1920), Principles of Economics, London: Macmillan.
Metcalfe, J.S. (2007), ‘Innovation systems, innovation policy and restless capital-
ism’, in F. Malerba and S. Brusoni (eds), Perspectives on Innovation, Cambridge:
Cambridge University Press.
Minshull, T. and B. Wicksteed (2005), University Spin-Out Companies: Starting to
Fill the Evidence Gap, Cambridge: SQW Ltd.
Mowery, D.C. and B.N. Sampat (2005), ‘Bayh–Dole Act of 1980 and university–
industry technology transfer: a model for other OECD governments?’, Journal
of Technology Transfer, 30 (1–2), 115–27.
Mowery, D.C., R.R. Nelson, B. Sampat and A.A. Ziedonis (2001), ‘The growth
of patenting and licensing by US universities: an assessment of the effects of the
Bayh–Dole act of 1980’, Research Policy, 30, 99–119.
Nelson, R.R. (2004), ‘The market economy and the scientific commons’, Research
Policy, 33, 455–71.
Observatory of the European University (OEU) (2007), Position Paper, PRIME
Network: http://www.prime-noe.org.
OECD (2003), Turning Science into Business: Patenting and Licensing at Public
Research Organizations, Paris: OECD.
Valentin, F. and R.L. Jensen (2007), ‘Effects on academia–industry collaboration
of extending university property rights’, Journal of Technology Transfer, 32,
251–76.
Wicksteed, S.Q. (2006), The Economic and Social Impact of UK Academic Clinical
Partnerships, Cambridge: SQW.co.uk.
3. Venture capitalism as a mechanism
for knowledge governance1
Cristiano Antonelli and Morris Teubal

1. INTRODUCTION

New dedicated capital markets specialized in the public transactions of the
stocks of ‘science-based companies’ emerged in the USA during the 1970s.
These new financial markets enable the anticipation of returns stemming
from the economic applications of technological knowledge, bundled with
managerial competence, but not embodied in either capital or intermedi-
ate goods. As such, the financial markets have, for the first time in history,
promoted the creation and growth of a specialized segment of ‘inventor’
companies and favored public transactions in technological knowledge as
an activity per se.
These new financial markets are becoming a key component of an
innovation-driven novel institutional system termed ‘venture capital-
ism’. This is key for a new model of ‘knowledge-based’ growth relevant
not only for information and communication technologies but also for
biotechnologies and new radical technologies at large (Perez, 2003).
As such, venture capitalism can be considered a major institutional
innovation that enables higher levels of knowledge governance. The basic
‘innovation’ here is not technological but rather institutional, as it consists
in a new hybrid organization based upon the bundling of knowledge,
finance and competence into new science-based startup firms and in the
trade of their knowledge-intensive property rights in dedicated institu-
tional financial markets (Hodgson, 1998; Menard, 2000, 2004; Menard
and Shirley, 2005).
In order to grasp the process that has led to its introduction we shall
rely upon the complexity approach to the economics of innovation. The
application of the tools of complex system dynamics to the economics of
innovation enables us to analyze the role of new multi-agent structures
such as the new financial markets characterized by higher-level organiza-
tions. These ‘higher levels of organization’ in fact are forms of organized
complexity that favor the generation and dissemination of technological
knowledge into economic systems. Specifically venture capitalism can be
considered a major institutional innovation that provides a platform for the
more effective exploitation of technological knowledge bringing together
into a coalition for innovation a variety of complementary players such
as ‘inventors’, venture capital companies, managerial skills and invest-
ment funds, large incumbents searching for new sources of technological
knowledge and families looking for new financial assets, and stirring their
participation and active contribution to a collective undertaking (Lane,
1993; Lane and Maxfield, 2005; Antonelli, 2008; Lane et al., 2009).
This chapter elaborates the view that venture capitalism has improved
the governance of technological knowledge within economic systems,
and hence has reshaped the prime mechanism by which the generation of
new knowledge can lead to economic growth (Nelson, 1994, 1995; Quéré,
2004).
The rest of the chapter is organized as follows. Sections 2 and 3 provide
the analytical background. Specifically, Section 2 provides the basic eco-
nomics of the relationship between finance and innovation, and highlights
the advantages of the new financial markets in providing funds to science-
based startup companies with respect to previous institutional arrange-
ments such as banks and incumbent corporations. Section 3 explores
the basic elements of the economics of markets as economic institutions.
Section 4 shows the complexity of interactions that led to the emergence of
the new financial markets. The conclusions highlight the main results.

2. FINANCE AND INNOVATION: THE FRAMEWORK

Knowledge as an economic good exhibits major limitations in terms of
radical uncertainty, non-divisibility, non-excludability, non-exhaustibility,
non-appropriability and non-rivalry in use. Much economic analysis has
explored the implications with respect to the tradability of knowledge
(Arrow, 1962). Yet the limitations of knowledge as an economic good
have major implications also in terms of the provision of finance to fund
its generation and use.
Major asymmetries shape the interaction between prospective funders
and prospective innovators. The access to financial markets for innovative
projects is seriously limited by the radical uncertainty that characterizes
both the generation and the exploitation of new knowledge. Prospective
lenders and investors are worried by the combined high levels of risk: (1)
that the activities that have been funded with their own money will not
succeed, and (2) that the new knowledge, even when successfully generated, will
not be appropriated by the inventor, at least to an extent that makes it
possible to repay the credits and remunerate the capital invested. Even in
the case of a successful generation, funders have good reasons to worry
about dissipation stemming from uncontrolled leakages of proprietary
knowledge. As a consequence, worthy inventive activities and innovative
projects risk being jeopardized because of the lack of financial resources
(Hall, 2002).
Stiglitz has provided two fundamental tools to analyze the relationship
between finance and innovation. With the first stream of contributions,
Stiglitz (Stiglitz and Weiss, 1981; Stiglitz, 1985) has shown that equity
finance has an important advantage over debt in the provision of funds to
innovative undertakings because investors have the right to claim a share
of the profits of successful companies. While lenders can claim only their
credits, investors can participate in the upper tail of the highly skewed
distribution of positive returns stemming from the generation of new
knowledge and the introduction of new technologies. This has important
consequences in terms of reduction of both the risks of credit rationing
and the costs of financial resources for research activities. Lenders need
to charge high interest rates in order to compensate for the risks of failure
and to discriminate among new research activities to avoid as many
‘lemons’ as possible. Equity investors instead find an equilibrium rate of
return at much lower levels because they can participate in the huge profits
of a small fraction of the new ventures. The fraction of lemons that equity
can support is much larger than that of debt, hence financial equity can
provide a much larger amount of funding for research activities.
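The arithmetic behind this argument can be conveyed by a stylized numerical
sketch in Python (our illustration, not Stiglitz's formal model; all figures
are hypothetical assumptions chosen only for exposition):

# Why equity tolerates a larger fraction of 'lemons' than debt when the
# distribution of returns is highly skewed. Hypothetical figures only.
p_success = 0.10       # assumed share of funded projects that succeed
payoff_success = 20.0  # assumed gross return of a successful venture, per unit invested

def breakeven_interest_rate(p):
    # Debt: the lender recovers at most principal plus interest, and only
    # from the projects that succeed: (1 + r) * p = 1  =>  r = 1/p - 1.
    return 1.0 / p - 1.0

def equity_expected_return(p, payoff):
    # Equity: the investor holds a claim on the profits of successful ventures.
    return p * payoff

print(breakeven_interest_rate(p_success))                 # 9.0, i.e. a 900% break-even rate
print(equity_expected_return(p_success, payoff_success))  # 2.0, i.e. twice the capital invested

With nine 'lemons' out of every ten funded projects, a lender breaks even only
at implausibly high interest rates, whereas an equity investor still earns a
positive expected return: this is the sense in which equity can support a much
larger fraction of lemons than debt.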
With a second line of analysis, Stiglitz (Sah and Stiglitz, 1986, 1988)
has provided the distinction between hierarchies and polyarchies as
alternative mechanisms to manage different types of risk. Hierarchical
decision-making is better able to avoid the funding of bad projects. Yet
the ability of hierarchies is limited by the scope of their competence: their
decision-making tends to favor minor, incremental changes. Polyarchic
decision-making, on the other hand, experiences higher risks of including
bad projects (that is, Type 1 errors), but yields higher chances of inclu-
sion of outstanding projects. According to Stiglitz, hierarchical decision-
making fits better in economic environments characterized by low levels of
entropy and radical uncertainty. Conversely, polyarchic decision-making
applies better in times when the levels of radical uncertainty are higher.
The distinction between Type 1 and Type 2 errors proves very useful in
assessing the working of alternative mechanisms and forms of decision-
making in the selection and implementation of new technological knowl-
edge. The argument elaborated by Stiglitz can be used upside-down so
as to investigate what type of decision-making yields higher results in
terms of the generation of new technological knowledge and the eventual
introduction of innovations.
Hierarchies are more likely to incur Type 2 errors that arise when good
innovative projects are excluded. Hence hierarchical decision-making
has higher chances of favoring incremental innovations and of excluding
innovative undertakings that are disruptive and may engender problems
in terms of discontinuities both with respect to the existing knowledge
base and sunk costs. Polyarchic decision-making, based on a variety of
competences, selected on a professional basis according to their expertise,
and less exposed to vested interests, on the contrary, favors the inclusion
of a wider range of projects. As a consequence, polyarchies tend to include
also bad projects. But the likelihood that outstanding projects are retained
is much higher. The occurrence of radical innovations seems higher with
polyarchic architectures.
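The contrast between the two architectures can be made concrete with a minimal
probabilistic sketch in the spirit of Sah and Stiglitz (the number of evaluators
and the screening probabilities below are hypothetical, chosen only for
illustration):

# Independent evaluators, each approving a good project with probability
# q_good and a bad project with probability q_bad. A hierarchy funds a
# project only if every level approves (serial screening); a polyarchy
# funds it if at least one unit approves (parallel screening).
q_good, q_bad, n = 0.6, 0.2, 3

def hierarchy_accepts(q, n):
    return q ** n                # all n levels must say yes

def polyarchy_accepts(q, n):
    return 1 - (1 - q) ** n      # at least one unit says yes

print(hierarchy_accepts(q_good, n), polyarchy_accepts(q_good, n))  # 0.22 vs 0.94
print(hierarchy_accepts(q_bad, n), polyarchy_accepts(q_bad, n))    # 0.01 vs 0.49

With these numbers the polyarchy retains far more of the good, possibly radical,
projects (fewer Type 2 errors) at the cost of funding many more bad ones (more
Type 1 errors), which is precisely the trade-off discussed in the text.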
The combination and implementation of the two tools provided by
Stiglitz enable the comparative assessment of the alternative institutional
mechanisms designed to handle the relationship between finance and inno-
vation, and identified by Schumpeter: banks and corporations. The analy-
sis of their limitations, with the tools provided by Stiglitz, enables us to
identify the emerging venture capitalism as a third distinctive mechanism.
In his Theory of Economic Development, Schumpeter stresses the central
role of the provision of appropriate financial resources to entrepreneurs.
The natural interface of the entrepreneur, as a matter of fact, is the innova-
tive banker. The banker is innovative when he or she is able to spot new
opportunities and to select, among the myriad of business proposals that are
submitted daily, those that have higher chances of getting through the
system. With a given quantity of financial resources, the innovative banker
should be able to reduce the flow of funds towards traditional activities
and switch them towards the new firms. The innovative banker should be
able to identify the obsolete incumbents that are going to be forced to exit
by the creative destruction that follows the entry of successful innovators.
Banks can be considered much closer to polyarchic decision-making.
They can rely upon a variety of expertise and competence, hired on a
professional basis. Their competence is much less constrained by a given
scope of expertise, and the effects of irreversibilities and vested interests
are much lower. As such, banks seem better able to avoid Type 2 errors.
Banks have a clear advantage in the screening process, but their action is
limited by clear disadvantages in the participation in the profits stemming
from new innovative undertakings. Banks are exposed to the intrinsic
asymmetry between debt and equity in the provision of funds to innova-
tive undertakings. This is true especially when radical innovations occur.
The higher the discontinuity brought about by radical innovations, the
larger the risks of failure of new companies. Banks bear the risks of the
failure of firms that had access to their financial support but cannot share
the benefits of radical breakthroughs. As Schumpeter himself realized,
this model, although practiced with much success in Germany in the last
decades of the nineteenth century, suffered from the severe limitations
brought about by this basic asymmetry.
Schumpeter not only realized the limits of the first model but identified
the new model emerging in the US economy at the beginning of the twen-
tieth century. The analysis of the corporation as the institutional alterna-
tive to the ‘innovative banker’ has been laid down in Capitalism, Socialism
and Democracy. Here Schumpeter identifies the large corporation as the
driving institution for the introduction of innovations. His analysis of
the corporation as an innovative institutional approach to improving the
relationship between finance and innovation has received less attention
than other facets (King and Levine, 1993). The internal markets of the
Schumpeterian corporation substitute for external financial markets in the
key role of the effective provision and correct allocation of funds combin-
ing financial resources and entrepreneurial vision within competent hierar-
chies. Corporations, however, are much less able to manage the screening
process. Internal vested interests and localized technological knowledge
help reduce the risks of funding bad projects but risk reducing the chances
that radical innovations are funded.
The Schumpeterian corporation confirms that equity finance is more
effective than debt finance for channeling resources towards innovative
undertakings, but with a substantial bias characterized by continuity with
the existing knowledge base. The model of finance for innovation based
upon the corporation ranks higher than the model based upon banks in
that equity finance is more efficient than debt-based finance with respect
to risk-sharing, but has its own limitations arising from the reduction of
the centers able to handle the decision-making and the ensuing reduction
of the scope of competence that filters new undertakings.
In the second part of the twentieth century a few corporations con-
centrated worldwide a large part of the provision of finance for innova-
tion. The limited span of competence of a small and decreasing number
of incumbents became less and less able to identify and implement new
radical technologies: a case of competence lock-in could be observed. The
corporation has been able for a large proportion of the twentieth century
to fulfill the pivotal role of intermediary between finance and innovations,
but with a strong bias in favor of incremental technological change. The
screening capabilities of corporations fail to appreciate radical novelties.
The integration of these two strands of analysis highlights the radical mis-
match between the distinctive competence and the competitive advantage
of the two traditional modes of provision of financial resources to innova-
tion. Both in equity and debt finance, exploitation conditions on the one
hand and competence on the other, are not aligned and are actually diver-
gent. Banks, as polyarchies, are better able to identify and fund radical
innovations but cannot participate in the extra profits, as they provide
debt and not equity. On the contrary, they are exposed to the high rates
of failure stemming from Type 1 errors, that is, the higher incidence of
'lemons' in their portfolios of funded projects. Corporate provision of
funds to internal R&D projects selected by internal and hierarchical
decision-making is less inclined to identify and fund radical innovations
that would benefit larger firms as equity providers. Corporations are
better able to fund minor, incremental innovations, where their
competitive advantage in exploitation is smaller because the latter are less likely to
earn extra profits. This misalignment between the distinctive exploitation
conditions and the intrinsic competence of the two traditional institutions
has the clear effect of reducing the incentives to the provision of funds for
innovation, and of increasing the interest rates for debt finance. Together
with the limits of knowledge as an economic good, this institutional mis-
alignment is one of the main causes of underinvestment in the generation
of technological knowledge and hence undersupply of innovations.
A mechanism based upon a screening procedure performed by compe-
tent polyarchies and the equity-based provision of finance to new under-
takings would clearly combine the best aspects of each model. Venture
capitalism seems more and more likely to emerge as the third major insti-
tutional set-up able to manage the complex interplay between finance and
innovation when radical changes take place. As a matter of fact, venture
capitalism combines the advantages of distributed processing typical of
polyarchies with the advantages of equity-based finance over debt-based
finance. Venture capitalism makes it possible to combine the more effec-
tive identification of radical innovations with the more effective sharing of
risks associated with the provision of funds.
Table 3.1 provides a synthetic account of the analysis conducted so
far. The bank-based provision of funds to innovation suffers the limits of
debt-based finance but ranks higher in terms of distributed processing. The
advantages of distributed processing are larger, the larger the number of
banks, and the larger the number of independent agents that participate in
the screening process. The corporation model is less able to avoid Type 2
errors but enjoys the advantages of the equity-based provision of finance
to innovation. The corporation model suffers especially from the grip of
the past that sunk costs and the irreversibilities of tangible and intangible
capital exert upon the appreciation of new disruptive technologies. It is
also clear that the smaller the number of corporations that control the
funding of innovative undertakings, the higher the risks of Type 2 errors at
the system level. Venture capitalism seems able to combine the advantages
of the corporation model in terms of equity-based provision of funds for
innovation, with the distributed processing typical of the banking system.

Table 3.1 Limits and advantages of alternative financial systems for innovations

Debt finance / Polyarchies (banks): banks experience more Type 1 errors,
  funding bad projects because of low competence levels, but favor the
  introduction of radical innovations; as lenders, however, they cannot
  participate in the extra profits.
Equity finance / Polyarchies (venture capitalism): venture capitalism
  favors the introduction of radical innovations and participates in the
  fat tails of profits of new ventures.
Equity finance / Hierarchies (corporations): corporations can participate
  in the fat tail of profits of new ventures, and are better able to sort
  out bad projects, but are limited by a higher probability of committing
  Type 2 errors, reducing the rate of introduction of radical innovations.
The emergence of the new, dedicated financial markets specialized in
the public transactions of the knowledge-intensive property rights of new
science-based startup companies is a key aspect of venture capitalism. As
such it requires a dedicated analysis.2
In order to grasp the emergence of the new financial markets specialized
in the transactions of knowledge-intensive property rights, it is necessary
to revisit the basic elements of the economics of markets.

3. MARKETS AS ECONOMIC INSTITUTIONS

Markets as an Economic Problem

Markets are economic institutions that emerge when an appropriate com-
bination of complementary conditions occurs. Markets are the product
of social and institutional change. As such, they evolve over time: they
can decline and emerge. At each point in time, markets differ. Markets
can be classified according to their characteristics and their functionality.
The emergence and upgrading of a market is the result of an articulated
institutional process that deserves to be analyzed carefully.
There are three basic notions of market in the literature: (1) in the
textbook theory of exchange, markets exist and are self-evident; and
any transaction presupposes the existence of an underlying market; (2)
markets as devices for reducing transaction costs (Coase); (3) markets as
social institutions promoting division of labor, innovation and economic
growth.
A major contribution to the discussion of markets comes from Coase
whose work clarifies both (1) and (2) above. ‘In mainstream economic
theory the firm and the market are for the most part assumed to exist and
are not themselves the subject of investigation’ (Coase, 1988, p. 5; italics
added). By mainstream economic theory Coase means an economic theory
without transaction costs. Transaction costs are the costs of market trans-
actions that include ‘search and information costs, bargaining and deci-
sion costs, and policing and enforcement costs’ (Dahlman, 1979, quoted
by Coase), which, of course, includes the costs of contracting. In Coase’s
theory, transaction costs exist and can be important; and they explain the
existence of the firm.3
In the old neoclassical theory of exchange that Coase refers to, the exist-
ence of markets (and also the creation of new markets) is assumed but not
analyzed. It is an axiom, a self-evident truth, similar to Coase’s criticism of
the notion of consumer utility, which is central to the above theory: ‘a non
existing entity which plays a part similar, I suspect, to that of ether in the
old physics’ (Coase, 1988, p. 2; italics added). This view of markets implies
that any transaction assumes an underlying market, or that there is no
such thing as a transaction without a market. Not only is this incorrect
but, following Coase or the implications of his analysis, we assert that the
distinction between individual transactions and a market is important.4
For our purposes, markets are social institutions where at least a critical
mass of producers and a critical mass of consumers interact and transact.
There is an important element of collective interaction and of collective
transacting; that is, any one transaction takes into account the conditions
of all other transactions.
From this viewpoint a market contrasts with an institutional context
characterized by three relevant conditions. First, it involves a smaller set
of transactions than that of the subsequent market. Second, transactions are iso-
lated and sporadic, both synchronically and diachronically. Third, agents
do not rely upon exchanges but on self-sufficiency; that is, users produce
the products they consume/use.
Originally markets were defined only in geographical terms as locations
where a large number of sellers and buyers would meet to trade. Since
then, markets have grown into sophisticated institutions characterized by
an array of functions and characteristics.5 The extent to which the process
has grown differs. Different stratifications of institutional evolution can
be found according to the characteristics of products and agents involved
(Menard, 2004). Markets differ across countries, industries and contexts.
Markets differ according to the functions they can perform and their
structural characteristics. The emergence and evolution of markets is the
result of a process that takes place over time and is shaped by institutional
innovations of different kinds.

Towards a Classification of Markets

Markets have properties and characteristics. According to such character-
istics, markets are more or less able to perform their functions. The prop-
erties of markets do not coincide with the properties of the products being
exchanged and the characteristics of agents engaged in trade. Yet there is
a high degree of overlap between the characteristics of the products and
agents and the properties of the markets.
The reputation of agents is an essential condition for the emergence
and the working of markets. The certification of agents and the ex ante
assessment of their reliability and sustainability provide both tentative
customers and suppliers with information necessary to perform transac-
tions. Without the provision of information about the reliability of part-
ners in trade, both customers and suppliers must bear the costly burden of
relevant search and assessment activities. From the viewpoint of the effec-
tive working of the marketplace, moreover, the symmetric distribution of
reputation, as a carrier of information, plays a key role. It is clear that in
a system where reputation is distributed unevenly, transactions are likely
to privilege the few agents that enjoy the advantages of good reputation.
A star system is likely to emerge, with clear monopolistic effects. Systems
where the reputation of agents is certified are likely to work better than
systems where reputation is asymmetrically distributed. The latter systems,
in turn, perform better than systems where average levels of reputation are
low. Reputation is a key element in the definition of social capital precisely
for its positive effects in terms of reduction of transaction costs.
Products differ widely with respect to their characteristics, and exhibit
different levels of general tradability and hence influence the performances
of the corresponding markets with respect to the number and quality of
the functions provided to the rest of the system.
In this context it is consequently clear that a central property is the
category of products that are being exchanged. We can identify markets
built around a specific need category or user segment (encompassing many
different products and technologies); and markets built around a particu-
lar industry or segment of producers (encompassing many user segments
and need categories). In the first profile of a market, users of substitute
products relating to the satisfaction of a basic category of need converge;
in the second, producers of products related to a basic set of technologies
converge. In the former market, the products traded are substitutes on the
demand side. In the latter market, defined by a particular producer tech-
nology category, for example the chemical industry, the products traded
are substitutes on the supply side.
Beyond the characteristics of the products being exchanged in the
marketplace, and of agents engaged in trade, we can identify at least
six main characteristics of markets: the time horizon of markets plays a
central role. Spot markets are far less effective than regular markets. In
effective markets, future prices can be identified and a full intertemporal
string of prices and quantities can be set. Market density is defined by the
number of agents both on the demand and on the supply side. It is clear
that markets with one player either on the demand or the supply side are
highly imperfect. Market thickness is relevant both on the demand and
the supply side with respect to the volume of transactions. With respect
to thickness, there is an important issue about the levels of the critical
mass necessary for a good performance of the market. When transactions
take place with high levels of frequency, prices and quantities can adjust
swiftly to changing economic conditions for the users of markets on both
the demand and the supply side. Sporadic transactions limit the perform-
ances of markets. Recurrence of transactions is most important to reduce
opportunistic behavior and to make comparisons possible. Recurrence of
transactions is a major source of transparency and hence information. The
concentration of transactions increases the density, thickness, frequency
and recurrence of transactions: as such it can be enforced by means of
compulsory interventions, or emerge as the consequence of a spontaneous
process. The role of concentration is vital for the emergence of new effec-
tive markets, and hence it is at the same time a prerequisite and a threshold
factor.

The Functions of Markets

Markets differ greatly with respect to their characteristics, and as a conse-
quence with respect to the functions they can perform. A well-functioning
market is able to perform a variety of functions that a set of isolated
transactions cannot. At least four basic functions can be identified:
1 Markets as signaling mechanisms to actual or potential users or suppliers/producers
Markets with appropriate levels of thickness and robustness signal to the
rest of the economy the need for the specific products being traded; and
that the need-satisfying category of good not only exists but is traded and
therefore accessible. The signaling involves a qualitative dimension (the
‘need’ and the ‘product class’ satisfying it) and a quantitative dimension
reflected in quantities and values purchased and sold. Existence of a market
also minimizes volatility and swings concerning persistence of the ‘need’ or
possibility of obtaining the good. This is because a market or an industry
operating in it is presumably more stable than a single user or a single firm;
and a market – compared to a single transaction – provides relative assur-
ance about the possibility of repetitive transactions, purchases or sales, in
the future. Signaling existence and persistence of need to be satisfied and
product class to be supplied helps any firm/supplier and any user/consumer
respectively, actual or potential, to focus his or her search process on
the relevant space where the market exists or operates. It also facilitates
users’ (producers’) long-run decisions concerning purchase (sale) of a new
particular product class or service or system traded in a particular market
(‘the product’). The decisions involve investment decisions concerning or
involving the product or its supply. Nobody wants to create dependence
on a product purchased (sold) whose sources of supply (demand) and
mechanisms of purchase (sale) are not highly reliable and stable.6

2 Markets as selection and incentive mechanisms
Markets are able to perform relevant screening functions when many
different products, manufactured with different technologies, are
compared. The best products emerge and lower-quality products are
screened out. The extent to which selection is dynamically efficient
depends on the characteristics of users, for example on whether or not
they are willing to take risks in trying novel products. It also depends
on the characteristics of producers, for example whether they are
innovative and whether competition (as a process) among producers both
generates variety and leads individual firms to rapidly adapt and
improve their products in response to other firms' products. Good
selection mechanisms allocate effective incentives to agents, via entry,
expansion and invention/innovation, and symmetrically via exit when
losses emerge, both on the demand and the supply side.

3 Markets as coordination mechanisms
By means of their signaling functions, markets make possible coordi-
nation in the production of complementary products. Specialization
of agents in the narrow spectrum of activities where each firm has a
competitive advantage is made possible by efficient markets. This is
because in the market all the relevant users are present, so that a firm
can easily know the potential market for that specific component (or
components) in the production of which it enjoys a competitive advantage
(it will also save on selling costs). The mechanisms in operation
seem to be: signaling and selection with interactive learning. More gen-
erally, markets facilitate both specialization and integration by produc-
ers. Moreover, markets also provide integration opportunities on the
demand side: they facilitate integration and specialization of users that
can combine specialized products into more elaborated consumption
and usage.

4 Markets as risk management mechanisms
By reducing transaction costs and through the enhancement of variety of
firms and products, some markets (as opposed to transactions without
markets) make possible the distribution of risks across a variety of firms
and products. Hence they reduce the risks of opportunistic behavior and
information and knowledge asymmetries.
Only a few markets can reach all the necessary levels of time horizon,
density, thickness, frequency, recurrence and concentration. The analysis
of the broad array of characteristics and functions of markets as eco-
nomic institutions enables the analysis of the emergence of the market
as the result of a process of convergent and complementary innovations.
Markets emerge and consolidate as specialized institutions.7
From this viewpoint, the emergence of a viable market can be consid-
ered the result of an articulated, institutional process that deserves to be
analyzed carefully. Markets are social institutions that perform a variety
of functions and exhibit different forms, organizations and characteristics.
Moreover, markets are a dynamic construct.
Hence markets are being created, they emerge, occasionally their per-
formances and functions improve, yet they can decline. In other words,
markets evolve (Richter, 2007).
In turn, the emergence of new specialized markets has an impact on the
economic system. This leads us to appreciate the notion of ‘market’ origi-
nally proposed by Adam Smith, namely ‘a device that promotes division
of labor, learning/innovation, and economic growth’.
An effort to understand the institutional characteristics of markets in a
general context seems necessary in order to grasp all the implications of the
creation of the new financial markets associated with venture capitalism.
The analysis of their emergence should be the center-piece in any theory
of economic development nowadays: markets perform a central role not
only in the allocation of resources but also in promoting 'knowledge-based
growth’ (De Liso, 2006).

4. THE EMERGENCE OF NASDAQ FOR VENTURE CAPITALISM

The creation of a surrogate market for knowledge where knowledge-
intensive property rights can be traded as financial products can be con-
sidered one of the key features and contributions of venture capitalism.
The new financial markets specialized in knowledge-intensive property
rights are based on a new intermediation form that emerges from the
mutual adaptation of different groups of actors both on the supply and
the demand side, and with the underlying institutional structure. This has
led to a multilayer super-market such as NASDAQ, which enables partici-
pants to relate to a large number of markets for individual stocks simul-
taneously, thereby better coordinating their needs with the capabilities
offered.
A new market may emerge when a set of previously isolated precursor
transactions sparks an emergence process. For this to happen, a number
of conditions are required. Frequently these will include pre-emergence
processes of interaction and information flow among agents, together with
experimentation and learning concerning product characteristics and user/
producer organization and strategy. Emergence may also require a criti-
cal mass of precursor transactions both to underpin the above-mentioned
interactions, learning and experimental processes, and to enhance the
expected ‘benefits’ derived from creating a new market.8 Moreover, the
successful emergence of a new market may depend critically on the con-
verging action of agents towards emerging platforms able to provide the
required dynamic coordination (Richardson, 1972, 1998).
The evolutionary process leading to the emergence of a new market is
seen as an autocatalytic, cumulative process with positive feedback, or,
alternatively, a process characterized by dynamic economies of scale. This
process involves the creation and utilization of externalities that explain
the acceleration of growth. The cumulative process does not end with crea-
tion of the new market; rather it continues afterwards at least for a time
(provided that external conditions do not deteriorate).9
The new (more complex) structure created by the interaction among
elementary components (firms and users) will, once it has emerged, posi-
tively further stimulate such components. This phenomenon provides us
with an additional, and much less recognized, characteristic of ‘a market’:
once created it will stimulate the creation of new firms.10
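The flavor of such an autocatalytic, critical-mass dynamic can be conveyed
with a toy simulation (purely illustrative; the functional form and all
parameters are our assumptions, not estimates of any actual market):

# n = number of agents trading in the nascent market. Joining is worthwhile
# only once the expected benefit, which rises with the number of agents
# already trading (thickness), exceeds the fixed cost of participating;
# below that critical mass agents drift away, above it entry snowballs.
def simulate(n0, periods=30, benefit_per_peer=0.02, entry_cost=1.0,
             pool=500, responsiveness=0.3):
    n = n0
    for _ in range(periods):
        net_incentive = benefit_per_peer * n - entry_cost
        entry = responsiveness * net_incentive * (pool - n)
        n = max(0.0, min(pool, n + entry))
    return n

print(simulate(n0=30))  # below the critical mass: the market fizzles out (0 agents)
print(simulate(n0=80))  # above it: cumulative growth up to the whole pool (500 agents)

The point of the sketch is only the threshold property: the same positive
feedback that makes emergence explosive above a critical mass of precursor
transactions makes it fail below that mass.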
The Phases of the Process

The emergence of the new financial markets is the result of a continued
process of convergent and complementary steps that can be visualized as
comprising four phases.

Phase I. Bundling finance and competence with innovation
Since the early days, venture capital firms specialized in the provision of
‘equity finance’ to new science-based startup companies as distinct from
‘loans’, which were the prevailing product offered by existing financial
institutions (banks). Equity finance was offered to science-based startup
companies bundled together with business services and management
advice, management services, certification and networking functions. This
was exchanged for limited partnership. Limited partnership is a key ‘pre-
cursor’ dimension to the emergence of the new market. In the USA during
the 1960s and 1970s, limited partnerships were the dominant form of
organization for new science-based startup companies. Limited partner-
ship allowed for the dilution of founder equity positions and an orientation
towards the capital market (jointly with the prevailing product market).
Gompers and Lerner (1999) stress the role at this stage of the chang-
ing features of intellectual property right regimes. The increasing depth,
width and duration of patents have led to higher levels of appropriability for
knowledge that is embodied in new science-based companies, and traded in
the form of knowledge-intensive property rights rather than bundled within
large diversified incumbents. Large incumbents were able to rely much less
on the protection provided by intellectual property rights because of the
advantages of existing barriers to entry that would delay the dissipation of
innovation rents. Large incumbents, moreover, can take advantage of lead
times and secrecy as effective mechanisms of knowledge appropriation.
New science-based startup companies, on the other hand, need to disclose
information about the advantages of their knowledge base: patents perform
a key signaling function. The protection of hard intellectual property right
regimes is much more important for science-based startup companies that
are newcomers themselves. The radical changes in intellectual property
right regimes introduced in the 1980s and 1990s clearly favored venture
capitalism because they reduced for investors the levels of risks associated
with the non-appropriability of the strong knowledge component of the
intangible assets of the new science-based firms (Hussinger, 2006).

Phase II. Knowledge-intensive property rights
Phase II is marked by the evolution of the limited partnership, as the
leading form of organization of startups, into private stock companies
based upon knowledge-intensive property rights: shares of the new
science-based startup companies and other rights concerning the
management of the
company. Limited partnership converges progressively into stock-holding.
The personal participation of partners in the startup declines and is sub-
stituted by the professional services of managers organized by venture
capital companies. The new bundling of equity with managerial compe-
tence into knowledge-intensive property rights of science-based startup
companies that can be traded can be considered the dominant (product)
design that lies at the origin of what will become a new market. In this
early phase, venture capital companies co-evolve with the organization of
the new science-based startup companies.
The development of venture capital companies and the growth of
syndication as a way to collect funds for new science-based startup com-
panies have played a key role in this phase. Private investors and financial
companies that had contributed to the fundraising activities for new
companies were eager to elaborate exit strategies for collecting the value
of the new firms after their creation and growth, and participate fully in
the profits of the ‘blockbusters’. The search for ‘exit’ strategies acts as a
powerful dynamic factor at this stage.

Phase III. Trading knowledge-intensive property rights in private markets
Exits took place principally through the sale of knowledge-intensive
property rights in the so-called trade sales to individuals or organizations.
These are ‘private transactions’. During the first half of the 1970s we can
observe the growing number of over-the-counter (OTC) initial offerings of
knowledge-intensive property rights. Here a critical mass of transactions
slowly builds up and triggers, through variation, a more systematic and
focused search and experimentation process leading to the emergence of a
public market.
Large companies become progressively aware of the important oppor-
tunities provided by the new small public companies whose shares are
traded over the counter as a source of technological knowledge. Mergers
and acquisitions increase as corporations rely more and more systemati-
cally upon the takeover of the new science-based companies, after initial
public offering (IPO), as a source of technological knowledge that has
already been tested and proved to be effective. The acquisition of external
knowledge, embodied in the new firms, complements and partly sub-
stitutes internal activities conducted intra muros within the traditional
research laboratories. Specifically, incumbents rely on the new source of
external technological knowledge as an intermediary input that can be
combined with other internal knowledge sources.
Hence it is clear that the new, dedicated financial markets implement
a new central functionality in the economic system in terms of increased
division of labor in the generation of new technological knowledge, and
higher levels of specialization in the production of the bits of knowledge
that each company is better able to command. From this viewpoint it is
also clear that the new markets favor the coordination among different
firms specialized in the generation of complementary modules of knowl-
edge that can be exchanged and traded. The new financial markets favor
the reorganization of the generation of knowledge, away from high levels
of internal vertical integration, towards open innovation architectures
(Chesbrough, 2003). The changing organization of the generation of tech-
nological knowledge on the new financial markets attracts increasing flows
of firms on the demand side. Consequently, the growing demand of the
new knowledge-intensive property rights by large incumbents increases
the frequency of transactions and hence the thickness of the new markets
(Avnimelech and Teubal, 2004, 2006, 2008).

Phase IV. Emergence of a public capital market focused on IPOs
The increasing size of OTC exchanges led the National Association of
Securities Dealers to introduce an automated quotation mechanism to report the prices
and quantities of the private transactions. Eventually the mechanisms,
better known by an acronym, evolved into a marketplace. NASDAQ
became a new market for selling knowledge-intensive property rights to
the public at large rather than only to private individuals or organizations.
NASDAQ became the specialized market for IPOs of the shares of the new
science-based startup companies nurtured by venture capital companies
and funded with their assistance by groups of financial investors.
Significant adaptations of the institutional environment, for example
modifications of the ERISA (Employee Retirement Income Security
Act), including the 1979 amendment to the ‘prudent man’ rule governing
pension fund investments in the USA (Gompers and Lerner, 2004, pp. 8,
9) involved liberalization of the constraints on pension fund investment in
the stock of new science-based startups.
In parallel, the increasing liberalization of international financial and
currency markets had the twin effect of increasing both the demand and
the supply in the NASDAQ. On the demand side, a growing number of
investment funds entered the NASDAQ to place their capital. On the
supply side, the high levels of liquidity, the thickness of transactions and
the low levels of volatility, together with the high quality of the profes-
sional services available in NASDAQ, attracted the entry of venture
capital companies of other countries (in the Israeli case the dynamics is
impressive) that eventually represented a large and growing share of the
total figure of IPOs of science-based startup companies. An increasing
concentration of exchanges, a key feature of a marketplace, has been
taking place at the global level (Bozkaya et al. 2008).
By means of global concentration, sparse, rare and occasional trans-
actions by a myriad of isolated and dispersed agents, scattered around
many local markets, were progressively brought into the same physical
and institutional context with clear advantages in terms of the number of
transactions that occur and hence can be compared and observed.
Here the analysis of Schmookler (1966) on the role of demand in pulling
technological innovation applies to explain the final stages of this process
of institutional change. Schmookler found strong empirical evidence of a
link between capital-good market size (as indicated by gross investment)
on the one hand and capital-good improvement inventions (as indicated
by patents on capital goods, with a lag) on the other (Schmookler, 1966).
Moreover, when it comes to explaining the distribution of patents on capital-
goods improvement inventions across industries, ‘demand’ overrides any
differences in the ‘supply’ side of inventions. His analysis suggests that the
emergence of new product markets in general and not only capital-goods
markets will, through a ‘demand’ effect, induce improvement inventions in
the underlying product and process technology.
Here it is clear that demand for the new knowledge-intensive property
rights by investment funds, pension funds and eventually families pulled
the final diffusion of NASDAQ with a snowball effect in terms of the
overall level of transactions. The new levels of mass transactions favored
the frequency of IPOs and attracted qualified professional and financial
companies specialized in market management. This in turn led to a substan-
tial increase in the thickness of the markets, a reduction in volatility and
eventually global concentration of exchanges.
The concentration of transactions, the thickness of the new markets,
and, most important, the ensuing recurrence of transactions on individual
stocks have important effects in terms of reduction of volatility. The entry
on the demand side of large investment funds, pension funds and ulti-
mately even private investors has the important effect of providing large
flows of transactions on the shares of individual companies. The size of
the new financial markets makes it possible to better manage uncertainty
by means of the distribution of small bets across a variety of actors and of
firm-specific equity markets.
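The volatility-reducing effect of market size can be illustrated with a
textbook diversification calculation (not specific to NASDAQ; the single-bet
volatility below is an arbitrary assumption):

# Spreading a given amount of capital over many small, roughly independent
# bets lowers the volatility of the overall return roughly as 1/sqrt(n).
sigma_single = 0.8  # assumed standard deviation of the return on one startup bet

def portfolio_volatility(sigma, n_bets):
    # Standard deviation of the average return of n equal, independent bets.
    return sigma / n_bets ** 0.5

for n in (1, 10, 100, 1000):
    print(n, round(portfolio_volatility(sigma_single, n), 3))  # 0.8, 0.253, 0.08, 0.025

The thicker the market, the easier it is for each participant to hold many such
small positions, which is the sense in which large and recurrent flows of
transactions on individual stocks reduce the uncertainty faced by any single
investor.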
In the previous phases, characterized by the preponderance, on the
demand side, of large incumbents searching for new science-based compa-
nies able to complement their internal knowledge base in order to organize
takeovers and subsequent delisting, transactions on individual stocks were
sporadic, with high levels of volatility.
This enables NASDAQ to become an efficient mechanism for the
identification of the correct value of knowledge-intensive property rights.
This in turn leads it to perform the key function of appreciation of the
large share of intangible assets in the value of the new science-based
companies (Campart and Pfister, 2007; Bloch, 2008).
The expansion/transformation of NASDAQ is clearly the result of a
cumulative process with positive feedback involving a number of proc-
esses that make the market more and more attractive to increasingly
larger sets of agents (both demand side and supply side). The reasons
are similar to some extent to those invoked to explain the dynamics of
venture capital or cluster emergence. The new sets of agents that partici-
pate in the new markets include specialized agents providing services to
investors or companies, for example investment banks, brokers, consult-
ants, and so on; specialized new intermediaries such as venture capital/
private equity funds, financial investors and so on. The enhanced volume
that their entry induces further reduces transaction costs, which further
increases the thickness and frequency of transactions. This also reduces
uncertainty to individual investors as well as market volatility, and so
on.
Thus, once a new market emerges (e.g. as a result of venture capitalism)
and begins to grow, a point may be reached when the private ‘benefit’
from developing a disruptive technology may become such as to induce
‘technology suppliers’ like science-based startup companies to undertake
disruptive technology development. This in turn enables the exploitation of
significant economies of scale and scope, and creates momentum for further
expansion (dynamic economies or cumulative processes with positive feed-
back). NASDAQ thereby eventually became the market for transactions
on knowledge-intensive property rights in general. NASDAQ in effect
became a ‘super-market’ for products generating income streams for the
general public.
The emergence of venture capitalism, defined as the combination of
venture capital companies able to screen, fund and assist the growth of
new science-based startup companies complemented by a dedicated finan-
cial market specialized in the transactions of their property rights, marks
important progress in knowledge governance. Venture capitalism has
significant advantages with respect to the system architecture prevailing
in the second part of the twentieth century, when innovations were mainly
selected, developed and commercialized by existing incumbent companies.
The new, dedicated financial markets seem better able than the previous
knowledge governance mechanisms to appreciate the economic value
of technological knowledge, to signal the new directions of technologi-
cal change, to select the new blueprints and, most important, to provide
better incentives respectively to ‘inventors’, to venture capital firms and to
investors in directing their resources and capabilities towards the genera-
tion and use of new technological knowledge.
The new, dedicated financial markets seem able to reduce the limita-
tions of both the hierarchical corporate and the credit-based polyarchic
model based upon the banking system. They also seem able to combine
the advantages of screening radical innovations of polyarchic decision-
making with the advantages stemming from direct participation in the
profits of new outperforming science-based startups that are characteristic
of the equity provision of finance to innovation, typical of the corporate
model.

5. CONCLUSIONS

Venture capitalism can be understood as a new mechanism for the govern-
ance of technological knowledge that is the result of a system dynamics
where a variety of complementary and localized innovations introduced
by heterogeneous agents aligned and converged towards a collective plat-
form. The new mechanism has improved the governance of technological
knowledge within economic systems, through the combination of new
science-based startups and new, dedicated financial markets specialized
in the transactions of knowledge-intensive property rights. Hence it has
reshaped the prime mechanism by which the generation of new knowledge
can lead to economic growth.
The relationship between technological and institutional change is
strong and allows for bidirectional causality. Technological change can
be considered the cause of institutional change, as much as institutional
change can be considered as the origin of technological change. A large lit-
erature has explored the view that the discontinuities brought about by the
radical technological breakthrough that took place in the late 1970s with
the emergence of the new technological systems based upon information
and communication technologies (ICT) can be thought to be at the origin
of the progressive demise of the Chandlerian model of innovation centered
on large corporations. Venture capitalism has been often portrayed as the
consequence of the ICT revolution.
In this chapter we have articulated the alternative hypothesis. The
emergence of venture capitalism based upon new dedicated financial
markets specialized in the trading of knowledge-intensive property rights
and hence in the systematic appreciation of new science-based startups
can be considered a major institutional innovation in the governance of
technological knowledge and as such a key factor in hastening the pace of
introduction of more radical technological innovations.
The analysis has highlighted the advantages of the new mechanism
of knowledge governance based upon venture capital companies able to
screen, fund and implement new science-based startup companies and new
dedicated financial markets specialized in knowledge-intensive property
rights. It has also shown how the emergence of such new markets has been
the result of a complex process of system dynamics where a plurality of
actors and interests aligned and converged towards a common platform
able to integrate and valorize the complementarities between their dif-
ferent profit functions. The emergence of the new financial markets can
be considered as a major institutional innovation that is likely to have
important effects on the pace of technological change.
Following our line of investigation, we can summarize the main
reasons why the process of transformation of radical inventions into new
product markets is likely to become more certain, frequent and routinized
under venture capitalism: (1) increased numbers of new science-based
startup companies with radical inventions; (2) new systemic and generic
mechanisms of direct or indirect transformation of such inventions into
new product markets; (3) the effect of new markets and more rapid
market growth on invention, including radical (both disruptive and non-
disruptive) inventions; (4) the possible emergence of unbundled markets
for technological improvements.
Venture capitalism creates a cumulative process of innovation-based
economic growth. The combination of continued generation of new
opportunities and the mechanism for 'unlocking' the system from poten-
tial, strong path dependence is evidence that venture capitalism could
become a feature of sustainable innovation-based growth.

NOTES

1. Morris Teubal acknowledges the funding and support of ICER (International Center
for Economic Research) where he was a Fellow in 2005 and 2008 and the Prime (NoE)
Venture Fun Project. Preliminary versions have been presented at the Fifth Triple Helix
Conference ‘The capitalization of knowledge: cognitive, economic, social and cultural
aspects’ organized in Turin by the Fondazione Rosselli, May 2005 and the following
workshops: ‘The emergence of markets and their architecture’, jointly organized by
CRIC (University of Manchester) and CEPN-IIDE (University Paris 13) in Paris, May
2006; ‘Instituting the market process: innovation, market architectures and market
dynamics’ held at the CRIC of the University of Manchester, December 2006; ‘Search
regimes and knowledge based markets’ organized by the CEPN Centre d’Economie de
Paris Nord at the MSH Paris Nord, February 2008.
2. So far, this contribution complements and integrates Antonelli and Teubal (2008),
which focuses on the emergence of knowledge-intensive property rights.
3. Concerning the nature and function of markets, again following Coase: ‘Markets are
institutions that exist to facilitate exchange, that is they exist in order to reduce the
cost of carrying out exchange transactions. In Economic Theory which assumes that
transaction costs are non-existent markets have no function to perform’ (Coase, 1988,
p. 7); and ‘when economists do speak about market structure, it has nothing to do with
markets as an institution, but refers to such things as the number of firms, product dif-
ferentiation and the like, the influence of the social institutions that facilitate exchange
being completely ignored’.
4. Coase (1988) discusses the elements comprising a market, e.g. the medieval fairs and
markets that comprise both physical facilities and legal rules governing the rights and
duties of those carrying out transactions. Modern markets will also involve collective
organizations, that is technological institutes and mechanisms for the provision of
market-specific public goods. They also require a critical mass of buyers and sellers,
and institutions assuring standards and quality on the one hand and transparency of
transactions and inter-agent information flow on the other.
5. Marshall makes it clear that markets are themselves the product of a dynamic process:
‘Originally a market was a public place in a town where provisions and other objects
were exposed for sale; but the word has been generalized, so as to mean any body of
persons who are in intimate business relations and carry on extensive transactions in
any commodity. A great city may contain as many markets as there are important
branches of trade, and these markets may or may not be localized. The central point
of a market is the public exchange, mart or auction rooms, where the traders agree to
meet and transact business. In London the Stock Market, the Corn Market, the Coal
Market, the Sugar Market, and many others are distinctly localized; in Manchester the
Cotton Market, the Cotton Waste Market, and others. But this distinction of locality
is not necessary. The traders may be spread over a whole town, or region of country,
and yet make a market, if they are, by means of fairs, meetings, published price lists, the
post-office or otherwise, in close communication with each other’ (Marshall, 1920, pp.
324–5).
6. Markets can also signal new product or product feature requirements (‘unmet needs’)
within the ‘product category’ being traded.
7. Our agenda is therefore not only to define and explain the role of markets but also to
identify the processes of emergence of new markets. This will include analyzing the con-
ditions under which a set of ‘precursor’ transactions will not lead to the emergence of a
new market. In terms of system dynamics, this could be termed ‘left-hand truncation’.
Moreover, explaining emergence will require making reference to other variables, that
is scale economies in building the marketplace (Antonelli and Teubal, 2008).
8. The benefits include savings in transaction costs that should cover the fixed costs of
creating and the variable costs of operating a new market (see above).
9. The above framework suggests that failed market emergence could be the result of two
general causes. One is failed selection processes resulting from too little search/experi-
mentation and/or inappropriate selection mechanisms due to institutional rigidity.
The other is failure to spark or sustain an evolutionary cumulative emergence process
(e.g. due to system failures that policy has not addressed). Not all radical inventions,
even those leading to innovations and having potential, will automatically lead to new
product markets.
10. Students of regional high-tech clusters such as Saxenian (1994) and Fornahl and
Menzel (2004) have intuitively recognized the relevance of such dynamics, but not quite
elaborated it.

REFERENCES

Antonelli, C. (2008), Localized Technological Change: Towards the Economics of
Complexity, London: Routledge.
Antonelli, C. and M. Teubal (2008), 'Knowledge intensive property rights and
the evolution of venture capitalism’, Journal of Institutional Economics, 4,
163–82.
Arrow, K. (1962), ‘Economic welfare and the allocation of resources to invention’,
in R.R. Nelson (ed.), The Rate and Direction of Inventive Activity: Economic
and Special Factors, Princeton, NJ: Princeton University Press for the National
Bureau of Economic Research, pp. 609–25.
Avnimelech, G. and M. Teubal (2004), ‘Venture capital start-up co-evolution and
the emergence and development of Israel’s new high tech cluster’, Economics of
Innovation and New Technology, 13, 33–60.
Avnimelech, G. and M. Teubal (2006), ‘Creating venture capital industries that co-
evolve with high tech: Insights from an extended industry life cycle perspective
of the Israeli experience’, Research Policy, 35, 1477–98.
Avnimelech, G. and M. Teubal (2008), ‘From direct support of business sector
R&D/innovation to targeting of venture capital/private equity: a catching up
innovation and technology policy cycle perspective’, Economics of Innovation
and New Technology, 17, 153–72.
Bloch, C. (2008), ‘The market valuation of knowledge assets’, Economics of
Innovation and New Technology, 17, 269–84.
Bozkaya, A. and B. van Pottelsberghe de la Potterie (2008), 'Who funds
technology-based small firms? Evidence from Belgium’, Economics of Innovation
and New Technology, 17, 97–122.
Campart, S. and E. Pfister (2007), ‘Technology cooperation and stock market
value: an event study of new partnership announcements in the biotechnology
and pharmaceutical industries’, Economics of Innovation and New Technology,
17, 31–49.
Chesbrough, H. (2003), Open Innovation. The New Imperative for Creating and
Profiting from Technology, Boston, MA: Harvard Business School Press.
Coase, R. (1988), The Firm, the Market and the Law, Chicago, IL: The University
of Chicago Press.
Dahlman, C.J. (1979), ‘The problem of externality’, Journal of Law and Economics,
22, 141–62.
De Liso, N. (2006), ‘Charles Babbage, technological change and the “National
System of Innovation”’, Journal of Institutional and Theoretical Economics, 162,
470–85.
Fornahl, D. and M.P. Menzel (2004), ‘Co-development of firms founding and
regional cluster’, Discussion Paper No. 284, University of Hanover, Faculty of
Economics.
Gompers, P. and J. Lerner (1999), The Venture Capital Cycle, Cambridge, MA:
The MIT Press.
Gompers, P. and J. Lerner (2004), The Venture Capital Cycle, 2nd edn, Cambridge,
MA: The MIT Press.
Hall, B.H. (2002), ‘The financing of research and development’, Oxford Review of
Economic Policy, 18, 35–51.
Hodgson, G.M. (1998), ‘The approach of institutional economics’, Journal of
Economic Literature, 36, 166–92.
Hussinger, K. (2006), ‘Is silence golden? Patents versus secrecy at the firm level’,
Economics of Innovation and New Technology, 15, 735–52.
King, R.G. and R. Levine (1993), ‘Finance and growth: Schumpeter might be
right’, Quarterly Journal of Economics, 108, 717–37.
Lane, D.A. (1993), ‘Artificial worlds and economics, part II’, Journal of Evolutionary
Economics, 3, 177–97.
Lane, D.A. and R.R. Maxfield (2005), ‘Ontological uncertainty and innovation’,
Journal of Evolutionary Economics, 15, 3–50.
Lane, D.A., S.E. van Der Leeuw, A. Pumain and G. West (eds.) (2009), Complexity
Perspectives in Innovation and Social Change, Berlin: Springer, pp. 1–493.
Marshall, A. (1890), Principles of Economics, London: Macmillan (8th edn,
1920).
Menard, C. (ed.) (2000), Institutions Contracts and Organizations. Perspectives
from New Institutional Economics, Cheltenham, UK and Northampton, MA,
USA: Edward Elgar.
Menard, C. (2004), ‘The economics of hybrid organizations’, Journal of Institutional
and Theoretical Economics, 160, 345–76.
Menard, C. and M.M. Shirley (eds) (2005), Handbook of New Institutional
Economics, Dordrecht: Springer.
Nelson, R.R. (1994), ‘The co-evolution of technology, industrial structure and
supporting institutions’, Industrial and Corporate Change, 3, 47–63.
Nelson, R.R. (1995), ‘Recent evolutionary theorizing about economic change’,
Journal of Economic Literature, 33, 48–90.
Perez, C. (2003), Technological Revolutions and Financial Capital: The Dynamics
of Bubbles and Golden Ages, Cheltenham, UK and Northampton, MA, USA:
Edward Elgar.
Quéré, M. (2004), ‘National systems of innovation and national systems of govern-
ance: a missing link?’, Economics of Innovation and New Technology, 13, 77–90.
Richardson, G.B. (1972), ‘The organization of industry’, Economic Journal, 82,
883–96.
Richardson, G.B. (1998), The Economics of Imperfect Knowledge, Cheltenham,
UK and Northampton, MA, USA: Edward Elgar.
Richter R. (2007), ‘The market as organization’, Journal of Institutional and
Theoretical Economics, 163, 483–92.
Sah, R.K. and J.E. Stiglitz (1986), ‘The architecture of economic systems’,
American Economic Review, 76, 716–27.
Sah, R.K. and J.E. Stiglitz (1988), ‘Committees, hierarchies and polyarchies’,
Economic Journal, 98, 451–70.
Saxenian, A. (1994), Regional Advantage: Culture and Competition in Silicon
Valley and Route 128, Cambridge, MA: Harvard University Press.
Schmookler, J. (1966), Invention and Economic Growth, Cambridge, MA: Harvard
University Press.
Schumpeter, J.A. (1934), The Theory of Economic Development, Cambridge, MA:
Harvard University Press.
Schumpeter, J.A. (1942), Capitalism, Socialism and Democracy, New York:
Harper and Brothers.
Stiglitz, J.E. (1985), ‘Credit markets and capital control’, Journal of Money,
Credit and Banking, 17, 133–52.
Stiglitz, J.E. and A. Weiss (1981), ‘Credit rationing in markets with imperfect
information’, American Economic Review, 71, 912–27.
4. How much should society fuel
the greed of innovators? On the
relations between appropriability,
opportunities and rates of
innovation
Giovanni Dosi, Luigi Marengo and
Corrado Pasquali

1. INTRODUCTION

This chapter attempts a critical assessment of both theory and empirical
evidence on the role and consequences of the various modes of appro-
priation, with particular emphasis on intellectual property rights (IPR), as
incentives for technological innovation.
That profit-motivated innovators are fundamental drivers of the
‘unbound Prometheus’ of modern capitalism (Landes, 1969) has been
well appreciated since Smith, Marx and, later, Schumpeter. For a long
time such an acknowledgment has come as an almost self-evident ‘stylized
fact’. Finer concerns about the determinants of the propensity to innovate
by entrepreneurs and business firms came much later with the identifi-
cation of a potentially quite general trade-off underlying the economic
exploitation of technological knowledge: in so far as the latter is a non-
rival and hardly excludable quasi-public good, pure competitive markets
are unable to generate a stream of quasi-rents sufficient to motivate profit-
seeking firms to invest resources in its production (Arrow, 1962). In order
to provide such incentives, a general condition is to depart from pure
competition (as was indeed quite naturally acknowledged by Smith, Marx
and Schumpeter). Granted that, however, what is empirically the extent
of such a departure? And, from a normative point of view, what is the
desirable degree of appropriability able to fuel a sustained flow of inno-
vations undertaken by business firms? And through which mechanisms?
Moreover, what is the impact of different institutional and technological
conditions upon the profitability and competitive success of innovators
themselves?
The last angle is the one tackled in the seminal paper of David Teece
(1986), who argues that profits from innovation depend upon the interac-
tion of three families of factors, namely, appropriability regimes, com-
plementary assets and the presence or absence of a dominant paradigm.
Note that appropriability conditions, in addition to patent and copyright
protection, include secrecy, lead times, costs and time required for dupli-
cation, learning, sales and service assets. Moreover, as Teece empha-
sizes, such appropriability regimes are largely dictated by the nature of
technological knowledge (Teece, 1986, p. 287).
These fundamental observations on the mechanisms through which
firms ‘benefit from innovation’, however, have been lost in a good deal of
contemporary literature on the incentive to innovate, wherein, first, appro-
priability conditions are reduced almost exclusively to IPR regimes, and,
second, the award of IPR themselves is theoretically rooted in a frame-
work – in our view deeply misleading – namely that of ‘market failures’.
In what follows, we start from a critical assessment of such a perspective
and of the related notion of a monotonic relation between IP protection
and rates of innovation (Section 2). Next, after an overview of the recent
changes in IPR regimes (Section 3), in Section 4 we review the empirical
evidence on the relationship between appropriability in general and IP
protection in particular, on the one hand, and rates of innovation on the
other.
Such evidence, we shall argue, suggests that, first, appropriability
conditions are just one of several factors (possibly second-order ones)
shaping the propensity to innovate. Together, the relative importance of
the various factors and their interaction is highly sector- and technology-
specific.
Second, appropriability is likely to display a threshold effect, meaning
that a minimum degree of appropriability is necessary to motivate
innovative effort, but above such a threshold further strengthening of
appropriability conditions will not determine further increases of R&D
investments and rates of innovation. Rather, social inefficiencies such as
‘anti-commons’ effects (Heller and Eisenberg, 1998), rent-seeking behav-
iors, dissipation of quasi-rents into litigation and so on are much more
likely to emerge.
Third, and relatedly, there seems to be no clear evidence of a positive
relation between the tightening of IPR regimes and the rates of innova-
tion. Conversely, there is good evidence on the (perverse) links between
IPR protection and income distribution.
The rates of innovation, we suggest, fundamentally depend on paradigm-
specific opportunities rather than on mere appropriability conditions (at
least above some threshold) and even less so on the specific subset of
appropriability devices represented by legal IPR protection.
Note that observed rates of innovation at the level of an industry or
an economy are only remotely related to any ‘equilibrium’ rate of R&D
investment by the ‘representative’ firm, whatever that means. Given what-
ever incentive profile, one typically observes quite varied search responses
(as very roughly measured by R&D investments) and also quite differ-
ent technological and economic outcomes, well beyond what a statisti-
cian would interpret as independent realizations of the same underlying
random process. We thus conclude (Section 5) that while the first-order
determinants of the rates of innovation rest within the technology-specific
and sector-specific opportunity conditions, the differential ability of indi-
vidual firms to benefit from them economically stem from idiosyncratic
organizational capabilities.
But if this is the case, the answer to the question we ask in the title of this
chapter is also straightforward: fueling the greed of innovators might be
at best irrelevant for the ensuing rates of innovation, while of course bad
from a social point of view.

2. SOME FAILURES OF THE ‘MARKET FAILURE’ ARGUMENTS

The economic foundations of both theory and practice of IPR rest upon
a standard market failure argument, without any explicit consideration
of the characteristics of the knowledge whose appropriation should be
granted by patent or other forms of legal monopoly.
The proposition that a positive and uniform relation exists between
innovation and intensity of IP protection in the form of legally enforced
rights such as patents holds only relative to a specific (and highly disput-
able) representation of markets, their functioning and their ‘failures’, on
the one hand, and of knowledge and its nature on the other.
The argument falls within the realm of the standard ‘Coasian’ positive
externality problem, which can be briefly stated in the following way.
There exists a normative set of efficiency conditions under which markets
perfectly fulfill their role of purely allocative mechanisms.
The lack of externalities is one of such conditions because their appear-
ance amounts (as with positive externalities) to underinvestment and
underproduction of those goods involved in the externality itself. Facing
any departure from efficiency conditions, a set of policies and institutional
devices must be put in place with the aim of re-establishing them in order to
achieve social efficiency. Knowledge generation is one of the loci entailing
such an externality: since knowledge is (to a great extent) a public good, it
will be underproduced and will receive insufficient investments. Hence an
artificial scarcity is created to amend non-rivalry and non-excludability in
its use, yielding an appropriate degree of appropriability of returns from
investments in its production. The core of the matter then becomes one
of balancing out the detrimental effect of the deadweight loss implied by
a legally enforced monopoly, on the one hand, and the beneficial effect of
investments in R&D and more generally in knowledge generation, on the
other.
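The static side of this balancing act is easy to quantify in a stylized example. The sketch below is ours, not the chapter's: it assumes a linear demand curve P = a - bQ, a constant marginal cost c and purely illustrative numbers, and it simply contrasts the innovator's quasi-rent under a legal monopoly with the deadweight loss that the same monopoly imposes.

    # Stylized welfare arithmetic (illustrative assumptions only): linear demand
    # P = a - b*Q with constant marginal cost c.
    a, b, c = 100.0, 1.0, 20.0

    # Competitive benchmark: price equals marginal cost, all surplus goes to consumers.
    q_comp = (a - c) / b                     # 80 units
    cs_comp = 0.5 * (a - c) * q_comp         # consumer surplus = 3200

    # Legal monopoly (e.g. a patent): marginal revenue equals marginal cost.
    q_mono = (a - c) / (2 * b)               # 40 units
    p_mono = a - b * q_mono                  # price = 60
    quasi_rent = (p_mono - c) * q_mono       # innovator's reward = 1600
    cs_mono = 0.5 * (a - p_mono) * q_mono    # consumer surplus = 800
    deadweight_loss = cs_comp - cs_mono - quasi_rent   # 800

    print(quasi_rent, deadweight_loss)

With these assumed numbers the incentive (a quasi-rent of 1600) is bought at a social cost of 800; the standard argument is entirely about whether the R&D so induced is worth that loss.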
A number of general considerations can be made about this argument.
First, the argument rests fundamentally on the existence of a theoreti-
cal (but hardly relevant in terms of empirical and descriptive adequacy)
benchmark of efficiency against which policy and institutional interven-
tions should be compared as to their necessity and efficacy. Second, the
efficiency notion employed is a strict notion of static efficiency that brings
with it the idea that markets do nothing except (more or less efficiently)
allocate resources. Third, a most clear-cut distinction between market and
non-market realms is assumed, together with the idea that non-market
(policy, institutional) interventions can re-establish perfect competition
using purely market-based ‘tools’. Fourth, it is assumed that the nature of
‘knowledge’ is totally captured by the notion of ‘information’, thus setting
the possibility of treating it institutionally in uniform ways, neglecting any
dimension of knowledge that relates to its ‘non public good’ features.
According to this perspective, the transformation of the public-good
‘knowledge’ into the private-good ‘patent’ will perfectly set incentives for
its production by way of legally enforced conditions and possibilities of
appropriability.
However, if one starts questioning that markets solely allocate resources,
one may begin to consider them as performing a wider set of activities
such as being the places in which ‘novelty’ is (imperfectly) produced,
(imperfectly) tested and (imperfectly) selected.
In this alternative perspective, it becomes hard to reduce any efficiency
consideration to static efficiency, so that, for instance, it is not necessar-
ily true that allocative patterns that are efficient from a static perspective
have the same property from a dynamical point of view. It thus follows
that the institutional attribution of property rights (whether efficient or
not in a static allocative perspective) may strongly influence the patterns
of technological evolution in directions that are not necessarily optimal or
even desirable.
In this sense, any question about the appropriate level of IP protection
and degree of appropriability would be better grounded in a theory of
innovative opportunities and productive knowledge (issues on which the
theory of allocative efficiency is rather silent: see Winter, 1982 and Stiglitz,
1994 from different angles).
In addition, viewing markets as embedded and depending upon a whole
ensemble of non-market institutions allows us to appreciate the fact that
technological innovation is highly dependent on a variety of comple-
mentary institutions (e.g. public agencies, public policies, universities,
communities and of course corporate organizations with their rich inner
structure) that can hardly be called ‘markets’ and hardly be regulated by
pure market incentives. Precisely this institutional embeddedness of inno-
vative activities makes it very unlikely that a ‘market failure’ approach
such as the one we sketched above could provide any satisfactory account
of the relationship between appropriability and propensity to innovate.
Finally, the (misleading) identification of knowledge with information
(i.e. the deletion of any reference to cognitive and procedural devices
whose role is to transform sheer information into ‘useful knowledge’ and
which are to a large extent tacit and embedded in organizations) makes
one forget that processes through which new knowledge is generated are
strongly dependent on the specificities of each technological paradigm
(which can hardly be reduced to ‘information’ categories).
One question that seems to be rarely asked (and answered) in precise
terms is: what is (if any) the increase in the value of an innovation real-
ized by way of patenting it? A straightforward answer to this question
would be: in a perfectly competitive market, any innovation has no value
(i.e. its price equals zero) as its marginal cost of reproduction equals zero.
As a consequence, the whole and sole value of an innovation comes from
its being patented. In this perspective, one is forced to conclude that a
straightforward positive relation exists between innovative activities and
patents: a relation in which patents are the one and only source of value of
technological innovations (given perfect competition). That is, in Teece’s
words, patents would be the only way of ‘profiting from technological
innovation’.
Under more careful scrutiny, however, this argument is subject to
a series of limitations and counter-examples. A first class of counter-
arguments arises from the many instances of innovations that, in spite of
not being patented (or patented under very weak patent regimes), have
most definitely produced considerable streams of economic value.
Relevant examples can be drawn from those technologies forming
the core of ICT (information and communication technology). For
instance, the transistor, while being patented from Bell Labs, was liber-
ally licensed also as a consequence of antitrust litigation and pressure
from the US Justice Department: its early producers nonetheless obtained
enough revenue to be the seeds of the emergence of a whole industry
(Granstrand, 2005). The early growth of the semiconductor industry had
been driven to a great extent by public procurement in a weak IP regime.
The software industry, certainly a quite profitable one, similarly emerged
under a weak IP regime. The telecom industry was, until the 1990s largely
operated by national monopolies that were also undertaking a good deal
of research, and IPR played little role in the rapid advance of technology
in this industry. Mobile telephony also emerged under a weak IP regime
(until the late 1980s).
We suggest indeed that strong IPR did not play a pivotal role either in
the emergence of ICT or as a means of value generation. On the contrary,
in the early stage of those sectors it might have been the very weakness
of the patent regime that spurred their rapid growth. Conversely, the
strengthening of the IP regime in recent years (soon after the ICT boom
in the late 1980s) might well have been (in terms of political influence) a
consequence rather than a cause of the fast pace at which the ICT sector
expanded.
Back to our opening question, it is worth noting how (some) economists
have been at least cautious with respect to the adoption of the patent
system as the only means to foster innovative activity and to its uniform
effectiveness. As Machlup (1958, p. 80) put it: ‘If we did not have a patent
system, it would be irresponsible, on the basis of our present knowledge
of its economic consequences, to recommend instituting one. But since
we have had a patent system for a long time, it would be irresponsible, on
the basis of our present knowledge, to recommend abolishing it.’ Similar
doubts are expressed in David (1993, 2002), who argues that IPR are not
necessary for new technologies and suggests that different institutional
mechanisms more similar to open science might work more efficiently.
Of course, the cautious economist is well aware that even from a purely
theoretical point of view, the innovation/patent relation is by no means
a simple one. And similarly tricky from a policy point of view is the
identification of balance between gains and losses of any system of IP
protection.
As a matter of fact, on the one hand it may be argued that IP monopo-
lies afforded by patents or copyright raise prices above unit production
costs, thus diminishing the benefits that consumers derive from using pro-
tected innovations. On the other hand, the standard argument claims that
the same rights provide a significant incentive to produce new knowledge
through costly investments in innovative research.
However, such a purported trade-off might well apply also at the micro
level.
Whether or not a firm has the profitability of its own innovations
secured by IPR, its R&D behavior and its IPR enforcement strategies
the costs that other firms incur when trying to access and utilize existing
knowledge. Similar dilemmas apply to the effects of a strong IP system on
competition process. Static measures of competition may decrease when a
monopoly right is granted but dynamic measures could possibly increase if
this right facilitates entry into an industry by new and innovative firms.
Are these trade-offs general features of the relationship between static
allocative efficiency and dynamic/innovative efficiency? There are good
reasons to think that such trade-offs might not theoretically even appear
in an evolutionary world, as Winter (1993) shows.
On the grounds of a simple evolutionary model of innovation and
imitation, Winter (1993) compares the properties of the dynamics of a
simulated industry with and without patent protection to the innovators.
The results show that, first, under the patent regime the total surplus (i.e.
the total discovered present value of consumers’ and producers’ surplus) is
lower than under the non-patent one. Second, and even more interesting,
the non-patent regime yields significantly higher total investment in R&D
and displays higher best-practice productivity.
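The flavor of this comparison can be conveyed by a deliberately stripped-down simulation of our own – it is not Winter's model, and every parameter and function name below is an illustrative assumption. The only difference between the two regimes is whether laggard firms are allowed to imitate current best practice:

    # Toy innovation/imitation dynamics (not Winter's 1993 model; all parameters assumed).
    import random

    def simulate(patents, periods=200, n_firms=20, p_success=0.05, step=0.05, seed=0):
        rng = random.Random(seed)
        productivity = [1.0] * n_firms
        for _ in range(periods):
            best = max(productivity)
            for i in range(n_firms):
                if rng.random() < p_success:
                    # a successful draw improves on what the firm itself already does
                    productivity[i] *= (1 + step)
                elif not patents and productivity[i] < best:
                    # without patent protection, laggards imitate current best practice
                    productivity[i] = best
        return round(max(productivity), 2), round(sum(productivity) / n_firms, 2)

    print('patent regime:   ', simulate(patents=True))    # (best practice, industry average)
    print('no-patent regime:', simulate(patents=False))

With free imitation the frontier advances whenever any firm happens to innovate, because every firm searches from near the frontier; with imitation blocked, each firm can build only on its own past successes, and measured best-practice and average productivity grow more slowly – the qualitative flavor of Winter's result.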
More generally, an evolutionary interpretation of the relation between
appropriability and innovation is based on the premise that no model
of invention and innovation and no answer to the patent policy ques-
tion is possible without a reasonable account of inventive and innovative
opportunities and their nature.
The notion of technological paradigm (Dosi, 1982), in this respect,
is precisely an attempt to account for the nature of innovative activi-
ties. There are a few ideas associated with the notion of paradigm worth
recalling here.
First, note that any satisfactory description of ‘what technology is’
and how it changes must also embody the representation of the specific
forms of knowledge on which a particular activity is based and cannot
be reduced to a set of well-defined blueprints. It primarily concerns
problem-solving activities involving – to varying degrees – also tacit forms
of knowledge embodied in individuals and in organizational procedures.
Second, paradigms entail specific heuristics and visions on ‘how to do
things’ and how to improve them, often shared by the community of prac-
titioners in each particular activity (engineers, firms, technical societies
and so on), that is, they entail collectively shared cognitive frames. Third,
paradigms often also define basic templates of artifacts and systems, which
over time are progressively modified and improved. These basic artifacts
can also be described in terms of some fundamental technological and
economic characteristics. For example, in the case of an airplane, its
basic attributes are described not only and obviously in terms of inputs
and production costs, but also on the basis of some salient technological
features such as wing-load, take-off weight, speed, distance it can cover
etc. What is interesting here is that technical progress seems to display pat-
terns and invariances in terms of these product characteristics. Hence the
notion of technological trajectories associated with the progressive reali-
zation of the innovative opportunities underlying each paradigm. In turn,
one of the fundamental implications of the existence of such trajectories is
that each particular body of knowledge (each paradigm) shapes and con-
strains the rates and direction of technical change, in a first rough approxi-
mation, irrespective of market inducements, and thus also irrespective of
appropriability conditions.

3. THE GROWTH IN PATENTING RATES AND THE (MIS)USES OF PATENT PROTECTION

Needless to say, such a lack of any robust theory-backed relation between
appropriability (and even less IPR forms of appropriability) and rates of
innovation puts the burden of proof upon the actual empirical record.
Indeed, the past two decades have witnessed the broadening of the
patenting domain, including the application of ‘property’, to scientific
research and its results. This has been associated with an unprecedented
increase in patenting rates. Between 1988 and 2000, patent applications
from US corporations have more than doubled.
The relation between the two phenomena, however, and – even more
important – their economic implications are subject to significant contro-
versy (for discussion, see Kortum and Lerner, 1998; Hall, 2005; Lerner,
2002; Jaffe and Lerner, 2004; and Jaffe, 2000).
A first hypothesis is that the observed ‘patent explosion’ has been linked
to an analogously unprecedented explosion in the amount and quality of
scientific and technological progress. A ‘hard’ version of that hypothesis
would claim that the increase of patents has actually spurred the accelera-
tion of innovation, which otherwise would have not taken place. A ‘softer’
version would instead maintain that the increase of patents has been an
effect rather than a cause of increased innovation, as the latter would also
have taken place with weaker protection.
The symmetrically opposite hypothesis is that the patent explosion is
due to changes both in the legal and institutional framework and in firms’
strategy, with little relation to the underlying innovative activities.
While it is difficult to come to sharp conclusions in the absence
of counterfactual experiments, circumstantial evidence does lend some
support to the latter hypothesis.
Certainly part of the growth in the number of patents is simply due to
the expansion of the patentability domain to new types of objects such as
software, research tools, business methods, genes and artificially engineered
organisms (see also Tirole, 2003, on the European case). Moreover, new
actors have entered the patenting game, most notably universities and public
agencies (more on this in Mowery et al., 2001). Finally, corporate strategies
vis-à-vis the legal claim of IPR appear to have significantly changed.
First, patents have acquired importance among the non-physical assets
of firms as a means to signal the enterprise’s value to potential inves-
tors, even well before the patented knowledge has been embodied in any
marketable good. In this respect, the most relevant institutional change
is to be found in the so-called ‘Alternative 2’ under the NASDAQ regula-
tion (1984). This allowed ‘market entry and listing of firms operating at
a deficit on the condition that they had considerable intangible capital
composed of IPRs’.
At the same time, patents seem to have acquired a strategic value, quite
independently from any embodiment in profitable goods and even in those
industries in which they were considered nothing more than a minor by-
product of R&D: extensive portfolios of legal rights are considered means
for entry deterrence (Hall and Ziedonis, 2001) and for infringement and
counter-infringement suits against rivals. Texas Instruments, for instance,
is estimated to have gained almost US$1 billion from patent licenses and
settlements resulting from its aggressive enforcement policy. It is interest-
ing to note that this practice has generated a new commercial strategy
called ‘defensive publishing’.
According to this practice, firms that find it too expensive to build an
extensive portfolio of patents tend to openly describe an invention in order
to place it in the ‘prior art’ domain, thus preserving the option to employ
that invention free from the interference of anyone who might eventually
patent the same idea.
Kortum and Lerner (1998) present a careful account of different
explanations of recent massive increases in patenting rates, comparing
different interpretative hypotheses. First, according to the ‘friendly court
hypothesis’, the balance between costs related to the patenting process (in
terms, e.g., of loss of secrecy) and the value of the protection that a patent
affords to the innovator had been altered by an increase in the probability
of successful application granted by the establishment in the USA of the
Court of Appeals for the Federal Circuit (CAFC) specialized in patent
cases – regarded by most observers as a strongly pro-patent institution
(see Merges, 1996).
Second, the ‘regulatory capture’ hypothesis tries to explain the surge of US patent
applications, tracing it back to the fact that business firms in general and
larger corporations (whose propensity to patent has traditionally been
higher than average) in particular succeeded in inducing the US govern-
ment to change patent policy in their favor by adopting a stronger patent
regime.
The third hypothesis grounds the interpretation in a general increase
in ‘technological opportunities’ related, in particular, to the emergence
of new technological paradigms such as those concerning information
technologies and biotechnologies.
Remarkably, Kortum and Lerner (1998) do not find any overwhelm-
ing support either for the political/institutional explanations or for the
latter hypothesis tracing the surge in patenting to changes in the underlying
technological opportunities. At the same time there is good evidence that
the cost related to IP enforcement has gone up, together with the firms’
propensity to litigate: the number of patent suits instituted in the US
Federal Courts has increased from 795 in 1981 to 2573 in 2001. Quite
naturally, this has led to significant increases in litigation expenditures.
It has been estimated by the US Department of Commerce that patent
litigation begun in 1991 led to total legal expenditures by US firms that
were at least 25 percent of the amount of basic research by these firms
in that year.

4. THE BLURRED RELATIONS BETWEEN APPROPRIABILITY AND INNOVATION RATES: SOME EVIDENCE

What is the effect of the increase in patent protection on R&D and techni-
cal advance? Interestingly, in this domain also, the evidence is far from
conclusive. This is for at least two reasons. First, innovative environments
are concurrently influenced by a variety of different factors, which makes
it difficult (for both the scholar and the policy-maker) to distinguish patent
policy effects from effects due to other factors. Indeed, as we shall argue
below, a first-order influence is likely to be exerted by the richness of
opportunities, irrespective of appropriability regimes. Second, as patents
are just one of the means to appropriate returns from innovative activity,
changes in patent policy might often be of limited effect.
At the same time, the influence of IPR regimes upon knowledge dis-
semination appears to be ambiguous. Horstmann et al. (1985) highlight
the cases in which, on the one hand, the legally enforced monopoly rents
should induce firms to patent a large part of their innovations, while, on
the other hand, the costs related to disclosure might well be greater than
the gain eventually attainable from patenting.
In this respect, to our knowledge, not enough attention has been devoted
to the question of whether the diffusion of technical information embodied in
inventions is enhanced or not by the patent system.
The somewhat symmetric opposite issue concerns the costs involved in
the imitation of patent-protected innovations. In this respect, Mansfield
et al. (1981) find, first, that patents do indeed entail some significant imi-
tation costs. Second, there are remarkable intersectoral differences. For
example, their data show increases in imitation costs due to patents of about
30 percent in drugs, 20 percent in chemicals and
only 7 percent in electronics. In addition, they show that patent protection
is not essential for the development of at least three out of four patented
innovations. Innovators introduce new products notwithstanding the fact
that other firms will be able to imitate those products at a fraction of the
costs faced by the innovator. This happens both because there are other
barriers to entry and because innovations are felt to be profitable in any
case. Both Mansfield et al. (1981) and Mansfield (1986) suggest that the
absence of patent protection would have little impact on the innovative
efforts of firms in most sectors. The effects of IPR regimes on the propen-
sity to innovate are also likely to depend upon the nature of innovations
themselves and in particular whether they are, so to speak, discrete ‘stand-
alone’ events or ‘cumulative’. So it is widely recognized that the effect of
patenting might turn out to be a deleterious one on innovation in the case
of strongly cumulative technologies in which each innovation builds on
previous ones.
As Merges and Nelson (1994) and Scotchmer (1991) suggest, in this
realm stronger patents may represent an obstacle to valuable but poten-
tially infringing research rather than an incentive.
Historical examples, such as those quoted by Merges and Nelson on
the Selden patent on the use of a light gasoline internal combustion engine
to power an automobile, and the Wright brothers’ patent on an efficient
stabilizing and steering system for flying machines, are good cases in
point, showing how the IPR regime probably slowed down considerably
the subsequent development of automobiles and aircraft. The current
debate on property rights in biotechnology suggests similar problems,
whereby granting very broad claims on patents might have a detrimental
effect on the rate of innovation, in so far as they preclude the exploration
of alternative applications of the patented invention. This is particularly
the case with inventions concerning fundamental pieces of knowledge:
good examples are genes or the Leder and Stewart patent on a genetically
engineered mouse that develops cancer. To the extent that such techniques
and knowledge are critical for further research that proceeds cumulatively
on the basis of the original invention, the attribution of broad property
rights might severely hamper further developments. Even more so if the
patent protects not only the product the inventors have achieved (the
‘onco-mouse’) but the whole class of products that could be produced through
that principle (‘all transgenic non-human mammals’) or all the possible
uses of a patented invention (say, a gene sequence), even though they are
not named in the application.
More generally, the evidence suggests that the patents/innovation rela-
tion depends on the very nature of industry-specific knowledge bases,
on industry stages in their life cycles and on the forms of corporate
organizations.
Different surveys highlight, first, such intersectoral differences and
second, on average, the limited effectiveness of patents as an appropri-
ability device for the purpose of ‘profiting from innovation’. Levin et
al. (1987), for instance, report that patents are by and large viewed as
less important than learning-curve advantages and lead time in order to
protect product innovation and the least effective among appropriability
means as far as process innovations are concerned (see Table 4.1).
Cohen et al. (2000) present a follow-up to Levin et al. (1987) just cited
addressing also the impact of patenting on the incentive to undertake
R&D. Again, they report on the relative importance of the variety of
mechanisms used by firms to protect their innovations – including secrecy,
lead time, complementary capabilities and patents – see again Table 4.1.
The percentage of innovations for which a factor is effective in protecting
competitive advantage deriving from them is thus measured. The main
finding is that, as far as product innovations are concerned, the most
effective mechanisms are secrecy and lead time, while patents are the least
effective, with the partial exception of drugs and medical equipment.
Moreover, the reasons for the ‘not patenting’ choice are reported to be
(1) demonstration of novelty (32 percent); (2) information disclosure (24
percent); and (3) ease of inventing around (25 percent).
The uses of patents differ also relative to ‘complex’ and ‘discrete’ product
industries. Complex product industries are those in which a product is
protected by a large number of patents while discrete product industries
are those in which a product is relatively simple and therefore associated
with a small number of patents. In complex product industries, patents
are used to block rival use of components and acquire bargaining strength
in cross-licensing negotiations. In discrete product industries, patents are
used to block substitutes by creating patent ‘fences’ (see Gallini, 2002;
Ziedonis, 2004).
Table 4.1  Effectiveness of appropriability mechanisms in product and process
           innovations, 1983 and 1994 surveys, USA, 33 manufacturing industries

(a) Product innovations

Mechanism            1st         2nd         3rd         4th
                  1983 1994   1983 1994   1983 1994   1983 1994
Patents              4    7      3    5     17    7      9    4
Secrecy              0   13      0   11     11    2     22    5
Lead time           14   10     14    8      5    7      0    7
Sales & service     16    4     16    4      1    7      0   10
Manufacturing      n.a.   3    n.a.   3    n.a.  14    n.a.   7

(b) Process innovations

Mechanism            1st         2nd         3rd         4th
                  1983 1994   1983 1994   1983 1994   1983 1994
Patents              2    1      4    5      3    3     24   16
Secrecy              2   21     10   10     19    1      2    0
Lead time           26    3      5    7      2   16      0    3
Sales & service      4    0     16    0      7    3      6   11
Manufacturing      n.a.  10    n.a.  12    n.a.  10    n.a.   0

Note: n.a. = not available.

Source: Levin et al. (1987) and Cohen et al. (2000), as presented in Winter (2002).

It is also interesting to compare Cohen et al. (2000) with Levin et al.
(1987), which came before the changes in the IPR regime and before the
massive increase in patenting rates. Still, in Cohen et al. (2000) patents are
not reported to be the key means to appropriate returns from innovations
in most industries. Secrecy, lead time and complementary capabilities are
often perceived as more important appropriability mechanisms.
It could well be that a good deal of the increasing patenting activities
over the last two decades might have gone into ‘building fences’ around
some key invention, thus possibly raising the private rate of return to
patenting itself (Jaffe, 2000), without however bearing any significant rela-
tion to the underlying rates of innovation. This is also consistent with the
evidence discussed in Lerner (2002), who shows that the growth in (real)
R&D spending pre-dates the strengthening of the IPR regime.
The apparent lack of effects of different IPR regimes upon the rates
of innovation appears also from broad historical comparisons. So, for
example, based on the analysis of data from the catalogues of two
nineteenth-century world fairs – the Crystal Palace Exhibition in London
in 1851, and the Centennial Exhibition in Philadelphia in 1876 – Moser
(2003) finds no evidence that countries with stronger IP protection pro-
duced more innovations than those with weaker IP protection and strong
evidence of the influence of IP law on sectoral distribution of innovations.
In weak IP countries, firms did innovate in sectors in which other forms
of appropriation (e.g. secrecy and lead time) were more effective, whereas
in countries with strong IP protection significantly more innovative effort
went to the sectors in which these other forms were less effective. Hence
the interesting conclusion that can be drawn from Moser’s study is that
patents’ main effect could well be on the direction rather than on the rate
of innovative activity.
The relationship between investment in research and innovative out-
comes is explored at length in Hall and Ziedonis (2001) in the case of the
semiconductor industry. In this sector, the limited role and effectiveness
of patents – related to short product life cycles and fast-paced innova-
tion which make secrecy and lead time much more effective appropri-
ability mechanisms – also makes the surge in patenting (dating back
to the 1980s) particularly striking. As Hall and Ziedonis report, in the
semiconductor industry patenting per R&D dollar doubled over the
period 1982–92. (Incidentally, note that, over the same period, patenting
rates in the USA were stable in manufacturing as a whole and declined in
pharmaceuticals.)
Semiconductors are indeed a high-opportunity sector whose relatively
low propensity to patent is fundamentally due to the characteristic of the
knowledge base of the industry. Thus it could well be that the growth in
patents might have been associated with the use of patents as ‘bargaining
chips’ in the exchanges of technology among different firms.
Such a use of (low-quality) patents – as Winter (2002) suggests – might
be a rather widespread phenomenon: when patents are used as ‘bargaining
chips’, that is, as ‘the currency of technology deals’, all the ‘standard
requirements’ about such issues as non-obviousness, usefulness, novelty,
articulability (you can’t patent an intuition), reducibility to practice (you
can’t patent an idea per se) and observability in use turn out to be much
less relevant.
In Winter’s terms, ‘if the relevant test of a patent’s value is what it is
worth in exchange, then it is worth about what people think it is worth
– like any paper currency. “Wildcat patents”1 work reasonably well to
facilitate exchanges of technology. So, why should we worry?’ One of
the worries concerns the ‘tragedy of anti-commons’. While the quality
of patents lowers and their use bears very little relation to the require-
ments of stimulating the production and diffusion of knowledge, the costs
devoted to untying conflicting and overlapping claims on IP are likely to
increase, together with the uncertainty about the extent of legal liability
emerge wherein the IP regime gives too many subjects the right to exclude
others from using fragmented and overlapping pieces of knowledge, with
no one having ultimately the effective privilege of use.
In these circumstances, the proliferation of patents might turn out to
have the effect of discouraging innovation. One of the by-products of the
recent surge in patenting is that, in several domains, knowledge has been so
finely subdivided into separate property claims (on essentially complemen-
tary pieces of information) that the cost of reassembling constituent parts/
properties in order to engage in further research places a heavy burden on
technological advance. This means that a large number of costly negotia-
tions might be needed in order to secure critical licenses, with the effect of
discouraging the pursuit of certain classes of research projects (e.g. high-risk
exploratory projects). Ironically, Barton (2000) notes that ‘the number of
intellectual property lawyers is growing faster than the amount of research’.
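A back-of-envelope calculation – ours, with purely hypothetical numbers – conveys the mechanism: an exploratory project with success probability p and payoff V remains worth undertaking only as long as the number k of separate complementary claims to be cleared satisfies pV - kt > 0, where t is the licence fee plus negotiation cost per claim.

    # Anti-commons arithmetic (illustrative assumptions only).
    p, V = 0.2, 5_000_000.0   # assumed success probability and payoff of the project
    t = 150_000.0             # assumed fee plus negotiation cost per complementary claim

    def net_expected_value(k):
        # expected value of the project once k separate claims have to be cleared
        return p * V - k * t

    print(net_expected_value(4), net_expected_value(8))   # 400000.0 versus -200000.0

Under these assumptions the project survives four fragmented claims but not eight: finer subdivision of the same knowledge can by itself push otherwise valuable research below the break-even line.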
While it is not yet clear how widespread are the foregoing phenomena
of a negative influence of strengthening IPR protection upon the rates of
innovation, a good deal of evidence does suggest that, at the very least,
there is no monotonic relation between IPR protection and propensity
to innovate. So, for example, Bessen and Maskin (2000) observe that
computers and semiconductors, while having been among the most inno-
vative industries in the last 40 years, have historically had weak patent
protection and rapid imitation of their products. It is well known that the
software industry in the USA experienced a rapid strengthening of patent
protection in the 1980s. Bessen and Maskin suggest that ‘far from unleash-
ing a flurry of new innovative activity, these stronger rights ushered in a
period in which R&D spending leveled off, if not declined, in the most
patent-intensive industries and firms’. The idea is that in industries such
as software, imitation might be promoting innovation and that, on the
other hand, strong patents might inhibit it. Bessen and Maskin argue that
this phenomenon is likely to occur in those industries characterized by a
relevant degree of sequentiality (each innovation builds on a previous one)
and complementarity (the simultaneous existence of different research
lines enhances the probability that a goal might be eventually reached). A
patent, in this perspective, actually prevents non-holders from the use of
the idea (or of similar ideas) protected by the patent itself, and in a sequen-
tial world full of complementarities this turns out to slow down innovation
rates. Conversely, it might well happen that firms would be better off in
an environment characterized by easy imitation, whereby it would be true
that imitation reduced current profits but it would also be true that easy
imitation raised the probability of further innovation taking place and of
further profitable innovations to be realized.
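The complementarity point can be restated as one line of arithmetic (our illustration, with assumed numbers): if each of m independent research lines succeeds with probability q, the chance that the next step in the sequence is reached at all is 1 - (1 - q)^m, so a broad patent that shrinks m also shrinks the probability that anyone moves the technology forward.

    # Complementarity of parallel research lines (illustrative numbers).
    q = 0.10                                   # assumed success probability of one line
    for m in (1, 3, 6):                        # number of lines allowed to attack the next step
        print(m, round(1 - (1 - q) ** m, 3))   # 0.1, 0.271, 0.469

Cutting the admissible lines from six to one, in this toy, cuts the chance of reaching the next step from roughly 47 to 10 percent.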
A related but distinct question concerns the relationship between IPRs,
the existence of markets for technologies and the rates of innovation and
diffusion (see Arora et al., 2001, for a detailed analysis of the develop-
ments). While it is certainly true that some IPR protection is often a neces-
sary condition for the development of markets for technologies, there is no
clear evidence suggesting that more protection means more markets. And
neither is there general evidence that more markets drive higher rates of
innovation. Rather, the degree to which technological diffusion occurs via
market exchange depends to a great extent on the nature of technological
knowledge itself, e.g. its degree of codifiability (Arora et al., 2001).
So far we have primarily discussed the relations between the regimes of
IPR protection and rates of innovations, basically concluding that either
the relation is not there, or, if it is there it might be a perverse one, with
strong IPR enforcement actually deterring innovative efforts. However, we
know also that IPR protection is only one of the mechanisms for appropri-
ating returns from innovation, and certainly not the most important one.
What about the impact of appropriability in general? Considering
together the evidence on appropriability from survey data (see Cohen et
al., 2000; Levin et al., 1987), the cross-sectoral evidence on technological
opportunities (see Klevorick et al., 1995) and the evidence from multi-
ple sources on the modes, rates and directions of innovation (for two
surveys, see Dosi, 1988; and Dosi et al., 2005), the broadbrush conclusion
is that appropriability conditions in general have only a limited effect on
the pattern of innovation, if any. This clearly applies above a minimum
threshold: with perfectly zero appropriability, the incentive to innovate for
private actors would vanish, but with few exceptions such a strict zero con-
dition is hardly ever encountered. And the threshold, as the case of open source
software shows, might indeed be very low.

5. OPPORTUNITIES, CAPABILITIES AND GREED: SOME CONCLUSIONS ON THE
DRIVERS OF INNOVATION AND ITS PRIVATE APPROPRIATION

There are some basic messages from the foregoing discussion of the
theory and empirical evidence on the relationship between degrees of IPR
protection and rates of innovation.
The obvious premise is that some private expectation of ‘profiting from
innovation’ is, and has been throughout the history of modern capitalism,
a necessary condition for entrepreneurs and business firms themselves to
undertake expensive and time-consuming search for innovations. That was
already clear to classical economists and has been quite uncontroversial
since.
However, having acknowledged that, there are neither strong theo-
retical reasons nor any strong empirical evidence suggesting that tuning
up or down appropriability mechanisms of innovations, in general, and
appropriability by means of IPR in particular, has any robust effect upon
the resources that private self-seeking agents devote to innovative search
and upon the rates at which they discover new products and new produc-
tion processes. As pointed out by the already mentioned survey by Jaffe
(2000) on the effects of the changes in IPR regimes in recent years, ‘there
is little empirical evidence that what is widely perceived to be a significant
strengthening of intellectual property protection had significant impact on
the innovation process’ (Jaffe, 2000, p. 540).
Note that any tightening of IPR is bound to come together with a fall in
‘consumer surplus’: making use somewhat uneasily of such a static tool for
welfare analysis, it is straightforward that as producers’ rents and prices
on innovation rise, the consumer surplus must fall. Conversely, on the
producers’ side,

to the extent that firms’ attention and resources are, at the margin, diverted
from innovation itself toward the acquisition, defense and assertion against
others of property rights, the social return to the endeavor as a whole is likely to
fall. While the evidence on all sides is scant, it is fair to say that there is at least
as much evidence of these effects of patent policy changes as there is evidence of
stimulation of research. (Jaffe, 2000, p. 555)

But if IPR regimes have at best second-order effects upon the rates of
innovation, what are the main determinants of the rates and directions of
innovation? Our basic answer, as argued above and elsewhere (see Dosi,
1988, 1997, Dosi et al., 2005) is the following. The fundamental determi-
nants of observed rates of innovation in individual industries/technologies
appear to be nested in levels of opportunities that each industry faces.
‘Opportunities’ capture, so to speak, the width, depth and richness of the
sea in which incumbents and entrants go fishing for innovation. In turn,
such opportunities are partly generated by research institutions outside
the business sector, partly stem from the very search efforts undertaken by
incumbent firms in the past and partly flow through the economic system
via supplier/user relationships (see the detailed intersectoral comparisons
in Pavitt, 1984, and in Klevorick et al., 1995). Given whatever level of
innovative opportunities is typically associated with particular techno-
logical paradigms, there seems to be no general lack of appropriability
conditions deterring firms from going out and fishing in the sea. Simply,
major contributions of that work is to build a taxonomy of strategies and
organizational forms and map them into the characteristics of knowledge
bases, production technologies and markets of the particular activity in
which the innovative/imitative firm operates.
As these ‘dominant’ modes of appropriation of the returns from inno-
vation vary across activities, so also should vary the ‘packets’ of winning
strategies and organizational forms: in fact, Teece’s challenging conjecture
still awaits a thorough statistical validation on a relatively large sample of
successes and failures.
Note also that Teece’s taxonomy runs counter to any standard ‘IPR-
leads-to-profitability’ model according to which turning the tap of IPR
ought to move returns up or down rather uniformly for all firms (except
for noise), at least within single sectors. Thus the theory is totally mute
with respect to the enormous variability across firms even within the same
sector and under identical IPR regimes, in terms of rates of innovation,
production efficiencies and profitabilities (for a discussion of such evidence
see Dosi et al., 2005).
The descriptive side – as distinguished from the normative, ‘strategic’
one – of the interpretation by Teece (1986) puts forward a promising can-
didate in order to begin to account for the patterns of successes and failures
in terms of suitability of different strategies/organizational arrangements
to knowledge and market conditions. However, Teece himself would
certainly agree that such interpretation could go only part of the way
in accounting for the enormous interfirm variability in innovative and
economic performances and their persistence over time.
A priori, good candidates for an explanation of the striking differences
across firms even within the same line of business in their ability to both
innovate and profit from innovation ought to include firm-specific fea-
tures that are sufficiently inertial over time and only limitedly ‘plastic’ to
strategic manipulation so that they can be considered, at least in the short
term, ‘state variables’ rather than ‘control variables’ for the firm (Winter,
1987). In fact, an emerging capability-based theory of the firm to which
Teece himself powerfully contributed (see Teece et al., 1990; and Teece et
al., 1997) identifies a fundamental source of differentiation across firms
in their distinct problem-solving knowledge, yielding different abilities of
‘doing things’ – searching, developing new products, manufacturing and
so on (see Dosi et al., 2000, among many distinguished others). Successful
corporations, as is argued in more detail in the introduction to Dosi et al.
(2000), derive competitive strength from their above-average performance
in a small number of capability clusters where they can sustain a lead-
ership. Symmetrically, laggard firms often find the imitation of
perceived best-practice production technologies difficult because of
problem of identifying the combination of routines and organizational
traits that makes company x good at doing z.
Such barriers to learning and imitation, it must be emphasized, have
very little to do with any legal regime governing the access to the use of
supposedly publicly disclosed but legally restricted knowledge such as that
associated with patent-related information.
Much more fundamentally, they relate to collective practices that in
every organization guide innovative search, production and so on. In
fact, in our view, given the opportunities for innovation associated with
a particular paradigm – which approximately also determine the ensuing
industry-specific rates of innovation – who wins and who loses among
the firms operating within that industry depends on both the adequacy of
their strategic choices – along the lines of the taxonomy of Teece (1986)
– and on the type of idiosyncratic capabilities that they embody. In our
earlier metaphor, while the ‘rates of fishing’ depend essentially on the size
and richness of the sea, idiosyncratic differences in the rates of success
in the fishing activity itself depend to a large extent on firm-specific
capabilities.
Moreover, the latter, jointly with complementary assets, fundamentally
also affects the ability to ‘profit from innovation’. Conversely, if we are
right, this whole story has very little to do with any change in the degree to
which society feeds the greed of the fishermen, in terms of prices they are
allowed to charge for their catch. That is, the tuning of IPR-related incen-
tives is likely to have only second-order effects, if any, while opportunities,
together with the capabilities of seeing them, are likely to be the major
drivers of the collective ‘unbound Prometheus’ of modern capitalism and
also to shape the ability of individual innovators to benefit from it.

ACKNOWLEDGMENT

This chapter was previously published in Research Policy, Vol. 35, No. 8,
2006, pp. 1110–21.

NOTE

1. Winter is here pursuing an analogy between patents and ‘wildcat banknotes’ in the US
free banking period (1837–65).

REFERENCES

Arora, A., A. Fosfuri and A. Gambardella (2001), Markets for Technology:
Economics of Innovation and Corporate Strategy, Cambridge, MA: MIT Press.
Arrow, K. (1962), ‘Economic welfare and the allocation of resources for inven-
tion’, in R. Nelson (ed.), The Rate and Direction of Inventive Activity, Princeton,
NJ: Princeton University Press.
Barton, J. (2000), ‘Reforming the patent system’, Science, 287, 1933–34.
Bessen, J. and E. Maskin (2000), ‘Sequential innovation, patents and imitation’,
Working Paper 00-01, Cambridge, MA: MIT Department of Economics.
Cohen, W., R.R. Nelson and J. Walsh (2000), ‘Protecting their intellectual assets:
appropriability conditions and why US manufacturing firms patent or not’,
Discussion Paper 7552, NBER.
David, P. (1993), ‘Intellectual property institutions and the panda’s thumb:
patents, copyrights, and trade secrets in economic theory and history’, in M.
Wallerstein, M. Mogee and R. Schoen (eds), Global Dimensions of Intellectual
Property Protection in Science and Technology, Washington, DC: National
Academy Press.
David, P. (2002), ‘Does the new economy need all the old IPR institutions?
Digital information goods and access to knowledge for economic develop-
ment’, presented at Wider Conference on the New Economy in Development,
Helsinki.
Dosi, G. (1982), ‘Technological paradigms and technological trajectories. A
suggested interpretation of the determinants and directions of technological
change’, Research Policy, 11, 147–62.
Dosi, G. (1988), ‘Sources, procedures and microeconomic effects of innovation’,
Journal of Economic Literature, 26, 1120–71.
Dosi, G. (1997), ‘Opportunities, incentives and the collective patterns of technical
change’, Economic Journal, 107, 1530–47.
Dosi, G., R.R. Nelson and S. Winter (eds.) (2000), The Nature and Dynamics of
Organizational Capabilities, Oxford and New York: Oxford University Press.
Dosi, G., L. Orsenigo and M. Sylos Labini (2005), ‘Technology and the economy’,
in N. Smelser and R. Swedberg (eds), The Handbook of Economic Sociology,
Princeton NJ: Princeton University Press/Russell Sage Foundation.
Gallini, N. (2002), ‘The economics of patents: lessons from recent U.S. patent
reform’, Journal of Economic Perspectives, 16, 131–54.
Granstrand, O. (2005), ‘Innovation and intellectual property rights’, in J. Fagerberg, D. Mowery and R. Nelson (eds), The Oxford Handbook of Innovation, Oxford: Oxford University Press, pp. 266–90.
Hall, B. (2005), ‘Exploring the patent explosion’, Journal of Technology Transfer,
30, 35–48.
Hall, B. and R. Ziedonis (2001), ‘The patent paradox revisited: firm strategy and
patenting in the US semiconductor industry’, Rand Journal of Economics, 32,
101–28.
Heller, M. (1998), ‘The tragedy of the anticommons: property in the transition from Marx to markets’, Harvard Law Review, 111, 621–88.
Heller, M. and R. Eisenberg (1998), ‘Can patents deter innovation? The anti-
commons in biomedical research’, Science, 280, 698–701.
Horstmann, I., G.M. MacDonald and A. Slivinski (1985), ‘Patents as information transfer mechanisms: to patent or (maybe) not to patent’, Journal of Political Economy, 93, 837–58.
Jaffe, A. (2000), ‘The US patent system in transition: policy innovation and the
innovation process’, Research Policy, 29, 531–57.
Jaffe, A. and J. Lerner (2004), Innovation and its Discontents, Princeton, NJ:
Princeton University Press.
Klevorick, A., R. Levin, R.R. Nelson and S. Winter (1995), ‘On the sources and significance of interindustry differences in technological opportunities’, Research Policy, 24, 185–205.
Kortum, S. and J. Lerner (1998), ‘Stronger protection or technological revolution: what is behind the recent surge in patenting?’, Carnegie-Rochester Conference Series on Public Policy, 48, 247–307.
Landes, D. (1969), The Unbound Prometheus, Cambridge: Cambridge University Press.
Lerner, J. (2002), ’150 years of patent protection’, American Economic Review:
Papers and Proceedings, 92, 221–25.
Levin, R., A. Klevorick, R.R. Nelson and S. Winter (1987), ‘Appropriating the
returns from industrial R&D’, Brookings Papers on Economic Activity, pp.
783–820.
Machlup, F. (1958), ‘An economic review of the patent system’, Discussion Paper,
US Congress, Washington DC: Government Printing Office.
Mansfield, E. (1986), ‘Patents and innovation: an empirical study’, Management
Science, 32, 173–81.
Mansfield, E., M. Schwartz and S. Wagner (1981), ‘Imitation costs and patents: an
empirical study’, Economic Journal, 91, 907–18.
Merges, R. (1996), ‘Contracting into liability rules: intellectual property rights and collective rights organizations’, California Law Review, 84, 1293–386.
Merges, R. and R.R. Nelson (1994), ‘On limiting or encouraging rivalry in techni-
cal progress: the effects of patent scope decisions’, Journal of Economic Behavior
and Organization, 25, 1–24.
Moser, P. (2003), ‘How do patent laws influence innovation? Evidence from
nineteenth-century world fairs’, Discussion Paper, NBER.
Mowery, D., R.R. Nelson, B. Sampat and A. Ziedonis (2001), ‘The growth of
patenting and licensing by US universities: an assessment of the effects of the
Bayh–Dole Act of 1980’, Research Policy, 30, 99–119.
Pavitt, K. (1984), ‘Sectoral patterns of innovation: toward a taxonomy and a
theory’, Research Policy, 13, 343–73.
Scotchmer, S. (1991), ‘Standing on the shoulders of giants: cumulative research
and the patent law’, Journal of Economic Perspectives, 5, 29–41.
Stiglitz, J. (1994), Whither Socialism?, Cambridge, MA: MIT Press.
Teece, D. (1986), ‘Profiting from technological innovation: implications for
integration, collaboration, licensing and public policy’, Research Policy, 15,
285–305.
Teece, D., G. Pisano and A. Shuen (1997), ‘Dynamic capabilities and strategic
management’, Strategic Management Journal, 18, 509–33.
Teece, D., R. Rumelt, G. Dosi and S. Winter (1990), ‘Understanding corporate
coherence: theory and evidence’, Journal of Economic Behavior and Organization,
23, 1–30.
Tirole, J. (2003), Protection de la propriété intellectuelle: une introduction et quelques
pistes de réflexion, Rapport pour le Conseil d’Analyse Economique.
Winter, S. (1982), ‘An essay on the theory of production’, in H. Hymans (ed.), Economics and the World Around It, Ann Arbor, MI: University of Michigan Press, pp. 55–93.
Winter, S. (1987), ‘Knowledge and competences as strategic assets’, in D.
Teece (ed.), The Competitive Challenge: Strategies for Industrial Innovation and
Renewal, New York: Harper & Row.
Winter, S. (1993), ‘Patents and welfare in an evolutionary model’, Industrial and
Corporate Change, 2, 211–31.
Winter, S. (2002), ‘A view of the patent paradox’, Presentation at London Business
School, 20 May.
Ziedonis, R. (2004), ‘Don’t fence me in: fragmented markets for technology and
the patent acquisition strategies of firms’, Management Science, 50, 804–20.
5. Global bioregions: knowledge
domains, capabilities and
innovation system networks
Philip Cooke

INTRODUCTION

This chapter explores a field, ‘globalization of bioregions’, that is of growing interest to social science and policy alike. A number of special
journal issues have been published (Cooke, 2003, 2004; Lawton Smith and
Bagchi-Sen, 2004), and others may be anticipated from different stables.
Outside the spatial field but well informed by it, an earlier one was pub-
lished in Small Business Economics (2001), subsequently enlarged into a
book (Fuchs, 2003). Numerous other books are published in this increas-
ingly dynamic field (Orsenigo, 1989; McKelvey, 1996; De la Mothe and
Niosi, 2000; Carlsson, 2001). There has also been a surge in paper publica-
tion, too large to cover in this brief review, but that of Powell et al. (2002)
is noteworthy in showing close spatial interactions among biotechnology
SMEs (small and medium-sized enterprises) and venture capitalists from
non-spatially designed survey data. Most economists writing about the
evolution of the biotechnology sector (many authors refuse to think of
a technology as an ‘industry’) accept that it is a classic case of a science-
driven sector, highly active in R&D expenditure, patenting of discoveries,
with research increasingly dominated by public (mainly university) labora-
tories, but exploitation mainly dominated not by large but by small firms.1
This is so for the biggest part of biotechnology, accounting for about 70
per cent of sales, biopharmaceuticals, which is the subject of much of what
follows.
Biopharmaceuticals, as a large subsector of biotechnology, is fascinat-
ing. It belies the prediction that multinational corporations are bound to
dominate all aspects of any industry due to economies of scale. While the
likes of Pfizer and Glaxo are certainly large global actors by any stand-
ards, they are rapidly losing capabilities to lead research. This function
has moved – rapidly in the 1990s – to university ‘Centres of Excellence’ and particularly to networks of dedicated biotechnology firms (DBFs) that often lead important research projects (Orsenigo et al., 2001) relying
on ‘big pharma’ for grants, marketing and distribution. All are agreed that
DBFs cluster around leading-edge university campuses and that even if
they are not the kind of DBFs that depend on immediate interaction with
research laboratories, they still agglomerate.
A good example of such an agglomerated non-cluster occurs to the
south-west of London, centred on Guildford. Why there? Static externali-
ties like good access to Heathrow airport and a suitable talent pool from
which to poach. But firms scarcely interact except with global clients. This
contrasts with the dynamic externalities from proximate co-presence with
many and various significant others in the sector. In the following brief
sections, these will be highlighted by reference to key issues of relevance to
cluster theory more generally, the specifics of science-driven industry, the
post-Schumpeterian symbiotics that characterize the double dependence
of small businesses on corporate financial clout and ‘big pharma’ on DBF
and laboratory research excellence. Also posed is the question – raised
in the ‘spillovers’ debate – as to whether most cluster interactions in bio-
technology (and yet to be asked for elsewhere) are really forms of market
rather than milieu-type transaction, pecuniary rather than non-pecuniary
spillovers, and traded rather than untraded interdependencies (Zucker et
al., 1998; Owen-Smith and Powell, 2004).

BIOTECHNOLOGY AND CLUSTER THEORY

Biotechnology is easily seen to be an economic activity, technology or sector both of the present but more for the future. Here, as elsewhere,
we shall call it an enabling technology although, for example, biophar-
maceuticals clearly also constitutes a subsector of the broader healthcare
industry along with research, pharmaceuticals, hospitals, medical schools
and a further panoply of support services that together comprise in some
countries 15 per cent or more of GDP, and in the UK up to 25 per cent of
national R&D expenditure (OECD, 2000; DTI, 1999).
Agricultural bioscience is similarly part of the wider agro-food industry.
Biotechnology can be shown to have its own value chain, especially in
biopharmaceuticals, although some parts of even that are indistinguishable
from agro-food biotechnology or environmental biotechnology, especially
regarding enzymes, recombinant DNA and the like. Indeed, agricultural
‘farming’ of DNA antibody sequences for healthcare application has been
patented by San Diego firm Epicyte through its Plantibodies
product. Although objections have been raised by researchers about the
scope of such a ‘broad patent’ for restricting academic research freedom, it does not take extra-sensory insight to see that such technologies can be the
saving of some agricultural areas assailed by global competition provided
they become more ‘knowledge-intensive’ in this and other ways.
Precisely this sort of Galilean leap seems to be at the heart of the clus-
tering process in this industry. Thus Epicyte’s actual patent-holder is San
Diego’s renowned Scripps Research Institute, where its founders Hiatt
and Hine worked. Nearby Dow AgroSciences is the licensing and funding
partner. This is the ‘holy trinity’ of interaction around biotechnology
discovery – basic research, ‘pharma’ or agro-chemicals funding and DBF
exploitation for commercial purposes, with ‘pharma’ earning returns
through marketing and distribution. In San Diego, to continue there for a
moment, are some 400 SMEs engaged in some aspect of biotechnological
core or peripheral activity and, as Lee and Walshok (2001) show, 45 were
founded by the Scripps, Salk, Burnham or La Jolla research institutes,
while 61 were founded directly from the University of California. Many
more were founded in the area as a consequence of knowledge flows and
investment opportunities for incoming and resident entrepreneurs adver-
tised through activities such as those promoted by the much-emulated UC
San Diego Connect network.
Equally characteristic is the well-connected non-local network of the
typical DBF. This is obviously not geographically embedded in the topog-
raphy of the cluster but seems to be consistently a feature of DBFs in such
clusters, namely their engagement in strong functional or organizational
proximity with other DBFs or research laboratories elsewhere, even
globally. Thus Epicyte has, for Plantibodies, strong research or produc-
tion links with DBFs in Princeton, NJ, Malvern, PA, Baltimore, MD
and Biovation in Aberdeen, Scotland. Its research affiliations are with
Johns Hopkins University, Baltimore, MD, Oklahoma Medical Research
Foundation, Cornell University, Ithaca, NY and its closely affiliated
agro-biotech centre, The Boyce Thompson Institute. Hence it has more
non-local than local ‘partners’, or from a markets viewpoint, customers
and suppliers. However, Scripps and Epicyte are specific research leaders
and knowledge originators for Plantibodies and the others are suppliers
in the knowledge value chain of research, funding and specific products
and/or technologies, testing and trialling capabilities, and marketing and
distributional expertise (Cooke, 2002).
What differentiates biotechnology clusters like this from others in dif-
ferent sectors is the science-driven nature of the business, and it is worth
remembering that there are roughly 400 other DBFs in this specific cluster
with many board and advisory cross-links, some of them local but more
non-local, each to some extent aping the profile of Epicyte, with strong
localized and stronger globalized, or at least North American, business linkages. It has recently been argued by Wolter (2002) that this character-
istic differentiates biotechnology clusters from Porterian ‘market’-driven
clusters: the Porterian ‘diamond’ of factor conditions for competitiveness
postulates transparent competitors, clear market opportunities, known
production and supply schedules and appropriate market support to
enable competing firms to perform effectively.
Bioscience lacks this kind of market stability. Most of its incumbents
do not make profits; indeed, new stock market rules, and even whole
markets like NASDAQ and (in the UK) AIM (the Alternative Investment
Market), had to be set up to allow firms not making profits to be pub-
licly invested in. Knowledge is unstable and it is prone to being swiftly
superseded. Skills are in short supply and requirements change rapidly.
Deep tacit knowledge of indeterminate but potentially enormous value
is in the minds and hands of academics; hence their capacity to generate
upfront input returns from licensing income or patent royalties. But many
DBFs presently earn little from product or treatment outputs. The key
to this unusual kind of economy is proximity to knowledge ‘fountain-
heads’ such as those described for Maryland by Feldman and Francis
(2003), Munich by Kaiser (2003), Montreal by Niosi and Bas (2003),
Cambridge by Casper and Karamanos (2003), and key knowledge centres
in Israel by Kaufmann et al. (2003). The largest of these may be termed
‘megacentres’ after a French term for science-led innovation complexes
like Grenoble that combine ‘ahead-of-the-curve’ science with disruptive
research, heavy investment in scientific and technological ‘talent’, and
incentives for entrepreneurs to exploit such discoveries commercially
(Johnson, 2002). This is a feature of biotechnology that steps beyond the
notion of ‘clusters’ and may presage new developmental opportunities for
other science-driven, research-intensive and heavily publicly funded com-
plexes in future, nanotechnology, for example, being the actual trigger for
Grenoble’s ‘megacentre’ appellation.

BEYOND SCHUMPETER?

Reference to disruptive research and technologies recalls Schumpeter’s view of innovation and entrepreneurship as characterized by ‘creative
destruction’ (Schumpeter, 1975). That view has been enormously influ-
ential among the Neo-Schumpeterian School of innovation theorists
and, particularly, innovation systems analysts such as Freeman (1987),
Lundvall (1992) and Nelson (1993). Inspired by Schumpeter’s insights
into the heroic efforts of the innovator, whether as the lone entrepreneur
or, later, the equally heroic R&D team of the large corporation as the
key driver of competition, they developed a heterodox perspective that
attracted a large academic and policy following.
This was not least because of the School’s rejection of the desiccated
theorems of neoclassical economics that treated Schumpeter’s wellspring
of capitalism as a commodity to be acquired ‘exogenously’ off the shelf.
This is no longer believed, even by neoclassicals. Although the national
innovation systems authors mentioned in this section had no spatial brief –
other than for a poorly theorized ‘nation state’, which in the cases of Taiwan
and South Korea could look odd – their adherence to Schumpeterian
insights implied, by his concept of ‘swarming’, clusters. These are logical
consequences where imitators pile in to mimic the innovator and seek
early superprofits before over-competition kills the goose that lays the
golden eggs. Hence ‘creative destruction’, as some older technology –
sailing ships – gives way to newer – steamships. Many shipyards arise only
to be competed into a few or none, as competitive advantage moves their
epicentre to cheaper regions around the world. It can be argued that the
swift destruction of value observable in such processes, and more recently
in the bursting of the dot.com and financial bubble (see below), with their
negative upstream implications for world trade, constitutes a problem of
‘over-innovation’ rather as, in a completely opposite way, German car
and machinery makers used to be accused of ‘over-engineering’. Mobile
telephony and computing may equally be accused of having too many
‘bells and whistles’ that most people never use.
Biotechnology is not like this. It is in many ways either non-Schumpeterian
or post-Schumpeterian, although, as we have seen, it certainly clusters,
even concentrates in ‘megacentres’ of which there are very few in the
world. But it does this not to outcompete or otherwise neutralize competi-
tor firms, but rather to maximize access to knowledge at an early stage in
the knowledge exploration phase that is difficult to exploit in the absence
of multidimensional partnerships of many kinds. Feldman and Francis’s
(2003) paper presents a remarkable, even original, picture of the manner
in which the driver of cluster-building in Washington and Maryland’s
Capitol region was the public sector. Importantly, the National Institutes
of Health conduct basic research as well as funding others to do it in
university and other research laboratories. Politics means that sometimes
bureaucracies must downsize, something that was frequently accompa-
nied by scientists transferring knowledge of many kinds into the market,
not least in DBFs that they subsequently founded. Interactions based on
social capital ties with former colleagues often ensued in the marketplace.
Before founding Human Genome decoding firm Celera, Craig Venter was
one of the finest exploiters of such networks, not least with the founding
CEO of Human Genome Sciences William Haseltine, who had been a colleague of Walter Gilbert at Harvard and of James Watson, co-discoverer of the structure of DNA, and later of Nobel Prize-winner David Baltimore (Wickelgren,
2002). Thus, as Feldman and Francis (2003) show, inter-organizational
capabilities networks from government to firms as distinct from more
normally understood laboratory–firm or firm–firm links may also be
pronounced in biotechnology.
This is seldom observed in the classical Schumpeterian tradition, which
rests to a large degree on engineering metaphors, where entrepreneurship
is privileged. Moreover, few Neo-Schumpeterians devote attention to
the propulsive effect of public decisions (for an exception, see Gregersen,
1992), even though regional scientists had long pointed out the central-
ity of military expenditure to the origins of most ICT clusters (Hall and
Markusen, 1985). In Niosi and Bas (2003) it is shown how relations are
symbiotic rather than creatively destructive among firms in the Montreal
and Toronto clusters. Arguing for the ‘technology’ definition of biotech-
nology, they say it is not a generic but a diverse set of activities, each
with specificity. Of interest is that in biopharmaceuticals they interact in
networks to produce products for different markets. Hence there is cross-
sectoral complexity in markets as well as research, although healthcare is
the main market. Accordingly, they too point to the ‘regional innovation
system’ nature of biotechnology, spreading more widely and integrating
more deeply into the science base than is normally captured by the local-
ized, near-market idea of ‘cluster’. Thus contract research organizations
(CROs) and biologics suppliers may exist in the orbit, not the epicentre, of the cluster.
This collaborative aspect of biotechnology is further underlined by
Casper and Karamanos (2003). For example, they show that, regarding
joint publication, while there is substantial variation in the propensity to
publish with laboratory, other academic or firm partners by Cambridge
(UK) biotechnology firms, only 36 per cent of firms are ‘sole authors’, and
the majority of partners are the founders or members of their laboratories.
Academic collaborators are shared equally between Cambridge and the
rest of the UK, with international partners a sizeable minority. Hence
the sector is again rather post-Schumpeterian in its interactive knowledge
realization, at least in respect of the all-important publication of results
that firms will probably seek to patent. Moreover, they will, in many cases,
either have or anticipate milestone payments from ‘pharma’ companies
with whom they expect to have licensing agreements.
Finally, biotechnology is interesting for the manner in which epistemo-
logical, methodological and professional shifts in the development of the
field created openings for DBFs when it might have been thought that
already large pharmaceuticals firms would naturally dominate the new
field. This is an important sub-text in Kaiser’s (2003) paper on the slow development of biotechnology in Germany, and specifically its main
centre at Munich. Pharmaceuticals arose from chemistry, not biology,
and although, as others show, modern biotechnology relies on molecular
expertise that chemists pioneered, most of its discoveries derive ultimately
from biology. Hence the ‘molecular biology revolution’, as Henderson
et al. (1999) call it, somehow superseded the epistemological primacy of
chemistry.
Methodologically, chemists retain a hold over such activities as screen-
ing for particular molecules that act as inhibitors for those that are
disease-causing. But that process has become infinitely
swifter and more automated with the advent of supercomputer-strength
high-throughput screening. Such equipment was also developed by spe-
cialist ICT companies applying expertise to bioinformatics and the like.
A good example of the advantage of the application of such examination
knowledge is the race to decode the human genome. Here, Celera pitted
itself against, and for a long time led, an enormous transatlantic university
research centre-based consortium. The reason it could do this was Craig
Venter’s innovative approach, sequencing segments ‘shotgun’ fashion
rather than whole gene sequences, and Californian firm Perkin-Elmer’s
decision to produce equipment to enable this to be done rapidly. It was
only when the consortium’s budget was massively increased so that they,
too, could afford such equipment that they caught up and the race ended
neck and neck. This, of course, meant that, professionally speaking,
ICT equipment producers, specialist software engineers and information
scientists have become integral to biotechnology.2

CLUSTERING: MARKET OR MILIEU?

Strong cases can straightforwardly be made for the characteristic spatial proximity emphasis there is in biotechnology in North America, Europe
and, in the guise of Israel and Singapore, Asia. But global linkages are
also pronounced (see below). In Kaufmann et al. (2003) it is shown that
the academic background of most firm founders means that they have
weak business expertise and accordingly cluster near their laboratory
home base in the early stages. The importance of public
subsidy in the form of regional support policies is also stressed. However,
it is also shown that with maturity, complex business and technological
needs cause expansion of networks overseas. In this, the need to access
business partners for examination knowledge expertise, particularly in
testing and trialling for clinical research, is paramount as it is unavailable
in such a small-country setting. As with early financing requirements that also contribute to a clustering disposition, there is a strong business
market impulse for clustering in Israel, one that also drives expansion of
networks. In other words, partnering is strategic but primarily for market
reasons. Thus the milieu is in quite pronounced ways one in which busi-
ness is done with trusted others, rather than, as may sometimes be pre-
sumed, through the exchange of much untraded resource. This is echoed in
Casper and Karamanos’s (2003) study of Cambridge, UK, where they con-
clude that a major factor in clustering is the presence of an ‘ideas market’
and access to specialized labour markets. This attracts spin-ins as well, a
feature shown to be pronounced in Cambridge.3
Nevertheless, in Germany, after a number of false starts, the BioRegio
programme earned praise for rapid stimulation of biotechnology startup
businesses in 1997–2002. Of importance in this success was the pre-
existence of institutional partnership and a special institution, such as
Munich’s BioM, capable of promoting startup activity. But, as Kaiser
(2003) notes, this dynamism was equally predicated on the emergence of
a venture capital market, itself assisted by the co-funding support avail-
able from BioRegio and other regional and local funds. Here, then, public
investment was required to stimulate market activity to promote clusters
in areas that possessed both organizational and institutional capabilities
for clustering to occur.4 We may say that the institutional dimension con-
stitutes aspects of a milieu effect, but a rather different one from the type
hypothesized by the GREMI group, for whom the public sector seldom
plays a leading role (Maillat, 1998). Of course a different but crucial role
was, as discussed, shown to have been played out in the Capitol region in
the USA in the account of biotechnology growth provided by Feldman
and Francis (2003). In Canada, one could say that the clustering devel-
oped in classic style for the sector, namely large-scale publicly funded
research of high quality created opportunities for entrepreneurship by
academics and venture capitalists. Strong links with pharmaceuticals firms
strengthened the funding base of DBFs, some of which have been acquired
by larger companies, while newer spin-outs continue to emerge.

BIOTECHNOLOGY’S GLOBALIZATION: FROM BELOW?

Today, biotechnology is seen as a sine qua non of regional economic development. Paradoxically, as ministers and many scientists bemoan the
aspirations of regional agencies everywhere to establish a biotechnology
presence, their words are drowned in the rush to implement policies that
seek to ensure future regional competitiveness from biotechnology applications in all their myriad forms. This is, of course, reminiscent of the rush
to clone Silicon Valley by policy-makers worldwide during the 1980s and
1990s. Most of these efforts were unsuccessful on a narrow evaluation, in
the sense that Silicon Valley remains unique in both the scale of its local
and global impact, both during the upswing and more recently the down-
swing in the ICT industry. It is worth recalling that job losses in Silicon
Valley from 2001 to 2003 were 400 000 and San Jose lost 20 per cent of its
jobs. The latter statistic, according to a UCLA Anderson Business School
report, constitutes ‘the single largest loss of jobs by any major metropoli-
tan area since at least the second world war’ (Bazdarich and Hurd, 2003;
Parkes, 2004).
Despite the roller-coaster ride that high-technology industry clearly
provides, these are, on balance, growth sectors over the business cycle,
and on a broad evaluation regions that sought to anchor high-technology
growth during the past two decades have often proved remarkably suc-
cessful. The Kista science park, north of Stockholm, founded in 1972,
still has some 600 companies with 27 000 employees (down from 29 000
in 2000), more than 3000 students in IT, and numerous university and
research institutions active in wireless communication and the mobile
Internet. Other such complexes in Helsinki, Ottawa and Cambridge could
be mentioned, let alone Austin and Dallas, Texas or Dulles, Virginia and
Bangalore, Hyderabad, Shenzhen and Shanghai. None of these is an exact
clone of the original, but they are important contributors to the regional
economies they inhabit.
Biotechnology has begun to evolve its ecology likewise, albeit at a
lesser scale, and with different specificities. Thus, first, agro-food bio-
technology locales are different from those specializing in biopharma-
ceuticals – although occasionally they overlap (e.g. San Diego). An
interesting case of the latter phenomenon is presented in the Medicon
Valley (Sweden–Denmark) study of Coenen et al. (2004). The proximate
cause of locational variation between biopharmaceutical and agro-food
clusters in general is the fundamentally public funding of research in the
former and private funding in the latter. Moreover, firm structure, organi-
zation and capabilities bring a heavy influence to bear. Thus agro-food
is dominated by a few multinational corporations like Monsanto, Bayer,
Dow, Unilever, Nestlé, Syngenta and Aventis. Joining them are former
chemicals companies transforming themselves into ‘biologics’ (fermen-
tation and contract DNA manufacturing specialists) firms, like Avecia,
DSM, Lonza and rising state-backed firms like BioCon in India and China
Biotech. Significant agglomerations of such companies occur in out of
the way places like Guelph and Saskatoon in Canada, Wageningen, the
Netherlands and Basel, Switzerland. More substantial but somehow still unusual places from a high-tech viewpoint such as St Louis, Missouri also
appear on the radar in the form of ‘BioBelt’, the world’s biggest agro-food
agglomeration, as Ryan and Phillips (2004) have shown.
Second, unlike the biopharmaceuticals industry, these corporates
conduct much leading-edge research in house, although they are also
found in juxtaposition to national research institutes, themselves often
located historically in agricultural zones. This explains the concentration
in Saskatoon, where oilseed rape innovation capabilities have been con-
centrated since the 1940s, and St Louis with comparable competences in
the corn and soybean sectors. Syngenta – the Basel-based agro-food firm
formed from the merger of the agribusiness divisions of Novartis and
AstraZeneca (the latter descended from ICI) – estab-
lished a major cotton research centre in Raleigh–Durham, a traditional
centre of expertise in cotton research, now leading in cotton genomics. The
same firm was first to decode the rice genome, albeit at its Torrey Mesa
Research Institute in San Diego. As many published papers have shown,
biopharmaceuticals is unlike this. University research departments and
centres of research expertise lead the field in bioscientific research and the
exploitation for commercial purposes of biotechnology. Thus corporate
‘pharma’ is a supplicant to these leading-edge research centres. Recent
papers by Lawton Smith (2004) and Bagchi-Sen and Scully (2004) under-
line the centrality of university and medical school research as a magnet
for specialist biotechnology firms of small scale that out-license their find-
ings into a supply chain that ends with pharmaceutical companies market-
ing and distributing the final product on a global scale. In what follows,
three themes of significance to the ‘globalization of bioregions’ are high-
lighted. These are first, the practices of globalization in biosciences, second
the vehicles for its implantation, and third, the evolving, networked global
hierarchy in biotechnology.5

THE PRACTICES OF GLOBALIZATION IN BIOSCIENCES

We have already seen that biotechnology underpins a range of industries, that its broad categorization into biopharmaceuticals, agro-food and
environment/energy biotechnology produces distinctive economic geog-
raphies, at least in the first two. Briefly, we can say that there is a third
distinctive pattern in regard to the main environmental/energy segment in
that it is normally found in older industrial regions such as the Ruhr region
in Germany or even established chemicals complexes like Grangemouth in
Scotland. The latter was a product of ‘growth pole’ planning involving
the major chemicals companies in the UK in the 1960s. It currently hosts the aforementioned Avecia and Syngenta (both divested former divi-
sions of ICI), British Petroleum (BP), Polyami (artificial fibres), GE
Plastics, Rohm & Haas (paints and personal care products). Avecia sells
both energy and environmental effluent services from its Grangemouth
site, both to co-inhabitants and the wider market. This is in addition to
its core products of fine chemicals, pharmaceutical biotechnology and
biochemical/biologics manufacturing. Returning to the Ruhr region, the
origin of its bioenvironmental industry, consisting of some 600 firms,
lies in the coal and steel industry, whose companies diversified from their
traditional home base during the 1980s, utilizing ventilation and filtration
technologies integral to their former capabilities as applications to meet
new demand from regulators for higher-standard emissions and environ-
mental clean-up controls.
Hence one segment of biotechnology is based on ‘open innovation’
(Chesbrough, 2003) among research institutes, pharmaceuticals compa-
nies and small, dedicated biotechnology firms (DBFs). The second one in
agro-food is organized around direct basic knowledge transfer between
large public research institutes and large agro-food biotechnology firms
with little or no DBF involvement (although Wageningen has around ten
agro-food DBFs and a consortium, the Centre for Biosystems Genomics,
a network of four universities, two research institutes and 15 firms in agro-
food biotechnology across the Netherlands in its ‘Food Valley’ science
park). The third model is inter-industry trade by direct spin-out, divisional
divestiture, or in-house evolution of innovation capabilities from coal,
steel and chemicals corporates entering and selling products and services
in new environmental or energy markets. The last-named model is more
engineering- (synthetic) than science-driven (analytical) and thus unaf-
fected in locational terms by the presence or absence of research institutes,
but rather remains embedded in its original locational trajectory (Coenen
et al., 2004).

THE VEHICLES FOR IMPLANTATION OF GLOBALIZED BIOSCIENCE

In the biopharmaceuticals case, globalization is driven through knowledge networks, as inspection of Figure 5.1 shows. Leading-edge research centres
and their star scientists interact through co-publishing. Such knowledge
and the research on which it rests is of great value to the pharmaceuticals
industry. Hence ‘big pharma’ not only funds and licenses such research and
its results; it formerly also invested substantial equity stakes in spin-out

[Figure 5.1 near here: a network diagram of co-publication links among leading bioscience research institutes in Stockholm, Sydney, Paris, Uppsala, Lund, Copenhagen, Grenoble, San Diego, San Francisco, Toronto, Tokyo, Jerusalem, Boston, Montreal, New York, Munich, Cambridge (MA), Singapore, Zurich, Cambridge (UK), Geneva, London and Oxford; legend bands: 3, 4–6, 7–8, >10.]

Notes: Represented are co-publications between ‘star’ bioscientists in leading research institutes in eight of the top ten Science Citation Index journals for recent periods: Cell (2002–04); Science (1998–2004); Proceedings of the National Academy of Sciences (2002–04); Genes and Development (2000–04); Nature (1998–2004); Nature Biotechnology (2000–04); Nature Genetics (1998–2004); EMBOJ (European Molecular Biology Organization Journal) (2000–04). In all, 9336 articles were checked. Abbreviations in the figure refer to research institutes.

Figure 5.1 Publishing collaborations

firms established by such stars, or acquired them outright, although nowadays it tends to make variable-term exchange-based or traded partnerships
with them to access valued, preferably patented knowledge to be trans-
formed into biotechnologically based drug treatments (‘open innovation’).
Furthermore, the large corporates also establish research institutes in such
science-driven clusters.6
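
A co-publication map such as Figure 5.1 rests on a simple tally: reduce each article to the set of institutes named in its author affiliations, count one link per article for every unordered pair of institutes, and bin the totals into the legend bands. The fragment below is a minimal sketch of that counting logic only, with toy records standing in for the 9336 articles actually checked; it is not the procedure used to produce the figure.

# Minimal sketch of the co-publication tally behind a map like Figure 5.1.
# Assumptions (illustrative only, not the chapter's actual dataset or code):
# each article record is reduced to the set of research institutes named in
# its author affiliations, and one link is counted per article for every
# unordered pair of institutes appearing together.
from collections import Counter
from itertools import combinations

articles = [
    {"journal": "Cell", "institutes": {"HMS", "MIT"}},
    {"journal": "Science", "institutes": {"HMS", "UCSF", "Scripps"}},
    {"journal": "Nature", "institutes": {"HMS", "MIT"}},
]

link_counts = Counter()
for article in articles:
    for pair in combinations(sorted(article["institutes"]), 2):
        link_counts[pair] += 1  # one co-publication link per article per pair

def legend_band(n):
    # Bands as shown in the Figure 5.1 legend: 3, 4-6, 7-8, >10.
    if n > 10:
        return ">10"
    if 7 <= n <= 8:
        return "7-8"
    if 4 <= n <= 6:
        return "4-6"
    if n >= 3:
        return "3"
    return "below threshold"

for (a, b), n in link_counts.most_common():
    print(f"{a} - {b}: {n} joint articles ({legend_band(n)})")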
We noted such cluster penetration for ‘open innovation’ in regard to
Syngenta for genomic decoding in San Diego. But as Zeller (2002) has
shown, San Diego and now Cambridge, Massachusetts genomics research capabilities are deeply penetrated by Novartis, in particular. This began
with the signing of a deal for first refusal to access 50 per cent of the
research output in immunology, neurological science, and cardiovas-
cular diseases of The Scripps Research Institute in San Diego in 1992.
Thereafter, in 2002 Novartis announced a new $250 million genomics
research institute in San Diego, named the Genomics Institute of the
Novartis Research Foundation (GNF). The 200 staff complemented in-
house research teams at institutes in Basel and New Jersey (more recently
also Cambridge, Massachusetts; see below). Several GNF scientists also
have faculty appointments at Scripps, and 17 per cent of postdoctoral
researchers work with GNF scientists. GNF also gave rise to the Joint
Center for Structural Genomics (JCSG) and the Institute for Childhood
and Neglected Diseases (ICND) funded as consortia in San Diego by the
US National Institutes of Health.
In 2004 a new $350 million investment opened its doors, sustained by a
further long-term commitment of $4 billion investment beyond that which
was announced for Cambridge, Massachusetts by Novartis through its
Novartis Institutes for Biomedical Research (NIBR). NIBR constitutes
the primary pharmaceutical research arm in the company’s strategy of
post-genomic drug discovery, concentrating on the key therapeutic areas
of cardiovascular disease, diabetes, infectious diseases, functional genom-
ics and oncology. From such globalized research and commercialization
nodes or clusters, linked through networks of star research programmes
that are largely resourced by public research funds, DBFs form to exploit
the findings in early-stage commercialization activities. From these inter-
actions licensing deals on patented knowledge are conducted such that
DBFs receive milestone payments from big pharma, which then market
and distribute biotechnologically derived drugs globally through their
intra-corporate channels. This is a vivid example of the manner in which
globalization is managed through extensive and intensive linkages unify-
ing multinationals and clusters. In the papers by Lawton Smith (2004),
Bagchi-Sen and Scully (2004) and particularly Finegold et al. (2004), these
processes are anatomized in considerable detail.

THE NETWORKED HIERARCHY IN BIOTECHNOLOGY

Three key things have been shown with implications for understanding of
knowledge management, knowledge spillovers and the roles of collabora-
tion and competition in bioregions. The first is that two kinds of proximity
are important to the functioning of knowledge complexes like biosciences in Boston and the northern and southern Californian clusters. These are
geographical but also functional proximity (Rallet and Torre, 1998). The
first involves, in particular, medical research infrastructure for exploration
knowledge as well as venture capital for exploitation knowledge, that is for
research on the one hand, and commercialization on the other. The second
point is that where exploration knowledge infrastructure is strong, that
nexus leads the knowledge management process, pulling more distant ‘big
pharma’ governance elements behind it. Where, by contrast, exploitation
knowledge institutions are stronger than exploration, they may, either as
venture capital or ‘big pharma’, play a more prominent role. But in either
case the key animator is the R&D and exploitation-intensive DBF. DBFs
are key ‘makers’ as well as ‘takers’ of local and global spillovers; research
institutions are more ‘makers’ than ‘takers’ locally and globally; while ‘big
pharma’ is nowadays principally a ‘taker’ of localized spillovers from dif-
ferent innovative DBF clusters. It is then a global marketer of these and
proprietorial (licensed or acquired) knowledge, generated with a large
element of public financing but appropriated privately.
For obvious reasons to do with scale, especially of varieties of financing of
DBFs from big pharma on the one hand, and venture capitalists, on the other,
we conclude that Boston, San Francisco and San Diego are the top US biore-
gions that also have the greater cluster characterization of prominent spin-
out from key knowledge centres, an institutional support set-up like Boston’s
Massachusetts Biotechnology Council, San Diego’s Connect network and
San Francisco’s California Healthcare Institute, and major investment from
both main pillars of the private investment sector. We have attempted to
access comparable data from many and diverse statistical sources that justify
and represent the successful or potentially successful clusters from outside
the USA, including the four lesser or unclustered of the seven US bioregions.
Global cities like New York, London and even Tokyo have relatively large
numbers of DBFs, but these are dotted around in isolation rather than
clustered close to key universities as in the listed bioregions, and they have
no established bioregional promotional bodies (such as BioCom in San
Diego or the Massachusetts Biotechnology Council in Cambridge).
We can see from Table 5.1 the predominance of Boston and San
Francisco on all indicators, and the differences between the former (also
New York) and the Californian centres are strikingly revealed by these
data. Boston’s life scientists generate on the order of $285 000 each per
annum in National Institutes of Health research funding (New York’s
generate some $288 000). San Diego’s considerably smaller number of
life scientists generates $480 000 per capita, substantially more than in
Northern California, where it is some $226 000. North Carolina, with
Table 5.1 Core biotechnology firms, 2000: comparative global performance indicators

Location  DBFs  Life scientists  VC ($m)  Big pharma funding ($m)
Boston 141 4980 601.5 800 (1996–2001)
San Francisco 152 3090 1063.5 400 (1996–2001)
New York 127 4790 1730 151.6 (2000)
Munich 120 8000 400.0 54 (2001)
Lund–Medicon 104 5950 80.0 300 (2002)
San Diego 94 1430 432.8 320 (1996–2001)
Stockholm–Upp. 87 2998 90.0 250 (2002)
Washington, DC 83 6670 49.5 360 (2000)
Toronto 73 1149 120.0 n.a.
Montreal 72 822 60.0 n.a.
Ral-Dur NC 72 910 192 190 (2000)
Zurich 70 1236 57.0 n.a.
Cambridge 54 2650 250.0 105 (2000)
Oxford 46 3250 120.0 70 (2000)
Singapore 38 1063 200.0 88 (2001)
Jerusalem 38 1015 300.0 n.a.
Seattle 30 1810 49.5 580 (2000)

Sources: NIH; NRC; BioM, Munich; VINNOVA, Sweden; Dorey (2003); Kettler and
Casper (2000); ERBI, UK; Lawton Smith (2004); Kaufmann et al. (2003).

the second smallest number of life scientists in Table 5.1, scores highest
at $510 000 per capita, although Seattle, at $276 000, is comparable to
Boston, New York and San Francisco. How should we interpret these
statistics? One way is to note the very large amounts of funding from ‘big
pharma’ going especially to Boston and, to a lesser extent, to New York
and both Californian centres. Canada’s bioregional clusters challenge
many elsewhere in the world regarding cluster development.
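
The per-capita figures quoted here are calculated over head-counts of the kind reported in Table 5.1 rather than shown directly in it; as a back-of-envelope illustration, the implied annual NIH inflow for a bioregion is simply the award per life scientist multiplied by the number of life scientists:

\[
\text{NIH funding} \approx \text{award per life scientist} \times \text{number of life scientists}
\]
\[
\text{Boston: } \$285\,000 \times 4980 \approx \$1.4\ \text{billion per annum}; \qquad
\text{San Diego: } \$480\,000 \times 1430 \approx \$0.7\ \text{billion per annum.}
\]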
Some of Europe’s are rather large on the input side (for example life
scientists) but less so on the output side (for example funding, firms). The
process of bioregional cluster evolution has occurred mainly through aca-
demic entrepreneurship supported by well-funded research infrastructure
and local venture capital capabilities. In Israel, there is a highly promis-
ing group of bioregions including Rehovot and Tel Aviv as well as the
main concentration in Jerusalem, where patents are highest although firm
numbers are of lesser significance.
The most striking feature of the global network of bioscientific knowl-
edge transfer through exploration collaboration as measured by research

[Figure 5.2 near here: ranked publication shares (log scale) by research institute in each of the seven main bioscientific fields; the institutes appearing include Harvard Medical School (HMS), Stanford University, UCSF, MIT, Cambridge University, The Scripps Research Institute, Rockefeller University, Salk Institute, UCSD, UC Berkeley, Karolinska Institute (KI), NYU, University of Toronto, Zurich University, Hebrew University and UCL.]

Notes: Results (log scale) are based on articles from the top two or three journals (with the highest impact factor provided by Web of Knowledge) of each sub-field, which are as follows: immunology: Annual Review of Immunology; Nature Immunology; Immunity; molecular biology & genetics: Cell Molecular; Journal of Cell Biology; microbiology: Microbiology and Molecular Biology Review; Annual Review of Microbiology; Clinical Microbiology Review; neuroscience: Annual Reviews Neuroscience; Trends Neuroscience; biotechnology & applied microbiology: Drug Discovery; Nature Biotechnology; cell & developmental biology: Annual Review of Cell and Developmental Biology; Advances in Anatomy, Embryology and Cell Biology; Anatomy and Embryology; biophysics & biochemistry: Annual Review of Biophysics; Current Opinions in Biophysics.

Figure 5.2 Publication shares of Bioscience Research Institutes in the seven main fields of bioscientific research

publication collaborations is, as we have seen, the dominance of US megacentres in such collaborations. However, within that hierarchy – which, taken in the round to include examination and exploitation knowledge processes, puts Cambridge–Boston at the peak of the global bioscientific network – the dominance of specific institutions in knowledge production is remarkable. This is shown in Figure 5.2, which ranks specific
research institutions from among those benchmarked in Table 5.1 accord-
ing to their share of publications in the seven main bioscientific fields. The
predominance of a single, large and complex institution, Harvard Medical School (HMS), is noteworthy to say the least.
Regarding immunology, HMS published 10 per cent of the articles in
the three most cited journals, with Stanford at 4.5 per cent and UC San
Francisco (UCSF) at 3.5 per cent. Karolinska Institute led Europe (tied
with Salk Institute) at 1.2 per cent, followed by MIT at 0.8 per cent and
Cambridge University at 0.5 per cent. HMS topped the share of molecular
biology articles in the top three journals in 1998–2004 with 6 per cent of
the relevant share, with MIT well behind at 1.7 per cent, UCSF at 1.4 per
cent next, then Cambridge and Stanford Universities tied at 0.9 per cent.
Of those scoring below 1 per cent, Salk Institute, Rockefeller University
and UC San Diego (UCSD) follow. Next, in microbiology, HMS is again
first at 6 per cent, followed by The Scripps Research Institute at 3.4 per
cent, Stanford University at 3 per cent and Karolinska Institute at 2.4 per
cent. For neuroscience, UCSF is first at 4.5 per cent but HMS is second at
4.1 per cent, Stanford is third at 3.8 per cent, Rockefeller follows at 3.3 per
cent while Europe’s highest entrant is Cambridge University at 3 per cent.
Karolinska Institute is tenth at 1.3 per cent. In biotechnology HMS is fifth
(4.1 per cent) after University of Toronto (5.5 per cent) and The Scripps
Research Institute (5.4 per cent) with Stanford and Cambridge universi-
ties equal at 4.5 per cent. These are followed by Zurich University (3.7 per
cent), UCSF and Rockefeller tied at 2.8 per cent and UCSD at 1 per cent.
In cell and development biology, HMS is the clear global leader with
over 7 per cent leading journal share by citation and MIT trailing second
at under 3 per cent. Finally, a relative weakness is biophysics and bio-
chemistry, where HMS scores only 2.5 per cent and trails Cambridge
University (UK), Scripps, Stanford, MIT, NYU and UC Berkeley. Thus
in these seven key bioscientific fields alone, HMS is first four times, second
once, fifth once and seventh once. Clearly, with or without control of
house journals (which, it can be argued, significantly favours ‘home’ con-
tributions), HMS is the leading quality publishing centre for biosciences
in the world. Added to this, for Cambridge–Boston, is the fact that MIT
is second for molecular biology and cell biology, fourth in biophysics
and biochemistry, and in the top ten for most of the rest. Thus we have
seen inside the black box of global research excellence, gaining a detailed
understanding of the broader network hierarchy of global knowledge
transfer through co-publishing, which reveals the nature and extent of the
global bioregions network. Recall from Figure 5.1 that it is clear that
a global hierarchy exists in which there are five ‘megacentres’ (Cooke,
2003; Niosi and Bas, 2003). Measured by co-publishing activity, these are
Boston–Cambridge, New York, San Francisco and San Diego. At the next
level are London, Cambridge (UK) and Stockholm. Surrounding these,
at lower levels of co-publication activity, are global city locations like Paris, Tokyo and Toronto, but interspersed are other university towns like
Lund, Uppsala and Oxford as well as lesser cities like Geneva, Zurich and
Grenoble. Biotechnology thus does not fit the ‘global cities’ thesis (Sassen,
1994) particularly well. Other important but scarcely global cities like
Jerusalem, Munich, Montreal and Singapore also enter at this third level
of co-publishing intensity.
Notice, though, that in the seven core fields of bioscientific publishing
relatively few of the main publishing centres extend deeply into these lesser
network nodes. They divide most asymmetrically between 11 globally key
publishing institutes in North America, all bar University of Toronto
in the USA, four in Europe and one, Hebrew University, Jerusalem in
Asia. All the US top institutes are in the five co-publishing megacentres
depicted in Figure 5.1. Despite dubious claims that San Diego7 is the top
global cluster, it is clear that with respect to generation of new exploration
knowledge it is Boston (including Cambridge MA) that in fact scores as
the strongest biotechnology cluster.
Moving to the opposite end of the global network hierarchy, as Finegold
et al. (2004) make clear, Singapore’s case is one that runs strongly against
the grain of the ‘clusters cannot be built’ thesis since it is almost entirely
the product of state intervention.
Four new research institutes in bioinformatics, genomics, bioprocessing
and nanobiotechnology now exist at a cost of $150 million up to 2006.
Public venture capital of $200 million has been committed to three bio-
science investment funds to fund startups and attract FDI. A further $100
million is earmarked for attracting up to five globally leading corporate
research centres. The Biopolis is Singapore’s intended world-class R&D
hub for the georegion. It is dedicated to biomedical R&D activities and
designed to foster a collaborative culture among the institutions present
and with the nearby National University of Singapore, the National
University Hospital and Singapore’s science parks. Internationally cel-
ebrated scientists have also been attracted, such as Nobel laureate Sidney
Brenner, Alan Colman, leading transgenic animal cloning scientist from
Scotland’s Roslin Institute, Edison Liu, formerly of the US National
Cancer Institute, and leading Japanese cancer researcher Yoshiaki Ito.
Next, Tartu in Estonia is an ‘aspirational’ cluster in a rather
less-developed form than that of Singapore. Noticeably, it does not yet
appear, for example, in the co-publishing graphic presented in Figure
5.1. But as Raagmaa and Tamm (2004) show, it has certain historic biosci-
entific strengths to build upon as well as a burgeoning biomedical devices
industry and emergent bioscientific and biotechnological research, if not
yet DBFs that have exploited it. Nevertheless, as everywhere, government
is keen to build up a biotechnology industry; Tartu is the leading university centre in Estonia for this kind of activity and new markets are
often identified in relation to combinations of local expertise and even
local environmental assets. The process leading to the emergence of a
leading East European industry cluster, and the speed of its appearance,
owing something to hype no doubt, reveal an interesting case of possible
failure but equally possible success at a point when its trajectory is yet
to be decided.
Finally, in exploring both the cluster formation and globalization proc-
esses they imply, this analysis throws some light on the complexities of
interactions between economic geography and science-based industry.
This is so particularly regarding aspects of the networked interactions
involved in knitting together the public and private spheres, basic research
and commercial exploitation, and tacit and codified knowledge. There are
good grounds now for looking upon biotechnology as a pioneer both of
‘open innovation’ and ‘Globalization 2’, in which ‘knowledgeable clusters’
rather than multinationals drive development more than hitherto. As well
as being of intrinsic interest, these papers may offer hypotheses for other
researchers investigating these challenging processes in related or unrelated
fields, and the problems and opportunities for regional economic
development policy that they entail.

CONCLUSIONS

Thus biotechnology contains much of interest for academic and policy analysts interested in the distinctive ways in which clustering specific to
biotechnology has occurred in different contexts. It also poses intriguing
puzzles for academic specialists interested in biotechnology per se or more
broadly in the clustering phenomenon. We see that local networks are
common but that they may not be as large or as important as distant, even
global, ones. The industry within which biotechnology of the type mainly
discussed here is embedded, pharmaceuticals, does not appear to follow a
Schumpeterian innovation or entrepreneurship pathway, being character-
ized by a double dependence of large corporations on SMEs (for R&D)
and the inverse for financing, marketing and distribution. Large corpo-
rates merge but do not become out-competed and thus disappear entirely.
Hence Novartis, formed from the merger of Ciba and Sandoz, retains
the former as its ophthalmics brand, with Sandoz its brand for generic
drugs. Moreover, both are heavily dependent on public funding for basic
research – the exploration knowledge that is the resource to which exploi-
tation knowledge is applied. The latter includes knowledge of the kind
possessed by venture capitalists to facilitate commercialization of the few drug candidates that reach the market. Biotechnology thus seems to be
more a Penrosian sector in that firm, including small-firm, capabilities
determine industry organization (Penrose, 1959). Finally, to some extent
echoing points made by Zucker et al. (1998), biotechnology is not a prime
case of milieu effects dominating market effects. As they argue, scientific
expertise has meant that since the beginning academics have wielded
greater technological influence over firms than was the case with other
industries. Firms have operated on academics’ terms and often, as DBFs, under
their ownership. As such, untraded interdependencies and even dense local
network relationships, at the individual firm level, are less pronounced
than might be expected given the obvious cluster concentrations in which
biotechnology firms exist.
Noticeable too are the ways in which public funding, policy and expertise
can often be much more central to biotechnology clustering than has been
recorded for other sectors. But of greatest theoretical interest is the way
‘open innovation’ hints at ‘Globalization 2’ where knowledgeable clusters
shift the economic geography of industry organization towards themselves.
Moreover, as the case of Cambridge–Boston and Harvard Medical School
showed, some such spaces have begun to exert spatial knowledge monop-
oly effects that are already bringing ‘increasing returns’ as multinationals
establish R&D facilities to capture ‘knowledge spillover’ advantages from
cluster conventions of both ‘open science’ and ‘open innovation’.

ACKNOWLEDGEMENTS

Thanks to Ann Yaolu for assistance with the scientometrics. Thanks
also to Lennart Stenberg, Vinnova, for discussing various concepts
and hypotheses. Finally, thanks to colleagues in the CIND, Circle and
ISRN networks who heard these observations in preliminary form and
encouraged their development into what I hope is a coherent analysis.

NOTES

1. The ‘exploration’ and ‘exploitation’ distinction was, of course, first made in organiza-
tion science by March (1991). It is nowadays necessary to introduce the intermediary
‘examination’ form of knowledge to capture the major stages of the move from research
to commercialization in biotechnology because of the long and intensive trialling process
in both agro-food and pharmaceutical biotechnology. However, a moment’s reflection
about other sectors suggests that this kind of knowledge and its organizational processes
have been rather overlooked as they apply outside biotechnology (see Cooke, 2005).
2. San Francisco’s megacentre capability in ICT gives its biotechnology competitive advan-
tage in bioinformatics, screening, gene sequencing and genetic engineering software and
technologies.
3. As shown below, a strong public role in cluster-building is also evident in Finegold et al.’s
(2004) analysis of biopharmaceuticals developments in Singapore.
4. But since the downturn, disappointment has grown at the only modest growth and the drying
pipeline associated with Germany's policy-induced startup biotechnology
businesses. Moreover, German high-tech entrepreneurship in general suffered a heavy
blow with the demise of the Neuer Markt, its now defunct take on NASDAQ.
5. Unusually, the role of ‘big pharma’ is rather under-emphasized in this analysis of its
relation to health biotechnology. This is not because large pharmaceutical firms are
unimportant in this context, for they clearly are not. However, for exploration and even,
increasingly, examination and some exploitation knowledge production they practise
‘open innovation’, as Chesbrough (2003) demonstrates for the case of Millennium
Pharmaceuticals, a leading bioinformatics supplier that redesigned itself as a biotechno-
logical drugs manufacturer through investment of ‘open innovation’ contract earnings
from the likes of Monsanto and Eli Lilly. These practices are now emulated by specialist
suppliers in industries like ICT, automotives and household care, according to the same
author. This chimes with a more general hypothesis we can call ‘Globalization 2’, in which
in a ‘knowledge economy’ the drivers of globalization become ‘knowledgeable clusters’
of various kinds. These exert an irresistible attraction for large corporates, who become
‘knowledge supplicants’ as their in-house R&D becomes ineffective and inefficient. They
pay for, but no longer generate, leading-edge analytical knowledge for innovation.
6. As Owen-Smith and Powell (2004) show, ‘open science’ conventions in such clusters as
Cambridge–Boston ‘irrigate’ the milieu with knowledge spillovers, giving to some clus-
ters an element of ‘increasing returns’ from ‘spatial knowledge monopoly’ to a significant
degree.
7. National Institutes of Health (NIH) funding for medical and bioscientific research in
Cambridge–Boston was in excess of $1.1 billion by 2000, $1.5 billion by 2002 and $2.1
billion in 2003. Cooke (2004) shows that it exceeded all of California by 2002, and by
2003 the gap had widened to $476 million ($2,021 million as against $1,545 million). Interestingly, this
is a recent turnaround since the 1999 total of $770 million was marginally less than the
amount of NIH funding passing through the northern California cluster in 1999, a statis-
tic that only increased to $893 million in 2000. Thus Greater Boston’s supremacy is recent
but definitive. San Diego’s NIH income includes that earned by Science Applications
International Corporation. This firm is based in San Diego but performs most of its NIH
research outside its home base as a research agent for US-wide clients. Thus it warrants
mention but is excluded from totals calculated by this author. However, it is included
in the Milken Institute report 'America's Biotech and Life Science Cluster' (June
2004), which ranks San Diego the top US cluster. This oversight seriously weakens the
report's claims for San Diego's top US cluster position. Further reasons for rejecting the
Milken Institute's ranking of San Diego first, beyond the inclusion of questionable research
funds, are that the Institute deploys a spurious methodology based on research dollars per
metropolitan inhabitant to promote San Diego’s ranking. Finally, the research was com-
missioned by local San Diego interests (Deloitte’s San Diego) and excludes ‘big pharma’
funding, on which San Diego performs less than half as well as Boston (Table 5.1).

REFERENCES

Bagchi-Sen, S. and J. Scully (2004), 'The Canadian environment for innovation
and business development in the biotechnology industry; a firm-level analysis',
European Planning Studies, 12, 961–84.
Bazdarich, M. and J. Hurd (2003), Anderson Forecast: Inland Empire & Bay Area,
Los Angeles, CA: Anderson Business School.
Carlsson, B. (ed.) (2001), New Technological Systems in the BioIndustries: An
International Study, London: Kluwer.
Casper, S. and A. Karamanos (2003), ‘Commercialising science in Europe: the
Cambridge biotechnology cluster’, European Planning Studies, 11, 805–22.
Chesbrough, H. (2003), Open Innovation, Boston, MA: Harvard Business School
Press.
Coenen, L., J. Moodysson and B. Asheim (2004), ‘Nodes, networks and prox-
imities: on the knowledge dynamics of the Medicon Valley biotech cluster’,
European Planning Studies, 12, 1003–18.
Cooke, P. (2002), ‘Rational drug design and the rise of bioscience megacentres’,
presented at the Fourth Triple Helix Conference, ‘Breaking Boundaries –
Building Bridges’, Copenhagen and Lund, 6–9 November.
Cooke, P. (2003), ‘The evolution of biotechnology in three continents:
Schumpeterian or Penrosian?’, European Planning Studies, 11, 757–64.
Cooke, P. (2004), ‘Globalization of bioregions: the rise of knowledge capability,
receptivity and diversity’, Regional Industrial Research Report 44, Cardiff:
Centre for Advanced Studies.
Cooke, P. (2005), ‘Rational drug design, the knowledge value chain and bioscience
megacentres’, Cambridge Journal of Economics, 29 (3), 325–42.
De la Mothe, J. and J. Niosi (eds) (2000), The Economics & Spatial Dynamics of
Biotechnology, London: Kluwer.
Dorey, E. (2003), ‘Emerging market Medicon Valley: a hotspot for biotech affairs’,
BioResource, March, www.investintech.com, accessed 1 March 2004.
DTI (1999), Biotechnology Clusters, London: Department of Trade & Industry.
Feldman, M. and J. Francis (2003), ‘Fortune favours the prepared region: the case
of entrepreneurship and the Capitol Region biotechnology cluster’, European
Planning Studies, 11, 757–64.
Finegold, D., P. Wong and T. Cheah (2004), ‘Adapting a foreign direct investment
strategy to the knowledge economy: the case of Singapore’s emerging biotech-
nology cluster’, European Planning Studies, 12, 921–42.
Freeman, C. (1987), Technology Policy & Economic Performance: Lessons from
Japan, London: Pinter.
Fuchs, G. (ed.) (2003), Biotechnology in Comparative Perspective, London:
Routledge.
Gregersen, B. (1992), ‘The public sector as a pacer in National Systems of
Innovation’, in B.A. Lundvall (ed.), National Systems of Innovation, London:
Pinter, pp. 129–45.
Hall, P. and A. Markusen (eds) (1985), Silicon Landscapes, London: Allen &
Unwin.
Henderson, R., L. Orsenigo and G. Pisano (1999), ‘The pharmaceutical industry
and the revolution in molecular biology: interactions among scientific, institu-
tional and organisational change’, in D. Mowery and R. Nelson (eds), Sources
of Industrial Leadership, Cambridge: Cambridge University Press, pp. 99–115.
Johnson, J. (2002), ‘Valley in the Alps’, Financial Times, 27 February, p. 10.
Kaiser, R. (2003), ‘Multi-level science policy and regional innovation: the case
of the Munich cluster for pharmaceutical biotechnology’, European Planning
Studies, 11, 841–58.
Kaufmann, D., D. Schwartz, A. Frenkel and D. Shefer (2003), 'The role of location
and regional networks for biotechnology firms in Israel', European Planning
Studies, 11, 823–40.
Kettler, H. and S. Casper (2000), The Road to Sustainability in the UK & German
Biotechnology Industries, London: Office of Health Economics.
Lawton Smith, H. (2004), ‘The biotechnology industry in Oxfordshire: enterprise
and innovation’, European Planning Studies, 12, 985–1002.
Lawton Smith, H. and S. Bagchi-Sen (2004), ‘Guest editorial: innovation geogra-
phies; international perspectives on research, product development, and com-
mercialisation of biotechnologies’, Environment & Planning C: Government &
Policy, 22, 159–60.
Lee, C. and M. Walshok (2001), Making Connections, Report to University of
California, Office of the President.
Lundvall, B.-Å. (ed.) (1992), National Systems of Innovation, London: Pinter.
Maillat, D. (1998), ‘Interactions between urban systems and localised productive
systems: an approach to endogenous regional development in terms of innova-
tive milieu’, European Planning Studies, 6, 117–30.
March, J. (1991), ‘Exploration and exploitation in organizational learning’,
Organization Science, 2, 71–87.
McKelvey, M. (1996), Evolutionary Innovation: The Business of Biotechnology,
Oxford: Oxford University Press.
Nelson, R. (ed.) (1993), National Innovation Systems, Oxford: Oxford University
Press.
Niosi, J. and T. Bas (2003), ‘Biotechnology megacentres: Montreal and Toronto
regional systems of innovation’, European Planning Studies, 11, 789–804.
OECD (2000), Health Data, Paris: OECD.
Orsenigo, L. (1989), The Emergence of Biotechnology, New York: St Martin’s
Press.
Orsenigo, L., F. Pammolli and M. Riccaboni (2001), ‘Technological change and
network dynamics: lessons from the pharmaceutical industry’, Research Policy,
30, 485–508.
Owen-Smith, J. and W. Powell (2004), ‘Knowledge networks as channels and
conduits: the effects of spillovers in the Boston biotechnology community',
Organization Science, 15, 5–21.
Parkes, C. (2004), ‘Job losses in Silicon Valley worse than first feared’, Financial
Times, 25 March, p. 9.
Penrose, E. (1959), The Theory of the Growth of the Firm, Oxford: Oxford
University Press.
Powell, W., K. Koput, J. Bowie and L. Smith-Doerr (2002), ‘The spatial clustering
of science and capital: accounting for biotech firm-venture capital relationships’,
Regional Studies, 36, 291–305.
Raagmaa, G. and P. Tamm (2004), ‘An emerging biomedical business in a low
capitalised country’, European Planning Studies, 12, 943–60.
Rallet, A. and A. Torre (1998), ‘On geography and technology: proximity rela-
tions on localised innovation networks’, in M. Steiner (ed.), Clusters & Regional
Specialisation, London: Pion, pp. 41–56.
Ryan, C. and P. Phillips (2004), ‘Knowledge management in advanced technology
industries: an examination of international agricultural biotechnology clusters’,
Environment & Planning C: Government & Policy, 22, 217–32.
Sassen, S. (1994), Cities in a World Economy, Thousand Oaks, CA: Pine Forge
Press.
Schumpeter, J. (1975), Capitalism, Socialism & Democracy, New York: Harper &
Row.
Small Business Economics (2001), Special Issue: ‘Biotechnology in Comparative
Perspective – Regional Concentration and Industry Dynamics’ (guest editors:
Gerhard Fuchs and Gerhard Krauss), 17, 1–153.
Wickelgren, I. (2002), The Gene Masters, New York: Times Books.
Wolter, K. (2002), ‘Can the US experience be repeated? The evolution of biotech-
nology in three European regions’, Mimeo, Duisburg: Duisburg University.
Zeller, C. (2002), ‘Regional and North Atlantic knowledge production in the phar-
maceutical industry’, in V. Lo and E. Schamp (eds), Knowledge – the Spatial
Dimension, Münster: Lit-Verlag, pp. 86–102.
Zucker, L., M. Darby and J. Armstrong (1998), ‘Geographically localised knowl-
edge: spillovers or markets?’, Economic Inquiry, 36, 65–86.
6. Proprietary versus public domain
licensing of software and research
products
Alfonso Gambardella and Bronwyn H. Hall

1. INTRODUCTION

In the modern academic research setting, many disciplines produce soft-
ware and databases as a by-product of their own activities, and also use
the software and data generated by others. As Dalle (2003) and Maurer
(2002) have documented, many of these research products are distributed
and transferred to others using institutions that range from commercial
exploitation to ‘free’ forms of open source. Many of the structures used
in the latter case resemble the traditional ways in which the ‘Republic of
Science’ has ensured that research spillovers are available at low cost to
all. But in some cases, moves toward closing the source code and com-
mercial development take place, often resulting either in the disappearance
of open source versions or in ‘forking’, where an open source solution
survives simultaneously with the provision of a closed commercial version
of the same product. This has also created tensions between the reward
systems of the ‘Republic of Science’ and the private sector, especially when
the production of research software or the creation of scientific databases
is carried out in academic and scientific research environments (see also
Hall, 2004).
As these inputs to scientific research have become more important and
their value has grown, a number of questions and problems have arisen
surrounding their provision. How do we ensure that incentives are in place
to encourage their supply? How do market and non-market production
of these knowledge inputs interact? In this chapter, we address some of
these questions. We develop a framework that highlights the difficulties
in sustaining the production of knowledge when it is the outcome of a
collective enterprise. Since the lack of coordination among the individual
knowledge producers is typically seen as the culprit in the underprovision
of public knowledge, the latter can be sustained by institutional devices
that encourage such a coordination. A key idea of the chapter is that the
General Public License (GPL) used in the provision of open source
software is one such mechanism. We then discuss another limitation in
the production of this type of knowledge. To make it useful for commer-
cial or other goals, one needs complementary investments (e.g. develop-
ment costs). If the knowledge is freely available, there could be too many
potential producers of such investments, which reduces the incentives of
all of them to make the investments in the first place. Paradoxically, if the
knowledge were protected, its access would be more costly, which might
produce the necessary rents to enhance the complementary investments.
But protecting upstream knowledge has many drawbacks, and we argue
that a more effective solution is to protect the downstream industry prod-
ucts. Finally, we discuss how our framework and predictions apply to the
provision of scientific software and databases.
An example of the difference between free and commercial software
solutions that should be familiar to most economists and scientific
researchers is the scientific typesetting and word processing package TeX.1
This system and its associated set of fonts were originally the elegant inven-
tion of the Stanford computer scientist Donald Knuth, also famous as the
author of The Art of Computer Programming, the first volume of which
was published in 1968. Initially available on mainframes, and now widely
distributed on UNIX and personal computer systems, TeX facilitated the
creation of complex mathematical formulae in a word-processed manu-
script and the subsequent production of typeset camera-ready output. It
uses a set of text-based computer commands to accomplish this task rather
than enabling users to enter their equations via the graphical WYSIWYG
interface now familiar on the personal computer.2 Although straightfor-
ward in concept, the command language is complex and not easily learned,
especially if the user does not use it on a regular basis. Although many
academic users still write in raw TeX while working on a system with a graphi-
cal interface such as Windows, there now exists a commercial program,
Scientific Word, which provides a WYSIWYG environment for generat-
ing TeX documents, albeit at a considerable price when compared to the
freely distributed original.
This example illustrates several features of the academic provision of
software that we shall discuss in this chapter. First, it shows that there
is willingness to pay for ease of software use even in the academic world
and even if the software itself can be obtained for free. Second, the most
common way in which software and databases are supplied to the aca-
demic market is a kind of hybrid between academic and commercial,
where they are sold in a price-discriminatory way that preserves access for
the majority of scientific users. Such products often begin as open source
projects directed by a 'lead' user, because the culture of open science is
quite strong in the developers and participants. Nevertheless, they are
eventually forced into the private sector as the market grows and non-
developer users demand support, documentation, and enhancements to
the ease of use.
In the next section we discuss some basic aspects of the problem of cre-
ating incentives for the production of knowledge when many producers
are involved. Section 3 discusses our analytic framework, which shows
that without some kind of coordination, production of the public knowl-
edge good (science or research software or database) is suboptimal, and
that the GPL can solve the problem at least in part. Section 4 focuses on
complementary investments. Sections 5 and 6 apply our framework to
the specific setting where the knowledge being produced is software or
a database that will be used by academic researchers and possibly also
by private firms, using as an example a product familiar to economists,
econometric software. We conclude by discussing some of the ways in
which pricing can ameliorate the problem of providing these products to
academic researchers. The Appendix develops the technical details of our
model in Section 3.

2. INCENTIVES FOR KNOWLEDGE PRODUCTION WITH MANY PRODUCERS

The design of incentive systems that reward inventors and knowledge
producers and encourage dissemination of their output has been a famil-
iar issue to economists and other scholars for a long time (e.g. Nelson,
1959; Arrow, 1962; Scotchmer, 1991). If anything, the issue has become
more important today with the advent of the Internet and other computer
networking methods. The principal effect of the increase in computer net-
working and Internet use is that it lowers the marginal cost of distributing
codified knowledge to the point where it is essentially zero. This in turn
has the potential to reduce incentives for production of such knowledge or
to increase the demands of the producers for protection of their property
rights to the knowledge. Hence there is a felt need to undertake additional
efforts to understand the production of knowledge, and to think about
new approaches to policy.
To address these issues, we must first ask what motivates the produc-
ers of knowledge. Key factors identified in the literature are curiosity and
a taste for science, money, the desire for fame and reputation, and, as a
secondary goal, promotion or tenure (Stephan, 1996). The latter two goals
are usually achieved via priority in publication, that is, being the first to get
a discovery into print. Although monetary income is clearly a partial moti-
vation in the search for reputation and promotion, considerable evidence
exists that for researchers in universities and public research organizations
with some level of guaranteed income, the first motive − intellectual curi-
osity − is of overriding importance (e.g. Isabelle, 2004). For this type of
researcher, the desire for financial rewards is often driven by the desire to
fund their own scientific research (Lee, 2000) rather than by consumption
per se. Scientists’ motivations also are coloured by the culture in which
they are embedded, with traditional norms giving way to a more market-
oriented view among some younger scientists today (Isabelle, 2004;
Owen-Smith and Powell, 2001).
Several scholars (e.g. Merton, 1957, 1968; David, 1993) have described
the two regimes that allocate resources for the creation of new knowledge:
one is the system of granting intellectual property rights (IPR), as exem-
plified by modern patent and copyright systems; the other is the ‘open
science’ regime, as often found in the realm of ‘pure’ scientific research
and sometimes in the realm of commercial technological innovation,
often in infant industries (Allen, 1983). Today we also see this system to a
certain extent in the production of free and open source software. The first
system assigns clear property rights to newly created knowledge that allow
the exclusion of others from using that knowledge, as well as the trading
and licensing of the knowledge. As is well known, such a system provides
powerful incentives for the creation of knowledge, at the cost of creating
temporary monopolies that will tend to restrict output and raise price.
Additionally, in such systems, the transaction costs of combining pieces
of knowledge or building on another’s knowledge may be rather high, and
in some cases achieving first- or even second-best incentives via ex post
licensing may be impossible (Scotchmer, 1991). The use of other firms’
knowledge output will often require payment or reciprocal cross-licensing,
which means that negotiation costs have to be incurred. Finally, obtain-
ing IPR usually requires publication, but only of codified knowledge, and
trade secrecy protection is often used in addition.
The second set of institutional arrangements, sometimes referred to
as the norms governing the ‘Republic of Science’, generates incentives
and rewards indirectly: the creation of new knowledge is rewarded by
increased reputation, further access to research resources, and possible
subsequent financial returns in the form of increased salary, prizes, and
the like (Merton, 1957, 1968). This system relies to some extent on the fact
that individuals often invent or create for non-pecuniary reasons such as
curiosity. Dissemination of research results and knowledge is achieved at
relatively low cost, because assigning the ‘moral rights’ to the first pub-
lisher of an addition to the body of knowledge gives creators an incentive
to disseminate rapidly and broadly. Therefore, in this system the use of
others’ output is encouraged and relatively cheap, with the cost being
appropriate citation and possibly some reciprocity in sharing knowledge.
But it is evident that this system cannot capture the same level of private
economic returns for the creation of knowledge. Inventors must either
donate their work or receive compensation as clients of public or private
patrons.3
Hall (2004) highlights the tension that arises when these two systems
come up against each other. For example, it is common for the differ-
ence in norms and lack of understanding of the potential partner’s needs
and goals to produce breakdowns in negotiations between industry and
academia. These breakdowns can have an economic as well as a cultural
cause, as shown by Anton and Yao (2002) in a study of contracting
under asymmetric information about the value of the knowledge to be
exchanged. In addition, there is the simple fact that both systems rely on
reciprocal behavior between the parties to a knowledge exchange, so that
contracting between participants in the two different systems becomes
subject to misunderstanding or worse. This is illustrated by the reaction
of the genomic industry in the USA when asked to take out licenses to
university-generated technology: once the university starts acting like
a private sector firm, there is a temptation to start charging it for the
use of the outputs of industry research, with consequent negative effects
on researchers who still believed themselves part of the ‘open science’
regime.
In fact, notice should also be taken of an important variation of the
‘open science’ regime for the sharing of knowledge production outputs,
one that has arisen many times in the development of industry through-
out history: the free exchange and spillover of knowledge via personnel
contact and movement, as well as reverse engineering, without resort to
intellectual property protection. This has become known as the system
for ‘collective invention’. Examples include the collective invention in the
steel and iron industry described by Allen (1983) (see also Von Hippel,
1987), the development of the semiconductor industry in Silicon Valley
(Hall and Ziedonis, 2001), the silk industry in Lyons during the ancien
regime, described by Foray and Hilaire-Perez (2005), and the collective
activities of communities of users who freely distribute information to
the manufacturers (Harhoff et al., 2003). In these environments, most of
which are geographically localized innovation areas with social as well as
business relationships that build trust (or at least knowledge of whom to
trust), the incentive system for the production and exchange of knowledge
is somewhat different from that in either of the other two systems.
The first and most obvious difference is that the production of ‘research’
in the industry setting is supported not by public or private patronage but
by commercial firms that finance it by the sale of end products that incor-
porate their discovery. Because rewards come from the sale of products
rather than information itself, as they do in the conventional IP-based
system, the sharing of information about incremental innovations is
motivated by different considerations than in the case of the open source
regime. Although priority is not per se valuable except in the sense that it
may confer lead time for production, shared knowledge, especially about
incremental improvements to a complex product, is perceived to be useful
and essential for the progress of the entire industry, including the firm that
shares the knowledge. When an industry is advancing and growing rapidly,
the desire to exclude competitors from the marketplace is not as strong as
when an industry reaches maturity. An implication is that this form of free
exchange of knowledge tends to collapse, or is unstable over time, as has
happened in many of the historical examples. In the next section we try to
capture this idea and discuss some conditions under which the academic or
industry-based open source regime might break down.

3. ‘PUBLIC DOMAIN’ VERSUS ‘PROPRIETARY’ RESEARCH

Configuration of the Open Source Equilibrium

When do the different systems of knowledge generation and sharing dis-
cussed in the previous section develop, and when might they be expected
to break down? In this section we address these questions. To make our
argument more precise we provide a simple formalization in the Appendix.
Below we discuss the intuitions and the implications of our model.
As discussed in the previous section, many researchers face a trade-
off. They can put a given research outcome in the public domain or seek
private profits from it. As a stylized representation, in the former case
they enjoy no economic rents, while in the latter they restrict public dif-
fusion of their findings, seek property rights on them, and gain monetary
income. We label the first mode ‘public domain’ (PD), and the second
‘proprietary research’ (PR). As also noted earlier, this framework encom-
passes many situations, such as academic scientists who could publish
their research findings vis-à-vis holding patents or other property rights on
them (Dasgupta and David, 1994); software developers who contribute to
open source software as opposed to patenting their programs (Lerner and
Tirole, 2002); user–inventors who transfer their inventions to the produc-
ers rather than protecting them as intellectual property and then selling
them (Von Hippel, 1988; Von Hippel and Von Krogh, 2003; Harhoff et
al., 2003); communities of technologists who coordinate to share their ‘col-
lective’ inventions, as opposed to keeping their knowledge secret (Allen,
1983; Nuvolari, 2004; Foray and Hilaire-Perez, 2005).
Like other individuals, researchers gain utility from monetary income,
but their utility also increases with the stock of public domain (PD) knowl-
edge. Their benefits from this knowledge come from two sources: their own
contributions and the contributions of others. First, they enjoy utility from the
fact that they contribute to public knowledge. This is because they ‘like’
contributing to PD knowledge per se, or because they enjoy utility from
a larger stock of public knowledge and hence they wish to contribute to
its increase. There could also be instrumental reasons. Contribution to
public knowledge makes their research visible, which provides fame, glory
or potential future monetary incomes in the form of increased salary,
funding for their research, or consultancy. Second, the researchers gain
utility from the fact that others contribute to PD knowledge. Again this
could be because they care about the state of public knowledge. In addi-
tion, a greater stock of public knowledge provides a larger basis for their
own research, which implies that, other things being equal, they would like
others to contribute to it.
We assume that the benefits from the contributions of other researchers
to public knowledge will be enjoyed whether one works under PD or in
the proprietary research (PR) regime. This implies that a researcher will
operate under PD if the benefit that she enjoys from her public contribu-
tion is higher than the foregone monetary income from not privatizing
her findings. In the Appendix we show that in equilibrium this is true of
all the researchers who operate under PD, while the opposite is true of the
researchers who operate under PR. In general, the equilibrium will involve
a share of researchers operating under PD or PR that is between 0 and
1. The first prediction of our analysis is, then, that the two regimes can
coexist, as we shall also see with some examples in the following sections.
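
To fix ideas, here is a minimal sketch of the participation condition behind this prediction; the notation ($b_i$, $\pi_i$, $B(\cdot)$) is ours and only gestures at the fuller model in the Appendix. Let $b_i$ be the utility researcher $i$ derives from her own public contribution, $\pi_i$ the monetary income from privatizing it, and $B(K_{-i})$ the benefit from the public contributions of others, enjoyed under either regime. Then

\[ U_i(\mathrm{PD}) = b_i + B(K_{-i}), \qquad U_i(\mathrm{PR}) = \pi_i + B(K_{-i}), \]

so researcher $i$ chooses PD whenever $b_i \ge \pi_i$. A rise in the $\pi_i$ common to all researchers (new profit opportunities) lowers the equilibrium share of PD researchers, while a stronger taste for open research (higher $b_i$) raises it, which is the comparative statics discussed next.
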
Our model also predicts that new profit opportunities common to all
the researchers in a field reduce the share of PD researchers in equilibrium,
while a stronger taste for research (e.g. because of particular systems of
academic values) raises it. There is fairly widespread evidence that in fields
like software or biotechnology there are pressures on academic research-
ers to place their findings in a proprietary regime. Also, our examples in
the later sections show that shifts from academic to commercial software
are more prominent when the market demand for the products increases,
which raises the profitability of the programming efforts. Finally, there are
several accounts of the fact that tension between industrial research and
academic norms becomes higher if university access to IPRs is increased
(Cohen et al., 1998; Hall et al., 2001; Hertzfeld et al., 2006; Cohen et al.,
2006). As these authors report, such tension has already been observed in
the USA, as the latter country has pioneered the trend towards stronger
IPRs and the use of intellectual property protection by universities, but it
is becoming more pronounced in Europe as well, as European universities
follow the path opened up by the US system (Geuna and Nesta, 2006).
Collins and Wakoh (1999) describe similar changes in Japan, and show
how the regime shift to patenting by universities is inconsistent with the
previous system of collaborative research with industry in that country,
implying increasing stress for the system.

Instability of Open Source Production

Our model also shows that the only way to get a stable equilibrium con-
figuration with individuals operating under open sharing rules is when
there is coordination among them. Otherwise, the sharing (cooperative)
equilibrium tends to break down because some individuals find it in their
interest to defect. The instability of the open sharing equilibrium is just an
application of the famous principle by Mancur Olson (1971) that without
coordination collective action is hard to sustain. Our contribution is
simply to highlight that Olson’s insight finds application to the analysis of
the instability of open systems. When many researchers contribute to PD
knowledge, an individual deviation to PR is typically negligible compared
to the (discrete) jump in income offered by a proprietary regime. Thus,
individually, the researchers always have an incentive to deviate.
Another way to see this point is to note that some of the tensions that are
created in the open research systems can be attributed to the asymmetry
between the open and the proprietary mode. The researchers shift to pro-
prietary research only if it is individually profitable. By contrast, in the col-
lective production of knowledge, a desirable individual outcome depends
on the actions of others. In our framework this is because the individuals
care about the fact that others contribute to the stock of knowledge, and
because this may affect their benefits from their own contribution as well.
As we show in the Appendix, this creates situations in which the lack of
coordination produces individual incentives to deviate in spite of the fact
that collectively the researchers would like to produce under PD. The
intuition is that a group of individuals can produce a sizable increase in
the stock of public knowledge if they jointly deviate from the PR regime.
Thus, if there were commitment among them to stay within the PD rules,
they could be better off than with private profits. In turn, this is because
the larger the group of people who deviate in a coordinated fashion, the
higher the impact on the public knowledge good, while the private profits,
which do not depend so heavily on the collective action, are not affected
substantially by the joint movement of researchers from one regime to the
other. But even if they all prefer to stay with the PD system, because of the
larger impact of their PD contributions as a group, individually they have
an incentive to deviate because if the others stay with PD, the individual
deviation does not subtract that much from public knowledge, while it
does produce a discrete jump in the individual’s private income. Since
everyone knows that everyone else faces this tension, and could deviate, it
will be difficult to keep the researchers under the PD system unless some
explicit coordination or other mechanism is in place.
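
A stylized numerical illustration of this instability, of our own making rather than taken from the Appendix: suppose each of $n = 10$ researchers either contributes under PD, with each contribution adding $v$ to every researcher's utility, or privatizes her result and earns $\pi$, with $v < \pi < 10v$. Then

\[ \text{all PD: } 10v \text{ each;} \qquad \text{a single defector: } \pi + 9v > 10v; \qquad \text{all PR: } \pi < 10v \text{ each.} \]

Defecting always pays individually (since $\pi > v$), yet everyone prefers the all-PD outcome to the all-PR outcome (since $10v > \pi$): without coordination the collectively inferior proprietary configuration prevails.
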
Ultimately, this asymmetry in the stability of the two configurations
suggests why there may be a tendency to move from public to private pro-
duction of knowledge, while it is much harder to move back from private
to public. The implication is that there is little need for policy if more
proprietary research is desirable, as the latter is likely to arise naturally
from the individual actions. By contrast, policy or institutional devices
that could sustain the right amount of coordination are crucial if the system
underinvests in knowledge that is placed in the public domain.

General Public License (Copyleft) as a Coordination Device

The General Public License (GPL) used in open source software can
be an effective mechanism for obtaining the required coordination. As dis-
cussed by Lerner and Tirole (2002), inter alia, with a GPL the producer of
an open source program requires that all modifications and improvements
of the program be subject to the same rules of openness; most notably the
source code of all the modifications ought to be made publicly available
like the original program.4 To see how a GPL provides the coordination to
solve the Mancur Olson problem, imagine the following situation. There is
one researcher who considers whether to launch a new project or not. We
call her the ‘originator’. She knows that if she launches the project, others
may follow with additional contributions. The latter are the ‘contribu-
tors’. If the originator attaches a GPL to the project, the contributors can
join only under PD. If no GPL is attached, they have the option to priva-
tize their contribution. Of course, once (and if) the project is launched, the
contributors always have the option not to join the project and work on
some alternative activities. Given the expected behavior of the contribu-
tors, the originator will choose whether to launch the project or not. She
also has potential alternatives. If she decides to launch it, she will choose
whether to put her contribution under PD or PR, and if the former, she
considers whether to attach a GPL to the project. We can safely rule out
the possibility that the originator operates under PR and attaches a GPL
to the project. It would be odd to think that she could enforce open source
behavior given that she does not abide by the same rules.
The key implication of a GPL is that it increases the number of con-
tributors operating under PD. The intuition, which we formalize in the
Appendix, is simple. Without a GPL the contributors have three choices:
work on the project under PD, or under PR, or not join because they have
better alternatives. PD contributors to the project will still choose PD if a
GPL is imposed. If they preferred PD over both PR and other alternatives,
they will still prefer PD if the PR option is ruled out. Those who did not
join the project will not join with a GPL either. They preferred their alter-
natives over PD and PR, and will still prefer them if PR is not an option.
Finally, some of those who joined under PR will join under PD instead,
while others who joined under PR will no longer join the project. As a
result, a GPL reduces the total number of researchers who join the project,
but raises the number of researchers working under PD. The reduced
number of participants is consistent with the fact that the GPL is a restric-
tion on the behavior of the researchers. However, this is a small cost to
the public diffusion of knowledge because those who no longer participate
would not have joined under PD. By contrast, the GPL encourages some
researchers who would not have published their results without the GPL
to do so.
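
The contributor's choice can be sketched in the same spirit, again with notation of our own: let $b_j$, $\pi_j$ and $a_j$ denote contributor $j$'s payoff from joining under PD, joining under PR, and pursuing her best alternative, respectively. Then

\[ \text{no GPL: choose } \max\{b_j, \pi_j, a_j\}; \qquad \text{with GPL: choose } \max\{b_j, a_j\}. \]

Contributors with $b_j \ge \max\{\pi_j, a_j\}$ join under PD in either case; those with $a_j > \max\{b_j, \pi_j\}$ never join; those with $\pi_j > b_j \ge a_j$ switch from PR to PD once a GPL is attached, while those with $\pi_j > a_j > b_j$ drop out. Total participation falls, but PD participation rises, which is precisely the trade-off just described.
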
Given the behavior of the contributors, will the originator launch the
project and issue a GPL? We know that the originator, like any other
researcher, enjoys greater utility from a larger size of the public knowl-
edge stock. At the same time, she enjoys utility from monetary income
or, as we noted, from alternative projects. Here we want to compare her
choice when she can employ a GPL vis-à-vis a world in which there is no
GPL. With a GPL she knows that the number of contributors to public
knowledge increases, which in turn increases the size of the expected public
knowledge stock when compared to a no-GPL case. As a result, when
choosing whether to launch the project under PD with a GPL, under PD
and no GPL, under PR, or work on alternative projects, she knows that
the GPL choice raises the future public knowledge stock in the area while
leaving her monetary income from the project and her utility from alter-
natives unchanged. This makes it more likely that the originator will choose to work
on the project under PD cum GPL. More generally, a GPL will increase
the number of projects launched under PD and the size of the public
knowledge contributions.
To summarize, the way the GPL works is by giving rise to an implicit
coordination among a larger number of researchers to work on PD. The
originator knows that there will be researchers who would prefer PR but
choose PD if the former opportunity is not available, while all those who
would choose PD will stick to it in any case. This enlarges the number of
expected PD researchers, thereby placing greater advantages on the PD
choice. Our intuition is that those with a strong taste for PD research will
always work under PD, whether there is a GPL or not. By contrast, those
with a high opportunity cost will never join the project. But those who
have a small opportunity cost, and a weak taste for PD research, might
contribute via PD if a GPL is introduced. The GPL then lures people
who are on the border between doing PD research on the project or not.
For example, a GPL may be crucial to enhance the participation under
PD of young researchers, who do not have significant opportunity costs
(e.g. because they do not yet have high external visibility), but who do not
have a strong taste for PD research either, and hence would privatize their
findings if it were profitable to do so. There might also be dynamic impli-
cations, for example the GPL helps young researchers to ‘acquire’ a taste
for PD research. This might help create a system of norms and values for
public research that could sustain the collective action. We leave a more
thorough assessment of such dynamic implications to future research.

Nature and Consequences of the GPL Coordination

A GPL is most effective as a coordination device when the opportunity
cost of the individual researchers, and the private profits from contribut-
ing to the project, are not positively correlated. Suppose that they were.
This could arise because there is some common element between the two
factors. For example, an individual researcher could be effective in com-
mercializing knowledge in any field because he belongs to institutions (uni-
versity or other) that encourage the commercialization of knowledge. In
this case, the contributors to the project, who have low opportunity costs,
also have low private profits from contributing to the project via PR. A
GPL would not make a big difference because a very large fraction of the
contributors to the project will do so under PD since their private rewards
are low in any case. Hence a GPL induces few researchers to switch from
PR to PD. In turn, this has a small effect on the choice of the originator
to launch the project under PD vis-à-vis PR because the number of addi-
tional PD contributors with a GPL is small. By contrast, if they are not
positively correlated, some of the contributors to the project, who have
low opportunity cost, will have high private rewards from PR. They could
be encouraged by a GPL to switch to PD. As a result, the number of PD
contributors could be sizably different with a GPL, with implied greater
opportunities for PD rather than PR research.
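
Restating this in the notation of the sketch above (our own gloss, not the chapter's formal result), the contributors whom a GPL converts from PR to PD are

\[ S = \{\, j : \pi_j > b_j \ge a_j \,\}. \]

When $a_j$ and $\pi_j$ are strongly positively correlated, contributors with low opportunity costs $a_j$ also tend to have low private rewards $\pi_j$, so $S$ is nearly empty and the GPL changes little; when the two are roughly independent, $S$ is larger and the GPL has real bite.
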
The independence between the opportunity cost and the private rewards,
as opposed to positive correlation, may be associated with the novelty of
the project. When the projects are in new areas, the opportunities of the
individuals may change substantially, and the researchers who might
profit the most from the new projects can be different from those who
benefited in the old projects. New skills, or new forms of learning are nec-
essary in the new fields, and the people who have made substantial invest-
ments in the old projects may have greater difficulties in the new areas
(see, e.g. Levinthal and March, 1993). In these cases, researchers with low
opportunity costs may instead find that they have great opportunities to
commercialize knowledge in the new fields (high private rewards). Thus
the GPL is more likely to be a useful coordination device when the project
is in a new field rather than an incrementally different one from previous
projects, and when it is socially desirable to run these projects under PD.
Our mechanism relies on the fact that there is enforcement of the GPL.
But can the copyleft system be enforced? In some settings people seem to
abide by the copyleft rules, as Lerner and Tirole (2002) have noted, in spite
of the lack of legal enforcement. In many situations, there may be a repu-
tation effect involved when the copyleft agreement is not complied with. In
this respect, the reason why a copyleft license may be useful is that without
it, it may not be clear to the additional contributors whether the intention
of the initial developers of the project was to keep it under PD or not.
But if the will is made explicit, deviations may be seen as an obvious and
explicit challenge to the social norms, and this may be sanctioned by the
community. The GPL then acts as a signal that clears the stage of potential
ambiguities about individual behavior and the respect of social norms.
Even in science, if a researcher develops a certain result, others may build
on it, and privatize their contributions. This might be seen as a deviation
from the social norms. While this behavior could be sanctioned, depending
on the strength with which the norms of open science are embedded in and
pursued by the community, without an explicit indication that the original
contributor did not want future results from her discoveries to be used for
private purposes, the justification for such sanctions, or the need for them,
is more ambiguous.
A GPL removes ambiguity about the original intentions of the develop-
ers, and any behavior that contradicts the GPL is more clearly seen as not
proper. This reduces privatization of future contributions compared to a
situation with no GPL, increases the expectations that more researchers
will make their knowledge public, and, other things being equal, creates
greater incentives to make projects public in the first place. It is in this
respect that we think that explicit indications of the norms may be a
stronger signal than the mere reliance on the unwritten norms of open
science or open source software.
A related point is that the literature has typically been concerned with
the need to protect the private property of knowledge when this is neces-
sary to enhance the incentives to innovate. The inherent assumption is that
when it is not privately protected, the knowledge is by default public, and
it enriches the public domain. Yet our model points out that this is not
really true. The public nature of knowledge needs itself to be protected
when commitments to the production of knowledge in the public domain
are socially desirable. In other words, there is a need for making it explicit
that the knowledge has to remain public, and this calls for positive actions
and institutions to protect it. Not allowing for private property rights on
some body of knowledge is not equivalent to assuming that the knowledge
will be in the public domain. One may then need to assign property rights
not just to private agents, but also to the public. For example, the IPRs are
typically thought of as being property rights to private agents. But we also
need to have institutions that preserve the public character of knowledge.
The copyleft license is a beautiful example of this institutional device. A
natural policy suggestion is therefore to make it legally enforceable, as are
copyright, patents and other private-based IPRs.

4. COMPLEMENTARY INVESTMENTS IN OPEN SOURCE PRODUCTION

Another feature of traditional open source or academic software produc-
tion that we alluded to in the introduction is that it normally requires addi-
tional investments that enhance the usefulness and value of the scattered
individual contributions, or it simply requires investments to combine
them. For example, while several individuals can contribute to the devel-
opment of a whole body of scientific knowledge, there must be some
stage at which the ‘pieces’ are combined into useful products, systems,
or transferable knowledge. Some scientists, or most likely some special-
ized agents, i.e. academic licensing offices or firms, normally perform this
function. A typical example is when scientific knowledge needs substantial
downstream investments to become economically useful technologies or
commercializable products. Thursby et al. (2001) report that this is often
the case for university research outputs. The latter activities are normally
performed by firms. In software, additional investments are often required
to enhance the usability of the software for those who did not develop
it, and to produce documentation and support. The need for additional
investments in open source production, or more generally in tasks that rely
on public domain knowledge, has some specific implications that we want
to discuss in this section.
The problem is that the (downstream) ‘assembling’ agent needs some
profits in order to carry out the investments that are necessary to produce
the complementary downstream assets of the good. Since the downstream
assembling agents are typically firms, we now refer to them as such. There
are two issues. First, the firm needs to obtain some economic returns to
finance its investment. Clearly, there are many ways to moderate its poten-
tial monopoly power so that the magnitude of the rents will be sufficient
to make the necessary investments but not high enough to produce serious
extra-normal profits. However, it would be difficult for the firm to obtain
such rents if it operated under perfect competition, or if it operated under
an open, public domain system itself.
The second issue is more subtle. The firm uses the public domain contri-
butions of the individual agents (software programmers, scientists etc.) as
inputs in its production process. If these contributions are freely available
in the public domain, and particularly if they are not available on an exclu-
sive basis, many downstream firms can make use of them. As a result, the
downstream production can easily become a free entry, perfectly competi-
tive world, with many firms having access to the widely available knowl-
edge inputs. If so, each firm could not make enough rents to carry out the
complementary investments. This would be even harder for the individual
knowledge producers, who are normally scattered and have no resources
to cover the fixed set-up costs for the downstream investments. The final
implication is that the downstream investments will not be undertaken, or
they will be insufficient. Of course, there can be other factors that would
provide the firms with barriers to entry, thereby ensuring that they can
enjoy some rents to make their investments. However, in productions
where the knowledge inputs are crucial (e.g. software), the inability to
use them somewhat exclusively can generate enough threats of wide-
spread entry and excessive competition to discourage the complementary
investments.
Paradoxically, if the knowledge inputs were produced under proprietary
rules, the producers of them could charge monopoly prices (e.g. because
they could obtain an exclusive license), or at least enjoy some positive
price cost margins. This raises the costs of the inputs. In turn, this height-
ens barriers to entry in the downstream sector, and adjusts the level of
downstream investment upward. In other words, if the inputs are freely
available, there could be excessive downstream competition, which may
limit the complementary investments. If they are offered under proprietary
rules, the costs of acquiring the inputs are higher, which curbs entry and
competition, and allows the downstream firms to make enough rents to
carry out such investments.5
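
A stylized way to see this trade-off, in notation of our own rather than the chapter's model: let $F$ be the fixed complementary investment, $p \ge 0$ the price of the upstream knowledge input, and $R(m)$ the rent accruing to each of $m$ downstream entrants, with $R$ decreasing in $m$, so that each entrant anticipates

\[ \Pi(m) = R(m) - p - F. \]

With a freely available, non-exclusive input ($p = 0$) the pool of potential entrants is largest, competition drives $R(m)$ down, and $\Pi(m)$ can fall below zero for everyone, so the investment $F$ is not undertaken. A positive input price, for instance under an exclusive license, deters some entry; with fewer rivals, $R(m)$ remains high enough for those who do enter and $\Pi(m) \ge 0$ can be restored.
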
But the privatization of the upstream inputs has several limitations.
For one, as Heller and Eisenberg (1998) have noted, the complementarity
among the ‘pieces’ of upstream knowledge produced by the different indi-
viduals can give rise to the so-called problem of the anti-commons. That
is, after all the other rights have been collected under a unique proprietor-
ship, the final owner of a set of complementary inputs can enjoy enormous
monopoly power. This is because by withholding his own contribution,
he can forestall the realization of the whole technology, especially when
the complementarity is so tight that each individual contribution is crucial
to make the whole system work. The possibility of ex post hold-up can
discourage the effort to collect all the complementary rights ex ante, and
therefore prevent the development of the technology. Another limitation
of the privatization of the upstream inputs is the one discussed in the previ-
ous section. With copyleft agreements, more people can contribute to the
public good. The decentralized nature of the process by which scientists
or open source software producers operate has typically implied that the
network of public contributors to a given field can be so large that the
overall improvements can be higher than what can be obtained within
individual organizations, including quite large ones. Some evidence that
open source projects also increase the quality of software output has been
supplied by Kuan (2002).
One solution to the problem of paying for complementary downstream
investment is allowing for property rights, and particularly intellectual
property rights, on the innovations of the downstream producer. This
would of course raise its monopoly power and therefore curb excessive
competition. At the same time, it avoids attaching IPRs to pieces of
upstream knowledge, thereby giving rise to the problems of the anti-
commons, or to reduced quality of the upstream knowledge. In addition,
the downstream producer would enjoy rights on features of the innova-
tion that are closer to his own real contribution to the project, that is
the development of specific downstream investments. Clearly, this also
implies that the IPRs thus offered are likely to be more narrow, as they
apply to downstream innovations as opposed to potentially general pieces
of knowledge upstream. At the same time, they are not likely to be as
narrow as in the case of small individual contributions to an open software
module or a minor contribution to a scientific field, which can give rise to
the fragmentation and hold-up problems discussed earlier.

5. ACADEMIC SOFTWARE AND DATABASES

In this section we draw some implications for the provision of scientific
software and databases from the model and discussion in the previous two
sections and then go on to discuss the possible modes in which they could
be provided. First, this type of activity is more likely to be privatized than
scientific research itself because there is greater and more focused market
demand for the product, because norms are weaker due to weaker reputa-
tion effects, and because there are more potential users who are not inven-
tors (and do not participate in the production of the good). Second, there
could easily be both public and private provision at the same time, because
such an equilibrium can be sustained when there are different communities
of researchers with different norms. Third, as the market for a particular
product grows, privatization is likely simply because the individual’s dis-
crete return to privatization has increased. Finally, when the components
to a valuable good are produced under public domain rules, free entry in
the downstream industry producing a final good based on those compo-
nents implies too few profits for those undertaking investments that will
enhance the value of the good. The final producers have to earn some rents
to be able to make improvements beyond the mere availability of research
inputs.
The privatization of scientific databases and software has both advan-
tages and disadvantages. With respect to the latter, David (2002) has
emphasized the negative consequences of the privatization of scientific and
technical data and information. One of the most important drawbacks is
the increase in cost, sometimes substantial, to other scientists, researchers
or software developers for use of the data in ways that might considerably
enhance public domain knowledge. A second is that the value of such
databases for scientific research is frequently enhanced by combining them
or using them in their entirety for large-scale statistical analysis, both of
which activities are frequently limited when they are commercially pro-
vided.6 Maurer (2002) gives a number of examples of privatized databases
that have somewhat restricted access for academic researchers via their
pricing structure or limitations on reuse of the data, such as Swiss-PROT,
Space Imaging Corporation, Incyte and Celera. In a related paper, David (2006)
cites the case of the privatization of Landsat images under the Reagan
Administration, which led to a tenfold increase in the price of an image. In
terms of our model, the potential to privatize scientific and technical data
and information implies that a smaller number of researchers will contrib-
ute to the public good, with implied smaller stock of public knowledge
being produced, which frustrates the launch of projects undertaken under
public diffusion rules.
At the same time, a common argument in favor of the privatization of
databases is that it helps in the development of a database-producing indus-
try, and more generally of an industry that employs these data as inputs.
A similar argument can be used more broadly for software. For example,
the recent European Directive that defines the terms for the patenting of
software in Europe (European Commission, 2002) was largely justified by
the argument that it would encourage the formation of a software industry
in niches and specialized fields. Although it is sometimes true that exclu-
sivity can have positive effects on the provision of information products,
it is also true that there can be drawbacks like those suggested earlier
(fragmentation of IPRs, little contribution to public domain knowledge,
restricted access when welfare would be enhanced with unlimited access)
to the privatization of knowledge inputs. At times, one can obtain similar
advantages by allowing for the privatization of the outputs that can be
generated using the database or software in question. That is, discovery of
a useful application associated with a particular gene that is obtained by
use of a genomic database is patentable in most countries. Or, in the case
of the econometric software example used later in the chapter, consulting
firms such as Data Resources, Inc. or Chase Econometrics marketed the
results of estimating econometric models using software whose origins
were in the public domain. Following our earlier argument, by allowing
for the privatization of the downstream output we make it possible for
the industry to obtain enough rents to make the necessary complemen-
tary investments, while avoiding the limitations of privatizations in the
upstream knowledge.
There are, however, limits to this particular strategy for ensuring that
scientific databases and software remain in the public domain while
downstream industries based on these freely available discoveries can earn
enough profit to cover their necessary investments. The difficulty of course
is that in the case of generally useful information products, a firm selling
a particular product, one of whose inputs is an upstream academic product,
has no reason to undertake the enhancements to the upstream product
that would make it useful to others, unless the firm can sell the enhanced
product in the marketplace. But this is what we were trying to avoid, and
what is ruled out by a GPL.
In fact, we now turn to a discussion of an alternative way in which such
goods can be provided. The production of information products including
software and databases has always been characterized by large fixed costs
relative to marginal cost, but the cost disparity has grown since the advent
of the Internet. In practice, the only real marginal costs of distribution arise
from two sources: the support offered to individual users (which in many
cases has been converted into a fixed cost by requiring users to browse
knowledge bases on the Web) and the congestion costs that can occur on
web servers if demand is too great.7 Standard economic theory tells us that
when the production function for a good is characterized by high fixed
costs and low marginal costs, higher welfare can often be achieved by
using discriminatory pricing, charging those with high willingness to pay
more in order to offer the good to others at lower prices, thus increasing
the overall quantity supplied. The problem with applying this mechanism
generally is the difficulty of segmenting the markets successfully and of
preventing resale.
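
To make this welfare logic concrete, the short Python sketch below is our own illustration, not part of the chapter's argument: the segment sizes, willingness-to-pay figures and the fixed cost are all hypothetical. It compares a single uniform price with a two-segment pricing scheme for a good with a large fixed development cost and negligible marginal cost.

    # Hypothetical illustration of why price discrimination can raise both
    # profit and total surplus when fixed costs are high and marginal costs low.
    FIXED_COST = 5000.0        # cost of developing the package
    MARGINAL_COST = 0.0        # distribution over the Internet is essentially free

    # Assumed willingness to pay (wtp) of two user groups
    segments = {
        "commercial": {"users": 100, "wtp": 80.0},
        "academic":   {"users": 400, "wtp": 15.0},
    }

    def outcome(prices):
        """Profit and total surplus when each segment faces its own price."""
        profit = surplus = -FIXED_COST
        for name, seg in segments.items():
            if prices[name] <= seg["wtp"]:                      # the segment buys
                profit += (prices[name] - MARGINAL_COST) * seg["users"]
                surplus += (seg["wtp"] - MARGINAL_COST) * seg["users"]
        return profit, surplus

    # Uniform price at the commercial willingness to pay: academics are priced out.
    print("uniform price of 80:", outcome({"commercial": 80.0, "academic": 80.0}))
    # Segmented prices: academics are served cheaply while the commercial
    # segment covers the fixed cost; profit and total surplus both rise.
    print("segmented prices:   ", outcome({"commercial": 80.0, "academic": 10.0}))

In this stylized example the segmented scheme serves 400 additional users and raises both profit and total surplus, which is the sense in which discriminatory pricing can increase the overall quantity supplied.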
In the case of academic software and databases, however, it is quite
common for successful price-discriminating strategies to be pursued.8 There
are several reasons for this: (1) segmentation is fairly easy because academ-
ics can be identified via addresses and institutional web information; (2)
resale is difficult in the case of an information product that requires signing
on to use it and also probably not very profitable; (3) the two markets (aca-
demic and commercial) have rather different tastes and attitudes toward
technical support (especially towards the speed with which it is provided),
so the necessary price discrimination is partly cost-based.

6. CASE STUDY: ECONOMETRIC SOFTWARE PACKAGES

As an illustration of the pattern of software development in the academic
arena, we present some evidence about a type of product familiar to
economists that has largely been developed in a university research envi-
ronment but is now widely available from commercial firms: packaged
econometric software. Our data are drawn primarily from the excellent
surveys on the topic by Charles Renfro (2003, 2004). We have supple-
mented it in places from the personal experience of one of the co-authors,
who participated in the activity almost from its inception. The evidence
supplied here can be considered illustrative rather than a formal statistical
test of our model, since the sample is relatively small. To form a complete
picture of the phenomenon of software and database commercialization
in academia, it would be necessary to augment our study with other case
studies. For example, see Maurer (2002) for a good review of methods of
database provision in scientific research.
Econometric software is very much a by-product of the empirical eco-
nomic research activity, which is conducted largely at universities and non-
profit research institutions and to a lesser extent in the research departments
of banks and brokerage houses. It is an essential tool for the implementation
of statistical methods developed by econometric theorists, at least if these
methods are to be used by more than a very few specialists. To a great extent,
this type of software originated during the 1960s, when economists began to
use computers rather than calculating machines for estimation, and for the
first time had access to more data than could comfortably be manipulated
by hand. The typical such package is implemented using a simple command
language and enables the use of a variety of modeling, estimating and fore-
casting methods on datasets of varying magnitudes. Most of these packages
are now available for use on personal computers, although their origins are
often a mainframe computer implementation. For a complete history of the
development of this software, see Renfro (2003).
Like most software, econometric software can be protected via various
IP measures. The most important is a combination of copyright (for the
specific implementation in source code of the methods provided) and trade
secrecy (whereby only the ‘object’ code, or machine-readable version of
the code, is released to the public). This combination of IP protection has
always been available but has only become widely used during the personal
computer era. Before that time, distributors of academic software usually
provided some form of copyrighted source code for local installation on
mainframes, and relied on the fact that acquisition and maintenance were
performed by institutions rather than a single individual to protect the
code. This meant that the source code could be modified for local use, but
because the size of the potential market for ‘bootleg’ copies of the source
was rather small, piracy posed no serious competitive threat. The advent
of the personal computer, which meant that in many cases software was
being supplied to individuals rather than institutions, changed this situa-
tion, and today the copyright-trade secrecy model is paramount.9 Thus it
is possible to argue that developments in computing have made the avail-
able IP protection in the academic software sector stronger at the same
time that the potential market size grew, which our model implies will lead
to more defection from public domain to proprietary rules.
In Table 6.1, we show some statistics for the 30 packages identified by
Renfro. The majority (20 of the 30) have their origins in academic research,
either supported by grants or, in many cases, written as a by-product of
thesis research on a student’s own time.10 A further five were written
specifically to support the modeling or research activities of a quasi-
governmental organization such as a central bank. Only five were written
with a specific commercial purpose in mind. Two of those five were forks
of public domain programs, and in contrast to those of academic origin
(whose earliest date of introduction was 1964 and whose average date was
1979), the earliest of the commercial programs was developed in 1981/82,
a date that clearly coincides with the introduction of the non-hobbyist
personal computer. Notwithstanding the academic research origin of
most of these packages, today no fewer than 25 out of the 30 have been
commercialized, with an average commercialization lag of nine years.
Reading the histories of these packages supplied in Renfro (2003), it
becomes clear that although many of them had more than one contribu-
tor, normally there was a ‘lead user’ who coordinated development, the
Table 6.1  Econometric software packages

Type of seed funding              Total number   Number           Average lag to            Average date
                                  of products    commercialized   commercialization (yrs)   of introduction
Research grants or own research        20              16                 9.4                    1979
Quasi-governmental organization         5               4                16.4                    1974
Private (for profit)                    5               5                 0.8                    1984
Total or average                       30              25                 9.0                    1979

identity of the 'lead user' occasionally changing as time passed. Most
of the packages had their origins in the solution of a specific research
problem (e.g. the development of LIMDEP for estimation of the Nerlove
and Press logit model, or the implementation of Hendry’s model develop-
ment methodology in PCGive), but were developed, often through the
efforts of others besides the initial inventor, into more general tools.
These facts clearly reflect the development both of computing technol-
ogy and of the market for these kinds of packages. As predicted by our
model, growth in the market due to the availability of personal computers
and the growth of the economics profession as a whole has caused the early
largely open source development model of the 1960s to become privatized.
Nevertheless, there remain five programs that are supplied for free over
the Internet; of these, three had their origins before 1980 and the other
two are very recent. As our model in Section 3 suggests, not all of the indi-
viduals in the community shift to the private system, and the share of PD
activities can well be between 0 and 1. Interestingly, only one of the five is
explicitly provided with a GPL attached. A quote from one of the author’s
websites summarizes the motivation of those who make these programs
available quite well:
Why is EasyReg free?
EasyReg was originally designed to promote my own research. I came to realize
that getting my research published in econometric journals is not enough to get
it used. But writing a program that only does the Bierens’ stuff would not reach
the new generation of economists and econometricians. Therefore, the program
should contain more than only my econometric techniques.
When I taught econometrics at Southern Methodist University in Dallas in
the period 1991–1996, I needed software that my graduate students could use
for their exercises. The existing commercial software was not advanced enough,
or too expensive, or both. Therefore, I added the econometric techniques that
I taught in class first to SimplReg, and later on to EasyReg after I had bought
Visual Basic 3.
Meanwhile, working on EasyReg became a hobby: my favorite pastime
during rainy weekends.
When I moved to Penn State University, and made EasyReg downloadable
from the web, people from all over the world, from developing countries in Asia
and Africa as well as from western Europe and the USA, wrote me e-mails with
econometric questions, suggestions for additions, or just saying ‘thank you’. It
appears that a lot of students and researchers have no access, or cannot afford
access, to commercial econometrics software. By making EasyReg commercial
I would therefore let these people down.
There are also less altruistic reasons for keeping EasyReg free:

(1) By keeping EasyReg free my own econometric work incorporated in
EasyReg will get the widest distribution.
(2) I will never be able to make enough money with a commercial version of
EasyReg to be compensated for the time I have invested in it.
(3) Going commercial would leave me no time for my own research.11

Indeed, the second statement suggests that one reason to leave the soft-
ware in the public domain was that the researcher’s commercial profits
were not large enough. Likewise, the third statement suggests that the
researcher cared about research and this was an important reason for not
privatizing it. This is suggestive of the fact that the individual displayed
a relatively low utility of commercial profits vis-à-vis his preference for
research, which in turn affected his choice of staying public. In sum, the
model’s prediction that both private and public modes of provision can
coexist when at least some individuals adhere to community norms is
borne out, at least for one example.
We also discussed explicitly the role of complementary services or
enhanced features for non-inventor users in the provision of software.
This is clearly one of the motivations behind commercialization, as was
illustrated by the example of TeX. Table 6.2, which is drawn from data in
Renfro (2004), attempts to give an impression of the differences between
commercialized and non-commercialized software, admittedly using a
rather small sample. To the extent that ease of use can be characterized
by the full WIMP interface, there is no difference in the average perform-
ance of the two types of software. The main difference seems to be that the
commercialized packages are larger and allow both more varied and more
complex methods of interaction. Note especially the provision of a macro
facility to run previously prepared programs, which occurs in 84 percent
of the commercial programs, but only in two out of the five free programs.
Table 6.2  Comparing non-commercial and commercial software

Features                                           Share of non-     Share of
                                                   commercial (%)    commercial (%)
Full windows, icons, menus interface (WIMP)              60               60
Interactive use possible                                 60               68
Macro files can be executed                              40               84
Manipulate objects with icons/menus                      60               88
Generate interactive commands with icons/menus           20               60

Such programs are likely to require more user support and documentation
because of their complexity, which increases the cost of remaining in the
PD system. In short, as our earlier discussion suggested, a commercial
operation, which is likely to imply higher profits, also provides a greater
degree of additional investments beyond the mere availability of the
research inputs.
To summarize, the basic predictions of our model, which are that
participants in an open science community will defect to the private (IP-
using) sector when profit opportunities arise (e.g. the final demand for the
product grows, or IP protection becomes available) are confirmed by this
example. We also find some support for the hypothesis that commercial
operations are likely to undertake more complementary investments than
pure open source operations. We do not find widespread use of the GPL
idea in this particular niche market yet, although use of such a license
could evolve. In the broader academic market, Maurer (2002) reports that
a great variety of open source software licenses is in use, both viral (GPL,
LPL) and non-viral (BSD, Apache-CMU).
Finally, our model in Section 3 does not explicitly incorporate all the
factors that are clearly important in the case of software and databases.
Specifically, one area seems worthy of further development. We did not
model the competitive behavior of the downstream firms in the database
and software industries. In practice, in some cases, there is competition to
supply these goods, and in others, it is more common for the good to be
supplied at prices set by a partially price-discriminating monopolist. We
report the evidence on price discrimination for our sample briefly here.
Table 6.3 presents some very limited data for our sample of 30 econo-
metric software packages. Of the 30, five are distributed freely and a
further eight are distributed as services, possibly bundled with consulting
(such sales are essentially all commercial); this is the ‘added value’ business

Table 6.3 Price discrimination in econometric software

Price-discriminate? No. of packages


By size or complexity 3
Academic/commercial 10
No discrimination 2
NA 2

Sold as a service 8
Free 5

Total 30

model discussed earlier. Of the remaining 17, we were able to collect data
from their websites for 15. Of these, only two did not price-discriminate,
three discriminate by the size and complexity of the problem that can be
estimated, and ten by the type of customer, academic or commercial.12 A
number of these packages were also offered in ‘student’ versions at sub-
stantially lower prices, segmenting the market even further. This evidence
tends to confirm that in some cases, successful price discrimination is fea-
sible and can be used to serve the academic market while covering some of
the fixed costs via the commercial market.
Although price discrimination is widely used in these markets, it does
have some drawbacks as a solution to the problem of software provision.
The most important one is that features important to academics or even
programs important to academics may fail to be provided or maintained in
areas where there is either a very small commercial market or no market,
because their willingness to pay for them is much lower. Obviously this is
not a consequence of price discrimination per se, but simply of low will-
ingness to pay; the solution is not to eliminate price discrimination, but
to recognize that PD production of some of these goods is inevitable. For
example, a database of elementary particle data has been maintained by
an international consortium of particle physicists for many years. Clearly
such a database has little commercial market.

7. CONCLUSIONS

Among the activities that constitute academic research, the production
of software and databases for research purposes is likely to be especially
subject to underprovision and privatization. The reason is that, like most
research activities, the public-good nature of the output leads to free-
riding, but that the usual norms and rewards of the ‘Republic of Science’
are less available to their producer and maintainers, especially the latter.
In this chapter we presented a model that illustrates and formalizes these
ideas and we used the model to show that the GPL can be a way to ensure
provision of some of these goods, at least when the potential producers
also want to consume them.
Although we have emphasized the beneficial role of the GPL as a coor-
dination device for producing the public good, in these conclusions we
also want to point out that the GPL is not a panacea that works in all situ-
ations, and one of those situations may indeed be the production of sci-
entific software and databases. One reason is that in practice it is difficult
to distinguish between the ‘upstream’ activities, which, as we discussed,
ought to be produced under PD, and the ‘downstream’ ones. As we noted,
the latter may entail important complementary investments. Therefore
they could be more effectively conducted under private rules that enable
the producers to raise the rents necessary to perform such investments. But
the GPL ‘forces’ the contributors to work under PD rules. If one cannot
properly distinguish between upstream and downstream activities, the
downstream activities, with implied complementary investments, will also
be subject to PD rules. This makes it more difficult to raise the resources to
make the investments, with implied lower quality of the product.
To return to the example of the introduction, the TeX Users’ Group
reports the following on their website in answer to the FAQ ‘If TeX is so
good, how come it’s free?’:

It’s free because Knuth chose to make it so. He is nevertheless apparently happy
that others should earn money by selling TeX-based services and products.
While several valuable TeX-related tools and packages are offered subject to
restrictions imposed by the GNU General Public License (‘Copyleft’), TeX
itself is not subject to Copyleft. (http://www.tug.org)

Thus part of the reason for the spread of TeX and its use by a larger
number of researchers than just those who are especially computer-
oriented is the fact that the lead user chose not to use the GPL to enforce
the public domain, enabling commercial suppliers of TeX to offer easy-to-
use versions and customer support.
The so-called ‘lesser’ GPL (LGPL) or other similar solutions can in
part solve the problem. As discussed by Lerner and Tirole (2002), among
others, the LGPL and analogous arrangements make the public domain
requirement less stringent. They allow for the mixing of public and
private codes or modules of the program. As a result, the outcome of the
process is more likely to depend on the private incentives to make things
private or public, and this might encourage the acquisition of rents in the
downstream activities. But following the logic of our model, as we allow
for some degree of privatization, the efficacy of the license as a coordina-
tion mechanism is likely to diminish. We defer to future research a more
thorough assessment of this trade-off. Here, however, we want to note
that when the importance of complementary investments is higher, one
would expect the LGPL to be socially more desirable. The benefits of having
the downstream investments may offset the disadvantage of a reduced
coordination in the production of the public good. By contrast, when such
investments are less important, or the separation between upstream and
downstream activities can be made more clearly (and hence one can focus
the GPL only on the former), a full GPL system is likely to be socially
better.

ACKNOWLEDGMENTS

Conversations with Paul David on this topic have helped greatly in clarify-
ing the issues and problems. Both authors acknowledge his contribution
with gratitude; any remaining errors and inconsistencies are entirely our
responsibility. We are also grateful to Jennifer Kuan for bringing some of
the open source literature to our attention.
This chapter was previously published in Research Policy, Vol. 35, No.
6, 2006, pp. 875–92.

NOTES

1. This brief history of TeX is drawn from the TeX Users’ Group website, http://www.tug.
org. In giving a simplified overview, we have omitted the role played by useful programs
based on TeX such as LaTeX, etc. See the website for more information.
2. WYSIWYG is a widely used acronym in computer programming design that stands for
‘What You See Is What You Get’.
3. We can subsume both cases as instances of ‘patronage’ – self-patronage of the donated
efforts is a special case of this. See David (1993) and Dasgupta and David (1994).
4. There are many variants of a GPL, with different possibilities of privatizing future
contributions. See, for example, Lerner and Tirole (2005). However, in this chapter we
want to focus on some broad features of the effect of a GPL as a coordinating device,
and therefore we simply consider the extreme case in which the GPL prevents any pri-
vatization of the future contributions.
5. This argument should be familiar as it is the same as the argument used by some
to justify Bayh–Dole and the granting of exclusive licenses for development by
universities.
6. The usual commercial web-based provision of data is based on a model where the user
constructs queries to access individual items in the database, like looking up a single
word in the dictionary. The pricing of such access reflects this design and is ill suited (i.e.
very costly) for researcher use in the case where research involves studying the overall
structure of the data.
7. This can be a real cost. The US Patent Office, which provides a large patent database
free to the public at large on its web server, has a notice prominently posted on the
website saying that use of automated scripts to access large amounts of these data is
prohibited and will be shut down, because of the negative impact this has on individuals
making live queries.
8. Another type of academic information product deserves mention here, academic jour-
nals. The private sector producers of these journals face the same type of cost struc-
ture and have pursued a price discrimination strategy for many years, discriminating
between library and personal use, and also among the income levels of the purchasers
in some cases, where income level is proxied by country of origin.
9. In principle, in the aftermath of the (1981) Diamond v. Diehr decision, patent protec-
tion might also be available for some features of econometric software. In this area, as
in many other software areas, there is tremendous resistance to this idea on the part of
existing players, perhaps because they are well aware of the nightmare that might ensue
if patent offices were unacquainted with prior art in econometrics (as is no doubt cur-
rently the case).
10. Unfortunately, it is not possible to identify precisely the nature of the seed money
support for many of the packages from the histories supplied in Renfro (2003), other
than the simple fact that the development took place at a university.
11. This quotation is from Hermann Bierens’s website at http://econ.la.psu.edu/~hbierens/
EASYREG.HTM.
12. The average ratio of commercial to academic price was 1.7. Assuming an iso-elastic
demand curve with elasticity η and letting s = share of commercial (high-demand) cus-
tomers, one can perform some very rough computations using the relationship ΔQ/Q =
−η ΔP/P, or (1 − s) = 0.7η. If η = 1, then the implied share of academic customers is 70
percent. If the share of academic customers is only 30 percent, then the implied demand
elasticity is about 0.42.
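
    The back-of-the-envelope arithmetic in note 12 can be reproduced in a few
    lines of Python; this is our own check and uses only the figures quoted in
    the note.

    # Rough check of the elasticity arithmetic in note 12.
    price_ratio = 1.7                      # commercial price / academic price
    dP_over_P = price_ratio - 1.0          # proportional price difference, 0.7

    # With unit elasticity, the implied share of academic (low-price) customers:
    eta = 1.0
    academic_share = eta * dP_over_P       # (1 - s) = 0.7, i.e. 70 percent

    # Conversely, if the academic share is 30 percent, the implied elasticity:
    implied_eta = 0.30 / dP_over_P         # roughly 0.43, close to the note's 0.42
    print(academic_share, implied_eta)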

REFERENCES

Allen, R.C. (1983), 'Collective invention', Journal of Economic Behavior and
Organization, 4, 1–24.
Anton, J.J. and D.A. Yao (2002), ‘The sale of ideas: strategic disclosure, property
rights, and contracting’, Review of Economic Studies, 67, 585–607.
Arrow, K. (1962), ‘Economic welfare and the allocation of resources for invention’,
in R.R. Nelson (ed.), The Rate and Direction of Inventive Activity, Princeton, NJ:
Princeton University Press, pp. 609–25.
Cohen, W.M., R. Florida and L. Randazzese (2006), For Knowledge and Profit:
University–Industry Research Centers in the United States, Oxford: Oxford
University Press.
Cohen, W.M., R. Florida, L. Randazzese and J. Walsh (1998), ‘Industry and
the academy: uneasy partners in the cause of technological advance’, in R.
Noll (ed.), The Future of the Research University, Washington, DC: Brookings
Institution Press, pp. 171–99.
Collins, S. and H. Wakoh (1999), ‘Universities and technology transfer in
Japan: recent reforms in historical perspective’, University of Washington and
Kanagawa Industrial Technology Research Institute, Japan.
Dalle, J.M. (2003), ‘Open source technology transfer’, paper presented to the
Third EPIP Conference, Maastricht, The Netherlands, 22–23 November.
Dasgupta, P. and P.A. David (1994), 'Toward a new economics of science',
Research Policy, 23, 487–521.
David, P. (1993), ‘Knowledge, property, and the system dynamics of technological
change’, in Proceedings of the World Bank Annual Conference on Development
Economics 1992, Washington, DC: The World Bank, pp. 215–47.
David, P.A. (2002), ‘The economic logic of open science and the balance between
private property rights and the public domain in scientific data and information:
a primer’, National Research Council Symposium on The Role of the Public
Domain in Scientific and Technical Data and Information. National Academy
of Sciences, Washington, DC.
David, P.A. (2006), ‘A tragedy of the public knowledge “commons”? Global
science, intellectual property and the digital technology boomerang’, unpub-
lished paper, Stanford University, Stanford CA and All Souls College, Oxford
University, Oxford UK.
European Commission (2002), Draft directive on the patentability of computer-
implemented inventions (20 February), available at http://www.europa.eu.int/
comm/internal_market/en/indprop/comp/index.htm.
Foray, D. and L. Hilaire-Perez (2005), ‘The economics of open technology: col-
lective organization and individual claims in the Fabrique Lyonnaise during the
Old Regime’, in C. Antonelli, D. Foray, B.H. Hall and W.E. Steinmueller (eds),
Essays in Honor of Paul A. David, Cheltenham, UK and Northampton, MA,
USA: Edward Elgar, pp. 239–54.
Geuna, A. and L. Nesta (2006), ‘University patenting and its effects on academic
research: the emerging European evidence’, Research Policy, 35 (6), 790–807.
Hall, B.H. (2004), ‘On copyright and patent protection for software and databases:
a tale of two worlds’, in O. Granstrand (ed.), Economics, Law, and Intellectual
Property, Boston/Dordrecht: Kluwer Academic Publishers, pp. 259–78.
Hall, B.H. and R. Ziedonis (2001), ‘The determinants of patenting in the U.S.
semiconductor industry, 1980–1994’, Rand Journal of Economics, 32, 101–28.
Hall, B.H., A.N. Link and J.T. Scott (2001), ‘Barriers inhibiting industry from
partnering with universities’, Journal of Technology Transfer, 26, 87–98.
Harhoff, D., J. Henkel and E. von Hippel (2003), ‘Profiting from voluntary
information spillovers: how users benefit by freely revealing their innovations’,
Research Policy, 32, 1753–69.
Heller, M.A. and R.S. Eisenberg (1998), ‘Can patents deter innovation? The anti-
commons in biomedical research’, Science, 280, 698–701.
Hertzfeld, H.R., A.N. Link and N.S. Vonortas (2006), ‘Intellectual property
protection mechanisms and research partnerships’, Research Policy, 35 (6),
825–38.
Isabelle, M. (2004), ‘They invent (not patent) like they breathe: what are their incen-
tives to do so? Short tales and lessons from researchers in a public research organi-
zation’, paper presented at the Third EPIP Workshop, Pisa, Italy, 2–3 April.
Kuan, J. (2002), ‘Open source software as lead user’s make or buy decision: a study
of open and closed source quality’, Stanford University, CA.
Lee, Y.S. (2000), ‘The sustainability of university–industry research collabora-
tion’, Journal of Technology Transfer, 25, 111–33.
Lerner, J. and J. Tirole (2002), ‘Some simple economics of open source’, Journal of
Industrial Economics, 50, 197–234.
Lerner, J. and J. Tirole (2005), ‘The scope of open source licensing’, Journal of
Law, Economics and Organization, 21, 20–56.
Levinthal, D. and J.G. March (1993), 'The myopia of learning', Strategic
Management Journal, 14, 95–112.
Maurer, S.M. (2002), ‘Promoting and disseminating knowledge: the public/private
interface’, available at http://www7.nationalacademies.org/biso/Maurer_
background_paper.html, accessed 20 June 2008.
Merton, R.K. (1957), ‘Priorities in scientific discovery: a chapter in the sociology
of science’, American Sociological Review, 22, 635–59.
Merton, R.K. (1968), ‘The Matthew effect in science’, Science, 159 (3810),
56–63.
Nelson, R.R. (1959), ‘The simple economics of basic scientific research’, Journal of
Political Economy, 77, 297–306.
Nuvolari, A. (2004), ‘Collective invention during the British Industrial Revolution:
the case of the Cornish pumping engine’, Cambridge Journal of Economics, 28,
347–63.
Olson, M. (1971), The Logic of Collective Action: Public Goods and the Theory of
Groups, Cambridge, MA: Harvard University Press.
Owen-Smith, J. and W.W. Powell (2001), ‘To patent or not: faculty decisions and
institutional success at technology transfer’, Journal of Technology Transfer, 26,
99–114.
Renfro, C.G. (2003), ‘Econometric software: the first fifty years in perspective’,
Journal of Economics and Social Measurement, 29, 1–51.
Renfro, C.G. (2004), ‘A compendium of existing econometric software packages’,
Journal of Economics and Social Measurement, 29, 359–409.
Scotchmer, S. (1991), ‘Standing on the shoulders of giants: cumulative research
and the patent law’, Journal of Economic Perspectives, 5, 29–41.
Stephan, P.E. (1996), ‘The economics of science’, Journal of Economic Literature,
34, 1199–235.
Thursby, J., R. Jensen and M.C. Thursby (2001), ‘Objectives, characteristics and
outcomes of university licensing: a survey of major U.S. universities’, Journal of
Technology Transfer, 26, 59–72.
Von Hippel, E. (1987), ‘Cooperation between rivals: informal know-how trading’,
Research Policy, 16, 291–302.
Von Hippel, E. (1988), The Sources of Innovation, Oxford: Oxford University
Press.
Von Hippel, E. and G. von Krogh (2003), ‘Open source software and the private–
collective innovation model: issues for organization science’, Organization
Science, 209–23.

APPENDIX  A MODEL OF PUBLIC DOMAIN VERSUS PROPRIETARY RESEARCH

Set-up and Equilibrium

The total (indirect) utility of a researcher is U = z + q·X(n − 1), where
X(n − 1) is the stock of PD knowledge when n − 1 other researchers work
under PD, and q ≥ 0 is a parameter that measures how much they care
about the fact that others work under PD. Also, z = x(n) if they work
under PD, and z = p if they work under PR, where x(n) is the utility
that the researcher gains from her public contribution (assumed to be a
function of the number of PD researchers n) and p is the utility from the
monetary income. We assume that x(n) ≥ 0, and we make no assumption
about the impact of n on x. There could be diminishing returns, i.e. a
larger n implies smaller utility from one’s own contribution (e.g. because
fewer important discoveries can be made), or there could be externalities,
i.e. x increases with n, or both. Note that we assume that the researchers
enjoy the public contribution of the others even if they work under PR.
We could make more complicated assumptions, for example the impact of
X(n − 1) on utility is different according to whether the individual operates
under PD or PR, but this will not affect our main results.
A researcher will produce under PD if p ≤ x(n). We assume that the
individuals are heterogeneous in their preferences of PD versus PR, i.e.
[p – x(n)] ~ F(·|n), where the distribution function F depends on n because
of x(n). In principle, the support of p − x(n) is the whole real line. The
share of individuals working under PD is then F(0|n), and the equilibrium
number of researcher ne working under PD is defined by F(0|ne) = ne/N,
where N is the total number of researchers in the community. This condi-
tion says that in equilibrium the share of researchers working under PD is
equal to the share of researchers whose utility from p is not larger than the
utility of their contribution x(ne) to PD.
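
To illustrate this fixed-point condition numerically, the following Python sketch is our own example and not part of the formal model: the population size, the lognormal distribution assumed for p and the particular form assumed for x(n) are all hypothetical. It searches for values of n at which F(0|n) = n/N and keeps those where the F(0|n) curve cuts the n/N line from above, the stability requirement discussed below.

    import numpy as np

    # Hypothetical numerical illustration of the equilibrium condition F(0|n) = n/N.
    rng = np.random.default_rng(0)
    N = 1000
    p = rng.lognormal(mean=0.0, sigma=1.0, size=N)   # utility from PR (monetary) income

    def x(n):
        """Assumed utility of one's own PD contribution, mildly decreasing in n."""
        return 2.0 / (1.0 + 0.002 * n)

    def F0(n):
        """Share of researchers with p <= x(n), i.e. F(0|n)."""
        return np.mean(p <= x(n))

    # An equilibrium n_e satisfies F(0|n_e) = n_e/N; it is stable when the
    # F(0|n) curve crosses the n/N line from above.
    for n in range(1, N):
        if F0(n) - n / N >= 0 >= F0(n + 1) - (n + 1) / N:
            print(f"stable equilibrium near n = {n}, PD share = {n / N:.2f}")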
Figure 6A.1 depicts our equilibria. Point E in Figure 6A.1 is an equi-
librium because if the number of researchers working under PD increases
beyond ne, the share of researchers with p − x(n) ≤ 0 increases by less
than the share of researchers working under PD. But this is a contradic-
tion because for some of the researchers who have moved to PD it was
not profitable to do so. The reasoning is symmetric for the deviations
from PD to PR in equilibrium. Stability of the equilibrium requires that
the F(ne) curve cuts the ne/N line from above. This ensures that whenever
an individual deviates from the equilibrium, moving from PD to PR, the
share of individuals with p ≤ x(ne − 1), i.e., those who find it profitable to
operate under PD, is higher than the actual share of individuals working
under PD after the move, i.e. (ne − 1)/N. Hence the move is not profitable.

[Figure 6A.1  Equilibria: both panels plot F(n) and the line n/N against n; the left panel shows a single stable equilibrium (E), the right panel two stable equilibria (E1 and E2).]

Similarly, whenever an individual moves from PR to PD in equilibrium,
the share of researchers with p ≤ x(ne + 1) becomes smaller than the share
of researchers who now work under PD, i.e. (ne + 1)/N. The stability
conditions are then F(0| ne − 1) > (ne − 1)/N and F(0|ne + 1) < (ne + 1)/N.
Multiple equilibria are also possible. There may be more than one ne that
satisfies the equilibrium condition F(0|ne) = ne/N with F(n) cutting n/N from above, as shown by Figure 6A.1.
The share of researchers working under PD decreases if the economic
profitability of research increases relatively to the researchers’ utility from
their public contributions. This can be thought of as a first-order stochas-
tic downward shift in F(·) which would stem from an increase in p − x(n)
for all the individuals. Likewise, a stronger taste for research would be
represented by an upward shift in F as p − x(n) decreases for all the indi-
viduals. This raises ne.

Instability of PD Knowledge Production

To see why the production of knowledge under PD is unstable, suppose
that ne is an equilibrium and v researchers working under PR coordinate
to work under PD. If ne is an equilibrium, then p > x(ne + v), for at least
one of these researchers, otherwise ne + v would be an equilibrium. Yet,
it is possible that x(ne + v) + q∙X(ne + v − 1) > p + q·X(ne − 1) for all the v
researchers; i.e. if the v researchers coordinate, they are better off than in
equilibrium. To see this recall that x(n) ≡ X(n) − X(n − 1). Therefore x(ne
+ v) + q∙X(ne + v − 1) − q∙X(ne − 1) = x(ne + v) + qΣj x(ne + v − j), where the sum runs over j = 1, . . ., v. But
the expression in the summation sign is non-negative because x ≥ 0. Hence
this expression can be greater than p in spite of the fact that p > x(ne + v).
If the v researchers working under PR coordinate to work under PD, the
system is unstable because at least one of them can find it profitable to
deviate since he exhibits p > x(ne + v). Once he deviates, at least one of
the remaining v − 1 researchers has an incentive to deviate because ne +
v − 1 is not an equilibrium, and so on until all the v researchers have devi-
ated. At that point nobody else has an incentive to deviate because ne is an
equilibrium.
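
A small numerical check, again our own and with purely hypothetical parameter values, makes this coordination failure concrete. With x constant, so that X(n) = n·x, a joint move by v outside researchers is profitable for each of them whenever x + qvx > p, yet each member still gains p − x > 0 by defecting once the others have joined.

    # Hypothetical check of the instability argument with constant x.
    x, q, p = 1.0, 0.5, 2.0          # p > x, so PR is individually optimal
    v, n_e  = 5, 20                  # v coordinating researchers, n_e incumbent PD researchers

    X = lambda n: n * x              # stock of PD knowledge when x is constant

    in_equilibrium = p + q * X(n_e - 1)        # payoff of staying under PR
    coordinated    = x + q * X(n_e + v - 1)    # payoff after all v switch to PD together
    defect         = p + q * X(n_e + v - 1)    # payoff from free-riding once the others joined

    print(coordinated > in_equilibrium)        # True: the joint switch is profitable . . .
    print(defect > coordinated)                # True: . . . but each member gains by deviating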

The GPL Model

Let B be an opportunity cost faced by an originator and any potential con-
tributor to a project, with [B − x(n)] ~ G(·|n). We assume that even if the
contributors do not join the project, they still enjoy qX(n) from the project
if n researchers work on it under PD. That is, their indirect utility is B +
qX(n). Let nG and nNG be the equilibrium number of contributors joining
the project under PD if the originator launches the project, works under
PD and attaches a GPL or not. Finally, let ñNG be the number of contribu-
tors under PD if the originator launches the project under PR (and cannot
attach a GPL to it).
If the originator launches the project, works under PD and does not
issue a GPL, the contributors working under PD will exhibit p ≤ x(nNG +
1) and B ≤ x(nNG + 1). If G(·) is the joint distribution function of p − x and
B − x, in equilibrium we have G(0, 0 | nNG) = nNG/N. With a GPL, the con-
dition becomes less restrictive, because only B ≤ x is required. As a result,
with a GPL, G(0|nNG) > nNG/N. This will induce some researchers to join
under PD, because the share of researchers with B ≤ x is larger than the
share of researchers actually working under PD, that is, nNG/N. Provided
that the stability conditions discussed above hold, the movement towards
PD will stop at nG such that G(0|nG) = nG/N. This implies nG ≥ nNG; i.e. a
GPL induces more researchers to work under PD.
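
The inequality nG ≥ nNG can be illustrated with a short simulation; this is our own sketch, and the joint distribution assumed for p and B and the constant value of x are hypothetical. Without a GPL a contributor joins under PD only if both p ≤ x and B ≤ x hold; with a GPL only B ≤ x is required.

    import numpy as np

    # Hypothetical illustration that a GPL enlarges the set of PD contributors.
    rng = np.random.default_rng(1)
    N = 2000
    p = rng.lognormal(0.2, 1.0, N)    # utility of working under PR
    B = rng.lognormal(0.0, 1.0, N)    # opportunity cost of joining the project
    x = 1.0                           # utility of one's own PD contribution (held constant)

    def equilibrium(joins):
        """With x constant, the equilibrium number of PD contributors is simply
        the number of researchers for whom joining under PD is worthwhile."""
        return int(np.mean(joins) * N)

    n_NG = equilibrium((p <= x) & (B <= x))   # no GPL: both conditions must hold
    n_G  = equilibrium(B <= x)                # GPL: only the opportunity-cost condition
    print(n_NG, n_G, n_G >= n_NG)             # the GPL equilibrium is at least as large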
If the originator launches the project under PR, the contributors can
still join under PD. These will be all those with p − x(ñNG) ≤ 0 and B −
x(ñNG) ≤ 0. The difference with the previous no-GPL case is only that the
originator does not join the project under PD. More generally, the same
reasoning as above applies here, and nG ≥ ñNG. As a matter of fact, if n is
large enough, ñNG ≈ nNG.

Given the behavior of the contributors, will the originator who launches
the project working under PD issue a GPL? With PD-GPL his utility will
be x(nG + 1) + qX(nG). With PD and no GPL it will be x(nNG + 1)+qX(nNG).
By using the fact that x(n) ≡ X(n) − X(n − 1), the former will be higher
than the latter if X(nG + 1) − (1 − q)X(nG) ≥ X(nNG + 1) − (1 − q)X(nNG). A
sufficient condition for this inequality to hold is that q ≥ 1. This follows

Table 6A.1 Comparing researcher actions with and without a GPL

Researchers’ set Action under no GPL Action under GPL


B ≤ x; p ≤ x Join under PD Join under PD
B ≤ x; p ≥ x (p ≥ B) Join under PR Join under PD
B ≥ x; p ≤ x (p ≤ B) Not join Not join
B ≥ x; p ≥ x Join under PR if p ≥ B Not join
Not join if p ≤ B

from nG ≥ nNG and the fact that X(n) increases with n, which in turn follows
from x(n) ≥ 0. Thus, if the originator chooses to work under PD, setting
a GPL will be a dominant strategy unless q is close to zero (i.e. the impact
of the others’ behavior is not that important) and some special conditions
occur. For simplicity, we assume that q is large enough, and therefore
choosing a GPL always dominates when the originator chooses PD.
If the originator chooses PR, his utility will be p + qX(ñNG). As a result,
the originator will choose to work on the project under PD (and issue
a GPL) if x(nG + 1) + q ∙ X(nG) ≥ B, and x(nG + 1) + q ∙ X(nG) ≥ p + q ∙
X(ñNG). If there were no GPL, the condition would be the same with nNG in
lieu of nG. Since nG ≥ nNG, with no GPL the condition becomes more restric-
tive. As a result, the possibility to use a GPL implies not only that more
researchers will join under PD, but also that more projects will be launched
under PD with a GPL.
As discussed in the text, the GPL is least effective when there is a strong
positive correlation between B and p. This implies that many individuals
with small B tend to have a small p as well. As a result, the restriction p ≤
x associated with B ≤ x does not restrict the set of PD researchers much
more than B ≤ x alone, which means that nG is close to nNG, and the addi-
tional set of PD researchers created by the GPL is not large. In turn, this
implies that the GPL does not encourage a more intensive coordination
than without it.
The GPL raises the number of contributors working under PD in spite
of the fact that the total number of contributors to the project decreases.
To see this, assume for simplicity that x is roughly constant with respect
to n, so that x(nNG) ≈ x(nG) ≡ x. Consider Table 6A.1, which shows that,
with a GPL, some researchers who joined under PR switch to PD, while
the opposite is not true. The researchers who no longer join the project
with the GPL are only those who joined under PR. Thus they do not affect
n in equilibrium.
PART II

Triple helix in the knowledge economy


7. A company of their own:
entrepreneurial scientists and the
capitalization of knowledge
Henry Etzkowitz

INTRODUCTION: THE ACADEMIC WORLD HAS TURNED

The capitalization of knowledge is an emerging mode of production
(Etzkowitz, 1983). Until the past few decades, a sceptical view of firm
formation was the taken-for-granted perspective of most faculty members
and administrators at research universities. Since 1980, an increasing
number of academic scientists have broadened their professional interests,
from a single-minded interest in contributing to the literature, to making
their research the basis of a firm. Formerly largely confined to a special-
ized academic sector, firm formation has spread to a broad range of uni-
versities: public and private; elite and non-elite. In addition, a complex
web of relationships has grown up among university-originated startups
in emerging industries and older and larger firms in traditional industries.
Often the same academic scientists are involved with both types of firms,
managing a diversified portfolio of industrial interactions (Powell et al.,
2007).
An entrepreneurial science model, combining basic research and teach-
ing with technological innovation, is displacing the ‘ivory tower’ of knowl-
edge for its own sake. In the mid-1980s, a faculty member at Stanford
University reviewed his colleagues’ activities: ‘In psychiatry there are a
lot of people interested in the chemistry of the nervous system and two
of them have gone off to form their own company.’ Another Stanford
professor, during the same period, estimated that

In electrical engineering about every third student starts his own company.
In our department [computer science] it’s starting as well. That’s a change in
student behaviour and faculty acceptance because the faculty are involved in
companies and interacting a lot with companies and the attitude is . . . we talk
to them, we teach them. Why not try it . . . this is my experience.

While still significant, the 'barrier to entry' to firm formation is decreasing,
especially as universities develop mechanisms to assist the process.
Until quite recently, pursuing the ‘endless frontier’ of basic research
was the primary ideological justification of elite US academic institu-
tions. Harvard University was the ideal, with numerous schools iden-
tifying themselves as the ‘Harvard’ of their respective regions. With an
entrepreneurial mode increasingly followed at Harvard, and at academic
institutions that model themselves upon it, the prediction that MIT
would eventually conform to the ivory tower mode has been disconfirmed
(Geiger, 1986). Instead, the reverse has occurred as universities take up the
‘land grant’ mission of regional economic development and capitalization
of knowledge, MIT’s founding purpose (Etzkowitz, 2002). This chapter
discusses the impetuses to firm formation arising from the nature of the
US research university system and its built-in drivers of transition to an
entrepreneurial academic model.

A COMPANY OF THEIR OWN

The relatively new existence of regularized paths of academic entrepreneur-
ship, as a stage in an academic career or as an alternative career, is only of
interest to some faculty; others prefer to follow traditional career paths.
However, for some members of the professoriate, participation in the
formation of a firm has become an incipiently recognizable stage in an aca-
demic career, located after becoming an eminent academic figure in science.
For others, typically at earlier stages of their career, either just before or
after being granted permanent tenure, such activity may lead to a career
in industry outside the university. As one faculty member put it: ‘Different
people I have known have elected to go different ways . . . some back to their
laboratories and some running technology companies. You can’t do both.’
This difficulty has not prevented other professors from trying. A typical
starting point is recruitment to the Scientific Advisory Board of a firm. The
discussions that take place in this venue typically include the business impli-
cations of the firm’s research strategy, providing the neophyte academic
entrepreneur with the equivalent of Entrepreneurial Science 101.
The approach of leaving it up to the technology transfer office to find a
developer and marketer for a discovery precisely met the needs of many
faculty members, then and now, who strictly delimit their role in putting
their technology into use. A faculty member delineated this perspective on
division of labour in technology transfer: ‘It would depend on the transfer
office expertise and their advice. I am not looking to become a businessper-
son. I really am interested in seeing if this could be brought into the market.
I think it could have an impact on people's lives. It is an attractive idea.'
This attitude does not necessarily preclude a startup firm but it does exclude
the possibility that the faculty member will be the lead entrepreneur.
A stance of moderate involvement is becoming more commonplace,
with scientists becoming knowledgeable and comfortable operating in a
business milieu while retaining their primary interest and identity as aca-
demic scientists. A faculty member exemplifying this approach expressed
the following view:

In science you kind of sit down and you share ideas . . . There tends to be a
very open and very detailed exchange. The business thing when you sit down
with somebody, the details are usually done later and you have to be very
careful about what you say with regard to details because that is what business
is about: keeping your arms around your details so that you can sell them to
somebody else, otherwise there is no point.

Faculty are learning to calibrate their interaction to both scientific and
business needs, giving out enough information to interest business persons
in their research but not so much so that a business transaction to acquire
the knowledge becomes superfluous. Another researcher said, ‘I am think-
ing about what turns me on, in terms of scientific interest and the money
is something if I can figure out how to get it then it is important but it is
certainly not the most important thing to me.’ The primary objective is still
scientific; business objectives are strictly secondary.
There has been a significant change of attitude among many faculty
members in the sciences towards the capitalization of knowledge. Three
styles of participation have emerged, reflecting increasing degrees of
industrial involvement. These approaches can be characterized as (1)
hands off, leaving the matter entirely to the technology transfer office; (2)
knowledgeable participant, aware of the potential commercial value of
research and willing to play a significant role in arranging its transfer to
industry; and (3) seamless web, integration of campus research group and
research programme of a firm. Of course, many faculty fit in the fourth cell
of ‘no interest’ or non-involvement. These researchers are often referred
to under the rubric of the federal agency that is their primary source of
support, as in ‘She is an NIH person’. While still a minority interest of a
relatively small number of academics at most universities, the prestige of
an entrepreneurial undertaking has risen dramatically.

Entrepreneurial Scientists

A significant number of faculty members have adopted multiple objec-
tives: ‘To not only run a successful company . . . and start a centre here
[at the university] that would become internationally recognized but to
retain their traditional role as individual investigator’, directing a research
group. An ideal-typical entrepreneurial scientist held that the ‘interaction
of constantly going back and forth from the field, to the university lab, to
the industrial lab, has to happen all the time’. These relationships involve
different levels of commitment (financial and otherwise) by industrial
sponsors, including the involvement of industrial sponsors in problem
selection and research collaboration.
As industrial sectors and universities move closer together, informal
relationships and knowledge flows are increasingly overlaid by more inten-
sive, formal institutional ties that arise from centres and joint projects.
Firms formed by academics have been viewed in terms of their impact
on the university but they are also ‘carriers’ of academic values and prac-
tices into industry and, depending upon the arrangements agreed upon, a
channel from industry back to the university. In these latter circumstances,
traditional forms of academic–industry relations, such as consulting and
liaison programmes that encourage ‘knowledge flows’ from academia to
industry, become less important as an increasing number of large firms
acquire academic startups for sources of new products (Matkin, 1990).
The conduct of academic science is also affected by a heightened interest
in its economic potential. As companies externalize their R&D, they want
more tangible inputs from external sources such as universities. As one
close observer from the academic side of the equation put it, ‘From the
point of view of the company, they tend to want a lot of bang for the buck
. . . [they] tend to not get involved in Affiliates programs precisely because
they can’t point to anything.’ The growth of centres and the formation
of firms from academic research have had unintended consequences that
have since become explicit goals: the creation of an industrial penumbra
surrounding the university as well as an academic ethos among firms that
collaborate with each other in pre-competitive R&D through joint aca-
demic links. In 2001, for example, 626 licences from US university technol-
ogy transfer offices formed the basis of 494 startup firms (AUTM, 2005).

Industry–University Relations Transformed

From an industrial perspective, universities have been viewed as a source
of human capital – future employees – and, secondarily, as a source of
knowledge useful to the firm. In this view, the academic and industrial
spheres should each concentrate on their respective functions and interact
across distinct, strongly defended, boundaries. The hydraulic assumptions
of knowledge flows include reservoirs, dams and gateways that facilitate
and regulate the transmission of information between institutional spheres
with distinctly different functions (e.g. academia: basic research; compa-
nies: product development).

[Figure 7.1  The capitalization of knowledge: a matrix of university input against industry contribution, with cells for quasi-firm research, the high-tech growth firm, spin-offs, and mid- and low-tech firms.]

From this perspective, what industry needs
from academic researchers is basic research knowledge; therefore universi-
ties should focus on their traditional missions of research and education,
their unique function (Faulkner and Senker, 1995).
However, the organization of academic research, especially in the sci-
ences, has been transformed from a dyadic relationship between profes-
sor and research student to a research group with firm-like qualities,
even when the objective is unchanged (see Figure 7.1). Research findings
with recognizable commercial potential emerge, but there is typically a gap
between potential and demonstrated utility that requires a bridge in the
form of a startup firm. Such a firm may develop
into a full-fledged commercial firm but in many cases it is purchased by a
larger firm once product viability is demonstrated. Indeed, this outcome is
often the firm founders’ objective.
Intensive relationships occur with a group of firms that have grown out of
university research and are still closely connected to their originary source
and with a third group that, given the rapid pace of innovation in their
industrial sector, have externalized some of their R&D and seek to import
technologies or engage in joint R&D programmes to develop them (Rahm,
1996). In a fourth group of companies with little or no R&D capacity,
relations with academia, if any, will also be informal through engaging an
academic consultant to test materials or trouble-shoot a specific problem.

THE ORIGINS OF ENTREPRENEURIAL SCIENCE

The formation of firms out of research activities occurred in the late nine-
teenth century at Harvard, as well as at MIT, in the fields of industrial con-
sulting and scientific instrumentation (Shimshoni, 1970). However, these
commercial entities were viewed as anomalies rather than as a normal
outcome of academic research. In recent decades, an increasing number of
academic scientists have taken some or all of the steps necessary to start
a scientific firm by writing business plans, raising funds, leasing space,
recruiting staff and so on (Blumenthal et al., 1986a; Krimsky et al., 1991).
These studies probably underestimate the extent of faculty involvement,
especially in molecular biology. For example, in the biology department at
MIT, where surveys identified half the faculty as industrially involved in
the late 1980s, an informant could identify only one department member
as uninvolved at the time.
While the model of separate spheres and technology transfer across
strongly defined boundaries is still commonplace, academic scientists are
often eager and willing to marry the two activities, nominally carrying
out one in their academic laboratory and the other in a firm with which
they maintain a close relationship. Thus, technology transfer is a two-way
flow from university to industry and vice versa, with different degrees and
forms of academic involvement:

● the product originates in the university but its development is under-
taken by an existing firm;
● the commercial product originates outside of the university, with
academic knowledge utilized to improve the product, or
● the university is the source of the commercial product and the aca-
demic inventor becomes directly involved in its commercialization
through establishment of a firm.

Potential products are often produced as a normal part of the research
process, especially as software becomes commonplace in collecting and
analysing data. As a faculty member commented in the mid-1980s, ‘In
universities we tend to be very good at producing software, [we] produce
it incidentally. So there is a natural affiliation there. My guess is a lot of
what you are going to see in university–industry interaction is going to be
in the software area.’ In the 1990s this phenomenon spread well beyond
the research process, with software produced in academia outside of the
laboratory, and startups emerging from curriculum development and
other academic activities (Kaghan and Barnet, 1997).

IMPETUSES TO ACADEMIC FIRM FORMATION

The appearance of commercializable results in the course of the academic
research process, even before scientists formulate their research pro-
grammes with the intention of seeking such results or universities reorder
their administrative processes in order to capture them, is the necessary
cause for the emergence of entrepreneurial science. In addition to the
opportunity presented by research results with commercial potential,
entrepreneurial science has several sufficient causes, both proximate and
long term, that encourage academic scientists to utilize these opportuni-
ties themselves rather than leaving them to others. Entrepreneurial impe-
tuses in US academic science include the stringency in federal research
funding, a culture of academic entrepreneurship originating in the seeking
of government and foundation funds to support research, examples of
colleagues’ successful firms, and government policies and programmes
to translate academic research into industrial innovation (Etzkowitz and
Stevens, 1995).

Research Funding Difficulties

Although federal investment in academic R&D increased during the
1990s, academic researchers strongly perceived a shortfall of resources
during this period (National Science Board, 1996). The explanation of
this paradox lies in the expansionary dynamic inherent in an academic
research structure, based upon a PhD training system that produces
research as a by-product. The expansionary dynamic is driven by the
ever-increasing number of professors and their universities who wish them
to engage in research. Formerly this pressure was largely impelled by the
wish to conform to the prevailing academic prestige mode associated
with basic research. In recent years, expansionary pressure has intensified
due to increased attention to the economic outcomes of basic research,
which has drawn less research-intensive areas of the country into the competi-
tion to expand the research efforts of local universities as an economic
development strategy.
With the notable exception of a relatively brief wartime and early
postwar era, characterized by rapidly expanding public resources for
academic research, US universities have always lived with the exigencies
of scarce resources. The increasing scale and costliness of research is also
a factor. As traditional sources of research funding were unable to meet
ever-expanding needs, academics sought alternative sources such as indus-
trial sponsors. A faculty member discussed his involvement with industry:
‘In some areas we have found it necessary to go after that money. As the
experimental needs in computer science [increase], equipment needs build
up. People realize that a small NSF grant just doesn’t hack it anymore.’
Nevertheless, there are cultural and other barriers to overcome before a
smooth working relationship can be established. A professor described
the dilemma: ‘It’s harder work with industry funding than federal funding,
harder to go through the procurement process, to negotiate the terms of
the contract.’ Industry’s expectations for secrecy, for example, are some-
times unreasonable. Dissatisfaction with working with existing companies
is another reason professors offer for starting their own company.
Creating an independent financial base to fund one’s own research is
a significant motivator of entrepreneurial activity. Stringency in federal
research funding has led academic scientists to broaden their search for
research support from basic to applied government programmes and vice
versa. A possible source that has grown in recent years has been research
subventions from companies, including firms founded by academics them-
selves for this purpose, driving industrial support of academic research
up from a low of 4 per cent to a still modest 7 per cent during the 1980s.
Concomitant with a shift from military to commercial criteria, new sources
of funding for academic research have opened up in some fields that have
experienced a rise in practical significance, such as the biological sciences,
making the notion of stringency specific to others, such as nuclear physics,
which has experienced a decline (Blumenthal et al., 1986a).
Nevertheless, federal research agencies are still the most important
external interlocutors for academic researchers. A department chair at Cal
Tech noted:

The amount of money from industry is a pittance in the total budget, therefore
everybody’s wasting their time to try to improve it . . . it’s still a drop in the
bucket . . . we were running about 3 per cent total in our dept. We do value our
industrial ties . . . have good friends, interact strongly with them in all kinds of
respects and . . . the unrestricted money is invaluable, you wouldn’t want to lose
a penny of it and would like to increase it a lot, but its impact vis-à-vis federal
funding is almost non-existent.

In contrast to this view, based upon the current source of resources,
others look towards realizing greater value from the commercial potential
of research.
Even without creating a firm themselves, academic scientists can earn
funds to support their research by making commercializable results avail-
able for sale to existing firms. As a Stanford faculty member described the
process:

It’s also motivating for us to try to identify things that we do that may be
licensable or patentable and to make OTL [Office of Technology and Licensing]
aware of that because according to University policy, 30 per cent of the money
comes back to the scientist, 30 per cent comes back to the Department as well
as 30 per cent for the University. So, almost all the computing equipment and
money for my post docs have been funded by the work that we did. So there’s
motivation.

Earlier in the twentieth century, experiencing difficulties in the French
research funding system, the Curies considered exploring the commercial
possibilities of their radium discovery for just this purpose (Quinn, 1995).
Tapping capital markets through a public offering of stock is an addi-
tional source of research funding, especially in biotechnology-related
fields, although one not yet recognized in Science Indicators volumes! In
response to the increasingly time-consuming task of applying for federal
research grants, a faculty member said that ‘another way to get a whole
bunch of money . . . is to start a company’. After resisting the idea of start-
ing a firm in favour of establishing an independent non-profit research
institute, largely supported by corporate and governmental research
funds, two academic scientists returned to an idea they had earlier rejected
of seeking venture capital funding. As one of the founders explained their
motivation for firm formation, ‘Post docs who are really good will want
to have some place that at least guarantees their salary. And that we were
not able to do. It was for that reason we decided to start the company.’
Other scientists realized that they could combine doing good science with
making money by starting a company. As they enhanced their academic
salaries through earnings from entrepreneurial ventures, and continued to
publish at a high rate, they lost any previous aversion to the capitalization
of knowledge (Blumenthal et al., 1986b).

The Industrial Penumbra of the University

The success of the strategy to create a penumbra of companies surround-
ing the university has given rise to an industrial pull upon faculty members.
For example, a faculty member reported:

The relationship with Collaborative is ongoing daily. We are always talking
about what project we are going to do next. What the priority is, who is
involved, there are probably six projects, a dozen staff members and maybe
close to a dozen people scattered around three or four different departments on
campus that are doing things with them.

Geographical proximity makes a difference in encouraging appropriate
interaction. Such intensive interaction sheds new light on the question of
industrial influence on faculty research direction and whether this is good,
bad or irrelevant. Thus the ‘issue of investigator initiation is much more
complicated because I am bringing my investigator initiated technology to
their company initiated product. It is a partnership in which each partner
brings his own special thing. That is the only reason they are talking. Do
your thing on our stuff.’ Previous conflicts based on an assumption of
a dividing line between the academic and industrial sides of a relation-
ship are superseded as divisions disappear. A more integrated model of
academic–industry relations is emerging along with a diversified network
of transfer institutions. Indeed, the very notion of technology transfer, or
at least transfer at a distance, is superseded as universities develop their
own industrial sector.
Not surprisingly, a receptive academic environment is an incentive
to entrepreneurship, while a negative one is a disincentive. MIT and
Stanford University are the exemplars of firm formation as an academic
mission. In the 1930s the president of MIT persuaded the leadership of the
New England region to make the creation of companies from academic
research the centrepiece of their regional economic development strategy
(Etzkowitz, 2002). At Stanford, Frederick Terman, dean of engineering,
provided some of the funds to help two of his former students, Hewlett
and Packard, to form their firm just before World War II. A faculty
member commented: ‘Because it has been encouraged here from its incep-
tion, it makes it easy to become involved [in firm formation]. [there are] . . .
more opportunities . . . people come in expecting it.’ On the other hand, at
a university noted for its opposition to entrepreneurial activity, an admin-
istrator noted that despite the disfavour in which it is held, ‘There have
been some [firms founded] . . . it’s frowned upon. It takes a lot of time and
the faculty are limited . . . in the amount of time they have.’ Under these
conditions, procedures that could ameliorate conflicts are not instituted
and faculty who feel constricted by the environment leave.
A surrounding region filled with firms that have grown out of the university
is also a significant impetus to future entrepreneurial activity. The existence
of a previous generation of university-originated startups provides consult-
ing opportunities, even for faculty at other area universities. A Stanford pro-
fessor noted: ‘in the area there’s a lot of activity and that tends to promote
the involvement of people’. From their contact with such companies, faculty
become more knowledgeable about the firm formation process and thus
more likely to become involved. Faculty who have started their own firms
also become advisers to those newly embarking on a venture. An aspiring
faculty entrepreneur recalled that a departmental colleague who had formed
a firm ‘gave me a lot of advice . . . he was the role model’. The availability of
such role models makes it more likely that other faculty members will form a
firm out of their research results when the opportunity appears.
Once a university has established an entrepreneurial tradition, and a
number of successful companies, fellow faculty members can offer mate-
rial, in addition to moral, support to their colleagues who are trying
to establish a company of their own. A previous stratum of university-
originated firms and professors who have made money from founding
their own firms creates a potential cadre of ‘angels’ that prospective aca-
demic firm founders can look to in raising funds to start their firms. Early
faculty firm founders at MIT were known on campus for their willingness
to supply capital to help younger colleagues.

Normative Impetus to Firm Formation

In an era when results are often embodied in software, sharing research
results takes on a dimension of complexity well beyond reproducing and
mailing a preprint or reprint of an article. Software must be debugged,
maintained, enhanced and translated to different platforms to be useful.
These activities require organizational and financial resources well beyond
the capacity of an academic lab and its traditional research supporters,
especially if the demand is great and the software complex. As one of the
researchers described the dilemma of success, ‘We had an NSF [National
Science Foundation] Grant that supported [our research] and many people
wanted us to convert our programs to run on other machines. We couldn’t
get support (on our grant) to do that and our programs were very popular.
We were sending them out to every place that had machines available that
could run them.’ The demand grew beyond the ability of the academic
laboratory to meet it.
Firm formation is also driven by the norms of academic collegiality,
mandating sharing of research results. When the federal research support
funding system was not able or willing to expand the capabilities of a labo-
ratory to meet the demand for the software that its research support had
helped create, the researchers reluctantly turned to the private sector. They
decided that, ‘Since we couldn’t get support, we thought perhaps the com-
mercial area was the best way to get the technology that we developed here
at Stanford out into the commercial domain.’ This was a step taken only
if they failed to receive support from NSF and NIH [National Institutes
of Health] to distribute the software. ‘The demonstration at NIH was
successful, but they didn’t have the funds to develop this resource.’ The
researchers also tried and failed to find an existing company to develop
and market the software. As one of the researchers described their efforts,
‘We initially looked for companies that might license it from us . . . none
were really prompted to maintain or develop the software further.’ Failure
to identify an existing firm to market a product is a traditional impetus
to inventors, who strongly believe in their innovation, to form a firm
themselves to bring their product to market.
Chemists involved with molecular modelling, previously a highly theo-
retical topic, have had to face the exigencies of software distribution as
their research tools increasingly became embodied in software. Since the
interest in the software is not only from academic labs but from companies
that can afford to pay large sums, the possibility opens up of building a
company around a programme or group of programmes and marketing
them to industry at commercial rates while distributing them to academia
at a nominal cost. Academic firm founders thus learn to balance aca-
demic and commercial values. In one instance, as members of the Board,
the academics were able to influence the firm to find a way to make a
research tool available to the academic community at modest cost. An
academic described the initial reaction to the idea: ‘The rest of the board
were venture capitalists, you can imagine how they felt! They required we
make a profit.’ On the other hand, ‘It was only because we were very aca-
demically oriented and we said, “Look, it doesn’t matter if this company
doesn’t grow very strongly at first. We want to grow slow and do it right
and provide the facilities to academics”.’ The outcome was a compro-
mise between the two sides, meeting academic and business objectives
at the same time, through the support of a government research agency to
partially subsidize academic access to the firm’s product.
There is some evidence that firms spin out of interdisciplinary research
or, at least, that some such collaborations are a significant precursor to
firm formation. As one academic firm founder described the origins of his
firm, ‘If it had not been for the collaboration between the two departments
[biochemistry, computer science], intimate, day-to-day working [together],
it never would have happened. Intelligenetics and Intellicorp grew out of
this type of collaboration. We had GSB [the Graduate School of Business],
the Medical School and Computer Science all working together.’ In this
model, the various schools of the university contributed to the ability
of the university to spin out firms, providing specialized expertise well
beyond the original intellectual property.

Academic Entrepreneurial Culture

An entrepreneurial culture within the university encourages faculty to
look at their research results from a dual perspective: (1) a traditional
research perspective in which publishable contributions to the literature
are entered into the ‘cycle of credibility’ (Latour and Woolgar, 1979)
and (2) an entrepreneurial perspective in which results are scanned for
their commercial as well as their intellectual potential. A public research
university that we studied experienced a dramatic change from a single to
a dual mode of research salience. A faculty member who lived through
the change described the process: ‘When I first came here the thought of a
professor trying to make money was anathema . . . really bad form. That
changed when biotech happened.’
Several examples of firm formation encouraged by overtures from
venture capitalists led other faculty, at least in disciplines with similar
opportunities, to conclude: ‘Gosh, these biochemists get to do this
company thing, that’s kind of neat, maybe it’s not so bad after all.’
Although some academics working in the humanities and science policy
experts remain concerned that the research direction of academic science
will be distorted (Krimsky et al., 1991), serious opposition dissipated as
leading opponents of entrepreneurial ventures from academia, such as
Nobel Laureate Joshua Lederberg of Rockefeller University, soon became
involved with firms themselves, in his case, Cetus.
A research group within an academic department and a startup firm
outside are quite similar despite apparent differences represented by the
ideology of basic research, on the one hand, and a corporate legal form
on the other. As an academic firm founder summed up the comparison,
‘the way [the company] is running now it’s almost like being a professor
because it’s all proposals and soft money’. There is an entrepreneurial
dynamic built into the US research funding system, based on the premise
that faculty have the primary responsibility for obtaining their own
research funds. As a faculty member described the system, ‘It’s amazing
how much being a professor is like running a small business. The system
forces you to be very entrepreneurial because everything is driven by
financing your group.’ At least until a startup markets its product or is
able to attract funding from conventional financial sources, the focus of
funding efforts is typically on a panoply of federal and state programmes
that are themselves derived from the research funding model and its peer
review procedures.
As a faculty entrepreneur viewed the situation, ‘What is the difference
between financing a research group on campus and financing a research
group off campus? You have a lot more options off campus but if you
go the federal government proposal route, it’s really very similar.’ The
entrepreneurial nature of the US academic research system helps explain
why faculty entrepreneurs typically feel it is not a great leap from an on-
campus research group to an off-campus firm.
A typical trajectory of firm formation is the transition from an individ-
ual consulting practice, conducted within the parameters of the one-fifth
rule, to a more extensive involvement, leading to the development of tangi-
ble products. A faculty member described his transition from consulting to
firm formation: ‘It got to the point where I was making money consulting
and needed some sort of corporate structure and liability insurance; so I
started [the company] a couple of years ago. From me [alone, it has grown]
to eight people. We’re still 70 per cent service oriented, but we do produce
better growth media for bacteria and kits for detecting bacteria.’ The firm
was built, in part, on the university’s reputation but was symbiotic in that
its services to clients brought them into closer contact with on-campus
research projects.
In another instance, an attempt was made to reconcile the various
conflicting interests in firm formation and make them complementary,
with the university taking some equity in the company and
holding the initial intellectual property rights.
Despite the integrated mode arrived at, some separation, worked out on
technical grounds, was still necessary to avoid conflicts.

There is no line. It’s just a complete continuum. It is true that I have a notebook
that says [university name] and a notebook that says [firm name] and if I make
an invention in the [company] notebook then the assignment and the exclusive
license goes to [the firm] and if I make an invention in the university notebook
then the government has rights to the invention because they are funding the
work. [Interviewer: How do you decide which notebook you are going to write
in?] We have ways of dividing it up by compound class. In the proposals that
I write to the government I propose certain compound classes. There is no
overlap between the compound classes that we work on campus and the com-
pound classes that we work on off campus so there is a nice objective way of
distinguishing that.

The technical mode of separation chosen, by compound classes, sug-
gests that while boundaries have eroded as firm and university cooperate
closely to mount a joint research effort, a clear division of labour persists.
Once the university accepted firm formation and assistance to the local
economy as an academic objective, the issue of boundary maintenance was
seen in a new light. An informant noted: ‘When the university changed its
attitude toward entrepreneurial ventures, one consequence was that the
administration renegotiated its contract with the patent management firm
that dealt with the school’s intellectual property.’ A new sentence said, ‘If
the university chooses to start an entrepreneurial new venture based upon
the invention then the university can keep the assignment and do whatever
it wants. [Interviewer: Why did the university make that change?] Because
the university decided that it wanted to encourage faculty to spin off these
companies.’ Organizational and ideological boundaries between academia
and industry were redrawn, with faculty encouraged to utilize leave pro-
cedures to take time to form a firm, and entrepreneurial ventures noted as
contributing to research excellence in university promotional literature.

CONCLUSION: THE CAPITALIZATION OF KNOWLEDGE

A co-evolution of academic scientists and universities may be identi-
fied. When the transition towards capitalization of knowledge is uneven,
conflicts of interest arise; when it proceeds in parallel, a confluence of
interests is the likely result. A more direct role in the economy has become
an accepted academic function, and this is reflected in the way universities
interact with industry. There has been a shift in emphasis from traditional
modes of academic–industry relations oriented to supplying academic
‘inputs’ to existing firms either in the form of information flows or through
licensing patent rights to technology in exchange for royalties. Utilizing
academic knowledge to establish a new firm, usually located in the vicinity
of the university, has become a more important objective. Indeed, the firm
may initially be established on or near the campus in an incubator facility
sponsored by the university to contribute to the local economy.
Defining and maintaining an organization’s relationship to the external
environment through ‘boundary work’ has different purposes, depending
upon whether goals are static or undergoing change. A defensive posture
is usually taken, and buttressed by arguments supporting institutional
integrity, in affirming a traditional role (Gieryn, 1983). The reworking
of boundaries around institutions undergoing changes in their mission
occurs through a ‘game of legitimation’ that can take various forms. One
strategy is to conflate new purposes with old ones to show that they are
in accord. For example, universities legitimize entrepreneurial activities
by aligning them with accepted functions such as research and service. In
addition, new organizational roles are posited, in this instance contribu-
tion to economic and social development as a third academic mission in
its own right. Similarly, faculty members find that their entrepreneurial
activities provide vivid examples for their teaching practice as well as a
source of research ideas.
University–industry relations are increasingly led by opportunities dis-
cerned in academic research that is funded to achieve long-term utilitarian
objectives as well as theoretical advance, recognizing that the two goals
are compatible and, indeed, mutually reinforcing. In a recursion from
science studies research to practice, a University of New Orleans professor
requested a copy of a study of entrepreneurial activities at State University
of New York at Stony Brook to encourage his colleagues in the marine
research centre to found a firm. Regional economic development is super-
seding the sale of intellectual property rights to the highest bidder, even as
the translation of commercializable research results into economic activity
is becoming an accepted academic mission (Etzkowitz, 2008).

NOTE

Data on university–industry relations in the USA are drawn from studies conducted by the
author with the support of the US National Science Foundation. More than 100 in-depth
interviews were conducted with faculty and administrators at universities, both public and
private, with long-standing and newly emerging industrial ties.

REFERENCES

AUTM (2005), www.autm.net/surveys/, accessed 10 July 2007.
Blumenthal, D. et al. (1986a), ‘Industrial support of university research in biotech-
nology’, Science, 231, 242–46.
Blumenthal, D. et al. (1986b), ‘University–industry research relations in biotech-
nology’, Science, 232, 1361–66.
Etzkowitz, Henry (1983), ‘Entrepreneurial scientists and entrepreneurial universi-
ties in American academic science’, Minerva, 21 (Autumn), 198–233.
Etzkowitz, Henry (2002), MIT and the Rise of Entrepreneurial Science, London:
Routledge.
Etzkowitz, Henry (2008), The Triple Helix: University–Industry–Government
Innovation in Action, London: Routledge.
Etzkowitz, H. and Ashley Stevens (1995), ‘Inching toward industrial policy: the
university’s role in government initiatives to assist small, innovative companies
in the U.S.’, Science Studies, 8 (2), 13–31.
Faulkner, Wendy and Jacqueline Senker (1995), Knowledge Frontiers: Public Sector
Research and Industrial Innovation in Biotechnology, Engineering Ceramics, and
Parallel Computing, Oxford: Clarendon Press.
Geiger, Roger (1986), To Advance Knowledge: The Growth of American Research
Universities, 1900–1940, New York: Oxford University Press.
Gieryn, T. (1983), ‘Boundary-work and the demarcation of science from non-
science: strains and interests in professional ideologies of scientists’, American
Sociological Review, 48, 781–95.
Kaghan, William and Gerald Barnet (1997), ‘The desktop model of innovation
in digital media’, in Henry Etzkowitz and Loet Leydesdorff (eds), Universities
and the Global Knowledge Economy: A Triple Helix of Academic–Industry–
Government Relations, London: Cassell.
Krimsky, Sheldon, James Ennis and Robert Weissman (1991), ‘Academic–
corporate ties in biotechnology: a quantitative study’, Science, Technology and
Human Values, 16 (3), 275–87.
Latour, Bruno and Steve Woolgar (1979), Laboratory Life, Beverly Hills, CA:
Sage.
Matkin, Gary (1990), Technology Transfer and the University, New York:
Macmillan.
National Science Board (1996), Science and Engineering Indicators, Washington,
DC: National Science Foundation.
Powell, W., Jason Owen-Smith and Jeanette Colyvas (2007), ‘Innovation and
emulation: lessons from American universities in selling private rights to public
knowledge’, Minerva, June, 121–42.
Quinn, Susan (1995), Marie Curie: A Life, New York: Simon and Schuster.
Rahm, Diane (1996), R&D Partnering and the Environment of U.S. Research
Universities, Proceedings of the International Conference on Technology
Management: University/Industry/Government Collaboration, Istanbul:
Bogazici University.
Shimshoni, D. (1970), ‘The mobile scientist in the American instrument industry’,
Minerva, 8 (1), 59–89.
8. Multi-level perspectives:
a comparative analysis of national
R&D policies
Caroline Lanciano-Morandat and Eric Verdier

In the era of globalization, the quality of science–industry relations is pre-
sented as a key source of industrial innovation and, more generally, of eco-
nomic competitiveness. The supranational orientations of the European
Union and the recommendations of the OECD converge to promote
a ‘knowledge society’. These political references recycle the results of
research in economics and in the sociology of innovation, and together
inspire European states’ public R&D and innovation (RDI) policies. Yet
these convergences between political action, international expertise and
the social sciences do not mean that there is necessarily a ‘one best way’
of articulating science and industry. Shifts in public intervention have
to compromise with institutions inherited from the past,1 and with the
practices of firms and other private sector actors.
This chapter develops an approach that integrates these different levels
of analysis. The aim is to explain the specific national characteristics of
policy-making in a context in which the references of that action tend
to be standardized. In this respect, there is some convergence with the
analysis in terms of ‘Varieties of Capitalism’ (Hall and Soskice, 2001),
although our approach differs as regards the national coherence high-
lighted by the latter in a deterministic perspective: ‘In any national
economy, firms will gravitate towards the mode of coordination for
which there is institutional support’ (Hall and Soskice, 2001, p. 9). Our
approach consists in explicitly taking into account the arrangements,
negotiations and institutional bricolages that the actors use to coordi-
nate their actions in the form of conventions. It highlights the devices
that support the science–industry relations between different public
and private sector actors, from firms to political authorities at different
levels (regional, national and European). It therefore examines the rules,
contractual devices and organizational forms equipping interactions
between actors, but also the different political and ethical principles

orienting their choices. ‘The state, like other institutions, is essentially
a convention between persons, but unlike other institutions, in western
democracies all state conventions are based on representations of the
“common good” for their societies’ (Storper, 1998, p. 10). Four types of
conventions of policy-making concerning RDI and innovation can thus
be identified.
It emerges that national specificities do not stem from a structural dis-
tinction between what would ‘ontologically’ be German science–industry
relations and French or British science–industry relations. An action
regime peculiar to each country, which we therefore qualify as ‘soci-
etal’, is in fact the outcome of a compromise between these four legitimate
patterns concerning RDI policy. But this compromise is only temporary.
It evolves under the impulse of dynamics that may be endogenous – for
instance the emergence of new technological districts – or exogenous – ‘best
practices’ imposed at international level as new standards. Consequently,
the legitimacy of different conceptions of the common good varies and the
work of the actors (firms, government agencies, universities etc.) generates
new rules and therefore new compromises.
This approach is briefly illustrated in a comparative analysis of the
trajectories of the British, French and German RDI policies based on the
classical ‘societal approach’ (Maurice et al., 1986) that we combine with
a ‘conventionalist’ analysis in order to deal with issues of coordination
(Eymard-Duvernay, 2002).

AN ANALYTICAL FRAMEWORK: THE CONSTRUCTION OF POLICY-MAKING CONVENTIONS

These conventions as representations of the common good are the basis
for the legitimacy of rules. They encompass different conceptions of what
is ‘efficient and fair’ in collective action concerning RDI. Each of them
corresponds to a specific mode of justification, in other words to a par-
ticular research ethic (see ‘The orders of worth’ – Economies de la grandeur
– Boltanski and Thévenot, 1991 and Lamont and Thévenot, 2000): sci-
entific progress; state services and the national interest; the market, i.e.,
the creation of shareholder value; and, lastly, the project embodying
technological creativity. These referents are based on conceptualizations
from the economics and the sociology of innovation. They are mobilized
by national and international experts, who promote them in the national2
and supranational arenas (in the OECD and the EU).
By explicitly referring to the state’s position in collective action, we
distinguish four patterns of conventions: the ‘Republic of Science’, the
‘state as an entrepreneur’, the ‘state as a regulator’ and, finally, the ‘state
as a facilitator’ (of technological projects).
These conventions cannot be dissociated from the following dimensions
which constitute the RDI policy-making regime in which they are set:

1. The positioning of the public authorities in a multi-level perspective
(identify which level predominates: local, national, supranational or
European?), considering that many analysts today perceive a distinct
weakening of the national level (Larédo and Mustar, 2001); conse-
quently, the diversity of practices within the same country is increased,
at least potentially.
2. The boundary between the public and private sectors; for instance, the
university could be considered either essentially as a public actor or as
a private entrepreneur (Etzkowitz and Leydesdorff, 2000).
3. The predominant organizational frame that connects different actors:
from complete independence in the pure academic form to interde-
pendence in the network form.
4. The mediators in charge of networking the different worlds, from
occasional contact to integration when the state is the entrepreneur of
science–industry relations.
5. The definition of the competencies that shapes the legitimacy of the
actors, the criteria of success and the goals to achieve.
6. The rules that frame, drive and evaluate researchers’ work, in a per-
spective of innovation.
7. The modes of financing, from public funds (for basic research) to a
web of public and private financing in the case of a network.
8. The rules governing the circulation and employment of people (the
type of labour market).

These conventions claim to account for the ideal-types of research ethic
underlying collective policy-making, rather than directly for the structural
coherences of a particular country or region. There are common points
with the approach developed by Storper and Salais (1997) in terms of
conventions of the ‘absent state’ (particularly marked in the USA), the
‘external’ or ‘overarching state’ (strong in France) and the ‘situated state’
or ‘subsidiary state’ (marked in Germany).
Historical societal constructions stem from the arrangements – that
vary in time and space – between these different conceptions of policy-
making. It is therefore necessary to highlight the tensions, conflicts and
compromises that trigger changes in public systems and are spurred by
attempts at reform. Their degree of success is the result of interactions
between inherited historical constructions and the projects of (new) actors
involved in the definition of the common good.

FOUR PATTERNS OF RDI POLICY-MAKING CONVENTIONS

The characteristics of the four conventions of policy-making in science
and innovation can be synthesized in terms of their insertion in a general
regime of action (see Table 8.1).

The Republic of Science

The ‘Republic of Science’ is based on a convention similar to the model
of Merton, the founder of the sociology of science. It highlights the posi-
tive role of science in society and aims for ‘the development of codified
knowledge’ (Merton, 1973). It implies a strict separation between scien-
tific institutions and those governing the rest of society. In this model,
public intervention can acquire legitimacy only by adhering to guidelines
and priorities defined independently by scientists whose reputation sets
the standards for competencies. This conception of the ‘academic state’
limits public intervention to the financing of the pure public good that
scientific knowledge is supposed to be. These characteristics imply the
complete application of a ‘disclosure norm’ for scientific progress (the
‘open science’ model), after peer validation. Government has to ensure
that ‘generic’ resources are made available to society. It is up to firms to
‘endogenize’ them, i.e., to appropriate them efficiently, in a specific way.
The other side to the ‘Republic of Science’ is therefore the ‘Kingdom of
Technology’ (Polanyi, 1962), founded on a private appropriation by each
agent of this general, abstract knowledge, for the purpose of generating
comparative advantages from the efficient application of new knowledge.
This radical distinction between pure research and the pursuit of indus-
trial and economic objectives causes relations between universities and
industry to depend on personalities in academia who act as advisers on
the efficient application of knowledge. These relations remain occasional
and informal and even tend to be hidden, for the purpose of maintaining
science’s original purity.
The ‘priority norm’, which rewards only the first discovery and thus
grants the creator a sort of ‘moral ownership’ of a product, is an incen-
tive to produce original knowledge. More generally, the competencies
produced under the aegis of this ‘Republic’ are above all academic. The
evaluation of scientists, at the time of recruitment and throughout their
careers, is based on judgements of the scientific community. The training
of engineers is embodied in academic disciplines that are the foundations
of occupational labour markets. Peer evaluation is important throughout
an individual’s career, by way of a labour market in which professional
societies guarantee the reliability and standardization of skills.

Table 8.1 The characteristics of conventions of policy-making concerning R&D and innovation

Relevant dimensions: (I) Republic of Science; (II) the state as an entrepreneur; (III) the state as a regulator; (IV) the state as a facilitator.
Overriding principle (research ethic): (I) the progress of science; (II) state service and national interest; (III) market: shareholder value; (IV) project: technological creativity.
Level of state regulation: (I) discipline (local faculty); (II) national; (III) regional; (IV) multi-level integration (‘Europe’).
Governance of the public–private relationship: (I) independence of academic communities; (II) control by central state: ministry or agency; (III) co-determination of the entrepreneurial university and firms; (IV) delegation of responsibility for technico-scientific coordination.
Organizational architecture: (I) academia (faculties); (II) large programme (hierarchical management and organization); (III) contract (negotiation between individuals or organizations); (IV) network (interaction and alignment within the network).
Category of mediating actors: (I) renowned scientific personalities; (II) managerial and political elites; (III) individual mobility of scientists between the private and public spheres; (IV) diversity of actors as intermediaries between university and firms.
Key competencies: (I) disciplinary knowledge; (II) meritocratic excellence; (III) operational versatility of individuals; (IV) interdisciplinarity and ability to cooperate.
Incentive institution: (I) peer evaluation (disclosure and priority norms); (II) power over scientific and industrial development; (III) property rights, patents and profit-sharing; (IV) salary increases and stock options.
Funding institution: (I) public grants and individual fees; (II) public subsidies and government orders; (III) joint contribution of higher education and firms; (IV) multiplicity of sources and levels of financing.
Labour institution: (I) occupational labour markets; (II) public and private internal markets; (III) external labour markets; (IV) labour markets peculiar to networks.

The ‘State as an Entrepreneur’

This convention underlies a ‘mission-oriented’ public policy (Ergas, 1992)
corresponding to ‘radically innovative projects which are necessary for
the pursuit of national interests’. The mission concerns technological
domains of strategic importance to the state. Its main features are the cen-
tralization of decision-making, the definition of objectives in government
programmes, the concentration of the number of firms involved, and the
creation of a specific government agency with a high level of discretionary
power, responsible for operational coordination, under the supervision of
a national or federal administration. The science/innovation relationship
is then explicitly built (unlike the preceding convention) in a framework of
planning, on the basis of a model often referred to as ‘Colbertist’ (Barré
and Papon, 1998). This schema organizes a science/innovation twosome
guided by a ‘higher’ socioeconomic order since technological policy is
legitimized by its contribution to a national interest that, in this case, is
conflated with the state service.
This convention appeared and was conceptualized in the period after
World War II and was promoted for two main reasons:

1. To accelerate the country’s productive and technological moderniza-
tion in order to catch up with competitors;
2. To guarantee the availability of technologies essential to the quest for
national independence.

The literature highlights the fact that it is a ‘top-down’ innovation
model, ‘suited to complex technological objects used for large public infra-
structures’ (Barré and Papon, 1998, p. 227). This convention has proved
particularly effective for producing high-tech objects in public sector
markets (aeronautics, space, military, nuclear, telecommunications etc.).
Its organization is based on the model of the ‘large technological pro-
gramme’ that involves a public agency, a research institution and a large
industrial group (or several privileged operators) supported by a set of
subcontractors. The objectives of the programme, the actors who have to
participate in it, the operations and their scheduling are strictly defined ex
ante. As part of a state-led and modernizing approach, this ‘industrial’ and
managerial conception is based to a large extent on coordination by well-
identified professions or academic elites (e.g. graduates of leading research
universities or French ‘Grandes écoles’) and by applied research labora-
tories administered directly to help to implement government policies. It
resembles an ‘external state’ convention, in the sense of Storper and Salais
(1997): ‘The state has devised a methodology to evaluate differences with
the common good (that it has defined indisputably, ex ante) and intervenes
beforehand to correct those differences as far as possible. Everyone relies
on it, conventionally’ (Salais, 1998, p. 62). Meritocratic excellence is based
on selection for admission to the best schools and universities, which
regulates access to the typically French ‘Grands corps de l’Etat’3. These
combine the technical skills and organizational capacities that lie at the
interface between government administration and large firms, with a view
to running the large technological programmes. The large programmes
are almost exclusively financed and managed in the framework of state
contracts and public markets, with the aim of producing technological
progress as a source of competitive advantage for the country’s industry.
The development of competencies is produced by internal markets in the
public and private sectors.

The ‘State as a Regulator’

The state as a regulator promotes the transfer of scientific results to the
private sector. It also ensures that the objectives of basic research are
inspired or structured by the expectations of the ‘market’ and the corpo-
rate world. Whereas the preceding convention (i.e. the state as an entre-
preneur) was limited to a national scale, here there is an openness onto
supranational horizons due to the increasing weight of multinationals in
technological dynamics. The quality of public research and its partner-
ships with the private sector are becoming key arguments to attest to the
attractiveness of the country on a national and local level. The role of this
convention is therefore to guarantee an efficient balance between the use
of public research resources and market dynamics. This balance implies
that the governance of the public–private relationship is co-determined
by partnerships between firms and entrepreneurial universities, via con-
tracts negotiated between the partners. As a regulator, the state has to
guarantee the balance of commitments, even if that means promoting the
establishment of private R&D resources within the province of its political
authority, through targeted aid.
This predominantly market orientation is attested by the importance
granted to the definition of the ‘property rights’ that frame and stimulate
two types of initiative emblematic of this convention: the creation of high-
tech startups by academic scientists, and the development of contractual
relations between universities and firms. The first type of initiative requires
the availability of adequate funding, via access to venture capital and
the support of incubators for the first steps of innovative startups, which
public agencies have to promote. The second helps to compensate for
the financial difficulties experienced by both types of actor: government
budgetary restrictions for universities, and the necessity for firms, due to
heightened competition, to outsource a part of their R&D.
This construction of the common good is justified theoretically in the
‘Mode 2’ of knowledge production (Gibbons, 1994). This ‘new’ Mode
2, focused on the problems to solve, as defined by industry, differs
from ‘Mode 1’ in which the problems are posed and solved in a context
governed by the interests of an independent scientific community with
strictly academic and disciplinary aims. Mode 2 is based on a repeated
reconfiguration of human resources, in flexible forms of organization of
R&D, in order to be able to adjust to market trends – an ability based
on the creation of knowledge of a transdisciplinary nature (Lam, 2001)4.
Thus the formalization of Gibbons’s Mode 2 relates more to a change of
ideological norm and ‘beliefs’ than to an empirically validated change in
scientific practices. It aims politically to legitimize a narrowing of the
gap between the academic world and enterprise, and is clearly open to a
merchandization of science (Shinn, 2002).
Efficient competencies stem from a co-production of firms and public
research laboratories. This co-production is supported institutionally by
a contract that organizes the collaboration between public and private
sector researchers. The generally short-term mobility of scientists between
the two worlds and the co-production of PhDs help to build up the trust
needed to meet contractual objectives. A hybrid labour market is thus
created between the higher education system and the industrial system,
based on a joint construction of generic knowledge that can be used both
commercially and industrially.
Strong tension exists between this form of policy-making and the pre-
ceding one, based on a hierarchy organized around public programmes
that structure private practices. The regulator convention, by contrast, promotes
a withdrawal of the state, which refuses to define the common good a priori.

The ‘State as a Facilitator’ (of Technological Projects)

In the past ten years the literature on the economics of science and innova-
tion has emphasized the importance of interactions between the different
partners in scientific and technical production: government higher educa-
tion or research institutions, firms with their own R&D capacities, and
organizations involved in funding and intermediation between these dif-
ferent ‘worlds’. This articulation has been systematized and popularized
by a current of thinking involving scientists, managers and public authori-
ties, known as the ‘triple helix’ (Etzkowitz and Leydesdorff, 1997, 2000).
In this model the ‘co-production’ of knowledge is situated at the inter-
section of three interacting institutional spheres: university and research
organizations; industry; and public authorities, especially through their
specialized agencies.
This articulation is said to generate trilateral networks through the
overlapping of the different institutional spheres and the emergence of
hybrid organizations at the interfaces. The objective is to create an inno-
vative environment consisting of firms that are university or research
organization spin-offs, tripartite initiatives for knowledge-based economic
development, strategic alliances between firms of different sizes and tech-
nological levels, public laboratories, and university research teams. By
promoting the establishment of R&D organizations that transcend tradi-
tional institutional boundaries (public/private, academic/applied etc.), and
the creation of scientific and industrial parks at local level (Porter, 1998),
these public interventions seem to correspond to a logic of organized
accumulation of knowledge and the creation of innovative capacities at
the micro-, meso- and macroeconomic levels. The dynamics of this model
implies organizational transformations in each of the three spheres, as well
as the intensification of their interrelations.
This conception of the common good calls for the creation of coop-
erative research networks that group together the institutionally diverse
partners (Callon, 1991). In terms of public intervention, ‘the convention
is . . . that of a situated State that is expected to promote initiatives and
their subsequent deployment, but not to dictate to them’ (Salais, 1998,
p. 78).
The collective construction of the common good can be concretized in
two ways, depending on the degree of state involvement. The first relates
to ‘more or less spontaneous creations, gradually resulting from local
interactions. They do not correspond to clearly defined identities and
rarely have clearly identified boundaries’ (Vinck, 1999, p. 389). The second
results from state initiatives that, in the name of the proclaimed efficiency
of cooperative scientific networks, are designed to catch up with the level
of rival technological clusters.
Without being exclusive, the local (or regional) dimension is strongly
present in the form of the science or technology district (Saxenian,
1996). In his modelling of Silicon Valley, Aoki (2001) describes how
this construction of the common good is based on ‘local institu-
tional arrangements’ between independent entrepreneurs and venture
capitalists, outside state regulation. The information generated by inven-
tion work is ephemeral and rapidly depreciated due to the speed with
which technology evolves. This tends to reduce the protective role of
contracts and industrial property, so crucial in the context of the ‘state
as a regulator’ convention. Aoki (2001) notes that intellectual property
clauses that limit the mobility of experts between rival firms have become
inapplicable in California. Specific institutions are therefore being created
for technological networks.
The same applies in the more classical configuration of the pro-
fessional network (see Meyer-Krahmer and Schmoch, 1998), based
on branch-specific techno-scientific societies that circulate knowledge
between the various types of firm, mainly by promoting cooperative rela-
tions between large firms and subcontractors. Here again, the specific
dimension of collective action is based not on the withdrawal of the state
but on a delegation of public responsibility to private actors. While tech-
nology districts are turned more towards radical innovation, professional
networks generate mainly incremental innovations (Hall and Soskice,
2001).
Despite its lack of precision, the concept of a network is relevant ‘as a
passage between the micro-economic behaviours of firms and the meso- or
macro-economic levels’ (Amable et al., 1997, p. 94).
Moreover, this form of collective action underscores the limits of the
codification of knowledge and the importance of the ‘absorptive capaci-
ties’ of each protagonist to develop effective cooperation in technological
research (Lundvall and Johnson, 1994). The literature logically questions
the separation between basic and applied research, as well as the exist-
ence of a causal link, in the linear schema, between scientific discoveries,
industrial R&D, and market applications. Finally, criteria of separation
between public and private sector research have been criticized by several
authors. Callon (1992) shows that state investments in scientific produc-
tion are justified not by an intrinsic characteristic of science as a common
good, but by the maintenance of a degree of diversity and flexibility in
science, so that a wider range of research options is left open.
These analytical constructions and their translations in terms of public
policy illustrate the development of a network organization based on the
initiatives of public and private actors around common projects (Boltanski
and Chiappello, 1999). It is symptomatic that the individual competencies
of ‘new research workers’ are formulated in terms of abilities to cooperate,
to work in networks and to combine different types of knowledge (Lam,
2001). These competencies underlie the constitution of specific labour
markets, peculiar to the network or industrial field concerned (Lanciano-
Morandat and Nohara, 2002).

MAIN TRENDS IN PUBLIC REGIMES OF ACTION IN THREE EUROPEAN COUNTRIES

Relations maintained between firms and universities, with a view to pro-
ducing innovations, are supported by configurations of actors and rules
of the game that draw on different patterns of conventions. The resulting
compromise peculiar to each country outlines a regime of action that is
a particular societal construction. However, these national regimes are
increasingly subject to the critical evaluation of international benchmarks.
For instance, the OECD regularly formulates normative recommenda-
tions inspired by ‘good practice’ (OECD, 1998), many of which originate
in the USA (OECD, 2000). The conventions of the state as a regulator and
the state as a project facilitator are strongly favoured by these recommen-
dations, which not only inspire reforms but also contribute towards the
setting of policy standards in different countries.

The UK: Risks of Short-sightedness and Merchandization

Traditionally, the British RDI regime has been characterized by a dual
position:

1. A very strong influence of the Republic of Science, especially in
the medical research and biological fields (see Table 8A.4 in the
Appendix); that is why many large US firms, like Pfizer (pharmaceuti-
cals industry) or Hewlett-Packard (IT industry),5 set up some leading
centres devoted to basic research as early as the 1960s, in order to
exploit that scientific potential in a ‘Technology Kingdom’ perspective
(see, in Table 8A.2 of the Appendix, the share of foreign funding in the
national R&D expenditure);
2. A strong engagement of the ‘state as an entrepreneur’ in the defence
industry and as a decisive factor in technological independence as
regards computer technology in the 1970s. In the early 1980s this
characteristic was reflected in the weight of public expenditures
on R&D. Although these were proportionally lower than in France,
they were substantially higher than in Germany (see Table 8A.3 in the
Appendix). In 1990 the share of military expenditures still accounted
for nearly half of public R&D budgets (49 per cent), against 15 per
cent in Germany. This explains why firms such as Racal, special-
ized in defence electronics, were able to constitute a technological
competency base of a very high quality. Without too much difficulty
they were then able to redeploy towards civil markets when defence
research budgets were cut in the latter half of the 1990s.
Like others, Racal developed its portfolio of contracts with academic
research, thus contributing to overall change towards science–industry
relations based on the pattern of the ‘state as regulator’.
Through successive reforms6 UK governments have tried to redirect
policy-making towards:

1. The commercialization of scientific results, spurred, throughout the
1980s and 1990s, by the withdrawal of the state, which resulted in
drastic cuts in funding, the cancellation of large national programmes
and the privatization of public laboratories devoted to applied
research. The national R&D effort declined from the mid-1980s (see
Table 8A.1 in the Appendix), under the effect of slashed government
funding (from 48.1 per cent of overall funding in 1981 to 35 per cent
in 1991), while the share of foreign funding strongly increased to
reach an unequalled level in Europe (see Table 8A.2 in Appendix).
The share of R&D financed by firms and not-for-profit institutions
grew (ten points during the 1980s for the former), primarily owing to
contracts with academic laboratories, although these were not enough
to offset the state’s withdrawal. With a view to promoting relations
between scientific research and industry, the LINK programme was
launched in 1986. It was designed to develop generic research within
universities in order to meet the needs of enterprises more fully than
did purely academic research. In parallel, many firms, especially in
the information and communication sector such as ICL, Nortel and
Signal, which wanted to control their internal R&D expenditures,
developed their contractual relations with universities. This trend
towards a more market-oriented conception of science–industry rela-
tions also involved financial support to academic entrepreneurs, from
three programmes launched in the late 1990s: University Challenge,
Science Enterprise Challenge, and Higher Education Innovation
Fund. This was how Imperial College was able to generate 14 aca-
demic spin-offs in the healthcare and biotechnology areas in 2001.
2. The emergence of techno-scientific networks in the form of
university–industry consortia and technology districts (clusters) from
1997 (Georghiou, 2001). Various public programmes accompanied
and stimulated this trend. This was how the ‘virtual centres of excel-
lence’ (VCE) were formed, comprising universities and firms, espe-
cially multinationals, in areas identified as priorities by the Foresight
Communications Panel (for example the ‘Mobile VCE’). Science
parks, less directly supported by government intervention and closer
to ‘classic’ clusters, are business and technology transfer initiatives
that encourage the start-up of knowledge-based businesses, provide an
environment where international businesses can develop specific and
close interactions with a particular centre of knowledge creation, and
have formal and operational links with higher education institutes (see
Lam, 2002). We can also cite the clusters developed in the ‘Oxbridge’
framework. These networks provide incentives for firms and universi-
ties to alter the organization of their internal R&D activities in order
to be able to collaborate more effectively.

This movement towards the 'state as a project facilitator' is based on
undeniable resources. The excellence of the leading scientific universities
(‘Oxbridge’) unquestionably places the UK closest to the ‘standard’ rec-
ommended by the OECD (1998). The fact remains that it is increasingly
difficult to reach compromises between the various institutional settings,
for several reasons: a growing risk of underinvestment in R&D by both
the public and private sectors, due to the predominance of such short-
term commercial objectives that traditional fields of excellence and the
capacity to produce generic know-how are being eroded (see, in Table
8A.4 in the Appendix, the decrease in the UK world share of scientific
publications); increasing extraversion of the UK scientific and technical
system, the potential of which is locked more and more into the strategies
of large multinationals (see the continual increase in foreign investments
in R&D); and excessive concentration of public and private resources on
a few universities (Oxbridge and London), to the detriment of the creation
of the sufficiently broad RDI capacity that a 'knowledge society' requires;
in 1997, seven universities received 33 per cent of R&D funds
from industry.
This decline in academic performance and the downward trend of patents
of UK origin (see Table 8A.3 in the Appendix) prompted the New
Labour government to increase public investment in R&D from the late
1990s, especially through the Joint Infrastructure Fund, thus departing from
20 years of neglect motivated, somewhat paradoxically, by the very high
‘productivity’ of UK science (Georghiou, 2001).

Germany: The Strength of Professional Networks and the Problems of University Entrepreneurship

Traditionally, the assets of German industry stem from the proximity
between higher education, especially the Fachhochschulen (universities of
applied sciences), and industries, via research companies situated at the interface
between these two worlds. This is clearly reflected in two main indicators:
first, the share of R&D financed by industry, which is structurally much
higher than in the UK and France (see Table 8A.2 in the Appendix), gives
evidence of a strong comparative advantage; and second, the performance
of German firms as regards patents, especially in the chemicals, mechanics
and transport sectors (see Table 8A.6 in the Appendix). Although large
firms account for 90 per cent of this effort, the 10 per cent share of SMEs
in this industrial research is greater than that of the UK and France, owing
to the science–industry interfaces.
The efficiency of these professional networks has been proven in all
capital-good industries, through the regular production of incremental
innovations, which explain the high quality of products and their ability
to meet customers’ needs. These networks are organized around various
applied research institutes that have a private status but are financed
by the federal government: for instance, the Fraunhofer Society or the
Gottfried Wilhelm Leibniz Association of Research Institutes (Meyer-
Krahmer, 2001).
Many firms in the pharmaceutical sector (HMR-Aventis, Merck KGaA)
and the computing–telecommunications sector (Siemens) are traditionally
part of these networks to construct the science–industry common good
and circulate it in industry.
But both state authorities and the business community no longer believe
that this situation is enough to maintain the competitive positions of
the German economy. On its own, it no longer meets some of the main scien-
tific, technical and industrial challenges facing the German RDI regime,
especially in high-tech sectors (telecommunications, biotechnology), in
which technological performances are not as good (see Table 8A.6 in the
Appendix).
The first problem concerns the possibility of commercializing academic
scientific work in the ‘high-tech’ field (information, communication, bio-
technologies etc.). Following the US example, the Federal Ministry of
Education and Research has therefore tried to structure collective action
around a ‘state as a regulator’ convention, by making property rights and
statuses more favourable to researchers and thus creating incentives.
Many large firms have used this policy to externalize a part of their
high-tech research by setting up spin-offs on university campuses (e.g.
Merck KGaA around Munich). In this way they have helped to promote
new clusters. The results of new incentives seem to be most tangible in the
biotechnology field since the number of startups and the use of venture
capital have increased substantially – so much so that Germany is the
leader in this respect (Ernst & Young, 2000), with an increase in the
number of small research-oriented biotech firms, from 75 in 1995 to 279 at
the end of 1999. Although many of these startups are fragile, real success
has been witnessed in the area of technology platforms where the tradi-
tional qualities of the prevailing regime of action in Germany are at play,
i.e., incremental innovations that combine existing technologies (Casper
et al., 1999).
Although limited, the same type of dynamic has developed in the ICT
sector (linked to firms such as Agilent Technologies, Lucent Technologies,
Alcatel Research Centre). In the software field a spin-off dynamic from the
Fraunhofer Gesellschaft has developed, in line with the policy initiated by
the Federal Ministry of Education and Research (BMBF). To support the
networks involving SMEs in applied research, specific measures have been
taken in the new Länder (Meyer-Krahmer, 2001).

France: The Tricky Exit from the ‘State as an Entrepreneur’ Convention

The French higher education and research system is confronted with a
profound challenge to the ‘state as an entrepreneur’ convention that has
prevailed until now (Larédo and Mustar, 2001) – as evidenced in the struc-
tural importance of public R&D funding (see Table 8A.2 in the Appendix),
especially funds allocated to large organizations directly under the author-
ity of the state: CNRS (National Centre for Scientific Research), Inserm
(National Institute for Medical Research) and so on. Policy-making is
traditionally structured around big programmes, which have proved to be
highly effective in producing complex technological objects used for large
public infrastructure (telecommunications, nuclear energy, air and rail
transport, etc.).
The results have been far less convincing in the private sector, in com-
puter technology – as the difficult industrial trajectory of Bull (previously
the ‘national champion’) attests – and in the pharmaceutical industry.
The ‘top-down’ model that prevailed until quite recently in France in the
biotech field (see, e.g. the publicly managed Bio-Avenir programme and its
relations with Rhône Poulenc-Rorer in the SESI project study) does not
generally lend itself well to the spin-off forms of innovation which abound
in biotechnology and ICT (Foray, 2000).
This ‘state as an entrepreneur’ institutional setting has also been altered
internally, which has strongly undermined its efficiency. For instance, the
existence of several institutional channels for allocating aid and support for
technology transfer to industry has resulted at best in a juxtaposition but
more often in sterile competition between public institutions. The system
as a whole is consequently largely opaque, especially for SMEs. Since 1982
incentives for researchers to transfer their results have proved inadequate.
This applies both to high-tech startups and to the reduction of the cultural
gap – probably more marked in France than elsewhere – between science
and industry. It explains the severe diagnosis of the French scientific and
technological scene (Guillaume, 1998) – ‘Honourable scientific research,
weak technology’ (Barré and Papon, 1998) – which inspired the 12 July
1999 blueprint law on research. With its related measures concerning
innovation, this law was designed to ease statutory constraints, to develop
incubators and to facilitate access to venture capital, in order to promote
the development of high-tech companies based on public research results.
In this perspective, INRIA (National Institute of Computer Science) and
its 436 spin-offs are presented as a model to follow7. It is significant that at
the end of 2003 the director of INRIA was appointed as General Director
of CNRS, an institution focusing first and foremost on basic research,
with a staff of 25 000.
But the new law also aims to move towards a ‘state as a facilitator’ insti-
tutional setting. The idea is no longer to set up large programmes but to
encourage the creation of precise industry–research cooperative projects.
This goal is reflected in a semantic and institutional change: it refers to
‘network’ rather than ‘programme’, in a logic similar to that of ‘consortia’
advocated by the Guillaume Report, which inspired the law to a large
extent. Two areas in which France lags behind other leading industrialized
countries are priorities: the life sciences, and information and communica-
tion technologies (see Table 8A.5 in the Appendix). In the first case
the aim is to promote activities in the field of genomics – by supporting
genopoles and startups developing computer technology applications for
use in the biotech field – and health-related technology – by setting up a
national health technology research and innovation network (the RNTS).
The example of genomics is relevant since it shows how narrowly this
French strategy for integrating models and institutions developed in other
societal contexts is conceived. It dates back to before the 1999 law, since
the Evry genopole was launched in February 1998. This experiment was
intended to make up for the fact that France had fallen behind, by encour-
aging ‘interpenetration’ between technological and scientific advances.
It was to promote collaboration between public and private laboratories
and firms, while attempting to avoid the pitfalls associated with orienting
academic research too strongly towards short-term objectives (Branciard,
2001).
The problem of ‘catching up’ with the most competitive countries has
been an incentive to develop state-led policy-making in which coordination
by the hierarchy takes precedence over cooperation. Although the latter is
indispensable for producing collective learning, the time taken to establish
it is not necessarily compatible with the need to catch up with competi-
tors. Reaching a compromise between the different institutional settings
is thus difficult. In the meantime the French RDI regime is witnessing its
structural position, strongly supported by the ‘state as an entrepreneur’,
gradually being undermined (Branciard and Verdier, 2003).
Recently the French government took steps to extend the ‘good prac-
tice’ of the Grenoble technological district. With the support of national
and regional public agencies and bodies, this innovative district is now
becoming a key player in the nanotechnologies field with the creation of
a new cluster ‘Minatec’. Under the initiative of researchers of the public
Commissariat à l'énergie atomique (CEA), the cluster has attracted
MNCs as its main private stakeholders, including firms from Europe
(e.g. STMicroelectronics, Philips) and the USA (e.g. Motorola, Atmel).
Moreover, based on the main conclusions of the ‘Beffa report’ (Beffa,
2005)8, state and regional governments decided to support 60 projects
for competitiveness clusters, after a selective process. This new genera-
tion of public programmes expresses the search for compromises between
‘the state as an entrepreneur’ and ‘the state as a facilitator’. It may also
meet another challenge facing the French research system, the ‘under-
specialization’ of public funding devoted to basic research (Lanciano-
Morandat and Nohara, 2005).

CONCLUSION

State reforms are made and implemented at national level but are based
on the recommendations of supranational authorities (the OECD and
the European Commission), which are themselves influenced by the ideas
produced by the sociology and the economics of innovation (Lundvall and
Borras, 1997).
The resulting compromises and arrangements define rapidly changing
national regimes that are symptomatic of specific compromises with the
international and scientific references mentioned above. The increasing
weight of the ‘state as a regulator’ and the ‘state as a facilitator’ insti-
tutional settings, at the expense of the ‘state as an entrepreneur’ and, to
a lesser degree, the ‘Republic of Science’, is generating increasing diver-
sification of collective action. This action is less dependent on national
institutional frames than previously. It is more and more the result of the
initiatives of cooperative networks or the local configurations in which
multinational firms, among others, develop practices that could not be
explained only in terms of a ‘global’ strategy. If we want to continue refer-
ring to a ‘national system’, we will have to conceive of it more and more
as the outcome of a ‘set’ of networks and configurations whose coherence
stems only partially from the direct influence of national institutions.
The approach in terms of conventions of policy-making enables us
to define regimes of action for each country. These regimes are com-
promises between different patterns and are continually moving. They
are constantly subject to critique, to reinterpretations in changing con-
texts, and to failures in coordination that motivate attempts at reform
– themselves interpreted and adjusted by individual and collective, public
and private actors.
As far as RDI is concerned, all countries implement measures aimed
at promoting a better diffusion and commercialization of the results of
public research, in order to stimulate private innovation. This ‘market’
reference (Jobert, 1994) has undeniably impacted strongly on the course
of policy-making. The shared risk is of favouring ‘short-term’ behaviours
by research institutions and firms, to the detriment of the accumulation of
generic knowledge on which the Republic of Science and the technological
creativity targeted by innovation network projects are based. Powell and
Owen-Smith (1998) have drawn attention to the fact that the predomi-
nance of market criteria for assessing the ‘merits’ of academic research
has helped to corrode the mission of academic institutions and consequently has dangerously
undermined the public’s trust in these institutions and in science. Reaching
a compromise between the different research ethics is thus asserted as a
condition for sustainable development of the legitimacy and efficiency of
policy-making concerning R&D.

NOTES

1. In one sentence, ‘state reform is about building new precedents that would lead to new
conventions; to do this, they need to involve the actors, which requires talk among the
actors so that they might ultimately build confidence in new patterns of mutual interac-
tion, which is a prerequisite of new sets of mutual expectations which are, in effect, con-
vention’ (Storper, 1998, p. 13).
2. See Branciard and Verdier (2003) on the French case and the influence of the OECD’s
expertise.
3. The role of the senior levels of the French civil service, which are staffed by graduates of
the elite engineering schools and the civil-service college (ENA), in conducting a state-led
economic policy and controlling France’s largest firms (nationalized in 1945 and 1981),
has often been emphasized (Suleiman, 1995).
4. This thesis of one regime of production of science supplanting another is criticized by
Pestre (1997), who sees the two modes as having functioned in parallel for the past few
centuries in the West.
5. Firms studied during our European research project: see a presentation of the methodol-
ogy in the Appendix.
6. With the advent of New Labour, official reports on these issues proliferated: see com-
petitive White Paper (DTI, 1999a), special reports on biotechnology clusters (1999b) and
Genome Valley (1999c), White Paper on enterprise, skills and innovation (DTI/DfEE,
2001).
7. It is worth noting that this particular configuration is a hybrid between the French and
American models (Lanciano-Morandat and Nohara, 2002). Those who created and
managed it had previously visited the USA, where they learned how to handle applied
research and to launch private entrepreneurial initiatives. This mind-set has since been
handed on to the younger generations.
8. Jean-Louis Beffa, former chairman of the multinational Saint-Gobain, was the leader of
an expert commission created by President Chirac.

REFERENCES

Amable, B., R. Barré and R. Boyer (1997), Les systèmes d’innovation à l’ère de la
globalisation, Paris: Economica.
Aoki, M. (2001), Toward a Comparative Institutional Analysis, Cambridge, MA:
MIT Press.
Barré, R. and P. Papon (1998), ‘La compétitivité technologique de la France’, in
H. Guillaume (1998), ‘Rapport de mission sur la technologie et l’innovation’,
submitted to the Ministry of National Education, Research and Technology,
the Ministry of the Economy, Finances and Industry, and the State Secretary for
Industry, Paris, mimeo, pp. 216–27.
Beffa, J.-L. (2005), ‘Pour une nouvelle politique industrielle’, Rapport remis au
Président de la République Française, Paris, mimeo.
Boltanski, L. and E. Chiappello (1999), Le nouvel esprit du capitalisme, Paris:
Gallimard.
Boltanski, L. and L. Thévenot (1991), De la Justification. Les Economies de la
Grandeur, Paris: Gallimard.
Branciard, A. (2001), ‘Le génopole d’Evry: une action publique territorialisée’,
Journées du Lest, avril, mimeo, Aix en Provence.
Branciard, A. and E. Verdier (2003), ‘La réforme de la politique scientifique
française face à la mondialisation: l’émergence incertaine d’un nouveau référen-
tiel d’action publique’, Politiques et Management Public, 21 (2), 61–81.
Callon, M. (1991), ‘Réseaux technico-économiques et flexibilité’, in R. Boyer and
B. Chavance (eds), Figures de l’irréversibilité, Editions de l’EHESS.
Callon, M. (1992), ‘Variété et irréversibilité dans les réseaux de conception et
d’adoption des techniques’, in D. Foray and C. Freeman (eds), Technologie et
richesse des nations, Paris: Economica, 275–324.
Casper, S., M. Lehrer and D. Soskice (1999), ‘Can high-technology industries
prosper in Germany? Institutional frameworks and the evolution of the
German software and biotechnology industries’, Industry and Innovation, 6
(1), 6–26.
DTI (Department of Trade and Industry) (1999a), Our Competitive Future: UK
Competitive Indicators 1999, London: Department of Trade and Industry.
DTI (Department of Trade and Industry) (1999b), Biotechnology Clusters, London:
Department of Trade and Industry.
DTI (Department of Trade and Industry) (1999c), Genome Valley, London:
Department of Trade and Industry.
DTI/DfEE (Department of Trade and Industry/Department for Education and
Employment) (2001), Opportunity for All in a World of Change: A White Paper
on Enterprise, Skills and Innovation, London: HMSO.
Ergas, H. (1992), A Future for Mission-oriented Industrial Policies? A Critical
Review of Developments in Europe, Paris: OECD.
Ernst & Young (2000), Gründerzeit. Zweiter Deutscher Biotechnologie-Report
2000, Stuttgart: Ernst & Young.
Etzkowitz, H. and L. Leydesdorff (1997), Universities and the Global Knowledge
Economy. A Triple Helix of University–Industry–Government Relations, London
and Washington, DC: Pinter.
Etzkowitz, H. and L. Leydesdorff (2000), ‘The dynamics of innovation: from
National Systems and “Mode 2” to a triple helix of university–industry–govern-
ment relations’, Research Policy, 29, 109–23.
Eymard-Duvernay, F. (2002), ‘Conventionalist approaches to enterprise’, in
O. Favereau and E. Lazega (eds), Conventions and Structures in Economic
Organization, New Horizons in Institutional and Evolutionary Economics,
Cheltenham, UK and Northampton, MA, USA: Edward Elgar, pp. 60–78.
Foray, D. (2000), ‘Inerties institutionnelles et performances technologiques dans la
dynamique des systèmes d’innovation: l’exemple français’, in Michèle Tallard,
Bruno Théret and Didier Uri, Innovations institutionnelles et territoires, coll.
Logiques Politiques, Paris: L’Harmattan, pp. 81–100.
Georghiou, L. (2001), 'The United Kingdom national system of research,
technology and innovation’, in P. Larédo and P. Mustar (eds), Research and
Innovation Policies in the New Global Economy: An International Comparative
Analysis, Cheltenham, UK and Northampton, MA, USA: Edward Elgar, pp.
253–96.
Gibbons, M. (ed.) (1994), The New Production of Knowledge, The Dynamics of
Science and Research in Contemporary Societies, London: Sage.
Guillaume, H. (1998), ‘Rapport de mission sur la technologie et l’innovation’, sub-
mitted to the Ministry of National Education, Research and Technology, to the
Ministry of the Economy, Finances and Industry, and to the Secretary of State
for Industry, Paris, mimeo.
Hall, P. and D. Soskice (eds) (2001), Varieties of Capitalism: The Institutional
Foundations of Comparative Advantage, Oxford: Oxford University Press.
Jobert, B. (ed.) (1994), Le tournant néo-libéral en Europe, coll. Logiques Politiques,
Paris: L’Harmattan.
Lam, A. (2001), ‘Changing R&D organization and innovation: developing the
next generation of R&D knowledge workers’, Benchmarking of RTD Policies
in Europe: A Conference organized by the European Commission, Directorate
General for Research, 15–16 March.
Lam, A. (2002), ‘Alternative societal models for learning and innovation in the
knowledge economy’, International Journal of Social Science, 171, 67–82.
Lamont, M. and L. Thévenot (eds) (2000), Rethinking Comparative Cultural
Sociology: Repertoires of Evaluation in France and the United States, Cambridge:
Cambridge University Press.
Lanciano-Morandat, C. and H. Nohara (2002), ‘Analyse sociétale des marchés
du travail des scientifiques: premières réflexions sur la forme professionnelle
d’hybridation entre la science et l’industrie’, Economies et Sociétés, Série
‘Economie du travail’, AB, 8 (22), 1315–47.
Lanciano-Morandat, C. and H. Nohara (2005), ‘Comparaison des régimes de
Recherche et Développement (R/D) en France et au Japon: changements récents
analysés à travers les trajectoires historiques’, Revue française d’Administration
Publique, 112, 765–76.
Larédo, P. and P. Mustar (2001), ‘French research and innovation policy: two
decades of transformation’, in P. Larédo and P. Mustar (eds), Research and
Innovation Policies in the New Global Economy: An International Comparative
Analysis, Cheltenham, UK and Northampton, MA, USA: Edward Elgar, pp.
447–95.
Lundvall, B.-Å. and S. Borras (1997), The Globalising Learning Economy:
Implications for Innovation Policy, DG XII, EC, Brussels.
Lundvall, B.-Å. and B. Johnson (1994), ‘The learning economy’, Journal of
Industry Studies, 1 (2), 23–42.
Maurice, M., F. Sellier and J.-J. Silvestre (1986), The Social Foundations of
Industrial Power. A Comparison of France and Germany, Cambridge, MA: MIT
Press.
Merton, R. (1973), The Sociology of Science: Theoretical and Empirical
Investigations, Chicago, IL: Chicago University Press.
Meyer-Krahmer, F. (2001), ‘The German innovation system’, in P. Larédo, and
P. Mustar (eds), Research and Innovation Policies in the New Global Economy:
An International Comparative Analysis, Cheltenham, UK and Northampton,
MA, USA: Edward Elgar, pp. 205–51.
Meyer-Krahmer, F. and U. Schmoch (1998), ‘Science-based technologies; univer-
sity–industry interactions in four fields’, Research Policy, 27 (8), 835–52.
OECD (1998), Technology, Productivity and Job Creation – Best Policy Practices,
Paris: OECD.
OECD (2000), Science, Technology and Industry Outlook 2000, Paris: OECD.
Polanyi, M. (1962), ‘The Republic of Science: its political and economic theory’,
Minerva, 1 (1), 54–73.
Pestre, D. (1997), 'La production des savoirs entre économie et marché', Revue
d’Economie Industrielle, 79, 163–74.
Porter, M. (1998), ‘Clusters and the new economics of competition’, Harvard
Business Review, 76, 77–90.
Powell, W.W. and J. Owen-Smith (1998), ‘Universities and the market for intel-
lectual property in the life sciences’, Journal of Policy Analysis and Management,
17 (2), 253.
Salais, R. (1998), ‘Action publique et conventions: état des lieux’, in J. Commaille
and B. Jobert (eds), Les métamorphoses de la régulation politique, Paris: LGDJ,
pp. 55–82.
Saxenian, A.L. (1996), Regional Advantage: Culture and competition in Silicon
Valley and Route 128, Cambridge, MA: Harvard University Press.
Shinn, T. (2002), ‘Nouvelle production du savoir et triple hélice: tendances du
prêt-à-penser des sciences’, Actes de la Recherche en Sciences Sociales, N. spécial
Science, 141–2, 21–30.
Storper, M. (1998), ‘Conventions and the genesis of institutions’, http://216.239.59.
104/search?q=cache:TnNqv5cKKbQJ:www.upmf-grenoble.fr/irepd/regulation/
Journees_d_etude/Journee_1998/Storper.htm+Michael+Storper&hl=fr.
Storper, M. and R. Salais (1997), Worlds of Production: The Action Frameworks of
the Economy, Cambridge, MA: Harvard University Press.
Suleiman, E. (1995), Les ressorts cachés de la réussite française, Paris: Le Seuil.
Vinck, D. (1999), ‘Les objets intermédiaires dans les réseaux de coopération
scientifique, contribution à la prise en compte des objets dans les dynamiques
sociales’, Revue Française de Sociologie, XL (2), 385–414.
METHODOLOGICAL APPENDIX

Within the SESI project (Higher Education Systems and Industrial
Innovation, funded by the EC), from 1998 to July 2001, national public
action trajectories concerning RDI were studied from two angles: that
of public policies at national and local level, and that of firms’ practices,
especially from the point of view of relations with public research, both
basic and applied.
Three sectors were chosen in each country as being representative of
the new challenges emerging for the relationship between higher educa-
tion and industry in key areas where generic technologies are tending
to develop, albeit in different ways. The information technology sector,
whose growth has been very rapid, is of interest because it brings together
industrial production activities and customer service activities, in ways
specific to individual countries. The telecommunications sector, which has
undergone a huge amount of technological and organizational innova-
tion, was having its links with the public sector challenged by deregulation
in various EU countries just as the project was being launched. The phar-
maceutical sector, whose links with higher education and research date
back further, was facing the biotechnology revolution.
This project was based on the results of investigations carried out in
firms. Three companies per sector and per country were studied, making
a total of about 40. The initial idea was to take one ‘foreign’ multina-
tional, one large ‘national’ company and one SME from each sector, in an
attempt to have a comparable sub-sample for at least two countries.
On the company side, the interviewees were R&D managers, project
managers, research workers and engineers, HR managers and those
responsible for related fields such as contracts and patents. Among their
academic partners, interviews were conducted with heads of laborato-
ries, departments and projects, sometimes with research workers. Semi-
structured interviewing techniques were used with both types of partner,
based on a standardized interview guide devised for all firms in the various
countries. Each case study is divided into two parts. The first deals with
the firm’s trajectory and strategy in respect of innovation, competencies
and knowledge. The second contains a presentation of some actual cases
of collaboration between the firm and the higher education system in the
two fields of research and training (competencies).
APPENDIX

Table 8A.1  R&D expenditure: share in GDP in Germany, the UK and France (in %)

                   1981                1991                1995                2001                2003
                G     UK    F       G     UK    F       G     UK    F       G     UK    F       G     UK    F
Share in GDP   2.43  2.38  1.93    2.52  2.07  2.37    2.25  1.95  2.31    2.51  1.86  2.23    2.52  1.88  2.18

Source: OECD, PIST, May 2004, OST 2006.

Table 8A.2  R&D expenditure: breakdown by sources of funds in Germany, the UK and France

              Germany                        UK                             France
       1995   1999   2001   2003      1995   1999   2001   2003      1995   1999   2001   2003
(1)    60.0   65.4   65.7   66.1      48.2   48.5   46.9   43.9      48.3   54.1   54.2   50.8
(2)    37.9   32.1   31.4   31.1      32.8   29.9   29.1   31.3      41.9   36.8   36.9   40.8a
(3)     0.3    0.4    0.4    0.4       4.5    5.0    5.7    5.4       1.7    1.9    1.7    –
(4)     1.8    2.1    2.5    2.3      14.5   17.3   18.2   19.4       8.0    7.0    7.2    8.4

Notes: (1) = Industry; (2) = Public funding; (3) = Other national sources; (4) = Foreign funding.

Source: OECD, Data MSTI, May 2005.

Table 8A.3  World shares (European patents in %)

Countries   1990   1996   2000   2004
Germany     20.9   17.7   18.1   16.4
UK           7.0    5.8    5.3    4.8
France       8.3    7.1    6.3    5.6
EU-15a      48.1   43.0   42.6   39.7
USA         26.4   33.1   32.3   30.5

Note: a. EU-25 in 2004.

Source: Data INPI and DEB, OST Processing.

Table 8A.4  World shares (scientific publications in %)

Countries   1993   1995   2000   2004
Germany      6.7    6.7    7.2    6.4
UK           8.1    8.3    7.7    6.7
France       5.2    5.4    5.3    4.7
EU-15a      32.0   32.9   33.6   34.2
US          33.9   32.7   30.1   27.1

Note: a. EU-25 in 2004.

Source: Data ISI, OST Processing 2006.

Table 8A.5  Scientific publications: European shares by disciplines in % (2004) and evolution from 1999 to 2004

                                      2004                     Evolution 2004/1999 (%)
Disciplines                Germany    UK      France      Germany    UK      France
Basic biology                18.7     20.8     13.8          −2       −7      −11
Medical research             18.7     23.4     11.9           0       −6      −12
Applied biology/ecology      15.9     18.3     11.9         −10      −13      −10
Chemistry                    20.6     14.8     14.5         −14       −7       −8
Physics                      23.1     13.7     15.8          −9      −10       −5
Space sciences               16.7     20.5     14.5          −4      −12       −9
Sciences for engineering     16.8     19.0     13.5         −13      −20       −2
Mathematics                  15.9     12.7     20.5         −18       −8       −3
Total                        18.8     19.5     13.6          −6       −9       −9

Source: Data ISI, OST Processing 2006.

Table 8A.6  European patents by technological domains: European shares in % (2001) and evolution from 1996 to 2001

                                               2001                     Evolution 2001/1996 (%)
Technological domains               Germany    UK      France      Germany    UK      France
Electronics–electricity               37.7     12.9     15.2          +3       −7      −21
Instruments                           41.5     15.8     14.0         +11       −4      −22
Chemistry–materials                   47.1     13.2     13.2          −4       −8       −6
Pharmaceuticals–biotechnology         30.4     19.9     19.6         +12       −3       −4
Industrial processes                  42.2     10.1     13.0          −4       −9       +1
Mechanics–transports equipments       52.2      8.4     13.7         +14      −22      −20
Household equipment–construction      38.9     12.8     14.9          −2      +11       −6
Total                                 42.3     12.6     14.5          +3       −6      −12

Source: Data ISI, OST Processing 2004.


9. The role of boundary organizations
in maintaining separation in the
triple helix
Sally Davenport and Shirley Leitch

INTRODUCTION
Life Sciences Network, an umbrella group of industry and scientists who
support genetic engineering, wants the chance to contradict evidence given by
groups opposed to GE and to put new evidence before [the New Zealand Royal
Commission on Genetic Modification]. (Beston, 2001, p. 8)
A new hybrid organizational representation of action that is neither purely
scientific nor purely political is created. (Moore, 1996, p. 1621)

The triple helix is said to consist of co-evolving networks of communica-
tion between three institutional players: universities (and other research
organizations), industry and government (Leydesdorff, 2000). This separa-
tion into three distinct 'strands' implies the existence of only three players,
with distinct boundaries between each sphere of activity. However, there
are other organizations that mediate the interaction between science,
industry and government, such as bioethics councils (Kelly, 2003) and
environmental groups (Guston, 2001; Miller, 2001). These new boundary
organizations have arisen in the triple helix, in order to manage ‘bound-
ary work’ and ‘boundary disputes’ as a result of the ‘new demands on
researchers and their organizations’ (Hellstrom and Jacob, 2003, p. 235).
Boundaries are drawn based on the assumption that they describe ‘a
stable order’. This implies that the ordering of human actions and interac-
tions circumscribed by the boundary is restraining (Hernes and Paulsen,
2003, p. 6). However, boundaries can be both enabling and constraining
(Hernes, 2003, 2004). In the mainstream management literature there
is much talk of breaking down boundaries, as they are perceived to be
a barrier to communication and information flows (Heracleous, 2004).
In this literature organizational boundaries are perceived to be ‘real’,
physical entities. However, for others, boundaries are ‘complex, shifting
socially constructed entities’ (Heracleous, 2004), created to maintain both

internal order and external protection but also as 'a means of establish-
ing the organization as an actor in its relationships with other organiza-
tions’ (Hernes, 2003, p. 35). Thus the concept of a boundary is complex
and there is considerable ambiguity surrounding the role that boundary
organizations play in the triple helix (Kelly, 2003).
This chapter will explore the role of one boundary organization, an
incorporated society called the Life Sciences Network (LSN), which
was a vehicle for presenting a pro-GM (genetic modification) stance in a
national debate on behalf of the research and industry organizations that
it represented. It was active in New Zealand during an election, the Royal
Commission into GM (RCGM) and subsequent debates. In many ways,
the LSN was similar to other boundary organizations described in the
literature in that it was a vehicle for information provision and coordina-
tion of interests between triple-helix players. However, in other ways it
was distinctly different. The LSN, whose membership included a range of
scientific and other publicly funded organizations that could not all openly
engage in debate and lobbying for various reasons, shielded its member
organizations from significant engagement in the debate, especially with
anti-GM activist groups. In this way, this new sort of boundary organiza-
tion not only provided demarcation between science and non-science but
also allowed deniability.
In many ways, boundary organizations like the LSN are similar to
front organizations in that they play a distinctive communication role that
would be politically difficult for the individual members to undertake. The
use of front organizations is a common subject of discussion in political
science but they have not been recognized as potential boundary organiza-
tions in science studies. The role of lobby or interest groups, representing
a variety of causes, as a vehicle for the provision of information and other
resources to politicians and law makers, in return for influence, is well rec-
ognized in the former literature (e.g. Holyoke, 2003; Ainsworth and Itai,
1993; Baumgartner and Leech, 2001). Yet we know little about the use of
these potentially very influential boundary organizations as mediators in
the triple helix.

BOUNDARIES

Boundaries are a popular topic in many social science disciplines. For
example, ‘the concept of boundaries has been at the center of influential
research agendas in anthropology, history, political science, social psy-
chology, and sociology’ (Lamont and Molnar, 2002, p. 167). Working
from descriptions of boundaries in sociology, Hernes (2003) described
three types of boundaries: physical, social and mental. Physical bounda-
ries not only consist of ‘real’ objects and structures, but also of rules
and regulations governing the exchanges that may take place within the
organization or between the organization and its environment. Physical
boundaries can provide stability and predictability as ‘they serve to bind
resources over time and space’. They also ‘serve to create and consolidate
impressions of robustness’ externally, helping to ‘establish an organiza-
tion as a more recognized and powerful actor in a field of organizations’
(Hernes, 2003, p. 38).
Groups and organizations develop social boundaries to ‘distinguish
themselves from others’. The sense of identity created by a social boundary
enables the ‘creation of “otherness”’ and what is not part of the organiza-
tion. ‘Social boundaries depend on social interactions for their existence’,
are ‘central to the creation of behavioral norms’ and also ‘uphold patterns
of social power’ (Hernes, 2003, p. 39). Symbolic boundaries become social
boundaries when their characteristics have been agreed upon and ‘pattern
social interaction in important ways’ (Lamont and Molnar, 2002, p. 169).
They also provide protection to the group when its identity is perceived
to be threatened. Strong social boundaries may span formal boundaries
and can enable cooperation between members without close proximity
(Hernes, 2003, p. 39).
Mental boundaries are also about making distinctions and assist in how
individuals make sense of the world. They form the basis for ‘action inside
and outside the borders of the group’, and become embedded as collec-
tively held ‘tacit assumptions’. They allow the segmentation between ‘us’
and ‘them’ (Lamont and Molnar, 2002). Like social boundaries, they are
moderated by social interaction but are also guarded as a ‘basis for power’
and a ‘shield’ from ‘the outside world’ yet to be understood (Hernes, 2003,
p. 40).
To summarize, there are different types of boundaries that may not nec-
essarily coincide with each other. For the purposes of this chapter, physical,
social and mental boundaries allow not only the creation of a seemingly
‘stable’ concept of what is ‘inside’ the boundary; they also enable a demar-
cation of what is ‘outside’ the boundary and therefore facilitate expedient
interaction, or boundary work, with the external environment.

BOUNDARY WORK

The concepts of boundary organizations and boundary activities are not
new in science studies (Guston, 2001) and are often couched in terms of
the 'problem of demarcation' that has long concerned philosophers and
sociologists of science. In a key paper on boundary work, Gieryn (1983,
p. 781) described the traditional approach to the ‘problem of demarca-
tion’ as ‘how to identify unique and essential characteristics of science that
distinguish it from other kinds of intellectual activities’. The same author
likened boundary work to the literary concept of a ‘foil’ and suggested
that we are better able to learn about ‘science’ through contrasts to ‘non-
science’ (ibid., p. 791). In this way, ‘the power and prestige of science is
typically thought to be grounded in the ability of scientists to draw strong
distinctions between scientific and non-scientific interests’ (Moore, 1996,
p. 1592).
Boundary work was originally conceived to explain how science
defended the boundary of its communities from attack and was a problem
of ‘how to maintain control over the use of material resources by keeping
science autonomous from controls by government or industry’. ‘Boundary
work is an effective ideological style for protecting professional autonomy
. . . the goal is immunity from blame for undesirable consequences of non-
scientists' consumption of scientific knowledge' (Gieryn, 1983, p. 789).
Gieryn identified three occasions when boundary work was likely to be
employed as a resource for scientific ideologists. First, ‘when the goal is
expansion of authority or expertise’ into domains claimed by others, that
is ‘boundary work heightens the contrast between rivals in ways flatter-
ing to the ideologist’s side’. Second, when the goal is monopolization of
professional authority and resources, boundary work excludes rivals from
within by defining them as outsiders with labels such as ‘pseudo’, ‘deviant’
or ‘amateur’. Third, ‘when the goal is protection of autonomy over pro-
fessional activities, boundary work exempts members from responsibility
for consequences of their work by putting the blame on scapegoats from
outside’ (ibid., pp. 791–2).
Moore (1996) studied the rise of US public interest science organiza-
tions, such as the Union of Concerned Scientists, as 'boundary work'
activities, in that they ‘created a new form of action among scientists that
was deemed neither purely scientific nor purely political’, and that they
‘permitted the preservation of organizational representations of pure,
unified science, while simultaneously assuming responsibilities to serve the
public good’. In the case of public interest science organizations, science
and its relation to politics became the main object of ‘action’, and enabled
the alignment between the interests of scientists and their patrons (Moore,
1996, pp. 1592–8).
However, the fact that scientists are portrayed as ‘unified’ is problem-
atic in descriptions of boundary work. Moore notes that ‘scientists, like
other people, sometimes exhibit commitments to multiple social identities’
so that notions of unified scientists ignore ‘variation in when, how and
around what issues they are unified’. ‘Scientists exhibit a wide range of
political opinions, religious beliefs, and income levels; these differences
impinge upon the kinds of claims that scientists make about the proper
relationships between science and politics as well as forming the basis for
conflict among scientists. Thus, the process of setting boundaries is not
simply a struggle between a unified group of scientists and non-scientists,
but a process of struggle among scientists as well’ (Moore, 1996, p. 1596).
Gieryn also saw this variation as a source of ambiguity in the notion of
boundary work, noting that ‘demarcation is as much a practical problem
for scientists as an analytical problem for sociologists and philosophers’
(1983, p. 792). He argued that ambiguity surfaces because of inconsisten-
cies in the boundaries constructed by different groups of scientists inter-
nally or in response to different external challenges, as well as because of
the conflicting goals of different scientists. Moore noted that the formation
of the boundary organizations helped perpetuate the perception of unity
in science by preserving ‘the professional organizations that represented
“pure” science and unity among scientists’ (1996, p. 1594).
The demarcation problem is still of great interest to science studies
(e.g. Evans, 2005 and papers in that special issue of Science, Technology
& Human Values). However, in more recent times, boundary work has
evolved to mean the ‘strategic demarcation between political and scien-
tific tasks in the advisory relationship between scientists and regulatory
agencies’ (Guston, 2001, p. 399). This newer version of boundary work
has taken on a less instrumental tone and the boundaries are viewed as
means of communication rather than of division (Lamont and Molnar,
2002). The demarcation between politics and science is viewed as some-
thing to bridge, with boundary organizations seen as mediators between
the two realms (Miller, 2001), and successful boundary organizations are
said to be those that please both parties. In this new framing, boundary
organizations may help manage boundary struggles over authority and
control but are primarily focused on facilitating cooperation across social
domains in order to achieve a shared objective. They ‘are useful to both
sides’ but ‘play a distinctive role that would be difficult or impossible for
organizations in either community to play’ (Guston, 2001, p. 403, citing
the European Environmental Agency).

THE RESEARCH PROJECT

The research reported in this chapter is derived from a much larger
five-year study on the sociocultural impact of biotechnology in New
Zealand funded by the New Zealand Foundation for Research Science &
Technology (UoWX0227). The study has involved hundreds of interviews
and focus groups with organizations, groups and individuals involved
with biotechnology as scientists, policy managers, entrepreneurs, consum-
ers, members of ethnic, cultural or religious groups, environmentalists and
activists. These interviews have been augmented by a thematic analysis
of websites, official documents, annual reports, promotional material,
media material and other documents. They have been further augmented
by researcher observation of various events, such as conferences and
workshops, staged by actors in the biotechnology sector.
For this chapter, we drew on interviews with triple-helix players
(researchers, government and industry representatives), with LSN, Royal
Society of New Zealand (RSNZ) and Association of Crown Research
Institutes (ACRI) representatives, as well as secondary data such as media
and website material. The data were analysed and categorized according
to our interest in the ‘boundary’ aspects of the strategies and action of
the LSN, its members and other triple-helix players. A case study of the
LSN was developed, which is now summarized to provide context for our
discussion of the LSN as an interesting boundary organization.

THE LIFE AND TIMES OF THE LIFE SCIENCES NETWORK

The Life Sciences Network Incorporated (LSN) was a non-profit body
incorporated in 1999 with the overall objective, according to its Executive
Director Francis Wevers, to advocate ‘cautious, selective and careful’ use
of GM technology ‘to deliver benefits to New Zealand as a whole’. ‘While
the emotional non-scientific issues were important’, Wevers said, ‘they’ve
got to be considered in balance . . . we’ve got to be informed by knowledge,
not by emotion.’
As indicated by the objectives laid out in the LSN Constitution (Box
9.1), LSN essentially wanted to coordinate responses to GM issues and to
provide information to decision-makers and the public. In part, LSN was
a successor to the Gene Technology Information Trust (more commonly
known as ‘GenePool’), which had also claimed to be an independent pro-
vider of information on GM. However, GenePool had been wound up
in September 1999 after concern was raised by the Green Party over its
support from Monsanto (e.g. Espiner, 1999) and for an apparent conflict
of interest (e.g. Samson, 1999) between a communications company that
worked both for GenePool and for New Zealand King Salmon (Weaver
and Motion, 2002), a company that had allegedly attempted to cover up
deformities in some genetically modified salmon it had bred.
BOX 9.1 OBJECTIVES OF THE LSN

The [LSN] is formed to promote the following objectives:

1. To promote the strategic economic opportunity available to
each country to benefit from the application of biotechnology
in the expanding knowledge age.
2. To assist members to effectively input into public biotechnol-
ogy research and development policy, both individually and
collectively.
3. To positively influence the advancement of responsible bio-
technology research and development within an appropriate
framework of regulatory controls based on scientific and risk
management principles.
4. To positively influence the continued availability of:
a. medicine derived from biotechnology or genetic engineer-
ing;
b. products used in crime prevention, food safety testing and
other similar applications, which involves or are derived
from biotechnology;
c. biotechnology applications and advances which benefit
each country, including but not limited to, applications in
manufacturing and production, pest control, environmen-
tal protection, production of quality food products, human
and animal health and welfare.
5. To provide:
a. An active voice for the creation of a positive environment
for responsible use of genetic modification with appropri-
ate caution;
b. Accurate and timely information and advocacy to respond
to issues related to biotechnology as they arise, and;
c. Assisting and obtaining cooperation of organisations that
will or may benefit from the application of genetic modifi-
cation procedures.
Source: Extracted from the LSN Constitution, 2002.
The formation of LSN was driven by William Rolleston, a doctor
who came from an established farming tradition but had also launched
a very successful biopharmaceuticals firm, South Pacific Sera Ltd. He
was also the founding chairman of Biotenz, an association for biotech-
nology businesses, which ‘use the application of science and technol-
ogy to create biological-based wealth’ (Worrall, 2002). In 2000, when
New Zealand environmental groups started voicing concerns about
GM, Biotenz decided that there was a need for another body to voice
opinions. ‘Biotenz wanted to add balance and facts so that debate on
GM could be rigorous and well informed. We wanted to counteract the
“mythinformation”, the scaremongering talk of things like Frankenfood
science-fiction nonsense that created unnecessary fear among the public’
(Worrall, 2002).
In July 2002, LSN had a membership of 22 organizations including 13
industry organizations (with one based in Australia) and producer boards,
four Crown Research Institutes (CRIs – primarily government-funded
research organizations), three universities, one independent research
organization and one industry cooperative. Members paid between
NZ$1000 and $20 000 per year depending on the size of the organization,
contributing to the LSN's annual running costs of about NZ$300 000, with
additional costs such as the extra $500 000 during the RCGM (Fisher,
2003). Together, the member organizations apparently represented pro-
ducers of up to about 70 per cent of New Zealand’s gross domestic
product through the member industry organizations, which included
many farmer and manufacturer groups (Worrall, 2002). There were many
questions over its membership, particularly from those who believed that
public funding should not be used to fund what was, essentially, a lobby
group (e.g. Hawkes, 2000; Collins 2002a, 2002b).
The LSN was probably most active, and had its highest profile, around
the time of the general election in 2002 (during which GM was a major
election issue1), during the RCGM in 2001/2002 and when the moratorium
on the release of GM crops was lifted in 2003. It was given ‘interested
person’ status2 during the RCGM, even though many of its constituent
groups were also granted such status independently, on the grounds that it could
play a strong role in coordinating evidence in relation to scientific argu-
ments that were likely to be raised by ‘multinational’ organizations such as
Greenpeace and Friends of the Earth (Samson, 2000). During these hectic
few years, there were, at times, daily press releases and articles in various
media, providing the LSN perspective on the potential of GM or rebutting
the views of those against the GM-based technology.
The LSN espoused a very public strategy for its involvement in the
RCGM (Box 9.2). In particular, the LSN desired that the totality of its

BOX 9.2 LSN CAMPAIGN STRATEGY FOR THE


RCGM

For the New Zealand Life Sciences Network (Inc.) to:

1. Build public, and therefore political confidence in gene tech-


nology, science and scientists through information, education
and knowledge.
2. Coordinate the activities of member organisations and network
participants to ensure the public, media, politicians and Royal
Commission are presented with the most complete case in
favour of continued responsible research, development and
application of gene technology in New Zealand.
3. Work with non-member organisations to seek to achieve its
desired campaign outcome.

Implications of strategy

1. While the strategically important campaigns run outside, the


Royal Commission is the focal point from which other activity
can be leveraged.
2. The totality of submissions and evidence to the Royal
Commission of Inquiry into Genetic Modification by the
Network and its members should constitute a shadow Royal
Commission report and recommendations.
3. Fully engage with Royal Commission to ensure a complete
case is presented and heard.
4. Every misrepresentation or erroneous accusation made by
the organisations of anti-GMO (that is Greenpeace, Friends
of the Earth, Green Party) should be challenged and rebut-
ted.
5. Fears expressed by ordinary New Zealanders should be lis-
tened to and acknowledged.
6. The focus of all communications, messages, submissions,
evidence should be to establish the truth of the assertions as
stated in our desired outcomes.
7. Expert witnesses from New Zealand and overseas should
be available for media interviews; public meetings and other
opportunities to build public confidence through information,
education and knowledge.

8. All opportunities for positive media coverage of the submis-


sions and evidence of the Network, its members and wit-
nesses should be taken.
9. Scientists must be prepared to stand up to re-build public trust
in their science and themselves.
10. Scientists must be prepared to engage and win over, through
logical debate, church leaders because they are still per-
ceived to be the arbiters of ethical standards.
Source: Extracted from the LSN website education archives, http://www.
lifesciencesnetwork.com/educationarchives.asp, accessed March 2005.

submissions, along with member group submissions, should ‘constitute a


shadow Royal Commission report and recommendations’. The outcome
of the RCGM, with its overall recommendation to ‘keep options open’
and to ‘preserve opportunities’, therefore reassured the LSN that it
was accomplishing its objectives. Looking back on the results of the
RCGM, Wevers stated that the LSN

could have written the Royal Commission’s report . . . the Royal Commission
basically accepted the moderate position we took . . . all that reflects is the fact
that the people in organizations which made up the Life Sciences Network were
not some radical outrageous group of people looking at this in an irrespon-
sible way . . . [but] that the position that we had come to was one which was
consistent with good policy, and consistent with the strategic interests of New
Zealand, which is what the Royal Commission was about.

With the government’s decision to ‘proceed with caution’, and the


subsequent lifting of the moratorium on GM in October 2003, the initial
objectives of the LSN were largely satisfied, but members expressed a wish
that the capabilities of the LSN not be lost and that, whilst the GM issue
might have subsided, there were other issues that could be addressed,
albeit not by the LSN in its present incarnation. Thus the LSN evolved
into the Biosciences Policy Institute, set up to promote ‘the education of
the public of New Zealand in matters relating to the biological sciences’
by providing non-governmental policy analysis and development. Wevers
was the Institute’s executive director and a former prime minister chaired
it. Wevers’s intentions for the ‘very, very low profile’ organization were
to ‘reflect the diversity of opinions and to encourage people and deci-
sion makers to make policy decisions on the basis of the best evidence
and the best scientific support’. A news service, which had been very
heavily subscribed during the RCGM period, was to expand to six-weekly

digests, a journal was to be launched and a biennial conference was to be


organized.
In May 2004, after less than a year in operation, the Biosciences Policy
Institute announced it was closing down due to ‘insufficient financial
support’ (Collins, 2004). Of the demise of the Biosciences Policy Institute
and dormancy of the LSN, Wevers argued, ‘It was a single-issue entity
set up right from the beginning to achieve a very specific objective, which
made it easy at the end to say “OK, we’ve done our job”.’ However,
the ‘absence of an important issue is the thing we haven’t been able to
overcome . . . There is no public controversy anymore . . . the public has
moved on.’ The LSN still exists, although it no longer has a staff resource.
However, as Rolleston stated, ‘It’s still there should industry and science
need to get together to deal with other issues where fear and over-reaction
arise’ (Collins, 2004). In the same month that the Biosciences Policy
Institute closed, Wevers was awarded the Supreme Award by the Public
Relations Institute of New Zealand for the LSN campaign on GM.

DISCUSSION

There is no doubt that, for its member organizations, the LSN was a very
successful boundary organization in that it achieved the outcome that was
desired through being very actively involved in the political discussion and
negotiation over the future of GM in New Zealand. Like any lobby group,
the purpose of this type of boundary organization was to supply informa-
tion from a particular perspective, in the hope of influencing decisions
made. Even though the physical aspects of the LSN (staff numbers, office
space etc.) were very small relative to those of other organizations, the iden-
tity of the organization and the ‘boundary’ between the LSN and ‘science’
were very effectively created by its constitution, its membership consisting
of representative industry organizations (not individual companies, having
learnt from the maligned participation of Monsanto in GenePool) and
research organizations (rather than individual scientists), its website and
prolific press releases and media articles. By responding at every opportu-
nity and rebutting any anti-GM sentiment expressed ‘in public’, the LSN
became the ‘voice’ for pro-GM. As one industry representative stated, ‘the
pro-GM side would have been a mess without [the LSN]’.
In a societal debate, such as that which occurred around the role of
GM in New Zealand, the audience was very diffuse and LSN’s aim was
to directly influence ‘public opinion’, and ultimately political decision-
making and resource allocation, in favour of the member organizations’
objectives but without the member organizations needing to actively

participate or publicly ‘own’ the perspective or information supplied, as


indicated above. This is particularly important for members that were
publicly funded, such as the CRI and university members of the LSN.
Thus this type of boundary organization works towards, and enables,
the legitimation of a view, such as a pro-GM approach to New Zealand’s
future, rather than legitimation of the strategies of the member organiza-
tions. In other words, the front organization is the ‘active voice’ (a phrase
used in the LSN constitution), which enables a separation to be maintained
between the voiced perspective and those players who wish to express it.
In many ways, this type of boundary organization is true to the original
concept in that organizations like the LSN can be seen to be defending
scientific boundaries, in this case of GM science, from attack.
In addition to providing a voice for its members, the LSN also gave
New Zealand science professional organizations, such as the RSNZ and
the ACRI, space to appear more moderate and conciliatory in the range of
views of their scientist members. Of the delineation between the LSN and
RSNZ, Wevers stated:

The [RSNZ] avoided being involved in this issue because it was schizophrenic.
It had one view which was the dominant view of the physical scientists and
another view called ‘traditional scientist’ which was a view espoused by a group
of social scientists and the two views were in conflict. So the [RSNZ] was unable
to take a leadership role in this and it was uncomfortable about that but realised
its own political reality. The [LSN] was specifically set up to be a focal point and
we weren’t going to beat around the bush and pretend we weren’t advocates.

Even though the ACRI had common membership with the LSN, with
four of the nine CRIs also members of the LSN, it was constrained by
its representative function. As explained by Anthony Scott, executive
director of ACRI:

[ACRI] is the voice for the nine CRIs on matters which the CRIs have in
common and on which they agree to have a common voice. We’re representa-
tive when we’ve been authorised but cannot bind any member . . . Almost all
of [the CRIs] were involved in GM research. Some are really leading from the
front. Some were, or were likely to be, primarily researching whether GM was
‘safe’ or what was likely to happen [if GM release occurred] in the New Zealand
situation. So some CRIs felt that they had to not only be, but had to be seen to
be, independent of an advocacy role. Some of course, were doing both. So the
ACRI role was to say to New Zealand ‘let’s talk about it, let’s understand what
the issues are, let’s have an informed discussion’.

Scott continued, ‘I think it was a wise strategy for those CRIs that
wanted to more strongly advocate the use of GM to join the LSN. It
avoided cross-fertilisation, if you like.’

As illustrated by these quotes, the LSN played a second demarcation role,


which Moore (1996) also observed with the public interest science organiza-
tions, in that the LSN was able to undertake the ‘trench-fighting’ role that
the RSNZ and the ACRI were unable to do (even if they had so wished)
because of the diversity of their membership. Thus the LSN enabled these
professional organizations to preserve their positions as representative
organizations of ‘pure science’ calling for more informed debate while, at
the same time, the LSN was able to be more ‘assertive and aggressive’ and
undertake the ‘trench-fighting’ for pro-GM science. Scott said that, in his
view, the LSN was useful in that ‘in a socially and politically contentious
issue such as [GM], a number of the CRIs were able to mediate their activi-
ties through that and get involved in what was a highly politicised debate’.
To some extent the same could be said of the role LSN played for the
representative industry organizations that were LSN members. As Wevers
argued, ‘these are organizations that have got diverse memberships, in
which they didn’t want to have that debate internally because of the inter-
nal disruption that would occur and so, consequently, it’s been important
to them in a strategic sense, to have these things said in public to enable
debate to develop’. Thus this demarcation of the political role into another
organization facilitates unity in the member organizations by minimizing
the opportunities for disruptive debate internal to the organizations, but
still facilitates debate in ‘public’. This is not just a need of ‘science’ but also
perhaps a need of any organization that purports to be representative of
diverse members on a range of issues, to have a ‘front’ organization play
the more contentious role.
The demarcation maintained by the LSN did more, however, than just
remove the member organizations from active participation in the debate.
The separation certainly helped the member organizations privilege their
seemingly distant positions, which may (perhaps necessarily for public
sector organizations) have been more ‘neutral’ in stance. More signifi-
cantly, however, the separation in the relationship allowed the member
organizations to deny aspects of the front organization’s message that did
not suit its circumstances at a particular time, such as when it was chal-
lenged by other stakeholders. As publicly funded institutes, the CRIs’ and
universities’ participation in the LSN was open to challenge by parliamen-
tarians and the media, as happened just before the 2002 New Zealand elec-
tion when the Green Party leader questioned the suitability of membership
of the four publicly funded CRIs in the LSN (Hawkes, 2000). Again,
during the election, the appropriateness of ‘taxpayer’ funding (from the
CRIs) being used for pro-GE advertisements (Collins, 2002a; 2002b)
was queried. A typical media response from one CRI CEO illustrates the
deniability factor enabled by the separation:

[The CEO] approved the [research organization’s] contribution as part of the


institute’s wider programme of public education. ‘I think it’s appropriate for
us to provide information to the public, and that is what the Life Sciences
Network is mostly about – factual information.’ (Collins, 2002a)

The demarcation in the relationships with the boundary organization


enabled diverse players to join together in a way that they might not be able
to do otherwise, in order to facilitate an action that could not be achieved
by individual players. This demarcation not only allows an individual
organization to distance itself from the debate but also enables a deni-
ability factor in that such ‘front’ organizations, having effectively created
a ‘boundary’, are viewed as relatively independent of their members. This
intended effect is illustrated in Rolleston’s rebuttal of accusations against
LSN: ‘The [LSN] did not tell its members what to say or think, and the
[member organizations] would be free to act independently. The main
reason for joining was to exchange information’ (Hawkes, 2000).
The ‘front’ aspect of boundary organizations also allows this demarca-
tion to happen on a temporary basis, maintaining a separation while a
strategy is facilitated. Once the desired action has been implemented, the
member organizations no longer require the front organization as a voice
for their strategy. The ambiguous separation between a boundary organi-
zation and its members is no longer required and the temporary boundary
organization is ‘retired’. In this case, the LSN tried to evolve into a more
permanent independent policy organization, but was unable to find enough
financial support from previous or prospective members. As Moore
observes, ‘the longevity of an organization (or form of organization) is
largely determined by the ability to obtain resources such as members
or monies . . . to provide a product to those that need it, and to fit with
prevailing ideas about legitimate forms of organization’ (1996, p. 1621).
We propose that, unlike other boundary organizations discussed in the
literature, it is in the nature of these ‘front’ and/or ‘single-issue’ bound-
ary organizations that they are mostly, like the LSN, short-term entities
that dissolve once the conditions necessitating their creation no longer exist. Unlike
Moore’s (1996) public interest science organizations that had survived
for several decades, the characteristics of demarcation and deniability
that come with such ‘front’ boundary organizations as the LSN run
counter to the transparency and accountability usually accepted as part
of the operations of publicly funded organizations. However, the (albeit
temporary) demarcation and deniability afforded by organizations such
as the LSN may, in fact, enable far less disturbance of the ‘system’ than
if the member organizations had more openly engaged in the public
debate and ‘trench-fighting’. Relationships could feasibly be severely and

irreversibly disturbed if players entered ‘the trenches’ themselves in order


to voice perspectives or pursue strategies to ‘protect’ science and research
organizations in the traditional sense of boundary work.
Thus the role of such boundary organizations in temporarily providing
separation contributes to sustaining the long-term stability of the relation-
ships between players, even though the ambiguity (and potential deniabil-
ity) in the relationship blurs the ‘normal’ boundaries between science and
political realms during its existence. Using this line of reasoning, the fact
that the Biosciences Policy Institute did not survive is not surprising, as
the conditions that instigated the need for, and tolerance of, the LSN were
no longer present in the sciento-political reality after the lifting of the GM
moratorium. Although some may view the ambiguity that surrounds the
role of these sorts of lobby groups as potentially disruptive to triple-helix
relationships, our research suggests that these organizations will only be
formed, and the ambiguity tolerated, under certain conditions. Attempts
to prolong the life of such boundary organizations once conditions have
changed may be problematic because the very ambiguity surrounding the
separation between players is not likely to be tolerated on a longer-term
basis in a new political environment. Thus such temporary boundary
organizations can be viewed as serving a useful short-term function for
the science they represent. As one industry representative reflected, ‘there
was a job needing doing. [The LSN] was an organization for its time’.
Temporary boundary organizations such as the LSN may, in fact, be a
pragmatic alternative to direct engagement of different scientific players in
political debates, which could potentially be very disruptive of the stable
interior of ‘bounded’ science.

CONCLUSION

The aim of this chapter was to provide an example of an organization inter-


mediary between science, industry and government that extends the current
conceptions of what roles such boundary organizations can play. The LSN
was an organization set up to provide the active pro-GM voice that repre-
sentative science (and industry) organizations were unable to undertake,
during a societal debate on the role of GM in New Zealand. The LSN very
effectively played the communication roles suggested for boundary organi-
zations, especially with its public relations campaign aimed at providing
politicians with accessible information that supported the LSN cause.
However, we argue that the LSN also performed several other impor-
tant functions. It provided the demarcation between science and politics,
which boundary work was originally conceived to explain. It also enabled

an important element of deniability for the member science organizations


when stakeholders questioned the use of public funds for promoting one
‘side’ of the debate. Both of these characteristics of the ‘boundary’ created
by the LSN were particularly salient for the public-funded research
organizations, but were also equally useful to representative industry
organization members.
The particular contribution of this chapter is also to propose that the
existence of such boundary organizations is almost inevitably likely to be
fairly short and determined by the life-cycle of the issues they represent
in societal debate. Member organizations and their stakeholders (includ-
ing ‘the public’) are only likely to tolerate the ambiguity inherent in the
demarcated relationship while ‘the job needs doing’. We propose that such
short-term boundary organizations be viewed as a pragmatic solution to
the alternative of direct engagement in the debate by science, which could
be far more disruptive to ‘science’ and to relationships between triple-helix
players than the activities of a boundary organization separated at arm’s
length from ‘representative’ bodies and science in general.
Guston (2004) argued that we should be democratizing science by cre-
ating institutions and practices that fully incorporate principles of acces-
sibility, transparency and accountability. ‘Science advising in government
is unavoidably political, but we must make a concerted effort to make sure
it is democratic’, Guston (2004, p. 25) suggested, adding that we should be
asking, ‘What are the appropriate institutional channels for political dis-
course, influence and action in science?’ It is debatable whether the LSN
would, from some perspectives at least, be categorized as a ‘democratic’
boundary organization given the demarcation and deniability characteris-
tics. Yet it was tolerated for a period of time by many players in the soci-
etal debate and viewed as very successful by its members and some parts
of science. Given that science has evolved its boundaries to supposedly
encompass an unproblematic unified whole, such short-term ‘undemo-
cratic’ boundary organizations appear to be tolerated in certain political
conditions in order to preserve systems and relationships that underpin the
long-term stability of ‘science’.

NOTES

1. During the lead-up to the election, the government was accused of having covered up
the importation of corn with a modicum of GM contamination. ‘Corngate’ began with a
very public ‘ambush’ of the prime minister with the accusations in a live TV interview.
2. An organization was able to claim ‘interested person’ status if it could prove it had an
interest in the RCGM, over and above that of any interest in common with the public
(Davenport and Leitch, 2005).

REFERENCES

Ainsworth, S. and I. Sened (1993), ‘The role of lobbyists: entrepreneurs with two
audiences’, American Journal of Political Science, 37, 834–66.
Baumgartner, F. and B. Leech (1996), ‘Issue niches and policy bandwagons: pat-
terns of interest group involvement in national politics’, Journal of Politics, 63,
1191–213.
Beston, A. (2001), ‘GE parties fight to finish’, New Zealand Herald, 26 February.
Collins, S. (2002a), ‘Taxpayer cash in pro-GE adverts’, New Zealand Herald, 25
July.
Collins, S. (2002b), ‘Varsity unit gives cash to pro-GE fund’, New Zealand Herald,
26 July.
Collins, S. (2004), ‘Pro-GM lobby institute closes’, New Zealand Herald, 7 May.
Davenport, S. and S. Leitch (2005), ‘Agora, ancient & modern and a framework
for public debate’, Science & Public Policy, 32 (3), 137–53.
Espiner, G. (1999), ‘Accepting Monsanto money naïve, says Kirton’, The Evening
Post, 3 September, 2.
Evans, R. (2005), ‘Introduction: demarcation socialized: constructing boundaries
and recognizing difference’, Special Issue of Science, Technology & Human
Values, 30, 3–16.
Fisher, D. (2003), ‘Money talks for pro-GE spin machine’, Sunday Star Times, 16
November.
Gieryn, T. (1983), ‘Boundary-work and the demarcation of science from non-
science: strains and interests in the professional ideologies of scientists’, American
Sociological Review, 48, 781–95.
Guston, D. (2001), ‘Boundary organizations in environmental policy and science:
an introduction’, Special Issue of Science, Technology, & Human Values, 26 (4),
399–408.
Guston, D. (2004), ‘Forget politicizing science. Let’s democratise science!’, Issues
in Science and Technology, 21, 25–8.
Hawkes, J. (2000), ‘Row flares over council’s pro-GE stance’, Waikato Times, 28
August, 1.
Hellstrom, T. and M. Jacob (2003), ‘Boundary organizations in science: from dis-
course to construction’, Science & Public Policy, 30 (4), 235–8.
Heracleous, L. (2004), ‘Boundaries in the study of organization’, Human Relations,
57, 95–103.
Hernes, T. (2003), ‘Enabling and constraining properties of organizational bound-
aries’, in N. Paulsen and T. Hernes (eds), Managing Boundaries in Organizations:
Multiple Perspectives, New York: Palgrave, pp. 35–54.
Hernes, T. (2004), ‘Studying composite boundaries: a framework for analysis’,
Human Relations, 57, 9–29.
Hernes, T. and N. Paulsen (2003), ‘Introduction: boundaries and organization’,
in N. Paulsen and T. Hernes (eds), Managing Boundaries in Organizations:
Multiple Perspectives, New York: Palgrave, pp. 1–13.
Holyoke, T. (2003), ‘Choosing battlegrounds: interest group lobbying across mul-
tiple venues’, Political Research Quarterly, 56 (3), 325–36.
Kelly, S. (2003), ‘Public bioethics and publics: consensus, boundaries, and partici-
pation in biomedical science policy’, Science, Technology, & Human Values, 28
(3), 339–64.

Lamont, M. and V. Molnar (2002), ‘The study of boundaries in the social sciences’,
Annual Review of Sociology, 28, 167–95.
Leydesdorff, L. (2000), ‘The triple helix: an evolutionary model of innovations’,
Research Policy, 29, 234–55.
Miller, C. (2001), ‘Hybrid management: boundary organizations, science policy,
and environmental governance in the climate regime’, Science, Technology &
Human Values, 26 (4), 478–501.
Moore, K. (1996), ‘Organizing integrity: American science and the creation of
public interest organizations, 1955–1975’, American Journal of Sociology, 101 (6),
1592–627.
Samson, A. (1999), ‘Experts discuss action over deformed fish’, The Dominion, 23
April, 6.
Samson, A. (2000), ‘Scientists seek active role in GE Inquiry’, The Dominion, 14
August, 8.
Weaver, C.K. and J. Motion (2002), ‘Sabotage and subterfuge: public relations,
democracy and genetic engineering in New Zealand’, Media, Culture & Society,
24, 325–43.
Worrall, J. (2002), ‘GM man in the middle’, The Timaru Herald, 19 January.
10. The knowledge economy: Fritz
Machlup’s construction of a
synthetic concept1
Benoît Godin

In 1962, the American economist Fritz Machlup published an influ-


ential study on the production and distribution of knowledge in the
USA. Machlup’s study gave rise to a whole literature on the knowledge
economy, its policies and its measurement. Today, the knowledge-based
economy or society has become a buzzword in many writings and dis-
courses, both academic and official. Where does Machlup’s concept of
a knowledge economy come from? This chapter looks at the sources
of Machlup’s insight. It discusses The Production and Distribution of
Knowledge in the United States as a work that synthesizes ideas from four
disciplines or fields of research – philosophy (epistemology), mathematics
(cybernetics), economics (information) and national accounting – thus
creating an object of study, or concept for science policy, science studies
and the economics of science.

INTRODUCTION

According to many authors, think tanks, governments and international


organizations, we now live in a knowledge-based economy. Knowledge is
reputed to be the basis for many if not all decisions, and an asset to indi-
viduals and firms. Certainly, the role of knowledge in the economy is not
new, but knowledge is said to have gained increased importance in recent
years, both quantitatively and qualitatively, partly because of information
and communication technologies (see Foray, 2004).
The knowledge-based economy (or society) is only one of many con-
ceptual frameworks developed over the last 60 years to guide policies.
In this sense, it competes for influence with other frameworks such as
national innovation systems and triple helix. Certainly, all these concep-
tual frameworks carry knowledge as one dimension of analysis, but only


the knowledge-based economy has knowledge itself – its production and


use – as its focus.
The concept of a knowledge economy comes from Fritz Machlup.
In 1962, the Austrian-born economist published a study that measured
the production and distribution of (all kinds of) knowledge in the USA
(Machlup, 1962a). The author estimated that, in 1958, the knowledge
economy accounted for $136.4 billion, or 29 per cent of GNP. Machlup
was the first to measure knowledge as a broad concept, while other meas-
urements were concerned with the production of scientific knowledge,
namely research and development (R&D), not its distribution.
Machlup’s calculations gave rise to a whole literature on the knowledge
economy, its policies and its measurement. The first wave, starting in the
1970s, was concerned with the so-called information economy. In fact,
both information and knowledge as terms were used interchangeably
in the literature. Using Machlup’s insights and the System of National
Accounts as a source for data, Porat (1977) calculated that the informa-
tion economy amounted to 46 per cent of GNP and 53 per cent of labour
income in the USA in 1967. Porat’s study launched a series of similar
analyses conducted in several countries and in the OECD (e.g. Godin,
2007). The second wave of studies on the knowledge economy started in
the 1990s and continues today. The OECD, and Foray as consultant to the
organization, relaunched the concept of a knowledge economy, with char-
acteristics broadly similar to those Machlup identified (Godin, 2006a).
This chapter is concerned with explaining where Machlup’s concept of
the knowledge economy comes from. In fact, from the then-current litera-
ture, it can easily be seen that the term, as well as Machlup’s definition,
was not entirely new. Philosophy was full of reflections on knowledge,
and some economists were beginning to develop an interest in knowledge.
Equally, Machlup’s method for measuring knowledge – accounting –
already existed, namely in the fields of the economics of research and the
economics of education. So, where does Machlup’s originality lie?
The thesis of this chapter is that The Production and Distribution
of Knowledge is a work of synthesis. First, the book is a synthesis of
Machlup’s own work conducted before its publication. Second, and more
importantly, the book is a synthesis of ideas from four disciplines or
fields of research: philosophy (epistemology), mathematics (cybernetics),
economics (information) and national accounting. Contrary to the work
of some economists, this chapter takes Machlup’s work on knowledge
seriously. Langlois, for example, has suggested that The Production and
Distribution of Knowledge is ‘more a semantic exercise than an economic
analysis . . . , categorizing and classifying, defining and refining, organ-
izing and labeling’ (Langlois, 1985). Given the influence the book had on

science studies (although not on the mainstream economic literature) and


on policy discourses, we believe that this assertion offers a biased view of
history. As we discuss, there are several methods for quantifying knowl-
edge, and these are in competition. Machlup’s method was definitely not
orthodox in mainstream economics, and Langlois’s judgement precisely
illustrates this fact.
The first section of this chapter discusses Machlup’s construction of the
concept of knowledge and the sources of this construction. It looks at the
definition of knowledge as both scientific and ordinary knowledge, and
both production and distribution, and its ‘operationalization’ into four
components: education, R&D, communication and information. Writings
on epistemology, cybernetics and the economics of information are identi-
fied as the main intellectual inspiration for this construction. The second
section analyses Machlup’s measurement of the knowledge economy
based on a method of national accounting. This method is contrasted
with economists’ most cherished method: growth accounting. The final
section identifies the message or policy issues that Machlup associated
with the knowledge economy.

MACHLUP’S CONSTRUCTION

Fritz Machlup (1902–83), born in Austria, studied economics under


Ludwig von Mises and Friedrich Hayek at the University of Vienna in the
1920s, and emigrated to the USA in 1933.2 His two main areas of work
were industrial organization and monetary economics, but he also had a
life-long interest in the methodology of economics and the ideal-typical
role of assumptions in economic theory. Machlup’s work on the knowl-
edge economy, a work of a methodological nature, grew out of five lec-
tures he gave in 1959 and 1960. The rationale or motive Machlup offered
for studying the economics of knowledge was the centrality of knowledge
in society, despite the absence of theorizing in the economic literature. To
Machlup, ‘knowledge has always played a part in economic analysis, or at
least certain kinds of knowledge have . . . But to most economists and for
most problems of economics the state of knowledge and its distribution in
society are among the data assumed as given’ (Machlup, 1962a, pp. 3–4).
To Machlup, ‘now, the growth of technical knowledge, and the growth of
productivity that may result from it, are certainly important factors in the
analysis of economic growth and other economic problems’ (ibid., p. 5).
However, Machlup argued, there are other types of knowledge in addi-
tion to scientific knowledge. There is also knowledge of an ‘unproductive’
type for which society allocates ample resources: schools, books, radio

and television. Also, organizations rely more and more on ‘brain work’ of
various sorts: ‘besides the researchers, designers, and planners, quite natu-
rally, the executives, the secretaries, and all the transmitters of knowledge
. . . come into focus’ (ibid., p. 7). To Machlup, these kinds of knowledge
deserve study.
Machlup (ibid., pp. 9–10) listed 11 reasons for studying the economics
of knowledge, among them:

1. Knowledge’s increasing share of the nation’s budget.


2. Knowledge’s social benefits, which exceed private benefits.
3. Knowledge as strongly associated with increases in productivity and
economic growth.
4. Knowledge’s linkages to new information and communication
technologies.
5. Shifts of demand from physical labour to brain workers.
6. Improving and adjusting national-income accounting.

Armed with such a rationale, Machlup suggested a definition of knowl-


edge that had two characteristics. First, Machlup’s definition included
all kinds of knowledge, scientific and ordinary knowledge: ‘we may des-
ignate as knowledge anything that is known by somebody’ (ibid., p. 7).
Second, knowledge was defined as consisting of both its production and
its distribution: ‘producing knowledge will mean, in this book, not only
discovering, inventing, designing and planning, but also disseminating and
communicating’ (ibid.).

Defining Knowledge

The first point about Machlup’s concept of knowledge was that it included
all kinds of knowledge, not only scientific knowledge, but ordinary knowl-
edge as well. Until then, most writings on knowledge were philosophical,
and were of a positivistic nature: knowledge was ‘true’ knowledge (e.g.
Ayer, 1956). As a consequence, the philosophy of practical or ordinary
action ‘intellectualized’ human action. Action was defined as a matter of
rationality and logic: actions start with deliberation, then intention, then
decision (see Bernstein, 1971). Similarly, writings on decision-making
were conducted under the assumption of strict rationality (rational choice
theory) (e.g. Amadae, 2003).
In 1949, the philosopher Gilbert Ryle criticized what he called the cul-
tural primacy of intellectual work (Ryle, 1949). By this he meant under-
standing the primary activity of mind as theorizing, or knowledge of true
propositions or facts. Such knowledge or theorizing Ryle called ‘knowing

that’. ‘Both philosophers and laymen tend to treat intellectual operations


as the core of mental conduct (cognition) . . . The capacity for rigorous
theory that lays the superiority of men over animals, of civilized men over
barbarians and even of the divine mind over human mind . . . the capacity
to attain knowledge of truth was the defining property of a mind’ (ibid.,
p. 26).
To Ryle, there were other intellectual activities in addition to theoriz-
ing. To ‘knowing that’, Ryle added ‘knowing how’. Intelligence is ‘the
ability, or inability, to do certain sorts of things’, the ability to ‘know how
to perform tasks’ (ibid., pp. 27–8). These tasks are not preceded by intel-
lectual theorizing. ‘Knowing how’ is a disposition, a skill, and is a matter
of competence. To act intelligently is to apply rules, without necessarily
theorizing about them first. To Ryle, the error comes from the old ana-
lytical separation of mind (mental) and body (physical): doing is not itself
a mental operation, so performing ‘intelligent’ action must come from
thinking.
Ryle was one of the philosophers who were increasingly concerned with
subjective knowledge.3 The chemist and philosopher Michael Polanyi
drew a similar distinction to Ryle’s ten years later in Personal Knowledge,
between what he called connoisseurship, as the art of knowing, and
skills, as the art of doing (Polanyi, 1958). In this same book, Polanyi also
brought forth the idea of inarticulate intelligence, or tacit knowledge, and
this vocabulary became central to the modern conception of knowledge in
science studies (see, e.g., Winter, 1987) – together with concepts such as
learning-by-doing (e.g. Arrow, 1962a):

● information (data; facts) versus knowledge (useful information);


● codified versus ‘uncodified’ knowledge (not generally available);
● tacit knowledge (individual and experiential).

Knowledge as subjective knowledge came to economics via the Austrian


school of economists (see Knudsen, 2004), among them F.A. Hayek. In
Hayek’s hands, the concept of knowledge was used as a criticism of the
assumption of perfect information in economic theory. As is well known,
information is a central concept of neoclassical economic theory: people
have perfect information of the markets, and firms have perfect infor-
mation of the technology of the time, or production opportunities. This
is the familiar assumption of economic theory concerned with rational
order, coordination and equilibrium, and its modern formulation owes its
existence to John Hicks, Paul Samuelson and Gérard Debreu. As Hayek
put it: ‘If we possess all the relevant information, if we can start out from
a given system of preferences and if we command complete knowledge

of available means, the problem which remains is purely one of logic’


(Hayek, 1945, p. 519).
But to Hayek, knowledge is never given for the whole society. Social
knowledge is:

dispersed bits of incomplete and frequently contradictory knowledge which all


the separate individuals possess. The economic problem of society is thus not
merely a problem of how to allocate given resources . . . It is rather a problem
of how to secure the best use of resources known to any of the members of
society . . . To put it briefly, it is a problem of the utilization of knowledge not
given to anyone in its totality . . . Any approach, such as that of mathemati-
cal economics with its simultaneous equations, which in effect starts from the
assumption that people’s knowledge corresponds with the objective facts of the
situation, systematically leaves out what is our main task to explain (Ibid., pp.
519–20; 530).

To Hayek, as to Ryle, objective or ‘scientific knowledge is not the sum


of all knowledge’ (ibid., p. 521). There are different kinds of knowledge:
unorganized, particular, individual, practical, skill, and experience. In real
life, no one has perfect information, but they have the capacity and skill to
find information. This knowledge has nothing to do with a pure logic of
choice, but is knowledge relevant to actions and plans. This kind of knowl-
edge, unfortunately for mathematical economists, ‘cannot enter into sta-
tistics’: it is mostly subjective. ‘To what extent’, thus asked Hayek, ‘does
formal economic analysis convey any knowledge about what happens in
the real world?’ (Hayek, 1937, p. 33) To Hayek, economic equilibrium is
not an (optimal) outcome (or state), but a process (activity) – the coordi-
nation of individuals’ plans and actions. In this process, individuals learn
from experience and acquire knowledge about things, events and others
that help them to act. In this sense, the system of prices plays the role of
a signal; prices direct attention: ‘the whole reason for employing the price
mechanism is to tell individuals that what they are doing, or can do, has
for some reason for which they are not responsible become less or more
demanded’ (Hayek, 1968 [1978], p. 187). ‘The price system is a mechanism
for communicating information’ (Hayek, 1945, p. 526).
Although perfect information, particularly information on prices,
would continue to define economic orthodoxy in the 1960s (and after),
more and more economists became interested in types of information,
or knowledge different from strict rationality and prices,4 and in analysis
of the economics of information itself (see Stigler, 1961, 1985; Boulding,
1966; Marschak, 1968, 1974; Lamberton, 1971; Arrow, 1973, 1974, 1979,
1984). The economics of science was one field where information took
centre stage. From the start, the problem of science was defined in terms
of decisions under uncertainty: how do you allocate resources to research,

where benefits are uncertain and long-term? Researchers from RAND


(e.g. Hounshell, 1997), among them Arrow (1962b), then came to define
scientific knowledge as information, with specific characteristics that made
of it a public good: indivisibility, non-appropriability, uncertainty.
As can be seen, an interest in studying information or knowledge differ-
ently was developing from various economic angles in the 1950s and early
1960s. The new developments shared a different understanding apart from
strictly objective knowledge. This was also Machlup’s view. In line with
Ryle and Hayek, Machlup argued for ‘subjective’ knowledge. To Machlup,
knowledge ‘must not be limited by positivistic restrictions’ and need not be
‘true’ knowledge: ‘knowledge need not be knowledge of certified events and
tested theories; it may be knowledge of statements and pronouncements,
conjectures and hypotheses, no matter what their status of verification may
be’ (Machlup, 1962a, p. 23). To Machlup, ‘all knowledge regardless of the
strength of belief in it or warranty for it’ is knowledge (ibid.).
After discussing existing classifications of knowledge and their limita-
tions,5 Machlup identified five types of knowledge (1962a, pp. 22–3). His
classification or definition of knowledge served as the basis for selecting
activities and measuring the contribution of knowledge to the economy:

● practical (professional, business, workers, political, households);


● intellectual;
● small-talk and pastime (entertainment and curiosity);
● spiritual;
● ‘unwanted’ (accidentally acquired).

‘Operationalizing’ Knowledge

Defining knowledge as composed of all kinds of knowledge, scientific and


ordinary, was the first aspect of Machlup’s definition of knowledge. The
second was defining knowledge as both its production and its distribu-
tion. To Machlup, information is knowledge only if it is communicated
and used. This theoretical insight allowed Machlup to ‘operationalize’
his concept of knowledge as being composed of four elements: education,
R&D, communication and information.
According to Machlup, the largest sector of the knowledge economy
is concerned with distribution, and education itself is the largest part of
this ‘industry’. To Machlup, education includes not just formal educa-
tion in school, but also informal education. Eight categories or sources
of education were identified: home (mothers educating children), school,
training on the job (systematic and formal, excluding learning on the job),
church, armed forces, television, self-education, and experience. Machlup

concentrated his analysis on the first six, in which knowledge is systematic


or transmitted by a teacher, but was able to measure only the first four due
to statistical difficulties.
The second component of knowledge, the creation of knowledge
or R&D, was what Machlup called the narrow sense of knowledge,
as opposed to his wider definition, which included its distribution. To
Machlup, R&D, commonly defined as the sum of basic research, applied
research and development, was inappropriate.6 In lieu of the existing classification as used by
the US National Science Foundation, for example, he offered a classifica-
tion based on a four-stage ‘model’ – which culminated in innovation, a
term Machlup explicitly preferred not to use:

Basic research → Inventive work → Development → Plant construction

Machlup was here taking stock of the new literature on the economics of
innovation and its linear model (e.g. Godin, 2006b, 2008). To economists,
innovation included more than R&D. Economists defined innovation as
different from invention as studied by historians. Innovation was defined
as the commercialization of invention by firms. To Machlup, adhering
to such an understanding was part of his analytical move away from the
primacy of scientific knowledge, or intellectual work.
The third component of Machlup’s ‘operationalization’ of knowledge
was media (of communication). Since all kinds of knowledge were relevant
knowledge to Machlup, not only scientific knowledge but also ordinary
knowledge, he considered a large range of vehicles for distribution: printing
(books, periodicals, newspapers), photography and phonography, stage
and cinema, broadcasting (radio and television), advertising and public
relations, telephone, telegraph and postal service, and conventions.
The final component of Machlup’s ‘operationalization’ was informa-
tion, itself composed of two elements: information services and infor-
mation machines (technologies). Information services, the eligibility for
inclusion of which ‘may be questioned’ in a narrow concept of knowledge
(Machlup, 1962a, p. 323), were: professional services (legal, engineering,
accounting and auditing, and medical), finance, insurance and real estate,
wholesale trade, and government. Information machines, of which he says
‘the recent development of the electronic-computer industry provides a
story that must not be missed’ (ibid., p. 295), included signalling devices,
instruments for measurement, observation and control, office information
machines and electronic computers.
Where does Machlup’s idea of defining knowledge as both production
and distribution come from? Certainly, production and distribution have
been key concepts of economics for centuries. However, the idea also

derives from the mathematical theory of communication, as developed


independently by Claude Shannon and Norbert Wiener in the late 1940s
(Shannon, 1948; Wiener, 1948; Shannon and Weaver, 1949). In the fol-
lowing decades, this theory became very popular in several disciplines,
such as biology (e.g. Kay, 2000) and the social sciences (e.g. Heims, 1991).
Economics (e.g. Mirowsky, 2002), and the economics of information
(e.g. Marschak, 1959), would be no exception, and neither would science
studies (e.g. Rogers and Shoemaker, 1970). The theory of communica-
tion defined information in terms of probability and entropy, and the
content of information as resulting from the probability of this message
being chosen among a number of alternative communication channels.
Schematically, the theory portrayed information as a process involving
three elements (see Weaver, 1949):

Transmitter → Message → Receiver
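For reference (this formalization is not part of Machlup’s own text), Shannon’s measure attaches to a source of messages a quantity of information, or ‘entropy’, determined solely by the probabilities with which its signs are selected:

\[ H = -\sum_{i=1}^{n} p_i \log_2 p_i \quad \text{(bits per sign)}, \]

which reaches its maximum, \( \log_2 n \), when all \( n \) alternatives are equally likely. The quantity says nothing about what a message means, only about how improbable its selection is; as discussed below, it was precisely this indifference to meaning that led Machlup to set the cybernetic term ‘information’ aside in favour of ‘knowledge’.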

To Machlup,

modern communication theory has given a description of the process between


and within two persons or units in a system, one the transmitter, the other the
receiver of the message. The transmitter selects the message from his informa-
tion store, transmits it, usually after encoding it into a signal, through a com-
munication channel to the receiver, who, after decoding the signal, puts the
message into his information store. (Machlup, 1962a, p. 31)

What types of communicators are involved in this process? To Machlup,


communicators, or knowledge-producers, as he suggested calling them,
were of several types, according to the degree to which the messages deliv-
ered to a person differ from the messages he has previously received. He
identified six types of knowledge-producers:

1. transporter: delivers exactly what he has received, without changing it


in the least (e.g. a messenger carrying a written communication);
2. transformer: changes the form of the message received, but not its
content (e.g. a stenographer);
3. processor: changes both form and content of what he has received,
by routine procedures like combinations or computations (e.g. an
accountant);
4. interpreter: changes form and content, but uses imagination to create
new form effects (e.g. a translator);
5. analyser: uses so much judgement and intuition that the message
that he communicates bears little or no resemblance to the message
received;

6. original creator: adds so much of his inventive genius and creative


imagination that only weak and indirect connections can be found
between what he has received and what he communicates.

To Machlup, knowledge covers the ‘entire spectrum of activities, from


the transporter of knowledge up to the original creator’ (ibid., p. 33). His
selection of industries for operationalizing knowledge’s activities illus-
trates this variety. He selected 30 specific groups of industries, or activi-
ties, shown in Table 10.1, covering the whole spectrum from creators to
transporters.
Despite his use of communication theory,7 Machlup did not retain the
theory of communication’s key term – information. As he explained later,
in a book he edited on information in 1983, information in cybernetics is
either a metaphor (as in the case of machines) or has nothing to do with
meaning but is a statistical probability of a sign or signal being selected
(as in the case of transmission): ‘real information can come only from an
informant. Information without an informant – without a person who
tells something – is information in an only metaphoric sense’ (Machlup,
1983, p. 657). Machlup preferred to use the term knowledge. In fact, he
refused to distinguish information (events or facts) from knowledge. To
Machlup, knowledge has a double meaning: ‘both what we know and
our state of knowing it’ (Machlup, 1962a, p. 13). The first is knowledge
as state, or result, while the second meaning is knowledge as process,
or activity. From an economic point of view, the second (transmission
of knowledge) is as important as the first: ‘Knowledge – in the sense of
what is known – is not really complete until it has been transmitted to
some others’ (ibid., p. 14). This was Machlup’s rationale for using the
term knowledge rather than information: ‘Information as that which is
being communicated becomes identical with knowledge in the sense of
that which is known’ (ibid., p. 15). Thus Machlup suggested that it is
‘more desirable to use, whenever possible, the word knowledge’, like the
ordinary use of the word, where all information is knowledge (see ibid.,
p. 8).

MEASURING KNOWLEDGE

When Machlup published The Production and Distribution of Knowledge,


the economic analysis of science was just beginning (see Hounshell, 2000).
A ‘breakthrough’ of the time was R.M. Solow’s paper on using the pro-
duction function to estimate the role of science and technology in eco-
nomic growth and productivity (Solow, 1957). The production function

Table 10.1

Education                                  Information machines
In the home                                Printing trades machines
On the job                                 Musical instruments
In the church                              Motion picture apparatus and equipment
In the armed forces                        Telephone and telegraph equipment
Elementary and secondary                   Signaling devices
Colleges and universities                  Measuring and controlling instruments
Commercial, vocational and residential     Typewriters
Federal funds                              Electronic computers
Public libraries                           Other office machines
R&D                                        Office-machine parts
Basic research                             Personal services
Applied research                           Legal
Printing and publishing                    Engineering and architectural
Books and pamphlets                        Accounting and auditing
Periodicals                                Medical
Newspapers                                 Financial services
Stationery and other office suppliers      Cheque-deposit banking
Commercial printing and lithography        Securities brokers
Photography and phonography                Insurance agents
Photography                                Real estate agents
Phonography                                Wholesale agents
Stage, podium and screen                   Miscellaneous business services
Theatres and concerts                      Government
Spectator sports                           Federal
Motion pictures                            State and local
Radio and television
Advertising
Telecommunication media
Telephone
Telegraph
Postal service
Conventions

is an equation, or econometric ‘model’, that links the quantity produced


of a good (output) to quantities of input. There are at any given time, or
so economists argue, inputs (labour, capital) available to the firm, and a
large variety of techniques by which these inputs can be combined to yield
the desired (maximum) output. Using the production function, Solow
formalized early works on growth accounting (decomposing GDP into

capital and labour), and equated the residual in his equation with techni-
cal change – although it included everything that was neither capital nor
labour – as ‘a shorthand expression for any kind of shift in the produc-
tion function’ (p. 312). Integrating science and technology was thus not
a deliberate initiative, but it soon became a fruitful one. Solow estimated
that nearly 90 per cent of growth was due to the residual. In the follow-
ing years, researchers began adding variables to the equation in order to
better isolate science and technology (e.g. Denison, 1962, 1967), or adjust-
ing the input and capital factors to capture quality changes in output (e.g.
Jorgenson and Griliches, 1967).
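The logic of growth accounting can be stated compactly. As an illustrative sketch (Solow worked with a general neoclassical production function; the Cobb–Douglas form below is only the simplest special case), write output as

\[ Y = A\,K^{\alpha}L^{1-\alpha}, \]

so that, in growth rates,

\[ \frac{\dot{Y}}{Y} = \frac{\dot{A}}{A} + \alpha\,\frac{\dot{K}}{K} + (1-\alpha)\,\frac{\dot{L}}{L}. \]

The residual \( \dot{A}/A \) is whatever growth in output remains once the measured contributions of capital and labour have been deducted, which is why it could stand as ‘a shorthand expression for any kind of shift in the production function’, and why Solow, and later Denison, read it as an index of technical change.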
According to Machlup, a mathematical exercise such as the production
function was ‘only an abstract construction designed to characterize some
quantitative relationships which are regarded as empirically relevant’
(Machlup, 1962b, p. 155). What the production function demonstrated
was a correlation between input and output, rather than any causality: ‘a
most extravagant increase in input might yield no invention whatsoever,
and a reduction in inventive effort might by a fluke result in the output that
had in vain been sought with great expense’ (ibid., p. 153). To Machlup,
there were two schools of thought:

According to the acceleration school, the more that is invented the easier it
becomes to invent still more – every new invention furnishes a new idea for
potential combination . . . According to the retardation school, the more that
is invented, the harder it becomes to invent still more – there are limits to the
improvement of technology. (Ibid., p. 156)

To Machlup, the first hypothesis was ‘probably more plausible’, but ‘an
increase in opportunities to invent need not mean that inventions become
easier to make; on the contrary, they become harder. In this case there
would be a retardation of invention . . .’ (ibid., p. 162), because ‘it is pos-
sible for society to devote such large amounts of productive resources to
the production of inventions that additional inputs will lead to less than
proportional increases in output’ (ibid., p. 163).
For measuring knowledge, Machlup chose another method than econo-
metrics and the production function, namely national accounting. National
accounting goes back to the eighteenth century and what was then called
political arithmetic (see Deane, 1955; Buck, 1977, 1982; Cookson, 1983;
Endres, 1985; Mykkanen, 1994; Hoppit, 1996). But national accounting
really developed after World War II with the establishment of a standard-
ized System of National Accounts, which allowed a national bureau of sta-
tistics to collect data on the production of economic goods and services in
a country in a systematic way (see Studenski, 1958; Ruggles and Ruggles,
1970; Kendrick, 1970; Sauvy, 1970; Carson, 1975; Fourquet, 1980; Vanoli,
2002). Unfortunately for Machlup, knowledge was not – and is still not – a
category of the System of National Accounts.
There are, argued Machlup, ‘insurmountable obstacles in a statistical
analysis of the knowledge industry’ (Machlup, 1962a, p. 44). Usually,
in economic theory, ‘production implies that valuable input is allocated
to the bringing forth of a valuable output’, but with knowledge there
is no physical output, and knowledge is most of the time not sold on
the market (ibid., p. 36). The need for statistically operational concepts
forced Machlup to concentrate on costs, or national income account-
ing. To estimate costs8 and sales of knowledge products and services,
Machlup collected numbers from diverse sources, both private and public.
However, measuring costs meant that no data were available on the inter-
nal (non-marketed) production and use of knowledge, for example inside
a firm: ‘all the people whose work consists of conferring, negotiating,
planning, directing, reading, note-taking, writing, drawing, blueprinting,
calculating, dictating, telephoning, card-punching, typing, multigraphing,
recording, checking, and many others, are engaged in the production of
knowledge’ (Machlup, 1962a, p. 41). Machlup thus looked at comple-
mentary data to capture the internal market for knowledge. He conducted
work on occupational classes of the census, differentiating classes of
white-collar workers who were knowledge-producing workers from those
that were not, and computing the national income of these occupations
(ibid., pp. 383 and 386). Machlup then arrived at his famous estimate: the
knowledge economy was worth $136.4 billion, or 29 per cent of GNP in
1958, had grown at a rate of 8.8 per cent per year over the period 1947–58,
and occupied people representing 26.9 per cent of the national income (see
Table 10.2).
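
The arithmetic behind this estimate can be checked directly against the components reported in Table 10.2; the snippet below is an illustration added here (the implied GNP is a back-of-the-envelope derivation from the 29 per cent share, not a figure Machlup reports).

    # Machlup's components of knowledge production, 1958, in $ millions (Table 10.2).
    components = {
        "Education": 60_194,
        "R&D": 10_990,
        "Media of communication": 38_369,
        "Information machines": 8_922,
        "Information services": 17_961,
    }

    total = sum(components.values())          # 136,436 $ millions, i.e. $136.4 billion
    print(f"Total knowledge production: ${total:,} million")
    for name, value in components.items():    # shares as in the right-hand column
        print(f"  {name:25s} {value / total:5.1%}")

    # 29 per cent of GNP implies a 1958 GNP of roughly total / 0.29, about $470 billion.
    print(f"Implied GNP: ${total / 0.29 / 1000:,.0f} billion")
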
In conducting his accounting exercise, Machlup benefited from the
experience of previous exercises conducted on education (e.g. Wiles,
1956) and human capital (e.g. Walsh, 1935; Mincer, 1958; Schultz, 1959,
1960, 1961a, 1961b, 1962; Becker, 1962; Hansen, 1963), and, above all, on
research or R&D.

Table 10.2  Machlup’s estimate of knowledge production in the USA, 1958

                           $ (millions)      %
Education                      60,194      44.1
R&D                            10,990       8.1
Media of communication         38,369      28.1
Information machines            8,922       6.5
Information services           17,961      13.2

The US National Science Foundation, as the producer
of official statistics on science in the USA, started collecting data on R&D


expenditures in the early 1950s (see Godin, 2005). Regular surveys were
conducted on four economic sectors: government, universities, firms,
and non-profit organizations. Then, in 1956, the Foundation published
its ‘first systematic effort to obtain a systematic across-the-board picture’
(NSF, 1956). It consisted of the sum of the results of the sectoral surveys
for estimating national funds for R&D. The National Science Foundation
calculated that the national budget for R&D amounted to $5.4 billion in
1953.
From the start, the data on R&D from the National Science Foundation
were inserted into the System of National Accounts’ framework as a
model: surveys were conducted according to economic sectors, the clas-
sifications used corresponded to available classifications, the matrix of
R&D money flows imitated the input–output tables accompanying the
System of National Accounts, and a ratio of R&D to GNP was constructed.
To the National Science Foundation, such an alignment with the System
of National Accounts was its way to relate R&D to economic output
statistically: describing ‘the manner in which R&D expenditures enter the
gross national product in order to assist in establishing a basis for valid
measures of the relationships of such expenditures to aggregate economic
output’ (NSF, 1961, p. i).
Machlup made wide use of the National Science Foundation’s data
for his own accounting. As Nelson once stated: ‘the National Science
Foundation has been very important in focusing the attention of econo-
mists on R&D (organized inventive activity), and the statistical series the
NSF has collected and published have given social scientists something
to work with’ (Nelson, 1962, p. 4). The organization’s numbers were one
of many sources Machlup added together in calculating his estimate of
the size of the knowledge economy. In fact, for most of his calculations,
Machlup did not use the System of National Accounts, as Porat would
for his work on the information economy. Instead he looked liberally at
the literature for available numbers, like the National Science Foundation
data, and conducted many different calculations (summations, math-
ematical projections, estimations and computations of opportunity costs).
Neither was Machlup addicted to accounting. Although he chose costs for
his estimate of the knowledge economy, he discussed and suggested many
other statistics. For media of communication, he looked at the number
of books and periodicals, their circulation and content; for information,
he collected figures on types of technology, and use of technologies in
households; on education, he recommended using figures for attendance,
years of schooling, achievement tests, class hours, amount of homework,
and subject-matter requirements; for R&D, he proposed a list of measures
on input and output (see Appendix 1), and relationships or ratios between
the two.9
Machlup was realistic about his own accounting, qualifying some of his
estimates as being speculative (Machlup, 1962a, p. 62), that is, ideas of
magnitude and trends based on conjecture rather than exact figures (ibid.,
p. 103), and he qualified some of his comparisons ‘with several grains of
salt’ (ibid., p. 374). To Machlup, it was the message rather than the statisti-
cal adequacy that was important. The very last sentence of the book reads
as follows: ‘concern about their accuracy [statistical tables] should not
crowd out the message it conveys’ (ibid., p. 400).

THE MESSAGE

Apart from theoretical borrowings from philosophy, mathematics, eco-
nomics and national accounting, we can identify policy issues and
even professional interests in Machlup’s analysis at several levels. First,
Machlup was concerned with the challenges facing the education and
research system of which he was part. Second, he was concerned, as an
analyst, with the new information technology ‘revolution’.
For each of the four components operationalizing his definition of
knowledge, Machlup identified policy issues, and this partly explains the
inclusion of the components in the operationalization. The policy issues
Machlup identified were mainly economic. To begin with education, the
central question discussed was productivity. Machlup distinguished pro-
ductivity in education (or performance) from productivity of education (or
simply productivity). With regard to productivity in education, Machlup
suggested compressing the curriculum to accelerate the production of
well-trained brainpower and therefore economic growth.

We need an educational system that will significantly raise the intellectual
capacity of our people. There is at present a great scarcity of brainpower in our
labor force. . . . Unless our labor force changes its composition so as to include
a much higher share of well-trained brainpower, the economic growth of the
nation will be stunted and even more serious problems of employability will
arise. (Machlup, 1962a, p. 135)

Concerning the productivity of education, he suggested considering


(and measuring) education as an investment rather than as a cost, and
as an investment not only in the individual (earnings) but also in society
(culture), in line with studies on social rates of return on research (e.g.
Schultz, 1953; Griliches, 1958).
As to the second component – R&D – Machlup confessed that ‘this
subject was his first interest in the field of knowledge production. The
temptation to expand the area of study to cover the entire industry came
later, and proved irresistible’ (Machlup, 1962a, p. 48). To Machlup, the
policy issues involving R&D were twofold. One was the decline of inven-
tions. From the early 1950s, Machlup had studied monopolies and the role
of patents in competition (Machlup, 1952), and particularly the role of
the patent system in inducing invention (e.g. Machlup and Penrose, 1950;
Machlup, 1958b). Following several authors, among them J. Schmookler
(e.g. Schmookler, 1954), he calculated a decline in patenting activity after
1920 (Machlup, 1961). He wondered whether this was due to the patent
system itself, or to other factors. In the absence of empirical evidence,
he suggested that ‘faith alone, not evidence, supports’ the patent system.
To Machlup, it seems ‘not very likely that the patent system makes much
difference regarding R&D expenditures of large firms’ (Machlup, 1962a,
p. 170).
A second policy issue concerning R&D was the productivity of research,
and his concern with this issue grew out of previous reflections on the allo-
cation of resources to research activities and the inelasticity in the short-
term supply of scientists and engineers (Machlup, 1958a). To Machlup,
research, particularly basic research, is an investment, not a cost. Research
leads to an increase in economic output and productivity (goods and
services), and society gains from investing in basic research with public
funds: the social rate of return is higher than the private one (see Griliches,
1958), and ‘the nation has probably no other field of investment that yields
return of this order’ (Machlup, 1962a, p. 189). But there actually was a
preference for applied research in America, claimed Machlup: ‘American
preference for practical knowledge over theoretical, abstract knowledge
is a very old story’ (ibid., pp. 201–2). That there was a ‘disproportionate
support of applied work’ (ibid., p. 203) was a popular thesis of the time
among scientists (see Reingold, 1971). To Machlup, there was a social cost
to this: echoing V. Bush, according to whom ‘applied research invariably
drives out pure’ research (Bush, 1945 [1995], p. xxvi), Machlup argued
that industry picks up potential scientists before they have completed
their studies, and dries up the supply of research personnel (shortages).
Furthermore, if investments in basic research remain too low (8 per cent
of total expenditures on R&D), applied research will suffer in the long run,
since it depends entirely on basic research. Such was the rhetoric of the
scientific community’s members at the time.
These were the main policy issues Machlup discussed. Concerning the
last two components of his definition – communication and information
– Machlup was very brief. In fact, his policy concern was mainly with
information technologies and the technological revolution. To Machlup,
the important issue here was twofold. The first part was rational decision-
making: the effects of information machines are ‘improved records,
improved decision-making, and improved process controls . . . that permit
economies’ (Machlup, 1962a, p. 321). Machlup was here offering what
would become the main line of argument for the information economy
in the 1980s and after: information technologies as a source of growth
and productivity. The second part was the issue of structural change and
unemployment (‘replacement of men by machines’). Structural change
was a concern for many in the 1940s and 1950s, and the economist Wassily
Leontief devoted numerous efforts to measuring it using input–output
tables and accounting as a framework (e.g. Leontief, 1936, 1953, 1986; see
also Leontief, 1952, 1985; Leontief and Duchin, 1986). ‘There has been’,
stated Machlup, ‘a succession of occupations leading [the movement to a
knowledge economy], first clerical, then administrative and managerial,
and now professional and technical personnel . . . , a continuing move-
ment from manual to mental, and from less to more highly trained labor’
(Machlup, 1962a, pp. 396–7). To Machlup, ‘technological progress has
been such as to favor the employment of knowledge-producing workers’
(ibid., p. 396), but there was the danger of increasing unemployment
among unskilled manual labour (ibid., p. 397). In the long run, however,
‘the demand for more information may partially offset the labor-replacing
effect of the computer-machine’ (ibid., p. 321).
With regard to communication, the fourth component of his ‘opera-
tionalization’ of knowledge, Machlup discussed no specific policy issue.
But there was one in the background, namely the information explosion
(see Godin, 2007). In the 1950s, the management of scientific and techni-
cal literature emerged as a concern to many scientists and universities,
and increasingly to governments. According to several authors, among
them science historian Derek J. de Solla Price, scientific and technical
information, as measured by counting journals and papers, was growing
exponentially. Science was ‘near a crisis’, claimed Price, because of the pro-
liferation and superabundance of literature (Price, 1956). Some radically
new technique must be evolved if publication is to continue as a useful
contribution (Price, 1961). The issue gave rise to scientific and technical
information policies starting from the early 1960s, as a precursor to poli-
cies on the information economy and, later, on information technology
(see Godin, 2007).
In 1962, Machlup did not discuss the issue of information explosion. He
even thought that counting the number of books was a ‘very misleading
index of knowledge’ (Machlup, 1962a, p. 122). However, in the 1970s, he
conducted a study on ‘The production and distribution of scientific and
technological information’, published in four volumes as Information
through the Printed Word (Machlup and Leeson, 1978–80). Produced for
the National Science Foundation, the study looked at books, journals,
libraries, and their information services from a quantitative point of
view, as had been done in The Production and Distribution of Knowledge:
the structure of the industries, markets, sales, prices, revenues, costs,
collections, circulation, evaluation and use.
Machlup wrote on knowledge at a time when science, or scientific
knowledge, was increasingly believed to be of central importance to society
– and scientists benefited largely from public investments in research.
Economists, according to whom ‘if society devotes considerable amounts
of its resources to any particular activity, will want to look into this alloca-
tion and get an idea of the magnitude of the activity, its major breakdown,
and its relation to other activities’ (Machlup, 1962a, p. 7), started measur-
ing the new phenomenon, and were increasingly solicited by governments
to demonstrate empirically the contribution of science to society – cost
control on research expenditures was not yet in sight. Machlup was part
of this ‘movement’, with his own intellectual contribution.

CONCLUSION

Machlup’s study on the knowledge economy accomplished three tasks.
It defined knowledge, measured it, and identified policy issues. The
message was that knowledge was an important component of the
economy, but does not completely respond to an economic logic. With
The Production and Distribution of Knowledge, Machlup brought the
concept of knowledge into science policies and science studies. His con-
ception of knowledge was synthesized from three intellectual trends of
the time: ‘disintellectualizing’ and ‘subjectivizing’ knowledge (ordinary
knowledge); looking at knowledge as a communication process (produc-
tion and distribution); and measuring its contribution to the economy (in
terms of accounting).
In the early 1980s, Machlup began updating his study on the knowl-
edge economy with a projected ten-volume series entitled Knowledge: its
Creation, Distribution, and Economic Significance (Machlup, 1980–84).
He died after finishing the third volume. By then, he was only one of
many measuring the knowledge or information economy. With this
new project, Machlup kept to his original method as developed in 1962:
national accounting. This was a deliberate choice. In fact, there were two
types of accounting measurement in the economic literature of the time.
One was growth accounting. It used econometrics, and was the cherished
method among quantitative economists. With the aid of equations and
statistical correlations, economists tried to measure the role of knowledge
in economic growth, following in Solow’s footsteps. Machlup did not
believe in this method. The second method was national accounting. This
method was not very attractive to economists – although developed by
one of them (Simon Kuznets). It relied on descriptive statistics rather than
formalization. Its bad reputation, and the reluctance of economists to use
national accounting, have a long tradition, going back to the arguments
of eighteenth-century classical economists against political arithmetic (see
Johannisson, 1990). It was such a reluctance that economist R.R. Nelson
expressed while reviewing Machlup’s book in Science in 1963. Nelson
expressed his disappointment that Machlup had not studied the role and
function of knowledge: ‘Machlup is concerned principally with identify-
ing and quantifying the inputs and outputs of the knowledge-producing
parts of the economy and only secondarily with analyzing the function
of knowledge and information in the economic system’ (Nelson, 1963,
pp. 473–4).

MACHLUP’S SOURCES OF INSIGHT (see Figure 10.1)

Figure 10.1  Machlup’s sources of insight: subjective knowledge from philosophy (Ryle, Polanyi); information from economics (Hayek, Arrow); communication from mathematics (Shannon and Wiener); accounting from statistics (NSF, human capital), feeding into Machlup’s categories of knowledge, communication, information, education and R&D

Today, the measurement of knowledge is often of a third kind. Certainly,
knowledge is still, most of the time, defined as Machlup suggested (crea-
tion and use) – although the term has also become a buzzword for any
writing and discourse on science, technology and education. But in the
official literature, knowledge is actually measured using indicators. Such
measurements are to be found in publications from the OECD and the
European Union, for example. Here, knowledge is measured using a
series or list of indicators gathered under the umbrella term ‘knowledge’
(see Godin, 2006a). There is no summation (or composite value), as in
accounting, but a collection of available statistics on several dimensions of
knowledge, that is, science and technology, among them those on informa-
tion technologies (see Appendix 2).
The methodology of indicators for measuring knowledge, information
or simply science comes partly from Machlup. We have seen how Machlup
complemented his accounting exercise with discussions on various sorts of
statistics, among them statistics on R&D organized into an input–output
framework. In 1965, the British economist Christopher Freeman, as con-
sultant to the OECD, would suggest such a collection of indicators to
the organization (Freeman and Young, 1965). In the 1970s, the National
Science Foundation initiated such a series, entitled Science Indicators,
which collected multiple statistics for measuring science and technol-
ogy. To statistics on input, among them money devoted to R&D, the
organization added statistics on output such as papers, citations, patents,
high-technology products and so on. The rationale behind the collection
of indicators was precisely that identified by Machlup as a policy issue:
the ‘productiveness’, or efficiency of the research system (National Science
Board, 1973, p. iii). Since then, the literature and measurement on knowl-
edge have grown exponentially, and knowledge is now a central concept of
many conceptual frameworks like the triple helix.
While Machlup has been influential on many aspects of the analysis
of knowledge, among them definition and measurement, current meas-
urements of knowledge are still restricted to scientific knowledge and
information technology. Certainly, many aspects of knowledge remain
non-accountable, as they were in the 1960s, but the economic orienta-
tion of policies and official statistics (economic growth and productiv-
ity) probably explains much of this restriction, to which Machlup has
contributed.

NOTES

1. The author thanks Michel Menou for comments on a preliminary draft of this chapter.
2. He taught at the University of Buffalo (1935–47), then Johns Hopkins (1947–60), then
Princeton (1960–71). After retiring in 1971, he joined New York University until his
death.
3. At about the same time, B. Russell distinguished between what he called social and indi-
vidual knowledge, the first concerned with learned knowledge, the other with experience.
See Russell (1948); see also Schutz (1962) and Schutz and Luckmann (1973).
4. Knowledge of others’ behaviour (strategic), knowledge of institutions and rules, bounded
rationality.
5. Basic versus applied (difficulties in separating the two), scientific versus historical (focus-
ing largely on school learning), enduring versus transitory (the latter nevertheless has
great economic value), instrumental versus intellectual versus spiritual (no place for
knowledge of transitory value).
6. Other distinctions he discussed were: discovery versus invention (W.C. Kneale), major
versus minor inventions (S.C. Gilfillan and W.F. Ogburn).
7. Machlup quoted Weaver at two points in his book.
8. Machlup preferred the concept of investments in the case of education and R&D.
9. For an in-depth discussion of Machlup on this topic, see Machlup (1960).

REFERENCES

Amadae, S.M. (2003), Rationalizing Capitalist Democracy: The Cold War Origins
of Rational Choice Liberalism, Chicago, IL: University of Chicago Press.
Arrow, K.J. (1962a), ‘The economic implications of learning by doing’, Review of
Economic Studies, 29, 155–73.
Arrow, K.J. (1962b), ‘Economic welfare and the allocation of resources for inven-
tion’, in National Bureau of Economic Research, The Rate and Direction of
Inventive Activity, Princeton, NJ: Princeton University Press, pp. 609–25.
Arrow, K.J. (1973), ‘Information and economic behavior’, lecture given at the
1972 Nobel Prize Celebration, Stockholm: Federation of Swedish Industries.
Arrow, K.J. (1974), ‘Limited knowledge and economic analysis’, American
Economic Review, 64, 1–10.
Arrow, K.J. (1979), ‘The economics of information’, in M.L. Deltouzos and J.
Moses (eds), The Computer Age: A Twenty-Year View, Cambridge, MA: MIT
Press, pp. 306–17.
Arrow, K.J. (1984), The Economics of Information, Cambridge, MA: Harvard
University Press.
Ayer, A.J. (1956), The Problem of Knowledge, Harmondsworth: Penguin Books.
Becker, G.S. (1962), ‘Investment in human capital: a theoretical analysis’, Journal
of Political Economy, 70 (5), 9–49.
Bernstein, R.J. (1971), Praxis and Action, Philadelphia, PA: University of
Pennsylvania Press.
Boulding, K.E. (1966), ‘The economics of knowledge and the knowledge of eco-
nomics’, American Economic Review, 56 (1–2), 1–13.
Buck, P. (1977), ‘Seventeenth-century political arithmetic: civil strife and vital
statistics’, ISIS, 68 (241), 67–84.
Buck, P. (1982), ‘People who counted: political arithmetic in the 18th century’,
ISIS, 73 (266), 28–45.
Bush, V. (1945) [1995], Science: The Endless Frontier, North Stratford, NH: Ayer
Company Publishers.
Carson, C.S. (1975), ‘The history of the United States National Income and
Product Accounts: the development of an analytical tool’, Review of Income and
Wealth, 21 (2), 153–81.
Cookson, J.E. (1983), ‘Political arithmetic and war in Britain, 1793–1815’, War
and Society, 1, 37–60.
Deane, P. (1955), ‘The implications of early national income estimates for the
measurement of long-term economic growth in the United Kingdom’, Economic
Development and Cultural Change, 4 (1), Part I, 3–38.
Denison, E.F. (1962), The Sources of Economic Growth in the United States and the
Alternatives Before Us, Committee for Economic Development, New York.
Denison, E.F. (1967), Why Growth Rates Differ, Washington, DC: Brookings
Institution.
Endres, A.M. (1985), ‘The functions of numerical data in the writings of Graunt,
Petty, and Davenant’, History of Political Economy, 17 (2), 245–64.
Foray, D. (2004), Economics of Knowledge, Cambridge, MA: MIT Press.
Fourquet, F. (1980), Les comptes de la puissance, Paris: Encres.
Freeman, C. and A. Young (1965), The Research and Development Effort in Western
Europe, North America and the Soviet Union: An Experimental International
Comparison of Research Expenditures and Manpower in 1962, Paris: OECD.
Godin, B. (2005), Measurement and Statistics on Science and Technology: 1920 to
the Present, London: Routledge.
Godin, B. (2006a), ‘The knowledge-based economy: conceptual framework or
buzzword?’, Journal of Technology Transfer, 31, 17–30.
Godin, B. (2006b), ‘The linear model of innovation: the historical construction of
an analytical framework’, Science, Technology and Human Values, 31 (6), 639–67.
Godin, B. (2007), ‘The information economy: the history of a concept through its
measurement, 1949–2005’, History and Technology, 24 (3), 255–87.
Godin, B. (2008), ‘In the shadow of Schumpeter: W. Rupert Maclaurin and the
study of technological innovation’, Minerva, 46 (3), 343–60.
Griliches, Z. (1958), ‘Research costs and social returns: hybrid corn and related
innovations’, Journal of Political Economy, 66, 419–31.
Hansen, W.L. (1963), ‘Total and private rates of return to investment in school-
ing’, Journal of Political Economy, 71, 128–40.
Hayek, F.A. (1937), ‘Economics and knowledge’, Economica, 4, 33–54.
Hayek, F.A. (1945), ‘The use of knowledge in society’, American Economic Review,
35 (4), 519–30.
Hayek, F.A. (1968) [1978], ‘Competition as a discovery procedure’, in New Studies
in Philosophy, Economics and the History of Ideas, London: Routledge, pp.
179–90.
Heims, S.J. (1991), Constructing a Social Science for Postwar America: The
Cybernetics Group, 1946–1953, Cambridge, MA: MIT Press.
Hoppit, J. (1996), ‘Political arithmetic in 18th century England’, Economic History
Review, 49 (3), 516–40.
Hounshell, D.A. (1997), ‘The Cold War, RAND, and the generation of knowl-
edge, 1946–1962’, Historical Studies in the Physical and Biological Sciences, 27
(2), 237–67.
Hounshell, D.A. (2000), ‘The medium is the message, or how context matters: The
RAND Corporation builds an economics of innovation, 1946–1962’, in A.C.
Hughes and H.P. Hughes (eds), Systems, Experts, and Computers, Cambridge,
MA: MIT Press, pp. 255–310.
Johannisson, K. (1990), ‘Society in numbers: the debate over quantification in
18th century political economy’, in T. Frangsmyr et al. (eds), The Quantifying
Spirit in the Eighteenth Century, Berkeley, CA: University of California Press,
pp. 343–61.
Jorgenson, D.W. and Z. Griliches (1967), ‘The explanation of productivity
change’, Review of Economic Studies, 34 (3), 249–83.
Kay, L.E. (2000), ‘How a genetic code became an information system’, in A.C.
Hughes and H.P. Hughes (eds), Systems, Experts, and Computers, Cambridge,
MA: MIT Press, pp. 463–91.
Kendrick, J.W. (1970), ‘The historical development of national-income accounts’,
History of Political Economy, 2 (1), 284–315.
Knudsen, C. (2004), ‘Alfred Schutz, Austrian economists and the knowledge
problem’, Rationality and Society, 16 (1), 45–89.
Lamberton, D. (1971), Economics of Information and Knowledge, Harmondsworth:
Penguin.
Langlois, R.N. (1985), ‘From the knowledge of economics to the economics of
knowledge: Fritz Machlup on methodology and on the knowledge society’,
Research in the History of Economic Thought and Methodology, 3, 225–35.
Leontief, W. (1936), ‘Quantitative input and output relations in the economic
systems of the United States’, Review of Economic Statistics, 18 (3), 105–25.
Leontief, W. (1952), ‘Machines and man’, Scientific American, 187, 150–60.
Leontief, W. (ed.) (1953), Studies in the Structure of the American Economy, New
York: Oxford University Press.
Leontief, W. (1985), ‘The choice of technology’, Scientific American, 252, 37–45.
Leontief, W. (1986), Input–Output Economics, Oxford: Oxford University Press.
Leontief, W. and F. Duchin (1986), The Future Impact of Automation on Workers,
Oxford: Oxford University Press.
Machlup, F. (1952), The Political Economy of Monopoly: Business, Labor and
Government Policies, Baltimore, MD: Johns Hopkins Press.
Machlup, F. (1958a), ‘Can there be too much research?’, Science, 128 (3335),
1320–25.
Machlup, F. (1958b), An Economic Review of the Patent System, Study no. 15,
Committee on the Judiciary, 85th Congress, Second Session, Washington, DC.
Machlup, F. (1960), ‘The supply of inventors and inventions’, Weltwirtschaftliches
Archiv, 85, 210–54.
Machlup, F. (1961), ‘Patents and inventive efforts’, Science, 133 (3463), 1463–6.
Machlup, F. (1962a), The Production and Distribution of Knowledge in the United
States, Princeton, NJ: Princeton University Press.
Machlup, F. (1962b), ‘The supply of inventors and inventions’, in National Bureau
of Economic Research, The Rate and Direction of Inventive Activity, Princeton,
NJ: Princeton University Press, pp. 143–69.
Machlup, F. (1980–84), Knowledge: Its Creation, Distribution, and Economic
Significance, Princeton, NJ: Princeton University Press.
Machlup, F. (1983), ‘Semantic quirks in studies of information’, in F. Machlup
and U. Mansfield (eds), The Study of Information: Interdisciplinary Messages,
New York: John Wiley, p. 657.
Machlup, F. and K. Leeson (1978–80), Information through the Printed Word: The
Dissemination of Scholarly, Scientific, and Intellectual Knowledge, New York:
Praeger.
Machlup, F. and E. Penrose (1950), ‘The patent controversy in the nineteenth
century’, Journal of Economic History, 10 (1), 1–29.
Marschak, J. (1959), ‘Remarks on the economics of information’, in Contributions
to Scientific Research in Management, Western Data Processing Center, Los
Angeles: University of California, pp. 79–98.
Marschak, J. (1968), ‘Economics of inquiring, communicating, deciding’, American
Economic Review, 58 (2), 1–18.
Marschak, J. (1974), Economic Information, Decision and Prediction, Dordrecht:
Reidel.
Mincer, J. (1958), ‘Investment in human capital and personal income distribution’,
Journal of Political Economy, 66 (4), 281-302.
Mirowski, P. (2002), Machine Dreams: Economics Becomes a Cyborg Science,
Cambridge: Cambridge University Press.
Mykkanen, J. (1994), ‘To methodize and regulate them: William Petty’s govern-
mental science of statistics’, History of the Human Sciences, 7 (3), 65–88.
OECD (2001), STI Scoreboard: Towards a Knowledge-Based Economy, Paris:
OECD.
National Science Board (1973), Science Indicators 1972, Washington, DC: NSF.
Nelson, R.R. (1962), ‘Introduction’, in National Bureau of Economic Research,
The Rate and Direction of Inventive Activity, Princeton, NJ: Princeton University
Press, p. 4.
Nelson, R.R. (1963), ‘Role of knowledge in economic growth’, Science, 140 (3566),
473–4.
NSF (1956), ‘Expenditures for R&D in the United States: 1953’, Reviews of Data
on R&D, 1, NSF 56-28, Washington, DC.
NSF (1961), ‘R&D and the GNP’, Reviews of Data on R&D, 26, NSF 61-9,
Washington, DC, p. 1.
Polanyi, M. (1958), Personal Knowledge: Towards a Post-Critical Philosophy,
Chicago, IL: University of Chicago Press.
Porat, M.U. (1977), The Information Economy, 9 vols, Office of Telecommunication,
US Department of Commerce, Washington, DC.
Price, D.D.S. (1956), ‘The exponential curve of science’, Discovery, 17, 240–43.
Price, D.D.S. (1961), Science since Babylon, New Haven, CT: Yale University
Press.
Reingold, N. [1971], ‘American indifference to basic research: a reappraisal’, in N.
Reingold, Science: American Style, New Brunswick, NJ and London: Rutgers
University Press, 1991, pp. 54–75.
Rogers, E.M. and F.F. Shoemaker (1970), Communication of Innovation: a Cross-
Cultural Approach, New York: Free Press.
Ruggles, N. and R. Ruggles (1970), The Design of Economic Accounts, National
Bureau of Economic Research, New York: Columbia University Press.
Russell, B. (1948), Human Knowledge: Its Scope and Limits, New York: Simon and
Schuster.
Ryle, G. (1949), The Concept of Mind, Chicago, IL: University of Chicago Press.
Sauvy, A. (1970), ‘Histoire de la comptabilité nationale’, Économie et Statistique,
14, 19–32.
Schmookler, J. (1954), ‘The level of inventive activity’, Review of Economics and
Statistics, 36 (2), 183–90.
Schultz, T.W. (1953), The Economic Organization of Agriculture, New York:
McGraw-Hill.
Schultz, T.W. (1959), ‘Investment in man: an economist’s view’, Social Service
Review, 33 (2), 109–17.
Schultz, T.W. (1960), ‘Capital formation by education’, Journal of Political
Economy, 68 (6), 571–83.
Schultz, T.W. (1961a), ‘Investment in human capital’, American Economic Review,
51 (1), 1–17.
Schultz, T.W. (1961b), ‘Education and economic growth’, in N.B. Henry (ed.),
Social Forces Influencing American Education, Chicago, IL: University of
Chicago Press, pp. 46–88.
Schultz, T.W. (1962), ‘Reflections on investment in man’, Journal of Political
Economy, 70 (5), 1–8.
Schutz, A. (1962), Collected Papers I: The Problem of Social Reality, Dordrecht:
Kluwer.
Schutz, A. and T. Luckmann (1973), The Structures of the Life-World, Evanston,
IL: Northwestern University Press.
Shannon, C.E. (1948), ‘A mathematical theory of communication’, in N.J.A.
Sloane and A.D. Wyner (eds), C. E. Shannon, Collected Papers, New York:
IEEE Press, pp. 5–83.
Shannon, C.E. and W. Weaver (1949), Mathematical Theory of Communication,
Urbana, IL: University of Illinois Press.
Solow, R.M. (1957), ‘Technical change and the aggregate production function’,
Review of Economics and Statistics, 39, 312–20.
Stigler, G.J. (1961), ‘The economics of information’, Journal of Political Economy,
LXIX (3), 213–25.
Stiglitz, J.E. (1985), ‘Information and economic analysis: a perspective’, Economic
Journal, 95, 21–41.
Studenski, P. (1958), The Income of Nations: Theory, Measurement, and Analysis,
Past and Present, New York: New York University Press.
Vanoli, A. (2002), Une histoire de la comptabilité nationale, Paris: La Découverte.
Walsh, J.R. (1935), ‘Capital concept applied to man’, Quarterly Journal of
Economics, 49 (2), 255–85.
Weaver, W. (1949), ‘The mathematics of communication’, Scientific American, 181
(1), 11–15.
Wiener, N. (1948), Cybernetics: Or Control and Communication in the Animal and
the Machine, Cambridge, MA: MIT Press.
Wiles, J.D. (1956), ‘The nation’s intellectual investment’, Bulletin of the Oxford
University Institute of Statistics, 18 (3), 279–90.
Winter, S.G. (1987), ‘Knowledge and competence as strategic assets’, in D.J. Teece
(ed.), The Competitive Challenge, Cambridge, MA: Ballinger, pp. 159–84.
APPENDIX 1
Table 10A.1  The flow of ideas through the stages of research, invention and development to application

Stage I: ‘Basic research’ [intended output: ‘formulas’]
Input (intangible): 1. scientific knowledge (old stock and output from I-A); 2. scientific problems and hunches (old stock and output from I-B, II-B and III-B)
Input (tangible): scientists, technical aides, clerical aides; laboratories; materials, fuel, power
Input (measurable): men, man-hours; payrolls, current and deflated; outlays, current and deflated; outlay per man
Output (intangible): A. new scientific knowledge: hypotheses and theories; B. new scientific problems and hunches; C. new practical problems and ideas
Output (measurable): research papers and memoranda: formulas (for A); — (for B and C)

Stage II: ‘Inventive work’ (including minor improvements but excluding further development of inventions) [intended output: ‘sketches’]
Input (intangible): 1. scientific knowledge (old stock and output from I-A); 2. scientific problems and hunches (old stock and output from II-A and III-A); 3. practical problems and ideas (old stock and output from I-C, II-C, III-C and IV-A)
Input (tangible): scientists, non-scientist inventors, engineers, technical aides, clerical aides; laboratories; materials, fuel, power
Input (measurable): men, man-hours; payrolls, current and deflated; outlays, current and deflated; outlay per man
Output (intangible): A. raw inventions, technological recipes: (a) patented inventions; (b) patentable inventions, not patented but published; (c) patentable inventions, neither patented nor published; (d) non-patentable inventions, published; (e) non-patentable inventions, not published; (f) minor improvements; B. new scientific problems and hunches; C. new practical problems and ideas
Output (measurable): (a) patent applications and patents; (b) technological papers and memoranda; (d) papers and memoranda; — for the others

Stage III: ‘Development work’ [intended output: ‘blueprints and specifications’]
Input (intangible): 1. scientific knowledge (old stock and output from I-A); 2. technology (old stock and output from III-A); 3. practical problems and ideas (old stock and output from I-C, II-C, III-C and IV-A); 4. raw inventions and improvements (old stock and output from II-A)
Input (tangible): scientists, engineers, technical aides, clerical aides; laboratories; materials, fuel, power; pilot plants
Input (measurable): men, man-hours; payrolls, current and deflated; outlays, current and deflated; outlay per man; investment
Output (intangible): A. developed inventions: blueprints, specifications, samples; B. new scientific problems and hunches; C. new practical problems and ideas
Output (measurable): blueprints and specifications (for A); — (for B and C)

Stage IV: ‘New-type plant construction’ [intended output: ‘new-type plant’]
Input (intangible): 1. developed inventions (output from III-A); 2. business acumen and market forecasts; 3. financial resources; 4. enterprise (venturing)
Input (tangible): entrepreneurs, managers, financiers and bankers, builders and contractors, engineers; building materials; machines and tools
Input (measurable): $ investments in new-type plant
Output (intangible): A. new practical problems and ideas
Output (measurable): new-type plant producing (a) novel products, (b) better products, (c) cheaper products

Source: Machlup (1962), pp. 180–81.


APPENDIX 2  INDICATORS ON THE KNOWLEDGE-BASED ECONOMY

A. Creation and Diffusion of Knowledge

Investments in knowledge
Domestic R&D expenditure
R&D financing and performance
Business R&D
R&D in selected ICT industries and ICT patents
Business R&D by size classes of firms
Collaborative efforts between business and the public sector
R&D performed by the higher education and government sectors
Public funding of biotechnology R&D and biotechnology patents
Environmental R&D in the government budget
Health-related R&D
Basic research
Defence R&D in government budgets
Tax treatment of R&D
Venture capital
Human resources
Human resources in science and technology
Researchers
International mobility of human capital
International mobility of students
Innovation expenditure and output
Patent applications
Patent families
Scientific publications

B. Information Economy

Investment in information and communication technologies (ICT)
Information and communication technology (ICT) expenditures
Occupations and skills in the information economy
Infrastructure for the information economy
Internet infrastructure
Internet use and hours spent on-line
Access to and use of the Internet by households and individuals
Internet access by enterprise size and industry
Internet and electronic commerce transactions
Price of Internet access and use
Size and growth of the ICT sector
Contribution of the ICT sector to employment growth
Contribution of the ICT sector to international trade
Cross-border mergers, acquisitions and alliances in the ICT sector

C. Global Integration of Economic Activity

International trade
Exposure to international trade competition by industry
Foreign direct investment flows
Cross-border mergers and acquisitions
Activity of foreign affiliates in manufacturing
Activity of foreign affiliates in services
Internationalization of industrial R&D
International strategic alliances between firms
Cross-border ownership of inventions
International co-operation in science and technology
Technology balance of payments

D. Economic Structure and Productivity

Differences in income and productivity
Income and productivity levels
Recent changes in productivity growth
Labour productivity by industry
Technology and knowledge-intensive industries
Structure of OECD economies
International trade by technology intensity
International trade in high and medium-high technology industries
Comparative advantage by technology intensity
Source: OECD (2001).
11. Measuring the knowledge base of
an economy in terms of triple-helix
relations
Loet Leydesdorff, Wilfred Dolfsma and
Gerben Van der Panne

Ever since evolutionary economists introduced the concept of a ‘knowledge-
based economy’ (Foray and Lundvall, 1996), the question of the measure-
ment of this new type of economic coordination has come to the fore
(Carter, 1996; OECD, 1996). Godin (2006) argued that the concept of a
knowledge-based economy has remained a rhetorical device because the
development of specific indicators has failed. However, the ‘knowledge-
based economy’ has been attractive to policy-makers. For example, the
European Summit of March 2000 in Lisbon was specifically held in order
‘to strengthen employment, economic reform, and social cohesion in the
transition to a knowledge-based economy’ (European Commission, 2000;
see also European Commission, 2005).
When originally proposing their program of studies of the knowledge-
based economy at the OECD, David and Foray (1995, p. 14) argued that
the focus on national systems of innovation (Lundvall, 1988, 1992; Nelson,
1993) had placed too much emphasis on the organization of institutions and
economic growth, and not enough on the distribution of knowledge itself.
The hypothesis of a transition to a ‘knowledge-based economy’ implies a
systems transformation at the structural level across nations. Following
this lead, the focus of the efforts at the OECD and Eurostat has been to
develop indicators of the relative knowledge intensity of industrial sectors
(OECD, 2001; 2003) and regions (Laafia, 1999, 2002a, 2002b). Alternative
frameworks for ‘systems of innovation’ like technologies (Carlsson and
Stankiewicz, 1991) or regions (Braczyk et al., 1998) were also considered
(Carlsson, 2006). However, the analysis of the knowledge base of innova-
tion systems (e.g. Cowan et al., 2000) was not made sufficiently relevant for
the measurement efforts (David and Foray, 2002). In the economic analy-
sis, knowledge was hitherto not considered as a coordination mechanism
of society, but mainly as a public or private good.


Knowledge as a coordination mechanism was initially defined in terms
of the qualifications of the labor force. Machlup (1962) argued that in
a ‘knowledge economy’ knowledge workers would play an increasingly
important role in industrial production processes. Employment data have
been central to the study of this older concept. For example, employment
statistics can be cross-tabled with distinctions among sectors in terms of
high- and medium-tech (Cooke, 2002; Schwartz, 2006). However, the
concept of a ‘knowledge-based economy’ refers to a change in the struc-
ture of an economy beyond the labor market (Foray and Lundvall, 1996;
Cooke and Leydesdorff, 2006). How does the development of science
and technology transform economic exchange processes (Schumpeter,
1939)?
The social organization of knowledge production and control was first
considered as a systemic development by Whitley (1984). Because of the
reputational control mechanisms involved, the dynamics of knowledge
production and diffusion are different in important respects from eco-
nomic market or institutional control mechanisms (Dasgupta and David,
1994; Mirowski and Sent, 2001; Whitley, 2001). When a third coordi-
nation mechanism is added as a sub-dynamic to the interactions and
potential co-evolution between (1) economic exchange relations and (2)
institutional control (Freeman and Perez, 1988), non-linear effects can be
expected (Leydesdorff, 1994, 2006).
The possible synergies may lead to the envisaged transition to a
knowledge-based economy, but this can be expected to happen to a vari-
able extent: developments in some geographically defined economies will
be more knowledge-based than others. The geographical setting, the tech-
nologies as deployed in different sectors, and the organizational structures
of the industries constitute three relatively independent sources of vari-
ance. One would expect significant differences in the quality of innovation
systems among regions and industrial sectors in terms of technological
capacities (Fritsch, 2004). The three sources of variance may reinforce
one another in a configuration so that a knowledge-based order of the
economy can be shaped.
Our research question is whether one can operationalize this configu-
rational order and then also measure it. For the operationalization we
need elements from our two research programs: economic geography
and scientometrics. We use Storper’s (1997, pp. 26 ff.) notion of a ‘holy
trinity’ among technology, organization, and territory from regional
economics, and we elaborate Leydesdorff’s (1995, 2003) use of informa-
tion theory in scientometrics into a model for measuring the dynamics
of a triple helix.

A COMBINATION OF THEORETICAL PERSPECTIVES

Storper (1997, p. 28) defined a territorial economy as ‘stocks of relational
assets’. The relations determine the dynamics of the system:

Territorial economies are not only created, in a globalizing world economy, by
proximity in input–output relations, but more so by proximity in the untraded
or relational dimensions of organizations and technologies. Their principal
assets – because scarce and slow to create and imitate – are no longer material,
but relational.

The ‘holy trinity’ is to be understood not only in terms of elements in a
network, but as the result of the dynamics of these networks shaping new
worlds. These worlds emerge as densities of relations that can be devel-
oped into a competitive advantage when and where they materialize by
being coupled to the ground in regions. For example, one would expect
the clustering of high-tech services in certain (e.g. metropolitan) areas. The
location of such a niche can be considered as a consequence of the self-
organization of the interactions (Bathelt, 2003; Cooke and Leydesdorff,
2006). Storper argued that this extension of the ‘heterodox paradigm’ in
economics implies a reflexive turn.
In a similar vein, authors using the model of a triple helix of university–
industry–government relations have argued for considering the possibility
of an overlay of relations among universities, industries and governments to
emerge from these interactions (Etzkowitz and Leydesdorff, 2000). Under
certain conditions the feedback from the reflexive overlay can reshape the
network relations from which it emerged. Because of this reflexive turn,
the parties involved may become increasingly aware of their own and each
other’s expectations, limitations and positions. These expectations and inter-
actions can be further informed by relevant knowledge. Thus the knowledge-
based overlay may increasingly contribute to the operation of the system.
The knowledge-based overlay and the institutional layer of triple-helix
relations operate upon one another in terms of frictions that provide
opportunities for innovation both vertically within each of the helices
and horizontally among them. The quality of the knowledge base in the
economy depends on the locally specific functioning of the interactions
in the knowledge infrastructure and on the interface between this infra-
structure and the self-organizing dynamics at the systems level. A knowl-
edge base would operate by reducing the uncertainty that prevails at the
network level, that is, as a structural property of the system.
The correspondence between these two perspectives can be extended to
the operationalization. Storper (1997, p. 49), for example, uses a depic-
tion of ‘the economy as a set of intertwined, partially overlapping domains
of action’ in terms of recursively overlapping Venn diagrams denoting
territories, technologies and organizations. Using the triple-helix model,
Leydesdorff (1997, p. 112) noted that these Venn diagrams do not need to
overlap in a common zone. In a networked arrangement, an overlay of inter-
relations among the bilateral relations at interfaces can replace the function
of central integration by providing an emerging structure (Figure 11.1).

Figure 11.1  The emerging overlay in the triple-helix model: an emerging overlay of relations among three partly overlapping circles

The gap in the overlap between the three circles in Figure 11.1 can be
measured as a negative entropy, that is, a reduction of the uncertainty
in the system (Ulanowicz, 1986). Unlike the mutual information in
two dimensions (Shannon, 1948; Theil, 1972), information among three
dimensions can become negative (McGill, 1954; Abramson, 1963). This
reduction of the uncertainty is a consequence of the networked configura-
tion. The ‘configurational information’ is not present in any of the subsets
(Jakulin and Bratko, 2004).1 In other words, the overlay constitutes an
additional source or sink of information. A configurational reduction of
the uncertainty locally counteracts the prevailing tendency at the systems
level towards increasing entropy and equilibrium (Khalil, 2004).

METHODS AND DATA

Data

The data consist of 1 131 668 records containing information based on
the registration of enterprises by the Chambers of Commerce of the
Netherlands. These data are collected by Marktselect plc on a quarterly
basis. Our data specifically correspond to the CD-Rom for the second
quarter of 2001 (Van der Panne and Dolfsma, 2003). Because registration
with the Chamber of Commerce is obligatory for corporations, the dataset
covers the entire population. We brought these data under the control of
a relational database manager in order to enable us to focus on the rela-
tions more than on the attributes. Dedicated programs were developed for
further processing and computation where necessary.
The data contain three variables that can be used as proxies for the
dimensions of technology, organization and geography at the systems
level. Technology will be indicated by the sector classification (Pavitt,
1984; Vonortas, 2000), organization by the company size in terms of
numbers of employees (Pugh and Hickson, 1969; Pugh et al., 1969; Blau
and Schoenherr, 1971), and the geographical position by the postal codes
in the addresses. Sector classifications are based on the European NACE
codes.2 In addition to major activities, most companies also provide infor-
mation about second and third classification terms. However, we use the
main code at the two-digit level.
Postal codes are a fine-grained indicator of geographical location. We
used the two-digit level, which provides us with 90 districts. Using this
information, the data can be aggregated into provinces (NUTS-2) and
NUTS-3 regions.3 The Netherlands is thus organized in 12 provinces and
40 regions, respectively.
The distribution by company size contains a class of 223 231 companies
without employees. We decided to include this category because it con-
tains, among others, spin-off companies that are already on the market,
but whose owners are employed by mother companies or universities.
Given our research question, these establishments can be considered as
relevant economic activities.
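
As a sketch of how such records can be reduced to the three analytical dimensions (the field values and the size cut-offs below are hypothetical; the chapter’s actual processing used a relational database manager and dedicated programs), each firm is mapped onto a two-digit postal code, a two-digit NACE code and a size class, and the combinations are then counted:

    from collections import Counter

    # Hypothetical firm records: (postal code, NACE code, number of employees).
    firms = [
        ("1012", "7221", 0),
        ("2511", "6420", 12),
        ("5611", "3210", 250),
    ]

    def size_class(employees):
        """Coarse size classes; these particular cut-offs are assumptions."""
        if employees == 0:
            return "0"
        if employees < 50:
            return "1-49"
        return "50+"

    # Cross-tabulate geography (G), technology (T) and organization (O) at the two-digit level.
    counts = Counter((postal[:2], nace[:2], size_class(emp)) for postal, nace, emp in firms)
    print(counts)
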

Knowledge Intensity and High Tech

The OECD (1986) first defined knowledge intensity in manufacturing
sectors on the basis of R&D intensity. R&D intensity was defined for a
given sector as the ratio of R&D expenditure to value added. Later this
method was expanded to take account of the technology embodied in pur-
chases of intermediate and capital goods (Hatzichronoglou, 1997). This
new measure could also be applied to service sectors, which tend to be
technology users rather than technology producers. The discussion con-
tinues about how best to delineate knowledge-intensive services (Laafia,
1999, 2002a, 2002b; OECD, 2001, 2003, p. 140). The classification intro-
duced in STI Scoreboard will be used here (OECD, 2001, pp. 137 ff.). The
relevant NACE categories are shown in Table 11.1.

Table 11.1  Classification of high-tech and knowledge-intensive sectors according to Eurostat

High-tech manufacturing
30  Manufacturing of office machinery and computers
32  Manufacturing of radio, television and communication equipment and apparatus
33  Manufacturing of medical precision and optical instruments, watches and clocks

Medium-high-tech manufacturing
24  Manufacture of chemicals and chemical products
29  Manufacture of machinery and equipment n.e.c.
31  Manufacture of electrical machinery and apparatus n.e.c.
34  Manufacture of motor vehicles, trailers and semi-trailers
35  Manufacturing of other transport equipment

Knowledge-intensive sectors (KIS)
61  Water transport
62  Air transport
64  Post and telecommunications
65  Financial intermediation, except insurance and pension funding
66  Insurance and pension funding, except compulsory social security
67  Activities auxiliary to financial intermediation
70  Real-estate activities
71  Renting of machinery and equipment without operator and of personal and household goods
72  Computer and related activities
73  Research and development
74  Other business activities
80  Education
85  Health and social work
92  Recreational, cultural and sporting activities

Of these sectors, 64, 72 and 73 are considered high-tech services.

Source: Laafia (2002a), p. 7.
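
For illustration, the categories of Table 11.1 can be expressed as a simple look-up on the two-digit NACE codes (a sketch added here, not part of the original analysis):

    # Two-digit NACE codes taken from Table 11.1 (Laafia, 2002a).
    HIGH_TECH_MANUF = {"30", "32", "33"}
    MEDIUM_HIGH_TECH_MANUF = {"24", "29", "31", "34", "35"}
    KIS = {"61", "62", "64", "65", "66", "67", "70", "71",
           "72", "73", "74", "80", "85", "92"}
    HIGH_TECH_SERVICES = {"64", "72", "73"}   # the KIS sectors singled out as high-tech services

    def classify(nace2):
        """Return the Table 11.1 labels that apply to a two-digit NACE code."""
        labels = []
        if nace2 in HIGH_TECH_MANUF:
            labels.append("high-tech manufacturing")
        if nace2 in MEDIUM_HIGH_TECH_MANUF:
            labels.append("medium-high-tech manufacturing")
        if nace2 in KIS:
            labels.append("knowledge-intensive services")
        if nace2 in HIGH_TECH_SERVICES:
            labels.append("high-tech services")
        return labels or ["other"]

    print(classify("72"))   # ['knowledge-intensive services', 'high-tech services']
    print(classify("30"))   # ['high-tech manufacturing']
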

These classifications are based on normalizations across the member
states of the European Union and the OECD, respectively. However, the
percentages of R&D and therefore the knowledge intensity at the sectoral
level may differ in the Netherlands from the average for the OECD or the
EU.
Statistics Netherlands (CBS, 2003) provides figures for R&D intensity
as percentages of value added in 2001. Unfortunately, these data were
aggregated at a level higher than the categories provided by Eurostat
and the OECD. For this reason and, furthermore, because the Dutch
economy is heavily internationalized so that knowledge can easily spill
over from neighboring countries, we decided to use the Eurostat catego-
ries provided in Table 11.1 to distinguish levels of knowledge intensity
among sectors.

Regional Differences

The geographical make-up of the Netherlands is different from its image.
The share of employment in high- and medium-tech manufacturing in the
Netherlands rates only above Luxembourg, Greece and Portugal in the
EU-15 (OECD, 2003, pp. 140 ff.). The economically leading provinces of
the country, like North- and South-Holland and Utrecht, rank among the
lowest on this indicator in the EU-15 (Laafia, 1999, 2002a). The south-
east part of the country is integrated in terms of high- and medium-tech
manufacturing with neighboring parts of Belgium and Germany. More
than 50 percent of private R&D in the Netherlands is located in the
regions of Southeast North-Brabant and North-Limburg (Wintjes and
Cobbenhagen, 2000).
The core of the Dutch economy has traditionally been concentrated
on services. These sectors are not necessarily knowledge-intensive, but
the situation is somewhat brighter with respect to knowledge-intensive
services than in terms of knowledge-based manufacturing. Utrecht and
the relatively recently reclaimed province of Flevoland score high on this
employment indicator,4 while both North- and South-Holland are in
the middle range. South-Holland is classified as a leading EU region in
knowledge-intensive services (in absolute numbers), but the high-tech end
of these services remains underdeveloped. The country is not homogeneous
on any of these indicators.

Methodology

Unlike a covariation between two variables, a dynamic interaction among three dimensions can generate a complex pattern (Schumpeter, 1939, pp. 174 ff.; Li and Yorke, 1975): one can expect both integrating and differentiating tendencies. In general, two interacting systems reduce the uncertainty on either side by an amount equal to their mutual information or transmission. Using Shannon’s formulae, this mutual information is defined as the sum of the uncertainties of the two systems taken separately (Hx + Hy) minus the uncertainty contained in the two systems when they are combined (Hxy). This can be formalized as follows:5

Txy = Hx + Hy – Hxy (11.1)

Abramson (1963, p. 129) derived from the Shannon formulae that the
mutual information in three dimensions is:

Txyz = Hx + Hy + Hz – Hxy – Hxz – Hyz + Hxyz (11.2)
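As an illustration only, the following minimal Python sketch (not part of the original chapter) shows how Equations 11.1 and 11.2 can be evaluated; the three-dimensional table of counts and its small dimensions are invented for the example.

```python
# A minimal sketch of Equations 11.1 and 11.2 (illustrative; counts are hypothetical).
# The axes stand for geography (G), technology/sector (T) and organization/size (O).
import numpy as np

def H(p):
    """Shannon entropy in bits of a probability distribution of any shape."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Hypothetical counts over 2 postal codes x 3 sectors x 2 size classes.
counts = np.array([[[12, 3], [7, 8], [1, 9]],
                   [[4, 10], [6, 2], [11, 5]]], dtype=float)
p = counts / counts.sum()

Hg, Ht, Ho = H(p.sum(axis=(1, 2))), H(p.sum(axis=(0, 2))), H(p.sum(axis=(0, 1)))
Hgt, Hgo, Hto = H(p.sum(axis=2)), H(p.sum(axis=1)), H(p.sum(axis=0))
Hgto = H(p)

T_gt = Hg + Ht - Hgt                               # Equation 11.1
T_gto = Hg + Ht + Ho - Hgt - Hgo - Hto + Hgto      # Equation 11.2
print(round(T_gt, 4), round(T_gto, 4))             # T_gto can be of either sign
```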



While the bilateral relations between the variables reduce the uncer-
tainty, the trilateral term adds to the uncertainty. The layers thus alternate
in terms of the sign. The sign of Txyz depends on the magnitude of the tri-
lateral component (Hxyz) relative to the mutual information in the bilateral
relations.
For example, the trilateral coordination can be associated with a new
coordination mechanism that is added to the system. In the network mode
(Figure 11.1) a system without central integration reduces uncertainty by
providing a differentiated configuration. The puzzles of integration are
then to be solved in a non-hierarchical, that is, reflexive or knowledge-
based mode (Leydesdorff, 2010).

RESULTS

Descriptive Statistics

Table 11.2 shows the probabilistic entropy values in the three dimensions
(G = geography, T = technology/sector and O = organization) for the
Netherlands as a whole and the decomposition at the NUTS-2 level of the
provinces.
Table 11.2 Expected information contents (in bits) of the distributions in the three dimensions and their combinations

HG HT HO HGT HGO HTO HGTO N


NL 6.205 4.055 2.198 10.189 8.385 6.013 12.094 1 131 668
% Hmax 95.6 69.2 61.3 82.5 83.2 63.7 75.9

Drenthe 2.465 4.134 2.225 6.569 4.684 6.039 8.413 26 210


Flevoland 1.781 4.107 2.077 5.820 3.852 6.020 7.697 20 955
Friesland 3.144 4.202 2.295 7.292 5.431 6.223 9.249 36 409
Gelderland 3.935 4.091 2.227 7.986 6.158 6.077 9.925 131 050
Groningen 2.215 4.192 2.220 6.342 4.427 6.059 8.157 30 324
Limburg 2.838 4.166 2.232 6.956 5.064 6.146 8.898 67 636
N.-Brabant 3.673 4.048 2.193 7.682 5.851 6.018 9.600 175 916
N.-Holland 3.154 3.899 2.116 6.988 5.240 5.730 8.772 223 690
Overijssel 2.747 4.086 2.259 6.793 5.002 6.081 8.749 64 482
Utrecht 2.685 3.956 2.193 6.611 4.873 5.928 8.554 89 009
S.-Holland 3.651 3.994 2.203 7.582 5.847 5.974 9.528 241 648
Zeeland 1.802 4.178 2.106 5.941 3.868 6.049 7.735 24 339
The provinces are very different in terms of the numbers of firms and their geographical distribution over the postal codes. While Flevoland contains only 20 955 units, South-Holland provides the location for
241 648 firms.6 This size effect is reflected in the distribution of postal
codes: the uncertainty in the geographical distribution – measured as
HG – correlates significantly with the number of firms N (r = 0.76; p <
0.05). The variance in the probabilistic entropies among the provinces
is high (> 0.5) in this geographical dimension, but the variance in the
probabilistic entropy among sectors and the size categories is relatively
small (< 0.1). Thus the provinces are relatively similar in terms of their
sector and size distributions of firms,7 and can thus be meaningfully
compared.
The second row of Table 11.2 informs us that the probabilistic entropy
in the postal codes of firms is larger than 95 percent of the maximum
entropy of this distribution at the level of the nation. Since the postal codes
are more fine-grained in metropolitan than in rural areas, this indicates
that the firm density is not a major source of variance in relation to the
population density. However, the number of postal-code categories varies
among the provinces, and postal codes are nominal variables that cannot
be compared across provinces or regions.
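The ‘% Hmax’ row of Table 11.2 can be reproduced, for any single dimension, by dividing the observed entropy by the maximum entropy log2(k) for k categories. A minimal sketch with invented counts:

```python
# Entropy relative to its maximum (the '% Hmax' row of Table 11.2); counts are invented.
import numpy as np

counts = np.array([420.0, 130.0, 75.0, 310.0, 65.0])   # e.g. firms per postal-code area
p = counts / counts.sum()
H = float(-(p[p > 0] * np.log2(p[p > 0])).sum())
H_max = np.log2(len(counts))                            # maximum entropy for 5 categories
print(f"H = {H:.3f} bits, i.e. {100 * H / H_max:.1f} percent of Hmax")
```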
The corresponding percentages for the technology (sector) and the
organization (or size) distributions are 69.2 and 61.3 percent, respectively.
The combined uncertainty of technology and organization (HTO) does
not add substantially to the redundancy. In other words, organization
and technology have a relatively independent influence on the distribu-
tion different from that of postal codes. In the provincial decomposition,
however, the highly developed and densely populated provinces (North-
and South-Holland, and Utrecht) show a more specialized pattern of
sectoral composition (HT) than provinces further from the center of the
country. Flevoland shows the highest redundancy in the size distribu-
tion (HO), perhaps because certain traditional formats of middle-sized
companies are still underrepresented in this new province.

The Mutual Information

Table 11.3 provides the values for the transmissions (T) among the
various dimensions. These values can be calculated straightforwardly
from the values of the probabilistic entropies provided in Table 11.2
using Equations 11.1 and 11.2 provided above. The first line for the
Netherlands as a whole shows that there is more mutual information
between the geographical distribution of firms and their technological
specialization (TGT = 72 mbits) than between the geographical distribu-
tion and their size (TGO = 19 mbits).

Table 11.3 The mutual information in millibits among two and three dimensions disaggregated at the NUTS-2 level (provinces)

TGT TGO TTO TGTO


NL 72 19 240 −34

Drenthe 30 5 320 −56


Flevoland 68 6 164 −30
Friesland 54 8 274 −56
Gelderland 40 4 242 −43
Groningen 65 7 353 −45
Limburg 47 6 251 −33
N.-Brabant 39 16 223 −36
N.-Holland 65 30 285 −17
Overijssel 40 4 263 −35
Utrecht 31 5 221 −24
S.-Holland 62 6 223 −27
Zeeland 38 39 234 −39

However, the mutual information between technology and organization (TTO = 240 mbits) is larger than TGO by an order of magnitude. The provinces exhibit a comparable
pattern.
While the values for TGT and TGO can be considered as indicators of the
geographical clustering of economic activities (in terms of technologies
and organizational formats, respectively), the TTO provides an indicator
for the correlation between the maturity of the industry (Anderson and
Tushman, 1991) and the specific size of the firms involved (Suárez and
Utterback, 1995; Utterback and Suárez, 1993; see Nelson, 1994). The
relatively low value of this indicator for Flevoland indicates that the
techno-economic structure of this province is less mature than in other
provinces. The high values of this indicator for Groningen and Drenthe
indicate that the techno-economic structure in these provinces is perhaps
relatively over-mature. This indicator can thus be considered as repre-
senting a strategic vector (Abernathy and Clark, 1985; Watts and Porter,
2003).
All values for the mutual information in three dimensions (TGTO) are neg-
ative. When decomposed at the NUTS-3 level of 40 regions, these values
are also negative, with the exception of two regions that contain only a
single postal code at the two-digit level. (In these two cases the uncertainty
is by definition zero.)8 However, these values cannot be compared among
geographical units without a further normalization.

THE REGIONAL CONTRIBUTIONS TO THE KNOWLEDGE BASE OF THE DUTCH ECONOMY

One of the advantages of statistical decomposition analysis is the possibility of specifying the within-group variances and the between-group
variances in great detail (Theil, 1972; Leydesdorff, 1995). However, a full
decomposition at the lower level is possible only if the categories for the
measurement are similar among the groups. As noted, the postal codes are
nominal variables and cannot therefore be directly compared. Had we used
a different indicator for the regional dimension – for example, percentage
‘rural’ versus percentage ‘metropolitan’ – we would have been able to
compare and therefore to decompose along this axis, but the unique postal
codes cannot be compared among regions in a way similar to the size or
the sectoral distribution of the firms (Leydesdorff and Fritsch, 2006).
In Leydesdorff et al. (2006, p. 190) we elaborated Theil’s (1972) decom-
position algorithm for transmissions. The algorithm enables us to define
the between-group transmission T0 as the difference between the sum of
the parts and the transmission at the level of the composed system of the
country (Table 11.4).

Table 11.4 The mutual information in three dimensions statistically decomposed at the NUTS-2 level (provinces) in millibits of information

              ΔTGTO (= ni * Ti /N) in millibits      ni
Drenthe −1.29 26 210
Flevoland −0.55 20 955
Friesland −1.79 36 409
Gelderland −4.96 131 050
Groningen −1.20 30 324
Limburg −1.96 67 636
N.-Brabant −5.56 175 916
N.-Holland −3.28 223 690
Overijssel −1.98 64 482
Utrecht −1.86 89 009
S.-Holland −5.84 241 648
Zeeland −0.83 24 339

Sum (∑i Pi Ti) −31.10 1 131 668


T0 −2.46
NL −33.55 N = 1 131 668
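A hedged sketch of this decomposition, using invented counts rather than the published figures: each group contributes ΔTi = ni * Ti /N, and T0 follows as the difference between the transmission of the composed system and the sum of these contributions.

```python
# Sketch of the decomposition reported in Table 11.4 (hypothetical counts).
# It assumes the groups ('provinces') do not share categories on the geographical
# axis, as is the case with province-specific postal codes.
import numpy as np

def H(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def T3(counts):
    """Mutual information in three dimensions (Equation 11.2), in bits."""
    p = counts / counts.sum()
    Hg, Ht, Ho = H(p.sum(axis=(1, 2))), H(p.sum(axis=(0, 2))), H(p.sum(axis=(0, 1)))
    Hgt, Hgo, Hto = H(p.sum(axis=2)), H(p.sum(axis=1)), H(p.sum(axis=0))
    return Hg + Ht + Ho - Hgt - Hgo - Hto + H(p)

rng = np.random.default_rng(1)
groups = [rng.integers(1, 50, size=(4, 5, 3)).astype(float) for _ in range(3)]

n = np.array([g.sum() for g in groups])
N = n.sum()
dT = (n / N) * np.array([T3(g) for g in groups])   # within-group contributions
T_composed = T3(np.concatenate(groups, axis=0))    # groups stacked along the G axis
T0 = T_composed - dT.sum()                         # between-group transmission
print(dT, T0)
```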
Figure 11.2 Contribution to the knowledge base of the Dutch economy at the regional (NUTS-3) level (map legend: ΔT > −0.50; > −1.00; ≤ −1.00)

The knowledge base of the country is concentrated in South-Holland (ΔT = −5.84 mbits), North-Brabant (−5.56) and Gelderland (−4.96). North-Holland follows with a contribution of −3.28 mbits of information. The other provinces contribute less to the knowledge base than the between-province interaction effect at the national level (T0 = −2.46 mbits).
The further disaggregation (Figure 11.2) shows that the contribution of South-Holland is concentrated in the Rotterdam area, that of North-Brabant in the Eindhoven region, and that of North-Holland exclusively in the agglomeration of Amsterdam. Utrecht, the Veluwe (Gelderland) and the
northern part of Overijssel also have above-average contributions on this
indicator. These findings correspond with common knowledge about the
industrial structure of the Netherlands (e.g. Van der Panne and Dolfsma,
2003), although the contribution of northern Overijssel to the knowledge
base of the Dutch economy is a surprise. Perhaps this region profits from a
spillover effect of knowledge-based activities in neighboring regions.

SECTORAL DECOMPOSITION

While the geographical comparison is confounded by traditional industrial structure such as firm density, all effects of the decomposition in
terms of the sectoral classification of high- and medium-tech sectors and
knowledge-intensive services can be expressed as a relative effect, that is,
as a percentage increase or decrease of the negative value of the mutual
information in three dimensions when a specific selection is compared with
the complete population. We shall consistently use the categories provided
by the OECD and Eurostat (Table 11.1) and compare the results with
those of the full set as a baseline.
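A minimal sketch of this comparison, with invented counts and invented sector positions: the three-dimensional mutual information is recomputed for the subset and expressed relative to the full set.

```python
# Sketch of the sectoral comparison in Table 11.5 (hypothetical counts and sector
# positions; not the published data).
import numpy as np

def H(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def T3(counts):
    p = counts / counts.sum()
    Hg, Ht, Ho = H(p.sum(axis=(1, 2))), H(p.sum(axis=(0, 2))), H(p.sum(axis=(0, 1)))
    Hgt, Hgo, Hto = H(p.sum(axis=2)), H(p.sum(axis=1)), H(p.sum(axis=0))
    return Hg + Ht + Ho - Hgt - Hgo - Hto + H(p)

rng = np.random.default_rng(2)
counts = rng.integers(1, 100, size=(10, 8, 4)).astype(float)   # G x T x O counts
high_tech = [1, 3, 6]            # hypothetical column positions of high-tech sectors

T_all = T3(counts)
T_ht = T3(counts[:, high_tech, :])
pct = 100 * (T_ht - T_all) / T_all   # with T_all < 0, a positive value indicates a
print(round(T_all, 4), round(T_ht, 4), round(pct, 1))   # deeper (more negative) base
```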
Table 11.5 provides the results of comparing the subset of enterprises
indicated as high-tech manufacturing and knowledge-intensive services
with the full set. Column 4 indicates the influence of high-tech sectors on
the knowledge base of the economy. The results confirm our hypothesis
that the mutual information or entropy that emerges from the interaction
among the three dimensions is more negative for high-tech sectors than
for the economy as a whole. The dynamics created by these sectors deepen
and tighten the knowledge base.
Table 11.5 also provides figures and normalizations for high- and
medium-tech manufacturing combined and knowledge-intensive services
(KIS), respectively. These results indicate a major effect on the indicator
for the sectors of high- and medium-tech manufacturing. The effect is by
far the largest in North-Holland, with 943 percent increase relative to the
benchmark of all sectors combined. Zeeland has the lowest value on this
indicator (365 percent), but the number of establishments in these catego-
ries is also the lowest. North-Brabant has the largest number of establish-
ments in these categories, but it profits much less from a synergy effect in
the network configuration.
The number of establishments in knowledge-intensive services is more
than half (51.3 percent) of the total number of companies in the country.
These companies are concentrated in North- and South-Holland, with
North-Brabant in third position. With the exception of North-Holland,
the effect of knowledge-intensive services on this indicator of the knowl-
edge base always leads to a decrease of configurational information. We
indicate this with an opposite sign for the change. In the case of North-
Holland, the change is marginally positive (+1.0 percent), but this is not
due to the Amsterdam region.9 North-Brabant is second on this rank
order with a decrease of −16.6 percent.
These findings do not accord with a theoretical expectation about the
contributions to the economy of services in general and KIS in particular.
Table 11.5 Mutual information among three dimensions for different sectors of the economy

Columns: (1) Txyz in millibits; (2) all sectors; (3) high-tech; (4) % change (2/3); (5) N; (6) high- and medium-tech manufacturing; (7) % change (2/6); (8) N; (9) knowledge-intensive services; (10) % change (2/9); (11) N
NL −34 −60 80.2 45 128 −219 553 15 838 −24 −27.3 581 196

Drenthe −56 −93 67.6 786 −349 526 406 −34 −39.1 11 312
Flevoland −30 −36 20.6 1307 −206 594 401 −18 −37.9 10 730

Friesland −56 −136 144.9 983 −182 227 951 −37 −32.6 14 947
Gelderland −43 −94 120.1 4 885 −272 536 2 096 −25 −40.8 65 112
Groningen −45 −66 48.1 1 204 −258 479 537 −29 −34.0 14 127
Limburg −33 −68 105.9 2 191 −245 647 1 031 −18 −45.1 30 040
N.-Brabant −36 −58 61.2 6 375 −190 430 2 820 −30 −16.6 86 262
N.-Holland −17 −34 103.4 9 346 −173 943 2 299 −17 1.0 126 516
Overijssel −35 −79 127.6 2 262 −207 496 1 167 −20 −42.8 30 104
Utrecht −24 −39 65.9 4 843 −227 859 1 020 −13 −45.0 52 818
S.-Holland −27 −44 61.7 10 392 −201 635 2 768 −15 −45.5 128 725
Zeeland −39 −67 73.3 554 −180 365 342 −28 −27.8 10 503

Windrum and Tomlinson (1999) argued that, in assessing the role of KIS, the degree of integration is more important than the percentage of representation in the economy, and expected KIS especially to contribute favorably to the economy (Bilderbeek et al., 1998; Miles et al., 1995; European
Commission, 2000, 2005; OECD, 2000). Unlike output indicators, the
measure adopted here focuses precisely on the degree of integration in
the configuration. However, our results indicate that KIS unfavorably
affects the synergy between technology, organization and territory in the
techno-economic system of the Netherlands.
Knowledge-intensive services seem to be largely uncoupled from the
knowledge flow within a regional or local economy. They contribute unfa-
vorably to the knowledge-based configuration because of their inherent
capacity to deliver these services outside the region. Thus a locality can
be chosen on the basis of considerations other than those relevant for the
generation of a knowledge-based economy in the region.
Table 11.6 shows the relative deepening of the mutual information
in three dimensions when the subset of sectors indicated as ‘high-tech
services’ is compared with KIS in general. More than KIS in general,
high-tech services produce and transfer technology-related knowledge
(Bilderbeek et al., 1998).

Table 11.6 The subset of high-tech services improves on the knowledge base in the service sector

Txyz in millibits    Knowledge-intensive services    High-tech services    % change    N
NL −24 −34 37.3 41 002

Drenthe −34 −49 45.2 678


Flevoland −18 −18 −4.6 1 216
Friesland −37 −87 131.5 850
Gelderland −25 −46 82.3 4 380
Groningen −29 −44 49.5 1 070
Limburg −18 −39 118.7 1 895
N.-Brabant −30 −35 16.1 5 641
N.-Holland −17 −20 17.0 8 676
Overijssel −20 −46 133.1 1 999
Utrecht −13 −20 49.8 4 464
S.-Holland −15 −25 69.8 9 650
Zeeland −28 −45 59.7 483

These effects of strengthening the knowledge base seem highest in regions that do not have a strong knowledge base in high- and medium-tech manufacturing to begin with, such as Friesland and Overijssel. The effects of this selection for North-Brabant and North-
Holland, for example, are among the lowest. However, this negative rela-
tion between high- and medium-tech manufacturing on the one hand, and
high-tech services on the other, is not significant (r = −0.352; p = 0.262).
At the NUTS-3 level, the corresponding relation is also not significant.
Thus the effects of high- and medium-tech manufacturing and high-tech
services on the knowledge base of the economy are not related to each
other.

CONCLUSIONS AND DISCUSSION

Our results suggest that:

1. the knowledge base of a (regional) economy is carried by high-tech, but more importantly by medium-tech manufacturing;
2. the knowledge-intensive services tend to uncouple the knowledge base
from its geographical dimension and thus have a relatively unfavora-
ble effect on the territorial knowledge base of an economy. However,
high-tech services contribute to the knowledge-based structuring of an
economy.

These conclusions have been confirmed for other countries (Leydesdorff and Fritsch, 2006; Lengyel and Leydesdorff, 2010). In terms of policy
implications, our results suggest that regions that are less developed may
wish to strengthen their knowledge infrastructure by trying to attract
medium-tech manufacturing and high-tech services in particular. The
efforts of firms in medium-tech sectors can be considered as focused on
maintaining absorptive capacity (Cohen and Levinthal, 1989), so that
knowledge and technologies developed elsewhere can more easily be under-
stood and adapted to particular circumstances. High-tech manufacturing
may be more focused on the (internal) production and global markets than
on the local diffusion parameters. High-tech services, however, mediate
technological knowledge more than knowledge-intensive services, which
are medium-tech. KIS seems to have an unfavorable effect on territorially
defined knowledge-based economies.
Unlike the focus on comparative statics in employment statistics and
the STI Scoreboards of the OECD (OECD, 2001, 2003; Godin, 2006), the
indicator developed here measures the knowledge base of an economy as
an emergent property (Jakulin and Bratko, 2004; see Ulanowicz, 1986,
pp. 142 ff.). This, then, is the most important contribution of this chapter:
the knowledge base of an economy (the explanandum of this study) can
be measured as an overlay of communications among differently codified communications using the triple-helix model for the explanation (i.e. as
the explanans) (Luhmann, 1986, 1995; Leydesdorff, 2006). When market
expectations, research perspectives and envisaged retention mechanisms
can be interfaced, surplus value can be generated by reducing uncer-
tainty prevailing at the systems level without diminishing room for future
explorations using a variety of perspectives (Ashby, 1958).

ACKNOWLEDGMENT

A different version of this chapter appeared in Research Policy, Vol. 35, No. 2, 2006, pp. 181–99.

NOTES

1. The so-called interaction or configurational information is defined by these authors as the mutual information in three dimensions, but with the opposite sign (McGill, 1954;
Han, 1980).
2. NACE stands for Nomenclature générale des Activités économiques dans les
Communautés Européennes. The NACE code can be translated into the International
Standard Industrial Classification (ISIC) and into the Dutch national SBI (Standaard
Bedrijfsindeling) developed by Statistics Netherlands.
3. These classifications are used for statistical purposes by the OECD and Eurostat.
NUTS stands for Nomenclature des Unités Territoriales Statistiques (Nomenclature of
Territorial Units for Statistics).
4. Flevoland is the only Dutch province eligible for EU support through the structural
funds.
5. Hx is the uncertainty in the distribution of the variable x (i.e. Hx = − ∑x px log2 px) and, analogously, Hxy is the uncertainty in the two-dimensional probability distribution (matrix) of x and y (i.e. Hxy = − ∑x ∑y pxy log2 pxy). The mutual information will be indicated with the T of transmission. If base two is used for the logarithm, all values are expressed in bits of information.
6. The standard deviation of this distribution is 80 027.04 with a mean of 94 305.7.
7. The value of H for the country corresponds to the mean of the values for the provinces
in these dimensions: H̄T = 4.088 ± 0.097 and H̄O = 2.196 ± 0.065.
8. These are the regions Delfzijl and Zeeuwsch-Vlaanderen (COROP / NUTS-3 regions 2
and 31).
9. Only in NUTS-3 region 18 (North-Holland North) is the value of the mutual informa-
tion in three dimensions more negative when zooming in on the knowledge-intensive
services. However, this region is predominantly rural.

REFERENCES

Abernathy, W. and Clark, K.B. (1985), ‘Mapping the winds of creative destruc-
tion’, Research Policy, 14, 3–22.
Abramson, N. (1963), Information Theory and Coding, New York: McGraw-Hill.
Anderson, P. and M.L. Tushman (1991), ‘Managing through cycles of technologi-
cal change’, Research-Technology Management, 34 (3), 26–31.
Ashby, W.R. (1958). ‘Requisite variety and its implications for the control of
complex systems’, Cybernetica, 1 (2), 1–17.
Bathelt, H. (2003), ‘Growth regimes in spatial perspective 1: innovation, institu-
tions and social systems’, Progress in Human Geography, 27 (6), 789–804.
Bilderbeek, R., P. Den Hertog, G. Marklund and I. Miles (1998), Services in
Innovation: Knowledge Intensive Business Services (KIBS) as Co-producers of
innovation, STEP report No. S14S.
Blau, P.M. and R. Schoenherr (1971), The Structure of Organizations, New York:
Basic Books.
Braczyk, H.-J., P. Cooke and M. Heidenreich (eds) (1998), Regional Innovation
Systems, London and Bristol, PA: University College London Press.
Carlsson, B. (2006), ‘Internationalization of innovation systems: a survey of the
literature’, Research Policy, 35 (1), 56–67.
Carlsson, B. and R. Stankiewicz (1991), ‘On the nature, function, and composition
of technological systems’, Journal of Evolutionary Economics, 1 (2), 93–118.
Carter, A.P. (1996), ‘Measuring the performance of a knowledge-based economy’,
in D. Foray and B.-Å. Lundvall (eds), Employment and Growth in the Knowledge-
Based Economy, Paris: OECD, pp. 203–11.
CBS (2003), Kennis en Economie 2003, Voorburg and Heerlen: Centraal Bureau
voor de Statistiek.
Cohen, W.M. and D.A. Levinthal (1989), ‘Innovation and learning: the two faces
of R&D’, The Economic Journal, 99, 569–96.
Cooke, P. (2002), Knowledge Economies, London: Routledge.
Cooke, P. and L. Leydesdorff (2006), ‘Regional development in the knowledge-
based economy’, Journal of Technology Transfer, 31 (1), 5–15.
Cowan, R., P. David and D. Foray (2000), ‘The explicit economics of knowledge
codification and tacitness’, Industrial and Corporate Change, 9 (2), 211–53.
Dasgupta, P. and P. David (1994), ‘Towards a new economics of science’, Research
Policy, 23 (5), 487–522.
David, P. and D. Foray (1995), ‘Assessing and expanding the science and technol-
ogy knowledge base’, STI Review, 16, 13–68.
David, P.A. and D. Foray (2002), ‘An introduction to the economy of the knowl-
edge society’, International Social Science Journal, 54 (171), 9–23.
Etzkowitz, H. and L. Leydesdorff (2000), ‘The dynamics of innovation: from
national systems and “Mode 2” to a triple-helix of university–industry–
government relations’, Research Policy, 29 (2), 109–23.
European Commission (2000), Towards a European Research Area, Brussels, 18
January, at http://europa.eu.int/comm/research/era/pdf/com2000-6-en.pdf.
European Commission (2005), Working Together for Growth and Jobs. A New
Start for the Lisbon Strategy, at http://europa.eu.int/growthandjobs/pdf/
COM2005_024_en.pdf.
Foray, D. and B.-Å. Lundvall (1996), ‘The knowledge-based economy: from the
economics of knowledge to the learning economy’, in Employment and Growth
in the Knowledge-Based Economy, Paris: OECD, pp. 11–32.
Freeman, C. and C. Perez (1988), ‘Structural crises of adjustment, business cycles
and investment behaviour’, in G. Dosi, C. Freeman, R. Nelson, G. Silverberg and

L. Soete (eds), Technical Change and Economic Theory, London: Pinter, pp.
38–66.
Fritsch, M. (2004), ‘Cooperation and the efficiency of regional R&D activities’.
Cambridge Journal of Economics, 28 (6), 829–46.
Godin, B. (2006), ‘The knowledge-based economy: conceptual framework or
buzzword’, Journal of Technology Transfer, 31 (1), 17–30.
Han, T.S. (1980), ‘Multiple mutual information and multiple interactions in fre-
quency data’, Information and Control, 46 (1), 26–45.
Hatzichronoglou, T. (1997), Revision of the High-Technology Sector and Product
Classification, Paris: OECD, http://www.olis.oecd.org/olis/1997doc.nsf/LinkTo/
OCDE-GD(97)216.
Jakulin, A. and I. Bratko (2004), Quantifying and Visualizing Attribute Interactions:
An Approach Based on Entropy, http://arxiv.org/abs/cs.AI/0308002.
Khalil, E.L. (2004), ‘The three laws of thermodynamics and the theory of produc-
tion’, Journal of Economic Issues, 38 (1), 201–26.
Laafia, I. (1999), Regional Employment in High Technology, Eurostat, http://epp.
eurostat.ec.europa.eu/cache/ITY_OFFPUB/CA-NS-99-001/EN/CA-NS-99-
001-EN.PDF.
Laafia, I. (2002a), ‘Employment in high tech and knowledge intensive sectors in
the EU continued to grow in 2001’, Statistics in Focus: Science and Technology,
Theme, 9 (4), at http://epp.eurostat.ec.europa.eu/cache/ITY_OFFPUB/KS-NS-
02-004/EN/KS-NS-02-004-EN.PDF.
Laafia, I. (2002b), ‘National and regional employment in high tech and knowl-
edge intensive sectors in the EU – 1995–2000’, Statistics in Focus: Science
and Technology, Theme 9 (3), http://epp.eurostat.ec.europa.eu/cache/ITY_
OFFPUB/KS-NS-02-003/EN/KS-NS-02-003-EN.PDF.
Lengyel, B. and L. Leydesdorff (2010), ‘Regional innovation systems in Hungary:
the failing synergy at the national level’, Regional Studies (in press).
Leydesdorff, L. (1994), ‘Epilogue’, in L. Leydesdorff and P. Van den Besselaar
(eds), Evolutionary Economics and Chaos Theory, London and New York:
Pinter, pp. 180–92.
Leydesdorff, L. (1995), The Challenge of Scientometrics: The Development,
Measurement, and Self-Organization of Scientific Communications, Leiden:
DSWO Press.
Leydesdorff, L. (1997), ‘The new communication regime of university–industry–gov-
ernment relations’, in H. Etzkowitz and L. Leydesdorff (eds), Universities and the
Global Knowledge Economy, London and Washington, DC: Pinter, pp. 106–17.
Leydesdorff, L. (2003), ‘The mutual information of university–industry–
government relations’, Scientometrics, 58 (2), 445–67.
Leydesdorff, L. (2006), The Knowledge-Based Economy: Modeled, Measured,
Simulated, Boca Raton, FL: Universal Publishers.
Leydesdorff, L. (2010), ‘Redundancy in systems which entertain a model them-
selves: interaction information and self-organization of anticipation’, Entropy,
12 (1), 63–79.
Leydesdorff, L. and M. Fritsch (2006), ‘Measuring the knowledge base of regional
innovation systems in Germany in terms of a triple helix dynamics’, Research
Policy, 35 (10), 1538–53.
Leydesdorff, L., W. Dolfsma and G. Van der Panne (2006), ‘Measuring the knowl-
edge base of an economy in terms of triple-helix relations among technology,
organization, and territory’, Research Policy, 35 (2), 181–99.

Li, T.-Y. and J.A. Yorke (1975), ‘Period three implies chaos’, American
Mathematical Monthly, 82 (10), 985–92.
Luhmann, N. (1986), Love as Passion: The Codification of Intimacy, Stanford, CA:
Stanford University Press.
Luhmann, N. (1995), Social Systems, Stanford, CA: Stanford University Press.
Lundvall, B.-Å. (1988), ‘Innovation as an interactive process: from user–producer
interaction to the national system of innovation’, in G. Dosi, C. Freeman,
R. Nelson, G. Silverberg and L. Soete (eds), Technical Change and Economic
Theory, London: Pinter, pp. 349–69.
Lundvall, B.-Å. (ed.) (1992), National Systems of Innovation, London: Pinter.
Machlup, F. (1962), The Production and Distribution of Knowledge in the United
States, Princeton, NJ: Princeton University Press.
McGill, W.J. (1954), ‘Multivariate information transmission’, Psychometrika, 19
(2), 97–116.
Miles, I., N. Kastrinos, K. Flanagan, R. Bilderbeek, P. Den Hertog, W. Hultink
and M. Bouman (1995), Knowledge-Intensive Business Services: Users, Carriers
and Sources of Innovation, European Innovation Monitoring Service, No. 15,
Luxembourg.
Mirowski, P. and E.-M. Sent (2001), Science Bought and Sold, Chicago, IL:
University of Chicago Press.
Nelson, R.R. (ed.) (1993), National Innovation Systems: A Comparative Analysis,
New York: Oxford University Press.
Nelson, R.R. (1994), ‘Economic growth via the coevolution of technology and
institutions’, in L. Leydesdorff and P. Van den Besselaar (eds), Evolutionary
Economic and Chaos Theory, London and New York: Pinter, pp. 21–32.
OECD (1986), OECD Science and Technology Indicators: R&D, Invention and
Competitiveness, Paris: OECD.
OECD (1996), New Indicators for the Knowledge-Based Economy: Proposals for
Future Work, DSTI/STP/NESTI/GSS/TIP (96) 6.
OECD (2000), Promoting Innovation and Growth in Services, Paris: OECD.
OECD (2001), Science, Technology and Industry Scoreboard: Towards a Knowledge-
based Economy, Paris: OECD.
OECD (2003), Science, Technology and Industry Scoreboard, Paris: OECD.
OECD/Eurostat (1997), Proposed Guidelines for Collecting and Interpreting
Innovation Data, ‘Oslo Manual’, Paris: OECD.
Pavitt, K. (1984), ‘Sectoral patterns of technical change’, Research Policy, 13 (6),
343–73.
Pugh, D.S. and D.J. Hickson (1969), ‘The context of organization structures’,
Administrative Science Quarterly, 14 (1), 91–114.
Pugh, D.S., D.J. Hickson and C.R. Hinings (1969), ‘An empirical taxonomy of struc-
tures of work organizations’, Administrative Science Quarterly, 14 (1), 115–26.
Schumpeter, J. [1939] (1964), Business Cycles: A Theoretical, Historical and
Statistical Analysis of Capitalist Process, New York: McGraw-Hill.
Schwartz, D. (2006), ‘The regional location of knowledge based economy activities
in Israel’, The Journal of Technology Transfer, 31 (1), 31–44.
Shannon, C.E. (1948), ‘A mathematical theory of communication’, Bell System
Technical Journal, 27 (July and October), 379–423 and 623–56.
Storper, M. (1997), The Regional World, New York: Guilford Press.
Suárez, F.F. and J.M. Utterback (1995), ‘Dominant design and the survival of
firms’, Strategic Management Journal, 16 (6), 415–30.
Theil, H. (1972), Statistical Decomposition Analysis, Amsterdam and London: North-Holland.
Ulanowicz, R.E. (1986), Growth and Development: Ecosystems Phenomenology,
San Jose, CA: Excel Press.
Utterback, J.M. and F.F. Suárez (1993), ‘Innovation, competition, and industry
structure’, Research Policy, 22 (1), 1–21.
Van der Panne, G. and W. Dolfsma (2003), ‘The odd role of proximity in knowl-
edge relations’, Journal of Economic and Social Geography, 94 (4), 451–60.
Vonortas, N.S. (2000), ‘Multimarket contact and inter-firm cooperation in R&D’,
Journal of Evolutionary Economics, 10 (1–2), 243–71.
Watts, R.J. and A.L. Porter (2003), ‘R&D cluster quality measures and technology
maturity’, Technological Forecasting & Social Change, 70 (8), 735–58.
Whitley, R.D. (1984), The Intellectual and Social Organization of the Sciences,
Oxford: Oxford University Press.
Whitley, R.D. (2001), ‘National innovation systems’, in N.J. Smelser and P.B.
Baltes (eds), International Encyclopedia of the Social and Behavioral Sciences,
Oxford: Elsevier, pp. 10303–9.
Windrum, P. and M. Tomlinson (1999), ‘Knowledge-intensive services and inter-
national competitiveness’, Technology Analysis and Strategic Management, 11
(3), 391–408.
Wintjes, R. and J. Cobbenhagen (2000), ‘Knowledge intensive industrial clustering
around Océ’, MERIT Research Memorandum No. 00-06. MERIT, University
of Maastricht.
12. Knowledge networks: integration mechanisms and performance assessment
Matilde Luna and José Luis Velasco

Triple-helix relations involve communication among systems that have distinctive codes and languages. Therefore they should be conceptualized
as complex systems, or as systems of network-structured social relations.
Because of their complexity, triple-helix relations are hard to analyse and
evaluate. Strictly speaking, they are not economic, political or scientific
organizations, and therefore conventional criteria and standards for
assessing performance cannot be taken for granted.
This chapter addresses a set of theoretical and methodological issues
related to the functioning and performance of organizations involved in
triple-helix relations. More specifically, we look at four integration mecha-
nisms and their impact on several criteria of performance. We consider
two types of network performance: functional and organizational, respec-
tively related to the practical results of collaboration and the conditions
for the production of such results. We shall focus on academy–industry
relations or knowledge networks conceived as complex problem-solving
structures devoted to the generation and diffusion of knowledge through
the establishment of collaborative links between academy and industry.
We begin by drawing some theoretical insights from the literature on
network complexity. We then identify four mechanisms for integrating
actors with different and sometimes diverging codes, interests, needs, pref-
erences, resources and abilities: mutual trust, translation, negotiation and
deliberation. Next, we focus on trust and translation, respectively related
to problems of internal social cohesion and communication. Next we turn
to negotiation and deliberation, both related to decision-making. Finally,
we try to establish to what extent, in what sense, and under what condi-
tions knowledge networks can or cannot be considered effective, effica-
cious and efficient, and propose some criteria for assessing functional and
organizational performance.
The empirical data that illustrate some of our analytical claims come from 38 structured interviews with people from academic and economic
organizations.1 The interviewees are from different economic sectors, tech-
nological fields and regions in Mexico. All of them have participated in
joint research projects aimed at the generation and diffusion of knowledge.
Most of the firms involved were large and had R&D departments. While
not statistically representative, these data allow us to put forward some
hypotheses and proposals regarding the operation of integration mecha-
nisms and their impact on network performance. Given the heterogeneity
of the cases and the diversity of the actors interviewed, we can reasonably
suppose that high frequencies or low levels of data dispersion may indicate
highly significant social regularities.

NETWORKS AS COMPLEX SYSTEMS

Since the early 1990s, the network metaphor has inspired a variety of theoretical, methodological and technical developments. Despite their
of networks as complex systems: social network analysis, the study of
networks as social coordination mechanisms, and actor-network theory.
These approaches provide some theoretical foundations for the study of
integration mechanisms and network performance.
Social network analysis developed from the notion that networks are
systems of bonds between nodes and that bonds are structures of interper-
sonal communication. Nodes can be individuals, collective entities (e.g.
organizations or countries), or positions in the network. This approach
has centred on the morphological dimension of networks, asking how
actors are distributed in informal structures of relations and what are the
boundaries or limits of a network. Its main concerns include the opera-
tionalization, measurement, formalization and representation of ties. Only
recently has this approach developed mathematical models and tools to
analyse network dynamics.
The dominant image in this literature is one of a dense, egocentric
network, comprising homogeneous actors linked by strong (intense) ties.
One of the most interesting problems analysed through this approach is
‘the strength of weak ties’ (Granovetter, 1973). A tie with a low level of
interpersonal intensity, it is argued, may have a high level of informative
strength if it is a ‘bridge’, that is to say, the only link between two or more
groups, each of them formed by individuals connected by strong bonds.
Seeing networks as complex organizations, Burt (1992) put forward
the notion of ‘structural holes’: sparse regions located between two or
more dense regions of relations and representing opportunities for the
circulation of original information or new ideas (Walker et al., 1997; see also Valente, 1995).
Theories of networks as social coordination mechanisms have distin-
guished several patterns of social coordination, each associated with a
different type of institution. Hierarchical control has been associated
with the state, exchange with the market, solidarity with the community,
and concerting with the associative model (Eising and Kohler-Koch,
1999; Martinelli, 2002; Streeck and Schmitter, 1992). More recently,
networks have also been seen as a specific mode of social coordination.
One of the most interesting developments incorporates elements of evo-
lutionary theories. From this perspective, what distinguishes networks
from other forms of coordination is their high level of complexity, which
results from the differentiation, specialization and interdependence of
social systems.
Seen as a form of governance, a network is represented as a polycentric
configuration and as a system of relations having weak but strong ties and
a membership that is both elastic and heterogeneous. Within these con-
figurations, actors follow different codes or coordination principles – for
example, exchange or profit, legitimate authority or the law, and solidarity
– which may be mutually inconsistent.
Arising from the sociology of science, technology and knowledge, actor-
network theory emphasizes the autonomy and indeterminacy of networks
and their actors. Bruno Latour (1993, p. 91), its main proponent, holds
that the separation of nature, society and language is artificial. Reality, he
claims, is ‘simultaneously real like nature, narrated like discourse, and col-
lective like society’. Actor-network theory seeks to describe the association
of humans and non-humans (objects, abilities, machines), or ‘actants’.
This theory opened social sciences to the non-human part of the social
world (Callon and Law, 1997).
Thus a network is a configuration of animated and inanimate elements.
An actor-network is simultaneously an actor that brings together het-
erogeneous and differentiated elements, and a network that transforms
its own elements. In this view, translation – defined as the combination of
negotiations, intrigues, calculations, acts of persuasion, and violence by
means of which an actor or force acquires the authority to speak or act on
behalf of other actors or forces – provides the link that connects actors or
‘actants’ into a network and defines the network itself. Translation both
allows actors to communicate and defines the evolution of the network. In
fact, from this perspective, a network can be seen as a system of translation
(Leydesdorff, 2001).
From these approaches, we can conclude that triple-helix relations
should be considered complex systems of relations, resulting from
simultaneous processes of differentiation and interdependence among individuals, groups, institutions or subsystems.
The collaborative projects analysed here comprise moderate or highly
complex interactions that can be usefully conceptualized as networks.
Members act according to different codes. The networks that they create
have their own rules of interaction and are highly autonomous from
the original organizations. Authority within the network is dynamically
dispersed and decision-making involves various actors and complex
processes of negotiation and deliberation. Relations among members
also involve the collective construction of objectives through a variety of
coordination mechanisms (such as formal agreements, personal relation-
ships, committees of all sorts, working teams) and diverse communication
techniques.
Some specific data show other dimensions of this complexity, such as the
weakness of the ties among members and institutions, the elasticity of the
network, and the uncertainty of their results. On the weakness of ties, 82 per
cent of the interviewees considered that there were important obstacles to
the flow of knowledge, and nearly 50 per cent recognized that there had been
‘important differences in opinion among the members’. Elasticity is shown
by the fact that the original aims of collaboration were changed and new
members were included as projects developed. On the uncertainty of results,
only half of the people interviewed thought that ‘the problem addressed was
solved’ and a relatively high 24 per cent ‘do not know whether the problem
addressed was solved’. However, most of them (82 per cent) claimed that
other problems, not initially addressed, were solved.
Being complex entities, knowledge networks are autonomous in a
double sense. First, each of their components (institutions, organiza-
tions and individuals) is autonomous and remains so even as interaction
and collaboration intensify. Second, the whole network is autonomous,
in the sense that it is not subject to a superior entity that regulates its
actions. This individual and organizational autonomy entails that there
are no settled rules unequivocally determining the rights and obligations
of members (self-regulation) and that participants are reasonably free to
express their opinions and to choose their options (self-selection).
At the level of decision-making, the double autonomy of networks
means that no member has absolute authority, each has certain autonomy
(Hage and Alter, 1997) and authority is dynamically distributed within the
network. In the words of one participant: ‘leadership shifts from one place
to another, from one [strategic] member to another’, depending on the
nature of the issues addressed.
To coordinate heterogeneous actors, structure and process conflicts, col-
lectively make decisions, reach agreements and solve problems, triple-helix
relations combine four mechanisms of integration: complex trust, translation, negotiation and deliberation. Each mechanism operates in other
social structures as well, for example, markets, politics and communities.
What is distinctive of networks is that they require the combined opera-
tion of the four mechanisms and assign a central role to three of them:
complex trust, translation and deliberation.

TRUST AND TRANSLATION

Two of the four integrative mechanisms analysed here have to do with how networks structure their internal relations, creating a common
ground where people from universities and business firms may fruitfully
interact, cooperate and thereby generate knowledge that is at the same
time scientifically valid, technically useful and economically profitable.

Trust and Internal Cohesion

Interpersonal or mutual trust is a set of positive expectations regarding the actions of other people. Such expectations become important when someone
has to choose a course of action knowing that its success depends, to some
extent, on the actions of others; and yet such a decision must be made before
it is possible to evaluate the actions of the others (Dasgupta, 1988).
Trust, hence, has three basic features: interdependence, uncertainty
and a positive expectation. There is a trust relation when the success of a
person’s action depends on the cooperation of someone else; therefore it
entails at least partial ignorance about the behaviour of the others and the
assumption that they will not take undue advantage (Lane, 1998, p. 3;
Sable, 1993).
For knowledge networks, three kinds of trust are particularly signifi-
cant: calculated or strategic, technical or prestige-based, and normative.
Prestige-based or technical trust depends mainly on perceptions about
the capabilities and competences of participants. Calculated or strategic
trust arises from calculations of costs and benefits and is based on the
expectation of mutual benefits from the relationship. Personal or norma-
tive trust depends on shared norms, beliefs and values; it is based on social
solidarity, rather than on the expected benefits of the interaction.
Interviews with participants showed that the three types of trust are
highly important. They were asked to rate an indicator of each kind of
trust on a scale ranging from 0 to 10, where 10 meant ‘very important for
the development of the collaborative project’. Ratings for each dimension
of trust averaged between 8.8 and 9.2.
We also found that there is a strong internal consistency among the three dimensions of trust; each of them can be either a cause or an effect of
the other two; any of them can give rise to transitive chains (A trusts B, B
trusts C, therefore A trusts C); they may be mutually supportive, overlap-
ping or conflicting, and the way they combine has important consequences
for the origin, development, and dissolution of triple-helix relations.
The existence of one type of trust may increase the opportunities for
the development of the other types; conversely, the predominance of one
type may undermine the others. For instance, a relation initially based
on normative trust may give rise to technical trust; or a collaborative
project motivated solely by expectations of mutual gain may impede the
development of normative or personal trust, which is the firmest basis of
interpersonal communication. In the former case, trust may help stabilize
or integrate the network, or even operate as a multiplying factor creating
new relationships out of an original system of ties. In the latter case, trust
creates problems of coordination, efficacy, or efficiency.2
It seems clear, therefore, that an important amount of trust in each of its
three types is indispensable in complex networks. That is why we should
not refer to normative, strategic and technical ‘trusts’ as separate kinds
of trust but as dimensions of it. Therefore, in complex networks, the total
amount of trust is a combination (the ‘algebraic sum’) of the three dimen-
sions. If one neglects this fact, it becomes difficult to explain how trust can
be generalized among people with different ideas, from diverse disciplinary
and organizational cultures and even from different countries, as is the
case in knowledge networks.
More generally, it could be said that as a property of complex systems –
and as a mechanism of integration among actors with different codes and
languages – mutual trust takes on a complex character. It involves calcula-
tions, solidarity, and a perception of the technical prestige of participants.
Thus, as opposed to simple social interactions based on a single type of
trust, triple-helix relations involve an unstable balance among these three
dimensions of trust.
Therefore the evaluation of network performance has to consider not
only the total level of mutual trust produced or wasted during the interac-
tion but also the way in which its different dimensions are combined. This
leads us to question the assumption, shared by many theories of trust, that
there is always a positive relationship between trust and performance.

Translation and Communication

The problem of translation is located at the core of the weak-tie paradox, according to which the flow of new ideas and original information is associated with the mismatch of language and cognitive orientation


(Steward and Conway, 1996; Granovetter, 1973; Valente, 1995; Burt,
2000; Lundvall, 2000). More strongly, translation systems may be seen as
the answer to an evolutionary paradox, with integration being a form of
de-differentiation (Leydesdorff, 1997). Moreover, from diverse theoretical
approaches, the literature on innovation and academy–industry relations
has identified boundary personnel under different names, such as gate-
keepers, brokers, boundary-spanners, gold-collar workers or translators,
and has assigned them a critical role in collaboration (e.g. Leydesdorff,
1997; Steward and Conway, 1996).
Drawing on this literature and on data from the projects described
above, we hold that translation performs five critical functions in knowl-
edge networks, creating a common ground between different and some-
times conflicting cognitive orientations, coding schemes, ‘local languages’,
normative orientations and interests.3
At the level of cognitive orientation, translation connects organiza-
tions and people with diverging notions of knowledge and knowledge
creation. As Leydesdorff (2001) has noted, while scientific propositions
are usually evaluated in terms of ‘truth’, market-oriented communications
are normally judged by their utility or profit potential. But there are also
important differences in the notion of innovation. For some participants,
innovation means new ideas or the rupture of paradigms; for others,
changes in the market (products, technology, organization patterns, or
trade techniques). Moreover, actors from academic institutions and firms
often perceive the mismatch of cognitive orientation as a difference in lan-
guage, sometimes considered the major impediment to interaction. More
generally, interviewees referred to differences in culture, approaches and
time spans. The task of reconciling these differences becomes all the more
difficult because they often overlap and combine in complex ways.
Second, at the inter-organizational level, translation reconciles different
organizational structures and procedural mechanisms. Mutual under-
standing within the network is made difficult by the existence of diverging
standards about confidentiality, information flows, intellectual property,
patents, evaluation criteria and administration. The need for translation
at this level increases because network members remain autonomous and
control their own resources, which entails that virtually any decision has
to be jointly made.
Third, operating at the interdisciplinary level, translation contributes
to the development of a problem-solving approach, which commonly
involves the collaboration of multiple disciplines. At this level, translation
also deals with tensions between basic and applied research.
Fourth, at the level of codification, translation combines ‘local’ and
‘universal’ knowledge. As one interviewee put it, a ‘translator’ must have the ‘sensitivity’ to simultaneously ‘see an industrial problem’ and ‘the
basic science behind it’.
Finally, at the level of interests and negotiations, translation deals with
power struggles, many of them related to asymmetries and differences in
the kinds of ‘goods’ exchanged within the network. Networks are complex
entities that have to coordinate particular, divergent and common inter-
ests (Messner, 1999). At this level, translation is critical to the efficacy of
the network, which is often defined as the capacity to manage conflicts.
Translation may be approached at both the structural and individual
levels. By bringing together people from academic institutions and firms,
knowledge networks function as translation structures. They connect
entities from two social subsystems, each with its own cognitive orienta-
tions, coding schemes, ‘local languages’ and normative orientations. But,
inside the network, some individuals specialize in translation, facilitating
communication and understanding among members. In this respect, it is
noticeable that most of the interviewees identified a person who facilitated
communication between participants.
Translators are usually individuals who have worked in both the busi-
ness and academic sectors. They have interdisciplinary knowledge and
skills; understand the cultures and procedures of different organizations;
have many varied and informal links with the other members of the
network; and are distinguished by personal traits that allow them to act
as interpersonal communication facilitators. They possess high amounts of
tacit knowledge and frequently command all types of knowledge: know-
who, know-what, know-how and know-why.
Although translators may occupy marginal positions in the network,
they are not just carriers of messages from one sector to another. They
transform scientific knowledge into economically useful information,
knowledge, products and processes. Conversely, they transform the practi-
cal knowledge requirements of firms into scientifically relevant questions.

Trust or Translation?

Within certain limits, there seems to be an inversely proportional relation between trust and translation: when members of the network strongly trust
each other, communication is easier and translation less necessary. Only
those respondents who thought that communication among members was
‘very difficult’, ‘difficult’, or ‘easy’ believed that there was one person facili-
tating communication. People who thought that communication was ‘very
easy’ denied that such a facilitator existed.
Another interesting fact is that trustworthiness is an essential
characteristic of individuals identified as translators. Personal descriptions of translators are full of phrases or words such as ‘very participative’,
‘intelligent person’, ‘willingness’, ‘self-motivated individual’, ‘empathy’,
‘trust’, ‘reliable person’ and so on. In sum, translators must be trustworthy
people – trustworthy in a complex way, with features corresponding to
each of the three dimensions of trust.4

NEGOTIATION AND DELIBERATION

Given the complexity of triple-helix relations, their collective decisions have to be reached by bargaining, by giving reasons, and by solving con-
flicts or problems. This is especially true of decisions concerning the aims
of the project, the nature of problems to be addressed, and the best ways
to solve them – all of which decisively shape the origins, dynamics and
evolution of knowledge networks.5
Interviewees showed a generalized perception that a clear definition of
aims was decisive for the consolidation of networks. It was seen as even
more important than the amount of financial support, the existence of pre-
vious relations among participants, the specific capacities of institutions
and actors for solving the problem in question, and the incentives pro-
vided through public policy instruments. Conflicts and differences about
the nature of the problem addressed can be exemplified with the following
statement made by a participant from an academic institution:

People from industry always claim to know what the problem is from a non-
technical standpoint. But if you get into it, you realize that what lies behind is
very different . . . Sometimes, they think they have a problem of processes, and
perhaps it is in fact a problem of materials; or they see it as a problem of materi-
als and indeed is a problem of characterisation . . . To break down [the problem]
also allows us to identify other problems, of which they were probably unaware
even though the problems were about to explode.

According to Elster (1999), there are three basic forms of collective decision-making: voting, bargaining or negotiation, and deliberation.
Although not mutually exclusive, they have fundamental differences in
their theoretical presuppositions, possibilities and constraints. Voting has
been considered a very effective and efficient method, generally producing
clear and quick decisions (Jachtenfuchs, 2006). However, it is hardly suit-
able for knowledge networks, characterized by the search for consensus6
through a mixture of deliberation and negotiation. Therefore we shall
concentrate on the features, possibilities and constraints of these two ways
of making decisions.
In contrast to theories of social coordination (e.g. Messner, 1999) and
actor-network theory, which claim that networks are predominantly
governed by the logic of negotiation, we argue that deliberation is what
distinguishes them from other forms of coordinating actions and making
collective decisions. Networks make decisions mainly through the rational
exchange of arguments; this sets them apart from political, economic or
social structures that rely primarily on power, money, or solidarity.7 This
is particularly evident in academy–industry relations, where information,
ideas, knowledge and expertise – resources that are essential for argumen-
tation and collectively making judgements – play an important role in
authority formation.

Negotiation

The central distinction between negotiation and deliberation can be
approached from the standpoint of interests. Whereas in negotiation inter-
ests are fixed and defined beforehand, deliberation involves the collective
definition of preferences. Deliberation presupposes that interests are not
external to the political process: the debate and exchange of arguments
transform preferences, making them more compatible.8
In an interdependent situation, negotiation seeks either ‘to create some-
thing new that neither party could do on his or her own, or . . . to resolve
a problem or dispute between the parties'. It may be informal haggling
(bargaining) or a ‘formal, civilized process’ through which parties try to
‘find a mutually acceptable solution to a complex conflict’ (Lewicki et al.,
2004, p. 3).
A negotiation situation has several distinctive characteristics: there are
two or more parties, who are interdependent (they need each other) but
have conflicting interests; the parties prefer to seek an agreement instead
of fighting openly or opting out of the relationship; there is no set of rules
that automatically resolves the conflict of interests (Lewicki et al., 2004).
The main concern of participants in this competitive cooperation is how to
distribute the costs and benefits of the interaction.
As long as participants remain diverse and autonomous from each
other, each of them independently controlling important resources and
sharing in the distribution of power, negotiation will always be neces-
sary. This makes it an essential, permanent feature of network decision-
making. However, other structural features of knowledge networks place
important limits on the use of negotiation.
First, interests – and therefore objectives, goals, strategies, gains and
losses – are defined and redefined within the network itself. They are
internal to the network, transformed and even generated by it. The
problems that networks usually address are necessarily complex. If partici-
pants were able to define them in a way that is both scientifically
correct and economically useful, then there would scarcely be any need
to create networks like these. This interactive redefinition of interests and
problems fundamentally transforms network exchange, making it differ-
ent from market bargaining or political negotiation. Before being negoti-
ated, those interests have to be defined through other communication and
decision-making mechanisms.
But, in the second place, networks are more than exchange mechanisms.
They are autonomous organizations: collective actors in their own right,
with their own interests, goals, strategies, gains, losses and problems to
solve. As seen above, trust and translation play critical roles in creating
a common ground and solidifying the collective structure of the network,
even if individual participants remain mutually independent.
Negotiation is embedded in this collective structure and therefore occu-
pies a narrower place than it does in market bargaining and conventional
political negotiations. Given that many network decisions concern not the
best way to accommodate diverse individual interests but the best solution
to common problems, they have to be agreed upon rather than negotiated.

Deliberation

At its core, deliberation refers to the rational exchange of arguments
aiming at reasonable decisions and solutions. Its main goal is to identify
a common good, which implies a redefinition of private interests. As
Elster (1999, pp. 12–13) puts it, ‘When the private and idiosyncratic wants
have been shaped and purged in public discussion about the public good,
uniquely determined rational agreement would emerge. Not optimal com-
promise but unanimous agreement’ would be the result of such a process.
As suggested above, this is especially true of knowledge networks, whose
main explicit purpose is to solve knowledge-related problems that are not
precisely defined beforehand.
Even when it fails to identify a common good, deliberation may at least
result in a ‘collective evaluation of divergences’ (Oléron, 1983, p. 108).
This evaluation may facilitate mutual understanding among members of
the network. Moreover, it almost inevitably leads participants to redefine
their interests, objectives and goals. Deliberation compels participants to
present their arguments in terms of the common interests of the organiza-
tion. Individual interests are legitimate and publicly defensible only in so
far as they may be presented as compatible with, or at least not contrary
to, the common interests of participants. This, again, is especially impor-
tant in knowledge networks, because the problem that must be solved has
to be collectively defined or at least significantly redefined by the network
itself.
In contrast to compromise solutions (as well as to coercion, manipula-
tion, acquiescence, unthinking obedience, or market decisions), delib-
eration implies justification (Warren, 1996). Through deliberation,
participants make collective judgements of a discretionary character – judge-
ments that are freely and prudently made through debate, driven by
reason and good sense, and that are neither partial nor inappropriate.
In the cases analysed, most decisions had important deliberative ele-
ments. A common way to describe the process through which partici-
pants made joint decisions is the following: proposals are elaborated and
analysed; arguments are presented and discussed; then, technical support
is provided, some tests are conducted and results are compared. Asked
how they solve their differences, network participants emphasized the
importance of deliberative mechanisms:

Differences are solved by seeking more information and studying it more care-
fully. That is, everybody has to learn how to understand the others . . . We must
understand how to measure and evaluate things . . . But this is achieved through
studying, and afterward, in some meeting, talking more deeply about mistakes,
definitions, and so on.
You have to back up your proposals with indicators . . . You have to show
the viability of numbers.

Certain norms are indispensable for deliberation. Among them are
‘openness, respect, reciprocity, and equality’ (Dryzek, 2000, pp. 134–5).
These norms may be enshrined in explicit rules, but even if they are not,
they must be respected in practice. Openness means that several decisions
are possible in principle. A measure of respect for partners is indispensa-
ble for serious discussion. As Gutmann and Thompson (1996, pp. 52–3)
point out, reciprocity refers to ‘the capacity to seek fair terms of coopera-
tion’, And ‘equality of opportunity to participate in . . . decision making’
has been considered ‘the most fundamental condition’ of deliberation
(Bohman and Rehg, 1999, p. xxiii).
Several structural features of networks facilitate deliberation: the auton-
omy and interdependence of their members, their decentralized power
structures and so on. But even within this favourable structure, delib-
eration needs specific institutional arrangements. Among the facilitating
institutions and practices are: frequent meetings where free, horizontal
discussion is not only allowed but actively encouraged; the use of appro-
priate channels of communication that do not require physical presence
(e-mail, virtual meetings); the existence of recognized mechanisms for
solving disputes and defining the rules of debate. Mechanisms like these
allow networks to embed deliberation in their daily life and to take full
advantage of its potential for producing legitimate decisions.
Deliberation has several advantages over other forms of collective
decision-making. In contrast to voting, where the acceptance of majority
decision by the minority is a structural problem, negotiation and delibera-
tion do not create absolute losers. But whereas the main goal of negotia-
tion is to reach a compromise among conflicting interests, the main goal of
deliberation is to convince the other partners. Thus collective agreements
reached through deliberation are self-enforcing and therefore less vulner-
able to unilateral action, which is a weakness of negotiated agreements.
Deliberation may reinforce the efficacy of networks. According to
Weale (2000, p. 170), under certain conditions, transparent processes
based on deliberative rationality must lead to solutions that are function-
ally efficacious in most cases. This will happen if the solution to a given
problem complies with the following conditions: it must arguably belong
to the set of those decisions that may be reasonably chosen, even if there
were other options that could have been reasonably chosen as well; it must
be open to scrutiny by those affected or benefited by it. If this is the case,
then negotiation and the pressure for unanimity are irrelevant to the extent
that their potential outcomes belong to the set of decisions that may be
made through deliberation.
However, deliberation has important drawbacks. Agreements often
exact a heavy price, as they are usually achieved through long and compli-
cated processes of discussion. Besides, there is always the risk that deliber-
ation may lead to non-decisions (Jachtenfuchs, 2006). Deliberation is not
only a time-consuming activity; it also requires energy, attention, infor-
mation and knowledge, which have been considered scarce deliberative
resources (Warren, 1996).
Moreover, by stimulating public discussion, deliberation may intensify
disagreement and increase ‘the risk that things could go drastically wrong’
(Bell, 1999, p. 73). It may even create disagreement where there was none.
It can impede or at least complicate the adoption of rules guiding collec-
tive discussion and decision-making, which form the basis for subsequent
deliberations.
Although it has been considered that collective agreements reached
through deliberation are ideal for heterogeneous actors, several interview-
ees thought that some differences among participants were never resolved.
But this apparent deficit of efficiency is perhaps a structural problem of
knowledge networks. Members frequently complain that to reach agree-
ments they have to participate in multiple committees of all sorts and to
spend much time discussing in formal and informal settings. This is not
surprising, as knowledge networks are characterized by a permanent
search for equilibrium between academic and profit-making criteria. This
equilibrium can be achieved only through regular and frequent interac-
tion among participants. This interaction is highly demanding: it requires
the recognition of others’ perceptions and preferences, a relatively gen-
eralized and well-balanced trust, and a good disposition to engage in the
deliberative process itself.

NETWORK PERFORMANCE

The complexity of networks entails that there are no straightforward crite-
ria to define and evaluate their performance. From their respective stand-
points, participants may draw varying and even contradictory conclusions
about the achievements and failures of collaborative projects.
The performance of knowledge networks has to be assessed at two
equally important levels: practical (or functional) and organizational. The
former refers to actions and decisions aimed at solving technical problems,
transforming or generating knowledge, or creating economically useful
products by scientifically valid methods. This level of performance has
normative, technical and exchange dimensions.
Organizational performance conventionally refers to the survival of
the network, but it also concerns its stabilization and the preservation,
waste, or increase of the bases and opportunities for future collabora-
tive exchanges between academy and industry. Network organizational
performance will be higher to the extent that it facilitates the continued
operation of the four integration mechanisms analysed above and pro-
vides participants with the knowledge and skills necessary for this complex
form of collaboration.

Dimensions of Functional Performance

The decisions made and actions taken by knowledge networks may be
assessed along three dimensions, each with its distinctive question:

1. Normative dimension. Whether decisions and actions are right: the extent
to which they comply with the normative standards of participants.
2. Technical dimension. Whether they are true or accurate: how suc-
cessful they are in solving the problems that the network is meant to
address and in finding correct answers to relevant questions.
3. Exchange dimension. Whether they are profitable: how much they
satisfy the interests of all participants and how well they deal with
their concerns.
Actions and decisions must perform reasonably well in each of the three
dimensions simultaneously. A good decision or action should be right,
true or accurate, and profitable.9 An action that is judged normatively
sound but fails to bring about accurate solutions to the problems or profit-
able results to participants would be practically useless. One that has tech-
nically accurate and profitable consequences but violates norms and rules
that are fundamental to any of the participant entities may undermine the
collaborative project.
Moreover, given that these networks bring together people from dif-
ferent institutional settings, each of the three dimensions listed above
necessarily comprises a variety of standards. The norms and values held
by academic and business organizations are obviously different, and all of
them must be taken into account when determining whether a decision or
action was right or wrong. Similarly, to determine whether a given deci-
sion was correct, it is necessary to consider the definitions of truth and
accuracy that prevail in all the participant entities. The same holds for
finding out whether the results of an action were profitable, since universi-
ties and firms have distinct interests and goals, which result in different
views of what a ‘profit’ should be.
Equally, or even more importantly, those dimensions also comprise the
standards created by the network itself. The relevant norms, the nature
of the technical problems to be solved, and the interests and goals of par-
ticipants are defined, shaped and transformed by means of the interaction
itself. To the extent that the interaction crystallizes into a genuinely new
entity and becomes autonomous from its sponsors, the network acquires
its own performance standards.

Functional Performance: Effectiveness, Efficacy, Efficiency

The complexity of knowledge networks further implies that effectiveness,
efficacy and efficiency, the traditional ways to evaluate performance, also
become multifaceted. An agent is usually considered effective to the extent
that it is capable of having the intended or expected effects.10 It is clear that,
according to their own members, the networks analysed here did have the
capacity to produce their intended effect: to transmit, transform and even
create knowledge. However, effectiveness, while important, is too vague. It
does not evaluate that effect by comparing it to a given standard.
Efficacy is more precise. It normally means not only the ‘power or
capacity to produce effects’ in general, but, more specifically, also the
‘power to effect the object intended’.11 This is associated with the capac-
ity for achieving precise, previously stated, goals. But according to the
interviews, problem-solving in knowledge networks involves a complex
process of definition and redefinition of the issues addressed. Therefore
network efficacy has to be evaluated dynamically, considering the resolu-
tion of problems arising during the interaction itself, rather than solely the
achievement of previously set goals.
By itself, efficacy is about achieving goals and says nothing about the
costs of doing so. Efficiency is the notion that attempts to fill this void.
An agent or action is efficient to the extent that it achieves its goals at
the lowest costs (time, technical resources, money, physical effort and so
on). It implies a balance between means and ends, between costs and ben-
efits. This notion is critical for evaluating the performance of knowledge
networks, since it is clear that assessments will vary greatly according
to which combination of criteria and standards is taken into account.
Therefore it requires more detailed attention than the others.
Efficiency is an empirical, not a normative, matter. But a potentially
efficient decision that violates fundamental normative or legal standards
may be unacceptable, unsustainable or counterproductive. Hence a prior
step is to determine whether an action or decision complies with
the relevant norms (values, laws, rules and so on). After this preliminary
probe, its efficiency should be evaluated along the two remaining
dimensions: truth or accuracy, and profitability.
In the technical sense, networks are efficient in so far as they find accu-
rate solutions to the problems they are meant to address at the lowest
possible cost, measured in cognitive, economic, technological and related
terms. For knowledge networks, this usually means whether the expected
technological product was created or whether scientific knowledge was
transformed into economically useful goods. But it also means whether
important problems, not originally considered, were found and defined
in a creative and potentially useful way. Although sophisticated tools may
be used to gather information in this respect, the assessment made by par-
ticipants themselves might be the most important input for analysing this
kind of efficiency.
But, as said above, networks are not only problem-solving devices.
They are also exchange structures. When organizations and individuals
contribute their own resources to the collective effort, they seek actual
returns. Therefore network efficiency must also be assessed in distribu-
tive terms. Does the collaborative effort yield significant benefits to each
participant? Are the products of the interaction distributed in a way that
satisfies even the least favoured partner? Are the costs of the interaction
fairly distributed among participants? Is there no participant that would
be better off without this collaboration? In formal economic terms, is
the network Pareto-efficient – that is to say, is it not possible to organ-
ize the exchange in a way that would produce greater benefits to at least
some of the participants without damaging any of them? Again, although
sophisticated tools may be used to analyse this kind of efficiency, perhaps
the best data come from evaluations made by participants themselves.
The important question in this respect is whether participants perceive an
acceptable proportion between, on the one hand, the efforts made and the
resources invested and, on the other, the results obtained.

Table 12.1 Criteria and dimensions for assessing practical performance

Criteria/       Normative               Technical                 Exchange
dimension
Effectiveness   Ability to produce      Ability to produce        Ability to produce
                normatively sound       technically correct       profitable results
                results                 results
Efficacy        Capacity to achieve     Capacity to solve         Capacity to provide
                normatively sound       specific problems         practical benefits to
                concrete goals                                    participants
Efficiency                              Capacity to solve         Capacity to provide
                                        technical or scientific   tangible benefits to
                                        problems at the lowest    each and all
                                        possible cost             participants

Table 12.1 summarizes the criteria and dimensions that can be com-
bined to assess the practical or functional performance of knowledge
networks. It must be added that such assessment should be comparative.
For instance, the capacity of a network for efficiently solving technical
problems must be compared to the real or potential capacity of other
organizational structures to address the same problems. Would a univer-
sity laboratory, by itself, have been able to solve those problems at a lower
cost? Would the R&D department of the firm have provided the quickest
and least costly solutions? The same types of questions should be asked for
each of the other criteria and dimensions.
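The Pareto criterion invoked above can also be given a compact formal statement. What follows is a minimal sketch, not part of the original study: the benefit function $b_i$ is introduced here purely for illustration. Let $b_i(x)$ denote the net benefit that participant $i$ obtains under a given organization of the exchange $x$. The arrangement $x$ is Pareto-efficient if there is no feasible alternative $y$ such that

$$
b_i(y) \ge b_i(x) \quad \text{for every participant } i,
\qquad \text{and} \qquad
b_j(y) > b_j(x) \quad \text{for at least one participant } j.
$$

That is, no feasible reorganization of the collaboration makes at least one participant better off while leaving all the others at least as well off.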

Organizational Performance

Decisions and instrumental actions are not the only results of networks
that merit attention when evaluating their performance. It is equally
important to observe whether, in making or undertaking them, the
network preserved, or undermined, the opportunities for future collabora-
tive exchanges.
Table 12.2 Organizational performance: standards and conditions for the
operation of integration mechanisms

Mechanism Criteria
Trust ● Production of normative, strategic and technical trust
● Balance among the three kinds of trust
Translation ● Creation of common languages
● Institutionalization of translation
● Training of individual translators
● Diminishing interpretative flexibility
Negotiation ● Reciprocity: respect for the legitimate particular interests
of participants
● Production of rules for future negotiations
● Creation of mechanisms and sites for conflict negotiation
Deliberation ● Equal opportunity to participate in decision-making
● Definition of collective interests, objectives and problems
● Creation of institutions for deliberation

As Table 12.2 suggests, this means, in the first place, asking whether the
operation of the network satisfied the fundamental standards or ideals of
the four integration mechanisms analysed above: trust, translation, nego-
tiation and deliberation. But it also depends on whether it strengthened or
weakened the conditions necessary for the operation of such mechanisms.
Trust can be reinforced by use – but it may also be destroyed when used
improperly. Seemingly efficient solutions may violate some basic prin-
ciples of fairness, discrediting some of the participants or contravening
important values. More subtly, a decision or action may alter the balance
among trust dimensions that is necessary for a network to maintain itself
and operate efficiently in the medium term. It may, for instance, reinforce
strategic trust at the expense of normative or prestige-based trust. Thus
the preservation, creation, or destruction of trust is a crucial criterion for
evaluating network performance.
A further criterion is the degree to which the network created common
languages, institutionalized the function of translation, and trained
individual translators, thereby facilitating future collaboration.
Similarly, the interaction may define collective interests, objectives,
problems and solutions in a way that facilitates future deliberation. It can
also reinforce norms (equality, respect, reciprocity, openness) and institu-
tions that facilitate deliberation; particularly important is the equal oppor-
tunity to participate in decision-making (arguably ‘the most fundamental
condition’ of deliberation).
Decisions may be made according to the principle of reciprocity, which
is central to negotiation: respecting the legitimate particular interests of all
participants, with a fair distribution of the benefits and costs of collabora-
tion. Similarly, the interaction may reinforce existing structures, spaces,
mechanisms and procedures for conflict management and interest aggre-
gation – or it may undermine them. Especially important is the creation
of sites (meetings, boards, committees) where conflicts can be negotiated
or arbitrated.

Table 12.3 Organizational performance: network dynamics

Dimensions    Criteria
Autonomy      ● Self-regulation capacities (organizational autonomy)
              ● Self-selection capacities (individual autonomy)
Network       ● Stabilization of the network
development   ● Organizational learning
              ● Creation of new networks
Learning      ● Individual acquisition of organizational skills and knowledge

Practical and organizational results are obviously important. But, in
some cases, the best result of collaboration may be the creation and pres-
ervation of the network itself as a space for bringing together knowledge
from different settings, interests from several organizations, problems and
potential solutions useful to a variety of actors.
To assess this, one must observe the dynamics of networks (Table 12.3).
According to actor-network theory, these dynamics embrace three
main phases: emergence, development and stabilization. New networks
emerge out of already existing ones, by either subtle changes or revolution-
ary breakthroughs. A network can evolve into a more convergent or more
divergent structure. When coordination is stronger and different elements
are better aligned, the network becomes more stable and predictable. In
other words, stabilization, or closure, means that interpretive flexibility
diminishes. When its diverse elements are more tightly interrelated, the
network becomes more complex and stable, because to disconnect an
actor from a network, many other connections have to be undone (Stalder,
1997).
Finally, by participating in knowledge networks, individuals and organ-
izations may acquire the skills and knowledge necessary for future col-
laboration: how to interact with people from other academic or business
entities, how to communicate with and learn from them, how to distin-
guish those that are trustworthy from those that are not, how to encour-
age them to trust you, how to negotiate and deliberate with them, how to
collectively make decisions that are normatively right, technically correct
and profitable. But participation may also be a frustrating experience. It
may show you how difficult it is to communicate with people from differ-
ent entities, how unreliable they are, how complicated it is to negotiate
with them, how futile are the efforts to rationally exchange arguments
with them. As with the previous dimensions and criteria of performance,
interviews with participants can be the best means to evaluate this learning
dimension of networks.

CONCLUSION

The proper functioning of knowledge networks requires the concurrence
of four integration mechanisms: trust, translation, negotiation and delib-
eration. Yet, within certain limits, there is an inverse relationship between
two pairs of them: between translation and trust, and between negotiation
and deliberation. When there is an optimal combination of normative,
strategic and technical trust, translation becomes less necessary; the suc-
cessful practice of deliberation makes negotiation easier and less salient
as a decision-making mechanism. Thus well-functioning knowledge net-
works should show the following distinctive characteristics: strong and
well-balanced trust that facilitates communication among participants, a
moderate need for translation, an intense practice of deliberation, and a
moderate use of negotiation.
A well-functioning network, with a proper combination of the four inte-
gration mechanisms, can be expected to have good practical or functional
performance – bringing about actions and decisions that are normatively
sound, technically true or correct, and economically profitable. It can
also be expected to have high organizational performance – preserving
and strengthening the conditions for future collaboration.
In making effective, efficacious and efficient decisions, well-performing
networks should facilitate translation, create or preserve trust, and rein-
force the institutional and personal basis for negotiation and deliberation.
Similarly, they should become more autonomous and stable and give their
members greater opportunities for acquiring the knowledge and skills that
this complex form of collaboration requires.

NOTES

1. The interviews were collectively designed and conducted by participants in a research
project on knowledge networks (Luna, 2003).
2. For a further analysis of these findings, see Luna and Velasco (2005).
3. For a broader analysis of this topic, see Luna and Velasco (2003).
4. For further analysis on the relationship between trust and translation, see Luna and
Velasco (2006).
5. Half of the interviewees spontaneously said that these decisions were collectively made.
The other participants gave diverging replies when asked about the source of such deci-
sions, even when they were referring to the same collaborative project.
6. That is, agreements about which there is no expressed opposition by any participant, or
agreements that result from the sum of differences.
7. According to Dryzek (2000, p. 134), ‘the most appropriate available institutional
expression of a dispersed capacity to engage in deliberation . . . is the network’.
8. On this topic, see Magnette (2003a, 2003b), Eberlein and Kerwer (2002), and Smismans
(2000).
9. According to Weber (2005, p. 51), ‘The efficiency of the solution of material problems
depends on the participation of those concerned, on openness to criticism, on horizon-
tal structures of interaction and on democratic procedures for implementation.’
10. ‘Organizational effectiveness is a prerequisite for the organization to accomplish its
goals. Specifically, we define organizational effectiveness as the extent to which an
organization is able to fulfill its goals’ (Lusthaus et al., 2002, p. 109).
11. Oxford English Dictionary Online, 2006.

REFERENCES

Bell, Daniel A. (1999), 'Democratic deliberation: the problem of implementation',
in S. Macedo (ed.), Deliberative Politics: Essays on Democracy and Disagreement,
New York and Oxford: Oxford University Press, pp. 70–87.
Bohman, J. and W. Rehg (eds) (1999), Deliberative Democracy: Essays on Reason
and Politics, Cambridge, MA: MIT Press.
Burt, R.S. (1992), Structural Holes, Cambridge, MA: Harvard University Press.
Burt, R.S. (2000), 'The contingent value of social capital', in E.L. Lesser (ed.),
Knowledge and Social Capital. Foundations and Applications, Boston, MA:
Butterworth/Heinemann, pp. 255–86.
Callon, M. and J. Law (1997), ‘L’Irruption des Non-Humaines dans les sciences
humaines: quelques leçons tirées de la sociologie des sciences et des techniques’,
in J.-P. Dupy, P. Livet and B. Reynaud (eds), Les limites de la rationalité. Tome
2. Les figures du collectif, Paris: La Découverte, pp. 99–118.
Dasgupta, P. (1988), ‘Trust as a commodity’, in D. Gambetta (ed.), Trust: Making
and Breaking Cooperative Relations, Oxford: Basil Blackwell.
Dryzek, J.S. (2000), Deliberative Democracy and Beyond: Liberals, Critics,
Contestations. Oxford: Oxford University Press, pp. 49–72.
Eberlein, B. and D. Kerwer (2002), ‘Theorising the new modes of European Union
governance’, European Integration on Line Papers (EIoP), 6 (5).
Eising, R. and B. Kohler-Koch (1999), 'Introduction: network governance in the
European Union’, in B. Kohler-Koch and R. Eising (eds), The Transformation
of Governance in the European Union, London: Routledge, pp. 3–12.
Elster, J. (1999), ‘The market and the forum: three varieties of political theory’, in
J. Bohman and W. Rehg (eds), Deliberative Democracy: Essays on Reason and
Politics, Cambridge, MA: MIT Press, pp. 3–34.
Granovetter, M.S. (1973), ‘The strength of weak ties’, American Journal of
Sociology, 78 (6), 1360−80.
Gutmann, A. and D.F. Thompson (1996), Democracy and Disagreement,
Cambridge, MA: Belknap Press of Harvard University Press.
Hage, J. and C. Alter (1997), ‘A typology of interorganisational relations and
networks’, in J.R. Hollinsworth and R. Boyer (eds), Contemporary Capitalism.
The Embeddedness of Institutions, Cambridge: Cambridge University Press, pp.
94–126.
Jachtenfuchs, M. (2006), ‘The EU as a polity (II)’, in K.E. Jorgensen, M. Pollack
and B.J. Rosamond (eds), Handbook of European Union Politics, London: Sage,
pp. 159–74.
Lane, C. (1998), ‘Introduction: theories and issues in the study of trust’, in
C. Lane and R. Bachman (eds), Trust within and between Organizations:
Conceptual Issues and Empirical Applications, Oxford: Oxford University
Press, pp. 1–30.
Latour, B. (1993), We Have Never Been Modern, Cambridge, MA: Harvard
University Press.
Lewicki, R.J. et al. (2004), Essentials of Negotiation, 3rd edn, Boston, MA:
McGraw-Hill/Irwin.
Leydesdorff, L. (1997), ‘The new communication regime of university–industry–
government relations’, in H. Etzkowitz and L. Leydesdorff (eds), Universities
and the Global Knowledge Economy, London: Pinter, pp. 106–17.
Leydesdorff, L. (2001), A Sociological Theory of Communication: The Self-
Organization of the Knowledge Based Society, Parkland, FL: Universal
Publishers.
Luna, M. (ed.) (2003), Itinerarios del conocimiento: formas, dinámicas y contenido.
Un enfoque de redes, Barcelona: Anthropos/IIS-UNAM.
Luna, M. and J.L. Velasco (2003), ‘Bridging the gap between business firms and
academic institutions: the role of translators’, Industry and Higher Education, 17
(5), 313−23.
Luna, M. and J.L. Velasco (2005), ‘Confianza y desempeño en las redes sociales’,
Revista Mexicana de Sociología, 67 (1), 127−62.
Luna, M. and J.L. Velasco (2006), ‘Redes de conocimiento: principios de coor-
dinación y mecanismos de integración’, in M. Albornoz and C. Alfaraz
(eds), Redes de conocimiento: construcción, dinámica y gestión, Buenos Aires:
UNESCO, pp. 13–38.
Lundvall, B.-Å. (2000), ‘Understanding the role of education in the learn-
ing economy. The contribution of economics’, in OECD (ed.), Knowledge
Management in the Learning Society. Education and Skills, Paris: OECD, pp.
11–35.
Lusthaus, C. et al. (2002), Organizational Assessment: A Framework for Improving
Performance, Washington, DC and Ottawa: Inter-American Development Bank
and International Development Research Centre.
Magnette, P. (2003a), ‘European governance and civic participation: beyond elitist
citizenship?’, Political Studies, 51 (1), 1−17.
Magnette, P. (2003b), ‘In the name of simplification. Constitutional rhetoric in the
convention on the future of Europe', Working Paper, Brussels: Institut d'études
européennes, Université Libre de Bruxelles.
Martinelli, A. (2002), ‘Markets, governments, communities, and global govern-
ance’, paper presented at the XV World Congress of Sociology, Brisbane,
Australia, 7−13 July.
Messner, D. (1999), ‘Del estado céntrico a la sociedad de redes. Nuevas exigencias
a la coordinación social’, in N. Lechner, R. Millán and F. Valdés (eds), Reforma
del estado y coordinación social, Mexico City: Plaza y Valdés, pp. 77–121.
Oléron, P. (1983), L’argumentation, Paris: PUF.
Sable, C. (1993), ‘Studied trust: building new forms of cooperation in a volatile
economy’, Human Relations, 46 (9), 1133−70.
Smismans, S. (2000), ‘The European Economic and Social Committee: towards
deliberative democracy via a functional assembly’, European Integration on Line
Papers, 4 (12).
Stalder, F. (1997), ‘Latour and actor-network theory’, paper available online at
http://amsterdam.nettime.org/Lists-Archives/nettime-l-9709/msg00012.html.
Steward, F. and S. Conway (1996), ‘Informal networks in the origination of suc-
cessful innovations’, in R. Coombs et al. (eds), Technological Collaboration:
The Dynamics of Cooperation in Industrial Innovation, Cheltenham, UK and
Brookfield, MA, USA: Edward Elgar, pp. 201–21.
Streeck, W. and P.C. Schmitter (1992), ‘Comunidad, mercado, Estado ¿y las aso-
ciaciones? La contribución esperada del gobierno de intereses al orden social’, in
R. Ocampo (ed.), Teoría del neocorporativismo: Ensayos de Philippe Schmitter,
Mexico City: UIA/ Universidad de Guadalajara, pp. 297–334.
Valente, T. (1995), Network Models of the Diffusion of Innovation, Cresskill, NJ:
Hampton Press.
Walker, G., B. Kogut and W.-J. Shan (1997), ‘Social capital, structural holes and
the formation of an industry network’, Organization Science, 8 (2), 109−25.
Warren, M.E. (1996) ‘Deliberative democracy and authority’, American Political
Science Review, 90 (1), 46−60.
Weale, A. (2000), ‘Government by committee. Three principles of evaluation’, in
T. Christiansen and E. Kirchner (eds), Committee Governance in the European
Union, Manchester: Manchester University Press, pp. 161–70.
Weber, S. (2005), ‘Network evaluation as a complex learning process’, Journal of
MultiDisciplinary Evaluation, 2, 39−73.
Index
Abernathy, W. 300 Ainsworth, S. 244
Abramson, N. 294, 297 Alcoa 32
academic and industrial researchers, Allen, R.C. 170, 171, 173
different approaches of 53–4 allocative efficiency theory 125
academic life see polyvalent Alter, C. 315
knowledge and threats to Amable, B. 227
academic life Amadae, S.M. 264
academic software and databases analytical ontic knowledge 33–42
181–4 cognitive mode of 39–42
academic–business research descriptive 33–4
collaborations (and) 74–97 explanatory 34–9
organizational ecology of innovation see also Appert, Nicholas; Pasteur,
88–92 Louis
productive shocks and problematic Anderson, J.R. 40
tensions 77–81 Anderson, P. 300
social interactions and collaborative anti-commons 181
research 82–7 effects 80, 122
university patenting for technology tragedy of 18, 20, 134–5
transfer 74–7 anti-cyclic triple helix see capitalizing
see also innovation; legislation knowledge and the triple helix;
academy–industry relations 2–3, 21, polyvalent knowledge and threats
25, 46–60 to academic life; publishing and
effects of different deontic patenting; triple helix in economic
knowledge on 46 cycles; triple helix: laboratory of
incentives for 32 innovation
obstacles to 45 Anton, J.J. 171
transformation of 204–5 Antonelli, Cristiano 16, 99
see also background knowledge; Aoki, M. 226–7
norms Appert, Nicholas 36–9, 41, 43, 62
acronyms and food preservation 36–7, 38–9
proprietary, local, authoritarian, appropriability, impact of 36
commissioned and expert appropriability, opportunities and
(PLACE) 47, 52 rates of innovation (and) 121–42
strategic, hybrid, innovative, public blurred relations between
and scepticism (SHIPS) 47, appropriability and innovation
52 rates 130–136
ACT-R (Adaptive Control of failures of ‘market failure’
Thought–Rational) networks 40 arguments 123–8
agro-food 144, 151–3 growth in patenting rates and (mis)
Agrawal, A. 5, 6, 46 uses of patent protection
ahead-of-the-curve science 146 128–30


opportunities, capabilities and greed Biopolis (Singapore) R&D hub 160


136–9 bioprocessing 160
Arora, A. 136 bioregional clusters 157
Arrow, K.J. 88, 99, 121, 169, 265, biosciences
266–7 globalized 152–5
Art of Computer Programming 168 and knowledge networks 153, 157–8
Arundel, A. 68 new research institutes for 160
Ashby, W.R. 307 vehicles for implantation of 153–5
AT&T 32 Biosciences Policy Institute 252
Aventis 151 see also Life Sciences Network (LSN)
Avnimelech, G. 113 Biosystems Genomics, Centre for 153
Ayer, A.J. 264 biotechnology 41, 147–9, 152–3
Azoulay, P. 6, 7 and cluster theory 144–6
clustering 149–50
background knowledge (and) 45, collaborative aspect of 148
47–60 and dedicated biotechnological firms
groups 53–4 (DBFs) 19, 144–6, 153
shared vs unshared 64–5 in Germany 19 see also Germany
‘teamthink’ theory 53 globalization of 150–152
see also cognitive rules; language; initiatives (Cambridge University) 2
norms; research; studies (on) and nanabiotechnology 160
Bagchi-Sen, S. 143, 152, 155 networked hierarchy in 155–61
Balconi, M. 41, 64, 75 sociocultural impact of (in New
Baltimore, David 148 Zealand) 247–8
banks and polyarchic decision-making see also cluster theory; Schumpeter,
101–2 J.A.
Bar Hillel, M. 57 biotechnology firms
Barnet, G. 206 Cambridge (UK) 148
Barré, R. 223, 233 Perkin-Elmer (USA) 149
Barsalou, L.W. 40, 44 Birdzell, L.E. 34, 35, 36, 61
Barton, J. 135 Blau, P.M. 295
Bas, T. 146, 148, 159 Bloch, C. 115
Bathelt, H. 293 Blumenthal, D. 4, 206, 208, 209
Baumgartner, F. 244 Bohman, J. 323
Bayer 151 Boltanski L. 219, 227
Bazdarich, M. 151 Borras, S. 234
Becker, G.S. 273 Boulding, K.E. 266
Beffa, J.-L. 234 boundaries 244–5
Bell, Daniel A. 324 mental 245
Bell Labs 125 setting 247
Bernstein, R.J. 264 social 245
Bessen, J. 135 symbolic 245
Beston, A. 243 boundary organizations: their role in
Beth, E. 44 maintaining separation in the
big pharma 153–4, 155, 156 triple helix (and) 243–60
‘big science’ projects 63 boundaries 244–5
Bilderbeek, R. 305 boundary work 245–7
biopharma clusters 19 Life Sciences Network (LSN)
biopharmaceuticals 18, 143–8, 151–3,
250 research project 247–8

boundary work 245–7 Centennial Exhibition, Philadelphia


activities 246 (1876) 134
and demarcation 247 Chandlerian model of innovation 116
Bowie, J. 143 Chase Econometrics 183
Bozkaya, A. 114 chemicals complex, Grangemouth,
Braczyk, H.-J. 291 Scotland 152–3
Braine, M.D.S. 44 Cheng, P.W. 44
Branciard, A. 233 Chesbrough, H. 113, 153
Bratko, I. 306 Chiappello, E. 227
Brenner, Sidney 160 Childhood and Neglected Diseases,
Breschi, S. 8 Institute for (ICND) 155
British Petroleum (BP) 153 Clark, H.H. 50, 51, 52, 65
Broesterhuizen, E. 47 Clark, K.B. 300
Buck, P. 272 Cleeremans, A. 45
Burt, R.S. 313, 318 cluster penetration for open innovation
Bush, V. 276 154
cluster theory 144–6
Calderini, M. 8 cluster-building 19, 147
Callon, M. 226, 227, 314 Coase, R. 105, 123
Cal Tech 208 and Coase Theorem 76
Cambridge (UK) 146, 148, 150, 151 Cobbenhagen, J. 297
Cambridge University 2, 159 Coenen, L. 151, 153
Cambridge–Boston 159, 162 cognitive rules (and) 54–60, 64
and global bioscientific network application of 45
hierarchy 158 decision-making 58–60
Camerer, C. 68 differences in 66
Campart, S. 115 intuitive and analytical thinking
Campbell, E.G. 4 54–5
Canada problem-solving 55–6
biotechnology in 150–151 reasoning 56–8
bioregional clusters in 157 cognitive style, shared vs unshared
capability clusters 139 65–7
capability-based theory of the firm Cohen, J. 67
138 Cohen, W. 18, 132–3, 136, 174, 306
Capitalism, Socialism and Democracy Colbertist model 223
Collins, A.M. 40, 44
capitalism, varieties of 218 Collins, S. 174, 250, 253, 255–6
capitalizing knowledge and the triple Colman, Alan 160
helix 14–25 communication 53–4
see also triple helix in economic and information 276–7
cycles; triple helix: laboratory linguistic 64–5
of innovation non-verbal 51–2
Carlsson, B. 143, 291 theory 269–70
Carnegie, Andrew 31 Complex Adaptive Systems (CAD) 25
Carruthers, P. 39 Computer Science, National Institute
Carson, C.S. 272 of (INRIA) 233
Carter, A.P. 291 Conway, S. 318
Casper, S. 146, 148, 150, 232 Cook-Deegan, R. 90
causal reasoning 44, 57–8 Cooke, P. 18, 143, 145, 159, 292, 293
Celera 182 Cookson, J.E. 272

copyleft license see Generalized Public and SUN, Silicon Graphics and
License (GPL) Cisco 11
copyright 17, 32 definition of
see also patents/patenting knowledge as production and
cotton research centre (Raleigh– distribution 268
Durham) 152 territorial economy 293
Cowan, R. 64, 291 Denison, E.F. 272
Craig Venter 149 DiMinin, A. 7
creative destruction 101, 146–7 dogmatism 48, 56–7
Crespi, G.A. 75, 76 Dolfsma, W. 24, 295, 302
Crystal Palace Exhibition, London Dosi, Giovanni 17, 127, 136, 137, 138
(1951) 133–4 dot.com bubble 147
Dow 151
da Vinci, Leonardo 31 Dryzek, J.S. 323
Dahlman, C.J. 105 dual careers 21
Dalle, J.M. 167 Duchin, F. 277
Dasgupta, P. 4, 172, 292, 316 due diligence 85
Data Resources, Inc 183 DuPont 31
Davenport, Sally 23
David, P.A. 4, 5, 126, 170, 172, 182, Eastman Kodak 31
191, 291, 292 Economic Co-operation and
De la Mothe, J. 143 Development, Organisation for
De Liso, N. 110 (OECD) 2, 144, 218, 219, 228,
de Solla Price, Derek J. 277 230, 234, 262, 291, 295, 296, 297,
Dealers, National Association of 113 305, 306
Deane, P. 272 STI Scoreboards of 306
Dearborn, D.C. 56 econometric software packages (case
Debreu, Gérard 265 study) 184–9
decision-making (and) 58–60, 78, 101, EasyReg 186–7
264, 277, 312, 315, 329, 331 price discrimination in 189
centralization of 223 protection of 185
collective 320–325 statistics for 186
bargaining/negotiation 320, 321–2 education 21, 23, 263, 273–6, 279
deliberation 320, 322–5 categories/sources of 267
voting 320 economics of 262
hierarchical 100–101, 103 higher 225, 230, 239
irrational escalation 59 productivity of 275
polyarchic 100–101 Eisenberg, R.S. 122, 135, 180
rational 277 Eising, R. 314
regret and loss aversion 58–9 Elster, J. 59, 320, 322
risk 58 Endres, A.M. 272
temporal discounting/‘myopia’ entrepreneurial science 201–2, 207
59–60 origins of 206
dedicated biotechnological firms entrepreneurial scientists (and) 201–17
(DBFs) 19, 144–8, 150, 152–3, academic entrepreneurial culture
155, 160, 162 212–14
exploitation-intensive 156 academic world 201–2
financing of 156 forming companies 202–5
Defense Research Projects Agency industrial penumbra of the
(DARPA) 11 university 209–11

normative impetus to firm Finegold, D. 155, 160


foundation 211–12 First Industrial Revolution 31, 34, 35,
research funding difficulties 207–12 40, 41, 43
and transformation of industry– Fischhoff, B. 58
university relations 204–5 Fisher, D. 250
Epicyte 144–6 Fondazione Rosselli 66
patent-holder: Scripps Research Foray, D. 171, 173, 232, 261, 262, 291,
Institute 145 292
and Plantibodies: patents for DNA Fourquet, F. 272
antibody sequences 144, 145 France/French 228
Ernst & Young 231 Grands corps de l’Etat 224
Espiner, G. 248 Grandes écoles 224
Etzkowitz, Henry 3, 6, 13, 21, 60, 201, and Grenoble innovation complex
203, 207, 210, 215, 220, 226 146, 234
European Commission 78, 183, 234, RDI policies (and) 219, 232–4
291, 305 Grenoble technological district
funding of SESI project 239 234
European Environmental Agency 247 Guillaume Report 232–3
European Framework Programmes 2 and Sophia Antipolis 2
European NACE codes 295 and state as entrepreneur system
European Patent Office (EPO) 75–6 232–4
European Research Area (ERA) 77, 78 Francis, J. 146, 147, 148, 150
European Summit (Lisbon, 2000) 291 Franzoni, C. 8
European Union (EU) 218, 219, 239 Freeman, C. 146, 280, 292
and admin system of European Friends of the Earth 250, 251
universities 77 Fritsch, M. 292, 301, 306
EU-15 297 Fuchs, G. 143
knowledge ecology of 89
legislation (Bayh–Dole-inspired) 79 Gallini, N. 132
member states 74, 296 Gambardella, Alfonso 20, 21
publications on measurement of GE Plastics 153
knowledge 280 Geiger, R. 203
universities 91 Geiryn, T. 246, 247
Eurostat 291, 296, 303 Gene Technology Information Trust
Evans, R. 247 see Genepool; Life Sciences
externalities, lack of 123–4 Network (LSN)
Eymard-Duvernay, F. 219 General Electric 32
Generalized Public License (GPL) 21,
Fabrizio, K.R. 7 168–9, 183, 186, 188, 197–8
Faulkner, W. 205 as coordination device 175–7
feedback lesser (LGPL) 190–191
positive processes of 90–91, 115 and nature and consequences of
from reflective overlay 24 GPL coordination 177–8
from reflexive overlay 293 GenePool 248, 253
Feldman, M. 146, 147, 148, 150 Genetic Modification, Royal
Feldman, Stuart 15, 79 Commission into (RCGM) 244,
finance and innovation (and) 99–104 250–252, 258
Type 1 and/or Type 2 errors 100– genetic modification (GM) 248–58
101, 104 and anti-GM activists 244
see also innovation role in New Zealand 253–4

see also Life Sciences Network Grossman, M. 45


(LSN) growth accounting 23, 263, 271–2,
genetically modified organism (GMO) 278–9
23, 251 Guillaume, H. 232
genomics 62, 154, 160, 233 Guston, D. 243, 245, 258
Centre for Biosystems Genomics 153 Gutmann, A. 323
functional 155
Genomics Institute of Novartis Hage, J. 315
Research Foundation (GNF) Hall, B. 128, 139, 134
155 Hall, B.H. 20, 21, 100, 164, 171, 174
Joint Center for Structural Hall, P. 148, 218, 227
Genomics (JCSG) 155 Hansen, W.L. 273
Gentner, D. 45 Harhoff, D. 171, 173
Georghiou, K. 229, 230 Harvard Medical School (HMS) 159,
Germany/German 2, 32, 102, 152, 162
219–20, 228, 297 Harvard University 1, 148, 202, 206
BioM 150 Haseltine, Walter 148
BioRegio programme in 150 Hawkes, J. 250, 255, 256
biotechnology in 19, 149 Hayek, F.A. 263, 265–6
Federal Ministry of Education and Health, National Institutes of (NIH)
Research 231, 232 211
ICT sector 232 research funding 156–7
professional networks/university Heims, S.J. 269
entrepreneurship 230–232 Heller, M.A. 122, 135, 180
RDI policies/regime 219, 228, 231–2 Hellstrom, T. 243
revolution in organic chemistry in 32 Henderson, R. 5, 6, 46, 149
university model 21 Hendry’s model development
Geuna, A. 68, 174 methodology 186
Gibbons, M. 4, 225 Heracleous, L. 243
Giere, R.N. 39 Hernes, T. 243, 244–5
Gieryn, T. 215 Hertzfeld, H.R. 174
Gilbert, Walter 148 Hewlett, Bill 210
Glaxo 143 Hewlett-Packard 228
global bioregions: knowledge domains, Hiatt and Hine 145
capabilities and innovation system Hicks, John 265
networks 143–66 Hickson, D.J. 295
‘Globalization 2’ 161, 162 higher education institutions (HEIs)
globalization in biosciences see 82, 83–4, 242
biosciences Hilaire-Perez, L. 171, 173
Godin, B. 23, 262, 268, 274, 277, 280, Hinsz, V. 53
291, 306 Hodgson, G.M. 98
Goffman, E. 50 Holyoak, K.J. 44
Gompers, P. 111, 113 Holyoke, T. 244
Grandstrand, O. 126 Hoppit, J. 272
Granovetter, M.S. 313, 318 Hortsmann, I. 130
Greenpeace 250, 251 Hounshell, D.A. 267, 270
Gregersen, B. 148 human genome decoding/Celera 147–8,
GREMI group 150 149
Grice, H.P. 54 human genome mapping 63
Griliches, Z. 272, 275 Hurd, J. 151

Hussinger, K. 111 and royalty stacking 80


hybrid organizations 22, 32, 41–2, 60, weak regime of 126
64, 66–7, 98, 226, 243 intellectual property rights (IPR)
15–18, 22, 32, 80, 121–3
Incyte 182 capitalization of knowledge through
information and communication 22
technology (ICT) 5, 41, 98, 125–6, and corporate strategies on legal
147–9, 151, 229, 232, 233, 261, claim of 129
264 enforcement strategies 127
core technologies of 18 fragmentation of 183
emergence of 125–6 influence on knowledge 130–131
revolution 64, 116 and innovation 18
imagery 9 and joint R&D ventures 86
incentives for knowledge production knowledge-intensive 111–13
with many producers 169–72 -leads-to-profitability model 138
and granting of IPR 170 legal protection of 123
intellectual curiosity 170 markets for technologies/rates of
open science regime 170–171 innovation/diffusion 136
and system for ‘collective invention’ and modes of appropriation 121
171 and new technologies 126–7
Industry and Trade 88 as obstacle to research/innovation
initial public offerings (IPOs) 12, 112, 18, 80
113–14 over-the-county (OTC) offerings of
innovation(s) 1–3, 9–20, 22, 31–9, 112
42–4, 61–2, 67–8, 88–92 protection and income distribution
bundling finance and competence 122–3
with 111 protection and innovation 135–7
and finance 99–104 regimes
and imitation, model of 127 changes in 132
and innovative activity 88 lack of effects of different 133
national systems of 291 strengthening of 137–8
oilseed rape 152 -related incentives 139
patent 5, 125 and technology transfer officers
policy response 88–9 (TTOs) 21
imitation of patent-protected 131 university access to 173–4
opportunities for 137–8 see also markets
protection of 132–3 interaction and discussion between
profiting from 132 academics and business 19
rates of 122–3 investment on research and innovative
and RDI policies 218–42 outcomes 134
see also open innovation Isabelle, M. 170
innovative opportunities 125, 127–8, IT industry 228
Itai, S. 244
intellectual property (IP) 4 Italy 75
costs of protecting 134–5 patenting and publishing study 8–9
lawyers 135 and Piedmont regional government 2
monopolies 81, 126 Politecnico of Turin 2
ownership of 77
protection 79–80, 124–6 Jachtenfuchs, M. 320, 324
and appropriability 124–5 Jackendoff, R. 44

Jackson, S. 53 management 155


Jacob, M. 243 measurement of 279–80
Jaffe, A. 128, 133, 137 ontic (declarative) 39–40, 42, 64
Jakulin, A. 306 operational elements of 23
Janus scientist(s) 14, 22, 25, 32 operationalization of 277
and proximity 60–68 problem-solving (of firms) 138
Jensen, R. 5 production and control 292
job losses, UCLA Anderson Business productive 125
School report on 151 public domain (PD) 173, 174
Jobert, B. 235 public-good in private-good patent
Johannisson, K. 279 124
Johnson, B. 227 spillover 155
Johnson, J. 146 tacit 64
Johnson-Laird, P. 39, 44, 57 transfer processes 91
Jorgenson, D.W. 272 see also analytical ontic knowledge;
venture capitalism
Kaghan, W. 206 Knowledge: its Creation, Distribution
Kahneman, D. 45, 54–5, 57, 58 and Economic Significance 278
Kaiser, R. 146, 149, 150 knowledge-based economy 291
Karamanos, A. 146, 148, 150 knowledge capitalization 65
Karolinska University/Institute 11, 159 and norms 43
Kauffman, S. 38 software research 20
Kaufmann, D. 146, 149 knowledge economy 261–90
Kay, L.E. 269 flow of ideas (Appendix 1) 286–8
Kelly, S. 243, 244 indicators on knowledge-based
Kendrick, J.W. 272 economy (Appendix 2) 289–90
Khalil, E.L. 294 and knowledge measurement
King, R.G. 102 270–275
Kista science park (Stockholm) 151 see also Machlup, Fritz
Klevorick, A. 136, 137 knowledge networks: integration
knowledge mechanisms and performance
analytical ontic 33–42 assessment 312–34
base of an economy 24 as complex systems 313–16
as coordination mechanism 24, 292 trust and translation 316–20
creation of 268 negotiation and deliberation 320–325
defining 24 network performance 325–31
defining as production and see also decision-making; networks;
distribution 268–9 trust
deontic (procedural) 39–40, 42, 46 knowledge production, (Gibbon’s)
different deontic 46 Mode 2 of 4, 225
as economic good 99 knowledge production see incentives
elements of: education, R&D, for knowledge production with
communication, information many producers
267–8 knowledge transfer 14–15, 19, 43, 53,
fountain-heads 146 64–5, 67, 91, 153, 157
generation 124 epistemological and cognitive
governance 98–120 constraints in 32–42
and identification with information and analytical ontic knowledge
125 33–42
and IPR regimes 130–131 global 171

  and hybrid organizations 42
  technology 79
knowledge-based development 10
knowledge-driven capitalization of knowledge 16, 19–20, 31–73
  see also academy–industry relations; background knowledge; knowledge transfer; 'nudge' suggestions to triple helix
knowledgeable clusters 161, 162
Knudsen, C. 265
Knuth, Donald 168
Kohler-Koch, B. 314
Koput, K. 143
Kortum, S. 128, 129, 130
Kosslyn, S.M. 45
Krimsky, S. 206, 213
Kuan, J. 181
Kuhn, T. 54
Kuznets, S. 38, 279

Laafia, I. 291, 295, 297
Lakatos, I. 5
Lam, A. 225, 227, 230
Lambert Review 82, 86, 87
Lamberton, D. 266
Lamont, M. 219, 244, 245, 247
Lanciano-Morandat, C. 22, 227, 234
Landes, D. 121
Lane, C. 316
Lane, D.A. 99
Langer, E. 58
Langlois, R.N. 262–3
language (and) 32–3, 50–60, 314, 317–19, 329
  command 168, 184–5
  co-ordination difficulties 52
  mathematical 33, 44
  natural/formal 39–40
  non-verbal communication 51–2
  technology transfer (TT) agents 52
Larédo, P. 220, 232
Latour, B. 212, 314
Law, J. 314
Lawton Smith, H. 143, 152, 155
learning 110, 139, 178
  collective 233
  -curve advantages 132
  by doing 265
  implicit 45
  interactive 68, 109
  and rules 45
Leder and Stewart patent 18, 131
Lederberg, Joshua (Nobel Laureate) 213
Lee, C. 145
Lee, Y.S. 170
Leech, B. 244
legislation
  and antitrust litigation 125
  Bayh–Dole Act (USA) 74, 76, 78, 79
  Danish Law on University Patenting (2000) 79
  Employment Retirement Income Security Act 113
  European Directive on patenting of software in Europe 182–3
Leitch, Shirley 23
Lengyel, B. 306
Leontief, W. 277
Lerner, J. 111, 113, 127, 128, 129, 130, 133, 175, 178, 190
Lesson, K. 278
Levin, R. 18, 132, 136
Levin, S.G. 4
Levine, R. 102
Levinthal, D.A. 178, 306
Lewicki, R.J. 321
Leydesdorff, L. 24, 220, 206, 243, 292, 293, 294, 301, 306, 307, 314, 318
Life Sciences Network (LSN) 23, 244, 248–58
  campaign strategy for RCGM 251–2
  Constitution of 248
  evolution into Biosciences Policy Institute 252–3
  and Genepool 248
  membership of 250
  objectives of 249
Linnaeus' natural taxonomy 62
Liu, Edison 160
loss aversion 58–9
  and irrational escalation 59
Luhmann, N. 307
Luna, Matilde 25
Lundvall, B.-Å. 146, 227, 234, 291, 292, 318
McGill, W.J. 294
Machlup, Fritz (and) 23–4, 126, 261–80, 292
  Information through the Printed World 278
  measuring knowledge/national accounting 270–275
  policy issues 275
  The Production and Distribution of Knowledge 262, 278
  sources of insight 279–80
  see also Machlup's construction
Machlup's construction 263–70
  and communication theory 269–70
  defining knowledge 264–7
  'operationalizing' knowledge 267–70
McKelvey, M. 143
Mackie, J. 57
Maillat, D. 150
Mansfield, E. 5, 131
Manz, C.C. 53
March, J.G. 60, 178
markets 104–17
  classification of 106–7
  as devices for reducing transaction costs 105
  as economic problem 104–6
  functions of 107–10
  as coordination mechanisms 108–9
  as risk management mechanisms 109–10
  as selection and incentive mechanisms 108
  as signaling mechanisms 108
  and market failure arguments 123–8
  and neoclassical theory of exchange 105–6
  public capital – focused on IPOs 113–14
  as social institutions 105
  for technologies and IPR 136
  trading knowledge-intensive property rights in 112–13
Marengo, Luigi 17
Markman, A.B. 45
Markusen, A. 148
Marschak, J. 266, 269
Marshall, Alfred 88
Martinelli, A. 314
Marx, K. 121
Maskin, E. 135
Matkin, G. 204
Maurer, S.M. 167, 182, 184, 188
Maurice, M. 219
Maxfield, R.R. 99
measuring knowledge base of an economy in terms of triple-helix relations (and) 291–311
  combination of theoretical perspectives 293–4
  methods and data 294–8
  data 294–5
  knowledge intensity and high tech 295–6
  regional differences 297
  methodology 297–8
  regional contributions to knowledge base of Dutch economy 301–2
  results 298–300
  descriptive statistics 298–9
  mutual information 299–300
  sectoral decomposition 303–6
Mendeleev's periodic table of elements 62
Menard, C. 98, 106
Merges, R. 129, 131
Merton, R. 42, 47, 170, 221
Messner, D. 319, 321
Metcalfe, J.S. 15, 88
Meyer-Krahmer, F. 227, 231, 232
Miles, I. 305
Miller, C. 243, 247
Miller's magical number 40
Mincer, J. 273
Mirowski, P. 292
Mirowsky, P. 269
MIT (Massachusetts Institute of Technology) 6, 158–9, 171, 202, 206, 210–211
MIT–Stanford model 21
Mitroff, I.I. 47
modus tollens 45
Mokyr, J. 31, 38, 62
Molnar, V. 244, 245, 247
Monsanto 151, 248, 253
Moore, G.E. 42
Moore, K. 243, 246–7, 255, 256
Moser, P. 134
Motion, J. 248
Mowery, D. 32, 34, 62, 74, 129
multi-level perspectives see research and development (R&D): national policies
Mustar, P. 220, 232
Mykkanen, J. 272

NACE categories 295
NASDAQ 17, 146
  regulation (1984) and 'Alternative 2' 129
  and venture capitalism 110–116
national accounting 24, 261, 262–3, 272–3, 275, 278–9
  and System of National Accounts 262, 272–4
National Science Board 207, 280
National Science Foundation (US) 39, 41, 268, 273–4, 278
  grant 211
  Science Indicators 280
National Venture Capital Association 1
Neck, C.P. 53
Nelson, R.R. 90, 99, 131, 146, 169, 274, 279, 291, 300
Nesta, L. 174
Nestlé 151
Netherlands 153, 294–306
  Chambers of Commerce of 294
  geographical make-up of 297
  regional contributions to knowledge base of economy of 301–2
  sectoral decomposition of 303–6
  and Statistics Netherlands (CBS) 296
network performance 325–31
  dimensions of functional 325–6
  functional: effective, efficacy, efficiency 326–8
  organizational 328–31
networks 153, 157–8, 312–120
  ACT-R 40
  and actor-network theory 314
  as complex systems 313–6
  dynamics of 330
  as social coordination mechanisms 314
  and social network analysis 313
neuroeconomics 68
New York Times 79
New Zealand 247–58
  Association of Crown Research Institutes (ACRI) 248, 254, 255
  biotechnology in 247–8
  and Biotenz 250
  Foundation for Research Science & Technology 247–8
  King Salmon 248
  Life Sciences Network in 23
  role of GM in 253–4, 257
  and Royal Commission on Genetic Modification 243, 244, 250–252
  and Royal Society of New Zealand (RSNZ) 248, 254, 255
Newcastle University (and) 1
  professors of practice (PoPs) 12–13
  Regional Development Agency 12
  researchers of practice (RoPs) 12–13
Nguyen-Xuan, A. 45
Niosi, J. 143, 146, 148, 159
Nisbett, R.E. 44, 45, 66
Nohara, H. 227, 234
Nooteboom, B. 46
norms 25, 42–4, 47–8, 60–61, 64, 66, 67, 90, 170–171, 173, 177, 182, 187, 190, 211, 245, 316, 323, 326–7, 329
  disclosure 80–81
  institutional 60, 77
  Mertonian 47
  operational 48, 50, 52, 55, 56, 58, 60, 68
  and pragmatic schemes 44
  priority 22
  scientific 6
  social 44, 48, 64, 178
  of universalism vs localism 55
  and social value 21
  technical 43–4, 63–4
North, Douglass 60
Novartis 152, 161
  Genomics Institute of the Novartis Research Foundation (GNF) 155
  Institutes for Biomedical Research (NIBR) 155
Nuvolari, A. 173
'nudge' suggestions to triple helix 60–68
  generality vs singularity 61–2
  complexity vs simplicity 62–3
  explicitness vs tacitness 63–4
  shared vs unshared background knowledge 64–5
  shared vs unshared cognitive style 65–7
  see also Janus scientist(s)
'nudging' capitalization of knowledge 14

Obama Administration 2
  and programme for green technologies 63
OEU 91
Oléron, P. 322
Olson, Mancur 174, 175
  and theory of collective action 21
Open Collaborative Research Program 78–9
  Cisco Systems 78
  Hewlett-Packard 78
  IBM 78
  Intel 78
  US universities 78
  see also research
open innovation 154
  and 'Globalization 2' 161, 162
open science 20, 80–81, 91–2, 126, 162, 169, 170, 171, 178, 188, 221
open source production
  complementary investments in 179–81
  instability of 174–5
Oresund (Copenhagen/southern Sweden) 9
organizational re-engineering 77–8
Orsenigo, L. 67, 143, 144
Owen-Smith, J. 144, 170, 235

Packard, Dave 210
Papon, P. 223, 233
Parkes, C. 151
Pasquali, Corrado 17
Pasteur, Louis 36–9, 62
  and function of bacteria in food preservation 36–7, 38, 39
patents/patenting 5, 6–9, 18, 32, 41, 64, 125–6, 128–30, 144–5
  applications from US corporations 128
  fences 132–3
  growth in applications for 128–9
  imitation costs of 131
  increase in rates of 133
  and innovation 132
  'onco-mouse' (Leder and Stewart) 18, 131
  private-good 124
  and reasons for not patenting 132
  and 'regulatory capture' 130
  Selden 131
  strategic value of 129
  transfer 65
  uses of 132
  wildcat 134
  and Wright brothers 131
Paulsen, N. 243
Pavitt, K. 137, 295
Penrose, E. 162
Perez, C. 98, 292
Perkin-Elmer 149
Personal Knowledge 265
Pfister, E. 115
Pfizer 143, 288
pharma, corporate 152
pharmaceuticals 228
Phillips, C. 152
Piaget, J. 44
Polanyi, M. 23, 221, 265
Politzer, G. 45
Polyami (artificial fibres) 153
polyvalent knowledge and threats to academic life 3–5
  and IP protection for research 4
Porat, M.U. 262, 274
Porter, A.L. 300
Porter, M. 226
  and Porterian clusters 19
Powell, W.W. 143, 144, 170, 201, 235
Pozzali, A. 25, 32, 42, 64, 68
Principles of Economics 88
Production and Distribution of Knowledge in the United States, The 261, 262, 270, 278
proprietary vs public domain licensing: software and research products 167–98
  academic software and databases 181–4
  econometric software packages (case study) 184–9
  Generalized Public License (GPL) 175–8
  incentives for knowledge production with many producers 169–72
  open source production 179–81
  public domain vs proprietary research 172–9
prospect theory 58–9
proximity 8, 14, 19, 24, 32, 60–68, 145, 146, 230, 245, 293
  geographical 221
  and functional 155–6
public domain vs proprietary research (and) 172–9
  configuration of open source equilibrium 172–4
  Generalized Public Licence (copyleft) as coordination device 175–7
  instability of open source production 174–5
  nature and consequences of GPL coordination 177–9
public research organizations (PROs) 89, 90–91, 170
  and 'common-use pools' of patents 80
publishing
  bioscientific 157–60
  collaborations 154
  and patenting 5
  complementarity between 6–9
  in Europe 7–8
Pugh, D.S. 295

Quéré, M. 99
Quillian, M.R. 40, 44
Quinn, S. 209

R&D 263
Raagmaa, G. 160
Racal 228–9
Rahm, D. 205
Rallet, A. 156
Ranga, L.M. 8
Reber, A.S. 45
regional innovation organizer (RIO) 9
Rehg, W. 323
Reingold, N. 276
Renfro, Charles 184, 185, 187
Republic of Science 22, 81, 167, 170, 190, 220, 221–3, 228, 234, 235
research 3–5, 6, 61–2, 161–2, 171–2
  academic 66
  collaboration 46, 65, 82–7
  as costly but fundamental 57
  funding 156–7, 207–9
  funding for academic 49–50
  globalized 155
  industrial 49
  as investment 24
  IP protection for 4
  IP protection and innovation 133–4
  by National Institutes of Health 147
  Open Collaborative Research Program 78–9
  productivity of 276
  proprietary (PR) 173, 174
  public domain vs proprietary 172–9
  publication collaboration 158
  and Report of Forum on University-based Research (EC, 2005) 78
  scientific 41
  on sociocultural impact of biotechnology in New Zealand 247–8
  strategic, hybrid, innovative, public and scepticism (SHIPS) 47
  proprietary, local, authoritarian, commissioned and expert (PLACE) 47
  time constraints on 48–9
  universities 224
  in Western universities 92
  see also public domain vs proprietary research; studies (on)
research, development and innovation (RDI) policies 218–42
  and four patterns of RDI policy-making conventions 221–7
  'Republic of Science' 221–3
  state as entrepreneur 223–4
  state as facilitator (of technological products) 225–7
  state as regulator 224–5
  methodological appendix for 239–42
research and development (R&D) 1–2, 8, 22, 32, 77, 85–6, 89, 91, 92, 124, 127, 130, 132, 134–5, 147, 156, 160, 161, 205, 207
  collaboration 67
  expenditure 143, 144, 295
  investment 80–81, 122, 123
  and IPR 86
  -intensive corporations/firms 80, 83
  national policies 218–42
  policy issues 275–6
  spending, growth in 133
  statistics 280
  see also France; Germany; United Kingdom (UK)
research and development (R&D): national policies 218–42
  analytical framework: construction of policy-making conventions 219–21
  public regimes of action in the UK, Germany and France 228–34
  see also France; Germany; research, development and innovation (RDI) policies; United Kingdom (UK)
research institutes
  Burnham 145
  Gottfried-Wilhelm-Leibniz Association of Research Institutes 231
  La Jolla 145
  Salk 145, 154, 158, 159
  Scripps Research Institute 145, 155, 158, 159
  Torrey Mesa Research Institute (San Diego) 152
reasoning 56–8
  causal 57–8
  deductive 57
  probabilistic 56–7
Richardson, G.B. 110
Richter, R. 109
Rip, A. 47
risk 16, 66–7, 85–6, 99–100, 102–3, 324
  adversity 15
  aversion 85, 108
  capital 1
  management 109, 111, 249
  perception 66
  propensity 58–9
  of short-sightedness and merchandization in UK 228–30
Rogers, E.M. 269
Rolleston, William 250
Rohm & Haas 153
Rosenberg, N. 32, 34, 35, 36, 61, 62
Ross, B.H. 45
Ruggles, N. 272
Ruggles, P. 272
Rumain, B. 44
Ryan, C. 152
Ryle, G. 23, 264–5, 266, 267

Sable, C. 316
Sah, R.K. 100
Salais, R. 220, 224, 226
Sampat, B.N. 74
Samson, A. 248, 250
Samuelson, Paul 265
San Diego 156
Sassen, S. 160
Sauvy, A. 272
Saxenian, A.L. 226
schemas 44–5
  pragmatic 44
Schippers, M.C. 53
Schmitter, P.C. 314
Schmoch, U. 227
Schmookler, J. 114
Schoenherr, R. 295
Schultz, T.W. 273, 275
Schumpeter, J.A. 101–2, 121, 146–9, 292, 297
  and post-Schumpeterian symbiotics 144
Schumpeterian(s)
  corporation 102
  innovation/entrepreneurship 161
  insights 147
  model 19
  and Neo-Schumpeterian innovation theorists 146, 148
Schwartz, D. 292
Science 279
Science, Technology & Human Values 247
science and technology, integration between 4
Science Board, National 207
Science Foundation, National (NSF)
  Grant 211
science–industry relations 218–20, 229, 231
scientific software and databases 182–3
  in Europe 182–3
  privatization of 182
Scotchmer, S. 131, 169, 170
Scott, Anthony 254
Scully, J. 152, 155
Searle, J. 45
Selden patent 131
Second Academic Revolution 6
Second Industrial Revolution 35, 41, 43
Siegel, D. 5, 46, 47
Senker, J. 205
Sent, E.-M. 292
Shannon, C.E. 269, 294
  formulae of 297
Shimshoni, D. 206
Shinn, T. 225
Shirley, M.M. 98
Shoemaker, F.F. 269
Silicon Valley 9, 11
  venture capitalists 2
Simon, H. 56, 60
Sloman, S.A. 54
small and medium-sized enterprises (SMEs) 143, 145, 161, 231–2
Small Business Economics 143
Smith, Adam 109, 121
Smith, E.E. 45
Smith-Doerr, L. 143
social interactions and collaborative research 82–7
social network analysis 313
Solow, R.M. 23, 270, 271–2, 279
Soskice, D. 218, 227
Space Imaging Corporation 182
Sperber, D. 54
Standard Oil 32
Stanford University 158, 159, 201, 208, 210–211
  and MIT model 21
Stankiewicz, R. 291
Stanovich, K. 59
Stephan, P.E. 4, 6, 169
Sternberg, R.J. 40, 56
Stevens, A. 207
Steward, F. 318
STI Scoreboard 295
Stigler, G.J. 266
Stiglitz, J.E. 100, 101, 125, 266
Stokes, D.E. 4
Storper, M. 219, 220, 224, 292, 293–4
Streeck, W. 314
structural change and unemployment 277
Studenski, P. 272
studies (on)
  with Alzheimer patients 45
  social norms 47–8
Suárez, F.F. 300
Sunstein, C.R. 3, 14, 61, 67
surveys
  on effects of changes in IPR regimes (Jaffe, 2000) 137
  Survey of Doctorate Recipients 6
Sweden 9, 10, 11, 151
Syngenta 151, 152, 154
System of National Accounts 262, 272–4

Tamm, P. 160
Tartu (Estonia) 160–161
technological paradigm 127
technologies 5
  biotechnology 5, 41
  information and communication (ICT) 5
  nanotechnology 5, 41
technology, description of 127–8
Technology Kingdom 228
Technology Investment Program (Advanced Technology Programme) 2
technology transfer 12, 15, 34, 67–8, 74–5, 81, 206, 210, 229, 232
  agents (TTAs) 21, 25, 46, 52, 65, 67
  offices 12, 85, 202, 203, 204
Teece, D. 122, 125, 138, 39
Terman, Frederick 210
Teubal, Morris 16, 113
Thaler, R.H. 3, 14, 50, 61, 67
Theil, H. 294, 301
theories
  DNA 62
  information 62
  generality of application of 62
Theory of Economic Development 101
Thevenot, L. 219
Thompson, D.F. 323
Thursby, J. 179
Thursby, M. 5, 179
time constraints on
  problem-solving 56
  research 48–9
Tirole, J. 129, 172, 175, 178, 190
Tomlinson, M. 303
top-down and bottom-up initiatives 13
Torre, A. 156
triple helix in economic cycles 1–3
triple helix: laboratory of innovation 9–14
triple-helix model 60
  anticyclic role of 2
triple-helix relations and economy knowledge base 291–311
triple-helix spaces 10–11
trust (and)
  internal cohesion 316–17
  or translation 319–20
  translation and communication in 317–19
Tushman, M.L. 300
Tversky, A. 45, 57, 58

Ulanowicz, R.E. 294, 306
Unilever 151
United Kingdom (UK) (and)
  chemicals companies in 153
  government reforms in R&D 229–30
  Higher Education Innovation Fund 229
  Joint Infrastructure Fund 230
  Oxbridge 230
  Racal 228–9
  RDI policies 219, 228–30
  risks of short-sightedness/merchandization 228–30
  Science Enterprise Challenge 229
  science–industry relations in 229
  universities in 229–30
  University Challenge 229
  virtual centres of excellence (VCE) 229–30
United States of America (US) 150, 207–9
  agricultural innovation 13
  Bush Administration in 2
  Court of appeals for the Federal Circuit (CAFC) 129
  Department of Commerce 130
  Justice Department 125
  National Science Foundation 273–4
  patent applications/cases in 128, 129–30
  and privatization of Landsat images 182
  Reagan Administration in 182
  research universities 82–5
  universities in 6, 7, 15
  University of California and San Diego 2, 145
universalism 47, 55, 60, 66
  vs localism 48
universities 1–3 passim; 32, 82–5 passim; 125, 129, 153, 156, 159, 170, 174, 201–7, 210, 215, 221, 243, 250, 274, 193, 295
  'Centres of Excellence' and research 143–4
  in EU 91
  industrial penumbra of 209–11
  and research 11–12, 46, 201, 224
  and strategic alliances/joint ventures 11
  UK and corporate liaison offices/officers 86–7
  virtual 11
  see also academy–industry relations
university patenting for technology transfer 74–81
Utterback, J.M. 300

Valente, T. 314, 318
Van der Panne, G. 24, 295, 302
Van Knippenberg, D. 53
Van Looy, B. 7
Vanoli, A. 272
Velasco, José Luis 25
venture capitalism 10, 16–17
  as mechanism for knowledge governance 98–120
  and NASDAQ 110–116
  bundling finance and competence with innovation 111
  knowledge intensive property rights 111–12
  see also finance and innovation; innovation; markets; NASDAQ
Verdier, E. 22, 233
Viale, Riccardo 3, 14, 25, 32, 35, 38, 39, 42, 43, 46, 47, 54, 64, 66, 67, 68
Vinck, D. 226
Von Hippel, E. 171, 173
Von Krogh, G. 173
von Mises, Ludwig 263
Von Wright, G.H. 42, 43, 44
Vonortas, N.S. 295

Wakoh, H. 174
Walker, G. 314
Walsh, J.R. 273
Walshok, M. 145
Warren, M.E. 323, 324
Wason, P.C. 57
Watson, James 148
Watts, R.J. 300
Weale, A. 324
Weaver, C.K. 248
Weaver, W. 269
Weiss, W. 100
Wevers, Francis 248, 252, 253, 254
Whitley, R.D. 292
Wiener, N. 269
Wilson, D. 54
Windrum, P. 303
Winter, S. 125, 127, 134, 138, 265
Wintjes, R. 297
Wolter, K. 146
Woolgar, S. 212
Worrall, J. 250

Yao, D.A. 171
Yoshiaki Ito 160
Young, A. 280

Zeller, C. 154–5
Ziedonis, R. 129, 132, 134, 171
Ziman, J.M. 47
Zucker, L. 144, 162
