The Capitalization of
Knowledge
A Triple Helix of University–Industry–
Government
Edited by
Riccardo Viale
Fondazione Rosselli, Turin, Italy
Henry Etzkowitz
Stanford University, H-STAR, the Human-Sciences and
Technologies Advanced Research Institute, USA and the
University of Edinburgh Business School, Centre for
Entrepreneurship Research, UK
Edward Elgar
Cheltenham, UK • Northampton, MA, USA
© The Editors and Contributors Severally 2010
Published by
Edward Elgar Publishing Limited
The Lypiatts
15 Lansdown Road
Cheltenham
Glos GL50 2JA
UK
Contributors
Cristiano Antonelli, Dipartimento di Economia S. Cognetti de Martiis,
Università di Torino, Italy and BRICK (Bureau of Research in Innovation,
Complexity and Knowledge), Collegio Carlo Alberto, Italy.
Philip Cooke, Centre for Advanced Studies, Cardiff University, Wales,
UK.
Sally Davenport, Victoria Management School, Victoria University of
Wellington, New Zealand.
Paul A. David, Department of Economics, Stanford University, CA,
USA.
Wilfred Dolfsma, School of Economics and Business, University of
Groningen, The Netherlands.
Giovanni Dosi, Laboratory of Economics and Management, Sant’Anna
School of Advanced Studies, Pisa, Italy.
Henry Etzkowitz, Stanford University, H-STAR, the Human-Sciences
and Technologies Advanced Research Institute, USA and the University
of Edinburgh Business School, Centre for Entrepreneurship Research,
UK.
Alfonso Gambardella, Department of Management, Università Luigi
Bocconi, Milano, Italy.
Benoît Godin, Institut National de la Recherche Scientifique, Montréal,
Québec, Canada.
Bronwyn H. Hall, Department of Economics, University of California at
Berkeley, Berkeley, CA, USA.
Caroline Lanciano-Morandat, Laboratoire d’économie et de sociologie
du travail (LEST), CNRS, Université de la Méditerranée et Université de
Provence, Aix-en-Provence, France.
Shirley Leitch, Swinburne University of Technology, Melbourne,
Australia.
Abbreviations
ACRI Association of Crown Research Institutes (New Zealand)
AIM Alternative Investment Market (UK)
CADs complex adaptive systems
CAFC Court of Appeals for the Federal Circuit (USA)
CEO chief executive officer
CNRS National Centre for Scientific Research (France)
CRI Crown Research Institute (New Zealand)
DARPA Defense Advanced Research Projects Agency (USA)
DBF dedicated biotechnology firm
EPO European Patent Office
ERISA Employee Retirement Income Security Act (USA)
FDI foreign direct investment
GDP gross domestic product
GM genetic modification
GMO genetically modified organism
GNF Genomics Institute of the Novartis Research Foundation
(USA)
GPL GNU General Public License (USA)
HEI higher education institution
HMS Harvard Medical School
HR human resources
ICND Institute for Childhood and Neglected Diseases (USA)
ICT information and communication technology
INRIA National Institute for Research in Computer Science and Automation (France)
IP intellectual property
IPO initial public offering
IPR intellectual property rights
JCSG Joint Center for Structural Genomics (USA)
KIS knowledge-intensive services
LGPL GNU Lesser General Public License (USA)
LSN Life Sciences Network (New Zealand)
MIT Massachusetts Institute of Technology
NACE Nomenclature générale des activités économiques dans les
Communautés européennes
The year 2009 may have represented a turning point for research and
innovation policy in Western countries, with apparently contradictory
effects. Many traditional sources of financing have dried up, although
some new ones have emerged, for example as a result of the US stimu-
lus package. Manufacturing companies are cutting their R&D budgets
because of the drop in demand. Universities saw their endowments fall
by 25 per cent or more because of the collapse in financial markets.
Harvard interrupted the construction of its new science campus, while
Newcastle University speeded up its building projects in response to the
economic crisis. Risk capital is becoming increasingly prudent because
of the increased risk of capital loss (according to the International
Monetary Fund, the ratio of bank regulatory capital to risk-weighted
assets increased on average by between 0.1 and 0.4 percentage points
for the main OECD countries during 2009), while sovereign funds, like Norway’s,
took advantage of the downturn to increase their investments. According
to the National Venture Capital Association, American venture capital
shrank from US$7.1 billion in the first quarter of 2008 to US$4.3
billion in the first quarter of 2009 (New York Times, 13 April 2009).
Many of the pension funds, endowments and foundations that invested
in venture capital firms have signalled that they are cutting back on the
asset class. The slowdown is attributable in part to venture capitalists
and their investors taking a wait-and-see approach until the economy
improves.
The future outlook for R&D looks poor unless a ‘white knight’ comes
to its rescue. This help may come from an actor whose role was down-
played in recent years, but that now, particularly in the USA, seems to
be in the ascendant again. It is the national and regional government
that will have to play the role of the white knight to save the R&D
system in Western economies (Etzkowitz and Ranga, 2009). In the previ-
ous 20 years the proportion of public financing had gradually fallen in
percentage terms, while the private sector had become largely dominant
(the percentage of gross domestic expenditure on R&D financed by
industry now exceeds 64 per cent in OECD countries). In some technolog-
ical sectors, such as biotechnology, the interaction between academy and
industry has become increasingly autonomous from public intervention.
University and corporate labs established their own agreements, created
their own joint projects and laboratories, exchanged human resources
and promoted the birth of spin-off and spin-in companies without rel-
evant help from local and national bodies. Cambridge University biotech
initiatives or University of California at San Diego relations with biotech
companies are just some of many examples of double-helix models of
innovation. In other countries and in other technological sectors the
double-helix model didn’t work and needed the support of the public
helix. Some European countries, like France, Germany and Italy, saw a
positive intervention of public institutions. In France, Sophia Antipolis
was set up with national and regional public support. In Italy, support
from the Piedmont regional government to the Politecnico di Torino
allowed the development of an incubator that has hosted more than 100
spin-off companies.
In sectors such as green technologies, aerospace, security and energy,
public intervention to support the academy–industry relationship is
unavoidable. Silicon Valley venture capitalists invested heavily in renew-
able energy technology in the upturn, and then looked to government
to provide funding to their firms and rescue their investments once the
downturn took hold. In emerging and Third World economies, the role of
the public helix in supporting innovation is also unavoidable. In the least
developed countries industry is weak, universities are primarily teach-
ing institutions and government is heavily dependent upon international
donors to carry out projects. In newly developed countries the universities
are developing research and entrepreneurship activities and industry is
taking steps to promote research, often in collaboration with the universi-
ties, while government plays a creative role in developing a venture capital
industry and in offering incentives to industry to support research through
tax breaks and grants.
The novelty of the current crisis is that the public helix becomes crucial
even in countries and in sectors where the visible public role was minimal
in the past. The Advanced Technology Program, the US answer to the
European Framework Programmes, shrunk to virtual inactivity with zero
appropriations under the Bush Administration but has found a second
life under the Obama Administration and has been renamed the TIP (the
Technology Investment Program).
The triple-helix model seems to play an anti-cyclic role in innovation.
Introduction: anti-cyclic triple helix 3
et al., 2002). Both these mechanisms may reduce the quantity and the
quality of scientific production. This behaviour supports the thesis of a
trade-off between scientific research and industrial applications.
On the contrary, a non-rivalry hypothesis between publishing and
patenting is based on complementarity between the two activities. The
decision of whether or not to patent is made at the end of research and not
before the selection of scientific problems (Agrawal and Henderson, 2002).
Moreover, relations with the licensee and the difficulties arising from the
development of the patented innovation can generate new ideas and suggestions
that point to new research questions (Mansfield, 1995). In a study, 65 per
cent of researchers reported that interaction with industry had positive
effects on their research. A scientist said: ‘There is no doubt that working
with industry scientists has made me a better researcher. They help me to
refine my experiments and sometimes have a different perspective on a
problem that sparks my own ideas’ (Siegel et al., 1999).
On the other hand, the opposition between basic and technological
research seems to have been overcome in many fields. In particular, in
the area of key technologies such as nanotechnology, biotechnology,
ICT (information and communication technologies), new materials and
cognitive technologies, there is continuous interaction between curiosity-
driven activities and control of the technological consequences of the
research results. This is also borne out by the epistemological debate. The
Baconian ideal of a science that has its raison d’être in practical applica-
tion is becoming popular once again after years of oblivion. And the
technological application of a scientific hypothesis, for example regarding
a causal link between two classes of phenomena, represents an empirical
verification. An attempt at technological application can reveal anomalies
and incongruities that make it possible to define initial conditions and
supplementary hypotheses more clearly.
In short, the technological ‘check’ of a hypothesis acts as a ‘positive
heuristic’ (Lakatos, 1970) to develop a ‘positive research programme’
and extend the empirical field of the hypothesis. These epistemological
reasons are sustained by other social and economic reasons. In many
universities, scientists wish to increase the visibility and weight of their
scientific work by patenting. Collaboration with business and licensing
revenues can bring additional funds for new researchers and new equip-
ment, as well as meeting general research expenses. This in turn makes it
possible to carry out new experiments and to produce new publications.
In fact Jensen and Thursby (2003) suggest that a changing reward struc-
ture may not alter the research agenda of faculty specializing in basic
research. Indeed, the theory of polyvalent knowledge suggests that dual
goals may enhance the basic research agenda.
[Figure I.1 here: histograms of the distribution of the total number of
publications (x-axis, 0–600) against the percentage of scientists (y-axis,
0–70), shown separately for non-patenters and patenters]
that have more in common than previously believed. Figure I.1 shows the
complementarity of patenting and publishing in Azoulay et al. (2005). It
plots the histogram for the distribution of publication counts for their 3884
scientists over the complete sample period, separately for patenting and
non-patenting scientists.
The study that makes the most extensive analysis of the complementa-
rity between patenting and publishing is by Fabrizio and DiMinin (2008).
It uses a broad sample drawn from the population of university inventors
across all fields and universities in the USA, with a data set covering 21
years. Table I.1 provides the annual and total summary statistics for the
entire sample and by inventor status. A difference of mean test for the
number of publications per year for inventors and non-inventors suggests
that those researchers holding a patent applied for between 1975 and 1995
generate significantly more publications per year than non-inventors. The
inventors in their sample are more prolific in terms of annual publications,
on the order of 20–50 per cent more publications than their non-inventor
colleagues. The results also suggest that there is not a significant positive
relationship between patenting and the citations to a faculty member’s
publications.
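The difference-of-means test invoked in this discussion can be sketched in a few lines. The example below is purely illustrative: the publication counts are invented, not drawn from the Fabrizio and DiMinin data, and Welch’s t-statistic is chosen here because the two groups need not share a common variance. A large positive t would favour the hypothesis that inventors publish more per year:

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t-statistic for a difference-of-means test between two
    independent samples that need not share a common variance."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(a) - mean(b)) / (va + vb) ** 0.5

# Invented annual publication counts (illustrative only, not the study's data)
inventors = [4, 6, 5, 7, 3, 6, 5, 8]
non_inventors = [3, 4, 2, 5, 3, 4, 3, 4]

t = welch_t(inventors, non_inventors)
print(f"t = {t:.2f}")  # → t = 3.06; a large positive t favours the inventors' mean
```

In practice one would compare t against the relevant Student distribution (with Welch–Satterthwaite degrees of freedom) to obtain a p-value; the published study reports the significance levels directly.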
Nor was evidence of a negative trade-off between publishing and pat-
enting found in Europe. Van Looy et al. (2004) compared the publishing
Table I.1 Patenting and publishing summary statistics for inventors and
non-inventors

Controls
  Chem. eng. & materials tech.   63   1.3   1.10   1.1
  Pharmacology                   83   1.7   1.11   1.6
  Biology                        78   1.8   1.27   1.5
  Electronics & telecom          72   1.3   1.18   1.0
  All fields                    296   1.6   1.28   1.3
(1) the available resources that can be used to start the incubation
process for knowledge-based development;
(2) what is missing and how and where those missing resources can be
found, either locally or internationally.
spaces’ may be created in any order, with any one of them used as the basis
for the development of others (Etzkowitz and Ranga, 2010).
Creating new technology-based economic niches has become a third
strategy for regional and local development. As the number of niches
for science-based technology increases, the opportunity for more players
to get involved also increases. Universities not traditionally involved in
research are becoming more research-oriented, often with funding from
their state and local governments, which increasingly realize that research
is important to local economic growth. A firm may start from a business
concept looking for a technology to implant within it or a technology
seeking a business concept to realize its commercial potential. The entre-
preneur propelling the technology may be an amateur or an experienced
professional. Whichever the case, the technology comes with a champion
who is attempting to realize its commercial potential by forming a firm.
Universities, as well as firms, are developing strategic alliances and
joint ventures. Karolinska University has recruited schools in the health
and helping professions across Sweden into collaborations in order to
increase its ‘critical mass’ in research. Groups of universities in Oresund,
Uppsala and Stockholm have formed ‘virtual universities’, which are then
translated into architectural plans for centres and science parks to link the
schools physically.
As entrepreneurial academic activities intensify, they may ignite a self-
generating process of firm-formation, no longer directly tied to a particu-
lar university. The growth of industrial conurbations around universities,
supported by government research funding, has become the hallmark of
a regional innovation system, exemplified by Silicon Valley; the profile of
knowledge-based economic development was further raised by the found-
ing of Genentech and other biotechnology companies based on academic
research in the 1980s. Once take-off occurs in the USA, only the private
sector is usually credited; the role of government, for example, the Defense
Advanced Research Projects Agency (DARPA), in funding SUN, Silicon
Graphics and Cisco is forgotten.
The triple helix denotes not only the relationship of university, industry
and government, but also the internal transformation within each of these
spheres. The transformation of the university from a teaching institution
into one that combines teaching with research is still ongoing, not only in
the USA, but in many other countries. There is a tension between the two
activities, but nevertheless they coexist in a more or less compatible rela-
tionship. Although some academic systems still operate on the principle of
separating teaching and research, it has generally been found to be both
more productive and more cost-effective to combine the two functions, for
example by linking research to the PhD training process. Will the same
relationship hold for three functions, with the emerging third mission of
economic and social development combined with teaching and research?
A recent experiment at Newcastle University points the way towards
integration of the three academic functions. A project for the redevelop-
ment of the region’s economy as a Science City was largely predicated on
building new laboratories for academic units and for firms in the expecta-
tion that the opportunity to ‘rub shoulders’ with academics in related fields
would be a sufficient attractor. However, a previous smaller-scale project,
the Centre for Life, based on the same premise, did not attract a signifi-
cant number of firms and the allotted space was turned over to academic
units. To jump-start Science City, the professor of practice model, based
on bringing distinguished practitioners into the university as teachers, has
been ‘turned on its head’ to attract researchers of a special kind: PhD sci-
entific entrepreneurs who have started successful firms but may have been
pushed aside as the firm grew and hired professional managers.
Newcastle University, in collaboration with the Regional Development
Agency in Northeast UK, established four professors of practice (PoPs),
one in each of the Science City themed areas – a scheme for knowledge-
based economic development from advanced research. The PoPs link
enterprise to university and are intentionally half-time in each venue so
that they retain their industrial involvement at a high level and do not
become traditional academics. The PoPs have initiated various projects,
ranging from an interdisciplinary centre drawing together the university’s
drug discovery expertise, which aims to undertake larger projects and
attract higher levels of funding, to a new PhD programme integrating
business, engineering and medical disciplines to train future academic and
industrial leaders in the medical devices field.
The next step in developing the PoP model is to extend it down the
academic ladder by creating researchers of practice (RoPs), postdoctoral
fellows and lecturers, who will work half-time in an academic unit and
half-time in the business development side of the university, e.g. technol-
ogy transfer office, incubator facility or science park. The RoPs would be
expected to involve their students in analysing feasibility of technology
transfer projects and in developing business plans with firms in the uni-
versity’s incubator facility. Each PoP could mentor three or four RoPs,
extending the reach of the senior PoPs as they train their junior colleagues.
Moreover, the PoP model is relevant to all academic fields with practi-
tioner constituencies, including the arts, humanities and social sciences.
Until this happens, entrepreneurial activities will typically be viewed as an
adjunct to, rather than an equal partner with, the now traditional missions
of teaching and research.
In the medium term, the PoP model may be expected to become a
successfully through ‘quasi firms’, were the most suitable to become entre-
preneurial and to capitalize knowledge.
The capitalization of knowledge through IPR is losing ground accord-
ing to the analysis made by Caroline Lanciano-Morandat and Eric Verdier
in Chapter 8. National R&D policies can be divided into four categories
based on numerous important factors:
labour) and equated the residual in his equation with technical change.
According to Machlup, the production function is only an abstract con-
struction that correlates input and output, without any causal meaning.
The only reliable way to measure knowledge is by national accounting,
that is the estimate of costs and sales of knowledge products and services
(according to his broad definition). Where the data were not available, as
in the case of internal production and the use of knowledge inside a firm,
he looked at complementary data, such as occupational classes of the
census, differentiating white-collar workers from workers who were not
knowledge producers, like blue-collar workers. His policy prescriptions
were in favour of basic research and sceptical about the positive influence
of the patent system on inventive activity.
Basic research is an investment, not a cost. It leads to an increase in eco-
nomic output and productivity. Too much emphasis on applied research
is a danger because it drives out pure research, which is its source of
knowledge. Finally, his policy stance on information technologies was highly
supportive. Information technologies are a source of productivity growth
because of improved records, improved decision-making and improved
process controls, and are responsible for structural changes in the labour
market, encouraging continuing movement from manual to mental and
from less to more highly skilled labour.
The knowledge economy is difficult to represent. The representation
must not focus only on economic growth and knowledge institutions.
It should focus also on the knowledge base and on the dynamic dis-
tribution of knowledge. To reach this goal, knowledge should not be
represented only as a public good but as a mechanism for coordinating
society. Machlup was the first to describe knowledge as a coordination
mechanism when he qualified it in terms of the labour force. In Chapter
11, Loet Leydesdorff, Wilfred Dolfsma and Gerben Van der Panne try
to define a model for measuring the knowledge base of an economy. In
their opinion it can be measured as an overlay of communications among
differently codified communications. The relevant variables are the ter-
ritorial economy, its organization and technology. The methodological
tools are scientometrics, which measures knowledge flow, and economic
geography. Territorial economies are created by the proximity – in terms
of relational dimensions – of organizations and technologies. New niches
of knowledge production emerge as densities of relations and as a conse-
quence of the self-organization of these interactions. The triple helix is an
exemplification of these dynamics. It is the emergence of an overlay from
the academy–industry–government interaction. In some cases feedback
from the reflective overlay can reshape the network relations from which
it emerged.
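The ‘overlay of communications’ measured in this line of work is commonly operationalized as mutual information among the three dimensions. As a minimal sketch, the three-dimensional (configurational) mutual information T = H_u + H_i + H_g − H_ui − H_ug − H_ig + H_uig can be computed on a toy distribution; all category codes and counts below are invented for illustration, not real scientometric data. A negative T indicates that the three dimensions jointly reduce uncertainty, which is how an overlay at the system level shows up in the indicator:

```python
from collections import Counter
from math import log2

def H(events):
    """Shannon entropy (in bits) of a list of observed categories."""
    n = len(events)
    return -sum(c / n * log2(c / n) for c in Counter(events).values())

# Invented records: each classifies one observed relation by a
# (university, industry, government) category code (all hypothetical).
records = [("u1", "i1", "g1"), ("u1", "i1", "g1"), ("u1", "i2", "g2"),
           ("u2", "i2", "g2"), ("u2", "i2", "g1"), ("u2", "i1", "g2"),
           ("u1", "i2", "g1"), ("u2", "i1", "g1")]

u = [r[0] for r in records]
i = [r[1] for r in records]
g = [r[2] for r in records]

# Configurational information in three dimensions:
# T = H_u + H_i + H_g - H_ui - H_ug - H_ig + H_uig
T = (H(u) + H(i) + H(g)
     - H(list(zip(u, i))) - H(list(zip(u, g))) - H(list(zip(i, g)))
     + H(records))
print(f"T_uig = {T:.3f} bits")  # → T_uig = -0.107 bits (negative: overlay effect)
```

On real data the categories would be, for instance, geographical regions, firm-size classes and NACE technology classes, and T would be compared across territorial economies.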
ACKNOWLEDGEMENTS
not have been completed without her support. Thanks to Chiara Biano
for her editorial processing. We also thank Raimondo Iemma for supply-
ing some of the data for the introduction. We also wish to thank the staff
of Fondazione Rosselli and in particular Daniela Italia, Paola Caretta,
Rocío Ribelles Zorita, Elisabetta Nay, Carlotta Affatato, Giulia Datta,
Elena Bazzanini, Anna Mereu, Giovanni De Rosa, Michele Salcito,
Maria Cristina Capetti, Francesca Villa, Fabiana Manca and Laura
Alessi for the excellent organization of the conference and the follow-up
initiatives.
AT&T, General Electric, Standard Oil, Alcoa and many others under-
stood the importance of scientific research for innovation (Rosenberg and
Mowery, 1998). Moreover, the revolution in organic chemistry in Germany
shifted industrial attention towards the fertility of collaboration between
universities and companies. Searching for a scientific base for inventions
meant developing specific parts of declarative knowledge. Depending on
the different disciplines, knowledge could be more or less formalized and
could contain more or fewer tacit features. In any case, from the Second
Industrial Revolution onwards, the capitalization of technological knowl-
edge began to change: a growing part of knowledge became protected by
intellectual property rights (IPR); patents and copyrights were sold to
companies; institutional links between academic and industrial labora-
tories grew; companies began to invest in R&D laboratories; universities
amplified the range and share of applied and technological disciplines
and courses; and governments enacted laws to protect academic IPR and
introduced incentives for academy–industry collaboration. New institu-
tions and new organizations were founded with the aim of strengthening
the capitalization of knowledge.
The purpose of this chapter is to show that one of the important deter-
minants of the new forms of the capitalization of knowledge is its episte-
mological structure and cognitive processing. The thesis of this chapter
is that the complexity of the declarative part of knowledge and the three
tacit dimensions of knowledge – competence, background and cognitive
rules (Pozzali and Viale, 2007) – have a great impact on research behav-
iours and, consequently, on the ways of capitalizing knowledge. This
behavioural impact drives academy–industry relations towards greater
face-to-face interactions and has led to the development of a new aca-
demic role, that of the ‘Janus scientist’1. The need for stronger and more
extensive face-to-face interaction is manifested through the phenomenon
of the close proximity between universities and companies and through
the creation of hybrid organizations of R&D. The emergence of the new
academic role of the Janus scientist, one who is able to interface both with
the academic and industrial dimensions of research, reveals itself through
the introduction of new institutional rules and incentives quite different
from traditional academic ones.
Analytical ontic knowledge is divided into two main types, descriptive and
explanatory.
Descriptive
The first type comprises all the assertions describing a particular event
according to given space-time coordinates. These assertions have many
names, such as ‘elementary propositions’ or ‘base assertions’. They cor-
respond to the perceptual experience of an empirical event by a human
epistemic agent at a given time.2 A descriptive assertion has a predicative
field limited to the perceived event at a given time. The event is excep-
tional because its time-space coordinates are unique and not reproducible.
Moreover, this uniqueness is made stronger by the irreproducibility of
the perception of the agent. Even if the same event were reproducible, the
perception of it would be different because of the continuous changes in
Explanatory
These assertions, contrary to descriptive ones, have a predicative field that
is wide and unfixed. They apply to past and future events and, in some
cases (e.g. theories), to events that are not considered by the discoverer.
They can therefore allow the prediction of novel facts. These goals are
achieved because of the syntactic and semantic complexity and flexibility
of explanatory assertions. Universal or probabilistic assertions, such as
the inductive generalization of singular observations (e.g. ‘all crows are
black’ or ‘a large percentage of crows are black’) are the closest to descrip-
tive assertions. They have little complexity and their application outside
the predicative field is null. In fact, their explanatory and predictive power
is narrow, and the phenomenon is explained in terms of the input–output
relations of a ‘black box’ (Viale, 2008). In contrast, theories and models
tend to represent inner parts of a phenomenon. Usually, hypothetical
entities are introduced that have no direct empirical meaning. Theoretical
entities are then linked indirectly to observations through bridge principles
or connecting statements. Models and metaphors often serve as heuristic
devices used to reason more easily about the theory. The complexity,
semantic richness and plasticity of a theory allow it to have wider applica-
tions than empirical generalizations. Moreover, theories and models tend
not to explain a phenomenon in a ‘black box’ way, but to represent the
inner mechanisms that connect input to output. Knowing the inner causal
mechanisms allows for better management of variables that can change
the output. Therefore they offer better technological usage.
needed to find a way to preserve food. In 1795, at the height of the French
Revolution, Nicolas Appert, a French confectioner who had been testing
various methods of preserving edibles using champagne bottles, found a
solution. He placed the bottles containing the food in boiling water for a
certain length of time, ensuring that the seal was airtight. This stopped
the food inside the bottle from fermenting and spoiling. This apparently
commonplace discovery would be of fundamental importance in years
to come and earned Appert countless honours, including a major award
from the Napoleonic Society for the Encouragement of Industry, which
was particularly interested in new victualling techniques for the army. For
many years, the developments generated by the original invention were
of limited application, such as the use of tin-coated steel containers intro-
duced in 1810. When Appert developed his method, he was not aware of
the physical, chemical and biological processes that prevented deteriora-
tion once the food had been heated. His invention was a typical example
of know-how introduced through ‘trial and error’. The extension of the
invention into process innovation was therefore confined to descriptive
knowledge and to an empirical generalization. It was possible to test new
containers or to try to establish a link between the change in tempera-
ture, the length of time the container was placed in the hot water and the
effects on the various bottled foods, and to then draw up specific rules for
the preservation of food. However, this was a random, time-consuming
approach, involving endless possible combinations of factors and lacking
any real capacity to establish a solid basis for the standardization of the
invention. Had it been a patent, it would have been a circumscribed inno-
vation, whose returns would have been high for a limited initial period
and would then have gradually decreased in the absence of developments
and expansions of the invention itself.4 The scientific explanation came
some time later, in 1873. Louis Pasteur discovered the function of bacte-
ria in certain types of biological activity, such as in the fermentation and
deterioration of food. Microorganisms are the agents that make it difficult
to preserve fresh food, and heat has the power to destroy them. Once the
scientific explanation was known, chemists, biochemists and bacteriolo-
gists were able to study the effects of the multiple factors involved in food
spoilage: ‘food composition, storage combinations, the specific microor-
ganisms, their concentration and sensitivity to temperature, oxygen levels,
nutritional elements available and the presence or absence of growth
inhibitors’ (Rosenberg and Birdzell, 1986; Italian translation 1988, pp.
300–301). These findings and many others expanded the scope of the inno-
vation beyond its original confines. The method was applied to varieties of
fruit, vegetables and, later, meats that could be heated. The most suitable
type of container was identified, and the effects of canning on the food’s
Knowledge-driven capitalization of knowledge 37
ultraviolet light, with alcohol or with other substances, which, for a variety
of reasons, would subsequently be termed disinfectants.
So to answer our opening question, the scientific explanation for an
invention expands the development potential of the original innovation
because it ‘reduces’ the ontological level of the causes and extends the
predictive reach of the explanation. Put simply, if the phenomenon to
be explained is a ‘black box’, the explanation identifies the causal
mechanisms inside the box (‘reduction’ of the ontological level) that are
common to other black boxes (extension of the ‘predictive reach’ of the
explanation). Consequently, it is possible to develop many other applications or
micro-inventions, some of which may constitute a product or process
innovation. Innovation capacity cannot be expanded, however, when
the knowledge that generates the invention is simply an empirical gener-
alization describing the local relationship between antecedent and causal
consequent (in the example of food preservation, the relationship between
heat and the absence of deterioration). In this case, knowledge merely
describes the external reality of the black box, that is, the relationship
between input (heat) and output (absence of deterioration); it does not
extend to the internal causal mechanisms and the processes that generate
the causal relationship. It is of specific, local value, and may be applied
to other contexts or manipulated to generate other applications to only a
very limited degree.
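The contrast can be illustrated with a small sketch: an empirical generalization behaves like a lookup restricted to the tested context, while a mechanistic model transfers to untested ones. All names and thresholds below are invented for illustration, not taken from the historical case.

```python
# Hedged sketch: empirical generalization vs. mechanistic explanation.
# All quantities and thresholds are invented for illustration.

# Empirical generalization: only the observed input-output pair is known.
# "Heating sealed food -> no deterioration" holds for the tested context
# and says nothing about untested ones.
empirical_rule = {("heat", "sealed_jar"): "preserved"}

def predict_empirical(treatment, context):
    # Outside the observed case the generalization is silent.
    return empirical_rule.get((treatment, context), "unknown")

# Mechanistic model: heat kills microorganisms, and spoilage requires live
# microbes. Because the mechanism is inside the box, the same model applies
# to contexts never tested, e.g. canned meat or milk.
def microbes_survive(temperature_c, minutes):
    # Toy threshold: sustained heat above ~70 C destroys the culture.
    return not (temperature_c >= 70 and minutes >= 10)

def predict_mechanistic(temperature_c, minutes):
    return "spoils" if microbes_survive(temperature_c, minutes) else "preserved"

print(predict_empirical("heat", "sealed_jar"))   # known case
print(predict_empirical("heat", "canned_meat"))  # generalization is silent
print(predict_mechanistic(100, 30))              # mechanism extends to new cases
```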
The knowledge inherent in Appert’s invention, which can be described
as an empirical generalization, is regarded as genuine scientific knowledge
by some authors (Mokyr, 2002a, Italian translation 2004). We do not want
to get involved here in the ongoing epistemological dispute over what
constitutes scientific knowledge (Viale, 1991): accidental generalizations
of ‘local’ value only (e.g. the statement ‘the pebbles in this box are black’),
empirical generalizations of ‘universal’ value (e.g. Appert’s invention) and
causal nomological universals (e.g. a theory such as Pasteur’s discovery).
The point to stress is that although an empirical generalization is ‘useful’
in generating technological innovation (useful in the sense adopted by
Mokyr, 2002b, p. 25, derived from Kuznets, 1965, pp. 84–7), it does not
possess the generality and ontological depth that permit the potential of
the innovation to be easily expanded in the way that Pasteur’s discovery
produced multiple innovative effects. In conclusion, after Pasteur’s dis-
covery of the scientific basis of Appert’s invention, a situation of ‘increas-
ing economic returns’ developed, driven by the gradual expansion of the
potential of the innovation and a causal concatenation of micro-inventions
and innovations in related areas. This could be described as a recursive
cascade phenomenon, or as a ‘dual’ system (Kauffman, 1995), where the
explanation of the causal mechanism for putrefaction gave rise to a tree
al., 2007; Pozzali and Viale, 2007). Therefore the only way to allow transfer
is to create hybrid organizations that put together, face-to-face, the varied
expertise of inventors with that of entrepreneurs and industrial researchers
aiming to capitalize knowledge through successful innovations.
are many rules that are not applied when the format is abstract but are
applied when the format is pragmatic – that is, when it is linked to every-
day experience. For example, the solution of the ‘selection task problem’,
namely, the successful application of modus tollens, is possible only when
the questions are not abstract but are linked to problems of everyday life
(Politzer, 1986; Politzer and Nguyen-Xuan, 1992). The second point is that
most of the time rules are implicitly learned through pragmatic experience
(Reber, 1993; Cleeremans, 1995; Cleeremans et al., 1998). The phenom-
enon of implicit learning seems so strong that it occurs even when the
cognitive faculties are compromised. From recent studies (Grossman et
al., 2003) conducted with Alzheimer patients, it appears that they are able
to learn rules implicitly but not explicitly. Lastly, the rules that are learnt
explicitly in a class or that are part of the inferential repertoire of experts
are often not applied in everyday life or in tests based on intuition (see
the experiments with statisticians by Tversky and Kahneman, 1971).
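The selection task can be made concrete. In a minimal sketch (framings invented for illustration), the normatively correct choice for a rule ‘if P then Q’ is to check the P case and the not-Q case; the latter is the modus tollens step that participants apply far more often under the pragmatic framing:

```python
# Wason selection task sketch: for a rule "if P then Q", the logically
# required cards are the P case (modus ponens check) and the not-Q case
# (modus tollens check). Framings below are invented for illustration.

def cards_to_turn(cards, p, not_q):
    """Return the cards that can falsify 'if P then Q'."""
    return [c for c in cards if c == p or c == not_q]

# Abstract version: "if a card has a vowel, it has an even number".
abstract = cards_to_turn(["A", "K", "4", "7"], p="A", not_q="7")

# Pragmatic version: "if drinking beer, then over 18" -- the same logic,
# which experimental participants solve far more often.
pragmatic = cards_to_turn(["beer", "coke", "age 25", "age 16"],
                          p="beer", not_q="age 16")

print(abstract)   # ['A', '7']
print(pragmatic)  # ['beer', 'age 16']
```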
At the same time, pragmatic experience and the meaning that people
give to social and natural events is driven by background knowledge
(Searle, 1995, 2008; Smith and Kosslyn, 2007). The values, principles
and categories of background knowledge, stored in memory, allow us to
interpret reality, to make inferences and to act, that is, to have a pragmatic
experience. Therefore background knowledge affects implicit learning and
the application of cognitive rules through the pragmatic and semantic
dimension of reasoning and decision-making.7 What seems likely is that
the relationships within schemas and among different schemas allow us
to make inferences, that is, they correspond to implicit cognitive rules.
For example, let us consider our schema for glass. It specifies that if an
object made of glass falls onto a hard surface, the object may break. This
is an example of causal inference. Similar schemas can allow you to make
inductive, deductive or analogical inferences, to solve problems and to take
decisions (Markman and Gentner, 2001; Ross, 1996). In conclusion, the
schema theory seems to be a good candidate to explain the dependence of
cognitive rules on background knowledge. If this is the case, we can expect
that different cognitive rules should correspond to different background
knowledge, characterizing, in this way, different cognitive styles. Nisbett
(2003) has shown that the relation between background knowledge and
cognitive rules supports the differences of thinking and reasoning between
Americans and East Asians. These differences can explain the difficulties
in reciprocal understanding and cooperation between people of different
cultures. If this is the situation in industrial and academic research, we
can expect obstacles to collaboration and the transfer of knowledge, and
the consequent emergence of institutions and organizations dedicated to
overcoming these obstacles to the capitalization of knowledge.
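The glass example suggests how a schema can carry implicit condition–action knowledge that licenses causal inference. A minimal sketch, with structure and values invented for illustration:

```python
# Sketch of schema-based inference: a schema stores default relations that
# license causal inferences. Structure and values are illustrative only.

GLASS_SCHEMA = {
    "material": "glass",
    "fragile": True,
    # Implicit rule carried by the schema:
    # fragile object + hard surface -> may break
}

def infer_outcome(obj_schema, surface):
    """Causal inference licensed by the schema's implicit rule."""
    if obj_schema.get("fragile") and surface == "hard":
        return "may break"
    return "probably intact"

print(infer_outcome(GLASS_SCHEMA, "hard"))    # may break
print(infer_outcome(GLASS_SCHEMA, "carpet"))  # probably intact
```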
46 The capitalization of knowledge
1 Background Knowledge
may derive from: (1) the way in which funding is communicated and the
ways it can constitute a decision frame (with more frequency and relevance
within the company because it is linked to important decisions concerning
the annual budget); (2) the symbolic representation of money (with much
greater emphasis in the company, whose raison d’être is the commercial
success of its products and increased earnings); and (3) the degree to which
the researchers’ social identity is linked to the monetary level of the
wage (with greater importance placed on money as an indicator of a
successful career in a private company than in the university). The
different psychological weight of money has been analysed by many authors,
and in particular by Thaler (1999).
To summarize, operational norms can be schematized in loose time
versus pressing time; undefined results versus well-defined results; and
financial lightness versus financial heaviness.
How can the values in background knowledge and operational norms
influence the implicit cognitive rules of reasoning and decision-making,
and how can they be an obstacle to collaboration between industrial and
academic researchers?
Many aspects of cognition are important in research activity. We can
say that every aspect is involved, from motor activity to perception, to
memory, to attention, to reasoning, to decision-making and so on. My
aim, however, is to focus on the cognitive obstacles to reciprocal commu-
nication, understanding, joint decision-making and coordination between
academic and corporate researchers, and how these might hinder their
collaboration.
I shall analyse briefly three dimensions of interaction: language, group
and inference (i.e. the cognitive rules in thinking, problem-solving, reason-
ing and decision-making).
2 Language
is the real-world interaction among the actors; the second is the fictional
role of the actors; and the third is the communication with the audience.
In face-to-face conversation there is only one layer and no decoupling.
The roles of vocalizing, formulating and producing meaning are per-
formed by the same person. The domain of action identifies itself with the
conversation; coordination is direct without intermediaries. Thus face-to-
face conversation is the most effective way of coordinating meaning and
understanding, resulting in only minor distortions of meaning and fewer
misunderstandings. Academic and industrial researchers are members
of different cultural communities and, therefore, have different back-
ground knowledge. In the collaboration between academic and industrial
researchers, coordination between meanings and understandings can be
difficult if background knowledge is different. When this is the case, as we
have seen before, the result of the various linguistic settings will probably
be the distortion of meaning and an increase in misunderstanding. When
fundamental values are different (as in SHIPS versus PLACE), and also
when the operational norms of loose time versus pressing time, undefined
product versus well-defined product and financial lightness versus finan-
cial heaviness are different, it is impossible to transfer knowledge without
losing or distorting shares of meaning.
Moreover, difficulty in coordination will increase in settings that utilize
intermediaries between the academic inventor and the potential industrial
user (‘mediated settings’ in Clark, 1996, p. 5). These are cases in which
an intermediate technology transfer agent tries to transfer knowledge
from the university to corporate labs. In this case, there is a decoupling
of speech. The academic researcher is the one who formulates and gives
meaning to the linguistic message (also in a written format), while the
technology transfer (TT) agent is merely a vocalizer. As a result, there
may be frequent distortion of the original meaning, in particular when
the knowledge contains a large share of tacit knowledge. This distortion
is strengthened by the likely difference in background knowledge between
the TT agent and that of the other two actors in the transfer. TT agents are
members of a different cultural community (if they are professional, from
a private TT company) or come from different sub-communities inside the
university (if they are members of a TT office). Usually, they are neither
active academic researchers nor corporate researchers. Finally, the trans-
fer of technology can also be accompanied by the complexity of having
more than one domain of action. For example, if the relation between an
academic and an industrial researcher is not face-to-face, but is instead
mediated, there is an emergent second layer of discourse. This is the layer
of the story told by the intermediary about the original process and the
techniques needed to generate the technology invented by the academic
researchers. The story can also be communicated with the help of a written
setting, for example, a patent or publication. All three points show that
common background knowledge is essential for reciprocal understanding
and that face-to-face communication is a prerequisite for minimizing the
distortion of meaning and the misunderstandings that can undermine the
effectiveness of knowledge transfer.
3 Group
The second dimension of analysis is that of the group. When two or more
persons collaborate to solve a common problem, they elicit interesting
emergent phenomena. In theory, a group can be a powerful problem-
solver (Hinsz et al., 1997). But in order to be so, members of the group
must share information, models, values and cognitive processes (ibid.). It
is likely that heterogeneity of skill and knowledge is very useful for detect-
ing solutions more easily. Some authors have analysed the role of hetero-
geneity in cognitive tasks (e.g. the solution of a mathematical problem)
and the generation of ideas (e.g. the production of a new logo), and have
found a positive correlation between it and the successful completion of
these tasks (Jackson, 1992). In theory, this result seems very likely, since
finding a solution entails looking at the problem from different points of
view. Different perspectives allow the phenomenon of entrenched mental
set to be overcome; that is, the fixation on a strategy that normally works
well in solving many problems but that does not work well in solving
this particular problem (Sternberg, 2009). However, the type of diver-
sity that works concerns primarily cognitive skills or personality traits
(Jackson, 1992). In contrast, when diversity is based on values, social
categories and professional identity, it can hinder the problem-solving
ability of the group. This type of heterogeneity generates the categoriza-
tion of differences and similarities between the self and others, and results
in the emergent phenomenon of the conflict/distance between ‘ingroup’
and ‘outgroup’ (Van Knippenberg and Schippers, 2007). The relational
conflict/distance of ingroup versus outgroup is the most social expres-
sion of the negative impact of diversity of background knowledge on
group problem-solving. As was demonstrated by Manz and Neck (1995),
without a common background knowledge, there can be no sharing of
goals, of the social meaning of the work, of the criteria to assess and to
correct the ongoing activity, of foresight on the results nor on the impact
of the results and so on. As described by the theory of ‘teamthink’ (Manz
and Neck, 1995), the establishment of an effective group in problem-
solving relies on the common sharing of values, beliefs, expectations and
a priori assumptions about the physical and social world. For example, academic and
4 Cognitive Rules
support a particular causal explanation. Often, once the expert has identi-
fied one of the suspected causes of a phenomenon, he/she stops searching
for additional alternative causes. This phenomenon is called ‘discounting
error’. From this point of view, the hypothesis posits that the different
operational norms and social values of academic and corporate research
may produce different discounting errors. Financial heaviness, pressing
time and well-defined results compared to financial lightness, slow time
and ill-defined results may limit different causal fields in the entire project.
For example, the corporate scientist can consider time as a crucial causal
variable for the success of the project, whereas the academic researcher is
unconcerned with it. At the same time, the academic researcher can con-
sider the value of universal scientific excellence of the results to be crucial,
whereas the industrial researcher is unconcerned with it.
The fourth dimension deals with decision-making. Decision-making
involves evaluating opportunities and selecting one choice over another.
There are many effects and biases connected to decision-making. I shall
focus on certain aspects of decision-making that can differentiate academic
from industrial researchers.
The first deals with risk. According to ‘prospect theory’ (Kahneman
and Tversky, 1979; Tversky and Kahneman, 1992), risk propensity is
stronger in situations of loss and weaker in situations of gain. A loss of
$5 causes a negative utility bigger than the positive utility caused by the
gain of $5. Therefore people react to a loss with risky choices aimed at
recovering the loss. Two other conditions that increase risk propensity are
overconfidence (Fischhoff et al., 1977; Kahneman and Tversky, 1996) and
illusion of control (Langer, 1975). People often tend to overestimate the
accuracy of their judgements and the probability of the success of their
performance. Both the perception of loss and overconfidence occur when
there is competition and when decisions are charged with economic meaning
and have economic effects. The operational norm of financial heaviness and
pressing time, and the social value of exclusivity and the interests of the
industrial researcher can increase the economic value of choices and inten-
sify the perception of competitiveness. This, consequently, can increase
risk propensity. In contrast, the social values of communitarianism and
indifference, and the operational norms of financial lightness and the slow
time of academic scientists may create an environment that doesn’t induce
a perception of loss or overconfidence. Thus behaviour tends to be more
risk-averse.
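The asymmetry between losses and gains can be sketched with the prospect-theory value function; the parameter values below are the commonly cited estimates from Tversky and Kahneman (1992), used here only for illustration:

```python
# Prospect-theory value function (Kahneman & Tversky). The parameters
# alpha = 0.88 and lambda = 2.25 are the commonly cited 1992 estimates.

def value(x, alpha=0.88, lam=2.25):
    """Subjective value of a gain/loss x relative to the reference point."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

gain = value(5)    # positive utility of gaining $5
loss = value(-5)   # negative utility of losing $5

# The loss looms larger than the equal-sized gain:
print(abs(loss) > gain)  # True
```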
A second feature of decision-making is connected to regret and loss
aversion. We saw before that, according to prospect theory, an indi-
vidual doesn’t like to lose, and reacts with increased risk propensity.
Loss aversion is based on the regret that loss produces in the individual.
This regret is responsible for many effects. One of the most important is
‘irrational escalation’ (Stanovich, 1999) in all kinds of investments (not
only economic, but also political and affective). When one is involved in
the investment of money in order to reach a goal, such as the building of
a new missile prototype or the creation of a new molecule to cure AIDS,
one has to consider the possibility of failure. One should monitor the
various steps of the programme and, especially when funding ends, one
must coldly analyse the project’s chances for success. In this case, one must
consider the monies invested in the project as sunk cost, forget them and
proceed rationally. People tend, however, to become affectively attached
to their project (Nozick, 1990; Stanovich, 1999). They feel strong regret in
admitting failure and the loss of money, and tend to continue investment
in an irrational escalation of wasteful spending in an attempt to attain the
established goal. This psychological mechanism is also linked to prospect
theory and risk propensity under conditions of loss. Irrational escalation
is stronger when there is a stronger emphasis on the economic importance
of the project. This is the typical situation of a private company, which
links the success of its technological projects to its commercial survival.
Industrial researchers have the perception that their job and the possibil-
ity of promotion are linked to the success of their technological projects.
Therefore they are likely to succumb more easily to irrational escalation
than academic researchers, who have the operational norm of financial
lightness and the social norm of indifference, and whose career is only
loosely linked to the success of research projects.
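A minimal sketch of the rational stopping rule, and of the regret-driven deviation from it; all figures are invented for illustration:

```python
# Sunk-cost sketch: a rational continue/stop decision depends only on
# future costs and expected future benefits; money already spent is
# irrelevant. Figures are invented for illustration.

def should_continue(expected_benefit, remaining_cost, sunk_cost=0):
    """Rational rule: ignore sunk_cost entirely."""
    del sunk_cost  # deliberately unused
    return expected_benefit > remaining_cost

# A project with $8M already spent, $5M still needed, and a realistic
# $3M expected payoff should be stopped -- however painful the $8M feels.
print(should_continue(expected_benefit=3, remaining_cost=5, sunk_cost=8))  # False

# Irrational escalation amounts to letting sunk_cost re-enter the comparison:
def escalation_prone(expected_benefit, remaining_cost, sunk_cost):
    # The regret-driven decision maker weighs the loss already incurred.
    return expected_benefit + sunk_cost > remaining_cost

print(escalation_prone(3, 5, 8))  # True: the project is irrationally continued
```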
The third aspect of decision-making has to do with an irrational bias
called ‘myopia’ (Elster, 1979) or temporal discounting. People tend to
strongly devalue long-term gains over time. They prefer small, immediate
gains to big gains projected in the future. Usually, this behaviour is associ-
ated with overconfidence and the illusion of control. Those who discount
time prefer the present, because they imagine themselves able to control
output and results beyond any chance estimations. In the case of industrial
researchers, and of entrepreneurial culture in general, the need to have
results at once, to find fast solutions to problems and to assure sharehold-
ers and the market that the company is stable and growing seems to align
with the propensity towards time discounting. Future results don’t matter.
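The discounting pattern just described can be sketched with a simple exponential discount factor (an assumption chosen for simplicity; hyperbolic forms are also common in the literature), with invented rates contrasting a heavy and a light discounter:

```python
# Temporal discounting sketch: present value of a future gain under an
# exponential discount rate. The rates below are invented to contrast a
# strong discounter with a weak one.

def present_value(gain, years, annual_rate):
    return gain / ((1 + annual_rate) ** years)

small_now = 10
big_later = 100  # arrives in 10 years

strong = present_value(big_later, 10, annual_rate=0.30)  # heavy discounter
weak = present_value(big_later, 10, annual_rate=0.02)    # light discounter

print(strong < small_now)  # True: the big future gain is passed over
print(weak > small_now)    # True: the future gain still dominates
```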
What is important is the ‘now’ and the ability to have new competitive
products in order to survive commercially. Financial heaviness, pressing
time and well-defined results may be responsible for the tendency to give
more weight to the attainment of fast and complete results at once, even
at the risk of making products that in the future will be defective, obso-
lete and easily overcome by competing products. In the case of academic
scientists, temporal discounting might be less strong. In fact, the three
mainly tacit, and its transfer through linguistic media almost impossible.
The organizational centre of capitalization was the inventor’s laboratory,
where he/she attempted to transfer knowledge to apprentices through face-
to-face teaching and by doing and interacting. Selling patents was pointless
without ‘transfer by head’ or proper apprenticeship. According to some
authors, with the growth of science-based innovation, the situation changed
substantially. In life sciences, for example, ontic knowledge is composed
of explanatory assertions, mainly theories and models. Technical norms
are less represented by competential know-how than by explicit condition–
action rules. Thus the degree of tacitness seems, at first sight, to be less.
Ontic knowledge explaining an invention might be represented, explicitly,
by general theories and models, and the process for reproducing the
invention would involve little know-how. A patent might be sold
because it would allow complete knowledge transfer. Academic labs and
companies might interact at a distance, and there would be no need for
university–industry proximity. The explicitness of technological knowl-
edge would soon become complete with the ICT revolution (Cowan et al.,
2000), which would even be able to automate know-how. As I have shown
in previous articles (Balconi et al., 2007; Pozzali and Viale, 2007), this opti-
mistic representation of the disappearance of tacit knowledge is an error. It
considers tacitness only at the level of competential know-how and does not
account for the other two aspects of tacitness, namely, background knowl-
edge and cognitive rules. Background knowledge not only includes social
norms and values but also principles and categories that give meaning to
actions and events. Cognitive rules serve to apply reason to the data and
to find solutions to problems. Both tend to be individually variable. The
knowledge represented in a patent is, obviously, elliptical from this point
of view. A patent cannot explicitly contain the background knowledge and
cognitive rules used to reason about and interpret the information it
contains. These
irreducible tacit aspects of knowledge oblige technology generators and
users to interact directly in order to stimulate a convergent calibration of
the conceptual and cognitive tools needed to reason and interpret knowl-
edge. This entails a stimulus towards proximity between university and
company and the creation of hybrid organizations between them to jointly
develop knowledge towards commercial aims.
Norms and values used for action, together with principles and concepts
used for understanding, constitute background knowledge. Beyond knowl-
edge transfer, shared background knowledge is necessary for linguistic
knowledge. In any case, they are influenced by norms and values contained
in background knowledge, as was shown by Nisbett (2003) in his study on
American and East Asian ways of thinking. The hypothesis of different
cognitive rules generated by different background knowledge seems likely
but must still be confirmed empirically (Viale et al., forthcoming). I shall
now look at some examples of these differences (analysed in the pilot
study of Fondazione Rosselli, 2008). Time perception and the operational
norm of the loose time versus pressing time differentiate business-oriented
academics from entrepreneurial researchers. For the latter, time is press-
ing, and it is important to find concrete results quickly and not waste
money. Their responses show a clear temporal discounting. The business
participants charge academics with looking too far ahead and not caring
enough about the practical needs of the present. The short-term logic of
the industrial researchers seems to follow the Latin saying Primum vivere,
deinde philosophari (‘First live, then philosophize’). For them, it is better
to concentrate their efforts on the application of existing models in order
to obtain certain results. The academic has the opposite impetus, that is,
to explore boundaries and uncertain knowledge. The different temporal
perceptions are linked to risk assessment. The need to obtain fast results
for the survival of the company increases the risk perception of the money
spent on R&D projects. In contrast, even if the academic participants
are not pure but business oriented, they don’t exhibit the temporal dis-
counting phenomenon, and for them risk is perceived in connection with
scientific reputation inside the academic community (the social norm of
universalism). What is risky to the academic researchers is the possibil-
ity of failing to gain scientific recognition (vestiges of academic values).
Academic researchers also are more inclined towards communitarianism
than exclusivity (vestiges of academic values). They believe that knowl-
edge should be open and public and not used as exclusive private property
to be monopolized. For all participants, misunderstandings concerning
time and risk are the main obstacles to collaboration. University members
accuse company members of being too short-sighted and overly prudent
in the development of new ideas; entrepreneurial participants charge
university members with being too high-minded and overly far-sighted in
innovation proposals. This creates organizational dissonance in planning
the milestones of the projects and in setting the amount of time needed for
the various aspects of research. Differences in cognitive rules are a strong
factor in creating dissonance among researchers. The likely solution to this
dissonance is the emergence in universities of a new research figure trained
in close contact with industrial labs. This person should have the academic
skills of his/her pure scientist colleagues and, at the same time, knowledge
of industrial cognitive styles and values. Obviously, hybrid organizations
can also play an important role, acting as a type of ‘gym’ in which to train
towards the convergence between cognitive styles and values.
NOTES
1. Janus is a two-faced god of the Roman tradition. One face looks to
the past (or to tradition) and the other looks toward the future (or to innovation).
2. From a formal point of view, the descriptive assertion may be expressed in the following
way:
(∃x, t) (aSx → bx)
This means that an event x exists in a given time t such that if x is perceived by the agent
a, then it has the features b. Contrary to the pure analytical interpretation, this formula-
tion is epistemic; that is, it includes the epistemic actor a who is responsible for perceiving
the feature b of event x.
REFERENCES
Academic–business research collaborations 75
patenting output of European universities lags behind only one among the
US universities – and in that exception the difference was quite marginal.
If there are grounds for suspecting that it may not really have been
necessary for Europe to embrace the Bayh–Dole regime’s approach to
effecting ‘technology transfers’ from academic labs to industrial firms,
there also are doubts as to whether the likelihood of innovative success
ensuing from such transactions is raised by having universities rather than
firms own the patents on academic inventions. There are theoretical argu-
ments about this, pro and con, because the issue turns essentially on the
comparative strength of opposing effects: are firms likely to make a better
job of the innovation process because they have greater control over the
development of their own inventions? Or is it less likely that viable aca-
demic inventions will be shelved if the inventor’s institution retains control
of the patent and has incentives to find a way of licensing it to a company
that will generate royalty earnings by direct exploitation?
Since the issue is one that might be settled on empirical grounds, it is
fortunate that Crespi et al. (2006) have recently carried out a statistical
analysis of the effects of university ownership on the rate of commercial
application (diffusion) of patents, and on patents’ commercial values,
based upon the experience of European academic inventions for which
patents were issued by the EPO. Their analysis controls for the different
(ex ante observed) characteristics of university-owned and non-university-
owned patents, and therefore accords with theoretical considerations
that suggest one should view university ownership of a patent as the
endogenously determined outcome of a bargaining game.5 Both before
and after controlling for such differences between patents, they find no
statistically significant effects of university ownership of patents. The only
significant (positive) effect reported is that university-owned patents are
more often licensed out, but this does not lead to an overall increase in
the rate of commercial use. Hence the authors conclude that they can find
no evidence of ‘market failure’ that would call for additional legislation
in order to make university patenting more attractive in Europe. Their
inference is that whether or not universities own commercially interesting
patents resulting from their research makes little difference, because what-
ever private economic benefits might be conveyed by ownership per se are
being adjusted for by the terms set in the inter-organizational bargaining
process. This interpretation of the findings surely should gratify admirers
of the Coase Theorem’s assertion that the locus of ownership of valuable
property does not carry efficiency implications when transactions costs are
not very high.
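The logic of such an analysis can be illustrated, very loosely, with synthetic data in which ownership truly has no effect on patent value: regressing value on an ownership dummy while controlling for an ex ante characteristic then recovers a coefficient near zero. This toy regression is not the Crespi et al. specification or data.

```python
# Toy illustration of an ownership-effect regression on synthetic patents.
# Data and model are invented; this is NOT the Crespi et al. specification.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

quality = rng.normal(size=n)             # ex ante patent characteristic
univ_owned = rng.integers(0, 2, size=n)  # 1 = university-owned

# Commercial value depends on quality only; ownership has a true effect of 0.
value = 1.0 + 0.8 * quality + rng.normal(scale=1.0, size=n)

# OLS of value on [intercept, quality, ownership dummy].
X = np.column_stack([np.ones(n), quality, univ_owned])
coef, *_ = np.linalg.lstsq(X, value, rcond=None)

print(round(coef[2], 3))  # ownership coefficient: close to zero
```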
Nonetheless, even though impelled by misconceptions of the realities
both in the USA and in Europe, there is now a general sense that, by
These have been important steps toward the flexibility needed for R&D
collaborations throughout the European Research Area (ERA), even
though a considerable distance remains to be traveled by the respec-
tive national government authorities along the path towards granting
greater autonomy to their institutions; and also by consortia and regional
coalitions of the institutions themselves to remove the impediments to
collaboration and inter-university mobility of personnel that continue
to fragment the European market for academic science and engineering
researchers.
Furthermore, although European governments have not hesitated
to urge business corporations to accept the necessity of investments in
‘organizational re-engineering’ to take full advantage of new technolo-
gies and consequent new ways of working, they have not been so quick
to put this good advice into practice ‘closer to home’ – when urging
work, by freeing themselves from the lengthy delays and costly, frustrat-
ing negotiations over IPR that proposals for such collaborative projects
typically encounter.
This development reflects a growing sense in some corporate and uni-
versity circles during the past five years that the Bayh–Dole legislation
had allowed (and possibly encouraged) too great a swing of the pendulum
towards IP protection as the key to appropriating economic returns from
public and private R&D investments alike; that the vigorous assertion of
IPR was being carried too far, so that it was impeding the arrangement
of inter-organization collaborations involving researchers in the private
and publicly funded spheres. As Stuart Feldman, IBM’s vice-president for
computer science, explained to the New York Times: ‘Universities have
made life increasingly difficult to do research with them because of all the
contractual issues around intellectual property . . . We would like the uni-
versities to open up again.’ A computer scientist at Purdue University is
quoted in the same report as having echoed that perception: ‘Universities
want to protect their intellectual property but more and more see the
importance of collaboration [with industry].’
The empirical evidence about the effects of Bayh–Dole-inspired legisla-
tion in the EU that has begun to appear points, similarly, to some negative
consequences for research collaboration. Thus a recent study has investi-
gated the effect of the January 2000 Danish Law on University Patenting
and found that it led to a reduction in academic–industry collaboration
within Denmark (Valentin and Jensen, 2007). But the new law, which gave
the employing university patent rights to inventions produced by faculty
scientists and engineers who had worked alone or in collaboration with
industry, appears also to have been responsible for an increase in Danish
biotech firms’ readiness to enter into research collaborations with scientists
working outside Denmark – an outcome that must have been as surprising
as it was unwelcome to the legislation’s proponents. Clearly, the transfer
of institutional rules from the USA to Europe is not a matter to be treated
lightly; the same rules may produce quite different effects under different
institutional regimes.
It remains to be seen just how widely shared are these skeptical ‘second
thoughts’ about the wisdom of embracing the spirit of the Bayh–Dole
experiment, and how potent they eventually may become in altering
the mode of industry–university interactions that enhance ‘technology
knowledge transfers’, as distinguished from ‘technology ownership trans-
fers’. At present it is still too early to speculate as to whether many other
academic institutions will spontaneously follow the example of the Open
Collaborative Research Program. Moreover, it seems unlikely that those
with substantial research programs in the life sciences and portfolios of
biotechnology and medical device patents will find themselves impelled to
exploits the ‘public-goods’ properties that make it possible for data and
information to be concurrently shared in use and reused indefinitely, and
thereby promote the faster growth of the stock of reliable knowledge. This
contrasts with the information control and access restrictions that gener-
ally are required in order to appropriate private material benefits from the
possession of (scientific and technological) knowledge. In the proprietary
research regime, discoveries and inventions must either be held secret or
be ‘protected’ by gaining monopoly rights to their commercial exploita-
tion. Otherwise, the unlimited entry of competing users could destroy the
private profitability of investing in R&D.11
One may then say, somewhat baldly, that the regime of proprietary
technology (qua social organization) is conducive to the maximization
of private wealth stocks that reflect current and expected future flows of
economic rents (extra-normal profits). While the prospective award of
exclusive ‘exploitation rights’ has this effect by strengthening incentives
for private investments in R&D and innovative commercialization based
on the new information, the restrictions that IP monopolies impose on the
use of that knowledge perversely curtail the social benefits that it will yield.
By contrast, because open science (qua social organization) calls for liberal
dissemination of new information, it is more conducive both to maximizing
the rate of growth of society’s stocks of reliable knowledge and to raising
the marginal social rate of return on research expenditures. But it, too,
is a flawed institutional mechanism: rivalries for priority
in the revelation of discoveries and inventions induce the withholding of
information (‘temporary suspension of cooperation’) among close com-
petitors in specific areas of ongoing research. Moreover, adherents to open
science’s disclosure norms cannot become economically self-sustaining:
being obliged to quickly disclose what they learn and thereby to relinquish
control over its economic exploitation, their research requires the support
of charitable patrons or public funding agencies.
The two distinctive organizational regimes thus serve quite different pur-
poses within a complex division of creative labor, purposes that are com-
plementary and highly fruitful when they coexist at the macro-institutional
level. This functional juxtaposition suggests a logical explanation for their
coexistence, and the perpetuation of institutional and cultural separations
between the communities of researchers forming ‘the Republic of Science’
and those who are engaged in commercially oriented R&D conducted
under proprietary rules. Yet these alternative resource allocation mecha-
nisms are not entirely compatible within a common institutional setting; a
fortiori, within the same project organization there will be an unstable com-
petitive tension between the two and the tendency is for the more fragile,
cooperative micro-level arrangements and incentives to be undermined.
Vice-chancellors often have links with the CEOs of major local companies, with
chambers of commerce, with their development agency and with NHS Trusts
and other community service providers in their region. Academics work with
individual businesses through consultancy, contract or collaborative research
services. University careers services cooperate with the businesses which wish to
recruit their graduates or provide work placements for their students.
operation so that success breeds success. The profits from existing activi-
ties that provide the basis for subsequent innovation in a firm have their
equivalent in the university in terms of research reputations that serve to
attract high-quality staff and funding. Indeed, the institutions of science
are partly designed to create and reinforce this process. The currently
articulated attempts by some member states to accelerate this reputation
effect through the competitive allocation of teaching and research funds
are bound to further concentrate reputations on a relatively small number
of universities.
Because there are strong potential complementarities between the
conduct of exploratory, fundamental research in institutions organized
on the ‘open science’ principle, and closed proprietary R&D activities in
the private business sector, it is doubly important to establish market and
non-market arrangements that facilitate information flows between the
two kinds of organization. The returns on public investment in research
carried on by PROs can be captured through complementary, ‘valorizing’
private R&D investments that are commercially oriented, rather than by
encouraging PROs to engage in commercial exploitation of their knowl-
edge resources. This is why the strategy that has been expressed in the EU’s
Barcelona targets is important: by raising the rate of business investment
in R&D, Europe can more fully utilize the knowledge gained through its
public research and training investments, and correspondingly capture the
(spillover) benefits that private producers and consumers derive from the
application of advances in scientific and technological knowledge.
Knowledge transfer processes can be made more effective by attention
to the arrangements that are in place at the two main points of the public
research institutions’ connections with their external environments. That
a research institute or a university may acquire the attributes of an iso-
lated, inward-looking ‘ivory tower’ is well understood, and their internal
processes in many cases tend to encourage this. Universities in the EU are
frequently criticized for operating with internal incentive structures that
reward academic excellence in teaching and research independently of
any potential application to practice in the business or policy realms. This
concern is reflected in the newly attributed ‘third stream’ or ‘triangulation’
of the university system, defined as ‘the explicit integration of an economic
development mission with the traditional university activities of scholar-
ship, research and teaching’.19 Third-stream activities are of many different
kinds, and here it is important to distinguish those activities that seek the
commercialization of university research (technology licenses, joint ven-
tures, spin-offs and so on) from activities of a more sociopolitical nature
that include professional advice to policy-makers, and contributions to
cultural and social life (see OEU, 2007). What is significant about the
A SUMMARY
ACKNOWLEDGMENT
This chapter was previously published in Research Policy, Vol. 35, No. 8,
2006, pages 1110–21.
NOTES
patent in the hands of the inventing professor(s). In principle the latter could assign the
rights to a university, which could in turn bargain with a firm over the terms of a license
to exploit it.
6. In this regard it is significant that the latter considerations led the Italian government
to award ownership rights in patents to university faculty inventors, whereas the industrial
treatment of ‘work for hire’ by employed inventors was applied to university faculty
by all the other European states. Thus, in Denmark, PROs including universities were
given the rights to all inventions funded by the Ministry of Research and Technology
(in 1999); French legislation authorized the creation of TTOs (technology transfer
offices) at universities (in 1999), and university and PRO assertion of rights to employee
inventions was ‘recommended’ by the Ministry of Research (in 2001); the ‘professor’s
privilege’ was removed in Germany by the Ministry of Science and Education (in 2002);
in Austria, Ireland, Spain and other European countries the employment laws have
been altered to remove the ‘professor’s exemption’ from the assignment to employers
of the IP rights to the inventions of their employees. See OECD (2003); Mowery and
Sampat (2005).
7. The quoted phrase is the single most frequently cited national policy development
among those listed in a country-by-country summary of the 25 EU member states’
‘National policies toward the Barcelona Objective’, in European Commission (2003),
Table 2.1, pp. 29ff.
8. See Lohr (2006). The universities involved are UC Berkeley, Carnegie Mellon,
Columbia University, UC Davis, Georgia Institute of Technology, Purdue University
and Rutgers University.
9. For further discussion of the literature on the economics of the so-called ‘anti-
commons’, and the critical importance of ‘multiple-marginalization’ as a source of
inefficiency that is potentially more serious than that which would result from the
formation of a cartel, or profit-maximizing pool among the holders of complementary
patents, see David (2008).
10. There was something not so foolish, after all, in the old-fashioned idea of upstream
public science ‘feeding’ downstream research opportunities to innovative firms. The
worry that this will not happen in the area of nanotechnology (see Lemley, 2005) brings
home the point about the unintended consequences of the success of national policies
that aimed at building a university-based research capacity in that emerging field. The
idea was not to allow domestic enterprise to be blocked by fundamental patents owned
by other countries. That they might now be blocked instead by PROs on their
home terrain, ready to exploit their control of those tools, is a disconcerting thought.
For points of entry into the growing economics literature on the impact of academic
patenting upon exploratory research investments, and the ‘anti-commons’ question
(specifically, the ambiguities of recent empirical evidence regarding its seriousness), see
David (2003); Lemley and Shapiro (2007).
11. This and the following discussion draw upon Dasgupta and David (1994) and David
(2003).
12. The value of the screening function for employers is the other side of the coin of the
‘signaling’ benefits that are obtained by young researchers who trained and chose to
continue in postdoctoral research positions in academic departments and labs where
publication policies conform to open science norms of rapid and complete disclosure.
On job market signaling and screening externalities in this context see, e.g. Dasgupta
and David (1994), section 7.1, pp. 511–513.
13. This caution might be subsumed as part of the general warning against the ‘mix-and-
match’ approach to institutional reform and problem selection in science and policy-
making, a tendency that is encouraged by international comparative studies that seek
to identify ‘best practices’, as has been pointed out by more than one observer of this
fashionable practice (see, e.g., David and Foray, 1995). Examining particular
institutions, organizational forms, regulatory structures, or cultural practices in isolation
from the ecologies in which they are likely to evolve, and searching for correlations
Academic–business research collaborations 95
between desired system-level outcomes and their presence in the country or regional
cross-section data, as a rule offers little if any guidance about
how to move from one functional configuration to another that will be not only viable
but more effective.
14. The difficulties occasioned by this internal organizational structure of universities,
which contributes to separating the interest of the institution as a ‘research host’
from that of its faculty researchers, thereby placing these research ‘service units’ in a
regulatory role vis-à-vis the latter, are considerable. But they are far from arbitrary or
capricious, in view of the potential legal complexities that contractual agreements for
collaborative research performance may entail. For further discussion see David and
Spence (2003).
15. See the 2003 survey results reported by Hertzfeld et al. (2006). See also David (2007),
esp. table 1 and text discussion.
16. It is consequently a bit surprising to find the following statement, attributed to the
Lambert Review of Business–University Collaboration (HM Treasury, 2003), p. 52, n.
110: ‘Indeed, the best forms of knowledge transfer involve human interaction, and
European society would greatly benefit from the cross-fertilization between university
and industry that flows from the promotion of inter-sectoral mobility.’
17. These issues are examined in some detail in David and Spence (2003).
18. While this does not imply that other institutions and organizations are more inter-
changeable with the universities in the performance of a number of the latter’s key
functions in modern society, it has contributed to the recent tendency of some observers
to suggest that universities as deliverers of research and training services might be more
effective if they emulated business corporations that perform those tasks.
19. See Minshull and Wicksteed (2005). Activities of this nature are not linked solely
to academy–industry interactions. The tripartite missions in health care to link bio-
medical research with clinical service delivery and clinical education across hospitals
and university medical schools have been widely adopted in the USA and UK. In
the latter they are known as academic clinical partnerships, and they provide the
framework within which much NHS-funded research is carried out. See Wicksteed
(2006).
REFERENCES
Arrow, K.J. (1974), The Limits of Organization, London and New York: W.W.
Norton.
Balconi, M., S. Breschi and F. Lissoni (2004), ‘Networks of inventors and the role
of academia: an exploration of Italian patent data’, Research Policy, 33 (1),
127–45.
Cook-Deegan, R. (2007), ‘The science commons in health research: structure,
function and value’, Journal of Technology Transfer, 32, 133–56.
Crespi, G.A., A. Geuna and B. Verspagen (2006), ‘University IPRs and knowl-
edge transfer. Is the IPR ownership model more efficient?’, presented to the
6th Annual Roundtable of Engineering Research, Georgia Tech College of
Management, 1–3 December, available at http://mgt.gatech.edu/news_room/
news/2006/reer/files/reer_university_iprs.pdf.
Dasgupta, P. and P.A. David (1994), ‘Toward a new economics of science’,
Research Policy, 23, 487–521.
David, P.A. (2003), ‘The economic logic of “open science” and the balance
between private property rights and the public domain in scientific data and
information: a primer’, in J. Esanu and P.F. Uhlir (eds), The Role of the Public
Metcalfe, J.S. (2007), ‘Innovation systems, innovation policy and restless capital-
ism’, in F. Malerba and S. Brusoni (eds), Perspectives on Innovation, Cambridge:
Cambridge University Press.
Minshull, T. and B. Wicksteed (2005), University Spin-Out Companies: Starting to
Fill the Evidence Gap, Cambridge: SQW Ltd.
Mowery, D.C. and B.N. Sampat (2005), ‘Bayh–Dole Act of 1980 and university–
industry technology transfer: a model for other OECD governments?’, Journal
of Technology Transfer, 30 (1–2), 115–27.
Mowery, D.C., R.R. Nelson, B. Sampat and A.A. Ziedonis (2001), ‘The growth
of patenting and licensing by US universities: an assessment of the effects of the
Bayh–Dole act of 1980’, Research Policy, 30, 99–119.
Nelson, R.R. (2004), ‘The market economy and the scientific commons’, Research
Policy, 33, 455–71.
Observatory of the European University (OEU) (2007), Position Paper, PRIME
Network: http://www.prime-noe.org.
OECD (2003), Turning Science into Business: Patenting and Licensing at Public
Research Organizations, Paris: OECD.
Valentin, F. and R.L. Jensen (2007), ‘Effects on academia–industry collaboration
of extending university property rights’, Journal of Technology Transfer, 32,
251–76.
Wicksteed, S.Q. (2006), The Economic and Social Impact of UK Academic Clinical
Partnerships, Cambridge: SQW.co.uk.
3. Venture capitalism as a mechanism
for knowledge governance1
Cristiano Antonelli and Morris Teubal
1. INTRODUCTION
Venture capitalism and knowledge governance 99
larger the risks of failure of new companies. Banks bear the risks of the
failure of firms that had access to their financial support but cannot share
the benefits of radical breakthroughs. As Schumpeter himself realized,
this model, although practiced with much success in Germany in the last
decades of the nineteenth century, suffered from the severe limitations
brought about by this basic asymmetry.
Schumpeter not only realized the limits of the first model but identified
the new model emerging in the US economy at the beginning of the twen-
tieth century. The analysis of the corporation as the institutional alterna-
tive to the ‘innovative banker’ has been laid down in Capitalism, Socialism
and Democracy. Here Schumpeter identifies the large corporation as the
driving institution for the introduction of innovations. His analysis of
the corporation as an innovative institutional approach to improving the
relationship between finance and innovation has received less attention
than other facets (King and Levine, 1993). The internal markets of the
Schumpeterian corporation substitute for external financial markets in the
key role of effectively providing and correctly allocating funds, combining
financial resources and entrepreneurial vision within competent
hierarchies. Corporations, however, are much less able to manage the screening
process. Internal vested interests and localized technological knowledge
help reduce the risks of funding bad projects but risk reducing the chances
that radical innovations are funded.
The Schumpeterian corporation confirms that equity finance is more
effective than debt finance for channeling resources towards innovative
undertakings, but with a substantial bias characterized by continuity with
the existing knowledge base. The model of finance for innovation based
upon the corporation ranks higher than the model based upon banks in
that equity finance is more efficient than debt-based finance with respect
to risk-sharing, but has its own limitations arising from the reduction of
the centers able to handle the decision-making and the ensuing reduction
of the scope of competence that filters new undertakings.
In the second half of the twentieth century a few corporations came to
concentrate worldwide a large part of the provision of finance for
innovation. With their limited span of competence, this small and decreasing
number of incumbents became less and less able to identify and implement
radically new technologies: a case of competence lock-in could be observed. The
corporation has been able for a large proportion of the twentieth century
to fulfill the pivotal role of intermediary between finance and innovations,
but with a strong bias in favor of incremental technological change. The
screening capabilities of corporations fail to appreciate radical novelties.
The integration of these two strands of analysis highlights the radical mis-
match between the distinctive competence and the competitive advantage
                   Polyarchies                        Hierarchies

Debt finance       Banks experience more Type 1
                   errors, funding bad projects
                   because of low competence
                   levels, but favor the
                   introduction of radical
                   innovations; as lenders,
                   however, they cannot
                   participate in the extra
                   profits

Equity finance     Venture capitalism favors the      Corporations can participate
                   introduction of radical            in the fat tail of profits of
                   innovations and participates       new ventures, and are better
                   in the fat tails of profits of     able to sort out bad projects,
                   new ventures                       but are limited by a higher
                                                      probability of committing
                                                      Type 2 errors, which reduces
                                                      the rate of introduction of
                                                      radical innovations
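The trade-off between Type 1 errors (funding bad projects) and Type 2 errors (rejecting good ones) under the two architectures, in the spirit of Sah and Stiglitz (1986, 1988), can be illustrated with a small simulation. This is an illustrative sketch, not a model from the chapter: the acceptance rule (a noisy signal of project quality), the noise parameters and the number of evaluators are our own assumptions.

```python
import random

random.seed(0)

def evaluate(quality, noise=1.0):
    """One evaluator's decision: accept if the noisy perceived quality > 0."""
    return quality + random.gauss(0, noise) > 0

def polyarchy(quality, k=3):
    """Polyarchy: a project is funded if ANY of k independent evaluators
    (e.g. competing financiers) accepts it."""
    return any(evaluate(quality) for _ in range(k))

def hierarchy(quality, k=3):
    """Hierarchy: a project is funded only if ALL k layers of a corporate
    screening process accept it."""
    return all(evaluate(quality) for _ in range(k))

def error_rates(rule, trials=20000):
    """Share of bad projects funded (Type 1) and good projects rejected (Type 2)."""
    type1 = type2 = good = bad = 0
    for _ in range(trials):
        quality = random.gauss(0, 1)   # quality > 0 marks a genuinely good project
        funded = rule(quality)
        if quality > 0:
            good += 1
            type2 += not funded        # good project rejected
        else:
            bad += 1
            type1 += funded            # bad project funded
    return type1 / bad, type2 / good

p1, p2 = error_rates(polyarchy)   # polyarchy: more Type 1, fewer Type 2 errors
h1, h2 = error_rates(hierarchy)   # hierarchy: fewer Type 1, more Type 2 errors
```

Under these assumptions the simulation reproduces the pattern in the table: the polyarchy funds more bad projects than the hierarchy but rejects fewer good ones, which is why decentralized screening is friendlier to radical novelties.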
can decline and emerge. At each point in time, markets differ. Markets
can be classified according to their characteristics and their functionality.
The emergence and upgrading of a market are the result of an articulated
institutional process that deserves to be analyzed carefully.
There are three basic notions of market in the literature: (1) in the
textbook theory of exchange, markets exist and are self-evident; and
any transaction presupposes the existence of an underlying market; (2)
markets as devices for reducing transaction costs (Coase); (3) markets as
social institutions promoting division of labor, innovation and economic
growth.
A major contribution to the discussion of markets comes from Coase
whose work clarifies both (1) and (2) above. ‘In mainstream economic
theory the firm and the market are for the most part assumed to exist and
are not themselves the subject of investigation’ (Coase, 1988, p. 5; italics
added). By mainstream economic theory Coase means an economic theory
without transaction costs. Transaction costs are the costs of market trans-
actions that include ‘search and information costs, bargaining and deci-
sion costs, and policing and enforcement costs’ (Dahlman, 1979, quoted
by Coase), which, of course, includes the costs of contracting. In Coase’s
theory, transaction costs exist and can be important; and they explain the
existence of the firm.3
In the old neoclassical theory of exchange that Coase refers to, the exist-
ence of markets (and also the creation of new markets) is assumed but not
analyzed. It is an axiom, a self-evident truth, similar to Coase’s criticism of
the notion of consumer utility, which is central to the above theory: ‘a non
existing entity which plays a part similar, I suspect, to that of ether in the
old physics’ (Coase, 1988, p. 2; italics added). This view of markets implies
that any transaction assumes an underlying market, or that there is no
such thing as a transaction without a market. Not only is this incorrect
but, following Coase or the implications of his analysis, we assert that the
distinction between individual transactions and a market is important.4
For our purposes, markets are social institutions where at least a critical
mass of producers and a critical mass of consumers interact and transact.
There is an important element of collective interaction and of collective
transacting; that is, any one transaction takes into account the conditions
of all other transactions.
From this viewpoint a market contrasts with an institutional context
characterized by three relevant conditions. First, it involves a smaller set of
transactions than that of the subsequent market. Second, transactions are
isolated and sporadic, both synchronically and diachronically. Third, agents
do not rely upon exchanges but on self-sufficiency; that is, users produce
the products they consume/use.
5. CONCLUSIONS
NOTES
1. Morris Teubal acknowledges the funding and support of ICER (International Center
for Economic Research) where he was a Fellow in 2005 and 2008 and the Prime (NoE)
Venture Fun Project. Preliminary versions have been presented at the Fifth Triple Helix
Conference ‘The capitalization of knowledge: cognitive, economic, social and cultural
aspects’ organized in Turin by the Fondazione Rosselli, May 2005 and the following
workshops: ‘The emergence of markets and their architecture’, jointly organized by
CRIC (University of Manchester) and CEPN-IIDE (University Paris 13) in Paris, May
2006; ‘Instituting the market process: innovation, market architectures and market
dynamics’ held at the CRIC of the University of Manchester, December 2006; ‘Search
regimes and knowledge based markets’ organized by the CEPN Centre d’Economie de
Paris Nord at the MSH Paris Nord, February 2008.
2. So far, this contribution complements and integrates Antonelli and Teubal (2008),
which focuses on the emergence of knowledge-intensive property rights.
3. Concerning the nature and function of markets, again following Coase: ‘Markets are
institutions that exist to facilitate exchange, that is they exist in order to reduce the
cost of carrying out exchange transactions. In Economic Theory which assumes that
transaction costs are non-existent markets have no function to perform’ (Coase, 1988,
p. 7); and ‘when economists do speak about market structure, it has nothing to do with
markets as an institution, but refers to such things as the number of firms, product dif-
ferentiation and the like, the influence of the social institutions that facilitate exchange
being completely ignored’.
4. Coase (1988) discusses the elements comprising a market, e.g. the medieval fairs and
markets that comprise both physical facilities and legal rules governing the rights and
duties of those carrying out transactions. Modern markets will also involve collective
organizations, that is technological institutes and mechanisms for the provision of
market-specific public goods. They also require a critical mass of buyers and sellers,
and institutions assuring standards and quality on the one hand and transparency of
transactions and inter-agent information flow on the other.
5. Marshall makes it clear that markets are themselves the product of a dynamic process:
‘Originally a market was a public place in a town where provisions and other objects
were exposed for sale; but the word has been generalized, so as to mean any body of
persons who are in intimate business relations and carry on extensive transactions in
any commodity. A great city may contain as many markets as there are important
branches of trade, and these markets may or may not be localized. The central point
of a market is the public exchange, mart or auction rooms, where the traders agree to
meet and transact business. In London the Stock Market, the Corn Market, the Coal
Market, the Sugar Market, and many others are distinctly localized; in Manchester the
Cotton Market, the Cotton Waste Market, and others. But this distinction of locality
is not necessary. The traders may be spread over a whole town, or region of country,
and yet make a market, if they are, by means of fairs, meetings, published price lists, the
post-office or otherwise, in close communication with each other’ (Marshall, 1920, pp.
324–5).
6. Markets can also signal new product or product feature requirements (‘unmet needs’)
within the ‘product category’ being traded.
7. Our agenda is therefore not only to define and explain the role of markets but also to
identify the processes of emergence of new markets. This will include analyzing the con-
ditions under which a set of ‘precursor’ transactions will not lead to the emergence of a
new market. In terms of system dynamics, this could be termed ‘left-hand truncation’.
Moreover, explaining emergence will require making reference to other variables, that
is scale economies in building the marketplace (Antonelli and Teubal, 2008).
8. The benefits include savings in transaction costs that should cover the fixed costs of
creating and the variable costs of operating a new market (see above).
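The condition in this note can be restated as a simple threshold. The function name and the numbers below are our own illustrative assumptions, not figures from the text: with a per-transaction saving s, a fixed set-up cost F and a per-transaction operating cost v, a market is worth instituting once n transactions satisfy n·s ≥ F + n·v.

```python
def market_worth_creating(n, saving, fixed_cost, var_cost):
    """Note 8's condition: aggregate transaction-cost savings across n
    transactions must cover the fixed cost of creating the market plus
    the variable cost of operating it."""
    return n * saving >= fixed_cost + n * var_cost

# Hypothetical numbers: a saving of 5 per transaction, a set-up cost of
# 1000 and an operating cost of 2 per transaction imply a critical mass
# of 334 transactions, since 5n >= 1000 + 2n requires n >= 1000/3.
```

Read this way, the fixed cost F is one source of the critical-mass requirement on buyers and sellers discussed in the text: below the threshold, isolated ‘precursor’ transactions do not justify instituting a market.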
9. The above framework suggests that failed market emergence could be the result of two
general causes. One is failed selection processes resulting from too little search/experi-
mentation and/or inappropriate selection mechanisms due to institutional rigidity.
The other is failure to spark or sustain an evolutionary cumulative emergence process
(e.g. due to system failures that policy has not addressed). Not all radical inventions,
even those leading to innovations and having potential, will automatically lead to new
product markets.
10. Students of regional high-tech clusters such as Saxenian (1994) and Fornahl and
Menzel (2004) have intuitively recognized the relevance of such dynamics, but not quite
elaborated it.
REFERENCES
Lane, D.A. (1993), ‘Artificial worlds and economics, part II’, Journal of Evolutionary
Economics, 3, 177–97.
Lane, D.A. and R.R. Maxfield (2005), ‘Ontological uncertainty and innovation’,
Journal of Evolutionary Economics, 15, 3–50.
Lane, D.A., S.E. van Der Leeuw, A. Pumain and G. West (eds.) (2009), Complexity
Perspectives in Innovation and Social Change, Berlin: Springer, pp. 1–493.
Marshall, A. (1890), Principles of Economics, London: Macmillan (8th edn,
1920).
Menard, C. (ed.) (2000), Institutions Contracts and Organizations. Perspectives
from New Institutional Economics, Cheltenham, UK and Northampton, MA,
USA: Edward Elgar.
Menard, C. (2004), ‘The economics of hybrid organizations’, Journal of Institutional
and Theoretical Economics, 160, 345–76.
Menard, C. and M.M. Shirley (eds) (2005), Handbook of New Institutional
Economics, Dordrecht: Springer.
Nelson, R.R. (1994), ‘The co-evolution of technology, industrial structure and
supporting institutions’, Industrial and Corporate Change, 3, 47–63.
Nelson, R.R. (1995), ‘Recent evolutionary theorizing about economic change’,
Journal of Economic Literature, 23, 48–90.
Perez, C. (2003), Technological Revolutions and Financial Capital: The Dynamics
of Bubbles and Golden Ages, Cheltenham, UK and Northampton, MA, USA:
Edward Elgar.
Quéré, M. (2004), ‘National systems of innovation and national systems of govern-
ance: a missing link?’, Economics of Innovation and New Technology, 13, 77–90.
Richardson, G.B. (1972), ‘The organization of industry’, Economic Journal, 82,
883–96.
Richardson, G.B. (1998), The Economics of Imperfect Knowledge, Cheltenham,
UK and Northampton, MA, USA: Edward Elgar.
Richter R. (2007), ‘The market as organization’, Journal of Institutional and
Theoretical Economics, 163, 483–92.
Sah, R.K. and J.E. Stiglitz (1986), ‘The architecture of economic systems’,
American Economic Review, 76, 716–27.
Sah, R.K. and J.E. Stiglitz (1988), ‘Committees, hierarchies and polyarchies’,
Economic Journal, 98, 451–70.
Saxenian, A. (1994), Regional Advantage: Culture and Competition in Silicon
Valley and Route 128, Cambridge, MA: Harvard University Press.
Schmookler, J. (1966), Invention and Economic Growth, Cambridge, MA: Harvard
University Press.
Schumpeter, J.A. (1934), The Theory of Economic Development, Cambridge, MA:
Harvard University Press.
Schumpeter, J.A. (1942), Capitalism, Socialism and Democracy, New York:
Harper and Brothers.
Stiglitz, J.E. (1985), ‘Credit markets and the control of capital’, Journal of
Money, Credit and Banking, 17, 133–52.
Stiglitz, J.E. and A. Weiss (1981), ‘Credit rationing in markets with imperfect
information’, American Economic Review, 71, 912–27.
4. How much should society fuel
the greed of innovators? On the
relations between appropriability,
opportunities and rates of
innovation
Giovanni Dosi, Luigi Marengo and
Corrado Pasquali
1. INTRODUCTION
The economic foundations of both theory and practice of IPR rest upon
a standard market failure argument, without any explicit consideration
of the characteristics of the knowledge whose appropriation should be
granted by patent or other forms of legal monopoly.
The proposition that a positive and uniform relation exists between
innovation and intensity of IP protection in the form of legally enforced
rights such as patents holds only relative to a specific (and highly disput-
able) representation of markets, their functioning and their ‘failures’, on
the one hand, and of knowledge and its nature on the other.
The argument falls within the realm of the standard ‘Coasian’ positive
externality problem, which can be briefly stated in the following way.
There exists a normative set of efficiency conditions under which markets
perfectly fulfill their role of purely allocative mechanisms.
The absence of externalities is one such condition, because their presence
leads (in the case of positive externalities) to underinvestment in, and
underproduction of, the goods involved in the externality itself. In the face
of any departure from these efficiency conditions, a set of policies and institutional
devices must be put in place with the aim of re-establishing them in order to
secured by IPR, its R&D behavior and its IPR enforcement strategies
cannot be unaffected by the actions of other firms acquiring and exploit-
ing their own IPR. The effect of firms exploiting IPR invariably raises
the costs that other firms incur when trying to access and utilize existing
knowledge. Similar dilemmas apply to the effects of a strong IP system on
the competitive process. Static measures of competition may decrease when a
monopoly right is granted but dynamic measures could possibly increase if
this right facilitates entry into an industry by new and innovative firms.
Are these trade-offs general features of the relationship between static
allocative efficiency and dynamic/innovative efficiency? There are good
reasons to think that such trade-offs might not even appear, in theory,
in an evolutionary world, as Winter (1993) shows.
On the grounds of a simple evolutionary model of innovation and
imitation, Winter (1993) compares the properties of the dynamics of a
simulated industry with and without patent protection to the innovators.
The results show that, first, under the patent regime the total surplus (i.e.
the total discounted present value of consumers’ and producers’ surplus) is
lower than under the non-patent one. Second, and even more interesting,
the non-patent regime yields significantly higher total investment in R&D
and displays higher best-practice productivity.
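Winter’s comparison can be caricatured in a few lines of simulation code. The sketch below is a deliberately minimal toy under our own assumptions, not Winter’s actual model: the number of firms, the innovation draws and the fifty-fifty innovate/imitate choice are all illustrative, and a patent is modeled simply as a delay before current best practice becomes freely imitable.

```python
import random

def simulate(patent_life, periods=200, n_firms=20, seed=0):
    """Toy evolutionary industry: each period a firm either innovates
    (a costly stochastic draw that may raise its productivity) or
    imitates best practice. A 'patent' of length patent_life delays
    the period at which a best practice becomes imitable."""
    rng = random.Random(seed)
    prod = [1.0] * n_firms          # current productivity of each firm
    best_history = [max(prod)]      # best practice recorded each period
    total_rd = 0.0                  # cumulative R&D spending
    for _ in range(periods):
        for i in range(n_firms):
            if rng.random() < 0.5:  # innovate: pay R&D cost, take a draw
                total_rd += 1.0
                draw = best_history[-1] * (1 + rng.uniform(-0.05, 0.10))
                prod[i] = max(prod[i], draw)
            else:                   # imitate the newest off-patent best practice
                t_free = len(best_history) - patent_life
                if t_free > 0:
                    prod[i] = max(prod[i], best_history[t_free - 1])
        best_history.append(max(prod))
    return total_rd, best_history[-1]

rd_patent, best_patent = simulate(patent_life=40)   # strong patent regime
rd_free, best_free = simulate(patent_life=0)        # no patent protection
```

Running the two regimes side by side and comparing total R&D and final best-practice productivity mimics, in spirit only, the comparison Winter reports; none of the parameter values carries empirical content.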
More generally, an evolutionary interpretation of the relation between
appropriability and innovation is based on the premise that no model
of invention and innovation and no answer to the patent policy ques-
tion is possible without a reasonable account of inventive and innovative
opportunities and their nature.
The notion of technological paradigm (Dosi, 1982), in this respect,
is precisely an attempt to account for the nature of innovative activi-
ties. There are a few ideas associated with the notion of paradigm worth
recalling here.
First, note that any satisfactory description of ‘what technology is’
and how it changes must also embody the representation of the specific
forms of knowledge on which a particular activity is based and cannot
be reduced to a set of well-defined blueprints. It primarily concerns
problem-solving activities involving – to varying degrees – also tacit forms
of knowledge embodied in individuals and in organizational procedures.
Second, paradigms entail specific heuristics and visions on ‘how to do
things’ and how to improve them, often shared by the community of prac-
titioners in each particular activity (engineers, firms, technical societies
and so on), that is, they entail collectively shared cognitive frames. Third,
paradigms often also define basic templates of artifacts and systems, which
over time are progressively modified and improved. These basic artifacts
can also be described in terms of some fundamental technological and
What is the effect of the increase in patent protection on R&D and techni-
cal advance? Interestingly, in this domain also, the evidence is far from
conclusive. This is for at least two reasons. First, innovative environments
are concurrently influenced by a variety of different factors, which makes
it difficult (for both the scholar and the policy-maker) to distinguish patent
policy effects from effects due to other factors. Indeed, as we shall argue
below, a first-order influence is likely to be exerted by the richness of
opportunities, irrespective of appropriability regimes. Second, as patents
are just one of the means to appropriate returns from innovative activity,
changes in patent policy might often be of limited effect.
At the same time, the influence of IPR regimes upon knowledge dis-
semination appears to be ambiguous. Horstmann et al. (1985) highlight
the cases in which, on the one hand, the legally enforced monopoly rents
should induce firms to patent a large part of their innovations, while, on
the other hand, the costs related to disclosure might well be greater than
the gain eventually attainable from patenting.
In this respect, to our knowledge, not enough attention has been devoted
to the question of whether the diffusion of technical information embodied in
inventions is enhanced or not by the patent system.
The somewhat symmetric opposite issue concerns the costs involved in
the imitation of patent-protected innovations. In this respect, Mansfield
et al. (1981) find, first, that patents do indeed entail significant imitation
costs. Second, there are remarkable intersectoral differences: for example,
their data suggest that patents raise imitation costs by some 30 percent in
drugs, 20 percent in chemicals and only 7 percent in electronics. In addition, they show that patent protection
is not essential for the development of at least three out of four patented
innovations. Innovators introduce new products notwithstanding the fact
that other firms will be able to imitate those products at a fraction of the
costs faced by the innovator. This happens both because there are other
barriers to entry and because innovations are felt to be profitable in any
case. Both Mansfield et al. (1981) and Mansfield (1986) suggest that the
absence of patent protection would have little impact on the innovative
efforts of firms in most sectors. The effects of IPR regimes on the propen-
sity to innovate are also likely to depend upon the nature of innovations
themselves and in particular whether they are, so to speak, discrete ‘stand-
alone’ events or ‘cumulative’. So it is widely recognized that the effect of
patenting might turn out to be a deleterious one on innovation in the case
of strongly cumulative technologies in which each innovation builds on
previous ones.
As Merges and Nelson (1994) and Scotchmer (1991) suggest, in this
realm stronger patents may represent an obstacle to valuable but poten-
tially infringing research rather than an incentive.
Historical examples, such as those quoted by Merges and Nelson on
the Selden patent of a light gasoline in an internal combustion engine
to power an automobile, and the Wright brothers’ patent on an efficient
stabilizing and steering system for flying machines, are good cases to the
point, showing how the IPR regime probably slowed down considerably
the subsequent development of automobiles and aircraft. The current
debate on property rights in biotechnology suggests similar problems,
whereby granting very broad claims on patents might have a detrimental
effect on the rate of innovation, in so far as they preclude the exploration
of alternative applications of the patented invention. This is particularly
the case with inventions concerning fundamental pieces of knowledge:
good examples are genes or the Leder and Stewart patent on a genetically
engineered mouse that develops cancer. To the extent that such techniques
and knowledge are critical for further research that proceeds cumulatively
Source: Levin et al. (1987) and Cohen et al. (2000), as presented in Winter (2002).
massive increase in patenting rates. Still, in Cohen et al. (2000) patents are
not reported to be the key means to appropriate returns from innovations
in most industries. Secrecy, lead time and complementary capabilities are
often perceived as more important appropriability mechanisms.
It could well be that a good deal of the increasing patenting activities
over the last two decades might have gone into ‘building fences’ around
some key invention, thus possibly raising the private rate of return to
patenting itself (Jaffe, 2000), without however bearing any significant rela-
tion to the underlying rates of innovation. This is also consistent with the
evidence discussed in Lerner (2002), who shows that the growth in (real)
R&D spending pre-dates the strengthening of the IPR regime.
The apparent lack of effects of different IPR regimes upon the rates
of innovation appears also from broad historical comparisons. So, for
example, based on the analysis of data from the catalogues of two
nineteenth-century world fairs – the Crystal Palace Exhibition in London
increase, together with the uncertainty about the extent of legal liability
in using knowledge inputs. Hence, as convincingly argued by Heller and
Eisenberg (1998) and Heller (1998), a ‘tragedy of anti-commons’ is likely to
emerge wherein the IP regime gives too many subjects the right to exclude
others from using fragmented and overlapping pieces of knowledge, with
no one having ultimately the effective privilege of use.
In these circumstances, the proliferation of patents might turn out to
have the effect of discouraging innovation. One of the by-products of the
recent surge in patenting is that, in several domains, knowledge has been so
finely subdivided into separate property claims (on essentially complementary
pieces of information) that the cost of reassembling the constituent parts/
properties in order to engage in further research imposes a heavy burden on
technological advance. This means that a large number of costly negotia-
tions might be needed in order to secure critical licenses, with the effect of
discouraging the pursuit of certain classes of research projects (e.g. high-risk
exploratory projects). Ironically, Barton (2000) notes that ‘the number of
intellectual property lawyers is growing faster than the amount of research’.
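The anti-commons arithmetic is easy to make concrete. In the back-of-the-envelope sketch below (all numbers purely illustrative, not drawn from the sources cited above), a research project that must assemble k complementary licenses, each successfully negotiated with probability p, goes ahead only with probability p^k, which collapses quickly as claims fragment.

```python
def prob_all_licenses(k, p=0.9):
    """Probability of clearing all k independent license negotiations,
    each succeeding with probability p (illustrative numbers only)."""
    return p ** k

# As complementary claims multiply, the chance of assembling the
# full bundle, and hence of the project proceeding, falls sharply.
for k in (1, 5, 20, 50):
    print(f"{k:2d} licenses -> P(all clear) = {prob_all_licenses(k):.3f}")
```

With p = 0.9 the bundle clears easily for one or two claims but becomes a long shot once dozens of overlapping rights must be licensed, which is the sense in which fragmentation alone can deter high-risk exploratory projects.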
While it is not yet clear how widespread are the foregoing phenomena
of a negative influence of strengthening IPR protection upon the rates of
innovation, a good deal of evidence does suggest that, at the very least,
there is no monotonic relation between IPR protection and propensity
to innovate. So, for example, Bessen and Maskin (2000) observe that
computers and semiconductors, while having been among the most inno-
vative industries in the last 40 years, have historically had weak patent
protection and rapid imitation of their products. It is well known that the
software industry in the USA experienced a rapid strengthening of patent
protection in the 1980s. Bessen and Maskin suggest that ‘far from unleash-
ing a flurry of new innovative activity, these stronger rights ushered in a
period in which R&D spending leveled off, if not declined, in the most
patent-intensive industries and firms’. The idea is that in industries such
as software, imitation might be promoting innovation and that, on the
other hand, strong patents might inhibit it. Bessen and Maskin argue that
this phenomenon is likely to occur in those industries characterized by a
relevant degree of sequentiality (each innovation builds on a previous one)
and complementarity (the simultaneous existence of different research
lines enhances the probability that a goal might be eventually reached). A
patent, in this perspective, actually prevents non-holders from using
the idea (or similar ideas) protected by the patent itself, and in a sequen-
tial world full of complementarities this turns out to slow down innovation
rates. Conversely, it might well happen that firms would be better off in
an environment characterized by easy imitation, whereby it would be true
that imitation reduced current profits but it would be also true that easy
There are some basic messages from the foregoing discussion of the
theory and empirical evidence on the relationship between degrees of IPR
protection and rates of innovation.
The obvious premise is that some private expectation of ‘profiting from
innovation’ is and has been throughout the history of modern capitalism,
to the extent that firms’ attention and resources are, at the margin, diverted
from innovation itself toward the acquisition, defense and assertion against
others of property rights, the social return to the endeavor as a whole is likely to
fall. While the evidence on all sides is scant, it is fair to say that there is at least
as much evidence of these effects of patent policy changes as there is evidence of
stimulation of research. (Jaffe, 2000, p. 555)
But if IPR regimes have at best second-order effects upon the rates of
innovation, what are the main determinants of the rates and directions of
innovation? Our basic answer, as argued above and elsewhere (see Dosi,
1988, 1997, Dosi et al., 2005) is the following. The fundamental determi-
nants of observed rates of innovation in individual industries/technologies
appear to be nested in levels of opportunities that each industry faces.
‘Opportunities’ capture, so to speak, the width, depth and richness of the
sea in which incumbents and entrants go fishing for innovation. In turn,
such opportunities are partly generated by research institutions outside
the business sector, partly stem from the very search efforts undertaken by
incumbent firms in the past and partly flow through the economic system
via supplier/user relationships (see the detailed intersectoral comparisons
in Pavitt, 1984, and in Klevorick et al., 1995). Given whatever level of
innovative opportunities is typically associated with particular techno-
logical paradigms, there seems to be no general lack of appropriability
conditions deterring firms from going out and fishing in the sea. Simply,
appropriability conditions vary a great deal across sectors and across
technologies, precisely as highlighted by Teece (1986). Indeed, one of the
major contributions of that work is to build a taxonomy of strategies and
organizational forms and map them into the characteristics of knowledge
bases, production technologies and markets of the particular activity in
which the innovative/imitative firm operates.
As these ‘dominant’ modes of appropriation of the returns from inno-
vation vary across activities, so also should vary the ‘packets’ of winning
strategies and organizational forms: in fact, Teece’s challenging conjecture
still awaits a thorough statistical validation on a relatively large sample of
successes and failures.
Note also that Teece’s taxonomy runs counter to any standard ‘IPR-
leads-to-profitability’ model according to which turning the tap of IPR
ought to move returns up or down rather uniformly for all firms (except
for noise), at least within single sectors. Thus the theory is totally mute
with respect to the enormous variability across firms even within the same
sector and under identical IPR regimes, in terms of rates of innovation,
production efficiencies and profitabilities (for a discussion of such evidence
see Dosi et al., 2005).
The descriptive side – as distinguished from the normative, ‘strategic’
one – of the interpretation by Teece (1986) puts forward a promising can-
didate in order to begin to account for the patterns of successes and failures
in terms of suitability of different strategies/organizational arrangements
to knowledge and market conditions. However, Teece himself would
certainly agree that such an interpretation can go only part of the way
in accounting for the enormous interfirm variability in innovative and
economic performances and their persistence over time.
A priori, good candidates for an explanation of the striking differences
across firms even within the same line of business in their ability to both
innovate and profit from innovation ought to include firm-specific fea-
tures that are sufficiently inertial over time and only limitedly ‘plastic’ to
strategic manipulation so that they can be considered, at least in the short
term, ‘state variables’ rather than ‘control variables’ for the firm (Winter,
1987). In fact, an emerging capability-based theory of the firm to which
Teece himself powerfully contributed (see Teece et al., 1990; and Teece et
al., 1997) identifies a fundamental source of differentiation across firms
in their distinct problem-solving knowledge, yielding different abilities of
‘doing things’ – searching, developing new products, manufacturing and
so on (see Dosi et al., 2000, among many distinguished others). Successful
corporations, as is argued in more detail in the introduction to Dosi et al.
(2000), derive competitive strength from their above-average performance
ACKNOWLEDGMENT
NOTE
1. Winter is here pursuing an analogy between patents and ‘wildcat banknotes’ in the US
free banking period (1837–65).
REFERENCES
INTRODUCTION
BEYOND SCHUMPETER?
or, later, the equally heroic R&D team of the large corporation as the
key driver of competition, they developed a heterodox perspective that
attracted a large academic and policy following.
This was not least because of the School’s rejection of the desiccated
theorems of neoclassical economics that treated Schumpeter’s wellspring
of capitalism as a commodity to be acquired ‘exogenously’ off the shelf.
This is no longer believed, even by neoclassicals. Although the national
innovation systems authors mentioned in this section had no spatial brief –
other than for a poorly theorized ‘nation state’, which in the case of Taiwan
and South Korea could look odd – their adherence to Schumpeterian
insights implied clusters, by way of his concept of ‘swarming’. These are logical
consequences where imitators pile in to mimic the innovator and seek
early superprofits before over-competition kills the goose that lays the
golden eggs. Hence ‘creative destruction’, as some older technology –
sailing ships – gives way to newer – steamships. Many shipyards arise only
to be competed into a few or none, as competitive advantage moves their
epicentre to cheaper regions around the world. It can be argued that the
swift destruction of value observable in such processes, and more recently
in the bursting of the dot-com and financial bubbles (see below), with their
negative upstream implications for world trade, constitutes a problem of
‘over-innovation’ rather as, in a completely opposite way, German car
and machinery makers used to be accused of ‘over-engineering’. Mobile
telephony and computing may equally be accused of having too many
‘bells and whistles’ that most people never use.
Biotechnology is not like this. It is in many ways either non-Schumpeterian
or post-Schumpeterian, although, as we have seen, it certainly clusters,
even concentrates in ‘megacentres’ of which there are very few in the
world. But it does this not to outcompete or otherwise neutralize competi-
tor firms, but rather to maximize access to knowledge at an early stage in
the knowledge exploration phase that is difficult to exploit in the absence
of multidimensional partnerships of many kinds. Feldman and Francis’s
(2003) paper presents a remarkable, even original, picture of the manner
in which the driver of cluster-building in Washington and Maryland’s
Capitol region was the public sector. Importantly, the National Institutes
of Health conduct basic research as well as funding others to do it in
university and other research laboratories. Politics means that sometimes
bureaucracies must downsize, something that was frequently accompa-
nied by scientists transferring knowledge of many kinds into the market,
not least in DBFs that they subsequently founded. Interactions based on
social capital ties with former colleagues often ensued in the marketplace.
Before founding the human genome decoding firm Celera, Craig Venter was
one of the finest exploiters of such networks, not least with the founding
CEO of Human Genome Sciences, William Haseltine, who had been a col-
league of Walter Gilbert at Harvard and James Watson, co-discoverer
of DNA, and later of Nobel Prize-winner David Baltimore (Wickelgren,
2002). Thus, as Feldman and Francis (2003) show, inter-organizational
capabilities networks from government to firms as distinct from more
normally understood laboratory–firm or firm–firm links may also be
pronounced in biotechnology.
This is seldom observed in the classical Schumpeterian tradition, which
rests to a large degree on engineering metaphors, where entrepreneurship
is privileged. Moreover, few Neo-Schumpeterians devote attention to
the propulsive effect of public decisions (for an exception, see Gregersen,
1992), even though regional scientists had long pointed out the central-
ity of military expenditure to the origins of most ICT clusters (Hall and
Markusen, 1985). In Niosi and Bas (2003) it is shown how relations are
symbiotic rather than creatively destructive among firms in the Montreal
and Toronto clusters. Arguing for the ‘technology’ definition of biotechnology,
they say it is not a generic technology but a diverse set of activities, each
with its own specificity. Of interest is that in biopharmaceuticals they interact in
networks to produce products for different markets. Hence there is cross-
sectoral complexity in markets as well as research, although healthcare is
the main market. Accordingly, they too point to the ‘regional innovation
system’ nature of biotechnology, spreading more widely and integrating
more deeply into the science base than is normally captured by the local-
ized, near-market idea of ‘cluster’. Thus CROs and biologics suppliers
may exist in the orbit, not the epicentre of the cluster.
This collaborative aspect of biotechnology is further underlined by
Casper and Karamanos (2003). For example, they show that, regarding
joint publication, while there is substantial variation in the propensity to
publish with laboratory, other academic or firm partners by Cambridge
(UK) biotechnology firms, only 36 per cent of firms are ‘sole authors’ and
the majority of partners are those from the founders or their laboratory.
Academic collaborators are shared equally between Cambridge and the
rest of the UK, with international partners a sizeable minority. Hence
the sector is again rather post-Schumpeterian in its interactive knowledge
realization, at least in respect of the all-important publication of results
that firms will probably seek to patent. Moreover, they will, in many cases,
either have or anticipate milestone payments from ‘pharma’ companies
with whom they expect to have licensing agreements.
Finally, biotechnology is interesting for the manner in which epistemo-
logical, methodological and professional shifts in the development of the
field created openings for DBFs when it might have been thought that
large incumbent pharmaceuticals firms would naturally dominate the new
Global bioregions 149
[Figure: world map of inter-organizational research collaboration links among bioscience megacentres and leading institutions, including Cambridge (MA)/Harvard Medical School, New York, London, Oxford, Cambridge (UK), Munich, Zurich, Geneva and Singapore; node sizes denote numbers of links: 3, 4–6, 7–8, >10.]
Three key things have been shown with implications for the understanding of
knowledge management, knowledge spillovers and the roles of collabora-
tion and competition in bioregions. The first is that two kinds of proximity
Sources: NIH; NRC; BioM, Munich; VINNOVA, Sweden; Dorey (2003); Kettler and
Casper (2000); ERBI, UK; Lawton Smith (2004); Kaufmann et al. (2003).
the second smallest number of life scientists in Table 5.1, scores highest
at $510 000 per capita, although Seattle, at $276 000, is comparable to
Boston, New York and San Francisco. How should we interpret these
statistics? One way is to note the very large amounts of funding from ‘big
pharma’ going especially to the Boston, and to a lesser extent New York
and both Californian centres. Canada’s bioregional clusters challenge
many elsewhere in the world regarding cluster development.
Some of Europe’s are rather large on the input side (for example life
scientists) but less so on the output side (for example funding, firms). The
process of bioregional cluster evolution has occurred mainly through aca-
demic entrepreneurship supported by well-funded research infrastructure
and local venture capital capabilities. In Israel, there is a highly promis-
ing group of bioregions including Rehovot and Tel Aviv as well as the
main concentration in Jerusalem, where patents are highest although firm
numbers are of lesser significance.
The most striking feature of the global network of bioscientific knowl-
edge transfer through exploration collaboration as measured by research
[Figure: leading institutions by publication count in the top journals of each bioscientific sub-field (log scale); Harvard Medical School (HMS) leads in most sub-fields, with Stanford, UCSF, Scripps, UC Berkeley, Cambridge and Rockefeller among the others represented.]
Notes: Results (log scale) are based on articles from the top two or three journals (with
the highest impact factor provided by Web of Knowledge) of each sub-field, which are
as follows: immunology: Annual Review of Immunology; Nature Immunology; Immunity;
molecular biology & genetics: Cell Molecular; Journal of Cell Biology; microbiology:
Microbiology and Molecular Biology Review; Annual Review of Microbiology; Clinical
Microbiology Review; neuroscience: Annual Reviews Neuroscience; Trends Neuroscience;
biotechnology & applied microbiology: Drug Discovery; Nature Biotechnology; cell &
developmental biology: Annual Review of Cell and Developmental Biology; Advances
in Anatomy, Embryology and Cell Biology; Anatomy and Embryology; biophysics &
biochemistry: Annual Review of Biophysics; Current Opinions in Biophysics.
CONCLUSIONS
ACKNOWLEDGEMENTS
NOTES
1. The ‘exploration’ and ‘exploitation’ distinction was, of course, first made in organiza-
tion science by March (1991). It is nowadays necessary to introduce the intermediary
‘examination’ form of knowledge to capture the major stages of the move from research
to commercialization in biotechnology because of the long and intensive trialling process
in both agro-food and pharmaceutical biotechnology. However, a moment’s reflection
about other sectors suggests that this kind of knowledge and its organizational processes
have been rather overlooked as it applies outside biotechnology (see Cooke, 2005).
2. San Francisco’s megacentre capability in ICT gives its biotechnology competitive advan-
tage in bioinformatics, screening, gene sequencing and genetic engineering software and
technologies.
3. As shown below, a strong public role in cluster-building is also evident in Finegold et al.’s
(2004) analysis of biopharmaceuticals developments in Singapore.
4. But since the downturn, disappointment has grown at the only modest growth and drying-up
pipeline dynamics associated with Germany’s policy-induced startup biotechnology
businesses. Moreover, German high-tech entrepreneurship in general suffered a heavy
blow with the demise of the Neuer Markt, its now defunct take on NASDAQ.
5. Unusually, the role of ‘big pharma’ is rather under-emphasized in this analysis of its
relation to health biotechnology. This is not because large pharmaceuticals firms are
unimportant in this context, for they clearly are important. However, for exploration and even,
increasingly, examination and some exploitation knowledge production they practise
‘open innovation’, as Chesbrough (2003) demonstrates for the case of Millennium
Pharmaceuticals, a leading bioinformatics supplier that redesigned itself as a biotechno-
logical drugs manufacturer through investment of ‘open innovation’ contract earnings
from the likes of Monsanto and Eli Lilly. These practices are now emulated by specialist
suppliers in industries like ICT, automotives and household care, according to the same
author. This chimes with a more general hypothesis we can call ‘Globalization 2’, in which
in a ‘knowledge economy’ the drivers of globalization become ‘knowledgeable clusters’
of various kinds. These exert an irresistible attraction for large corporates, who become
‘knowledge supplicants’ as their in-house R&D becomes ineffective and inefficient. They
pay for, but no longer generate, leading-edge analytical knowledge for innovation.
6. As Owen-Smith and Powell (2004) show, ‘open science’ conventions in such clusters as
Cambridge–Boston ‘irrigate’ the milieu with knowledge spillovers, giving to some clus-
ters an element of ‘increasing returns’ from ‘spatial knowledge monopoly’ to a significant
degree.
7. National Institutes of Health (NIH) funding for medical and bioscientific research in
Cambridge–Boston was in excess of $1.1 billion by 2000, $1.5 billion by 2002 and $2.1
billion in 2003. Cooke (2004) shows that it exceeded all of California by 2002, and by
2003 the gap had widened to $476 million ($2021 million as against $1545 million). Interestingly, this
is a recent turnaround since the 1999 total of $770 million was marginally less than the
amount of NIH funding passing through the northern California cluster in 1999, a statis-
tic that only increased to $893 million in 2000. Thus Greater Boston’s supremacy is recent
but definitive. San Diego’s NIH income includes that earned by Science Applications
International Corporation. This firm is based in San Diego but performs most of its NIH
research outside its home base as a research agent for US-wide clients. Thus it warrants
mention but is excluded from totals calculated by this author. However, it is included
in the Milken Institute report ‘America’s Biotech and Life Science Cluster’ (June
2004), which ranks San Diego the top US cluster. This oversight seriously weakens its
claims for San Diego’s top US cluster position. Further reasons for rejecting the Milken
Institute’s ranking of San Diego first as well as inclusion of questionable research funds
are that the Institute deploys a spurious methodology based on research dollars per
metropolitan inhabitant to promote San Diego’s ranking. Finally, the research was com-
missioned by local San Diego interests (Deloitte’s San Diego) and excludes ‘big pharma’
funding, on which San Diego performs less than half as well as Boston (Table 5.1).
REFERENCES
Bazdarich, M. and J. Hurd (2003), Anderson Forecast: Inland Empire & Bay Area,
Los Angeles, CA: Anderson Business School.
Carlsson, B. (ed.) (2001), New Technological Systems in the BioIndustries: An
International Study, London: Kluwer.
Casper, S. and A. Karamanos (2003), ‘Commercialising science in Europe: the
Cambridge biotechnology cluster’, European Planning Studies, 11, 805–22.
Chesbrough, H. (2003), Open Innovation, Boston, MA: Harvard Business School
Press.
Coenen, L., J. Moodysson and B. Asheim (2004), ‘Nodes, networks and prox-
imities: on the knowledge dynamics of the Medicon Valley biotech cluster’,
European Planning Studies, 12, 1003–18.
Cooke, P. (2002), ‘Rational drug design and the rise of bioscience megacentres’,
presented at the Fourth Triple Helix Conference, ‘Breaking Boundaries –
Building Bridges’, Copenhagen and Lund, 6–9 November.
Cooke, P. (2003), ‘The evolution of biotechnology in three continents:
Schumpeterian or Penrosian?’, European Planning Studies, 11, 757–64.
Cooke, P. (2004), ‘Globalization of bioregions: the rise of knowledge capability,
receptivity and diversity’, Regional Industrial Research Report 44, Cardiff:
Centre for Advanced Studies.
Cooke, P. (2005), ‘Rational drug design, the knowledge value chain and bioscience
megacentres’, Cambridge Journal of Economics, 29 (3), 325–42.
De la Mothe, J. and J. Niosi (eds) (2000), The Economics & Spatial Dynamics of
Biotechnology, London: Kluwer.
Dorey, E. (2003), ‘Emerging market Medicon Valley: a hotspot for biotech affairs’,
BioResource, March, www.investintech.com, accessed 1 March 2004.
DTI (1999), Biotechnology Clusters, London: Department of Trade & Industry.
Feldman, M. and J. Francis (2003), ‘Fortune favours the prepared region: the case
of entrepreneurship and the Capitol Region biotechnology cluster’, European
Planning Studies, 11, 757–64.
Finegold, D., P. Wong and T. Cheah (2004), ‘Adapting a foreign direct investment
strategy to the knowledge economy: the case of Singapore’s emerging biotech-
nology cluster’, European Planning Studies, 12, 921–42.
Freeman, C. (1987), Technology Policy & Economic Performance: Lessons from
Japan, London: Pinter.
Fuchs, G. (ed.) (2003), Biotechnology in Comparative Perspective, London:
Routledge.
Gregersen, B. (1992), ‘The public sector as a pacer in National Systems of
Innovation’, in B.A. Lundvall (ed.), National Systems of Innovation, London:
Pinter, pp. 129–45.
Hall, P. and A. Markusen (eds) (1985), Silicon Landscapes, London: Allen &
Unwin.
Henderson, R., L. Orsenigo and G. Pisano (1999), ‘The pharmaceutical industry
and the revolution in molecular biology: interactions among scientific, institu-
tional and organisational change’, in D. Mowery and R. Nelson (eds), Sources
of Industrial Leadership, Cambridge: Cambridge University Press, pp. 99–115.
Johnson, J. (2002), ‘Valley in the Alps’, Financial Times, 27 February, p. 10.
Kaiser, R. (2003), ‘Multi-level science policy and regional innovation: the case
of the Munich cluster for pharmaceutical biotechnology’, European Planning
Studies, 11, 841–58.
Kaufmann D., D. Schwartz, A. Frenkel and D. Shefer (2003), ‘The role of location
Global bioregions 165
Schumpeter, J. (1975), Capitalism, Socialism & Democracy, New York: Harper &
Row.
Small Business Economics (2001), Special Issue: ‘Biotechnology in Comparative
Perspective – Regional Concentration and Industry Dynamics’ (guest editors:
Gerhard Fuchs and Gerhard Krauss), 17, 1–153.
Wickelgren, I. (2002), The Gene Masters, New York: Times Books.
Wolter, K. (2002), ‘Can the US experience be repeated? The evolution of biotech-
nology in three European regions’, Mimeo, Duisburg: Duisburg University.
Zeller, C. (2002), ‘Regional and North Atlantic knowledge production in the phar-
maceutical industry’, in V. Lo and E. Schamp (eds), Knowledge – the Spatial
Dimension, Münster: Lit-Verlag, pp. 86–102.
Zucker, L., M. Darby and J. Armstrong (1998), ‘Geographically localised knowl-
edge: spillovers or markets?’, Economic Inquiry, 36, 65–86.
6. Proprietary versus public domain
licensing of software and research
products
Alfonso Gambardella and Bronwyn H. Hall
1. INTRODUCTION
that encourage such coordination. A key idea of the chapter is that the General Public License (GPL) used in the provision of open source software is one such mechanism. We then discuss another limitation in
the production of this type of knowledge. To make it useful for commer-
cial or other goals, one needs complementary investments (e.g. develop-
ment costs). If the knowledge is freely available, there could be too many potential makers of such investments, which reduces the incentive of each of them to invest in the first place. Paradoxically, if the
knowledge were protected, its access would be more costly, which might
produce the necessary rents to enhance the complementary investments.
But protecting upstream knowledge has many drawbacks, and we argue
that a more effective solution is to protect the downstream industry prod-
ucts. Finally, we discuss how our framework and predictions apply to the
provision of scientific software and databases.
An example of the difference between free and commercial software
solutions that should be familiar to most economists and scientific
researchers is the scientific typesetting and word processing package TeX.1
This system and its associated set of fonts were originally the elegant invention of the Stanford computer scientist Donald Knuth, also famous as the author of The Art of Computer Programming, the first volume of which was published in 1968. Initially available on mainframes, and now widely
distributed on UNIX and personal computer systems, TeX facilitated the
creation of complex mathematical formulae in a word-processed manu-
script and the subsequent production of typeset camera-ready output. It
uses a set of text-based computer commands to accomplish this task rather
than enabling users to enter their equations via the graphical WYSIWYG
interface now familiar on the personal computer.2 Although straightfor-
ward in concept, the command language is complex and not easily learned,
especially if the user does not use it on a regular basis. Many academic users still write in raw TeX while working on a system with a graphical interface such as Windows, but there now exists a commercial program,
Scientific Word, which provides a WYSIWYG environment for generat-
ing TeX documents, albeit at a considerable price when compared to the
freely distributed original.
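To make the contrast concrete, here is a minimal illustrative fragment (our own example, not drawn from the chapter) of the text-based command style that TeX-family systems use for mathematics; a WYSIWYG front end such as Scientific Word generates commands of this kind behind a graphical interface:

```latex
% Typed by the user as plain text; the typeset output shows the formula.
\documentclass{article}
\begin{document}
The quadratic formula is
\[
  x = \frac{-b \pm \sqrt{b^{2} - 4ac}}{2a}.
\]
\end{document}
```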
This example illustrates several features of the academic provision of
software that we shall discuss in this chapter. First, it shows that there
is willingness to pay for ease of software use even in the academic world
and even if the software itself can be obtained for free. Second, the most
common way in which software and databases are supplied to the aca-
demic market is a kind of hybrid between academic and commercial,
where they are sold in a price-discriminatory way that preserves access for
the majority of scientific users. Such products often begin as open source
them (Von Hippel, 1988; Von Hippel and Von Krogh, 2003; Harhoff et
al., 2003); communities of technologists who coordinate to share their ‘col-
lective’ inventions, as opposed to keeping their knowledge secret (Allen,
1983; Nuvolari, 2004; Foray and Hilaire-Perez, 2005).
Like any individual, researchers gain utility from monetary income, but their utility also increases with the stock of public domain (PD) knowledge. Their benefits from this knowledge come from two sources: their own contributions and those of others. First, they enjoy utility from the
fact that they contribute to public knowledge. This is because they ‘like’
contributing to PD knowledge per se, or because they enjoy utility from
a larger stock of public knowledge and hence they wish to contribute to
its increase. There could also be instrumental reasons. Contribution to
public knowledge makes their research visible, which provides fame, glory
or potential future monetary incomes in the form of increased salary,
funding for their research, or consultancy. Second, the researchers gain
utility from the fact that others contribute to PD knowledge. Again this
could be because they care about the state of public knowledge. In addi-
tion, a greater stock of public knowledge provides a larger basis for their
own research, which implies that, other things being equal, they would like others to contribute to it.
We assume that the benefits from the contributions of other researchers
to public knowledge will be enjoyed whether one works under PD or in
the proprietary research (PR) regime. This implies that a researcher will
operate under PD if the benefit that she enjoys from her public contribu-
tion is higher than the foregone monetary income from not privatizing
her findings. In the Appendix we show that in equilibrium this is true of
all the researchers who operate under PD, while the opposite is true of the
researchers who operate under PR. In general, the equilibrium will involve
a share of researchers operating under PD or PR that is between 0 and
1. The first prediction of our analysis is, then, that the two regimes can
coexist, as we shall also see with some examples in the following sections.
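This equilibrium logic can be sketched numerically. The snippet below is our own stylized illustration, not the Appendix model: each researcher compares her benefit from a public contribution with the income foregone by not privatizing, heterogeneity across researchers yields an interior PD share, and a common rise in profit opportunities (the comparative static of the next paragraph) lowers that share. All numbers and distributions are assumptions made for illustration.

```python
import random

random.seed(0)

def regime_choice(pd_benefit, foregone_income):
    """A researcher operates under PD when the utility from her public
    contribution exceeds the monetary income foregone by not privatizing."""
    return "PD" if pd_benefit > foregone_income else "PR"

# Heterogeneous researchers: taste for public contribution and
# privatization income drawn at random (assumed distributions).
researchers = [(random.uniform(0, 1), random.uniform(0, 1))
               for _ in range(1000)]
choices = [regime_choice(b, w) for b, w in researchers]
pd_share = choices.count("PD") / len(choices)   # strictly between 0 and 1

# A common rise in profit opportunities shifts every foregone income up,
# lowering the equilibrium PD share.
boom = [regime_choice(b, w + 0.2) for b, w in researchers]
pd_share_boom = boom.count("PD") / len(boom)
```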
Our model also predicts that new profit opportunities common to all
the researchers in a field reduce the share of PD researchers in equilibrium,
while a stronger taste for research (e.g. because of particular systems of
academic values) raises it. There is fairly widespread evidence that in fields
like software or biotechnology there are pressures on academic research-
ers to place their findings in a proprietary regime. Also, our examples in
the later sections show that shifts from academic to commercial software
are more prominent when the market demand for the products increases,
which raises the profitability of the programming efforts. Finally, there are
several accounts of the fact that tension between industrial research and
academic norms becomes higher if university access to IPRs is increased
(Cohen et al., 1998; Hall et al., 2001; Hertzfeld et al., 2006; Cohen et al.,
2006). As these authors report, such tension has already been observed in the USA, which has pioneered the trend towards stronger IPRs and the use of intellectual property protection by universities, but it
is becoming more pronounced in Europe as well, as European universities
follow the path opened up by the US system (Geuna and Nesta, 2006).
Collins and Wakoh (1999) describe similar changes in Japan, and show
how the regime shift to patenting by universities is inconsistent with the
previous system of collaborative research with industry in that country,
implying increasing stress for the system.
Our model also shows that the only way to get a stable equilibrium con-
figuration with individuals operating under open sharing rules is when
there is coordination among them. Otherwise, the sharing (cooperative)
equilibrium tends to break down because some individuals find it in their
interest to defect. The instability of the open sharing equilibrium is just an
application of the famous principle by Mancur Olson (1971) that without
coordination collective action is hard to sustain. Our contribution is
simply to highlight that Olson’s insight finds application to the analysis of
the instability of open systems. When many researchers contribute to PD
knowledge, an individual deviation to PR is typically negligible compared
to the (discrete) jump in income offered by a proprietary regime. Thus,
individually, the researchers always have an incentive to deviate.
Another way to see this point is to note that some of the tensions that are
created in the open research systems can be attributed to the asymmetry
between the open and the proprietary mode. The researchers shift to pro-
prietary research only if it is individually profitable. By contrast, in the col-
lective production of knowledge, a desirable individual outcome depends
on the actions of others. In our framework this is because the individuals
care about the fact that others contribute to the stock of knowledge, and
because this may affect their benefits from their own contribution as well.
As we show in the Appendix, this creates situations in which the lack of
coordination produces individual incentives to deviate in spite of the fact
that collectively the researchers would like to produce under PD. The
intuition is that a group of individuals can produce a sizable increase in
the stock of public knowledge if they jointly deviate from the PR regime.
Thus, if there were commitment among them to stay within the PD rules,
they could be better off than with private profits. In turn, this is because
the larger the group of people who deviate in a coordinated fashion, the
higher the impact on the public knowledge good, while the private profits,
which do not depend so heavily on the collective action, are not affected
substantially by the joint movement of researchers from one regime to the
other. But even if they all prefer to stay with the PD system, because of the
larger impact of their PD contributions as a group, individually they have
an incentive to deviate because if the others stay with PD, the individual
deviation does not subtract that much from public knowledge, while it
does produce a discrete jump in the individual’s private income. Since
everyone knows that everyone else faces this tension, and could deviate, it
will be difficult to keep the researchers under the PD system unless some
explicit coordination or other mechanism is in place.
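The deviation incentive can be made concrete with a toy payoff calculation (our own illustrative numbers, not the Appendix model): each researcher values the public knowledge stock in proportion to the number of PD researchers, while defecting to PR yields a discrete private income.

```python
ALPHA = 0.1  # marginal utility of the public knowledge stock (assumed)
W = 0.5      # private income from operating under PR (assumed)
N = 10       # number of researchers (assumed)

def utility(others_in_pd, my_choice):
    """Utility of one researcher given how many OTHERS operate under PD."""
    stock = others_in_pd + (1 if my_choice == "PD" else 0)
    income = 0.0 if my_choice == "PD" else W
    return ALPHA * stock + income

# Collectively, all-PD beats all-PR: a coordinated group has a large
# impact on the public stock, while private income does not depend on it.
joint_pd = utility(N - 1, "PD")   # everyone contributes
joint_pr = utility(0, "PR")       # nobody contributes

# Individually, each researcher still gains by defecting from all-PD:
# her own deviation barely dents the stock but adds the income jump W.
defect = utility(N - 1, "PR")
```

In this sketch defection is individually dominant whenever W exceeds ALPHA, which is precisely Olson's point: only a commitment device binding the group to PD sustains the collectively preferred outcome.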
Ultimately, this asymmetry in the stability of the two configurations
suggests why there may be a tendency to move from public to private pro-
duction of knowledge, while it is much harder to move back from private
to public. The implication is that there is little need for policy if more
proprietary research is desirable, as the latter is likely to arise naturally
from individual actions. By contrast, policy or institutional devices that can sustain the right amount of coordination are crucial if the system underinvests in knowledge that is placed in the public domain.
The General Public License (GPL) used in open source software can
be an effective mechanism for obtaining the required coordination. As dis-
cussed by Lerner and Tirole (2002), inter alia, with a GPL the producer of
an open source program requires that all modifications and improvements
of the program be subject to the same rules of openness; most notably the
source code of all the modifications ought to be made publicly available
like the original program.4 To see how a GPL provides the coordination to
solve the Mancur Olson problem, imagine the following situation. There is
one researcher who considers whether to launch a new project or not. We
call her the ‘originator’. She knows that if she launches the project, others
may follow with additional contributions. The latter are the ‘contribu-
tors’. If the originator attaches a GPL to the project, the contributors can
join only under PD. If no GPL is attached, they have the option to priva-
tize their contribution. Of course, once (and if) the project is launched, the
contributors always have the option not to join the project and work on
some alternative activities. Given the expected behavior of the contribu-
tors, the originator will choose whether to launch the project or not. She
also has potential alternatives. If she decides to launch it, she will choose
whether to put her contribution under PD or PR, and if the former, she
considers whether to attach a GPL to the project. We can safely rule out
the possibility that the originator operates under PR and attaches a GPL
to the project. It would be odd to think that she could enforce open source behavior given that she does not abide by the same rules herself.
The key implication of a GPL is that it increases the number of con-
tributors operating under PD. The intuition, which we formalize in the
Appendix, is simple. Without a GPL the contributors have three choices:
work on the project under PD, or under PR, or not join because they have
better alternatives. PD contributors to the project will still choose PD if a
GPL is imposed. If they preferred PD over both PR and other alternatives,
they will still prefer PD if the PR option is ruled out. Those who did not
join the project will not join with a GPL either. They preferred their alter-
natives over PD and PR, and will still prefer them if PR is not an option.
Finally, some of those who joined under PR will join under PD instead,
while others who joined under PR will no longer join the project. As a
result, a GPL reduces the total number of researchers who join the project,
but raises the number of researchers working under PD. The reduced
number of participants is consistent with the fact that the GPL is a restric-
tion on the behavior of the researchers. However, this is a small cost to
the public diffusion of knowledge because those who no longer participate
would not have joined under PD. By contrast, the GPL encourages some
researchers who would not have published their results without the GPL
to do so.
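This filtering effect of the GPL on contributors can be sketched as a simple choice over options (again our own stylized illustration, with assumed utility draws): a GPL removes the PR option from the menu, and each contributor picks the best remaining one.

```python
import random

def choose(u_pd, u_pr, u_alt, gpl=False):
    """A contributor picks the option with the highest utility;
    a GPL removes the proprietary (PR) option from the menu."""
    options = {"PD": u_pd, "ALT": u_alt}   # ALT = work on something else
    if not gpl:
        options["PR"] = u_pr
    return max(options, key=options.get)

random.seed(1)
contributors = [(random.random(), random.random(), random.random())
                for _ in range(1000)]

no_gpl = [choose(*u) for u in contributors]
with_gpl = [choose(*u, gpl=True) for u in contributors]

# The GPL raises the number of PD contributors (former PR contributors
# who still prefer the project to their alternative switch to PD) ...
pd_gain = with_gpl.count("PD") - no_gpl.count("PD")
# ... while the total number joining the project falls (former PR
# contributors who prefer their alternative drop out).
joiners_no_gpl = len(no_gpl) - no_gpl.count("ALT")
joiners_gpl = len(with_gpl) - with_gpl.count("ALT")
```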
Given the behavior of the contributors, will the originator launch the
project and issue a GPL? We know that the originator, like any other
researcher, enjoys greater utility from a larger size of the public knowl-
edge stock. At the same time, she enjoys utility from monetary income
or, as we noted, from alternative projects. Here we want to compare her
choice when she can employ a GPL vis-à-vis a world in which there is no
GPL. With a GPL she knows that the number of contributors to public
knowledge increases, which in turn increases the size of the expected public
knowledge stock when compared to a no-GPL case. As a result, when
choosing whether to launch the project under PD with a GPL, under PD
and no GPL, under PR, or work on alternative projects, she knows that
the GPL choice raises the future public knowledge stock in the area while leaving her monetary income from the project and her utility from alternatives unchanged. This makes it more likely that the originator will choose to work
on the project under PD cum GPL. More generally, a GPL will increase
the number of projects launched under PD and the size of the public
knowledge contributions.
To summarize, the way the GPL works is by giving rise to an implicit
coordination among a larger number of researchers to work on PD. The
originator knows that there will be researchers who would prefer PR but
choose PD if the former opportunity is not available, while all those who
would choose PD will stick to it in any case. This enlarges the number of
expected PD researchers, thereby making the PD choice more advantageous. Our intuition is that those with a strong taste for PD research will
always work under PD, whether there is a GPL or not. By contrast, those
with a high opportunity cost will never join the project. But those who
have a small opportunity cost, and a weak taste for PD research, might
contribute via PD if a GPL is introduced. The GPL then lures people
who are on the border between doing PD research on the project or not.
For example, a GPL may be crucial to enhance the participation under
PD of young researchers, who do not have significant opportunity costs
(e.g. because they do not yet have high external visibility), but who do not
have a strong taste for PD research either, and hence would privatize their
findings if it were profitable to do so. There might also be dynamic implications; for example, the GPL may help young researchers 'acquire' a taste for PD research. This might help create a system of norms and values for
public research that could sustain the collective action. We leave a more
thorough assessment of such dynamic implications to future research.
the project. When the projects are in new areas, the opportunities of the
individuals may change substantially, and the researchers who might
profit the most from the new projects can be different from those who
benefited in the old projects. New skills, or new forms of learning are nec-
essary in the new fields, and the people who have made substantial invest-
ments in the old projects may have greater difficulties in the new areas
(see, e.g. Levinthal and March, 1993). In these cases, researchers with low
opportunity costs may instead find that they have great opportunities to
commercialize knowledge in the new fields (high private rewards). Thus
the GPL is more likely to be a useful coordination device when the project
is in a new field rather than an incrementally different one from previous
projects, and when it is socially desirable to run these projects under PD.
Our mechanism relies on the fact that there is enforcement of the GPL.
But can the copyleft system be enforced? In some settings people seem to
abide by the copyleft rules, as Lerner and Tirole (2002) have noted, in spite
of the lack of legal enforcement. In many situations, there may be a repu-
tation effect involved when the copyleft agreement is not complied with. In
this respect, the reason why a copyleft license may be useful is that without
it, it may not be clear to the additional contributors whether the intention
of the initial developers of the project was to keep it under PD or not.
But if the will is made explicit, deviations may be seen as an obvious and
explicit challenge to the social norms, and this may be sanctioned by the
community. The GPL then acts as a signal that removes potential ambiguities about individual behavior and respect for social norms.
Even in science, if a researcher develops a certain result, others may build
on it, and privatize their contributions. This might be seen as a deviation
from the social norms. This behavior could be sanctioned, depending on how strongly the norms of open science are embedded in and pursued by the community. But with no explicit indication that the original contributor did not want future results from her discoveries to be used for private purposes, the justification for such sanctions, or the need for them, is more ambiguous.
A GPL removes ambiguity about the original intentions of the develop-
ers, and any behavior that contradicts the GPL is more clearly seen as improper. This reduces privatization of future contributions compared to a
situation with no GPL, increases the expectations that more researchers
will make their knowledge public, and, other things being equal, creates
greater incentives to make projects public in the first place. It is in this
respect that we think that explicit indications of the norms may be a
stronger signal than the mere reliance on the unwritten norms of open
science or open source software.
A related point is that the literature has typically been concerned with
the need to protect the private property of knowledge when this is neces-
sary to enhance the incentives to innovate. The inherent assumption is that
when it is not privately protected, the knowledge is by default public, and
it enriches the public domain. Yet our model points out that this is not
really true. The public nature of knowledge needs itself to be protected
when commitments to the production of knowledge in the public domain
are socially desirable. In other words, there is a need for making it explicit
that the knowledge has to remain public, and this calls for positive actions
and institutions to protect it. Not allowing for private property rights on
some body of knowledge is not equivalent to assuming that the knowledge
will be in the public domain. One may then need to assign property rights
not just to private agents, but also to the public. For example, IPRs are typically thought of as property rights assigned to private agents. But we also
need to have institutions that preserve the public character of knowledge.
The copyleft license is a beautiful example of this institutional device. A
natural policy suggestion is therefore to make it as legal and enforceable as copyright, patents and other private-based IPRs.
profits in order to carry out the investments that are necessary to produce
the complementary downstream assets of the good. Since the downstream
assembling agents are typically firms, we now refer to them as such. There
are two issues. First, the firm needs to obtain some economic returns to
finance its investment. Clearly, there are many ways to moderate its poten-
tial monopoly power so that the magnitude of the rents will be sufficient
to make the necessary investments but not high enough to produce serious
extra-normal profits. However, it would be difficult for the firm to obtain
such rents if it operated under perfect competition, or if it operated under
an open, public domain system itself.
The second issue is more subtle. The firm uses the public domain contri-
butions of the individual agents (software programmers, scientists etc.) as
inputs in its production process. If these contributions are freely available
in the public domain, and particularly if they are not available on an exclu-
sive basis, many downstream firms can make use of them. As a result, the
downstream production can easily become a free-entry, perfectly competitive world, with many firms having access to the widely available knowledge inputs. If so, no single firm could earn enough rents to carry out the
complementary investments. This would be even harder for the individual
knowledge producers, who are normally scattered and have no resources
to cover the fixed set-up costs for the downstream investments. The final
implication is that the downstream investments will not be undertaken, or
they will be insufficient. Of course, there can be other factors that would
provide the firms with barriers to entry, thereby ensuring that they can
enjoy some rents to make their investments. However, in industries where the knowledge inputs are crucial (e.g. software), the inability to
use them somewhat exclusively can generate enough threats of wide-
spread entry and excessive competition to discourage the complementary
investments.
Paradoxically, if the knowledge inputs were produced under proprietary
rules, their producers could charge monopoly prices (e.g. because
they could obtain an exclusive license), or at least enjoy some positive
price cost margins. This raises the costs of the inputs. In turn, this height-
ens barriers to entry in the downstream sector, and adjusts the level of
downstream investment upward. In other words, if the inputs are freely
available, there could be excessive downstream competition, which may
limit the complementary investments. If they are offered under proprietary
rules, the costs of acquiring the inputs are higher, which curbs entry and
competition, and allows the downstream firms to make enough rents to
carry out such investments.5
But the privatization of the upstream inputs has several limitations.
For one, as Heller and Eisenberg (1998) have noted, the complementarity
more in order to offer the good to others at lower prices, thus increasing
the overall quantity supplied. The problem with applying this mechanism
generally is the difficulty of segmenting the markets successfully and of
preventing resale.
In the case of academic software and databases, however, it is quite
common for successful price-discriminating strategies to be pursued.8 There
are several reasons for this: (1) segmentation is fairly easy because academ-
ics can be identified via addresses and institutional web information; (2)
resale is difficult in the case of an information product that requires signing
on to use it, and is also probably not very profitable; (3) the two markets (academic and commercial) have rather different tastes and attitudes toward
technical support (especially towards the speed with which it is provided),
so the necessary price discrimination is partly cost-based.
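A back-of-the-envelope example (our own hypothetical willingness-to-pay numbers, not data from the chapter) shows why segmenting the academic and commercial markets both raises revenue and preserves academic access:

```python
def revenue(price, wtp):
    """Revenue from posting one price to buyers with given willingness to pay."""
    return price * sum(1 for w in wtp if w >= price)

academic = [100, 150, 200, 250]     # hypothetical willingness to pay
commercial = [800, 1000, 1200]

# Best single price over the pooled market: the seller targets the
# commercial segment and prices academics out entirely.
pooled = academic + commercial
uniform = max(revenue(p, pooled) for p in pooled)        # 2400 at p = 800

# With segmentation (feasible because academics are easy to identify
# and resale is hard), each market gets its own price.
segmented = (max(revenue(p, academic) for p in academic)
             + max(revenue(p, commercial) for p in commercial))  # 2850
```

At the best uniform price no academic buys; under segmentation most academic users are served at a far lower price, matching the price-discriminatory hybrid described above.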
language and enables the use of a variety of modeling, estimating and fore-
casting methods on datasets of varying magnitudes. Most of these packages
are now available for use on personal computers, although their origins are
often a mainframe computer implementation. For a complete history of the
development of this software, see Renfro (2003).
Like most software, econometric software can be protected via various
IP measures. The most important is a combination of copyright (for the
specific implementation in source code of the methods provided) and trade
secrecy (whereby only the ‘object’ code, or machine-readable version of
the code, is released to the public). This combination of IP protection has
always been available but has only become widely used during the personal
computer era. Before that time, distributors of academic software usually
provided some form of copyrighted source code for local installation on
mainframes, and relied on the fact that acquisition and maintenance were
performed by institutions rather than a single individual to protect the
code. This meant that the source code could be modified for local use, but
because the size of the potential market for ‘bootleg’ copies of the source
was rather small, piracy posed no serious competitive threat. The advent
of the personal computer, which meant that in many cases software was
being supplied to individuals rather than institutions, changed this situa-
tion, and today the copyright-trade secrecy model is paramount.9 Thus it
is possible to argue that developments in computing have made the avail-
able IP protection in the academic software sector stronger at the same
time that the potential market size grew, which our model implies will lead
to more defection from public domain to proprietary rules.
In Table 6.1, we show some statistics for the 30 packages identified by
Renfro. The majority (20 of the 30) have their origins in academic research,
either supported by grants or, in many cases, written as a by-product of
thesis research on a student’s own time.10 A further five were written
specifically to support the modeling or research activities of a quasi-
governmental organization such as a central bank. Only five were written
with a specific commercial purpose in mind. Two of those five were forks
of public domain programs, and in contrast to those of academic origin
(whose earliest date of introduction was 1964 and whose average date was
1979), the earliest of the commercial programs was developed in 1981/82,
a date that clearly coincides with the introduction of the non-hobbyist
personal computer. Notwithstanding the academic research origin of
most of these packages, today no fewer than 25 out of the 30 have been
commercialized, with an average commercialization lag of nine years.
Reading the histories of these packages supplied in Renfro (2003), it
becomes clear that although many of them had more than one contribu-
tor, normally there was a ‘lead user’ who coordinated development, the
Indeed, the second statement suggests that one reason to leave the soft-
ware in the public domain was that the researcher’s commercial profits
were not large enough. Likewise, the third statement suggests that the
researcher cared about research and this was an important reason for not
privatizing it. This suggests that the individual placed a relatively low utility on commercial profits vis-à-vis his preference for research, which in turn affected his choice to stay public. In sum, the
model’s prediction that both private and public modes of provision can
coexist when at least some individuals adhere to community norms is
borne out, at least for one example.
We also discussed explicitly the role of complementary services or
enhanced features for non-inventor users in the provision of software.
This is clearly one of the motivations behind commercialization, as was
illustrated by the example of TeX. Table 6.2, which is drawn from data in
Renfro (2004), attempts to give an impression of the differences between
commercialized and non-commercialized software, admittedly using a
rather small sample. To the extent that ease of use can be characterized
by the full WIMP interface, there is no difference in the average perform-
ance of the two types of software. The main difference seems to be that the
commercialized packages are larger and allow both more varied and more
complex methods of interaction. Note especially the provision of a macro
facility to run previously prepared programs, which occurs in 84 percent
of the commercial programs, but only in two out of the five free programs.
Such programs are likely to require more user support and documentation
because of their complexity, which increases the cost of remaining in the
PD system. In short, as our earlier discussion suggested, a commercial
operation, which is likely to imply higher profits, also provides a greater
degree of additional investments beyond the mere availability of the
research inputs.
To summarize, the basic prediction of our model, namely that participants
in an open science community will defect to the private (IP-using) sector
when profit opportunities arise (e.g. the final demand for the product
grows, or IP protection becomes available), is confirmed by this example.
We also find some support for the hypothesis that commercial
operations are likely to undertake more complementary investments than
pure open source operations. We do not find widespread use of the GPL
idea in this particular niche market yet, although use of such a license
could evolve. In the broader academic market, Maurer (2002) reports that
a great variety of open source software licenses is in use, both viral (GPL,
LPL) and non-viral (BSD, Apache-CMU).
Finally, our model in Section 3 does not explicitly incorporate all the
factors that are clearly important in the case of software and databases.
Specifically, one area seems worthy of further development. We did not
model the competitive behavior of the downstream firms in the database
and software industries. In practice, in some cases, there is competition to
supply these goods, and in others, it is more common for the good to be
supplied at prices set by a partially price-discriminating monopolist. We
report the evidence on price discrimination for our sample briefly here.
Table 6.3 presents some very limited data for our sample of 30 econo-
metric software packages. Of the 30, five are distributed freely and a
further eight are distributed as services, possibly bundled with consulting
(such sales are essentially all commercial); this is the ‘added value’ business
model discussed earlier.

Table 6.3  Sample of 30 econometric software packages

    Sold as a service     8
    Free                  5
    Total                30

Of the remaining 17, we were able to collect data
from their websites for 15. Of these, only two did not price-discriminate;
three discriminated by the size and complexity of the problem that can be
estimated, and ten by the type of customer, academic or commercial.12 A
number of these packages were also offered in ‘student’ versions at sub-
stantially lower prices, segmenting the market even further. This evidence
tends to confirm that in some cases, successful price discrimination is fea-
sible and can be used to serve the academic market while covering some of
the fixed costs via the commercial market.
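As a rough illustration of this logic, the following sketch shows a discriminating seller serving the academic segment at a price that, on its own, would not cover fixed costs. All numbers are invented for illustration; only the 1.7 commercial-to-academic price ratio is taken from the sample (see note 12).

```python
# Illustrative sketch of price discrimination covering fixed costs.
fixed_cost = 300_000.0      # development and support costs (assumed)
marginal_cost = 10.0        # cost of serving one extra licence (assumed)

p_comm = 1_000.0            # commercial price (assumed)
p_acad = p_comm / 1.7       # academic price, using the observed 1.7 ratio
n_comm, n_acad = 150, 400   # customers in each segment (assumed)

acad_margin = n_acad * (p_acad - marginal_cost)
comm_margin = n_comm * (p_comm - marginal_cost)
profit = acad_margin + comm_margin - fixed_cost

# The academic segment alone would not cover the fixed cost, yet the
# discriminating seller can still serve it while remaining profitable.
assert acad_margin < fixed_cost and profit > 0
```

The design point is the one made in the text: eliminating discrimination (a single uniform price) would either price academics out or leave fixed costs uncovered.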
Although price discrimination is widely used in these markets, it does
have some drawbacks as a solution to the problem of software provision.
The most important is that features, or even entire programs, that matter
to academics may fail to be provided or maintained where the commercial
market is very small or absent, because academics’ willingness to pay for
them is much lower. Obviously this is
not a consequence of price discrimination per se, but simply of low will-
ingness to pay; the solution is not to eliminate price discrimination, but
to recognize that PD production of some of these goods is inevitable. For
example, a database of elementary particle data has been maintained by
an international consortium of particle physicists for many years. Clearly
such a database has little commercial market.
7. CONCLUSIONS
It’s free because Knuth chose to make it so. He is nevertheless apparently happy
that others should earn money by selling TeX-based services and products.
While several valuable TeX-related tools and packages are offered subject to
restrictions imposed by the GNU General Public License (‘Copyleft’), TeX
itself is not subject to Copyleft. (http://www.tug.org)
Thus part of the reason for the spread of TeX and its use by a larger
number of researchers than just those who are especially computer-
oriented is the fact that the lead user chose not to use the GPL to enforce
the public domain, enabling commercial suppliers of TeX to offer easy-to-
use versions and customer support.
The so-called ‘lesser’ GPL (LGPL) or other similar solutions can in
part solve the problem. As discussed by Lerner and Tirole (2002), among
others, the LGPL and analogous arrangements make the public domain
requirement less stringent. They allow for the mixing of public and
private codes or modules of the program. As a result, the outcome of the
process is more likely to depend on the private incentives to make things
private or public, and this might encourage the acquisition of rents in the
downstream activities. But following the logic of our model, as we allow
for some degree of privatization, the efficacy of the license as a coordina-
tion mechanism is likely to diminish. We defer to future research a more
thorough assessment of this trade-off. Here, however, we want to note
that when the importance of complementary investments is higher, one
would expect the LGPL to be socially more desirable. The benefits of having
the downstream investments may offset the disadvantage of a reduced
coordination in the production of the public good. By contrast, when such
investments are less important, or the separation between upstream and
downstream activities can be made more clearly (and hence one can focus
the GPL only on the former), a full GPL system is likely to be socially
better.
ACKNOWLEDGMENTS
Conversations with Paul David on this topic have helped greatly in clarify-
ing the issues and problems. Both authors acknowledge his contribution
with gratitude; any remaining errors and inconsistencies are entirely our
responsibility. We are also grateful to Jennifer Kuan for bringing some of
the open source literature to our attention.
This chapter was previously published in Research Policy, Vol. 35, No. 6,
2006, pp. 875–92.
NOTES
1. This brief history of TeX is drawn from the TeX Users’ Group website, http://www.tug.
org. In giving a simplified overview, we have omitted the role played by useful programs
based on TeX such as LaTeX, etc. See the website for more information.
2. WYSIWYG is a widely used acronym in computer programming design that stands for
‘What You See Is What You Get’.
3. We can subsume both cases as instances of ‘patronage’ – self-patronage of the donated
efforts is a special case of this. See David (1993) and Dasgupta and David (1994).
4. There are many variants of a GPL, with different possibilities of privatizing future
contributions. See, for example, Lerner and Tirole (2005). However, in this chapter we
want to focus on some broad features of the effect of a GPL as a coordinating device,
and therefore we simply consider the extreme case in which the GPL prevents any pri-
vatization of the future contributions.
5. This argument should be familiar as it is the same as the argument used by some
to justify Bayh–Dole and the granting of exclusive licenses for development by
universities.
6. The usual commercial web-based provision of data is based on a model where the user
constructs queries to access individual items in the database, like looking up a single
word in the dictionary. The pricing of such access reflects this design and is ill suited (i.e.
very costly) for researcher use in the case where research involves studying the overall
structure of the data.
7. This can be a real cost. The US Patent Office, which provides a large patent database
free to the public at large on its web server, has a notice prominently posted on the
website saying that use of automated scripts to access large amounts of these data is
prohibited and will be shut down, because of the negative impact this has on individuals
making live queries.
8. Another type of academic information product deserves mention here, academic jour-
nals. The private sector producers of these journals face the same type of cost struc-
ture and have pursued a price discrimination strategy for many years, discriminating
between library and personal use, and also among the income levels of the purchasers
in some cases, where income level is proxied by country of origin.
9. In principle, in the aftermath of the (1981) Diamond v. Diehr decision, patent protec-
tion might also be available for some features of econometric software. In this area, as
in many other software areas, there is tremendous resistance to this idea on the part of
existing players, perhaps because they are well aware of the nightmare that might ensue
if patent offices were unacquainted with prior art in econometrics (as is no doubt cur-
rently the case).
10. Unfortunately, it is not possible to identify precisely the nature of the seed money
support for many of the packages from the histories supplied in Renfro (2003), other
than the simple fact that the development took place at a university.
11. This quotation is from Hermann Bierens’s website at http://econ.la.psu.edu/~hbierens/
EASYREG.HTM.
12. The average ratio of commercial to academic price was 1.7. Assuming an iso-elastic
demand curve with elasticity η and letting s = share of commercial (high-demand)
customers, one can perform some very rough computations using the relationship
ΔQ/Q = −η ΔP/P, or (1 − s) = 0.7η. If η = 1, then the implied share of academic
customers is 70 percent. If the share of academic customers is only 30 percent, then
the implied demand elasticity is about 0.43.
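The rough computation in this note can be checked numerically (iso-elastic demand assumed, as in the note itself):

```python
# Numerical check of note 12's back-of-the-envelope computation.
price_ratio = 1.7               # average commercial/academic price
dp_over_p = price_ratio - 1.0   # proportional price gap, about 0.7

def academic_share(eta):
    # (1 - s) = eta * 0.7, where s is the commercial share
    return eta * dp_over_p

def implied_eta(share):
    # invert the relationship to recover the demand elasticity
    return share / dp_over_p

assert abs(academic_share(1.0) - 0.70) < 1e-9   # eta = 1 -> 70 per cent
assert 0.40 < implied_eta(0.30) < 0.45          # share 30% -> eta ~ 0.43
```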
REFERENCES
Figure 6A.1  F(n) plotted against n/N: one stable equilibrium (E) (left
panel) or two stable equilibria (E1 and E2) (right panel)
under PD after the move, i.e. (ne − 1)/N. Hence the move is not profitable.
Similarly, whenever an individual moves from PR to PD in equilibrium,
the share of researchers with p ≤ x(ne + 1) becomes smaller than the share
of researchers who now work under PD, i.e. (ne + 1)/N. The stability
conditions are then F(0|ne − 1) > (ne − 1)/N and F(0|ne + 1) < (ne + 1)/N.
Multiple equilibria are also possible. There may be more than one ne that
satisfies (1) with F(n) cutting n/N from above, as shown by Figure 6A.1.
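The fixed-point condition and the stability criterion can be illustrated with a small numerical sketch. The S-shaped form of F and all parameter values below are assumptions chosen for illustration, not taken from the chapter:

```python
import math

# An equilibrium share of public-domain (PD) researchers satisfies
# F(n_e) = n_e / N, where F(n) is the share of researchers whose private
# payoff p is at most the public payoff x(n).
N = 100

def F(n):
    # assumed S-shaped share, increasing in n (illustrative only)
    return 1.0 / (1.0 + math.exp(-(n - 40) / 8.0))

g = [F(n) - n / N for n in range(N + 1)]

# Equilibria sit at sign changes of g; an equilibrium is stable where
# F cuts n/N from above (g switches from positive to negative).
equilibria = [(n, g[n] > 0 > g[n + 1])
              for n in range(N) if g[n] * g[n + 1] < 0]
```

With these parameters the sketch finds two stable equilibria separated by an unstable one, the configuration shown in the right panel of Figure 6A.1.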
The share of researchers working under PD decreases if the economic
profitability of research increases relatively to the researchers’ utility from
their public contributions. This can be thought of as a first-order stochas-
tic downward shift in F(·) which would stem from an increase in p − x(n)
for all the individuals. Likewise, a stronger taste for research would be
represented by an upward shift in F as p − x(n) decreases for all the indi-
viduals. This raises ne.
Given the behavior of the contributors, will the originator who launches
the project working under PD issue a GPL? With PD-GPL his utility will
be x(nG + 1) + qX(nG). With PD and no GPL it will be x(nNG + 1) + qX(nNG).
By using the fact that x(n) ≡ X(n) − X(n − 1), the former will be higher
than the latter if X(nG + 1) − (1 − q)X(nG) ≥ X(nNG + 1) − (1 − q)X(nNG). A
sufficient condition for this inequality to hold is that q ≥ 1. This follows
from nG ≥ nNG and the fact that X(n) increases with n, which in turn follows
from x(n) ≥ 0. Thus, if the originator chooses to work under PD, setting
a GPL will be a dominant strategy unless q is close to zero (i.e. the impact
of the others’ behavior is not that important) and some special conditions
occur. For simplicity, we assume that q is large enough, and therefore
choosing a GPL always dominates when the originator chooses PD.
If the originator chooses PR, his utility will be p + qX(nNG). As a result,
the originator will choose to work on the project under PD (and issue
a GPL) if x(nG + 1) + q ∙ X(nG) ≥ B, and x(nG + 1) + q ∙ X(nG) ≥ p + q ∙
X(nNG). If there were no GPL, the condition would be the same with nNG in
lieu of nG. Since nG ≥ nNG, with no GPL the condition becomes more restric-
tive. As a result, the possibility to use a GPL implies not only that more
researchers will join under PD, but also that more projects will be launched
under PD with a GPL.
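The originator's comparison can be sketched numerically; the functional forms and parameter values below are illustrative assumptions only:

```python
# Sketch of the originator's choice among PD with a GPL, PD without
# a GPL, and PR. All functional forms and parameters are assumed.
def X(n):
    # cumulative public output, increasing in n (assumed concave)
    return n ** 0.5

def x(n):
    # marginal contribution: x(n) = X(n) - X(n - 1) >= 0
    return X(n) - X(n - 1)

q, p = 1.0, 0.8       # taste for public output (q >= 1) and private payoff
nG, nNG = 30, 20      # contributors with / without a GPL (nG >= nNG)

u_pd_gpl = x(nG + 1) + q * X(nG)       # PD with a GPL
u_pd_nogpl = x(nNG + 1) + q * X(nNG)   # PD without a GPL
u_pr = p + q * X(nNG)                  # PR (privatize)

# With q >= 1 and nG >= nNG, PD with a GPL dominates PD without one,
# matching the sufficient condition derived in the text.
assert u_pd_gpl >= u_pd_nogpl
```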
As discussed in the text, the GPL is least effective when there is a strong
positive correlation between B and p. This implies that many individuals
with small B tend to have a small p as well. As a result, the restriction p ≤
x associated with B ≤ x does not restrict the set of PD researchers much
more than B ≤ x alone, which means that nG is close to nNG, and the addi-
tional set of PD researchers created by the GPL is not large. In turn, this
implies that the GPL does not encourage a more intensive coordination
than without it.
The GPL raises the number of contributors working under PD in spite
of the fact that the total number of contributors to the project decreases.
To see this, assume for simplicity that x is roughly constant with respect
to n, so that x(nNG) ≈ x(nG) ≡ x. Consider Table 6A.1, which shows that,
with a GPL, some researchers who joined under PR switch to PD, while
the opposite is not true. The researchers who no longer join the project
with the GPL are only those who joined under PR. Thus they do not affect
n in equilibrium.
PART II
In electrical engineering about every third student starts his own company.
In our department [computer science] it’s starting as well. That’s a change in
student behaviour and faculty acceptance because the faculty are involved in
companies and interacting a lot with companies and the attitude is . . . we talk
to them, we teach them. Why not try it . . . this is my experience.
In science you kind of sit down and you share ideas . . . There tends to be a
very open and very detailed exchange. The business thing when you sit down
with somebody, the details are usually done later and you have to be very
careful about what you say with regard to details because that is what business
is about: keeping your arms around your details so that you can sell them to
somebody else, otherwise there is no point.
Entrepreneurial Scientists
[Figure: the evolution from quasi-firm research group to high-tech growth
firm, combining university input and industry contribution]
The formation of firms out of research activities occurred in the late nine-
teenth century at Harvard, as well as at MIT, in the fields of industrial con-
sulting and scientific instrumentation (Shimshoni, 1970). However, these
commercial entities were viewed as anomalies rather than as a normal
outcome of academic research. In recent decades, an increasing number of
academic scientists have taken some or all of the steps necessary to start
a scientific firm by writing business plans, raising funds, leasing space,
recruiting staff and so on (Blumenthal et al., 1986a; Krimsky et al., 1991).
These studies probably underestimate the extent of faculty involvement,
especially in molecular biology. For example, in the biology department at
MIT, where surveys identified half the faculty as industrially involved in
the late 1980s, an informant could identify only one department member
as uninvolved at the time.
While the model of separate spheres and technology transfer across
strongly defined boundaries is still commonplace, academic scientists are
often eager and willing to marry the two activities, nominally carrying
out one in their academic laboratory and the other in a firm with which
they maintain a close relationship. Thus, technology transfer is a two-way
flow from university to industry and vice versa, with different degrees and
forms of academic involvement:
The amount of money from industry is a pittance in the total budget, therefore
everybody’s wasting their time to try to improve it . . . it’s still a drop in the
bucket . . . we were running about 3 per cent total in our dept. We do value our
industrial ties . . . have good friends, interact strongly with them in all kinds of
respects and . . . the unrestricted money is invaluable, you wouldn’t want to lose
a penny of it and would like to increase it a lot, but its impact vis-à-vis federal
funding is almost non-existent.
It’s also motivating for us to try to identify things that we do that may be
licensable or patentable and to make OTL [Office of Technology and Licensing]
aware of that because according to University policy, 30 per cent of the money
comes back to the scientist, 30 per cent comes back to the Department as well
as 30 per cent for the University. So, almost all the computing equipment and
money for my post docs have been funded by the work that we did. So there’s
motivation.
firm formation: ‘It got to the point where I was making money consulting
and needed some sort of corporate structure and liability insurance; so I
started [the company] a couple of years ago. From me [alone, it has grown]
to eight people. We’re still 70 per cent service oriented, but we do produce
better growth media for bacteria and kits for detecting bacteria.’ The firm
was built, in part, on the university’s reputation but was symbiotic in that
its services to clients brought them into closer contact with on-campus
research projects.
In another instance, an attempt was made to reconcile the various
conflicting interests in firm formation and make them complementary
with each other by the university having some equity in the company and
holding the initial intellectual property rights.
Despite the integrated mode arrived at, some separation, worked out on
technical grounds, was still necessary to avoid conflicts.
There is no line. It’s just a complete continuum. It is true that I have a notebook
that says [university name] and a notebook that says [firm name] and if I make
an invention in the [company] notebook then the assignment and the exclusive
license goes to [the firm] and if I make an invention in the university notebook
then the government has rights to the invention because they are funding the
work. [Interviewer: How do you decide which notebook you are going to write
in?] We have ways of dividing it up by compound class. In the proposals that
I write to the government I propose certain compound classes. There is no
overlap between the compound classes that we work on campus and the com-
pound classes that we work on off campus so there is a nice objective way of
distinguishing that.
NOTE
Data on university–industry relations in the USA are drawn from studies conducted by the
author with the support of the US National Science Foundation. More than 100 in-depth
interviews were conducted with faculty and administrators at universities, both public and
private, with long-standing and newly emerging industrial ties.
REFERENCES
Rahm, Diane (1996), R&D Partnering and the Environment of U.S. Research
Universities, Proceedings of the International Conference on Technology
Management: University/Industry/Government Collaboration, Istanbul:
Bogazici University.
Shimshoni, D. (1970), ‘The mobile scientist in the American instrument industry’,
Minerva, 8 (1), 59–89.
8. Multi-level perspectives:
a comparative analysis of national
R&D policies
Caroline Lanciano-Morandat and Eric Verdier
Relevant dimensions of the four institutional settings:

I. Republic of Science
   Overriding principle: the progress of science and research ethic
   Level of state regulation: discipline (local faculty)
   Governance of the public–private relationship: independence of academic communities
   Organizational architecture: academia (faculties)
   Category of mediating actors: renowned scientific personalities
   Key competencies: disciplinary knowledge
   Incentive institution: peer evaluation (disclosure and priority norms)
   Funding institution: public grants and individual fees
   Labour institution: occupational labour markets

II. The state as an entrepreneur
   Overriding principle: state service, national interest
   Level of state regulation: national
   Governance of the public–private relationship: control by central state (ministry or agency)
   Organizational architecture: large programme (hierarchical management and organization)
   Category of mediating actors: managerial and political elites
   Key competencies: meritocratic excellence
   Incentive institution: power over scientific and industrial development
   Funding institution: public subsidies and government orders
   Labour institution: public and private internal markets

III. The state as a regulator
   Overriding principle: market, shareholder value
   Level of state regulation: regional
   Governance of the public–private relationship: co-determination of the entrepreneurial university and firms
   Organizational architecture: contract (negotiation between individuals or organizations)
   Category of mediating actors: individual mobility of scientists between the private and public spheres
   Key competencies: operational versatility of individuals
   Incentive institution: property rights, patents and profit-sharing
   Funding institution: joint contribution of higher education and firms
   Labour institution: external labour markets

IV. The state as a facilitator
   Overriding principle: project, technological creativity
   Level of state regulation: multi-level integration (‘Europe’)
   Governance of the public–private relationship: delegation of responsibility for technico-scientific coordination
   Organizational architecture: network (interaction and alignment within the network)
   Category of mediating actors: diversity of actors as intermediaries between university and firms
   Key competencies: interdisciplinary skills and the ability to cooperate
   Incentive institution: salary increases and stock options
   Funding institution: multiplicity of sources and levels of financing
   Labour institution: labour markets peculiar to networks
In the past ten years the literature on the economics of science and innova-
tion has emphasized the importance of interactions between the different
partners in scientific and technical production: government higher educa-
tion or research institutions, firms with their own R&D capacities, and
weak technology’ (Barré and Papon, 1998) – which inspired the 12 July
1999 blueprint law on research. With its related measures concerning
innovation, this law was designed to ease statutory constraints, to develop
incubators and to facilitate access to venture capital, in order to promote
the development of high-tech companies based on public research results.
In this perspective, INRIA (National Institute of Computer Science) and
its 436 spin-offs are presented as a model to follow.7 It is significant that at
the end of 2003 the director of INRIA was appointed as General Director
of CNRS, an institution focusing first and foremost on basic research,
with a staff of 25 000.
But the new law also aims to move towards a ‘state as a facilitator’ insti-
tutional setting. The idea is no longer to set up large programmes but to
encourage the creation of precise industry–research cooperative projects.
This goal is reflected in a semantic and institutional change: it refers to
‘network’ rather than ‘programme’, in a logic similar to that of ‘consortia’
advocated by the Guillaume Report, which inspired the law to a large
extent. Two areas in which France lags behind other leading industrialized
countries are priorities: the life sciences, and information and communica-
tion technologies (see Table 8A.5 in the Appendix). In the first case
the aim is to promote activities in the field of genomics – by supporting
genopoles and startups developing computer technology applications for
use in the biotech field – and health-related technology – by setting up a
national health technology research and innovation network (the RNTS).
The example of genomics is relevant since it shows how narrowly conceived
this French strategy is for integrating models and institutions developed
in other societal contexts. It dates back to before the 1999 law, since
the Evry genopole was launched in February 1998. This experiment was
intended to make up for the fact that France had fallen behind, by encour-
aging ‘interpenetration’ between technological and scientific advances.
It was to promote collaboration between public and private laboratories
and firms, while attempting to avoid the pitfalls associated with orienting
academic research too strongly towards short-term objectives (Branciard,
2001).
The problem of ‘catching up’ with the most competitive countries has
been an incentive to develop state-led policy-making in which coordination
by the hierarchy takes precedence over cooperation. Although the latter is
indispensable for producing collective learning, the time taken to establish
it is not necessarily compatible with the need to catch up with competi-
tors. Reaching a compromise between the different institutional settings
is thus difficult. In the meantime, the structural position of the French
RDI regime, strongly supported by the ‘state as an entrepreneur’, is
gradually being undermined (Branciard and Verdier, 2003).
Recently the French government took steps to extend the ‘good prac-
tice’ of the Grenoble technological district. With the support of national
and regional public agencies and bodies, this innovative district is now
becoming a key player in the nanotechnologies field with the creation of
a new cluster, ‘Minatec’. On the initiative of researchers of the public
Commissariat à l’énergie atomique, the cluster has attracted
MNCs as its main private stakeholders, including firms from Europe
(e.g. ST Microelectronics; Philips) and the USA (e.g. Motorola, Atmel).
Moreover, based on the main conclusions of the ‘Beffa report’ (Beffa,
2005)8, state and regional governments decided to support 60 projects
for competitiveness clusters, after a selective process. This new genera-
tion of public programmes expresses the search for compromises between
‘the state as an entrepreneur’ and ‘the state as a facilitator’. It may also
meet another challenge facing the French research system, the ‘under-
specialization’ of public funding devoted to basic research (Lanciano-
Morandat and Nohara, 2005).
CONCLUSION
State reforms are made and implemented at national level but are based
on the recommendations of supranational authorities (the OECD and
the European Commission), which are themselves influenced by the ideas
produced by the sociology and the economics of innovation (Lundvall and
Borras, 1997).
The resulting compromises and arrangements define rapidly changing
national regimes that are symptomatic of specific compromises with the
international and scientific references mentioned above. The increasing
weight of the ‘state as a regulator’ and the ‘state as a facilitator’ insti-
tutional settings, at the expense of the ‘state as an entrepreneur’ and, to
a lesser degree, the ‘Republic of Science’, is generating increasing diver-
sification of collective action. This action is less dependent on national
institutional frames than previously. It is more and more the result of the
initiatives of cooperative networks or the local configurations in which
multinational firms, among others, develop practices that could not be
explained only in terms of a ‘global’ strategy. If we want to continue refer-
ring to a ‘national system’, we will have to conceive of it more and more
as the outcome of a ‘set’ of networks and configurations whose coherence
stems only partially from the direct influence of national institutions.
The approach in terms of conventions of policy-making enables us
to define regimes of action for each country. These regimes are com-
promises between different patterns and are continually moving. They
NOTES
1. In one sentence, ‘state reform is about building new precedents that would lead to new
conventions; to do this, they need to involve the actors, which requires talk among the
actors so that they might ultimately build confidence in new patterns of mutual interac-
tion, which is a prerequisite of new sets of mutual expectations which are, in effect, con-
vention’ (Storper, 1998, p. 13).
2. See Branciard and Verdier (2003) on the French case and the influence of the OECD’s
expertise.
3. The role of the senior levels of the French civil service, which are staffed by graduates of
the elite engineering schools and the civil-service college (ENA), in conducting a state-led
economic policy and controlling France’s largest firms (nationalized in 1945 and 1981),
has often been emphasized (Suleiman, 1995).
4. This thesis of one regime of production of science supplanting another is criticized by
Pestre (1997), who sees the two modes as having functioned in parallel for the past few
centuries in the West.
5. Firms studied during our European research project: see a presentation of the methodol-
ogy in the Appendix.
6. With the advent of New Labour, official reports on these issues proliferated: see com-
petitive White Paper (DTI, 1999a), special reports on biotechnology clusters (1999b) and
Genome Valley (1999c), White Paper on enterprise, skills and innovation (DTI/DfEE,
2001).
7. It is worth noting that this particular configuration is a hybrid between the French and
American models (Lanciano-Morandat and Nohara, 2002). Those who created and
managed it had previously visited the USA, where they learned how to handle applied
research and to launch private entrepreneurial initiatives. This mind-set has since been
handed on to the younger generations.
8. Jean-Louis Beffa, former chairman of the multinational Saint-Gobain, was the leader of
an expert commission created by President Chirac.
REFERENCES
Amable, B., R. Barré and R. Boyer (1997), Les systèmes d’innovation à l’ère de la
globalisation, Paris: Economica.
Aoki, M. (2001), Toward a Comparative Institutional Analysis, Cambridge, MA:
MIT Press.
Barré, R. and P. Papon (1998), ‘La compétitivité technologique de la France’, in
H. Guillaume (1998), ‘Rapport de mission sur la technologie et l’innovation’,
submitted to the Ministry of National Education, Research and Technology,
the Ministry of the Economy, Finances and Industry, and the State Secretary for
Industry, Paris, mimeo, pp. 216–27.
Beffa, J.-L. (2005), ‘Pour une nouvelle politique industrielle’, Rapport remis au
Président de la République Française, Paris, mimeo.
Boltanski, L. and E. Chiappello (1999), Le nouvel esprit du capitalisme, Paris:
Gallimard.
Boltanski, L. and L. Thévenot (1991), De la Justification. Les Economies de la
Grandeur, Paris: Gallimard.
Branciard, A. (2001), ‘Le génopole d’Evry: une action publique territorialisée’,
Journées du Lest, avril, mimeo, Aix en Provence.
Branciard, A. and E. Verdier (2003), ‘La réforme de la politique scientifique
française face à la mondialisation: l’émergence incertaine d’un nouveau référen-
tiel d’action publique’, Politiques et Management Public, 21 (2), 61–81.
Callon, M. (1991), ‘Réseaux technico-économiques et flexibilité’, in R. Boyer and
B. Chavance (eds), Figures de l’irréversibilité, Editions de l’EHESS.
Callon, M. (1992), ‘Variété et irréversibilité dans les réseaux de conception et
d’adoption des techniques’, in D. Foray and C. Freeman (eds), Technologie et
richesse des nations, Paris: Economica, 275–324.
Casper, S., M. Lehrer and D. Soskice (1999), ‘Can high-technology industries
prosper in Germany? Institutional frameworks and the evolution of the
German software and biotechnology industries’, Industry and Innovation, 6
(1), 6–26.
DTI (Department of Trade and Industry) (1999a), Our Competitive Future: UK
Competitive Indicators 1999, London: Department of Trade and Industry.
DTI (Department of Trade and Industry) (1999b), Biotechnology Clusters, London:
Department of Trade and Industry.
DTI (Department of Trade and Industry) (1999c), Genome Valley, London:
Department of Trade and Industry.
DTI/DfEE (Department of Trade and Industry/Department for Education and
Employment) (2001), Opportunity for All in a World of Change: A White Paper
on Enterprise, Skills and Innovation, London: HMSO.
Ergas, H. (1992), A Future for Mission-oriented Industrial Policies? A Critical
Review of Developments in Europe, Paris: OECD.
Ernst & Young (2000), Gründerzeit. Zweiter Deutscher Biotechnologie-Report
2000, Stuttgart: Ernst & Young.
Etzkowitz, H. and L. Leydesdorff (1997), Universities and the Global Knowledge
Economy: A Triple Helix of University–Industry–Government Relations, London: Pinter.
A comparative analysis of national R&D policies 237
METHODOLOGICAL APPENDIX

Sources of R&D funding, percentage shares

                         Germany                    UK                        France
                  1995  1999  2001  2003    1995  1999  2001  2003    1995  1999  2001  2003
Industry          60.0  65.4  65.7  66.1    48.2  48.5  46.9  43.9    48.3  54.1  54.2  50.8
Public funding    37.9  32.1  31.4  31.1    32.8  29.9  29.1  31.3    41.9  36.8  36.9  40.8a
Other national
  sources          0.3   0.4   0.4   0.4     4.5   5.0   5.7   5.4     1.7   1.9   1.7     –
Foreign funding    1.8   2.1   2.5   2.3    14.5  17.3  18.2  19.4     8.0   7.0   7.2    8.4

Note: a EU-25 in 2004.
INTRODUCTION
Life Sciences Network, an umbrella group of industry and scientists who
support genetic engineering, wants the chance to contradict evidence given by
groups opposed to GE and to put new evidence before [the New Zealand Royal
Commission on Genetic Modification]. (Beston, 2001, p. 8)
A new hybrid organizational representation of action that is neither purely
scientific nor purely political is created. (Moore, 1996, p. 1621)
244 The capitalization of knowledge
BOUNDARIES
BOUNDARY WORK
around what issues they are unified’. ‘Scientists exhibit a wide range of
political opinions, religious beliefs, and income levels; these differences
impinge upon the kinds of claims that scientists make about the proper
relationships between science and politics as well as forming the basis for
conflict among scientists. Thus, the process of setting boundaries is not
simply a struggle between a unified group of scientists and non-scientists,
but a process of struggle among scientists as well’ (Moore, 1996, p. 1596).
Gieryn also saw this variation as a source of ambiguity in the notion of
boundary work, noting that ‘demarcation is as much a practical problem
for scientists as an analytical problem for sociologists and philosophers’
(1983, p. 792). He argued that ambiguity surfaces because of inconsisten-
cies in the boundaries constructed by different groups of scientists inter-
nally or in response to different external challenges, as well as because of
the conflicting goals of different scientists. Moore noted that the formation
of the boundary organizations helped perpetuate the perception of unity
in science by preserving ‘the professional organizations that represented
“pure” science and unity among scientists’ (1996, p. 1594).
The demarcation problem is still of great interest to science studies
(e.g. Evans, 2005 and papers in that special issue of Science, Technology
& Human Values). However, in more recent times, boundary work has
evolved to mean the ‘strategic demarcation between political and scien-
tific tasks in the advisory relationship between scientists and regulatory
agencies’ (Guston, 2001, p. 399). This newer version of boundary work
has taken on a less instrumental tone and the boundaries are viewed as
means of communication rather than of division (Lamont and Molnar,
2002). The demarcation between politics and science is viewed as some-
thing to bridge, with boundary organizations seen as mediators between
the two realms (Miller, 2001), and successful boundary organizations are
said to be those that please both parties. In this new framing, boundary
organizations may help manage boundary struggles over authority and
control but are primarily focused on facilitating cooperation across social
domains in order to achieve a shared objective. They ‘are useful to both
sides’ but ‘play a distinctive role that would be difficult or impossible for
organizations in either community to play’ (Guston, 2001, p. 403, citing
the European Environmental Agency).
Implications of strategy
could have written the Royal Commission’s report . . . the Royal Commission
basically accepted the moderate position we took . . . all that reflects is the fact
that the people in organizations which made up the Life Sciences Network were
not some radical outrageous group of people looking at this in an irrespon-
sible way . . . [but] that the position that we had come to was one which was
consistent with good policy, and consistent with the strategic interests of New
Zealand, which is what the Royal Commission was about.
DISCUSSION
There is no doubt that, for its member organizations, the LSN was a very
successful boundary organization in that it achieved the outcome that was
desired through being very actively involved in the political discussion and
negotiation over the future of GM in New Zealand. Like any lobby group,
the purpose of this type of boundary organization was to supply informa-
tion from a particular perspective, in the hope of influencing decisions
made. Even though the physical aspects of the LSN (staff numbers, office
space etc.) were very small relative to those of other organizations, the iden-
tity of the organization and the ‘boundary’ between the LSN and ‘science’
was very effectively created by its constitution, its membership consisting
of representative industry organizations (not individual companies, having
learnt from the maligned participation of Monsanto in GenePool) and
research organizations (rather than individual scientists), its website and
prolific press releases and media articles. By responding at every opportu-
nity and rebutting any anti-GM sentiment expressed ‘in public’, the LSN
became the ‘voice’ for pro-GM. As one industry representative stated, ‘the
pro-GM side would have been a mess without [the LSN]’.
In a societal debate, such as that which occurred around the role of
GM in New Zealand, the audience was very diffuse and LSN’s aim was
to directly influence ‘public opinion’, and ultimately political decision-
making and resource allocation, in favour of the member organizations’
objectives but without the member organizations needing to actively
The [RSNZ] avoided being involved in this issue because it was schizophrenic.
It had one view which was the dominant view of the physical scientists and
another view called ‘traditional scientist’ which was a view espoused by a group
of social scientists and the two views were in conflict. So the [RSNZ] was unable
to take a leadership role in this and it was uncomfortable about that but realised
its own political reality. The [LSN] was specifically set up to be a focal point and
we weren’t going to beat around the bush and pretend we weren’t advocates.
Even though the ACRI had common membership with the LSN, with
four of the nine CRIs also members of the LSN, it was constrained by
its representative function. As explained by Anthony Scott, executive
director of ACRI:
[ACRI] is the voice for the nine CRIs on matters which the CRIs have in
common and on which they agree to have a common voice. We’re representa-
tive when we’ve been authorised but cannot bind any member . . . Almost all
of [the CRIs] were involved in GM research. Some are really leading from the
front. Some were, or were likely to be, primarily researching whether GM was
‘safe’ or what was likely to happen [if GM release occurred] in the New Zealand
situation. So some CRIs felt that they had to not only be, but had to be seen to
be, independent of an advocacy role. Some of course, were doing both. So the
ACRI role was to say to New Zealand ‘let’s talk about it, let’s understand what
the issues are, let’s have an informed discussion’.
Scott continued, ‘I think it was a wise strategy for those CRIs that
wanted to more strongly advocate the use of GM to join the LSN. It
avoided cross-fertilisation, if you like.’
Boundary organizations in the triple helix 255
CONCLUSION
NOTES
1. During the lead-up to the election, the government was accused of having covered up
the importation of corn with a modicum of GM contamination. ‘Corngate’ began with a
very public ‘ambush’ of the prime minister with the accusations in a live TV interview.
2. An organization was able to claim ‘interested person’ status if it could prove it had an
interest in the RCGM, over and above that of any interest in common with the public
(Davenport and Leitch, 2005).
REFERENCES
Ainsworth, S. and I. Sened (1993), ‘The role of lobbyists: entrepreneurs with two
audiences’, American Journal of Political Science, 37, 834–66.
Baumgartner, F. and B. Leech (2001), ‘Issue niches and policy bandwagons: pat-
terns of interest group involvement in national politics’, Journal of Politics, 63,
1191–213.
Beston, A. (2001), ‘GE parties fight to finish’, New Zealand Herald, 26 February.
Collins, S. (2002a), ‘Taxpayer cash in pro-GE adverts’, New Zealand Herald, 25
July.
Collins, S. (2002b), ‘Varsity unit gives cash to pro-GE fund’, New Zealand Herald,
26 July.
Collins, S. (2004), ‘Pro-GM lobby institute closes’, New Zealand Herald, 7 May.
Davenport, S. and S. Leitch (2005), ‘Agora, ancient & modern and a framework
for public debate’, Science & Public Policy, 32 (3), 137–53.
Espiner, G. (1999), ‘Accepting Monsanto money naïve, says Kirton’, The Evening
Post, 3 September, 2.
Evans, R. (2005), ‘Introduction: demarcation socialized: constructing boundaries
and recognizing difference’, Special Issue of Science, Technology & Human
Values, 30, 3–16.
Fisher, D. (2003), ‘Money talks for pro-GE spin machine’, Sunday Star Times, 16
November.
Gieryn, T. (1983), ‘Boundary-work and the demarcation of science from non-
science: strains and interests in the professional ideologies of scientists’, American
Sociological Review, 48, 781–95.
Guston, D. (2001), ‘Boundary organizations in environmental policy and science:
an introduction’, Special Issue of Science, Technology, & Human Values, 26 (4),
399–408.
Guston, D. (2004), ‘Forget politicizing science. Let’s democratise science!’, Issues
in Science and Technology, 21, 25–8.
Hawkes, J. (2000), ‘Row flares over council’s pro-GE stance’, Waikato Times, 28
August, 1.
Hellstrom, T. and M. Jacob (2003), ‘Boundary organizations in science: from dis-
course to construction’, Science & Public Policy, 30 (4), 235–8.
Heracleous, L. (2004), ‘Boundaries in the study of organization’, Human Relations,
57, 95–103.
Hernes, T. (2003), ‘Enabling and constraining properties of organizational bound-
aries’, in N. Paulsen and T. Hernes (eds), Managing Boundaries in Organizations:
Multiple Perspectives, New York: Palgrave, pp. 35–54.
Hernes, T. (2004), ‘Studying composite boundaries: a framework for analysis’,
Human Relations, 57, 9–29.
Hernes, T. and N. Paulsen (2003), ‘Introduction: boundaries and organization’,
in N. Paulsen and T. Hernes (eds), Managing Boundaries in Organizations:
Multiple Perspectives, New York: Palgrave, pp. 1–13.
Holyoke, T. (2003), ‘Choosing battlegrounds: interest group lobbying across mul-
tiple venues’, Political Research Quarterly, 56 (3), 325–36.
Kelly, S. (2003), ‘Public bioethics and publics: consensus, boundaries, and partici-
pation in biomedical science policy’, Science, Technology, & Human Values, 28
(3), 339–64.
Lamont, M. and V. Molnar (2002), ‘The study of boundaries in the social sciences’,
Annual Review of Sociology, 28, 167–95.
Leydesdorff, L. (2000), ‘The triple helix: an evolutionary model of innovations’,
Research Policy, 29, 243–55.
Miller, C. (2001), ‘Hybrid management: boundary organizations, science policy,
and environmental governance in the climate regime’, Science, Technology &
Human Values, 26 (4), 478–501.
Moore, K. (1996), ‘Organizing integrity: American science and the creation of
public interest organizations, 1955–1975’, American Journal of Sociology, 101 (6),
1592–627.
Samson, A. (1999), ‘Experts discuss action over deformed fish’, The Dominion, 23
April, 6.
Samson, A. (2000), ‘Scientists seek active role in GE Inquiry’, The Dominion, 14
August, 8.
Weaver, C.K. and J. Motion (2002), ‘Sabotage and subterfuge: public relations,
democracy and genetic engineering in New Zealand’, Media, Culture & Society,
24, 325–43.
Worrall, J. (2002), ‘GM man in the middle’, The Timaru Herald, 19 January.
10. The knowledge economy: Fritz
Machlup’s construction of a
synthetic concept1
Benoît Godin
INTRODUCTION
MACHLUP’S CONSTRUCTION
and television. Also, organizations rely more and more on ‘brain work’ of
various sorts: ‘besides the researchers, designers, and planners, quite natu-
rally, the executives, the secretaries, and all the transmitters of knowledge
. . . come into focus’ (ibid., p. 7). To Machlup, these kinds of knowledge
deserve study.
Machlup (ibid., pp. 9–10) listed 11 reasons for studying the economics
of knowledge, among them:
Defining Knowledge
The first point about Machlup’s concept of knowledge was that it included
all kinds of knowledge, not only scientific knowledge, but ordinary knowl-
edge as well. Until then, most writings on knowledge were philosophical,
and were of a positivistic nature: knowledge was ‘true’ knowledge (e.g.
Ayer, 1956). As a consequence, the philosophy of practical or ordinary
action ‘intellectualized’ human action. Action was defined as a matter of
rationality and logic: actions start with deliberation, then intention, then
decision (see Bernstein, 1971). Similarly, writings on decision-making
were conducted under the assumption of strict rationality (rational choice
theory) (e.g. Amadae, 2003).
In 1949, the philosopher Gilbert Ryle criticized what he called the cul-
tural primacy of intellectual work (Ryle, 1949). By this he meant under-
standing the primary activity of mind as theorizing, or knowledge of true
propositions or facts. Such knowledge or theorizing Ryle called ‘knowing
Fritz Machlup’s construction of a synthetic concept 265
‘Operationalizing’ Knowledge
Machlup was here taking stock of the new literature on the economics of
innovation and its linear model (e.g. Godin, 2006b, 2008). To economists,
innovation included more than R&D. Economists defined innovation as
different from invention as studied by historians. Innovation was defined
as the commercialization of invention by firms. To Machlup, adhering
to such an understanding was part of his analytical move away from the
primacy of scientific knowledge, or intellectual work.
The third component of Machlup’s ‘operationalization’ of knowledge
was media (of communication). Since all kinds of knowledge were relevant
knowledge to Machlup, not only scientific knowledge but also ordinary
knowledge, he considered a large range of vehicles for distribution: printing
(books, periodicals, newspapers), photography and phonography, stage
and cinema, broadcasting (radio and television), advertising and public
relations, telephone, telegraph and postal service, and conventions.
The final component of Machlup’s ‘operationalization’ was informa-
tion, itself composed of two elements: information services and infor-
mation machines (technologies). Information services, the eligibility for
inclusion of which ‘may be questioned’ in a narrow concept of knowledge
(Machlup, 1962a, p. 323), were: professional services (legal, engineering,
accounting and auditing, and medical), finance, insurance and real estate,
wholesale trade, and government. Information machines, of which he says
‘the recent development of the electronic-computer industry provides a
story that must not be missed’ (ibid., p. 295), included signalling devices,
instruments for measurement, observation and control, office information
machines and electronic computers.
Where does Machlup’s idea of defining knowledge as both production
and distribution come from? Certainly, production and distribution have
been key concepts of economics for centuries. However, the idea also
To Machlup,
MEASURING KNOWLEDGE
Table 10.1
capital and labour), and equated the residual in his equation with techni-
cal change – although it included everything that was neither capital nor
labour – as ‘a shorthand expression for any kind of shift in the produc-
tion function’ (p. 312). Integrating science and technology was thus not
a deliberate initiative, but it soon became a fruitful one. Solow estimated
that nearly 90 per cent of growth was due to the residual. In the follow-
ing years, researchers began adding variables to the equation in order to
better isolate science and technology (e.g. Denison, 1962, 1967), or adjust-
ing the input and capital factors to capture quality changes in output (e.g.
Jorgenson and Griliches, 1967).
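The residual logic can be sketched in standard textbook Cobb–Douglas notation (this is the conventional modern form, not Solow's or Machlup's own notation):

```latex
% Growth accounting in standard Cobb-Douglas notation.
\[
  Y = A\,K^{\alpha}L^{1-\alpha}
  \;\Longrightarrow\;
  \frac{\dot Y}{Y}
    = \frac{\dot A}{A}
    + \alpha\,\frac{\dot K}{K}
    + (1-\alpha)\,\frac{\dot L}{L},
\]
\[
  \underbrace{\frac{\dot A}{A}}_{\text{the residual}}
    = \frac{\dot Y}{Y}
    - \alpha\,\frac{\dot K}{K}
    - (1-\alpha)\,\frac{\dot L}{L}.
\]
```

Everything that is neither measured capital growth nor measured labour growth lands in the residual term, which is why Solow could attribute nearly 90 per cent of growth to it.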
According to Machlup, a mathematical exercise such as the production
function was ‘only an abstract construction designed to characterize some
quantitative relationships which are regarded as empirically relevant’
(Machlup, 1962b, p. 155). What the production function demonstrated
was a correlation between input and output, rather than any causality: ‘a
most extravagant increase in input might yield no invention whatsoever,
and a reduction in inventive effort might by a fluke result in the output that
had in vain been sought with great expense’ (ibid., p. 153). To Machlup,
there were two schools of thought:
According to the acceleration school, the more that is invented the easier it
becomes to invent still more – every new invention furnishes a new idea for
potential combination . . . According to the retardation school, the more that
is invented, the harder it becomes to invent still more – there are limits to the
improvement of technology. (Ibid., p. 156)
To Machlup, the first hypothesis was ‘probably more plausible’, but ‘an
increase in opportunities to invent need not mean that inventions become
easier to make; on the contrary, they become harder. In this case there
would be a retardation of invention . . .’ (ibid., p. 162), because ‘it is pos-
sible for society to devote such large amounts of productive resources to
the production of inventions that additional inputs will lead to less than
proportional increases in output’ (ibid., p. 163).
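Machlup's retardation case, in which additional inventive inputs yield less-than-proportional increases in output, is simply a concave invention 'production function'. A toy sketch (the functional form and constants here are illustrative assumptions, not anything Machlup estimated):

```python
# Toy model of the 'retardation' case: inventions as a concave function
# of inventive input, here inventions = A * input**b with 0 < b < 1.

A, b = 10.0, 0.5  # illustrative constants, not estimated values

def inventions(research_input: float) -> float:
    return A * research_input ** b

base = inventions(100.0)     # 10 * sqrt(100) = 100.0
doubled = inventions(200.0)  # 10 * sqrt(200) ≈ 141.4

# Doubling the input raises output by only ~41%: less than proportional.
print(doubled / base)  # ≈ 1.414
```

With b > 1 the same form would instead describe the acceleration school, where each invention makes further invention easier.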
For measuring knowledge, Machlup chose another method than econo-
metrics and the production function, namely national accounting. National
accounting goes back to the eighteenth century and what was then called
political arithmetic (see Deane, 1955; Buck, 1977, 1982; Cookson, 1983;
Endres, 1985; Mykkanen, 1994; Hoppit, 1996). But national accounting
really developed after World War II with the establishment of a standard-
ized System of National Accounts, which allowed a national bureau of sta-
tistics to collect data on the production of economic goods and services in
a country in a systematic way (see Studenski, 1958; Ruggles and Ruggles,
1970; Kendrick, 1970; Sauvy, 1970; Carson, 1975; Fourquet, 1980; Vanoli,
2002). Unfortunately for Machlup, knowledge was not – and is still not – a
category of the System of National Accounts.
There are, argued Machlup, ‘insurmountable obstacles in a statistical
analysis of the knowledge industry’ (Machlup, 1962a, p. 44). Usually,
in economic theory, ‘production implies that valuable input is allocated
to the bringing forth of a valuable output’, but with knowledge there
is no physical output, and knowledge is most of the time not sold on
the market (ibid., p. 36). The need for statistically operational concepts
forced Machlup to concentrate on costs, or national income account-
ing. To estimate costs8 and sales of knowledge products and services,
Machlup collected numbers from diverse sources, both private and public.
However, measuring costs meant that no data were available on the inter-
nal (non-marketed) production and use of knowledge, for example inside
a firm: ‘all the people whose work consists of conferring, negotiating,
planning, directing, reading, note-taking, writing, drawing, blueprinting,
calculating, dictating, telephoning, card-punching, typing, multigraphing,
recording, checking, and many others, are engaged in the production of
knowledge’ (Machlup, 1962a, p. 41). Machlup thus looked at comple-
mentary data to capture the internal market for knowledge. He conducted
work on occupational classes of the census, differentiating classes of
white-collar workers who were knowledge-producing workers from those
that were not, and computing the national income of these occupations
(ibid., pp. 383 and 386). Machlup then arrived at his famous estimate: the
knowledge economy was worth $136.4 billion, or 29 per cent of GNP, in 1958;
it had grown at a rate of 8.8 per cent per year over the period 1947–58; and
knowledge-producing occupations accounted for 26.9 per cent of the national
income (see Table 10.2).
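Machlup's headline figures are mutually consistent only if the total is read as $136.4 billion (29 per cent of 1958 US GNP is on the order of $470 billion). A quick back-of-the-envelope check; the implied GNP and 1947 starting value below are my own arithmetic from the figures quoted here, not numbers Machlup reports:

```python
# Cross-check of Machlup's headline figures (1962a): knowledge production
# worth $136.4 billion in 1958, 29% of GNP, growing 8.8% per year 1947-58.

knowledge_1958 = 136.4   # $ billion
share_of_gnp = 0.29
growth_rate = 0.088
years = 1958 - 1947      # 11 years of compound growth

implied_gnp_1958 = knowledge_1958 / share_of_gnp
implied_knowledge_1947 = knowledge_1958 / (1 + growth_rate) ** years

print(f"Implied 1958 GNP: ${implied_gnp_1958:.0f}B")                  # ≈ $470B
print(f"Implied 1947 knowledge output: ${implied_knowledge_1947:.0f}B")  # ≈ $54B
```

Both implied values are of the right order of magnitude for the late-1940s/1950s US economy, which supports the 'billion' reading.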
In conducting his accounting exercise, Machlup benefited from the
experience of previous exercises conducted on education (e.g. Wiles,
1956) and human capital (e.g. Walsh, 1935; Mincer, 1958; Schultz, 1959,
1960, 1961a, 1961b, 1962; Becker, 1962; Hansen, 1963), and, above all, on
research or R&D. The US National Science Foundation, as the producer
Table 10.2

                          $ (millions)      %
Education                     60,194     44.1
R&D                           10,990      8.1
Media of communication        38,369     28.1
Information machines           8,922      6.5
Information services          17,961     13.2
Total                        136,436    100.0
on input and output (see Appendix 1), and relationships or ratios between
the two9.
Machlup was realistic about his own accounting, qualifying some of his
estimates as being speculative (Machlup, 1962a, p. 62), that is, ideas of
magnitude and trends based on conjecture rather than exact figures (ibid.,
p. 103), and he qualified some of his comparisons ‘with several grains of
salt’ (ibid., p. 374). To Machlup, it was the message rather than the statisti-
cal adequacy that was important. The very last sentence of the book reads
as follows: ‘concern about their accuracy [statistical tables] should not
crowd out the message it conveys’ (ibid., p. 400).
THE MESSAGE
subject was his first interest in the field of knowledge production. The
temptation to expand the area of study to cover the entire industry came
later, and proved irresistible’ (Machlup, 1962a, p. 48). To Machlup, the
policy issues involving R&D were twofold. One was the decline of inven-
tions. From the early 1950s, Machlup had studied monopolies and the role
of patents in competition (Machlup, 1952), and particularly the role of
the patent system in inducing invention (e.g. Machlup and Penrose, 1950;
Machlup, 1958b). Following several authors, among them J. Schmookler
(e.g. Schmookler, 1954), he calculated a decline in patenting activity after
1920 (Machlup, 1961). He wondered whether this was due to the patent
system itself, or to other factors. In the absence of empirical evidence,
he suggested that ‘faith alone, not evidence, supports’ the patent system.
To Machlup, it seems ‘not very likely that the patent system makes much
difference regarding R&D expenditures of large firms’ (Machlup, 1962a,
p. 170).
A second policy issue concerning R&D was the productivity of research,
and his concern with this issue grew out of previous reflections on the allo-
cation of resources to research activities and the inelasticity in the short-
term supply of scientists and engineers (Machlup, 1958a). To Machlup,
research, particularly basic research, is an investment, not a cost. Research
leads to an increase in economic output and productivity (goods and
services), and society gains from investing in basic research with public
funds: the social rate of return is higher than private ones (see Griliches,
1958), and ‘the nation has probably no other field of investment that yields
return of this order’ (Machlup, 1962a, p. 189). But there actually was a
preference for applied research in America, claimed Machlup: ‘American
preference for practical knowledge over theoretical, abstract knowledge
is a very old story’ (ibid., pp. 201–2). That there was a ‘disproportionate
support of applied work’ (ibid., p. 203) was a popular thesis of the time
among scientists (see Reingold, 1971). To Machlup, there was a social cost
to this: echoing V. Bush, according to whom ‘applied research invariably
drives out pure’ research (Bush, 1945 [1995], p. xxvi), Machlup argued
that industry picks up potential scientists before they have completed
their studies, and dries up the supply of research personnel (shortages).
Furthermore, if investments in basic research remain too low (8 per cent
of total expenditures on R&D), applied research will suffer in the long run,
since it depends entirely on basic research. Such was the rhetoric of the
scientific community’s members at the time.
These were the main policy issues Machlup discussed. Concerning the
last two components of his definition – communication and information
– Machlup was very brief. In fact, his policy concern was mainly with
information technologies and the technological revolution. To Machlup,
the important issue here was twofold. The first part was rational decision-
making: the effects of information machines are ‘improved records,
improved decision-making, and improved process controls . . . that permit
economies’ (Machlup, 1962a, p. 321). Machlup was here offering what
would become the main line of argument for the information economy
in the 1980s and after: information technologies as a source of growth
and productivity. The second part was the issue of structural change and
unemployment (‘replacement of men by machines’). Structural change
was a concern for many in the 1940s and 1950s, and the economist Wassily
Leontief devoted numerous efforts to measuring it using input–output
tables and accounting as a framework (e.g. Leontief, 1936, 1953, 1986; see
also Leontief, 1952, 1985; Leontief and Duchin, 1986). ‘There has been’,
stated Machlup, ‘a succession of occupations leading [the movement to a
knowledge economy], first clerical, then administrative and managerial,
and now professional and technical personnel . . . , a continuing move-
ment from manual to mental, and from less to more highly trained labor’
(Machlup, 1962a, pp. 396–7). To Machlup, ‘technological progress has
been such as to favor the employment of knowledge-producing workers’
(ibid., p. 396), but there was the danger of increasing unemployment
among unskilled manual labour (ibid., p. 397). In the long run, however,
‘the demand for more information may partially offset the labor-replacing
effect of the computer-machine’ (ibid., p. 321).
With regard to communication, the fourth component of his ‘opera-
tionalization’ of knowledge, Machlup discussed no specific policy issue.
But there was one in the background, namely the information explosion
(see Godin, 2007). In the 1950s, the management of scientific and techni-
cal literature emerged as a concern to many scientists and universities,
and increasingly to governments. According to several authors, among
them science historian Derek J. de Solla Price, scientific and technical
information, as measured by counting journals and papers, was growing
exponentially. Science was ‘near a crisis’, claimed Price, because of the pro-
liferation and superabundance of literature (Price, 1956). Some radically
new technique must be evolved if publication is to continue as a useful
contribution (Price, 1961). The issue gave rise to scientific and technical
information policies starting from the early 1960s, as a precursor to poli-
cies on the information economy and, later, on information technology
(see Godin, 2007).
In 1962, Machlup did not discuss the issue of information explosion. He
even thought that counting the number of books was a ‘very misleading
index of knowledge’ (Machlup, 1962a, p. 122). However, in the 1970s, he
conducted a study on ‘The production and distribution of scientific and
technological information’, published in four volumes as Information
through the Printed Word (Machlup and Leeson, 1978–80). Produced for
the National Science Foundation, the study looked at books, journals,
libraries, and their information services from a quantitative point of
view, as had been done in The Production and Distribution of Knowledge:
the structure of the industries, markets, sales, prices, revenues, costs,
collections, circulation, evaluation and use.
Machlup wrote on knowledge at a time when science, or scientific
knowledge, was increasingly believed to be of central importance to society
– and scientists benefited largely from public investments in research.
Economists, who, ‘if society devotes considerable amounts of its resources to
any particular activity, will want to look into this allocation and get an idea
of the magnitude of the activity, its major breakdown, and its relation to
other activities’ (Machlup, 1962a, p. 7), started measuring the new
phenomenon, and were increasingly solicited by governments
to demonstrate empirically the contribution of science to society – cost
control on research expenditures was not yet in sight. Machlup was part
of this ‘movement’, with his own intellectual contribution.
CONCLUSION
[Figure: economics of information (Hayek, Arrow)]
NOTES
1. The author thanks Michel Menou for comments on a preliminary draft of this chapter.
2. He taught at the University of Buffalo (1935–47), then Johns Hopkins (1947–60), then
Princeton (1960–71). After retiring in 1971, he moved to New York University,
where he remained until his death.
3. At about the same time, B. Russell distinguished between what he called social and indi-
vidual knowledge, the first concerned with learned knowledge, the other with experience.
See Russell (1948); see also Schutz (1962) and Schutz and Luckmann (1973).
REFERENCES
Amadae, S.M. (2003), Rationalizing Capitalist Democracy: The Cold War Origins
of Rational Choice Liberalism, Chicago, IL: University of Chicago Press.
Arrow, K.J. (1962a), ‘The economic implications of learning by doing’, Review of
Economic Studies, 29, 155–73.
Arrow, K.J. (1962b), ‘Economic welfare and the allocation of resources for inven-
tion’, in National Bureau of Economic Research, The Rate and Direction of
Inventive Activity, Princeton, NJ: Princeton University Press, pp. 609–25.
Arrow, K.J. (1973), ‘Information and economic behavior’, lecture given at the
1972 Nobel Prize Celebration, Stockholm: Federation of Swedish Industries.
Arrow, K.J. (1974), ‘Limited knowledge and economic analysis’, American
Economic Review, 64, 1–10.
Arrow, K.J. (1979), ‘The economics of information’, in M.L. Dertouzos and J.
Moses (eds), The Computer Age: A Twenty-Year View, Cambridge, MA: MIT
Press, pp. 306–17.
Arrow, K.J. (1984), The Economics of Information, Cambridge, MA: Harvard
University Press.
Ayer, A.J. (1956), The Problem of Knowledge, Harmondsworth: Penguin Books.
Becker, G.S. (1962), ‘Investment in human capital: a theoretical analysis’, Journal
of Political Economy, 70 (5), 9–49.
Bernstein, R.J. (1971), Praxis and Action, Philadelphia, PA: University of
Pennsylvania Press.
Boulding, K.E. (1966), ‘The economics of knowledge and the knowledge of eco-
nomics’, American Economic Review, 56 (1–2), 1–13.
Buck, P. (1977), ‘Seventeenth-century political arithmetic: civil strife and vital
statistics’, ISIS, 68 (241), 67–84.
Buck, P. (1982), ‘People who counted: political arithmetic in the 18th century’,
ISIS, 73 (266), 28–45.
Bush, V. (1945) [1995], Science: The Endless Frontier, North Stratford, NH: Ayer
Company Publishers.
Carson, C.S. (1975), ‘The history of the United States National Income and
Product Accounts: the development of an analytical tool’, Review of Income and
Wealth, 21 (2), 153–81.
Cookson, J.E. (1983), ‘Political arithmetic and war in Britain, 1793–1815’, War
and Society, 1, 37–60.
Deane, P. (1955), ‘The implications of early national income estimates for the
measurement of long-term economic growth in the United Kingdom’, Economic
Development and Cultural Change, 4 (1), 3–38.
[Table, pp. 286–88: stages of inventive activity (inputs, persons involved and intended outputs per stage); rows I–III survive only as fragments ('new practical problems and ideas', 'hunches', outputs from I-B, II-A, II-B and III-B). The recoverable row:

IV. 'New-type plant construction' [intended output: 'new-type plant']
  Inputs: 1. developed inventions (output from III-A); 2. business acumen and market forecasts; 3. financial resources; 4. enterprise (venturing); building materials; machines and tools; $ investments in new-type plant
  Persons: entrepreneurs; managers; financiers and bankers; builders and contractors; engineers
  Output: new practical problems and ideas; new-type plant producing a. novel products, b. better products, c. cheaper products]
Investments in knowledge
Domestic R&D expenditure
R&D financing and performance
Business R&D
R&D in selected ICT industries and ICT patents
Business R&D by size classes of firms
Collaborative efforts between business and the public sector
R&D performed by the higher education and government sectors
Public funding of biotechnology R&D and biotechnology patents
Environmental R&D in the government budget
Health-related R&D
Basic research
Defence R&D in government budgets
Tax treatment of R&D
Venture capital
Human resources
Human resources in science and technology
Researchers
International mobility of human capital
International mobility of students
Innovation expenditure and output
Patent applications
Patent families
Scientific publications
B. Information Economy
International trade
Exposure to international trade competition by industry
Foreign direct investment flows
Cross-border mergers and acquisitions
Activity of foreign affiliates in manufacturing
Activity of foreign affiliates in services
Internationalization of industrial R&D
International strategic alliances between firms
Cross-border ownership of inventions
International co-operation in science and technology
Technology balance of payments
[Figure: an emerging overlay of relations]
Data

Our data specifically correspond to the CD-Rom for the second
quarter of 2001 (Van der Panne and Dolfsma, 2003). Because registration
with the Chamber of Commerce is obligatory for corporations, the dataset
covers the entire population. We loaded the data into a relational database
manager so that we could focus on the relations rather than on the
attributes. Dedicated programs were developed for further processing and
computation where necessary.
The data contain three variables that can be used as proxies for the
dimensions of technology, organization and geography at the systems
level. Technology will be indicated by the sector classification (Pavitt,
1984; Vonortas, 2000), organization by the company size in terms of
numbers of employees (Pugh and Hickson, 1969; Pugh et al., 1969; Blau
and Schoenherr, 1971), and the geographical position by the postal codes
in the addresses. Sector classifications are based on the European NACE
codes.2 In addition to major activities, most companies also provide infor-
mation about second and third classification terms. However, we use the
main code at the two-digit level.
Postal codes are a fine-grained indicator of geographical location. We
used the two-digit level, which provides us with 90 districts. Using this
information, the data can be aggregated into provinces (NUTS-2) and
NUTS-3 regions.3 The Netherlands is thus organized in 12 provinces and
40 regions, respectively.
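Binning each firm record along the three proxies yields the cells of a three-dimensional frequency table. The sketch below is a minimal illustration only: the record layout, the NACE strings and the size-class boundaries are our assumptions, not those of the original dataset.

```python
from collections import Counter

def size_class(employees):
    """Bin company size; the class boundaries here are illustrative only."""
    if employees == 0:
        return "0"
    if employees < 10:
        return "1-9"
    if employees < 100:
        return "10-99"
    return "100+"

# Each firm record: (postal_code, nace_code, employees) -- hypothetical layout.
firms = [
    ("3512AB", "72.19", 0),    # spin-off without employees
    ("9712CP", "28.11", 45),   # machinery manufacturer
    ("3511ZZ", "72.19", 7),
]

# Geography = two-digit postal district, technology = two-digit NACE code.
counts = Counter(
    (postal[:2], nace[:2], size_class(n)) for postal, nace, n in firms
)
print(counts[("35", "72", "0")])  # firms per (district, sector, size) cell
```

Firms without employees fall into their own class, in line with the decision above to retain them as relevant economic activities.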
The distribution by company size contains a class of 223 231 companies
without employees. We decided to include this category because it con-
tains, among others, spin-off companies that are already on the market,
but whose owners are employed by mother companies or universities.
Given our research question, these establishments can be considered as
relevant economic activities.
Regional Differences
Methodology
Abramson (1963, p. 129) derived from the Shannon formulae that the
mutual information in three dimensions is:

Txyz = Hx + Hy + Hz − Hxy − Hxz − Hyz + Hxyz

where H denotes the probabilistic entropy of the respective (marginal or
joint) distribution.
While the bilateral relations between the variables reduce the uncer-
tainty, the trilateral term adds to the uncertainty. The layers thus alternate
in terms of the sign. The sign of Txyz depends on the magnitude of the tri-
lateral component (Hxyz) relative to the mutual information in the bilateral
relations.
For example, the trilateral coordination can be associated with a new
coordination mechanism that is added to the system. In the network mode
(Figure 11.1) a system without central integration reduces uncertainty by
providing a differentiated configuration. The puzzles of integration are
then to be solved in a non-hierarchical, that is, reflexive or knowledge-
based mode (Leydesdorff, 2010).
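The entropies and transmissions can be computed directly from a three-dimensional frequency table. The following is a minimal sketch of the calculation, not the authors' dedicated programs; the closing example shows how the trilateral term can turn the mutual information negative:

```python
from collections import Counter
from math import log2

def entropy(dist):
    """Shannon entropy (in bits) of a frequency distribution."""
    total = sum(dist.values())
    return -sum(n / total * log2(n / total) for n in dist.values() if n)

def marginal(cells, axes):
    """Sum a joint frequency table over the axes NOT listed in `axes`."""
    m = Counter()
    for key, n in cells.items():
        m[tuple(key[a] for a in axes)] += n
    return m

def transmissions(cells):
    """Bilateral and trilateral mutual information, in millibits.
    `cells` maps (x, y, z) category triples to frequencies."""
    h = {axes: entropy(marginal(cells, axes))
         for axes in [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2), (0, 1, 2)]}
    txyz = (h[(0,)] + h[(1,)] + h[(2,)]
            - h[(0, 1)] - h[(0, 2)] - h[(1, 2)] + h[(0, 1, 2)])
    return {
        'Txy': round(1000 * (h[(0,)] + h[(1,)] - h[(0, 1)])),
        'Txz': round(1000 * (h[(0,)] + h[(2,)] - h[(0, 2)])),
        'Tyz': round(1000 * (h[(1,)] + h[(2,)] - h[(1, 2)])),
        'Txyz': round(1000 * txyz),
    }

# Example: z = x XOR y. Every bilateral transmission vanishes, yet the
# trilateral configuration reduces uncertainty: Txyz = -1000 millibits.
cells = Counter({(0, 0, 0): 1, (0, 1, 1): 1, (1, 0, 1): 1, (1, 1, 0): 1})
print(transmissions(cells))
```

In the example no pair of dimensions carries information about each other, but the three taken jointly do; negative values of Txyz of this kind are the signature of the synergy reported in millibits for the Netherlands and its provinces.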
RESULTS
Descriptive Statistics
Table 11.2 shows the probabilistic entropy values in the three dimensions
(G = geography, T = technology/sector and O = organization) for the
Netherlands as a whole and the decomposition at the NUTS-2 level of the
provinces.
The provinces are very different in terms of the numbers of firms.
Table 11.3 provides the values for the transmissions (T) among the
various dimensions. These values can be calculated straightforwardly
from the values of the probabilistic entropies provided in Table 11.2
using Equations 11.1 and 11.2 provided above. The first line for the
Netherlands as a whole shows that there is more mutual information
between the geographical distribution of firms and their technological
specialization (TGT = 72 mbits) than between the geographical distribu-
tion and their size (TGO = 19 mbits). However, the mutual information
[Map legend: ΔT > −0.50; −1.00 < ΔT ≤ −0.50; ΔT ≤ −1.00 (millibits)]
SECTORAL DECOMPOSITION (Txyz in millibits)

Region       All      High-   % change  N        High- and     % change  N        Knowledge-  % change  N
             sectors  tech    (2/3)              medium-tech   (2/6)              intensive   (2/9)
                                                 manufacturing                    services
NL           −34      −60     80.2      45 128   −219          553       15 838   −24         −27.3     581 196
Drenthe      −56      −93     67.6      786      −349          526       406      −34         −39.1     11 312
Flevoland    −30      −36     20.6      1 307    −206          594       401      −18         −37.9     10 730
Friesland    −56      −136    144.9     983      −182          227       951      −37         −32.6     14 947
Gelderland   −43      −94     120.1     4 885    −272          536       2 096    −25         −40.8     65 112
Groningen    −45      −66     48.1      1 204    −258          479       537      −29         −34.0     14 127
Limburg      −33      −68     105.9     2 191    −245          647       1 031    −18         −45.1     30 040
N.-Brabant   −36      −58     61.2      6 375    −190          430       2 820    −30         −16.6     86 262
N.-Holland   −17      −34     103.4     9 346    −173          943       2 299    −17         1.0       126 516
Overijssel   −35      −79     127.6     2 262    −207          496       1 167    −20         −42.8     30 104
Utrecht      −24      −39     65.9      4 843    −227          859       1 020    −13         −45.0     52 818
S.-Holland   −27      −44     61.7      10 392   −201          635       2 768    −15         −45.5     128 725
Zeeland      −39      −67     73.3      554      −180          365       342      −28         −27.8     10 503
Measuring the knowledge base of an economy 305
and Overijssel. The effects of this selection for North-Brabant and North-
Holland, for example, are among the lowest. However, this negative rela-
tion between high- and medium-tech manufacturing on the one hand, and
high-tech services on the other, is not significant (r = −0.352; p = 0.262).
At the NUTS-3 level, the corresponding relation is also not significant.
Thus the effects of high- and medium-tech manufacturing and high-tech
services on the knowledge base of the economy are not related to each
other.
ACKNOWLEDGMENT
NOTES
REFERENCES
Abernathy, W. and Clark, K.B. (1985), ‘Mapping the winds of creative destruc-
tion’, Research Policy, 14, 3–22.
L. Soete (eds), Technical Change and Economic Theory, London: Pinter, pp.
38–66.
Fritsch, M. (2004), ‘Cooperation and the efficiency of regional R&D activities’,
Cambridge Journal of Economics, 28 (6), 829–46.
Godin, B. (2006), ‘The knowledge-based economy: conceptual framework or
buzzword?’, Journal of Technology Transfer, 31 (1), 17–30.
Han, T.S. (1980), ‘Multiple mutual information and multiple interactions in fre-
quency data’, Information and Control, 46 (1), 26–45.
Hatzichronoglou, T. (1997), Revision of the High-Technology Sector and Product
Classification, Paris: OECD, http://www.olis.oecd.org/olis/1997doc.nsf/LinkTo/
OCDE-GD(97)216.
Jakulin, A. and I. Bratko (2004), Quantifying and Visualizing Attribute Interactions:
An Approach Based on Entropy, http://arxiv.org/abs/cs.AI/0308002.
Khalil, E.L. (2004), ‘The three laws of thermodynamics and the theory of produc-
tion’, Journal of Economic Issues, 38 (1), 201–26.
Laafia, I. (1999), Regional Employment in High Technology, Eurostat, http://epp.
eurostat.ec.europa.eu/cache/ITY_OFFPUB/CA-NS-99-001/EN/CA-NS-99-
001-EN.PDF.
Laafia, I. (2002a), ‘Employment in high tech and knowledge intensive sectors in
the EU continued to grow in 2001’, Statistics in Focus: Science and Technology,
Theme, 9 (4), at http://epp.eurostat.ec.europa.eu/cache/ITY_OFFPUB/KS-NS-
02-004/EN/KS-NS-02-004-EN.PDF.
Laafia, I. (2002b), ‘National and regional employment in high tech and knowl-
edge intensive sectors in the EU – 1995–2000’, Statistics in Focus: Science
and Technology, Theme 9 (3), http://epp.eurostat.ec.europa.eu/cache/ITY_
OFFPUB/KS-NS-02-003/EN/KS-NS-02-003-EN.PDF.
Lengyel, B. and L. Leydesdorff (2010), ‘Regional innovation systems in Hungary:
the failing synergy at the national level’, Regional Studies (in press).
Leydesdorff, L. (1994), ‘Epilogue’, in L. Leydesdorff and P. Van den Besselaar
(eds), Evolutionary Economics and Chaos Theory, London and New York:
Pinter, pp. 180–92.
Leydesdorff, L. (1995), The Challenge of Scientometrics: The Development,
Measurement, and Self-Organization of Scientific Communications, Leiden:
DSWO Press.
Leydesdorff, L. (1997), ‘The new communication regime of university–industry–gov-
ernment relations’, in H. Etzkowitz and L. Leydesdorff (eds), Universities and the
Global Knowledge Economy, London and Washington, DC: Pinter, pp. 106–17.
Leydesdorff, L. (2003), ‘The mutual information of university–industry–
government relations’, Scientometrics, 58 (2), 445–67.
Leydesdorff, L. (2006), The Knowledge-Based Economy: Modeled, Measured,
Simulated, Boca Raton, FL: Universal Publishers.
Leydesdorff, L. (2010), ‘Redundancy in systems which entertain a model them-
selves: interaction information and self-organization of anticipation’, Entropy,
12 (1), 63–79.
Leydesdorff, L. and M. Fritsch (2006), ‘Measuring the knowledge base of regional
innovation systems in Germany in terms of a triple helix dynamics’, Research
Policy, 35 (10), 1538–53.
Leydesdorff, L., W. Dolfsma and G. Van der Panne (2006), ‘Measuring the knowl-
edge base of an economy in terms of triple-helix relations among technology,
organization, and territory’, Research Policy, 35 (2), 181–99.
Li, T.-Y. and J.A. Yorke (1975), ‘Period three implies chaos’, American
Mathematical Monthly, 82 (10), 985–92.
Luhmann, N. (1986), Love as Passion: The Codification of Intimacy, Stanford, CA:
Stanford University Press.
Luhmann, N. (1995), Social Systems, Stanford, CA: Stanford University Press.
Lundvall, B.-Å. (1988), ‘Innovation as an interactive process: from user–producer
interaction to the national system of innovation’, in G. Dosi, C. Freeman,
R. Nelson, G. Silverberg and L. Soete (eds), Technical Change and Economic
Theory, London: Pinter, pp. 349–69.
Lundvall, B.-Å. (ed.) (1992), National Systems of Innovation, London: Pinter.
Machlup, F. (1962), The Production and Distribution of Knowledge in the United
States, Princeton, NJ: Princeton University Press.
McGill, W.J. (1954), ‘Multivariate information transmission’, Psychometrika, 19
(2), 97–116.
Miles, I., N. Kastrinos, K. Flanagan, R. Bilderbeek, P. Den Hertog, W. Hultink
and M. Bouman (1995), Knowledge-Intensive Business Services: Users, Carriers
and Sources of Innovation, European Innovation Monitoring Service, No. 15,
Luxembourg.
Mirowski, P. and E.-M. Sent (2001), Science Bought and Sold, Chicago, IL:
University of Chicago Press.
Nelson, R.R. (ed.) (1993), National Innovation Systems: A Comparative Analysis,
New York: Oxford University Press.
Nelson, R.R. (1994), ‘Economic growth via the coevolution of technology and
institutions’, in L. Leydesdorff and P. Van den Besselaar (eds), Evolutionary
Economic and Chaos Theory, London and New York: Pinter, pp. 21–32.
OECD (1986), OECD Science and Technology Indicators: R&D, Invention and
Competitiveness, Paris: OECD.
OECD (1996), New Indicators for the Knowledge-Based Economy: Proposals for
Future Work, DSTI/STP/NESTI/GSS/TIP (96) 6.
OECD (2000), Promoting Innovation and Growth in Services, Paris: OECD.
OECD (2001), Science, Technology and Industry Scoreboard: Towards a Knowledge-
based Economy, Paris: OECD.
OECD (2003), Science, Technology and Industry Scoreboard, Paris: OECD.
OECD/Eurostat (1997), Proposed Guidelines for Collecting and Interpreting
Innovation Data, ‘Oslo Manual’, Paris: OECD.
Pavitt, K. (1984), ‘Sectoral patterns of technical change’, Research Policy, 13 (6),
343–73.
Pugh, D.S. and D.J. Hickson (1969), ‘The context of organization structures’,
Administrative Science Quarterly, 14 (1), 91–114.
Pugh, D.S., D.J. Hickson and C.R. Hinings (1969), ‘An empirical taxonomy of struc-
tures of work organizations’, Administrative Science Quarterly, 14 (1), 115–26.
Schumpeter, J. [1939] (1964), Business Cycles: A Theoretical, Historical and
Statistical Analysis of the Capitalist Process, New York: McGraw-Hill.
Schwartz, D. (2006), ‘The regional location of knowledge based economy activities
in Israel’, The Journal of Technology Transfer, 31 (1), 31–44.
Shannon, C.E. (1948), ‘A mathematical theory of communication’, Bell System
Technical Journal, 27 (July and October), 379–423 and 623–56.
Storper, M. (1997), The Regional World, New York: Guilford Press.
Suárez, F.F. and J.M. Utterback (1995), ‘Dominant design and the survival of
firms’, Strategic Management Journal, 16 (6), 415–30.
Integration mechanisms and performance assessment 313
Since the early 1990s, the network metaphor has inspired a variety of
theoretical, methodological and technical developments. Despite their
differences, three seminal approaches tend to converge on the notion
of networks as complex systems: social network analysis, the study of
networks as social coordination mechanisms, and actor-network theory.
These approaches provide some theoretical foundations for the study of
integration mechanisms and network performance.
Social network analysis developed from the notion that networks are
systems of bonds between nodes and that bonds are structures of interper-
sonal communication. Nodes can be individuals, collective entities (e.g.
organizations or countries), or positions in the network. This approach
has centred on the morphological dimension of networks, asking how
actors are distributed in informal structures of relations and where the
boundaries or limits of a network lie. Its main concerns include the
operationalization, measurement, formalization and representation of ties. Only
recently has this approach developed mathematical models and tools to
analyse network dynamics.
The dominant image in this literature is one of a dense, egocentric
network, comprising homogeneous actors linked by strong (intense) ties.
One of the most interesting problems analysed through this approach is
‘the strength of weak ties’ (Granovetter, 1973). A tie with a low level of
interpersonal intensity, it is argued, may have a high level of informative
strength if it is a ‘bridge’, that is to say, the only link between two or more
groups, each of them formed by individuals connected by strong bonds.
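Granovetter's 'bridge' has a direct computational reading: a tie whose removal leaves its endpoints disconnected. A brute-force sketch, for illustration only (not code from the chapter):

```python
from collections import defaultdict

def bridges(edges):
    """Return the ties whose removal disconnects their endpoints
    (Granovetter-style 'bridges'), by edge removal plus graph search."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def connected(a, b, banned):
        """Can a reach b without traversing the banned edge?"""
        seen, stack = {a}, [a]
        while stack:
            node = stack.pop()
            if node == b:
                return True
            for nxt in adj[node]:
                if nxt not in seen and frozenset((node, nxt)) != banned:
                    seen.add(nxt)
                    stack.append(nxt)
        return False

    return [e for e in edges if not connected(e[0], e[1], frozenset(e))]

# Two tightly knit groups joined by a single weak tie:
group1 = [("a", "b"), ("b", "c"), ("a", "c")]
group2 = [("x", "y"), ("y", "z"), ("x", "z")]
weak_tie = [("c", "x")]
print(bridges(group1 + group2 + weak_tie))  # [('c', 'x')]
```

Here only the weak tie between the two triads is a bridge; every within-group tie is redundant, which is exactly why its informative strength is high despite its low interpersonal intensity.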
Seeing networks as complex organizations, Burt (1992) put forward
the notion of ‘structural holes’: sparse regions located between two or
more dense regions of relations and representing opportunities for the
Trust or Translation?
People from industry always claim to know what the problem is from a non-
technical standpoint. But if you get into it, you realize that what lies behind is
very different . . . Sometimes, they think they have a problem of processes, and
perhaps it is in fact a problem of materials; or they see it as a problem of
materials and indeed it is a problem of characterisation . . . To break down [the problem]
also allows us to identify other problems, of which they were probably unaware
even though the problems were about to explode.
Negotiation
Deliberation
Differences are solved by seeking more information and studying it more care-
fully. That is, everybody has to learn how to understand the others . . . We must
understand how to measure and evaluate things . . . But this is achieved through
studying, and afterward, in some meeting, talking more deeply about mistakes,
definitions, and so on.
You have to back up your proposals with indicators . . . You have to show
the viability of numbers.
allow networks to embed deliberation in their daily life and to take full
advantage of its potential for producing legitimate decisions.
Deliberation has several advantages over other forms of collective
decision-making. In contrast to voting, where the acceptance of majority
decision by the minority is a structural problem, negotiation and delibera-
tion do not create absolute losers. But whereas the main goal of negotia-
tion is to reach a compromise among conflicting interests, the main goal of
deliberation is to convince the other partners. Thus collective agreements
reached through deliberation are self-enforcing and therefore less vulner-
able to unilateral action, which is a weakness of negotiated agreements.
Deliberation may reinforce the efficacy of networks. According to
Weale (2000, p. 170), under certain conditions, transparent processes
based on deliberative rationality must lead to solutions that are function-
ally efficacious in most cases. This will happen if the solution to a given
problem complies with the following conditions: it must arguably belong
to the set of those decisions that may be reasonably chosen, even if there
were other options that could have been reasonably chosen as well; it must
be open to scrutiny by those affected or benefited by it. If this is the case,
then negotiation and the pressure for unanimity are irrelevant to the extent
that their potential outcomes belong to the set of decisions that may be
made through deliberation.
However, deliberation has important drawbacks. Agreements often
exact a heavy price, as they are usually achieved through long and compli-
cated processes of discussion. Besides, there is always the risk that deliber-
ation may lead to non-decisions (Jachtenfuchs, 2006). Deliberation is not
only a time-consuming activity; it also requires energy, attention, infor-
mation and knowledge, which have been considered scarce deliberative
resources (Warren, 1996).
Moreover, by stimulating public discussion, deliberation may intensify
disagreement and increase ‘the risk that things could go drastically wrong’
(Bell, 1999, p. 73). It may even create disagreement where there was none.
It can impede or at least complicate the adoption of rules guiding collec-
tive discussion and decision-making, which form the basis for subsequent
deliberations.
Although it has been considered that collective agreements reached
through deliberation are ideal for heterogeneous actors, several interview-
ees think that some differences among participants were never solved.
But this apparent deficit of efficiency is perhaps a structural problem of
knowledge networks. Members frequently complain that to reach agree-
ments they have to participate in multiple committees of all sorts and to
spend much time discussing in formal and informal settings. This is not
surprising, as knowledge networks are characterized by a permanent
NETWORK PERFORMANCE
1. Normative dimension. Whether decisions and actions are right: the extent
to which they comply with the normative standards of participants.
2. Technical dimension. Whether they are true or accurate: how suc-
cessful they are in solving the problems that the network is meant to
address and in finding correct answers to relevant questions.
3. Exchange dimension. Whether they are profitable: how much they
satisfy the interests of all participants and how well they deal with
their concerns.
Actions and decisions must perform reasonably well in each of the three
dimensions simultaneously. A good decision or action should be right,
true or accurate, and profitable.9 An action that is judged normatively
sound but fails to bring about accurate solutions to the problems or profit-
able results to participants would be practically useless. One that has tech-
nically accurate and profitable consequences but violates norms and rules
that are fundamental to any of the participant entities may undermine the
collaborative project.
Moreover, given that these networks bring together people from dif-
ferent institutional settings, each of the three dimensions listed above
necessarily comprises a variety of standards. The norms and values held
by academic and business organizations are obviously different, and all of
them must be taken into account when determining whether a decision or
action was right or wrong. Similarly, to determine whether a given deci-
sion was correct, it is necessary to consider the definitions of truth and
accuracy that prevail in all the participant entities. The same holds for
finding out whether the results of an action were profitable, since universi-
ties and firms have distinct interests and goals, which result in different
views of what a ‘profit’ should be.
Equally, or even more importantly, those dimensions also comprise the
standards created by the network itself. The relevant norms, the nature
of the technical problems to be solved, and the interests and goals of par-
ticipants are defined, shaped and transformed by means of the interaction
itself. To the extent that the interaction crystallizes into a genuinely new
entity and becomes autonomous from its sponsors, the network acquires
its own performance standards.
Organizational Performance
Decisions and instrumental actions are not the only results of networks
that merit attention when evaluating their performance. It is equally
important to observe whether, in making or undertaking them, the
network preserved, or undermined, the opportunities for future collabora-
tive exchanges.
Mechanism Criteria
Trust ● Production of normative, strategic and technical trust
● Balance among the three kinds of trust
Translation ● Creation of common languages
● Institutionalization of translation
● Training of individual translators
● Diminishing interpretative flexibility
Negotiation ● Reciprocity: respect for the legitimate particular interests
of participants
● Production of rules for future negotiations
● Creation of mechanisms and sites for conflict negotiation
Deliberation ● Equal opportunity to participate in decision-making
● Definition of collective interests, objectives and problems
● Creation of institutions for deliberation
As Table 12.2 suggests, this means, in the first place, asking whether the
operation of the network satisfied the fundamental standards or ideals of
the four integration mechanisms analysed above: trust, translation, nego-
tiation and deliberation. But it also depends on whether it strengthened or
weakened the conditions necessary for the operation of such mechanisms.
Trust can be reinforced by use – but it may also be destroyed when used
improperly. Seemingly efficient solutions may violate some basic prin-
ciples of fairness, discrediting some of the participants or contravening
important values. More subtly, a decision or action may alter the balance
among trust dimensions that is necessary for a network to maintain itself
and operate efficiently in the medium term. It may, for instance, reinforce
strategic trust at the expense of normative or prestige-based trust. Thus
the preservation, creation, or destruction of trust is a crucial criterion for
evaluating network performance.
A further criterion is the degree to which the network created common
languages, institutionalized the function of translation, and trained
individual translators, thereby facilitating future collaboration.
Similarly, the interaction may define collective interests, objectives,
problems and solutions in a way that facilitates future deliberation. It can
also reinforce norms (equality, respect, reciprocity, openness) and institu-
tions that facilitate deliberation; particularly important is the equal oppor-
tunity to participate in decision-making (arguably ‘the most fundamental
condition’ of deliberation).
Decisions may be made according to the principle of reciprocity, which
Dimensions Criteria
Autonomy ● Self-regulation capacities (organizational autonomy)
● Self-selection capacities (individual autonomy)
Network ● Stabilization of the network
development ● Organizational learning
● Creation of new networks
Learning ● Individual acquisition of organizational skills and
knowledge
CONCLUSION
NOTES
2. For a further analysis of these findings, see Luna and Velasco (2005).
3. For a broader analysis of this topic, see Luna and Velasco (2003).
4. For further analysis on the relationship between trust and translation, see Luna and
Velasco (2006).
5. Half of the interviewees spontaneously said that these decisions were collectively made.
The other participants gave diverging replies when asked about the source of such deci-
sions, even when they were referring to the same collaborative project.
6. That is, agreements about which there is no expressed opposition by any participant, or
agreements that result from the sum of differences.
7. According to Dryzek (2000, p. 134), ‘the most appropriate available institutional
expression of a dispersed capacity to engage in deliberation . . . is the network’.
8. On this topic, see Magnette (2003a, 2003b), Eberlein and Kerwer (2002), and Smismans
(2000).
9. According to Weber (2005, p. 51), ‘The efficiency of the solution of material problems
depends on the participation of those concerned, on openness to criticism, on horizon-
tal structures of interaction and on democratic procedures for implementation.’
10. ‘Organizational effectiveness is a prerequisite for the organization to accomplish its
goals. Specifically, we define organizational effectiveness as the extent to which an
organization is able to fulfill its goals’ (Lusthaus et al., 2002, p. 109).
11. Oxford English Dictionary Online, 2006.
REFERENCES
copyleft license see Generalized Public License (GPL)
copyright 17, 32
  see also patents/patenting
cotton research centre (Raleigh–Durham) 152
Cowan, R. 64, 291
Craig Venter 149
creative destruction 101, 146–7
Crespi, G.A. 75, 76
Crystal Palace Exhibition, London (1851) 133–4
da Vinci, Leonardo 31
Dahlman, C.J. 105
Dalle, J.M. 167
Dasgupta, P. 4, 172, 292, 316
Data Resources, Inc 183
Davenport, Sally 23
David, P.A. 4, 5, 126, 170, 172, 182, 191, 291, 292
De la Mothe, J. 143
De Liso, N. 110
de Solla Price, Derek J. 277
Dealers, National Association of 113
Deane, P. 272
Dearborn, D.C. 56
Debreu, Gérard 265
decision-making (and) 58–60, 78, 101, 264, 277, 312, 315, 329, 331
  centralization of 223
  collective 320–325
    bargaining/negotiation 320, 321–2
    deliberation 320, 322–5
    voting 320
  hierarchical 100–101, 103
  irrational escalation 59
  polyarchic 100–101
  rational 277
  regret and loss aversion 58–9
  risk 58
  temporal discounting/‘myopia’ 59–60
dedicated biotechnological firms (DBFs) 19, 144–8, 150, 152–3, 155, 160, 162
  exploitation-intensive 156
  financing of 156
Defense Research Projects Agency (DARPA) 11
  and SUN, Silicon Graphics and Cisco 11
definition of
  knowledge as production and distribution 268
  territorial economy 293
Denison, E.F. 272
DiMinin, A. 7
dogmatism 48, 56–7
Dolfsma, W. 24, 295, 302
Dosi, Giovanni 17, 127, 136, 137, 138
dot.com bubble 147
Dow 151
Dryzek, J.S. 323
dual careers 21
Duchin, F. 277
due diligence 85
DuPont 31
Eastman Kodak 31
Economic Co-operation and Development, Organisation for (OECD) 2, 144, 218, 219, 228, 230, 234, 262, 291, 295, 296, 297, 305, 306
  STI Scoreboards of 306
econometric software packages (case study) 184–9
  EasyReg 186–7
  price discrimination in 189
  protection of 185
  statistics for 186
education 21, 23, 263, 273–6, 279
  categories/sources of 267
  economics of 262
  higher 225, 230, 239
  productivity of 275
Eisenberg, R.S. 122, 135, 180
Eising, R. 314
Elster, J. 59, 320, 322
Endres, A.M. 272
entrepreneurial science 201–2, 207
  origins of 206
entrepreneurial scientists (and) 201–17
  academic entrepreneurial culture 212–14
  academic world 201–2
  forming companies 202–5
  industrial penumbra of the university 209–11
Index 339
  state as facilitator (of technological products) 225–7
  state as regulator 224–5
  methodological appendix for 239–42
research and development (R&D) 1–2, 8, 22, 32, 77, 85–6, 89, 91, 92, 124, 127, 130, 132, 134–5, 147, 156, 160, 161, 205, 207
  collaboration 67
  expenditure 143, 144, 295
  investment 80–81, 122, 123
  and IPR 86
  -intensive corporations/firms 80, 83
  national policies 218–42
  policy issues 275–6
  spending, growth in 133
  statistics 280 see also France; Germany; United Kingdom (UK)
research and development (R&D): national policies 218–42
  analytical framework: construction of policy-making conventions 219–21
  public regimes of action in the UK, Germany and France 228–34
  see also France; Germany; research, development and innovation (RDI) policies; United Kingdom (UK)
research institutes
  Burnham 145
  Gottfried-Wilhelm-Leibniz Association of Research Institutes 231
  La Jolla 145
  Salk 145, 154, 158, 159
  Scripps Research Institute 145, 155, 158, 159
  Torrey Mesa Research Institute (San Diego) 152
reasoning 56–8
  causal 57–8
  deductive 57
  probabilistic 56–7
Richardson, G.B. 110
Richter, R. 109
Rip, A. 47
risk 16, 66–7, 85–6, 99–100, 102–3, 324
  adversity 15
  aversion 85, 108
  capital 1
  management 109, 111, 249
  perception 66
  propensity 58–9
  of short-sightedness and merchandization in UK 228–30
Rogers, E.M. 269
Rolleston, William 250
Rohm & Haas 153
Rosenberg, N. 32, 34, 35, 36, 61, 62
Ross, B.H. 45
Ruggles, N. 272
Ruggles, P. 272
Rumain, B. 44
Ryan, C. 152
Ryle, G. 23, 264–5, 266, 267
Sable, C. 316
Sah, R.K. 100
Salais, R. 220, 224, 226
Sampat, B.N. 74
Samson, A. 248, 250
Samuelson, Paul 265
San Diego 156
Sassen, S. 160
Sauvy, A. 272
Saxenian, A.L. 226
schemas 44–5
  pragmatic 44
Schippers, M.C. 53
Schmitter, P.C. 314
Schmoch, U. 227
Schmookler, J. 114
Schoenherr, R. 295
Schultz, T.W. 273, 275
Schumpeter, J.A. 101–2, 121, 146–9, 292, 297
  and post-Schumpeterian symbiotics 144
Schumpeterian(s)
  corporation 102
  innovation/entrepreneurship 161
  insights 147
  model 19
  and Neo-Schumpeterian innovation theorists 146, 148
  tradition 148