A methodological perspective on the evaluation of the promotion of university-industry-government relations

Amsterdam School of Communications Research (ASCoR)
Kloveniersburgwal 48, 1012 CX Amsterdam, The Netherlands
Small Business Economics (forthcoming)
Abstract
Evaluation
criteria can be expected to differ with the institutional perspectives in
university-industry-government relations. How can one use evaluation for the
improvement of the innovative capacity of these networks? Indicators used for
the evaluation can be specified as variables in a model. The model can be
used, among other things, to distinguish between intended and unintended
outcomes of the practices under study. Institutionalized arrangements generate
filters which stimulate innovation selectively. A focus on failures is fruitful
for knowledge-based innovation since it allows for the further specification of
expectations. The latter can also be turned into research questions.
University-industry-government
relations can be considered as complex and adaptive networks of communication.
Three functions have to be fulfilled by these systems: knowledge production,
wealth generation, and (public and/or private) control at the relevant
interfaces. However, these functions no longer prescribe who takes which role.
While the institutional missions of the different carriers imply a division of
labour among them, terms like the “entrepreneurial university” and the
“knowledge economy” indicate that one can no longer assume a one-to-one
correspondence between functions and institutions. The function of knowledge
generation, in particular, seems to be shifting in this era of globalization.
In a complex
arrangement one is no longer justified in assuming that the objectives of
evaluation are shared among the partners involved. Different expectations can
be adjusted to each other in new forms of collaboration and knowledge transfer,
but the carrying agencies also develop along their own trajectories according
to their institutional rationalities. The science system itself is changing
while entrained in these social transformation processes (e.g., Gibbons et al., 1994; Etzkowitz &
Leydesdorff, 1997). The new techno-sciences, like biotechnology and computer
science, do not build on strong institutional frameworks of disciplines formed
over decades. Different bodies of knowledge are selectively recombined in new
programs, which are mission-driven with reference to social problems and
scientific puzzle-solving. Issues of quality control and validation across
boundaries can then be expected to become pressing (Fujigaki & Leydesdorff,
2000).
The evaluation of performance, relations, and systems
The discussions
and negotiations among the partners have an analytical and a normative
component. While one option can be assessed as synergetic from the perspective
of the collaboration, one may nevertheless wish to pursue one’s specific
interests. At other moments, one may have good reasons to compromise. Not only
the perspectives, but also the relevant partners can change over time. These
dynamics of both the participants and the participation further complicate the
evaluation.
Although
functionality can be expected to prevail in the long term, interests at each
moment in time (or during shorter time spans) provide an institutional
criterion for the evaluation. Questions can be raised like “What is best for us
now?” or “Can we afford this investment?” The meta-question of how one
defines the “us”, the “best”, or the “now” in such questions leads to further
questions that do not necessarily have simple answers.
How can one
proceed in the case of the evaluation of triple helix-type arrangements of
university-industry-government relations? Let me contribute to these questions
from a methodological perspective. In my opinion, there is first the problem of
the nature of the indicators. In other words: what is indicated by the
indicators? Second, a reflexive model of how the indicated variables are
related is always running in the background of the analysis. The indicators
refer to variables of a system represented in terms of these indicators. Third,
the evaluation may focus on the intended or the unintended outcomes (e.g., external costs). In the latter case, the
variable language and geometrical metaphors have to be replaced with a language
of contexts and fluxes. If everything is in flux, what can then function as a
baseline for the evaluation? How can one compare between different points in
time?
Indicators
The choice of an
indicator implies by definition a decision that can be discussed reflexively.
The data never speak for themselves. For example, in the measurement of
scientific output we have witnessed competition among schools measuring
scientific performance in terms of publications, citations, and/or keywords (Leydesdorff,
1995). The various indicators address different layers of the scientific
communication system. Relational indicators like co-authorship and co-word
relations indicate network links, while performance indicators point to agency
at the nodes.
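The distinction can be illustrated with a minimal sketch in Python (using the networkx library); the publication data are invented solely for this illustration and do not stem from the studies cited above.

    import itertools

    import networkx as nx  # standard network-analysis library

    # Invented toy data: papers and their author lists (illustration only).
    papers = {
        "p1": ["A", "B"],
        "p2": ["A", "C"],
        "p3": ["A", "B", "C"],
        "p4": ["D"],
    }

    # Performance indicator: publication counts attributed to the nodes.
    counts = {}
    for authors in papers.values():
        for author in authors:
            counts[author] = counts.get(author, 0) + 1

    # Relational indicator: co-authorship links among the same authors.
    G = nx.Graph()
    for authors in papers.values():
        G.add_edges_from(itertools.combinations(authors, 2))

    print(counts)             # {'A': 3, 'B': 2, 'C': 2, 'D': 1}
    print(sorted(G.edges()))  # [('A', 'B'), ('A', 'C'), ('B', 'C')]

In this invented example, author D performs (one publication) but remains invisible in the network: the two kinds of indicators address different layers of the same system and cannot substitute for one another.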
Similarly, in the
case of patenting one may wish to focus on patent portfolios, on patent
clusters, on technological trajectories in terms of patent citations, etc. The
various representations lead to different appreciations of the systems under
study. One generates different (and sometimes perpendicular) windows on a
complex dynamics. In many cases, the observed structure can be considered so
robust that one finds similar structures however one constructs the indicator.
By various indicators, for example, AT&T is a large company. However, the
obvious cases do not provide us with much information. It is precisely when one
is uncertain about the differences that indicators are needed.
The analyst can
provoke the specification of the (implicit) theory by shifting attention
from the observations to the expectations. Which system does one expect to be
indicated by the indicators? The elaboration of this reflection improves the
quality of the indicators because the theoretical reflection may enable us to
specify the pros and cons of the specific measurement instrument. The
measurement can then be geared to a specification of what one wishes to
measure.
Models
Although the
choice of a specific indicator can sometimes be made on pragmatic grounds (e.g.,
because of the availability of rich data), one always has to give reasons why a
specific measurement would be valuable for the assessment. Does one wish to
measure output or input? Input and output indicators can be related in terms of
efficiency (= output/input) or in terms of other concepts (e.g.,
throughput). Furthermore, the evaluation should inform us about policy options,
and therefore, provide information about relations between input (that is, independent) variables and output (that is,
dependent) variables.
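These distinctions can be made concrete in a minimal sketch (Python with numpy); the indicator values are invented, and the linear specification is only one of many possible models.

    import numpy as np

    # Invented indicator values for five collaborations (illustration only).
    funding = np.array([100.0, 150.0, 200.0, 250.0, 300.0])  # input indicator
    patents = np.array([3.0, 4.0, 7.0, 8.0, 11.0])           # output indicator

    # Efficiency relates output to input for each unit under evaluation.
    efficiency = patents / funding
    print(efficiency.round(3))

    # A model treats the input as the independent variable and the output as
    # the dependent one; a least-squares slope then indicates the marginal
    # output to be expected per additional unit of input.
    slope, intercept = np.polyfit(funding, patents, 1)
    print(f"expected additional output per unit of input: {slope:.3f}")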
As noted, one can
expect the relevant output to be defined differently on the two sides of an
interface. For example, a university department may engage in relations with
industry for a number of reasons. One of them can be purely financial, but
there may also be reasons like opening new domains for future research,
providing students with career opportunities, or even more idealistic ones
like providing the regional environment with knowledge inputs. Whatever the
incentives on this side, the firms involved will have another set of
objectives. These may be complementary to those of the university, but they can only
be made compatible because they are also different. Thus, the efficiency of the
cooperation cannot be measured in terms of a single set of indicators. Other
partners can be expected to entertain different indicators for the performance
measurement.
Can a government
agency take the role of an ‘objective’ arbiter? Government agencies, however,
entertain bureaucratic criteria for management. For example, one can always
raise the question of whether the policy objectives have been achieved. Has
unemployment gone down, perhaps independently of the quality of the change
processes accomplished by the knowledge-intensive SMEs? The political
discourse provides criteria for the evaluation different from economic
performance and/or scientific and technical excellence.
Second-order evaluation
In a second-order
model one assesses not only the cooperation, but also the feedback effects
of the cooperation on the development of the partners themselves. For example,
one can raise the question of whether a liaison office had an effect on
academic research in the broader university context. Did the transfer officers
only generate a clientele of academic advisors or did the transfer also have an
impact on higher education? If so, how would one be able to measure this
feedback?
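The difference between the two orders of evaluation can be caricatured in a toy model (Python; the parameters and the linear form are assumptions made only for the sake of the sketch): cooperation generates transfer, and the transfer feeds back on the capacity of the academic partner.

    # Toy difference-equation model of second-order feedback (numbers invented).
    alpha = 0.05  # assumed feedback strength of transfer on research capacity
    beta = 0.10   # assumed rate at which capacity generates transfer activity

    capacity = 1.0
    for t in range(10):
        transfer = beta * capacity    # first-order: output of the cooperation
        capacity += alpha * transfer  # second-order: feedback on the partner
        print(t, round(transfer, 4), round(capacity, 4))

    # With alpha = 0 the model reduces to a first-order evaluation in which
    # the partner is assumed to remain unchanged by the cooperation.

Measuring such a feedback empirically would require indicators of the partner’s own development (for example, changes in research profiles or curricula) in addition to indicators of the cooperation itself.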
Analogously, one
can raise the question of what it means for small businesses to turn to the
university for advice. How can one prevent a dependency relation as an
unintended outcome? Are the shape and the substance of the relation different
for knowledge-intensive industries than for enterprises which have no
academics among their employees? What difference does it make for an
enterprise to turn to a university, to a branch organization, or to a
knowledge-intensive firm? Perhaps the latter is better equipped than a
university department to advise on turning theoretical knowledge into practice.
These issues have to be seriously discussed in an evaluation of the pros and
cons of ongoing collaborations.
Note that in a
second-order design, the focus is no longer on what happened, but on what could
have happened or what did not (yet) happen. Each solution can be considered as
a suboptimal one in a phase space of other possible solutions, and at the
structural level the specific solution locks us into arrangements which may
inhibit further innovation (Leydesdorff, 2001). A university transfer office,
for example, can easily turn into a gatekeeper that hinders direct
communication between university staff (and students) and entrepreneurs by
monopolizing the communication (e.g., for administrative reasons).
The Internet
provides us with opportunities to address expertise directly on a worldwide
scale, and knowledge-intensive firms are often able to do so. At the University
of Amsterdam, for example, we once experimented with disclosing information
about available expertise and research profiles on floppy disks and CD-ROMs,
but the university administration resisted the inclusion of contact information
like direct telephone numbers because it was feared that the academic staff
would not be able to charge the right prices for their services and advice. Thus,
these second-order considerations may have very concrete implications: how does
the new operation fit into existing routines?
If successful,
the development of the relations can be expected to disturb the normal routines
of the relating agencies. Each system has various options for handling these
disturbances. One can try, for example, to construct an interface like a
transfer office for regulating the flows of communication or one can use the
input for educational reform, e.g., in the case of exploring new markets for
student enrolments. Similarly, the enterprise involved in the collaboration
may wish to entertain a window on the market of relevant knowledge and
expertise or it may wish to use this window as a competitive advantage.
Thus, one returns
to the questions of what is being measured by the indicators and what is being
promoted by the policies. Whose interests are served by which promotion? How
can interests be aligned for the further development of a knowledge-based
economy, for example, at the level of a region? But analogously: which
institutions and (old-boy) networks have hitherto prevented a free flow of
information across institutional boundaries; which elements of the system are
the next candidates for “creative” destruction; which institutions should be
devolved? Such questions require a focus on failures in addition to an
evaluation of the “best practices.”
References
Etzkowitz, Henry and Loet Leydesdorff (eds.), 1997, Universities and the Global Knowledge Economy: A Triple Helix of University-Industry-Government Relations. London: Cassell Academic.
Fujigaki, Yuko and Loet Leydesdorff, 2000, ‘Quality Control and Validation Boundaries in a Triple Helix of University-Industry-Government Relations: ‘Mode 2’ and the Future of University Research,’ Social Science Information 39(4), 635-655.
Gibbons, Michael, Camille Limoges, Helga Nowotny, Simon Schwartzman, Peter Scott, and Martin Trow, 1994, The New Production of Knowledge: The Dynamics of Science and Research in Contemporary Societies. London: Sage.
Leydesdorff, Loet, 1995, The Challenge of Scientometrics: The Development, Measurement, and Self-Organization of Scientific Communications. Leiden: DSWO Press, Leiden University (2nd edition at <http://www.upublish.com/books/leydesdorff-sci.htm>, forthcoming).
Leydesdorff, Loet, 2001, A Sociological Theory of Communication: The Self-Organization of the Knowledge-Based Society. uPUBLISH.COM: Universal Publishers; at <http://www.upublish.com/books/leydesdorff.htm>.