A number of measures, called performance indicators, are usually devised in order to simplify the evaluation process and make it more easily readable by non-expert groups (e.g. investors). "Simply put, performance indicators are measures that describe how well a programme is achieving its objectives. Indicators are usually quantitative measures but may also be qualitative observations. They define how performance will be measured along a scale or dimension" (USAID Center for Development Information and Evaluation, 1996). The question raised in this essay can be formulated as follows: is it possible to rely on performance indicators without evaluation itself, and what would be the consequences? To answer that question, the essay first clarifies the concept of evaluation, its development in research policy, its relation to performance indicators (PIs), and the limitations of PIs, and finally demonstrates with the help of two examples that substituting evaluation with PIs alone would lead to the decline of investor-funded science itself.
Let us first become acquainted with the concept of evaluation by answering a simple question: what is evaluation, and why do we need it in research? Generally, evaluation can be defined as follows: "Evaluation is the systematic acquisition and assessment of information to provide useful feedback about some object" (Trochim, 2002). In other words, evaluation provides the interested parties with feedback that is useful, i.e. that helps in the decision-making process. This leads us to the answer to the second part of the question: evaluation is needed in research to make funding policy more effective. If the evaluation processes provide correct feedback about the usefulness of candidate scientific projects, then the most 'useful' projects will receive funding, which will lead to the development of 'useful' science. The word 'useful' is placed in quotation marks advisedly, as it raises another important question: what science can be called useful? That question, however, lies outside the scope of this essay.
Broadly, evaluation can be divided into two types: formative and summative. Whereas formative evaluation examines the delivery of the project or technology, the quality of its implementation, and the assessment of the organizational context, personnel, procedures, inputs, and so on, summative evaluation analyses the effects of the project, determining its overall impact (Trochim, 2002). Each of these types benefits from the use of performance indicators, because to determine both the implementation and the impact, a number of measures have to be devised.
Development of evaluation
It is evident that the evaluation process itself constantly undergoes change. To put it differently, the emphasis of evaluation shifts in accordance with the current research evaluation policy. "In most European countries an 'evaluation culture' in science, technology and innovation policies has evolved since the 1980s, including the ex post evaluation of research programmes and other policy initiatives, the evaluation of R&D centres and universities, and the evaluation of R&D funding agencies" (Kuhlmann, 2000). Rip characterises the changes in R&D evaluation through the use of a triangular metric with accountability, strategic change,