We still know very little about who learns, what is learned, and how from the extensive evaluation of governmental programs. Our recent findings show that only two out of four types of learners actually learn, and that they learn different things. This is both good and bad.
Many governments have increased their efforts to evaluate policy programs. This is particularly the case for R&D, innovation and environmental programs, where governments have been investing substantial public resources. But why do governments need to undertake evaluation in the first place? Generally speaking, evaluation has two main goals. The first is accountability, that is, to make a retrospective assessment of the program in order to attain political legitimacy for that particular governmental intervention. The second is learning, that is, to draw lessons that can be used to improve the evaluated program. Yet the questions of who learns, what is learned and how from the evaluation of (research) programs remain largely unanswered.
In our recent paper, Steven Højlund and I have aimed to answer this question by looking at the agents of learning, namely, the learners. Our understanding is that, if we identify who they are, what they say and what they do, as well as how they do it, we will be able to understand patterns and forms of learning.
We selected three cases of most-similar EU program evaluations, all of which are directly or closely related to research activities. The EU Commission has created a single evaluation framework that is used in all its evaluation activities. This framework provides clear-cut guidelines for the process and goals of evaluations of EU-level policy programs. But as the context and needs of each evaluation are individual, there is some room for variation in the way these evaluation processes and goals are defined. These "differences in similarity" offer fantastic analytical possibilities for social scientists like us, interested in identifying patterns and deviations in them.
The first case we studied is the midterm evaluation of the Program for the Environment and Climate Action (LIFE). The second is the midterm evaluation of the environmental research program within the Framework Program 7 for Research and Development, conducted in 2010. The third is the interim evaluation (2009) of the Competitiveness and Innovation Framework Program (CIP) – Intelligent Energy – Europe (IEE).
Our findings show that only two types of actors involved in the evaluation are actually learning (program units and external evaluators), that learners learn different things (program overview, small-scale program adjustments, policy change, and evaluation methods), and that different learners are in control of different aspects of the evaluation (learning objectives and processes) according to the evaluation framework established by the European Commission.
Is this good or bad? It is both. It is good that some learners actually learn something from these costly evaluations, and that their learning seems relevant for improving the program. It is also good that they learn different things, securing a plurality of views and inputs in the process of improving the program in question. However, it is less good that only two out of four types of learners actually learn. Perhaps this indicates a division of labour between the actors who learn and the actors who control (recalling the overall accountability goal of evaluations). Yet determining this division of labour would require further actor-level analysis of learning and accountability roles in evaluation processes. It would also require serious theoretical development in order to bridge the current gap between the literatures on policy learning and on research evaluation.
The article:
Borrás, S. and Højlund, S. (2015): "Evaluation and Policy Learning – The Learners' Perspective", European Journal of Political Research, vol. 54 (1), pp. 99–120. doi: 10.1111/1475-6765.12076
You can read the article by clicking here.
Photo credit: Susana’s own