Consequences of evaluation
Regardless of whether people are assessed by metrics or by more qualitative criteria, the fact that they are assessed will lead them to orient their actions towards that assessment (Wullum Nielsen, 2018). In other words, if you judge and promote people only by their h-index, it requires little imagination to see that they will simply work hard to boost their h-index. What is more, since assessment usually feeds into the distribution of scarce goods – think of research grants, research positions, promotion to the next level of seniority – such evaluation is de facto turned into a competition. This is indeed a widely heard complaint about present-day academia: that it is so competitive.
While competition may be stimulating in some sense – it makes people try harder – it also has important downsides. Obviously, it may make people try too hard and suffer unhealthy levels of work pressure. In addition, it may lead people to show risk-averse behaviour (Moore, Neylon, Eve, O’Donnell, & Pattinson, 2017). If only published outcomes matter, and if these outcomes are crucial to making the next step in one’s career, then it becomes a rational choice to do research that is certain to produce outcomes. This is at odds with doing innovative research, which may lead to surprising outcomes but may also fail to produce any outcomes at all.
A final undesired consequence of evaluation per se is that many criteria pretend to be neutral and universal, whereas in fact they match the practices of some disciplines much better than others. They may also de facto work differently for men than for women, if we take into account how men and women are treated differently in practice (Wullum Nielsen, 2018). While it can usually be convincingly explained that philosophers prefer writing books over articles, which leaves their h-indexes dwarfed by those of medical researchers, it still puts some disciplines at a disadvantage relative to others if university administrators ‘simply want an overview of how productive their respective departments are’. What is more, even if university governors are willing and able to pay proper attention to such differences, it remains the case that these metrics feed into the single numbers of rankings, to which their universities are subjected.
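Since the argument turns on how the h-index behaves across publication cultures, a minimal sketch of its standard definition may make the disparity concrete: a researcher’s h-index is the largest number h such that h of their publications have at least h citations each. The citation counts below are invented purely for illustration.

```python
def h_index(citations):
    """Return the largest h such that h publications have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this publication still has at least `rank` citations
        else:
            break
    return h

# A hypothetical philosopher: one heavily cited book, few other outputs.
print(h_index([120, 15, 4]))   # 3

# A hypothetical medical researcher: twenty moderately cited articles.
print(h_index([12] * 20))      # 12
```

The sketch shows why the measure rewards many moderately cited items over a few heavily cited ones: a single book, however influential, can raise the h-index by at most one.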