Academic performance measurement

Written by Govert Valkenburg of the Centre for Science and Technology Studies, Leiden University

The past decades have witnessed a transition to so-called New Public Management in public universities, a paradigm that applies corporate management principles to academia so as to ensure that tax money is well spent on research and education. With this paradigm, the need to measure and assess the performance of researchers, and the impact of their research, grew (Gläser & Laudel, 2007). Today, researchers are typically required to provide evidence of their output, how it compares to that of their peers, and how they contribute to the reputation of their institution.

One important proxy for the amount of output is simply the number of publications a researcher produces (Maimela & Samuel, 2016; Tijdink, Verbeke, & Smulders, 2014). The number of times a publication is cited also matters. Combining these two produces a third measure, the h-index: the largest number h such that at least h of a researcher's papers have each been cited at least h times. An h-index of 11 thus means that of all the papers a researcher has produced, at least 11 have been cited 11 times or more. While this may appear to be a clever measure, it is also very abstract, and it remains debatable what it really tells us. Yet, even though it may ultimately measure quantity more than quality, the h-index is taken very seriously in some fields of academic research (Ball, 2012).
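To make the definition concrete, the following minimal sketch computes the h-index from a list of citation counts. It is written in Python purely for illustration; the source describes no code, and the example data are hypothetical.

```python
def h_index(citations):
    """Return the h-index: the largest h such that at least h papers
    have at least h citations each."""
    # Sort citation counts in descending order, then keep advancing
    # while each paper's count is at least its 1-based rank.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical record: eleven papers cited 11 or more times,
# the remaining four cited fewer times, giving h = 11.
print(h_index([25, 20, 18, 15, 14, 13, 12, 12, 11, 11, 11, 9, 5, 2, 0]))
```

As the sketch shows, the h-index discards most of the information in the citation distribution: a handful of very highly cited papers and a long tail of uncited ones leave the number unchanged, which is one reason its interpretation remains debatable.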

Recognizing the shortcomings of these and other quantitative quality measures, researchers as well as academic administrators have called for criteria other than mere publication output (Benedictus, Miedema, & Ferguson, 2016; Halffman & Radder, 2015; Hicks, Wouters, Waltman, de Rijcke, & Rafols, 2015). For example, leadership and mentorship, quality in education, societal outreach and responsibility, and contributions to inclusivity and gender balance can be included among the criteria by which academics are evaluated. However, such criteria usually offer no quick way of evaluating and measuring; instead, they require methods such as portfolio building, narrative self-reports, and the gathering of extensive peer feedback. This is time-consuming, and it can feel unfair because of a perceived lack of transparency.