Ethics in Science and Environmental Politics


ESEP 8:pp11 (2008) · doi:10.3354/esep00088

Validating research performance metrics against peer rankings

Stevan Harnad*

Chaire de recherche du Canada, Institut des sciences cognitives, Université du Québec à Montréal, Montréal, Québec H3C 3P8, Canada, and Department of Electronics and Computer Science, University of Southampton, Highfield, Southampton SO17 1BJ, UK

ABSTRACT: A rich and diverse set of potential bibliometric and scientometric predictors of research performance quality and importance is emerging today—from the classic metrics (publication counts, journal impact factors and individual article/author citation counts) to promising new online metrics such as download counts, hub/authority scores and growth/decay chronometrics. In and of themselves, however, metrics are circular: they need to be jointly tested and validated against what they purport to measure and predict, with each metric weighted according to its contribution to their joint predictive power. The natural criterion against which to validate metrics is expert evaluation by peers; a unique opportunity to do this is offered by the 2008 UK Research Assessment Exercise, in which a full spectrum of metrics can be jointly tested, field by field, against peer rankings.
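The joint validation the abstract describes amounts to multiple regression (one of the paper's keywords): regress peer rankings on the full set of candidate metrics and read each metric's weight off as its contribution to the joint prediction. A minimal sketch of that idea, using entirely synthetic data (the metric names, weights, and sample size here are illustrative assumptions, not values from the paper):

```python
import numpy as np

# Synthetic illustration of validating metrics against peer rankings.
# All data below is made up for demonstration purposes.
rng = np.random.default_rng(0)
n_articles = 200

# Columns: e.g. citation count, download count, journal impact factor
# (hypothetical metrics, rescaled to [0, 1])
metrics = rng.random((n_articles, 3))

# Pretend peer rankings are driven mostly by the first two metrics,
# plus noise -- in practice these "true" weights are unknown.
true_weights = np.array([0.6, 0.3, 0.1])
peer_rank = metrics @ true_weights + rng.normal(0, 0.05, n_articles)

# Fit weights by least squares: each metric is weighted according to
# its contribution to the joint prediction of the peer ranking.
X = np.column_stack([metrics, np.ones(n_articles)])  # add intercept
weights, *_ = np.linalg.lstsq(X, peer_rank, rcond=None)

# R^2: how much peer-ranking variance the metrics jointly explain
pred = X @ weights
r2 = 1 - np.sum((peer_rank - pred) ** 2) / np.sum(
    (peer_rank - peer_rank.mean()) ** 2
)
print(f"fitted weights: {weights[:3].round(2)}, R^2 = {r2:.2f}")
```

The fitted weights recover each metric's predictive contribution, which is the sense in which the RAE peer rankings could serve as a criterion for calibrating a full battery of metrics, field by field.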

KEY WORDS: Bibliometrics · Citation analysis · Journal impact factor · Metric validation · Multiple regression · Peer review · Research assessment · Scientometrics · Web metrics


ESEP THEME SECTION: The use and misuse of bibliometric indices in evaluating scholarly performance