The Nuffield Council on Bioethics welcomes The Metric Tide, the report of the Independent Review of the Role of Metrics in Research Assessment and Management, published today, as a valuable contribution to the development and review of the assessment of academic research. The report of the Review provides a thoughtful analysis of the effects of the use of metrics on different aspects of research culture.

The Review draws upon and supports many of the findings of the Council’s project on The culture of scientific research in the UK, which included a survey of almost 1000 scientists and others, and a series of discussion events at universities around the UK.

"How research and researchers are assessed has a huge influence on what science is done, who does it and how they behave," said Ottoline Leyser, Professor of Plant Development at the University of Cambridge, and Chair of the Council's project on research culture. "We very much support the recommendation that HEI leaders should develop a clear statement of principles on their approach to research management and assessment, including the role of quantitative indicators, and should champion these principles and the use of responsible metrics within their institutions"

Comments on some of the specific conclusions of the Review are provided below.

“Inappropriate indicators can create perverse incentives”

We also found this to be true. We heard that publishing in high impact factor journals is still thought to be the most important element in determining whether researchers gain funding, jobs and promotions, along with article-level metrics such as citation counts. Combined with very high levels of competition in the research environment, participants in our project believe this is leading to:
  • poor quality research practices such as rushing to finish and publish research and the use of less rigorous research methods

  • some important kinds of research not being published or recognised, such as negative findings or replication studies, and multidisciplinary research

  • non-article research outputs such as datasets and patents being undervalued

  • the other activities of researchers that contribute to high quality science, such as mentoring, training, peer review and public engagement, being undervalued.

“Peer review, despite its flaws and limitations, continues to command widespread support across disciplines”

We also found strong support for the peer review system: 71% of our survey respondents believe that peer review is having a positive or very positive effect on scientists. However, participants also raised the need for a review of the system, and there is support for some of the new approaches already being used or piloted by journals, such as open peer review. Peer reviewers and grant assessment committee members need careful training and guidance to ensure they are aware of and follow assessment policies, and high quality peer review and committee service should be properly recognised and rewarded.

“HEI leaders should develop a clear statement of principles on their approach to research management and assessment, including the role of quantitative indicators, and should champion these principles and the use of responsible metrics within their institutions”

In drawing up a statement of principles, HEI leaders should:
  • ensure the track record of researchers is assessed broadly, without undue reliance on journal impact factors, in processes for making appointments, conducting staff appraisals and awarding promotions.

  • sign up to the principles of the Athena SWAN Charter and adopt other employment practices that support diversity and inclusion.

“HR managers and recruitment or promotion panels in HEIs should be explicit about the criteria used for academic appointment and promotion decisions. Like HEIs, research funders should develop their own context-specific principles for the use of quantitative indicators in research assessment and management and ensure that these are well communicated, easy to locate and understand.”

This is important. Throughout our project we found widespread misperceptions and mistrust among researchers about the use of metrics in research assessment. For example, many are unaware of, or do not trust, the instructions given to REF panels not to make any use of journal impact factors in assessing the quality of research outputs.

“Individual researchers should be mindful of the limitations of particular indicators”

We agree. When assessing the track record of fellow researchers, for example as a grant reviewer or appointments panel member, researchers should use a broad range of criteria, without undue reliance on journal impact factors.

“Publishers should reduce emphasis on journal impact factors as a promotional tool, and only use them in the context of a variety of journal-based metrics that provide a richer view of performance.”

Publishers should use a broad range of metrics to highlight journal and article strengths. They should also tackle biases in research publishing by considering ways to ensure that the findings of a wider range of rigorous research can be published.