Evaluations and follow-ups of public research are becoming increasingly frequent. Funding agencies demand that research at universities be carried out efficiently, maintain high quality, and stay on a par with the international competition. It has also become common for universities and university colleges to perform self-evaluations in order to identify strong research environments and guide strategic investments.
The growing competition for research funds within the scientific community has made the need for transparent and objective evaluation measures ever more apparent. Quantitative measures developed in the field of bibliometrics have therefore become an increasingly important tool for research administrators and funding agencies. Performance indicators make it possible to study both the quantity and the impact of scientific output.
As the term suggests, indicators do not show the whole picture; they are a simplification of a complex reality. The most basic indicators are simple quantity measures, e.g. the number of publications in peer-reviewed journals or the number of citations received.
The varying citation and publication traditions in different disciplines make these indicators unsuitable for comparisons across research fields. To make such comparisons possible, the indicators must be normalized, i.e. the citation rate must be related to a reference value. The so-called crown indicator normalizes the citation rate with regard to field, document type and age, and is regarded as one of the most reliable and comparable indicators.
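The normalization described above can be illustrated with a small sketch: observed citations are compared with an expected (reference) citation rate for publications of the same field, document type and age. The expected values below are purely illustrative, not real reference data.

```python
# Sketch of a field-normalized citation indicator in the crown-indicator style:
# total observed citations divided by total expected citations, where each
# expected value is the reference average for the publication's field,
# document type and publication year. Values here are made up for illustration.

def normalized_citation_score(publications):
    """Return the ratio of observed to expected citations (1.0 = world average)."""
    observed = sum(p["citations"] for p in publications)
    expected = sum(p["expected_citations"] for p in publications)
    return observed / expected if expected else 0.0

pubs = [
    {"citations": 12, "expected_citations": 8.0},  # cited above the field average
    {"citations": 3,  "expected_citations": 6.0},  # cited below the field average
]
print(round(normalized_citation_score(pubs), 2))  # 15/14 -> 1.07, slightly above average
```

A score above 1.0 means the set of publications is cited more than expected for its fields; below 1.0 means less.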
The most well-known indicator is probably the journal impact factor, which measures the mean citation rate of articles published in a certain journal. The relative simplicity of this measure has made the impact factor popular for purposes other than measuring journal impact. It is not uncommon to evaluate the performance of individual scientists or research groups by counting their articles published in high-impact journals. Since the journal impact factor measures the impact of a journal as a whole, not of individual articles, this use is inappropriate and can lead to false conclusions.
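The standard two-year impact factor for a year Y is the number of citations received in Y by items the journal published in the two preceding years, divided by the number of citable items it published in those years. A minimal sketch, with invented figures:

```python
def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """Two-year journal impact factor: citations received this year to items
    published in the previous two years, divided by the number of citable
    items published in those two years."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Invented example: 420 citations in 2024 to the journal's 2022-2023 articles,
# of which there were 150 citable items.
print(impact_factor(420, 150))  # 2.8
```

Note that this is a mean over the whole journal: a few highly cited articles can dominate the figure, which is one reason it says little about any individual article.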
Evaluations of humanities and social sciences
Citation analysis can be a useful tool when evaluating research in science, medicine and technology, but it is less suited for evaluating the humanities and social sciences. Today's citation databases cover only a small portion of the research papers published in these disciplines. Moreover, in many of these disciplines research findings are published in monographs rather than in journals.
An alternative to using citations in evaluations is to measure research output by giving publications different weights based on publication channel and publication type. This method is used in Norway for distributing research funds to universities. The calculation is based on a categorization of journals and publishers into two levels, where articles in level 2 journals and monographs from level 2 publishers yield more points. The Research Council of Norway is responsible for maintaining and updating the list of categorized journals and publishers.
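The weighting scheme can be sketched as a lookup table keyed on publication type and channel level. The point values below are illustrative of how such a model works; the actual weights are set by the national system and may differ.

```python
# Illustrative point table in the spirit of the Norwegian publication
# indicator: points depend on publication type and the level (1 or 2)
# of the journal/publisher. The weights here are assumed for illustration.
POINTS = {
    ("journal_article", 1): 1.0,
    ("journal_article", 2): 3.0,
    ("monograph", 1): 5.0,
    ("monograph", 2): 8.0,
}

def publication_points(pub_type, level):
    """Points awarded for one publication; unknown combinations score 0."""
    return POINTS.get((pub_type, level), 0.0)

# A department's (hypothetical) annual output:
output = [("journal_article", 2), ("monograph", 1), ("journal_article", 1)]
total = sum(publication_points(t, lvl) for t, lvl in output)
print(total)  # 3.0 + 5.0 + 1.0 = 9.0
```

Funds can then be distributed in proportion to each institution's total points, without any citation data at all.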
Articles in most scientific journals are subject to quality control before publication. In this process, called peer review, experts within the field scrutinize the manuscript to make sure it meets the scientific standards of the discipline. The reviewers are usually anonymous, and sometimes the identity of the author is also masked in the review process, so-called double-masked review. Journals in the humanities and social sciences often have a less standardized quality-control process, where submitted articles are reviewed directly by the editorial staff.
Peer review can also be used to screen funding applications and to evaluate research institutions. The peer review process is sometimes criticized for lacking accountability and clear criteria. One way of compensating for these flaws is to combine peer review with bibliometric analysis.