Posted by: Alexandre Borovik | June 14, 2008


The International Mathematical Union comprehensively trashes bibliometric and citation indices as a means of evaluating mathematical research. A letter from László Lovász, President of the IMU:

Dear colleagues,

Today the IMU has released an important document, called “Citation Statistics”, which we want to bring to your attention.

IMU-Net 24 (July 2007) announced the creation of a committee on “Quantitative assessment of research” that was asked to investigate various aspects of impact factors and similar statistics based on citations. The committee was appointed jointly by the Executive Committees of the International Mathematical Union (IMU), the International Council for Industrial and Applied Mathematics (ICIAM), and the Institute of Mathematical Statistics (IMS). It consisted of:

- John Ewing (Providence, USA), chair, appointed by IMU
- Robert Adler (Haifa, Israel), appointed by IMS
- Peter Taylor (Melbourne, Australia), appointed by ICIAM.

The terms of reference given to the committee can be found at:

The committee has addressed this charge by reviewing and discussing current practices along with an extensive literature on the use of citations to evaluate research. Its report, written from the perspective of mathematical scientists, was submitted to the Executive Committees of IMU, ICIAM, and IMS, and all three endorsed the report. The three organizations are making the report “Citation Statistics” public today.

The report can be found at the following URL:

A press release that was mailed out today to journalists is at:

This effort was triggered by numerous requests from IMU member countries, mathematical societies, important mathematical institutions, and individuals who reported the increasing use (and misuse) of impact factors and other citation-based indicators to measure the quality of research of individuals, departments, or whole institutions.

IMU suggests that the readers of IMU-Net not only read the report but also distribute it to administrators and decision-makers who are involved in the assessment of research quality, in order to give them a mathematical science perspective. IMU, ICIAM and IMS have agreed that, in order to ensure as wide a distribution as possible, journals, newsletters and similar publications that are interested in publishing this report will have the non-exclusive right to publish it in one of their issues. Please contact the newsletters/journals you are connected with and suggest publication of the report “Citation Statistics”.

All three organizations, representing the world community of pure, applied, and industrial mathematics and statistics, hope that the careful analysis and recommendations in this report will be considered by decision-makers who are making use of citation data in research assessment.

Best regards

L. Lovász
IMU President



  1. This is wonderful. We need more reports of that kind – the main point should be to move the subject from corridor-level discussion to sound assessments.

    In my view, those impact factor indices do say something, just as various bodily indicators say something about a person’s health. But they do not replace a well-trained, experienced MD.

    The problem is that too many bureaucrats read the indices in too simplistic a way – like a physician who never looks at the patient but only reads the lab results…

  2. The UK Computing Research Committee (UKCRC), which comprises leading computer scientists from academia and industry, has also strongly criticized the proposed use of citation indices for the assessment of research quality and achievements in UK universities. See their report here:

    It is worth quoting two short sections from this report:

    “It would be incompetent and unprofessional to introduce a citation-based Research Excellence Framework until it has been established that there is an adequately complete, consistent and auditable set of data, available from multiple sources free of any commercial bias, that can be relied on to be kept up to date, that includes citations in journals, conferences, PhD theses, industrial reports and institutional repositories — and that assessments based on citation counts from these sources leads to cost-effective assessment of research quality that does not lead to undesirable changes in the way research is carried out or published or on standards or variety of teaching. We do not see any convincing evidence that these criteria have been met.”


    “Note that UKCRC members include internationally renowned experts in the automated collection, processing analysis and storage of information – the theories, tools and methods that underlie the proposed bibliometric indicators. Our authoritative view is that the bibliometric indicators are not currently fit for the proposed purpose.”

    It is hard to imagine how the proposed assessment structure could now go ahead with any integrity in the light of such damning criticism from leading experts in the domain.


