San Francisco Declaration on Research Assessment

The impact factor of an academic journal is the average number of citations to its recently published articles. For example, a journal with an impact factor of 10 receives, on average, 10 citations per article it publishes. Of course, a few very highly cited articles can greatly boost the factor, and by its nature the impact factor says nothing about the merits of individual articles published in the journal.
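To make the arithmetic concrete, here is a minimal Python sketch of the simplified definition above. The citation counts are invented for illustration; the official calculation counts citations in a given year to items published in the two preceding years, divided by the number of citable items in that window.

```python
def journal_impact_factor(citations_per_article):
    """Simplified impact factor: total citations received by a journal's
    recent articles divided by the number of those articles.
    (The official two-year calculation uses citations in a given year
    to items published in the two preceding years.)"""
    if not citations_per_article:
        return 0.0
    return sum(citations_per_article) / len(citations_per_article)

# Toy example: one very highly cited article dominates the average,
# so the factor says little about the typical article in the journal.
counts = [250, 4, 2, 1, 0, 0, 0, 0, 0, 0]
print(journal_impact_factor(counts))  # 25.7
```

The toy numbers also illustrate the skew mentioned above: a single heavily cited paper pushes the factor far above what most articles in the journal actually receive.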

However, this is exactly what the impact factor is mostly used for. Time-pressed tenure and funding committees routinely use the impact factors of the journals an applicant has published in (via their names) as a quick proxy for the quality of the applicant's publication record. Hence the self-perpetuating competition to publish in the "top" journals that is a mainstay of academic life. Incidentally, this practice has obvious implications for the attractiveness of new open access journals.

As the editorial of Science puts it:

The impact factor, a number calculated annually for each scientific journal based on the average number of times its articles have been referenced in other articles, was never intended to be used to evaluate individual scientists, but rather as a measure of journal quality. However, it has been increasingly misused in this way, with scientists now being ranked by weighting each of their publications according to the impact factor of the journal in which it appeared.

Much (virtual) ink has been spilled decrying this rampant misuse of the impact factor, but concrete actions to remedy the situation have been few and far between. It is thus no surprise that the San Francisco Declaration on Research Assessment (DORA) has gotten so much attention. As the description on their site states:

The San Francisco Declaration on Research Assessment (DORA), initiated by the American Society for Cell Biology (ASCB) together with a group of editors and publishers of scholarly journals, recognizes the need to improve the ways in which the outputs of scientific research are evaluated. The group met in December 2012 during the ASCB Annual Meeting in San Francisco and subsequently circulated a draft declaration among various stakeholders. DORA as it now stands has benefited from input by many of the original signers listed below. It is a worldwide initiative covering all scholarly disciplines. We encourage individuals and organizations who are concerned about the appropriate assessment of scientific research to sign DORA.

The Nature News report includes comments from Stefano Bertuzzi, who coordinated DORA:

“We, the scientific community, are to blame — we created this mess, this perception that if you don’t publish in Cell, Nature or Science, you won’t get a job,” says Stefano Bertuzzi, executive director of the American Society for Cell Biology (ASCB), who coordinated DORA after talks at the ASCB’s annual meeting last year. “The time is right for the scientific community to take control of this issue,” he says.

DORA's first and main recommendation is clear and striking – journal impact factors are declared unfit for duty:

Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions.

Interestingly, Nature is not amongst the signatories:

Nature Publishing Group, which publishes this blog, has not signed DORA: Nature’s editor-in-chief, Philip Campbell, said that the group’s journals had published many editorials critical of excesses in the use of JIFs, “but the draft statement contained many specific elements, some of which were too sweeping for me or my colleagues to sign up to”.

If journal impact factors are no longer to be used, the way forward is seen to lie in more modern article-level metrics. Altmetric indicators are developing rapidly, and new research on their relationship to traditional metrics appears at an increasing pace.
