From Left to Right: Bernd Pulverer, David G. Drubin, Mike Rossner, Mark Patterson, and Stefano Bertuzzi.
There is something very appealing about the simplicity of using a single number to indicate the worth of a scientific paper.
But a growing group of scientists, publishers, funders, and research organizations is opposed to the broad use of the Journal Impact Factor (JIF) as the sole measure for assessing research and researchers.
In May of this year, 237 individuals and institutions signed the San Francisco Declaration on Research Assessment (DORA), which calls for an improvement in the way the output of scientific research is evaluated. The declaration advances a bold proposition: that journal-based metrics should not be used as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or to make hiring, promotion, or funding decisions.
In the two short months since it was issued, the statement has resonated in diverse corners of the scientific community, and more than 8,000 supporters have now joined the campaign by signing the online declaration.
For its leadership in addressing this critical issue, the Scholarly Publishing and Academic Resources Coalition recognizes the creators of DORA as the SPARC Innovators for July 2013.
In the 1960s, a private company, the Institute for Scientific Information (since acquired by Thomson Reuters), invented the JIF, which measures how often the average article in an academic journal has been cited over a particular period. Each June, publishers eagerly await the annual release of rankings in the company’s statistical reports. The JIF was originally designed as a metric to help librarians assess the utility of a journal in their purchasing decisions, and many librarians feel it is still useful for that purpose.
The JIF may be a quick and easy way to assess the average citations to a journal, but DORA advocates say it has subsequently been used inappropriately as a proxy to assess individual research papers and researchers. They note that the JIF has taken on a dominant role in publishing decisions and academic personnel matters in a way that is warping scientific judgment.
David Drubin, editor-in-chief of The American Society for Cell Biology’s journal, Molecular Biology of the Cell (MBoC), and professor of cell and developmental biology at the University of California at Berkeley, helped organize the initial gathering of 15 people to discuss the issue of impact factors at the annual meeting of the ASCB in San Francisco in December 2012.
“It’s basically a broken system,” says Drubin. “People are really frustrated. It’s driven by administrators who want a shortcut to assess contributions of scientists….To me, publishing papers in journals with the highest impact factors is largely about marketing and self promotion. Selectivity is not something you can equate with quality.”
Drubin says the issue hits him on both fronts. As an editor, he doesn’t want to try to “game the system” and organize content around getting a higher impact factor. “I don’t believe the journal impact factor reflects the value and quality of the journal,” he says. “The philosophy of MBoC is to serve the scientist.” As an author, Drubin wants to send a manuscript to a journal that is a good fit with a great editorial board, but he is aware that many of his students and postdocs are obsessed with placing their work in a journal with a high impact factor because they believe such publications are critical to finding jobs and securing funding.
At the initial meeting, there was a consensus that too much attention was being paid to JIF in decisions about hiring, promotion, funding and publishing. Concerns over the lack of transparency about how the measure is produced added additional urgency to the participants’ call for a new way to assess research.
“Scientific research should be assessed on its own merits and not on the basis of where it is published,” said Stefano Bertuzzi, executive director of the ASCB who was an original organizer along with Drubin. “Science is a complex business. We believe it requires reading a paper and using an array of metrics – not relying on one.”
Within less than two hours, the group agreed to do something tangible to raise awareness of the issue. “Everybody was immediately on board and started thinking about what we should do,” recalls Bertuzzi.
A core group spent the next five months drafting recommendations that addressed – and, they hoped, would appeal to – all the stakeholders involved. The result was the declaration, which included 18 recommendations for change directed at funding agencies, institutions, publishers, researchers, and organizations that supply metrics.
The aim was to make the ideas broad enough to attract both publishers and editors, who are under pressure to adjust editorial criteria to maximize their journal’s impact factor in order to attract high-quality papers. DORA’s recommendations were also crafted to appeal to funders, research institutions, and scientists who know that their careers and funding may depend on the JIF or on the name of the journal where they publish their work.
“It’s a constructive strategy,” says Mike Rossner, former executive director of The Rockefeller University Press, who attended the meeting in December. “We talked about the importance of including many stakeholders and the declaration specifically addresses several of them.”
Rossner was among the many who have weighed in on the issue in recent years. He wrote editorials in 2007 and 2008 examining the reliability of the data used to determine the 2006 JIF for The Journal of Cell Biology. Discontent with the metrics used to evaluate individuals and institutions has steadily accumulated, and the response to DORA demonstrates the widespread concern about the issue, he says.
“DORA is a step in the right direction,” said Rossner. “I think there is momentum in the community to make changes of the sort outlined in the declaration.”
In the spirit of keeping the conversation – and action – moving, the declaration includes many suggestions: some immediate, and others long-term changes that would require a cultural shift within science.
The overall message in DORA is that content should be more important than metrics in the evaluation of scholarly work by funders or institutions. DORA calls on companies to be transparent about how their metrics are calculated and asks publishers not to emphasize JIF in their promotional materials.
Rossner said he feels the best outcome would be for DORA to lead hiring, promotion, and funding committees to assess the quality of individual articles on their own merits, not on the basis of the JIF.
Bernd Pulverer, head of scientific publications for the European Molecular Biology Organization and one of the core group of DORA creators, agreed with this assessment. He noted that JIF has become so prevalent in personnel decisions that editors now effectively have taken on some of the role of hiring committees. “I don’t feel comfortable with this. Our job is to publish, not to judge people’s careers,” he says.
The DORA creators were also concerned that some researchers have begun to include the JIF of their publications on their curricula vitae and are omitting work that appeared in journals with lower metrics.
The JIF affects the entire “ecosystem” from researchers to publishers to funders, says Pulverer. “That makes this closed system very hard to change – we are looking in particular to funders and research institutions to send a signal by setting evaluation policies, and by resourcing and incentivizing evaluation based on the research rather than metrics,” he says. But Pulverer is pleased that the declaration includes concrete recommendations rather than being generic, and he is encouraged by the broad spectrum of signatories, ranging across the sciences from mathematics to biology and into the humanities.
The movement started among cell biologists, but its supporters now include social scientists, mathematicians, and chemists from the U.S. and around the world. Notable institutional signatories include the American Association for the Advancement of Science, the Public Library of Science, the Howard Hughes Medical Institute, and the Wellcome Trust, along with a number of prominent members of the Chinese Academy of Sciences and several universities.
As of July 22, 2013, 8,747 individuals and 347 organizations have signed DORA. An analysis of available data on individual DORA signers as of June 24, 2013, showed that 6% were in the humanities and 94% in scientific disciplines; 46.8% were from Europe, 36.8% from North and Central America, 8.9% from South America, 5.1% from Asia and the Middle East, 1.8% from Australia and New Zealand, and 0.5% from Africa.
The growing list of signatories, including thought leaders in various disciplines and prominent editors, is encouraging to those behind DORA. Bruce Alberts, editor-in-chief of Science, signed the declaration and wrote in support of the recommendations in a May 17 editorial, “Impact Factor Distortion.”
Drubin says once the declaration began circulating, the reaction came from sectors of science he never imagined. “It really started to snowball. There is no way I thought it would resonate with so many,” he says.
While reliance on journal metrics has been a concern for a while, Pulverer says the issue came to a head as funding for research has stagnated in many countries and competition has intensified to get into prestigious journals. “People are finding it harder to get grants now and more are getting frustrated when they get rejected at prestigious journals,” he says.
The current system of evaluation is embedded but there is a general view that reliance on metrics has gone too far, says Mark Patterson of Cambridge, England. He was another key player in the crafting of DORA from the beginning and serves as executive director of eLife, a new open access journal that is a collaboration between researchers and three funders of research in life science and biomedicine to advance science through effective research communication.
“The group does not feel everything would be solved if the impact factor is wiped out. It is not the root of all evil,” says Patterson. “What we are saying is that the emphasis is wrong and there should be better ways to do this – especially now in a digital environment.”
Patterson says DORA and its signatories hope to see a shift from an emphasis on journals to the articles themselves as well as other research outputs, such as data.
“We all need to recognize that research evaluation is very difficult. It’s like peer review. You can’t take shortcuts if you want to assess a paper,” says Patterson. “It’s the same with research evaluation. It’s complicated and multidimensional and can’t be reduced to a single number. You need to put in time and resources to do it properly.”
In addition to the declaration, the DORA creators sent a letter to Thomson Reuters outlining their concerns and suggesting improvements to its system. The company’s own website emphasizes that it does not intend the Journal Impact Factor to be used more broadly.
To keep the issue moving forward, there will be a public panel discussion at the ASCB meeting in December, where the original core group of DORA creators plan to meet to consider next steps.
By Caralee Adams