Topic: Publication Processes
by Drew MacKellar
Harvard Medical School
While journals remain a crucial part of the dissemination and evaluation of research results, I consider the basic paper format to be in need of extensive updating. The model of summarizing a year or more of research in 3-5 pages of text and a handful of figures was a product of the Gutenberg age, when journals had to be printed and mailed for scientists to communicate their findings. Today, most results are accessed via the internet and may include large volumes of data that must be hosted in databases and subjected to extensive computational analysis before they can be interpreted. Presenting these in a static document is unwieldy, and some journals are beginning to experiment with alternative formats: extensive hyperlinking to databases, hosting of raw data, and videos that demonstrate techniques and summarize results. Of course, a concise, human-readable summary will always be required, but journals should encourage authors to augment it wherever possible with other media and online tools that make their data easier to grasp, reinterpret, and build upon.
Another issue related to the publication process is the attribution of credit and the evaluation of a researcher's impact. Counting papers or citation scores was fine when papers had two authors each, but such cases are rare today. When multiple authors are present, how are they evaluated, other than by the assumption that the first author (or co-first authors) did most of the work and the last/corresponding author contributed resources and guidance as the PI? Some journals now require a statement of each author's contributions as a condition of publication, a practice I think should be expanded. But as science increasingly becomes a collaboration among many individuals with unique specialties, alternative metrics will need to be developed before authors can feel confident that they are being recognized and rewarded in proportion to their effort.
Even comparisons between journals are problematic: does one high-impact publication equal the merit of a few solid, if pedestrian, society-level journal papers? A dozen? Well-read scientists can discern the significance of a result independently of the journal it appears in, but might be hard pressed to quantify that significance. This ambiguity fuels the anxiety among young scientists that publication in Cell, Nature, or Science is a prerequisite for a successful career. Some remedies are simple, such as offering article-level data on the number of times a paper has been viewed or downloaded. But reducing the perverse incentives of zero-sum competition among scientists who expect to be evaluated primarily by the aggregate impact factor of the journals in which they publish will require better tools for correlating individual researchers with the impact they have on their field.
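To make the idea of a researcher-level (rather than journal-level) metric concrete, here is a minimal sketch of one widely used example, the h-index: the largest number h such that a researcher has h papers that have each been cited at least h times. This is offered only as an illustration of how article-level citation data can be aggregated per researcher; it is not a metric proposed in this essay, and it inherits the same citation-counting limitations discussed above.

```python
def h_index(citations):
    """Return the h-index for a list of per-paper citation counts.

    The h-index is the largest h such that at least h papers
    have h or more citations each.
    """
    # Sort citation counts from highest to lowest.
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        # The paper at this rank supports h = rank only if it has
        # at least `rank` citations.
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example: four papers with at least 4 citations each, so h = 4.
print(h_index([10, 8, 5, 4, 3]))
```

Like journal impact factors, any single number of this kind compresses away most of the information about an individual's actual contributions, which is why the essay argues for richer, complementary tools.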