The Review and Editorial Process at Circulation: Cardiovascular Quality and Outcomes
The Worst System, Except for All the Others
Peer review has long held a sacred position in the scientific process. Yet, for peer review’s participants, including authors, reviewers, and editors, it often feels like sausage making—sufficiently disconcerting that you would rather not think about it even when the outcome might be the best possible. This perspective details the process by which we evaluate articles using peer review and our policies moving forward to recognize peer reviewers for their contribution.
Processing articles at Circulation: Cardiovascular Quality and Outcomes
Here is how we make the sausage at Circulation: Cardiovascular Quality and Outcomes (CQO). After an author submits an article, it is screened by a senior editor, who reviews it for appropriateness and relevance to the journal. If the article does not seem to be a good fit because of its content (eg, a study on coronary physiology), it is rejected without review. This happens rarely but is a critical step. If the article is a potential fit, it is assigned to an associate editor with relevant expertise. The associate editor reviews the article and makes an initial decision—if the article is again judged not to be a good fit for CQO or is deemed not to reach a high enough level of priority, it is rejected without review; we target ≈50% of articles at this step. At least 2 editors are involved in these decisions (and often more). This is an important step, and it is critical for authors to understand that there are several reasons behind these decisions, many of which do not reflect the quality of the work. We hear a lot from authors whose work is rejected without review and want to assure them that this does not mean their work is fatally flawed in any manner. The goal of this process is to ensure a timely decision and to direct our review resources toward the articles we think have the highest yield. This is in the best interest of both authors and us—although we appreciate that it might not feel that way at the time.
When we do send an article out for review, we find (hopefully) appropriate peer reviewers using a variety of sources, including our Editorial Board, references within the article, literature searches, the online submission system, and suggestions from authors (and these do matter, so keep sending them). Eventually, at least 2 reviewers will evaluate the article and provide (sometimes highly variable) feedback on its quality and priority. With these reviews in hand, the associate editor decides whether to solicit further reviews, accept the article, suggest revisions based on the reviews, or reject the article with a possible referral to one of the other journals in the Circulation family. We also obtain a statistical review for every article that we are considering for publication. Not only do the statistical reviewers provide methodologic feedback, but they often make important contributions regarding clarity and content as well. For decisions that are not clear-cut (ie, most of them), we discuss the article at our weekly editorial meeting to solicit broader feedback. A key principle of our editorial team is to use the reviews carefully to guide our decisions but never to let them reflexively dictate a decision. Generously, the process that a submitted article undergoes might be characterized as art, but certainly not as science. Rarely is it obvious how best to navigate any single step in the process, much less all of them.
Peer Review at CQO
Perhaps that explains why there is surprisingly modest evidence that peer review improves the quality of research.1 So, why do it at all? Although hardly a rousing call to arms, we can all strive to make the best sausage possible. In the face of increasing concerns about the reproducibility of science, peer review is one of the few approaches that push toward reproducible science, although it is certainly no panacea. On average, peer review identifies some major errors, improves article readability, and may have a small impact on the perceived quality of research.2,3 Peer review does not transform bad science into good science, and it does not even always identify bad science. But, on the whole, it probably makes science incrementally better, and when the right reviewer evaluates the right article, the reviewer may contribute as much to science as the authors. Even if that scenario is rare, it is of considerable societal and scientific value. As Churchill noted about democracy, it might be the worst editorial tool available except for all the others.
Although the best approach to conducting peer review is not clear, it is perfectly clear that peer review requires a massive investment of valuable time. The total cost of scientific peer review is conservatively estimated at almost 3 billion dollars per year.4 Even at a single journal like CQO, it would cost $40 000 per year to pay reviewers for all of their time, and yet, for that considerable investment, peer reviewers get little credit. For the most part, peer review is thankless work done in the dark. Ongoing efforts throughout the scientific community have focused on how to make peer review less thankless, bring it in from the dark, and maximize its impact. At CQO, we are interested in all of these goals and are considering strategies to address each of them.
The easiest and most obviously necessary step is to acknowledge the time invested by reviewers and say thanks. We plan to do that in several ways. First, we will evaluate our reviewers annually based on the number, timeliness, and quality of their reviews. We will recognize the reviewers judged to make the greatest contributions with an award at the American Heart Association’s Quality of Care and Outcomes Research Annual Meeting. Second, to help bring peer review out of the dark, we are working to make it easier for reviewers to receive academic credit for their work. Specifically, we are working to integrate our review systems with Publons.5 Publons is a publicly facing database of reviews. For reviewers, it creates the opportunity to verify their reviews and collate them in one place. Particularly for reviewers with active peer review profiles across multiple journals, it may help with claiming and quantifying the academic credit they deserve. In the long term, public review databases have the potential to help identify the right reviewers for the right articles, although the use of these tools is currently limited in outcomes research given the small size of the community. Our goal is to make it easier for reviewers who would like credit for their reviews to claim it. Finally, we are always seeking ways to optimize the quality of the research published in CQO and of the peer reviews that underlie it. Given the limited evidence base for how best to do this, our current approach is ad hoc, and we are exploring ways to make it better. Peer review requires some sausage making, but that does not mean we should not work together to improve the process. Maybe someday, these tools will do even more to justify peer review’s sacred position in science.
The opinions expressed in this article are not necessarily those of the American Heart Association.
- © 2017 American Heart Association, Inc.
- 1.↵Jefferson T, Rudin M, Brodney Folse S, Davidoff F. Editorial peer review for improving the quality of reports of biomedical studies. Cochrane Database Syst Rev. 2007;(2):MR000016.
- 4.↵Research Information Network. Activities, Costs and Funding Flows in the Scholarly Communications System in the UK. http://www.rin.ac.uk/system/files/attachments/Activites-costs-flows-report.pdf. Accessed April 17, 2017.
- 5.↵Publons. https://publons.com/home/. Accessed April 17, 2017.