MEPS Discussion Forum 2:
Initiator and Editor: H. U. Riisgård (MEPS Review Editor)
Marine Biological Research Centre, University of Southern Denmark, Hindsholmsvej 11, DK-5300 Kerteminde, Denmark
Further Contributions invited! Address to Editor H.U. Riisgård
Initiator's Foreword: Riisgård HU (MEPS Contributing Editor) (2000)
Realising the general importance for science of the Theme Sections in MEPS*, we wish to facilitate a continuation of the discussion on a broader basis in this forum. Readers are invited to express their opinions. Please send your text (as brief and concise as possible) to the Forum Editor; fax: +45 65321433, e-mail: hur(at)biology.sdu.dk. Thank you.
*"The peer-review system: time for re-assessment?" (MEPS Volume 192, p 305-313, 2000), "Misuse of the peer review system: time for countermeasures" (MEPS Volume 258, p 297-309, 2003) (Idea and Coordination: Hans Ulrik Riisgård), and "Peer review: journal articles versus research proposals" (MEPS Volume 277, p 301-309, 2004) (Idea and Coordination: Hans Ulrik Riisgård)
T. R. Anderson
George Deacon Division
Southampton Oceanography Centre
Waterfront Campus, European Way
Southampton SO14 3ZH, UK
Editorial responsibility: Hans Ulrik Riisgård
Published: March 23, 2000
A common problem in the review process is a lack of dialogue between editors and reviewers. Contribution 1 illustrates this problem.
Several contributors in Riisgård (2000) suggest that scientists might be barred from publishing in journals for which they refuse to review. However, I propose that such sanctions might be counterproductive, because the result could be more hurried and unintentionally below-par reviews. I concur with the general condemnation of those who repeatedly refuse to review, but cannot see any simple solution to the problem. It is important to ensure that the majority who are willing to review remain suitably motivated and that the time they are prepared to spend is put to best use.
I realize that each manuscript I submit must be peer reviewed, and I accept a moral obligation to return the compliment. More importantly, I feel that by reviewing I am contributing an important service to the community, and I take considerable personal satisfaction in seeing manuscripts published which my comments have helped to improve. Nevertheless, the job of a reviewer is seen by many as a thankless task. Financial remuneration for referees does not seem to me an appropriate use of funds. How about a free electronic subscription to a journal of the referee's choice (from the publisher involved)?
The one opportunity simply to thank reviewers for their work lies with journal editors, and it should not be underestimated. I have refereed many articles over the years, but in only a minority of instances have I received such appreciation. Worse: on two recent occasions (different journals - not MEPS!) I found serious flaws in manuscripts, which I detailed in my reviews, only to see the articles appear in print more or less unchanged, without any subsequent correspondence from the editors. Nothing is more demotivating than putting serious effort into reviews only to have them disregarded by editors. Although this occurs in only a small minority of cases, it is extremely damaging, as these reviews are surely the most important of all.
My example illustrates the worst case scenario in what may be a common problem in the peer review process: a lack of dialogue between editors and reviewers. I appreciate that pulling together disparate reviews must be a difficult task for an editor but, as with all problems in life, differences in opinion are best overcome by active dialogue between all concerned. I always write in my covering letter to the editor: "If you have further queries regarding my review please do not hesitate to contact me"; but in the cases described above no such contact was made. It does not take much to pick up the phone and talk. Simply dumping a review without further ado will leave the reviewer disillusioned and reluctant to participate in future reviewing. In order to maintain a high-quality peer review system all sides must accept their responsibilities. We need a constructive dialogue based on mutual respect for each other's roles in the system.
R. T. Kneib
(MEPS Contributing Editor)
The University of Georgia Marine Institute, Sapelo Island, Georgia 31327, USA.
Editorial responsibility: Hans Ulrik Riisgård
Published: May 23, 2000
The 'publish or perish syndrome', identified as the core problem challenging the peer-review system, should be replaced with a 'contribute or perish philosophy'. Contribution 2 highlights and expands issues raised by previous contributors. Some potential solutions for further discussion are suggested.
The recent Theme Section in MEPS dealing with peer-review (Riisgård 2000: MEPS 192, p 305) conveyed a sense that the task of evaluating manuscripts for publication is not equitably distributed and that authors need to show a greater willingness to serve as reviewers for the journals in which they publish. The 'publish or perish syndrome', which has led to a dramatic increase in the number of manuscripts submitted to scientific journals, was identified as one potential source of the problem. Contributors to the forum offered few solutions to remedy the problem of dealing with the volume of manuscripts being processed. Monetary incentives to encourage participation in the review process were largely rejected because the economy of science is based in the currency of time as well as money, and the former is much more highly valued. I would like to highlight and expand on some issues raised either explicitly or implicitly by previous contributors in this debate and suggest some potential solutions for further discussion.
Any self-regulating system must function by a common set of rules or principles, and all components must share responsibility for the outcome. The cause of any problem with the current peer-review system lies within the entire scientific community. An effective solution must include the capacity to assign and accept responsibility at the level of each participant in the process. Publishers, editors and reviewers may be easy targets for blame, but responsibility begins with the authors, who determine the initial quality of their manuscripts and research proposals. Submission of these documents to the peer-review system is a professional request for colleagues to donate their most cherished currency: time.
Some seem to view peer-review as a means of helping their colleagues to improve their contributions, as a way of training students, apprentices and younger colleagues, or as a means of keeping abreast of current scientific developments. Although these are worthy goals, they are accomplished better and more efficiently outside the professional arena of peer-review. Those who are not yet qualified to participate in the process as professionals are not peers. As for keeping abreast of current developments in the field, it is much more efficient to review the published literature than to sift through all of the material at the front-end of the process. Peer-review is designed to function as a quality-filter intended to promote the efficient use of the community's resources (e.g. funding in the case of proposals and the readership's time in the case of publications). As with any filter, it can become obstructed and may function improperly if exposed to an excessive amount of the type of material it was intended to remove, or if used for unintended purposes.
The responsibility of authors in mitigating the situation should not be underestimated. Many could be more critical of their motivation for submitting a manuscript, and more careful in its preparation. Expediency should never replace good judgement and a sense of pride in the presentation of findings to the scientific community. Some authors seem to expect their manuscripts, and those of their students, to be 'cleaned-up' by the peer-review process and so expend little effort in writing a well-crafted document. It might be instructive to compare rejected and accepted manuscripts with respect to the proportion that acknowledged colleagues for reading and improving earlier drafts prior to the official submission for consideration of publication.
There is a substantial imbalance between the positive and negative consequences of submitting a manuscript, depending on one's perspective. For authors, manuscript submission and re-submission always have the potential for a positive outcome (i.e. improvement and publication) and carry relatively little risk of negative consequences. Authors receive all of the credit for their positive contributions to the field, but when poor or faulty work is published, participants in the peer-review process (i.e. reviewers and editors) share the blame with the author(s) for wasting the readership's time.
Rejected manuscripts cost reviewers their valuable time but usually generate comments that benefit the author(s), who have virtually unlimited opportunities to revise and resubmit their submissions. Manuscripts that remain in the system through multiple cycles prior to publication are likely of low quality and minor value. If manuscripts of low quality carried a greater risk of negative consequences for the authors, it would discourage re-submission and yield a considerable savings in time with little cost in lost value to the scientific community. Just as reviewers are listed in many journals to acknowledge their service, perhaps there should be an annual listing of titles and authors of manuscripts that were declined for publication by the journal. This would help the scientific community identify individuals who are costing the system more than they contribute, and would encourage authors to be more discriminating in their submissions.
The quality of peer-review depends on the ability to identify professional individuals who are both qualified and willing to provide critical and objective evaluations. As the scientific community has become more inclusive and networked along with the rest of developed society, it has become increasingly difficult to identify the contribution of individuals to any effort. A trend toward group research is apparent from the increasing number of authors seemingly required to write a paper. For example, in the 21 most recent volumes (176 to 196) of Marine Ecology Progress Series, an average paper had 3.1 authors and >33% of the papers had 4 or more. Compare this to 10 yr ago (MEPS Vols 50 to 68), when the average number of authors per paper was 2.3 and only 13.5% of the contributions were by >3 authors.
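The kind of tally behind the authorship figures quoted above can be sketched in a few lines; the list of author counts below is invented for illustration (it is not MEPS data), though it happens to reproduce a 3.1-author average:

```python
# Hedged sketch of an authorship tally. The sample data are hypothetical.
def author_stats(authors_per_paper):
    """Return (mean authors per paper, percent of papers with >= 4 authors)."""
    n = len(authors_per_paper)
    mean = round(sum(authors_per_paper) / n, 1)
    pct_4plus = round(100 * sum(1 for a in authors_per_paper if a >= 4) / n, 1)
    return mean, pct_4plus

sample = [1, 2, 2, 3, 3, 4, 4, 5, 4, 3]  # invented author counts per paper
print(author_stats(sample))  # -> (3.1, 40.0)
```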
Involving more individuals in a project should bring a greater diversity of expertise to bear on a particular problem, resulting in a greater capacity to address more complex and difficult questions. In practice, I think that laudable goal is met less often than claimed and the quality and originality of research conducted by large groups may be substantially less than that produced by scientists working alone or in small (2 to 3) groups. This will undoubtedly be branded as heresy by some, and I am mindful that it can be unwise to function as a free-thinking individual in an environment where the collective is more highly valued. However, increasing emphasis on 'team research' may be having consequences for the peer-review system that should be considered. Even if we assume that the large-team approach functions as intended, it is difficult to find individuals with sufficiently broad expertise to fairly and critically evaluate the often complex findings of group research projects.
Perhaps submissions involving more than 3 authors should be evaluated by a different type of peer-review process - one based on panels of reviewers who could collaborate and communicate with each other during the review process as is commonly the case in the evaluation of research proposals. If an organised group was required to produce the research being reported, it stands to reason that an organised group would be required to judge its quality. Journals might consider establishing secure internet web-sites where appointed panel members could freely exchange their views on sets of multi-authored manuscript submissions and reach a consensus recommendation for consideration by the editor. It would take longer to review such manuscripts, but this should be an expected cost associated with conducting and reporting on complex research projects.
The trend to 'team research' may have other consequences for the peer-review process as well. Groups tend to function on the basis of compromise. All members can accept credit for positive consequences of group activities, while none need take full responsibility for negative ones. Groups also are likely to have both strong and weak members. Weak members, under protection of the group, can be kept in positions to do substantial damage to the peer-review system. Individuals whose intellectual contributions to the collective effort were minor still receive credit for the products of the group. This bolsters their careers and maintains harmony on the team. The same individuals are less likely to be good judges of quality research, but are retained as potential members of the peer-review pool. The more competent members of the group probably have less time, and are more likely to pass off the tedious tasks, such as reviewing, to the lesser qualified members.
An author's recent publications often serve as a means of identifying appropriate reviewers for manuscripts. Given the trend toward group contributions, it is becoming increasingly difficult to assign responsibility and credit for contributions made by individuals whose records are inextricably linked with one or more groups. Perhaps the assignment of proportional responsibility (credit) for the research should be required for multi-authored publications. Prior to submission of a manuscript, the co-authors would agree on the proportional contribution of each contributor to the collective effort, and that value would appear in parentheses next to each name in the publication [e.g. A.C. Jones (50%), B.D. Platt (20%), .... J.S. Cotes (5%)].
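The proposed byline convention could be checked mechanically at submission time. The following sketch uses hypothetical names and percentages, chosen here to sum to 100% (the example in the text is elided and does not):

```python
# Sketch of the proportional-credit byline proposed above; all values here
# are hypothetical and for illustration only.
def credit_byline(contributions):
    """Format (author, percent) pairs as a byline; percents must total 100."""
    total = sum(pct for _, pct in contributions)
    if total != 100:
        raise ValueError(f"contributions total {total}%, expected 100%")
    return ", ".join(f"{name} ({pct}%)" for name, pct in contributions)

print(credit_byline([("A.C. Jones", 50), ("B.D. Platt", 20), ("J.S. Cotes", 30)]))
```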
When evaluating the records of colleagues for promotions, jobs, research funding, etc., authors should not be credited for the number of papers they publish but for their impact, adjusted by the individual's proportional contribution to the effort. An author with numerous publications of little impact on the field has cost the scientific community dearly in time and effort that could have been spent more productively, and there should be a penalty, or at least no reward, in terms of career advancement. Any reward system should encourage quality over quantity. I also agree with the previous contributors to this discussion who have highlighted the need to better recognise those who routinely donate their time and effort to ensuring the quality of information available to the entire scientific community. Scientists should promote administrative procedures that discourage the 'publish or perish syndrome', identified as the core problem challenging the peer-review system, and replace it with a healthier 'contribute or perish philosophy'.
J. R. Dolan
Station Zoologique, BP 28, F 06230 Villefranche-Sur-Mer, France. Email: firstname.lastname@example.org
Editorial responsibility: Hans Ulrik Riisgård
Published: February 1, 2001
The peer-review system will probably remain unchanged even as we enter the age of electronic publication, with all its potential for changing how manuscripts are handled. A small-scale poll reveals overall satisfaction with the average quality of scientific publications, as well as with anonymous peer review (APR) as it is presently practised. Contribution 3 deals with attitudes toward anonymous peer review.
Most scientific journals use anonymous peer review (APR) as the quality control measure to assess submitted manuscripts. However, variations and alternatives exist. These range from employing anonymous authorship as well as anonymous reviewers, to open review in which reviewers sign their opinions. I was curious to know if I was part of a majority or a minority who never thought about alternatives to APR and yet periodically exclaims "How did THIS get through review?!"
First, I sought to gauge the general level of satisfaction of researchers with regard to the articles they read. If most people see little room for improvement, quality control would appear to function well. Second, I wished to estimate the 'openness' of fellow researchers to potential changes in APR to palliate the common complaints that reviews are slow, sloppy, biased, etc. (or some combination). Possible remedies, presented in question form, were: (1) publicly identifying reviewers by listing them in the 'Acknowledgements' when the paper is printed, (2) paying reviewers, (3) employing anonymous authorship during the review process, and (4) withholding the name of the journal from reviewers.
Overall, the level of satisfaction with scientific publications appears very acceptable, considering that many researchers were not sufficiently interested to reply (40 %) and, among those who replied, only one was 'rather dissatisfied'. Hence, it is not surprising that there was little potential support for any of the changes explored. For example, a system in which reviewers signed reports and were then listed in the 'Acknowledgements' appears unlikely, as only a minority of authors are presently in the habit of listing self-identified reviewers in the acknowledgements (31 %). Furthermore, several researchers stated an unwillingness to be associated with papers which they as reviewers 'passed' but with which they were not in total agreement. Interestingly, while anonymous authorship is now employed in the peer-review process for European Community Commission research grants and is quite popular, anonymous authorship of papers appears to be another matter. Most prefer that reviewers examine their manuscripts knowing who wrote the paper (54 %), and they prefer to review signed manuscripts (62 %).
The least popular change explored was that of not revealing the name of the journal to which a manuscript was submitted; my thought was that scientific papers should ideally be judged by absolute standards. This view was roundly rejected by over 90 % of the respondents. The most common comment was that reviewers should decide whether a manuscript is appropriate for a given journal. This comment, I believe, may be a rationalisation, because editors are the individuals charged with deciding whether a paper falls within the scope of a journal. Rather, it seems likely that most of us have a vested interest in maintaining the existing system of different journals with widely ranging standards, as it allows almost everyone to publish somewhere.
The proposition which appeared the most divisive was that of paying reviewers. The rationale behind payment was that reviewers would do a more thorough job if paid. Many replied that while the review itself would be unaltered, it would likely be done sooner and they would feel much less exploited.
In conclusion, the small-scale poll revealed overall satisfaction with the average quality of scientific publications, as well as with APR as it is presently practised. Thus, as we enter the age of electronic publication, with all its potential for changing the ways in which we treat manuscripts, APR will probably remain with us much as it is today.
QUESTIONS & RESPONSES
Sent to 65 'Review Editors' of the journal Aquatic Microbial Ecology (January 15, 2001), 40 responses.
1) As a RESEARCHER, your satisfaction with the quality of the published papers (all publications combined) you read is:
a. High = 31 %
b. Moderate = 67 %
c. Rather dissatisfied = 2 %
d. Very dissatisfied = 0 %
2) As a RESEARCHER, have you thought about alternatives to anonymous review:
a. Yes = 68 %
b. No = 32 %
3) As an AUTHOR, you prefer to receive:
a. Unsigned reviews = 28 %
b. Signed reviews = 26 %
c. No preference = 46 %
4) As an AUTHOR receiving signed reviews, you:
a. Always list reviewers in the Acknowledgements = 31 %
b. Never list reviewers in the Acknowledgements = 14 %
c. Sometimes list reviewers in the Acknowledgements = 27 %
d. No opinion or experience = 28 %
5) As an AUTHOR, you:
a. Favor withholding AUTHOR'S names from reviewers = 18 %
b. Prefer reviewers examine SIGNED manuscripts = 54 %
c. No preference = 28 %
6) As an AUTHOR, you:
a. Prefer that reviewers know to which journal your manuscript was submitted = 90 %
b. Favor withholding the JOURNAL NAME from the reviewers = 3 %
c. No preference = 7 %
7) As an AUTHOR, you:
a. Prefer that reviewers are NOT paid = 46 %
b. Prefer that reviewers are paid = 14 %
c. No preference = 40 %
8) As a REVIEWER, you generally prefer:
a. Signing reviews = 8 %
b. Anonymous review = 54 %
c. No strong preference = 38 %
9) As a REVIEWER, if told your name would appear in the Acknowledgements of a paper, you:
a. Would review FEWER papers compared to at present = 22 %
b. Would review MORE papers than at present = 3 %
c. Would review the same number of papers as you do now = 75 %
10) As a REVIEWER who signed a review, you would:
a. Favor acknowledgement as a reviewer = 29 %
b. Not favor acknowledgement = 24 %
c. No preference = 47 %
11) As a REVIEWER, you:
a. Would favor withholding AUTHOR'S names from reviewers = 23 %
b. Prefer examining SIGNED manuscripts = 62 %
c. No preference = 15 %
12) As a REVIEWER, you:
a. Favor withholding the JOURNAL NAME from the reviewers = 0 %
b. Prefer you know to which journal a manuscript was submitted = 92 %
c. No preference = 8 %
13) As a REVIEWER, you:
a. Prefer that reviewers be paid = 25 %
b. Do not prefer that reviewers are paid = 36 %
c. No preference = 39 %
14) As a PAID REVIEWER, your reviews would be:
a. The same as when NOT paid = 81 %
b. Different than when NOT paid (please explain briefly below) = 19 % (e.g., faster, longer)
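Converting raw response counts into rounded percentages of the kind reported above is a one-line tally. The counts in this sketch are invented for illustration; they are not the actual poll data:

```python
# Generic tally-to-percent helper; the response counts are hypothetical.
def to_percent(counts):
    """Convert {option: count} into {option: rounded percent of responses}."""
    total = sum(counts.values())
    return {option: round(100 * n / total) for option, n in counts.items()}

# 40 hypothetical responses split across three options
print(to_percent({"High": 12, "Moderate": 27, "Rather dissatisfied": 1}))
```

Note that Python's round() rounds .5 ties to the nearest even integer, so two tallies can sum to slightly more or less than 100 %.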
Stazione Zoologica 'A. Dohrn'
Laboratorio Ecologia del Benthos
Punta San Pietro, 80077 Ischia, Italy.
Editorial responsibility: Hans Ulrik Riisgård
Published: November 30, 2001
Still new aspects of the peer-review process come to light: the reviewing process should be organised along lines similar to those used for football referees, in order to separate bad reviewers from deserving ones. Contribution 4 proposes that journals invite readers to enrol on a list of serious reviewers working for the journal within their own specific fields of expertise. The reviewing work would then be rewarded by the opportunity to remain on a recognised and prestigious list of scientists.
I found the discussion about refereeing very interesting, and I agree about the central role of reviewers as the backbone of quality control in science. However, I would like to stress an aspect not yet covered. I have observed, on several occasions, that reviewers felt unable to perform their job yet still gave negative comments about a paper. This is the case of several reviewers writing 'I am not a specialist in this field, but I think the paper is not worth publishing'. In such cases, of course, science is served very badly, because important information can remain unpublished for a long time if it becomes necessary to re-submit to another journal. I believe that paying reviewers would encourage these unqualified individuals to continue doing a bad job. In relation to this, I do not support the argument that refusal to referee manuscripts for a journal should mean that manuscripts from that scientist are not accepted by that journal: refusal could be due to an inability to perform the job (e.g. to provide an honest evaluation of knowledge in that field), and acceptance of a paper should be related only to the value of the scientific work, within the declared scope of the journal.
In my opinion, the reviewing process should follow a model similar to the one used for football referees: every year, journals could compile a list of reviewers, inviting researchers to enrol by filling out a form stating personal expertise, number of papers published on each subject, academic position, etc. I am sure the response would be massive, for the reasons previously stated, e.g. prestige. In this way, it would be easy to choose the right reviewer for each topic, just by consulting the large database held by each editor. The reviewers, however, should be scored annually, based on the responses of authors. When reviewers repeatedly receive concrete criticism from authors, or respond very late to the requests of the editor, they should be removed from the list. They should also be removed when their positive reviews are too generic, superficial, or insufficient to enhance the quality of the paper, or when negative reviews are not constructive, are too personal and speculative, or are not based on current knowledge of the subject matter. A list of serious reviewers would evolve, allowing for rapid publication of good scientific papers, and the reviewing work would be rewarded by the opportunity to remain on a recognised and prestigious list of scientists.
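The scoring scheme proposed here could be sketched as a simple per-reviewer record; the field names, the scoring scale, and the removal threshold below are all assumptions made for illustration:

```python
# Minimal sketch of a scored reviewer list; all parameters are assumptions.
class ReviewerRecord:
    def __init__(self, name, expertise):
        self.name = name
        self.expertise = expertise  # subject keywords from the enrolment form
        self.scores = []            # annual feedback scores from authors

    def add_score(self, score):
        self.scores.append(score)

    def keep_on_list(self, threshold=3.0):
        """Retain the reviewer only while mean feedback meets the threshold."""
        if not self.scores:
            return True  # no feedback yet: benefit of the doubt
        return sum(self.scores) / len(self.scores) >= threshold

r = ReviewerRecord("A. Reviewer", ["plankton ecology"])
r.add_score(4.5)
r.add_score(2.0)
print(r.keep_on_list())  # mean 3.25 meets the assumed threshold of 3.0
```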
Hans Ulrik Riisgård
Editorial responsibility: Hans Ulrik Riisgård
Published: July 4, 2001
Call for action by the Danish Academy of Natural Sciences: proposal for survey and improvements for reviewers
In the light of the ongoing discussion of the peer-review system, the Danish Academy of Natural Sciences has recently tried to pin down some of the many national and international obligations that scientific researchers are expected to perform. In the annual report (from June 2001), the Academy says that 'many researchers in recent years have experienced that important international tasks which they hitherto have considered to be an honour and a duty to handle, are no longer properly appreciated by the local employers who often think that other tasks may be more relevant and urgent to carry out'. As an example of this trend, the Danish Academy emphasises the work as editor and reviewer for international scientific journals: 'to act as a reviewer and to be a member of the editorial board for a scientific journal should be recognised as an essential part of scientific research'. Therefore, the Danish Academy of Natural Sciences (which has approximately 160 members) now calls for a survey to lay down how much time Danish researchers spend on such international tasks that require a high degree of specialised knowledge. The Academy also proposes that 'time and funds are being set aside so that the researchers may carry out such important jobs without damaging their career - but on the contrary - are being rewarded'.
This note has been approved by Prof. Vagn Lundsgaard Hansen, chairman of the Danish Academy of Natural Sciences hansen(at)mat.dtu.dk
Santos-Sacchi J (2002)
Dept. of Surgery (Otolaryngology) and Neurobiology, Yale University School of Medicine, BML 244, 333 Cedar St., New Haven, CT 06510. Email: email@example.com
Editorial responsibility: Hans Ulrik Riisgård
Published: July 15, 2002
Not all reviewers deserve praise: authors frequently find that at least some reviewers produce inaccurate statements. Contribution 5 calls for a way to limit the effect of bad reviewers.
I have looked at the peer review discussion on the MEPS discussion website and I am glad that this is going on. It appears the issue that bothers me most about anonymous peer review has been touched on by your contributors.
My chief concern is that some reviewers may produce inaccurate, misleading or even malicious statements that are taken as fact by editors. In a small field, this can be very disconcerting, and slow down progress. Many times authors who reply to editors with valid concerns still have no recourse, although some exceptional editors who are capable of evaluating the issues may disregard a flawed review and either request another review or trust their own evaluation. I have suggested to the members of my society, the Association for Research in Otolaryngology (ARO), that mandatory reviewer identification (MRI) might help to alleviate this problem for our societal journal, JARO.
I find it interesting that the poll of reviewing editors conducted by J. R. Dolan revealed that responses on MRI depend on whether the responders view themselves as authors, or as reviewers. As authors, more would prefer MRI. The reason for this difference could be quite significant. Nevertheless, it is clear that errant reviewers are considered a real threat by a quarter of the responders, and that MRI is viewed as potentially helpful. Thus far, I have received only a handful of responses from the > 1000 members of ARO (I e-mailed on 13 July 2002), but the view is mixed on MRI.
I am not necessarily arguing for MRI, but simply for a way to limit the effect of bad reviewers. I especially liked V. Zupo's idea about rating reviewers, but I imagine the process would be too unwieldy for a journal to perform. Instead of MRI, perhaps journals could identify each reviewer to the authors by a code. At least then a particular author could request that that person no longer review his manuscripts. Ideally, the code would extend across all journals. Wishful thinking!
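The cross-journal reviewer code suggested here could, in principle, be a stable pseudonymous identifier derived by hashing. In this sketch the salt shared by participating journals, the code length, and the derivation itself are all assumptions:

```python
# Sketch of a stable pseudonymous reviewer code; the shared salt and the
# 8-character length are assumptions for illustration.
import hashlib

def reviewer_code(name, salt="shared-registry-secret"):
    """Derive a short, stable code that is identical across journals."""
    digest = hashlib.sha256((salt + name).encode()).hexdigest()
    return digest[:8].upper()

print(reviewer_code("A. Author"))  # same code at every journal using this salt
```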
Boero F (2003)
DiSTeBA (Dipartimento di Scienze e Tecnologie Biologiche e Ambientali) Università di Lecce, 73100 Lecce, Italy
Editorial responsibility: Hans Ulrik Riisgård
Published: January 27, 2003
Contribution 6 argues that peer review is a service to both authors and journals, and should be recognised formally (e.g. by publishing the name of the reviewers in the printed article) and substantially (e.g. allowing free on-line subscriptions and discounts on books). To prevent presentation of poor papers, journals might publish a list of the rejected papers in every issue.
Peer reviews have two main goals: (1) protect the journal from disseminating low-quality information, which wastes both its respectability and the time of its readers; (2) protect the authors from publishing low-quality papers that might become a boomerang for their careers. The second goal is rarely perceived as such by authors who receive negative comments. Sometimes authors are really sloppy, disregard instructions, or simply "give it a try". It is possible, however, that the goals are misunderstood and peer reviewing is also a way to enforce a "view" of a given scientific domain. Papers that are outside the current trends in a given scientific area might have a hard time passing through, whereas papers that go with the flow have no problem. I think that, especially given the never-enough-praised tendency to produce formally impeccable work, it might be the case that a formally flawless paper contains information that is not very relevant.
An example: the experimental marine biology and ecology literature is flooded with articles on the impact of grazers on inter-tidal algal populations. If the manipulations are right, the statistics are impeccable, and the results are significant, a paper reporting the impact of limpets on algal mats is invariably published. This is called "confirmatory" by some journals and is discouraged, but such papers always find their way. On the other hand, if you try to publish a paper on a system that is not so easy to manipulate (e.g. the plankton), you might be required to achieve the same rigorous approach, even if this is simply impossible. No keystone predators have been demonstrated for plankton, because removal experiments are not so easy to perform in that domain. The result is that scientific production is pushed in a given direction by publication success, whereas alternative directions are discouraged (by negative reviews) simply because they cannot be pursued with the same tools. Some areas of marine biology and ecology even go so far as to have their own journals, like the Journal of Plankton Research, which, on the one hand, "protect" planktonologists from reviewers who demand, for instance, statistically significant replicates of their samplings (a rather difficult thing to do with plankton long-term series, and one leading anyway to a confusion between temporal and spatial variability) but, on the other hand, make the impact of plankton studies less general in contributing to the development of marine biology and ecology.
Bearing in mind that the reviewer helps the authors to produce better articles, I think that it should be normal for reviewers not to be anonymous, and for their names to be added editorially somewhere in the printed paper. Also, it could be a journal's policy to publish a list of rejected papers, with the names of the authors and the titles. This section might become the most-read one, and it might prevent the "give it a try" syndrome. It might also become a way to identify areas disregarded by the editorial policy of a given journal. I think every author has had a paper that was repeatedly refused by top journals and that, once published, became much praised. I have had at least three such experiences. Some of the reviews I received were helpful, but others were simply negative without many specific points. While reviewing papers, I always identify myself with the authors and try not to behave in a way I would not like others to behave towards me. Today (Jan. 18th) I received the fifth paper to review in 2003. The chances are good that the year will go on at this pace, which means nearly 100 papers to handle in a year. Some are easy and take just an hour; others are difficult and can require months (did you ever review a book for a publishing house?).
I think that if a person reviews more than one paper per year for a given journal, s/he should have access to the online version of that journal for a year, with the possibility of renewal if further reviews are performed. Another way is to give substantial discounts on books printed by the publisher. Springer Verlag is enormously expensive and its books can rarely be purchased by individuals. Having a chance to buy them at a price comparable to that of University Press books might encourage reviewers to do their job, and to do it quickly, and more discounts could be given if the review is done within days.
Hans Ulrik Riisgård (2003)
Editorial responsibility: Hans Ulrik Riisgård
Published: June 23, 2003
How can refereeing be made more attractive? This timely question was recently raised in the Editorial of the bimonthly journal Aquaculture Nutrition (2003, 9: 65). As the extract below shows, it is suggested that, as a first step, employers should acknowledge regular refereeing as part of a scientist's job, and that this activity should be given merit in scientific evaluations.
Understandably, authors of submitted papers are eager to see their scientific contributions published as quickly as possible. The present publication cycle often feels long and cumbersome, and as editors we often receive impatient reminders from our authors during this process. Like other journals, our publishing system involves editors, some secretarial and administrative support, the authors themselves and the publisher. However, perhaps the most important, and sometimes inadequately acknowledged, input is from our referees, and not surprisingly it is refereeing that leads to the greatest delays in processing manuscripts. Recently, the referee system was discussed with respect to how to acknowledge the enormous amount of voluntary communal work which is essential to the present publishing system (Riisgård 2000, Marine Ecology Progress Series, 192, 305-313). A 'payback in kind' was favoured by researchers, meaning that authors must be willing to review for the journals in which they publish (ratio 3:1). Referees are normally busy scientists, and refereeing is an additional unpaid demand on their time. On the other hand, perhaps this should be considered part of their job; after all, they benefit from the same system when they publish their own work and gain early insight into the development of topics in their field of expertise. How could we make this work more attractive? As editors we feel that a satisfactory answer to this question may contribute significantly to reduced publication times. As a first step, employers should acknowledge regular refereeing as part of a scientist's job and this activity should be given merit in scientific evaluations.
Beninger PG (2003)
Laboratoire de Biologie Marine, Université de Nantes, 44322 Nantes Cédex, France
Editorial responsibility: Hans Ulrik Riisgård
Published: June 23, 2003
The previous contributors have highlighted some negative aspects of the rules now in place, and suggested new rules to solve these problems. However, these suggested rules each have their own drawbacks. Contribution 7 discusses some of the drawbacks.
The peer-review system is a product of human beings who happen to be scientists. It is therefore subject to the same foibles of human nature as every other human enterprise, and we try to minimize the impact of these foibles by instituting rules. The debate here is about the validity and effectiveness of these rules. Rules are often double-edged swords: they take care of one problem, but their overuse or misuse can create others.
Contribution 2 (by Kneib) suggests that it might be interesting to correlate manuscript success rate with written acknowledgement of colleagues who have read and improved earlier drafts. While not formulated as a rule, and while the intended result is laudable, this idea would exacerbate the 'reviewer overload' that started this debate. Although I practiced this some years ago, I have stopped, simply out of empathy and respect for the time demands it makes on people already swamped with their own work and with real reviewing. The people who could really make suggestions for improvement are precisely the ones already overburdened. Furthermore, when you see how long a formally submitted paper can stagnate in a reviewer's office, it does not encourage you to precede this with an informal review. It could take as long as 2 years to see your paper in print.
Contribution 2 also suggests that authors all agree on their percentage contributions to a paper and that these should be published next to their names. If you have already had harrowing experiences in deciding authorship order, this proposition promises to ignite civil wars. Aside from the fact that everyone tends to magnify their own contributions, how does one estimate the relative weights of such disparate aspects as critical thinking, the idea for the work in the first place, time spent doing the experiments, and the often esoteric craftsmanship of literature selection, integration, and writing? In rigidly hierarchical laboratories (such as are common in some parts of the world), this would invite even more egregious abuse of authority: lab directors who hold sway over researchers' material well-being and professional advancement could falsely claim 60 or 70% and get away with it. Similarly, what better way to advance one's career than by proposing such a thing to one's narcissistic boss?
Contribution 2 further suggests that scientists' careers be evaluated on the basis of the impact of their publications. Such an approach assumes that the scientific community infallibly recognizes the importance of each scientist's work, and this is not true. In some cases, such an index would only show how closely an author has followed fashion trends in science. Finally, this contribution suggests that we replace 'publish or perish' with 'contribute or perish'. Keeping in mind that the original problem here was reviewer overload, this would just about guarantee reviewer collapse.
Contribution 4 (by Zupo) suggests that 'good' reviewers be rewarded with the recognition and prestige of being on a published list in the journal. I have seen the annual 'In appreciation' list of CBP, and I can just imagine the nightmare it would be for an editor to score all of these reviewers, but ideally I think this is a good point. However, I do not like to see the word 'prestige' mixed with science. One of the biggest problems in science is the inability of some to dissociate their egos from their work; we should do our jobs for the satisfaction of advancing science and for the recognition that we have done so, rather than scramble for 'prestige'.
Contribution 5 (by Santos-Sacchi) brings up the recurrent debate over 'Mandatory Reviewer Identification' (MRI). While this could help to identify and reduce sloppy or biased reviews, it would also have an undesirable effect. Since our research fields tend to be small, a negative, signed review could boomerang the next time we send in a manuscript, so that we perpetuate either slugfests or love-ins, neither of which is acceptable. Although I am acutely aware of the imperfections and occasional injustices associated with reviewer anonymity, I think its drawbacks are less frequent than the systematic problems that would be generated by MRI.
Contribution 6 (by Boero): One of the most alarming suggestions is that journals publish a 'black list' of rejected papers and their authors. Again, this is a human-created system, and rejection does not infallibly mean the work was poor. Rejection is sanction enough, and it is already too much when it is unfair. A published black list would be a modern version of the medieval stocks; such stigmatisation and public mockery are undignified and unacceptable. Similarly, I do not agree that reviewers should have their names published with the paper. Since the paper appears in print, this strongly implies that the reviewers were favourable, when in fact one of them may not agree at all.
So do I have any positive suggestions? Recently, the editorial staff of Biological Bulletin not only thanked me for a review, but later sent me a status report on the paper I had reviewed. Standard practice in this field is to leave the reviewer thankless and in the dark until he finds out for himself. When I queried the editor, he sent a very informative e-mail that assuaged my concerns. It was the first occasion in a very, very long time that I had felt really good about the process. This echoes the problem of dialogue between editors and reviewers raised at the beginning of this section. If the editors do not have the time for this, then it is time to delegate responsibilities to regional editors who do.
Since the real limiting factor in the process is often time, I can only applaud the Danish Academy of Natural Sciences' decision (cf. Note by Riisgård) to set aside time and funds for scientists who carry out editorial and review responsibilities. Other governing bodies would do well to emulate this decision, especially in European countries where university teaching loads are often counted in hundreds of contact hours per year, and where administrative paperwork defies the imagination.
Jenkinson I (2003)
Acting Editor in Chief, Journal of Plankton Research, A.C.R.O., Lavergne, 19320 La Roche Canillac, France
Editorial responsibility: Hans Ulrik Riisgård
Published: September 17, 2003
Contribution 8 deals with refereeing and editing. Referees should not make decisions about mss, and a good editor knows how to make allowances for the referees' conflicting views. The second part of the contribution addresses problems with scientific publishing. A more suitable balance in the resources devoted to the different areas of scientific activity would yield better science for less public money. A way to achieve this goal is suggested.
I would like to contribute to the debate at 2 scales: (1) that of the year-to-year work of the editor interacting with referees, i.e. the editor's managing of the overall make-up of subjects in the published papers; (2) the place and status of journal editing in decade-scale distribution of funding within scientific research.
(1) In a good journal, it is not the referees who decide what shall and shall not be published. It is the founder, and then the subsequent editors who, perhaps in consultation with the publishing house, write and amend the Instructions to Authors, and may solicit submissions personally as well. The editor decides, after consulting referees, which mss to accept, and what improvements to require. Only the editor has the overall view of the journal, and can control the balance of subjects it publishes. For reasons of time and confidentiality, the referees cannot always be shown everything that is going on, or why their comments are not always incorporated in the final publication.
(2) The distribution of resources between different activities within aquatic science is in my view grossly unbalanced. The average ms submitted to the Journal of Plankton Research, for example, represents (my estimate) funding of tens to hundreds of thousands of euros, let us say ca. $/€ 100 000. The funds available for scientific editing (evaluation, and improvement of accepted mss) including refereeing, are less than $/€100 per ms, only 0.1% of this. Is it thus any surprise that editors are receiving stick from all sides? Science publishing houses are in financial crisis. Their major costs are in production and printing, distribution, increasingly clever marketing, and internal administration. If they increase costs and prices, they lose subscriptions, and scientists will either read the particular journal less, or find ways of reading what they need free (legally or not), without any money going to the publishing house. It is generally accepted that progress, socio-economic development and management/stewardship of the planet depend largely on scientific and technological research. To use the tiny part of readers' and libraries' subscriptions that publishing houses pass on to pay scientific editors' costs seems a poor way to finance nearly all the judgement and improvement of scientific results. How can we get out of this crazy, self-perpetuating system?
Instead of funding research libraries' budgets for increasingly expensive journal subscriptions, we need more free, i.e. online, access to the best journals. Publication of these journals, which would then cost less to produce, could be funded directly to the journal publishers through national, international and intergovernmental bodies, or through scientific associations. In the field of marine ecology, examples might be the US National Science Foundation working with the Canadian Research Councils for the USA and North America, the European Commission, UNESCO, perhaps ASLO, the International Ecology Institute, groups of university publishing houses and so on. Initially, existing research libraries and research publishing houses would tend to resist movement in this direction fiercely, and they would need to be wooed into the action by making it attractive to them. The costs of printing and distribution of paper journals could thereby be much reduced, and the funds released would then also be less coupled to commercial constraints responsible for the irritating ways currently used to exclude those of us who appear not to have paid. This would allow easier and fairer access to the scientific literature, and facilitate socially more diverse and more rational decision making worldwide. Science-funding bodies should then be freer to propose a more balanced mix of resources to the interdependent areas of aquatic science, which may be broken down thus: (1) proposing research; (2) doing it; (3) writing it up; (4) editing and publishing it; (5) reading it; (6) applying it to teaching and learning; (7) applying it to planetary planning, management and conservation.
Vermaat J (2003)
Co-editor-in-chief, Aquatic Botany, Institute for Environmental Studies, Vrije Universiteit Amsterdam, De Boelelaan 1087,1081 HV Amsterdam, The Netherlands
Editorial responsibility: Hans Ulrik Riisgård
Published: December 16, 2003
Contribution 9 argues that scientists are ranked according to a mixture of the quantity and quality of their papers, published in journals with different impact factors. It is argued that the publish-or-perish atmosphere results in an accelerating feedback in which journals with a high impact factor become increasingly popular and thus gain an even higher impact. Contribution 9 argues that this is the most serious misbehavior, displayed by all parties (authors, referees and editors), and that to avoid misuse, measures that stimulate desired behavior are therefore to be preferred.
I have read the Theme Section (TS) in MEPS (Riisgård 2003: 258) on 'misuse of the peer-review system' with interest, and I think I have something to contribute to the discussion. My response can be summarized as follows: (a) an ideal world does not exist, (b) we should spend our energy economically (I agree here with Katja Philippart), hence take measures where they are needed and where they will pay off, and (c) why send out a survey, receive a large number of responses, call for more statistics, and then fail to do the statistics?
(a) The targeted misuse is reportedly made by 'some authors', who resubmit a rejected manuscript (ms) elsewhere without paying due or apparent attention to the criticism of the peers. In an ideal world, such misuse does not exist. Ideal markets do not fail, but tend to be rare. The same is true for authors, referees and editors, all of us humans and all in one common pool of peers. Or rather, in a stratified, class-conscious sub-society (natural scientists) where individuals cluster in schools of sorts and are ranked according to a fuzzy mixture of status, quantity and quality of their output, the published papers. This quality nowadays has the impact factor (IF) as its indicator. Here, the publish-or-perish atmosphere (or fever? not system) to my feeling operates as an accelerating feedback, where high-impact-factor journals become increasingly popular and thus gain an even higher impact (see Fig. 1). I then wonder which category of us scientific humans is to be blamed for the most serious misbehavior: the submitting author, the referee, or the editor. On a personal note, some letters that I have received from disappointed authors make me wonder, and some decisions of editors on my mss have made me wonder too. Since many of us play all three roles, a social scientist could do some interesting empirical work here. The debated 'misuse' by 'efficient' authors, in my view, should simply be detected by the peers plus editor (as argued well by, among others, Ferdinando Boero); together they should be able to gauge the quality of a ms. I therefore argue that we repair our peer-review system where it is needed, that is, where time is lost. Most time is lost on the desks of overloaded referees and editors, but also on the desk of the author. For the former two, several colleagues have already raised useful points, but the latter is rather difficult.
In my own case, I have estimated for 2003 that an average ms travels to and fro between me and the author for 11 months between my first (this is after first peer review) judgment and final acceptance. That is pretty long, and I feel uncomfortable about it, but the ms spends three months at most on my desk in the ms queue, so we should share the guilt: authors have a responsibility too.
(b) I agree with Sandra Shumway that it is annoying to see the obvious signs of earlier submission and rejection in the style of the reference list. Here I am all too human as an editor, and write prickly remarks in the margin. But that does not cost me much of my precious time. Will we authors, however, automatically behave gentle(wo)manlike? Well, this depends on how we train ourselves and the next generations. Decent behavior does not fall from the sky, nor can it be enforced with a ms-history-backlog system. Such a system may well delay the refereeing process even more: we would have to read it all! The training should come at a stage where it is effectively received, so probably at the Ph.D. stage. An effective system that makes our reviewing peers reply quickly should be first on our wish list and warrants an organized debate.
(c) Is the phenomenon really that common or serious? I did do the counting across the TS: of the 19 respondents in the non-random sample (I pooled the nested reply of L&O into one), only 5 saw a serious problem, 2 did not know, and the remaining 12 found it 'possibly annoying but minor and probably not effectively prevented' (my summary). In short, I see no reason for 'more statistics' and will not advocate installing such an enquiry for 'my own' journal, Aquatic Botany, despite the fact that it is a more obvious target than MEPS.
Instead, I argue that we concentrate on the referees. Let us make their life easier: we cannot force their employers, but we can influence the management of our journals. Here we could probably make practical and substantial progress beyond electronic review sheets. Some of the respondents came up with useful suggestions, like sandwiching senior and junior referees (as Poul Scheel Larsen did). One could draft a code of practice, as a form of cooperation among journals, but such a code is effective only when adhered to, and this adherence cannot be enforced. I think we should concentrate on measures that stimulate desired behavior, hence create positive reinforcement at the author and referee level, without costing us much time.
Fig. 1. Development of impact factors (IFs) with time in a number of selected journals in Aquatic Ecology from 1992 to 2001 (top), and the relation of an increase in IF with the mean IF over that same period (data from staff-www.uni-marburg.de/~woelfel/impact_J.html)
As a final word, I hope that my thoughts contribute meaningfully, and I will gladly continue to do so. I have one final 'sour' remark: a quality hierarchy probably exists among journals; however, it is not only a matter of true quality, and it certainly does not depend solely on the rigor of the peer review, as suggested in the TS (p. 299, bottom left)!
Comment (1) to Contribution 9, by Hans Ulrik Riisgård (Forum Editor):
I wrote in the TS (p. 299 left) that I find it "strange that the aquatic-scientific community has so many active scientists involved in peer-review and quality control who apparently find it satisfying to do a backbreaking referee job on the basis of hardly any research analysing the effects of their own work". In the summary this statement was expressed as "More statistics are needed ... to throw light on how effective the peer-review system is and to what extent misuse is a problem". On this background I do not agree with your remark: "(c) why send out a survey, receive a large number of responses, call for more statistics and fail to do the statistics?" I think that the TS has pretty well revealed to what degree resubmission of the same mss is a problem for different journals, and I do not argue for much more statistics on this specific topic; rather, more statistics are needed to evaluate how effective the peer-review system is and to what extent misuse (of different kinds) is a problem.
Reply (1) by Jan Vermaat:
As to the disagreement between you and me: Yes, I do understand your point explained above. But a bit of a 'no' as well, since at least to me your original sentence on 'more stats are needed' suggests that more statistics were needed to assess the extent of the misuse. But let us close this minor issue.
Comment (2) to Contribution 9, by Otto Kinne (MEPS Editor):
Impact factors (IFs) have their own problems. Journals with a wide scope tend to have higher factors. Society journals are difficult to fit in; they are easily available to all members and hence more likely to be quoted. There also are journals that specialize in selecting 'sensational' papers and in focusing on raising their IFs. Sometimes they act more like newspapers than as responsible learned scientific publications, which must provide space also for sound but potentially less attention-attracting contributions. MEPS would like to consider a critical TS on journal quality assessment in the future.
Reply (2) by Jan Vermaat:
I have two points to add in reply. (1) I doubt whether society journals are more readily cited because they are more easily available. I cite papers that are relevant to my ms, regardless of their source. I think that electronic availability has also become more important in recent years. I base this impression on the availability of the journals of commercial publishers in packages offered to university libraries (at a price!), such as my own 'niche' journal Aquatic Botany, all a few keystrokes away. But this is a personal impression, not statistics. (2) I would gladly give journal quality assessment some thought; here 'sound and responsible' journals could possibly find some comfort in citation half-life as another ISI statistic. Half-lives of 'niche' journals are certainly more respectable than those of newspapers.
Boero F (2004)
DiSTeBA (Dipartimento di Scienze e Tecnologie Biologiche e Ambientali), Università di Lecce, 73100 Lecce, Italy
Editorial responsibility: Hans Ulrik Riisgård
Published: May 11, 2004
Contribution 10 is a scathing criticism of the Impact Factor system. It is argued that the scientific community should establish its own evaluation system, regain independence from the Institute for Scientific Information, and instead of Impact Factors, use cited half-life more.
In the preceding Contribution 9 to this Discussion Forum, Vermaat argues that scientists are ranked according to their papers published in journals that have different Impact Factors (IFs), and in his comment to this statement, Kinne says that IFs have their own problems, e.g. that journals with a wide scope tend to have higher IFs, as do journals that specialize in selecting 'sensational' papers. But important questions remain unanswered. Why does the IF system exist; why was this system invented? Answer: the people at the Institute for Scientific Information (ISI) did not want to look at all the journals. Why is systematic zoology underrepresented in the IF system? Answer: museum journals do not have an IF because, since 1864, systematic zoologists have had the Zoological Record, which has eliminated the market for others who might sell this information. Therefore, most systematic zoology is published in journals that are ignored by the IF lobby, and the status of this science is lower because of this.
Careers are determined by the number of articles in journals with an IF in the Journal Citation Reports (JCR) system. Therefore, we publish in JCR journals to be rewarded with better recognition of what we are doing. This was not decided by the scientific community, but by business people. PLoS Biology, a peer-reviewed open-access journal published by the Public Library of Science (www.plosbiology.org), is an attempt to overcome this system. Another way is to publish on the web, for free. If I have to work for free, I prefer to do it for people who do not earn money from my work. Go to the web page of CIESM (www.ciesm.org) and check out the workshop monographs in the publications section. They can be downloaded for free as pdf files. The most successful ones have been downloaded more than 10 000 times. They surely have a great impact, but have no IF. I am tired of working for free while making others rich, feeding a system that, in science too, is based on unclear standards of quality. Maybe it is time for the scientific community to establish its own evaluation system, regaining independence from the business-oriented Institute for Scientific Information?
All scientists invariably look at the IF and disregard the other indicators that the ISI uses for ranking journals. The cited half-life of a journal, for instance, reports the average length of time over which articles from that journal continue to be cited after publication. The maximum reported cited half-life is >10 yr. It is often the case that high-impact journals have short half-lives. An article published in such journals is worth gold when it is printed, but its value decays very rapidly and, after a few years, it is completely forgotten. Other journals have much lower IFs, but almost infinite half-lives (>10 yr). If we used an integrated index, combining the half-life with the IF, we would do more justice to our journals. If you search the JCR and sort the journals on biochemistry and molecular biology by cited half-life, you will find that only 15 journals out of 266 have half-lives of >10 yr, and that the highest IF among these long-lived journals is 5.4, whereas the highest IF in the section is 36.278. If you check ecology, you will find that 32 out of 101 journals have a cited half-life of >10 yr, that the highest IF of the long-lived journals is 6.1, and that there is only one journal with a higher IF than that, namely Trends in Ecology and Evolution, with an IF of 11.9 and a cited half-life of 6.2 yr. In this way, even using the ISI evaluations, we can find alternative ways to look at our disciplines.
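One possible form for such an integrated index (the harmonic-mean weighting here is my own illustrative assumption, not an ISI statistic) is sketched below. The harmonic mean is dominated by the smaller of the two values, so a journal cannot compensate for a very short half-life with a very high IF, which matches the argument above; the JCR reports half-lives above 10 yr only as ">10", so the half-life is capped before combining:

```python
def integrated_index(impact_factor, cited_half_life, cap=10.0):
    """Hypothetical combined journal score: the harmonic mean of the
    impact factor (IF) and the cited half-life in years.

    The JCR reports cited half-lives above 10 yr only as ">10",
    so the half-life is capped at 10 before combining.
    """
    hl = min(cited_half_life, cap)
    if impact_factor <= 0 or hl <= 0:
        return 0.0
    return 2.0 * impact_factor * hl / (impact_factor + hl)

# A long-lived journal with a modest IF (e.g. IF 5.4, half-life >10 yr)
# outscores a high-IF journal whose citations decay quickly; the
# half-life of 2 yr for the high-IF journal is illustrative only.
classic = integrated_index(5.4, 10.0)
flashy = integrated_index(36.278, 2.0)
```

Under this weighting `classic > flashy`, i.e. the "fast-food" journal is penalized for its rapid decay; a geometric or arithmetic mean would instead still favour the high-IF journal, so the choice of mean is itself a judgement about how much the half-life should count.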
The first reaction we might have to this comparison of the IFs of molecular biology journals with those of ecology journals is to reject the value of the whole system. While doing so, we should build another system to evaluate journals, but we can also simply read all the information the ISI already provides. Some biologists tend to praise the IF, whereas we should rather praise the cited half-life. We should be proud to say that we produce pieces of biology that stand as classics, and not as some sort of fast-food science.
Korngreen A (2005)
Faculty of Life Sciences, Bar-Ilan University, Ramat-Gan 52900, Israel
Editorial responsibility: Hans Ulrik Riisgård
Published: December 2, 2005
Online manuscript submission and review systems are becoming more and more sophisticated, and the scientific community must generate strategies utilizing the power of the WWW to filter good manuscripts from bad ones. Contribution 11 deals with current changes in the peer-review system.
I support the peer-review system because it is better than the alternatives. The recent decade has seen major improvements to the peer-review process. The use of the World Wide Web (WWW) and of online manuscript (ms) submission systems has greatly reduced the time required to review a ms from many months to mere weeks. Authors can submit mss from their personal computers and expect rapid replies by e-mail. Editors and reviewers are armed with an arsenal of online tools assisting them in the evaluation and decision process. It is probable that online ms submission and review systems will become more sophisticated and include many more options in the near future. Instead of passively adapting to these changes, the scientific community should generate novel strategies utilizing the power of the WWW to filter good publications from bad ones. Here I will touch upon three issues related to the changes the peer-review system is experiencing.
First, in the not so distant past, most editors wrote letters to the authors explaining the decision reached on their ms. This gave the authors a focused opinion of their ms and a glimpse into the editorial decision process which, in many cases, was as helpful as the comments made by the reviewers. Today, many online systems produce ready-made letters that carry no personal input from the editors. Most systems allow the editor to add comments as a postscript to the machine-generated letter. This, however, does not rise to the level of well-written, and therefore more carefully considered, prose. It has been claimed that e-mail has changed the writing (and thinking) habits of many of us for the worse. The decline in the quality of editorial responses can probably be attributed partly to this as well. I personally miss the days when, not so long ago, the personal touch of the editor was made evident to the authors in a well-phrased letter.
Second, the large increase in the number of submitted mss has inflated the number of scientists serving as reviewers. The vast majority of these scientists perform their duty honorably, providing valuable feedback to the authors. However, some reviewers produce poor and even damaging reviews. Some of these reviews are filtered out by the editors and some are not. It is my opinion that anonymity is important for the peer-review process. However, some power could also be granted to the authors in order to balance the equation. Here the flexibility of the online systems could be employed to establish a feedback mechanism that may help journals weed out rogue reviewers. One can imagine a scenario in which the authors are asked by the editor, in the decision letter (regardless of the nature of the decision), to point their web browser to a page in the journal's online submission system. There they would find an online form allowing them to provide feedback about the reviewers of their ms. This can be done on a completely anonymous basis, allowing only the journal to cross-reference the feedback with the name of each reviewer. Once a sufficient number of authors have filled out such feedback forms, the journal will be able to identify reviewers who are serial offenders and ensure that they are not approached again. Conversely, authors who repeatedly abuse the feedback forms (and many of us have had, at some point, the unscientific but gratifying urge to use four-letter words to describe reviewers of our mss) could also be readily detected and their opinions discarded. If journals from similar disciplines were willing to share these feedback comments, we might rapidly improve the quality of the pool of reviewers.
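The mechanism described above can be sketched in a few lines. This is a minimal illustration under my own assumptions (the class name, the 1 to 5 rating scale, and the thresholds are hypothetical, not part of any existing journal system): authors' anonymous ratings are pooled per reviewer, and a reviewer is flagged only after enough independent reports have accumulated, which also protects reviewers from a single disgruntled author:

```python
from collections import defaultdict


class ReviewerFeedback:
    """Pool anonymous author ratings of reviews, per reviewer."""

    def __init__(self, min_reports=5, flag_threshold=2.0):
        self.scores = defaultdict(list)  # reviewer id -> list of ratings
        self.min_reports = min_reports   # reports needed before flagging
        self.flag_threshold = flag_threshold  # mean rating below this flags

    def record(self, reviewer_id, rating):
        """Store one author rating (e.g. 1 = damaging ... 5 = excellent)."""
        self.scores[reviewer_id].append(rating)

    def flagged(self):
        """Reviewers with enough reports and a low mean rating."""
        return [r for r, s in self.scores.items()
                if len(s) >= self.min_reports
                and sum(s) / len(s) < self.flag_threshold]
```

A single bad report never flags anyone; only a consistent pattern across several manuscripts does, which is the "serial offender" detection the contribution proposes.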
Third, one of the recent additions to the publication process is the ability of the reader to add online commentaries to a ms. This practice is widely employed on other web pages, where hundreds of commentaries are often attached to an article in a daily newspaper. While it is clear that editorial filtering is required to maintain the level of conversation, it is possible to expand this basic mechanism to serve the scientific community. I would like to suggest that each reader also be requested to rank numerically the ms she or he is commenting on. The more scientists read and comment on a specific ms, the more accurate the numerical Reader’s Rank of that ms will be. With time, excellent mss will float to the top of the list while bad ones sink to the bottom. One can imagine that, once many journals adopt the Reader’s Rank, we will be able to sort search results in PubMed or Google according to this criterion, allowing investigators to immediately retrieve the citations of the best papers in their field of study. In the past century we have witnessed a vast inflation in the number of scientific publications in the biological sciences. None of us can any longer read all the papers relevant to our respective fields of research. We urgently need a new tool to single out the excellent mss from the huge mass of mediocre and bad papers. The Reader’s Rank I propose here is just one such possible filter. What is clear to me is that the scientific community must define modern quality filters in addition to standard peer review, or else we will soon be unable to cope with the vast amounts of information heading our way.
Online ms submission and reviewing systems are here to stay. They will probably become even more sophisticated and include many more options designed to make the reviewing process ever more efficient. While this is welcome progress, we must not bow to the god in the machine. Scientific review is a human activity, and so it should remain.
Maynou F (2007)
Scientific Editor, Scientia Marina, Institut de Ciències del Mar, CSIC, Psg. Marítim de la Barceloneta 37–49, 08003-Barcelona, Spain
Editorial responsibility: Hans Ulrik Riisgård
Published January 15, 2007
More and more cheaters and free-riders abuse the peer-review system, an increasing problem particularly for small journals. Contribution 12 suggests that the problem could be remedied by creating a global database of submissions.
I agree with many of the topics covered in the Theme Section 'Misuse of the peer-review system: time for countermeasures' (MEPS 2003, 258:297-309). The peer-review system is probably the 'best' we have at the moment, although there is room for improvement. As the volume of science grows (more scientists, more submissions, more research projects, etc.), the system will be increasingly prone to producing 'cheaters' and 'free-riders': people who abuse a system that has traditionally been based to a large degree on confidence between editors and authors. The specific problem of authors resubmitting unchanged versions of rejected manuscripts is probably more serious for small journals like ours (Scientia Marina), which have a relatively low impact factor. Authors probably aim at a high-ranking journal first and, if rejected, try the next journal along a line of decreasing rank, in the hope that the paper will eventually get through. I think a possible solution would be to build a global database of submissions, which technically should not be very difficult, especially with many journals accepting on-line submissions. Of course, the problem would be to secure funding for maintaining such a database! Also, many editors may be uncomfortable with such strict policy measures, on ethical or moral grounds. In any case, I am afraid we are bound to live with the problem for the next few years and can only hope that not too many low-quality manuscripts get published.
Larsen PS (2007)
Prof. Emeritus of Fluid Mechanics, Department of Mechanical Engineering, Technical University of Denmark, Lyngby, Denmark
Editorial responsibility: Hans Ulrik Riisgård
Published: December 11, 2007
As a reasonable minimum of courtesy, most editors nowadays inform the reviewers of the ultimate fate of a manuscript (ms). But editors can further reward the reviewers for their anonymous and devoted work in other ways. Contribution 13 suggests that the reviewers should receive a pdf file with the accepted ms in order to learn the outcome of their efforts.
I would like to address the issue of editors' responses to the efforts of reviewers. In the case of some journals, e.g. those of Inter-Research, I appreciate very much receiving a brief email informing me of the verdict on a ms that I have previously reviewed. It typically states that the ms has now been accepted and that the reviewer's efforts on behalf of the authors and the journal are appreciated. Further, the editor offers to send a letter to one's Department Head and/or Administrator, recognizing the assistance provided and optionally mentioning the amount of time spent, but only if one wishes this. All of this is a well-appreciated courtesy.
However, I do miss a pdf file of the accepted ms attached to the same email (and I would like even more to receive the other reviewers' comments, as is practiced by some journals), in order to see the outcome of the collective efforts to improve the ms. With today's widespread use of online submission and communication by email, it would seem a small act with high impact! Of course, the reviewer will sooner or later be able to see the article when it appears in print, provided he or she is a regular and systematic reader of that journal. But one is often called upon to review mss for journals other than those regularly scanned, and then the opportunity to gain more knowledge is lost.
We should not forget that our accumulated knowledge in a given field is mostly contextual and attached to specific events and experiences, usually associated with written documentation that can be retrieved from one's personal filing system. Receiving the accepted ms and the reviews of the other referees would enable all reviewers to better fulfill their central function in scientific publishing.