
How to Write a Peer Review


When you write a peer review for a manuscript, what should you include in your comments? What should you leave out? And how should the review be formatted?

This guide provides quick tips for writing and organizing your reviewer report.

Review Outline

Use an outline for your reviewer report so it’s easy for the editors and author to follow. This will also help you keep your comments organized.

Think about structuring your review like an inverted pyramid. Put the most important information at the top, followed by details and examples in the center, and any additional points at the very bottom.


Here’s how your outline might look:

1. Summary of the research and your overall impression

In your own words, summarize what the manuscript claims to report. This shows the editor how you interpreted the manuscript and will highlight any major differences in perspective between you and the other reviewers. Give an overview of the manuscript’s strengths and weaknesses. Think about this as your “take-home” message for the editors. End this section with your recommended course of action.

2. Discussion of specific areas for improvement

It’s helpful to divide this section into two parts: one for major issues and one for minor issues. Within each section, you can talk about the biggest issues first or go systematically figure-by-figure or claim-by-claim. Number each item so that your points are easy to follow (this will also make it easier for the authors to respond to each point). Refer to specific lines, pages, sections, or figure and table numbers so the authors (and editors) know exactly what you’re talking about.

Major vs. minor issues

What’s the difference between a major and minor issue? Major issues should consist of the essential points the authors need to address before the manuscript can proceed. Make sure you focus on what is fundamental for the current study. In other words, it’s not helpful to recommend additional work that would be considered the “next step” in the study. Minor issues are still important but typically will not affect the overall conclusions of the manuscript. Here are some examples of what might go in the “minor” category:

  • Missing references (but depending on what is missing, this could also be a major issue)
  • Technical clarifications (e.g., the authors should clarify how a reagent works)
  • Data presentation (e.g., the authors should present p-values differently)
  • Typos, spelling, grammar, and phrasing issues

3. Any other points

Confidential comments for the editors.

Some journals have a space for reviewers to enter confidential comments about the manuscript. Use this space to mention concerns about the submission that you’d want the editors to consider before sharing your feedback with the authors, such as concerns about ethical guidelines or language quality. Any serious issues should be raised directly and immediately with the journal as well.

This section is also where you will disclose any potentially competing interests, and mention whether you’re willing to look at a revised version of the manuscript.

Do not use this space to critique the manuscript, since comments entered here will not be passed along to the authors.  If you’re not sure what should go in the confidential comments, read the reviewer instructions or check with the journal first before submitting your review. If you are reviewing for a journal that does not offer a space for confidential comments, consider writing to the editorial office directly with your concerns.

Get this outline in a template

Giving Feedback

Giving feedback is hard. Giving effective feedback can be even more challenging. Remember that your ultimate goal is to discuss what the authors would need to do in order to qualify for publication. The point is not to nitpick every piece of the manuscript. Your focus should be on providing constructive and critical feedback that the authors can use to improve their study.

If you’ve ever had your own work reviewed, you already know that it’s not always easy to receive feedback. Follow the golden rule: Write the type of review you’d want to receive if you were the author. Even if you decide not to identify yourself in the review, you should write comments that you would be comfortable signing your name to.

In your comments, use phrases like “the authors’ discussion of X” instead of “your discussion of X.” This will depersonalize the feedback and keep the focus on the manuscript instead of the authors.

General guidelines for effective feedback

Do

  • Justify your recommendation with concrete evidence and specific examples.
  • Be specific so the authors know what they need to do to improve.
  • Be thorough. This might be the only time you read the manuscript.
  • Be professional and respectful. The authors will be reading these comments too.
  • Remember to say what you liked about the manuscript!


Don’t

  • Recommend additional experiments or  unnecessary elements that are out of scope for the study or for the journal criteria.
  • Tell the authors exactly how to revise their manuscript—you don’t need to do their work for them.
  • Use the review to promote your own research or hypotheses.
  • Focus on typos and grammar. If the manuscript needs significant editing for language and writing quality, just mention this in your comments.
  • Submit your review without proofreading it and checking everything one more time.

Before and After: Sample Reviewer Comments

Keeping in mind the guidelines above, how do you put your thoughts into words? Here are some sample “before” and “after” reviewer comments:

✗ Before

“The authors appear to have no idea what they are talking about. I don’t think they have read any of the literature on this topic.”

✓ After

“The study fails to address how the findings relate to previous research in this area. The authors should rewrite their Introduction and Discussion to reference the related literature, especially recently published work such as Darwin et al.”

✗ Before

“The writing is so bad, it is practically unreadable. I could barely bring myself to finish it.”

✓ After

“While the study appears to be sound, the language is unclear, making it difficult to follow. I advise the authors to work with a writing coach or copyeditor to improve the flow and readability of the text.”

✗ Before

“It’s obvious that this type of experiment should have been included. I have no idea why the authors didn’t use it. This is a big mistake.”

✓ After

“The authors are off to a good start; however, this study requires additional experiments, particularly [type of experiment]. Alternatively, the authors should include more information that clarifies and justifies their choice of methods.”

Suggested Language for Tricky Situations

You might find yourself in a situation where you’re not sure how to explain the problem or provide feedback in a constructive and respectful way. Here is some suggested language for common issues you might experience.

What you think: The manuscript is fatally flawed. What you could say: “The study does not appear to be sound” or “the authors have missed something crucial.”

What you think: You don’t completely understand the manuscript. What you could say: “The authors should clarify the following sections to avoid confusion…”

What you think: The technical details don’t make sense. What you could say: “The technical details should be expanded and clarified to ensure that readers understand exactly what the researchers studied.”

What you think: The writing is terrible. What you could say: “The authors should revise the language to improve readability.”

What you think: The authors have over-interpreted the findings. What you could say: “The authors aim to demonstrate [XYZ]; however, the data do not fully support this conclusion. Specifically…”

What does a good review look like?

Check out the peer review examples at F1000 Research to see how other reviewers write up their reports and give constructive feedback to authors.

Time to Submit the Review!

Be sure you turn in your report on time. Need an extension? Tell the journal so that they know what to expect. If you need a lot of extra time, the journal might need to contact other reviewers or notify the author about the delay.

Tip: Building a relationship with an editor

You’ll be more likely to be asked to review again if you provide high-quality feedback and if you turn in the review on time. Especially if it’s your first review for a journal, it’s important to show that you are reliable. Prove yourself once and you’ll get asked to review again!


Peer review templates, expert examples and free training courses


Joanna Wilkinson

Learning how to write a constructive peer review is an essential step in helping to safeguard the quality and integrity of published literature. Read on for resources that will get you on the right track, including peer review templates, example reports and the Web of Science™ Academy: our free, online course that teaches you the core competencies of peer review through practical experience (try it today).

How to write a peer review

Understanding the principles, forms and functions of peer review will enable you to write solid, actionable review reports. It will form the basis for a comprehensive and well-structured review, and help you comment on the quality, rigor and significance of the research paper. It will also help you identify potential breaches of normal ethical practice.

This may sound daunting but it doesn’t need to be. There are plenty of peer review templates, resources and experts out there to help you, including:

  • Peer review training courses and in-person workshops
  • Peer review templates (found in our Web of Science Academy)
  • Expert examples of peer review reports
  • Co-reviewing (sharing the task of peer reviewing with a senior researcher)
  • Other peer review resources, blogs, and guidelines

We’ll go through each one of these in turn below, but first: a quick word on why learning peer review is so important.

Why learn to peer review?

Peer reviewers and editors are gatekeepers of the research literature used to document and communicate human discovery. Reviewers, therefore, need a sound understanding of their role and obligations to ensure the integrity of this process. This also helps them maintain research quality and protect the public from flawed and misleading research findings.

Learning to peer review is also an important step in improving your own professional development.

Learning to review will make you a better writer and a more successful published author. It gives you a critical vantage point: you’ll begin to understand what editors are looking for. It will also help you keep abreast of new research and best-practice methods in your field.

We strongly encourage you to learn the core concepts of peer review by joining a course or workshop. You can attend in-person workshops to learn from and network with experienced reviewers and editors. As an example, Sense about Science offers peer review workshops every year. To learn more about what might be in store at one of these, researcher Laura Chatland shares her experience at one of the workshops in London.

There are also plenty of free, online courses available, including courses in the Web of Science Academy such as ‘Reviewing in the Sciences’, ‘Reviewing in the Humanities’ and ‘An introduction to peer review’.

The Web of Science Academy also supports co-reviewing with a mentor to teach peer review through practical experience. You learn by writing reviews of preprints, published papers, or even ‘real’ unpublished manuscripts with guidance from your mentor. You can work with one of our community mentors or your own PhD supervisor or postdoc advisor, or even a senior colleague in your department.

Go to the Web of Science Academy

Peer review templates

Peer review templates are helpful to use as you work your way through a manuscript. As part of our free Web of Science Academy courses, you’ll gain exclusive access to comprehensive guidelines and a peer review report template. The template offers points to consider for all aspects of the manuscript, including the abstract, methods and results sections. It also teaches you how to structure your review and will get you thinking about the overall strengths and impact of the paper at hand.

  • Web of Science Academy template (requires joining one of the free courses)
  • PLoS’s review template
  • Wiley’s peer review guide (not a template as such, but a thorough guide with questions to consider in the first and second reading of the manuscript)

Beyond following a template, it’s worth asking your editor or checking the journal’s peer review management system. That way, you’ll learn whether you need to follow a formal or specific peer review structure for that particular journal. If no such formal approach exists, try asking the editor for examples of other reviews performed for the journal. This will give you a solid understanding of what they expect from you.

Peer review examples

Understand what a constructive peer review looks like by learning from the experts.

Here’s a sample of pre- and post-publication peer reviews displayed on Web of Science publication records to help guide you through your first few reviews. Some of these are transparent peer reviews, which means the entire process is open and visible, from initial review and response through to revision and final publication decision. You may wish to scroll to the bottom of these pages so you can read the initial reviews first, then make your way up the page to read the editor’s and authors’ responses.

  • Pre-publication peer review: Patterns and mechanisms in instances of endosymbiont-induced parthenogenesis
  • Pre-publication peer review: Can Ciprofloxacin be Used for Precision Treatment of Gonorrhea in Public STD Clinics? Assessment of Ciprofloxacin Susceptibility and an Opportunity for Point-of-Care Testing
  • Transparent peer review: Towards a standard model of musical improvisation
  • Transparent peer review: Complex mosaic of sexual dichromatism and monochromatism in Pacific robins results from both gains and losses of elaborate coloration
  • Post-publication peer review: Brain state monitoring for the future prediction of migraine attacks
  • Web of Science Academy peer review: Students’ Perception on Training in Writing Research Article for Publication

F1000 has also put together a nice list of expert reviewer comments pertaining to the various aspects of a review report.

Co-reviewing

Co-reviewing (sharing peer review assignments with senior researchers) is one of the best ways to learn peer review. It gives researchers a hands-on, practical understanding of the process.

In an article in The Scientist, the team at Future of Research argues that co-reviewing can be a valuable learning experience for peer review, as long as it’s done properly and with transparency. The reason there’s a need to call out how co-reviewing works is that it does have its downsides. The practice can leave early-career researchers unaware of the core concepts of peer review, and it can make it hard for them to later join an editor’s reviewer pool if they haven’t received adequate recognition for their share of the review work. (If you are asked to write a peer review on behalf of a senior colleague or researcher, get recognition for your efforts by asking your senior colleague to verify the collaborative co-review on your Web of Science researcher profile.)

The Web of Science Academy course ‘Co-reviewing with a mentor’ is uniquely practical in this sense. You will gain experience in peer review by practicing on real papers and working with a mentor to get feedback on how your peer review can be improved. Students submit their peer review report as their course assignment and, after internal evaluation, receive a course certificate and an Academy graduate badge on their Web of Science researcher profile, and are put in front of top editors in their field through the Reviewer Locator at Clarivate.

Here are some external peer review resources found around the web:

  • Peer Review Resources from Sense about Science
  • Peer Review: The Nuts and Bolts by Sense about Science
  • How to review journal manuscripts by R. M. Rosenfeld for Otolaryngology – Head and Neck Surgery
  • Ethical guidelines for peer review from COPE
  • An Instructional Guide for Peer Reviewers of Biomedical Manuscripts by Callaham, Schriger & Cooper for Annals of Emergency Medicine (requires Flash or Adobe)
  • EQUATOR Network’s reporting guidelines for health researchers

And finally, we’ve written a number of blogs about handy peer review tips. Check out some of our top picks:

  • How to Write a Peer Review: 12 things you need to know
  • Want To Peer Review? Top 10 Tips To Get Noticed By Editors
  • Review a manuscript like a pro: 6 tips from a Web of Science Academy supervisor
  • How to write a structured reviewer report: 5 tips from an early-career researcher

Want to learn more? Become a master of peer review and connect with top journal editors through the Web of Science Academy, your free online hub of courses designed by expert reviewers, editors and Nobel Prize winners. Find out more today.


Research Proposal Peer Review


As a writer . . .  

Step 1: Include answers to the following two questions at the top of your draft:  

  • What questions do you have for your reviewer?  
  • List two concerns you have about your research proposal.  

Step 2: When you receive your peer's feedback, read and consider it carefully.  

  • Remember: you are not bound to accept everything your reader suggests; if you believe that the response comes as a result of misunderstanding your intentions, be sure that those intentions are clear. The problem can be either with the reader or the writer! 

As a reviewer . . .  

As you begin writing your peer review, remember that your peers benefit more from constructive criticism than vague praise. A comment like "I got confused here" or "I saw your point clearly here" is more useful than "It looks okay to me." Point out ways your classmates can improve their work.  

Step 1: Read your peer’s draft two times.  

  • Read the draft once to get an overview of the paper, and a second time to provide constructive criticism for the author to use when revising the draft.  

Step 2: Answer the following questions:   

  • Does the draft include an introduction that establishes the purpose of the paper and provides a thoughtful explanation of the project's significance by communicating why the project is important and how it will contribute to the existing field of knowledge?
  • Does the research review section include at least five credible sources on the topic?
  • In the research review section, has the writer explained the sources' relevance to the topic and discussed the significant commonalities and conflicts between the sources?
  • In the methodology section, has the writer discussed how they will proceed with the proposed project and addressed questions that still need to be answered about the topic? Is it clear why those questions are significant?
  • In the methodology section, has the writer discussed potential challenges (e.g., language and/or cultural barriers, potential safety concerns, time constraints, etc.) and how they plan to overcome them?
  • In the conclusion section, has the writer reminded the reader of the potential benefits of the proposed research by discussing who will potentially benefit from the proposed research and what the research will contribute to knowledge and understanding about the topic?
  • What did you find most interesting about this draft?

Step 3: Address your peer's questions and concerns included at the top of the draft.    

Step 4: Write a short paragraph about what the writer does especially well.  

Step 5: Write a short paragraph about what you think the writer should do to improve the draft.  

Your suggestions will be the most useful part of peer review for your classmates, so focus more of your time on these paragraphs; they will count for more of your peer review grade than the yes or no responses.  

Hints for peer review:  

  • Point out the strengths in the essay.  
  • Address the larger issues first.  
  • Make specific suggestions for improvement.  
  • Be tactful but be candid and direct.  
  • Don't be afraid to disagree with another reviewer.  
  • Make and receive comments in a useful way.  
  • Remember peer review is not an editing service.  

This material was developed by the COMPSS team and is licensed under a Creative Commons Attribution 4.0 International License. All materials created by the COMPSS team are free to use and can be adopted, remixed, shared at will as long as the materials are attributed. 

Example peer review reports

The genesis of this paper is the proposal that genomes containing a low percentage of guanine and cytosine (GC) nucleotide pairs lead to proteomes more prone to aggregation than those encoded by GC-rich genomes. As a consequence, these organisms are also more dependent on the protein folding machinery. If true, this interesting hypothesis could establish a direct link between the tendency to aggregate and the genomic code.

In their paper, the authors have tested the hypothesis on the genomes of eubacteria using a genome-wide approach based on multiple machine learning models. Eubacteria are an interesting set of organisms with appreciably high variation in nucleotide composition, with GC content ranging from 20% to 70%. The authors classified different eubacterial proteomes in terms of their aggregation propensity and chaperone dependence. For this purpose, new classifiers had to be developed, based on carefully curated data. They took into account twenty-four different features, among which are sequence patterns, the pseudo amino acid composition of phenylalanine, aspartic acid and glutamic acid, the distribution of positively charged amino acids, the FoldIndex score, and hydrophobicity. These classifiers seem to be altogether more accurate and robust than previous such parameters.

The authors found that, contrary to what was expected from the working hypothesis, which would predict a decrease in protein aggregation with an increase in GC richness, the aggregation propensity of proteomes increases with GC content, and thus the stability of the proteome against aggregation increases as GC content decreases. The work also established a direct correlation between GC-poor proteomes and a lower dependence on GroEL. The authors conclude by proposing that a decrease in eubacterial GC content may have been selected in organisms facing proteostasis problems. A way to test the overall results would be through in vitro evolution experiments aimed at testing whether adaptation to low GC content provides a folding advantage.

The main strength of this paper is that it addresses an interesting and timely question, finds a novel solution based on a carefully selected set of rules, and provides a clear answer. As such, this article represents an excellent and elegant bioinformatics genome-wide study which will almost certainly influence our thinking about protein aggregation and evolution. One weakness is that the text is not always easy to read and sometimes establishes unclear logical links between concepts.

Another possible criticism could be that, as with any in silico study, it makes strong assumptions about the sequence features that lead to aggregation and relies heavily on the quality of the classifiers used. Even though the developed classifiers seem to be more robust than previous such parameters, they remain overall indications that can only support statistical considerations. It could of course be argued that this is good enough to reach meaningful conclusions in this specific case.

The paper by Chevalier et al. analyzed whether the late sodium current (I_NaL) can be assessed using an automated patch-clamp device. To this end, the I_NaL effects of ranolazine (a well-known I_NaL inhibitor) and veratridine (an I_NaL activator) were described. The authors tested the CytoPatch automated patch-clamp equipment and performed whole-cell recordings in HEK293 cells stably transfected with human Nav1.5. Furthermore, they also tested the electrophysiological properties of human induced pluripotent stem cell-derived cardiomyocytes (hiPS) provided by Cellular Dynamics International. The title and abstract are appropriate for the content of the text. Furthermore, the article is well constructed, the experiments were well conducted, and the analysis was well performed.

I_NaL is a small current component generated by the fraction of Nav1.5 channels that, instead of entering the inactivated state, rapidly reopen in a burst mode. I_NaL critically determines action potential duration (APD), in such a way that both acquired (myocardial ischemia and heart failure, among others) and inherited (long QT type 3) diseases that augment the I_NaL magnitude also increase the susceptibility to cardiac arrhythmias. Therefore, I_NaL has been recognized as an important target for the development of drugs with either antiischemic or antiarrhythmic effects. Unfortunately, accurate measurement of I_NaL is a time-consuming technical challenge because of its very small density. The automated patch-clamp device tested by Chevalier et al. resolves this problem and allows fast and reliable I_NaL measurements.

The results presented here merit some comments and raise some unresolved questions. First, in some experiments (such as experiments B and D in Figure 2) the current recordings obtained before ranolazine perfusion seem to be quite unstable. Indeed, the amplitude progressively increased to a maximum value that was considered the control value (highlighted with arrows). Can this problem be overcome? Is this a consequence of slow intracellular dialysis? Is it a consequence of a time-dependent shift in the voltage dependence of activation/inactivation? Second, as shown in Figure 2, the intensity of drug effects seems to be quite variable. In fact, experiments A, B, C, and D in Figure 2 and panel 2D demonstrate that veratridine augmentation ranged from 0 to 400%. Even allowing for normal biological variability, we wonder whether this broad range of effect intensities can be justified by changes in the perfusion system. Has the automated dispensing system been tested? If not, we suggest testing the effects of several K+ concentrations on inward rectifier currents generated by Kir2.1 channels (I_Kir2.1).

The authors demonstrated that the recording quality was so high that the automated device allows differentiation between noise and current, even when measuring currents of less than 5 pA in amplitude. In order to make more precise mechanistic assumptions, the authors performed an elegant estimation of current variance (σ²) and macroscopic current (I) following the procedure described more than 30 years ago by Van Driessche and Lindemann 1. By means of this method, Chevalier et al. concluded that ranolazine acts by reducing the open channel probability, while veratridine increases the number of channels in the burst mode. We respectfully would like to stress that these considerations must be put in context from a pharmacological point of view. We do not doubt that ranolazine acts as an open channel blocker; what seems clear, however, is that its onset block kinetics must be “ultra” slow, otherwise ranolazine would decrease peak I_NaL even at low frequencies of stimulation. This comment points towards the fact that, for a precise mechanistic study of drugs that modify ionic currents, it is mandatory to analyze drug effects with much more complicated pulse protocols. The questions, then, are: does this automated equipment allow analysis of the frequency-, time-, and voltage-dependent effects of drugs? Can versatile and complicated pulse protocols be applied? Does it allow good voltage control even when the generated currents are big and fast? If this is not possible, then, by means of its extraordinary discrimination between current and noise, this automated patch-clamp equipment will only be helpful for rapid screening of I_NaL-modifying drugs. Obviously it will also be perfect for testing hERG-blocking drug effects, as demanded by the regulatory authorities.

Finally, as cardiac electrophysiologists, we would like to stress that our dream of testing drug effects on human ventricular myocytes seems to be coming true. Indeed, human atrial myocytes are technically, ethically and logistically difficult to obtain, and human ventricular myocytes are almost impossible to obtain except from hearts explanted from patients at the end stage of cardiac disease. Here the authors demonstrated that ventricular myocytes derived from hiPS generate beautiful action potentials that can be recorded with this automated equipment. The traces shown suggest that there was no alternation in the action potential duration. Is this a consistent finding? How long do these stable recordings last? The only comment is that the resting membrane potential seems to be somewhat variable. Can this be resolved? Is it an unexpected veratridine effect? Standardization of maturation methods for ventricular myocytes derived from hiPS will be a big achievement for cardiac cellular electrophysiology, which for years has had to rely on the imprecise extrapolation of data obtained from a combination of several species, none of which is representative of human electrophysiology. The big deal will be the maturation of human atrial myocytes derived from hiPS that fulfil the known characteristics of human atrial cells.

We suggest suppressing the initial sentence of section 3. We surmise that the results obtained from the experiments described in this section cannot serve to clarify the role of I_NaL in arrhythmogenesis.

1. Van Driessche W, Lindemann B: Concentration dependence of currents through single sodium-selective pores in frog skin. Nature. 1979; 282(5738): 519-520.

The authors have clarified several of the questions I raised in my previous review. Unfortunately, most of the major problems have not been addressed by this revision. As I stated in my previous review, I deem it unlikely that all those issues can be solved merely by a few added paragraphs. Instead there are still some fundamental concerns with the experimental design and, most critically, with the analysis. This means the strong conclusions put forward by this manuscript are not warranted and I cannot approve the manuscript in this form.

  • The greatest concern is that when I followed the description of the methods in the previous version it was possible to decode, with almost perfect accuracy, any arbitrary stimulus labels I chose. See https://doi.org/10.6084/m9.figshare.1167456 for examples of this reanalysis. Regardless of whether we pretend that the actual stimulus appeared at a later time or was continuously alternating between signal and silence, the decoding is always close to perfect. This is an indication that the decoding has nothing to do with the actual stimulus heard by the Sender but is opportunistically exploiting some other features in the data. The control analysis the authors performed, reversing the stimulus labels, cannot address this problem because it suffers from the exact same problem. Essentially, what the classifier is presumably using is the time that has passed since the recording started.
  • The reason for this is presumably that the authors used non-independent data for training and testing. Assuming I understand correctly (see point 3), randomly sampling one half of the data samples from an EEG trace does not yield independent data. Repeating the analysis five times – the control analysis the authors performed – is not an adequate way to address this concern. Randomly selected samples from a time series containing slow changes (such as the slow wave activity that presumably dominates these recordings under these circumstances) will inevitably contain strong temporal correlations. See TemporalCorrelations.jpg in https://doi.org/10.6084/m9.figshare.1185723 for 2D density histograms and a correlation matrix demonstrating this.
  • While the revised methods section provides more detail now, it is still unclear exactly what data were used. Conventional classification analyses report what data features (usually columns in the data matrix) and what observations (usually rows) were used. Anything could be a feature, but typically this might be the different EEG channels or fMRI voxels, etc. Observations are usually time points. Here I assume the authors transformed the raw samples into a different space using principal component analysis. It is not stated whether the dimensionality was reduced using the eigenvalues. Either way, I assume the data samples (collected at 128 Hz) were then used as observations and the EEG channels transformed by PCA were used as features. The stimulus labels were assigned as ON or OFF for each sample. A set of 50% of samples (and labels) was then selected at random for training, and the rest was used for testing. Is this correct?
  • A powerful non-linear classifier can capitalise on such correlations to discriminate arbitrary labels. In my own analyses I used both an SVM with RBF as well as a k-nearest neighbour classifier, both of which produce excellent decoding of arbitrary stimulus labels (see point 1). Interestingly, linear classifiers or less powerful SVM kernels fare much worse – a clear indication that the classifier learns about the complex non-linear pattern of temporal correlations that can describe the stimulus label. This is further corroborated by the fact that when using stimulus labels that are chosen completely at random (i.e. with high temporal frequency) decoding does not work.
  • The authors have mostly clarified how the correlation analysis was performed. It is still left unclear, however, how the correlations for individual pairs were averaged. Was Fisher’s z-transformation used, or were the data pooled across pairs? More importantly, it is not entirely surprising that under the experimental conditions there will be some correlation between the EEG signals for different participants, especially in low frequency bands. Again, this further supports the suspicion that the classification utilizes slow frequency signals that are unrelated to the stimulus and the experimental hypothesis. In fact, a quick spot check seems to confirm this suspicion: correlating the time series separately for each channel from the Receiver in pair 1 with those from the Receiver in pair 18 reveals 131 significant (p<0.05, Bonferroni corrected) out of 196 (14x14 channels) correlations… One could perhaps argue that this is not surprising because both these pairs had been exposed to identical stimulus protocols: one minute of initial silence and only one signal period (see point 6). However, it certainly argues strongly against the notion that the decoding is in any way related to the mental connection between the particular Sender and Receiver in a given pair because it clearly works between Receivers in different pairs! However, to further control for this possibility I repeated the same analysis but now comparing the Receiver from pair 1 to the Receiver from pair 15. This pair was exposed to a different stimulus paradigm (2 minutes of initial silence and a longer paradigm with three signal periods). I only used the initial 3 minutes for the correlation analysis. Therefore, both recordings would have been exposed to only one signal period but at different times (at 1 min and 2 min for pair 1 and 15, respectively). Even though the stimulus protocol was completely different, the time courses for all the channels are highly correlated and 137 out of 196 correlations are significant. Considering that I used the raw data for this analysis, it should not surprise anyone that extracting power from different frequency bands in short time windows will also reveal significant correlations. Crucially, it demonstrates that correlations between Sender and Receiver are artifactual and trivial.
  • The authors argue in their response and the revision that predictive strategies were unlikely. After having performed these additional analyses I am inclined to agree. The excellent decoding almost certainly has nothing to do with expectation or imagery effects and it is irrelevant whether participants could guess the temporal design of the experiment. Rather, the results are almost entirely an artefact of the analysis. However, this does not mean that predictability is not an issue. The figure StimulusTimecourses.jpg in https://doi.org/10.6084/m9.figshare.1185723 plots the stimulus time courses for all 20 pairs as can be extracted from the newly uploaded data. This confirms what I wrote in my previous review, in fact, with the corrected data sets the problem with predictability is even greater. Out of the 20 pairs, 13 started with 1 min of initial silence. The remaining 7 had 2 minutes of initial silence. Most of the stimulus paradigms are therefore perfectly aligned and thus highly correlated. This also proves incorrect the statement that initial silence periods were 1, 2, or 3 minutes. No pair had 3 min of initial silence. It would therefore have been very easy for any given Receiver to correctly guess the protocol. It should be clear that this is far from optimal for testing such an unorthodox hypothesis. Any future experiments should employ more randomization to decrease predictability. Even if this wasn’t the underlying cause of the present results, this is simply not great experimental design.
  • The authors now acknowledge in their response that all the participants were authors. They say that this is also acknowledged in the methods section, but I did not see any statement to that effect in the revised manuscript. As before, I also find it highly questionable to include only authors in an experiment of this kind. It is not sufficient to claim that Receivers weren’t guessing their stimulus protocol. While I am giving the authors (and thus the participants) the benefit of the doubt that they truly believe they weren’t guessing/predicting the stimulus protocols, this does not rule out that they did. It may in fact be possible to make such predictions subconsciously (now, if you ask me, this is an interesting scientific question someone should do an experiment on!). The fact that the participants were familiar with the protocol may have helped with that. Any future experiments should take steps to prevent this.
  • I do not follow the explanation for the binomial test the authors used. Based on the excessive Bayes Factor of 390,625 it is clear that the authors assumed a chance level of 50% on their binomial test. Because the design is not balanced, this is not correct.
  • In general, the Bayes Factor and the extremely high decoding accuracy should have given the authors reason to pause. Considering the unusual hypothesis, did the authors not at any point wonder if these results aren’t just far too good to be true? Decoding mental states from brain activity is typically extremely noisy and hardly affords accuracies at the level seen here. Extremely accurate decoding and Bayes Factors in the hundreds of thousands should be a tell-tale sign to check that there isn’t an analytical flaw that makes the result entirely trivial. I believe this is what happened here and thus I think this experiment serves as a very good demonstration of the pitfalls of applying such analysis without sanity checks. In order to make claims like this, the experimental design must contain control conditions that can rule out these problems. Presumably, recordings without any Sender, and maybe even when the “Receiver” is aware of this fact, should produce very similar results.

Based on all these factors, it is impossible for me to approve this manuscript. I should, however, state that it is laudable that the authors chose to make all the raw data of their experiment publicly available. Without this it would have been impossible for me to carry out the additional analyses, and thus the most fundamental problem in the analysis would have remained unknown. I respect the authors’ patience and professionalism in dealing with what I can only assume is a rather harsh review experience. I am honoured by the request for an adversarial collaboration. I do not rule out such efforts at some point in the future. However, for all of the reasons outlined in this and my previous review, I do not think the time is right for this experiment to proceed to this stage. Fundamental analytical flaws and weaknesses in the design should be ruled out first. An adversarial collaboration only really makes sense to me for paradigms where we can be confident that mundane or trivial factors have been excluded.

This manuscript does an excellent job demonstrating significant strain differences in Buridan's paradigm. Since each Drosophila lab has its own wild-type (usually Canton-S) isolate, this issue of strain differences is actually a very important one for between-lab reproducibility. This work is a good reminder for all geneticists to pay attention to the population effects in the background controls, and presumably in the mutant lines we are comparing.

I was very pleased to see that the within-isolate behavior was consistent in replicate experiments one year apart. The authors further argue that the between-isolate differences in behavior arise from a founder effect, at least for the differences in locomotor behavior between the Paris lines CS_TP and CS_JC. I believe this is a very reasonable and testable hypothesis. It predicts that genetic variability for these traits exists within the populations. It should now be possible to perform selection experiments from the original CS_TP population to replicate the founding event and estimate the heritability of these traits.

Two other things that I liked about this manuscript are the ability to adjust parameters in Figure 3 and the ability to download the raw data. After reading the manuscript, I was a little disappointed that the performance of the five strains on each of the 12 behavioral variables wasn't broken down individually in a table or figure. I thought this might help us readers understand what the principal components were representing. The authors have made this data readily accessible in a downloadable spreadsheet.

This is an exceptionally good review and balanced assessment of the status of CETP inhibitors and ASCVD from a world authority in the field. The article highlights important data that might have been overlooked when promulgating the clinical value of CETPIs and related trials.

Only 2 areas need revision:

  • Page 3, para 2: the notion that these data from Papp et al. convey is critical, and the message needs an explicit sentence or two at the end of the paragraph.
  • Page 4, Conclusion: the assertion concerning the ethics of the two Phase 3 clinical trials needs toning down. Perhaps rephrase to indicate that the value and sense of doing these trials is open to question, with attendant ethical implications, or softer wording to that effect.

The Wiley et al. manuscript describes a beautiful synthesis of contemporary genetic approaches to identify, with astonishing efficiency, lead compounds for therapeutic approaches to a serious human disease. I believe the importance of this paper stems from the applicability of the approach to the several thousand rare human disease genes that next-generation sequencing will uncover in the next few years, and the challenge we will have in figuring out the function of these genes and their resulting defects. This work presents a paradigm that can be broadly and usefully applied.

In detail, the authors begin with the gene responsible for X-linked spinal muscular atrophy and express both the wild-type version of that human gene and a mutant form of it in S. pombe. The conceptual leap here is that progress in genetics is driven by phenotype, and this approach, involving a yeast with no spine or muscles to atrophy, is nevertheless an N-dimensional detector of phenotype.

The study is not without a small measure of luck, in that expression of the wild-type UBA1 gene caused a slow-growth phenotype which the mutant did not. Hence there was something in S. pombe that could feel the impact of this protein. Given this phenotype, the authors then went to work and, using the power of the synthetic genetic array approach pioneered by Boone and colleagues, made a systematic set of double mutants combining the human-expressed UBA1 gene with knockout alleles of a plurality of S. pombe genes. They found well over a hundred mutations that either enhanced or suppressed the growth defect of the cells expressing UBA1. Most of these have human orthologs. My hunch is that many human genes expressed in yeast will have some comparably exploitable phenotype, and time will tell.

Building on the interaction networks of S. pombe genes already established, augmenting these networks with the protein interaction networks from yeast and from human proteome studies involving these genes, and drawing on the structure of the emerging networks, the authors deduced that an E3 ligase modulated UBA1 and made the leap that it therefore might also impact X-linked spinal muscular atrophy.

Here, the awesome power of the model organism community comes into the picture, as there is a zebrafish model of spinal muscular atrophy. The principle of phenologs articulated by the Marcotte group inspires the recognition of the transitive logic of how phenotypes in one organism relate to phenotypes in another. With this zebrafish model, they were able to confirm that an inhibitor of E3 ligases and of the Nedd8 E1-activating enzyme suppressed the motor axon anomalies, as predicted by the effect of mutations in S. pombe on the phenotypes of the UBA1 overexpression.

I believe this is an important paper to teach in intro graduate courses as it illustrates beautifully how important it is to know about and embrace the many new sources of systematic genetic information and apply them broadly.

This paper by Amrhein et al. criticizes a paper by Bradley Efron that discusses Bayesian statistics (Efron, 2013a), focusing on a particular example that was also discussed in Efron (2013b). The example concerns a woman who is carrying twins, both male (as determined by sonogram; we ignore the possibility that gender has been observed incorrectly). The parents-to-be ask Efron to tell them the probability that the twins are identical.

This is my first open review, so I'm not sure of the protocol. But given that there appear to be errors in both Efron (2013b) and the paper under review, I am sorry to say that my review might actually be longer than the article by Efron (2013a), the primary focus of the critique, and the critique itself. I apologize in advance for this. To start, I will outline the problem being discussed for the sake of readers.

This problem has various parameters of interest. The primary parameter is the genetic composition of the twins in the mother’s womb. Are they identical (which I describe as the state x = 1) or fraternal twins ( x = 0)? Let y be the data, with y = 1 to indicate the twins are the same gender. Finally, we wish to obtain Pr( x = 1 | y = 1), the probability the twins are identical given they are the same gender 1 . Bayes’ rule gives us an expression for this:

Pr(x = 1 | y = 1) = Pr(x = 1) Pr(y = 1 | x = 1) / {Pr(x = 1) Pr(y = 1 | x = 1) + Pr(x = 0) Pr(y = 1 | x = 0)}

Now we know that Pr( y = 1 | x = 1) = 1; twins must be the same gender if they are identical. Further, Pr( y = 1 | x = 0) = 1/2; if twins are not identical, the probability of them being the same gender is 1/2.

Finally, Pr( x = 1) is the prior probability that the twins are identical. The bone of contention in the Efron papers and the critique by Amrhein et al. revolves around how this prior is treated. One can think of Pr( x = 1) as the population-level proportion of twins that are identical for a mother like the one being considered.

However, if we ignore other forms of twins that are extremely rare (equivalent to ignoring coins finishing on their edges when flipping them), one incontrovertible fact is that Pr( x = 0) = 1 − Pr( x = 1); the probability that the twins are fraternal is the complement of the probability that they are identical.

The above values and expressions for Pr(y = 1 | x = 1), Pr(y = 1 | x = 0), and Pr(x = 0) lead to a simpler expression for the probability that we seek, the probability that the twins are identical given they have the same gender:

Pr(x = 1 | y = 1) = 2 Pr(x = 1) / [1 + Pr(x = 1)]     (1)
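
Spelling out the algebra behind equation (1), writing p for Pr(x = 1) and substituting the values given above into Bayes’ rule:

\[
\Pr(x=1 \mid y=1)
= \frac{p \cdot 1}{p \cdot 1 + (1-p)\cdot \tfrac{1}{2}}
= \frac{p}{\tfrac{1}{2}\,(1+p)}
= \frac{2p}{1+p}.
\]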

We see that the answer depends on the prior probability that the twins are identical, Pr( x =1). The paper by Amrhein et al. points out that this is a mathematical fact. For example, if identical twins were impossible (Pr( x = 1) = 0), then Pr( x = 1| y = 1) = 0. Similarly, if all twins were identical (Pr( x = 1) = 1), then Pr( x = 1| y = 1) = 1. The “true” prior lies somewhere in between. Apparently, the doctor knows that one third of twins are identical 2 . Therefore, if we assume Pr( x = 1) = 1/3, then Pr( x = 1| y = 1) = 1/2.

Now, what would happen if we didn't have the doctor's knowledge? Laplace's “Principle of Insufficient Reason” would suggest that we give equal prior probability to all possibilities, so Pr( x = 1) = 1/2 and Pr( x = 1| y = 1) = 2/3, an answer different from 1/2 that was obtained when using the doctor's prior of 1/3.

Efron (2013a) highlights this sensitivity to the prior, representing someone who defines an uninformative prior as a “violator”, with Laplace as the “prime violator”. In contrast, Amrhein et al. correctly point out that the difference in the posterior probabilities is merely a consequence of mathematical logic. No one is violating logic – they are merely expressing ignorance by specifying equal probabilities to all states of nature. Whether this is philosophically valid is debatable ( Colyvan 2008 ), but I will not give weight to that question here; it is well beyond the scope of this review. But setting Pr(x = 1) = 1/2 is not a violation; it is merely an assumption with consequences (and one that in hindsight might be incorrect 2 ).

Alternatively, if we don't know Pr( x = 1), we could describe that probability by its own probability distribution. Now the problem has two aspects that are uncertain. We don’t know the true state x , and we don’t know the prior (except in the case where we use the doctor’s knowledge that Pr( x = 1) = 1/3). Uncertainty in the state of x refers to uncertainty about this particular set of twins. In contrast, uncertainty in Pr( x = 1) reflects uncertainty in the population-level frequency of identical twins. A key point is that the state of one particular set of twins is a different parameter from the frequency of occurrence of identical twins in the population.

Without knowledge about Pr( x = 1), we might use Pr( x = 1) ~ dunif(0, 1), which is consistent with Laplace. Alternatively, Efron (2013b) notes another alternative for an uninformative prior: Pr( x = 1) ~ dbeta(0.5, 0.5), which is the Jeffreys prior for a probability.

Here I disagree with Amrhein et al. ; I think they are confusing the two uncertain parameters. Amrhein et al. state:

“We argue that this example is not only flawed, but useless in illustrating Bayesian data analysis because it does not rely on any data. Although there is one data point (a couple is due to be parents of twin boys, and the twins are fraternal), Efron does not use it to update prior knowledge. Instead, Efron combines different pieces of expert knowledge from the doctor and genetics using Bayes’ theorem.”

This claim might be correct when describing uncertainty in the population-level frequency of identical twins. The data about the twin boys are not useful by themselves for this purpose – they are a biased sample (the data have come to light because their gender is the same; they are not a random sample of twins). Further, a sample of size one, especially if biased, is not a firm basis for inference about a population parameter. While the data are biased, the claim by Amrhein et al. that there are no data is incorrect.

However, the data point (the twins have the same gender) is entirely relevant to the question about the state of this particular set of twins. And it does update the prior. This updating of the prior is given by equation (1) above. The doctor’s prior probability that the twins are identical (1/3) becomes the posterior probability (1/2) when using the information that the twins are the same gender. The prior is clearly updated, with Pr(x = 1 | y = 1) ≠ Pr(x = 1) in all but trivial cases; Amrhein et al.’s statement that I quoted above is incorrect in this regard.

This possible confusion between uncertainty about these twins and uncertainty about the population level frequency of identical twins is further suggested by Amrhein et al. ’s statements:

“Second, for the uninformative prior, Efron mentions erroneously that he used a uniform distribution between zero and one, which is clearly different from the value of 0.5 that was used. Third, we find it at least debatable whether a prior can be called an uninformative prior if it has a fixed value of 0.5 given without any measurement of uncertainty.”

Note that if the prior for Pr(x = 1) is specified as 0.5, or dunif(0,1), or dbeta(0.5, 0.5), the posterior probability that these twins are identical is 2/3 in all cases. Efron (2013b) says the different priors lead to different results, but this is incorrect; the correct answer (2/3) is given in Efron (2013a)³. Nevertheless, a prior that specifies Pr(x = 1) = 0.5 does indicate uncertainty about whether this particular set of twins is identical (but certainty about the population-level frequency of identical twins). And Efron’s (2013a) result is consistent with Pr(x = 1) having a uniform prior. Therefore, both claims in the quote above are incorrect.

It is probably easiest to show the (lack of) influence of the prior using MCMC sampling. Here is WinBUGS code for the case using Pr(x = 1) = 0.5.

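A minimal sketch of such a model (the variable names match those used below; the single data point is y = 1, indicating that the twins are the same gender):

  model {
    pr_ident_twins <- 0.5         # prior probability that a set of twins is identical
                                  # (could instead be ~ dunif(0, 1) or ~ dbeta(0.5, 0.5))
    x ~ dbern(pr_ident_twins)     # x = 1 if these twins are identical, 0 if fraternal
    p_same <- x + (1 - x) * 0.5   # identical twins are always the same gender;
                                  # fraternal twins are the same gender with probability 0.5
    y ~ dbern(p_same)             # data: y = 1, the twins are the same gender
  }
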
Running this model in WinBUGS shows that the posterior mean of x is 2/3; this is the posterior probability that x = 1.

Instead of using pr_ident_twins <- 0.5, we could treat this probability as uncertain and define pr_ident_twins ~ dunif(0,1) or pr_ident_twins ~ dbeta(0.5,0.5). In either case, the posterior mean value of x remains 2/3 (contrary to Efron 2013b, but in accord with the correction in Efron 2013a).

Note, however, that the value of the population-level parameter pr_ident_twins is different in all three cases. In the first case it remains unchanged at 1/2, where it was set. In the cases where the prior distribution for pr_ident_twins is uniform or beta, the posterior distributions remain broad, but they differ depending on the prior (as they should – different priors lead to different posteriors⁴). However, given the biased sample of size 1, the posterior distribution for this particular parameter is likely to be misleading as an estimate of the population-level frequency of identical twins.

So why doesn’t the choice of prior influence the posterior probability that these twins are identical? Well, for these three priors, the prior probability that any single set of twins is identical is 1/2 (this is essentially the mean of the prior distributions in these three cases).

If, instead, we set the prior as dbeta(1,2), which has a mean of 1/3, then the posterior probability that these twins are identical is 1/2. This is the same result as if we had set Pr(x = 1) = 1/3. In both these cases (choosing dbeta(1,2) or 1/3), the prior probability that a single set of twins is identical is 1/3, so the posterior is the same (1/2) given the data (the twins have the same gender).
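
To make the arithmetic explicit, write m for the prior probability that a single set of twins is identical (the mean of the prior distribution for Pr(x = 1)), and use the likelihoods of same-gender twins under each hypothesis noted below (1 if identical, 0.5 if fraternal):

  Pr(x = 1 | y = 1) = (m × 1) / (m × 1 + (1 − m) × 0.5)

With m = 1/2 this gives 2/3, and with m = 1/3 it gives 1/2, matching the results above.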

Further, Amrhein et al. also seem to misunderstand the data. They note:

“Although there is one data point (a couple is due to be parents of twin boys, and the twins are fraternal)...”

This is incorrect. The parents simply know that the twins are both male. Whether they are fraternal is unknown (fraternal twins being the complement of identical twins) – that is the question the parents are asking. This error of interpretation makes the calculations in Box 1 and subsequent comments irrelevant.

Box 1 also implies Amrhein et al. are using the data to estimate the population frequency of identical twins rather than the state of this particular set of twins. This is different from the aim of Efron (2013a) and the stated question.

Efron suggests that Bayesian calculations should be checked with frequentist methods when priors are uncertain. However, this is a good example of a case where this cannot be done easily, and Amrhein et al. are correct to point this out. In this case, we are interested in the probability that the hypothesis is true given the data (an inverse probability), not the probabilities that the observed data would be generated given particular hypotheses (frequentist probabilities). If one wants the inverse probability (the probability the twins are identical given they are the same gender), then Bayesian methods (and therefore a prior) are required. A logical answer simply requires that the prior is constructed logically. Whether that answer is “correct” will, in most cases, only be known in hindsight.

However, one possible way to analyse this example using frequentist methods would be to assess the likelihood of obtaining the data under each of the two hypotheses (the twins are identical or fraternal). The likelihood of the twins having the same gender under the hypothesis that they are identical is 1. The likelihood of the twins having the same gender under the hypothesis that they are fraternal is 0.5. Therefore, the weight of evidence in favour of identical twins is twice that of fraternal twins. Scaling these weights so they sum to one (Burnham and Anderson 2002) gives a weight of 2/3 for identical twins and 1/3 for fraternal twins. These scaled weights have the same numerical values as the posterior probabilities based on either a Laplace or Jeffreys prior. Thus, one might argue that the weight of evidence for each hypothesis when using frequentist methods is equivalent to the posterior probabilities derived from an uninformative prior. So, as a final aside in reference to Efron (2013a), if we are being “violators” when using a uniform prior, are we also being “violators” when using frequentist methods to weigh evidence? Regardless of the answer to this rhetorical question, “checking” the results with frequentist methods doesn’t give any more insight than using uninformative priors (in this case). However, this analysis shows that the question can be analysed using frequentist methods; the single data point is not a problem for this. The claim in Amrhein et al. that a frequentist analysis "is impossible because there is only one data point, and frequentist methods generally cannot handle such situations" is not supported by this example.

In summary, the comment by Amrhein et al. raises some interesting points that seem worth discussing, but it makes important errors in analysis and interpretation, and misrepresents the results of Efron (2013a). This means the current version should not be approved.

Burnham, K.P. & Anderson, D.R. (2002) Model Selection and Multi-model Inference: A Practical Information-theoretic Approach. Springer-Verlag, New York.

Colyvan, M. (2008) Is Probability the Only Coherent Approach to Uncertainty? Risk Analysis 28: 645-652.

Efron, B. (2013a) Bayes’ Theorem in the 21st Century. Science 340(6137): 1177-1178.

Efron, B. (2013b) A 250-year argument: Belief, behavior, and the bootstrap. Bulletin of the American Mathematical Society 50: 129-146.

  1. The twins are both male. However, if the twins were both female, the statistical results would be the same, so I will simply use the data that the twins are the same gender.
  2. In reality, the frequency of twins that are identical is likely to vary depending on many factors, but we will accept 1/3 for now.
  3. Efron (2013b) reports the posterior probability for these twins being identical as “a whopping 61.4% with a flat Laplace prior” but as 2/3 in Efron (2013a). The latter (I assume 2/3 is “even more whopping”!) is the correct answer, which I confirmed via email with Professor Efron. Therefore, Efron (2013b) incorrectly claims the posterior probability is sensitive to the choice between a Jeffreys or Laplace uninformative prior.
  4. When the data are very informative relative to the different priors, the posteriors will be similar, although not identical.

I am very glad the authors wrote this essay. It is a well-written, needed, and useful summary of the current status of “data publication” from a certain perspective. The authors, however, need to be bolder and more analytical. This is an opinion piece, yet I see little opinion. A certain view is implied by the organization of the paper and the references chosen, but they could be more explicit.

The paper would be both more compelling and more useful to a broad readership if the authors moved beyond providing a simple summary of the landscape, examined why there is controversy in some areas, and then used the evidence they have compiled to suggest a path forward. They need to be more forthright in saying what data publication means to them, or what parts of it they do not deal with. Are they satisfied with the Lawrence et al. definition? Do they accept the critique of Parsons and Fox? What is the scope of their essay?

The authors take a rather narrow view of data publication, which I think hinders their analyses. They describe three types of (digital) data publication: Data as a supplement to an article; data as the subject of a paper; and data independent of a paper. The first two types are relatively new and they represent very little of the data actually being published or released today. The last category, which is essentially an “other” category, is rich in its complexity and encompasses the vast majority of data released. I was disappointed that the examples of this type were only the most bare-bones (Zenodo and Figshare). I think a deeper examination of this third category and its complexity would help the authors better characterize the current landscape and suggest paths forward.

Some questions the authors might consider: Are these really the only three models in consideration, or does the publication model overstate a consensus around a certain type of data publication? Why are there different models, and which approach is better for different situations? Do they have different business models or imply different social contracts? Might it also be worthwhile to develop a typology of “publishers” rather than “publications”? For example, do domain repositories vs. institutional repositories vs. publishers address the issues differently? Are these models sustaining models or just something to get us through the next 5-10 years while we really figure it out?

I think this oversimplification inhibited some deeper analysis in other areas as well. I would like to see more examination of the validation requirement beyond the lens of peer review, and I would like a deeper examination of incentives and credit beyond citation.

I thought the validation section of the paper was very relevant, but somewhat light. I like the choice of the term validation as more accurate than “quality” and it fits quite well with Callaghan’s useful distinction between technical and scientific review, but I think the authors overemphasize the peer-review style approach. The authors rightly argue that “peer-review” is where the publication metaphor leads us, but it may be a false path. They overstate some difficulties of peer-review (No-one looks at every data value? No, they use statistics, visualization, and other techniques.) while not fully considering who is responsible for what. We need a closer examination of different roles and who are appropriate validators (not necessarily conventional peers). The narrowly defined models of data publication may easily allow for a conventional peer-review process, but it is much more complex in the real-world “other” category. The authors discuss some of this in what they call “independent data validation,” but they don’t draw any conclusions.

Only the simplest of research data collections are validated only by the original creators. More often there are teams working together to develop experiments, sampling protocols, algorithms, etc. There are additional teams who assess, calibrate, and revise the data as they are collected and assembled. The authors discuss some of this in their examples like the PDS and tDAR, but I wish they were more analytical and offered an opinion on the way forward. Are there emerging practices or consensus in these team-based schemes? The level of service concept illustrated by Open Context may be one such area. Would formalizing or codifying some of these processes accomplish the same as peer-review or more? What is the role of the curator or data scientist in all of this? Given the authors’ backgrounds, I was surprised this role was not emphasized more. Finally, I think it is a mistake for science review to be the main way to assess reuse value. It has been shown time and again that data end up being used effectively (and valued) in ways that the original experts never envisioned or even thought valid.

The discussion of data citation was good and captured the state of the art well, but again I would have liked to see some views on a way forward. Have we solved the basic problem and are now just dealing with edge cases? Is the “just-in-time identifier” the way to go? What are the implications? Will the more basic solutions work in the interim? More critically, are we overemphasizing the role of citation to provide academic credit? I was gratified that the authors referenced the Parsons and Fox paper which questions the whole data publication metaphor, but I was surprised that they only discussed the “data as software” alternative metaphor. That is a useful metaphor, but I think the ecosystem metaphor has broader acceptance. I mention this because the authors critique the software metaphor because “using it to alter or affect the academic reward system is a tricky prospect”. Yet there is little to suggest that data publication and corresponding citation alters that system either. Indeed there is little if any evidence that data publication and citation incentivize data sharing or stewardship. As Christine Borgman suggests, we need to look more closely at who we are trying to incentivize to do what. There is no reason to assume it follows the same model as research literature publication. It may be beyond the scope of this paper to fully examine incentive structures, but it at least needs to be acknowledged that building on the current model doesn’t seem to be working.

Finally, what is the takeaway message from this essay? It ends rather abruptly with no summary, no suggested directions or immediate challenges to overcome, no call to action, no indications of things we should stop trying, and only brief mention of alternative perspectives. What do the authors want us to take away from this paper?

Overall though, this is a timely and needed essay. It is well researched and nicely written with rich metaphor. With modifications addressing the detailed comments below and better recognizing the complexity of the current data publication landscape, this will be a worthwhile review paper. With more significant modification where the authors dig deeper into the complexities and controversies and truly grapple with their implications to suggest a way forward, this could be a very influential paper. It is possible that the definitions of “publication” and “peer-review” need not be just stretched but changed or even rejected.

  • The whole paper needs a quick copy edit. There are a few typos, missing words, and wrong verb tenses. Note the word “data” is a plural noun. E.g., Data are not software, nor are they literature. (NSICD, instead of NSIDC)
  • Page 2, para 2: “citability is addressed by assigning a PID.” This is not true, as the authors discuss on page 4, para 4. Indeed, page 4, para 4 seems to contradict itself. Citation is more than a locator/identifier.
  • In the discussion of “Data independent of any paper” it is worth noting that there may often be linkages between these data and myriad papers. Indeed, a looser concept of a data paper has existed for some time, where researchers request a citation to a paper even though it is not the data nor fully describes the data (e.g., the CRU temperature records).
  • Page 4, para 1: I’m not sure it’s entirely true that published data cannot involve requesting permission. In past work with Indigenous knowledge holders, they were willing to publish summary data and then provide the details when satisfied the use was appropriate and not exploitive. I think those data were “published” as best they could be. A nit, perhaps, but it highlights that there are few if any hard and fast rules about data publication.
  • Page 4, para 2: You may also want to mention the WDS certification effort, which is combining with the DSA via an RDA Working Group:
  • Page 4, para 2: The joint declaration of data citation principles involved many more organizations than Force11, CODATA, and DCC. Please credit them all (maybe in a footnote). The glory of the effort was that it was truly a joint effort across many groups. There is no leader. Force11 was primarily a convener.
  • Page 4, para 6: The deep citation approach recommended by ESIP is not just to list variables or a range of data. It is to identify a “structural index” for the data and to use this to reference subsets. In Earth science this structural index is often space and time, but many other indices are possible – location in a gene sequence, file type, variable, bandwidth, viewing angle, etc. It is not just for “straightforward” data sets.
  • Page 5, para 5: I take issue with the statement that few repositories provide scientific review. I can think of a couple dozen that do just off the top of my head, and I bet most domain repositories have some level of science review. The “scientists” may not always be in house, but the repository is a team facilitator. See my general comments.
  • Page 5, para 10: The PDS system is only unusual in that it is well documented and advertised. As mentioned, this team style approach is actually fairly common.
  • Page 6, para 3: Parsons and Fox don’t just argue that the data publication metaphor is limiting. They also say it is misleading. That should be acknowledged at least, if not actively grappled with.
  • Artifact removal: Unfortunately the authors have not updated the paper with a 2x2 table showing guns and smiles by removed data points. This could dispel the criticism that an asymmetrical expectation bias, which has been shown to exist in similar experiments, is driving a bias leading to inappropriate conclusions. This is my strongest criticism of the paper and should be easily addressed, as per my previous review comment. The fact that this simple data presentation was not provided to remove a clear potential source of spurious results is disappointing.
  • The authors have added 95% CIs to figures S1 and S2. This clarifies the scope for expectation bias in these data. The addition of error bars permits the authors’ assumption of a linear trend, indicating that the effect of sequences of either guns or smiles may not skew results. Equally, there could be either a downwards or upwards trend fitting within the confidence intervals that could be indicative of a cognitive bias that may violate the assumptions of the authors, leading to spurious results. One way to remove these doubts could be to stratify the analyses by the length of sequences of identical symbols. If the results hold up in each of the strata, this potential bias could be shown to not be present in the data. If the bias is strong, particularly in longer runs, this could indicate that the positive result was due to small numbers of longer identical runs combined with a cognitive bias rather than an ability to predict future events.

Chamberlain and Szöcs present the taxize R package, a set of functions that provides interfaces to several web tools and databases, and simplifies the process of checking, updating, correcting and manipulating taxon names for researchers working with ecological/biological data. A key theme repeated throughout is the need for reproducibility of scientific workflows, and taxize provides a means to achieve this for taxonomic search within the R software ecosystem.

The manuscript is well-written and nicely presented, with a good balance of descriptive text and discourse and practical illustration of package usage. A number of examples illustrate the scope of the package, something that is fully expanded upon in the two appendices, which are a welcome addition to the paper.

As to the package, I am not overly fond of long function names; the authors should consider dropping the data source abbreviations from the function names in a future update/revision of the package. Likewise there is some inconsistency in the naming conventions used. For example there is the ’tpl_search()’ function to search The Plant List, but the equivalent function to search uBio is ’ubio_namebank()’. Whilst this may reflect specific aspects of terminology in use at the respective data stores, it does not help the user gain familiarity with the package by having them remember inconsistent function names.

One advantage of taxize is that it draws together a rich selection of data stores to query. A further suggestion for a future update would be to add generic functions that apply to a database connection/information object. The latter would describe the resource the user wants to search and any other required information, such as the API key, etc., for example:

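(A sketch of the sort of constructor I have in mind; these functions are hypothetical and are not part of the current package:)

  foo <- ubio(api_key = "your-key")   # connection/information object describing the uBio resource
  bar <- tpl()                        # equivalent object for The Plant List
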
The user function to search would then be ’search(foo, "Abies")’. Similar generically named functions would provide the primary user-interface, thus promoting a more consistent toolbox at the R level. This will become increasingly relevant as the scope of taxize increases through the addition of new data stores that the package can access.

In terms of presentation in the paper, I really don’t like the way the R code inputs merge with the R outputs. I know the author of knitr doesn’t like the demarcation of output being polluted by the R prompt, but I do find it difficult to parse the inputs/outputs you show because often there is no space between them, and users not familiar with R will have greater difficulties than I. Consider adding more conventional indications of R output, or physically separate input from output by breaking up the chunks of code to leave whitespace between the grey-background chunks. Relatedly, in one location I noticed something amiss with the layout: in the first code block at the top of page 5, the printed output looks wrong. I would expect the attributes to print on their own line and the data in the attribute to also be on its own separate line.

Note also, the inconsistency in the naming of the output object columns. For example, in the two code chunks shown in column 1 of page 4, the first block has an object printed with column names ’matched_name’ and ’data_source_title’, whilst camelCase is used in the outputs shown in the second block. As the package is revised and developed, consider this and other aspects of providing a consistent presentation to the user.

I was a little confused about the example in the section Resolve Taxonomic Names on page 4. Should the taxon name be “Helianthus annuus” or “Helianthus annus”? In the ‘mynames’ definition you include ‘Helianthus annuus’ in the character vector, but the output shown suggests that the submitted name was ‘Helianthus annus’ (one “u”) in the rows with rownames 9 and 10 in the output shown.

Other than that there were the following minor observations:

  • Abstract: replace “easy” with “simple” in “...fashion that’s easy...”, and move the details about availability and the URI to the end of the sentence.
  • Page 2, Column 1, Paragraph 2: You have “In addition, there is no one authoritative taxonomic names source...” , which is a little clumsy to read. How about “In addition, there is no one authoritative source of taxonomic names... ” ?
  • Pg 2, C1, P2-3: The abbreviated data sources are presented first (in paragraph 2) and subsequently defined (in para 3). Restructure this so that the abbreviated forms are explained upon first usage.
  • Pg 2, C2, P2: Most R packages are “in development” so I would drop the qualifier and reword the opening sentence of the paragraph.
  • Pg 2, C2, P6: Changing “and more can easily be added” to “and more can be easily added” seems to flow better.
  • Pg 5, paragraph above Figure 1: You refer to converting the object to an **ape** *phylo* object and then repeat essentially the same information in the next sentence. Remove the repetition.
  • Pg 6, C1: The header may be better as “Which taxa are children of the taxon of interest”.
  • Pg 6: In the section “IUCN status”, the term “we” is used to refer to both the authors and the user. This is confusing. Reserve “we” for reference to the authors and use something else (“a user” perhaps) for the other instances. Check this throughout the entire manuscript.
  • Pg 6, C2: in the paragraph immediately below the ‘grep()’ for “RAG1”, two consecutive sentences begin with “However”.
  • Pg 7: The first sentence of “Aggregating data....” reads “In biology, one can asks questions...”. It should be “one asks” or “one can ask”.
  • Pg 7, Conclusions: The first sentence reads “information is increasingly sought out by biologists” . I would drop “out” as “sought” is sufficient on its own.
  • Appendices: Should the two figures in the Appendices have a different reference to differentiate them from Figure 1 in the main body of the paper? As it stands, the paper has two Figure 1s, one on page 5 and a second on page 12 in the Appendix.
  • On Appendix Figure 2: The individual points are a little large. Consider reducing the plotting character size. I appreciate the effect you were going for with the transparency indicating density of observation through overplotting, but the effect is weakened by the size of the individual points.
  • Should the phylogenetic trees have some scale to them? I presume the height of the stems is an indication of phylogenetic distance but the figure is hard to calibrate without an associated scale. A quick look at Paradis (2012) Analysis of Phylogenetics and Evolution with R would suggest however that a scale is not consistently applied to these trees. I am happy to be guided by the authors as they will be more familiar with the conventions than I.

Hydbring and Badalian-Very summarize in this review the current status of the potential development of clinical applications based on miRNA biology. The article gives an interesting historical and scientific perspective on a field that has only recently boomed, focusing mostly on the two main products in the pipelines of several biotech companies (in Europe and the USA) that work with miRNA-based agents for disease diagnostics and therapeutics. Interestingly, not only are the specific agents being produced mentioned, but clever insights into the important cellular pathways regulated by key miRNAs are also briefly discussed.

Minor points to consider in subsequent versions:

  • Page 2; paragraph ‘Genomic location and transcription of microRNAs’ : the concept of miRNA clusters and precursors could be a bit better explained.
  • Page 2; paragraph ‘Genomic location and transcription of microRNAs’ : when discussing the paper by the laboratory of Richard Young (reference 16); I think it is important to mention that that particular study refers to stem cells.
  • Page 2; paragraph ‘Processing of microRNAs’ : “Argonate” should be replaced by “Argonaute”.
  • Page 3; paragraph ‘MicroRNAs in disease diagnostics’ : are miR-15a and 16-1 two different miRNAs? I suggest mentioning them as: miR-15a and miR-16-1 and not using a slash sign (/) between them.
  • Page 4; paragraph ‘Circulating microRNAs’ : I am a bit bothered by the description of multiple sclerosis (MS) only as an autoimmune disease. Without being an expert in the field, I believe that there are other hypotheses related to the etiology of MS.
  • Page 5; paragraph ‘Clinical microRNA diagnostics’ : Does ‘hsa’ in hsa-miR-205 mean something?
  • Page 5; paragraph ‘Clinical microRNA diagnostics’ : the authors mention the company Asuragen, Austin, TX, USA but they do not really say anything about their products. I suggest to either remove the reference to that company or to include their current pipeline efforts.
  • Page 6; paragraph ‘MicroRNAs in therapeutics’ : in the first paragraph the authors suggest that miRNAs-based therapeutics should be able to be applied with “minimal side-effects”. Since one miRNA can affect a whole gene program, I found this a bit counterintuitive; I was wondering if any data has been published to support that statement. Also, in the same paragraph, the authors compare miRNAs to protein inhibitors, which are described as more specific and/or selective. I think there are now good indications to think that protein inhibitors are not always that specific and/or selective and that such a property actually could be important for their evidenced therapeutic effects.
  • Page 6; paragraph ‘MicroRNAs in therapeutics’ : I think the concept of “antagomir” is an important one and could be better highlighted in the text.
  • Throughout the text (pages 3, 5, 6, and 7): I am a bit bothered by separating the word “miRNA” or “miRNAs” at the end of a sentence in the following way: “miR-NA” or “miR-NAs”. It is a bit confusing considering the particular nomenclature used for miRNAs. That was probably done during the formatting and editing step of the paper.
  • I was wondering if the authors could develop a bit further the general idea that in disease (and in particular in cancer) the expression levels of miRNAs are generally downregulated. Maybe some papers have been published about this phenomenon?

The authors describe their attempt to reproduce a study in which it was claimed that mild acid treatment was sufficient to reprogramme postnatal splenocytes from a mouse expressing GFP in the oct4 locus to pluripotent stem cells. The authors followed a protocol that has recently become available as a technical update of the original publication.

They report obtaining no pluripotent stem cells expressing GFP driven from the oct4 locus over the same time period of several days described in the original publication. They describe observing some green fluorescence, which they attributed to autofluorescence rather than GFP since it coincided with PI-positive dead cells. They confirmed the absence of oct4 expression by RT-PCR and also found no evidence for Nanog or Sox2, also markers of pluripotent stem cells.

The paper appears to be an authentic attempt to reproduce the original study, although the study might have had additional value with more controls: “failure to reproduce” studies need to be particularly well controlled.

Examples that could have been valuable to include are:

  • For the claim of autofluorescence: the emission spectrum of the samples would likely have shown a broad spectrum not coincident with that of GFP.
  • The reprogramming efficiency of postnatal mouse splenocytes using more conventional methods in the hands of the authors would have been useful as a comparison. Idem the lung fibroblasts.
  • There are no positive control samples (conventional mESC or miPSC) in the qPCR experiments for pluripotency markers. This would have indicated the biological sensitivity of the assay.
  • Although perhaps a sensitive issue, it might have been helpful if the authors had been able to obtain samples of cells (or their mRNA) from the original authors for simultaneous analysis.

In summary, this is a useful study as it is citable and confirms previous blog reports, but it could have been improved by more controls.

The article is well written, addresses a topical problem (the risk of development of valvulopathy after long-term cabergoline treatment in patients with macroprolactinoma) and provides evidence about the reversibility of valvular changes after timely discontinuation of DA treatment.

Title and abstract: The title is appropriate for the content of the article. The abstract is concise and accurately summarizes the essential information of the paper, although it would be better if the authors defined more precisely the anatomic specificity of the valvulopathy – mild mitral regurgitation.

Case report: The clinical case presentation is comprehensive and detailed but there are some minor points that should be clarified:

  • Please clarify the prolactin levels at diagnosis. In the Presentation section (line 3), “At presentation, prolactin level was found to be greater than 1000 ng/ml on diluted testing”, but in the section describing the laboratory evaluation at diagnosis (line 7), “Prolactin level was 55 ng/ml”. Was the difference due to the so-called “hook effect”?
  • Figure 1: In the text, the follow-up MR imaging is indicated to be “after 10 months of cabergoline treatment”. However, figures 1C and 1D represent MR images 2 years post-treatment. Please clarify.
  • Figure 2: Echocardiograms 2A and 2B are defined as baseline but actually they correspond to the follow-up echocardiographic assessment at the 4th year of cabergoline treatment. Did the patient undergo a baseline (prior to dopamine agonist treatment) echocardiographic evaluation? If he did not, it should be mentioned as study limitation in the Discussion section.
  • The mitral valve thickness was mentioned to be normal. Did the echographic examination visualize increased echogenicity (hyperechogenicity) of the mitral cusps?
  • How could you explain the decrease of LV ejection fraction (from 60-65% to 50-55%) after switching from cabergoline to bromocriptine treatment and respectively its increase to 62% after doubling the bromocriptine daily dose? Was LV function estimated always by the same method during the follow-up?
  • Final paragraph: The authors conclude that early discontinuation and management with bromocriptine may be effective in reversing cardiac valvular dysfunction. Even so, regular echocardiographic follow-up should be considered in patients who are expected to be on long-term, high-dose treatment with bromocriptine, given its partial 5-HT2b agonist activity.

This is an interesting topic: as the authors note, the way that communicators imagine their audiences will shape their output in significant ways. And I enjoyed what clearly has the potential to be a very rich data set. But I have some reservations about the adequacy of that data set as it currently stands, given the claims the authors make; about the relevance of the analytical framework(s) they draw upon; and about the extent to which their analysis has offered significant new insights ‐ by which I mean, I would be keen to see the authors push their discussion further. My suggestions are essentially that they extend the data set they are working with, to ensure that their analysis is both rigorous and generalisable, and re-consider the analytical frame they use. I will make some more concrete comments below.

With regard to the data: my feeling is that 14 interviews is a rather slim data set, and that this is heightened by the fact that they were all carried out in a single location, and recruited via snowball sampling and personal contacts. What efforts have the authors made to ensure that they are not speaking to a single, small, sub-community in the much wider category of science communicators? ‐ a case study, if you like, of a particular group of science communicators in North Carolina? In addition, though the authors reference grounded theory as a method for analysis, I got little sense of the data reaching saturation. The reliance on one-off quotes, and on the stories and interests of particular individuals, left me unsure as to how representative interview extracts were. I would therefore recommend either that the data set is extended by carrying out more interviews, in a wider variety of locations (e.g. other sites in the US), or that it is redeveloped as a case study of a particular local professional community. (Which would open up some fascinating questions ‐ how many of these people know each other? What spaces, online or offline, do they interact in, and do they share knowledge, for instance about their audiences? Are there certain touchstone events or publics they communally make reference to?)

As a more minor point with regard to the data set and what the authors want it to do, there were some inconsistencies as to how the study was framed. On p.2 they variously describe the purpose as to “understand the experiences and perspectives of science communicators” and the goals as identifying “the basic interests and value orientations attributed to lay audiences by science communicators”. Later, on p.5, they note that the “research is inductive and seeks to build theory rather than generalizable claims”, while in the Discussion they talk again about having identified communicators‘ “personal motivations” (p.12). There are a number of questions left hanging: is the purpose to understand communicator experiences ‐ in which case why focus on perceptions of audiences? Where is theory being built, and in what ways can this be mobilised in future work? The way that the study is framed and argued as a whole needs, I would suggest, to be clarified.

Relatedly, my sense is that some of this confusion is derived from what I find a rather busy analytical framework. I was not convinced of the value of combining inductive and deductive coding: if the ‘human value typology’ the authors use is ‘universal’, then what is added by open coding? Or, alternatively, why let their open coding, and their findings from this, be constrained by an additional, rather rigid, framework? The addition of the considerable literature on news values to the mix makes the discussion more confusing again. I would suggest that the authors either make much more clear the value of combining these different approaches ‐ building new theory outlining how they relate, and can be jointly mobilised in practice ‐ or fix on one. (My preference would be to focus on the findings from the open coding ‐ but that reflects my own disciplinary biases.)

A more minor analytical point: the authors note that their interviewees come from slightly different professions, communicate through different formats, have different levels of experience, and have different educational backgrounds ‐ but as far as I can see there is no comparative analysis based on this. Were there noticeable differences in the interview talk based on these categorisations? Or was the data set too small to identify any potential contrasts or themes? A note explaining this would be useful.

My final point concerns the potential that this data set has, particularly if it is extended and developed. I would like to encourage the authors to take their analysis further: at the moment, I was not particularly surprised by the ways in which the communicators referenced news values or imagined their audiences. But it seems to me that the analytical work is not yet complete. What does it mean that communicators imagine audience values and preferences in the way that they do ‐ who is included and excluded by these imaginations? One experiment might be to consider what ‘ideal type’ publics are created in the communicators’ talk. What are the characteristics of the audiences constructed in the interviews and ‐ presumably ‐ in the communicative products of interviewees? What would these people look like? There are also some tantalizing hints in the Discussion that are not really discussed in the Findings ‐ for instance, the way in which communicators’ personal motivations may combine with their perceptions of audiences to shape their products. How does this happen? These are, of course, suggestions. But my wider point is that the authors need to show more clearly what is original and useful in their findings ‐ what it is, exactly, that will be important to other scholars in the field.

I hope my comments make sense ‐ please do not hesitate to contact me if not.

This is an interesting article and piece of software. I think it contributes a further alternative for easily visualizing high-dimensional data on the web. It’s simple and easy to embed into other web frameworks or applications.

a) About the software

  • CSV format. It was hard to guess the expected format. The authors need to add a syntax description of the CSV format to the help page.
  • Simple HTML example. It would be easier to test HeatmapViewer (HmV) if you added a simple downloadable example file with the minimum required HTML/JavaScript to set up an HmV (without all the CSV import code).
  • Color scale. HmV only implements a simple three-point linear color scale. For me this is the major weakness of HmV. It would be very convenient if, in the next HmV release, the user could supply as a parameter a function that manages the score-to-color conversion.

b) About the paper

  • http://www.broadinstitute.org/gsea (desktop)
  • http://jheatmap.github.io/jheatmap/ (website)
  • http://www.gitools.org/ (desktop)
  • http://blog.nextgenetics.net/demo/entry0044/ (website)
  • http://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram2d.html (python)
  • http://matplotlib.org/api/pyplot_api.html (python)
  • Predicted protein mutability landscape: The authors say: “Without using a tool such as the HeatmapViewer, we could hardly obtain an overview of the protein mutability landscape”. This paragraph seems to suggest that you can explore the data with HmV. I think that HmV is a good tool to report your data, but not to explore it.
  • Conclusions: The authors say: “... provides a new, powerful way to generate and display matrix data in web presentations and in publications.” To use heat maps in web presentations and publications is nothing new. I think that HmV makes it easier and user-friendly, but it’s not new.

This article addresses the links between habitat condition and an endangered bird species in an important forest reserve (ASF) in eastern Kenya. It addresses an important topic, especially given ongoing anthropogenic pressures on this and similar types of forest reserves in eastern Kenya and throughout the tropics. Despite the rather small temporal and spatial extent of the study, it should make an important contribution to bird and forest conservation. There are, however, a number of issues with the methods and analysis that need to be clarified or addressed; furthermore, some of the conclusions overreach the data collected, while other important results are given less emphasis than they warrant. Below are more specific comments by section:

The conclusion that human-driven tree removal is an important contributor to the degradation of ASF is reasonable given the data reported in the article. Elephant damage, while likely a very big contributor to habitat modification in ASF, was not the focus of the study (the authors state clearly in the Discussion that elephant damage was not systematically quantified, and thus no data were analyzed) ‐ and thus should only be mentioned in passing here, if at all.

More information about the life history ecology of A. sokokensis would provide welcome context here. A bit more detail about breeding sites as well as dispersal behavior etc. would be helpful – and especially why these and other aspects render the Pipit a good indicator species/proxy for habitat condition. This could be revisited in the Discussion as links are made between habitat conditions and occurrence of the bird (where you discuss the underlying mechanisms for why it thrives in some parts of ASF and not others, and why its abundance correlates strongly with some types of disturbance and not others). Again, you reference other studies that have explored other species in ASF and forest disturbance, but do not really explicitly state why the Pipit is a particularly important indicator of forest condition.

  • Bird Survey: As described, all sightings and calls were recorded and incorporated into distance analysis – but it is not clear here whether or not distances to both auditory and visual encounters were measured the same way (i.e., with the rangefinder). Please clarify.
  • Floor litter sampling: It is not clear here whether litter cover was recorded as a continuous or a categorical variable (percentage). If categorical, please describe the percentage “categories” used.
  • Mean litter depth graph (Figure 2) and accompanying text report the means and SDs but no post-hoc comparison test (e.g., Tukey HSD) – need to report the stats on which differences were/were not significant.
  • Figure 3 – you indicate litter depth was a better predictor of bird abundance than litter cover, but the r-squared is higher for litter cover. Need to clarify (and also indicate why you chose only to show depth values in Figure 3).
  • The linear equation can be put in Figure 3 caption (not necessary to include in text).
  • Figure 4 – stats aren’t presented here; also, the caption states that tree loss and leaf litter are inversely correlated – this might be taken to mean, given discussion (below) about pruning, that there could be a poaching threshold below which poaching may pay dividends to Pipits (and above which Pipits are negatively affected). This warrants further exploration/elaboration.
  • The pruning result is arguably the most important one here – this suggests an intriguing trade-off between poaching and bird conservation (in particular, the suggestion that pruning by poachers may bolster Pipit populations – or at the very least mitigate against other aspects of habitat degradation). Worth highlighting this more in Discussion.
  • Last sentence on p. 7 suggests causality (“That is because…”) – but your data only support correlation (one can imagine that there may have been other extrinsic or intrinsic drivers of population decline).
  • P. 8: discussion of classification of habitat types in ASF is certainly interesting, but could be made much more succinct in keeping with focus of this paper.
  • P. 9, top: the first paragraph could be expanded – as noted before, the tradeoff between poaching/pruning and Pipit abundance is worth exploring in more depth. Could your results be taken as a prescription for understory pruning as a conservation tool for the Sokoke Pipit or other threatened species? More detail here would be welcome (and also in the Conclusion); the material in the subsequent paragraph about Pipit foraging behavior and its specific relationship to understory vegetation at varying heights could be incorporated into this discussion. Is there any info about optimal perch height for foraging or for flying through the understory? Linking to results of other studies in ASF, is there potential for positive correlations with optimal habitat conditions for the other important bird species in ASF in order to make more general conclusions about management?

Bierbach and co-authors investigated the evolution of the audience effect in live-bearing fishes by applying a comparative method. They specifically focused on the hypothesis that sperm competition risk, arising from male mate choice copying, and avoidance of aggressive interactions play a key role in driving the evolution of audience-induced changes in male mate choice behavior. The authors found support for their hypothesis of an influence of SCR on the evolution of deceptive behavior, as their findings at the species level showed a positive correlation between mean sexual activity and the occurrence of deceptive behavior. Moreover, they found a positive correlation between mean aggressiveness and sexual activity, but they did not detect a relationship between aggressiveness and audience effects.

The manuscript is certainly well written and attractive, but I have some major concerns about the data analyses that prevent me from endorsing its acceptance at the present stage.

I see three main problems with the statistics that could have led to potentially wrong results and, thus, to completely misleading conclusions.

  • First of all, the Authors cannot run an ANCOVA in which there is a significant interaction between the factor and the covariate (Table 2a). Indeed, when the assumption of common slopes is violated (as in their case), all other significant terms are meaningless. They might want to consider alternative statistical procedures, e.g., the Johnson–Neyman method.
  • Second, the Authors cannot retain in the model a non-significant interaction term, as this may affect the estimates for the factors (Table 2d). They need to remove the species × treatment interaction (as they did for other non-significant terms; see top left of the same page 7).
  • The third problem I see regards all the GLMs in which species are compared. The Authors entered ‘species’ as a fixed factor when species is clearly a random factor. Entering species as a fixed factor has the effect of badly inflating the denominator degrees of freedom, making the Authors’ conclusions far too permissive. They should, instead, use mixed LMs, in which species is the random factor. They should also take care that the degrees of freedom are approximately equal to the number of species (not the number of trials). To do so, they can enter the interaction between treatment and species as a random factor (see the sketch below).
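
A sketch of the kind of specification I mean, in R (lme4 syntax; the variable names here are placeholders rather than the authors' actual ones, and glmer() would be used instead for a non-Gaussian response):

  library(lme4)
  # treatment as a fixed factor; species, and the species-by-treatment interaction, as random factors
  m <- lmer(response ~ treatment + (1 | species) + (1 | species:treatment), data = dat)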

Data need to be re-analyzed relying on the proper statistical procedures to confirm results and conclusions.

A more theoretical objection to the authors’ interpretation of the results (supposing that the results are confirmed by the new analyses) could emerge from the idea that male success in mating with the preferred female may reduce the probability of the female’s immediate re-mating, and thus reduce the risk of sperm competition in the short term. As a consequence, it may not be beneficial to significantly increase the risk of losing a high-quality, already inseminated female for a cost that will not be paid with certainty. The authors might want to consider this for discussion as well.

Lastly, I think that the scenario generated from comparative studies at the species level may be explained by phylogenetic factors other than sexual selection. Only the inclusion of phylogeny in the data analyses, which allows the shared history among species to be accounted for, can lead to unequivocal adaptive explanations for the observed patterns. I see the difficulty in doing this with few species, as is the case in the present study, but I would suggest the Authors also consider this future perspective. Moreover, a phylogenetic comparative study would be aided by the recent development of a well-resolved phylogenetic tree for the genus Poecilia (Meredith 2011).

Page 3: the authors should specify that part of the data on male aggressiveness (3 species from Table 1) also comes from previous studies, as they do for the data on deceptive male mating behavior.

Page 5: since the data on mate choice come from other studies, is it necessary to report a detailed description of the methods for this section? Maybe the authors could refer to the already published methods and only give a brief additional description.

Page 6: how do the authors explain the complete absence of aggressive displays between the focal male and the audience male during the mate choice experiments? This sounds curious considering that, in all the examined species, aggressive behaviors and dominance establishment are always observed during dyadic encounters.

In their response to my previous comments, the authors have clarified that only the data from the “Experimental phase” were used to calculate prediction accuracy. However, if I now understand the analysis procedure correctly, there are serious concerns with the approach adopted.

First, let me state what I now understand the analysis procedure to be:

  • For each subject the PD values across the 20 trials were converted to z-scores.
  • For each stimulus, the mean z-score was calculated.
  • The sign of the mean z-score for each stimulus was used to make predictions.
  • For each of the 20 trials, if the sign of the z-score on that trial was the same as for the mean z-score for that stimulus, a hit (correct prediction) was assigned. In contrast, if the sign of the z-score on that trial was the opposite as for the mean z-score for that stimulus, a miss (incorrect prediction) was assigned.
  • For each stimulus the total hits and misses were calculated.
  • Average hits (correct prediction) for each stimulus was calculated across subjects.

If this is a correct description of the procedure, the problem is that the same data were used to determine the sign of the z-score that would be associated with a correct prediction and to determine the actual correct predictions. This will effectively guarantee a correct prediction rate above chance.

To check if this is true, I quickly generated random data and used the analysis procedure as laid out above (see MATLAB code below). Across 10,000 iterations of 100 random subjects, the average “prediction” accuracy was ~57% for each stimulus (standard deviation, 1.1%), remarkably similar to the values reported by the authors in their two studies. In this simulation, I assumed that all subjects contributed 20 trials, but in the actual data analyzed in the study, some subjects contributed fewer than 20 trials due to artifacts in the pupil measurements.

If the above description of the analysis procedure is correct, then I think the authors have provided no evidence to support pupil dilation prediction of random events, with the results reflecting circularity in the analysis procedure.

However, if the above description of the procedure is incorrect, the authors need to clarify exactly what the analysis procedure was, perhaps by providing their analysis scripts.
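
A sketch of this simulation (it follows the procedure laid out above; the exact accuracy obtained will depend on details such as the within-subject z-scoring and any trial exclusions):

  % Random "pupil dilation" data for 100 subjects x 20 trials, two stimuli, 10,000 iterations.
  nIter = 10000; nSubj = 100; nTrials = 20;
  acc = zeros(nIter, 1);
  for it = 1:nIter
      hits = 0; total = 0;
      for s = 1:nSubj
          pd = randn(nTrials, 1);                   % pure noise in place of PD values
          z  = (pd - mean(pd)) / std(pd);           % step 1: z-scores across the 20 trials
          stim = repmat([1; 2], nTrials/2, 1);      % two stimuli alternating across trials
          for k = 1:2
              zk = z(stim == k);
              predSign = sign(mean(zk));            % steps 2-3: sign of the mean z-score for the stimulus
              hits = hits + sum(sign(zk) == predSign);  % step 4: trial counted as a hit if the signs agree
              total = total + numel(zk);
          end
      end
      acc(it) = hits / total;                       % steps 5-6: proportion of correct "predictions"
  end
  fprintf('mean accuracy = %.3f (sd = %.3f)\n', mean(acc), std(acc));
  % The accuracy comes out well above 50% even though the data are pure noise,
  % illustrating the circularity described above.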

I think this paper is excellent and an important addition to the literature. I really like the conceptualization of a self-replicating cycle, as it illustrates the concept that the “problem” starts with the neuron, i.e., due to one or more of a variety of insults, the neuron is negatively impacted and releases H1, which in turn activates microglia with overexpression of cytokines that may, when limited, foster repair but, when activation becomes chronic (as is demonstrated here with the potential of cyclic H1 release), facilitate neurotoxicity. I hope the authors intend to measure cytokine expression soon, especially IL-1 and TNF in both astrocytes and microglia, and S100B in astrocytes.

In more detail, Gilthorpe and colleagues provide novel experimental data that demonstrate a new role for a specific histone protein—the linker histone, H1—in neurodegeneration. This study, which was originally designed to identify axonal chemorepellents, actually revealed a previously unknown role for H1, as well as other novel and thought-provoking results. Fortuitously, as sometimes happens, the authors had a pleasant surprise: their results set some old dogmas on their respective ears and opened up new avenues of approach for studying the role of histones in the self-amplification of neurodegenerative cycles. In particular, they show that H1 is not just a nice little partner of nuclear DNA, as previously thought. H1 is released from ‘damaged’ (or leaky) neurons, kills adjacent healthy neurons, and promotes a proinflammatory profile in both microglia and astrocytes.

Interestingly, the authors’ conceptualization of a damaged neuron → H1 release → healthy neuron killing cycle does not take into account the H1-mediated proinflammatory glial response. This facet of the study opens for these investigators a new avenue they may wish to follow: the role of H1 in stimulation of neuroinflammation with overexpression of cytokines. This is interesting, as neuronal injury has been shown to set in motion an acute phase response that activates glia, increases their expression of cytokines (interleukin-1 and S100B), which, in turn, induce neurons to produce excess Alzheimer-related proteins such as βAPP and ApoE (favoring formation of mature Aβ/ApoE plaques), activated MAPK-p38 and hyperphosphorylated tau (favoring formation of neurofibrillary tangles), and α synuclein (favoring formation of Lewy bodies). To date, the neuronal response shown responsible for stimulating glia is neuronal stress related release of sAPP, but these H1 results from Gilthorpe and colleagues may contribute to or exacerbate the role of sAPP.



Peer Review: Research Proposal Memo


A large component of Engl 301 was learning how to deliver effective feedback on a peer’s work. To do this effectively, the writer must maintain a YOU attitude throughout and deliver both positive and constructive criticism tactfully.

Here is an example of a peer review I completed for a fellow classmate. The original document was a research proposal memo in which students were asked to outline their research process and audience for a final formal report. This review provided comprehensive feedback with specific examples to help the student improve their work, while highlighting the elements that were done beautifully.

A PDF version of that document can be found here.

To: Jonah Hamilton
From: Samantha Langley
Date: October 10, 2016
Subject: Peer review of formal report proposal

First impression
After a preliminary reading, it is obvious that you understand the daily operations of the Wine Research Center (WRC) well enough to identify a need in this facility. You have identified some major areas of concern resulting from the lack of laboratory oversight. However, the purpose of your report differs between the Subject statement and the Scope statement, which identify two different motivations: stating that “you will create this position” versus asking “is creating this position a good idea?” Your desire to determine feasibility is also restated in the concluding statements of this proposal. By keeping the purpose of this report consistent from start to finish, you will avoid confusing the reader.

Layout and Design
As a reader, I find this document very clear about what each section covers and how the sections flow together. The eye can easily scan the page and does not get overwhelmed by large blocks of text.

There is some conflict between your Subject statement and the information in your Scope and Conclusion sections. In your Subject you state that you are proposing the creation of a lab manager position. However, in your Scope and Conclusion sections you comment on determining the feasibility of creating this position. Determining the feasibility of something is different from proposing to create something. The rest of your proposal supports your desire to determine whether creating a lab manager position is a good idea, that is, the feasibility of creating this position.

Introduction
What is involved in a research lab and the importance of equipment upkeep are stated well in this section. What could be clarified a bit more is the structure of the WRC. For example, when you describe that there are three individually operating laboratories in the WRC and then state “Currently there is no lab manager”, do you mean that the individual labs do not have managers or that the WRC does not have a lab manager overseeing all three labs?

In the final sentence of this section you clearly point out some very important benefits of creating this position.  This could be expanded to include all the points you mention later in this report such as the identification of safety hazards and maintenance of equipment.

Statement of problem
Beginning sentences with “because” creates an informal tone. In this case, the start of the first sentence contains some unnecessary words that could be eliminated. It is already stated in the introduction that there is no laboratory manager, so this does not need to be restated. Instead, you could begin by saying “Currently, each researcher …”

As someone who does not have a lot of experience in research labs, I am unclear about what you mean when you say “Supply orders are placed separately”. Do you mean that they are placed separately by each lab or by each researcher? Are there multiple researchers in a single lab at the WRC?

Proposed Solution
You have done an exceptional job keeping this section clear and concise. Well done!

Scope
This section begins by stating that you will be looking to determine the feasibility of creating this position. As described earlier, this should be reflected in the Subject statement. The first area of inquiry should be expanded for clarity. Who wants a new lab manager? Are you looking to determine if the current WRC researchers desire a new level of management?

Methods
It sounds like you will have plenty of resources for data on this project. Some confusion does arise when you say current lab managers will be interviewed, as it was previously stated that there are none. Do you mean managers from other laboratories not related to the WRC?

Qualifications/Conclusion
Your previous experience is certainly an asset for this project. Having experience in labs with more levels of management will provide a great comparison to what you are currently experiencing. These other labs may also be a great source of information for you!

Grammar and Spelling
There are minor grammar and spelling errors worth mentioning here.

  • “ Furthermore, knowledge of the complex procedures and the required equipment is imperative ”

– This sentence needs some clarity. It is unclear whether you mean that knowing the complex procedures of the equipment is imperative, or that knowing both the complex laboratory procedures and each piece of equipment in the lab is imperative.

– There are a few areas where sentences are extended by using the word “and” twice. This ends up being cumbersome for the reader and occurs in places that could easily be simplified by a change of wording or the addition of a period.

  • For example, “ Currently there is no lab manager and consequently each researcher is responsible for the upkeep and operation of their respective lab.”
  • The first part of this sentence could be reworded to something simpler, such as “As a result of having no manager, each researcher is responsible…”

– Another example comes at the end of your report, “ Between completing experiments, writing grant applications, and general lab upkeep time is at a premium for WRC employees and action is needed ”

  • This sentence is difficult to follow, and why “action is needed” is not clear.

– Don’t forget to use commas. Refer to the last two sentences of your Statement of Problem section to see where these commas are needed.

– In the Proposed Solution section “insuring” should be “ensuring”.

  • Also in this section “researches” should be “researchers”

General Comments
Overall, you have done an excellent job communicating why the creation of this position would be beneficial for the WRC. After clarifying the objective of this report and some details regarding the WRC, I feel the reader will have a clear understanding of what your final report will entail. I hope these comments will be useful in your editing process.


The Savvy Scientist


My Complete Guide to Academic Peer Review: Example Comments & How to Make Paper Revisions


Once you’ve submitted your paper to an academic journal you’re in the nerve-racking position of waiting to hear back about the fate of your work. In this post we’ll cover everything from potential responses you could receive from the editor and example peer review comments through to how to submit revisions.

My first first-author paper was reviewed by five (yes, 5!) reviewers and since then I’ve published several other papers, so now I want to share the insights I’ve gained, which will hopefully help you out!

This post is part of my series to help with writing and publishing your first academic journal paper. You can find the whole series here: Writing an academic journal paper .

The Peer Review Process

An overview of the academic journal peer review process.

When you submit a paper to a journal, the first thing that will happen is one of the editorial team will do an initial assessment of whether or not the article is of interest. They may decide for a number of reasons that the article isn’t suitable for the journal and may reject the submission before even sending it out to reviewers.

If this happens, hopefully they’ll have let you know quickly so that you can move on and make a start targeting a different journal instead.

Handy way to check the status – Sign in to the journal’s submission website and have a look at the status of your journal article online. If you can see that the article is under review then you’ve passed that first hurdle!

When your paper is under peer review, the journal will have set out a framework to help the reviewers assess your work. Generally they’ll be deciding whether the work is to a high enough standard.

Interested in reading about what reviewers are looking for? Check out my post on being a reviewer for the first time. Peer-Reviewing Journal Articles: Should You Do It? Sharing What I Learned From My First Experiences .

Once the reviewers have made their assessments, they’ll return their comments and suggestions to the editor who will then decide how the article should proceed.

How Many People Review Each Paper?

The editor ideally wants a clear decision from the reviewers as to whether the paper should be accepted or rejected. If there is no consensus among the reviewers then the editor may send your paper out to more reviewers to better judge whether or not to accept the paper.

If you’ve got a lot of reviewers on your paper, it isn’t necessarily because the reviewers disagreed about accepting it.

You can also end up with lots of reviewers in the following circumstance:

  • The editor asks a certain academic to review the paper but doesn’t get a response from them
  • The editor asks another academic to step in
  • The initial reviewer then responds

Next thing you know your work is being scrutinised by extra pairs of eyes!

As mentioned in the intro, my first paper ended up with five reviewers!

Potential Journal Responses

Assuming that the paper passes the editor’s initial evaluation and is sent out for peer-review, here are the potential decisions you may receive:

  • Reject the paper. Sadly the editor and reviewers decided against publishing your work. Hopefully they’ll have included feedback which you can incorporate into your submission to another journal. I’ve had some rejections and the reviewer comments were genuinely useful.
  • Accept the paper with major revisions . Good news: with some more work your paper could get published. If you make all the changes that the reviewers suggest, and they’re happy with your responses, then it should get accepted. Some people see major revisions as a disappointment but it doesn’t have to be.
  • Accept the paper with minor revisions. This is like getting a major revisions response but better! Generally minor revisions can be addressed quickly and often come down to clarifying things for the reviewers: rewording, addressing minor concerns etc and don’t require any more experiments or analysis. You stand a really good chance of getting the paper published if you’ve been given a minor revisions result.
  • Accept the paper with no revisions . I’m not sure that this ever really happens, but it is potentially possible if the reviewers are already completely happy with your paper!

Keen to know more about academic publishing? My series on publishing is now available as a free eBook. It includes my experiences being a peer reviewer.


Example Peer Review Comments & Addressing Reviewer Feedback

If your paper has been accepted but requires revisions, the editor will forward to you the comments and concerns that the reviewers raised. You’ll have to address these points so that the reviewers are satisfied your work is of a publishable standard.

It is extremely important to take this stage seriously. If you don’t do a thorough job then the reviewers won’t recommend that your paper is accepted for publication!

You’ll have to put together a resubmission with your co-authors and there are two crucial things you must do:

  • Make revisions to your manuscript based on the reviewer comments
  • Reply to the reviewers, telling them the changes you’ve made and potentially changes you’ve not made in instances where you disagree with them. Read on to see some example peer review comments and how I replied!

Before making any changes to your actual paper, I suggest having a thorough read through the reviewer comments.

Once you’ve read through the comments you might be keen to dive straight in and make the changes in your paper. Instead, I actually suggest firstly drafting your reply to the reviewers.

Why start with the reply to reviewers? Well in a way it is actually potentially more important than the changes you’re making in the manuscript.

Imagine when a reviewer receives your response to their comments: you want them to be able to read your reply document and be satisfied that their queries have largely been addressed without even having to open the updated draft of your manuscript. If you do a good job with the replies, the reviewers will be better placed to recommend the paper be accepted!

By starting with your reply to the reviewers you’ll also clarify for yourself what changes actually have to be made to the paper.

So let’s now cover how to reply to the reviewers.

1. Replying to Journal Reviewers

It is so important to make sure you do a solid job addressing your reviewers’ feedback in your reply document. If you leave anything unanswered you’re asking for trouble, which in this case means either a rejection or another round of revisions: though some journals only give you one shot! Therefore make sure you’re thorough, not just with making the changes but demonstrating the changes in your replies.

It’s no good putting in the work to revise your paper but not evidence it in your reply to the reviewers!

There may be points that reviewers raise which don’t appear to necessitate making changes to your manuscript, but this is rarely the case. Even for comments or concerns they raise which are already addressed in the paper, clearly those areas could be clarified or highlighted to ensure that future readers don’t get confused.

How to Reply to Journal Reviewers

Some journals will request a certain format for how you should structure a reply to the reviewers. If so, this should be included in the email you receive from the journal’s editor. If there are no specific requirements, here is what I do:

  • Copy and paste all replies into a document.
  • Separate out each point they raise onto a separate line. Often they’ll already be nicely numbered but sometimes they actually still raise separate issues in one block of text. I suggest separating it all out so that each query is addressed separately.
  • Form your reply for each point that they raise. I start by just jotting down notes for roughly how I’ll respond. Once I’m happy with the key message I’ll write it up into a scripted reply.
  • Finally, go through and format it nicely and include line number references for the changes you’ve made in the manuscript.

By the end you’ll have a document that looks something like:

Reviewer 1
Point 1: [Quote the reviewer’s comment]
Response 1: [Address point 1 and say what revisions you’ve made to the paper]
Point 2: [Quote the reviewer’s comment]
Response 2: [Address point 2 and say what revisions you’ve made to the paper]
Then repeat this for all comments by all reviewers!

What To Actually Include In Your Reply To Reviewers

For every single point raised by the reviewers, you should do the following:

  • Address their concern: Do you agree or disagree with the reviewer’s comment? Either way, make your position clear and justify any differences of opinion. If the reviewer wants more clarity on an issue, provide it. It is really important that you actually address their concerns in your reply. Don’t just say “Thanks, we’ve changed the text”. Actually include everything they want to know in your reply. Yes this means you’ll be repeating things between your reply and the revisions to the paper but that’s fine.
  • Reference changes to your manuscript in your reply. Once you’ve answered the reviewer’s question, you must show that you’re actually using this feedback to revise the manuscript. The best way to do this is to refer to where the changes have been made throughout the text. I personally do this by including line references. Make sure you save this step until the end, once you’ve finished making changes!

Example Peer Review Comments & Author Replies

In order to understand how this works in practice I’d suggest reading through a few real-life example peer review comments and replies.

The good news is that published papers often now include peer-review records, including the reviewer comments and authors’ replies. So here are two feedback examples from my own papers:

Example Peer Review: Paper 1

Quantifying 3D Strain in Scaffold Implants for Regenerative Medicine, J. Clark et al. 2020 – Available here

This paper was reviewed by two academics and was given major revisions. The journal gave us only 10 days to get them done, which was a bit stressful!

  • Reviewer Comments
  • My reply to Reviewer 1
  • My reply to Reviewer 2

One round of reviews wasn’t enough for Reviewer 2…

  • My reply to Reviewer 2 – ROUND 2

Thankfully it was accepted after the second round of review, and actually ended up being selected for this accolade, whatever most notable means?!

Nice to see our recent paper highlighted as one of the most notable articles, great start to the week! Thanks @Materials_mdpi 😀 #openaccess & available here: https://t.co/AKWLcyUtpC @ICBiomechanics @julianrjones @saman_tavana pic.twitter.com/ciOX2vftVL — Jeff Clark (@savvy_scientist) December 7, 2020

Example Peer Review: Paper 2

Exploratory Full-Field Mechanical Analysis across the Osteochondral Tissue—Biomaterial Interface in an Ovine Model, J. Clark et al. 2020 – Available here

This paper was reviewed by three academics and was given minor revisions.

  • My reply to Reviewer 3

I’m pleased to say it was accepted after the first round of revisions 🙂

Things To Be Aware Of When Replying To Peer Review Comments

  • Generally, try to make a revision to your paper for every comment. No matter what the reviewer’s comment is, you can probably make a change to the paper which will improve your manuscript. For example, if the reviewer seems confused about something, improve the clarity in your paper. If you disagree with the reviewer, include better justification for your choices in the paper. It is far more favourable to take on board the reviewer’s feedback and act on it with actual changes to your draft.
  • Organise your responses. Sometimes journals will request the reply to each reviewer is sent in a separate document. Unless they ask for it this way I stick them all together in one document with subheadings eg “Reviewer 1” etc.
  • Make sure you address each and every question. If you dodge anything then the reviewer will have a valid reason to reject your resubmission. You don’t need to agree with them on every point but you do need to justify your position.
  • Be courteous. No need to go overboard with compliments but stay polite as reviewers are providing constructive feedback. I like to add in “We thank the reviewer for their suggestion” every so often where it genuinely warrants it. Remember that written language doesn’t always carry tone very well, so rather than risk coming off as abrasive if I don’t agree with the reviewer’s suggestion I’d rather be generous with friendliness throughout the reply.

2. How to Make Revisions To Your Paper

Once you’ve drafted your replies to the reviewers, you’ve actually done a lot of the groundwork for making changes to the paper. Remember, you are making changes to the paper based on the reviewer comments, so you should regularly refer back to the comments to ensure you’re not getting sidetracked.

Reviewers could request modifications to any part of your paper. You may need to collect more data, do more analysis, reformat some figures, add more references or discussion, or make any number of other revisions! I can’t cover every situation, but here is some general advice:

  • Use tracked-changes. This is so important. The editor and reviewers need to be able to see every single change you’ve made compared to your first submission. Sometimes the journal will want a clean copy too but always start with tracked-changes enabled then just save a clean copy afterwards.
  • Be thorough . Try to not leave any opportunity for the reviewers to not recommend your paper to be published. Any chance you have to satisfy their concerns, take it. For example if the reviewers are concerned about sample size and you have the means to include other experiments, consider doing so. If they want to see more justification or references, be thorough. To be clear again, this doesn’t necessarily mean making changes you don’t believe in. If you don’t want to make a change, you can justify your position to the reviewers. Either way, be thorough.
  • Use your reply to the reviewers as a guide. In your draft reply to the reviewers you should have already included a lot of details which can be incorporated into the text. If they raised a concern, you should be able to go and find references which address the concern. This reference should appear both in your reply and in the manuscript. As mentioned above I always suggest starting with the reply, then simply adding these details to your manuscript once you know what needs doing.

Putting Together Your Paper Revision Submission

  • Once you’ve drafted your reply to the reviewers and revised manuscript, make sure to give sufficient time for your co-authors to give feedback. Also give yourself time afterwards to make changes based off of their feedback. I ideally give a week for the feedback and another few days to make the changes.
  • When you’re satisfied that you’ve addressed the reviewer comments, you can think about submitting it. The journal may ask for another letter to the editor; if not, I simply add something like the following to the top of the reply to reviewers:
“Dear [Editor], We are grateful to the reviewer for their positive and constructive comments that have led to an improved manuscript.  Here, we address their concerns/suggestions and have tracked changes throughout the revised manuscript.”

Once you’re ready to submit:

  • Double check that you’ve done everything that the editor requested in their email
  • Double check that the file names and formats are as required
  • Triple check you’ve addressed the reviewer comments adequately
  • Click submit and bask in relief!

You won’t always get the paper accepted, but if you’re thorough and present your revisions clearly then you’ll put yourself in a really good position. Remember to try as hard as possible to satisfy the reviewers’ concerns to minimise any opportunity for them to not accept your revisions!

Best of luck!

I really hope that this post has been useful to you and that the example peer review section has given you some ideas for how to respond. I know how daunting it can be to reply to reviewers, and it is really important to try to do a good job and give yourself the best chances of success. If you’d like to read other posts in my academic publishing series you can find them here:

Blog post series: Writing an academic journal paper

Subscribe below to stay up to date with new posts in the academic publishing series and other PhD content.


Organizing Your Social Sciences Research Assignments

Writing a Research Proposal

The goal of a research proposal is twofold: to present and justify the need to study a research problem and to present the practical ways in which the proposed study should be conducted. The design elements and procedures for conducting research are governed by standards of the predominant discipline in which the problem resides; therefore, the guidelines for research proposals are more exacting and less formal than a general project proposal. Research proposals contain extensive literature reviews. They must provide persuasive evidence that a need exists for the proposed study. In addition to providing a rationale, a proposal describes detailed methodology for conducting the research consistent with requirements of the professional or academic field and a statement on anticipated outcomes and benefits derived from the study's completion.

Krathwohl, David R. How to Prepare a Dissertation Proposal: Suggestions for Students in Education and the Social and Behavioral Sciences . Syracuse, NY: Syracuse University Press, 2005.

How to Approach Writing a Research Proposal

Your professor may assign the task of writing a research proposal for the following reasons:

  • Develop your skills in thinking about and designing a comprehensive research study;
  • Learn how to conduct a comprehensive review of the literature to determine that the research problem has not been adequately addressed or has been answered ineffectively and, in so doing, become better at locating pertinent scholarship related to your topic;
  • Improve your general research and writing skills;
  • Practice identifying the logical steps that must be taken to accomplish one's research goals;
  • Critically review, examine, and consider the use of different methods for gathering and analyzing data related to the research problem; and,
  • Nurture a sense of inquisitiveness within yourself and to help see yourself as an active participant in the process of conducting scholarly research.

A proposal should contain all the key elements involved in designing a completed research study, with sufficient information that allows readers to assess the validity and usefulness of your proposed study. The only elements missing from a research proposal are the findings of the study and your analysis of those findings. Finally, an effective proposal is judged on the quality of your writing and, therefore, it is important that your proposal is coherent, clear, and compelling.

Regardless of the research problem you are investigating and the methodology you choose, all research proposals must address the following questions:

  • What do you plan to accomplish? Be clear and succinct in defining the research problem and what it is you are proposing to investigate.
  • Why do you want to do the research? In addition to detailing your research design, you also must conduct a thorough review of the literature and provide convincing evidence that it is a topic worthy of in-depth study. A successful research proposal must answer the "So What?" question.
  • How are you going to conduct the research? Be sure that what you propose is doable. If you're having difficulty formulating a research problem to propose investigating, go here for strategies in developing a problem to study.

Common Mistakes to Avoid

  • Failure to be concise . A research proposal must be focused and not be "all over the map" or diverge into unrelated tangents without a clear sense of purpose.
  • Failure to cite landmark works in your literature review. Proposals should be grounded in foundational research that provides a basis for understanding the development and scope of the topic and its relevance.
  • Failure to delimit the contextual scope of your research [e.g., time, place, people, etc.]. As with any research paper, your proposed study must inform the reader how and in what ways the study will frame the problem.
  • Failure to develop a coherent and persuasive argument for the proposed research . This is critical. In many workplace settings, the research proposal is a formal document intended to argue for why a study should be funded.
  • Sloppy or imprecise writing, or poor grammar . Although a research proposal does not represent a completed research study, there is still an expectation that it is well-written and follows the style and rules of good academic writing.
  • Too much detail on minor issues, but not enough detail on major issues . Your proposal should focus on only a few key research questions in order to support the argument that the research needs to be conducted. Minor issues, even if valid, can be mentioned but they should not dominate the overall narrative.

Procter, Margaret. The Academic Proposal.  The Lab Report. University College Writing Centre. University of Toronto; Sanford, Keith. Information for Students: Writing a Research Proposal. Baylor University; Wong, Paul T. P. How to Write a Research Proposal. International Network on Personal Meaning. Trinity Western University; Writing Academic Proposals: Conferences, Articles, and Books. The Writing Lab and The OWL. Purdue University; Writing a Research Proposal. University Library. University of Illinois at Urbana-Champaign.

Structure and Writing Style

Beginning the Proposal Process

As with writing most college-level academic papers, research proposals are generally organized the same way throughout most social science disciplines. The text of a proposal generally varies in length between ten and thirty-five pages, followed by the list of references. However, before you begin, read the assignment carefully and, if anything seems unclear, ask your professor whether there are any specific requirements for organizing and writing the proposal.

A good place to begin is to ask yourself a series of questions:

  • What do I want to study?
  • Why is the topic important?
  • How is it significant within the subject areas covered in my class?
  • What problems will it help solve?
  • How does it build upon [and hopefully go beyond] research already conducted on the topic?
  • What exactly should I plan to do, and can I get it done in the time available?

In general, a compelling research proposal should document your knowledge of the topic and demonstrate your enthusiasm for conducting the study. Approach it with the intention of leaving your readers feeling like, "Wow, that's an exciting idea and I can’t wait to see how it turns out!"

Most proposals should include the following sections:

I.  Introduction

In the real world of higher education, a research proposal is most often written by scholars seeking grant funding for a research project or it's the first step in getting approval to write a doctoral dissertation. Even if this is just a course assignment, treat your introduction as the initial pitch of an idea based on a thorough examination of the significance of a research problem. After reading the introduction, your readers should not only have an understanding of what you want to do, but they should also be able to gain a sense of your passion for the topic and to be excited about the study's possible outcomes. Note that most proposals do not include an abstract [summary] before the introduction.

Think about your introduction as a narrative written in two to four paragraphs that succinctly answers the following four questions :

  • What is the central research problem?
  • What is the topic of study related to that research problem?
  • What methods should be used to analyze the research problem?
  • Answer the "So What?" question by explaining why this is important research, what is its significance, and why should someone reading the proposal care about the outcomes of the proposed study?

II.  Background and Significance

This is where you explain the scope and context of your proposal and describe in detail why it's important. It can be melded into your introduction or you can create a separate section to help with the organization and narrative flow of your proposal. Approach writing this section with the thought that you can’t assume your readers will know as much about the research problem as you do. Note that this section is not an essay going over everything you have learned about the topic; instead, you must choose what is most relevant in explaining the aims of your research.

To that end, while there are no prescribed rules for establishing the significance of your proposed study, you should attempt to address some or all of the following:

  • State the research problem and give a more detailed explanation about the purpose of the study than what you stated in the introduction. This is particularly important if the problem is complex or multifaceted .
  • Present the rationale of your proposed study and clearly indicate why it is worth doing; be sure to answer the "So What?" question [i.e., why should anyone care?].
  • Describe the major issues or problems examined by your research. This can be in the form of questions to be addressed. Be sure to note how your proposed study builds on previous assumptions about the research problem.
  • Explain the methods you plan to use for conducting your research. Clearly identify the key sources you intend to use and explain how they will contribute to your analysis of the topic.
  • Describe the boundaries of your proposed research in order to provide a clear focus. Where appropriate, state not only what you plan to study, but what aspects of the research problem will be excluded from the study.
  • If necessary, provide definitions of key concepts, theories, or terms.

III.  Literature Review

Connected to the background and significance of your study is a section of your proposal devoted to a more deliberate review and synthesis of prior studies related to the research problem under investigation . The purpose here is to place your project within the larger whole of what is currently being explored, while at the same time, demonstrating to your readers that your work is original and innovative. Think about what questions other researchers have asked, what methodological approaches they have used, and what is your understanding of their findings and, when stated, their recommendations. Also pay attention to any suggestions for further research.

Since a literature review is information dense, it is crucial that this section is intelligently structured to enable a reader to grasp the key arguments underpinning your proposed study in relation to the arguments put forth by other researchers. A good strategy is to break the literature into "conceptual categories" [themes] rather than systematically or chronologically describing groups of materials one at a time. Note that conceptual categories generally reveal themselves after you have read most of the pertinent literature on your topic so adding new categories is an on-going process of discovery as you review more studies. How do you know you've covered the key conceptual categories underlying the research literature? Generally, you can have confidence that all of the significant conceptual categories have been identified if you start to see repetition in the conclusions or recommendations that are being made.

NOTE: Do not shy away from challenging the conclusions made in prior research as a basis for supporting the need for your proposal. Assess what you believe is missing and state how previous research has failed to adequately examine the issue that your study addresses. Highlighting the problematic conclusions strengthens your proposal.

To help frame your proposal's review of prior research, consider the "five C’s" of writing a literature review:

  • Cite , so as to keep the primary focus on the literature pertinent to your research problem.
  • Compare the various arguments, theories, methodologies, and findings expressed in the literature: what do the authors agree on? Who applies similar approaches to analyzing the research problem?
  • Contrast the various arguments, themes, methodologies, approaches, and controversies expressed in the literature: describe the major areas of disagreement, controversy, or debate among scholars.
  • Critique the literature: Which arguments are more persuasive, and why? Which approaches, findings, and methodologies seem most reliable, valid, or appropriate, and why? Pay attention to the verbs you use to describe what an author says/does [e.g., asserts, demonstrates, argues, etc.].
  • Connect the literature to your own area of research and investigation: how does your own work draw upon, depart from, synthesize, or add a new perspective to what has been said in the literature?

IV.  Research Design and Methods

This section must be well-written and logically organized because you are not actually doing the research, yet your reader must have confidence that you have a plan worth pursuing. The reader will never have a study outcome from which to evaluate whether your methodological choices were the correct ones. Thus, the objective here is to convince the reader that your overall research design and proposed methods of analysis will correctly address the problem and that the methods will provide the means to effectively interpret the potential results. Your design and methods should be unmistakably tied to the specific aims of your study.

Describe the overall research design by building upon and drawing examples from your review of the literature. Consider not only methods that other researchers have used, but methods of data gathering that have not been used but perhaps could be. Be specific about the methodological approaches you plan to undertake to obtain information, the techniques you would use to analyze the data, and the tests of external validity to which you commit yourself [i.e., the trustworthiness by which you can generalize from your study to other people, places, events, and/or periods of time].

When describing the methods you will use, be sure to cover the following:

  • Specify the research process you will undertake and the way you will interpret the results obtained in relation to the research problem. Don't just describe what you intend to achieve from applying the methods you choose, but state how you will spend your time while applying these methods [e.g., coding text from interviews to find statements about the need to change school curriculum; running a regression to determine if there is a relationship between campaign advertising on social media sites and election outcomes in Europe]. A brief illustrative sketch of what such an analysis step might look like appears after this list.
  • Keep in mind that the methodology is not just a list of tasks; it is a deliberate argument as to why techniques for gathering information add up to the best way to investigate the research problem. This is an important point because the mere listing of tasks to be performed does not demonstrate that, collectively, they effectively address the research problem. Be sure you clearly explain this.
  • Anticipate and acknowledge any potential barriers and pitfalls in carrying out your research design and explain how you plan to address them. No method applied to research in the social and behavioral sciences is perfect, so you need to describe where you believe challenges may exist in obtaining data or accessing information. It's always better to acknowledge this than to have it brought up by your professor!
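To make the regression example in the bullet above concrete, here is a brief, hypothetical sketch of the kind of analysis step a proposal might commit to. The data and variable names are invented purely for illustration; a real proposal would describe its actual data sources and model in prose rather than code.

```python
# Hypothetical illustration only: invented numbers standing in for the kind of data the
# example above mentions (campaign advertising spend vs. election outcomes).
from scipy import stats

ad_spend   = [1.2, 3.4, 2.1, 5.0, 4.3, 0.8, 2.9, 3.7]          # ad spend per district (arbitrary units)
vote_share = [41.0, 47.0, 44.0, 52.0, 50.0, 39.0, 45.0, 49.0]   # vote share per district (%)

# Simple ordinary least squares regression of the outcome on advertising spend.
result = stats.linregress(ad_spend, vote_share)
print(f"slope = {result.slope:.2f}, r = {result.rvalue:.2f}, p = {result.pvalue:.4f}")
```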

V.  Preliminary Suppositions and Implications

Just because you don't have to actually conduct the study and analyze the results doesn't mean you can skip talking about the analytical process and potential implications. The purpose of this section is to argue how and in what ways you believe your research will refine, revise, or extend existing knowledge in the subject area under investigation. Depending on the aims and objectives of your study, describe how the anticipated results will impact future scholarly research, theory, practice, forms of interventions, or policy making. Note that such discussions may have either substantive [a potential new policy], theoretical [a potential new understanding], or methodological [a potential new way of analyzing] significance. When thinking about the potential implications of your study, ask the following questions:

  • What might the results mean in regards to challenging the theoretical framework and underlying assumptions that support the study?
  • What suggestions for subsequent research could arise from the potential outcomes of the study?
  • What will the results mean to practitioners in the natural settings of their workplace, organization, or community?
  • Will the results influence programs, methods, and/or forms of intervention?
  • How might the results contribute to the solution of social, economic, or other types of problems?
  • Will the results influence policy decisions?
  • In what way do individuals or groups benefit should your study be pursued?
  • What will be improved or changed as a result of the proposed research?
  • How will the results of the study be implemented and what innovations or transformative insights could emerge from the process of implementation?

NOTE:   This section should not delve into idle speculation, opinion, or be formulated on the basis of unclear evidence . The purpose is to reflect upon gaps or understudied areas of the current literature and describe how your proposed research contributes to a new understanding of the research problem should the study be implemented as designed.

ANOTHER NOTE : This section is also where you describe any potential limitations to your proposed study. While it is impossible to highlight all potential limitations because the study has yet to be conducted, you still must tell the reader where and in what form impediments may arise and how you plan to address them.

VI.  Conclusion

The conclusion reiterates the importance or significance of your proposal and provides a brief summary of the entire study . This section should be only one or two paragraphs long, emphasizing why the research problem is worth investigating, why your research study is unique, and how it should advance existing knowledge.

Someone reading this section should come away with an understanding of:

  • Why the study should be done;
  • The specific purpose of the study and the research questions it attempts to answer;
  • The reasons why the research design and methods used were chosen over other options;
  • The potential implications emerging from your proposed study of the research problem; and
  • A sense of how your study fits within the broader scholarship about the research problem.

VII.  Citations

As with any scholarly research paper, you must cite the sources you used . In a standard research proposal, this section can take two forms, so consult with your professor about which one is preferred.

  • References -- a list of only the sources you actually used in creating your proposal.
  • Bibliography -- a list of everything you used in creating your proposal, along with additional citations to any key sources relevant to understanding the research problem.

In either case, this section should testify to the fact that you did enough preparatory work to ensure the project will complement and not just duplicate the efforts of other researchers. It demonstrates to the reader that you have a thorough understanding of prior research on the topic.

Most proposal formats have you start a new page and use the heading "References" or "Bibliography" centered at the top of the page. Cited works should always use a standard format that follows the writing style advised by the discipline of your course [e.g., education=APA; history=Chicago] or that is preferred by your professor. This section normally does not count towards the total page length of your research proposal.

Develop a Research Proposal: Writing the Proposal. Office of Library Information Services. Baltimore County Public Schools; Heath, M. Teresa Pereira and Caroline Tynan. “Crafting a Research Proposal.” The Marketing Review 10 (Summer 2010): 147-168; Jones, Mark. “Writing a Research Proposal.” In MasterClass in Geography Education: Transforming Teaching and Learning . Graham Butt, editor. (New York: Bloomsbury Academic, 2015), pp. 113-127; Juni, Muhamad Hanafiah. “Writing a Research Proposal.” International Journal of Public Health and Clinical Sciences 1 (September/October 2014): 229-240; Krathwohl, David R. How to Prepare a Dissertation Proposal: Suggestions for Students in Education and the Social and Behavioral Sciences . Syracuse, NY: Syracuse University Press, 2005; Procter, Margaret. The Academic Proposal. The Lab Report. University College Writing Centre. University of Toronto; Punch, Keith and Wayne McGowan. "Developing and Writing a Research Proposal." In From Postgraduate to Social Scientist: A Guide to Key Skills . Nigel Gilbert, ed. (Thousand Oaks, CA: Sage, 2006), 59-81; Wong, Paul T. P. How to Write a Research Proposal. International Network on Personal Meaning. Trinity Western University; Writing Academic Proposals: Conferences , Articles, and Books. The Writing Lab and The OWL. Purdue University; Writing a Research Proposal. University Library. University of Illinois at Urbana-Champaign.



  • Increase novice peer reviewers' awareness of common mistakes and dilemmas faced in reviewing research proposals and manuscripts.
  • Suggest strategies for novice peer reviewers to offer constructive criticisms to authors of research proposals and manuscripts.


Oman Medical Journal, vol. 23(2), April 2008

How to prepare a Research Proposal

Health research, medical education and clinical practice form the three pillars of modern-day medical practice. As one authority rightly put it: ‘Health research is not a luxury, but an essential need that no nation can afford to ignore’. Health research can and should be pursued by a broad range of people. Even if they do not conduct research themselves, they need to grasp the principles of the scientific method to understand the value and limitations of science and to be able to assess and evaluate results of research before applying them. This review paper aims to highlight the essential concepts for students and beginning researchers and to sensitize and motivate readers to explore the vast literature available on research methodologies.

Most students and beginning researchers do not fully understand what a research proposal means, nor do they understand its importance. 1 A research proposal is a detailed description of a proposed study designed to investigate a given problem. 2

A research proposal is intended to convince others that you have a worthwhile research project and that you have the competence and the work-plan to complete it. Broadly, the research proposal must address the following questions regardless of your research area and the methodology you choose: what you plan to accomplish, why you want to do it and how you are going to do it. 1 The aim of this article is to highlight the essential concepts and not to provide extensive details about this topic.

The elements of a research proposal are highlighted below:

1. Title: It should be concise and descriptive. It must be informative and catchy. An effective title not only pricks the reader's interest, but also predisposes him/her favorably towards the proposal. Often titles are stated in terms of a functional relationship, because such titles clearly indicate the independent and dependent variables. 1 The title may need to be revised after completion of writing of the protocol to reflect more closely the sense of the study. 3

2. Abstract: It is a brief summary of approximately 300 words. It should include the main research question, the rationale for the study, the hypothesis (if any) and the method. Descriptions of the method may include the design, procedures, the sample and any instruments that will be used. 1 It should stand on its own, and not refer the reader to points in the project description. 3

3. Introduction: The introduction provides the readers with the background information. Its purpose is to establish a framework for the research, so that readers can understand how it relates to other research. 4 It should answer the question of why the research needs to be done and what will be its relevance. It puts the proposal in context. 3

The introduction typically begins with a statement of the research problem in precise and clear terms. 1

The importance of the statement of the research problem 5 : The statement of the problem is the essential basis for the construction of a research proposal (research objectives, hypotheses, methodology, work plan and budget etc). It is an integral part of selecting a research topic. It will guide and put into sharper focus the research design being considered for solving the problem. It allows the investigator to describe the problem systematically, to reflect on its importance, its priority in the country and region and to point out why the proposed research on the problem should be undertaken. It also facilitates peer review of the research proposal by the funding agencies.

Then it is necessary to provide the context and set the stage for the research question in such a way as to show its necessity and importance. 1 This step is necessary for the investigators to familiarize themselves with existing knowledge about the research problem and to find out whether or not others have investigated the same or similar problems. This step is accomplished by a thorough and critical review of the literature and by personal communication with experts. 5 It helps further understanding of the problem proposed for research and may lead to refining the statement of the problem, to identify the study variables and conceptualize their relationships, and in formulation and selection of a research hypothesis. 5 It ensures that you are not "re-inventing the wheel" and demonstrates your understanding of the research problem. It gives due credit to those who have laid the groundwork for your proposed research. 1 In a proposal, the literature review is generally brief and to the point. The literature selected should be pertinent and relevant. 6

Against this background, you then present the rationale of the proposed study and clearly indicate why it is worth doing.

4. Objectives: Research objectives are the goals to be achieved by conducting the research. 5 They may be stated as ‘general’ and ‘specific’.

The general objective of the research is what is to be accomplished by the research project, for example, to determine whether or not a new vaccine should be incorporated in a public health program.

The specific objectives relate to the specific research questions the investigator wants to answer through the proposed study and may be presented as primary and secondary objectives, for example, primary: To determine the degree of protection that is attributable to the new vaccine in a study population by comparing the vaccinated and unvaccinated groups. 5 Secondary: To study the cost-effectiveness of this programme.

Young investigators are advised to resist the temptation to put too many objectives or over-ambitious objectives that cannot be adequately achieved by the implementation of the protocol. 3

5. Variables: During the planning stage, it is necessary to identify the key variables of the study, and their method of measurement and unit of measurement must be clearly indicated. Four types of variables are important in research 5 :

a. Independent variables: variables that are manipulated or treated in a study in order to see what effect differences in them will have on those variables proposed as being dependent on them. The different synonyms for the term ‘independent variable’ which are used in literature are: cause, input, predisposing factor, risk factor, determinant, antecedent, characteristic and attribute.

b. Dependent variables: variables in which changes are results of the level or amount of the independent variable or variables.

Synonyms: effect, outcome, consequence, result, condition, disease.

c. Confounding or intervening variables: variables that should be studied because they may influence or ‘mix’ the effect of the independent variables. For instance, in a study of the effect of measles (independent variable) on child mortality (dependent variable), the nutritional status of the child may play an intervening (confounding) role.

d. Background variables: variables that are so often of relevance in investigations of groups or populations that they should be considered for possible inclusion in the study. For example sex, age, ethnic origin, education, marital status, social status etc.

The objective of research is usually to determine the effect of changes in one or more independent variables on one or more dependent variables. For example, a study may ask "Will alcohol intake (independent variable) have an effect on development of gastric ulcer (dependent variable)?"

Certain variables may not be easy to identify. The characteristics that define these variables must be clearly identified for the purpose of the study.

6. Questions and/or hypotheses: If you as a researcher know enough to make a prediction about what you are studying, then a hypothesis may be formulated. A hypothesis can be defined as a tentative prediction or explanation of the relationship between two or more variables. In other words, the hypothesis translates the problem statement into a precise, unambiguous prediction of expected outcomes. Hypotheses are not meant to be haphazard guesses, but should reflect the depth of knowledge, imagination and experience of the investigator. 5 In the process of formulating the hypotheses, all variables relevant to the study must be identified. For example: "Health education involving active participation by mothers will produce more positive changes in child feeding than health education based on lectures". Here the independent variable is the type of health education and the dependent variable is changes in child feeding.

A research question poses a relationship between two or more variables but phrases the relationship as a question; a hypothesis represents a declarative statement of the relations between two or more variables. 7

For exploratory or phenomenological research, you may not have any hypothesis (please do not confuse the hypothesis with the statistical null hypothesis). 1 Questions are relevant to normative or census type research (How many of them are there? Is there a relationship between them?). Deciding whether to use questions or hypotheses depends on factors such as the purpose of the study, the nature of the design and methodology, and the audience of the research (at times even the outlook and preference of the committee members, particularly the Chair). 6

7. Methodology: The methods section is very important because it tells your research committee how you plan to tackle your research problem. The guiding principle for writing the methods section is that it should contain sufficient information for the reader to determine whether the methodology is sound. Some even argue that a good proposal should contain sufficient detail for another qualified researcher to implement the study. 1 Indicate the methodological steps you will take to answer every question or to test every hypothesis illustrated in the Questions/hypotheses section. 6 It is vital that you consult a biostatistician during the planning stage of your study, 8 to resolve methodological issues before submitting the proposal.

This section should include:

Research design: The selection of the research strategy is the core of research design and is probably the single most important decision the investigator has to make. The choice of strategy, whether descriptive, analytical, experimental, operational or a combination of these, depends on a number of considerations, 5 but this choice must be explained in relation to the study objectives. 3

Research subjects or participants: Depending on the type of your study, the following questions should be answered: 3 , 5

  • What are the criteria for inclusion or selection?
  • What are the criteria for exclusion?
  • What is the sampling procedure you will use to ensure the representativeness and reliability of the sample and to minimize sampling errors? The key reason for being concerned with sampling is the issue of both the internal and external validity of the study results. 9
  • Will controls be used in your study? Controls or comparison groups are used in scientific research to increase the validity of the conclusions. Control groups are necessary in all analytical epidemiological studies, in experimental studies of drug trials, in research on the effects of intervention programmes and disease control measures, and in many other investigations. Some descriptive studies (studies of existing data, surveys) may not require control groups.
  • What are the criteria for discontinuation?

Sample size: The proposal should provide information and justification (basis on which the sample size is calculated) about sample size in the methodology section. 3 A larger sample size than needed to test the research hypothesis increases the cost and duration of the study and will be unethical if it exposes human subjects to any potential unnecessary risk without additional benefit. A smaller sample size than needed can also be unethical as it exposes human subjects to risk with no benefit to scientific knowledge. Calculation of sample size has been made easy by computer software programmes, but the principles underlying the estimation should be well understood.
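As a rough illustration of the principle, the sketch below estimates the number of participants per group needed to compare two proportions, using the standard normal-approximation formula; the event rates, significance level and power shown are hypothetical and would need to be justified for the actual study.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per group for comparing two independent
    proportions with a two-sided test (normal-approximation formula)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for the significance level
    z_beta = NormalDist().inv_cdf(power)           # critical value for the desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return ceil(n)

# Hypothetical example: expected event rates of 20% (unvaccinated) vs 10% (vaccinated)
print(n_per_group(0.20, 0.10))  # about 197 participants per group
```

Whatever software is used, the proposal should state the assumed effect size, significance level and power so that reviewers can check the justification for the sample size.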

Interventions: If an intervention is introduced, a description must be given of the drugs or devices (proprietary names, manufacturer, chemical composition, dose, frequency of administration) if they are already commercially available. If they are in phases of experimentation or are already commercially available but used for other indications, information must be provided on available pre-clinical investigations in animals and/or results of studies already conducted in humans (in such cases, approval of the drug regulatory agency in the country is needed before the study). 3

Ethical issues 3 : Ethical considerations apply to all types of health research. Before the proposal is submitted to the Ethics Committee for approval, the two important documents mentioned below (where appropriate) must be appended to the proposal. In addition, there is the vital issue of conflict of interest, on which the researchers should furnish a statement.

The Informed consent form (informed decision-making): A consent form, where appropriate, must be developed and attached to the proposal. It should be written in the prospective subjects’ mother tongue and in simple language which can be easily understood by the subject. The use of medical terminology should be avoided as far as possible. Special care is needed when subjects are illiterate. It should explain why the study is being done and why the subject has been asked to participate. It should describe, in sequence, what will happen in the course of the study, giving enough detail for the subject to gain a clear idea of what to expect. It should clarify whether or not the study procedures offer any benefits to the subject or to others, and explain the nature, likelihood and treatment of anticipated discomfort or adverse effects, including psychological and social risks, if any. Where relevant, a comparison with risks posed by standard drugs or treatment must be included. If the risks are unknown or a comparative risk cannot be given it should be so stated. It should indicate that the subject has the right to withdraw from the study at any time without, in any way, affecting his/her further medical care. It should assure the participant of confidentiality of the findings.

Ethics checklist: The proposal must describe the measures that will be undertaken to ensure that the proposed research is carried out in accordance with the World Medical Association Declaration of Helsinki on Ethical Principles for Medical research involving Human Subjects. 10 It must answer the following questions:

  • Is the research design adequate to provide answers to the research question? It is unethical to expose subjects to research that will have no value.
  • Is the method of selection of research subjects justified? The use of vulnerable subjects as research participants needs special justification. Vulnerable subjects include those in prison, minors and persons with mental disability. In international research it is important to mention that the population in which the study is conducted will benefit from any potential outcome of the research and that the research is not being conducted solely for the benefit of some other population. Justification is needed for any inducement, financial or otherwise, for the participants to be enrolled in the study.
  • Are the interventions justified, in terms of risk/benefit ratio? Risks are not limited to physical harm. Psychological and social risks must also be considered.
  • For observations made, have measures been taken to ensure confidentiality?

Research setting 5 : The research setting includes all the pertinent facets of the study, such as the population to be studied (sampling frame), the place and time of study.

Study instruments 3 , 5 : Instruments are the tools by which the data are collected. For validated questionnaires/interview schedules, a reference to the published work should be given and the instrument appended to the proposal. For a new questionnaire designed specifically for your study, details about its preparation, precoding and pretesting should be furnished and the document appended to the proposal. Descriptions of other methods of observation, such as medical examination, laboratory tests and screening procedures, are also necessary: for established procedures, a reference to published work should be cited, but for a new or modified procedure, an adequate description with justification is necessary.

Collection of data: Provide a short description of the data collection protocol. For example, in a study on blood pressure measurement: time of participant arrival, rest for 5–10 minutes, which apparatus (standard calibrated) is to be used, in which room the measurement is taken, measurement in the sitting or lying down position, how many measurements, which arm is measured first (and whether this is randomized), details of the cuff and its placement, and who will take the measurement. This minimizes the possibility of confusion, delays and errors.

Data analysis: The description should include the design of the analysis form, plans for processing and coding the data, and the choice of the statistical methods to be applied to each data set. What will be the procedures for accounting for missing, unused or spurious data?
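As a rough illustration of what such a plan might specify, the sketch below applies a pre-specified rule for spurious values and reports the amount of unusable data before the main analysis; the variable name, plausible range and values are hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical analysis data set with systolic blood pressure readings
df = pd.DataFrame({
    "participant_id": [1, 2, 3, 4],
    "sbp_mmHg": [128, np.nan, 301, 142],  # one missing and one implausible reading
})

# Pre-specified rule: readings outside a plausible physiological range are treated as spurious
df.loc[~df["sbp_mmHg"].between(70, 260), "sbp_mmHg"] = np.nan

# Report how much data is missing or excluded before the main analysis
print(df["sbp_mmHg"].isna().sum(), "of", len(df), "readings unusable")
print(df["sbp_mmHg"].describe())
```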

Monitoring, supervision and quality control: A detailed statement about all logistical issues to satisfy the requirements of Good Clinical Practice (GCP): protocol procedures, the responsibilities of each member of the research team, training of study investigators, and steps taken to assure quality control (laboratory procedures, equipment calibration, etc.).

Gantt chart: A Gantt chart is an overview of the proposed tasks/activities and the time frame for each. You put weeks, days or months along one axis and the tasks along the other, and draw bars to indicate the period over which each task will be performed, giving a timeline for your research study (online tutorials are available). 11
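A minimal sketch of how such a chart could be drawn, assuming matplotlib is available and using hypothetical tasks and durations, is shown below.

```python
import matplotlib.pyplot as plt

# Hypothetical tasks with start month and duration in months
tasks = [
    ("Literature review", 0, 2),
    ("Ethics approval", 1, 2),
    ("Data collection", 3, 6),
    ("Data analysis", 8, 3),
    ("Writing and dissemination", 10, 3),
]

fig, ax = plt.subplots(figsize=(8, 3))
for i, (name, start, duration) in enumerate(tasks):
    ax.barh(i, duration, left=start)  # one horizontal bar per task
ax.set_yticks(range(len(tasks)))
ax.set_yticklabels([name for name, _, _ in tasks])
ax.invert_yaxis()                     # first task at the top
ax.set_xlabel("Month of project")
ax.set_title("Project timeline (illustrative Gantt chart)")
plt.tight_layout()
plt.show()
```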

Significance of the study: Indicate how your research will refine, revise or extend existing knowledge in the area under investigation. How will it benefit the concerned stakeholders? What could be the larger implications of your research study?

Dissemination of the study results: How do you propose to share the findings of your study with professional peers, practitioners, participants and the funding agency?

Budget: Provide a proposal budget with an item-wise/activity-wise breakdown and a justification for each item. Indicate how the study will be financed.

References: The proposal should end with the relevant references on the subject. For web-based sources, include the date of access for the cited website, for example by adding "accessed on June 10, 2008".

Appendixes: Include the appropriate appendixes in the proposal. For example: Interview protocols, sample of informed consent forms, cover letters sent to appropriate stakeholders, official letters for permission to conduct research. Regarding original scales or questionnaires, if the instrument is copyrighted then permission in writing to reproduce the instrument from the copyright holder or proof of purchase of the instrument must be submitted.

Top Tips: Reviewing a Research Proposal

The peer review process is invaluable in assisting research panels to make decisions about funding. Independent experts scrutinise the importance, potential and cost-effectiveness of the research being proposed.

Check the funder’s website for guidance
Ensure you are clear on what type of proposal you are being asked to review and read the assessment criteria and scoring matrix as a priority. Many funding councils have prepared comprehensive guidance for reviewers that is freely available online. As an example, EPSRC and ESRC guidance can be accessed here:

EPSRC:  https://www.epsrc.ac.uk/funding/assessmentprocess/review/formsandguidancenotes/standardcalls/

ESRC:  http://www.esrc.ac.uk/funding/guidance-for-peer-reviewers/

Be objective and professional
Provide clear and concise comments and objective criticism when identifying strengths and weaknesses in the proposal. Whether or not there are major flaws or ethical concerns, give justification and references for your comments and the score you award. Remain anonymous by avoiding references to your own work or any personal information. Don’t allow your review to be influenced by bias towards your own field of research, and be mindful of unconscious bias and the impact it could have on your review. See: https://implicit.harvard.edu/implicit/takeatest.html.

Be concise but clear
Many submission systems have character limits for the review sections, so you will need to be concise. However, be aware that not everyone reading your review comments will be a specialist in your field, so use accessible language throughout.

Remember to praise a good proposal
If you find that the proposal you’re reviewing is good, you should say so and explain why.

Take your time
Finally, allow enough time to read the proposal thoroughly before writing and submitting your review. If you feel you need more time to complete your review, contact the funder to request a deadline extension. Most funders would prefer that you request an extension and provide a more comprehensive review than submit something brief and uninformative because there was inadequate time to consider the proposal in detail.

Andrew (2014, May 19). Review a research grant-application in five minutes. Retrieved from:  https://parkerderrington.com/peer-review-your-own-grant-application-in-five-minutes/

Medical Research Council (2017) Guidance for peer reviewers. Retrieved from: https://www.mrc.ac.uk/documents/pdf/reviewers-handbook/

Prosser, R. (2016, September 19). 8 top tips for writing a useful grant review. Insight. Retrieved from: https://mrc.ukri.org/news/blog/8-top-tips-for-writing-a-useful-review/?redirected-from-wordpress

Evaluation of research proposals by peer review panels: broader panels for broader assessments?

Rebecca Abma-Schouten, Joey Gijbels, Wendy Reijmerink, Ingeborg Meijer, Evaluation of research proposals by peer review panels: broader panels for broader assessments?, Science and Public Policy , Volume 50, Issue 4, August 2023, Pages 619–632, https://doi.org/10.1093/scipol/scad009


Panel peer review is widely used to decide which research proposals receive funding. Through this exploratory observational study at two large biomedical and health research funders in the Netherlands, we gain insight into how scientific quality and societal relevance are discussed in panel meetings. We explore, in ten review panel meetings of biomedical and health funding programmes, how panel composition and formal assessment criteria affect the arguments used. We observe that more scientific arguments are used than arguments related to societal relevance and expected impact. Also, more diverse panels result in a wider range of arguments, largely for the benefit of arguments related to societal relevance and impact. We discuss how funders can contribute to the quality of peer review by creating a shared conceptual framework that better defines research quality and societal relevance. We also contribute to a further understanding of the role of diverse peer review panels.

Scientific biomedical and health research is often supported by project or programme grants from public funding agencies such as governmental research funders and charities. Research funders primarily rely on peer review, often a combination of independent written review and discussion in a peer review panel, to inform their funding decisions. Peer review panels have the difficult task of integrating and balancing the various assessment criteria to select and rank the eligible proposals. With the increasing emphasis on societal benefit and being responsive to societal needs, the assessment of research proposals ought to include broader assessment criteria, including both scientific quality and societal relevance, and a broader perspective on relevant peers. This results in new practices of including non-scientific peers in review panels ( Del Carmen Calatrava Moreno et al. 2019 ; Den Oudendammer et al. 2019 ; Van den Brink et al. 2016 ). Relevant peers, in the context of biomedical and health research, include, for example, health-care professionals, (healthcare) policymakers, and patients as the (end-)users of research.

Currently, in scientific and grey literature, much attention is paid to what legitimate criteria are and to deficiencies in the peer review process, for example, focusing on the role of chance and the difficulty of assessing interdisciplinary or ‘blue sky’ research ( Langfeldt 2006 ; Roumbanis 2021a ). Our research primarily builds upon the work of Lamont (2009) , Huutoniemi (2012) , and Kolarz et al. (2016) . Their work articulates how the discourse in peer review panels can be understood by giving insight into disciplinary assessment cultures and social dynamics, as well as how panel members define and value concepts such as scientific excellence, interdisciplinarity, and societal impact. At the same time, there is little empirical work on what actually is discussed in peer review meetings and to what extent this is related to the specific objectives of the research funding programme. Such observational work is especially lacking in the biomedical and health domain.

The aim of our exploratory study is to learn what arguments panel members use in a review meeting when assessing research proposals in biomedical and health research programmes. We explore how arguments used in peer review panels are affected by (1) the formal assessment criteria and (2) the inclusion of non-scientific peers in review panels, also called (end-)users of research, societal stakeholders, or societal actors. We add to the existing literature by focusing on the actual arguments used in peer review assessment in practice.

To this end, we observed ten panel meetings in a variety of eight biomedical and health research programmes at two large research funders in the Netherlands: the governmental research funder The Netherlands Organisation for Health Research and Development (ZonMw) and the charitable research funder the Dutch Heart Foundation (DHF). Our first research question focuses on what arguments panel members use when assessing research proposals in a review meeting. The second examines to what extent these arguments correspond with the formal −as described in the programme brochure and assessment form− criteria on scientific quality and societal impact creation. The third question focuses on how arguments used differ between panel members with different perspectives.

2.1 Relation between science and society

To understand the dual focus of scientific quality and societal relevance in research funding, a theoretical understanding and a practical operationalisation of the relation between science and society are needed. The conceptualisation of this relationship affects both who are perceived as relevant peers in the review process and the criteria by which research proposals are assessed.

The relationship between science and society is not constant over time nor static, yet a relation that is much debated. Scientific knowledge can have a huge impact on societies, either intended or unintended. Vice versa, the social environment and structure in which science takes place influence the rate of development, the topics of interest, and the content of science. However, the second part of this inter-relatedness between science and society generally receives less attention ( Merton 1968 ; Weingart 1999 ).

From a historical perspective, scientific and technological progress contributed to the view that science was valuable on its own account and that science and the scientist stood independent of society. While this protected science from unwarranted political influence, societal disengagement with science resulted in less authority by science and debate about its contribution to society. This interdependence and mutual influence contributed to a modern view of science in which knowledge development is valued both on its own merit and for its impact on, and interaction with, society. As such, societal factors and problems are important drivers for scientific research. This warrants that the relation and boundaries between science, society, and politics need to be organised and constantly reinforced and reiterated ( Merton 1968 ; Shapin 2008 ; Weingart 1999 ).

Glerup and Horst (2014) conceptualise the value of science to society and the role of society in science in four rationalities that reflect different justifications for their relation and thus also for who is responsible for (assessing) the societal value of science. The rationalities are arranged along two axes: one is related to the internal or external regulation of science and the other is related to either the process or the outcome of science as the object of steering. The first two rationalities of Reflexivity and Demarcation focus on internal regulation in the scientific community. Reflexivity focuses on the outcome. Central is that science, and thus, scientists should learn from societal problems and provide solutions. Demarcation focuses on the process: science should continuously question its own motives and methods. The latter two rationalities of Contribution and Integration focus on external regulation. The core of the outcome-oriented Contribution rationality is that scientists do not necessarily see themselves as ‘working for the public good’. Science should thus be regulated by society to ensure that outcomes are useful. The central idea of the process-oriented Integration rationality is that societal actors should be involved in science in order to influence the direction of research.

Research funders can be seen as external or societal regulators of science. They can focus on organising the process of science, Integration, or on scientific outcomes that function as solutions for societal challenges, Contribution. In the Contribution perspective, a funder could enhance outside (societal) involvement in science to ensure that scientists take responsibility to deliver results that are needed and used by society. From Integration follows that actors from science and society need to work together in order to produce the best results. In this perspective, there is a lack of integration between science and society and more collaboration and dialogue are needed to develop a new kind of integrative responsibility ( Glerup and Horst 2014 ). This argues for the inclusion of other types of evaluators in research assessment. In reality, these rationalities are not mutually exclusive and also not strictly separated. As a consequence, multiple rationalities can be recognised in the reasoning of scientists and in the policies of research funders today.

2.2 Criteria for research quality and societal relevance

The rationalities of Glerup and Horst have consequences for which language is used to discuss societal relevance and impact in research proposals. Even though the main ingredients are quite similar, as a consequence of the coexisting rationalities in science, societal aspects can be defined and operationalised in different ways ( Alla et al. 2017 ). In the definition of societal impact by Reed, emphasis is placed on the outcome : the contribution to society. It includes the significance for society, the size of potential impact, and the reach , the number of people or organisations benefiting from the expected outcomes ( Reed et al. 2021 ). Other models and definitions focus more on the process of science and its interaction with society. Spaapen and Van Drooge introduced productive interactions in the assessment of societal impact, highlighting a direct contact between researchers and other actors. A key idea is that the interaction in different domains leads to impact in different domains ( Meijer 2012 ; Spaapen and Van Drooge 2011 ). Definitions that focus on the process often refer to societal impact as (1) something that can take place in distinguishable societal domains, (2) something that needs to be actively pursued, and (3) something that requires interactions with societal stakeholders (or users of research) ( Hughes and Kitson 2012 ; Spaapen and Van Drooge 2011 ).

Glerup and Horst show that process and outcome-oriented aspects can be combined in the operationalisation of criteria for assessing research proposals on societal aspects. Also, the funders participating in this study include the outcome—the value created in different domains—and the process—productive interactions with stakeholders—in their formal assessment criteria for societal relevance and impact. Different labels are used for these criteria, such as societal relevance , societal quality , and societal impact ( Abma-Schouten 2017 ; Reijmerink and Oortwijn 2017 ). In this paper, we use societal relevance or societal relevance and impact .

Scientific quality in research assessment frequently refers to all aspects and activities in the study that contribute to the validity and reliability of the research results and that contribute to the integrity and quality of the research process itself. The criteria commonly include the relevance of the proposal for the funding programme, the scientific relevance, originality, innovativeness, methodology, and feasibility ( Abdoul et al. 2012 ). Several studies demonstrated that quality is seen as not only a rich concept but also a complex concept in which excellence and innovativeness, methodological aspects, engagement of stakeholders, multidisciplinary collaboration, and societal relevance all play a role ( Geurts 2016 ; Roumbanis 2019 ; Scholten et al. 2018 ). Another study showed a comprehensive definition of ‘good’ science, which includes creativity, reproducibility, perseverance, intellectual courage, and personal integrity. It demonstrated that ‘good’ science involves not only scientific excellence but also personal values and ethics, and engagement with society ( Van den Brink et al. 2016 ). Noticeable in these studies is the connection made between societal relevance and scientific quality.

In summary, the criteria for scientific quality and societal relevance are conceptualised in different ways, and perspectives on the role of societal value creation and the involvement of societal actors vary strongly. Research funders hence have to pay attention to the meaning of the criteria for the panel members they recruit to help them, and navigate and negotiate how the criteria are applied in assessing research proposals. To be able to do so, more insight is needed in which elements of scientific quality and societal relevance are discussed in practice by peer review panels.

2.3 Role of funders and societal actors in peer review

National governments and charities are important funders of biomedical and health research. How this funding is distributed varies per country. Project funding is frequently allocated based on research programming by specialised public funding organisations, such as the Dutch Research Council in the Netherlands and ZonMw for health research. The DHF, the second largest private non-profit research funder in the Netherlands, provides project funding ( Private Non-Profit Financiering 2020 ). Funders, as so-called boundary organisations, can act as key intermediaries between government, science, and society ( Jasanoff 2011 ). Their responsibility is to develop effective research policies connecting societal demands and scientific ‘supply’. This includes setting up and executing fair and balanced assessment procedures ( Sarewitz and Pielke 2007 ). Herein, the role of societal stakeholders is receiving increasing attention ( Benedictus et al. 2016 ; De Rijcke et al. 2016 ; Dijstelbloem et al. 2013 ; Scholten et al. 2018 ).

All charitable health research funders in the Netherlands have, in the last decade, included patients at different stages of the funding process, including in assessing research proposals ( Den Oudendammer et al. 2019 ). To facilitate research funders in involving patients in assessing research proposals, the federation of Dutch patient organisations set up an independent reviewer panel with (at-risk) patients and direct caregivers ( Patiëntenfederatie Nederland, n.d .). Other foundations have set up societal advisory panels including a wider range of societal actors than patients alone. The Committee Societal Quality (CSQ) of the DHF includes, for example, (at-risk) patients and a wide range of cardiovascular health-care professionals who are not active as academic researchers. This model is also applied by the Diabetes Foundation and the Princess Beatrix Muscle Foundation in the Netherlands ( Diabetesfonds, n.d .; Prinses Beatrix Spierfonds, n.d .).

In 2014, the Lancet presented a series of five papers about biomedical and health research known as the ‘increasing value, reducing waste’ series ( Macleod et al. 2014 ). The authors addressed several issues as well as potential solutions that funders can implement. They highlight, among others, the importance of improving the societal relevance of the research questions and including the burden of disease in research assessment in order to increase the value of biomedical and health science for society. A better understanding of and an increasing role of users of research are also part of the described solutions ( Chalmers et al. 2014 ; Van den Brink et al. 2016 ). This is also in line with the recommendations of the 2013 Declaration on Research Assessment (DORA) ( DORA 2013 ). These recommendations influence the way in which research funders operationalise their criteria in research assessment, how they balance the judgement of scientific and societal aspects, and how they involve societal stakeholders in peer review.

2.4 Panel peer review of research proposals

To assess research proposals, funders rely on the services of peer experts to review the thousands or perhaps millions of research proposals seeking funding each year. While often associated with scholarly publishing, peer review also includes the ex ante assessment of research grant and fellowship applications ( Abdoul et al. 2012 ). Peer review of proposals often includes a written assessment of a proposal by an anonymous peer and a peer review panel meeting to select the proposals eligible for funding. Peer review is an established component of professional academic practice, is deeply embedded in the research culture, and essentially consists of experts in a given domain appraising the professional performance, creativity, and/or quality of scientific work produced by others in their field of competence ( Demicheli and Di Pietrantonj 2007 ). The history of peer review as the default approach for scientific evaluation and accountability is, however, relatively young. While the term was unheard of in the 1960s, by 1970, it had become the standard. Since that time, peer review has become increasingly diverse and formalised, resulting in more public accountability ( Reinhart and Schendzielorz 2021 ).

While many studies have been conducted concerning peer review in scholarly publishing, peer review in grant allocation processes has been less discussed ( Demicheli and Di Pietrantonj 2007 ). The most extensive work on this topic has been conducted by Lamont (2009) . Lamont studied peer review panels in five American research funding organisations, including observing three panels. Other examples include Roumbanis’s ethnographic observations of ten review panels at the Swedish Research Council in natural and engineering sciences ( Roumbanis 2017 , 2021a ). Also, Huutoniemi was able to study, but not observe, four panels on environmental studies and social sciences of the Academy of Finland ( Huutoniemi 2012 ). Additionally, Van Arensbergen and Van den Besselaar (2012) analysed peer review through interviews and by analysing the scores and outcomes at different stages of the peer review process in a talent funding programme. In particular, interesting is the study by Luo and colleagues on 164 written panel review reports, showing that the reviews from panels that included non-scientific peers described broader and more concrete impact topics. Mixed panels also more often connected research processes and characteristics of applicants with impact creation ( Luo et al. 2021 ).

While these studies primarily focused on peer review panels in other disciplinary domains or are based on interviews or reports instead of direct observations, we believe that many of the findings are relevant to the functioning of panels in the context of biomedical and health research. From this literature, we learn to have realistic expectations of peer review. It is inherently difficult to predict in advance which research projects will provide the most important findings or breakthroughs ( Lee et al. 2013 ; Pier et al. 2018 ; Roumbanis 2021a , 2021b ). At the same time, these limitations may not substantiate the replacement of peer review by another assessment approach ( Wessely 1998 ). Many topics addressed in the literature are inter-related and relevant to our study, such as disciplinary differences and interdisciplinarity, social dynamics and their consequences for consistency and bias, and suggestions to improve panel peer review ( Lamont and Huutoniemi 2011 ; Lee et al. 2013 ; Pier et al. 2018 ; Roumbanis 2021a , b ; Wessely 1998 ).

Different scientific disciplines show different preferences and beliefs about how to build knowledge and thus have different perceptions of excellence. However, panellists are willing to respect and acknowledge other standards of excellence ( Lamont 2009 ). Evaluation cultures also differ between scientific fields. Science, technology, engineering, and mathematics panels might, in comparison with panellists from social sciences and humanities, be more concerned with the consistency of the assessment across panels and therefore with clear definitions and uses of assessment criteria ( Lamont and Huutoniemi 2011 ). However, much is still to learn about how panellists’ cognitive affiliations with particular disciplines unfold in the evaluation process. Therefore, the assessment of interdisciplinary research is much more complex than just improving the criteria or procedure because less explicit repertoires would also need to change ( Huutoniemi 2012 ).

Social dynamics play a role as panellists may differ in their motivation to engage in allocation processes, which could create bias ( Lee et al. 2013 ). Placing emphasis on meeting established standards or thoroughness in peer review may promote uncontroversial and safe projects, especially in a situation where strong competition puts pressure on experts to reach a consensus ( Langfeldt 2001 ,2006 ). Personal interest and cognitive similarity may also contribute to conservative bias, which could negatively affect controversial or frontier science ( Luukkonen 2012 ; Roumbanis 2021a ; Travis and Collins 1991 ). Central in this part of literature is that panel conclusions are the outcome of and are influenced by the group interaction ( Van Arensbergen et al. 2014a ). Differences in, for example, the status and expertise of the panel members can play an important role in group dynamics. Insights from social psychology on group dynamics can help in understanding and avoiding bias in peer review panels ( Olbrecht and Bornmann 2010 ). For example, group performance research shows that more diverse groups with complementary skills make better group decisions than homogenous groups. Yet, heterogeneity can also increase conflict within the group ( Forsyth 1999 ). Therefore, it is important to pay attention to power dynamics and maintain team spirit and good communication ( Van Arensbergen et al. 2014a ), especially in meetings that include both scientific and non-scientific peers.

The literature also provides funders with starting points to improve the peer review process. For example, the explicitness of review procedures positively influences the decision-making processes ( Langfeldt 2001 ). Strategic voting and decision-making appear to be less frequent in panels that rate than in panels that rank proposals. Also, an advisory instead of a decisional role may improve the quality of the panel assessment ( Lamont and Huutoniemi 2011 ).

Despite different disciplinary evaluative cultures, formal procedures, and criteria, panel members with different backgrounds develop shared customary rules of deliberation that facilitate agreement and help avoid situations of conflict ( Huutoniemi 2012 ; Lamont 2009 ). This is a necessary prerequisite for opening up peer review panels to include non-academic experts. When doing so, it is important to realise that panel review is a social, emotional, and interactional process. It is therefore important to also take these non-cognitive aspects into account when studying cognitive aspects ( Lamont and Guetzkow 2016 ), as we do in this study.

In summary, what we learn from the literature is that (1) the specific criteria to operationalise scientific quality and societal relevance of research are important, (2) the rationalities from Glerup and Horst predict that not everyone values societal aspects and involve non-scientists in peer review to the same extent and in the same way, (3) this may affect the way peer review panels discuss these aspects, and (4) peer review is a challenging group process that could accommodate other rationalities in order to prevent bias towards specific scientific criteria. To disentangle these aspects, we have carried out an observational study of a diverse range of peer review panel sessions using a fixed set of criteria focusing on scientific quality and societal relevance.

3.1 Research assessment at ZonMw and the DHF

The peer review approach and the criteria used by both the DHF and ZonMw are largely comparable. Funding programmes at both organisations start with a brochure describing the purposes, goals, and conditions for research applications, as well as the assessment procedure and criteria. Both organisations apply a two-stage process. In the first phase, reviewers are asked to write a peer review. In the second phase, a panel reviews the application based on the advice of the written reviews and the applicants’ rebuttal. The panels advise the board on eligible proposals for funding including a ranking of these proposals.

There are also differences between the two organisations. At ZonMw, the criteria for societal relevance and quality are operationalised in the ZonMw Framework Fostering Responsible Research Practices ( Reijmerink and Oortwijn 2017 ). This contributes to a common operationalisation of both quality and societal relevance on the level of individual funding programmes. Important elements in the criteria for societal relevance are, for instance, stakeholder participation, (applying) holistic health concepts, and the added value of knowledge in practice, policy, and education. The framework was developed to optimise the funding process from the perspective of knowledge utilisation and includes concepts like productive interactions and Open Science. It is part of the ZonMw Impact Assessment Framework aimed at guiding the planning, monitoring, and evaluation of funding programmes ( Reijmerink et al. 2020 ). At ZonMw, interdisciplinary panels are set up specifically for each funding programme. Panels are interdisciplinary in nature with academics of a wide range of disciplines and often include non-academic peers, like policymakers, health-care professionals, and patients.

At the DHF, the criteria for scientific quality and societal relevance, at the DHF called societal impact , find their origin in the strategy report of the advisory committee CardioVascular Research Netherlands ( Reneman et al. 2010 ). This report forms the basis of the DHF research policy focusing on scientific and societal impact by creating national collaborations in thematic, interdisciplinary research programmes (the so-called consortia) connecting preclinical and clinical expertise into one concerted effort. An International Scientific Advisory Committee (ISAC) was established to assess these thematic consortia. This panel consists of international scientists, primarily with expertise in the broad cardiovascular research field. The DHF criteria for societal impact were redeveloped in 2013 in collaboration with their CSQ. This panel assesses and advises on the societal aspects of proposed studies. The societal impact criteria include the relevance of the health-care problem, the expected contribution to a solution, attention to the next step in science and towards implementation in practice, and the involvement of and interaction with (end-)users of research (R.Y. Abma-Schouten and I.M. Meijer, unpublished data). Peer review panels for consortium funding are generally composed of members of the ISAC, members of the CSQ, and ad hoc panel members relevant to the specific programme. CSQ members often have a pre-meeting before the final panel meetings to prepare and empower CSQ representatives participating in the peer review panel.

3.2 Selection of funding programmes

To compare and evaluate observations between the two organisations, we selected funding programmes that were relatively comparable in scope and aims. The criteria were (1) a translational and/or clinical objective and (2) the selection procedure consisted of review panels that were responsible for the (final) relevance and quality assessment of grant applications. In total, we selected eight programmes: four at each organisation. At the DHF, two programmes were chosen in which the CSQ did not participate to better disentangle the role of the panel composition. For each programme, we observed the selection process varying from one session on one day (taking 2–8 h) to multiple sessions over several days. Ten sessions were observed in total, of which eight were final peer review panel meetings and two were CSQ meetings preparing for the panel meeting.

After management approval for the study in both organisations, we asked programme managers and panel chairpersons of the programmes that were selected for their consent for observation; none refused participation. Panel members were, in a passive consent procedure, informed about the planned observation and anonymous analyses.

To ensure the independence of this evaluation, the selection of the grant programmes, and peer review panels observed, was at the discretion of the project team of this study. The observations and supervision of the analyses were performed by the senior author not affiliated with the funders.

3.3 Observation matrix

Given the lack of a common operationalisation for scientific quality and societal relevance, we decided to use an observation matrix with a fixed set of detailed aspects as a gold standard to score the brochures, the assessment forms, and the arguments used in panel meetings. The matrix used for the observations of the review panels was based upon and adapted from a ‘grant committee observation matrix’ developed by Van Arensbergen. The original matrix informed a literature review on the selection of talent through peer review and the social dynamics in grant review committees ( van Arensbergen et al. 2014b ). The matrix includes four categories of aspects that operationalise societal relevance, scientific quality, committee, and applicant (see  Table 1 ). The aspects of scientific quality and societal relevance were adapted to fit the operationalisation of scientific quality and societal relevance of the organisations involved. The aspects concerning societal relevance were derived from the CSQ criteria, and the aspects concerning scientific quality were based on the scientific criteria of the first panel observed. The four argument types related to the panel were kept as they were. This committee-related category reflects statements that are related to the personal experience or preference of a panel member and can be seen as signals for bias. This category also includes statements that compare a project with another project without further substantiation. The three applicant-related arguments in the original observation matrix were extended with a fourth on social skills in communication with society. We added health technology assessment (HTA) because one programme specifically focused on this aspect. We tested our version of the observation matrix in pilot observations.

Table 1. Aspects included in the observation matrix and examples of arguments.

3.4 Observations

Data were primarily collected through observations. Our observations of review panel meetings were non-participatory: the observer and goal of the observation were introduced at the start of the meeting, without further interactions during the meeting. To aid in the processing of observations, some meetings were audiotaped (sound only). Presentations or responses of applicants were not noted and were not part of the analysis. The observer made notes on the ongoing discussion and scored the arguments while listening. One meeting was not attended in person and only observed and scored by listening to the audiotape recording. Because this made identification of the panel members unreliable, this panel meeting was excluded from the analysis of the third research question on how arguments used differ between panel members with different perspectives.

3.5 Grant programmes and the assessment criteria

We gathered and analysed all brochures and assessment forms used by the review panels in order to answer our second research question on the correspondence of arguments used with the formal criteria. Several programmes consisted of multiple grant calls: in that case, the specific call brochure was gathered and analysed, not the overall programme brochure. Additional documentation (e.g. instructional presentations at the start of the panel meeting) was not included in the document analysis. All included documents were marked using the aforementioned observation matrix. The panel-related arguments were not used because this category reflects the personal arguments of panel members that are not part of brochures or instructions. To avoid potential differences in scoring methods, two of the authors independently scored half of the documents that were checked and validated afterwards by the other. Differences were discussed until a consensus was reached.

3.6 Panel composition

In order to answer the third research question, background information on panel members was collected. We categorised the panel members into five common types of panel members: scientific, clinical scientific, health-care professional/clinical, patient, and policy. First, a list of all panel members was composed including their scientific and professional backgrounds and affiliations. The theoretical notion that reviewers represent different types of users of research and therefore potential impact domains (academic, social, economic, and cultural) was leading in the categorisation ( Meijer 2012 ; Spaapen and Van Drooge 2011 ). Because clinical researchers play a dual role in both advancing research as a fellow academic and as a user of the research output in health-care practice, we divided the academic members into two categories of non-clinical and clinical researchers. Multiple types of professional actors participated in each review panel. These were divided into two groups for the analysis: health-care professionals (without current academic activity) and policymakers in the health-care sector. No representatives of the private sector participated in the observed review panels. From the public domain, (at-risk) patients and patient representatives were part of several review panels. Only publicly available information was used to classify the panel members. Members were assigned to one category only: categorisation took place based on the specific role and expertise for which they were appointed to the panel.

In two of the four DHF programmes, the assessment procedure included the CSQ. In these two programmes, representatives of this CSQ participated in the scientific panel to articulate the findings of the CSQ meeting during the final assessment meeting. Two grant programmes were assessed by a review panel with solely (clinical) scientific members.

3.7 Analysis

Data were processed using ATLAS.ti 8 and Microsoft Excel 2010 to produce descriptive statistics. All observed arguments were coded and given a randomised identification code for the panel member using that particular argument. The number of times an argument type was observed was used as an indicator for the relative importance of that argument in the appraisal of proposals. With this approach, a practical and reproducible method for research funders to evaluate the effect of policy changes on peer review was developed. If codes or notes were unclear, post-observation validation of codes was carried out based on observation matrix notes. Arguments that were noted by the observer but could not be matched with an existing code were first coded as a ‘non-existing’ code, and these were resolved by listening back to the audiotapes. Arguments that could not be assigned to a panel member were assigned a ‘missing panel member’ code. A total of 4.7 per cent of all codes were assigned a ‘missing panel member’ code.
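Purely as an illustration of this kind of descriptive tabulation (the study itself used ATLAS.ti and Excel), a frequency count of argument types per reviewer role could be produced along the following lines; the roles and argument types shown are hypothetical stand-ins for the coded categories.

```python
import pandas as pd

# Hypothetical export of coded observations: one row per argument voiced in a meeting
codes = pd.DataFrame({
    "member_role": ["scientist", "patient", "scientist", "clinician", "policymaker", "scientist"],
    "argument_type": ["scientific", "societal", "scientific", "societal", "societal", "applicant"],
})

# Frequency of each argument type per reviewer role; the count serves as a proxy
# for the relative weight of that argument type in the discussion
counts = codes.groupby(["member_role", "argument_type"]).size().unstack(fill_value=0)
print(counts)
```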

After the analyses, two meetings were held to reflect on the results: one with the CSQ and the other with the programme coordinators of both organisations. The goal of these meetings was to improve our interpretation of the findings, disseminate the results derived from this project, and identify topics for further analyses or future studies.

3.8 Limitations

Our study focuses on the final phase of the peer review process of research applications in a real-life setting. Our design, a non-participant observation of peer review panels, also introduced several challenges ( Liu and Maitlis 2010 ).

First, the independent review phase or pre-application phase was not part of our study. We therefore could not assess to what extent attention to certain aspects of scientific quality or societal relevance and impact in the review phase influenced the topics discussed during the meeting.

Second, the most important challenge of overt non-participant observation is the observer effect: the danger of causing reactivity in those under study. We believe that the consequences of this effect for our conclusions were limited because panellists are used to external observers in the meetings of these two funders. The observer briefly explained the goal of the study in general terms during the introductory round of the panel. The observer sat as unobtrusively as possible and avoided reacting to discussions. As in previous panel observations, we found that the presence of an observer faded into the background during the meeting ( Roumbanis 2021a ). However, a limited observer effect can never be entirely excluded.

Third, our decision to score only the arguments raised, and not the responses of the applicant or information on the content of the proposals, has both advantages and drawbacks. With this approach, we could assure the anonymity of the grant procedures reviewed, the applicants and proposals, the panels, and the individual panellists. This was an important condition for the funders involved. We took the frequency of arguments used as a proxy for the relative importance of an argument in decision-making, which undeniably has its caveats. Our data collection approach limits more in-depth reflection on which arguments were decisive in decision-making and on group dynamics during the interaction with the applicants, as non-verbal and non-content-related comments were not captured in this study.

Fourth, despite this being one of the largest observational studies on the peer review assessment of grant applications, with ten panels observed in eight grant programmes, many variables, both within and beyond our view, might explain differences in the arguments used. Examples of 'confounding' variables are the many variations in panel composition, the differences in objectives of the programmes, and the range of the funding programmes. Our study should therefore be seen as exploratory, which warrants caution in drawing conclusions.

4.1 Overview of observational data

The grant programmes included in this study reflected a broad range of biomedical and health funding programmes, ranging from fellowship grants to translational research and applied health research. All formal documents available to the applicants and to the review panel were retrieved for both ZonMw and the DHF. In total, eighteen documents corresponding to the eight grant programmes were studied. The number of proposals assessed per programme varied from three to thirty-three. The duration of the panel meetings varied between two hours and two consecutive days. Together, this resulted in a large spread in the number of total arguments used in an individual meeting and in a grant programme as a whole. In the shortest meeting, 49 arguments were observed versus 254 in the longest, with a mean of 126 arguments per meeting and on average 15 arguments per proposal.

Overall, we found consistency between how criteria were operationalised in the grant programmes' brochures and in the assessment forms of the review panels. At the same time, because the number of elements included in the observation matrix is limited, there was considerable diversity in the arguments that fell within each aspect (see examples in  Table 1 ). Some of these differences could possibly be explained by differences in the language used and the level of detail in the observation matrix, the brochure, and the panel's instructions. This was especially the case for the applicant-related aspects, in which the observation matrix was more detailed than the text in the brochure and assessment forms.

In interpreting our findings, it is important to take into account that, even though our data were largely complete and the observation matrix matched well with the description of the criteria in the brochures and assessment forms, there was large diversity in the type and number of arguments used and in the number of proposals assessed in the grant programmes included in our study.

4.2 Wide range of arguments used by panels: scientific arguments used most

For our first research question, we explored the number and type of arguments used in the panel meetings. Figure 1 provides an overview of the arguments used. Scientific quality was discussed most. The number of times the feasibility of the aims was discussed clearly stands out in comparison to all other arguments. Also, the match between the science and the problem studied and the plan of work were frequently discussed aspects of scientific quality. International competitiveness of the proposal was discussed the least of all five scientific arguments.

Figure 1. The number of arguments used in panel meetings.

Attention was paid to societal relevance and impact in the panel meetings of both organisations. Yet, the language used differed somewhat between organisations. The contribution to a solution and the next step in science were the most often used societal arguments. At ZonMw, the impact of the health-care problem studied and the activities towards partners were less frequently discussed than the other three societal arguments. At the DHF, the five societal arguments were used equally often.

With the exception of the fellowship programme meeting, applicant-related arguments were not often used. The fellowship panel used arguments related to the applicant and to scientific quality about equally often. Committee-related arguments were also rarely used in the majority of the eight grant programmes observed. In three out of the ten panel meetings, one or two arguments were observed that related to personal experience with the applicant or their direct network. In seven out of ten meetings, statements were observed that were unsubstantiated or were explicitly announced as reflecting a personal preference. The frequency varied between one and seven statements (sixteen in total), which is low in comparison to the other arguments used (see  Fig. 1 for examples).

4.3 Use of arguments varied strongly per panel meeting

The balance in the use of scientific and societal arguments varied strongly per grant programme, panel, and organisation. At ZonMw, two meetings had approximately an equal balance in societal and scientific arguments. In the other two meetings, scientific arguments were used twice to four times as often as societal arguments. At the DHF, three types of panels were observed. Different patterns in the relative use of societal and scientific arguments were observed for each of these panel types. In the two CSQ-only meetings the societal arguments were used approximately twice as often as scientific arguments. In the two meetings of the scientific panels, societal arguments were infrequently used (between zero and four times per argument category). In the combined societal and scientific panel meetings, the use of societal and scientific arguments was more balanced.

4.4 Match of arguments used by panels with the assessment criteria

In order to answer our second research question, we looked into the relation of the arguments used to the formal criteria. We observed that a broader range of arguments was often used than the way the criteria were described in the brochure and assessment instructions. However, arguments related to aspects that were consistently included in the brochure and instructions seemed to be discussed more frequently than in programmes where those aspects were not consistently included or were not included at all. Although the match of the science with the health-care problem and the background and reputation of the applicant were not always made explicit in the brochure or instructions, they were discussed in many panel meetings. Supplementary Fig. S1 provides a visualisation of how the arguments used differ between programmes in which those aspects were or were not consistently included in the brochure and instruction forms.

4.5 Two-thirds of the assessment was driven by scientific panel members

To answer our third question, we looked into the differences in arguments used between panel members representing a scientific, clinical scientific, professional, policy, or patient perspective. In each research programme, the majority of panellists had a scientific background: thirty-five members had a scientific background, thirty-four a clinical scientific background, twenty a health professional/clinical background, eight represented a policy perspective, and fifteen a patient perspective. Of the total number of arguments (1,097), two-thirds were made by members with a scientific or clinical scientific perspective. Members with a scientific background engaged most actively in the discussion, with a mean of twelve arguments per member. Clinical scientists and health-care professionals each participated with a mean of nine arguments, and members with a policy or patient perspective put forward the fewest arguments on average, namely seven and eight, respectively. Figure 2 provides a complete overview of the total and mean number of arguments used by the different disciplines in the various panels.

Figure 2. The total and mean number of arguments displayed per subgroup of panel members.

4.6 Diverse use of arguments by panellists, but background matters

In meetings of both organisations, we observed a diverse use of arguments by the panel members. Yet, the use of arguments varied depending on the background of the panel member (see  Fig. 3 ). Those with a scientific and clinical scientific perspective used primarily scientific arguments. As could be expected, health-care professionals and patients used societal arguments more often.

Figure 3. The use of arguments differentiated by panel member background.

Further breakdown of arguments across backgrounds showed clear differences in the use of scientific arguments between the different disciplines of panellists. Scientists and clinical scientists discussed the feasibility of the aims more than twice as often as their second most frequently used element of scientific quality, the match between the science and the problem studied . Patients and members with a policy or health professional background put forward fewer but more varied scientific arguments.

Patients and health-care professionals accounted for approximately half of the societal arguments used, despite being a much smaller part of the panel’s overall composition. In other words, members with a scientific perspective were less likely to use societal arguments. The relevance of the health-care problem studied, activities towards partners , and arguments related to participation and diversity were not used often by this group. Patients often used arguments related to patient participation and diversity and activities towards partners , although the frequency of the use of the latter differed per organisation.

The majority of the applicant-related arguments were put forward by scientists, including clinical scientists. Committee-related arguments were very rare and are therefore not differentiated by panel member background, except for comments comparing a proposal with other applications. These comparisons were mainly put forward by panel members with a scientific background. HTA-related arguments were often used by panel members with a scientific perspective; panel members with other perspectives rarely used them (see Supplementary Figs S2–S4 for a visual presentation of the differences between panel members on all aspects included in the matrix).

5.1 Explanations for arguments used in panels

Our observations show that most arguments for scientific quality were often used. However, except for the feasibility , the frequency of arguments used varied strongly between the meetings and between the individual proposals that were discussed. The fact that most arguments were not consistently used is not surprising given the results from previous studies that showed heterogeneity in grant application assessments and low consistency in comments and scores by independent reviewers ( Abdoul et al. 2012 ; Pier et al. 2018 ). In an analysis of written assessments on nine observed dimensions, no dimension was used in more than 45 per cent of the reviews ( Hartmann and Neidhardt 1990 ).

There are several possible explanations for this heterogeneity. Roumbanis (2021a) described how being responsive to the different challenges in the proposals and to the points of attention arising from the written assessments influenced discussion in panels. Also, when a disagreement arises, more time is spent on discussion ( Roumbanis 2021a ). One could infer that unambiguous, and thus undebated, aspects might remain largely undetected in our study. We believe, however, that the main points relevant to the assessment will not remain entirely unmentioned, because most panels in our study started the discussion with a short summary of the proposal, the written assessment, and the rebuttal. Lamont (2009) , however, points out that opening statements serve more goals than merely decision-making. They can also increase the credibility of the panellist by showing their comprehension and balanced assessment of an application. We can therefore not entirely disentangle whether the arguments observed most often were also considered most important or decisive, or whether they were simply the topics that led to the most disagreement.

An interesting difference with Roumbanis’ study was the available discussion time per proposal. In our study, most panels handled a limited number of proposals, allowing for longer discussions in comparison with the often 2-min time frame that Roumbanis (2021b) described, potentially contributing to a wider range of arguments being discussed. Limited time per proposal might also limit the number of panellists contributing to the discussion per proposal ( De Bont 2014 ).

5.2 Reducing heterogeneity by improving operationalisation and the consequent use of assessment criteria

We found that the language used for the operationalisation of the assessment criteria in programme brochures and in the observation matrix was much more detailed than in the instruction for the panel, which was often very concise. The exercise also illustrated that many terms were used interchangeably.

This was especially true for the applicant-related aspects. Several panels discussed how talent should be assessed. This confusion is understandable considering the changing values in research and its assessment ( Moher et al. 2018 ) and the fact that the instruction of the funders was very concise. For example, it was not made explicit whether the individual or the team should be assessed. Van Arensbergen et al. (2014b) described how, in grant allocation processes, talent is generally assessed using a limited set of characteristics. More objective and quantifiable outputs often prevailed at the expense of recognising and rewarding a broad variety of skills and traits combining professional, social, and individual capital ( DORA 2013 ).

In addition, committee-related arguments, like personal experiences with the applicant or their institute, were rarely used in our study. Comparisons between proposals were sometimes made without further argumentation, mainly by scientific panel members. This was especially pronounced in one (fellowship) grant programme with a high number of proposals. In this programme, the panel meeting concentrated on quickly comparing the quality of the applicants and of the proposals based on the reviewers' judgement, instead of a more in-depth discussion of the different aspects of the proposals. Because the review phase was not part of this study, the question of which aspects were used for the assessment of the proposals in this panel remains partially unanswered. However, weighing and comparing proposals on different aspects and with different inputs is a core element of scientific peer review, both in the review of papers and in the review of grants ( Hirschauer 2010 ). The large role of scientific panel members in comparing proposals is therefore not surprising.

One could anticipate that more consistent language in operationalising the criteria may lead to more clarity for both applicants and panellists and to more consistency in the assessment of research proposals. The trend in our observations was that arguments were used less often when the related criteria were not, or not consistently, included in the brochure and panel instructions. It remains, however, challenging to disentangle the influence of the formal definitions of criteria on the arguments used. Previous studies also encountered difficulties in studying the role of the formal instruction in peer review but concluded that this role is relatively limited ( Langfeldt 2001 ; Reinhart 2010 ).

The lack of a clear operationalisation of criteria can contribute to heterogeneity in peer review, as many scholars have found that assessors differ in their conceptualisation of good science and in the importance they attach to various aspects of research quality and societal relevance ( Abdoul et al. 2012 ; Geurts 2016 ; Scholten et al. 2018 ; Van den Brink et al. 2016 ). The large variation and absence of a gold standard in the interpretation of scientific quality and societal relevance affect the consistency of peer review. As a consequence, it is challenging to systematically evaluate and improve peer review in order to fund the research that contributes most to science and society. To contribute to responsible research and innovation, it is therefore important that funders invest in a more consistent and conscientious peer review process ( Curry et al. 2020 ; DORA 2013 ).

A common conceptualisation of scientific quality and societal relevance and impact could improve the alignment between views on good scientific conduct, programmes’ objectives, and the peer review in practice. Such a conceptualisation could contribute to more transparency and quality in the assessment of research. By involving panel members from all relevant backgrounds, including the research community, health-care professionals, and societal actors, in a better operationalisation of criteria, more inclusive views of good science can be implemented more systematically in the peer review assessment of research proposals. The ZonMw Framework Fostering Responsible Research Practices is an example of an initiative aiming to support standardisation and integration ( Reijmerink et al. 2020 ).

Given the lack of a common definition or conceptualisation of scientific quality and societal relevance, an important choice in our study was to use a fixed set of detailed aspects of two important criteria as a gold standard to score the brochures, the panel instructions, and the arguments used by the panels. This approach proved helpful in disentangling the different components of scientific quality and societal relevance. Having said that, it is important not to oversimplify the causes of heterogeneity in peer review, because these substantive arguments are not independent of non-cognitive, emotional, or social aspects ( Lamont and Guetzkow 2016 ; Reinhart 2010 ).

5.3 Do more diverse panels contribute to a broader use of arguments?

Both funders participating in our study have an outspoken public mission that requires sufficient attention to societal aspects in assessment processes. In reality, as observed in several panels, the main focus of peer review meetings is on scientific arguments. In addition to the possible explanations given earlier, the composition of the panel might play a role in explaining the arguments used in panel meetings. Our results have shown that health-care professionals and patients bring in more societal arguments than scientists, including those who are also clinicians. It is, however, not that simple: in the more diverse panels, panel members, regardless of their backgrounds, used more societal arguments than in the less diverse panels.

Observing ten panel meetings was sufficient to explore differences in arguments used by panel members with different backgrounds. The pattern of (primarily) scientific arguments being raised by panels with mainly scientific members is not surprising: after all, assessing the scientific content of grant proposals is their main task and fits their competencies. As such, one could argue, depending on how one justifies the relationship between science and society, that health-care professionals and patients might be better suited to assess the value for potential users of research results. Scientific panel members and clinical scientists in our study used fewer arguments that reflect on opening up and connecting science directly to others who can bring it further (be they industry, health-care professionals, or other stakeholders). Patients filled this gap, since these two types of arguments were the most prevalent types they put forward. Making an active connection with society apparently requires a broader, more diverse panel before scientists direct their attention to more societal arguments. Evident from our observations is that in panels with patients and health-care professionals, their presence seemed to increase the attention placed on arguments beyond the scientific ones by all panel members, including scientists. This conclusion is congruent with the observation that there was a more equal balance in the use of societal and scientific arguments in the scientific panels in which the CSQ participated. This illustrates that opening up peer review panels to non-scientific members creates an opportunity to focus on both the contribution and the integrative rationality ( Glerup and Horst 2014 ) or, in other words, to allow productive interactions between scientific and non-scientific actors. This corresponds with previous research suggesting that, with regard to societal aspects, reviews from mixed panels were broader and richer ( Luo et al. 2021 ). In panels with non-scientific experts, more emphasis was placed on the role of the proposed research process in increasing the likelihood of societal impact than on the causal importance of scientific excellence for broader impacts. This is in line with the findings that panels with more disciplinary diversity, in range and also by including generalist experts, applied more versatile styles to reach consensus and paid more attention to relevance and pragmatic value ( Huutoniemi 2012 ).

Our observations further illustrate that patients and health-care professionals were less vocal in panels than (clinical) scientists and were in the minority. This could reflect their social role and lower perceived authority in the panel. Several guides are available for funders to stimulate the equal participation of patients in science, and these guides are also applicable to their involvement in peer review panels. Measures include support and training to help prepare patients for their participation in deliberations with renowned scientists and explicitly addressing power differences ( De Wit et al. 2016 ). Panel chairs and programme officers have to set and supervise the conditions for the functioning of both the individual panel members and the panel as a whole ( Lamont 2009 ).

5.4 Suggestions for future studies

In future studies, it is important to further disentangle the role of the operationalisation and appraisal of assessment criteria in reducing heterogeneity in the arguments used by panels. More controlled experimental settings would be a valuable addition to the currently mainly observational methodologies and could help disentangle some of the cognitive and social factors that influence the functioning and argumentation of peer review panels. Reusing data from the panel observations and the data on the written reports could also provide a starting point for a bottom-up approach to create a more consistent and shared conceptualisation and operationalisation of assessment criteria.

To further understand the effects of opening up review panels to non-scientific peers, it is valuable to compare the role of diversity and interdisciplinarity in solely scientific panels versus panels that also include non-scientific experts.

In future studies, differences between domains and types of research should also be addressed. We hypothesise that biomedical and health research is perhaps better suited to the inclusion of non-scientific peers in panels than other research domains. For example, it would be valuable to better understand how potentially relevant users can be adequately identified in other research fields and to what extent non-academics can contribute to assessing the possible value of, especially early or blue-sky, research.

The goal of our study was to explore in practice which arguments regarding the main criteria of scientific quality and societal relevance were used by peer review panels of biomedical and health research funding programmes. We showed that there is wide diversity in the number and range of arguments used, but three main scientific aspects were discussed most frequently: is the approach feasible, does the science match the problem, and is the work plan scientifically sound? Nevertheless, these scientific aspects were accompanied by a significant amount of discussion of societal aspects, of which the contribution to a solution was the most prominent. In comparison with scientific panellists, non-scientific panellists, such as health-care professionals, policymakers, and patients, often used a wider range of arguments and different societal arguments. Even more striking was that, even though non-scientific peers were often outnumbered and less vocal in panels, scientists also used a wider range of arguments when non-scientific peers were present.

It is relevant that two health research funders collaborated in the current study to reflect on and improve peer review in research funding. There are few studies published that describe live observations of peer review panel meetings. Many studies focus on alternatives for peer review or reflect on the outcomes of the peer review process, instead of reflecting on the practice and improvement of peer review assessment of grant proposals. Privacy and confidentiality concerns of funders also contribute to the lack of information on the functioning of peer review panels. In this study, both organisations were willing to participate because of their interest in research funding policies in relation to enhancing the societal value and impact of science. The study provided them with practical suggestions, for example, on how to improve the alignment in language used in programme brochures and instructions of review panels, and contributed to valuable knowledge exchanges between organisations. We hope that this publication stimulates more research funders to evaluate their peer review approach in research funding and share their insights.

For a long time, research funders relied solely on scientists for designing and executing peer review of research proposals, thereby delegating responsibility for the process. Although review panels have a discretionary authority, it is important that funders set and supervise the process and the conditions. We argue that one of these conditions should be the diversification of peer review panels and opening up panels for non-scientific peers.

Supplementary material is available at Science and Public Policy online.

Details of the data and information on how to request access are available from the first author.

Joey Gijbels and Wendy Reijmerink are employed by ZonMw. Rebecca Abma-Schouten is employed by the Dutch Heart Foundation and, as an external PhD candidate, is affiliated with the Centre for Science and Technology Studies, Leiden University.

A special thanks to the panel chairs and programme officers of ZonMw and the DHF for their willingness to participate in this project. We thank Diny Stekelenburg, an internship student at ZonMw, for her contributions to the project. Our sincerest gratitude to Prof. Paul Wouters, Sarah Coombs, and Michiel van der Vaart for proofreading and their valuable feedback. Finally, we thank the editors and anonymous reviewers of Science and Public Policy for their thorough and insightful reviews and recommendations. Their contributions are recognisable in the final version of this paper.

Abdoul   H. , Perrey   C. , Amiel   P. , et al.  ( 2012 ) ‘ Peer Review of Grant Applications: Criteria Used and Qualitative Study of Reviewer Practices ’, PLoS One , 7 : 1 – 15 .


Abma-Schouten   R. Y. ( 2017 ) ‘ Maatschappelijke Kwaliteit van Onderzoeksvoorstellen ’, Dutch Heart Foundation .

Alla   K. , Hall   W. D. , Whiteford   H. A. , et al.  ( 2017 ) ‘ How Do We Define the Policy Impact of Public Health Research? A Systematic Review ’, Health Research Policy and Systems , 15 : 84.

Benedictus   R. , Miedema   F. , and Ferguson   M. W. J. ( 2016 ) ‘ Fewer Numbers, Better Science ’, Nature , 538 : 453 – 4 .

Chalmers   I. , Bracken   M. B. , Djulbegovic   B. , et al.  ( 2014 ) ‘ How to Increase Value and Reduce Waste When Research Priorities Are Set ’, The Lancet , 383 : 156 – 65 .

Curry   S. , De Rijcke   S. , Hatch   A. , et al.  ( 2020 ) ‘ The Changing Role of Funders in Responsible Research Assessment: Progress, Obstacles and the Way Ahead ’, RoRI Working Paper No. 3, London : Research on Research Institute (RoRI) .

De Bont   A. ( 2014 ) ‘ Beoordelen Bekeken. Reflecties op het Werk van Een Programmacommissie van ZonMw ’, ZonMw .

De Rijcke   S. , Wouters   P. F. , Rushforth   A. D. , et al.  ( 2016 ) ‘ Evaluation Practices and Effects of Indicator Use—a Literature Review ’, Research Evaluation , 25 : 161 – 9 .

De Wit   A. M. , Bloemkolk   D. , Teunissen   T. , et al.  ( 2016 ) ‘ Voorwaarden voor Succesvolle Betrokkenheid van Patiënten/cliënten bij Medisch Wetenschappelijk Onderzoek ’, Tijdschrift voor Sociale Gezondheidszorg , 94 : 91 – 100 .

Del Carmen Calatrava Moreno   M. , Warta   K. , Arnold   E. , et al.  ( 2019 ) Science Europe Study on Research Assessment Practices . Technopolis Group Austria .


Demicheli   V. and Di Pietrantonj   C. ( 2007 ) ‘ Peer Review for Improving the Quality of Grant Applications ’, Cochrane Database of Systematic Reviews , 2 : MR000003.

Den Oudendammer   W. M. , Noordhoek   J. , Abma-Schouten   R. Y. , et al.  ( 2019 ) ‘ Patient Participation in Research Funding: An Overview of When, Why and How Amongst Dutch Health Funds ’, Research Involvement and Engagement , 5 .

Diabetesfonds ( n.d. ) Maatschappelijke Adviesraad < https://www.diabetesfonds.nl/over-ons/maatschappelijke-adviesraad > accessed 18 Sept 2022 .

Dijstelbloem   H. , Huisman   F. , Miedema   F. , et al.  ( 2013 ) ‘ Science in Transition Position Paper: Waarom de Wetenschap Niet Werkt Zoals het Moet, En Wat Daar aan te Doen Is ’, Utrecht : Science in Transition .

Forsyth   D. R. ( 1999 ) Group Dynamics , 3rd edn. Belmont : Wadsworth Publishing Company .

Geurts   J. ( 2016 ) ‘ Wat Goed Is, Herken Je Meteen ’, NRC Handelsblad < https://www.nrc.nl/nieuws/2016/10/28/wat-goed-is-herken-je-meteen-4975248-a1529050 > accessed 6 Mar 2022 .

Glerup   C. and Horst   M. ( 2014 ) ‘ Mapping “Social Responsibility” in Science ’, Journal of Responsible Innovation , 1 : 31 – 50 .

Hartmann   I. and Neidhardt   F. ( 1990 ) ‘ Peer Review at the Deutsche Forschungsgemeinschaft ’, Scientometrics , 19 : 419 – 25 .

Hirschauer   S. ( 2010 ) ‘ Editorial Judgments: A Praxeology of “Voting” in Peer Review ’, Social Studies of Science , 40 : 71 – 103 .

Hughes   A. and Kitson   M. ( 2012 ) ‘ Pathways to Impact and the Strategic Role of Universities: New Evidence on the Breadth and Depth of University Knowledge Exchange in the UK and the Factors Constraining Its Development ’, Cambridge Journal of Economics , 36 : 723 – 50 .

Huutoniemi   K. ( 2012 ) ‘ Communicating and Compromising on Disciplinary Expertise in the Peer Review of Research Proposals ’, Social Studies of Science , 42 : 897 – 921 .

Jasanoff   S. ( 2011 ) ‘ Constitutional Moments in Governing Science and Technology ’, Science and Engineering Ethics , 17 : 621 – 38 .

Kolarz   P. , Arnold   E. , Farla   K. , et al.  ( 2016 ) Evaluation of the ESRC Transformative Research Scheme . Brighton : Technopolis Group .

Lamont   M. ( 2009 ) How Professors Think : Inside the Curious World of Academic Judgment . Cambridge : Harvard University Press .

Lamont   M. Guetzkow   J. ( 2016 ) ‘How Quality Is Recognized by Peer Review Panels: The Case of the Humanities’, in M.   Ochsner , S. E.   Hug , and H.-D.   Daniel (eds) Research Assessment in the Humanities , pp. 31 – 41 . Cham : Springer International Publishing .

Lamont   M. Huutoniemi   K. ( 2011 ) ‘Comparing Customary Rules of Fairness: Evaluative Practices in Various Types of Peer Review Panels’, in C.   Charles   G.   Neil and L.   Michèle (eds) Social Knowledge in the Making , pp. 209–32. Chicago : The University of Chicago Press .

Langfeldt   L. ( 2001 ) ‘ The Decision-making Constraints and Processes of Grant Peer Review, and Their Effects on the Review Outcome ’, Social Studies of Science , 31 : 820 – 41 .

——— ( 2006 ) ‘ The Policy Challenges of Peer Review: Managing Bias, Conflict of Interests and Interdisciplinary Assessments ’, Research Evaluation , 15 : 31 – 41 .

Lee   C. J. , Sugimoto   C. R. , Zhang   G. , et al.  ( 2013 ) ‘ Bias in Peer Review ’, Journal of the American Society for Information Science and Technology , 64 : 2 – 17 .

Liu   F. Maitlis   S. ( 2010 ) ‘Nonparticipant Observation’, in A. J.   Mills , G.   Durepos , and E.   Wiebe (eds) Encyclopedia of Case Study Research , pp. 609 – 11 . Los Angeles : SAGE .

Luo   J. , Ma   L. , and Shankar   K. ( 2021 ) ‘ Does the Inclusion of Non-academic Reviewers Make Any Difference for Grant Impact Panels? ’, Science & Public Policy , 48 : 763 – 75 .

Luukkonen   T. ( 2012 ) ‘ Conservatism and Risk-taking in Peer Review: Emerging ERC Practices ’, Research Evaluation , 21 : 48 – 60 .

Macleod   M. R. , Michie   S. , Roberts   I. , et al.  ( 2014 ) ‘ Biomedical Research: Increasing Value, Reducing Waste ’, The Lancet , 383 : 101 – 4 .

Meijer   I. M. ( 2012 ) ‘ Societal Returns of Scientific Research. How Can We Measure It? ’, Leiden : Center for Science and Technology Studies, Leiden University .

Merton   R. K. ( 1968 ) Social Theory and Social Structure , Enlarged edn. [Nachdr.] . New York : The Free Press .

Moher   D. , Naudet   F. , Cristea   I. A. , et al.  ( 2018 ) ‘ Assessing Scientists for Hiring, Promotion, And Tenure ’, PLoS Biology , 16 : e2004089.

Olbrecht   M. and Bornmann   L. ( 2010 ) ‘ Panel Peer Review of Grant Applications: What Do We Know from Research in Social Psychology on Judgment and Decision-making in Groups? ’, Research Evaluation , 19 : 293 – 304 .

Patiëntenfederatie Nederland ( n.d. ) Ervaringsdeskundigen Referentenpanel < https://www.patientenfederatie.nl/zet-je-ervaring-in/lid-worden-van-ons-referentenpanel > accessed 18 Sept 2022.

Pier   E. L. , Brauer   M. , Filut   A. , et al.  ( 2018 ) ‘ Low Agreement among Reviewers Evaluating the Same NIH Grant Applications ’, Proceedings of the National Academy of Sciences , 115 : 2952 – 7 .

Prinses Beatrix Spierfonds ( n.d. ) Gebruikerscommissie < https://www.spierfonds.nl/wie-wij-zijn/gebruikerscommissie > accessed 18 Sep 2022 .

Rathenau Instituut ( 2020 ) Private Non-profit Financiering van Onderzoek in Nederland < https://www.rathenau.nl/nl/wetenschap-cijfers/geld/wat-geeft-nederland-uit-aan-rd/private-non-profit-financiering-van#:∼:text=R%26D%20in%20Nederland%20wordt%20gefinancierd,aan%20wetenschappelijk%20onderzoek%20in%20Nederland > accessed 6 Mar 2022 .

Reneman   R. S. , Breimer   M. L. , Simoons   J. , et al.  ( 2010 ) ‘ De toekomst van het cardiovasculaire onderzoek in Nederland. Sturing op synergie en impact ’, Den Haag : Nederlandse Hartstichting .

Reed   M. S. , Ferré   M. , Marin-Ortega   J. , et al.  ( 2021 ) ‘ Evaluating Impact from Research: A Methodological Framework ’, Research Policy , 50 : 104147.

Reijmerink   W. and Oortwijn   W. ( 2017 ) ‘ Bevorderen van Verantwoorde Onderzoekspraktijken Door ZonMw ’, Beleidsonderzoek Online. accessed 6 Mar 2022.

Reijmerink   W. , Vianen   G. , Bink   M. , et al.  ( 2020 ) ‘ Ensuring Value in Health Research by Funders’ Implementation of EQUATOR Reporting Guidelines: The Case of ZonMw ’, Berlin : REWARD|EQUATOR .

Reinhart   M. ( 2010 ) ‘ Peer Review Practices: A Content Analysis of External Reviews in Science Funding ’, Research Evaluation , 19 : 317 – 31 .

Reinhart   M. and Schendzielorz   C. ( 2021 ) Trends in Peer Review . SocArXiv . < https://osf.io/preprints/socarxiv/nzsp5 > accessed 29 Aug 2022.

Roumbanis   L. ( 2017 ) ‘ Academic Judgments under Uncertainty: A Study of Collective Anchoring Effects in Swedish Research Council Panel Groups ’, Social Studies of Science , 47 : 95 – 116 .

——— ( 2021a ) ‘ Disagreement and Agonistic Chance in Peer Review ’, Science, Technology & Human Values , 47 : 1302 – 33 .

——— ( 2021b ) ‘ The Oracles of Science: On Grant Peer Review and Competitive Funding ’, Social Science Information , 60 : 356 – 62 .

VSNU, NFU, KNAW, NWO and ZonMw ( 2019 ) ‘ Ruimte voor ieders talent (Position Paper) ’, Den Haag . < https://www.universiteitenvannederland.nl/recognitionandrewards/wp-content/uploads/2019/11/Position-paper-Ruimte-voor-ieders-talent.pdf >.

DORA ( 2013 ) San Francisco Declaration on Research Assessment . < https://sfdora.org > accessed 2 Jan 2022 .

Sarewitz   D. and Pielke   R. A.  Jr. ( 2007 ) ‘ The Neglected Heart of Science Policy: Reconciling Supply of and Demand for Science ’, Environmental Science & Policy , 10 : 5 – 16 .

Scholten   W. , Van Drooge   L. , and Diederen   P. ( 2018 ) Excellent Is Niet Gewoon. Dertig Jaar Focus op Excellentie in het Nederlandse Wetenschapsbeleid . The Hague : Rathenau Instituut .

Shapin   S. ( 2008 ) The Scientific Life : A Moral History of a Late Modern Vocation . Chicago : University of Chicago press .

Spaapen   J. and Van Drooge   L. ( 2011 ) ‘ Introducing “Productive Interactions” in Social Impact Assessment ’, Research Evaluation , 20 : 211 – 8 .

Travis   G. D. L. and Collins   H. M. ( 1991 ) ‘ New Light on Old Boys: Cognitive and Institutional Particularism in the Peer Review System ’, Science, Technology & Human Values , 16 : 322 – 41 .

Van Arensbergen   P. and Van den Besselaar   P. ( 2012 ) ‘ The Selection of Scientific Talent in the Allocation of Research Grants ’, Higher Education Policy , 25 : 381 – 405 .

Van Arensbergen   P. , Van der Weijden   I. , and Van den Besselaar   P. V. D. ( 2014a ) ‘ The Selection of Talent as a Group Process: A Literature Review on the Social Dynamics of Decision Making in Grant Panels ’, Research Evaluation , 23 : 298 – 311 .

—— ( 2014b ) ‘ Different Views on Scholarly Talent: What Are the Talents We Are Looking for in Science? ’, Research Evaluation , 23 : 273 – 84 .

Van den Brink , G. , Scholten , W. , and Jansen , T. , eds ( 2016 ) Goed Werk voor Academici . Culemborg : Stichting Beroepseer .

Weingart   P. ( 1999 ) ‘ Scientific Expertise and Political Accountability: Paradoxes of Science in Politics ’, Science & Public Policy , 26 : 151 – 61 .

Wessely   S. ( 1998 ) ‘ Peer Review of Grant Applications: What Do We Know? ’, The Lancet , 352 : 301 – 5 .


70 samples of peer review examples for employees


Peer Review Examples: Powerful Phrases You Can Use

Surabhi

  • October 30, 2023

The blog is tailored for HR professionals looking to set up and improve peer review feedback within their organization. Share the article with your employees as a guide to help them understand how to craft insightful peer review feedback.

Effective employee performance evaluation plays a pivotal role in both personal growth and the maintenance of a productive, harmonious work environment. When considering the comprehensive perspective of 360-degree evaluation, peer review feedback emerges as a crucial element. In this article, we’ll explore the importance of peer review feedback and equip you with powerful peer review examples to facilitate the process.

Peer review feedback is the practice of colleagues and co-workers assessing and providing meaningful feedback on each other’s performance. It is a valuable instrument that helps organizations foster professional development, teamwork, and continuous improvement.

Peoplebox lets you conduct effective peer reviews within minutes. You can customize feedback, use tailored surveys, and seamlessly integrate it with your collaboration tools. It’s a game-changer for boosting development and collaboration in your team.


Why are Peer Reviews Important?

Here are some compelling reasons why peer review feedback is so vital:

Broader Perspective: Peer feedback offers a well-rounded view of an employee’s performance. Colleagues witness their day-to-day efforts and interactions, providing a more comprehensive evaluation compared to just a supervisor’s perspective.

Skill Enhancement: It serves as a catalyst for skill enhancement. Constructive feedback from peers highlights areas of improvement and offers opportunities for skill development.

Encourages Accountability: Peer review fosters a culture of accountability . Knowing that one’s work is subject to review by peers can motivate individuals to perform at their best consistently.

Team Cohesion: It strengthens team cohesion by promoting open and constructive communication. Teams that actively engage in peer feedback often develop a stronger sense of unity and shared purpose.

Fair and Unbiased Assessment: By involving colleagues, peer review helps ensure a fair and unbiased assessment. It mitigates the potential for supervisor bias and personal favoritism in performance evaluations.

Identifying Blind Spots: Peers can identify blind spots that supervisors may overlook. This means addressing issues at an early stage, preventing them from escalating.

Motivation and Recognition: Positive peer feedback can motivate employees and offer well-deserved recognition for their efforts. Acknowledgment from colleagues can be equally, if not more, rewarding than praise from higher-ups.

Now, let us look at the best practices for giving peer feedback in order to leverage its benefits effectively.

Best practices to follow while giving peer feedback

30 Positive Peer Feedback Examples

Now that we’ve established the importance of peer review feedback, the next step is understanding how to use powerful phrases to make the most of this evaluation process. In this section, we’ll equip you with various examples of phrases to use during peer reviews, making the process more confident and effective for you and your team.

Must Read: 60+ Self-Evaluation Examples That Can Make You Shine

Peer Review Example on Work Quality

When it comes to recognizing excellence, quality work is often the first on the list. Here are some peer review examples highlighting the work quality:

  • “Kudos to Sarah for consistently delivering high-quality reports that never fail to impress both clients and colleagues. Her meticulous attention to detail and creative problem-solving truly set the bar high.”
  • “John’s attention to detail and unwavering commitment to excellence make his work a gold standard for the entire team. His consistently high-quality contributions ensure our projects shine.”
  • “Alexandra’s dedication to maintaining the project’s quality standards sets a commendable benchmark for the entire department. Her willingness to go the extra mile is a testament to her work ethic and quality focus.”
  • “Patrick’s dedication to producing error-free code is a testament to his commitment to work quality. His precise coding and knack for bug spotting make his work truly outstanding.”

Peer Review Examples on Competency and Job-Related Skills

Competency and job-related skills set the stage for excellence. Here’s how you can write a peer review highlighting this particular skill set:

  • “Michael’s extensive knowledge and problem-solving skills have been instrumental in overcoming some of our most challenging technical hurdles. His ability to analyze complex issues and find creative solutions is remarkable. Great job, Michael!”
  • “Emily’s ability to quickly grasp complex concepts and apply them to her work is truly commendable. Her knack for simplifying the intricate is a gift that benefits our entire team.”
  • “Daniel’s expertise in data analysis has significantly improved the efficiency of our decision-making processes. His ability to turn data into actionable insights is an invaluable asset to the team.”
  • “Sophie’s proficiency in graphic design has consistently elevated the visual appeal of our projects. Her creative skills and artistic touch add a unique, compelling dimension to our work.”

Peer Review Sample on Leadership Skills

Leadership ability extends beyond a mere title; it’s a living embodiment of vision and guidance, as seen through these exceptional examples:

  • “Under Lisa’s leadership, our team’s morale and productivity have soared, a testament to her exceptional leadership skills and hard work. Her ability to inspire, guide, and unite the team in the right direction is truly outstanding.”
  • “James’s ability to inspire and lead by example makes him a role model for anyone aspiring to be a great leader. His approachability and strong sense of ethics create an ideal leadership model.”
  • “Rebecca’s effective delegation and strategic vision have been the driving force behind our project’s success. Her ability to set clear objectives, give valuable feedback, and empower team members is truly commendable.”
  • “Victoria’s leadership style fosters an environment of trust and innovation, enabling our team to flourish in a great way. Her encouragement of creativity and openness to diverse ideas is truly inspiring.”

Feedback on Teamwork and Collaboration Skills

Teamwork is where individual brilliance becomes collective success. Here are some peer review examples highlighting teamwork:

  • “Mark’s ability to foster a collaborative environment is infectious; his team-building skills unite us all. His open-mindedness and willingness to listen to new ideas create a harmonious workspace.”
  • “Charles’s commitment to teamwork has a ripple effect on the entire department, promoting cooperation and synergy. His ability to bring out the best in the rest of the team is truly remarkable.”
  • “David’s talent for bringing diverse perspectives together enhances the creativity and effectiveness of our group projects. His ability to unite us under a common goal fosters a sense of belonging.”

Peer Review Examples on Professionalism and Work Ethics

Professionalism and ethical conduct define a thriving work culture. Here’s how you can write a peer review highlighting work ethics:

  • “Rachel’s unwavering commitment to deadlines and ethical work practices is a model for us all. Her dedication to punctuality and ethics contributes to a culture of accountability.”
  • “Timothy consistently exhibits the highest level of professionalism, ensuring our clients receive impeccable service. His courtesy and reliability set a standard of excellence.”
  • “Daniel’s punctuality and commitment to deadlines set a standard of professionalism we should all aspire to. His sense of responsibility is an example to us all.”
  • “Olivia’s unwavering dedication to ethical business practices makes her a trustworthy and reliable colleague. Her ethical principles create an atmosphere of trust and respect within our team, leading to a more positive work environment.”

Feedback on Mentoring and Support

Mentoring and support pave the way for future success. Check out these peer review examples focusing on mentoring:

  • “Ben’s dedication to mentoring new team members is commendable; his guidance is invaluable to our junior colleagues. His approachability and patience create an environment where learning flourishes.”
  • “David’s mentorship has been pivotal in nurturing the talents of several team members beyond his direct report, fostering a culture of continuous improvement. His ability to transfer knowledge is truly outstanding.”
  • “Laura’s patient mentorship and continuous support for her colleagues have helped elevate our team’s performance. Her constructive feedback and guidance have made a remarkable difference.”
  • “William’s dedication to knowledge sharing and mentoring is a driving force behind our team’s constant learning and growth. His commitment to others’ development is inspiring.”

Peer Review Examples on Communication Skills

Effective communication is the linchpin of harmonious collaboration. Here are some peer review examples to highlight your peer’s communication skills:

  • “Grace’s exceptional communication skills ensure clarity and cohesion in our team’s objectives. Her ability to articulate complex ideas in a straightforward manner is invaluable.”
  • “Oliver’s ability to convey complex ideas with simplicity greatly enhances our project’s success. His effective communication style fosters a productive exchange of ideas.”
  • “Aiden’s proficiency in cross-team communication ensures that our projects move forward efficiently. His ability to bridge gaps in understanding is truly commendable.”

Peer Review Examples on Time Management and Productivity

Time management and productivity are the engines that drive accomplishments. Here are some peer review examples highlighting time management:

  • “Ella’s time management is nothing short of exemplary; it sets a benchmark for us all. Her efficient task organization keeps our projects on track.”
  • “Robert’s ability to meet deadlines and manage time efficiently significantly contributes to our team’s overall productivity. His time management skills are truly remarkable.”
  • “Sophie’s time management skills are a cornerstone of her impressive productivity, inspiring us all to be more efficient. Her ability to juggle multiple tasks is impressive.”
  • “Liam’s time management skills are key to his consistently high productivity levels. His ability to organize work efficiently is an example for all of us to follow.”

Though these positive feedback examples are valuable, it’s important to recognize that there will be instances when your team needs to convey constructive or negative feedback. In the upcoming section, we’ll present 40 examples of constructive peer review feedback. Keep reading!

40 Constructive Peer Review Feedback

Receiving peer review feedback, whether positive or negative, presents a valuable chance for personal and professional development. Let’s explore some examples your team can employ to provide constructive feedback, even in situations where criticism is necessary, with a focus on maintaining a supportive and growth-oriented atmosphere.

Constructive Peer Review Feedback on Work Quality

  • “I appreciate John’s meticulous attention to detail, which enhances our projects. However, I noticed a few minor typos in his recent report. To maintain an impeccable standard, I’d suggest dedicating more effort to proofreading.”
  • “Sarah’s research is comprehensive, and her insights are invaluable. Nevertheless, for the sake of clarity and brevity, I recommend distilling her conclusions to their most essential points.”
  • “Michael’s coding skills are robust, but for the sake of team collaboration, I’d suggest that he provides more detailed comments within the code to enhance readability and consistency.”
  • “Emma’s creative design concepts are inspiring, yet consistency in her chosen color schemes across projects could further bolster brand recognition.”
  • “David’s analytical skills are thorough and robust, but it might be beneficial to present data in a more reader-friendly format to enhance overall comprehension.”
  • “I’ve observed Megan’s solid technical skills, which are highly proficient. To further her growth, I recommend taking on more challenging projects to expand her expertise.”
  • “Robert’s industry knowledge is extensive and impressive. To become a more well-rounded professional, I’d suggest he focuses on honing his client relationship and communication skills.”
  • “Alice’s project management abilities are impressive, and she’s demonstrated an aptitude for handling complexity. I’d recommend she refines her risk assessment skills to excel further in mitigating potential issues.”
  • “Daniel’s presentation skills are excellent, and his reports are consistently informative. Nevertheless, there is room for improvement in terms of interpreting data and distilling it into actionable insights.”
  • “Laura’s sales techniques are effective, and she consistently meets her targets. I encourage her to invest time in honing her negotiation skills for even greater success in securing deals and partnerships.”

Peer Review Examples on Leadership Skills

  • “I’ve noticed James’s commendable decision-making skills. However, to foster a more inclusive and collaborative environment, I’d suggest he be more open to input from team members during the decision-making process.”
  • “Sophia’s delegation is efficient, and her team trusts her leadership. To further inspire the team, I’d suggest she share credit more generously and acknowledge the collective effort.”
  • “Nathan’s vision and strategic thinking are clear and commendable. Enhancing his conflict resolution skills is suggested to promote a harmonious work environment and maintain team focus.”
  • “Olivia’s accountability is much appreciated. I’d encourage her to strengthen her mentoring approach to develop the team’s potential even further and secure a strong professional legacy.”
  • “Ethan’s adaptability is an asset that brings agility to the team. Cultivating a more motivational leadership style is recommended to uplift team morale and foster a dynamic work environment.”

Peer Review Examples on Teamwork and Collaboration

  • “Ava’s collaboration is essential to the team’s success. She should consider engaging more actively in group discussions to contribute her valuable insights.”
  • “Liam’s teamwork is exemplary, but he could motivate peers further by sharing credit more openly and recognizing their contributions.”
  • “Chloe’s flexibility in teamwork is invaluable. To become an even more effective team player, she might invest in honing her active listening skills.”
  • “William’s contributions to group projects are consistently valuable. To maximize his impact, I suggest participating in inter-departmental collaborations and fostering cross-functional teamwork.”
  • “Zoe’s conflict resolution abilities create a harmonious work environment. I’d advise her to expand her ability to mediate conflicts and find mutually beneficial solutions, further enhancing team cohesion.”

Peer Review Examples on Professionalism and Work Ethics

  • “Noah’s punctuality is an asset to the team. To maintain professionalism consistently, he should adhere to deadlines with unwavering dedication, setting a model example for peers.”
  • “Grace’s integrity and ethical standards are admirable. To enhance professionalism further, I’d recommend that she maintain a higher level of discretion in discussing sensitive matters.”
  • “Logan’s work ethics are strong, and his commitment is evident. I’d suggest he strive for better communication with colleagues regarding project updates, ensuring everyone remains well-informed.”
  • “Sophie’s reliability is appreciated. Maintaining a high level of attention to confidentiality when handling sensitive information would enhance her professionalism.”
  • “Jackson’s organizational skills are top-notch. He should maintain a tidy and organized workspace to uphold that level of professionalism.”

Peer Review Feedback Examples on Mentoring and Support

  • “Aiden provides invaluable mentoring to junior team members. He should consider investing even more time in offering guidance and support to help them navigate their professional journeys effectively.”
  • “Harper’s commendable support to peers is noteworthy. She should develop coaching skills to maximize their growth, ensuring their development matches their potential.”
  • “Samuel’s patience in teaching is a valuable asset. He should tailor support to individual learning styles to enhance their understanding and retention of key concepts.”
  • “Ella’s mentorship plays a pivotal role in the growth of colleagues. She should expand her role in offering guidance for long-term career development, helping them set and achieve their professional goals.”
  • “Benjamin’s exceptional helpfulness fosters a more supportive atmosphere where everyone can thrive. He should encourage team members to seek assistance when needed.”

Peer Review Examples on Communication Skills

  • “Mia’s communication skills are clear and effective. To cater to different audience types, she should use more varied communication channels to convey her message more comprehensively.”
  • “Lucas’s ability to articulate ideas is commendable, and his verbal communication is strong. He should polish his non-verbal communication to ensure that his body language aligns with his spoken message.”
  • “Evelyn’s active listening skills are appreciated and help her build strong relationships with colleagues. She should foster stronger negotiation skills for client interactions, ensuring both parties are satisfied with the outcomes.”
  • “Jack’s presentation skills are excellent. He should elevate written communication to match the quality of verbal presentations, offering more comprehensive and well-structured documentation.”
  • “Avery’s clarity in explaining complex concepts is valued by colleagues. She should develop persuasive communication skills to enhance her ability to secure project proposals and buy-in from stakeholders.”

Feedback on Time Management and Productivity

  • “Isabella’s efficient time management skills contribute to the team’s success. She should explore time-tracking tools to further optimize her workflow and maximize her efficiency.”
  • “Henry’s remarkable productivity sets a high standard. He should maintain a balanced approach to tasks to prevent burnout and ensure sustainable long-term performance.”
  • “Luna’s task prioritization and strategic time allocation are impressive. She should fine-tune them with goal-setting techniques to keep her productivity consistently aligned with objectives.”
  • “Leo’s deadline adherence is commendable. He should incorporate short breaks into his schedule to enhance productivity and focus while continuing to meet his high standards.”
  • “Mila’s multitasking is a valuable skill. She should incorporate regular time-blocking sessions into her daily routine to further enhance her time management.”

Do’s and Don’ts of Peer Review Feedback

Peer review feedback can be extremely helpful for intellectual growth and professional development. Engaging in this process with thoughtfulness and precision can have a profound impact on both the reviewer and the individual seeking feedback.

However, there are certain do’s and don’ts that must be observed to ensure that the feedback is not only constructive but also conducive to a positive and productive learning environment.


The Do’s of Peer Review Feedback:

Empathize and Relate: Put yourself in the shoes of the person receiving the feedback. Recognize the effort and intention behind their work, and frame your comments with sensitivity.

Ground Feedback in Data: Base your feedback on concrete evidence and specific examples from the work being reviewed. This not only adds credibility to your comments but also helps the recipient understand precisely where improvements are needed.

Write Clearly and Concisely: Express your thoughts in a clear and straightforward manner. Avoid jargon or ambiguous language that may lead to misinterpretation.

Offer Constructive Criticism: Focus on providing feedback that can guide improvement. Instead of simply pointing out flaws, suggest potential solutions or alternatives.

Highlight Strengths: Acknowledge and commend the strengths in the work. Recognizing what’s done well can motivate the individual to build on their existing skills.

The Don’ts of Peer Review Feedback:

Avoid Ambiguity: Vague or overly general comments such as “It’s not good” do not provide actionable guidance. Be specific in your observations.

Refrain from Personal Attacks: Avoid making the feedback personal or overly critical. Concentrate on the work and its improvement, not on the individual.

Steer Clear of Subjective Opinions: Base your feedback on objective criteria and avoid opinions that may not be universally applicable.

Resist Overloading with Suggestions: While offering suggestions for improvement is important, overwhelming the recipient with a laundry list of changes can be counterproductive.

Don’t Skip Follow-Up: Once you’ve provided feedback, don’t leave the process incomplete. Follow up and engage in a constructive dialogue to ensure that the feedback is understood and applied effectively.

Remember that the art of giving peer review feedback is a valuable skill; when done right, it can foster professional growth, strengthen collaboration, and inspire continuous improvement. This is where performance management software like Peoplebox comes into play.

Start Collecting Peer Review Feedback On Peoplebox 

In a world where the continuous improvement of your workforce is paramount, harnessing the potential of peer review feedback is a game-changer. Peoplebox offers a suite of powerful features that revolutionize performance management, simplifying the alignment of people with business goals and driving success. Want to experience it firsthand? Take a quick tour of our product.

Take a Product Tour

Through Peoplebox, you can effortlessly establish peer reviews, customizing key aspects such as:

  • Allowing the reviewee to select their peers
  • Seeking managerial approval for chosen peers to mitigate bias
  • Determining the number of peers eligible for review, and more.


And the best part? Peoplebox lets you do all this from right within Slack.


Peer Review Feedback Template That You Can Use Right Away

Still on the fence about using software for performance reviews? Here’s a ready-to-use peer review template to kickstart the peer review process.

Free peer review template on Google form

Download the Free Peer Review Feedback Form here.

If you ever reconsider and are looking for a more streamlined approach to handle 360 feedback, give Peoplebox a shot!

Frequently Asked Questions

Why is peer review feedback important?

Peer review feedback provides a well-rounded view of employee performance, fosters skill enhancement, encourages accountability, strengthens team cohesion, ensures fair assessment, and identifies blind spots early on.

How does peer review feedback benefit employees?

Peer review feedback offers employees valuable insights for growth, helps them identify areas for improvement, provides recognition for their efforts, and fosters a culture of collaboration and continuous learning.

What are some best practices for giving constructive peer feedback?

Best practices include grounding feedback in specific examples, offering both praise and areas for improvement, focusing on actionable suggestions, maintaining professionalism, and ensuring feedback is clear and respectful.

What role does HR software like Peoplebox play in peer review feedback?

HR software like Peoplebox streamlines the peer review process by allowing customizable feedback, integration with collaboration tools like Slack, easy selection of reviewers, and providing templates and tools for effective feedback.

How can HR professionals promote a culture of feedback and openness in their organization?

HR professionals can promote a feedback culture by leading by example, providing training on giving and receiving feedback, recognizing and rewarding constructive feedback, creating safe spaces for communication, and fostering a culture of continuous improvement.


    PROPOSAL PEER-REVIEW EXAMPLE 3. After reading your proposal, I thought it was a very interesting concept. You clearly identified the issues of dirt buildup on solar panels, while imposing the fact that without immediate resolve this problem could result in the company losing potential sales in the future. IEEE citation was well done; I was able ...