The University of Melbourne


Which review is that? A guide to review types.


Rapid Evidence Assessment


Often used synonymously with rapid reviews, “Rapid evidence assessments provide a more structured and rigorous search and quality assessment of the evidence than a literature review but are not as exhaustive as a systematic review”. They can be used to “gain an overview of the density and quality of evidence on a particular issue, support programming decisions by providing evidence on key topics, and support the commissioning of further research by identifying evidence gaps” (Department for International Development, 2017).

Further Reading/Resources

  • Crawford, C., Boyd, C., Jain, S., Khorsan, R., & Jonas, W. (2015). Rapid Evidence Assessment of the Literature (REAL©): streamlining the systematic review process and creating utility for evidence-based health care. BMC Research Notes, 8(1), 1–9.
  • Thomas, J., Newman, M., & Oliver, S. (2013). Rapid evidence assessments of research to inform social policy: taking stock and moving forward. Evidence & Policy, 9(1), 5–27.
  • Hamel, C., Michaud, A., Thuku, M., Skidmore, B., Stevens, A., Nussbaumer-Streit, B., & Garritty, C. (2021). Defining rapid reviews: a systematic scoping review and thematic analysis of definitions and defining characteristics of rapid reviews. Journal of Clinical Epidemiology, 129, 74–85.

  • Last Updated: Mar 5, 2024 1:14 PM
  • URL: https://unimelb.libguides.com/whichreview

Rapid reviews: the pros and cons of an accelerated review process


Philip Moons, Eva Goossens, David R. Thompson, Rapid reviews: the pros and cons of an accelerated review process, European Journal of Cardiovascular Nursing , Volume 20, Issue 5, June 2021, Pages 515–519, https://doi.org/10.1093/eurjcn/zvab041


Although systematic reviews are the method of choice to synthesize scientific evidence, they can take years to complete and publish. Clinicians, managers, and policy-makers often need input from scientific evidence in a more timely and resource-efficient manner. For this purpose, rapid reviews are conducted. Rapid reviews are performed using an accelerated process. However, they should not be less systematic than standard systematic reviews, and the introduction of bias must be avoided. In this article, we describe what rapid reviews are, present their characteristics, give some examples, highlight potential pitfalls, and draw attention to the importance of evidence summaries in order to facilitate adoption in clinical decision-making.

  • Knowing what rapid reviews are.
  • Understanding the features and benefits of rapid reviews.
  • Recognizing the limitations of rapid reviews and knowing when they are not the preferred choice.

Researchers, clinicians, managers, and policy-makers are typical consumers of empirical work published in the scientific literature. For researchers, reviewing the literature is part of the empirical cycle, in order to generate new research questions and to discuss their own study findings. When the available evidence has to be searched for, collated, critiqued, and summarized, systematic reviews are the gold standard. 1 Systematic reviews are rigorous in approach and transparent about how studies were searched, selected, and assessed. In doing so, they limit bias and random error and hence yield the most valid and trustworthy evidence. Systematic reviews can be complemented by meta-analyses to compute an overall mean effect, proportion, or relationship. 2 Systematic reviews and meta-analyses are seen as the pillars of evidence-based healthcare. The rigour of systematic review methodology, however, also means that such a review often takes between 6 months and 2 years to undertake. 3

Clinicians, managers, and policy-makers also use the literature for their decision-making. They often cannot afford to wait for 2 years to get the answer to their questions by means of a systematic review. The evidence must be synthesized without undue delays. 4 Furthermore, the synthesis and reporting of systematic reviews often fail to address the needs of the users at the point of care 5 and are considered to be too large and too complex. 3 To facilitate the uptake of research findings in clinical practice, other types of reviews with a shorter lead time are needed, and alternative evidence summaries have to be developed. 5

Rapid reviews have been proposed as a method to provide summaries of the literature in a timely and resource-efficient manner by using methods to accelerate or streamline traditional systematic review processes. 5 , 6 It is argued that rapid reviews should be conducted in less than 8 weeks. 4 The purpose of rapid reviews is to respond to urgent situations or political pressures, often in a rapidly changing field. The typical target audiences for rapid reviews are policy-makers, healthcare institutions, managers, professionals, and patient associations. 6 The first rapid reviews were published in the 1960s, and the format proliferated from the mid-2010s. Not surprisingly, the number of rapid reviews boomed in 2020, in response to the global SARS-CoV-2/COVID-19 pandemic (see Figure 1 ). Indeed, this pandemic has had a huge impact on healthcare delivery, 7–9 and triggered unprecedented clinical questions that needed a prompt answer. 10 Healthcare research, too, has had to adapt swiftly to the drastically changed situation. 11

Figure 1. Number of publications in the Pubmed database (1960–2020) referring to ‘rapid review’ (search performed 16 March 2021).


A rapid review is ‘a form of knowledge synthesis that accelerates the process of conducting a traditional systematic review through streamlining or omitting various methods to produce evidence for stakeholders in a resource-efficient manner’. 12 There is no single validated methodology for conducting rapid reviews, 13 and variation in their methodological quality can therefore be observed. 14 When the ‘Search, AppraisaL, Synthesis and Analysis’ (SALSA) framework is applied to rapid reviews, the completeness of the search is determined by time constraints; the quality appraisal is time-limited, if performed at all; the synthesis is narrative and tabular; and the analysis pertains to the overall quality/direction of effect of the literature. 15 In Table 1 , we describe the SALSA characteristics of rapid reviews and systematic reviews. Rapid reviews should not be less systematic, and they must adhere to the core principles of systematic reviews to avoid bias in the inclusion, assessment, and synthesis of studies. 4 The typical characteristic of a rapid review is that it provides less in-depth information and detail in its recommendations. 6 It is essential, however, that deviations from traditional systematic review methods are described well in the methods section, for instance by stating explicitly where the PRISMA criteria were omitted or adapted. 4 The speed with which a rapid review is conducted largely depends on the availability of human and financial resources. 4 There is also often a close interaction between the commissioners and the reviewers because the review purports to guide decision-making.

Table 1. Distinction between rapid and systematic reviews

Based on Grant and Booth. 15

Although rapid reviews do not meet the gold standard of systematic reviews, and therefore have their limitations (see below), they frequently provide adequate advice on which to base clinical and policy decisions. 13 A direct comparison of the findings from rapid and full systematic reviews showed that the essential conclusions did not differ extensively. 13 Given the importance of rapid reviews, the Cochrane Collaboration has established the Cochrane Rapid Reviews Methods Group, which recently developed actionable recommendations and minimum standards for rapid reviews ( Table 2 ). 16

Table 2. Cochrane rapid review methods recommendations

Reproduced from Garritty et al . 16 published under the CC BY-NC-ND license.

To date, three rapid reviews have been published in the European Journal of Cardiovascular Nursing . 17–19 The first, published in 2017, assessed the efficacy of non-pharmacological interventions on psychological distress in patients undergoing cardiac catheterization. 17 A second rapid review, published in 2020 amidst the first wave of the SARS-CoV-2/COVID-19 pandemic in Asia, Europe, and North America, looked at the evidence for remote healthcare during quarantine situations to support people living with cardiovascular diseases. 18 Given the unprecedented global situation and the sense of urgency, this was a pre-eminent example of an appropriate use of a rapid review. A third rapid review, published in 2021, investigated whether participation in a support-based intervention exclusively for caregivers of people living with heart failure changed their psychological and emotional wellbeing. 19 The authors explicitly chose the streamlined method of a rapid review to inform the methodological approach of a future caregiver-based intervention. 19

Although rapid and systematic reviews have been shown to yield similar conclusions, 13 , 20 there are definitely some limitations and pitfalls to bear in mind. For instance, rapidity may lead to brevity. 4 Shortcuts may include restricting the search to one database; limiting inclusion criteria by date or language; having one person screen and another verify studies; not conducting a quality appraisal; or presenting results only as a narrative summary. 14 If only one database is used, it is recommended to search Pubmed, because rapid reviews that did not use Pubmed are more likely to obtain results that differ from systematic reviews. 21 It is also recommended that a quality appraisal of the included studies is not skipped. For this purpose, appraisal tools that account for different methodologies, such as the Mixed Methods Appraisal Tool (MMAT), are very suitable. 22 It has also been observed that rapid reviews often do not explicitly define the methodology used. 4 , 13 Consequently, the search cannot always be replicated, and the reasons for differences between findings are difficult to comprehend. Further, it is then not clear whether the review was performed in a systematic fashion, which is also mandatory for rapid reviews; otherwise, they bear the risks of any other narrative review or poorly conducted systematic review. 4 Rapid reviews should not be seen as a quick alternative to a full systematic review, 13 and authors must avoid shortcuts that could lead to bias. 6 Therefore, a thorough evaluation of the appropriateness of a rapid review methodology, namely the need for a summary of the evidence without delay, is imperative. If there is no urgent need to obtain the evidence for clinical practice or policy-making, a full systematic review is more suitable. Furthermore, when there is a high need for accuracy, for instance for clinical guidelines or regulatory affairs, a systematic review is still the best option. 21

Transparency in the description of the methods used is of critical importance to appraise the quality of the rapid review. 4 A scoping review of rapid reviews found that the quality of reporting is generally poor. 14 This may lead to the interpretation that rapid reviews are inherently inferior to full systematic reviews, whereas this is not the case if properly conducted and reported. It is also vital to acknowledge the potential limitations of rapidity.

Since the typical reports of systematic reviews are often too long and too complex for clinicians and decision-makers, 3 new formats of evidence summaries have been developed. 5 Evidence summaries are ‘synopses that summarize existing international evidence on healthcare interventions or activities’. 5 For rapid reviews, reporting the evidence in tabular format is indispensable for use at the point of care. Such evidence summaries can even be integrated in electronic patient records, to provide recommendations for the care of that patient, based on their specific characteristics. 5 An extensive database of evidence summaries has been developed by the Joanna Briggs Institute ( https://www.wolterskluwer.com/en/know/jbi-resources/jbi-ebp-database, last accessed 27 March 2021 ).

Rapid reviews are meant to inform specific clinical or policy decisions in a timely and resource-efficient fashion. They are conducted within a timeframe of some weeks. The rapidity refers to the accelerated process, but it should not come at the cost of losing any of the important information that could be expected from a full systematic review, and the introduction of biases that may jeopardize the validity of the conclusions must be avoided. The quality of rapid reviews is as important as that of traditional systematic reviews. Rapid reviews need to be explicit about the methodology that has been used and clearly state how the review differs from a full systematic review. Sufficient attention ought to be given to the evidence summaries, because the format of these summaries will largely determine their adoption in clinical care or decision-making.

The article is based on a review of the literature. No specific data sources have been used.

Conflict of interest : none declared.

References

1. Munn Z, Stern C, Aromataris E, Lockwood C, Jordan Z. What kind of systematic review should I conduct? A proposed typology and guidance for systematic reviewers in the medical and health sciences. BMC Med Res Methodol 2018;18:5.

2. Ruppar T. Meta-analysis: how to quantify and explain heterogeneity? Eur J Cardiovasc Nurs 2020;19:646–652.

3. Khangura S, Konnyu K, Cushman R, Grimshaw J, Moher D. Evidence summaries: the evolution of a rapid review approach. Syst Rev 2012;1:10.

4. Schünemann HJ, Moja L. Reviews: Rapid! Rapid! Rapid! …and systematic. Syst Rev 2015;4:4.

5. Munn Z, Lockwood C, Moola S. The development and use of evidence summaries for point of care information systems: a streamlined rapid review approach. Worldviews Evid Based Nurs 2015;12:131–138.

6. Ganann R, Ciliska D, Thomas H. Expediting systematic reviews: methods and implications of rapid reviews. Implement Sci 2010;5:56.

7. Klompstra L, Jaarsma T. Delivering healthcare at distance to cardiac patients during the COVID-19 pandemic: experiences from clinical practice. Eur J Cardiovasc Nurs 2020;19:551–552.

8. Lauck S, Forman J, Borregaard B, Sathananthan J, Achtem L, McCalmont G, Muir D, Hawkey MC, Smith A, Højberg Kirk B, Wood DA, Webb JG. Facilitating transcatheter aortic valve implantation in the era of COVID-19: recommendations for programmes. Eur J Cardiovasc Nurs 2020;19:537–544.

9. Hill L, Beattie JM, Geller TP, Baruah R, Boyne J, Stolfo GD, Jaarsma T. Palliative care: essential support for patients with heart failure in the COVID-19 pandemic. Eur J Cardiovasc Nurs 2020;19:469–472.

10. Tricco AC, Garritty CM, Boulos L, Lockwood C, Wilson M, McGowan J, McCaul M, Hutton B, Clement F, Mittmann N, Devane D, Langlois EV, Abou-Setta AM, Houghton C, Glenton C, Kelly SE, Welch VA, LeBlanc A, Wells GA, Pham B, Lewin S, Straus SE. Rapid review methods more challenging during COVID-19: commentary with a focus on 8 knowledge synthesis steps. J Clin Epidemiol 2020;126:177–183.

11. Van Bulck L, Kovacs AH, Goossens E, Luyckx K, Jaarsma T, Stromberg A, Moons P. Impact of the COVID-19 pandemic on ongoing cardiovascular research projects: considerations and adaptations. Eur J Cardiovasc Nurs 2020;19:465–468.

12. Hamel C, Michaud A, Thuku M, Skidmore B, Stevens A, Nussbaumer-Streit B, Garritty C. Defining rapid reviews: a systematic scoping review and thematic analysis of definitions and defining characteristics of rapid reviews. J Clin Epidemiol 2021;129:74–85.

13. Watt A, Cameron A, Sturm L, Lathlean T, Babidge W, Blamey S, Facey K, Hailey D, Norderhaug I, Maddern G. Rapid versus full systematic reviews: validity in clinical practice? ANZ J Surg 2008;78:1037–1040.

14. Tricco AC, Antony J, Zarin W, Strifler L, Ghassemi M, Ivory J, Perrier L, Hutton B, Moher D, Straus SE. A scoping review of rapid review methods. BMC Med 2015;13:224.

15. Grant MJ, Booth A. A typology of reviews: an analysis of 14 review types and associated methodologies. Health Info Libr J 2009;26:91–108.

16. Garritty C, Gartlehner G, Nussbaumer-Streit B, King VJ, Hamel C, Kamel C, Affengruber L, Stevens A. Cochrane Rapid Reviews Methods Group offers evidence-informed guidance to conduct rapid reviews. J Clin Epidemiol 2021;130:13–22.

17. Carroll DL, Malecki-Ketchell A, Astin F. Non-pharmacological interventions to reduce psychological distress in patients undergoing diagnostic cardiac catheterization: a rapid review. Eur J Cardiovasc Nurs 2017;16:92–103.

18. Neubeck L, Hansen T, Jaarsma T, Klompstra L, Gallagher R. Delivering healthcare remotely to cardiovascular patients during COVID-19: a rapid review of the evidence. Eur J Cardiovasc Nurs 2020;19:486–494.

19. Carleton-Eagleton K, Walker I, Freene N, Gibson D, Semple S. Meeting support needs for informal caregivers of people with heart failure: a rapid review. Eur J Cardiovasc Nurs 2021.

20. Best L, Stevens A, Colin-Jones D. Rapid and responsive health technology assessment: the development and evaluation process in the South and West region of England. J Clin Eff 1997;2:51–56.

21. Marshall IJ, Marshall R, Wallace BC, Brassey J, Thomas J. Rapid reviews may produce different results to systematic reviews: a meta-epidemiological study. J Clin Epidemiol 2019;109:30–41.

22. Hong QN, Gonzalez-Reyes A, Pluye P. Improving the usefulness of a tool for appraising the quality of qualitative, quantitative and mixed methods studies, the Mixed Methods Appraisal Tool (MMAT). J Eval Clin Pract 2018;24:459–467.


Rapid Reviews: the easier and speedier way to evaluate papers, but with some limitations

Posted on 24th September 2020 by Georgina Ford


When it comes to reviewing literature there are many different strategies. These vary in their complexity and timescale. As students, the type we are most likely to be able to perform is the rapid review – but what exactly does this entail?

Systematic Reviews

Systematic reviews are the long, detailed papers we commonly have to read. They bring together evidence from sometimes hundreds of different sources to identify corroborating and conflicting results, synthesise data and inform areas for future research. They can take a year or more to produce and typically involve two or more reviewers.

Rapid Reviews

A rapid review takes shortcuts in the systematic review process, but it should still be rigorous, and it needs to ask a very focused question. Rapid reviews can also update existing reviews by incorporating data published since the previous review.

Pros and Cons

Pros: They take much less time to produce, and the workload suits a smaller number of reviewers than a full systematic review, hence their suitability for students. They still need a rigorous search method to identify new data to include; however, concessions are made in the breadth and depth of evaluation.

Cons: Taking methodological shortcuts does leave rapid reviews more vulnerable to bias and errors. For example, the search for existing studies may be less comprehensive. It can be difficult to access all the literature if it is restricted or in a different language, exposing the review to publication bias. The review should include details of these concessions and challenges to give context to any of the claims made.

The writing process

The process of performing a rapid review is laid out below:

  • THE LITERATURE SEARCH
    • Ask a focused question. Try using the PICO method (Population/Intervention/Comparison/Outcome).
    • Identify the last systematic review of data answering your question.
    • Perform a literature search for relevant papers written since the last review, using different iterations, spellings, and phrases.
    • Use limits on your search to narrow down to papers in your timeframe, language, and study design.
    • If required, repeat with other search databases.
  • STREAMLINING STUDIES
    • Go through each paper you have identified and read their abstracts.
    • Discard those that are not the required study design or sufficiently relevant to your question.
    • Check you can access all of the remaining studies.
    • The process up to now should have left you with a manageable number of studies.
  • REVIEW THE DATA
    • Read the full articles in detail, as many times as you have to. Make notes on them highlighting their methods, and key similarities and differences to the other papers.
    • Organise similar studies so they are discussed together and compared.
    • Write a clear conclusions paragraph. Explore any differences between the conclusions you draw and the results of the previous systematic review. Think about why any differences may have occurred, for example changes of policy.
  • REVIEW YOUR PROCESS
    • Make sure to include a paragraph detailing your search criteria and process.
    • Your methods and reasons for them should be clearly outlined. We have said that rapid reviews can be shorter and less in-depth, but the shortcuts you have taken must be explained.
    • Write about what you think the limitations of your methods are.
    • In a separate paragraph, usually at the end, you should discuss what the limitations of the studies themselves are and how these may have affected your conclusions.
    • It can be easier to leave writing your abstract until the very end, as it needs to be clear and concise. Once you have analysed all of your data and come to your conclusions it is far easier to summarise your work than at the beginning. Starting is always the hardest part, and abstracts are not easy to write anyway.
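The literature-search step above can be sketched in code. The following is a minimal, hypothetical Python helper (the function name and the example terms are illustrative, not from any official tool) that assembles a PubMed-style query string from PICO concept groups, with the date and language limits described above; the `[dp]` (date of publication) and `[la]` (language) field tags follow PubMed's search syntax.

```python
# Hypothetical sketch of the literature-search step: combine PICO concept
# groups into one PubMed-style query string, ORing synonyms within a group
# and ANDing the groups together, then append date and language limits.

def build_pubmed_query(population, intervention, comparison=None, outcome=None,
                       start_year=None, end_year=None, language="English"):
    """Build a query: OR within each PICO group, AND between groups."""
    groups = [population, intervention, comparison, outcome]
    clauses = []
    for terms in groups:
        if terms:  # skip empty PICO elements (e.g. no explicit comparator)
            clauses.append("(" + " OR ".join(f'"{t}"' for t in terms) + ")")
    query = " AND ".join(clauses)
    if start_year and end_year:
        # restrict by publication date range
        query += f' AND ("{start_year}"[dp] : "{end_year}"[dp])'
    if language:
        # restrict by language of publication
        query += f" AND {language}[la]"
    return query

# Illustrative example: papers since a hypothetical 2015 review.
query = build_pubmed_query(
    population=["heart failure", "cardiac failure"],
    intervention=["telehealth", "remote monitoring"],
    outcome=["readmission"],
    start_year=2015, end_year=2020,
)
print(query)
```

The OR-within, AND-between structure mirrors how each PICO element contributes "different iterations, spellings, and phrases" for one concept; the same string could then be pasted into the database's search box or repeated against other databases.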

Here are some examples of published rapid reviews that may help you, particularly look at how they have detailed their methods and search strategies:

Hu et al., (2016): Cisplatin for testicular germ cell tumors: a rapid review

Kreindler et al., (2016): Patient characteristics associated with longer emergency department stay: a rapid review

With the current COVID-19 pandemic, the need for evidence is more urgent, making rapid reviews a very popular choice. One example from Public Health Scotland is: Rapid Review of the literature: Assessing the infection prevention and control measures for the prevention and management of COVID-19 in healthcare settings

An article by Grant and Booth (2009) highlights the key differences between different review types and was very helpful for writing this blog.



Health Res Policy Syst

What are the best methodologies for rapid reviews of the research evidence for evidence-informed decision making in health policy and practice: a rapid review

Michelle M. Haby

1 Department of Chemical and Biological Sciences, Universidad de Sonora, Hermosillo, Sonora, Mexico

2 Centre for Health Policy, Melbourne School of Population and Global Health, The University of Melbourne, Melbourne, Victoria, Australia

Evelina Chapman

3 Pan American Health Organization, Brasilia, DF, Brazil

Rachel Clark

4 London School of Hygiene and Tropical Medicine, London, United Kingdom

Jorge Barreto

5 Fundação Oswaldo Cruz, Diretoria de Brasília, Brasilia, Brazil

Ludovic Reveiz

6 Knowledge Management, Bioethics and Research, Pan American Health Organization, Washington, DC, United States of America

John N. Lavis

7 McMaster Health Forum, Centre for Health Economics and Policy Analysis, Department of Clinical Epidemiology and Biostatistics, and Department of Political Science, McMaster University, Hamilton, Canada

8 Department of Global Health and Population, Harvard T.H. Chan School of Public Health, Boston, MA, United States of America

Associated Data

The datasets supporting the conclusions of this article are included within the article and its additional files.

Rapid reviews have the potential to overcome a key barrier to the use of research evidence in decision making, namely that of the lack of timely and relevant research. This rapid review of systematic reviews and primary studies sought to answer the question: What are the best methodologies to enable a rapid review of research evidence for evidence-informed decision making in health policy and practice?

This rapid review utilised systematic review methods and was conducted according to a pre-defined protocol including clear inclusion criteria (PROSPERO registration: CRD42015015998). A comprehensive search strategy was used, including published and grey literature, written in English, French, Portuguese or Spanish, from 2004 onwards. Eleven databases and two websites were searched. Two review authors independently applied the eligibility criteria. Data extraction was done by one reviewer and checked by a second. The methodological quality of included studies was assessed independently by two reviewers. A narrative summary of the results is presented.

Five systematic reviews and one randomised controlled trial (RCT) that investigated methodologies for rapid reviews met the inclusion criteria. None of the systematic reviews were of sufficient quality to allow firm conclusions to be made. Thus, the findings need to be treated with caution. There is no agreed definition of rapid reviews in the literature and no agreed methodology for conducting rapid reviews. While a wide range of ‘shortcuts’ are used to make rapid reviews faster than a full systematic review, the included studies found little empirical evidence of their impact on the conclusions of either rapid or systematic reviews. There is some evidence from the included RCT (that had a low risk of bias) that rapid reviews may improve clarity and accessibility of research evidence for decision makers.

Conclusions

Greater care needs to be taken in improving the transparency of the methods used in rapid review products. There is no evidence available to suggest that rapid reviews should not be done or that they are misleading in any way. We offer an improved definition of rapid reviews to guide future research as well as clearer guidance for policy and practice.

Electronic supplementary material

The online version of this article (doi:10.1186/s12961-016-0155-7) contains supplementary material, which is available to authorized users.

In May 2005, the World Health Assembly called on WHO Member States to “establish or strengthen mechanisms to transfer knowledge in support of evidence-based public health and healthcare delivery systems, and evidence-based health-related policies ” [ 1 ]. Knowledge translation has been defined by WHO as: “ the synthesis, exchange, and application of knowledge by relevant stakeholders to accelerate the benefits of global and local innovation in strengthening health systems and improving people’s health ” [ 2 ]. Knowledge translation seeks to address the challenges to the use of scientific evidence in order to close the gap between the evidence generated and decisions being made.

To achieve better translation of knowledge from research into policy and practice it is important to be aware of the barriers and facilitators that influence the use of research evidence in health policy and practice decision making [ 3 – 8 ]. The most frequently reported barriers to evidence uptake are poor access to good quality relevant research and lack of timely and relevant research output [ 7 , 9 ]. The most frequently reported facilitators are collaboration between researchers and policymakers, improved relationships and skills [ 7 ], and research that accords with the beliefs, values, interests or practical goals and strategies of decision makers [ 10 ].

In relation to access to good quality relevant research, systematic reviews are considered the gold standard and these are used as a basis for products such as practice guidelines, health technology assessments, and evidence briefs for policy [ 11 – 14 ]. However, there is a growing need to provide these evidence products faster and with the needs of the decision-maker in mind, while also maintaining credibility and technical quality. This should help to overcome the barrier of lack of timely and relevant research, thereby facilitating their use in decision making. With this in mind, a range of methods for rapid reviews of the research evidence have been developed and put into practice [ 15 – 18 ]. These often include modifications to systematic review methods to make them faster than a full systematic review. Some examples of modifications that have been made include (1) a more targeted research question/reduced scope; (2) a reduced list of sources searched, including limiting these to specialised sources (e.g. of systematic reviews, economic evaluations); (3) articles searched in the English language only; (4) reduced timeframe of search; (5) exclusion of grey literature; (6) use of search tools that make it easier to find literature; and (7) use of only one reviewer for study selection and/or data extraction. Given the emergence of this approach, it is important to develop a knowledge base regarding the implications of such ‘shortcuts’ on the strength of evidence being delivered to decision makers. At the time of conducting this review, we were not aware of any high quality systematic reviews on rapid reviews and their methods.

It is important to note that a range of terms have been used to describe rapid reviews of the research evidence, including evidence summaries, rapid reviews, rapid syntheses, and brief reviews, with no clear definitions [ 15 , 16 , 18 – 22 ]. In this paper, we have used the term ‘rapid review’, despite starting with the term ‘rapid evidence synthesis’ in our protocol, as it became clear during the conduct of our review that it is the most widely used term in the literature [ 23 ]. We consider a broad range of rapid reviews, including rapid reviews of effectiveness, problem definition, aetiology, diagnostic tests, and reviews of cost and cost-effectiveness.

The rapid review presented in this article is part of a larger project aimed at designing a rapid response program to support evidence-informed decision making in health policy and practice [ 24 ]. The expectation is that a rapid response will facilitate the use of research for decision making. We have labelled this study as a rapid review because it was conducted in a limited timeframe and with the needs of health policy decision-makers in mind. It was commissioned by policy decision-makers for their immediate use.

The objective of this rapid review was to use the best available evidence to answer the following research question: What are the best methodologies to enable a rapid review of research evidence for evidence-informed decision making in health policy and practice? Both systematic reviews and primary studies were included. Note that we have deliberately used the term ‘best methodologies’ as it is likely that a variety of methods will be needed depending on the research question, timeframe and needs of the decision maker.

This rapid review used systematic review methodology and adheres to the Preferred Reporting Items for Systematic Reviews and Meta-Analysis statement [ 25 ]. A systematic review protocol was written and registered prior to undertaking the searches [ 26 ]. Deviations from the protocol are listed in Additional file 1 .

Inclusion criteria for studies

Studies were selected based on the inclusion criteria stated below.

Types of studies

Both systematic reviews and primary studies were sought. For inclusion, priority was given to systematic reviews and to primary studies that used one of the following designs: (1) individual or cluster randomised controlled trials (RCTs) and quasi-randomised controlled trials; (2) controlled before and after studies where participants are allocated to control and intervention groups using non-randomised methods; (3) interrupted time series with before and after measurements (and preferably with at least three measures); and (4) cost-effectiveness, cost-utility or cost-benefit analyses. Other types of studies were also identified for possible inclusion in case no systematic reviews and few primary studies with strong study designs (as indicated in 1–4 above) could be found. These were initially selected provided that they described some type of evaluation of methodologies for rapid reviews.

Types of participants

Apart from needing to be within the field of health policy and practice, the types of participants were not restricted, and the level of analysis could be at the level of the individual, organisation, system or geographical area. During the study selection process, we made a decision to also include ‘products’, i.e. papers that include rapid reviews as the unit of inclusion rather than people.

Types of articles/interventions

Studies that evaluated methodologies or approaches to rapid reviews for health policy and/or practice, including systematic reviews, practice guidelines, technology assessments, and evidence briefs for policy, were included.

Types of comparisons

Suitable comparisons (where relevant to the article type) included no intervention, another intervention, or current practice.

Types of outcome measures

Relevant outcome measures included time to complete; resources required to complete (e.g. cost, personnel); measures of synthesis quality; measures of efficiency of methods (measures that combine aspects of quality with time to complete, e.g. limiting data extraction to key characteristics and results that may reduce the time needed to complete without impacting on review quality); satisfaction with methods and products; and implementation. During the study selection process the authors agreed to include two additional outcomes that were not in the published protocol but important for the review, namely comparison of findings between the different synthesis methods (e.g. rapid vs. systematic review) and cost-effectiveness.

Publications in English, French, Portuguese or Spanish, from any country and published from 2004 onwards were included. The year 2004 was chosen as this is the year of the Mexico Ministerial Summit on Health Research, where the know-do gap was first given serious attention by health ministers [ 27 ]. Both grey and peer-reviewed literature was sought and included.

Search methods for identification of studies

A comprehensive search of eleven databases and two websites was conducted. The databases searched were CINAHL, the Cochrane Library (including Cochrane Reviews, the Database of Abstracts of Reviews of Effectiveness, the Health Technology Assessment database, NHS Economic Evaluation Database, and the database of Methods Studies), EconLit, EMBASE, Health Systems Evidence, LILACS and Medline. The websites searched were Google and Google Scholar.

Grey literature and manual search

Some of the selected databases index a combination of published and unpublished studies (for example, doctoral dissertations, conference abstracts and unpublished reports); therefore, unpublished studies were partially captured through the electronic search process. In addition, Google and Google Scholar were searched. The authors’ own databases of knowledge translation literature were also searched by hand for relevant studies. The reference list of each included study was searched. Contact was made with nine key authors and experts in the area for further studies, of whom five responded.

Search strategy

Searches were conducted between 15th January and 3rd February 2015 and supplementary searches (reference lists, contact with authors) were conducted in May 2015. Databases were searched using the keywords: “rapid literature review*” OR “rapid systematic review*” OR “rapid scoping review*” OR “rapid review*” OR “rapid approach*” OR “rapid synthesis” OR “rapid syntheses” OR “rapid evidence assess*” OR “evidence summar*” OR “realist review*” OR “realist synthesis” OR “realist syntheses” OR “realist evaluation” OR “meta-method*” OR “meta method*” OR “realist approach*” OR “meta-evaluation*” OR “meta evaluation*”. Keywords were searched for in title and abstract, except where otherwise stated in Additional file 2 . Results were downloaded into the EndNote reference management program (version X7) and duplicates removed. The Internet search utilised the search terms: “rapid review”; “rapid systematic review”; “realist review”; “rapid synthesis”; and “rapid evidence”.
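For transparency and reuse across databases, a keyword string like the one above can be assembled programmatically. A minimal sketch in Python: the term list is transcribed from the published strategy above, while the quoting and `OR`-joining helper is illustrative, not the authors' actual tooling.

```python
# Title/abstract keywords transcribed from the published search strategy
# (asterisks are database truncation wildcards).
TERMS = [
    "rapid literature review*", "rapid systematic review*",
    "rapid scoping review*", "rapid review*", "rapid approach*",
    "rapid synthesis", "rapid syntheses", "rapid evidence assess*",
    "evidence summar*", "realist review*", "realist synthesis",
    "realist syntheses", "realist evaluation", "meta-method*",
    "meta method*", "realist approach*", "meta-evaluation*",
    "meta evaluation*",
]

def build_query(terms):
    """Quote each phrase and join with OR, as in the published strategy."""
    return " OR ".join(f'"{t}"' for t in terms)

query = build_query(TERMS)
```

Keeping the terms in one list makes it straightforward to re-run or adapt the same strategy across the eleven databases with only syntax adjustments.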

Screening and selection of studies

Searches were conducted and screened according to the selection criteria by one review author (MH). The full text of any potentially relevant papers was retrieved for closer examination. Where there was any doubt about a paper’s relevance, this reviewer erred on the side of inclusion to ensure no potentially relevant papers were missed. The inclusion criteria were then applied against the full text version of the papers (where available) independently by two reviewers (MH and RC). For studies in Portuguese and Spanish, other authors (EC, LR or JB) played the role of second reviewer. Disagreements regarding eligibility of studies were resolved by discussion and consensus. Where the two reviewers were still uncertain about inclusion, the other reviewers (EC, LR, JB, JL) were asked to provide input to reach consensus. All studies which initially appeared to meet the inclusion criteria, but on inspection of the full text paper did not, were detailed in a table ‘Characteristics of excluded systematic reviews’, together with reasons for their exclusion.

Application of the inclusion criteria by the two reviewers was performed as follows. First, all studies that met the inclusion criteria for participants, interventions and outcomes were selected, provided that they described some type of evaluation of methodologies for rapid evidence synthesis. At this stage, the study type was assessed and categorised by the two reviewers as being a (1) systematic review; (2) primary study with a strong study design, i.e. of one of the four types identified above; or (3) ‘other’ study design (that provided some type of evaluation of methodologies for rapid evidence synthesis). This was done to enable the reviewers to decide which study designs should be included (it was not known in advance whether sufficient evidence would be found if only systematic reviews and primary studies with strong study designs were included from the outset) and because of interest from the funders in other study types. Following discussion between all co-authors it was decided that sufficient evidence was likely to be provided by the first two categories of study type. Thus, the third group was excluded from data extraction but is listed in Additional file 3 .

Data extraction

Information extracted from studies and reviewed included objectives, target population, method/s tested, outcomes reported, country of study/studies and results. For systematic reviews we also extracted the date of last search, the included study designs and the number of studies. For primary studies, we also extracted the year of study, the study design and the population size. Data extraction was performed by one reviewer (MH) and checked by a second reviewer (RC). Disagreements were resolved through discussion and consensus.
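The extraction fields described above map naturally onto a structured record, which helps keep extraction consistent between the first and second reviewer. A hypothetical sketch: the field names are illustrative and do not reproduce the authors' actual extraction form.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ExtractionRecord:
    """One record per included study, per the fields listed in the text."""
    # Extracted from all included studies
    objectives: str
    target_population: str
    methods_tested: List[str]
    outcomes_reported: List[str]
    country: str
    results: str
    # Systematic reviews only
    date_of_last_search: Optional[str] = None
    included_study_designs: Optional[List[str]] = None
    number_of_studies: Optional[int] = None
    # Primary studies only
    year_of_study: Optional[int] = None
    study_design: Optional[str] = None
    population_size: Optional[int] = None
```

Optional fields default to `None` so the same record type covers both systematic reviews and primary studies.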

Assessment of methodological quality

The methodological quality of included studies was assessed independently by two reviewers using AMSTAR: A MeaSurement Tool to Assess Reviews [ 28 ] for systematic reviews and the Cochrane Risk of Bias Tool for RCTs [ 29 ]. Disagreements in scoring were resolved by discussion and consensus. For this review, systematic reviews that achieved AMSTAR scores of 8 to 11 were considered high quality; scores of 4 to 7 medium quality; and scores of 0 to 3 low quality. These cut-offs are commonly used in Cochrane Collaboration overviews. The quality assessments were used in interpreting the results of the included studies as synthesised in this review and in formulating conclusions.
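The AMSTAR cut-offs used in this review (0–3 low, 4–7 medium, 8–11 high) amount to a simple classification rule; a minimal sketch, with the function name being illustrative:

```python
def amstar_quality(score: int) -> str:
    """Classify an AMSTAR score using the cut-offs in this review:
    8-11 high quality, 4-7 medium quality, 0-3 low quality."""
    if not 0 <= score <= 11:
        raise ValueError("AMSTAR scores range from 0 to 11")
    if score >= 8:
        return "high"
    if score >= 4:
        return "medium"
    return "low"
```

Applied to the included reviews, the four reviews scoring 2 fall in the "low" band and the one scoring 4 in the "medium" band, as reported in the Results.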

Data analysis

Findings from the included publications were synthesised using tables and a narrative summary. Meta-analysis was not possible because the included studies were heterogeneous in terms of the populations, methods and outcomes tested.

Search results

Five systematic reviews (from seven articles) [ 18 , 19 , 21 , 30 – 33 ] and one primary study with a strong study design – an RCT [ 34 ] – met the inclusion criteria for the review. The selection process for studies and the numbers at each stage are shown in Fig.  1 . The reasons for exclusion of the 75 papers at full text stage are shown in Additional file 3 . The 12 evaluation studies excluded from data extraction due to weak study designs are also listed at the end of Additional file 3 .

Fig. 1 Study selection flow chart – Methods for rapid reviews

Characteristics of included studies and quality assessment

Characteristics of the included systematic reviews are summarised in Table  1 , with full details provided in Additional file 4 . All rapid reviews were targeted at healthcare decision makers and/or agencies conducting rapid reviews (including rapid health technology assessments). Only two of the systematic reviews offered a definition of “rapid review” to guide their reviews [ 19 , 30 , 33 ]. Three of the systematic reviews obtained samples of rapid review products – though not necessarily randomly – and examined aspects of the methods used in their production [ 18 , 19 , 21 ]. Three of the systematic reviews reviewed articles on rapid review methods [ 18 , 30 , 32 ]. Two of these also included a comparison of findings from rapid reviews and systematic reviews conducted for the same topic [ 18 , 32 ].

Table 1 Characteristics of the included systematic reviews. Reviews are ordered chronologically, from most to least recent, and alphabetically within years

a The outcome ‘methods used’ refers to the methods used in the included rapid reviews. This outcome is important for determining the quality of the review

HTA health technology assessment, RR rapid review, SR systematic review

None of the systematic reviews that were identified examined the outcomes of resources required to complete, synthesis quality, efficiency of methods, satisfaction with methods and products, implementation, or cost-effectiveness. However, while not explicitly assessing synthesis/review quality, all of the reviews did report the methods used to conduct the rapid reviews. We have reported these details as they give an indication of the quality of the review. Therefore, the outcomes reported in the included systematic reviews and recorded in Table  1 and Additional file 4 do not align perfectly with those proposed in our inclusion criteria. In addition, we have included some information that was not pre-defined but for which we extracted information because it provided important contextual information, e.g. type of product, definition, rapid review initiation and rationale, nomenclature, and content. The reporting of the results was also further complicated by the use of a narrative, rather than a quantitative, synthesis of the results in the included studies.

It is not possible to say how many unique studies are included in these systematic reviews because only one review included a list of its included studies [ 30 ] and only one provided a table of the characteristics of included studies (though not in a form that was easy to use) [ 21 ]. However, it is clear that there is likely to be significant overlap in studies between reviews. For example, the most recent systematic review by Hartling et al. [ 31 , 32 ] also included the four previous systematic reviews included in this rapid review [ 18 , 19 , 21 , 30 , 33 ].

The RCT was targeted at healthcare professionals involved in clinical guideline development [ 34 ]. It aimed to assess the effectiveness of different evidence summary formats for use in clinical guideline development. Three different packs were tested – pack A: a systematic review alone; pack B: a systematic review with summary-of-findings tables included; and pack C: an evidence synthesis and systematic review. Pack C is described by the authors of the study as: “ a locally prepared, short, contextually framed, narrative report in which the results of the systematic review (and other evidence where relevant) were described and locally relevant factors that could influence the implementation of evidence-based guideline recommendations (e.g. resource capacity) were highlighted ” [ 34 ]. We interpreted pack C as being a ‘rapid review’ for the purposes of this review as the authors state that it is based on a comprehensive search and critical appraisal of the best currently available literature, which included a Cochrane review, an overview of systematic reviews and RCTs, and additional RCTs but was likely to have been done in a short timeframe. It was also conducted to help improve decision-making. The primary outcome measured was the proportion of correct responses to key clinical questions, whilst the secondary outcome was a composite score comprised of clarity of presentation and ease of locating the quality of evidence [ 34 ]. This study was not included in any previous systematic reviews.

Four of the systematic reviews obtained AMSTAR scores of 2 (low quality) and one a score of 4 (medium quality). No high quality systematic reviews were found. Thus, the findings of the systematic reviews should be taken as indicative only and no firm conclusions can be made. The RCT was classified as low risk of bias on the Cochrane Risk of Bias tool. The quality assessments can be found in Additional file 5 .

Definition of a ‘rapid review’

The five systematic reviews are consistent in stating that there is no agreed definition of rapid reviews and no agreed methodology for conducting rapid reviews [ 18 , 19 , 21 , 30 – 33 ]. According to the authors of one review: “ the term ‘rapid review’ does not appear to have one single definition but is framed in the literature as utilizing various stipulated time frames between 1 and 6 months ” [ 21 , p. 398]. The definitions offered to guide the reviews by Abrami et al. [ 19 ] and Cameron et al. [ 30 ] both use a timeframe of up to 6 months (Table  1 ). Cameron et al. [ 30 ] also include in their definition the requirement that the review contains the elements of a comprehensive search – though they do not offer criteria to assess this.

Abrami et al. [ 19 ] use the term ‘brief review’ rather than ‘rapid review’ to emphasise that both timeframe and scope may be affected. They write that “ a brief review is an examination of empirical evidence that is limited in its timeframe (e.g. six months or less to complete) and/or its scope, where scope may include:

  • the breadth of the question being explored (e.g. a review of one-to-one laptop programs versus a review of technology integration in schools);
  • the timeliness of the evidence included (e.g. the last several years of research versus no time limits);
  • the geographic boundaries of the evidence (e.g. inclusion of regional or national studies only versus international evidence);
  • the depth and detail of analyses (e.g. reporting only overall findings versus also exploring variability among the findings); or
  • otherwise more restrictive study inclusion criteria than might be seen in a comprehensive review .” [ 19 , p. 372].

All other included systematic reviews used the term ‘rapid review’ or ‘rapid health technology assessment’ to describe rapid reviews.

Methods used based on examples of rapid reviews

While the word ‘rapid’ indicates that a review will be carried out quickly, there is no consistency in published rapid reviews as to how they are made rapid and which part, or parts, of the review are carried out at a faster pace than a full systematic review [ 18 , 19 , 21 ]. A further complexity is the reporting of methods used in the rapid review, with about 43% of the rapid reviews examined by Abrami et al. [ 19 ] not describing their methodology comprehensively. Three examples of ‘shortcuts’ taken are (1) not using two reviewers for study selection and/or data extraction; (2) not conducting a quality assessment of included studies; and (3) not searching for grey literature [ 18 , 19 , 21 ]. However, it is important to note that the rapid reviews examined in these three systematic reviews were not necessarily selected randomly and, thus, it is not possible to accurately quantify the proportion of rapid reviews taking various ‘shortcuts’ or which ‘shortcuts’ are the most common. The time taken for the reviews examined varied from several days to one year [ 19 ]; 3 weeks to 6 months [ 18 ]; and 7–12 (mean 10.42, SD 7.1) months [ 21 ].

Methods used based on studies of rapid review methods

Methodological approaches or ‘shortcuts’ used in rapid reviews to make them faster than a full systematic review include [ 18 , 19 , 32 ] limiting the number of questions; limiting the scope of questions; searching fewer databases; limited use of grey literature; restricting the types of studies included (e.g. English only, most recent 5 years); relying on existing systematic reviews; eliminating or limiting hand searching of reference lists and relevant journals; narrow time frame for article retrieval; using non-iterative search strategy; eliminating consultation with experts; limiting full-text review; limiting dual review for study selection, data extraction and/or quality assessment; limiting data extraction; limiting risk of bias assessment or grading; minimal evidence synthesis; providing minimal conclusions or recommendations; and limiting external peer review. Harker et al. [ 21 ] found that, with increasing timeframes, fewer of the ‘shortcuts’ were used and that, with longer timeframes, it was more likely that risk of bias assessment, evidence grading and external peer review would be conducted [ 21 ].

None of the included systematic reviews offer firm guidelines for the methodology underpinning rapid reviews. Rather, they report that many articles written about rapid reviews offer only examples and discussion surrounding the complexity of the area [ 30 ].

Supporting evidence for shortcuts

While authors of the included systematic reviews tend to agree that changes to scope or timeframe can introduce biases (e.g. selection bias, publication bias, language of publication bias) they found little empirical evidence to support or refute that claim [ 18 , 19 , 21 , 30 , 32 ].

The review by Ganann et al. [ 18 ] included 45 methodological studies that considered issues such as the impact of limiting the number of databases searched, hand searching of reference lists and relevant journals, omitting grey literature, only including studies published in English, and omitting quality assessment. However, they were unable to provide clear methodological guidelines based on the findings of these studies.

Comparison of findings – rapid reviews versus systematic reviews

A key question is whether the conclusions of a rapid review are fundamentally different to those of a full systematic review, i.e. whether they are sufficiently different to change the resulting decision. This is an area where the research is extremely limited. Too few comparisons of full and rapid reviews are available in the literature to determine the impact of the above methodological changes – only two primary studies were reported in the included systematic reviews [ 35 , 36 ]. It is important to note that neither of these studies met, on their own, the inclusion criteria for the review in that they did not have a sufficiently strong study design. Both are included in the list of 12 studies excluded from data extraction (Fig.  1 and Additional file 3 ). Thus, they provide a very low level of evidence.

One of the primary studies compared full and rapid reviews on the topics of drug eluting stents, lung volume reduction surgery, living donor liver transplantation and hip resurfacing [ 30 , 36 ]. There were no instances in which the essential conclusions of the rapid and full reviews were opposed [ 32 ]. The other compared a rapid review with a full systematic review on the use of potato peels for burns [ 35 ]. The results and conclusions of the two reports were different. The authors of the rapid review suggest that this is because the systematic review was not of sufficiently good quality – as they missed two important trials in their search [ 35 ]. However, the limited detail on the methods used to conduct the systematic review makes this case study of limited value. Further research is needed in this area.

Impact of rapid syntheses on understanding of decision makers

The included RCT by Opiyo et al. [ 34 ] examined the impact of different evidence summary or synthesis formats on knowledge of the evidence, with each participant receiving one of the three different packs; they found no differences between packs in the odds of correct responses to key clinical questions. Pack C (the rapid review) was associated with a higher mean composite score for clarity and accessibility of information about the quality of evidence for critical neonatal outcomes compared to systematic reviews alone (pack A) (adjusted mean difference 0.52, 95% confidence interval 0.06–0.99). Findings from interviews with 16 panellists indicated that short narrative evidence reports (pack C) were preferred for the improved clarity of information presentation and ease of use. The authors concluded that their “ findings suggest that ‘graded-entry’ evidence summary formats may improve clarity and accessibility of research evidence in clinical guideline development ” [ 34 , p. 1].
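The reported effect favouring pack C (adjusted mean difference 0.52, 95% CI 0.06–0.99) is statistically significant at the 5% level because the confidence interval excludes zero. A small check of that reasoning: the numbers are taken from the text, while the helper function is illustrative.

```python
def ci_excludes_zero(lower: float, upper: float) -> bool:
    """A two-sided 95% CI that excludes zero implies p < 0.05."""
    return lower > 0 or upper < 0

# Adjusted mean difference, pack C vs. pack A, from Opiyo et al.
mean_diff, ci_lower, ci_upper = 0.52, 0.06, 0.99
significant = ci_excludes_zero(ci_lower, ci_upper)  # True: 0.06 > 0
```

Note that the lower bound (0.06) only just clears zero, consistent with the hedged conclusion that graded-entry formats "may" improve clarity and accessibility.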

This review is the first high quality review (using systematic reviews as the gold standard for literature reviews) published in the literature that provides a comprehensive overview of the state of the rapid review literature. It highlights the lack of definition, lack of defined methods and lack of research evidence showing the implications of methodological choices on the results of both rapid reviews and systematic reviews. It also adds to the literature by offering clearer guidance for policy and practice than has been offered in previous reviews (see Implications for policy and practice ).

While five systematic reviews of methods for rapid reviews were found, none of these were of sufficient quality to allow firm conclusions to be made. Thus, the findings need to be treated with caution. There is no agreed definition of rapid reviews in the literature and no agreed methodology for conducting rapid reviews [ 18 , 19 , 21 , 30 – 33 ]. However, the systematic reviews included in this review are consistent in stating that a rapid review is generally conducted in a shorter timeframe and may have a reduced scope. A wide range of ‘shortcuts’ are used to make rapid reviews faster than a full systematic review. While authors of the included systematic reviews tend to agree that changes to scope or timeframe can introduce biases (e.g. selection bias, publication bias, language of publication bias) they found little empirical evidence to support or refute that claim [ 18 , 19 , 21 , 30 , 32 ]. Further, there are few comparisons available in the literature of full and rapid reviews to be able to determine the impact of these ‘shortcuts’. There is some evidence from a good quality RCT with low risk of bias that rapid reviews may improve clarity and accessibility of research evidence for decision makers [ 34 ], which is a unique finding from our review.

A scoping review published after our search found over 20 different names for rapid reviews, with the most frequent term being ‘rapid review’, followed by ‘rapid evidence assessment’ and ‘rapid systematic review’ [ 23 ]. An associated international survey of rapid review producers and modified Delphi approach counted 31 different names [ 37 ]. With regards to rapid review methods and definitions, the scoping review found 50 unique methods, with 16 methods occurring more than once [ 23 ]. For their scoping review and international survey, Tricco et al. utilised the working definition: “ a rapid review is a type of knowledge synthesis in which components of the systematic review process are simplified or omitted to produce information in a short period of time ” [ 23 , 37 ].

The authors of the most recent systematic review of rapid review methods suggest that: “ the similarity of rapid products lies in their close relationship with the end-user to meet decision making needs in a limited timeframe ” [ 32 , p. vii]. They suggest that this feature drives other differences, including the large range of products often produced by rapid response groups, and the wide variation in methods used [ 32 ] – even within the same product type produced by the same group. We suggest that this feature of rapid reviews needs to be part of the definition and considered in future research on rapid reviews, including whether it actually leads to better uptake of research. To aid future research, we propose the following definition: a rapid review is a type of systematic review in which components of the systematic review process are simplified, omitted or made more efficient in order to produce information in a shorter period of time, preferably with minimal impact on quality. Further, a rapid review involves a close relationship with the end-user and is conducted with the needs of the decision-maker in mind.

When comparing rapid reviews to systematic reviews, the confounding effects of quality of the methods used must be considered. If rapid syntheses of research are seen as systematic reviews performed faster and if systematic reviews are seen as the gold standard for evidence synthesis, the quality of the review is likely to depend on which ‘shortcuts’ were taken and this can be assessed using available quality measures, e.g. AMSTAR [ 28 ]. While Cochrane Collaboration systematic reviews are consistently of a very high quality (achieving 10 or 11 on the AMSTAR scale, based on our own experience) the same cannot be said for all systematic reviews that can be found in the published literature or in databases of systematic reviews – as is demonstrated by this review where AMSTAR scores were quite low (Additional file 5 ) and a related overview where AMSTAR scores varied between two and ten [ 24 , Additional file one]. This fact has not been acknowledged in previous syntheses of the rapid review literature. It is also possible for rapid reviews to achieve high AMSTAR scores if conducted and reported well. Therefore, it can be easily argued that a high quality rapid review is likely to provide an answer closer to the ‘truth’ than a systematic review of low quality. It is also an argument for using the same tool for assessing the quality of both systematic and rapid reviews.

Authors of the published systematic reviews of rapid reviews suggest that, rather than focusing on developing a formalised methodology, which may not be appropriate, researchers and users should focus on increasing the transparency of the methods used for each review [ 18 , 30 , 33 ]. Indeed, several AMSTAR criteria are highly dependent on the transparency of the write-up rather than the methodology itself. For example, there are many examples of both systematic and rapid review authors not stating that they used a protocol for their review when, in fact, they did use one, leading to a loss of 1 point on the AMSTAR scale. Another example is review authors failing to provide an adequate description of the study selection and data extraction process, thus making it hard for those assessing the quality of the review to determine if this was done in duplicate, which is again a loss of 1 point on the AMSTAR scale.

While none of the included reviews described itself as a systematic review, we believe it is appropriate to assess their quality using the AMSTAR tool. It is, to our knowledge, the best available tool for assessing and comparing the quality of review methods, and it considers the major potential sources of bias in reviews of the literature [28, 38]. Further, the five included reviews were clearly not narrative reviews, as each described its methods, including the sources of studies, search terms and inclusion criteria used.

Strengths and limitations

A key strength of this rapid review is its use of high-quality systematic review methodology, including consideration of the scientific quality of the included studies in formulating conclusions. A meta-analysis was not possible due to the heterogeneity of intervention types and populations studied in the included systematic reviews. As a result, publication bias could not be assessed quantitatively in this review, and no clear methods are available for assessing publication bias qualitatively [39]. Shortcuts taken to make this review more rapid, as well as an AMSTAR assessment of the review, are shown in Additional file 6. The AMSTAR assessment is based on the published tool [28] and additional guidance provided on the AMSTAR website (http://amstar.ca/Amstar_Checklist.php).

The current rapid review is evidence that a review can include several shortcuts and be produced in a relatively short amount of time without sacrificing quality, as shown by its high AMSTAR score (Additional file 6). The time taken to complete this review was 7 months from the signing of the contract (November 2014) to submission of the final report to the funder (June 2015). Alternatively, if publication of the protocol on PROSPERO and the start of literature searching (January 2015) are taken as the starting point, the time taken was 5 months.

Limitations of this review include (1) the low quality of the systematic reviews found, with three of the four included systematic reviews judged as low quality on the AMSTAR criteria and the fourth only just reaching medium quality (Additional file 5); (2) the fact that few primary studies were conducted in developing countries, which is an issue for the generalisability of the results; and (3) the restriction of the search to articles in English, French, Spanish or Portuguese (languages in which the review authors are competent) and to the last 10 years. However, this was done to expedite the review process and is unlikely to have resulted in the loss of important evidence.

Implications for policy and practice

Users of rapid reviews should request an AMSTAR rating and a clear indication of the shortcuts taken to make the review process faster. Producers of rapid reviews should give greater consideration to the ‘write-up’ or presentation of their reviews to make their review methods more transparent and to enable a fair quality assessment. This could be facilitated by including the appropriate elements in templates and/or guidelines. If a shorter report is required, the necessary detail could be placed in appendices.

When deciding what methods and/or processes to use for their rapid reviews, producers should give priority to shortcuts that are unlikely to affect the quality or risk of bias of the review. Examples include limiting the scope of the review [19], limiting data extraction to key characteristics and results [32], and restricting the study types included in the review [32]. When planning the rapid review, the producer should explain to the user the implications of any shortcuts taken to make the review faster.

Producers of rapid reviews should consider maintaining a large, highly skilled and experienced staff who can be mobilised quickly and who understand the types of products that might meet the needs of decision-makers [19, 32]. Consideration should also be given to making the process more efficient [19]. These measures can aid timeliness without compromising quality.

Implications for research

The impact of any ‘shortcuts’ used on the results of rapid reviews (and systematic reviews) requires further research, including which ‘shortcuts’ have the greatest impact on review findings. Tricco et al. [23, 37] suggest that this could be examined through a prospective study that compares the results of rapid reviews to those obtained through systematic reviews on the same topic. However, to do this, it will be important to consider quality as a confounding factor and to ensure random selection and blinding of the rapid review producers. If random selection and blinding cannot be guaranteed, we suggest that retrospective comparisons may be more appropriate. Another related approach would be to compare the findings of reviews (be they systematic or rapid) for each type of shortcut, controlling for methodological quality. Other issues, such as the breadth of the inclusion criteria used and the number of studies included, would also need to be considered as possible confounding factors.

The development of reporting guidelines for rapid reviews, as are available for full systematic reviews, would also help [ 18 , 25 ]. These should be heavily based on systematic review guidelines but also consider characteristics specific to rapid reviews such as the relationship with the review user.

Finally, future studies and reviews should also address the outcomes of review quality, satisfaction with methods and products, implementation and cost-effectiveness as these outcomes were not measured in any of the included studies or reviews. Effectiveness of rapid reviews in increasing the use of research evidence in policy decision-making is also an important area for further research.

Care needs to be taken in interpreting the results of this rapid review of the best methodologies for rapid reviews, given the limited state of the literature. There is a wide range of methods currently used for rapid reviews and a wide range of products available. However, greater care needs to be taken to improve the transparency of the methods used in rapid review products, to enable better analysis of the implications of the methodological ‘shortcuts’ taken for both rapid reviews and systematic reviews. This requires the input of policymakers and practitioners, as well as researchers. There is no evidence available to suggest that rapid reviews should not be done or that they are misleading in any way.

Acknowledgements

We thank the authors of included studies and other experts in the field who responded to our request to identify further studies that could meet our inclusion criteria and/or responded to our queries regarding their study.

This work was developed and funded under the cooperation agreement # 47 between the Department of Science and Technology of the Ministry of Health of Brazil and the Pan American Health Organization. The funders of this study set the terms of reference for the project but, apart from the input of JB, EC and LR to the conduct of the study, did not significantly influence the work. Manuscript preparation was funded by the Ministry of Health Brazil, through an EVIPNet Brazil project with the Bireme/PAHO.

Availability of data and materials

Authors’ contributions

EC and JB had the original idea for the review and obtained funding; MH and EC wrote the protocol with input from RC, JB, and LR; MH and RC undertook the article selection, data extraction and quality assessment; MH undertook data synthesis and drafted the manuscript; JL, EC, RC, JB and LR provided guidance throughout the selection, data extraction and synthesis phase of the review; all authors provided commentary on and approved the final manuscript.

Competing interests

The author(s) declare that they have no competing interests. Neither the Ministry of Health of Brazil nor the Pan American Health Organization (PAHO), the funders of this research, have a vested interest in any of the interventions included in this review – though they do have a professional interest in increasing the uptake of research evidence in decision making. EC and LR are employees of PAHO and JB was an employee of the Ministry of Health of Brazil at the time of the study. However, the views and opinions expressed herein are those of the review authors and do not necessarily reflect the views of the Ministry of Health of Brazil or PAHO. JNL is involved in a rapid response service but was not involved in the selection or data extraction phases. MH, as part of her previous employment with an Australian state government department of health, was responsible for commissioning and using rapid reviews to inform decision making.

Consent for publication

Not applicable.

Ethics approval and consent to participate

Additional files

Additional file 1: Changes to the protocol. (DOCX 28 kb)

Additional file 2: Search terms. (DOCX 31 kb)

Additional file 3: Characteristics of excluded studies and reference details. (DOCX 40 kb)

Additional file 4: Characteristics of included studies. (DOCX 47 kb)

Additional file 5: Quality assessment of included studies. (DOCX 29 kb)

Additional file 6: Shortcuts and quality assessment of this review. (DOCX 30 kb)

Contributor Information

Michelle M. Haby, Email: haby@unimelb.edu.au

Evelina Chapman, Email: evelinachap@gmail.com

Rachel Clark, Email: [email protected]

Jorge Barreto, Email: jorgebarreto@fiocruz.br

Ludovic Reveiz, Email: reveizl@paho.org

John N. Lavis, Email: lavisj@mcmaster.ca

Open access | Published: 30 July 2022

Paper 2: Performing rapid reviews

Valerie J. King, Adrienne Stevens, Barbara Nussbaumer-Streit, Chris Kamel & Chantelle Garritty

Systematic Reviews, volume 11, article number 151 (2022)


Health policy-makers must often make decisions in compressed time frames and with limited resources. Hence, rapid reviews have become a pragmatic alternative to comprehensive systematic reviews. However, it is important that rapid review methods remain rigorous to support good policy development and decisions. There is currently little evidence about which streamlined steps in a rapid review are less likely to introduce unacceptable levels of uncertainty while still producing a product that remains useful to policy-makers.

This paper summarizes current research describing commonly used methods and practices that are used to conduct rapid reviews and presents key considerations and options to guide methodological choices for a rapid review.

The most important step for a rapid review is for an experienced research team to have early and ongoing engagement with the people who have requested the review. A clear research protocol, derived from a needs assessment conducted with the requester, serves to focus the review, defines the scope of the rapid review, and guides all subsequent steps. Common recommendations for rapid review methods include tailoring the literature search in terms of databases, dates, and languages. Researchers can consider using a staged search to locate high-quality systematic reviews and then subsequently published primary studies. The approaches used for study screening and selection, data extraction, and risk-of-bias assessment should be tailored to the topic, researcher experience, and available resources. Many rapid reviews use a single reviewer for study selection, risk-of-bias assessment, or data abstraction, sometimes with partial or full verification by a second reviewer. Rapid reviews usually use a descriptive synthesis method rather than quantitative meta-analysis. Use of brief report templates and standardized production methods helps to speed final report publication.

Conclusions

Researchers conducting rapid reviews need to make transparent methodological choices, informed by stakeholder input, to ensure that rapid reviews meet their intended purpose. Transparency is critical because it is unclear how or how much streamlined methods can bias the conclusions of reviews. There are not yet internationally accepted standards for conducting or reporting rapid reviews. Thus, this article proposes interim guidance for researchers who are increasingly employing these methods.


Introduction

Health policy-makers and other stakeholders need evidence to inform their decisions. However, their decisions must often be made in short time frames, and they may face other resource constraints, such as the available budget or personnel [1, 2, 3, 4, 5, 6]. Rapid reviews are increasingly being used, and are increasingly influential, in the health policy and system arena [3, 7, 8, 9, 10]. One needs assessment [11] showed that policy-makers want evidence reviews to answer the right question; be completed in days to weeks, rather than months or years; be accurate and reproducible; and be affordable.

As much as policy-makers may desire faster and more efficient evidence syntheses, it is not yet clear whether rapid reviews are sufficiently rigorous and valid, compared to systematic reviews which are considered the “gold standard” evidence synthesis, to inform policy [ 12 ]. Only a few empirical studies have compared the findings of rapid reviews and systematic reviews on the same topic, and their results are conflicting and inconclusive, leaving questions about the level of bias that may be introduced because of rapid review methods [ 7 , 13 , 14 , 15 , 16 , 17 , 18 , 19 ].

A standardized or commonly agreed-upon set of methods for conducting rapid reviews had not existed until recently [1, 9, 14, 20, 21, 22, 23], and while there is little empiric evidence on some of the standard elements of systematic reviews [24], those standards are well articulated [25, 26]. A minimum interim set of standards was developed by the Cochrane Rapid Reviews Methods Group [1, 2] to help guide rapid review production during the COVID-19 pandemic, and other researchers have proposed methods and approaches to guide rapid reviews [5, 21, 22, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36].

This article gives an overview of potential ways to produce a rapid review while maintaining a synthesis process that is sufficiently rigorous, yet tailored as needed, to support health policy-making. We present options for common methods choices, summarized from descriptions and evaluations of rapid review products and programs in Table 1 , along with key considerations for each methodological step.

The World Health Organization (WHO) published Rapid reviews to strengthen health policy and systems: a practical guide [ 5 ] in 2017. The initial work for this article was completed as a chapter for that publication and included multiple literature searches and layers of peer review to identify important studies and concepts. We conducted new searches using Ovid MEDLINE, the Cochrane Library’s methodology collection, and the bibliography of studies maintained by the Cochrane Rapid Reviews Methods Group, to identify articles, including both examples of rapid reviews and those on rapid review methodology, published after the publication of the WHO guide. We have not attempted to perform a comprehensive identification or catalog of all potential articles on rapid reviews or examples of reviews conducted with these methods. As this work was not a systematic review of rapid review methods, we do not include a flow of articles from search to inclusion and have not undertaken any formal critical appraisal of the articles we did include.

Needs assessment, topic selection, and topic refinement

Rapid reviews are typically conducted at the request of a particular decision-maker, who has a key role in posing the question, setting the parameters of the review, and defining the timeline [ 40 , 41 , 42 ]. The most common strategy for completing a rapid review within a limited time frame is to narrow its scope. This can be accomplished by limiting the number of questions, interventions, and outcomes considered in the review [ 13 , 15 ]. Early and continuing engagement of the requester and any other relevant stakeholders is critical to understand their needs, the intended use of the review, and the expected timeline and deliverables [ 15 , 28 , 29 , 40 , 41 , 42 ]. Policy-makers and other requesters may have vaguely defined questions or unrealistic expectations about what any type of review can accomplish [ 41 , 42 ]. A probing conversation or formal needs assessment is the critical first step in any knowledge synthesis approach to determine the scope of the request, the intended purpose for the completed review, and to obtain a commitment for collaboration over the duration of the project [ 28 , 30 , 41 ]. Once the request and its context are understood, researchers should fully develop the question(s), including any needed refinement with the requester or other stakeholders, before starting the project [ 5 ]. This process can be iterative and may require multiple contacts between the reviewers and the requester to ensure that the final rapid review is fit for its intended purpose [ 41 , 42 ]. In situations where a definitive systematic review might be needed, it may be useful to discuss with the requester the possibility of conducting a full systematic review, either in parallel or serially with the rapid review [ 43 ].

Protocol development

A research protocol clearly lays out the scope of the review, including the research questions and the approaches that will be used to conduct the review [ 44 ]. We suggest using the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) 2015 statement for guidance [ 37 ]. Most reviewers use the PICO format (population, intervention, comparator, outcome), with some adding elements for time frame, setting, and study design. The PICO elements help to define the research questions, and the initial development of questions can point to needed changes in the PICO elements. For some types of research questions or data, other framework variations such as SPICE (setting, perspective, intervention, comparison, evaluation) may be used, although the PICO framework can generally be adapted [ 45 ]. Health services and policy research questions may call for more complex frameworks [ 5 ]. This initial approach assists both researchers and knowledge users to know what is planned and enables documentation of any protocol deviations; however, the customized and iterative nature of rapid reviews means that some flexibility may be required. Some rapid review producers include the concept of methods adjustment in the protocol itself [ 46 , 47 ]. However, changes made beyond the protocol stage and the rationale for making them must be transparent and documented in the final report.

The international prospective register of systematic reviews (PROSPERO) [ 44 ] ( https://www.crd.york.ac.uk/PROSPERO/ ) accepts registration of protocols that include at least one clinically or patient-relevant outcome. The Open Science Framework (OSF) [ 48 ] platform ( https://osf.io/ ) also accepts protocol registrations for rapid reviews. We advise protocol submitters to include the term “rapid review” or another similar term in the registered title, as this will assist tracking the use, validity, and value of rapid reviews [ 1 ]. Protocol registration helps to decrease research waste and allows both requesters and review authors to avoid duplication. Currently, most rapid review producers report using a protocol, but few register their protocols [ 13 , 17 ].

Literature search

Multiple authors have conducted inventories of the characteristics of and methods used for rapid reviews, including the broad categories of literature search, study selection, data extraction, and synthesis steps [ 13 , 15 , 17 , 20 , 24 , 49 ]. PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) standards call for documentation of the full search strategy for all electronic databases used [ 38 ]. Most published rapid reviews search two or more databases, with PubMed, Embase, and the Cochrane Library mentioned frequently [ 13 , 17 , 20 , 49 ]. Rapid reviews often streamline systematic review methods by limiting the number of databases searched and the search itself by date, language, geographical area, or study design, and some rapid reviews search only for existing systematic reviews [ 13 , 15 , 17 , 20 , 49 , 50 ]. Other rapid reviews use a layered searching approach, identifying existing systematic reviews and then updating them with a summary of more recent eligible primary studies [ 13 , 15 , 18 , 20 , 36 ]. Studies of simplified search strategies have generally demonstrated acceptable retrieval characteristics for most types of rapid review reports [ 51 , 52 ]. Searching the reference lists of eligible studies (sometimes known as the “snowballing” technique) and searching the gray literature (i.e., reports that are difficult to locate or unpublished) are done in about half of published rapid reviews and may be essential for certain topics [ 13 , 15 , 20 , 49 ]. However, rapid reviews seldom report contact with authors and other experts to identify additional unpublished studies [ 13 , 15 , 20 , 49 ]. One study found that peer review of the search strategy, using a tool such as the PRESS (peer review of electronic search strategies) checklist, [ 39 ] was reported in 38% of rapid reviews, but that it was usually performed internally rather than by external information specialist reviewers [ 13 ]. 
Peer review of search strategies has been reported to increase retrieval of relevant records, particularly for nonrandomized studies [ 53 ].

Screening and study selection

Methodological standards for systematic reviews generally require independent screening of citations and abstracts by at least two researchers to arrive at a set of potentially eligible references, which are in turn subjected to dual review in full-text format to arrive at a final inclusion set. Rapid reviews often streamline this process, with up to 40% using a single researcher at each stage [13, 15, 17, 18, 20, 49]. Some rapid reviews report verification of a sample of the articles by a second researcher or, occasionally, use of full dual screening by two independent researchers [13, 17, 20, 49]. One methodological study reported that single-screener selection missed an average of 5% of eligible studies, ranging from 3% for experienced reviewers to 6% for those with less experience [54]. If time and resources allow, we recommend dual screening of all excluded studies, at both the title and full-text stages, to minimize the risk of selection bias through the inappropriate exclusion of relevant studies. However, there is some evidence that the use of a single experienced reviewer may be sufficient for particular topics [18, 46, 54].

Data extraction

As with citation screening and study selection, the number of independent reviewers who extract study data for a rapid review can vary. One study found that the most common approach is single-reviewer extraction (41%), although another 25% report verification of a sample by a second reviewer and nearly as many used dual extraction [ 13 ]. A more recent study reported that only about 10% of rapid reviews examined reported dual data extraction, although nearly twice as many simply did not report this feature [ 17 ]. Data abstraction generally includes PICO elements, although data abstraction was often limited by the scope of the review, and authors were contacted for missing data very infrequently [ 13 ].

Risk-of-bias assessment

Risk-of-bias assessment, sometimes called critical appraisal or methodological quality appraisal, examines the quality of the methods employed for each included study and is a standard element of systematic reviews [ 25 ]. The vast majority of rapid review producers perform some type of critical appraisal [ 17 , 20 ]. Some rapid reviews report the use of a single assessor with verification of a sample of study assessments by another assessor [ 17 , 49 ]. There is no consensus as to which risk-of-bias assessment tools should be used, although most reviews use study design-specific instruments (e.g., an instrument designed for randomized controlled trials (RCTs) if assessing RCTs) intended for assessing internal validity [ 13 , 20 ].

Knowledge synthesis

Nearly all rapid review producers conduct a descriptive synthesis (also often called a narrative synthesis) of results, but a few perform additional meta-analyses or economic analyses [ 13 , 17 , 20 ]. The synthesis that is conducted is often limited to a basic descriptive summary of studies and their results, rather than the full synthesis that is recommended for systematic reviews [ 26 ]. Most rapid reviews present conclusions, recommendations, or implications for policy or clinical practice as another component of the synthesis. Multiple experts also recommend that rapid reviews clearly describe and discuss the potential limitations arising from methodological choices [ 5 , 9 , 13 , 15 , 23 ].

Many systematic review producers use the Grading of Recommendations Assessment, Development and Evaluation (GRADE) system [55] (http://www.gradeworkinggroup.org/) to rate the certainty of the evidence about health outcomes. Guideline developers and others who make recommendations or policy decisions use GRADE to rate the strength of recommendations based on that evidence. The GRADE evidence to decision (EtD) framework has also been used to help decision-makers develop health system and public health policies [56] and coverage policies [57]. Rapid review authors can also employ GRADE to rate the certainty of synthesized evidence and to develop policy implications for decision-makers, if time and resources permit. However, the GRADE system works best for interventions that have been subject to RCTs and where there is at least one meta-analysis to provide a single estimate of effect.

Report production and dissemination

Standard templates for each stage of the review, from protocol development to report production, can assist the review team in performing each step efficiently. Use of a report template, with minimum methodological standards, reporting requirements, and standard report sections, can assist the producer in streamlining production of the report and can also enhance transparency [ 15 , 20 , 28 , 40 ]. An extension of the PRISMA statement for rapid reviews is under development and has been registered with the EQUATOR Network [ 58 ]. Until it is available, the PRISMA checklist for systematic reviews can serve as a reporting template to increase the transparency of rapid reviews [ 8 , 40 , 59 ].

Research on the formatting and presentation of rapid reviews is being conducted, but it is likely that the formats employed and tested will need to be adapted to individual requester and stakeholder audiences [47]. Khangura and colleagues [28] have presented a figure showing formatted sections of a sample report, and many other rapid review producers have examples of reports online that can serve as formatting examples. In addition, findings from evidence summary presentation research for decision-makers in low- and middle-income countries can be translated to other settings [60, 61].

Most rapid review producers conduct some form of peer review for the resulting reports, but such review is often internal and may include feedback from the requester [ 13 ]. Most producers disseminate their reports beyond the requester, but dissemination varies by the sensitivity or proprietary nature of the product [ 13 , 20 ]. When reports are disseminated, it is common for them to be posted online, for example, at an organizational website [ 13 , 20 ].

Operational considerations

Evaluations and descriptions of research programs that produce rapid reviews typically include some helpful pragmatic and operational considerations for undertaking a rapid review or developing a rapid review program [ 5 , 15 , 18 , 27 , 28 , 29 , 31 , 36 , 40 , 62 , 63 ]. Highly experienced, permanent staff with the right skill mix, including systematic reviewers, information specialists, methodologists, and content experts [ 15 , 18 , 30 , 40 , 49 ], are essential. It is time-consuming to assemble staff on a per-project basis, so the presence of an existing team (which may only do rapid reviews or may also do systematic reviews or other research) with review infrastructure already in place allows projects to get off to a quick start. The existence of a dedicated team also creates the potential to build relationships with requesters and to cultivate mutual trust. Staff with experience conducting systematic reviews will be familiar with standard methods and may be alert to any needed protocol changes as the review proceeds [ 49 ]. The rapid review team must understand the methodological implications of decisions taken and must convey these implications to the requesters, to allow them to understand the caveats and potential limitations. Continuing relationships and longer-term contracting with requesters, to allow for a quick start and “good faith” initiation of work before a contract is in place, can speed the early development stages [ 31 , 40 ]. It is important for rapid review producers to confirm that the choices they make to streamline the review are acceptable to the requester [ 41 ]. Whether it is a decision to limit the scope to a single intervention or outcome, restrict the literature search to existing systematic reviews, or forgo a meta-analysis, the knowledge user must be aware of the implications of streamlining decisions [ 15 , 27 , 31 , 41 ]. 
Some programs also emphasize the need for follow-up with review requesters to develop the relationship and continuously improve knowledge products [ 28 , 63 ]. Although it is beyond the scope of this article, we note that both systematic and rapid review producers are currently using various automated technologies to speed review production. There are examples of tools to help search for references, screen citations, abstract data, organize reviews, and enhance collaboration, but few evaluations of their validity and value in report production [ 64 , 65 ]. The Systematic Review Toolbox [ 66 ] ( http://systematicreviewtools.com/ ) is an online searchable database of tools that can help perform tasks in the evidence synthesis process.

Table 1 summarizes the commonly described approaches and key considerations for the major steps in a rapid review that are discussed in detail in the preceding sections.

Suggested approaches to rapid reviews

The previous sections have summarized the numerous approaches to conducting rapid reviews. Abrami and colleagues [ 27 ] summarized several methods of conducting rapid reviews and developed a brief review checklist of considerations and recommendations, which may serve as a useful parallel to Table 2 . A “one-size-fits-all” approach may not be suitable to cover the variety of topics and requester needs put forward. Watt and colleagues [ 9 ] observed over a decade ago, “It may not be possible to validate methodological strategies for conducting rapid reviews and apply them to every subject. Rather, each topic must be evaluated by thorough scoping, and appropriate methodology defined.” Plüddemann and colleagues [ 23 ] advocated for a flexible framework for what they term “restricted reviews,” with a set of minimum requirements and additional steps to reduce the risk of bias when time and resources allow. Thomas, Newman, and Oliver [ 29 ] noted that it might be more difficult to apply rapid approaches to questions of social policy than to technology assessment, in part because of the complexity of the topics, underlying studies, and uses of these reviews. The application of mixed methods, such as key informant interviews, stakeholder surveys, primary data, and policy analysis, may be required for questions with a paucity of published literature and those involving complex subjects [ 29 ]. However, rapid review producers should remain aware that streamlined methods may not be appropriate for all questions, settings, or stakeholder needs, and they should be honest with requesters about what can and cannot be accomplished within the timelines and resources available [ 31 ]. For example, a rapid review would likely be inappropriate as the foundation for a national guideline on cancer treatment due to be launched 5 years in the future. 
A decision tool, STARR (SelecTing Approaches for Rapid Reviews), has been published by Pandor and colleagues [67] to help guide decisions about interacting with report requesters, making informed choices regarding the evidence base, selecting methods for data extraction and synthesis, and reporting on the approaches used for the report.

Tricco and colleagues [21] conducted an international survey of rapid review producers, using a modified Delphi ranking to solicit opinions about the feasibility, timeliness, comprehensiveness, and risk of bias of six different rapid review approaches. Ranked best in terms of both risk of bias and feasibility was “approach 1,” which included published literature only, based on a search of one or more electronic databases, limited in terms of both date and language. With this approach, a single reviewer conducts study screening, and both data extraction and risk-of-bias assessment are done by a single reviewer, with verification by a second researcher. Other approaches were ranked best in terms of timeliness and comprehensiveness [21], representing trade-offs that review producers and knowledge users may want to consider. Because the survey report was based on expert opinion, it did not provide empirical evidence about the implications of each streamlined approach [21]. However, in the absence of empirical evidence, it may serve as a resource for rapid review producers looking to optimize one of these review characteristics. Given that evidence regarding the implications of methodological decisions for rapid reviews is limited, we have developed interim guidance for those conducting rapid reviews (Table 2).
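As a purely hypothetical illustration (the topic terms, dates, and field tags below are invented examples, not taken from the survey), the restrictions of “approach 1” (published literature from one database, limited by date and language) might translate into a PubMed-style query built like this:

```python
# Hypothetical sketch: composing a date- and language-limited PubMed-style
# query string, in the spirit of the survey's "approach 1". The topic terms
# and limits are invented for illustration.

def build_limited_query(topic_terms, date_from, date_to, language="english"):
    """Return a PubMed-style query string restricted by date and language."""
    topic = " AND ".join(f'"{t}"[Title/Abstract]' for t in topic_terms)
    date_filter = (
        f'("{date_from}"[Date - Publication] : "{date_to}"[Date - Publication])'
    )
    return f"({topic}) AND {date_filter} AND {language}[Language]"

query = build_limited_query(["rapid review", "evidence synthesis"], "2015", "2021")
print(query)
```

Making the limits explicit parameters of the search, rather than ad hoc choices, also makes them easy to report transparently in the final review.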

Rapid reviews are being used with increasing frequency to support clinical and policy decisions [6, 22, 34]. While policymakers are generally willing to trade some certainty for speed and efficiency, they do expect rapid reviews to come close to the validity of systematic reviews [51]. There is no universally accepted definition of a rapid review [2]. This lack of consensus is, in part, related to the grouping of products with different purposes, audiences, timelines, and resources. Although we have attempted to summarize the major choices available to reviewers and requesters of information, there are few empirical data to guide these choices. We may have missed examples of rapid reviews and methodological research that could add to the conclusions of this paper. However, our approach to this work has been pragmatic, much like a rapid review itself, and is based on our international experience as researchers involved in the Cochrane Rapid Reviews Methods Group, as well as authors who participated in the writing and dissemination of Rapid reviews to strengthen health policy and systems: a practical guide [5]. This paper has, in addition, been informed by our research about rapid reviews and our collective work across several groups that conduct rapid reviews [1, 68]. The Cochrane Rapid Reviews Methods Group also conducted a methods opinion survey in 2019 and released interim recommendations to guide Cochrane rapid reviews during the SARS-CoV-2 pandemic [2]. These recommendations are specific to the needs of Cochrane reviews and offer more detailed guidance for rapid review producers than those presented in this paper. We encourage readers to sign up for the Cochrane Rapid Reviews Methods Group newsletter on the website (https://methods.cochrane.org/rapidreviews/) and to check the regularly updated list of methodological publications to continue learning about research pertinent to rapid reviews [68].

We have summarized the rapid review methods that can be used to balance timeliness and resource constraints with a rigorous knowledge synthesis process to inform health policy-making. Interim guidance suggestions for the conduct of rapid reviews are outlined in Table 2. The key to success is early and continuing engagement with the research requester to focus the rapid review and ensure that it is appropriate to the needs of stakeholders. Although the protocol serves as the starting point for the review, methodological decisions are often iterative, involving the requester. Any changes to the protocol should be reflected in the final report. Methods can be streamlined at all stages of the review process, from search to synthesis, by limiting the search in terms of dates and language; limiting the number of electronic databases searched; using one reviewer to perform study selection, risk-of-bias assessment, and data abstraction (often with verification by another reviewer); and using a descriptive synthesis rather than a quantitative summary. Researchers need to make transparent methodological choices, informed by stakeholder input, to ensure that the evidence review is fit for its intended purpose. Given that it is not clear how these choices can bias a review, transparency is essential. We are aware that an increasing number of journals publish rapid reviews and related evidence synthesis products, which we hope will further increase the availability, transparency, and empirical research base for progress on rapid review methodologies.

Abbreviations

EQUATOR: Enhancing the QUAlity and Transparency Of health Research

GRADE: Grading of Recommendations Assessment, Development and Evaluation

PICO: Population, intervention, comparator, outcomes

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

PRISMA-P: Preferred Reporting Items for Systematic Reviews and Meta-Analyses-Protocols

RCT: Randomized controlled trial

SPICE: Setting, Perspective, Intervention, Comparator, Evaluation

STARR: SelecTing Approaches for Rapid Reviews

PRESS: Peer review of electronic search strategies

WHO: World Health Organization

Garritty C, Stevens A, Gartlehner G, King V, Kamel C. Cochrane Rapid Reviews Methods Group to play a leading role in guiding the production of informed high-quality, timely research evidence syntheses. Syst Rev. 2016;5(1):184.

Garritty C, Gartlehner G, Nussbaumer-Streit B, King VJ, Hamel C, Kamel C, et al. Cochrane Rapid Reviews Methods Group offers evidence-informed guidance to conduct rapid reviews. J Clin Epidemiol. 2021;130:13–21.

Peterson K, Floyd N, Ferguson L, Christensen V, Helfand M. User survey finds rapid evidence reviews increased uptake of evidence by Veterans Health Administration leadership to inform fast-paced health-system decision-making. Syst Rev. 2016;5(1):132.

Thomas J, Newman M, Oliver S. Rapid evidence assessments of research to inform social policy: taking stock and moving forward. Evid Policy. 2013;9(1):5–27.

Tricco AC, Langlois EV, Straus SE, editors. Rapid reviews to strengthen health policy and systems: a practical guide. Geneva: World Health Organization; 2017.

Langlois EV, Straus SE, Antony J, King VJ, Tricco AC. Using rapid reviews to strengthen health policy and systems and progress towards universal health coverage. BMJ Glob Health. 2019;4(1): e001178.

Hite J, Gluck ME. Rapid evidence reviews for health policy and practice. 2016; https://www.academyhealth.org/sites/default/files/rapid_evidence_reviews_brief_january_2016.pdf . Accessed 20 June 2021.

Moore GM, Redman S, Turner T, Haines M. Rapid reviews in health policy: a study of intended use in the New South Wales’ Evidence Check programme. Evid Policy. 2016;12(4):505–19.

Watt A, Cameron A, Sturm L, et al. Rapid reviews versus full systematic reviews: an inventory of current methods and practice in health technology assessment. Int J Technol Assess Health Care. 2008;24(2):133–9.

Moore G, Redman S, Rudge S, Haynes A. Do policy-makers find commissioned rapid reviews useful? Health Res Policy Syst. 2018;16(1):17.

Gluck M. Can evidence reviews be made more responsive to policymakers? Paper presented at: Fourth Global Symposium on Health Systems Research: Resilient and Responsive Health Systems for a Changing World; 2016; Vancouver.

Wagner G, Nussbaumer-Streit B, Greimel J, Ciapponi A, Gartlehner G. Trading certainty for speed - how much uncertainty are decisionmakers and guideline developers willing to accept when using rapid reviews: an international survey. BMC Med Res Methodol. 2017;17(1):121.

Abou-Setta AM, Jeyaraman M, Attia A, et al. Methods for developing evidence reviews in short periods of time: a scoping review. PLoS ONE. 2016;11(12): e0165903.

Haby MM, Chapman E, Clark R, Barreto J, Reveiz L, Lavis JN. What are the best methodologies for rapid reviews of the research evidence for evidence-informed decision making in health policy and practice: a rapid review. Health Res Policy Syst. 2016;14(1):83.

Hartling L, Guise JM, Kato E, et al. A taxonomy of rapid reviews links report types and methods to specific decision-making contexts. J Clin Epidemiol. 2015;68(12):1451–62.e3.

Reynen E, Robson R, Ivory J, et al. A retrospective comparison of systematic reviews with same-topic rapid reviews. J Clin Epidemiol. 2018;96:23–34.

Tricco AC, Zarin W, Ghassemi M, et al. Same family, different species: methodological conduct and quality varies according to purpose for five types of knowledge synthesis. J Clin Epidemiol. 2018;96:133–42.

Eiring O, Brurberg KG, Nytroen K, Nylenna M. Rapid methods including network meta-analysis to produce evidence in clinical decision support: a decision analysis. Syst Rev. 2018;7(1):168.

Taylor-Phillips S, Geppert J, Stinton C, et al. Comparison of a full systematic review versus rapid review approaches to assess a newborn screening test for tyrosinemia type 1. Res Synth Methods. 2017;8(4):475–84.

Polisena J, Garritty C, Kamel C, Stevens A, Abou-Setta AM. Rapid review programs to support health care and policy decision making: a descriptive analysis of processes and methods. Syst Rev. 2015;4:26.

Tricco AC, Zarin W, Antony J, et al. An international survey and modified Delphi approach revealed numerous rapid review methods. J Clin Epidemiol. 2016;70:61–7.

Aronson JK, Heneghan C, Mahtani KR, Pluddemann A. A word about evidence: ‘rapid reviews’ or ‘restricted reviews’? BMJ Evid-Based Med. 2018;23(6):204–5.

Pluddemann A, Aronson JK, Onakpoya I, Heneghan C, Mahtani KR. Redefining rapid reviews: a flexible framework for restricted systematic reviews. BMJ Evid-Based Med. 2018;23(6):201–3.

Robson RC, Pham B, Hwee J, et al. Few studies exist examining methods for selecting studies, abstracting data, and appraising quality in a systematic review. J Clin Epidemiol. 2019;106:121–35.

Higgins JPT, Lasserson T, Chandler J, Tovey D, Churchill R. Methodological Expectations of Cochrane Intervention Reviews (MECIR). 2016; https://community.cochrane.org/mecir-manual . Accessed 20 June 2021.

Higgins JPT, Green S. Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 [updated March 2011]. The Cochrane Collaboration; 2011.

Abrami PC, Borokhovski E, Bernard RM, et al. Issues in conducting and disseminating brief reviews of evidence. Evid Policy. 2010;6(3):371–89.

Khangura S, Konnyu K, Cushman R, Grimshaw J, Moher D. Evidence summaries: the evolution of a rapid review approach. Syst Rev. 2012;1:10.

Thomas J, Newman M, Oliver S. Rapid evidence assessments of research to inform social policy: taking stock and moving forward. Evid Policy. 2013;9:5–27.

Varker T, Forbes D, Dell L, et al. Rapid evidence assessment: increasing the transparency of an emerging methodology. J Eval Clin Pract. 2015;21(6):1199–204.

Wilson MG, Lavis JN, Gauvin FP. Developing a rapid-response program for health system decision-makers in Canada: findings from an issue brief and stakeholder dialogue. Syst Rev. 2015;4:25.

Featherstone RM, Dryden DM, Foisy M, et al. Advancing knowledge of rapid reviews: an analysis of results, conclusions and recommendations from published review articles examining rapid reviews. Syst Rev. 2015;4:50.

Silva MT, Silva END, Barreto JOM. Rapid response in health technology assessment: a Delphi study for a Brazilian guideline. BMC Med Res Methodol. 2018;18(1):51.

Patnode CD, Eder ML, Walsh ES, Viswanathan M, Lin JS. The use of rapid review methods for the U.S. Preventive Services Task Force. Am J Prev Med. 2018;54(1S1):S19–25.

Strudwick K, McPhee M, Bell A, Martin-Khan M, Russell T. Review article: methodology for the ‘rapid review’ series on musculoskeletal injuries in the emergency department. Emerg Med Australas. 2018;30(1):13–7.

Dobbins M. Rapid review guidebook: steps for conducting a rapid review. McMaster University; 2017.

Moher D, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev. 2015;4(1):1–9.

Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71.

Moher D, Liberati A, Tetzlaff J, Altman DG, The PRISMA Group. Preferred Reporting Items for Systematic Reviews and Meta-Analyses: the PRISMA statement. PLoS Med. 2009;6(7):e1000097.

McGowan J, Sampson M, Salzwedel DM, Cogo E, Foerster V, Lefebvre C. PRESS peer review of electronic search strategies: 2015 guideline statement. J Clin Epidemiol. 2016;75:40–6.

Haby MM, Chapman E, Clark R, Barreto J, Reveiz L, Lavis JN. Designing a rapid response program to support evidence-informed decision-making in the Americas region: using the best available evidence and case studies. Implement Sci. 2016;11(1):117.

Moore G, Redman S, Butow P, Haynes A. Deconstructing knowledge brokering for commissioned rapid reviews: an observational study. Health Res Policy Syst. 2018;16(1):120.

Tricco AC, Zarin W, Rios P, et al. Engaging policy-makers, health system managers, and policy analysts in the knowledge synthesis process: a scoping review. Implement Sci. 2018;13(1):31.

Murphy A, Redmond S. To HTA or not to HTA: identifying the factors influencing the rapid review outcome in Ireland. Value Health. 2019;22(4):385–90.

PROSPERO-International prospective register of systematic reviews. https://www.crd.york.ac.uk/prospero/ . Accessed 20 June 2021.

Booth A. Clear and present questions: formulating questions for evidence based practice. Library Hi Tech. 2006;24:355–68.

Garritty C, Stevens A. Putting evidence into practice (PEP) workshop – rapid review course. 2015; University of Alberta, Edmonton, Alberta.

Garritty C, Stevens A, Gartlehner G, Nussbaumer-Streit B, King V. Rapid review workshop: timely evidence synthesis for decision makers. Paper presented at: Cochrane Colloquium; 2016; Seoul, South Korea.

Open Science Framework. https://osf.io/ . Accessed 20 June 2021.

Tricco AC, Antony J, Zarin W, et al. A scoping review of rapid review methods. BMC Med. 2015;13:224.

Nussbaumer-Streit B, Klerings I, Dobrescu AI, Persad E, Stevens A, Garritty C, et al. Excluding non-English publications from evidence-syntheses did not change conclusions: a meta-epidemiological study. J Clin Epidemiol. 2020;118:42–54.

Nussbaumer-Streit B, Klerings I, Wagner G, et al. Abbreviated literature searches were viable alternatives to comprehensive searches: a meta-epidemiological study. J Clin Epidemiol. 2018;102:1–11.

Rice M, Ali MU, Fitzpatrick-Lewis D, Kenny M, Raina P, Sherifali D. Testing the effectiveness of simplified search strategies for updating systematic reviews. J Clin Epidemiol. 2017;88:148–53.

Spry C, Mierzwinski-Urban M. The impact of the peer review of literature search strategies in support of rapid review reports. Res Synth Methods. 2018;9(4):521–6.

Waffenschmidt S, Knelangen M, Sieben W, Buhn S, Pieper D. Single screening versus conventional double screening for study selection in systematic reviews: a methodological systematic review. BMC Med Res Methodol. 2019;19(1):132.

The GRADE Working Group. GRADE. http://www.gradeworkinggroup.org/ . Accessed 20 June 2021.

Moberg J, Oxman AD, Rosenbaum S, Schunemann HJ, Guyatt G, Flottorp S, et al. The GRADE evidence to decision (EtD) framework for health system and public health decisions. Health Res Policy Syst. 2018;16:45.

Parmelli E, Amato L, Oxman AD, Alonso-Coello P, Brunetti M, Moberg J, et al. GRADE evidence to decision (EtD) framework for coverage decisions. Int J Technol Assess Health Care. 2017;33(2):176–82.

Stevens A, Garritty C, Hersi M, Moher D. Developing PRISMA-RR, a reporting guideline for rapid reviews of primary studies (protocol). 2018. http://www.equator-network.org/wp-content/uploads/2018/02/PRISMA-RR-protocol.pdf . Accessed 20 June 2021.

Kelly SE, Moher D, Clifford TJ. Quality of conduct and reporting in rapid reviews: an exploration of compliance with PRISMA and AMSTAR guidelines. Syst Rev. 2016;5:79.

Mijumbi-Deve R, Rosenbaum SE, Oxman AD, Lavis JN, Sewankambo NK. Policymaker experiences with rapid response briefs to address health-system and technology questions in Uganda. Health Res Policy Syst. 2017;15(1):37.

Rosenbaum SE, Glenton C, Wiysonge CS, et al. Evidence summaries tailored to health policy-makers in low- and middle-income countries. Bull World Health Organ. 2011;89(1):54–61.

McIntosh HM, Calvert J, Macpherson KJ, Thompson L. The healthcare improvement Scotland evidence note rapid review process: providing timely, reliable evidence to inform imperative decisions on healthcare. Int J Evid Based Healthc. 2016;14(2):95–101.

Gibson M, Fox DM, King V, Zerzan J, Garrett JE, King N. Methods and processes to select and prioritize research topics and report design in a public health insurance programme (Medicaid) in the USA. Cochrane Methods. 2015;1(Suppl 1):33–35.

Department for Environment, Food and Rural Affairs. Emerging tools and techniques to deliver timely and cost effective evidence reviews. London: Department for Environment, Food and Rural Affairs; 2015.

Marshall CG, J. Software tools to support systematic reviews. Cochrane Methods. 2016;10(Suppl. 1):34–35.

The Systematic Review Toolbox. http://systematicreviewtools.com/ . Accessed 20 June 2021.

Pandor A, Kaltenthaler E, Martyn-St James M, et al. Delphi consensus reached to produce a decision tool for SelecTing Approaches for Rapid Reviews (STARR). J Clin Epidemiol. 2019;114:22–9.

Cochrane Rapid Reviews Methods Group. https://methods.cochrane.org/rapidreviews/ . Accessed 20 June 2021.

Time to produce this manuscript was donated in kind by the authors’ respective organizations, but no other specific funding was received. Alliance for Health Policy and Systems Research; Norwegian Government Agency for Development Cooperation; Swedish International Development Cooperation Agency; Department for International Development, UK Government.

Author information

Authors and affiliations.

The Center for Evidence-Based Policy, Oregon Health & Science University, Portland, Oregon, 97201, USA

Valerie J. King

Epidemiology and Biostatistics, Unit Head, Public Health Agency of Canada, Ottawa, Canada

Adrienne Stevens

Cochrane Austria, Danube University Krems, Krems, Austria

Barbara Nussbaumer-Streit

Canadian Agency for Drugs and Technologies in Health, Ottawa, Canada

Chris Kamel

Global Health & Guidelines Division, Public Health Agency of Canada, Ottawa, Canada

Chantelle Garritty

Contributions

The first author drafted the manuscript and was responsible for incorporating all other authors’ comments into the final version of the manuscript. The authors read and approved the final manuscript.

Corresponding author

Correspondence to Valerie J. King .

Ethics declarations

Competing interests.

All authors are leaders or members of the Cochrane Rapid Reviews Methods Group, and all are producers of rapid reviews for their respective organizations.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article.

King, V.J., Stevens, A., Nussbaumer-Streit, B. et al. Paper 2: Performing rapid reviews. Syst Rev 11, 151 (2022). https://doi.org/10.1186/s13643-022-02011-5

Received : 10 November 2021

Accepted : 23 June 2022

Published : 30 July 2022

DOI : https://doi.org/10.1186/s13643-022-02011-5

  • Rapid review
  • Systematic review
  • Technology assessment
  • Evidence-based medicine

Systematic Reviews

ISSN: 2046-4053


Rapid evidence assessment

Rapid Evidence Assessment is a process that uses a combination of key informant interviews and targeted literature searches to produce a report in a few days or a few weeks.

This process is faster and less rigorous than a full systematic review but more rigorous than ad hoc searching.

The latest REA publications are available from the UK Government.

This toolkit was designed to help Government Social Researchers carry out or commission REAs. It contains detailed guidance on choosing the right methods for each stage of an REA and offers a range of templates and sources to support the successful completion of an REA.


What is a Rapid Evidence Assessment (REA)?

There are various types of reviews. The most authoritative review, i.e. the review that presents the most valid and reliable scientific evidence, is the systematic review. The aim of a systematic review (SR) is to identify all relevant studies on a specific topic as comprehensively as possible, and to select appropriate studies based on explicit criteria. These studies are then assessed to ascertain their internal validity. A systematic approach is applied to selecting studies: the methodological quality of the studies in question is assessed by several researchers independently of each other on the basis of explicit criteria. An SR is therefore transparent, verifiable and reproducible. Because of this, the likelihood of bias is considerably smaller in an SR than in a traditional literature review.

A Rapid Evidence Assessment (REA) is another type of evidence summary that can inform practice. An REA applies the same methodology as an SR, and both involve the following steps:

1.    Background

2.    Question

3.    Inclusion Criteria

4.    Search Strategy

5.    Study Selection

6.    Data Extraction

7.    Critical Appraisal

8.    Results

       8.1.  Definitions

       8.2.  Causal Mechanism

       8.3.  Main Findings

       8.4.  Moderators and Mediators

9.    Synthesis

10. Limitations

11. Conclusion

12. Implications for Practice

The main way in which these two types of summaries vary is in relation to the time and resources used to produce them and the scope and depth of the results produced. In order to be ‘rapid’ an REA makes concessions in relation to the breadth, depth and comprehensiveness of the search. Aspects of the search may be limited to produce a quicker result:

· Searching: consulting a limited number of databases, and excluding unpublished research.

· Inclusion: only including specific research designs (e.g. meta-analyses or controlled studies)

· Data Extraction: only extracting a limited amount of key data, such as year, population, sector, study design, sample size, moderators/mediators, main findings, and effect sizes.

· Critical Appraisal: limiting quality appraisal to methodological appropriateness and quality. 

Due to these limitations, an REA may be more prone to bias than an SR. An SR, however, usually takes a team of academics several months (sometimes even more than a year) to produce, as it aims to identify all published and unpublished relevant studies, whereas an REA might take two skilled persons only several weeks. In general, an organization will not have the time or financial means to hire a team of academics to conduct an SR on a managerial topic of interest. As a result, the REA is the most widely used method of reviewing the scientific literature within Evidence-Based Management.
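The concessions listed above can be made explicit in a review protocol. As a minimal, purely illustrative sketch (the database names, eligible designs, and eligibility rule below are invented examples, not prescribed by REA guidance), they might be encoded as a machine-checkable review scope:

```python
# Purely illustrative sketch: the databases and rules below are hypothetical
# examples of how an REA's concessions could be written down explicitly.

REA_SCOPE = {
    # Searching: a limited number of databases; unpublished research excluded
    "databases": ["PubMed", "PsycINFO"],  # hypothetical selection
    "include_unpublished": False,
    # Inclusion: only specific research designs
    "eligible_designs": {"meta-analysis", "controlled study"},
    # Data extraction: only a limited amount of key data
    "extraction_fields": [
        "year", "population", "sector", "study design", "sample size",
        "moderators/mediators", "main findings", "effect sizes",
    ],
    # Critical appraisal: methodological appropriateness and quality only
    "appraisal_criteria": [
        "methodological appropriateness", "methodological quality",
    ],
}

def is_eligible(study: dict, scope: dict = REA_SCOPE) -> bool:
    """Apply the REA's restricted inclusion rules to a candidate study."""
    if not study.get("published", False) and not scope["include_unpublished"]:
        return False  # unpublished work is screened out under this scope
    return study.get("design") in scope["eligible_designs"]

# An unpublished trial and a published cohort study are screened out;
# a published meta-analysis is retained.
print(is_eligible({"published": False, "design": "controlled study"}))  # False
print(is_eligible({"published": True, "design": "cohort"}))             # False
print(is_eligible({"published": True, "design": "meta-analysis"}))      # True
```

Writing the scope down this way mirrors the transparency that distinguishes an REA from ad hoc searching: every concession is declared up front rather than made silently during the review.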

Want to conduct an REA?

  • Research article
  • Open access
  • Published: 02 November 2015

Rapid Evidence Assessment of the Literature (REAL©): streamlining the systematic review process and creating utility for evidence-based health care

  • Cindy Crawford 1 ,
  • Courtney Boyd 1 ,
  • Shamini Jain 2 ,
  • Raheleh Khorsan 2 &
  • Wayne Jonas 1  

BMC Research Notes volume 8, Article number: 631 (2015)

Systematic reviews (SRs) are widely recognized as the best means of synthesizing clinical research. However, traditional approaches can be costly and time-consuming and can be subject to selection and judgment bias. It can also be difficult to interpret the results of a SR in a meaningful way in order to make research recommendations, clinical or policy decisions, or practice guidelines. Samueli Institute has developed the Rapid Evidence Assessment of the Literature (REAL) SR process to address these issues. REAL provides up-to-date, rigorous, high quality SR information on health care practices, products, or programs in a streamlined, efficient and reliable manner. This process is a component of the Scientific Evaluation and Review of Claims in Health Care (SEaRCH™) program developed by Samueli Institute, which aims at answering the question of “What works?” in health care.

Methods/design

The REAL process (1) tailors a standardized search strategy to a specific and relevant research question developed with various stakeholders to survey the available literature; (2) evaluates the quantity and quality of the literature using structured tools and rulebooks to ensure objectivity, reliability and reproducibility of reviewer ratings in an independent fashion; and (3) obtains formalized, balanced input from trained subject matter experts on the implications of the evidence for future research and current practice.

Online tools and quality assurance processes are utilized for each step of the review to ensure a rapid, rigorous, reliable, transparent and reproducible SR process.

Conclusions

The REAL is a rapid SR process developed to streamline and aid in the rigorous and reliable evaluation and review of claims in health care in order to make evidence-based, informed decisions, and has been used by a variety of organizations aiming to gain insight into “what works” in health care. Using the REAL system allows for the facilitation of recommendations on appropriate next steps in policy, funding, and research and for making clinical and field decisions in a timely, transparent, and cost-effective manner.

Evidence is the basis from which we tell truth from fiction in the natural world and determine value in health care claims. Millions of articles are published in thousands of biomedical journals worldwide [1]. PubMed, a free resource developed and maintained by the US National Library of Medicine (NLM) at the National Institutes of Health (NIH), comprises over 20 million citations for biomedical literature from MEDLINE, life science journals, and online books [2]. With the emergence of other freely available journal citation resources, health care providers, consumers, researchers, and policy makers find themselves inundated with unmanageable amounts of new information from health care research. Most individuals do not have the time, skills and resources to find, appraise and interpret this evidence, nor to incorporate their findings into health care decisions in an appropriate manner. Even in special interest areas that are smaller and more narrowly focused (e.g. liver disease), it is still challenging to stay abreast of all relevant information. Consequently, despite the need for evidence to clearly inform clinical practice and policy, the best evidence is not always used, owing to a lack of the knowledge, time, skills and resources needed to quickly synthesize such information and translate it into meaningful knowledge that can inform practice decisions.

From clinical judgment to systematic evidence evaluation

Effective health care decisions should be evidence-based rather than rely solely on clinical judgment. Such judgments are often made under conditions of uncertainty [ 3 ], and use informal methods which can be fraught with bias and inaccuracy that produce shifting or misleading recommendations in practice. For example, as of 2012, 48 documented controlled trials and seven high quality systematic reviews (SRs) examining the effects of acupuncture on approximately 7433 total participants with substance abuse, (e.g., alcohol, cocaine, crack, nicotine dependencies and other addictions) existed in the peer-reviewed literature. Since acupuncture is widely used for substance abuse and there have been many studies done on this topic, Samueli Institute in 2012 conducted a review of SRs to summarize this evidence and concluded that, based on the current available literature, needle acupuncture was not effective in treating these conditions [ 4 ]. The implications of this review state that acupuncture is not recommended as a therapy for this condition at this time. A now classic example of the limitation of clinical judgment and the need for best evidence synthesis is in the use of hormone replacement therapy (HRT). Extensively used for years in post-menopausal women, clinicians made claims about the benefits of HRT for heart disease, sexual function, hot flashes, reduction of bone loss and prevention of cognitive decline. Subsequent randomized controlled trials (RCTs) and SRs, however, demonstrated that not only were the vast majority of these claims false, but the routine use of HRT was likely harmful [ 5 ]. Similarly, invasive laser procedures continue to be widely used for the treatment of angina from coronary artery disease (CAD) yet SRs of RCTs have shown no benefit of such procedures compared to sham controls and have reported infrequent but serious adverse events and/or interactions [ 6 – 10 ]. Should clinicians continue to perform these procedures? 
Clinical judgment used by itself is also often misleading or false, making apparent the need to integrate best evidence syntheses with a method for translating the evidence to support judgments. Without rigorous, transparent and reproducible SR processes to synthesize the best evidence, however, it is difficult to judge the efficacy, effectiveness and safety of a health care claim, identify where gaps lie to improve the science, and make appropriate decisions concerning clinical practice.

From information to knowledge

Mastering and managing the recent explosion of medical information is a difficult task, and evidence-based problem solving skills are essential for responsible decision-making, maintaining quality health care and ensuring good outcomes. As stated, SRs form the foundation for evidence-based medicine by collating all empirical evidence that fits pre-specified eligibility criteria in order to answer a specific research question. While expert opinions and narrative reviews are popular means for organizing data, and can be informative and produced faster and more easily than SRs, they are often subjective and prone to bias. Thus, during a time characterized by large amounts of information and the critical need to make evidence-based decisions, the shift from these analyses towards SRs is not only becoming prominent, but also necessary. Indeed, high quality SRs that clearly summarize evidence have become a crucial component in helping clinicians, patients, and policymakers make accurate decisions about clinical care [ 3 ]. SR methodology holds a key position in summarizing the state of current knowledge and disseminating findings of available evidence [ 3 , 11 ]. In fact, multiple groups such as the Institute of Medicine, the Agency for Healthcare Research and Quality (AHRQ), Cochrane, as well as professional associations, insurance agencies and licensing bodies that provide health care guidelines and recommendations often utilize SR methodology as a basis for such recommendations. Having access to and sharing high quality evidence-based SR reports within a particular subject area can help all parties be better informed about the safety, efficacy and effectiveness of treatment claims and make sound, informed decisions. SRs are an important step in moving from data—to information—to knowledge, provided they are conducted in a transparent, rigorous and meaningful fashion.

Challenges with current systematic review methodology

Inconsistent review standards and processes.

SR methodology used to assess the quality of available literature has gradually improved over the years, with several groups receiving international attention for the development of standards and advancing the science in SRs. Despite this progress, SR methodologies can still present challenges. First, many still vary considerably, and as such, outside reviewers often have difficulty replicating such methodologies. There is a need for improved standardized and reliable protocols and procedures to ensure transparency and produce meaningful information. Second, research questions and data extraction can be chosen without the input of diverse stakeholders, resulting in a narrow scope of the review, and sometimes minimal relevance or utility for making clinical decisions. Third, the subjective nature of quality assessment of research can leave SRs open to bias, resulting in unreliable results. Finally, while SRs help to provide a summary of the evidence, not all provide informative syntheses, perhaps because they lack a structured approach for obtaining expert input on the implications of the evidence for recommendations.

SRs can be cumbersome to execute and quite costly, requiring large amounts of personnel time and budget. Many people grossly underestimate the amount of time needed to perform a comprehensive, rigorous, and evidence-based SR, and subsequently choose to rely on less reliable methods such as expert opinions or narrative reviews. Protocol development, search strategy formation and literature searching, quality assessment and data extraction, discussion of disagreements for study inclusion, coding and quality assessments, acquisition of missing data from authors, and data analysis are all time consuming steps requiring specific skills, training and effort. A large team trained in specific roles/responsibilities at each phase of the review is needed to perform a SR most efficiently. Because lack of resources is sometimes a challenge, training, explicit processes, and the application of online systems can enhance efficiency and decrease cost. The methodology described below incorporates such methods and in turn reduces costs while enhancing the quality of the review.

Addressing challenges of systematic review methodology

Samueli Institute’s rapid evidence assessment of the literature.

In order to overcome these challenges and maximize efficiency in the execution and dissemination of good evidence, there is a need for more objective, high quality and up-to-date syntheses provided in a more streamlined manner regarding health care interventions. To fill this need, Samueli Institute has developed a SR process known as the Rapid Evidence Assessment of the Literature (REAL © ). This method utilizes specific tools (e.g., automated online software) and standard procedures (e.g., rulebooks) to rigorously deliver more reliable, transparent and objective SRs in a streamlined fashion, without compromising quality and at a lower cost than other SR methods.

Specifically, the REAL SR process involves (1) the rapid identification of literature relevant to a particular subject matter area (usually related to an intervention for a particular outcome); (2) the use of one or more grading systems to assess the quality and strength of evidence for the topic; (3) a summary of that evidence; and (4) subject matter expert (SME) input and assessment of implications for the current use of the intervention in practice. This rapid methodology requires a team-based approach to capitalize on resources and ensure maximum meaning, impact and utility; efficient and consistent review methodologies aimed at reducing time while maintaining quality; careful creation of objective protocols describing how to execute SR processes to ensure both reliability and reproducibility; as well as thoughtful synthesis and interpretation of the data to form a foundation for future work. Consequently, SRs that utilize this more streamlined process (i.e., “REALs”) are more efficient and reliable than some other traditional SR methods. Figure 1 depicts the steps involved in the REAL SR process, also detailed in the remainder of this paper. The REAL process can be used to evaluate interventions or claims in many fields including conventional medicine, complementary and alternative medicine (CAM), integrative health care (aka integrative medicine, IM), wellness and health promotion, resilience and performance enhancement, and more. In fact, to date, the REAL process has been applied to several topical areas [ 4 , 12 – 16 ], with more recent published work including a Department of Defense (DoD) funded SR of reviews on acupuncture for the treatment of trauma spectrum response (TSR) components [ 4 ], self-care and integrative health care practices for stress management [ 15 ], self-care and integrative practices for the management of pain [ 14 ] and warm-up exercises for physical performance [ 13 ].

Figure 1. Basic steps of a Rapid Evidence Assessment of the Literature (REAL © )

REAL methodology and design

Following a team-based approach to capitalize on resources.

Efficiency is of great importance when stakeholders need immediate, evidence-based answers for “what works”. Many review teams are small in size and reviews can take years to complete. Conversely, to maximize efficiency, Samueli Institute REALs are executed by several well-trained team members, each with specific roles and responsibilities, and often take approximately 3–6 months, from question development to manuscript delivery.

Specifically, a REAL Review Team includes: (1) a Principal Investigator to oversee the entire project; (2) a Review Manager with SR methodology expertise to guide the review process from start to finish; (3) a Search Expert to assist with literature search strategy development and execution; (4) at least two trained Reviewers to screen, extract data and review the quality of the literature; (5) a Reference Manager/Research Assistant to provide administrative and project support; (6) a Statistician to provide guidance regarding the interpretation of complex results or meta-analyses; and (7) at least two SMEs with diverse perspectives related to the review topic to provide guidance and synthesize the overall literature pool. It is important to note that while Samueli Institute has designed the REAL process to be executed by individuals within these roles/responsibilities, some organizations and entities may be more limited in terms of available personnel. As such, it is reasonable for individuals to be trained to take on multiple roles, although doing so may delay the review process. The division of labor allows for more efficient, accurate and reliable execution of the review steps and reduces the time needed by any one individual. Further, it allows for better compliance with the Institute of Medicine (IOM) recommendations for managing bias and conflicts of interest (COI) when producing reviews and recommendations [ 3 ]. The REAL process follows these IOM recommendations, applies strict criteria at each review step to guard against bias, and excludes team members with COIs from portions of the review where objectivity or balance may be compromised.

Involving stakeholders to ensure maximum relevance and translatability

One of the most frequent complaints by clinicians and patients about systematic reviews is that their conclusions have little relevance to daily clinical decisions and so are not of much use. The REAL builds in a process to obtain continuous input from the stakeholders involved in these decisions. In addition to the Review Team, REALs include a Steering Committee composed of 4–6 diverse stakeholders (e.g., clinicians, researchers, policy makers, patients and other relevant stakeholders), chosen by the client and Principal Investigator, who provide guidance throughout the review process. This ensures that the review’s focus stays relevant to the end-user of the SR results and allows translation to practice to occur more effectively. The Steering Committee seeks to address the “so what” question that so often follows a standard SR in which simply “more and better research is needed.” Though integral to the review, the Steering Committee is not involved in the review’s technical steps. This guards against bias during the independent evidence assessment process. Once the Steering Committee and the SMEs review and approve the team’s plans and progress at each review phase, the Review Team is solely focused on conducting the review and analyses in an independent and objective fashion.

Once assembled, it is imperative that both the Review Team and Steering Committee work together to formulate the review’s research question, scope, definitions, and eligibility criteria using the PICO(S) process (i.e., Population, Intervention, Control or Comparison, Outcomes and Study Design) [ 2 ], as well as to identify relevant data extraction points for synthesizing the literature. Assembling various stakeholders to pre-define the review’s research question and eligibility criteria sets the tone for the review, ensuring that different perspectives are represented and requiring that all subsequent steps and processes are conducted with this information in mind. This is a critical part of the REAL and ensures the results will have sufficient meaning and utility for stakeholders. Although involving a large group of voices at the outset to deliberate and agree upon all elements of the SR may seem counterproductive to efficiency, outlining a clear methodological process up front is imperative to streamlining the remaining systematic processes and so saves time overall. It also reduces the chance that the team will have to redefine its research question or processes once the review is underway. Revisions made while the quality assessment of the literature is underway not only cost time and resources but also open the process to bias; both risks are reduced in the REAL process.
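As an illustration only (the REAL process itself uses protocols and rulebooks, not software of this kind, and all names below are hypothetical), a pre-specified PICO(S) frame can be thought of as a structured record against which study records are later screened:

```python
# Hypothetical sketch: recording a PICO(S) eligibility frame before screening.
from dataclasses import dataclass, field


@dataclass
class PICOS:
    population: str
    intervention: str
    comparison: str
    outcomes: list            # outcomes of interest, pre-specified
    study_designs: list = field(default_factory=lambda: ["RCT"])

    def eligible(self, study: dict) -> bool:
        """Crude keyword screen of a study record against the frame."""
        return (study.get("design") in self.study_designs
                and any(o in study.get("outcomes", []) for o in self.outcomes))


frame = PICOS(
    population="adults with chronic pain",
    intervention="acupuncture",
    comparison="sham acupuncture",
    outcomes=["pain intensity", "function"],
)

record = {"design": "RCT", "outcomes": ["pain intensity"]}
print(frame.eligible(record))  # → True
```

Writing the frame down in a fixed structure before screening begins is what makes the later inclusion/exclusion decisions auditable: every decision can be traced back to a pre-agreed criterion rather than an ad hoc judgment.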

Enhancing the efficiency and consistency of review methodologies

Utilizing specific search protocols to reduce quantity and improve quality.

The REAL process requires search expertise to build robust literature search strategies as well as iterative input from both the SMEs and Steering Committee members for guidance. REALs do not “exhaustively” search the literature by including grey and non-English language literature, unless essential to the specific research question (e.g., searching Chinese herbal therapy). Instead, they usually include only peer-reviewed literature published in the English language. While the traditional SR considers the inclusion of only English-language studies as a limitation, doing so rarely compromises the outcome or implication for the majority of interventions and claims [ 17 ]. There has been debate, moreover, around the importance of including grey (unpublished) literature. While including such literature can reduce publication bias, it can also result in the overestimation of an intervention’s effects, since unpublished studies are usually more difficult to find, smaller and of lower quality compared to those published in the English language literature [ 18 , 19 ]. Therefore, despite the inherent differences in methods as well as time and cost associated with these processes, the conclusions of a REAL and a SR are usually comparable, and result in the same “bottom line” conclusions about the evidence [ 20 ]. In fact, the synthesis involved in a REAL is often more informative and rigorous than some SR efforts due to the additional assessment systems employed in a REAL compared to standard SRs [ 4 , 21 ] (see Adapting and Developing of Quality Assessment Tools ).

Automating the review to enhance the review process

REALs are more efficient not only because of their focus on English-language, peer-reviewed literature, but also because of their use of readily available software systems to automate the review process. These systems have been customized for use with a REAL and streamline many of the review steps, including automated article processing and management, elimination of manual data transcription, automated reliability estimation, real-time error and quality checking, and reduced post-review data collation. Using a specific review system and rulebooks allows researchers to deliver results faster, with improved accuracy and reliability, and provides a complete audit trail of all changes to ensure transparency. Such systems can also be accessed remotely and include messaging features that allow the review team to interact virtually, thereby considerably decreasing costs associated with travel, materials, supplies, and meeting facilities.

Ensuring objectivity to reduce bias

Adapting and developing quality assessment tools.

Most groups using SRs to develop recommendations and guidelines rely on subject matter experts (SMEs) to evaluate the quality of the research. However, SMEs almost always have a particular point of view (bias) and are rarely trained in the proper use of quality assessment tools. REALs avoid the use of SMEs in applying quality assessment tools and instead rely on trained review teams. In this way, higher standards for accuracy and reliability are obtained. There are many well-accepted quality assessment rating systems available to researchers for evaluating quality and risk of bias. These tools typically focus on internal validity, that is, whether a study’s results can be attributed to the intervention rather than to bias. They are, however, often quite subjective, and their quality criteria are interpreted variably. Samueli Institute has adapted some of these rating systems to improve their usability and objectivity. In addition, we have developed, validated and incorporated an External Validity Assessment Tool (EVAT © ) [ 16 ] into the REAL process to assess the “real-world” relevance of the research questions being asked. While many SRs evaluate only internal validity, the REAL uses quality assessment tools to evaluate internal, external and model validity. Thus, all REALs deliver a database of ratings for gauging the attribution (internal validity), generalizability (external validity) and relevance (model validity) of every study. This database has multiple uses for clients even after the specific REAL is completed.

Detailing and applying quality criteria

Due to the inherently subjective nature of interpreting research results, Samueli Institute has created rulebooks to ensure that review teams: (1) objectively evaluate and “score” each included article for quality; and (2) extract data in a specific, consistent format, thereby reducing the time needed for post-review data cleaning. Reviewers who utilize these rulebooks provide transparent data extraction as well as consistent and sufficient inter-rater reliability (a Cohen’s kappa of 0.90), indicating a low level of conflict and a high level of agreement between reviewers. These rulebooks are essential for managing and minimizing bias and ensuring the quality of any review. For example, should someone question the basis for any results in a SR, the team can refer to the rulebooks to explain and demonstrate specifically why and how particular articles were scored.
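The agreement statistic referred to above can be illustrated with a short calculation. The sketch below (hypothetical data and function names, not Samueli Institute’s software) computes Cohen’s kappa for two reviewers’ include/exclude screening decisions:

```python
# Illustrative computation of Cohen's kappa for two reviewers' decisions.
from collections import Counter


def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length lists of categorical ratings."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of agreement.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)


a = ["include", "include", "exclude", "exclude", "include", "exclude"]
b = ["include", "include", "exclude", "include", "include", "exclude"]
print(round(cohens_kappa(a, b), 3))  # → 0.667
```

Kappa corrects raw percent agreement for the agreement expected by chance alone, which is why it is the usual reporting choice for screening and scoring reliability rather than a simple percent-agreement figure.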

Maintaining transparent reporting

Just as the criteria and parameters whereby reviewers conduct the review are explicitly detailed in rulebooks, all decisions, processes and outcomes relating to each step of the review are maintained in a Review Documentation Checklist throughout the review process. Because this Checklist was developed to adhere to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) Guidelines [ 22 ], it not only aids with transparent reporting of results and replication of methods, but can also be used as a guide for how the results can be synthesized into a report and disseminated through peer-reviewed journal publications or other venues. Using this checklist for a manuscript outline also streamlines manuscript preparation as authors have all methodological processes and decisions housed in one place, rather than having to dig through files to find the details from various phases of the review [ 22 ].

Synthesizing and interpreting the data to find meaning

The REAL process is designed to provide a basis for SMEs to identify current implications for research and practice based on the evidence as a whole. In fact, once all individual studies included in the review have been evaluated, SMEs assess the overall literature pool according to the outcomes relevant to the research question in order to: (1) determine the quality of the research as a whole; (2) identify gaps in the literature; (3) assess the effectiveness of the intervention or claim as well as the confidence in that effectiveness estimate; and (4) judge the appropriateness of the intervention for clinical use. This is done in the following way. A roundtable is convened with the review team, Steering Committee and SMEs to evaluate the review’s results, the overall literature pool analyses, and identified gaps, as well as to outline next steps for the particular field of research. Several tools are used to organize the goals and discussion at this roundtable. A synthesis report is produced from this roundtable that is reviewed and modified by the REAL team based on feedback from all participants. These syntheses form a foundation for researchers, clinicians and patients to be better informed about the current state-of-the-science for any intervention, and to determine next steps needed in the field for research, practice and impact. The Grading of Recommendations, Assessment, Development and Evaluation (GRADE) working group has developed methods for synthesizing the literature as a whole that should be applied consistently across all systematic reviews [ 23 ].

Laying the foundation for evidence based decisions

REALs are constructed in a way that lays a foundation for future stakeholders to use quality evidence for decision making in multiple areas—research, practice, personal and policy areas. These foundational elements include the evaluated dataset (which can be further updated and added to), effect size estimates, meta-analyses (when possible), and other elements that go into the report such as the quality tools previously described and synthesis and interpretation assessments.

Conducting meta-analyses

Meta-analyses combine the actual quantitative results (e.g., collect and pool effect sizes) of separate studies included in a review, use statistical techniques to determine the overall effect size and confidence in the effect of the intervention, and employ analytic techniques to quantify possible publication bias. They are often costly and time-consuming, and only appropriate when the existing literature suggests that there are sufficient studies with enough homogeneity in outcomes. REALs are designed to form the foundation for subsequent meta-analyses to be conducted, if appropriate. REALs can therefore be utilized as an effective tool for rapidly determining the current state of the literature, and what gaps should be addressed to conduct an effective meta-analysis.
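As a concrete illustration of the pooling step described above, the following sketch (illustrative numbers, not drawn from any REAL) combines study effect sizes with inverse-variance weights under a fixed-effect model, one standard approach a subsequent meta-analysis might use:

```python
# Illustrative inverse-variance fixed-effect pooling of effect sizes.
import math


def fixed_effect_pool(effects, ses):
    """Return the pooled estimate and its standard error.

    Each study is weighted by the inverse of its variance (1/se^2),
    so more precise studies contribute more to the pooled estimate.
    """
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se


# Hypothetical standardized mean differences and standard errors.
effects = [0.30, 0.10, 0.25]
ses = [0.15, 0.10, 0.20]

est, se = fixed_effect_pool(effects, ses)
lo, hi = est - 1.96 * se, est + 1.96 * se
print(f"pooled = {est:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

A fixed-effect model assumes the studies estimate one common effect; when outcomes are heterogeneous, as the text notes they often are, a random-effects model (or no meta-analysis at all) would be the appropriate choice instead.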

Bridging the gap between evidence and knowledge

There is a considerable barrier to rapidly translating evidence into decision making for clinicians, patients, researchers and policy makers. Although authors of SRs disseminate results through various routes of publication, results often do not reach all parties in ways that allow them to make medical decisions, and so the impact of reviews is not maximized. The REAL process is one of three components of Samueli Institute’s Scientific Evaluation and Review of Claims in Health Care (SEaRCH) Program and is a key step in forming a foundation upon which the other two SEaRCH components can be used to determine the clinical impact and relevance of evidence. SEaRCH is comprised of the REAL, the Claim Assessment Profile (CAP) and the Expert Panel (EP) processes (e.g., Clinical, Research and/or Policy Expert Panels), as described in this journal issue [ 24 , 25 ]. Together these three segments of SEaRCH can be integrated with each other in order to answer the question of “what works” in health care by providing: (1) a clear description of the intervention and claim being evaluated and its feasibility for future research through the CAP; (2) a rigorous summary of current evidence for the claim gathered through the REAL process and shared with the other components of SEaRCH; (3) a balanced, expert assessment of the appropriateness of use of the intervention with the Clinical EP, evidence-based policy judgments needed to direct implementation of a practice claim with the Policy EP, and the value of the research for patient-centered care with the Patient EP; and (4) next research steps needed to move the evidence base about the claim forward with the Research EP. The methods used for the CAP and the EP process are described in subsequent articles in this set.

Similar to the REAL, expert panels and the CAP employ specific processes and safeguards to reduce variability and bias and promote collaboration and efficient delivery of meaningful results. The CAP can be conducted prior to the REAL to inform the REAL toward specific definitions about a particular claim, or can be conducted in tandem with the REAL for informing the expert panel process. While expert panels can be organized once the REAL process is completed, it is important that the review and expert panel processes remain independent of each other to manage bias and maintain a focus on clinical and patient relevance. To do this properly, SME input and the REAL process need to be carefully managed even as they are linked to the expert panel process. The SEaRCH program is designed to allow for complete interaction between the SR and expert panel processes in a manner that remains both impartial and informative in the interaction between SMEs and the trained reviewers [ 3 ]. This process creates distinct, independent teams who not only engage in the literature review and expert panel processes, but also “cross-talk” (under the supervision of a SEaRCH Program Manager and the Steering Committee chair) to ensure that relevant research questions are being addressed and the rigor of the research is maintained. Specifically, when an expert panel is solicited, the Expert Panel Manager [ 26 ] and REAL Review Manager collaborate to ensure that the panel’s topic of interest is being sufficiently addressed by the REAL. Panelists, based on their expertise, can expand upon the gaps or clinical issues brought forth through the REAL. REALs can assist expert panels in determining appropriateness, clinical guidelines, implementation policies and the patient-centeredness of the evidence, or in establishing research agendas. Recommendations that emerge through the SEaRCH process can then be shared with stakeholders for maximum impact.

The REAL is a process that streamlines and organizes many elements of systematic reviews in order to insert high quality, rigorous evidence into decision making processes in a more rapid, objective, relevant and cost-efficient manner. Specifically, the REAL (1) follows a team-based approach; (2) utilizes specific search strategies; (3) automates review processes to ensure efficient use of time and skill; (4) involves key stakeholders to guarantee the right questions are being asked and addressed; (5) outlines and adheres to a transparent protocol to ensure objectivity and the management of bias; and (6) forms a foundation for subsequent analyses and expert panels to address gaps and relevance, particularly when tied into other elements of the SEaRCH process. These features not only increase efficiency, but also assure adherence to reliable and reproducible protocols that provide a more consistent, transparent SR process for evidence-based medicine and decision making by the multiple stakeholders in health care.

By providing background and information on the existing literature, research gaps, and the weaknesses and strengths of current evidence, systematic reviews utilizing the REAL process provide a solid and consistent foundation for making clinical, patient and policy decisions. The objectivity and efficiency of the REAL process make it valuable for a variety of organizations and entities that need good evidence for decisions about products, practices or programs currently in use or being explored for potential use. Decision makers as diverse as a health insurance company or regulatory agency deciding whether an intervention should be covered, and a clinical practice asking whether implementing a certain practice would benefit its patients, are examples of those whose decisions can be aided by a REAL.

Training and support for conducting REALs

Samueli Institute has shared its REAL methodology with others in the SR field and continues to extend outreach and support to those interested in using this approach for evidence assessment. The Institute has developed a workshop that teaches participants how to conduct SRs in the step-wise fashion used by the REAL. This workshop is currently offered 2–3 times a year, provides participants with a comprehensive workbook covering theoretical material (i.e., the role and purpose of different types of SRs, their place in delivering evidence-based medicine, the role of bias, etc.) and practical instructions and guidelines on how to conduct SRs using the REAL process, and allows participants to receive individual coaching on review projects they are developing or conducting. The course and assistance are also offered through an online, self-paced platform (Blackboard) complemented by didactics, mentorship, and in-person workshops. Samueli Institute also collaborates with other organizations wishing to evaluate a topic using the REAL methodology, and offers guidance and mentorship throughout the review process. These workshops have been conducted for government and private groups and can be customized for any organization interested in applying evidence to health care decision making.

There is a need for reliable, rapid, and transparent evidence to guide effective health care decision-making. The REAL approach was developed to ensure high quality SRs are conducted in a rapid, streamlined, transparent and valid fashion. It has been shown to: (1) reduce the cost of generating reviews for those making informed decisions regarding health care; and, (2) inform the public in a time sensitive, cost-effective and objective manner about the state of the evidence for any health care area.

Detailing the challenges of current SR methodology and the ways in which this rapid SR process addresses them highlights the need for investigators to ensure that reviews are objective, transparent and scientifically valid, and that they follow a common language and structure for characterizing the strength of evidence across reviews. Adopting an approach like the REAL into current SR processes will not only decrease the variability and improve the quality of SRs, but also allow health care decision makers, including clinicians, patients and policy makers, to play a crucial role in developing relevant research questions and making sound, evidence-based decisions in all of health care.

For those interested in utilizing the REAL approach and learning more about conducting SRs, training workshops and collaboration opportunities, please visit the Samueli Institute website [ 27 ].

Abbreviations

AHRQ: Agency for Healthcare Research and Quality

CAD: coronary artery disease

CAP: Claim Assessment Profile

COI: conflict of interest

DoD: Department of Defense

EP: expert panel

EVAT: External Validity Assessment Tool

HRT: hormone replacement therapy

IOM: Institute of Medicine

NIH: National Institutes of Health

NLM: US National Library of Medicine

PICO(S): population, intervention, control or comparison, outcomes, study design

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

RCT: randomized controlled trial

REAL: Rapid Evidence Assessment of the Literature

SEaRCH: Scientific Evaluation and Review of Claims in Health Care

SME: subject matter expert

SR: systematic review

TSR: trauma spectrum response

Mulrow C, Chalmers I, Altman D. Rationale for systematic reviews. BMJ. 1994;309:597–9.

Higgins J, Green S, editors. Cochrane handbook for systematic reviews of interventions. Version 5.1.0 [updated March 2011]. West Sussex, England: The Cochrane Collaboration; 2011.

Graham R, Mancher M, Wolman D, Greenfield S, Steinberg E. Institute of Medicine. Clinical Practice Guidelines We Can Trust. Washington, DC: The National Academies Press; 2011.

Lee C, Crawford C, Wallerstedt D, York A, Duncan A, Smith J, Sprengel M, Welton R, Jonas W. The effectiveness of acupuncture research across components of the trauma spectrum response (tsr): a systematic review of reviews. Syst Rev. 2012;1:46.

Moyer VA. Menopausal hormone therapy for the primary prevention of chronic conditions: U.S. Preventive Services Task Force recommendation statement. Ann Intern Med. 2013;158:47–54.

Cobb LA, Thomas GI, Dillard DH, Merendino KA, Bruce RA. An evaluation of internal-mammary-artery ligation by a double-blind technic. N Engl J Med. 1959;260:1115–8.

Dimond EG, Kittle CF, Crockett JE. Comparison of internal mammary artery ligation and sham operation for angina pectoris. Am J Cardiol. 1960;5:483–6.

Leon MB, Kornowski R, Downey WE, Weisz G, Baim DS, Bonow RO, Hendel RC, Cohen DJ, Gervino E, Laham R, et al. A blinded, randomized, placebo-controlled trial of percutaneous laser myocardial revascularization to improve angina symptoms in patients with severe coronary disease. J Am Coll Cardiol. 2005;46:1812–9.

Salem M, Rotevatn S, Stavnes S, Brekke M, Pettersen R, Kuiper K, Ulvik R, Nordrehaug JE. Release of cardiac biochemical markers after percutaneous myocardial laser or sham procedures. Int J Cardiol. 2005;104:144–51.

Salem M, Rotevatn S, Stavnes S, Brekke M, Vollset SE, Nordrehaug JE. Usefulness and safety of percutaneous myocardial laser revascularization for refractory angina pectoris. Am J Cardiol. 2004;93:1086–91.

Linde K. Systematic reviews and meta-analyses. In: Lewith G, Jonas W, Walach H, editors. Clinical research in complementary therapies: principles, problems and solutions. London: Churchill Livingstone; 2002. p. 187–97.

York A, Crawford C, Walter A, Walter J, Jonas W, Coeytaux R. Acupuncture research in military and veteran populations: a rapid evidence assessment of the Literature. Med Acupunct. 2011;23:229–36.

Zeno S, Purvis D, Crawford C, Lee C, Lisman P, Deuster P. Warm-ups for military fitness testing: rapid evidence assessment of the literature. Med Sci Sports Exerc. 2013;45:1369–76.

Buckenmaier C, Crawford C, Lee C, Schoomaker E. Special issue: Are active self-care complementary and integrative therapies effective for management of chronic pain? A rapid evidence assessment of the literature and recommendations for the field. Pain Med. 2014;15(Suppl 1):S1–113.

Crawford C, Wallerstedt D, Khorsan R, Clausen S, Jonas W, Walter J. Systematic review of biopsychosocial training programs for the self-management of emotional stress: potential applications for the military. Evid Based Complement Altern Med. 2013;2013:747694. doi: 10.1155/2013/747694 .

Khorsan R, Crawford C. External validity and model validity: a conceptual approach for systematic review methodology. Evid Based Complement Altern Med. 2014;2014:694804. doi: 10.1155/2014/694804 .

Moher D, Pham B, Klassen TP, Schulz KF, Berlin JA, Jadad AR, Liberati A. What contributions do languages other than English make on the results of meta-analyses? J Clin Epidemiol. 2000;53:964–72.

Egger M, Juni P, Bartlett C, Holenstein F, Sterne J. How important are comprehensive literature searches and the assessment of trial quality in systematic reviews? Empirical study. Health Technol Assess. 2003;7:1–76.

Hopewell S, McDonald S, Clarke M, Egger M. Grey literature in meta-analyses of randomized trials of health care interventions. Cochrane Database Syst Rev. 2007;2:MR000010.

Watt A, Cameron A, Sturm L, Lathlean T, Babidge W, Blamey S. Rapid versus full systematic reviews: validity in clinical practice? ANZ J Surg. 2008;78:1037–40.

Davidson J, Crawford C, Ives J, Jonas W. Homeopathic treatments in psychiatry: a systematic review of randomized placebo-controlled studies. J Clin Psychiatry. 2011;72(6):795–807.

Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. BMJ. 2009;339:b2535.

The Grading of Recommendations Assessment, Development and Evaluation (GRADE) Working Group. http://www.gradeworkinggroup.org/ . Accessed 15 Jan 2015.

Hilton L, Jonas W. Claim Assessment Profile Methodology: a method for capturing health care evidence in the Scientific Evaluation and Review of Claims in Health Care (SEaRCH™). To be published in BMC Res Notes. 2015.

Jonas W, Crawford C, Hilton L, Elfenbaum P. Scientific Evaluation and Review of Claims in Health care (SEaRCH™): a streamlined, systematic, phased approach for determining “what works” in health care. To be published in BMC Res Notes. 2015.

Coulter I, Elfenbaum P, Jain S, Jonas W. SEaRCH Expert Panel Process: streamlining the link between evidence and practice. To be published in BMC Res Notes. 2015.

Samueli Institute: Research Services. 2015. https://www.samueliinstitute.org/research-areas/research-services/search-services . Accessed 15 Jan 2015.

Authors’ contributions

CC and WJ developed and designed the Rapid Evidence Assessment of the Literature (REAL) methodology and the Scientific Evaluation and Review of Claims in Health Care (SEaRCH) process. Both were involved in drafting the manuscript and revising it for important intellectual content. CL, SJ and RK made substantial contributions to the conception and design of the REAL process and SEaRCH, and were involved in the drafting and critical review of the manuscript for important intellectual content. All authors have given final approval of the version to be published and take public responsibility for the methodology shared in this manuscript. All authors read and approved the final manuscript.

Acknowledgements

The authors would like to acknowledge Mr. Avi Walter for his assistance with the overall SEaRCH process developed at Samueli Institute, and Ms. Viviane Enslein for her assistance with manuscript preparation.

Funding and disclosures

This project was partially supported by award number W81XWH-08-1-0615-P00001 (United States Army Medical Research Acquisition Activity). The views expressed in this article are those of the authors and do not necessarily represent the official policy or position of the US Army Medical Command or the Department of Defense, nor those of the National Institutes of Health, Public Health Service, or the Department of Health and Human Services.

Competing interests

The authors declare that they have no competing interests.

Author information

Authors and Affiliations

Samueli Institute, 1737 King Street, Suite 600, Alexandria, VA, 22314, USA

Cindy Crawford, Courtney Boyd & Wayne Jonas

Samueli Institute, 2101 East Coast Hwy., Suite 300, Corona del Mar, CA, 92625, USA

Shamini Jain & Raheleh Khorsan

Corresponding author

Correspondence to Cindy Crawford.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article.

Crawford, C., Boyd, C., Jain, S. et al. Rapid Evidence Assessment of the Literature (REAL©): streamlining the systematic review process and creating utility for evidence-based health care. BMC Res Notes 8, 631 (2015). https://doi.org/10.1186/s13104-015-1604-z

Received : 01 May 2015

Accepted : 19 October 2015

Published : 02 November 2015

DOI : https://doi.org/10.1186/s13104-015-1604-z

  • Rapid Evidence Assessment of the Literature (REAL)
  • Methodology
  • Systematic review process
  • Meta-analysis
  • Evidence-based medicine
  • Scientific Evaluation and Review of Claims in Health Care (SEaRCH)

BMC Research Notes

ISSN: 1756-0500

Rapid evidence assessments

DFID rapid evidence assessments provide rigorous, policy-relevant syntheses of evidence, carried out in 3–6 months.

Rapid evidence assessments provide a more structured and rigorous search and quality assessment of the evidence than a literature review but are not as exhaustive as a systematic review. They can be used to:

  • gain an overview of the density and quality of evidence on a particular issue
  • support programming decisions by providing evidence on key topics
  • support the commissioning of further research by identifying evidence gaps

The Difference Between a Rapid Review vs Systematic Review

Health policymakers and system implementers are often faced with situations that require critical decisions within the shortest time possible, which makes full systematic reviews less practical. Fortunately, rapid review methods are helping to streamline this process. In addition to rapid reviews, several other review methods can help move the review and approval process along. Understanding the differences between these review types — for example, a rapid review versus a systematic review, or an integrative review versus a systematic review — is essential to making the right choice for your research. Each type of review comes with its own advantages and drawbacks, and the choice depends on the research needs of the author and the time and setting of the intended research.

Systematic Review

A systematic review employs reproducible, analytical approaches to identify, collect, select, and critically evaluate data from multiple studies for inclusion in a scientific review.

A systematic review seeks to answer a specific, predefined research question, which should be carefully formulated to guide the review. The PICO model (population, intervention, comparison, outcome) is most often used to formulate a concise research question, which in turn determines the eligibility criteria. The review protocol tells the researcher how to gather information from the specified studies and how to present the findings.

Rapid Review

A rapid review is a synthesis of evidence designed to provide more timely data for speedy decision-making. Compared to a systematic review, a rapid review takes much less time to complete. Although the approaches used in rapid reviews vary greatly, they usually take less than five weeks. Rapid reviews work to short deadlines because they omit several phases of the review process that are essential in systematic reviews. This compressed timeline is what makes them an attractive alternative.

A rapid review is mostly used to:

  • explore a new or developing research topic
  • update a previous review, or
  • evaluate a critical topic

It’s also used to reevaluate existing facts about a policy or practice that was based on systematic review methods. In rapid reviews, several methods are used to simplify or omit some of the processes used in systematic reviews, including searching fewer databases, allocating one reviewer per review stage, omitting or minimizing the use of gray literature (information produced outside traditional publishing and distribution channels), and narrowing the scope of the review.

In terms of impartiality, rapid reviews may be more prone to bias than systematic reviews. The shortcuts described above may exclude studies that would have contributed to a more consistent conclusion, and they constrain the results of a rapid review to a narrower scope; however, the extent of this restriction is still unknown. Although many health policymakers and system implementers have embraced rapid reviews, some stakeholders in academia have expressed reservations, arguing that rapid reviews are “quick and dirty”. This shouldn’t negate their usefulness, as there is a time and place where a rapid review is exactly what’s needed.

Rapid literature review: definition and methodology

Affiliations.

  • 1 Assignity, Cracow, Poland.
  • 2 Public Health Department, Aix-Marseille University, Marseille, France.
  • 3 Studio Slowa, Wroclaw, Poland.
  • 4 Clever-Access, Paris, France.
  • PMID: 37533549
  • PMCID: PMC10392303
  • DOI: 10.1080/20016689.2023.2241234

Introduction: A rapid literature review (RLR) is an alternative to a systematic literature review (SLR) that can speed up the analysis of newly published data. The objective was to identify and summarize available information regarding different approaches to defining RLRs and the methodology applied to conducting such reviews. Methods: The Medline and EMBASE databases, as well as the grey literature, were searched using a set of keywords and their combinations related to targeted and rapid reviews, as well as design, approach, and methodology. Of the 3,898 records retrieved, 12 articles were included. Results: A specific definition of RLRs was only developed in 2021. In terms of methodology, an RLR should be completed within a shorter timeframe using simplified procedures in comparison to an SLR, while maintaining a similar level of transparency and minimizing bias. Inherent components of the RLR process should be a clear research question, a search protocol, and a simplified process of study selection, data extraction, and quality assurance. Conclusions: There is a lack of consensus on the formal definition of the RLR and the best approaches to performing it. The evidence-based supporting methods are evolving, and more work is needed to define the most robust approaches.

Keywords: Delphi consensus; Rapid review; methodology; systematic literature review.

© 2023 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group.
